1. Gonçalves ÓF, Sayal J, Lisboa F, Palhares P. The experimental study of consciousness: Is psychology travelling back to the future? Int J Clin Health Psychol 2024;24:100475. PMID: 39021679; PMCID: PMC11253270; DOI: 10.1016/j.ijchp.2024.100475.
Abstract
It was with the promise of rendering an experimental approach to consciousness that psychology started its trajectory as an independent science more than 150 years ago. Here, we will posit that the neurosciences were instrumental in leading psychology to resume the study of consciousness by projecting an empirical agenda for the future. First, we will show how scientists were able to venture into the consciousness of supposedly unconscious patients, opening the door to the identification of important neural correlates of distinct consciousness states. Then, we will describe how different technological advances and elegant experimental paradigms helped establish important neuronal correlates of global consciousness (i.e., being conscious at all), perceptual consciousness (i.e., being conscious of something), and self-consciousness (i.e., being conscious of oneself). Finally, we will illustrate how the study of complex consciousness experiences may contribute to the clarification of the mechanisms associated with global consciousness, the relationship between perceptual and self-consciousness, and the interface among distinct self-consciousness domains. In closing, we will elaborate on the road ahead for re-establishing psychology as a science of consciousness.
Affiliation(s)
- Joana Sayal
- Proaction Lab – CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Colégio de Jesus, R. Inácio Duarte 65, Coimbra 3000-481, Portugal
- Fábio Lisboa
- Proaction Lab – CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Colégio de Jesus, R. Inácio Duarte 65, Coimbra 3000-481, Portugal
- Pedro Palhares
- Proaction Lab – CINEICC, Faculty of Psychology and Educational Sciences, University of Coimbra, Colégio de Jesus, R. Inácio Duarte 65, Coimbra 3000-481, Portugal
2. Zhu R, Lilak S, Loeffler A, Lizier J, Stieg A, Gimzewski J, Kuncic Z. Online dynamical learning and sequence memory with neuromorphic nanowire networks. Nat Commun 2023;14:6697. PMID: 37914696; PMCID: PMC10620219; DOI: 10.1038/s41467-023-42470-5.
Abstract
Nanowire Networks (NWNs) belong to an emerging class of neuromorphic systems that exploit the unique physical properties of nanostructured materials. In addition to their neural-network-like physical structure, NWNs also exhibit resistive memory switching in response to electrical inputs, due to synapse-like changes in conductance at nanowire-nanowire cross-point junctions. Previous studies have demonstrated how the neuromorphic dynamics generated by NWNs can be harnessed for temporal learning tasks. This study extends those findings by demonstrating online learning from spatiotemporal dynamical features, using image classification and sequence memory recall tasks implemented on an NWN device. Applied to the MNIST handwritten digit classification task, online dynamical learning with the NWN device achieves an overall accuracy of 93.4%. Additionally, we find a correlation between the classification accuracy of individual digit classes and mutual information. The sequence memory task reveals how memory patterns embedded in the dynamical features enable online learning and recall of a spatiotemporal sequence pattern. Overall, these results provide proof of concept of online learning from spatiotemporal dynamics using NWNs and further elucidate how memory can enhance learning.
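The online-learning readout described in this abstract can be sketched in miniature (an illustrative stand-in, not the authors' device code or the MNIST task): features arrive one sample at a time and a logistic readout is updated per sample, with no batch retraining.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for the spatiotemporal "dynamical features" read out
# from a device: two classes whose feature vectors have different means.
def stream_samples(n):
    for _ in range(n):
        label = int(rng.integers(0, 2))
        yield rng.normal(loc=2.0 * label, scale=1.0, size=8), label

# Online logistic readout: weights are updated one sample at a time as the
# feature stream arrives -- no batch retraining, as in online learning.
w, b, lr = np.zeros(8), 0.0, 0.05
for x, y in stream_samples(2000):
    p = 1.0 / (1.0 + np.exp(-(w @ x + b)))
    w += lr * (y - p) * x
    b += lr * (y - p)

# Evaluate the readout on fresh streamed samples.
acc = sum(((w @ x + b) > 0) == y for x, y in stream_samples(500)) / 500
print(f"online readout accuracy: {acc:.2f}")
```

The point of the sketch is only the update pattern: the readout never revisits past samples, matching the online setting the paper targets.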
Affiliation(s)
- Ruomin Zhu
- School of Physics, The University of Sydney, Sydney, NSW, Australia
- Sam Lilak
- Department of Chemistry and Biochemistry, University of California, Los Angeles, Los Angeles, CA, US
- Alon Loeffler
- School of Physics, The University of Sydney, Sydney, NSW, Australia
- Joseph Lizier
- School of Computer Science, The University of Sydney, Sydney, NSW, Australia
- Centre for Complex Systems, The University of Sydney, Sydney, NSW, Australia
- Adam Stieg
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, US
- WPI Center for Materials Nanoarchitectonics (MANA), National Institute for Materials Science (NIMS), Tsukuba, Japan
- James Gimzewski
- Department of Chemistry and Biochemistry, University of California, Los Angeles, Los Angeles, CA, US
- California NanoSystems Institute, University of California, Los Angeles, Los Angeles, CA, US
- WPI Center for Materials Nanoarchitectonics (MANA), National Institute for Materials Science (NIMS), Tsukuba, Japan
- Research Center for Neuromorphic AI Hardware, Kyutech, Kitakyushu, Japan
- Zdenka Kuncic
- School of Physics, The University of Sydney, Sydney, NSW, Australia
- Centre for Complex Systems, The University of Sydney, Sydney, NSW, Australia
- The University of Sydney Nano Institute, Sydney, NSW, Australia
3. Huang J, Kelber F, Vogginger B, Liu C, Kreutz F, Gerhards P, Scholz D, Knobloch K, Mayr CG. Efficient SNN multi-cores MAC array acceleration on SpiNNaker 2. Front Neurosci 2023;17:1223262. PMID: 37609449; PMCID: PMC10440698; DOI: 10.3389/fnins.2023.1223262.
Abstract
The potential for low-energy operation of spiking neural networks (SNNs) has attracted the attention of the AI community. CPU-only SNN processing inevitably results in long execution times for large models and massive datasets. This study introduces the MAC array, a parallel architecture on each processing element (PE) of SpiNNaker 2, into the computational process of SNN inference. Building on earlier single-core optimization algorithms, we investigate parallel acceleration algorithms that collaborate with multi-core MAC arrays. The proposed Echelon Reorder model information densification algorithm, together with the adapted multi-core two-stage splitting and authorization deployment strategies, achieves efficient spatio-temporal load balancing and optimization performance. We evaluate performance by benchmarking a wide range of constructed SNN models to study how strongly different factors influence acceleration. We also benchmark two real SNN models (a gesture recognition model from a real-world application and a balanced random cortex-like network from neuroscience) on the neuromorphic multi-core hardware SpiNNaker 2. The echelon optimization algorithm with mixed processors reduces the memory footprint to 74.28% and 85.78% of the original MAC calculation on these two models, respectively. The execution time of the echelon algorithms, using only MAC or mixed processors, is ≤24.56% of the serial ARM baseline. Accelerating SNN inference with the algorithms in this study is essentially a general sparse matrix-matrix multiplication (SpGEMM) problem. This article explicitly extends the SpGEMM problem to SNNs, developing novel SpGEMM optimization algorithms that fit the SNN features and the MAC array.
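The closing claim, that SNN inference acceleration is essentially a sparse matrix multiplication problem, can be illustrated with a toy example (not the paper's algorithms): one timestep of spike propagation is the product of a sparse weight matrix with a sparse spike vector, so only the columns belonging to active presynaptic neurons need to be touched.

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy weight matrix (post x pre), ~90% zeros, as in sparse SNN connectivity.
W = rng.normal(size=(16, 32)) * (rng.random((16, 32)) < 0.1)

# Binary spike vector for one timestep; only ~20% of neurons fire.
spikes = (rng.random(32) < 0.2).astype(float)

# Dense reference: the full matrix-vector product.
dense = W @ spikes

# Event-driven version: accumulate only the columns of active neurons.
# This is the sparse-matrix view of SNN inference that SpGEMM methods exploit.
event = np.zeros(16)
for j in np.flatnonzero(spikes):
    event += W[:, j]

assert np.allclose(dense, event)
print("event-driven and dense results agree")
```

With both weight sparsity and spike sparsity, the work per timestep scales with the number of nonzero (weight, spike) pairs rather than with the dense matrix size.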
Affiliation(s)
- Florian Kelber
- Highly-Parallel VLSI-Systems and Neuro-Microelectronics, Faculty of Electrical and Computer Engineering, Institute of Principles of Electrical and Electronic Engineering, Technische Universität Dresden, Dresden, Germany
- Bernhard Vogginger
- Highly-Parallel VLSI-Systems and Neuro-Microelectronics, Faculty of Electrical and Computer Engineering, Institute of Principles of Electrical and Electronic Engineering, Technische Universität Dresden, Dresden, Germany
- Chen Liu
- Highly-Parallel VLSI-Systems and Neuro-Microelectronics, Faculty of Electrical and Computer Engineering, Institute of Principles of Electrical and Electronic Engineering, Technische Universität Dresden, Dresden, Germany
- Christian G. Mayr
- Highly-Parallel VLSI-Systems and Neuro-Microelectronics, Faculty of Electrical and Computer Engineering, Institute of Principles of Electrical and Electronic Engineering, Technische Universität Dresden, Dresden, Germany
- Centre for Tactile Internet with Human-in-the-Loop (CeTI), Cluster of Excellence, Technische Universität Dresden, Dresden, Germany
4. Precise Spiking Motifs in Neurobiological and Neuromorphic Data. Brain Sci 2022;13:68. PMID: 36672049; PMCID: PMC9856822; DOI: 10.3390/brainsci13010068.
Abstract
Why do neurons communicate through spikes? By definition, spikes are all-or-none neural events which occur at continuous times. In other words, spikes are, on one hand, binary, existing or not without further detail, and on the other, can occur at any asynchronous time, without the need for a centralized clock. This stands in stark contrast to the analog representation of values and the discretized timing classically used in digital processing and at the base of modern-day neural networks. As neural systems almost systematically use this so-called event-based representation in the living world, a better understanding of this phenomenon remains a fundamental challenge in neurobiology for better interpreting the profusion of recorded data. With the growing need for intelligent embedded systems, it also emerges as a new computing paradigm enabling the efficient operation of a new class of sensors and event-based computers, called neuromorphic, which could yield significant gains in computation time and energy consumption, a major societal issue in the era of the digital economy and global warming. In this review paper, we provide evidence from biology, theory and engineering that the precise timing of spikes plays a crucial role in our understanding of the efficiency of neural networks.
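The event-based representation the review describes can be made concrete with a toy send-on-delta encoder (illustrative only; the scheme and the threshold value are assumptions, not taken from the paper): a dense sampled signal becomes a sparse list of timed, binary events from which the value can still be tracked.

```python
import numpy as np

# Send-on-delta encoding: emit a +1/-1 event only when the signal moves by
# more than a threshold -- a toy version of the asynchronous, clock-free,
# all-or-none representation discussed above.
def delta_encode(signal, times, threshold):
    events = []              # list of (time, +1 or -1) spikes
    ref = signal[0]
    for t, v in zip(times, signal):
        while v - ref > threshold:
            ref += threshold
            events.append((t, +1))
        while ref - v > threshold:
            ref -= threshold
            events.append((t, -1))
    return events

def delta_decode(events, v0, threshold):
    # Reconstruct the tracked value from the events alone.
    v = v0
    for _, sign in events:
        v += sign * threshold
    return v

t = np.linspace(0, 1, 1000)
x = np.sin(2 * np.pi * 3 * t)
events = delta_encode(x, t, threshold=0.05)
recon = delta_decode(events, x[0], 0.05)
print(f"{len(events)} events for {len(t)} samples")
```

The decoder stays within one threshold of the signal at all times, so a few hundred timed binary events replace a thousand analog samples.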
5. Klassert R, Baumbach A, Petrovici MA, Gärttner M. Variational learning of quantum ground states on spiking neuromorphic hardware. iScience 2022;25:104707. PMID: 35992070; PMCID: PMC9386107; DOI: 10.1016/j.isci.2022.104707.
Abstract
Recent research has demonstrated the usefulness of neural networks as variational ansatz functions for quantum many-body states. However, high-dimensional sampling spaces and transient autocorrelations confront these approaches with a challenging computational bottleneck. Compared to conventional neural networks, physical model devices offer a fast, efficient and inherently parallel substrate capable of related forms of Markov chain Monte Carlo sampling. Here, we demonstrate the ability of a neuromorphic chip to represent the ground states of quantum spin models by variational energy minimization. We develop a training algorithm and apply it to the transverse field Ising model, showing good performance at moderate system sizes (N≤10). A systematic hyperparameter study shows that performance depends on sample quality, which is limited by temporal parameter variations on the analog neuromorphic chip. Our work thus provides an important step towards harnessing the capabilities of neuromorphic hardware for tackling the curse of dimensionality in quantum many-body problems. Highlights: a variational scheme for representing quantum ground states with neuromorphic hardware; the accelerated physical system yields a system-size-independent sample generation time; accurate learning of ground states across a quantum phase transition; a detailed analysis of algorithmic and technical limitations.
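The variational principle underlying this approach, that the energy of any ansatz upper-bounds the true ground-state energy, can be demonstrated exactly on a tiny instance of the transverse field Ising model (a plain product-state ansatz and brute-force parameter scan for illustration, not the paper's spiking sampler):

```python
import numpy as np

# N=4 transverse-field Ising chain: H = -J sum sz_i sz_{i+1} - h sum sx_i.
sx = np.array([[0, 1], [1, 0]], float)
sz = np.array([[1, 0], [0, -1]], float)
I2 = np.eye(2)

def kron_all(ops):
    out = np.array([[1.0]])
    for op in ops:
        out = np.kron(out, op)
    return out

N, J, h = 4, 1.0, 0.5
H = np.zeros((2**N, 2**N))
for i in range(N - 1):                      # coupling terms
    H -= J * kron_all([sz if k in (i, i + 1) else I2 for k in range(N)])
for i in range(N):                          # transverse-field terms
    H -= h * kron_all([sx if k == i else I2 for k in range(N)])

exact = np.linalg.eigvalsh(H)[0]            # true ground-state energy

def energy(theta):
    # Product ansatz: every spin in cos(theta)|0> + sin(theta)|1>.
    single = np.array([[np.cos(theta)], [np.sin(theta)]])
    psi = kron_all([single] * N).ravel()
    return psi @ H @ psi

best = min(energy(th) for th in np.linspace(0, np.pi, 200))
print(f"variational {best:.4f} >= exact {exact:.4f}")
```

By the Rayleigh-Ritz principle the scan's minimum can only approach the exact value from above; richer ansatz families (such as the network-based states sampled on hardware in the paper) tighten this bound.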
6. Ostrau C, Klarhorst C, Thies M, Rückert U. Benchmarking Neuromorphic Hardware and Its Energy Expenditure. Front Neurosci 2022;16:873935. PMID: 35720731; PMCID: PMC9201569; DOI: 10.3389/fnins.2022.873935.
Abstract
We propose and discuss a platform-overarching benchmark suite for neuromorphic hardware. This suite covers benchmarks from low-level characterization to high-level application evaluation using benchmark-specific metrics. With this rather broad approach we are able to compare various hardware systems, including mixed-signal and fully digital neuromorphic architectures. Selected benchmarks are discussed and results for several target platforms are presented, revealing characteristic differences between the various systems. Furthermore, a proposed energy model allows benchmark performance metrics to be combined with energy efficiency. This model enables the prediction of the energy expenditure of a network on a target system without actually having access to it. To quantify the efficiency gap between neuromorphic systems and the biological paragon of the human brain, the energy model is used to estimate the energy required for a full brain simulation. This reveals that current neuromorphic systems are at least four orders of magnitude less efficient. It is argued that, even with a modern fabrication process, a gap of two to three orders of magnitude would remain. Finally, for selected benchmarks the performance and efficiency of the neuromorphic solution is compared to standard approaches.
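The kind of energy model described, predicting expenditure from runtime and event counts without access to the target system, can be sketched as follows; the functional form and all coefficients here are illustrative assumptions, not the paper's fitted values:

```python
# Toy energy model: predicted energy = static power x runtime plus a fixed
# cost per spike and per synaptic event. With coefficients characterized
# once per platform, energy for a new network can be predicted offline.
def predicted_energy_joules(runtime_s, n_spikes, n_syn_events,
                            p_static_w, e_spike_j, e_syn_j):
    return (runtime_s * p_static_w
            + n_spikes * e_spike_j
            + n_syn_events * e_syn_j)

# Example: a 10 s run with 1e6 spikes and 1e8 synaptic events on a
# hypothetical system (all coefficient values are made up).
e = predicted_energy_joules(10.0, 1e6, 1e8,
                            p_static_w=0.5, e_spike_j=2e-9, e_syn_j=5e-11)
print(f"predicted energy: {e:.3f} J")
```

The decomposition also shows why static power often dominates: here the 5 J baseline dwarfs the millijoules of event-driven cost.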
7. Albers J, Pronold J, Kurth AC, Vennemo SB, Haghighi Mood K, Patronis A, Terhorst D, Jordan J, Kunkel S, Tetzlaff T, Diesmann M, Senk J. A Modular Workflow for Performance Benchmarking of Neuronal Network Simulations. Front Neuroinform 2022;16:837549. PMID: 35645755; PMCID: PMC9131021; DOI: 10.3389/fninf.2022.837549.
Abstract
Modern computational neuroscience strives to develop complex network models to explain dynamics and function of brains in health and disease. This process goes hand in hand with advancements in the theory of neuronal networks and increasing availability of detailed anatomical data on brain connectivity. Large-scale models that study interactions between multiple brain areas with intricate connectivity and investigate phenomena on long time scales such as system-level learning require progress in simulation speed. The corresponding development of state-of-the-art simulation engines relies on information provided by benchmark simulations which assess the time-to-solution for scientifically relevant, complementary network models using various combinations of hardware and software revisions. However, maintaining comparability of benchmark results is difficult due to a lack of standardized specifications for measuring the scaling performance of simulators on high-performance computing (HPC) systems. Motivated by the challenging complexity of benchmarking, we define a generic workflow that decomposes the endeavor into unique segments consisting of separate modules. As a reference implementation for the conceptual workflow, we develop beNNch: an open-source software framework for the configuration, execution, and analysis of benchmarks for neuronal network simulations. The framework records benchmarking data and metadata in a unified way to foster reproducibility. For illustration, we measure the performance of various versions of the NEST simulator across network models with different levels of complexity on a contemporary HPC system, demonstrating how performance bottlenecks can be identified, ultimately guiding the development toward more efficient simulation technology.
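The workflow's core idea, recording time-to-solution together with metadata so benchmark results stay comparable across hardware and software revisions, can be sketched minimally (this is not beNNch, just the shape of the idea):

```python
import json
import platform
import statistics
import time

# Run a simulation kernel several times and record timing statistics
# together with machine metadata in one unified record, so later runs on
# other hardware/software revisions remain comparable.
def benchmark(fn, repeats=5):
    times = []
    for _ in range(repeats):
        t0 = time.perf_counter()
        fn()
        times.append(time.perf_counter() - t0)
    return {
        "time_to_solution_s": {
            "mean": statistics.mean(times),
            "stdev": statistics.stdev(times),
        },
        "metadata": {
            "python": platform.python_version(),
            "machine": platform.machine(),
        },
    }

def dummy_kernel():
    # Stand-in for a network simulation step.
    sum(i * i for i in range(100_000))

record = benchmark(dummy_kernel)
print(json.dumps(record, indent=2))
```

A real workflow would additionally pin software versions, compiler flags, and the full network configuration into the metadata block, which is exactly the reproducibility gap the paper addresses.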
Affiliation(s)
- Jasper Albers
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Correspondence: Jasper Albers
- Jari Pronold
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Anno Christopher Kurth
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- RWTH Aachen University, Aachen, Germany
- Stine Brekke Vennemo
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Alexander Patronis
- Jülich Supercomputing Centre (JSC), Jülich Research Centre, Jülich, Germany
- Dennis Terhorst
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Jakob Jordan
- Department of Physiology, University of Bern, Bern, Switzerland
- Susanne Kunkel
- Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, Aachen, Germany
- Johanna Senk
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
8. Chen H, Huo D, Zhang J. Gas Recognition in E-Nose System: A Review. IEEE Trans Biomed Circuits Syst 2022;16:169-184. PMID: 35412988; DOI: 10.1109/tbcas.2022.3166530.
Abstract
Gas recognition is essential in an electronic nose (E-nose) system, which is responsible for recognizing the multivariate responses obtained by gas sensors in various applications. Over the past decades, classical gas recognition approaches such as principal component analysis (PCA) have been widely applied in E-nose systems. In recent years, the artificial neural network (ANN) has revolutionized the field of E-nose, especially the spiking neural network (SNN). In this paper, we investigate recent gas recognition methods for E-nose, and compare and analyze them in terms of algorithms and hardware implementations. We find that each classical gas recognition method has a relatively fixed framework and few parameters, which makes it easy to design and able to perform well with limited gas samples, but weak in multi-gas recognition under noise. ANN-based methods, by contrast, obtain better recognition accuracy with flexible architectures and many parameters. However, some ANNs, such as deep convolutional neural networks (CNNs), are too complex to be implemented in portable E-nose systems. SNN-based gas recognition methods achieve satisfying accuracy, recognize more types of gases, and can be implemented with energy-efficient hardware, which makes them a promising candidate for multi-gas identification.
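The classical pipeline the review mentions, PCA followed by a simple classifier, can be sketched on synthetic sensor-array responses (all data here is fabricated for illustration; real e-nose features would come from sensor response curves):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic e-nose data: 3 gases x 40 samples of 8-sensor response patterns,
# each gas a noisy copy of a class-specific prototype response.
prototypes = rng.normal(size=(3, 8))
X = np.vstack([p + 0.2 * rng.normal(size=(40, 8)) for p in prototypes])
y = np.repeat(np.arange(3), 40)

# PCA via SVD of the centered data, then a nearest-centroid classifier
# in the 2-dimensional principal subspace.
Xc = X - X.mean(axis=0)
_, _, Vt = np.linalg.svd(Xc, full_matrices=False)
Z = Xc @ Vt[:2].T                       # project onto first 2 PCs
centroids = np.array([Z[y == k].mean(axis=0) for k in range(3)])
pred = np.argmin(((Z[:, None, :] - centroids) ** 2).sum(-1), axis=1)
acc = (pred == y).mean()
print(f"PCA + nearest-centroid accuracy: {acc:.2f}")
```

With clean, well-separated responses this fixed pipeline works almost perfectly, which matches the review's point: classical methods are simple and effective on limited samples, and their weaknesses only show up with overlapping gas mixtures and noise.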
9. Yang S, Liu R, Liu T, Zhuang Y, Li J, Teng Y. Constructing artificial neural networks using genetic circuits to realize neuromorphic computing. Chin Sci Bull 2021. DOI: 10.1360/tb-2021-0501.
10. Primavera BA, Shainline JM. Considerations for Neuromorphic Supercomputing in Semiconducting and Superconducting Optoelectronic Hardware. Front Neurosci 2021;15:732368. PMID: 34552465; PMCID: PMC8450355; DOI: 10.3389/fnins.2021.732368.
Abstract
Any large-scale spiking neuromorphic system striving for complexity at the level of the human brain and beyond will need to be co-optimized for communication and computation. Such reasoning leads to the proposal for optoelectronic neuromorphic platforms that leverage the complementary properties of optics and electronics. Starting from the conjecture that future large-scale neuromorphic systems will utilize integrated photonics and fiber optics for communication in conjunction with analog electronics for computation, we consider two possible paths toward achieving this vision. The first is a semiconductor platform based on analog CMOS circuits and waveguide-integrated photodiodes. The second is a superconducting approach that utilizes Josephson junctions and waveguide-integrated superconducting single-photon detectors. We discuss available devices, assess scaling potential, and provide a list of key metrics and demonstrations for each platform. Both platforms hold potential, but their development will diverge in important respects. Semiconductor systems benefit from a robust fabrication ecosystem and can build on extensive progress made in purely electronic neuromorphic computing but will require III-V light source integration with electronics at an unprecedented scale, further advances in ultra-low capacitance photodiodes, and success from emerging memory technologies. Superconducting systems place near theoretically minimum burdens on light sources (a tremendous boon to one of the most speculative aspects of either platform) and provide new opportunities for integrated, high-endurance synaptic memory. However, superconducting optoelectronic systems will also contend with interfacing low-voltage electronic circuits to semiconductor light sources, the serial biasing of superconducting devices on an unprecedented scale, a less mature fabrication ecosystem, and cryogenic infrastructure.
Affiliation(s)
- Bryce A. Primavera
- National Institute of Standards and Technology, Boulder, CO, United States
- Department of Physics, University of Colorado Boulder, Boulder, CO, United States
11. Chatterjee R, Paluh JL, Chowdhury S, Mondal S, Raha A, Mukherjee A. SyNC, a Computationally Extensive and Realistic Neural Net to Identify Relative Impacts of Synaptopathy Mechanisms on Glutamatergic Neurons and Their Networks in Autism and Complex Neurological Disorders. Front Cell Neurosci 2021;15:674030. PMID: 34354570; PMCID: PMC8330424; DOI: 10.3389/fncel.2021.674030.
Abstract
Synaptic function and experience-dependent plasticity across multiple synapses depend on the types of neurons interacting as well as on the intricate mechanisms that operate at the molecular level of the synapse. Understanding the complexity of information processing in synaptic networks will rely in part on effective computational models. Such models should also evaluate disruptions to synaptic function by multiple mechanisms. By co-developing algorithms alongside hardware, real-time analysis metrics can be co-prioritized along with biological complexity. The hippocampus is implicated in autism spectrum disorders (ASD), and within this region glutamatergic neurons constitute 90% of the neurons integral to the functioning of neuronal networks. Here we generate a computational model referred to as ASD interrogator (ASDint) and corresponding hardware to enable in silicon analysis of multiple ASD mechanisms affecting glutamatergic neuron synapses. The hardware architecture, Synaptic Neuronal Circuit (SyNC), is a novel GPU accelerator, or neural net, that extends discovery by acting as a biologically relevant, realistic neuron synapse in real time. Co-developed ASDint and SyNC expand spiking neural network models of plasticity to comparative analysis of retrograde messengers. The SyNC model is realized in an ASIC architecture, which enables increasingly complex scenarios to be computed without sacrificing the area efficiency of the model. Here we apply the ASDint model to analyse neuronal circuitry dysfunctions associated with ASD synaptopathies and their effects on the synaptic learning parameter, and demonstrate SyNC on an ideal ASDint scenario. Our work highlights the value of secondary pathways in evaluating complex ASD synaptopathy mechanisms. By comparing the degree of variation in the synaptic learning parameter to the response obtained from simulations of the ideal scenario, we determine the potency and timing of the effect of a particular evaluated mechanism. Hence simulations of such scenarios, even in a small neuronal network, now allow us to identify the relative impacts of changed parameters and their effects on synaptic function. Based on this, we can estimate the minimum fraction of neurons exhibiting a particular dysfunction scenario required to lead to complete failure of a neural network to coordinate pre-synaptic and post-synaptic outputs.
Affiliation(s)
- Rounak Chatterjee
- Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata, India
- Janet L Paluh
- SUNY Polytechnic Institute, College of Nanoscale Science and Engineering, Nanobioscience, Albany, NY, United States
- Souradeep Chowdhury
- Department of Electronics and Telecommunication Engineering, Jadavpur University, Kolkata, India
- Soham Mondal
- Flash Controller Team, Memory Solutions, Samsung Semiconductor India Research, Samsung Electronics Co., Ltd., Bangalore, India
- Arnab Raha
- Advanced Architecture Research, Intel Intelligent Systems Group, Intel Edge AI, Intel Corporation, Santa Clara, CA, United States
12. Zhu R, Hochstetter J, Loeffler A, Diaz-Alvarez A, Nakayama T, Lizier JT, Kuncic Z. Information dynamics in neuromorphic nanowire networks. Sci Rep 2021;11:13047. PMID: 34158521; PMCID: PMC8219687; DOI: 10.1038/s41598-021-92170-7.
Abstract
Neuromorphic systems composed of self-assembled nanowires exhibit a range of neural-like dynamics arising from the interplay of their synapse-like electrical junctions and their complex network topology. Additionally, various information processing tasks have been demonstrated with neuromorphic nanowire networks. Here, we investigate how these unique systems process information through information-theoretic metrics. In particular, Transfer Entropy (TE) and Active Information Storage (AIS) are employed to investigate dynamical information flow and short-term memory in nanowire networks. In addition to finding that the topologically central parts of networks contribute the most to the information flow, our results also reveal that TE and AIS are maximized when the network transitions from a quiescent to an active state. The performance of neuromorphic networks in memory and learning tasks is demonstrated to depend on their internal dynamical states as well as on their topological structure. Optimal performance is found when these networks are pre-initialised to the transition state where TE and AIS are maximal. Furthermore, an optimal range of information processing resources (i.e. connectivity density) is identified for performance. Overall, our results demonstrate that information dynamics is a valuable tool to study and benchmark neuromorphic systems.
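Transfer entropy, the central metric here, can be illustrated with a minimal plug-in estimator for binary time series with history length 1 (a toy estimator for intuition, not the authors' analysis toolchain):

```python
from collections import Counter

import numpy as np

# Plug-in estimate of TE_{Y->X} in bits for binary series, history length 1:
# TE = sum p(x_t+1, x_t, y_t) * log2[ p(x_t+1|x_t,y_t) / p(x_t+1|x_t) ].
def transfer_entropy(x, y):
    triples = Counter(zip(x[1:], x[:-1], y[:-1]))
    pairs_xx = Counter(zip(x[1:], x[:-1]))
    cond_x = Counter(x[:-1])
    cond_xy = Counter(zip(x[:-1], y[:-1]))
    n = len(x) - 1
    te = 0.0
    for (x1, x0, y0), c in triples.items():
        p_joint = c / n
        p_x1_given_x0y0 = c / cond_xy[(x0, y0)]
        p_x1_given_x0 = pairs_xx[(x1, x0)] / cond_x[x0]
        te += p_joint * np.log2(p_x1_given_x0y0 / p_x1_given_x0)
    return te

rng = np.random.default_rng(0)
y = rng.integers(0, 2, 5000)
x_coupled = np.roll(y, 1)             # x copies y with a one-step lag
x_random = rng.integers(0, 2, 5000)   # no coupling to y

te_coupled = transfer_entropy(x_coupled, y)
te_random = transfer_entropy(x_random, y)
print(f"TE(coupled) = {te_coupled:.3f} bits, TE(random) = {te_random:.3f} bits")
```

A perfect one-step copy yields close to 1 bit of transfer, while the uncoupled series gives essentially zero, which is the contrast the paper exploits to locate where information flows in the network.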
Affiliation(s)
- Ruomin Zhu
- School of Physics, The University of Sydney, Sydney, NSW, 2006, Australia
- Joel Hochstetter
- School of Physics, The University of Sydney, Sydney, NSW, 2006, Australia
- Alon Loeffler
- School of Physics, The University of Sydney, Sydney, NSW, 2006, Australia
- Adrian Diaz-Alvarez
- International Center for Materials Nanoarchitectonics (WPI-MANA), National Institute for Materials Science (NIMS), 1-1 Namiki, Tsukuba, Ibaraki, 305-0044, Japan
- Tomonobu Nakayama
- School of Physics, The University of Sydney, Sydney, NSW, 2006, Australia
- International Center for Materials Nanoarchitectonics (WPI-MANA), National Institute for Materials Science (NIMS), 1-1 Namiki, Tsukuba, Ibaraki, 305-0044, Japan
- Graduate School of Pure and Applied Sciences, University of Tsukuba, Tsukuba, Japan
- Joseph T Lizier
- Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, NSW, 2006, Australia
- Zdenka Kuncic
- School of Physics, The University of Sydney, Sydney, NSW, 2006, Australia
- International Center for Materials Nanoarchitectonics (WPI-MANA), National Institute for Materials Science (NIMS), 1-1 Namiki, Tsukuba, Ibaraki, 305-0044, Japan
- Centre for Complex Systems, Faculty of Engineering, The University of Sydney, Sydney, NSW, 2006, Australia
- Sydney Nano Institute, The University of Sydney, Sydney, NSW, 2006, Australia
13. Stapmanns J, Hahne J, Helias M, Bolten M, Diesmann M, Dahmen D. Event-Based Update of Synapses in Voltage-Based Learning Rules. Front Neuroinform 2021;15:609147. PMID: 34177505; PMCID: PMC8222618; DOI: 10.3389/fninf.2021.609147.
Abstract
Due to the point-like nature of neuronal spiking, efficient neural network simulators often employ event-based simulation schemes for synapses. Yet many types of synaptic plasticity rely on the membrane potential of the postsynaptic cell as a third factor, in addition to pre- and postsynaptic spike times. In some learning rules, membrane potentials influence synaptic weight changes not only at the time points of spike events but in a continuous manner. In these cases, synapses require information on the full time course of membrane potentials to update their strength, which a priori suggests a continuous update in a time-driven manner. The latter hinders the scaling of simulations to realistic cortical network sizes and to time scales relevant for learning. Here, we derive two efficient algorithms for archiving postsynaptic membrane potentials, both compatible with modern simulation engines based on event-based synapse updates. We theoretically contrast the two algorithms with a time-driven synapse update scheme to analyze advantages in terms of memory and computations. We further present a reference implementation in the spiking neural network simulator NEST for two prototypical voltage-based plasticity rules: the Clopath rule and the Urbanczik-Senn rule. For both rules, the two event-based algorithms significantly outperform the time-driven scheme. Depending on the amount of data to be stored for plasticity, which differs heavily between the rules, a strong performance increase can be achieved by compressing or sampling the information on membrane potentials. Our results on the computational efficiency of archiving information provide guidelines for the design of learning rules, in order to make them practically usable in large-scale networks.
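The event-based archiving idea can be sketched as follows: the postsynaptic neuron archives its membrane potential each time step, while a synapse stays untouched between presynaptic spikes and, at each spike event, catches up by integrating the archived trace since its last update. The weight rule below is a toy voltage-based increment for illustration, not the Clopath or Urbanczik-Senn rule.

```python
import numpy as np

V_REST = -65.0  # assumed resting potential for the toy rule

class Neuron:
    def __init__(self):
        self.v_history = []          # archived V_m, one entry per time step

    def step(self, v):
        self.v_history.append(v)

class Synapse:
    def __init__(self, eta=1e-3):
        self.w = 0.5
        self.eta = eta
        self.last_update_step = 0

    def on_pre_spike(self, post, step):
        # Event-based catch-up: integrate the archived membrane potentials
        # accumulated since this synapse's last update.
        trace = post.v_history[self.last_update_step:step]
        self.w += self.eta * sum(v - V_REST for v in trace)
        self.last_update_step = step

rng = np.random.default_rng(0)
post = Neuron()
syn = Synapse()
for t in range(1000):
    post.step(V_REST + rng.normal(0.5, 2.0))   # fluctuating toy V_m
    if t in (300, 900):                        # presynaptic spike events
        syn.on_pre_spike(post, t)
print(f"weight after event-driven updates: {syn.w:.3f}")
```

Because the two event-driven catch-ups together cover exactly the archived trace up to the last spike, the result equals a step-by-step time-driven update while touching the synapse only twice; the paper's contribution is making the archive itself cheap via compression or sampling.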
Affiliation(s)
- Jonas Stapmanns
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Institute for Theoretical Solid State Physics, RWTH Aachen University, Aachen, Germany
- Jan Hahne
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Institute for Theoretical Solid State Physics, RWTH Aachen University, Aachen, Germany
- Matthias Bolten
- School of Mathematics and Natural Sciences, Bergische Universität Wuppertal, Wuppertal, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- David Dahmen
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA Institute Brain Structure Function Relationship (INM-10), Jülich Research Centre, Jülich, Germany
14
Szczecinski NS, Quinn RD, Hunt AJ. Extending the Functional Subnetwork Approach to a Generalized Linear Integrate-and-Fire Neuron Model. Front Neurorobot 2020; 14:577804. [PMID: 33281592] [PMCID: PMC7691602] [DOI: 10.3389/fnbot.2020.577804]
Abstract
Engineering neural networks to perform specific tasks often represents a monumental challenge in determining network architecture and parameter values. In this work, we extend our previously developed method for tuning networks of non-spiking neurons, the “Functional subnetwork approach” (FSA), to the tuning of networks composed of spiking neurons. This extension enables the direct assembly and tuning of networks of spiking neurons and synapses based on the network's intended function, without the use of global optimization or machine learning. To extend the FSA, we show that the dynamics of a generalized linear integrate-and-fire (GLIF) neuron model have fundamental similarities to those of a non-spiking leaky integrator neuron model. We derive analytical expressions that show functional parallels between: (1) a spiking neuron's steady-state spiking frequency and a non-spiking neuron's steady-state voltage in response to an applied current; (2) a spiking neuron's transient spiking frequency and a non-spiking neuron's transient voltage in response to an applied current; and (3) a spiking synapse's average conductance during steady spiking and a non-spiking synapse's conductance. The models become more similar as additional spiking neurons are added to each population “node” in the network. We apply the FSA to model a neuromuscular reflex pathway in two different ways: via non-spiking components and then via spiking components. These results provide a concrete example of how a single non-spiking neuron may model the average spiking frequency of a population of spiking neurons. The resulting model also demonstrates that, by using the FSA, models can be constructed that incorporate both spiking and non-spiking units.
This work facilitates the construction of large networks of spiking neurons and synapses that perform specific functions, for example, those implemented with neuromorphic computing hardware, by providing an analytical method for directly tuning their parameters without time-consuming optimization or learning.
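The claimed parallel between a spiking neuron's steady-state firing frequency and a non-spiking leaky integrator's steady-state voltage can be checked numerically. Below is a minimal sketch using a plain LIF neuron with unit resistance rather than the paper's GLIF model, so all parameter values are illustrative assumptions:

```python
import math

def nonspiking_steady_state(I, tau=10.0, dt=0.01, T=200.0):
    """Leaky integrator without a threshold; settles to I for unit resistance."""
    v = 0.0
    for _ in range(int(T / dt)):
        v += dt * (-v + I) / tau
    return v

def lif_firing_rate(I, tau=10.0, v_th=1.0, dt=0.01, T=2000.0):
    """LIF neuron (unit resistance, reset to 0); returns spikes per unit time."""
    v, spikes = 0.0, 0
    for _ in range(int(T / dt)):
        v += dt * (-v + I) / tau
        if v >= v_th:
            v, spikes = 0.0, spikes + 1
    return spikes / T

for I in (1.2, 1.5, 2.0):
    f_analytic = 1.0 / (10.0 * math.log(I / (I - 1.0)))  # closed-form LIF rate
    print(I, round(nonspiking_steady_state(I), 3),
          round(lif_firing_rate(I), 4), round(f_analytic, 4))
```

Both quantities increase monotonically with the applied current, which is the kind of functional parallel the FSA exploits when it substitutes one model for the other.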
Affiliation(s)
- Nicholas S Szczecinski
- Department of Mechanical and Aerospace Engineering, West Virginia University, Morgantown, WV, United States
- Roger D Quinn
- Department of Mechanical and Aerospace Engineering, Case Western Reserve University, Cleveland, OH, United States
- Alexander J Hunt
- Department of Mechanical and Materials Engineering, Portland State University, Portland, OR, United States
15
Abstract
This study presents a computational model to reproduce the biological dynamics of "listening to music." A biologically plausible model of periodicity pitch detection is proposed and simulated. Periodicity pitch is computed across a range of the auditory spectrum and is detected from subsets of activated auditory nerve fibers (ANFs). These activate connected model octopus cells, which trigger model neurons detecting onsets and offsets; interval-tuned model neurons are then innervated at the appropriate interval times; and finally, a set of common interval-detecting neurons indicates pitch. Octopus cells spike rhythmically with the pitch periodicity of the sound. Batteries of interval-tuned neurons measure, stopwatch-like, the inter-spike intervals of the octopus cells by coding interval durations as first-spike latencies (FSLs). The FSL-triggered spikes coincide synchronously, through a monolayer spiking neural network, at the corresponding receiver pitch neurons.
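The interval code at the heart of this model, reading pitch off the dominant inter-spike interval of a phase-locked unit, can be illustrated with a toy estimator. The interval resolution and the 440 Hz test tone below are arbitrary choices, not the model's parameters:

```python
from collections import Counter

def estimate_pitch(spike_times_ms, resolution_ms=0.1):
    """Estimate periodicity pitch as the reciprocal of the most common
    inter-spike interval, a crude stand-in for the bank of interval-tuned neurons."""
    isis = [round((b - a) / resolution_ms)
            for a, b in zip(spike_times_ms, spike_times_ms[1:])]
    ticks, _ = Counter(isis).most_common(1)[0]
    return 1000.0 / (ticks * resolution_ms)      # Hz

# An octopus-cell-like unit locked to a 440 Hz tone spikes once per period (~2.27 ms)
period_ms = 1000.0 / 440.0
spikes = [i * period_ms for i in range(50)]
print(round(estimate_pitch(spikes), 1))
```

The estimate is quantized by the interval resolution, which is the software analogue of how finely the bank of interval-tuned neurons tiles the interval axis.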
Affiliation(s)
- Frank Klefenz
- Fraunhofer Institute for Digital Media Technology IDMT, Ilmenau, Germany
- Tamas Harczos
- Fraunhofer Institute for Digital Media Technology IDMT, Ilmenau, Germany
- Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, Göttingen, Germany
- audifon GmbH & Co. KG, Kölleda, Germany
16
Abstract
In many stochastic dynamical systems, ordinary chaotic behavior is preceded by a full-dimensional phase that exhibits 1/f-type power spectra and/or scale-free statistics of (anti)instantons such as neuroavalanches, earthquakes, etc. In contrast with the phenomenological concept of self-organized criticality, the recently found approximation-free supersymmetric theory of stochastics (STS) identifies this phase as the noise-induced chaos (N-phase), i.e., the phase where the topological supersymmetry pertaining to all stochastic dynamical systems is broken spontaneously by the condensation of the noise-induced (anti)instantons. Here, we support this picture in the context of neurodynamics. We study a 1D chain of neuron-like elements and find that the dynamics in the N-phase is indeed featured by positive stochastic Lyapunov exponents and dominated by (anti)instantonic processes of creation and annihilation of kinks and antikinks, which can be viewed as predecessors of the boundaries of neuroavalanches. We also construct the phase diagram of emulated stochastic neurodynamics on Spikey neuromorphic hardware and demonstrate that the width of the N-phase vanishes in the deterministic limit, in accordance with STS. A first result of applying STS to neurodynamics is the conclusion that a conscious brain can reside only in the N-phase.
17
Bio-Inspired Strategies for Improving the Selectivity and Sensitivity of Artificial Noses: A Review. Sensors 2020; 20:1803. [PMID: 32214038] [PMCID: PMC7146165] [DOI: 10.3390/s20061803]
Abstract
Artificial noses are broad-spectrum multisensors dedicated to the detection of volatile organic compounds (VOCs). Despite great recent progress, they still suffer from a lack of sensitivity and selectivity. We review, in a systematic way, the biomimetic strategies for improving these performance criteria, including the design of sensing materials, their immobilization on the sensing surface, the sampling of VOCs, the choice of a transduction method, and the data processing. This reflection could help address new applications in domains where high-performance artificial noses are required, such as public security and safety, the environment, industry, or healthcare.
18
Mirigliano M, Decastri D, Pullia A, Dellasega D, Casu A, Falqui A, Milani P. Complex electrical spiking activity in resistive switching nanostructured Au two-terminal devices. Nanotechnology 2020; 31:234001. [PMID: 32202254] [DOI: 10.1088/1361-6528/ab76ec]
Abstract
Networks of nanoscale objects are the subject of increasing interest as resistive switching systems for the fabrication of neuromorphic computing architectures. Nanostructured films of bare gold clusters produced in the gas phase, with thickness well beyond the electrical percolation threshold, show non-ohmic electrical behavior and resistive switching, resulting in groups of current spikes with irregular temporal organization. Here we report the systematic characterization of the temporal correlations between single spikes and of the spiking-rate power spectrum of nanostructured Au two-terminal devices consisting of a cluster-assembled film deposited between two planar electrodes. By varying the nanostructured film thickness we fabricated two different classes of devices, with high and low initial resistance respectively. We show that the switching dynamics can be described by a power-law distribution in low-resistance devices, whereas a bi-exponential behavior is observed in the high-resistance ones. The measured resistance of cluster-assembled films shows a [Formula: see text] scaling behavior in the range of analyzed frequencies. Our results suggest the possibility of using cluster-assembled Au films as components for neuromorphic systems where a certain degree of stochasticity is required.
Affiliation(s)
- M Mirigliano
- CIMAINA and Department of Physics, Università degli Studi di Milano, via Celoria 16, I-20133, Milano, Italy
19
Kungl AF, Schmitt S, Klähn J, Müller P, Baumbach A, Dold D, Kugele A, Müller E, Koke C, Kleider M, Mauch C, Breitwieser O, Leng L, Gürtler N, Güttler M, Husmann D, Husmann K, Hartel A, Karasenko V, Grübl A, Schemmel J, Meier K, Petrovici MA. Accelerated Physical Emulation of Bayesian Inference in Spiking Neural Networks. Front Neurosci 2019; 13:1201. [PMID: 31798400] [PMCID: PMC6868054] [DOI: 10.3389/fnins.2019.01201]
Abstract
Biological information processing is massively parallel, a property that gives it an edge over human-engineered computing devices. In particular, it may hold the key to overcoming the von Neumann bottleneck that limits contemporary computer architectures. Physical-model neuromorphic devices seek to replicate not only this inherent parallelism, but also aspects of its microscopic dynamics in analog circuits emulating neurons and synapses. However, these machines require network models that are not only adept at solving particular tasks, but that can also cope with the inherent imperfections of analog substrates. We present a spiking network model that performs Bayesian inference through sampling on the BrainScaleS neuromorphic platform, where we use it for generative and discriminative computations on visual data. By illustrating its functionality on this platform, we implicitly demonstrate its robustness to various substrate-specific distortive effects, as well as its accelerated capability for computation. These results showcase the advantages of brain-inspired physical computation and provide important building blocks for large-scale neuromorphic applications.
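"Bayesian inference through sampling" here means that the network's stationary distribution over binary states matches a target Boltzmann distribution. A software stand-in for what the hardware emulates with spiking neurons is plain Gibbs sampling over binary units; the weights, biases, and step counts below are arbitrary illustrative choices, not the paper's network:

```python
import math, random

random.seed(1)

def gibbs_sample(W, b, n_steps=20000, burn_in=1000):
    """Gibbs sampler over binary units z with p(z_i = 1 | rest) = sigmoid(W z + b).
    Spiking sampling networks approximate exactly this stationary distribution."""
    n = len(b)
    z = [0] * n
    counts = {}
    for step in range(n_steps):
        i = random.randrange(n)
        u = b[i] + sum(W[i][j] * z[j] for j in range(n) if j != i)
        z[i] = 1 if random.random() < 1 / (1 + math.exp(-u)) else 0
        if step >= burn_in:
            counts[tuple(z)] = counts.get(tuple(z), 0) + 1
    total = sum(counts.values())
    return {state: c / total for state, c in counts.items()}

W = [[0.0, 1.0], [1.0, 0.0]]   # two mutually excitatory units
b = [-0.5, -0.5]
p = gibbs_sample(W, b)
for state in sorted(p):
    print(state, round(p[state], 3))
```

For this symmetric two-unit network the exact Boltzmann distribution gives equal mass to (0,0) and (1,1), and the empirical frequencies converge toward it; on BrainScaleS the same convergence happens in accelerated physical time rather than in software steps.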
Affiliation(s)
- Akos F Kungl, Sebastian Schmitt, Johann Klähn, Paul Müller, Andreas Baumbach, Dominik Dold, Alexander Kugele, Eric Müller, Christoph Koke, Mitja Kleider, Christian Mauch, Oliver Breitwieser, Luziwei Leng, Nico Gürtler, Maurice Güttler, Dan Husmann, Kai Husmann, Andreas Hartel, Vitali Karasenko, Andreas Grübl, Johannes Schemmel, Karlheinz Meier
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
- Mihai A Petrovici
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany; Department of Physiology, University of Bern, Bern, Switzerland
20
Jordan J, Petrovici MA, Breitwieser O, Schemmel J, Meier K, Diesmann M, Tetzlaff T. Deterministic networks for probabilistic computing. Sci Rep 2019; 9:18303. [PMID: 31797943] [PMCID: PMC6893033] [DOI: 10.1038/s41598-019-54137-7]
Abstract
Neuronal network models of high-level brain functions such as memory recall and reasoning often rely on the presence of some form of noise. Most of these models assume that each neuron in the functional network is equipped with its own private source of randomness, often in the form of uncorrelated external noise. In vivo, synaptic background input has been suggested to serve as the main source of noise in biological neuronal networks. However, the finiteness of the number of such noise sources constitutes a challenge to this idea. Here, we show that shared-noise correlations resulting from a finite number of independent noise sources can substantially impair the performance of stochastic network models. We demonstrate that this problem is naturally overcome by replacing the ensemble of independent noise sources by a deterministic recurrent neuronal network. By virtue of inhibitory feedback, such networks can generate small residual spatial correlations in their activity which, counter to intuition, suppress the detrimental effect of shared input. We exploit this mechanism to show that a single recurrent network of a few hundred neurons can serve as a natural noise source for a large ensemble of functional networks performing probabilistic computations, each comprising thousands of units.
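The core effect, shared noise sources inducing output correlations that private noise would not, can be reproduced with two threshold units. This is a toy sketch, not the paper's network model; the mixing weights and sample counts are illustrative:

```python
import math, random

random.seed(0)

def correlated_outputs(shared_frac, n_steps=20000):
    """Two threshold units driven by a mixture of shared and private Gaussian
    noise; returns the correlation coefficient of their binary outputs."""
    xs, ys = [], []
    for _ in range(n_steps):
        shared = random.gauss(0, 1)
        a = shared_frac * shared + (1 - shared_frac) * random.gauss(0, 1)
        b = shared_frac * shared + (1 - shared_frac) * random.gauss(0, 1)
        xs.append(1.0 if a > 0 else 0.0)
        ys.append(1.0 if b > 0 else 0.0)
    mx, my = sum(xs) / len(xs), sum(ys) / len(ys)
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys)) / len(xs)
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs) / len(xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys) / len(ys))
    return cov / (sx * sy)

print(round(correlated_outputs(0.0), 2), round(correlated_outputs(0.8), 2))
```

With purely private noise the outputs decorrelate; as the shared fraction grows, so does the output correlation, which is precisely the contamination that the paper's deterministic recurrent noise network is designed to suppress.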
Affiliation(s)
- Jakob Jordan
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain-Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physiology, University of Bern, Bern, Switzerland
- Mihai A Petrovici
- Department of Physiology, University of Bern, Bern, Switzerland
- Kirchhoff Institute for Physics, Ruprecht-Karls-University Heidelberg, Heidelberg, Germany
- Oliver Breitwieser
- Kirchhoff Institute for Physics, Ruprecht-Karls-University Heidelberg, Heidelberg, Germany
- Johannes Schemmel
- Kirchhoff Institute for Physics, Ruprecht-Karls-University Heidelberg, Heidelberg, Germany
- Karlheinz Meier
- Kirchhoff Institute for Physics, Ruprecht-Karls-University Heidelberg, Heidelberg, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain-Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Tom Tetzlaff
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain-Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
21
Rozenberg MJ, Schneegans O, Stoliar P. An ultra-compact leaky-integrate-and-fire model for building spiking neural networks. Sci Rep 2019; 9:11123. [PMID: 31366958] [PMCID: PMC6668387] [DOI: 10.1038/s41598-019-47348-5]
Abstract
We introduce an ultra-compact electronic circuit that realizes the leaky-integrate-and-fire model of artificial neurons. Our circuit has only three active devices: two transistors and a silicon controlled rectifier (SCR). We demonstrate the implementation of biologically realistic features, such as spike-frequency adaptation, a refractory period, and voltage modulation of the spiking rate. All characteristic times can be controlled by the resistive parameters of the circuit. We built the circuit with off-the-shelf components and demonstrate that our ultra-compact neuron is a modular block that can be combined with others to build multi-layer deep neural networks. We also argue that our circuit has low power requirements, as it is normally off except during spike generation. Finally, we discuss the ultimate ultra-compact limit, which may be achieved by further replacing the SCR circuit with Mott materials.
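The behaviors the circuit reproduces, leaky integration, a refractory period, and spike-frequency adaptation, are straightforward to emulate in software, which makes the expected output of such a block easy to anticipate. A sketch follows; the parameter values are arbitrary, not the circuit's component values:

```python
def lif_with_adaptation(I, T=500.0, dt=0.1, tau=20.0, v_th=1.0,
                        t_ref=5.0, tau_w=100.0, b=0.05):
    """Leaky integrate-and-fire neuron with a refractory period and a
    spike-triggered adaptation current w; returns the spike times."""
    v, w, ref_until = 0.0, 0.0, -1.0
    spikes = []
    t = 0.0
    while t < T:
        if t >= ref_until:                      # membrane frozen while refractory
            v += dt * (-v + I - w) / tau
        w += dt * (-w / tau_w)                  # adaptation decays continuously
        if v >= v_th and t >= ref_until:
            spikes.append(t)
            v = 0.0                             # reset
            w += b                              # spike-triggered adaptation
            ref_until = t + t_ref
        t += dt
    return spikes

spikes = lif_with_adaptation(2.0)
isis = [y - x for x, y in zip(spikes, spikes[1:])]
print(len(spikes), round(isis[0], 1), round(isis[-1], 1))
```

The inter-spike intervals lengthen over the train as w accumulates, which is the spike-frequency adaptation the circuit exhibits, while the refractory period caps the maximum rate.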
Affiliation(s)
- M J Rozenberg
- Laboratoire de Physique des Solides, UMR8502 CNRS - Université Paris-Sud, Université Paris-Saclay, 91405, Orsay, Cedex, France
- O Schneegans
- Laboratoire Génie électrique et électronique de Paris, CentraleSupélec, UMR8507 CNRS - Sorbonne Université, Université Paris-Saclay, 91192, Gif-sur-Yvette, Cedex, France
- P Stoliar
- National Institute of Advanced Industrial Science and Technology (AIST), 305-8565, Tsukuba, Japan
22
Steffen L, Reichard D, Weinland J, Kaiser J, Roennau A, Dillmann R. Neuromorphic Stereo Vision: A Survey of Bio-Inspired Sensors and Algorithms. Front Neurorobot 2019; 13:28. [PMID: 31191287] [PMCID: PMC6546825] [DOI: 10.3389/fnbot.2019.00028]
Abstract
Any visual sensor, whether artificial or biological, maps the 3D world onto a 2D representation. The missing dimension is depth, and most species use stereo vision to recover it. Stereo vision implies multiple perspectives and matching; hence it obtains depth from a pair of images. Algorithms for stereo vision are also used successfully in robotics. Although biological systems seem to compute disparities effortlessly, artificial methods suffer from high energy demands and latency. The crucial part is the correspondence problem: finding the matching points of two images. The development of event-based cameras, inspired by the retina, enables the exploitation of an additional physical constraint: time. Owing to their asynchronous mode of operation, which takes the precise timing of spikes into account, spiking neural networks can exploit this constraint. In this work, we investigate sensors and algorithms for event-based stereo vision leading to more biologically plausible robots. Hereby, we focus mainly on binocular stereo vision.
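The time constraint that event-based cameras add to the correspondence problem can be shown with a toy matcher: events from the two cameras are paired by polarity, temporal coincidence, and row proximity. The event format `(t, x, y, polarity)` and the thresholds are illustrative assumptions, not a survey method:

```python
def match_events(left, right, max_dt=1.0, max_dy=1):
    """Greedily match DVS-style events (t, x, y, polarity) from two cameras by
    temporal coincidence on (approximately) the same row; returns
    (left_event, right_event, disparity) triples."""
    matches = []
    used = set()
    for le in left:
        best, best_cost = None, None
        for i, re in enumerate(right):
            if i in used or le[3] != re[3]:      # polarity must agree
                continue
            dt, dy = abs(le[0] - re[0]), abs(le[2] - re[2])
            if dt <= max_dt and dy <= max_dy:
                cost = dt + dy
                if best_cost is None or cost < best_cost:
                    best, best_cost = i, cost
        if best is not None:
            used.add(best)
            matches.append((le, right[best], le[1] - right[best][1]))
    return matches

left  = [(0.0, 10, 5, 1), (2.0, 30, 8, 1)]
right = [(0.1,  6, 5, 1), (2.1, 25, 8, 1)]
for l, r, disparity in match_events(left, right):
    print(disparity)   # prints 4, then 5
```

Because two events that are milliseconds apart almost certainly stem from different scene points, the timestamp prunes most candidate matches before any appearance-based cost is even computed, which is the efficiency argument made for event-based stereo.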
Affiliation(s)
- Lea Steffen
- FZI Research Center for Information Technology, Karlsruhe, Germany
- Daniel Reichard
- FZI Research Center for Information Technology, Karlsruhe, Germany
- Jakob Weinland
- FZI Research Center for Information Technology, Karlsruhe, Germany
- Jacques Kaiser
- FZI Research Center for Information Technology, Karlsruhe, Germany
- Arne Roennau
- FZI Research Center for Information Technology, Karlsruhe, Germany
- Rüdiger Dillmann
- FZI Research Center for Information Technology, Karlsruhe, Germany; Humanoids and Intelligence Systems Lab, Karlsruhe Institute of Technology (KIT), Karlsruhe, Germany
23
Jokar E, Abolfathi H, Ahmadi A. A Novel Nonlinear Function Evaluation Approach for Efficient FPGA Mapping of Neuron and Synaptic Plasticity Models. IEEE Trans Biomed Circuits Syst 2019; 13:454-469. [PMID: 30802873] [DOI: 10.1109/tbcas.2019.2900943]
Abstract
Efficient hardware realization of spiking neural networks is of great significance in a wide variety of applications, such as high-speed modeling and simulation of large-scale neural systems. Exploiting the key features of FPGAs, this paper presents a novel nonlinear function evaluation approach, based on an effective uniform piecewise linear segmentation method, to efficiently approximate the nonlinear terms of neuron and synaptic plasticity models targeting low-cost digital implementation. The proposed approach takes advantage of a high-speed and extremely simple segment address encoder unit regardless of the number of segments, and therefore is capable of accurately approximating a given nonlinear function with a large number of straight lines. In addition, this approach can be efficiently mapped into FPGAs with minimal hardware cost. To investigate the application of the proposed nonlinear function evaluation approach in low-cost neuromorphic circuit design, it is applied to four case studies: the Izhikevich and FitzHugh-Nagumo neuron models as 2-dimensional case studies, the Hindmarsh-Rose neuron model as a relatively complex 3-dimensional model containing two nonlinear terms, and a calcium-based synaptic plasticity model capable of producing various STDP curves. Simulation and FPGA synthesis results demonstrate that the hardware proposed for each case study is capable of producing various responses remarkably similar to the original model and significantly outperforms the previously published counterparts in terms of resource utilization and maximum clock frequency.
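The key property of a uniform segmentation is that the segment address is obtained trivially from the input (in hardware, from its upper bits) regardless of the number of segments, so many segments cost almost nothing in lookup logic. The sketch below illustrates the idea in software; the target function and segment count are illustrative choices, not the paper's FPGA datapath:

```python
import math

def build_pwl_table(f, x_min, x_max, n_segments):
    """Precompute (slope, intercept) per uniform segment; in hardware the
    segment index would simply be the upper bits of the fixed-point input."""
    width = (x_max - x_min) / n_segments
    table = []
    for k in range(n_segments):
        x0, x1 = x_min + k * width, x_min + (k + 1) * width
        slope = (f(x1) - f(x0)) / width
        table.append((slope, f(x0) - slope * x0))
    return table, width

def pwl_eval(table, width, x_min, x):
    k = min(int((x - x_min) / width), len(table) - 1)   # "segment address encoder"
    slope, intercept = table[k]
    return slope * x + intercept

# Approximate an exponential nonlinearity of the kind found in neuron models
table, width = build_pwl_table(math.exp, -2.0, 2.0, 64)
err = max(abs(pwl_eval(table, width, -2.0, x / 100) - math.exp(x / 100))
          for x in range(-200, 201))
print(err < 1e-2)   # prints True
```

Doubling the number of segments roughly quarters the worst-case error of each linear piece while the address computation stays a single shift/truncation, which is why the paper can afford "a large number of straight lines" at minimal hardware cost.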
24
Vasquez HG, Zocchi G. Analog control with two Artificial Axons. Bioinspiration & Biomimetics 2018; 14:016017. [PMID: 30523907] [DOI: 10.1088/1748-3190/aaf123]
Abstract
The artificial axon is a recently introduced synthetic assembly of supported lipid bilayers and voltage-gated ion channels, displaying the basic electrophysiology of nerve cells. Here we demonstrate the use of two artificial axons as control elements to achieve a simple task. Namely, we steer a remote-controlled car towards a light source, using the sensory-input-dependent firing rate of the axons as the control signal for turning left or right. We present the result in the form of an analysis of a movie of the car approaching the light source. In general terms, with this work we pursue a constructivist approach to exploring the nexus between machine language at the nerve cell level and behavior.
Affiliation(s)
- Hector G Vasquez
- Department of Physics and Astronomy, University of California, Los Angeles, CA, United States of America
25
Dalgaty T, Vianello E, De Salvo B, Casas J. Insect-inspired neuromorphic computing. Curr Opin Insect Sci 2018; 30:59-66. [PMID: 30553486] [DOI: 10.1016/j.cois.2018.09.006]
Abstract
The steady improvement in the performance of computing systems seen for many decades is levelling off as the miniaturization of semiconductor technology approaches the atomic limit, facing severe physical and technological issues. Neuromorphic computing is an emerging solution which uses silicon technology in a different way, in line with the computational principles observed in animal nervous systems. In this article, we argue that the nervous systems of insects in particular offer an ideal starting point for incorporation into realistic neuromorphic systems, and we review research on developing insect-inspired neuromorphic systems. We conclude with an exciting yet tangible vision of a full neuromorphic sensory-motor system in which a liquid state machine, modelling the function of the insect mushroom body, links input to output and allows the work discussed to be amalgamated into a hierarchical framework inspired by how information flows through insects.
Affiliation(s)
- Jerome Casas
- Insect Biology Research Institute, UMR CNRS 7261, University of Tours, Tours 37200, France
26
Pfeiffer M, Pfeil T. Deep Learning With Spiking Neurons: Opportunities and Challenges. Front Neurosci 2018; 12:774. [PMID: 30410432] [PMCID: PMC6209684] [DOI: 10.3389/fnins.2018.00774]
Abstract
Spiking neural networks (SNNs) are inspired by information processing in biology, where sparse and asynchronous binary signals are communicated and processed in a massively parallel fashion. SNNs on neuromorphic hardware exhibit favorable properties such as low power consumption, fast inference, and event-driven information processing. This makes them interesting candidates for the efficient implementation of deep neural networks, the method of choice for many machine learning tasks. In this review, we address the opportunities that deep spiking networks offer and investigate in detail the challenges associated with training SNNs in a way that makes them competitive with conventional deep learning, but simultaneously allows for efficient mapping to hardware. A wide range of training methods for SNNs is presented, ranging from the conversion of conventional deep networks into SNNs and constrained training before conversion to spiking variants of backpropagation and biologically motivated variants of STDP. The goal of our review is to define a categorization of SNN training methods, and to summarize their advantages and drawbacks. We further discuss relationships between SNNs and binary networks, which are becoming popular for efficient digital hardware implementation. Neuromorphic hardware platforms have great potential to enable deep spiking networks in real-world applications. We compare the suitability of various neuromorphic systems that have been developed over the past years, and investigate potential use cases. Neuromorphic approaches and conventional machine learning should not be considered simply two solutions to the same classes of problems; instead, it is possible to identify and exploit their task-specific advantages. Deep SNNs offer great opportunities to work with new types of event-based sensors, exploit temporal codes and local on-chip learning, and we have so far just scratched the surface of realizing these advantages in practical applications.
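One of the conversion routes mentioned, turning a conventional deep network into an SNN, rests on rate coding: an integrate-and-fire neuron with subtractive reset fires at a rate that approximates a ReLU activation. A minimal illustration (the step count and input values are arbitrary, and this is one common conversion scheme, not the review's full taxonomy):

```python
def relu(x):
    return max(0.0, x)

def if_neuron_rate(x, n_steps=1000):
    """Integrate-and-fire neuron with reset-by-subtraction driven by a constant
    input; its firing rate approximates ReLU(x) for x in [0, 1]."""
    v, spikes = 0.0, 0
    for _ in range(n_steps):
        v += x
        if v >= 1.0:
            v -= 1.0      # subtractive reset preserves residual charge
            spikes += 1
    return spikes / n_steps

for x in (-0.3, 0.2, 0.7):
    print(round(relu(x), 2), round(if_neuron_rate(x), 2))
```

Subtractive reset (rather than reset-to-zero) keeps the residual membrane charge, so the rate tracks the activation without a systematic bias; the approximation error shrinks as the number of simulation steps grows, which is the classic accuracy-latency trade-off of converted SNNs.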
Affiliation(s)
- Michael Pfeiffer
- Bosch Center for Artificial Intelligence, Robert Bosch GmbH, Renningen, Germany
27
Aamir SA, Muller P, Kiene G, Kriener L, Stradmann Y, Grubl A, Schemmel J, Meier K. A Mixed-Signal Structured AdEx Neuron for Accelerated Neuromorphic Cores. IEEE Trans Biomed Circuits Syst 2018; 12:1027-1037. [PMID: 30047897] [DOI: 10.1109/tbcas.2018.2848203]
Abstract
Here, we describe a multicompartment neuron circuit based on the adaptive-exponential I&F (AdEx) model, developed for the second-generation BrainScaleS hardware. Based on an existing modular leaky integrate-and-fire (LIF) architecture designed in 65-nm CMOS, the circuit features exponential spike generation, neuronal adaptation, intercompartmental connections as well as a conductance-based reset. The design reproduces a diverse set of firing patterns observed in cortical pyramidal neurons. Further, it enables the emulation of sodium and calcium spikes, as well as N-methyl-D-aspartate plateau potentials known from apical and thin dendrites. We characterize the AdEx circuit extensions and exemplify how the interplay between passive and nonlinear active signal processing enhances the computational capabilities of single (but structured) on-chip neurons.
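The AdEx equations the circuit implements are compact enough to integrate directly in software. The forward-Euler sketch below shows the two ingredients named in the abstract, exponential spike generation and the adaptation variable w that produces spike-frequency adaptation; the parameter values are illustrative, not the chip's operating point:

```python
import math

def adex(I, T=300.0, dt=0.01, C=1.0, gL=0.1, EL=-65.0, VT=-50.0,
         dT=2.0, b=0.02, tau_w=100.0, v_reset=-58.0, v_spike=0.0):
    """Forward-Euler integration of a (spike-triggered-adaptation-only)
    adaptive exponential integrate-and-fire neuron; returns spike times."""
    v, w, spikes = EL, 0.0, []
    t = 0.0
    while t < T:
        # exponential term drives the upswing once v approaches VT
        dv = (-gL * (v - EL) + gL * dT * math.exp((v - VT) / dT) - w + I) / C
        dw = -w / tau_w
        v += dt * dv
        w += dt * dw
        if v >= v_spike:
            spikes.append(t)
            v = v_reset
            w += b        # spike-triggered adaptation
        t += dt
    return spikes

spikes = adex(I=2.0)
isis = [y - x for x, y in zip(spikes, spikes[1:])]
print(len(spikes), isis[-1] > isis[0])   # ISIs lengthen: spike-frequency adaptation
```

With the subthreshold adaptation coupling omitted for brevity, each spike increments w, slowing subsequent spikes; richer AdEx firing patterns (bursting, irregular spiking) emerge from other corners of the same parameter space that the circuit exposes as tunable biases.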
28
Spiking neurons with short-term synaptic plasticity form superior generative networks. Sci Rep 2018; 8:10651. [PMID: 30006554] [PMCID: PMC6045624] [DOI: 10.1038/s41598-018-28999-2]
Abstract
Spiking networks that perform probabilistic inference have been proposed both as models of cortical computation and as candidates for solving problems in machine learning. However, the evidence for spike-based computation being in any way superior to non-spiking alternatives remains scarce. We propose that short-term synaptic plasticity can provide spiking networks with distinct computational advantages compared to their classical counterparts. When learning from high-dimensional, diverse datasets, deep attractors in the energy landscape often cause mixing problems for the sampling process. Classical algorithms solve this problem by employing various tempering techniques, which are both computationally demanding and require global state updates. We demonstrate how similar results can be achieved in spiking networks endowed with local short-term synaptic plasticity. Additionally, we discuss how these networks can even outperform tempering-based approaches when the training data is imbalanced. We thereby uncover a powerful computational property of the biologically inspired, local, spike-triggered synaptic dynamics based simply on a limited pool of synaptic resources, which enables them to deal with complex sensory data.
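The "limited pool of synaptic resources" is the classic short-term depression mechanism: each presynaptic spike consumes a fraction of the available resources, which then recover with a fixed time constant. A Tsodyks-Markram-style sketch of depression only (the U and tau_rec values are illustrative, and this is the mechanism, not the paper's generative network):

```python
import math

def short_term_depression(spike_times, tau_rec=300.0, U=0.5):
    """Synapse with a limited resource pool x: each spike releases a fraction U
    of x, and x recovers toward 1 with time constant tau_rec (ms).
    Returns the efficacy U*x at each spike."""
    x, last_t = 1.0, None
    efficacies = []
    for t in spike_times:
        if last_t is not None:
            x = 1.0 - (1.0 - x) * math.exp(-(t - last_t) / tau_rec)
        efficacies.append(U * x)
        x -= U * x            # resources consumed by this spike
        last_t = t
    return efficacies

# A regular 50 Hz train depresses; a long pause lets the synapse recover
train = [i * 20.0 for i in range(5)] + [2000.0]
eff = short_term_depression(train)
print([round(e, 3) for e in eff])
```

During sustained activity the efficacy decays, transiently weakening the attractor the network currently occupies, which is the local, spike-triggered analogue of the global tempering step that classical samplers need in order to escape deep modes.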
29
Li W, Ovchinnikov IV, Chen H, Wang Z, Lee A, Lee H, Cepeda C, Schwartz RN, Meier K, Wang KL. A Basic Phase Diagram of Neuronal Dynamics. Neural Comput 2018; 30:2418-2438. [PMID: 29894659] [DOI: 10.1162/neco_a_01103]
Abstract
The extreme complexity of the brain has long attracted the attention of neuroscientists and other researchers. More recently, neuromorphic hardware has matured into a powerful new tool for studying neuronal dynamics. Here, we study neuronal dynamics using different settings on a neuromorphic chip built with flexible parameters of neuron models. The distinctive feature of our setting is the introduction of a weak-noise environment into a network of leaky integrate-and-fire (LIF) neurons. We observed three different types of collective neuronal activities, or phases, separated by sharp boundaries, or phase transitions. From this, we construct a rudimentary phase diagram of neuronal dynamics and demonstrate that a noise-induced chaotic phase (N-phase), which is dominated by neuronal avalanche activity (intermittent aperiodic neuron firing), emerges in the presence of noise, and that its width grows with the noise intensity. The dynamics can be manipulated in this N-phase. Our results and a comparison with clinical data are consistent with the literature and with our previous work showing that the healthy brain must reside in the N-phase. We argue that the brain phase diagram, with further refinement, may be used for the diagnosis and treatment of mental disease, and we also suggest that the dynamics may be manipulated to serve as a means of new information processing (e.g., for optimization). Neuromorphic chips, similar to the one we used but with a variety of neuron models, may further enhance the understanding of human brain function and accelerate the development of neuroscience research.
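The role of weak noise can be reproduced in a toy software model. Below is a minimal sketch (our illustration, not the chip's implementation) of a single LIF neuron driven below threshold, where noise alone induces the intermittent firing characteristic of the N-phase:

```python
import math
import random

def lif_firing_rate(drive, sigma, n_steps=50_000, dt=1e-4,
                    tau=0.02, v_th=1.0, v_reset=0.0):
    """Euler-Maruyama simulation of a leaky integrate-and-fire neuron with
    additive Gaussian noise; drive and threshold are in normalized units.
    Returns the firing rate in Hz."""
    v, n_spikes = 0.0, 0
    for _ in range(n_steps):
        noise = sigma * math.sqrt(dt) * random.gauss(0.0, 1.0)
        v += dt * (drive - v) / tau + noise
        if v >= v_th:
            v = v_reset
            n_spikes += 1
    return n_spikes / (n_steps * dt)
```

With drive = 0.8 (subthreshold) the neuron stays silent without noise but fires intermittently once sigma is large enough, illustrating how a noise-induced firing regime can appear and widen with noise intensity.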
Affiliation(s)
- Wenyuan Li
- Department of Electrical Engineering, UCLA, Los Angeles, CA 90095, U.S.A.
- Igor V Ovchinnikov
- Department of Electrical Engineering, UCLA, Los Angeles, CA 90095, U.S.A.
- Honglin Chen
- Department of Mathematics, UCLA, Los Angeles, CA 90095, U.S.A.
- Zhe Wang
- Department of Mechanical Engineering, UCLA, Los Angeles, CA 90095, U.S.A.
- Albert Lee
- Department of Electrical Engineering, UCLA, Los Angeles, CA 90095, U.S.A.
- Houchul Lee
- Department of Electrical Engineering, UCLA, Los Angeles, CA 90095, U.S.A.
- Carlos Cepeda
- David Geffen School of Medicine, UCLA, Los Angeles, CA 90095, U.S.A.
- Robert N Schwartz
- Department of Electrical Engineering, UCLA, Los Angeles, CA 90095, U.S.A.
- Karlheinz Meier
- Kirchhoff Institute for Physics, Heidelberg University, 69120 Heidelberg, Germany
- Kang L Wang
- Department of Electrical Engineering, UCLA, Los Angeles, CA 90095, U.S.A.
30
Wang RM, Thakur CS, van Schaik A. An FPGA-Based Massively Parallel Neuromorphic Cortex Simulator. Front Neurosci 2018; 12:213. [PMID: 29692702] [PMCID: PMC5902707] [DOI: 10.3389/fnins.2018.00213]
Abstract
This paper presents a massively parallel and scalable neuromorphic cortex simulator designed for simulating large and structurally connected spiking neural networks, such as complex models of various areas of the cortex. The main novelty of this work is the abstraction of a neuromorphic architecture into clusters represented by minicolumns and hypercolumns, analogous to the fundamental structural units observed in neurobiology. Without this approach, simulating large-scale fully connected networks requires prohibitively large memory to store look-up tables for point-to-point connections. Instead, we use a novel architecture, based on the structural connectivity in the neocortex, such that all the required parameters and connections can be stored in on-chip memory. The cortex simulator can easily be reconfigured for simulating different neural networks, without any change in hardware structure, by programming the memory. A hierarchical communication scheme allows one neuron to have a fan-out of up to 200k neurons. As a proof of concept, an implementation on one Altera Stratix V FPGA was able to simulate 20 million to 2.6 billion leaky integrate-and-fire (LIF) neurons in real time. We verified the system by emulating a simplified auditory cortex (with 100 million neurons). This cortex simulator achieved a low power dissipation of 1.62 μW per neuron. With the advent of commercially available FPGA boards, our system offers an accessible and scalable tool for the design, real-time simulation, and analysis of large-scale spiking neural networks.
Affiliation(s)
- Runchun M Wang
- The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
- Chetan S Thakur
- Department of Electronic Systems Engineering, Indian Institute of Science, Bangalore, India
- André van Schaik
- The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
31
Unsupervised heart-rate estimation in wearables with Liquid states and a probabilistic readout. Neural Netw 2018; 99:134-147. [PMID: 29414535] [DOI: 10.1016/j.neunet.2017.12.015]
Abstract
Heart-rate estimation is a fundamental feature of modern wearable devices. In this paper we propose a machine learning technique to estimate heart rate from electrocardiogram (ECG) data collected using wearable devices. The novelty of our approach lies in (1) encoding spatio-temporal properties of ECG signals directly into spike trains and using these to excite recurrently connected spiking neurons in a Liquid State Machine computation model; (2) a novel learning algorithm; and (3) an intelligently designed unsupervised readout based on fuzzy c-means clustering of spike responses from a subset of neurons (liquid states), selected using particle swarm optimization. Our approach differs from existing works by learning directly from ECG signals (allowing personalization), without requiring costly data annotations. Additionally, our approach can be easily implemented on state-of-the-art spiking-based neuromorphic systems, offering high accuracy at a significantly lower energy footprint, which extends the battery life of wearable devices. We validated our approach with CARLsim, a GPU-accelerated spiking neural network simulator modeling Izhikevich spiking neurons with spike-timing-dependent plasticity (STDP) and homeostatic scaling. A range of subjects is considered, from in-house clinical trials and public ECG databases. Results show high accuracy and a low energy footprint in heart-rate estimation across subjects with and without cardiac irregularities, signifying the strong potential of this approach for integration in future wearable devices.
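The paper's encoder is tied to its Liquid State Machine pipeline, but the general idea of turning a continuous waveform into spike trains can be illustrated with a simple send-on-delta scheme (a generic sketch, not the authors' algorithm):

```python
def delta_encode(signal, threshold):
    """Send-on-delta spike encoding: emit a (+1) or (-1) spike each time the
    signal moves `threshold` away from the last transmitted reference level."""
    ref, spikes = signal[0], []
    for t, x in enumerate(signal[1:], start=1):
        while x - ref >= threshold:   # signal rose by a full threshold step
            spikes.append((t, +1))
            ref += threshold
        while ref - x >= threshold:   # signal fell by a full threshold step
            spikes.append((t, -1))
            ref -= threshold
    return spikes
```

A sharp upstroke such as the ECG R-wave then produces a burst of +1 spikes whose timing carries the waveform's temporal structure into the recurrent network.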
32
An Investigation into Spike-Based Neuromorphic Approaches for Artificial Olfactory Systems. Sensors 2017; 17:2591. [PMID: 29125586] [PMCID: PMC5713038] [DOI: 10.3390/s17112591]
Abstract
The implementation of neuromorphic methods has delivered promising results for vision and auditory sensors. These methods focus on mimicking the neuro-biological architecture to generate and process spike-based information with minimal power consumption. With increasing interest in developing low-power and robust chemical sensors, the application of neuromorphic engineering concepts to electronic noses has provided an impetus for research focused on improving these instruments. While conventional e-noses apply computationally expensive and power-consuming data-processing strategies, neuromorphic olfactory sensors implement the biological olfaction principles found in humans and insects to simplify the handling of multivariate sensory data by generating and processing spike-based information. Over the last decade, research on neuromorphic olfaction has established the capability of these sensors to tackle problems that plague current e-nose implementations, such as drift, response time, portability, power consumption and size. This article brings together the key contributions in neuromorphic olfaction and identifies future research directions for developing near-real-time olfactory sensors that can be deployed in a range of applications, such as biosecurity and environmental monitoring. Furthermore, we aim to expose the computational parallels between neuromorphic olfaction and gustation for future research focusing on the correlation of these senses.
33
Kim JZ, Soffer JM, Kahn AE, Vettel JM, Pasqualetti F, Bassett DS. Role of Graph Architecture in Controlling Dynamical Networks with Applications to Neural Systems. Nature Physics 2017; 14:91-98. [PMID: 29422941] [PMCID: PMC5798649] [DOI: 10.1038/nphys4268]
Abstract
Networked systems display complex patterns of interactions between components. In physical networks, these interactions often occur along structural connections that link components in a hard-wired connection topology, supporting a variety of system-wide dynamical behaviors such as synchronization. While descriptions of these behaviors are important, they are only a first step towards understanding and harnessing the relationship between network topology and system behavior. Here, we use linear network control theory to derive accurate closed-form expressions that relate the connectivity of a subset of structural connections (those linking driver nodes to non-driver nodes) to the minimum energy required to control networked systems. To illustrate the utility of the mathematics, we apply this approach to high-resolution connectomes recently reconstructed from Drosophila, mouse, and human brains. We use these principles to suggest an advantage of the human brain in supporting diverse network dynamics with small energetic costs while remaining robust to perturbations, and to perform clinically accessible targeted manipulation of the brain's control performance by removing single edges in the network. Generally, our results ground the expectation of a control system's behavior in its network architecture, and directly inspire new directions in network analysis and design via distributed control.
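The central quantity in the paper, the minimum energy needed to drive a network to a target state, has a compact discrete-time analogue via the controllability Gramian. The sketch below is our illustration in discrete time rather than the paper's continuous-time closed-form derivation:

```python
import numpy as np

def min_control_energy(A, B, x_f, T):
    """Minimum input energy sum_k ||u_k||^2 needed to drive x_{k+1} = A x_k + B u_k
    from the origin to x_f in T steps: E = x_f^T W_T^{-1} x_f, where W_T is the
    T-step controllability Gramian."""
    n = A.shape[0]
    W = np.zeros((n, n))
    A_pow = np.eye(n)
    for _ in range(T):
        W += A_pow @ B @ B.T @ A_pow.T   # accumulate A^k B B^T (A^T)^k
        A_pow = A_pow @ A
    return float(x_f @ np.linalg.solve(W, x_f))
```

Longer horizons and better-connected driver sets enlarge the Gramian's spectrum and hence lower the control energy, which is the structural effect the paper's closed-form expressions capture for driver-to-non-driver connectivity.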
Affiliation(s)
- Jason Z Kim
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA, 19104
- Jonathan M Soffer
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104
- Ari E Kahn
- Department of Neuroscience, University of Pennsylvania, Philadelphia, PA 19104, and U.S. Army Research Laboratory, Aberdeen, MD 21001
- Jean M Vettel
- Human Research & Engineering Directorate, U.S. Army Research Laboratory, Aberdeen, MD 21001, and Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104
- Fabio Pasqualetti
- Department of Mechanical Engineering, University of California, Riverside, Riverside, CA 92521
- Danielle S Bassett
- Department of Bioengineering, University of Pennsylvania, Philadelphia, PA 19104
34
Stöckel A, Jenzen C, Thies M, Rückert U. Binary Associative Memories as a Benchmark for Spiking Neuromorphic Hardware. Front Comput Neurosci 2017; 11:71. [PMID: 28878642] [PMCID: PMC5572441] [DOI: 10.3389/fncom.2017.00071]
Abstract
Large-scale neuromorphic hardware platforms, specialized computer systems for the energy-efficient simulation of spiking neural networks, are being developed around the world, for example as part of the European Human Brain Project (HBP). Due to conceptual differences, a universal performance analysis of these systems in terms of runtime, accuracy and energy efficiency is non-trivial, yet indispensable for further hardware and software development. In this paper we describe a scalable benchmark based on a spiking neural network implementation of the binary neural associative memory. We treat neuromorphic hardware and software simulators as black boxes and execute exactly the same network description across all devices. Experiments on the HBP platforms under varying configurations of the associative memory show that the presented method makes it possible to test the quality of the neuron model implementation and to explain significant deviations from the expected reference output.
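The binary neural associative memory underlying the benchmark is the classic Willshaw model, which is easy to state in software. A minimal reference sketch is shown below; the benchmark's mapping of this model onto spiking neurons and synapses is not reproduced here:

```python
import numpy as np

def train_willshaw(patterns):
    """Willshaw binary associative memory: clipped Hebbian learning.
    W[i, j] = 1 iff units i and j are co-active in at least one stored pattern."""
    n = patterns.shape[1]
    W = np.zeros((n, n), dtype=bool)
    for p in patterns.astype(bool):
        W |= np.outer(p, p)
    return W

def recall(W, cue, k):
    """Recall by thresholding each unit's input sum at k, the number of
    active units in the cue."""
    sums = W.astype(int) @ cue.astype(int)
    return (sums >= k).astype(int)
```

Sweeping the number of stored patterns and measuring recall errors yields the kind of analytically known reference output against which hardware runs can be compared.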
Affiliation(s)
- Andreas Stöckel
- Cognitronics and Sensor Systems, Cluster of Excellence Cognitive Interaction Technology, Faculty of Technology, Bielefeld University, Bielefeld, Germany
- Christoph Jenzen
- Cognitronics and Sensor Systems, Cluster of Excellence Cognitive Interaction Technology, Faculty of Technology, Bielefeld University, Bielefeld, Germany
- Michael Thies
- Cognitronics and Sensor Systems, Cluster of Excellence Cognitive Interaction Technology, Faculty of Technology, Bielefeld University, Bielefeld, Germany
- Ulrich Rückert
- Cognitronics and Sensor Systems, Cluster of Excellence Cognitive Interaction Technology, Faculty of Technology, Bielefeld University, Bielefeld, Germany
35
Broccard FD, Joshi S, Wang J, Cauwenberghs G. Neuromorphic neural interfaces: from neurophysiological inspiration to biohybrid coupling with nervous systems. J Neural Eng 2017; 14:041002. [PMID: 28573983] [DOI: 10.1088/1741-2552/aa67a9]
Abstract
OBJECTIVE: Computation in nervous systems operates with different computational primitives, and on different hardware, than traditional digital computation, and is thus subject to different constraints regarding the use of physical resources such as time, space and energy. In an effort to better understand neural computation on a physical medium with similar spatiotemporal and energetic constraints, the field of neuromorphic engineering aims to design and implement electronic systems that emulate in very large-scale integration (VLSI) hardware the organization and functions of neural systems at multiple levels of biological organization, from individual neurons up to large circuits and networks. Mixed analog/digital neuromorphic VLSI systems are compact, consume little power and operate in real time independently of the size and complexity of the model. APPROACH: This article highlights current efforts to interface neuromorphic systems with neural systems at multiple levels of biological organization, from the synaptic to the system level, and discusses the prospects for future biohybrid systems with neuromorphic circuits of greater complexity. MAIN RESULTS: Single silicon neurons have been interfaced successfully with invertebrate and vertebrate neural networks. This approach allowed the investigation of neural properties that are inaccessible with traditional techniques while providing a realistic biological context not achievable with traditional numerical modeling methods. At the network level, populations of neurons are envisioned to communicate bidirectionally with neuromorphic processors of hundreds or thousands of silicon neurons. Recent work on brain-machine interfaces suggests that this is feasible with current neuromorphic technology. SIGNIFICANCE: Biohybrid interfaces between biological neurons and VLSI neuromorphic systems of varying complexity have started to emerge in the literature. Primarily intended as a computational tool for investigating fundamental questions related to neural dynamics, the sophistication of current neuromorphic systems now allows direct interfaces with large neuronal networks and circuits, resulting in potentially interesting clinical applications for neuroengineering systems, neuroprosthetics and neurorehabilitation.
Affiliation(s)
- Frédéric D Broccard
- Institute for Neural Computation, UC San Diego, United States of America. Department of Bioengineering, UC San Diego, United States of America
36
Wang R, Thakur CS, Cohen G, Hamilton TJ, Tapson J, van Schaik A. Neuromorphic Hardware Architecture Using the Neural Engineering Framework for Pattern Recognition. IEEE Trans Biomed Circuits Syst 2017; 11:574-584. [PMID: 28436888] [DOI: 10.1109/tbcas.2017.2666883]
Abstract
We present a hardware architecture that uses the neural engineering framework (NEF) to implement large-scale neural networks on field-programmable gate arrays (FPGAs) for performing massively parallel real-time pattern recognition. The NEF is a framework capable of synthesising large-scale cognitive systems from subnetworks, and we have previously presented an FPGA implementation of the NEF that successfully performs nonlinear mathematical computations. That work was developed around a compact digital neural core, which consists of 64 neurons instantiated by a single physical neuron using a time-multiplexing approach. We have now scaled this approach up to build a pattern recognition system by combining identical neural cores. As a proof of concept, we have developed a handwritten digit recognition system using the MNIST database and achieved a recognition rate of 96.55%. The system is implemented on a state-of-the-art FPGA and can process 5.12 million digits per second. The architecture and hardware optimisations presented offer a resource-efficient means of performing high-speed, neuromorphic, massively parallel pattern recognition and classification tasks.
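The core NEF idea, representing a value with a population of heterogeneous tuning curves and decoding functions of it with least-squares weights, fits in a few lines. Here is a rate-based sketch with rectified-linear neurons (illustrative only; the paper's hardware uses time-multiplexed spiking cores):

```python
import numpy as np

def nef_decode_rmse(target_fn, n_neurons=100, n_eval=200, seed=0):
    """NEF-style encode/decode: random encoders, gains and biases define
    rectified-linear tuning curves a_i(x); least-squares decoders d make
    sum_i d_i a_i(x) approximate target_fn(x) over x in [-1, 1]."""
    rng = np.random.default_rng(seed)
    x = np.linspace(-1.0, 1.0, n_eval)
    encoders = rng.choice([-1.0, 1.0], n_neurons)    # preferred directions
    gains = rng.uniform(0.5, 2.0, n_neurons)
    biases = rng.uniform(-0.5, 1.0, n_neurons)
    A = np.maximum(0.0, x[:, None] * (gains * encoders) + biases)  # tuning curves
    d, *_ = np.linalg.lstsq(A, target_fn(x), rcond=None)           # decoders
    return float(np.sqrt(np.mean((A @ d - target_fn(x)) ** 2)))
```

On hardware, the same decoders become the connection weights between neural cores, which is how the framework turns function approximation into network wiring.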
37
Esser SK, Merolla PA, Arthur JV, Cassidy AS, Appuswamy R, Andreopoulos A, Berg DJ, McKinstry JL, Melano T, Barch DR, di Nolfo C, Datta P, Amir A, Taba B, Flickner MD, Modha DS. Convolutional networks for fast, energy-efficient neuromorphic computing. Proc Natl Acad Sci U S A 2016; 113:11441-11446. [PMID: 27651489] [PMCID: PMC5068316] [DOI: 10.1073/pnas.1604850113]
Abstract
Deep networks are now able to achieve human-level performance on a broad spectrum of recognition tasks. Independently, neuromorphic computing has now demonstrated unprecedented energy-efficiency through a new chip architecture based on spiking neurons, low precision synapses, and a scalable communication network. Here, we demonstrate that neuromorphic computing, despite its novel architectural primitives, can implement deep convolution networks that (i) approach state-of-the-art classification accuracy across eight standard datasets encompassing vision and speech, (ii) perform inference while preserving the hardware's underlying energy-efficiency and high throughput, running on the aforementioned datasets at between 1,200 and 2,600 frames/s and using between 25 and 275 mW (effectively >6,000 frames/s per Watt), and (iii) can be specified and trained using backpropagation with the same ease-of-use as contemporary deep learning. This approach allows the algorithmic power of deep learning to be merged with the efficiency of neuromorphic processors, bringing the promise of embedded, intelligent, brain-inspired computing one step closer.
Affiliation(s)
- Steven K Esser
- Brain-Inspired Computing, IBM Research-Almaden, San Jose, CA 95120
- Paul A Merolla
- Brain-Inspired Computing, IBM Research-Almaden, San Jose, CA 95120
- John V Arthur
- Brain-Inspired Computing, IBM Research-Almaden, San Jose, CA 95120
- Andrew S Cassidy
- Brain-Inspired Computing, IBM Research-Almaden, San Jose, CA 95120
- David J Berg
- Brain-Inspired Computing, IBM Research-Almaden, San Jose, CA 95120
- Timothy Melano
- Brain-Inspired Computing, IBM Research-Almaden, San Jose, CA 95120
- Davis R Barch
- Brain-Inspired Computing, IBM Research-Almaden, San Jose, CA 95120
- Carmelo di Nolfo
- Brain-Inspired Computing, IBM Research-Almaden, San Jose, CA 95120
- Pallab Datta
- Brain-Inspired Computing, IBM Research-Almaden, San Jose, CA 95120
- Arnon Amir
- Brain-Inspired Computing, IBM Research-Almaden, San Jose, CA 95120
- Brian Taba
- Brain-Inspired Computing, IBM Research-Almaden, San Jose, CA 95120
- Myron D Flickner
- Brain-Inspired Computing, IBM Research-Almaden, San Jose, CA 95120
38
D'Angelo E, Antonietti A, Casali S, Casellato C, Garrido JA, Luque NR, Mapelli L, Masoli S, Pedrocchi A, Prestori F, Rizza MF, Ros E. Modeling the Cerebellar Microcircuit: New Strategies for a Long-Standing Issue. Front Cell Neurosci 2016; 10:176. [PMID: 27458345] [PMCID: PMC4937064] [DOI: 10.3389/fncel.2016.00176]
Abstract
The cerebellar microcircuit has been the workbench for theoretical and computational modeling since the beginning of neuroscientific research. The regular neural architecture of the cerebellum inspired different solutions to the long-standing issue of how its circuitry could control motor learning and coordination. Originally, the cerebellar network was modeled using a statistical-topological approach that was later extended by considering the geometrical organization of local microcircuits. However, with the advancement of anatomical and physiological investigations, new discoveries have revealed an unexpected richness of connections, neuronal dynamics and plasticity, calling for a change in modeling strategies, so as to include the multitude of elementary aspects of the network in an integrated and easily updatable computational framework. Recently, biophysically accurate “realistic” models using a bottom-up strategy have accounted for both detailed connectivity and nonlinear neuronal membrane dynamics. In this perspective review, we consider the state of the art and discuss how these initial efforts could be further improved. Moreover, we consider how embodied neurorobotic models including spiking cerebellar networks could help explain the role and interplay of distributed forms of plasticity. We envisage that realistic modeling, combined with closed-loop simulations, will help to capture the essence of cerebellar computations and could eventually be applied to neurological diseases and neurorobotic control systems.
Affiliation(s)
- Egidio D'Angelo
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy; Brain Connectivity Center, C. Mondino National Neurological Institute, Pavia, Italy
- Alberto Antonietti
- NearLab - NeuroEngineering and Medical Robotics Laboratory, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Stefano Casali
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Claudia Casellato
- NearLab - NeuroEngineering and Medical Robotics Laboratory, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Jesus A Garrido
- Department of Computer Architecture and Technology, University of Granada, Granada, Spain
- Niceto Rafael Luque
- Department of Computer Architecture and Technology, University of Granada, Granada, Spain
- Lisa Mapelli
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Stefano Masoli
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Alessandra Pedrocchi
- NearLab - NeuroEngineering and Medical Robotics Laboratory, Department of Electronics, Information and Bioengineering, Politecnico di Milano, Milano, Italy
- Francesca Prestori
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy
- Martina Francesca Rizza
- Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy; Dipartimento di Informatica, Sistemistica e Comunicazione, Università degli Studi di Milano-Bicocca, Milan, Italy
- Eduardo Ros
- Department of Computer Architecture and Technology, University of Granada, Granada, Spain
39
Cohen E, Malka D, Shemer A, Shahmoon A, Zalevsky Z, London M. Neural networks within multi-core optic fibers. Sci Rep 2016; 6:29080. [PMID: 27383911] [PMCID: PMC4935875] [DOI: 10.1038/srep29080]
Abstract
Hardware implementation of artificial neural networks facilitates real-time parallel processing of massive data sets. Optical neural networks offer low-volume 3D connectivity together with large bandwidth and minimal heat production, in contrast to electronic implementations. Here, we present a conceptual design for in-fiber optical neural networks. Neurons and synapses are realized as individual silica cores in a multi-core fiber. Optical signals are transferred transversely between cores by means of optical coupling. Pump-driven amplification in erbium-doped cores mimics synaptic interactions. We simulated three-layered feed-forward neural networks and explored their capabilities. Simulations suggest that networks can differentiate between given inputs depending on specific configurations of amplification; this implies classification and learning capabilities. Finally, we experimentally tested our basic neuronal elements using fibers, couplers, and amplifiers, and demonstrated that this configuration implements a neuron-like function. Therefore, devices similar to our proposed multi-core fiber could potentially serve as building blocks for future large-scale, small-volume optical artificial neural networks.
Affiliation(s)
- Eyal Cohen
- Life Science Institute, Hebrew University, Jerusalem, Israel; Faculty of Engineering, Bar Ilan University, Ramat Gan, Israel
- Dror Malka
- Faculty of Engineering, Bar Ilan University, Ramat Gan, Israel; Faculty of Engineering, Holon Institute of Technology, Holon, Israel
- Amir Shemer
- Faculty of Engineering, Bar Ilan University, Ramat Gan, Israel
- Asaf Shahmoon
- Faculty of Engineering, Bar Ilan University, Ramat Gan, Israel
- Zeev Zalevsky
- Faculty of Engineering, Bar Ilan University, Ramat Gan, Israel
- Michael London
- Life Science Institute, Hebrew University, Jerusalem, Israel; The Edmond and Lily Safra Center for Brain Sciences, Hebrew University, Jerusalem, Israel
40
Knight JC, Tully PJ, Kaplan BA, Lansner A, Furber SB. Large-Scale Simulations of Plastic Neural Networks on Neuromorphic Hardware. Front Neuroanat 2016; 10:37. [PMID: 27092061] [PMCID: PMC4823276] [DOI: 10.3389/fnana.2016.00037]
Abstract
SpiNNaker is a digital, neuromorphic architecture designed for simulating large-scale spiking neural networks at speeds close to biological real time. Rather than using bespoke analog or digital hardware, the basic computational unit of a SpiNNaker system is a general-purpose ARM processor, allowing it to be programmed to simulate a wide variety of neuron and synapse models. This flexibility is particularly valuable in the study of biological plasticity phenomena. A recently proposed learning rule based on the Bayesian Confidence Propagation Neural Network (BCPNN) paradigm offers a generic framework for modeling the interaction of different plasticity mechanisms using spiking neurons. However, it can be computationally expensive to simulate large networks with BCPNN learning, since it requires multiple state variables for each synapse, each of which needs to be updated every simulation time-step. We discuss the trade-offs in efficiency and accuracy involved in developing an event-based BCPNN implementation for SpiNNaker based on an analytical solution to the BCPNN equations, and detail the steps taken to fit this within the limited computational and memory resources of the SpiNNaker architecture. We demonstrate this learning rule by learning temporal sequences of neural activity within a recurrent attractor network which we simulate at scales of up to 2.0 × 10⁴ neurons and 5.1 × 10⁷ plastic synapses: the largest plastic neural network ever to be simulated on neuromorphic hardware. We also run a comparable simulation on a Cray XC-30 supercomputer system and find that, to match the run-time of our SpiNNaker simulation, the supercomputer uses approximately 45× more power. This suggests that cheaper, more power-efficient neuromorphic systems are becoming useful discovery tools in the study of plasticity in large-scale brain models.
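The BCPNN weight is, in essence, a log-odds estimate built from low-pass-filtered activity traces. A much-simplified sketch of that idea is given below; it omits the cascaded traces and the event-driven analytical updates the paper actually implements:

```python
import math

def bcpnn_weight(pre, post, dt=1e-3, tau=0.05, eps=1e-6):
    """Estimate P(pre), P(post) and P(pre, post) with exponential traces over
    two binary spike trains, then return the BCPNN-style weight
    w = log(P_ij / (P_i * P_j)): positive for correlated activity."""
    z_i = z_j = z_ij = eps
    alpha = dt / tau
    for s_i, s_j in zip(pre, post):
        z_i += alpha * (s_i - z_i)        # trace of presynaptic activity
        z_j += alpha * (s_j - z_j)        # trace of postsynaptic activity
        z_ij += alpha * (s_i * s_j - z_ij)  # trace of coincident activity
    return math.log((z_ij + eps) / (z_i * z_j + eps))
```

Because every synapse carries several such state variables that nominally decay each time-step, an event-driven analytical update that evaluates the traces only when spikes arrive is what lets the implementation fit SpiNNaker's memory and compute budget.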
Affiliation(s)
- James C Knight
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, UK
- Philip J Tully
- Department of Computational Biology, Royal Institute of Technology, Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden; Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, UK
- Bernhard A Kaplan
- Department of Visualization and Data Analysis, Zuse Institute Berlin, Berlin, Germany
- Anders Lansner
- Department of Computational Biology, Royal Institute of Technology, Stockholm, Sweden; Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden; Department of Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
- Steve B Furber
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, UK
41
Self-Adaptive Spike-Time-Dependent Plasticity of Metal-Oxide Memristors. Sci Rep 2016; 6:21331. [PMID: 26893175] [PMCID: PMC4759564] [DOI: 10.1038/srep21331]
Abstract
Metal-oxide memristors have emerged as promising candidates for hardware implementation of artificial synapses - the key components of high-performance, analog neuromorphic networks - due to their excellent scaling prospects. Since some advanced cognitive tasks require spiking neuromorphic networks, which explicitly model individual neural pulses ("spikes") in biological neural systems, it is crucial for memristive synapses to support spike-time-dependent plasticity (STDP). A major challenge for the STDP implementation is that, in contrast to some simplistic models of the plasticity, the elementary change of a synaptic weight in an artificial hardware synapse depends not only on the pre-synaptic and post-synaptic signals, but also on the initial weight (memristor's conductance) value. Here we experimentally demonstrate, for the first time, an STDP behavior that ensures self-adaptation of the average memristor conductance, making the plasticity stable, i.e., insensitive to the initial state of the devices. The experiments have been carried out with 200-nm Al₂O₃/TiO₂₋ₓ memristors integrated into 12 × 12 crossbars. The experimentally observed self-adaptive STDP behavior has been complemented with numerical modeling of weight dynamics in a simple system with a leaky integrate-and-fire neuron with a random spike-train input, using a compact model of memristor plasticity, fitted for a quantitatively correct description of our memristors.
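The self-adapting behavior reported above can be illustrated with a weight-dependent (multiplicative, soft-bounds) STDP rule, in which potentiation shrinks as the weight approaches its ceiling and depression shrinks near its floor. This is a generic sketch of the stabilisation mechanism, not the paper's fitted compact memristor model; `stdp_dw` and `settle` are illustrative names:

```python
import math

def stdp_dw(dt, w, w_min=0.0, w_max=1.0, a_plus=0.02, a_minus=0.02, tau=20.0):
    """Conductance-dependent STDP update. dt = t_post - t_pre (ms).
    Because the update magnitude depends on the current weight, the
    average weight self-stabilises regardless of its initial value."""
    if dt > 0:    # pre before post: potentiate, scaled by headroom
        return a_plus * (w_max - w) * math.exp(-dt / tau)
    else:         # post before pre: depress, scaled by distance to floor
        return -a_minus * (w - w_min) * math.exp(dt / tau)

def settle(w, pairings=5000):
    # alternate +5 ms and -5 ms pairings and let the weight converge
    for i in range(pairings):
        w += stdp_dw(5.0 if i % 2 == 0 else -5.0, w)
    return w

# two devices starting from opposite initial states converge together
w_from_low, w_from_high = settle(0.0), settle(1.0)
```

Under this balanced pairing protocol the weight drifts to the same equilibrium (near 0.5 here) from any initial state, which is the "insensitive to the initial state" property the experiments demonstrate.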
42
Diamond A, Nowotny T, Schmuker M. Comparing Neuromorphic Solutions in Action: Implementing a Bio-Inspired Solution to a Benchmark Classification Task on Three Parallel-Computing Platforms. Front Neurosci 2016; 9:491. [PMID: 26778950 PMCID: PMC4705229 DOI: 10.3389/fnins.2015.00491] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2015] [Accepted: 12/10/2015] [Indexed: 01/24/2023] Open
Abstract
Neuromorphic computing employs models of neuronal circuits to solve computing problems. Neuromorphic hardware systems are now becoming more widely available and "neuromorphic algorithms" are being developed. As they are maturing toward deployment in general research environments, it becomes important to assess and compare them in the context of the applications they are meant to solve. This should encompass not just task performance, but also ease of implementation, speed of processing, scalability, and power efficiency. Here, we report our practical experience of implementing a bio-inspired, spiking network for multivariate classification on three different platforms: the hybrid digital/analog Spikey system, the digital spike-based SpiNNaker system, and GeNN, a meta-compiler for parallel GPU hardware. We assess performance using a standard hand-written digit classification task. We found that whilst a different implementation approach was required for each platform, classification performances remained in line. This suggests that all three implementations were able to exercise the model's ability to solve the task rather than exposing inherent platform limits, although differences emerged when capacity was approached. With respect to execution speed and power consumption, we found that for each platform a large fraction of the computing time was spent outside of the neuromorphic device, on the host machine. Time was spent in a range of combinations of preparing the model, encoding suitable input spiking data, shifting data, and decoding spike-encoded results. This is also where a large proportion of the total power was consumed, most markedly for the SpiNNaker and Spikey systems. We conclude that the simulation efficiency advantage of the assessed specialized hardware systems is easily lost in excessive host-device communication, or non-neuronal parts of the computation. These results emphasize the need to optimize the host-device communication architecture for scalability, maximum throughput, and minimum latency. Moreover, our results indicate that special attention should be paid to minimize host-device communication when designing and implementing networks for efficient neuromorphic computing.
Affiliation(s)
- Alan Diamond
- School of Engineering and Informatics, University of Sussex, Brighton, UK
43
Federating and Integrating What We Know About the Brain at All Scales: Computer Science Meets the Clinical Neurosciences. RESEARCH AND PERSPECTIVES IN NEUROSCIENCES 2016. [DOI: 10.1007/978-3-319-28802-4_10] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/09/2023]
44
Stromatias E, Neil D, Pfeiffer M, Galluppi F, Furber SB, Liu SC. Robustness of spiking Deep Belief Networks to noise and reduced bit precision of neuro-inspired hardware platforms. Front Neurosci 2015. [PMID: 26217169 PMCID: PMC4496577 DOI: 10.3389/fnins.2015.00222] [Citation(s) in RCA: 56] [Impact Index Per Article: 6.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
Increasingly large deep learning architectures, such as Deep Belief Networks (DBNs), are the focus of current machine learning research and achieve state-of-the-art results in different domains. However, both training and execution of large-scale Deep Networks require vast computing resources, leading to high power requirements and communication overheads. The ongoing work on design and construction of spike-based hardware platforms offers an alternative for running deep neural networks with significantly lower power consumption, but has to overcome hardware limitations in terms of noise and limited weight precision, as well as noise inherent in the sensor signal. This article investigates how such hardware constraints impact the performance of spiking neural network implementations of DBNs. In particular, the influence of limited bit precision during execution and training, and the impact of silicon mismatch in the synaptic weight parameters of custom hybrid VLSI implementations are studied. Furthermore, the network performance of spiking DBNs is characterized with regard to noise in the spiking input signal. Our results demonstrate that spiking DBNs can tolerate very low levels of hardware bit precision down to almost two bits, and show that their performance can be improved by at least 30% through an adapted training mechanism that takes the bit precision of the target platform into account. Spiking DBNs thus present an important use-case for large-scale hybrid analog-digital or digital neuromorphic platforms such as SpiNNaker, which can execute large but precision-constrained deep networks in real time.
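The effect of limited weight precision studied here can be probed in software by rounding trained weights to a fixed-point grid before inference. The helper below is a generic sketch of that experiment (hypothetical `quantize_weights`, not SpiNNaker's actual number format):

```python
import numpy as np

def quantize_weights(w, bits, w_max=1.0):
    """Round weights to the fixed-point grid of a given bit precision,
    mimicking the limited synaptic resolution of neuromorphic hardware.
    The worst-case rounding error is half a grid step."""
    levels = 2 ** bits - 1                 # representable steps in [-w_max, w_max]
    step = 2.0 * w_max / levels
    return np.clip(np.round(w / step) * step, -w_max, w_max)

rng = np.random.default_rng(0)
w = rng.uniform(-1.0, 1.0, size=1000)      # stand-in for trained weights
err_8bit = np.abs(quantize_weights(w, 8) - w).max()
err_2bit = np.abs(quantize_weights(w, 2) - w).max()
```

Sweeping `bits` downward and re-measuring classification accuracy is the kind of experiment that reveals the "down to almost two bits" tolerance reported in the abstract.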
Affiliation(s)
- Evangelos Stromatias
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, UK
- Daniel Neil
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Michael Pfeiffer
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Francesco Galluppi
- Centre National de la Recherche Scientifique UMR 7210, Equipe de Vision et Calcul Naturel, Vision Institute, UMR S968 Inserm, CHNO des Quinze-Vingts, Université Pierre et Marie Curie, Paris, France
- Steve B Furber
- Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, UK
- Shih-Chii Liu
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
45
Wang RM, Hamilton TJ, Tapson JC, van Schaik A. A neuromorphic implementation of multiple spike-timing synaptic plasticity rules for large-scale neural networks. Front Neurosci 2015; 9:180. [PMID: 26041985 PMCID: PMC4438254 DOI: 10.3389/fnins.2015.00180] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2014] [Accepted: 05/06/2015] [Indexed: 11/24/2022] Open
Abstract
We present a neuromorphic implementation of multiple synaptic plasticity learning rules, which include both Spike Timing Dependent Plasticity (STDP) and Spike Timing Dependent Delay Plasticity (STDDP). We present a fully digital implementation as well as a mixed-signal implementation, both of which use a novel dynamic-assignment time-multiplexing approach and support up to 2²⁶ (64M) synaptic plasticity elements. Rather than implementing dedicated synapses for particular types of synaptic plasticity, we implemented a more generic synaptic plasticity adaptor array that is separate from the neurons in the neural network. Each adaptor performs synaptic plasticity according to the arrival times of the pre- and post-synaptic spikes assigned to it, and sends out a weighted or delayed pre-synaptic spike to the post-synaptic neuron in the neural network. This strategy provides great flexibility for building complex large-scale neural networks, as a neural network can be configured for multiple synaptic plasticity rules without changing its structure. We validate the proposed neuromorphic implementations with measurement results and illustrate that the circuits are capable of performing both STDP and STDDP. We argue that it is practical to scale the work presented here up to 2³⁶ (64G) synaptic adaptors on a current high-end FPGA platform.
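The adaptor-array idea, plasticity kept separate from the neuron model, can be sketched as an object that only sees spike times and forwards a weighted (STDP) or delayed (STDDP) pre-synaptic spike. The update rules below are simplified placeholders for illustration, not the paper's hardware circuits; `PlasticityAdaptor` is a hypothetical name:

```python
class PlasticityAdaptor:
    """Generic synaptic plasticity adaptor decoupled from the neuron:
    it observes pre- and post-synaptic spike times, updates its own
    state, and forwards a weighted or delayed pre-synaptic spike."""

    def __init__(self, rule="stdp", weight=0.5, delay=2.0, lr=0.05):
        self.rule = rule
        self.weight = weight    # synaptic efficacy in [0, 1]
        self.delay = delay      # conduction delay in ms
        self.lr = lr

    def on_spike_pair(self, t_pre, t_post):
        dt = t_post - t_pre
        if self.rule == "stdp":
            # strengthen causal pairings, weaken acausal ones
            self.weight += self.lr if dt > 0 else -self.lr
            self.weight = min(max(self.weight, 0.0), 1.0)
        elif self.rule == "stddp":
            # nudge the delay toward the observed pre-post interval
            self.delay = max(0.0, self.delay + self.lr * (dt - self.delay))

    def forward(self, t_pre):
        # deliver the pre-synaptic spike to the post-synaptic neuron
        return t_pre + self.delay, self.weight

stdp = PlasticityAdaptor("stdp")
stdp.on_spike_pair(t_pre=0.0, t_post=5.0)        # causal pairing: potentiate
stddp = PlasticityAdaptor("stddp", delay=2.0)
for _ in range(200):
    stddp.on_spike_pair(t_pre=0.0, t_post=5.0)   # delay converges toward dt
```

Because the network only ever receives the adaptor's forwarded spike, swapping the rule (or time-multiplexing many adaptors over shared hardware) requires no change to the network structure, which is the flexibility the abstract emphasises.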
Affiliation(s)
- Runchun M Wang
- The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
- Tara J Hamilton
- The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
- Jonathan C Tapson
- The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
- André van Schaik
- The MARCS Institute, University of Western Sydney, Sydney, NSW, Australia
46
A digital implementation of neuron–astrocyte interaction for neuromorphic applications. Neural Netw 2015; 66:79-90. [DOI: 10.1016/j.neunet.2015.01.005] [Citation(s) in RCA: 41] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/12/2014] [Revised: 01/08/2015] [Accepted: 01/25/2015] [Indexed: 11/17/2022]
47
Petrovici MA, Vogginger B, Müller P, Breitwieser O, Lundqvist M, Muller L, Ehrlich M, Destexhe A, Lansner A, Schüffny R, Schemmel J, Meier K. Characterization and compensation of network-level anomalies in mixed-signal neuromorphic modeling platforms. PLoS One 2014; 9:e108590. [PMID: 25303102 PMCID: PMC4193761 DOI: 10.1371/journal.pone.0108590] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2014] [Accepted: 08/22/2014] [Indexed: 11/18/2022] Open
Abstract
Advancing the size and complexity of neural network models leads to an ever-increasing demand for computational resources for their simulation. Neuromorphic devices offer a number of advantages over conventional computing architectures, such as high emulation speed or low power consumption, but this usually comes at the price of reduced configurability and precision. In this article, we investigate the consequences of several such factors that are common to neuromorphic devices, more specifically, limited hardware resources, limited parameter configurability, and parameter variations due to fixed-pattern noise and trial-to-trial variability. Our final aim is to provide an array of methods for coping with such inevitable distortion mechanisms. As a platform for testing our proposed strategies, we use an executable system specification (ESS) of the BrainScaleS neuromorphic system, which has been designed as a universal emulation back-end for neuroscientific modeling. We address the most essential limitations of this device in detail and study their effects on three prototypical benchmark network models within a well-defined, systematic workflow. For each network model, we start by defining quantifiable functionality measures by which we then assess the effects of typical hardware-specific distortion mechanisms, both in idealized software simulations and on the ESS. For those effects that cause unacceptable deviations from the original network dynamics, we suggest generic compensation mechanisms and demonstrate their effectiveness. Both the suggested workflow and the investigated compensation mechanisms are largely back-end independent and do not require additional hardware configurability beyond the one required to emulate the benchmark networks in the first place. We hereby provide a generic methodological environment for configurable neuromorphic devices that are targeted at emulating large-scale, functional neural networks.
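One of the distortion mechanisms discussed here, fixed-pattern noise, is commonly compensated by calibration: measure each unit's response repeatedly, average out the trial-to-trial variability, and derive a per-unit correction. The sketch below illustrates that generic idea only; `calibrate_offsets` and the emulated `measure` function are hypothetical, not the BrainScaleS ESS workflow:

```python
import numpy as np

def calibrate_offsets(measure, target, trials=20):
    """Derive a per-unit corrective offset from repeated measurements.
    Averaging over trials suppresses trial-to-trial noise, leaving the
    static fixed-pattern component, which the offset then cancels."""
    samples = np.stack([measure() for _ in range(trials)])
    mean_response = samples.mean(axis=0)   # trial noise averages out
    return target - mean_response          # correction to add per unit

# emulated hardware: a static per-unit distortion plus trial-to-trial noise
rng = np.random.default_rng(42)
n_units = 100
fixed_pattern = rng.normal(0.0, 0.2, n_units)   # static mismatch
target = np.full(n_units, 1.0)

def measure():
    return target + fixed_pattern + rng.normal(0.0, 0.05, n_units)

correction = calibrate_offsets(measure, target)
corrected = measure() + correction   # residual error is only trial noise
```

Note that such a correction is back-end independent in the same sense as the paper's compensation mechanisms: it uses only the configurability (an additive offset) already needed to run the network.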
Affiliation(s)
- Mihai A. Petrovici
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
- Bernhard Vogginger
- Technische Universität Dresden, Institute of Circuits and Systems, Dresden, Germany
- Paul Müller
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
- Oliver Breitwieser
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
- Mikael Lundqvist
- Department of Computational Biology, School of Computer Science and Communication, Stockholm University and Royal Institute of Technology, Stockholm, Sweden
- Lyle Muller
- CNRS, Unité de Neuroscience, Information et Complexité, Gif-sur-Yvette, France
- Matthias Ehrlich
- Technische Universität Dresden, Institute of Circuits and Systems, Dresden, Germany
- Alain Destexhe
- CNRS, Unité de Neuroscience, Information et Complexité, Gif-sur-Yvette, France
- Anders Lansner
- Department of Computational Biology, School of Computer Science and Communication, Stockholm University and Royal Institute of Technology, Stockholm, Sweden
- René Schüffny
- Technische Universität Dresden, Institute of Circuits and Systems, Dresden, Germany
- Johannes Schemmel
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
- Karlheinz Meier
- Ruprecht-Karls-Universität Heidelberg, Kirchhoff Institute for Physics, Heidelberg, Germany
48
Carlson KD, Nageswaran JM, Dutt N, Krichmar JL. An efficient automated parameter tuning framework for spiking neural networks. Front Neurosci 2014; 8:10. [PMID: 24550771 PMCID: PMC3912986 DOI: 10.3389/fnins.2014.00010] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2013] [Accepted: 01/17/2014] [Indexed: 11/13/2022] Open
Abstract
As the desire for biologically realistic spiking neural networks (SNNs) increases, tuning the enormous number of open parameters in these models becomes a difficult challenge. SNNs have been used to successfully model complex neural circuits that explore various neural phenomena such as neural plasticity, vision systems, auditory systems, neural oscillations, and many other important topics of neural function. Additionally, SNNs are particularly well-adapted to run on neuromorphic hardware that will support biological brain-scale architectures. Although the inclusion of realistic plasticity equations, neural dynamics, and recurrent topologies has increased the descriptive power of SNNs, it has also made the task of tuning these biologically realistic SNNs difficult. To meet this challenge, we present an automated parameter tuning framework capable of tuning SNNs quickly and efficiently using evolutionary algorithms (EA) and inexpensive, readily accessible graphics processing units (GPUs). A sample SNN with 4104 neurons was tuned to give V1 simple cell-like tuning curve responses and produce self-organizing receptive fields (SORFs) when presented with a random sequence of counterphase sinusoidal grating stimuli. A performance analysis comparing the GPU-accelerated implementation to a single-threaded central processing unit (CPU) implementation was carried out and showed a speedup of 65× of the GPU implementation over the CPU implementation, or 0.35 h per generation for GPU vs. 23.5 h per generation for CPU. Additionally, the parameter value solutions found in the tuned SNN were studied and found to be stable and repeatable. The automated parameter tuning framework presented here will be of use to both the computational neuroscience and neuromorphic engineering communities, making the process of constructing and tuning large-scale SNNs much quicker and easier.
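The core loop of such an evolutionary tuning framework is simple: score a population of parameter vectors, keep the fitter half, and refill with mutated copies. The sketch below illustrates that loop with a toy fitness function standing in for "run the SNN and score its tuning curves"; `evolve` and its parameters are hypothetical names, not the paper's GPU-accelerated framework:

```python
import random

def evolve(fitness, n_params, pop_size=20, generations=60,
           sigma=0.1, seed=1):
    """Minimal elitist evolutionary parameter search: evaluate the
    population, keep the fitter half, refill with Gaussian-mutated
    copies of surviving parents."""
    rng = random.Random(seed)
    pop = [[rng.uniform(-1.0, 1.0) for _ in range(n_params)]
           for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)       # best first
        parents = pop[:pop_size // 2]             # elitist selection
        children = [[g + rng.gauss(0.0, sigma) for g in rng.choice(parents)]
                    for _ in range(pop_size - len(parents))]
        pop = parents + children
    return max(pop, key=fitness)

# toy fitness: maximize closeness to a known target parameter vector
target = [0.3, -0.7]
best = evolve(lambda p: -sum((a - b) ** 2 for a, b in zip(p, target)),
              n_params=2)
```

In the paper's setting each fitness evaluation is a full GPU-accelerated SNN simulation, which is why parallel evaluation of the population dominates the design.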
Affiliation(s)
- Kristofor D Carlson
- Department of Cognitive Sciences, University of California Irvine, Irvine, CA, USA
- Nikil Dutt
- Department of Computer Science, University of California Irvine, Irvine, CA, USA
- Jeffrey L Krichmar
- Department of Cognitive Sciences, University of California Irvine, Irvine, CA, USA; Department of Computer Science, University of California Irvine, Irvine, CA, USA
49
Abstract
Computational neuroscience has uncovered a number of computational principles used by nervous systems. At the same time, neuromorphic hardware has matured to a state where fast silicon implementations of complex neural networks have become feasible. En route to future technical applications of neuromorphic computing the current challenge lies in the identification and implementation of functional brain algorithms. Taking inspiration from the olfactory system of insects, we constructed a spiking neural network for the classification of multivariate data, a common problem in signal and data analysis. In this model, real-valued multivariate data are converted into spike trains using "virtual receptors" (VRs). Their output is processed by lateral inhibition and drives a winner-take-all circuit that supports supervised learning. VRs are conveniently implemented in software, whereas the lateral inhibition and classification stages run on accelerated neuromorphic hardware. When trained and tested on real-world datasets, we find that the classification performance is on par with a naïve Bayes classifier. An analysis of the network dynamics shows that stable decisions in output neuron populations are reached within less than 100 ms of biological time, matching the time-to-decision reported for the insect nervous system. Through leveraging a population code, the network tolerates the variability of neuronal transfer functions and trial-to-trial variation that is inevitably present on the hardware system. Our work provides a proof of principle for the successful implementation of a functional spiking neural network on a configurable neuromorphic hardware system that can readily be applied to real-world computing problems.
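The decision stage described above, lateral inhibition feeding a winner-take-all circuit, can be sketched with a rate-based model in which each unit is excited by its input drive and inhibited by the activity of its rivals. This is a simplified stand-in for the spiking hardware implementation; `winner_take_all` is a hypothetical name:

```python
import numpy as np

def winner_take_all(drive, inhibition=0.8, dt=0.05, steps=600):
    """Rate-based winner-take-all stage with lateral inhibition:
    the unit with the strongest drive suppresses its rivals until
    only it remains active."""
    r = np.zeros_like(drive, dtype=float)
    for _ in range(steps):
        lateral = inhibition * (r.sum() - r)          # rivals' activity
        dr = -r + np.maximum(drive - lateral, 0.0)    # rectified dynamics
        r += dt * dr                                  # Euler integration
    return r

# three output populations driven by virtual-receptor-like inputs
rates = winner_take_all(np.array([1.0, 0.7, 0.2]))
winner = int(rates.argmax())
```

Because the decision is carried by a population rather than a single neuron, this kind of circuit tolerates the per-unit variability that the abstract reports for the hardware system.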
50
Denk C, Llobet-Blandino F, Galluppi F, Plana LA, Furber S, Conradt J. Real-Time Interface Board for Closed-Loop Robotic Tasks on the SpiNNaker Neural Computing System. ARTIFICIAL NEURAL NETWORKS AND MACHINE LEARNING – ICANN 2013 2013. [DOI: 10.1007/978-3-642-40728-4_59] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/15/2023]