1. Hopkins M, Fil J, Jones EG, Furber S. BitBrain and Sparse Binary Coincidence (SBC) memories: Fast, robust learning and inference for neuromorphic architectures. Front Neuroinform 2023; 17:1125844. PMID: 37025552; PMCID: PMC10071999; DOI: 10.3389/fninf.2023.1125844.
Abstract
We present an innovative working mechanism (the SBC memory) and surrounding infrastructure (BitBrain) based upon a novel synthesis of ideas from sparse coding, computational neuroscience and information theory that enables fast and adaptive learning and accurate, robust inference. The mechanism is designed to be implemented efficiently on current and future neuromorphic devices as well as on more conventional CPU and memory architectures. An example implementation on the SpiNNaker neuromorphic platform has been developed and initial results are presented. The SBC memory stores coincidences between features detected in class examples in a training set, and infers the class of a previously unseen test example by identifying the class with which it shares the highest number of feature coincidences. A number of SBC memories may be combined in a BitBrain to increase the diversity of the contributing feature coincidences. The resulting inference mechanism is shown to have excellent classification performance on benchmarks such as MNIST and EMNIST, achieving classification accuracy with single-pass learning approaching that of state-of-the-art deep networks with much larger tuneable parameter spaces and much higher training costs. It can also be made very robust to noise. BitBrain is designed to be very efficient in training and inference on both conventional and neuromorphic architectures. It provides a unique combination of single-pass, single-shot and continuous supervised learning, following a very simple unsupervised phase. Accurate classification inference that is very robust against imperfect inputs has been demonstrated. These contributions make it uniquely well-suited for edge and IoT applications.
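The abstract describes the SBC mechanism only at a high level. As a rough orientation, the Python sketch below illustrates the stated idea — store binary feature coincidences per class during a single training pass, then classify by the class sharing the most coincidences with a test example. It is an assumed reconstruction for illustration, not the authors' SpiNNaker or CPU implementation; all names, sizes and parameters are placeholders.

```python
# Illustrative coincidence-counting classifier in the spirit of an SBC memory:
# count pairwise coincidences of active binary features per class at training
# time, and classify a test example by the class with the largest overlap.
import numpy as np

class CoincidenceMemory:
    def __init__(self, n_features, n_classes):
        # One binary coincidence table per class over feature pairs.
        self.tables = np.zeros((n_classes, n_features, n_features), dtype=bool)

    def train(self, x, label):
        """x: binary feature vector (1 = feature detected). Single-pass update."""
        active = np.flatnonzero(x)
        # Mark every pair of co-active features for this class.
        self.tables[label][np.ix_(active, active)] = True

    def predict(self, x):
        active = np.flatnonzero(x)
        # Score = number of stored coincidences shared with the test example.
        scores = self.tables[:, active][:, :, active].sum(axis=(1, 2))
        return int(np.argmax(scores))

# Toy usage with random sparse binary features (placeholder data).
rng = np.random.default_rng(0)
mem = CoincidenceMemory(n_features=64, n_classes=10)
for _ in range(100):
    x = rng.random(64) < 0.1
    mem.train(x, label=rng.integers(10))
print(mem.predict(rng.random(64) < 0.1))
```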
2. Sun Y, Zeng Y, Li Y. Solving the spike feature information vanishing problem in spiking deep Q network with potential based normalization. Front Neurosci 2022; 16:953368. PMID: 36090282; PMCID: PMC9453154; DOI: 10.3389/fnins.2022.953368.
Abstract
Brain-inspired spiking neural networks (SNNs) have been successfully applied to many pattern recognition domains. Deep SNN structures have achieved considerable results in perceptual tasks such as image classification and target detection. However, applying deep SNNs to reinforcement learning (RL) tasks remains an open problem. Although there have been previous studies on combining SNNs and RL, most focus on robotic control problems with shallow networks or use the ANN-SNN conversion method to implement spiking deep Q networks (SDQN). In this study, we mathematically analyzed the vanishing of spike feature information in SDQN and proposed a potential-based layer normalization (pbLN) method to train spiking deep Q networks directly. Experiments show that, compared with state-of-the-art ANN-SNN conversion methods and other SDQN work, the proposed pbLN spiking deep Q network (PL-SDQN) achieves better performance on Atari game tasks.
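The abstract does not give the pbLN equations. The sketch below shows one plausible reading — layer-normalizing membrane potentials in a leaky integrate-and-fire (LIF) layer before the spike threshold so that spike activity does not vanish with depth. It is an assumption-laden illustration, not the paper's formulation; the class name, constants and normalization details are placeholders.

```python
# Minimal sketch (assumption-laden): a LIF layer whose membrane potentials are
# layer-normalized before thresholding, one plausible reading of
# "potential-based layer normalization" — not the formulation from the paper.
import numpy as np

class PotentialNormLIF:
    def __init__(self, n_in, n_out, tau=0.9, v_th=1.0, eps=1e-5, seed=0):
        rng = np.random.default_rng(seed)
        self.w = rng.normal(0, 1.0 / np.sqrt(n_in), size=(n_in, n_out))
        self.tau, self.v_th, self.eps = tau, v_th, eps
        self.v = np.zeros(n_out)                 # membrane potentials
        self.gamma, self.beta = 1.0, 0.0         # learnable scale/shift

    def step(self, spikes_in):
        # Leaky integration of the input current.
        self.v = self.tau * self.v + spikes_in @ self.w
        # Normalize potentials across the layer so their scale does not decay
        # with depth (the assumed role of pbLN).
        v_norm = (self.v - self.v.mean()) / np.sqrt(self.v.var() + self.eps)
        v_norm = self.gamma * v_norm + self.beta
        spikes_out = (v_norm >= self.v_th).astype(float)
        self.v = np.where(spikes_out > 0, 0.0, self.v)   # reset fired neurons
        return spikes_out

# Toy usage: propagate random input spikes through two stacked layers.
layer1, layer2 = PotentialNormLIF(128, 64), PotentialNormLIF(64, 32)
x = (np.random.default_rng(1).random(128) < 0.2).astype(float)
print(layer2.step(layer1.step(x)).sum())
```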
Affiliation(s)
- Yinqian Sun
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing, China
- Yi Zeng
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Future Technology, University of Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- *Correspondence: Yi Zeng
- Yang Li
- Research Center for Brain-Inspired Intelligence, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
3. Unsupervised anomaly detection in multivariate time series with online evolving spiking neural networks. Mach Learn 2022. DOI: 10.1007/s10994-022-06129-4.
Abstract
With the increasing demand for digital products, processes, and services, the research area of automatic detection of signal outliers in streaming data has gained a lot of attention. Possible applications for such algorithms are diverse, ranging from the monitoring of digital machinery and predictive maintenance to the analysis of large-scale healthcare sensor data. In this paper we present a method for detecting anomalies in streaming multivariate time series using an adapted evolving spiking neural network. As the main components of this work we contribute (1) an alternative rank-order-based learning algorithm which uses the precise times of the incoming spikes to adjust the synaptic weights, (2) an adapted, real-time-capable and efficient encoding technique for multivariate data based on multi-dimensional Gaussian receptive fields, and (3) a continuous outlier scoring function for improved interpretability of the classifications. Spiking neural networks are extremely efficient at processing time-dependent information. We demonstrate the effectiveness of our model on a synthetic dataset based on the Numenta Anomaly Benchmark with various anomaly types. We compare our algorithm to other streaming anomaly detection algorithms and show that it detects anomalies more reliably while demanding fewer computational resources for processing high-dimensional data.
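Gaussian receptive field (GRF) population coding is the standard building block behind such encoders; the paper adapts it to multi-dimensional receptive fields for multivariate streams. The sketch below shows only the plain per-variable form for orientation and is not the authors' adapted encoder; function names and thresholds are placeholders.

```python
# Sketch of classic Gaussian-receptive-field (GRF) spike-time encoding, shown
# per variable. The paper's multi-dimensional variant differs; this is only
# the textbook form for orientation.
import numpy as np

def grf_encode(x, lo, hi, n_fields=10, t_max=1.0):
    """Map a scalar x in [lo, hi] to spike times of n_fields neurons.

    Each neuron has a Gaussian tuning curve; stronger activation -> earlier
    spike (time-to-first-spike coding). Returns spike times, np.inf = silent.
    """
    centers = np.linspace(lo, hi, n_fields)
    sigma = (hi - lo) / (n_fields - 1)            # common width heuristic
    activation = np.exp(-0.5 * ((x - centers) / sigma) ** 2)
    times = t_max * (1.0 - activation)            # activation 1 -> t = 0
    times[activation < 0.05] = np.inf             # weakly driven neurons stay silent
    return times

def encode_sample(sample, bounds, n_fields=10):
    """Encode a multivariate sample (one value per variable) dimension by dimension."""
    return np.concatenate([grf_encode(v, lo, hi, n_fields)
                           for v, (lo, hi) in zip(sample, bounds)])

# Toy usage: a 3-variable observation from a stream.
print(encode_sample([0.2, 5.0, -1.0], bounds=[(0, 1), (0, 10), (-2, 2)]))
```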
4. Drix D, Hafner VV, Schmuker M. Sparse coding with a somato-dendritic rule. Neural Netw 2020; 131:37-49. PMID: 32750603; DOI: 10.1016/j.neunet.2020.06.007.
Abstract
Cortical neurons are silent most of the time: sparse activity enables low-energy computation in the brain, and promises to do the same in neuromorphic hardware. Beyond power efficiency, sparse codes have favourable properties for associative learning, as they can store more information than local codes but are easier to read out than dense codes. Auto-encoders with a sparse constraint can learn sparse codes, and so can single-layer networks that combine recurrent inhibition with unsupervised Hebbian learning. But the latter usually require fast homeostatic plasticity, which could lead to catastrophic forgetting in embodied agents that learn continuously. Here we set out to explore whether plasticity at recurrent inhibitory synapses could take up that role instead, regulating both the population sparseness and the firing rates of individual neurons. We put the idea to the test in a network that employs compartmentalised inputs to solve the task: rate-based dendritic compartments integrate the feedforward input, while spiking integrate-and-fire somas compete through recurrent inhibition. A somato-dendritic learning rule allows somatic inhibition to modulate nonlinear Hebbian learning in the dendrites. Trained on MNIST digits and natural images, the network discovers independent components that form a sparse encoding of the input and support linear decoding. These findings confirm that intrinsic homeostatic plasticity is not strictly required for regulating sparseness: inhibitory synaptic plasticity can have the same effect. Our work illustrates the usefulness of compartmentalised inputs, and makes the case for moving beyond point neuron models in artificial spiking neural networks.
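The sketch below is a deliberately crude, rate-based caricature of the interactions described in the abstract — dendrites integrate feedforward input, somas compete through recurrent inhibition, somatic inhibition gates a Hebbian dendritic update, and inhibitory plasticity regulates sparseness. It is not the paper's spiking model; every constant and update rule here is an illustrative assumption.

```python
# Rate-based caricature of a somato-dendritic circuit with inhibitory
# plasticity regulating sparseness. Placeholder constants throughout.
import numpy as np

rng = np.random.default_rng(0)
n_in, n_out = 256, 64
W = rng.random((n_out, n_in)) * 0.1          # dendritic (feedforward) weights
W_inh = np.full((n_out, n_out), 0.05)        # recurrent inhibitory weights
np.fill_diagonal(W_inh, 0.0)
eta_ff, eta_inh, target_rate = 0.01, 0.001, 0.05

def step(x):
    global W, W_inh
    dend = W @ x                                                   # dendritic integration
    soma = np.maximum(dend - W_inh @ np.maximum(dend, 0.0), 0.0)   # inhibited somatic activity
    inhibition = W_inh @ soma
    # Hebbian dendritic update, suppressed where somatic inhibition is strong.
    gate = np.maximum(1.0 - inhibition, 0.0)
    W += eta_ff * np.outer(gate * soma, x)
    W /= np.linalg.norm(W, axis=1, keepdims=True) + 1e-9           # keep weights bounded
    # Inhibitory plasticity: strengthen inhibition between co-active units so
    # average activity drifts toward a sparse target rate.
    W_inh += eta_inh * (np.outer(soma, soma) - target_rate ** 2)
    np.fill_diagonal(W_inh, 0.0)
    W_inh = np.clip(W_inh, 0.0, None)
    return soma

for _ in range(100):                          # toy input stream
    step(rng.random(n_in))
```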
Affiliation(s)
- Damien Drix
- Biocomputation group, Department of Computer Science, University of Hertfordshire, Hatfield, United Kingdom; Adaptive Systems laboratory, Institut für Informatik, Humboldt-Universität zu Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience, Berlin, Germany.
- Verena V Hafner
- Adaptive Systems laboratory, Institut für Informatik, Humboldt-Universität zu Berlin, Berlin, Germany; Bernstein Center for Computational Neuroscience, Berlin, Germany
- Michael Schmuker
- Biocomputation group, Department of Computer Science, University of Hertfordshire, Hatfield, United Kingdom; Bernstein Center for Computational Neuroscience, Berlin, Germany
5. Shirai S, Acharya SK, Bose SK, Mallinson JB, Galli E, Pike MD, Arnold MD, Brown SA. Long-range temporal correlations in scale-free neuromorphic networks. Netw Neurosci 2020; 4:432-447. PMID: 32537535; PMCID: PMC7286302; DOI: 10.1162/netn_a_00128.
Abstract
Biological neuronal networks are the computing engines of the mammalian brain. These networks exhibit structural characteristics such as hierarchical architectures, small-world attributes, and scale-free topologies, providing the basis for the emergence of rich temporal characteristics such as scale-free dynamics and long-range temporal correlations. Devices that have both the topological and the temporal features of a neuronal network would be a significant step toward constructing a neuromorphic system that can emulate the computational ability and energy efficiency of the human brain. Here we use numerical simulations to show that percolating networks of nanoparticles exhibit structural properties that are reminiscent of biological neuronal networks, and then show experimentally that stimulation of percolating networks by an external voltage stimulus produces temporal dynamics that are self-similar, follow power-law scaling, and exhibit long-range temporal correlations. These results are expected to have important implications for the development of neuromorphic devices, especially for those based on the concept of reservoir computing.

Biological neuronal networks exhibit well-defined properties such as hierarchical structures and scale-free topologies, as well as a high degree of local clustering and short path lengths between nodes. These structural properties are intimately connected to the observed long-range temporal correlations in the network dynamics. Fabrication of artificial networks with similar structural properties would facilitate brain-like (“neuromorphic”) computing. Here we show experimentally that percolating networks of nanoparticles exhibit similar long-range temporal correlations to those of biological neuronal networks and use simulations to demonstrate that the dynamics arise from an underlying scale-free network architecture. We discuss similarities between the biological and percolating systems and highlight the potential for the percolating networks to be used in neuromorphic computing applications.
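The abstract does not name the estimator used for the long-range temporal correlations. Detrended fluctuation analysis (DFA) is a standard way to quantify them, so the generic sketch below is included only for orientation; it is not the authors' analysis code, and the window sizes are placeholders.

```python
# Generic detrended fluctuation analysis (DFA): a standard estimator of
# long-range temporal correlations in a 1-D signal.
import numpy as np

def dfa_exponent(signal, window_sizes):
    """Return the DFA scaling exponent alpha.

    alpha ~ 0.5 for uncorrelated noise; 0.5 < alpha < 1 indicates long-range
    temporal correlations.
    """
    profile = np.cumsum(signal - np.mean(signal))     # integrated signal
    fluctuations = []
    for n in window_sizes:
        n_windows = len(profile) // n
        segments = profile[:n_windows * n].reshape(n_windows, n)
        t = np.arange(n)
        rms = []
        for seg in segments:
            coeffs = np.polyfit(t, seg, 1)            # linear detrend per window
            rms.append(np.sqrt(np.mean((seg - np.polyval(coeffs, t)) ** 2)))
        fluctuations.append(np.mean(rms))
    # Slope of log F(n) versus log n is the DFA exponent.
    alpha, _ = np.polyfit(np.log(window_sizes), np.log(fluctuations), 1)
    return alpha

# Toy usage on white noise (expected alpha close to 0.5).
rng = np.random.default_rng(0)
print(dfa_exponent(rng.standard_normal(10_000), window_sizes=[16, 32, 64, 128, 256]))
```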
Affiliation(s)
- Shota Shirai
- The MacDiarmid Institute for Advanced Materials and Nanotechnology, School of Physical and Chemical Sciences, Te Kura Matū, University of Canterbury, Christchurch, New Zealand
- Susant Kumar Acharya
- The MacDiarmid Institute for Advanced Materials and Nanotechnology, School of Physical and Chemical Sciences, Te Kura Matū, University of Canterbury, Christchurch, New Zealand
- Saurabh Kumar Bose
- The MacDiarmid Institute for Advanced Materials and Nanotechnology, School of Physical and Chemical Sciences, Te Kura Matū, University of Canterbury, Christchurch, New Zealand
- Joshua Brian Mallinson
- The MacDiarmid Institute for Advanced Materials and Nanotechnology, School of Physical and Chemical Sciences, Te Kura Matū, University of Canterbury, Christchurch, New Zealand
- Edoardo Galli
- The MacDiarmid Institute for Advanced Materials and Nanotechnology, School of Physical and Chemical Sciences, Te Kura Matū, University of Canterbury, Christchurch, New Zealand
- Matthew D Pike
- Electrical and Electronics Engineering, University of Canterbury, Christchurch, New Zealand
- Matthew D Arnold
- School of Mathematical and Physical Sciences, University of Technology Sydney, Australia
- Simon Anthony Brown
- The MacDiarmid Institute for Advanced Materials and Nanotechnology, School of Physical and Chemical Sciences, Te Kura Matū, University of Canterbury, Christchurch, New Zealand
6. Marcireau A, Ieng SH, Benosman R. Sepia, Tarsier, and Chameleon: A Modular C++ Framework for Event-Based Computer Vision. Front Neurosci 2020; 13:1338. PMID: 31969799; PMCID: PMC6960268; DOI: 10.3389/fnins.2019.01338.
Abstract
This paper introduces a new open-source, header-only and modular C++ framework to facilitate the implementation of event-driven algorithms. The framework relies on three independent components: sepia (file I/O), tarsier (algorithms), and chameleon (display). Our benchmarks show that algorithms implemented with tarsier are faster and have a lower latency than identical implementations in other state-of-the-art frameworks, thanks to static polymorphism (compile-time pipeline assembly). The observer pattern used throughout the framework encourages implementations that better reflect the event-driven nature of the algorithms and the way they process events, easing future translation to neuromorphic hardware. The framework integrates drivers to communicate with the DVS, the DAVIS, the Opal Kelly ATIS, and the CCam ATIS.
Affiliation(s)
- Alexandre Marcireau
- INSERM UMRI S 968, Sorbonne Universites, UPMC Univ Paris 06, UMR S 968, CNRS, UMR 7210, Institut de la Vision, Paris, France
- Sio-Hoi Ieng
- INSERM UMRI S 968, Sorbonne Universites, UPMC Univ Paris 06, UMR S 968, CNRS, UMR 7210, Institut de la Vision, Paris, France
- Ryad Benosman
- INSERM UMRI S 968, Sorbonne Universites, UPMC Univ Paris 06, UMR S 968, CNRS, UMR 7210, Institut de la Vision, Paris, France; University of Pittsburgh Medical Center, Pittsburgh, PA, United States; Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, United States
7. Depannemaecker D, Canton Santos LE, Rodrigues AM, Scorza CA, Scorza FA, Almeida ACGD. Realistic spiking neural network: Non-synaptic mechanisms improve convergence in cell assembly. Neural Netw 2019; 122:420-433. PMID: 31841876; DOI: 10.1016/j.neunet.2019.09.038.
Abstract
Learning in neural networks inspired by brain tissue has been studied for machine learning applications. However, existing work has focused primarily on the concept of synaptic weight modulation, and other aspects of neuronal interaction, such as non-synaptic mechanisms, have been neglected. Non-synaptic interaction mechanisms have been shown to play significant roles in the brain, and four classes of these mechanisms can be highlighted: (i) electrotonic coupling; (ii) ephaptic interactions; (iii) electric field effects; and (iv) extracellular ionic fluctuations. In this work, we proposed simple learning rules, inspired by recent findings in machine learning, adapted to a realistic spiking neural network. We show that the inclusion of non-synaptic interaction mechanisms improves cell assembly convergence. By including extracellular ionic fluctuations, represented by extracellular electrodiffusion, in the network, we demonstrated the importance of these mechanisms for improving cell assembly convergence. Additionally, we observed a variety of electrophysiological patterns of neuronal activity, particularly bursting and synchrony, when convergence improves.
Affiliation(s)
- Damien Depannemaecker
- Laboratório de Neurociência Experimental e Computacional, Departamento de Engenharia de Biossistemas, Universidade Federal de São João del-Rei (UFSJ), Brazil; Disciplina de Neurociência, Departamento de Neurologia e Neurocirurgia, Universidade Federal de São Paulo (UNIFESP), São Paulo, Brazil
- Luiz Eduardo Canton Santos
- Laboratório de Neurociência Experimental e Computacional, Departamento de Engenharia de Biossistemas, Universidade Federal de São João del-Rei (UFSJ), Brazil; Disciplina de Neurociência, Departamento de Neurologia e Neurocirurgia, Universidade Federal de São Paulo (UNIFESP), São Paulo, Brazil
- Antônio Márcio Rodrigues
- Laboratório de Neurociência Experimental e Computacional, Departamento de Engenharia de Biossistemas, Universidade Federal de São João del-Rei (UFSJ), Brazil
- Carla Alessandra Scorza
- Laboratório de Neurociência Experimental e Computacional, Departamento de Engenharia de Biossistemas, Universidade Federal de São João del-Rei (UFSJ), Brazil
- Fulvio Alexandre Scorza
- Disciplina de Neurociência, Departamento de Neurologia e Neurocirurgia, Universidade Federal de São Paulo (UNIFESP), São Paulo, Brazil
- Antônio-Carlos Guimarães de Almeida
- Laboratório de Neurociência Experimental e Computacional, Departamento de Engenharia de Biossistemas, Universidade Federal de São João del-Rei (UFSJ), Brazil
8. Mallinson JB, Shirai S, Acharya SK, Bose SK, Galli E, Brown SA. Avalanches and criticality in self-organized nanoscale networks. Sci Adv 2019; 5:eaaw8438. PMID: 31700999; PMCID: PMC6824861; DOI: 10.1126/sciadv.aaw8438.
Abstract
Current efforts to achieve neuromorphic computation are focused on highly organized architectures, such as integrated circuits and regular arrays of memristors, which lack the complex interconnectivity of the brain and so are unable to exhibit brain-like dynamics. New architectures are required, both to emulate the complexity of the brain and to achieve critical dynamics and consequent maximal computational performance. We show here that electrical signals from self-organized networks of nanoparticles exhibit brain-like spatiotemporal correlations and criticality when fabricated at a percolating phase transition. Specifically, the sizes and durations of avalanches of switching events are power law distributed, and the power law exponents satisfy rigorous criteria for criticality. These signals are therefore qualitatively and quantitatively similar to those measured in the cortex. Our self-organized networks provide a low-cost platform for computational approaches that rely on spatiotemporal correlations, such as reservoir computing, and are an important step toward creating neuromorphic device architectures.
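The abstract states that avalanche sizes and durations are power-law distributed but does not detail the statistical pipeline. The sketch below shows a generic, textbook-style approach — extract avalanches as contiguous supra-threshold runs of an activity trace and estimate a power-law exponent by maximum likelihood (Clauset-style) — and is not the authors' analysis; the threshold and x_min values are placeholders.

```python
# Generic avalanche extraction and power-law exponent estimation.
import numpy as np

def avalanches(activity, threshold=0.0):
    """Split a 1-D activity trace into avalanches (contiguous supra-threshold runs).

    Returns (sizes, durations): summed activity and length of each run.
    """
    above = np.concatenate(([False], activity > threshold, [False]))
    edges = np.flatnonzero(np.diff(above.astype(int)))
    starts, ends = edges[::2], edges[1::2]            # run boundaries
    sizes = np.array([activity[s:e].sum() for s, e in zip(starts, ends)])
    durations = ends - starts
    return sizes, durations

def powerlaw_exponent(values, x_min):
    """Continuous maximum-likelihood (Clauset-style) estimate of a power-law exponent."""
    x = np.asarray(values, dtype=float)
    x = x[x >= x_min]
    return 1.0 + len(x) / np.sum(np.log(x / x_min))

# Toy usage on a synthetic bursty trace (placeholder data, not device recordings).
rng = np.random.default_rng(0)
trace = np.maximum(rng.standard_normal(50_000), 0.0)
sizes, durations = avalanches(trace)
print(powerlaw_exponent(sizes, x_min=1.0), powerlaw_exponent(durations, x_min=2))
```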
9. Lim GH, Lau N, Pedrosa E, Amaral F, Pereira A, Luís Azevedo J, Cunha B. Precise and efficient pose estimation of stacked objects for mobile manipulation in industrial robotics challenges. Adv Robot 2019. DOI: 10.1080/01691864.2019.1617780.
Affiliation(s)
- Gi Hyun Lim
- IEETA, University of Aveiro, Aveiro, Portugal
- School of Computer Science, University of Manchester, Manchester, UK
- Nuno Lau
- IEETA, University of Aveiro, Aveiro, Portugal
10. Schofield AJ, Gilchrist ID, Bloj M, Leonardis A, Bellotto N. Understanding images in biological and computer vision. Interface Focus 2018. DOI: 10.1098/rsfs.2018.0027.
Affiliation(s)
- Andrew J. Schofield
- School of Psychology, University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK
- Iain D. Gilchrist
- School of Experimental Psychology, University of Bristol, 12A Priory Road, Bristol, BS8 1TU, UK
- Marina Bloj
- School of Optometry and Vision Sciences, University of Bradford, Bradford, BD7 1DP, UK
- Ales Leonardis
- School of Computer Science, University of Birmingham, Edgbaston, Birmingham, B15 2TT, UK
- Nicola Bellotto
- School of Computer Science, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK