1. Ye C, Zhang Y, Ran C, Ma T. Recent Progress in Brain Network Models for Medical Applications: A Review. Health Data Sci 2024; 4:0157. [PMID: 38979037] [PMCID: PMC11227951] [DOI: 10.34133/hds.0157]
Abstract
Importance: Pathological perturbations of the brain often spread via the connectome, fundamentally altering functional outcomes. By integrating multimodal neuroimaging data with mathematical neural mass modeling, brain network models (BNMs) make it possible to quantitatively characterize the aberrant network dynamics underlying multiple neurological and psychiatric disorders. We examine advances in BNM-based medical applications, discuss the prevalent challenges in this field, and suggest possible solutions and future directions. Highlights: This paper reviews the theoretical foundations and current medical applications of computational BNMs. Composed of neural mass models, the BNM framework allows investigation of the large-scale brain dynamics behind brain diseases by linking simulated functional signals to empirical neurophysiological data, and it has shown promise in exploring neuropathological mechanisms, elucidating therapeutic effects, and predicting disease outcomes. Although several limitations remain, one promising trend in this field is the precise guidance of clinical neuromodulation treatment based on individualized BNM simulation. Conclusion: BNMs have the potential to help explain how neuropathology affects brain network dynamics, contributing to decision-making in clinical diagnosis and treatment. Several constraints must be addressed to pave the way for clinical use.
Affiliation(s)
- Chenfei Ye
  - International Research Institute for Artificial Intelligence, Harbin Institute of Technology at Shenzhen, Shenzhen, China
- Yixuan Zhang
  - Department of Electronic and Information Engineering, Harbin Institute of Technology at Shenzhen, Shenzhen, China
- Chen Ran
  - Department of Electronic and Information Engineering, Harbin Institute of Technology at Shenzhen, Shenzhen, China
- Ting Ma
  - International Research Institute for Artificial Intelligence, Harbin Institute of Technology at Shenzhen, Shenzhen, China
  - Department of Electronic and Information Engineering, Harbin Institute of Technology at Shenzhen, Shenzhen, China
  - Peng Cheng Laboratory, Shenzhen, China
  - Guangdong Provincial Key Laboratory of Aerospace Communication and Networking Technology, Harbin Institute of Technology at Shenzhen, China
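The core BNM idea in this entry, regional neural mass dynamics coupled through a structural connectome, can be sketched in a few lines of Python. The toy connectome, global coupling `G`, external drive, and sigmoid parameters below are invented for illustration and are not taken from the review:

```python
import math

# Hypothetical 3-region structural connectivity matrix (illustrative weights).
SC = [[0.0, 0.2, 0.1],
      [0.2, 0.0, 0.3],
      [0.1, 0.3, 0.0]]

def simulate(steps=2000, dt=0.01, G=0.5, tau=1.0):
    """Euler integration of one firing-rate neural mass per region:
    tau * dr_i/dt = -r_i + S(G * sum_j SC_ij * r_j + I_ext)."""
    n = len(SC)
    r = [0.1 * (i + 1) for i in range(n)]   # initial regional rates
    trace = []
    for _ in range(steps):
        # Network input to each region: coupled activity plus constant drive.
        inp = [G * sum(SC[i][j] * r[j] for j in range(n)) + 0.3 for i in range(n)]
        # Sigmoid transfer function of the neural mass.
        drive = [1.0 / (1.0 + math.exp(-4.0 * (x - 0.5))) for x in inp]
        r = [r[i] + dt / tau * (-r[i] + drive[i]) for i in range(n)]
        trace.append(list(r))
    return trace

trace = simulate()
```

In the workflow the review describes, `SC` would come from empirical tractography and parameters such as `G` would be fitted so that simulated functional signals match empirical neurophysiological data.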
2. Furber S. Digital neuromorphic technology: current and future prospects. Natl Sci Rev 2024; 11:nwad283. [PMID: 38577676] [PMCID: PMC10989295] [DOI: 10.1093/nsr/nwad283]
Abstract
Digital approaches to brain-inspired computing have advanced apace over recent years: where is the state of the art, and what does the future hold?
Affiliation(s)
- Steve Furber
  - Department of Computer Science, The University of Manchester, UK
3
|
Vieth M, Rahimi A, Gorgan Mohammadi A, Triesch J, Ganjtabesh M. Accelerating spiking neural network simulations with PymoNNto and PymoNNtorch. Front Neuroinform 2024; 18:1331220. [PMID: 38444756 PMCID: PMC10913591 DOI: 10.3389/fninf.2024.1331220] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2023] [Accepted: 01/29/2024] [Indexed: 03/07/2024] Open
Abstract
Spiking neural network simulations are a central tool in Computational Neuroscience, Artificial Intelligence, and Neuromorphic Engineering research. A broad range of simulators and software frameworks for such simulations exist with different target application areas. Among these, PymoNNto is a recent Python-based toolbox for spiking neural network simulations that emphasizes the embedding of custom code in a modular and flexible way. While PymoNNto already supports GPU implementations, its backend relies on NumPy operations. Here we introduce PymoNNtorch, which is natively implemented with PyTorch while retaining PymoNNto's modular design. Furthermore, we demonstrate how changes to the implementations of common network operations in combination with PymoNNtorch's native GPU support can offer speed-up over conventional simulators like NEST, ANNarchy, and Brian 2 in certain situations. Overall, we show how PymoNNto's modular and flexible design in combination with PymoNNtorch's GPU acceleration and optimized indexing operations facilitate research and development of spiking neural networks in the Python programming language.
Affiliation(s)
- Marius Vieth
  - Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Ali Rahimi
  - Department of Mathematics, Statistics, and Computer Science - College of Science, University of Tehran, Tehran, Iran
- Ashena Gorgan Mohammadi
  - Department of Mathematics, Statistics, and Computer Science - College of Science, University of Tehran, Tehran, Iran
- Jochen Triesch
  - Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Mohammad Ganjtabesh
  - Department of Mathematics, Statistics, and Computer Science - College of Science, University of Tehran, Tehran, Iran
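The per-neuron state update that simulators of this kind vectorize over whole populations can be illustrated with a plain-Python leaky integrate-and-fire step. The constants and function name are illustrative, not part of the PymoNNto or PymoNNtorch API:

```python
# Minimal leaky integrate-and-fire (LIF) update; one discrete-time step:
# decay toward rest, integrate input current, spike and reset at threshold.
def lif_step(v, i_in, v_rest=0.0, v_thresh=1.0, v_reset=0.0, alpha=0.9):
    v_new, spikes = [], []
    for vj, ij in zip(v, i_in):
        vj = v_rest + alpha * (vj - v_rest) + ij  # leaky integration
        if vj >= v_thresh:                        # threshold crossing -> spike
            spikes.append(1)
            vj = v_reset
        else:
            spikes.append(0)
        v_new.append(vj)
    return v_new, spikes

v, s = lif_step([0.0, 0.5, 0.95], [0.2, 0.2, 0.2])
```

A NumPy or PyTorch backend replaces the Python loop with elementwise tensor operations over the whole population, which is where the speed-ups reported in the abstract come from.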
4
|
Marrero D, Kern J, Urrea C. A Novel Robotic Controller Using Neural Engineering Framework-Based Spiking Neural Networks. SENSORS (BASEL, SWITZERLAND) 2024; 24:491. [PMID: 38257584 PMCID: PMC10819625 DOI: 10.3390/s24020491] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/26/2023] [Revised: 01/11/2024] [Accepted: 01/11/2024] [Indexed: 01/24/2024]
Abstract
This paper investigates spiking neural networks (SNNs) for novel robotic controllers with the aim of improving accuracy in trajectory tracking. By emulating the operation of the human brain through the incorporation of temporal coding mechanisms, SNNs offer greater adaptability and efficiency in information processing, providing significant advantages in the representation of temporal information in robotic arm control compared to conventional neural networks. Exploring specific implementations of SNNs in robot control, this study analyzes neuron models and learning mechanisms inherent to SNNs. Based on the principles of the Neural Engineering Framework (NEF), a novel spiking PID controller is designed and simulated for a 3-DoF robotic arm using Nengo and MATLAB R2022b. The controller demonstrated good accuracy and efficiency in following designated trajectories, showing minimal deviations, overshoots, or oscillations. A thorough quantitative assessment, utilizing performance metrics such as root mean square error (RMSE) and the integral of the absolute value of the time-weighted error (ITAE), provides additional validation of the efficacy of the SNN-based controller. Competitive performance was observed, surpassing a fuzzy controller by 5% in the ITAE index and a conventional PID controller by 6% in the ITAE index and 30% in RMSE. This work highlights the utility of the NEF and SNNs in developing effective robotic controllers, laying the groundwork for future research focused on SNN adaptability in dynamic environments and advanced robotic applications.
Affiliation(s)
- John Kern
  - Electrical Engineering Department, Faculty of Engineering, University of Santiago of Chile (USACH), Av. Víctor Jara 3519, Estación Central, Santiago 9170124, Chile
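For orientation, a conventional discrete PID controller, the non-spiking baseline that the paper's NEF-based spiking controller approximates, can be sketched as follows. The gains, the first-order plant, and all names are illustrative assumptions, not the paper's setup:

```python
# Conventional discrete PID controller (illustrative sketch, not the paper's
# NEF/spiking implementation): u = Kp*e + Ki*integral(e) + Kd*de/dt.
class PID:
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = None

    def step(self, setpoint, measured):
        err = setpoint - measured
        self.integral += err * self.dt
        deriv = 0.0 if self.prev_err is None else (err - self.prev_err) / self.dt
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a simple first-order plant (tau*dx/dt = -x + u) toward a setpoint of 1.
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=0.01)
x = 0.0
for _ in range(5000):
    u = pid.step(1.0, x)
    x += 0.01 * (-x + u)
```

In the NEF version described by the paper, each of the error, integral, and derivative terms is instead represented and transformed by populations of spiking neurons.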
5
|
Gemo E, Spiga S, Brivio S. SHIP: a computational framework for simulating and validating novel technologies in hardware spiking neural networks. Front Neurosci 2024; 17:1270090. [PMID: 38264497 PMCID: PMC10804805 DOI: 10.3389/fnins.2023.1270090] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2023] [Accepted: 12/14/2023] [Indexed: 01/25/2024] Open
Abstract
Investigations in the field of spiking neural networks (SNNs) span diverse, yet overlapping, scientific disciplines, ranging from purely neuroscientific investigations and computational neuroscience to application-oriented studies aiming to improve SNN performance or to develop artificial hardware counterparts. However, simulating SNNs is a complex task that cannot be adequately addressed with a single platform applicable to all scenarios: optimizing a simulation environment for specific metrics often entails compromises in other aspects. This computational challenge has led to an apparent dichotomy of approaches, with model-driven algorithms dedicated to the detailed simulation of biological networks, and data-driven algorithms designed for efficient processing of large input datasets. Nevertheless, material scientists, device physicists, and neuromorphic engineers who develop new technologies for spiking neuromorphic hardware would benefit from a simulation environment that borrows aspects from both approaches, facilitating the modeling, analysis, and training of prospective SNN systems. This manuscript explores the numerical challenges of simulating spiking neural networks and introduces SHIP, Spiking (neural network) Hardware In PyTorch, a numerical tool that supports the investigation and validation of materials, devices, and small circuit blocks within SNN architectures. SHIP facilitates the algorithmic definition of models for the components of a network, the monitoring of states and outputs of the modeled systems, and the training of the network's synaptic weights, by way of user-defined unsupervised learning rules or supervised training techniques derived from conventional machine learning. SHIP thus offers researchers and developers in the field of hardware-based spiking neural networks an efficient means of simulating and validating novel technologies.
Affiliation(s)
- Emanuele Gemo
  - CNR–IMM, Unit of Agrate Brianza, Agrate Brianza, Italy
6
|
Fang W, Chen Y, Ding J, Yu Z, Masquelier T, Chen D, Huang L, Zhou H, Li G, Tian Y. SpikingJelly: An open-source machine learning infrastructure platform for spike-based intelligence. SCIENCE ADVANCES 2023; 9:eadi1480. [PMID: 37801497 PMCID: PMC10558124 DOI: 10.1126/sciadv.adi1480] [Citation(s) in RCA: 9] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/06/2023] [Accepted: 09/05/2023] [Indexed: 10/08/2023]
Abstract
Spiking neural networks (SNNs) aim to realize brain-inspired intelligence on neuromorphic chips with high energy efficiency by introducing neural dynamics and spike properties. As the emerging spiking deep learning paradigm attracts increasing interest, traditional programming frameworks cannot meet the demands of automatic differentiation, parallel computation acceleration, and tight integration of neuromorphic dataset processing and deployment. In this work, we present the SpikingJelly framework to address this dilemma. We contribute a full-stack toolkit for preprocessing neuromorphic datasets, building deep SNNs, optimizing their parameters, and deploying SNNs on neuromorphic chips. Compared to existing methods, the training of deep SNNs can be accelerated 11×, and the superior extensibility and flexibility of SpikingJelly enable users to accelerate custom models at low cost through multilevel inheritance and semiautomatic code generation. SpikingJelly paves the way for synthesizing truly energy-efficient SNN-based machine intelligence systems, which will enrich the ecology of neuromorphic computing.
Affiliation(s)
- Wei Fang
  - School of Computer Science, Peking University, China
  - Peng Cheng Laboratory, China
  - School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University, China
- Yanqi Chen
  - School of Computer Science, Peking University, China
  - Peng Cheng Laboratory, China
- Jianhao Ding
  - School of Computer Science, Peking University, China
- Zhaofei Yu
  - Institute for Artificial Intelligence, Peking University, China
- Timothée Masquelier
  - Centre de Recherche Cerveau et Cognition (CERCO), UMR5549 CNRS–Université Toulouse 3, France
- Ding Chen
  - Peng Cheng Laboratory, China
  - Department of Computer Science and Engineering, Shanghai Jiao Tong University, China
- Liwei Huang
  - School of Computer Science, Peking University, China
  - Peng Cheng Laboratory, China
- Guoqi Li
  - Institute of Automation, Chinese Academy of Sciences, China
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, China
- Yonghong Tian
  - School of Computer Science, Peking University, China
  - Peng Cheng Laboratory, China
  - School of Electronic and Computer Engineering, Shenzhen Graduate School, Peking University, China
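Gradient-based training of deep SNNs, the workload such frameworks accelerate, commonly relies on surrogate gradients: the forward pass uses a hard threshold, while the backward pass substitutes a smooth derivative. The following is a generic sketch of that idea, not SpikingJelly's API, and the sigmoid surrogate and its slope `k` are illustrative choices:

```python
import math

# Forward pass: a hard Heaviside threshold on the membrane potential.
def spike_forward(v, v_th=1.0):
    return 1.0 if v >= v_th else 0.0

# Backward pass: the true derivative is zero almost everywhere, so it is
# replaced by the derivative of a smooth surrogate, here sigmoid(k*(v - v_th)).
def spike_backward(v, v_th=1.0, k=4.0):
    s = 1.0 / (1.0 + math.exp(-k * (v - v_th)))
    return k * s * (1.0 - s)

# The surrogate gradient is smooth and peaks at the threshold, letting
# gradients flow through neurons that did not spike.
grads = [spike_backward(v) for v in (0.0, 0.5, 1.0, 1.5, 2.0)]
```

In an autograd framework this pair would be registered as a custom function with the hard threshold in `forward` and the surrogate derivative in `backward`.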
7
|
Borst JP, Aubin S, Stewart TC. A whole-task brain model of associative recognition that accounts for human behavior and neuroimaging data. PLoS Comput Biol 2023; 19:e1011427. [PMID: 37682986 PMCID: PMC10511112 DOI: 10.1371/journal.pcbi.1011427] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2023] [Revised: 09/20/2023] [Accepted: 08/10/2023] [Indexed: 09/10/2023] Open
Abstract
Brain models typically focus either on low-level biological detail or on qualitative behavioral effects. In contrast, we present a biologically-plausible spiking-neuron model of associative learning and recognition that accounts for both human behavior and low-level brain activity across the whole task. Based on cognitive theories and insights from machine-learning analyses of M/EEG data, the model proceeds through five processing stages: stimulus encoding, familiarity judgement, associative retrieval, decision making, and motor response. The results matched human response times and source-localized MEG data in occipital, temporal, prefrontal, and precentral brain regions; as well as a classic fMRI effect in prefrontal cortex. This required two main conceptual advances: a basal-ganglia-thalamus action-selection system that relies on brief thalamic pulses to change the functional connectivity of the cortex, and a new unsupervised learning rule that causes very strong pattern separation in the hippocampus. The resulting model shows how low-level brain activity can result in goal-directed cognitive behavior in humans.
Affiliation(s)
- Jelmer P. Borst
  - Bernoulli Institute, University of Groningen, Groningen, The Netherlands
- Sean Aubin
  - Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, Ontario, Canada
- Terrence C. Stewart
  - National Research Council Canada, University of Waterloo Collaboration Centre, Waterloo, Ontario, Canada
8
|
Angelidis E. A perspective on large-scale simulation as an enabler for novel biorobotics applications. Front Robot AI 2023; 10:1102286. [PMID: 37692531 PMCID: PMC10485252 DOI: 10.3389/frobt.2023.1102286] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2022] [Accepted: 08/15/2023] [Indexed: 09/12/2023] Open
Abstract
Our understanding of the complex mechanisms that power biological intelligence has been greatly enhanced by the explosive growth of large-scale neuroscience and robotics simulation tools, which the research community uses to perform previously infeasible experiments such as simulating the neocortex's circuitry. Nevertheless, simulation falls short of being directly applicable to biorobots due to the large discrepancy between the simulated and the real world. A possible solution to this problem is to further enhance existing simulation tools for robotics, AI, and neuroscience with multi-physics capabilities. Scenarios that were previously infeasible or difficult to simulate, such as robots swimming on the water surface, interacting with soft materials, or walking on granular materials, would become possible within a multi-physics simulation environment designed for robotics. In combination with multi-physics simulation, large-scale simulation tools that integrate multiple simulation modules in a closed loop help address fundamental questions about the organization of neural circuits and the interplay between brain, body, and environment. We analyze existing designs for large-scale simulation running on cloud and HPC infrastructure, as well as their shortcomings. Based on this analysis, we propose a next-gen modular architecture design based on multi-physics engines that we believe would greatly benefit biorobotics and AI.
Affiliation(s)
- Emmanouil Angelidis
  - Chair of Robotics, Artificial Intelligence and Embedded Systems, School of Informatics, Technical University of Munich, Munich, Germany
  - Munich Research Center, Huawei Technologies Germany, Munich, Germany
9
|
Jaeger H, Noheda B, van der Wiel WG. Toward a formal theory for computing machines made out of whatever physics offers. Nat Commun 2023; 14:4911. [PMID: 37587135 PMCID: PMC10432384 DOI: 10.1038/s41467-023-40533-1] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2022] [Accepted: 08/01/2023] [Indexed: 08/18/2023] Open
Abstract
Approaching limitations of digital computing technologies have spurred research in neuromorphic and other unconventional approaches to computing. Here we argue that if we want to engineer unconventional computing systems in a systematic way, we need guidance from a formal theory that is different from the classical symbolic-algorithmic Turing machine theory. We propose a general strategy for developing such a theory, and within that general view, a specific approach that we call fluent computing. In contrast to Turing, who modeled computing processes from a top-down perspective as symbolic reasoning, we adopt the scientific paradigm of physics and model physical computing systems bottom-up by formalizing what can ultimately be measured in a physical computing system. This leads to an understanding of computing as the structuring of processes, while classical models of computing systems describe the processing of structures.
Affiliation(s)
- Herbert Jaeger
  - Bernoulli Institute, University of Groningen, 9700 AB, Groningen, The Netherlands
  - Groningen Cognitive Systems and Materials Center (CogniGron), University of Groningen, 9700 AB, Groningen, The Netherlands
- Beatriz Noheda
  - Groningen Cognitive Systems and Materials Center (CogniGron), University of Groningen, 9700 AB, Groningen, The Netherlands
  - Zernike Institute for Advanced Materials, University of Groningen, 9700 AB, Groningen, The Netherlands
- Wilfred G van der Wiel
  - BRAINS Center for Brain-Inspired Nano Systems, University of Twente, 7500 AE, Enschede, The Netherlands
  - MESA+ Institute for Nanotechnology, University of Twente, 7500 AE, Enschede, The Netherlands
  - Institute of Physics, Westfälische Wilhelms-Universität Münster, Münster, Germany
10
|
Halaly R, Ezra Tsur E. Autonomous driving controllers with neuromorphic spiking neural networks. Front Neurorobot 2023; 17:1234962. [PMID: 37636326 PMCID: PMC10451073 DOI: 10.3389/fnbot.2023.1234962] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/05/2023] [Accepted: 07/25/2023] [Indexed: 08/29/2023] Open
Abstract
Autonomous driving is one of the hallmarks of artificial intelligence. Neuromorphic (brain-inspired) control is poised to contribute significantly to autonomous behavior by leveraging energy-efficient, spiking neural network-based computational frameworks. In this work, we explored neuromorphic implementations of four prominent controllers for autonomous driving: pure pursuit, Stanley, PID, and MPC, using a physics-aware simulation framework. We extensively evaluated these models with various intrinsic parameters and compared their performance with conventional CPU-based implementations. Although they are neural approximations, we show that neuromorphic models can perform competitively with their conventional counterparts. We provide guidelines for building neuromorphic architectures for control and describe the importance of their underlying tuning parameters and neuronal resources. Our results show that most models converge to their optimal performance with merely 100-1,000 neurons. They also highlight the importance of hybrid conventional and neuromorphic designs, as suggested here with the MPC controller. This study also highlights the limitations of neuromorphic implementations, particularly at higher speeds (> 15 m/s), where they tend to degrade faster than conventional designs.
Affiliation(s)
- Elishai Ezra Tsur
  - Neuro-Biomorphic Engineering Lab, Department of Mathematics and Computer Science, Open University of Israel, Ra'anana, Israel
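Of the four controllers evaluated, pure pursuit has the most compact closed form, which makes the neural approximation idea concrete. Below is the conventional (non-spiking) steering law for a bicycle model, sketched for reference with illustrative parameters; the paper's contribution is a spiking approximation of controllers like this:

```python
import math

# Pure-pursuit steering law: choose the front-wheel angle that places the
# vehicle on a circular arc through a goal point at distance `lookahead`,
# at bearing `alpha` (radians) from the current heading:
#   delta = atan(2 * L * sin(alpha) / lookahead)
def pure_pursuit_steer(wheelbase, lookahead, alpha):
    return math.atan2(2.0 * wheelbase * math.sin(alpha), lookahead)

# Goal point slightly to the left of the heading.
delta = pure_pursuit_steer(wheelbase=2.5, lookahead=5.0, alpha=0.1)
```

A spiking implementation would encode `alpha` in a neural population and decode an approximation of this trigonometric map, trading exactness for the energy efficiency discussed in the abstract.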
11
|
Zeng Y, Zhao D, Zhao F, Shen G, Dong Y, Lu E, Zhang Q, Sun Y, Liang Q, Zhao Y, Zhao Z, Fang H, Wang Y, Li Y, Liu X, Du C, Kong Q, Ruan Z, Bi W. BrainCog: A spiking neural network based, brain-inspired cognitive intelligence engine for brain-inspired AI and brain simulation. PATTERNS (NEW YORK, N.Y.) 2023; 4:100789. [PMID: 37602224 PMCID: PMC10435966 DOI: 10.1016/j.patter.2023.100789] [Citation(s) in RCA: 7] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/14/2022] [Revised: 02/06/2023] [Accepted: 06/05/2023] [Indexed: 08/22/2023]
Abstract
Spiking neural networks (SNNs) serve as a promising computational framework for integrating insights from the brain into artificial intelligence (AI). Existing software infrastructures based on SNNs exclusively support brain simulation or brain-inspired AI, but not both simultaneously. To decode the nature of biological intelligence and create AI, we present the brain-inspired cognitive intelligence engine (BrainCog). This SNN-based platform provides essential infrastructure support for developing brain-inspired AI and brain simulation. BrainCog integrates different biological neurons, encoding strategies, learning rules, brain areas, and hardware-software co-design as essential components. Leveraging these user-friendly components, BrainCog incorporates various cognitive functions, including perception and learning, decision-making, knowledge representation and reasoning, motor control, social cognition, and brain structure and function simulations across multiple scales. BORN is an AI engine developed by BrainCog, showcasing seamless integration of BrainCog's components and cognitive functions to build advanced AI models and applications.
Affiliation(s)
- Yi Zeng
  - Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  - State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  - Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai 200031, China
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China
  - School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Dongcheng Zhao
  - Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Feifei Zhao
  - Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Guobin Shen
  - Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  - School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Yiting Dong
  - Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  - School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Enmeng Lu
  - Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Qian Zhang
  - Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China
- Yinqian Sun
  - Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  - School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Qian Liang
  - Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Yuxuan Zhao
  - Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Zhuoya Zhao
  - Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  - School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Hongjian Fang
  - Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  - School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Yuwei Wang
  - Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Yang Li
  - Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  - School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing 101408, China
- Xin Liu
  - Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Chengcheng Du
  - Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  - School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Qingqun Kong
  - Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
  - School of Future Technology, University of Chinese Academy of Sciences, Beijing 101408, China
- Zizhe Ruan
  - Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
- Weida Bi
  - Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing 100190, China
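As a concrete example of the biologically grounded learning rules that platforms of this kind expose as components, a pair-based spike-timing-dependent plasticity (STDP) window can be written in a few lines. The amplitudes and time constant below are illustrative, not BrainCog's actual defaults or API:

```python
import math

# Pair-based STDP: weight change for one pre/post spike pair separated by
# dt = t_post - t_pre (in ms). Pre-before-post (dt > 0) potentiates the
# synapse; post-before-pre (dt < 0) depresses it, both decaying with |dt|.
def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    if dt > 0:
        return a_plus * math.exp(-dt / tau)    # long-term potentiation
    else:
        return -a_minus * math.exp(dt / tau)   # long-term depression

ltp = stdp_dw(+10.0)   # pre fires 10 ms before post
ltd = stdp_dw(-10.0)   # post fires 10 ms before pre
```

Rules like this are typically applied to a synapse's weight on every recorded spike pair, with the weight clipped to a permitted range.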
12
|
Wu Z, Shen Y, Zhang J, Liang H, Zhao R, Li H, Xiong J, Zhang X, Chua Y. BIDL: a brain-inspired deep learning framework for spatiotemporal processing. Front Neurosci 2023; 17:1213720. [PMID: 37564366 PMCID: PMC10410154 DOI: 10.3389/fnins.2023.1213720] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2023] [Accepted: 06/22/2023] [Indexed: 08/12/2023] Open
Abstract
Brain-inspired deep spiking neural networks (DSNNs), which emulate the function of the biological brain, provide an effective approach to event-stream spatiotemporal perception (STP), especially for dynamic vision sensor (DVS) signals. However, generalized learning frameworks that can handle spatiotemporal modalities beyond event streams, such as video clips and 3D imaging data, are lacking. To provide a unified design flow for generalized spatiotemporal processing and to investigate the capability of lightweight STP via brain-inspired neural dynamics, this study introduces a training platform called brain-inspired deep learning (BIDL). This framework constructs deep neural networks that leverage neural dynamics for processing temporal information and ensure high-accuracy spatial processing via artificial neural network layers. We conducted experiments on various types of data, including video, DVS signals, 3D medical imaging, and natural language, demonstrating the efficiency of the proposed method. Moreover, as a research framework for neuroscience and machine learning researchers, BIDL facilitates the exploration of different neural models and enables global-local co-learning. To map easily onto neuromorphic chips and GPUs, the framework incorporates several optimizations, including iteration representation, a state-aware computational graph, and built-in neural functions. This study presents a user-friendly and efficient DSNN builder for lightweight STP applications that has the potential to drive future advancements in bio-inspired research.
Affiliation(s)
- Zhenzhi Wu
  - Lynxi Technologies, Co. Ltd., Beijing, China
- Yangshu Shen
  - Lynxi Technologies, Co. Ltd., Beijing, China
  - Department of Precision Instruments and Mechanology, Tsinghua University, Beijing, China
- Jing Zhang
  - Lynxi Technologies, Co. Ltd., Beijing, China
- Huaju Liang
  - Neuromorphic Computing Laboratory, China Nanhu Academy of Electronics and Information Technology (CNAEIT), Jiaxing, Zhejiang, China
- Han Li
  - Lynxi Technologies, Co. Ltd., Beijing, China
- Jianping Xiong
  - Department of Precision Instruments and Mechanology, Tsinghua University, Beijing, China
- Xiyu Zhang
  - School of Automation Science and Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, China
- Yansong Chua
  - Neuromorphic Computing Laboratory, China Nanhu Academy of Electronics and Information Technology (CNAEIT), Jiaxing, Zhejiang, China
13
|
Nourse WRP, Jackson C, Szczecinski NS, Quinn RD. SNS-Toolbox: An Open Source Tool for Designing Synthetic Nervous Systems and Interfacing Them with Cyber-Physical Systems. Biomimetics (Basel) 2023; 8:247. [PMID: 37366842 DOI: 10.3390/biomimetics8020247] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2023] [Revised: 06/02/2023] [Accepted: 06/09/2023] [Indexed: 06/28/2023] Open
Abstract
One developing approach to robotic control is the use of networks of dynamic neurons connected with conductance-based synapses, also known as Synthetic Nervous Systems (SNS). These networks are often built with cyclic topologies and heterogeneous mixtures of spiking and non-spiking neurons, which is a difficult proposition for existing neural simulation software. Most existing solutions target one of two extremes: detailed multi-compartment neural models in small networks, or large-scale networks of greatly simplified neural models. In this work, we present our open-source Python package SNS-Toolbox, which can simulate hundreds to thousands of spiking and non-spiking neurons in real time or faster on consumer-grade computer hardware. We describe the neural and synaptic models supported by SNS-Toolbox and report performance on multiple software and hardware backends, including GPUs and embedded computing platforms. We also showcase two examples using the software: one controlling a simulated limb with muscles in the physics simulator MuJoCo, and another controlling a mobile robot using ROS. We hope that the availability of this software will lower the barrier to entry in designing SNS networks and increase the prevalence of SNS networks in the field of robotic control.
Affiliation(s)
- William R P Nourse
- Department of Electrical, Computer, and Systems Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
- Clayton Jackson
- Department of Mechanical and Aerospace Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
- Nicholas S Szczecinski
- Department of Mechanical and Aerospace Engineering, West Virginia University, Morgantown, WV 26506, USA
- Roger D Quinn
- Department of Mechanical and Aerospace Engineering, Case Western Reserve University, Cleveland, OH 44106, USA
14
Gaurav R, Stewart TC, Yi Y. Reservoir based spiking models for univariate Time Series Classification. Front Comput Neurosci 2023; 17:1148284. [PMID: 37362059 PMCID: PMC10285304 DOI: 10.3389/fncom.2023.1148284] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2023] [Accepted: 05/16/2023] [Indexed: 06/28/2023] Open
Abstract
A variety of advanced machine learning and deep learning algorithms achieve state-of-the-art performance on various temporal processing tasks. However, these methods are highly energy inefficient: they run mainly on power-hungry CPUs and GPUs. Computing with spiking networks, on the other hand, has been shown to be energy efficient on specialized neuromorphic hardware, e.g., Loihi, TrueNorth, and SpiNNaker. In this work, we present two architectures of spiking models, inspired by the theory of Reservoir Computing and Legendre Memory Units, for the Time Series Classification (TSC) task. Our first spiking architecture is closer to the general Reservoir Computing architecture, and we successfully deploy it on Loihi; the second spiking architecture differs from the first by the inclusion of non-linearity in the readout layer. Our second model (trained with the surrogate gradient descent method) shows that non-linear decoding of the linearly extracted temporal features through spiking neurons not only achieves promising results but also offers low computational overhead by significantly reducing the number of neurons compared to popular LSM-based models: more than a 40x reduction with respect to the recent spiking model we compare with. We experiment on five TSC datasets and achieve new state-of-the-art spiking results (as much as a 28.607% accuracy improvement on one of the datasets), thereby showing the potential of our models to address TSC tasks in a green, energy-efficient manner. In addition, we perform energy profiling and comparison on Loihi and CPU to support our claims.
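The first architecture follows the classic reservoir recipe: a fixed random spiking reservoir whose activity is read out linearly. The example below is a generic NumPy illustration under assumed parameters (a LIF-like reservoir, spike-count features, a least-squares readout on a toy two-class task), not the paper's Loihi implementation:

```python
import numpy as np

rng = np.random.default_rng(0)
N = 50                                         # reservoir size
W = rng.normal(0, 0.4 / np.sqrt(N), (N, N))    # fixed random recurrence
W_in = rng.normal(0, 1.0, N)                   # fixed input weights

def reservoir_features(signal, dt=1.0, tau=20.0, v_th=1.0):
    """Run a LIF-like reservoir over a 1-D signal; return spike counts."""
    v = np.zeros(N)
    counts = np.zeros(N)
    rates = np.zeros(N)                        # low-pass spikes, fed back
    for u in signal:
        v += dt / tau * (-v + W_in * u + W @ rates)
        spikes = v >= v_th
        v[spikes] = 0.0                        # reset on spike
        rates += dt / tau * (-rates + spikes / dt)
        counts += spikes
    return counts

# toy TSC task: classify slow (f=2) vs. fast (f=8) noisy sine waves
t = np.arange(200)
signals = [np.sin(2 * np.pi * f * t / 200) + 0.05 * rng.normal(size=200)
           for f in [2] * 10 + [8] * 10]
X = np.array([reservoir_features(s) for s in signals])
y = np.array([0] * 10 + [1] * 10)

# only the linear readout is trained (least squares on +/-1 targets)
Phi = np.c_[X, np.ones(len(X))]
w_out = np.linalg.lstsq(Phi, 2 * y - 1.0, rcond=None)[0]
acc = float(((Phi @ w_out > 0).astype(int) == y).mean())
```

Training only the readout, while the reservoir stays fixed, is what makes reservoir approaches cheap enough for neuromorphic deployment.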
Affiliation(s)
- Ramashish Gaurav
- Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, VA, United States
- Terrence C. Stewart
- University of Waterloo Collaboration Centre, National Research Council of Canada, Waterloo, ON, Canada
- Yang Yi
- Department of Electrical and Computer Engineering, Virginia Tech, Blacksburg, VA, United States
15
Gleeson P, Crook S, Turner D, Mantel K, Raunak M, Willke T, Cohen JD. Integrating model development across computational neuroscience, cognitive science, and machine learning. Neuron 2023; 111:1526-1530. [PMID: 37100054 PMCID: PMC7616330 DOI: 10.1016/j.neuron.2023.03.037] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/16/2022] [Revised: 03/14/2023] [Accepted: 03/28/2023] [Indexed: 04/28/2023]
Abstract
Neuroscience, cognitive science, and computer science are increasingly benefiting from their interactions. This could be accelerated by the direct sharing of computational models across the disparate modeling software used in each field. We describe a Model Description Format designed to meet this challenge.
Affiliation(s)
- Padraig Gleeson
- Department of Neuroscience, Physiology and Pharmacology, University College London, London, UK
- Sharon Crook
- School of Mathematical and Statistical Sciences, Arizona State University, Tempe, AZ, USA
- David Turner
- Princeton Institute for Computational Science & Engineering, Princeton University, Princeton, NJ, USA
- Katherine Mantel
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
- Ted Willke
- Intel Labs, Intel Corp, Hillsboro, OR, USA
- Jonathan D Cohen
- Princeton Neuroscience Institute, Princeton University, Princeton, NJ, USA
16
AN EMBODIED AND COGNITIVE MODEL OF FIGHTER PILOT HIGH-LEVEL AIR-TO-AIR ENGAGEMENT DECISION. COGN SYST RES 2023. [DOI: 10.1016/j.cogsys.2023.02.010] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/05/2023]
17
Wabina RS, Silpasuwanchai C. Neural stochastic differential equations network as uncertainty quantification method for EEG source localization. Biomed Phys Eng Express 2023; 9. [PMID: 36368029 DOI: 10.1088/2057-1976/aca20b] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2022] [Accepted: 11/11/2022] [Indexed: 11/13/2022]
Abstract
EEG source localization remains a challenging problem given the uncertain conductivity values of the volume conductor models (VCMs). As these uncertain conductivities vary across people, they may considerably impact the forward and inverse solutions of the EEG, leading to increased localization errors and misdiagnoses of brain disorders. Calibration of conductivity values using uncertainty quantification (UQ) techniques is a promising approach to reduce localization errors. The widely known UQ methods involve Bayesian approaches, which utilize prior conductivity values to derive their posterior inference and estimate their optimal calibration. However, these approaches have two significant drawbacks: solving for the posterior inference is intractable, and choosing inappropriate priors may lead to increased localization errors. This study used the Neural Stochastic Differential Equations Network (SDE-Net), a combination of dynamical systems and deep learning techniques that utilizes the Wiener process, to minimize conductivity uncertainties in the VCM and improve the inverse problem. Results revealed that SDE-Net generated a lower localization error rate in the inverse problem compared to Bayesian techniques. Future studies may employ new stochastic dynamical systems-based techniques as UQ methods to address further uncertainties in the EEG source localization problem. Our code can be found at https://github.com/rrwabina/SDENet-UQ-ESL.
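SDE-Net's core ingredient, a drift term plus a Wiener-process diffusion term, can be illustrated with the standard Euler-Maruyama integrator. The sketch below shows only the generic numerical machinery, here applied to an Ornstein-Uhlenbeck process, not the paper's network:

```python
import numpy as np

def euler_maruyama(f, g, x0, T=5.0, dt=0.01, n_paths=2000, seed=0):
    """Simulate dX = f(X) dt + g(X) dW with the Euler-Maruyama scheme."""
    rng = np.random.default_rng(seed)
    x = np.full(n_paths, x0, dtype=float)
    for _ in range(int(T / dt)):
        dW = rng.normal(0.0, np.sqrt(dt), n_paths)  # Wiener increments
        x = x + f(x) * dt + g(x) * dW
    return x

# Ornstein-Uhlenbeck process: drift pulls toward 0, constant diffusion.
theta, sigma = 1.0, 0.5
xs = euler_maruyama(f=lambda x: -theta * x, g=lambda x: sigma, x0=2.0)
# stationary distribution is N(0, sigma^2 / (2 * theta)) = N(0, 0.125)
```

After a long horizon the paths approach the stationary distribution N(0, sigma^2/(2*theta)), which provides a quick correctness check for the integrator.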
Affiliation(s)
- R S Wabina
- Center for Health and Wellness Technology, Asian Institute of Technology (AIT), Khlong Luang, Pathum Thani, Thailand
- C Silpasuwanchai
- Center for Health and Wellness Technology, Asian Institute of Technology (AIT), Khlong Luang, Pathum Thani, Thailand
18
Schmitt FJ, Rostami V, Nawrot MP. Efficient parameter calibration and real-time simulation of large-scale spiking neural networks with GeNN and NEST. Front Neuroinform 2023; 17:941696. [PMID: 36844916 PMCID: PMC9950635 DOI: 10.3389/fninf.2023.941696] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2022] [Accepted: 01/16/2023] [Indexed: 02/12/2023] Open
Abstract
Spiking neural networks (SNNs) represent the state-of-the-art approach to the biologically realistic modeling of nervous system function. Systematic calibration of multiple free model parameters is necessary to achieve robust network function and demands high computing power and large memory resources. Special requirements arise from closed-loop model simulation in virtual environments and from real-time simulation in robotic applications. Here, we compare two complementary approaches to efficient large-scale and real-time SNN simulation. The widely used NEural Simulation Tool (NEST) parallelizes simulation across multiple CPU cores. The GPU-enhanced Neural Network (GeNN) simulator uses the highly parallel GPU-based architecture to gain simulation speed. We quantify fixed and variable simulation costs on single machines with different hardware configurations. As a benchmark model, we use a spiking cortical attractor network with a topology of densely connected excitatory and inhibitory neuron clusters with homogeneous or distributed synaptic time constants, and compare it to the random balanced network. We show that simulation time scales linearly with the simulated biological model time and, for large networks, approximately linearly with the model size as dominated by the number of synaptic connections. Additional fixed costs with GeNN are almost independent of model size, while fixed costs with NEST increase linearly with model size. We demonstrate how GeNN can be used for simulating networks with up to 3.5 × 10⁶ neurons (> 3 × 10¹² synapses) on a high-end GPU, and up to 250,000 neurons (25 × 10⁹ synapses) on a low-cost GPU. Real-time simulation was achieved for networks with 100,000 neurons. Network calibration and parameter grid search can be efficiently achieved using batch processing. We discuss the advantages and disadvantages of both approaches for different use cases.
Affiliation(s)
- Felix Johannes Schmitt
- Computational Systems Neuroscience, Institute of Zoology, University of Cologne, Cologne, Germany
- Vahid Rostami
- Computational Systems Neuroscience, Institute of Zoology, University of Cologne, Cologne, Germany
19
Dumont NSY, Stöckel A, Furlong PM, Bartlett M, Eliasmith C, Stewart TC. Biologically-Based Computation: How Neural Details and Dynamics Are Suited for Implementing a Variety of Algorithms. Brain Sci 2023; 13:brainsci13020245. [PMID: 36831788 PMCID: PMC9954128 DOI: 10.3390/brainsci13020245] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2022] [Revised: 01/28/2023] [Accepted: 01/28/2023] [Indexed: 02/04/2023] Open
Abstract
The Neural Engineering Framework (Eliasmith & Anderson, 2003) is a long-standing method for implementing high-level algorithms constrained by low-level neurobiological details. In recent years, this method has been expanded to incorporate more biological details and applied to new tasks. This paper brings together these ongoing research strands, presenting them in a common framework. We expand on the NEF's core principles of (a) specifying the desired tuning curves of neurons in different parts of the model, (b) defining the computational relationships between the values represented by the neurons in different parts of the model, and (c) finding the synaptic connection weights that will cause those computations and tuning curves. In particular, we show how to extend this to include complex spatiotemporal tuning curves, and then apply this approach to produce functional computational models of grid cells, time cells, path integration, sparse representations, probabilistic representations, and symbolic representations in the brain.
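The three NEF principles listed above reduce, in the simplest rate-based case, to building heterogeneous tuning curves and solving a regularized least-squares problem for the decoders. The sketch below uses rectified-linear rate neurons and illustrative parameters; it is a toy version of the method, not the published tooling:

```python
import numpy as np

rng = np.random.default_rng(1)
N = 60                                 # number of neurons
x = np.linspace(-1, 1, 101)            # represented scalar values

# Principle (a): heterogeneous tuning curves a_i(x) = G[gain_i e_i x + bias_i]
encoders = rng.choice([-1.0, 1.0], N)
gains = rng.uniform(0.5, 2.0, N)
biases = rng.uniform(-1.0, 1.0, N)
A = np.maximum(0.0, gains * encoders * x[:, None] + biases)  # rate matrix

# Principle (b): linear decoders for f(x) = x^2 via regularized least squares
target = x ** 2
reg = 0.01 * N
d = np.linalg.solve(A.T @ A + reg * np.eye(N), A.T @ target)

x_hat = A @ d                          # decoded estimate of f(x)
rmse = float(np.sqrt(np.mean((x_hat - target) ** 2)))
```

The same decoder solve extends to arbitrary functions f(x) by changing `target`; principle (c) then folds the decoders and the next population's encoders into synaptic connection weights.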
Affiliation(s)
- Nicole Sandra-Yaffa Dumont
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- P. Michael Furlong
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Madeleine Bartlett
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Chris Eliasmith
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON N2L 3G1, Canada
- Applied Brain Research Inc., Waterloo, ON N2T 1G9, Canada
- Terrence C. Stewart
- National Research Council, University of Waterloo Collaboration Centre, Waterloo, ON N2L 3G1, Canada
20
Chlasta K, Sochaczewski P, Wójcik GM, Krejtz I. Neural simulation pipeline: Enabling container-based simulations on-premise and in public clouds. Front Neuroinform 2023; 17:1122470. [PMID: 37025550 PMCID: PMC10070792 DOI: 10.3389/fninf.2023.1122470] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2022] [Accepted: 02/06/2023] [Indexed: 04/08/2023] Open
Abstract
In this study, we explore the simulation setup in computational neuroscience. We use GENESIS, a general-purpose simulation engine for sub-cellular components and biochemical reactions, realistic neuron models, large neural networks, and system-level models. GENESIS supports developing and running computer simulations but leaves a gap for setting up today's larger and more complex models. The field of realistic brain network models has outgrown the simplicity of the earliest models. The challenges include managing the complexity of software dependencies and various models, setting up model parameter values, storing the input parameters alongside the results, and providing execution statistics. Moreover, in the high-performance computing (HPC) context, public cloud resources are becoming an alternative to expensive on-premises clusters. We present the Neural Simulation Pipeline (NSP), which facilitates large-scale computer simulations and their deployment to multiple computing infrastructures using an infrastructure-as-code (IaC) containerization approach. The authors demonstrate the effectiveness of NSP in a pattern recognition task programmed with GENESIS, through a custom-built visual system, called RetNet(8 × 5,1), that uses biologically plausible Hodgkin-Huxley spiking neurons. We evaluate the pipeline by performing 54 simulations executed on-premises, at the Hasso Plattner Institute's (HPI) Future Service-Oriented Computing (SOC) Lab, and through Amazon Web Services (AWS), the largest public cloud service provider in the world. We report on non-containerized and containerized execution with Docker, and present the cost per simulation in AWS. The results show that our neural simulation pipeline can reduce entry barriers to neural simulations, making them more practical and cost-effective.
Affiliation(s)
- Karol Chlasta
- Department of Computer Science, Polish-Japanese Academy of Information Technology, Warsaw, Poland
- Department of Management in Networked and Digital Societies, Kozminski University, Warsaw, Poland
- Paweł Sochaczewski
- Department of Management in Networked and Digital Societies, Kozminski University, Warsaw, Poland
- Grzegorz M. Wójcik
- Department of Neuroinformatics and Biomedical Engineering, Institute of Computer Science, Maria Curie-Sklodowska University in Lublin, Lublin, Poland
- Izabela Krejtz
- Eye Tracking Research Center, SWPS University, Warsaw, Poland
21
Nilsson M, Schelén O, Lindgren A, Bodin U, Paniagua C, Delsing J, Sandin F. Integration of neuromorphic AI in event-driven distributed digitized systems: Concepts and research directions. Front Neurosci 2023; 17:1074439. [PMID: 36875653 PMCID: PMC9981939 DOI: 10.3389/fnins.2023.1074439] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/19/2022] [Accepted: 01/23/2023] [Indexed: 02/19/2023] Open
Abstract
Increasing complexity and data-generation rates in cyber-physical systems and the industrial Internet of things are calling for a corresponding increase in AI capabilities at the resource-constrained edges of the Internet. Meanwhile, the resource requirements of digital computing and deep learning are growing exponentially, in an unsustainable manner. One possible way to bridge this gap is the adoption of resource-efficient brain-inspired "neuromorphic" processing and sensing devices, which use event-driven, asynchronous, dynamic neurosynaptic elements with colocated memory for distributed processing and machine learning. However, since neuromorphic systems are fundamentally different from conventional von Neumann computers and clock-driven sensor systems, several challenges are posed to large-scale adoption and integration of neuromorphic devices into the existing distributed digital-computational infrastructure. Here, we describe the current landscape of neuromorphic computing, focusing on characteristics that pose integration challenges. Based on this analysis, we propose a microservice-based conceptual framework for neuromorphic systems integration, consisting of a neuromorphic-system proxy, which would provide virtualization and communication capabilities required in distributed systems of systems, in combination with a declarative programming approach offering engineering-process abstraction. We also present concepts that could serve as a basis for the realization of this framework, and identify directions for further research required to enable large-scale system integration of neuromorphic devices.
Affiliation(s)
- Mattias Nilsson
- Embedded Intelligent Systems Lab (EISLAB), Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, Luleå, Sweden
- Olov Schelén
- Embedded Intelligent Systems Lab (EISLAB), Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, Luleå, Sweden
- Anders Lindgren
- Embedded Intelligent Systems Lab (EISLAB), Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, Luleå, Sweden
- Applied AI and IoT, Industrial Systems, Digital Systems, RISE Research Institutes of Sweden, Kista, Sweden
- Ulf Bodin
- Embedded Intelligent Systems Lab (EISLAB), Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, Luleå, Sweden
- Cristina Paniagua
- Embedded Intelligent Systems Lab (EISLAB), Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, Luleå, Sweden
- Jerker Delsing
- Embedded Intelligent Systems Lab (EISLAB), Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, Luleå, Sweden
- Fredrik Sandin
- Embedded Intelligent Systems Lab (EISLAB), Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, Luleå, Sweden
22
Joseph GV, Pakrashi V. Spiking Neural Networks for Structural Health Monitoring. SENSORS (BASEL, SWITZERLAND) 2022; 22:9245. [PMID: 36501946 PMCID: PMC9740015 DOI: 10.3390/s22239245] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 10/01/2022] [Revised: 11/21/2022] [Accepted: 11/23/2022] [Indexed: 06/17/2023]
Abstract
This paper presents the first implementation of a spiking neural network (SNN) for the extraction of cepstral coefficients in structural health monitoring (SHM) applications and demonstrates the possibilities of neuromorphic computing in this field. In this regard, we show that spiking neural networks can be effectively used to extract cepstral coefficients as features of vibration signals of structures in their operational conditions. We demonstrate that the neural cepstral coefficients extracted by the network can be successfully used for anomaly detection. To address the power efficiency of sensor nodes, related to both processing and transmission, which affects the applicability of the proposed approach, we implement the algorithm on specialised neuromorphic hardware (the Intel® Loihi architecture) and benchmark the results using numerical and experimental data of degradation, in the form of a stiffness change of a single-degree-of-freedom system excited by Gaussian white noise. The work is expected to open a new direction of SHM applications towards non-von Neumann computing through a neuromorphic approach.
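Cepstral coefficients, the features the spiking network is trained to extract, come from a short classical computation: the inverse FFT of the log-magnitude spectrum. The NumPy sketch below shows that non-spiking reference computation (the signal, echo lag, and coefficient count are illustrative, not taken from the paper):

```python
import numpy as np

def real_cepstrum(x, eps=1e-12):
    """Real cepstrum: inverse FFT of the log-magnitude spectrum of x."""
    spectrum = np.abs(np.fft.fft(x))
    return np.fft.ifft(np.log(spectrum + eps)).real

# A signal plus a scaled, delayed echo produces a cepstral peak at the
# echo lag (the "quefrency"); the first few coefficients summarize the
# spectral envelope and serve as compact monitoring features.
rng = np.random.default_rng(0)
s = rng.normal(size=1024)
lag = 40
x = s.copy()
x[lag:] += 0.6 * s[:-lag]            # add an echo at 40 samples
c = real_cepstrum(x)
features = c[:12]                    # compact cepstral feature vector
```

The echo shows up as a positive cepstral peak near the 40-sample lag; it is this kind of structure that shifts when a monitored system's stiffness degrades.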
23
Michaelis C, Lehr AB, Oed W, Tetzlaff C. Brian2Loihi: An emulator for the neuromorphic chip Loihi using the spiking neural network simulator Brian. Front Neuroinform 2022; 16:1015624. [PMID: 36439945 PMCID: PMC9682266 DOI: 10.3389/fninf.2022.1015624] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2022] [Accepted: 10/12/2022] [Indexed: 11/11/2022] Open
Abstract
Developing intelligent neuromorphic solutions remains a challenging endeavor. It requires a solid conceptual understanding of the hardware's fundamental building blocks. Beyond this, accessible and user-friendly prototyping is crucial to speed up the design pipeline. We developed an open source Loihi emulator based on the neural network simulator Brian that can easily be incorporated into existing simulation workflows. We demonstrate errorless Loihi emulation in software for a single neuron and for a recurrently connected spiking neural network. On-chip learning is also reviewed and implemented, with reasonable discrepancy due to stochastic rounding. This work provides a coherent presentation of Loihi's computational unit and introduces a new, easy-to-use Loihi prototyping package with the aim to help streamline conceptualization and deployment of new algorithms.
Affiliation(s)
- Carlo Michaelis
- Department of Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Andrew B. Lehr
- Department of Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Winfried Oed
- Department of Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Christian Tetzlaff
- Department of Computational Neuroscience, University of Göttingen, Göttingen, Germany
- Bernstein Center for Computational Neuroscience, University of Göttingen, Göttingen, Germany
24
Zhao Z, Wang Y, Zou Q, Xu T, Tao F, Zhang J, Wang X, Shi CJR, Luo J, Xie Y. The spike gating flow: A hierarchical structure-based spiking neural network for online gesture recognition. Front Neurosci 2022; 16:923587. [PMID: 36408382 PMCID: PMC9667043 DOI: 10.3389/fnins.2022.923587] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2022] [Accepted: 10/03/2022] [Indexed: 01/25/2023] Open
Abstract
Action recognition is an exciting research avenue for artificial intelligence since it may be a game changer in emerging industrial fields such as robotic vision and automobiles. However, current deep learning (DL) faces major challenges in such applications because of the huge computational cost and inefficient learning. Hence, we developed a novel brain-inspired spiking neural network (SNN) based system titled Spiking Gating Flow (SGF) for online action learning. The developed system consists of multiple SGF units assembled in a hierarchical manner. A single SGF unit contains three layers: a feature extraction layer, an event-driven layer, and a histogram-based training layer. To demonstrate the capability of the developed system, we employed a standard dynamic vision sensor (DVS) gesture classification task as a benchmark. The results indicate that we can achieve 87.5% accuracy, which is comparable with DL, but with a smaller training/inference data number ratio of 1.5:1. Only a single training epoch is required during the learning process. Meanwhile, to the best of our knowledge, this is the highest accuracy among non-backpropagation-based SNNs. Finally, we summarize the few-shot learning (FSL) paradigm of the developed network: 1) a hierarchical structure-based network design that involves prior human knowledge; 2) SNNs for content-based global dynamic feature detection.
Affiliation(s)
- Zihao Zhao
- School of Microelectronics, Fudan University, Shanghai, China
- Alibaba DAMO Academy, Shanghai, China
- Yanhong Wang
- School of Microelectronics, Fudan University, Shanghai, China
- Alibaba DAMO Academy, Shanghai, China
- Qiaosha Zou
- School of Microelectronics, Fudan University, Shanghai, China
- Tie Xu
- Alibaba Group, Hangzhou, China
- Xiaoan Wang
- BrainUp Research Laboratory, Shanghai, China
- C.-J. Richard Shi
- Department of Electrical and Computer Engineering, University of Washington, Seattle, WA, United States
- Junwen Luo
- Alibaba DAMO Academy, Shanghai, China
- BrainUp Research Laboratory, Shanghai, China
- Yuan Xie
- Alibaba DAMO Academy, Shanghai, China
25
Garg N, Balafrej I, Stewart TC, Portal JM, Bocquet M, Querlioz D, Drouin D, Rouat J, Beilliard Y, Alibart F. Voltage-dependent synaptic plasticity: Unsupervised probabilistic Hebbian plasticity rule based on neurons membrane potential. Front Neurosci 2022; 16:983950. [PMID: 36340782 PMCID: PMC9634260 DOI: 10.3389/fnins.2022.983950] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2022] [Accepted: 09/05/2022] [Indexed: 11/27/2022] Open
Abstract
This study proposes voltage-dependent synaptic plasticity (VDSP), a novel brain-inspired unsupervised local learning rule for the online implementation of Hebb's plasticity mechanism on neuromorphic hardware. The proposed VDSP learning rule updates the synaptic conductance on the spike of the postsynaptic neuron only, which reduces the number of updates by a factor of two with respect to standard spike-timing-dependent plasticity (STDP). This update depends on the membrane potential of the presynaptic neuron, which is readily available as part of the neuron implementation and hence does not require additional memory for storage. Moreover, the update is regularized on the synaptic weight and prevents explosion or vanishing of weights on repeated stimulation. A rigorous mathematical analysis is performed to draw an equivalence between VDSP and STDP. To validate the system-level performance of VDSP, we train a single-layer spiking neural network (SNN) for the recognition of handwritten digits. We report 85.01 ± 0.76% (mean ± SD) accuracy for a network of 100 output neurons on the MNIST dataset. The performance improves when scaling the network size (89.93 ± 0.41% for 400 output neurons, 90.56 ± 0.27% for 500 neurons), which validates the applicability of the proposed learning rule for spatial pattern recognition tasks. Future work will consider more complicated tasks. Interestingly, the learning rule adapts better than STDP to the frequency of the input signal and does not require hand-tuning of hyperparameters.
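The rule's structure, an update triggered only by a postsynaptic spike, with sign and magnitude taken from the presynaptic membrane potential and soft weight bounds for regularization, can be sketched directly. This is a structural illustration with made-up constants, not the paper's exact equations:

```python
def vdsp_update(w, v_pre, v_rest=0.0, eta=0.01, w_max=1.0):
    """One VDSP-style weight update, applied only when the POSTsynaptic
    neuron spikes (a sketch of the rule's structure; v_rest, eta, and
    w_max are illustrative, not the published values).

    A depolarized presynaptic membrane (recent presynaptic activity)
    potentiates the synapse; a hyperpolarized one depresses it. Soft
    bounds regularize the weight, preventing explosion or vanishing
    under repeated stimulation.
    """
    if v_pre > v_rest:
        return w + eta * (v_pre - v_rest) * (w_max - w)  # bounded above
    return w + eta * (v_pre - v_rest) * w                # bounded below

w0 = 0.5
w1 = vdsp_update(w0, v_pre=0.8)    # depolarized pre: potentiation
w2 = vdsp_update(w1, v_pre=-0.6)   # hyperpolarized pre: depression
```

Because the update reads only the presynaptic membrane potential at the moment of the postsynaptic spike, no spike-timing traces need to be stored, which is the memory saving the abstract emphasizes.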
Affiliation(s)
- Nikhil Garg
- Institut Interdisciplinaire d’Innovation Technologique (3IT), Université de Sherbrooke, Sherbrooke, QC, Canada
- Laboratoire Nanotechnologies Nanosystèmes (LN2)—CNRS UMI-3463, Université de Sherbrooke, Sherbrooke, QC, Canada
- Institute of Electronics, Microelectronics and Nanotechnology (IEMN), Université de Lille, Villeneuve-d’Ascq, France
- Ismael Balafrej
- Institut Interdisciplinaire d’Innovation Technologique (3IT), Université de Sherbrooke, Sherbrooke, QC, Canada
- Laboratoire Nanotechnologies Nanosystèmes (LN2)—CNRS UMI-3463, Université de Sherbrooke, Sherbrooke, QC, Canada
- NECOTIS Research Lab, Department of Electrical and Computer Engineering, University of Sherbrooke, Sherbrooke, QC, Canada
- Terrence C. Stewart
- National Research Council Canada, University of Waterloo Collaboration Centre, Waterloo, ON, Canada
- Jean-Michel Portal
- Aix-Marseille Université, Université de Toulon, CNRS, IM2NP, Marseille, France
- Marc Bocquet
- Institute of Electronics, Microelectronics and Nanotechnology (IEMN), Université de Lille, Villeneuve-d’Ascq, France
- Damien Querlioz
- Université Paris-Saclay, CNRS, Centre de Nanosciences et de Nanotechnologies, Palaiseau, France
- Dominique Drouin
- Institut Interdisciplinaire d’Innovation Technologique (3IT), Université de Sherbrooke, Sherbrooke, QC, Canada
- Laboratoire Nanotechnologies Nanosystèmes (LN2)—CNRS UMI-3463, Université de Sherbrooke, Sherbrooke, QC, Canada
- Jean Rouat
- Institut Interdisciplinaire d’Innovation Technologique (3IT), Université de Sherbrooke, Sherbrooke, QC, Canada
- Laboratoire Nanotechnologies Nanosystèmes (LN2)—CNRS UMI-3463, Université de Sherbrooke, Sherbrooke, QC, Canada
- NECOTIS Research Lab, Department of Electrical and Computer Engineering, University of Sherbrooke, Sherbrooke, QC, Canada
- Yann Beilliard
- Institut Interdisciplinaire d’Innovation Technologique (3IT), Université de Sherbrooke, Sherbrooke, QC, Canada
- Laboratoire Nanotechnologies Nanosystèmes (LN2)—CNRS UMI-3463, Université de Sherbrooke, Sherbrooke, QC, Canada
- Fabien Alibart
- Institut Interdisciplinaire d’Innovation Technologique (3IT), Université de Sherbrooke, Sherbrooke, QC, Canada
- Laboratoire Nanotechnologies Nanosystèmes (LN2)—CNRS UMI-3463, Université de Sherbrooke, Sherbrooke, QC, Canada
- Institute of Electronics, Microelectronics and Nanotechnology (IEMN), Université de Lille, Villeneuve-d’Ascq, France
26
Kleyko D, Davies M, Frady EP, Kanerva P, Kent SJ, Olshausen BA, Osipov E, Rabaey JM, Rachkovskij DA, Rahimi A, Sommer FT. Vector Symbolic Architectures as a Computing Framework for Emerging Hardware. PROCEEDINGS OF THE IEEE. INSTITUTE OF ELECTRICAL AND ELECTRONICS ENGINEERS 2022; 110:1538-1571. [PMID: 37868615 PMCID: PMC10588678] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Subscribe] [Scholar Register] [Indexed: 10/24/2023]
Abstract
This article reviews recent progress in the development of the computing framework Vector Symbolic Architectures (also known as Hyperdimensional Computing). This framework is well suited for implementation in stochastic, emerging hardware, and it naturally expresses the types of cognitive operations required for Artificial Intelligence (AI). We demonstrate in this article that the field-like algebraic structure of Vector Symbolic Architectures offers simple but powerful operations on high-dimensional vectors that can support all data structures and manipulations relevant to modern computing. In addition, we illustrate the distinguishing feature of Vector Symbolic Architectures, "computing in superposition," which sets it apart from conventional computing. It also opens the door to efficient solutions to the difficult combinatorial search problems inherent in AI applications. We sketch ways of demonstrating that Vector Symbolic Architectures are computationally universal. We see them acting as a framework for computing with distributed representations that can play the role of an abstraction layer for emerging computing hardware. This article serves as a reference for computer architects by illustrating the philosophy behind Vector Symbolic Architectures, techniques of distributed computing with them, and their relevance to emerging computing hardware, such as neuromorphic computing.
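The "simple but powerful operations on high-dimensional vectors" have a compact concrete form for bipolar hypervectors: binding by elementwise multiplication, bundling by the majority sign of the sum, and comparison by a normalized dot product. The sketch below encodes a two-slot record in superposition and recovers one filler by unbinding (the dimensionality and the role/filler names are illustrative):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000                        # hypervector dimensionality

def hv():
    """Random bipolar hypervector (one atomic symbol)."""
    return rng.choice([-1, 1], D)

def bind(a, b):
    """Binding by elementwise multiplication (self-inverse for bipolar)."""
    return a * b

def bundle(*vs):
    """Bundling (superposition) by the elementwise sign of the sum."""
    return np.sign(np.sum(vs, axis=0))

def sim(a, b):
    """Normalized dot product; near 0 for unrelated hypervectors."""
    return float(a @ b) / D

# Encode the record {color: red, shape: square} in superposition,
# then recover the color filler by unbinding the color role.
color, red, shape, square = hv(), hv(), hv(), hv()
record = bundle(bind(color, red), bind(shape, square))
decoded = bind(record, color)     # similar to red, dissimilar to the rest
```

Because binding is self-inverse, `bind(record, color)` is close to `red` yet nearly orthogonal to everything else, which is the "computing in superposition" property the article highlights.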
Affiliation(s)
- Denis Kleyko
- Redwood Center for Theoretical Neuroscience at the University of California at Berkeley, CA 94720, USA and also with the Intelligent Systems Lab at Research Institutes of Sweden, 16440 Kista, Sweden
- Mike Davies
- Neuromorphic Computing Lab, Intel Labs, Santa Clara, CA 95054, USA
- E Paxon Frady
- Neuromorphic Computing Lab, Intel Labs, Santa Clara, CA 95054, USA
- Pentti Kanerva
- Redwood Center for Theoretical Neuroscience at the University of California at Berkeley, CA 94720, USA
- Spencer J Kent
- Redwood Center for Theoretical Neuroscience at the University of California at Berkeley, CA 94720, USA
- Bruno A Olshausen
- Redwood Center for Theoretical Neuroscience at the University of California at Berkeley, CA 94720, USA
- Evgeny Osipov
- Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, 97187 Luleå, Sweden
- Jan M Rabaey
- Department of Electrical Engineering and Computer Sciences at the University of California at Berkeley, CA 94720, USA
- Dmitri A Rachkovskij
- International Research and Training Center for Information Technologies and Systems, 03680 Kyiv, Ukraine, and the Department of Computer Science, Electrical and Space Engineering, Luleå University of Technology, 97187 Luleå, Sweden
- Abbas Rahimi
- IBM Research - Zurich, 8803 Rüschlikon, Switzerland
- Friedrich T Sommer
- Neuromorphic Computing Lab, Intel Labs, Santa Clara, CA 95054, USA, and the Redwood Center for Theoretical Neuroscience at the University of California at Berkeley, CA 94720, USA

27
Ehrlich M, Zaidel Y, Weiss PL, Melamed Yekel A, Gefen N, Supic L, Ezra Tsur E. Adaptive control of a wheelchair mounted robotic arm with neuromorphically integrated velocity readings and online-learning. Front Neurosci 2022; 16:1007736. [PMID: 36248665] [PMCID: PMC9559600] [DOI: 10.3389/fnins.2022.1007736]
Abstract
Wheelchair-mounted robotic arms support people with upper extremity disabilities in various activities of daily living (ADL). However, the cost and power consumption of responsive, adaptive assistive robotic arms have kept such systems in limited use. Neuromorphic spiking neural networks can be used for real-time, machine learning-driven control of robots, providing an energy-efficient framework for adaptive control. In this work, we demonstrate neuromorphic adaptive control of a wheelchair-mounted robotic arm deployed on Intel’s Loihi chip. Our algorithm design uses neuromorphically represented and integrated velocity readings to derive the arm’s current state. The proposed controller provides the robotic arm with adaptive signals, guiding its motion while accounting for kinematic changes in real time. We pilot-tested the device with an able-bodied participant to evaluate its accuracy while performing ADL-related trajectories. We further demonstrated the capacity of the controller to compensate for unexpected inertia-generating payloads using online learning. Caregivers viewed videotaped recordings of the ADL tasks performed by the robot; data summarizing their feedback on the user experience and the potential benefit of the system are reported.
Affiliation(s)
- Michael Ehrlich
- Neuro-Biomorphic Engineering Lab, Open University of Israel, Ra’anana, Israel
- Yuval Zaidel
- Neuro-Biomorphic Engineering Lab, Open University of Israel, Ra’anana, Israel
- Patrice L. Weiss
- Department of Occupational Therapy, University of Haifa, Haifa, Israel
- The Helmsley Pediatric & Adolescent Rehabilitation Research Center, ALYN Hospital, Jerusalem, Israel
- Arie Melamed Yekel
- The Helmsley Pediatric & Adolescent Rehabilitation Research Center, ALYN Hospital, Jerusalem, Israel
- Naomi Gefen
- The Helmsley Pediatric & Adolescent Rehabilitation Research Center, ALYN Hospital, Jerusalem, Israel
- Lazar Supic
- Accenture Labs, San Francisco, CA, United States
- Elishai Ezra Tsur
- Neuro-Biomorphic Engineering Lab, Open University of Israel, Ra’anana, Israel

28
Duggins P, Eliasmith C. Constructing functional models from biophysically-detailed neurons. PLoS Comput Biol 2022; 18:e1010461. [PMID: 36074765] [PMCID: PMC9455888] [DOI: 10.1371/journal.pcbi.1010461]
Abstract
Improving biological plausibility and functional capacity are two important goals for brain models that connect low-level neural details to high-level behavioral phenomena. We develop a method called “oracle-supervised Neural Engineering Framework” (osNEF) to train biologically-detailed spiking neural networks that realize a variety of cognitively-relevant dynamical systems. Specifically, we train networks to perform computations that are commonly found in cognitive systems (communication, multiplication, harmonic oscillation, and gated working memory) using four distinct neuron models (leaky-integrate-and-fire neurons, Izhikevich neurons, 4-dimensional nonlinear point neurons, and 4-compartment, 6-ion-channel layer-V pyramidal cell reconstructions) connected with various synaptic models (current-based synapses, conductance-based synapses, and voltage-gated synapses). We show that osNEF networks exhibit the target dynamics by accounting for nonlinearities present within the neuron models: performance is comparable across all four systems and all four neuron models, with variance proportional to task and neuron model complexity. We also apply osNEF to build a model of working memory that performs a delayed response task using a combination of pyramidal cells and inhibitory interneurons connected with NMDA and GABA synapses. The baseline performance and forgetting rate of the model are consistent with animal data from delayed match-to-sample tasks (DMTST): we observe a baseline performance of 95% and exponential forgetting with time constant τ = 8.5s, while a recent meta-analysis of DMTST performance across species observed baseline performances of 58 − 99% and exponential forgetting with time constants of τ = 2.4 − 71s. These results demonstrate that osNEF can train functional brain models using biologically-detailed components and open new avenues for investigating the relationship between biophysical mechanisms and functional capabilities. 
Computational models of biologically realistic neural networks help scientists understand and recreate a wide variety of brain processes, responsible for everything from fish locomotion to human cognition. To be useful, these models must both recreate features of the brain, such as the electrical, chemical, and geometric properties of neurons, and perform useful functional operations, such as storing and retrieving information from a short term memory. Here, we develop a new method for training networks built from biologically detailed components. We simulate networks that contain a variety of complex neurons and synapses, then show that our method successfully trains them to perform a variety of cognitive operations. Most notably, we train a working memory model that contains detailed reconstructions of cortical neurons, and demonstrate that it performs a memory task with performance that is comparable to simple animals. Researchers can use our method to train detailed brain models and investigate how biological features (or deficits thereof) relate to cognition, which may provide insights into the biological basis of mental disorders such as Parkinson’s disease.
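The least-squares decoder computation at the heart of NEF-style methods like the osNEF described above can be sketched numerically. This toy version uses rectified-linear rate neurons in place of the biophysically detailed spiking models trained in the paper, and every parameter choice below is illustrative.

```python
import numpy as np

# NEF-style decoding sketch: find linear decoders d over a population's
# tuning curves so that the rate-weighted sum approximates a target
# function of the represented variable x.

rng = np.random.default_rng(1)
n_neurons, n_eval = 100, 200
x = np.linspace(-1, 1, n_eval)                 # evaluation points

gains = rng.uniform(0.5, 2.0, n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)
encoders = rng.choice([-1.0, 1.0], n_neurons)  # preferred directions

# Tuning curves: a_i(x) = max(0, gain_i * e_i * x + bias_i)
A = np.maximum(0.0, np.outer(x, encoders * gains) + biases)

target = x ** 2                                # function to decode
# Regularised least squares for the decoders (ridge regression).
reg = 0.1 * A.max() ** 2 * np.eye(n_neurons)
d = np.linalg.solve(A.T @ A + reg, A.T @ target)

xhat = A @ d                                   # decoded estimate of x^2
rmse = np.sqrt(np.mean((xhat - target) ** 2))
assert rmse < 0.05   # decoding error is small relative to the range
```

The "oracle-supervised" part of osNEF extends this idea to neuron models whose tuning curves cannot be written in closed form, by measuring activities empirically before solving for decoders.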
Affiliation(s)
- Peter Duggins
- Computational Neuroscience Research Group, Department of Systems Design Engineering, University of Waterloo, Waterloo, Canada
- Chris Eliasmith
- Computational Neuroscience Research Group, Department of Systems Design Engineering, University of Waterloo, Waterloo, Canada

29
Gutierrez CE, Skibbe H, Musset H, Doya K. A Spiking Neural Network Builder for Systematic Data-to-Model Workflow. Front Neuroinform 2022; 16:855765. [PMID: 35909884] [PMCID: PMC9326306] [DOI: 10.3389/fninf.2022.855765]
Abstract
In building biological neural network models, it is crucial to efficiently convert diverse anatomical and physiological data into parameters of neurons and synapses and to systematically estimate unknown parameters in reference to experimental observations. Web-based tools for systematic model building can improve the transparency and reproducibility of computational models and can facilitate collaborative model building, validation, and evolution. Here, we present a framework to support collaborative data-driven development of spiking neural network (SNN) models based on the Entity-Relationship (ER) data description commonly used in large-scale business software development. We organize all data attributes, including species, brain regions, neuron types, projections, neuron models, and references, as tables and relations within a database management system (DBMS) and provide GUI interfaces for data registration and visualization. This allows a robust "business-oriented" data representation that supports collaborative model building and traceability of source information for every detail of a model. We tested this data-to-model framework on cortical and striatal network models by successfully combining data from papers with existing neuron and synapse models and by generating NEST simulation codes for various network sizes. Our framework also helps to check data integrity and consistency and supports data comparisons across species. The framework enables the modeling of any region of the brain and is being deployed to support the integration of anatomical and physiological datasets from the Brain/MINDS project for systematic SNN modeling of the marmoset brain.
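A heavily simplified, hypothetical version of such an ER organisation can be sketched with SQLite: anatomical facts become rows in related tables, and model connectivity is assembled by querying them with its provenance attached. The table names, columns, and values below are illustrative stand-ins, not the tool's actual schema.

```python
import sqlite3

# Toy ER schema: regions, neuron types, and projections, each
# projection carrying a reference string for traceability.
con = sqlite3.connect(":memory:")
con.executescript("""
CREATE TABLE region (id INTEGER PRIMARY KEY, name TEXT, species TEXT);
CREATE TABLE neuron_type (
    id INTEGER PRIMARY KEY, region_id INTEGER REFERENCES region(id),
    name TEXT, model TEXT, n_neurons INTEGER);
CREATE TABLE projection (
    pre_id INTEGER REFERENCES neuron_type(id),
    post_id INTEGER REFERENCES neuron_type(id),
    weight REAL, source_ref TEXT);
""")
con.executemany("INSERT INTO region VALUES (?, ?, ?)",
                [(1, "cortex", "marmoset"), (2, "striatum", "marmoset")])
con.executemany("INSERT INTO neuron_type VALUES (?, ?, ?, ?, ?)",
                [(1, 1, "L5 pyramidal", "iaf_psc_alpha", 800),
                 (2, 2, "MSN", "iaf_psc_alpha", 1000)])
con.execute("INSERT INTO projection VALUES (1, 2, 0.5, 'doi:...')")

# "Data-to-model": list every connection with its provenance.
row = con.execute("""
    SELECT pre.name, post.name, p.weight, p.source_ref
    FROM projection p
    JOIN neuron_type pre ON pre.id = p.pre_id
    JOIN neuron_type post ON post.id = p.post_id""").fetchone()
assert row == ("L5 pyramidal", "MSN", 0.5, "doi:...")
```

A code generator would then walk such query results to emit simulator calls (e.g., NEST `Create`/`Connect` statements) for the requested network size.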
Affiliation(s)
- Carlos Enrique Gutierrez
- Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
- Henrik Skibbe
- Brain Image Analysis Unit, RIKEN Center for Brain Science, Wako, Japan
- Hugo Musset
- Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
- Kenji Doya
- Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan

30
Sadeghi D, Shoeibi A, Ghassemi N, Moridian P, Khadem A, Alizadehsani R, Teshnehlab M, Gorriz JM, Khozeimeh F, Zhang YD, Nahavandi S, Acharya UR. An overview of artificial intelligence techniques for diagnosis of Schizophrenia based on magnetic resonance imaging modalities: Methods, challenges, and future works. Comput Biol Med 2022; 146:105554. [DOI: 10.1016/j.compbiomed.2022.105554]

31
Spiking Neural Networks and Their Applications: A Review. Brain Sci 2022; 12:863. [PMID: 35884670] [PMCID: PMC9313413] [DOI: 10.3390/brainsci12070863]
Abstract
The past decade has witnessed the great success of deep neural networks in various domains. However, deep neural networks are very resource-intensive in terms of energy consumption, data requirements, and computational cost. With the recent increasing need for the autonomy of machines in the real world, e.g., self-driving vehicles, drones, and collaborative robots, the exploitation of deep neural networks in those applications has been actively investigated. In those applications, energy and computational efficiency are especially important because of the need for real-time responses and the limited energy supply. A promising solution for these previously infeasible applications has recently been offered by biologically plausible spiking neural networks. Spiking neural networks aim to bridge the gap between neuroscience and machine learning, using biologically realistic models of neurons to carry out computation. Due to their functional similarity to the biological neural network, spiking neural networks can embrace the sparsity found in biology and are highly compatible with temporal codes. Our contributions in this work are: (i) we give a comprehensive review of theories of biological neurons; (ii) we present various existing spike-based neuron models, which have been studied in neuroscience; (iii) we detail synapse models; (iv) we provide a review of artificial neural networks; (v) we provide detailed guidance on how to train spike-based neuron models; (vi) we review available frameworks that have been developed to support implementing spiking neural networks; (vii) finally, we cover existing spiking neural network applications in computer vision and robotics domains. The paper concludes with discussions of future perspectives.
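As a minimal illustration of the neuron models such reviews cover, the leaky integrate-and-fire (LIF) neuron can be simulated in a few lines: the membrane potential integrates input current, leaks toward rest, and emits a spike with a reset when it crosses threshold. Parameter values here are illustrative.

```python
import numpy as np

def lif(I, dt=1e-4, tau=0.02, v_rest=0.0, v_th=1.0, v_reset=0.0):
    """Simulate one LIF neuron driven by the input-current array I;
    return the list of spike times (seconds)."""
    v, spikes = v_rest, []
    for t, i_t in enumerate(I):
        dv = (-(v - v_rest) + i_t) / tau       # leak + drive
        v += dv * dt
        if v >= v_th:                          # threshold crossing
            spikes.append(t * dt)
            v = v_reset                        # reset after the spike
    return spikes

# Constant supra-threshold drive produces regular spiking...
assert len(lif(np.full(10_000, 1.5))) > 0
# ...while sub-threshold drive produces none (v converges to I < v_th).
assert len(lif(np.full(10_000, 0.5))) == 0
```

With these dynamics, the membrane acts as a low-pass filter on its input, which is why LIF populations are naturally compatible with the rate and temporal codes discussed in the abstract.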
32
Nicholson DA, Prinz AA. Could simplified stimuli change how the brain performs visual search tasks? A deep neural network study. J Vis 2022; 22:3. [PMID: 35675057] [PMCID: PMC9187944] [DOI: 10.1167/jov.22.7.3]
Abstract
Visual search is a complex behavior influenced by many factors. To control for these factors, many studies use highly simplified stimuli. However, the statistics of these stimuli are very different from the statistics of the natural images that the human visual system is optimized by evolution and experience to perceive. Could this difference change search behavior? If so, simplified stimuli may contribute to effects typically attributed to cognitive processes, such as selective attention. Here we use deep neural networks to test how optimizing models for the statistics of one distribution of images constrains performance on a task using images from a different distribution. We train four deep neural network architectures on one of three source datasets-natural images, faces, and x-ray images-and then adapt them to a visual search task using simplified stimuli. This adaptation produces models that exhibit performance limitations similar to humans, whereas models trained on the search task alone exhibit no such limitations. However, we also find that deep neural networks trained to classify natural images exhibit similar limitations when adapted to a search task that uses a different set of natural images. Therefore, the distribution of data alone cannot explain this effect. We discuss how future work might integrate an optimization-based approach into existing models of visual search behavior.
Affiliation(s)
- David A Nicholson
- Emory University, Department of Biology, O. Wayne Rollins Research Center, Atlanta, Georgia
- Astrid A Prinz
- Emory University, Department of Biology, O. Wayne Rollins Research Center, Atlanta, Georgia

33
Lee YJ, On MB, Xiao X, Proietti R, Yoo SJB. Photonic spiking neural networks with event-driven femtojoule optoelectronic neurons based on Izhikevich-inspired model. Opt Express 2022; 30:19360-19389. [PMID: 36221716] [DOI: 10.1364/oe.449528]
Abstract
Photonic spiking neural networks (PSNNs) potentially offer exceptionally high throughput and energy efficiency compared to their electronic neuromorphic counterparts while maintaining their benefits in terms of event-driven computing capability. While state-of-the-art PSNN designs require a continuous laser pump, this paper presents a monolithic optoelectronic PSNN hardware design consisting of an MZI mesh incoherent network and event-driven laser spiking neurons. We designed, prototyped, and experimentally demonstrated this event-driven neuron inspired by the Izhikevich model, incorporating both excitatory and inhibitory optical spiking inputs and producing optical spiking outputs accordingly. The optoelectronic neurons consist of two photodetectors for excitatory and inhibitory optical spiking inputs, electrical transistor circuits providing spiking nonlinearity, and a laser for optical spiking outputs. The additional inclusion of capacitors and resistors completes the Izhikevich-inspired optoelectronic neurons, which receive excitatory and inhibitory optical spikes as inputs from other optoelectronic neurons. We developed a detailed optoelectronic neuron model in Verilog-A and simulated the circuit-level operation of various cases with excitatory and inhibitory input signals. The experimental results closely resemble the simulated results and demonstrate how the excitatory inputs trigger the optical spiking outputs while the inhibitory inputs suppress the outputs. The nanoscale neuron designed in our monolithic PSNN utilizes quantum impedance conversion. We show that an estimated 21.09 fJ/spike input can trigger the output from on-chip nanolasers running at a maximum of 10 Gspike/second in the neural network. Utilizing the simulated neuron model, we conducted simulations on MNIST handwritten digit recognition using fully connected (FC) and convolutional neural networks (CNN). The simulation results show 90% accuracy with unsupervised learning and 97% accuracy with a supervised, modified FC neural network. The benchmark shows our PSNN can achieve 50 TOP/J energy efficiency, which corresponds to 100× throughput and 1000× energy-efficiency improvements compared to state-of-the-art electrical neuromorphic hardware such as Loihi and NeuroGrid.
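The Izhikevich dynamics that inspired this optoelectronic neuron follow the standard two-variable form: dv/dt = 0.04v² + 5v + 140 − u + I, du/dt = a(bv − u), with the reset v ← c, u ← u + d when v reaches 30 mV. The sketch below simulates those electronic-domain equations only (not the photonic circuit), using the regular-spiking parameter set and a simple Euler step.

```python
def izhikevich(I, T=1000, dt=1.0, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Simulate T ms with constant input current I; return spike times (ms).
    Regular-spiking parameters (a, b, c, d) = (0.02, 0.2, -65, 8)."""
    v, u, spikes = c, b * c, []
    for step in range(int(T / dt)):
        v += dt * (0.04 * v * v + 5 * v + 140 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:                 # spike: record and reset
            spikes.append(step * dt)
            v, u = c, u + d
    return spikes

assert len(izhikevich(I=10.0)) > 5   # tonic spiking under constant drive
assert len(izhikevich(I=0.0)) == 0   # quiescent without input
```

Different (a, b, c, d) choices reproduce other firing patterns (bursting, chattering, fast spiking), which is what makes the model an attractive target for compact hardware implementations.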
34
Feldotto B, Eppler JM, Jimenez-Romero C, Bignamini C, Gutierrez CE, Albanese U, Retamino E, Vorobev V, Zolfaghari V, Upton A, Sun Z, Yamaura H, Heidarinejad M, Klijn W, Morrison A, Cruz F, McMurtrie C, Knoll AC, Igarashi J, Yamazaki T, Doya K, Morin FO. Deploying and Optimizing Embodied Simulations of Large-Scale Spiking Neural Networks on HPC Infrastructure. Front Neuroinform 2022; 16:884180. [PMID: 35662903] [PMCID: PMC9160925] [DOI: 10.3389/fninf.2022.884180]
Abstract
Simulating the brain-body-environment trinity in closed loop is an attractive proposal to investigate how perception, motor activity and interactions with the environment shape brain activity, and vice versa. The relevance of this embodied approach, however, hinges entirely on the modeled complexity of the various simulated phenomena. In this article, we introduce a software framework that is capable of simulating large-scale, biologically realistic networks of spiking neurons embodied in a biomechanically accurate musculoskeletal system that interacts with a physically realistic virtual environment. We deploy this framework on the high-performance computing resources of the EBRAINS research infrastructure, and we investigate the scaling performance by distributing computation across an increasing number of interconnected compute nodes. Our architecture is based on requested compute nodes as well as persistent virtual machines; this provides a high-performance simulation environment that is accessible to multi-domain users without expert knowledge, with a view to enabling users to instantiate and control simulations at custom scale via a web-based graphical user interface. Our simulation environment, entirely open source, is based on the Neurorobotics Platform developed in the context of the Human Brain Project and on the NEST simulator. We characterize the capabilities of our parallelized architecture for large-scale embodied brain simulations through two benchmark experiments, investigating the effects of scaling compute resources on performance defined in terms of experiment runtime, brain instantiation, and simulation time. The first benchmark is based on a large-scale balanced network, while the second one is a multi-region embodied brain simulation consisting of more than a million neurons and a billion synapses. Both benchmarks clearly show how scaling compute resources improves the aforementioned performance metrics in a near-linear fashion. The second benchmark in particular is indicative of both the potential and the limitations of a highly distributed simulation in terms of a trade-off between computation speed and resource cost. Our simulation architecture is being prepared to be accessible to everyone as an EBRAINS service, thereby offering a community-wide tool with a unique workflow that should provide momentum to the investigation of closed-loop embodiment within the computational neuroscience community.
Affiliation(s)
- Benedikt Feldotto
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany
- Jochen Martin Eppler
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Cristian Jimenez-Romero
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Carlos Enrique Gutierrez
- Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
- Ugo Albanese
- Department of Excellence in Robotics and AI, The BioRobotics Institute, Scuola Superiore Sant'Anna, Pontedera, Italy
- Eloy Retamino
- Department of Computer Architecture and Technology, Research Centre for Information and Communication Technologies, University of Granada, Granada, Spain
- Viktor Vorobev
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany
- Vahid Zolfaghari
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany
- Alex Upton
- Swiss National Supercomputing Centre (CSCS), ETH Zurich, Lugano, Switzerland
- Zhe Sun
- Image Processing Research Team, Center for Advanced Photonics, RIKEN, Wako, Japan
- Computational Engineering Applications Unit, Head Office for Information Systems and Cybersecurity, RIKEN, Wako, Japan
- Hiroshi Yamaura
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Morteza Heidarinejad
- Computational Engineering Applications Unit, Head Office for Information Systems and Cybersecurity, RIKEN, Wako, Japan
- Wouter Klijn
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Abigail Morrison
- Simulation and Data Lab Neuroscience, Jülich Supercomputing Centre (JSC), Institute for Advanced Simulation, JARA, Forschungszentrum Jülich GmbH, Jülich, Germany
- Jülich Research Centre, Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich, Germany
- Computer Science 3-Software Engineering, RWTH Aachen University, Aachen, Germany
- Felipe Cruz
- Swiss National Supercomputing Centre (CSCS), ETH Zurich, Lugano, Switzerland
- Colin McMurtrie
- Swiss National Supercomputing Centre (CSCS), ETH Zurich, Lugano, Switzerland
- Alois C. Knoll
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany
- Jun Igarashi
- Computational Engineering Applications Unit, Head Office for Information Systems and Cybersecurity, RIKEN, Wako, Japan
- Center for Computational Science, RIKEN, Kobe, Japan
- Tadashi Yamazaki
- Graduate School of Informatics and Engineering, The University of Electro-Communications, Tokyo, Japan
- Kenji Doya
- Neural Computation Unit, Okinawa Institute of Science and Technology Graduate University, Okinawa, Japan
- Fabrice O. Morin
- Robotics, Artificial Intelligence and Real-Time Systems, Faculty of Informatics, Technical University of Munich, Munich, Germany

35
35
|
Kröger BJ, Bekolay T, Cao M. On the Emergence of Phonological Knowledge and on Motor Planning and Motor Programming in a Developmental Model of Speech Production. Front Hum Neurosci 2022; 16:844529. [PMID: 35634209] [PMCID: PMC9133537] [DOI: 10.3389/fnhum.2022.844529]
Abstract
A broad sketch for a model of speech production is outlined that describes developmental aspects of its cognitive-linguistic and sensorimotor components. A description of the emergence of phonological knowledge is a central point in our model sketch. It will be shown that the phonological form level emerges during speech acquisition and becomes an important representation at the interface between cognitive-linguistic and sensorimotor processes. Motor planning and motor programming are defined as separate processes in our model sketch, and it will be shown that both processes draw on phonological information. Two computational simulation experiments based on quantitative implementations (simulation models) are undertaken to demonstrate proof of principle for key ideas of the model sketch: (i) the emergence of phonological information over developmental stages, (ii) the adaptation process for generating new motor programs, and (iii) the importance of various forms of phonological representation in that process. Based on the ideas developed within our sketch of a production model and its quantitative spell-out within the simulation models, motor planning can be defined here as the process of identifying a succession of executable chunks from a currently activated phoneme sequence and of coding them as raw gesture scores. Motor programming can be defined as the process of building up the complete set of motor commands by specifying all gestures in detail (a fully specified gesture score including temporal relations). This full specification of gesture scores is achieved in our model by adapting motor information from phonologically similar syllables (the adapting approach) or by assembling motor programs from sub-syllabic units (the assembling approach).
Affiliation(s)
- Bernd J. Kröger
- Department of Phoniatrics, Pedaudiology, and Communication Disorders, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Mengxue Cao
- School of Chinese Language and Literature, Beijing Normal University, Beijing, China

36
Neuromorphic Neural Engineering Framework-Inspired Online Continuous Learning with Analog Circuitry. Appl Sci (Basel) 2022; 12:4528. [DOI: 10.3390/app12094528]
Abstract
Neuromorphic hardware designs realize neural principles in electronics to provide high-performing, energy-efficient frameworks for machine learning. Here, we propose a neuromorphic analog design for continuous real-time learning. Our hardware design realizes the underlying principles of the neural engineering framework (NEF). NEF brings forth a theoretical framework for the representation and transformation of mathematical constructs with spiking neurons, thus providing efficient means for neuromorphic machine learning and the design of intricate dynamical systems. Our analog circuit design implements the neuromorphic prescribed error sensitivity (PES) learning rule with OZ neurons. OZ is an analog implementation of a spiking neuron, which was shown to have complete correspondence with NEF across firing rates, encoding vectors, and intercepts. We demonstrate PES-based neuromorphic representation of mathematical constructs with varying neuron configurations, the transformation of mathematical constructs, and the construction of a dynamical system with the design of an inducible leaky oscillator. We further designed a circuit emulator, allowing the evaluation of our electrical designs on a large scale. We used the circuit emulator in conjunction with a robot simulator to demonstrate adaptive learning-based control of a robotic arm with six degrees of freedom.
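The PES learning rule realized by this circuitry nudges a population's decoders along each neuron's activity in proportion to the representation error, Δd = −κ·error·a. The sketch below demonstrates that rule numerically with idealized rate neurons standing in for the paper's analog OZ spiking neurons; all parameters are illustrative.

```python
import numpy as np

# PES sketch: learn decoders d online so that the decoded value a(x)@d
# tracks the represented input x.

rng = np.random.default_rng(2)
n_neurons = 50
gains = rng.uniform(0.5, 2.0, n_neurons)
biases = rng.uniform(-0.5, 0.5, n_neurons)
encoders = rng.choice([-1.0, 1.0], n_neurons)

def rates(x):  # rectified-linear tuning curves
    return np.maximum(0.0, gains * encoders * x + biases)

d = np.zeros(n_neurons)        # decoders start at zero
kappa = 1e-3                   # learning rate

# Online learning: drive with random inputs, learn the identity.
for _ in range(2000):
    x = rng.uniform(-1, 1)
    a = rates(x)
    error = (a @ d) - x        # decoded estimate minus target
    d -= kappa * error * a     # PES update

test_x = np.linspace(-1, 1, 21)
err = np.mean([abs(rates(x) @ d - x) for x in test_x])
assert err < 0.1               # representation learned online
```

Because the update uses only a local error signal and each neuron's own activity, it maps naturally onto per-synapse analog circuitry, which is the property the paper's hardware design exploits.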
37
Javanshir A, Nguyen TT, Mahmud MAP, Kouzani AZ. Advancements in Algorithms and Neuromorphic Hardware for Spiking Neural Networks. Neural Comput 2022; 34:1289-1328. [PMID: 35534005] [DOI: 10.1162/neco_a_01499]
Abstract
Artificial neural networks (ANNs) have advanced rapidly owing to their success in various application domains, including autonomous driving and drone vision. Researchers have been improving the performance efficiency and computational requirements of ANNs inspired by the mechanisms of the biological brain. Spiking neural networks (SNNs) provide a power-efficient and brain-inspired computing paradigm for machine learning applications. However, evaluating large-scale SNNs on classical von Neumann architectures (central processing units/graphics processing units) demands a high amount of power and time. Therefore, hardware designers have developed neuromorphic platforms that execute SNNs in an approach combining fast processing and low power consumption. Recently, field-programmable gate arrays (FPGAs) have been considered promising candidates for implementing neuromorphic solutions due to their varied advantages, such as higher flexibility, shorter design time, and excellent stability. This review aims to describe recent advances in SNNs and the neuromorphic hardware platforms (digital, analog, hybrid, and FPGA based) suitable for their implementation. We present the biological background of SNN learning, such as neuron models and information encoding techniques, followed by a categorization of SNN training. In addition, we describe state-of-the-art SNN simulators. Furthermore, we review and present FPGA-based hardware implementations of SNNs. Finally, we discuss some future directions for research in this field.
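One of the information-encoding techniques such reviews describe, Poisson rate coding, can be sketched numerically: an analog intensity becomes a spike train whose firing rate is proportional to the input. Parameter values are illustrative.

```python
import numpy as np

rng = np.random.default_rng(3)

def poisson_encode(intensity, rate_max=100.0, T=1.0, dt=1e-3):
    """Spike train (0/1 per time step) for an intensity in [0, 1]:
    each step spikes independently with probability intensity*rate_max*dt."""
    p_spike = intensity * rate_max * dt
    return (rng.random(int(T / dt)) < p_spike).astype(int)

bright = poisson_encode(0.9)     # fires at roughly 90 Hz on average
dim = poisson_encode(0.1)        # fires at roughly 10 Hz on average
assert bright.sum() > dim.sum()  # stronger input -> more spikes
```

Temporal codes, by contrast, place information in precise spike times rather than counts; rate coding is simply the easiest scheme to map onto conventional hardware pipelines.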
Collapse
Affiliation(s)
- Thanh Thi Nguyen
- School of Information Technology, Deakin University (Burwood Campus), Burwood, VIC 3125, Australia
- M A Parvez Mahmud
- School of Engineering, Deakin University, Geelong, VIC 3216, Australia
- Abbas Z Kouzani
- School of Engineering, Deakin University, Geelong, VIC 3216, Australia
Collapse
|
38
|
Robinson BS, Norman-Tenazas R, Cervantes M, Symonette D, Johnson EC, Joyce J, Rivlin PK, Hwang GM, Zhang K, Gray-Roncal W. Online learning for orientation estimation during translation in an insect ring attractor network. Sci Rep 2022; 12:3210. [PMID: 35217679 PMCID: PMC8881593 DOI: 10.1038/s41598-022-05798-4] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2021] [Accepted: 01/10/2022] [Indexed: 11/09/2022] Open
Abstract
Insect neural systems are a promising source of inspiration for new navigation algorithms, especially on low size, weight, and power platforms. There have been unprecedented recent neuroscience breakthroughs with Drosophila in behavioral and neural imaging experiments as well as the mapping of detailed connectivity of neural structures. General mechanisms for learning orientation in the central complex (CX) of Drosophila have been investigated previously; however, it is unclear how these underlying mechanisms extend to cases where there is translation through an environment (beyond only rotation), which is critical for navigation in robotic systems. Here, we develop a CX neural connectivity-constrained model that performs sensor fusion, as well as unsupervised learning of visual features for path integration; we demonstrate the viability of this circuit for use in robotic systems in simulated and physical environments. Furthermore, we propose a theoretical understanding of how distributed online unsupervised network weight modification can be leveraged for learning in a trajectory through an environment by minimizing orientation estimation error. Overall, our results may enable a new class of CX-derived low power robotic navigation algorithms and lead to testable predictions to inform future neuroscience experiments.
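The heading representation at the core of the CX circuit above is an activity bump on a ring of neurons. A minimal, hypothetical sketch of that idea (a von Mises-like bump plus population-vector decoding, not the authors' connectivity-constrained model; all parameters are arbitrary):

```python
import numpy as np

def ring_bump(n=16, heading=0.0, kappa=2.0):
    """Activity bump on a ring of n heading cells centred on `heading` (rad)."""
    prefs = np.linspace(0, 2 * np.pi, n, endpoint=False)  # preferred directions
    return np.exp(kappa * np.cos(prefs - heading))        # von Mises-like bump

def decode_heading(activity):
    """Population-vector decoding of the bump position on the ring."""
    prefs = np.linspace(0, 2 * np.pi, len(activity), endpoint=False)
    return np.angle(np.sum(activity * np.exp(1j * prefs))) % (2 * np.pi)

est = decode_heading(ring_bump(heading=1.0))
```

Decoding recovers the encoded heading; in the full model, recurrent ring-attractor dynamics keep such a bump stable and shift it with angular-velocity and visual input.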
Collapse
Affiliation(s)
- Brian S Robinson
- The Johns Hopkins University Applied Physics Laboratory, Laurel, MD, 20723, USA.
- Martha Cervantes
- The Johns Hopkins University Applied Physics Laboratory, Laurel, MD, 20723, USA
- Danilo Symonette
- The Johns Hopkins University Applied Physics Laboratory, Laurel, MD, 20723, USA
- Erik C Johnson
- The Johns Hopkins University Applied Physics Laboratory, Laurel, MD, 20723, USA
- Justin Joyce
- The Johns Hopkins University Applied Physics Laboratory, Laurel, MD, 20723, USA
- Patricia K Rivlin
- Janelia Research Campus, Howard Hughes Medical Institute, Ashburn, VA, 20147, USA
- Grace M Hwang
- The Johns Hopkins University Applied Physics Laboratory, Laurel, MD, 20723, USA
- Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD, 21205, USA
- Kechen Zhang
- Kavli Neuroscience Discovery Institute, Johns Hopkins University, Baltimore, MD, 21205, USA
- Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, 21205, USA
- Department of Neuroscience, Johns Hopkins University, Baltimore, MD, 21205, USA
- William Gray-Roncal
- The Johns Hopkins University Applied Physics Laboratory, Laurel, MD, 20723, USA
- Department of Computer Science, Johns Hopkins University, Baltimore, MD, 21218, USA
Collapse
|
39
|
Poduval P, Alimohamadi H, Zakeri A, Imani F, Najafi MH, Givargis T, Imani M. GrapHD: Graph-Based Hyperdimensional Memorization for Brain-Like Cognitive Learning. Front Neurosci 2022; 16:757125. [PMID: 35185456 PMCID: PMC8855686 DOI: 10.3389/fnins.2022.757125] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/11/2021] [Accepted: 01/03/2022] [Indexed: 11/13/2022] Open
Abstract
Memorization is an essential functionality that enables today's machine learning algorithms to provide a high quality of learning and reasoning for each prediction. Memorization gives algorithms prior knowledge to keep the context and define confidence for their decision. Unfortunately, the existing deep learning algorithms have a weak and nontransparent notion of memorization. Brain-inspired HyperDimensional Computing (HDC) has been introduced as a model of human memory. It mimics several important functionalities of brain memory by operating with vectors that are computationally tractable and mathematically rigorous in describing human cognition. In this manuscript, we introduce a brain-inspired system that represents HDC memorization capability over a graph of relations. We propose GrapHD, hyperdimensional memorization that represents graph-based information in high-dimensional space. GrapHD defines an encoding method representing complex graph structure while supporting both weighted and unweighted graphs. Our encoder spreads the information of all nodes and edges across a full holistic representation so that no component is more responsible for storing any piece of information than another. Then, GrapHD defines several important cognitive functionalities over the encoded memory graph. These operations include memory reconstruction, information retrieval, graph matching, and shortest path. Our extensive evaluation shows that GrapHD: (1) significantly enhances learning capability by giving the notion of short-/long-term memorization to learning algorithms, (2) enables cognitive computing and reasoning over the memorized graph, and (3) enables holographic brain-like computation with substantial robustness to noise and failure.
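The graph memorization scheme the abstract describes can be shown in miniature with bipolar hypervectors: an edge is bound by elementwise multiplication of its two node vectors, edges are bundled by addition into one memory vector, and an edge query reduces to a normalized dot product. This follows the general HDC recipe, not the actual GrapHD implementation; dimensionality and node names are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000                      # hypervector dimensionality

def node_hv():
    """Random bipolar (+1/-1) hypervector representing one node."""
    return rng.choice([-1, 1], size=D)

def encode_graph(edges, hvs):
    """Bundle bound node pairs into a single graph-memory hypervector."""
    return sum(hvs[a] * hvs[b] for a, b in edges)  # binding = elementwise product

def edge_score(memory, hvs, a, b):
    """Similarity of a candidate edge to the memory (near 1 = present)."""
    return np.dot(memory, hvs[a] * hvs[b]) / D

hvs = {v: node_hv() for v in "ABCD"}
memory = encode_graph([("A", "B"), ("B", "C")], hvs)
```

Because binding by elementwise product is symmetric, this sketch covers unweighted, undirected edges; weighted or directed variants need extra machinery, as the paper discusses.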
Collapse
Affiliation(s)
- Haleh Alimohamadi
- Department of Bioengineering, University of California, Los Angeles, Los Angeles, CA, United States
- Ali Zakeri
- Department of Computer Science, University of California, Irvine, Irvine, CA, United States
- Farhad Imani
- Department of Mechanical Engineering, University of Connecticut, Storrs, CT, United States
- M. Hassan Najafi
- School of Computing and Informatics, University of Louisiana, Lafayette, LA, United States
- Tony Givargis
- Department of Computer Science, University of California, Irvine, Irvine, CA, United States
- Mohsen Imani
- Department of Computer Science, University of California, Irvine, Irvine, CA, United States
- *Correspondence: Mohsen Imani
Collapse
|
40
|
Vaila R, Chiasson J, Saxena V. A Deep Unsupervised Feature Learning Spiking Neural Network With Binarized Classification Layers for the EMNIST Classification. IEEE TRANSACTIONS ON EMERGING TOPICS IN COMPUTATIONAL INTELLIGENCE 2022. [DOI: 10.1109/tetci.2020.3035164] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|
41
|
Heittmann A, Psychou G, Trensch G, Cox CE, Wilcke WW, Diesmann M, Noll TG. Simulating the Cortical Microcircuit Significantly Faster Than Real Time on the IBM INC-3000 Neural Supercomputer. Front Neurosci 2022; 15:728460. [PMID: 35126034 PMCID: PMC8811464 DOI: 10.3389/fnins.2021.728460] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2021] [Accepted: 11/04/2021] [Indexed: 11/13/2022] Open
Abstract
This article employs the new IBM INC-3000 prototype FPGA-based neural supercomputer to implement a widely used model of the cortical microcircuit. With approximately 80,000 neurons and 300 million synapses, this model has become a benchmark network for comparing simulation architectures with regard to performance. To the best of our knowledge, the achieved speed-up factor is 2.4 times larger than the highest speed-up factor reported in the literature and four times larger than biological real time, demonstrating the potential of FPGA systems for neural modeling. The work was performed at Jülich Research Centre in Germany, and the INC-3000 was built at the IBM Almaden Research Center in San Jose, CA, United States. For the simulation of the microcircuit, only the programmable logic part of the FPGA nodes is used. All arithmetic is implemented in single-precision floating point. The original microcircuit network, with linear LIF neurons and current-based exponential-decay-, alpha-function-, and beta-function-shaped synapses, was simulated using exact exponential integration as the ODE solver. To demonstrate the flexibility of the approach, networks with non-linear neuron models (AdEx, Izhikevich) and conductance-based synapses were additionally simulated, applying Runge–Kutta and Parker–Sochacki solver methods. In all cases, the simulation-time speed-up factor decreased by no more than a few percent. The speed-up factor turns out to be essentially limited by the latency of the INC-3000 communication system.
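The exact exponential integration mentioned above exploits the fact that the subthreshold LIF dynamics with an exponential-decay synaptic current are linear, so each step can be propagated analytically instead of by a numerical ODE step. A minimal sketch with threshold handling omitted and arbitrary parameter values (not the paper's FPGA implementation):

```python
import math

def exact_step(v, i_syn, h, tau_m=10.0, tau_s=2.0, r_m=1.0):
    """One exact-exponential-integration step for tau_m dv/dt = -v + r_m*i_syn,
    tau_s di/dt = -i_syn (subthreshold only; spike threshold not handled)."""
    a = r_m * i_syn * tau_s / (tau_s - tau_m)   # particular-solution amplitude
    v_new = (v - a) * math.exp(-h / tau_m) + a * math.exp(-h / tau_s)
    i_new = i_syn * math.exp(-h / tau_s)        # synaptic current decays exactly
    return v_new, i_new

# semigroup check: two h=0.1 steps must equal one h=0.2 step exactly
v1, i1 = exact_step(0.0, 1.0, 0.1)
v2, i2 = exact_step(v1, i1, 0.1)
v_direct, i_direct = exact_step(0.0, 1.0, 0.2)
```

The semigroup property at the end is the hallmark of an exact propagator: unlike Euler or Runge–Kutta steps, the result is independent of how the interval is subdivided.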
Collapse
Affiliation(s)
- Arne Heittmann
- JARA-Institute Green IT (PGI-10), Jülich Research Centre, Jülich, Germany
- *Correspondence: Arne Heittmann
- Georgia Psychou
- JARA-Institute Green IT (PGI-10), Jülich Research Centre, Jülich, Germany
- Guido Trensch
- Simulation and Data Laboratory Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, Jülich Research Centre, Jülich, Germany
- Charles E. Cox
- IBM Research Division, Almaden Research Center, San Jose, CA, United States
- Winfried W. Wilcke
- IBM Research Division, Almaden Research Center, San Jose, CA, United States
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, Aachen, Germany
- Tobias G. Noll
- JARA-Institute Green IT (PGI-10), Jülich Research Centre, Jülich, Germany
Collapse
|
42
|
Volinski A, Zaidel Y, Shalumov A, DeWolf T, Supic L, Ezra Tsur E. Data-driven artificial and spiking neural networks for inverse kinematics in neurorobotics. PATTERNS (NEW YORK, N.Y.) 2022; 3:100391. [PMID: 35079712 PMCID: PMC8767299 DOI: 10.1016/j.patter.2021.100391] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/10/2021] [Revised: 09/06/2021] [Accepted: 10/21/2021] [Indexed: 11/26/2022]
Abstract
Inverse kinematics is fundamental for computational motion planning. It is used to derive an appropriate state in a robot's configuration space, given a target position in task space. In this work, we investigate the performance of fully connected and residual artificial neural networks as well as recurrent, learning-based, and deep spiking neural networks for conventional and geometrically constrained inverse kinematics. We show that while highly parameterized data-driven neural networks with tens to hundreds of thousands of parameters exhibit sub-ms inference time and sub-mm accuracy, learning-based spiking architectures can provide reasonably good results with merely a few thousand neurons. Moreover, we show that spiking neural networks can perform well in geometrically constrained task space, even when configured to an energy-conserved spiking rate, demonstrating their robustness. Neural networks were evaluated on NVIDIA's Xavier and Intel's neuromorphic Loihi chip.
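For context, the conventional inverse kinematics that the networks above are benchmarked against can be computed iteratively. The sketch below applies damped least squares to a planar two-link arm; it is an illustrative classical baseline with arbitrary link lengths and damping, not the paper's networks.

```python
import numpy as np

def fk(thetas, l1=1.0, l2=1.0):
    """Forward kinematics of a planar two-link arm (end-effector x, y)."""
    t1, t2 = thetas
    return np.array([l1 * np.cos(t1) + l2 * np.cos(t1 + t2),
                     l1 * np.sin(t1) + l2 * np.sin(t1 + t2)])

def ik(target, thetas, steps=500, damping=0.1, eps=1e-6):
    """Damped-least-squares IK: iteratively shrink the task-space error."""
    for _ in range(steps):
        err = fk(thetas) - target
        jac = np.empty((2, 2))
        for j in range(2):                        # finite-difference Jacobian
            d = np.zeros(2)
            d[j] = eps
            jac[:, j] = (fk(thetas + d) - fk(thetas)) / eps
        step = np.linalg.solve(jac.T @ jac + damping**2 * np.eye(2),
                               jac.T @ err)
        thetas = thetas - step                    # damped Gauss-Newton update
    return thetas

target = fk(np.array([1.0, 0.8]))                 # a reachable target pose
solution = ik(target, thetas=np.array([0.8, 0.6]))
```

Each query runs a fresh iteration, which is what makes amortized solutions such as trained ANNs and SNNs attractive for fast, repeated IK queries.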
Collapse
Affiliation(s)
- Alex Volinski
- Neuro-Biomorphic Engineering Lab, The Open University of Israel, Ra'anana, Israel
- Yuval Zaidel
- Neuro-Biomorphic Engineering Lab, The Open University of Israel, Ra'anana, Israel
- Albert Shalumov
- Neuro-Biomorphic Engineering Lab, The Open University of Israel, Ra'anana, Israel
- Elishai Ezra Tsur
- Neuro-Biomorphic Engineering Lab, The Open University of Israel, Ra'anana, Israel
Collapse
|
43
|
Schuman CD, Kulkarni SR, Parsa M, Mitchell JP, Date P, Kay B. Opportunities for neuromorphic computing algorithms and applications. NATURE COMPUTATIONAL SCIENCE 2022; 2:10-19. [PMID: 38177712 DOI: 10.1038/s43588-021-00184-y] [Citation(s) in RCA: 111] [Impact Index Per Article: 55.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/30/2021] [Accepted: 12/07/2021] [Indexed: 01/06/2024]
Abstract
Neuromorphic computing technologies will be important for the future of computing, but much of the work in neuromorphic computing has focused on hardware development. Here, we review recent results in neuromorphic computing algorithms and applications. We highlight characteristics of neuromorphic computing technologies that make them attractive for the future of computing and we discuss opportunities for future development of algorithms and applications on these systems.
Collapse
Affiliation(s)
- Catherine D Schuman
- Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
- Department of Electrical Engineering and Computer Science, University of Tennessee, Knoxville, TN, USA
- Shruti R Kulkarni
- Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
- Maryam Parsa
- Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
- Department of Electrical and Computer Engineering, George Mason University, Fairfax, VA, USA
- J Parker Mitchell
- Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
- Prasanna Date
- Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
- Bill Kay
- Computer Science and Mathematics Division, Oak Ridge National Laboratory, Oak Ridge, TN, USA
Collapse
|
44
|
Gallego G, Delbruck T, Orchard G, Bartolozzi C, Taba B, Censi A, Leutenegger S, Davison AJ, Conradt J, Daniilidis K, Scaramuzza D. Event-Based Vision: A Survey. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2022; 44:154-180. [PMID: 32750812 DOI: 10.1109/tpami.2020.3008413] [Citation(s) in RCA: 179] [Impact Index Per Article: 89.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Event cameras are bio-inspired sensors that differ from conventional frame cameras: Instead of capturing images at a fixed rate, they asynchronously measure per-pixel brightness changes, and output a stream of events that encode the time, location and sign of the brightness changes. Event cameras offer attractive properties compared to traditional cameras: high temporal resolution (on the order of μs), very high dynamic range (140 dB versus 60 dB), low power consumption, and high pixel bandwidth (on the order of kHz) resulting in reduced motion blur. Hence, event cameras have a large potential for robotics and computer vision in scenarios challenging for traditional cameras, such as low latency, high speed, and high dynamic range. However, novel methods are required to process the unconventional output of these sensors in order to unlock their potential. This paper provides a comprehensive overview of the emerging field of event-based vision, with a focus on the applications and the algorithms developed to unlock the outstanding properties of event cameras. We present event cameras from their working principle, the actual sensors that are available and the tasks that they have been used for, from low-level vision (feature detection and tracking, optic flow, etc.) to high-level vision (reconstruction, segmentation, recognition). We also discuss the techniques developed to process events, including learning-based techniques, as well as specialized processors for these novel sensors, such as spiking neural networks. Additionally, we highlight the challenges that remain to be tackled and the opportunities that lie ahead in the search for a more efficient, bio-inspired way for machines to perceive and interact with the world.
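The per-pixel working principle described above can be sketched for a single pixel: an event is emitted each time the log-brightness moves one contrast threshold away from a stored reference level. This is an idealized noise-free model; the threshold and input signal below are arbitrary.

```python
import math

def events_from_signal(intensities, times, c=0.2):
    """Emit (t, polarity) events whenever log-brightness moves by >= c,
    mimicking a single idealized event-camera pixel."""
    events = []
    ref = math.log(intensities[0])          # reference log-brightness level
    for t, i in zip(times[1:], intensities[1:]):
        level = math.log(i)
        while level - ref >= c:             # ON events for brightness increase
            ref += c
            events.append((t, +1))
        while ref - level >= c:             # OFF events for decrease
            ref -= c
            events.append((t, -1))
    return events

# an exponential brightness ramp produces a regular train of ON events
ev = events_from_signal([math.exp(k / 10) for k in range(11)],
                        list(range(11)), c=0.19)
```

The logarithmic reference is what gives event cameras their wide dynamic range: the same contrast step triggers an event regardless of absolute brightness.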
Collapse
|
45
|
Spreizer S, Senk J, Rotter S, Diesmann M, Weyers B. NEST Desktop, an Educational Application for Neuroscience. eNeuro 2021; 8:ENEURO.0274-21.2021. [PMID: 34764188 PMCID: PMC8638679 DOI: 10.1523/eneuro.0274-21.2021] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/21/2021] [Revised: 08/20/2021] [Accepted: 09/19/2021] [Indexed: 11/21/2022] Open
Abstract
Simulation software for spiking neuronal network models has matured in the past decades regarding performance and flexibility. But the entry barrier remains high for students and early-career scientists in computational neuroscience, since these simulators typically require programming skills and a complex installation. Here, we describe an installation-free Graphical User Interface (GUI) running in the web browser, which is distinct from the simulation engine running anywhere, be it on the student's laptop or on a supercomputer. This architecture provides robustness against technological changes in the software stack and simplifies deployment for self-education and for teachers. Our new open-source tool, NEST Desktop, comprises graphical elements for creating and configuring network models, running simulations, and visualizing and analyzing the results. NEST Desktop allows students to explore important concepts in computational neuroscience without needing to learn a simulator control language first. Our experiences so far highlight that NEST Desktop helps advance both the quality and intensity of teaching in computational neuroscience in regular university courses. We view the availability of the tool on public resources like the European ICT infrastructure for neuroscience EBRAINS as a contribution to equal opportunities.
Collapse
Affiliation(s)
- Sebastian Spreizer
- Faculty of Biology, University of Freiburg, 79104 Freiburg, Germany
- Bernstein Center Freiburg, University of Freiburg, 79104 Freiburg, Germany
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and Jülich Aachen Research Alliance (JARA)-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, 52428 Jülich, Germany
- Department of Computer Science, University of Trier, 54296 Trier, Germany
- Johanna Senk
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and Jülich Aachen Research Alliance (JARA)-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, 52428 Jülich, Germany
- Stefan Rotter
- Faculty of Biology, University of Freiburg, 79104 Freiburg, Germany
- Bernstein Center Freiburg, University of Freiburg, 79104 Freiburg, Germany
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and Jülich Aachen Research Alliance (JARA)-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, 52428 Jülich, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, Rheinisch-Westfälische Technische Hochschule Aachen University, 52074 Aachen, Germany
- Department of Physics, Faculty 1, Rheinisch-Westfälische Technische Hochschule Aachen University, 52074 Aachen, Germany
- Benjamin Weyers
- Department of Computer Science, University of Trier, 54296 Trier, Germany
Collapse
|
46
|
Vieth M, Stöber TM, Triesch J. PymoNNto: A Flexible Modular Toolbox for Designing Brain-Inspired Neural Networks. Front Neuroinform 2021; 15:715131. [PMID: 34790108 PMCID: PMC8591031 DOI: 10.3389/fninf.2021.715131] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2021] [Accepted: 09/07/2021] [Indexed: 11/13/2022] Open
Abstract
The Python Modular Neural Network Toolbox (PymoNNto) provides a versatile and adaptable Python-based framework to develop and investigate brain-inspired neural networks. In contrast to other commonly used simulators such as Brian2 and NEST, PymoNNto imposes only minimal restrictions for implementation and execution. The basic structure of PymoNNto consists of one network class with several neuron- and synapse-groups. The behaviour of each group can be flexibly defined by exchangeable modules. The implementation of these modules is up to the user and only limited by Python itself. Behaviours can be implemented in Python, Numpy, Tensorflow, and other libraries to perform computations on CPUs and GPUs. PymoNNto comes with convenient high level behaviour modules, allowing differential equation-based implementations similar to Brian2, and an adaptable modular Graphical User Interface for real-time observation and modification of the simulated network and its parameters.
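The modular structure described above (groups whose dynamics are defined by exchangeable behaviour modules) can be mimicked in a few lines of plain Python. This is an illustration of the design pattern only; the class names and methods below are invented and are not the actual PymoNNto API.

```python
class Behaviour:
    """Exchangeable module attached to a group (illustrative pattern only)."""
    def step(self, group):
        raise NotImplementedError

class Decay(Behaviour):
    """Example behaviour: exponential-style decay of a group's state."""
    def __init__(self, rate):
        self.rate = rate
    def step(self, group):
        group.v = [x * (1 - self.rate) for x in group.v]

class Group:
    """A population whose dynamics are the composition of its behaviours."""
    def __init__(self, size, behaviours):
        self.v = [1.0] * size
        self.behaviours = behaviours
    def step(self):
        for b in self.behaviours:   # run each attached behaviour in order
            b.step(self)

g = Group(3, [Decay(0.1)])
for _ in range(10):
    g.step()
```

Swapping, reordering, or adding behaviour objects changes the group's dynamics without touching the group itself, which is the flexibility the toolbox advertises.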
Collapse
Affiliation(s)
- Marius Vieth
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
- Jochen Triesch
- Frankfurt Institute for Advanced Studies, Frankfurt am Main, Germany
Collapse
|
47
|
Shalumov A, Halaly R, Tsur EE. LiDAR-driven spiking neural network for collision avoidance in autonomous driving. BIOINSPIRATION & BIOMIMETICS 2021; 16:066016. [PMID: 34551395 DOI: 10.1088/1748-3190/ac290c] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/06/2021] [Accepted: 09/22/2021] [Indexed: 06/13/2023]
Abstract
Facilitated by advances in real-time sensing, low- and high-level control, and machine learning, autonomous vehicles draw ever-increasing attention from many branches of knowledge. Neuromorphic (brain-inspired) implementation of robotic control has been shown to outperform conventional control paradigms in terms of energy efficiency, robustness to perturbations, and adaptation to varying conditions. Here we propose LiDAR-driven neuromorphic control of both the vehicle's speed and steering. We evaluated and compared neuromorphic PID control and online learning for autonomous vehicle control in static and dynamic environments, ultimately suggesting proportional learning as the preferred control scheme. We employed biologically plausible basal ganglia and thalamus neural models for steering and collision avoidance, finally extending them to support a null controller and a target-reaching optimization, significantly increasing performance.
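The classical PID scheme that the neuromorphic controller is compared against can be sketched as a stateful discrete controller. This is the generic textbook form; the gains and the single-integrator plant in the usage example are arbitrary choices, not values from the paper.

```python
def make_pid(kp, ki, kd, dt):
    """Discrete PID controller; returns a stateful step function."""
    state = {"integral": 0.0, "prev_err": 0.0}
    def step(setpoint, measurement):
        err = setpoint - measurement
        state["integral"] += err * dt            # accumulate the integral term
        deriv = (err - state["prev_err"]) / dt   # finite-difference derivative
        state["prev_err"] = err
        return kp * err + ki * state["integral"] + kd * deriv
    return step

# drive a toy speed plant (dv = u * dt) toward a 1.0 m/s setpoint
pid = make_pid(kp=2.0, ki=0.5, kd=0.1, dt=0.1)
speed = 0.0
for _ in range(200):
    speed += pid(1.0, speed) * 0.1
```

A neuromorphic version replaces these arithmetic terms with populations of spiking neurons whose collective activity approximates the same error, integral, and derivative signals.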
Collapse
Affiliation(s)
- Albert Shalumov
- Neuro-Biomorphic Engineering Lab at the Open University of Israel, Ra'anana, Israel
- Raz Halaly
- Neuro-Biomorphic Engineering Lab at the Open University of Israel, Ra'anana, Israel
- Elishai Ezra Tsur
- Neuro-Biomorphic Engineering Lab at the Open University of Israel, Ra'anana, Israel
Collapse
|
48
|
Cakan C, Jajcay N, Obermayer K. neurolib: A Simulation Framework for Whole-Brain Neural Mass Modeling. Cognit Comput 2021. [DOI: 10.1007/s12559-021-09931-9] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
neurolib is a computational framework for whole-brain modeling written in Python. It provides a set of neural mass models that represent the average activity of a brain region on a mesoscopic scale. In a whole-brain network model, brain regions are connected with each other based on biologically informed structural connectivity, i.e., the connectome of the brain. neurolib can load structural and functional datasets, set up a whole-brain model, manage its parameters, simulate it, and organize its outputs for later analysis. The activity of each brain region can be converted into a simulated BOLD signal in order to calibrate the model against empirical data from functional magnetic resonance imaging (fMRI). Extensive model analysis is made possible using a parameter exploration module, which allows one to characterize a model’s behavior as a function of changing parameters. An optimization module is provided for fitting models to multimodal empirical data using evolutionary algorithms. neurolib is designed to be extendable and allows for easy implementation of custom neural mass models, offering a versatile platform for computational neuroscientists for prototyping models, managing large numerical experiments, studying the structure–function relationship of brain networks, and for performing in-silico optimization of whole-brain models.
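The kind of neural mass model such frameworks compose into whole-brain networks can be illustrated with a minimal rate model: each region's activity relaxes toward a sigmoidal function of its structurally coupled input. This is a generic sketch with arbitrary parameters, not one of neurolib's bundled models.

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def simulate(w, steps=2000, dt=0.1, tau=1.0, drive=0.5):
    """Euler-integrate rate-based neural mass nodes coupled by matrix w."""
    n = w.shape[0]
    r = np.zeros(n)                       # per-region mean firing rates
    trace = []
    for _ in range(steps):
        inp = w @ r + drive               # recurrent input plus external drive
        r = r + dt * (-r + sigmoid(inp)) / tau
        trace.append(r.copy())
    return np.array(trace)

w = np.array([[0.0, 0.8], [0.8, 0.0]])   # two mutually coupled regions
trace = simulate(w)
```

In a real whole-brain model, `w` would be the empirical connectome and the simulated activity would be passed through a hemodynamic model to produce a BOLD signal for comparison with fMRI.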
Collapse
|
49
|
Chauhan T, Masquelier T, Cottereau BR. Sub-Optimality of the Early Visual System Explained Through Biologically Plausible Plasticity. Front Neurosci 2021; 15:727448. [PMID: 34602970 PMCID: PMC8480265 DOI: 10.3389/fnins.2021.727448] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2021] [Accepted: 08/25/2021] [Indexed: 11/13/2022] Open
Abstract
The early visual cortex is the site of crucial pre-processing for more complex, biologically relevant computations that drive perception and, ultimately, behaviour. This pre-processing is often studied under the assumption that neural populations are optimised for the most efficient (in terms of energy, information, spikes, etc.) representation of natural statistics. Normative models such as Independent Component Analysis (ICA) and Sparse Coding (SC) consider the phenomenon as a generative, minimisation problem which they assume the early cortical populations have evolved to solve. However, measurements in monkey and cat suggest that receptive fields (RFs) in the primary visual cortex are often noisy, blobby, and symmetrical, making them sub-optimal for operations such as edge-detection. We propose that this suboptimality occurs because the RFs do not emerge through a global minimisation of generative error, but through locally operating biological mechanisms such as spike-timing dependent plasticity (STDP). Using a network endowed with an abstract, rank-based STDP rule, we show that the shape and orientation tuning of the converged units are remarkably close to single-cell measurements in the macaque primary visual cortex. We quantify this similarity using physiological parameters (frequency-normalised spread vectors), information theoretic measures [Kullback–Leibler (KL) divergence and Gini index], as well as simulations of a typical electrophysiology experiment designed to estimate orientation tuning curves. Taken together, our results suggest that compared to purely generative schemes, process-based biophysical models may offer a better description of the suboptimality observed in the early visual cortex.
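An abstract, rank-based STDP rule of the kind invoked above can be sketched as follows: presynaptic spikes that precede the postsynaptic spike are potentiated with a magnitude that decreases with their rank in the spike order, while later spikes are depressed. The constants and the geometric rank decay below are made up for illustration, not the authors' exact formulation.

```python
import numpy as np

def rank_stdp(weights, pre_spike_times, post_spike_time,
              a_plus=0.05, a_minus=0.03):
    """Rank-based STDP: earlier causal presynaptic spikes (lower rank) receive
    larger potentiation; spikes after the postsynaptic spike are depressed."""
    w = weights.copy()
    causal = [i for i, t in enumerate(pre_spike_times) if t <= post_spike_time]
    order = sorted(causal, key=lambda i: pre_spike_times[i])
    for rank, i in enumerate(order):
        w[i] += a_plus * (0.5 ** rank)   # geometric decay with spike rank
    for i, t in enumerate(pre_spike_times):
        if t > post_spike_time:
            w[i] -= a_minus              # anti-causal spikes are depressed
    return np.clip(w, 0.0, 1.0)

w_new = rank_stdp(np.full(3, 0.5), [1.0, 2.0, 9.0], post_spike_time=5.0)
```

Because only spike order matters, such a rule is a locally operating mechanism rather than a global generative optimization, which is the paper's central contrast.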
Collapse
Affiliation(s)
- Tushar Chauhan
- Centre de Recherche Cerveau et Cognition, Université de Toulouse, Toulouse, France
- Centre National de la Recherche Scientifique, Toulouse, France
- Timothée Masquelier
- Centre de Recherche Cerveau et Cognition, Université de Toulouse, Toulouse, France
- Centre National de la Recherche Scientifique, Toulouse, France
- Benoit R Cottereau
- Centre de Recherche Cerveau et Cognition, Université de Toulouse, Toulouse, France
- Centre National de la Recherche Scientifique, Toulouse, France
Collapse
|
50
|
Abstract
In recent years, spiking neural networks (SNNs) have attracted increasing numbers of researchers by virtue of their bio-interpretability and low-power computation. An SNN simulator is an essential tool for accomplishing image classification, recognition, speech recognition, and other tasks with SNNs. However, most existing simulators for spiking neural networks are clock-driven, which causes two main problems. First, the calculation result depends on the time slice: a coarse time slice makes the calculation fast but inaccurate, whereas a fine time slice makes it accurate but unacceptably slow. The other problem is the failure of lateral inhibition, which severely affects SNN learning. To solve these problems, an event-driven, high-accuracy simulator for spiking neural networks named EDHA (Event-Driven High Accuracy) is proposed in this paper. EDHA takes full advantage of the event-driven characteristics of SNNs and only performs calculations when a spike is generated, independent of the time slice. Compared with previous SNN simulators, EDHA is completely event-driven, which eliminates a large amount of calculation and achieves higher computational accuracy. The calculation speed of EDHA in the MNIST classification task is more than 10 times faster than that of mainstream clock-driven simulators. By optimizing the spike encoding method, EDHA can even be more than 100 times faster. Due to the cross-platform characteristics of Java, EDHA can run on x86, amd64, ARM, and other platforms that support Java.
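The event-driven principle EDHA exploits can be shown with a single LIF neuron: state is updated only at input spike times, using the analytic membrane decay over the elapsed interval, so accuracy does not depend on any time slice. A minimal sketch with arbitrary parameters (EDHA itself is implemented in Java; this is an illustration of the principle, not its code):

```python
import heapq
import math

def event_driven_lif(spike_events, weight=0.6, tau=20.0, v_thresh=1.0):
    """Event-driven LIF: the membrane is touched only when a spike arrives,
    decaying analytically over the elapsed interval (no time slicing)."""
    queue = list(spike_events)               # input spike times
    heapq.heapify(queue)
    v, last_t, out = 0.0, 0.0, []
    while queue:
        t = heapq.heappop(queue)
        v *= math.exp(-(t - last_t) / tau)   # exact decay between events
        v += weight                          # instantaneous synaptic kick
        last_t = t
        if v >= v_thresh:
            out.append(t)
            v = 0.0                          # reset after an output spike
    return out

out_spikes = event_driven_lif([1.0, 2.0, 3.0, 50.0])
```

Long silent intervals (such as the gap before t = 50 here) cost a single exponential instead of thousands of clock ticks, which is where the reported speed-ups come from.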
Collapse
|