51
Chauhan T, Masquelier T, Cottereau BR. Sub-Optimality of the Early Visual System Explained Through Biologically Plausible Plasticity. Front Neurosci 2021; 15:727448. PMID: 34602970; PMCID: PMC8480265; DOI: 10.3389/fnins.2021.727448
Abstract
The early visual cortex is the site of crucial pre-processing for more complex, biologically relevant computations that drive perception and, ultimately, behaviour. This pre-processing is often studied under the assumption that neural populations are optimised for the most efficient (in terms of energy, information, spikes, etc.) representation of natural statistics. Normative models such as Independent Component Analysis (ICA) and Sparse Coding (SC) consider the phenomenon as a generative, minimisation problem which they assume the early cortical populations have evolved to solve. However, measurements in monkey and cat suggest that receptive fields (RFs) in the primary visual cortex are often noisy, blobby, and symmetrical, making them sub-optimal for operations such as edge-detection. We propose that this suboptimality occurs because the RFs do not emerge through a global minimisation of generative error, but through locally operating biological mechanisms such as spike-timing dependent plasticity (STDP). Using a network endowed with an abstract, rank-based STDP rule, we show that the shape and orientation tuning of the converged units are remarkably close to single-cell measurements in the macaque primary visual cortex. We quantify this similarity using physiological parameters (frequency-normalised spread vectors), information theoretic measures [Kullback–Leibler (KL) divergence and Gini index], as well as simulations of a typical electrophysiology experiment designed to estimate orientation tuning curves. Taken together, our results suggest that compared to purely generative schemes, process-based biophysical models may offer a better description of the suboptimality observed in the early visual cortex.
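The rank-based STDP rule itself is abstract; the sketch below illustrates the general idea of a simplified rank-order STDP update, in which presynaptic inputs that fire before a postsynaptic spike are potentiated and the rest depressed. The constants and the multiplicative soft-bound form `w*(1-w)` are illustrative assumptions, not the authors' exact rule:

```python
import numpy as np

rng = np.random.default_rng(0)
n_inputs, n_out = 64, 4
w = rng.uniform(0.3, 0.7, size=(n_out, n_inputs))  # synaptic weights in [0, 1]
A_PLUS, A_MINUS = 0.03, 0.02  # potentiation / depression magnitudes (illustrative)

def stdp_update(w, fired_before, winner):
    """When output `winner` spikes, potentiate synapses whose presynaptic input
    fired before the postsynaptic spike and depress the rest. The multiplicative
    w*(1-w) factor softly bounds the weights inside [0, 1]."""
    wi = w[winner]
    dw = np.where(fired_before, A_PLUS, -A_MINUS) * wi * (1.0 - wi)
    w[winner] = np.clip(wi + dw, 0.0, 1.0)
    return w

fired_before = np.arange(n_inputs) < 32  # first half of the inputs spiked early
w_before = w[0].copy()
w = stdp_update(w, fired_before, winner=0)
```

Because the update depends only on local spike ordering, not on a global generative objective, repeated presentations let each unit converge onto whatever input pattern it happens to win, which is the mechanism the paper contrasts with ICA/SC-style minimisation.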
Affiliation(s)
- Tushar Chauhan
- Centre de Recherche Cerveau et Cognition, Université de Toulouse, Toulouse, France; Centre National de la Recherche Scientifique, Toulouse, France
- Timothée Masquelier
- Centre de Recherche Cerveau et Cognition, Université de Toulouse, Toulouse, France; Centre National de la Recherche Scientifique, Toulouse, France
- Benoit R Cottereau
- Centre de Recherche Cerveau et Cognition, Université de Toulouse, Toulouse, France; Centre National de la Recherche Scientifique, Toulouse, France
52
Abstract
In recent years, spiking neural networks (SNNs) have attracted growing research interest because of their biological interpretability and low-power computation. An SNN simulator is an essential tool for accomplishing image classification, recognition, speech recognition, and other tasks with SNNs. However, most existing SNN simulators are clock-driven, which raises two main problems. First, the results depend on the time slice: a coarse slice makes simulation fast but inaccurate, while a fine slice makes it accurate but unacceptably slow. Second, lateral inhibition can fail, which severely affects SNN learning. To solve these problems, this paper proposes EDHA (Event-Driven High Accuracy), an event-driven, high-accuracy simulator for spiking neural networks. EDHA takes full advantage of the event-driven character of SNNs and performs computation only when a spike is generated, independent of any time slice. Compared with previous SNN simulators, EDHA is completely event-driven, which eliminates a large amount of computation and achieves higher accuracy. On the MNIST classification task, EDHA runs more than 10 times faster than mainstream clock-driven simulators, and with an optimized spike-encoding method the speed-up can exceed 100 times. Thanks to the cross-platform nature of Java, EDHA can run on x86, amd64, ARM, and other platforms that support Java.
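The core event-driven idea can be sketched for a single leaky integrate-and-fire neuron: state is updated only when an input spike arrives, and the leak over the inter-event interval is evaluated in closed form, so no time slice enters the result. This is a minimal illustration of the principle, not EDHA's actual implementation (which is in Java); all constants are illustrative:

```python
import math

TAU = 20.0       # membrane time constant (ms)
V_THRESH = 1.0   # firing threshold
V_RESET = 0.0

class EventDrivenLIF:
    """Leaky integrate-and-fire neuron updated only at input spike events.
    Between events the exponential decay is evaluated analytically, so the
    result does not depend on any simulation time slice."""
    def __init__(self):
        self.v = V_RESET
        self.t_last = 0.0
        self.spikes = []

    def receive(self, t, weight):
        # Decay analytically over the inter-event interval, then integrate the spike.
        self.v *= math.exp(-(t - self.t_last) / TAU)
        self.v += weight
        self.t_last = t
        if self.v >= V_THRESH:
            self.spikes.append(t)
            self.v = V_RESET

neuron = EventDrivenLIF()
for t in [1.0, 2.0, 3.0, 50.0, 51.0, 52.0]:  # input spike times (ms)
    neuron.receive(t, 0.4)
```

Because the decay is exact, the output spike times are identical at any "resolution", which is precisely the accuracy/speed trade-off that clock-driven simulators cannot avoid.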
53
Kulkarni SR, Parsa M, Mitchell JP, Schuman CD. Benchmarking the performance of neuromorphic and spiking neural network simulators. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.03.028
54
Dallmann CJ, Karashchuk P, Brunton BW, Tuthill JC. A leg to stand on: computational models of proprioception. Curr Opin Physiol 2021; 22:100426. PMID: 34595361; PMCID: PMC8478261; DOI: 10.1016/j.cophys.2021.03.001
Abstract
Dexterous motor control requires feedback from proprioceptors, internal mechanosensory neurons that sense the body's position and movement. An outstanding question in neuroscience is how diverse proprioceptive feedback signals contribute to flexible motor control. Genetic tools now enable targeted recording and perturbation of proprioceptive neurons in behaving animals; however, these experiments can be challenging to interpret, due to the tight coupling of proprioception and motor control. Here, we argue that understanding the role of proprioceptive feedback in controlling behavior will be aided by the development of multiscale models of sensorimotor loops. We review current phenomenological and structural models for proprioceptor encoding and discuss how they may be integrated with existing models of posture, movement, and body state estimation.
Affiliation(s)
- Chris J Dallmann
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
- Pierre Karashchuk
- Neuroscience Graduate Program, University of Washington, Seattle, WA, USA
- Bingni W Brunton
- Department of Biology, University of Washington, Seattle, WA, USA
- John C Tuthill
- Department of Physiology and Biophysics, University of Washington, Seattle, WA, USA
55
Voelker AR, Blouw P, Choo X, Dumont NSY, Stewart TC, Eliasmith C. Simulating and Predicting Dynamical Systems With Spatial Semantic Pointers. Neural Comput 2021; 33:2033-2067. PMID: 34310679; DOI: 10.1162/neco_a_01410
Abstract
While neural networks are highly effective at learning task-relevant representations from data, they typically do not learn representations with the kind of symbolic structure that is hypothesized to support high-level cognitive processes, nor do they naturally model such structures within problem domains that are continuous in space and time. To fill these gaps, this work exploits a method for defining vector representations that bind discrete (symbol-like) entities to points in continuous topological spaces in order to simulate and predict the behavior of a range of dynamical systems. These vector representations are spatial semantic pointers (SSPs), and we demonstrate that they can (1) be used to model dynamical systems involving multiple objects represented in a symbol-like manner and (2) be integrated with deep neural networks to predict the future of physical trajectories. These results help unify what have traditionally appeared to be disparate approaches in machine learning.
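The representation behind this can be sketched with the standard construction of SSPs: a base vector with unit-magnitude Fourier coefficients encodes a continuous value by fractional exponentiation in the Fourier domain, and circular convolution binds a symbol-like vector to a location. The dimensionality and test values below are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 256  # SSP dimensionality (illustrative)

def make_unitary(d):
    """Vector whose Fourier coefficients all have magnitude 1, so binding preserves norm.
    The spectrum is made Hermitian-symmetric so the inverse FFT is real."""
    phases = rng.uniform(-np.pi, np.pi, d // 2 - 1)
    spectrum = np.concatenate(([1.0], np.exp(1j * phases), [1.0],
                               np.exp(-1j * phases[::-1])))
    return np.fft.ifft(spectrum).real

def power(v, x):
    """Fractional binding: encode the continuous value x by raising the
    Fourier coefficients of v to the (real) power x."""
    return np.fft.ifft(np.fft.fft(v) ** x).real

def bind(a, b):
    """Circular convolution, binding a discrete symbol to a point in space."""
    return np.fft.ifft(np.fft.fft(a) * np.fft.fft(b)).real

X = make_unitary(D)        # base vector for one spatial axis
S = lambda x: power(X, x)  # SSP for location x

# Similarity between SSPs falls off with distance between the encoded points.
near, far = S(2.3) @ S(2.4), S(2.3) @ S(9.0)

obj = rng.standard_normal(D) / np.sqrt(D)  # symbol-like vector for an object
memory = bind(obj, S(2.3))                 # represents "OBJ is at x = 2.3"
```

Decoding proceeds by unbinding `obj` from `memory` and comparing the result against `S(x)` over a grid of candidate locations, which is what lets symbol-like structure coexist with continuous space.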
Affiliation(s)
- Peter Blouw
- Applied Brain Research, Waterloo, ON N2L 3G1, Canada
- Xuan Choo
- Applied Brain Research, Waterloo, ON N2L 3G1, Canada
- Terrence C Stewart
- National Research Council of Canada, University of Waterloo Collaboration Centre, Waterloo, ON N2L 3G1, Canada
- Chris Eliasmith
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON N2L 3G1, Canada
56
Stöckel A, Stewart TC, Eliasmith C. Connecting Biological Detail With Neural Computation: Application to the Cerebellar Granule-Golgi Microcircuit. Top Cogn Sci 2021; 13:515-533. PMID: 34146453; DOI: 10.1111/tops.12536
Abstract
Neurophysiology and neuroanatomy constrain the set of possible computations that can be performed in a brain circuit. While detailed data on brain microcircuits are sometimes available, cognitive modelers are seldom in a position to take these constraints into account. One reason for this is the intrinsic complexity of accounting for biological mechanisms when describing cognitive function. In this paper, we present multiple extensions to the neural engineering framework (NEF), which simplify the integration of low-level constraints such as Dale's principle and spatially constrained connectivity into high-level, functional models. We focus on a model of eyeblink conditioning in the cerebellum, and, in particular, on systematically constructing temporal representations in the recurrent granule-Golgi microcircuit. We analyze how biological constraints impact these representations and demonstrate that our overall model is capable of reproducing key properties of eyeblink conditioning. Furthermore, since our techniques facilitate variation of neurophysiological parameters, we gain insights into why certain neurophysiological parameters may be as observed in nature. While eyeblink conditioning is a somewhat primitive form of learning, we argue that the same methods apply to more cognitive models as well. We implemented our extensions to the NEF in an open-source software library named "NengoBio" and hope that this work inspires similar attempts to bridge low-level biological detail and high-level function.
Affiliation(s)
- Terrence C Stewart
- National Research Council of Canada, University of Waterloo Collaboration Centre
57
Kim JZ, Lu Z, Nozari E, Pappas GJ, Bassett DS. Teaching recurrent neural networks to infer global temporal structure from local examples. Nat Mach Intell 2021. DOI: 10.1038/s42256-021-00321-2
58
Tieck JCV, Secker K, Kaiser J, Roennau A, Dillmann R. Soft-Grasping With an Anthropomorphic Robotic Hand Using Spiking Neurons. IEEE Robot Autom Lett 2021. DOI: 10.1109/lra.2020.3034067
60
Yu A, Yang H, Nguyen KK, Zhang J, Cheriet M. Burst Traffic Scheduling for Hybrid E/O Switching DCN: An Error Feedback Spiking Neural Network Approach. IEEE Trans Netw Serv Manag 2021. DOI: 10.1109/tnsm.2020.3040907
61
Hazan A, Ezra Tsur E. Neuromorphic Analog Implementation of Neural Engineering Framework-Inspired Spiking Neuron for High-Dimensional Representation. Front Neurosci 2021; 15:627221. PMID: 33692670; PMCID: PMC7937893; DOI: 10.3389/fnins.2021.627221
Abstract
Brain-inspired hardware designs realize neural principles in electronics to provide high-performing, energy-efficient frameworks for artificial intelligence. The Neural Engineering Framework (NEF) brings forth a theoretical framework for representing high-dimensional mathematical constructs with spiking neurons to implement functional large-scale neural networks. Here, we present OZ, a programmable analog implementation of NEF-inspired spiking neurons. OZ neurons can be dynamically programmed to feature varying high-dimensional response curves with positive and negative encoders for a neuromorphic distributed representation of normalized input data. Our hardware design demonstrates full correspondence with NEF across firing rates, encoding vectors, and intercepts. OZ neurons can be independently configured in real-time to allow efficient spanning of a representation space, thus using fewer neurons and therefore less power for neuromorphic data representation.
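The quantities OZ reproduces (firing rate as a function of gain, encoder, and intercept) follow from the standard NEF/LIF rate equation. A sketch of how a tuning curve is constructed from a desired maximum rate and intercept; the time constants and the negative encoder are illustrative choices:

```python
import numpy as np

TAU_RC, TAU_REF = 0.02, 0.002  # membrane and refractory time constants (s)

def lif_rate(J):
    """Steady-state LIF firing rate for input current J (zero below threshold J = 1)."""
    J = np.asarray(J, dtype=float)
    out = np.zeros_like(J)
    active = J > 1
    out[active] = 1.0 / (TAU_REF - TAU_RC * np.log(1 - 1.0 / J[active]))
    return out

def gain_bias(max_rate, intercept):
    """Solve for gain and bias so the neuron starts firing at e.x = intercept
    and fires at `max_rate` when the encoded projection e.x = 1."""
    j_max = 1.0 / (1 - np.exp((TAU_REF - 1.0 / max_rate) / TAU_RC))
    gain = (j_max - 1) / (1 - intercept)
    bias = 1 - gain * intercept
    return gain, bias

gain, bias = gain_bias(max_rate=100.0, intercept=0.3)
encoder = -1.0  # negative encoder: the neuron responds to decreasing x
x = np.linspace(-1, 1, 201)
rates = lif_rate(gain * encoder * x + bias)
```

With the negative encoder the curve peaks at x = -1 and is silent above x = -0.3; flipping `encoder` mirrors it, which is the positive/negative-encoder pairing the abstract describes.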
Affiliation(s)
- Avi Hazan
- Neuro-Biomorphic Engineering Lab, Department of Mathematics and Computer Science, The Open University of Israel, Ra'anana, Israel
- Elishai Ezra Tsur
- Neuro-Biomorphic Engineering Lab, Department of Mathematics and Computer Science, The Open University of Israel, Ra'anana, Israel
62
Lazar AA, Liu T, Turkcan MK, Zhou Y. Accelerating with FlyBrainLab the discovery of the functional logic of the Drosophila brain in the connectomic and synaptomic era. eLife 2021; 10:e62362. PMID: 33616035; PMCID: PMC8016480; DOI: 10.7554/elife.62362
Abstract
In recent years, a wealth of Drosophila neuroscience data have become available including cell type and connectome/synaptome datasets for both the larva and adult fly. To facilitate integration across data modalities and to accelerate the understanding of the functional logic of the fruit fly brain, we have developed FlyBrainLab, a unique open-source computing platform that integrates 3D exploration and visualization of diverse datasets with interactive exploration of the functional logic of modeled executable brain circuits. FlyBrainLab's User Interface, Utilities Libraries and Circuit Libraries bring together neuroanatomical, neurogenetic and electrophysiological datasets with computational models of different researchers for validation and comparison within the same platform. Seeking to transcend the limitations of the connectome/synaptome, FlyBrainLab also provides libraries for molecular transduction arising in sensory coding in vision/olfaction. Together with sensory neuron activity data, these libraries serve as entry points for the exploration, analysis, comparison, and evaluation of circuit functions of the fruit fly brain.
Affiliation(s)
- Aurel A Lazar
- Department of Electrical Engineering, Columbia University, New York, United States
- Tingkai Liu
- Department of Electrical Engineering, Columbia University, New York, United States
- Yiyin Zhou
- Department of Electrical Engineering, Columbia University, New York, United States
63
Tiotto TF, Goossens AS, Borst JP, Banerjee T, Taatgen NA. Learning to Approximate Functions Using Nb-Doped SrTiO3 Memristors. Front Neurosci 2021; 14:627276. PMID: 33679290; PMCID: PMC7933504; DOI: 10.3389/fnins.2020.627276
Abstract
Memristors have attracted interest as neuromorphic computation elements because they show promise in enabling efficient hardware implementations of artificial neurons and synapses. We performed measurements on interface-type memristors to validate their use in neuromorphic hardware. Specifically, we utilized Nb-doped SrTiO3 memristors as synapses in a simulated neural network by arranging them into differential synaptic pairs, with the weight of the connection given by the difference in normalized conductance values between the two paired memristors. This network learned to represent functions through a training process based on a novel supervised learning algorithm, during which discrete voltage pulses were applied to one of the two memristors in each pair. To simulate the fact that both the initial state of the physical memristive devices and the impact of each voltage pulse are unknown, we injected noise into the simulation. Nevertheless, discrete updates based on local knowledge were shown to result in robust learning performance. Using this class of memristive devices as the synaptic weight element in a spiking neural network yields, to our knowledge, one of the first models of this kind, capable of learning to be a universal function approximator, and strongly suggests the suitability of these memristors for usage in future computing platforms.
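The differential-pair scheme can be sketched abstractly as follows. The conductance range, the pulse step size, the Gaussian pulse noise, and the bang-bang update rule are all illustrative stand-ins, not the measured device behaviour or the paper's actual learning algorithm:

```python
import random

random.seed(0)
G_MIN, G_MAX = 1e-6, 1e-4  # device conductance range in siemens (illustrative)

def norm(g):
    """Normalized conductance in [0, 1]."""
    return (g - G_MIN) / (G_MAX - G_MIN)

class MemristorPair:
    """Differential synaptic pair: the signed weight is the difference between the
    normalized conductances of the two devices, so it spans [-1, 1]."""
    def __init__(self):
        # Initial device states are unknown, modeled here as random conductances.
        self.g_pos = random.uniform(G_MIN, G_MAX)
        self.g_neg = random.uniform(G_MIN, G_MAX)

    @property
    def weight(self):
        return norm(self.g_pos) - norm(self.g_neg)

    def pulse(self, sign):
        """One discrete SET pulse to the + device (sign > 0) or the - device.
        The conductance increment is noisy: its exact effect is unknown in advance."""
        step = (G_MAX - G_MIN) * 0.01 * (1.0 + random.gauss(0.0, 0.3))
        if sign > 0:
            self.g_pos = min(max(self.g_pos + step, G_MIN), G_MAX)
        else:
            self.g_neg = min(max(self.g_neg + step, G_MIN), G_MAX)

pair, target = MemristorPair(), 0.25
w0 = pair.weight
for _ in range(100):
    # Local rule: pulse whichever device moves the weight toward the target.
    pair.pulse(+1 if pair.weight < target else -1)
```

Despite the unknown initial state and noisy pulses, the purely local update drives the weight toward the target, which is the robustness property the abstract highlights.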
Affiliation(s)
- Thomas F. Tiotto
- Groningen Cognitive Systems and Materials Center, University of Groningen, Groningen, Netherlands
- Artificial Intelligence, Bernoulli Institute, University of Groningen, Groningen, Netherlands
- Anouk S. Goossens
- Groningen Cognitive Systems and Materials Center, University of Groningen, Groningen, Netherlands
- Zernike Institute for Advanced Materials, University of Groningen, Groningen, Netherlands
- Jelmer P. Borst
- Groningen Cognitive Systems and Materials Center, University of Groningen, Groningen, Netherlands
- Artificial Intelligence, Bernoulli Institute, University of Groningen, Groningen, Netherlands
- Tamalika Banerjee
- Groningen Cognitive Systems and Materials Center, University of Groningen, Groningen, Netherlands
- Zernike Institute for Advanced Materials, University of Groningen, Groningen, Netherlands
- Niels A. Taatgen
- Groningen Cognitive Systems and Materials Center, University of Groningen, Groningen, Netherlands
- Artificial Intelligence, Bernoulli Institute, University of Groningen, Groningen, Netherlands
64
Getty N, Brettin T, Jin D, Stevens R, Xia F. Deep medical image analysis with representation learning and neuromorphic computing. Interface Focus 2021; 11:20190122. PMID: 33343872; DOI: 10.1098/rsfs.2019.0122
Abstract
Deep learning is increasingly used in medical imaging, improving many steps of the processing chain, from acquisition to segmentation and anomaly detection to outcome prediction. Yet significant challenges remain: (i) image-based diagnosis depends on the spatial relationships between local patterns, something convolution and pooling often do not capture adequately; (ii) data augmentation, the de facto method for learning three-dimensional pose invariance, requires exponentially many points to achieve robust improvement; (iii) labelled medical images are much less abundant than unlabelled ones, especially for heterogeneous pathological cases; and (iv) scanning technologies such as magnetic resonance imaging can be slow and costly, generally without online learning abilities to focus on regions of clinical interest. To address these challenges, novel algorithmic and hardware approaches are needed for deep learning to reach its full potential in medical imaging.
Affiliation(s)
- N Getty
- Data Science and Learning Division, Argonne National Laboratory, Lemont, IL 60439, USA; Computer Science Department, Illinois Institute of Technology, Chicago, IL 60616, USA
- T Brettin
- Computing, Environment and Life Sciences Directorate, Argonne National Laboratory, Lemont, IL 60439, USA
- D Jin
- Computer Science Department, Illinois Institute of Technology, Chicago, IL 60616, USA
- R Stevens
- Computing, Environment and Life Sciences Directorate, Argonne National Laboratory, Lemont, IL 60439, USA; Department of Computer Science, University of Chicago, Chicago, IL 60637, USA
- F Xia
- Data Science and Learning Division, Argonne National Laboratory, Lemont, IL 60439, USA
65
Zaidel Y, Shalumov A, Volinski A, Supic L, Ezra Tsur E. Neuromorphic NEF-Based Inverse Kinematics and PID Control. Front Neurorobot 2021; 15:631159. PMID: 33613225; PMCID: PMC7887770; DOI: 10.3389/fnbot.2021.631159
Abstract
Neuromorphic implementation of robotic control has been shown to outperform conventional control paradigms in terms of robustness to perturbations and adaptation to varying conditions. Two main ingredients of robotics are inverse kinematics and Proportional-Integral-Derivative (PID) control. Inverse kinematics is used to compute an appropriate state in a robot's configuration space, given a target position in task space. PID control applies responsive correction signals to a robot's actuators, allowing it to reach its target accurately. The Neural Engineering Framework (NEF) offers a theoretical framework for a neuromorphic encoding of mathematical constructs with spiking neurons for the implementation of functional large-scale neural networks. In this work, we developed NEF-based neuromorphic algorithms for inverse kinematics and PID control, which we used to manipulate a 6-degrees-of-freedom robotic arm. We used online learning for inverse kinematics and signal integration and differentiation for PID, offering high-performing and energy-efficient neuromorphic control. Algorithms were evaluated in simulation as well as on Intel's Loihi neuromorphic hardware.
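For reference, the conventional control law that the spiking integration/differentiation replaces is the textbook discrete PID loop. The gains and the toy one-dimensional plant below are illustrative assumptions, not the paper's robot model:

```python
class PID:
    """Textbook discrete PID controller; the neuromorphic version in the paper
    realizes the integral and derivative terms with spiking signal integration
    and differentiation."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_err = 0.0

    def step(self, target, measured):
        err = target - measured
        self.integral += err * self.dt          # accumulated error (I term)
        deriv = (err - self.prev_err) / self.dt  # error rate of change (D term)
        self.prev_err = err
        return self.kp * err + self.ki * self.integral + self.kd * deriv

# Drive a simple first-order plant (dx/dt = u - x) toward a setpoint of 1.0.
dt = 0.01
pid = PID(kp=2.0, ki=1.0, kd=0.05, dt=dt)
state = 0.0
for _ in range(3000):  # 30 s of simulated time
    u = pid.step(1.0, state)
    state += dt * (u - state)
```

The integral term is what removes the steady-state error here; in the NEF version that accumulator becomes a recurrently connected spiking integrator.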
Affiliation(s)
- Yuval Zaidel
- Neuro-Biomorphic Engineering Lab, Department of Mathematics and Computer Science, Open University of Israel, Ra'anana, Israel
- Albert Shalumov
- Neuro-Biomorphic Engineering Lab, Department of Mathematics and Computer Science, Open University of Israel, Ra'anana, Israel
- Alex Volinski
- Neuro-Biomorphic Engineering Lab, Department of Mathematics and Computer Science, Open University of Israel, Ra'anana, Israel
- Lazar Supic
- Accenture Labs, San Francisco, CA, United States
- Elishai Ezra Tsur
- Neuro-Biomorphic Engineering Lab, Department of Mathematics and Computer Science, Open University of Israel, Ra'anana, Israel
66
Szczecinski NS, Quinn RD, Hunt AJ. Extending the Functional Subnetwork Approach to a Generalized Linear Integrate-and-Fire Neuron Model. Front Neurorobot 2020; 14:577804. PMID: 33281592; PMCID: PMC7691602; DOI: 10.3389/fnbot.2020.577804
Abstract
Engineering neural networks to perform specific tasks often represents a monumental challenge in determining network architecture and parameter values. In this work, we extend our previously developed method for tuning networks of non-spiking neurons, the “Functional subnetwork approach” (FSA), to the tuning of networks composed of spiking neurons. This extension enables the direct assembly and tuning of networks of spiking neurons and synapses based on the network's intended function, without the use of global optimization or machine learning. To extend the FSA, we show that the dynamics of a generalized linear integrate-and-fire (GLIF) neuron model have fundamental similarities to those of a non-spiking leaky integrator neuron model. We derive analytical expressions that show functional parallels between: (1) a spiking neuron's steady-state spiking frequency and a non-spiking neuron's steady-state voltage in response to an applied current; (2) a spiking neuron's transient spiking frequency and a non-spiking neuron's transient voltage in response to an applied current; and (3) a spiking synapse's average conductance during steady spiking and a non-spiking synapse's conductance. The models become more similar as additional spiking neurons are added to each population “node” in the network. We apply the FSA to model a neuromuscular reflex pathway in two different ways: via non-spiking components and then via spiking components. These results provide a concrete example of how a single non-spiking neuron may model the average spiking frequency of a population of spiking neurons. The resulting model also demonstrates that by using the FSA, models can be constructed that incorporate both spiking and non-spiking units. This work facilitates the construction of large networks of spiking neurons and synapses that perform specific functions, for example, those implemented with neuromorphic computing hardware, by providing an analytical method for directly tuning their parameters without time-consuming optimization or learning.
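The steady-state parallel in point (1) can be illustrated with the plain LIF special case of the GLIF model: the non-spiking leaky integrator settles at V = IR, and the spiking neuron's firing rate is a monotonic, closed-form function of that same quantity. Parameters and the verification simulation below are illustrative, not the authors' derivation:

```python
import math

TAU, R, V_TH, T_REF = 0.02, 1.0, 1.0, 0.002  # s, ohm-like gain, threshold, s

def steady_state_voltage(i):
    """A non-spiking leaky integrator settles at V = I * R."""
    return i * R

def spiking_frequency(i):
    """Closed-form steady-state LIF firing rate for constant current I:
    the inter-spike interval is TAU * ln(V_inf / (V_inf - V_TH))."""
    v_inf = steady_state_voltage(i)
    if v_inf <= V_TH:
        return 0.0
    t_isi = TAU * math.log(v_inf / (v_inf - V_TH))
    return 1.0 / (t_isi + T_REF)

def simulate_frequency(i, dt=1e-5, t_total=1.0):
    """Forward-Euler LIF simulation that counts spikes to estimate the rate."""
    v, t_quiet, spikes = 0.0, T_REF, 0
    for _ in range(int(t_total / dt)):
        if t_quiet < T_REF:          # refractory period after a spike
            t_quiet += dt
            continue
        v += dt * (i * R - v) / TAU
        if v >= V_TH:
            spikes += 1
            v, t_quiet = 0.0, 0.0
    return spikes / t_total

analytic = spiking_frequency(2.0)
numeric = simulate_frequency(2.0)
```

Because the rate is a fixed monotonic function of the leaky integrator's steady-state voltage, a non-spiking node can stand in for the mean rate of a spiking population, which is the substitution the FSA extension exploits.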
Affiliation(s)
- Nicholas S Szczecinski
- Department of Mechanical and Aerospace Engineering, West Virginia University, Morgantown, WV, United States
- Roger D Quinn
- Department of Mechanical and Aerospace Engineering, Case Western Reserve University, Cleveland, OH, United States
- Alexander J Hunt
- Department of Mechanical and Materials Engineering, Portland State University, Portland, OR, United States
67
Nadji-Tehrani M, Eslami A. A Brain-Inspired Framework for Evolutionary Artificial General Intelligence. IEEE Trans Neural Netw Learn Syst 2020; 31:5257-5271. PMID: 32175876; DOI: 10.1109/tnnls.2020.2965567
Abstract
From the medical field to agriculture, from energy to transportation, every industry is going through a revolution by embracing artificial intelligence (AI); nevertheless, AI is still in its infancy. Inspired by the evolution of the human brain, this article demonstrates a novel method and framework to synthesize an artificial brain with cognitive abilities by taking advantage of the same process responsible for the growth of the biological brain called "neuroembryogenesis." This framework shares some of the key behavioral aspects of the biological brain, such as spiking neurons, neuroplasticity, neuronal pruning, and excitatory and inhibitory interactions between neurons, together making it capable of learning and memorizing. One of the highlights of the proposed design is its potential to incrementally improve itself over generations based on system performance, using genetic algorithms. A proof of concept at the end of this article demonstrates how a simplified implementation of the human visual cortex using the proposed framework is capable of character recognition. Our framework is open source, and the code is shared with the scientific community at http://www.feagi.org.
68
Michaelis C, Lehr AB, Tetzlaff C. Robust Trajectory Generation for Robotic Control on the Neuromorphic Research Chip Loihi. Front Neurorobot 2020; 14:589532. PMID: 33324191; PMCID: PMC7726255; DOI: 10.3389/fnbot.2020.589532
Abstract
Neuromorphic hardware has several promising advantages compared to von Neumann architectures and is highly interesting for robot control. However, despite the high speed and energy efficiency of neuromorphic computing, algorithms utilizing this hardware in control scenarios are still rare. One problem is the transition from fast spiking activity on the hardware, which acts on a timescale of a few milliseconds, to a control-relevant timescale on the order of hundreds of milliseconds. Another problem is the execution of complex trajectories, which requires spiking activity to contain sufficient variability, while at the same time, for reliable performance, network dynamics must be adequately robust against noise. In this study we exploit a recently developed biologically inspired spiking neural network model, the so-called anisotropic network. We identified and transferred the core principles of the anisotropic network to neuromorphic hardware using Intel's neuromorphic research chip Loihi and validated the system on trajectories from a motor-control task performed by a robot arm. We developed a network architecture including the anisotropic network and a pooling layer which allows fast spike read-out from the chip and performs an inherent regularization. With this, we show that the anisotropic network on Loihi reliably encodes sequential patterns of neural activity, each representing a robotic action, and that the patterns allow the generation of multidimensional trajectories on control-relevant timescales. Taken together, our study presents a new algorithm that allows the generation of complex robotic movements as a building block for robotic control using state-of-the-art neuromorphic hardware.
Affiliation(s)
- Carlo Michaelis
- Department of Computational Neuroscience, University of Göttingen, Göttingen, Germany
69
Zhang Y, Qu P, Ji Y, Zhang W, Gao G, Wang G, Song S, Li G, Chen W, Zheng W, Chen F, Pei J, Zhao R, Zhao M, Shi L. A system hierarchy for brain-inspired computing. Nature 2020; 586:378-384. PMID: 33057220; DOI: 10.1038/s41586-020-2782-y
Abstract
Neuromorphic computing draws inspiration from the brain to provide computing technology and architecture with the potential to drive the next wave of computer engineering. Such brain-inspired computing also provides a promising platform for the development of artificial general intelligence. However, unlike conventional computing systems, which have a well-established computer hierarchy built around the concept of Turing completeness and the von Neumann architecture, there is currently no generalized system hierarchy or understanding of completeness for brain-inspired computing. This affects the compatibility between software and hardware, impairing the programming flexibility and development productivity of brain-inspired computing. Here we propose 'neuromorphic completeness', which relaxes the requirement for hardware completeness, and a corresponding system hierarchy, which consists of a Turing-complete software-abstraction model and a versatile abstract neuromorphic architecture. Using this hierarchy, various programs can be described as uniform representations and transformed into the equivalent executable on any neuromorphic-complete hardware; that is, it ensures programming-language portability, hardware completeness and compilation feasibility. We implement toolchain software to support the execution of different types of program on various typical hardware platforms, demonstrating the advantage of our system hierarchy, including a new system-design dimension introduced by neuromorphic completeness. We expect that our study will enable efficient and compatible progress in all aspects of brain-inspired computing systems, facilitating the development of various applications, including artificial general intelligence.
Affiliation(s)
- Youhui Zhang
- Department of Computer Science and Technology, Tsinghua University, Beijing, China; Center for Brain-Inspired Computing Research (CBICR), Tsinghua University, Beijing, China; Beijing National Research Center for Information Science and Technology, Beijing, China
- Peng Qu
- Department of Computer Science and Technology, Tsinghua University, Beijing, China; Center for Brain-Inspired Computing Research (CBICR), Tsinghua University, Beijing, China; Beijing National Research Center for Information Science and Technology, Beijing, China
- Yu Ji
- Department of Computer Science and Technology, Tsinghua University, Beijing, China; Center for Brain-Inspired Computing Research (CBICR), Tsinghua University, Beijing, China; Beijing National Research Center for Information Science and Technology, Beijing, China
- Weihao Zhang
- Center for Brain-Inspired Computing Research (CBICR), Tsinghua University, Beijing, China; Department of Precision Instruments, Tsinghua University, Beijing, China
- Guangrong Gao
- Department of Electrical and Computer Engineering, University of Delaware, Newark, DE, USA
- Guanrui Wang
- Center for Brain-Inspired Computing Research (CBICR), Tsinghua University, Beijing, China; Department of Precision Instruments, Tsinghua University, Beijing, China
- Sen Song
- Center for Brain-Inspired Computing Research (CBICR), Tsinghua University, Beijing, China; Department of Biomedical Engineering, Tsinghua University, Beijing, China
- Guoqi Li
- Center for Brain-Inspired Computing Research (CBICR), Tsinghua University, Beijing, China; Department of Precision Instruments, Tsinghua University, Beijing, China
- Wenguang Chen
- Department of Computer Science and Technology, Tsinghua University, Beijing, China; Beijing National Research Center for Information Science and Technology, Beijing, China
- Weimin Zheng
- Department of Computer Science and Technology, Tsinghua University, Beijing, China; Beijing National Research Center for Information Science and Technology, Beijing, China
- Feng Chen
- Center for Brain-Inspired Computing Research (CBICR), Tsinghua University, Beijing, China; Department of Automation, Tsinghua University, Beijing, China
- Jing Pei
- Center for Brain-Inspired Computing Research (CBICR), Tsinghua University, Beijing, China; Department of Precision Instruments, Tsinghua University, Beijing, China
- Rong Zhao
- Center for Brain-Inspired Computing Research (CBICR), Tsinghua University, Beijing, China
- Mingguo Zhao
- Center for Brain-Inspired Computing Research (CBICR), Tsinghua University, Beijing, China; Department of Automation, Tsinghua University, Beijing, China
- Luping Shi
- Center for Brain-Inspired Computing Research (CBICR), Tsinghua University, Beijing, China; Department of Precision Instruments, Tsinghua University, Beijing, China
70
DeWolf T, Jaworski P, Eliasmith C. Nengo and Low-Power AI Hardware for Robust, Embedded Neurorobotics. Front Neurorobot 2020; 14:568359. [PMID: 33162886 PMCID: PMC7581863 DOI: 10.3389/fnbot.2020.568359] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/01/2020] [Accepted: 09/01/2020] [Indexed: 11/13/2022] Open
Abstract
In this paper we demonstrate how the Nengo neural modeling and simulation libraries enable users to quickly develop robotic perception and action neural networks for simulation on neuromorphic hardware using tools they are already familiar with, such as Keras and Python. We identify four primary challenges in building robust, embedded neurorobotic systems: (1) developing infrastructure for interfacing with the environment and sensors; (2) processing task-specific sensory signals; (3) generating robust, explainable control signals; and (4) compiling neural networks to run on target hardware. Nengo helps to address these challenges by: (1) providing the NengoInterfaces library, which defines a simple but powerful API for users to interact with simulations and hardware; (2) providing the NengoDL library, which lets users use the Keras and TensorFlow API to develop Nengo models; (3) implementing the Neural Engineering Framework, which provides white-box methods for implementing known functions and circuits; and (4) providing multiple backend libraries, such as NengoLoihi, that enable users to compile the same model to different hardware. We present two examples using Nengo to develop neural networks that run on CPUs and GPUs as well as Intel's neuromorphic chip, Loihi, to demonstrate two variations on this workflow. The first example is an implementation of an end-to-end spiking neural network in Nengo that controls a rover simulated in Mujoco. The network integrates a deep convolutional network that processes visual input from cameras mounted on the rover to track a target, and a control system implementing steering and drive functions in connection weights to guide the rover to the target. The second example uses Nengo as a smaller component in a system that has addressed some but not all of those challenges. Specifically, it is used to augment a force-based operational space controller with neural adaptive control to improve performance during a reaching task using a real-world Kinova Jaco2 robotic arm. The code and implementation details are provided, with the intent of enabling other researchers to build and run their own neurorobotic systems.
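The "white-box" methods of the Neural Engineering Framework mentioned in this abstract reduce, at their core, to solving a least-squares problem for linear decoders over a population's tuning curves. A minimal numpy sketch of that principle follows; this is an illustration only, not Nengo's actual API, and the rectified-linear tuning and all parameter values are assumptions:

```python
import numpy as np

rng = np.random.default_rng(0)
n_neurons, n_samples = 80, 200

# Random tuning: each neuron gets a gain, a bias, and a preferred direction.
gains = rng.uniform(0.5, 2.0, n_neurons)
biases = rng.uniform(-1.0, 1.0, n_neurons)
encoders = rng.choice([-1.0, 1.0], n_neurons)

def rates(x):
    """Rectified-linear response of every neuron to each scalar input in x."""
    j = gains * (encoders * np.asarray(x)[:, None]) + biases
    return np.maximum(j, 0.0)

# Solve for decoders d minimising ||A d - f(x)||^2, so the weighted sum of
# neural activities approximates a chosen function of the represented value.
x = np.linspace(-1, 1, n_samples)
A = rates(x)                       # activity matrix, samples x neurons
target = x ** 2                    # example function to decode: f(x) = x^2
d, *_ = np.linalg.lstsq(A, target, rcond=None)

x_hat = A @ d                      # decoded estimate of f(x)
err = np.sqrt(np.mean((x_hat - target) ** 2))
```

In Nengo proper this computation is hidden behind the connection between two ensembles; the sketch just exposes why such connections can implement "known functions" in their weights.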
Affiliation(s)
- Chris Eliasmith
- Applied Brain Research, Waterloo, ON, Canada; Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
71
Jiménez JP, Martin L, Dounce IA, Ávila-Contreras C, Ramos F. Methodological aspects for cognitive architectures construction: a study and proposal. Artif Intell Rev 2020. [DOI: 10.1007/s10462-020-09901-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
In the field of Artificial Intelligence (AI), efforts to achieve human-like behavior have taken very different paths through time. Cognitive Architectures (CAs) differ from traditional AI approaches in their intention to model cognitive and behavioral processes by understanding the brain's structure and functionality in a natural way. However, the development of distinct CAs has not been easy, mainly because there is no consensus on the theoretical basis, assumptions, or even purposes for their creation, nor on how well they reflect human function. In consequence, there is limited information about the methodological aspects of constructing this type of model. To address this issue, some initial statements are established to contextualize the origins and directions of cognitive architectures and their development, which helps to outline the perspectives, approaches, and objectives of this work. Supported by a brief study of the methodological strategies and historical aspects of some of the most relevant architectures, we propose a methodology that covers general perspectives for the construction of CAs. This proposal is intended to be flexible and focused on use-case tasks, but also directed by theoretical paradigms or manifestos. A case study between cognitive functions is then detailed, using visual perception and working memory to exemplify the proposal's assumptions, postulates, and binding tools, from their meta-architectural conceptions to validation. Finally, the discussion addresses the challenges found at this stage of development and future work directions.
72
Stille CM, Bekolay T, Blouw P, Kröger BJ. Modeling the Mental Lexicon as Part of Long-Term and Working Memory and Simulating Lexical Access in a Naming Task Including Semantic and Phonological Cues. Front Psychol 2020; 11:1594. [PMID: 32774315 PMCID: PMC7381331 DOI: 10.3389/fpsyg.2020.01594] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2020] [Accepted: 06/15/2020] [Indexed: 12/02/2022] Open
Abstract
BACKGROUND To produce and understand words, humans access the mental lexicon. From a functional perspective, the long-term memory component of the mental lexicon comprises three levels: the concept level, the lemma level, and the phonological level. At each level, different kinds of word information are stored. Semantic as well as phonological cues can help to facilitate word access during a naming task, especially when neural dysfunctions are present. The processing corresponding to word access occurs in specific parts of working memory. Neural models for simulating speech processing help to uncover the complex relationships that exist between neural dysfunctions and the corresponding behavioral patterns. METHODS The Neural Engineering Framework (NEF) and the Semantic Pointer Architecture (SPA) are used to develop a quantitative neural model of the mental lexicon and its access during speech processing. By simulating a picture-naming task (WWT 6-10), the influence of cues is investigated by introducing neural dysfunctions into the neural model at different levels of the mental lexicon. RESULTS First, the neural model is able to simulate the test behavior of normal children that exhibit no lexical dysfunction. Second, the model shows worse test performance as larger degrees of dysfunction are introduced. Third, if the severity of dysfunction is not too high, phonological and semantic cues are observed to lead to an increase in the number of correctly named words, with phonological cues more effective than semantic cues. CONCLUSION Our simulation results are in line with human experimental data. Specifically, phonological cues appear not only to activate phonologically similar items within the phonological level but also to support higher-level processing during access of the mental lexicon. Thus, the neural model introduced in this paper offers a promising approach to modeling the mental lexicon, and to incorporating the mental lexicon into a complex model of language processing.
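The Semantic Pointer Architecture referenced in this abstract represents items as high-dimensional vectors and binds roles to fillers with circular convolution. A small numpy sketch of the binding and approximate unbinding operators follows; this is a generic vector-symbolic illustration with invented role names, not the paper's lexicon model:

```python
import numpy as np

rng = np.random.default_rng(1)
D = 512  # semantic pointer dimensionality

def pointer():
    """Random unit vector standing in for a semantic pointer."""
    v = rng.standard_normal(D)
    return v / np.linalg.norm(v)

def bind(a, b):
    # Circular convolution, computed in the Fourier domain.
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=D)

def unbind(trace, key):
    # Approximate inverse: bind with the involution of the key.
    key_inv = np.concatenate(([key[0]], key[:0:-1]))
    return bind(trace, key_inv)

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Hypothetical two-level entry: bind fillers to level roles, then superpose.
LEMMA, PHON = pointer(), pointer()           # role vectors
cat_lemma, cat_phon = pointer(), pointer()   # filler vectors
trace = bind(LEMMA, cat_lemma) + bind(PHON, cat_phon)

recovered = unbind(trace, LEMMA)  # resembles cat_lemma far more than cat_phon
```

Unbinding is noisy, which is why SPA models pair it with a clean-up memory that snaps the result back to the nearest stored pointer.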
Affiliation(s)
- Catharina Marie Stille
- Department for Phoniatrics, Pedaudiology, and Communication Disorders, Faculty of Medicine, RWTH Aachen University, Aachen, Germany
- Trevor Bekolay
- Applied Brain Research, Waterloo, ON, Canada
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Peter Blouw
- Applied Brain Research, Waterloo, ON, Canada
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Bernd J. Kröger
- Department for Phoniatrics, Pedaudiology, and Communication Disorders, Faculty of Medicine, RWTH Aachen University, Aachen, Germany
73
Balkenius C, Johansson B, Tjøstheim TA. Ikaros: A framework for controlling robots with system-level brain models. INT J ADV ROBOT SYST 2020. [DOI: 10.1177/1729881420925002] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022] Open
Abstract
Ikaros is an open framework for system-level brain modeling and real-time robot control. Version 2 of the system includes a range of computational components that implement various algorithms and methods, ranging from models of neural circuits to control systems and hardware interfaces for robots. Ikaros supports the design and implementation of large-scale computational models using a flow-programming paradigm. Version 2 includes a number of new features that support complex networks of hierarchically arranged components, as well as a web-based interactive editor. More than 100 people have contributed to the code base, and over 100 scientific publications report on work that has used Ikaros for simulations or robot control.
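The flow-programming paradigm described above can be sketched as components that read their inputs and publish their outputs once per tick, with a network object routing data along declared connections. The classes and names below are hypothetical illustrations written in Python for brevity (Ikaros itself is a C++ framework), not Ikaros's actual API:

```python
import numpy as np

class Module:
    """One node in the flow graph; publishes `out` each tick."""
    def __init__(self): self.out = None
    def tick(self, inputs): raise NotImplementedError

class Source(Module):
    """Emits one array per tick, e.g. a simulated camera."""
    def __init__(self, frames): super().__init__(); self.frames = iter(frames)
    def tick(self, inputs): self.out = next(self.frames)

class Gain(Module):
    """Scales whatever arrives on its 'in' port."""
    def __init__(self, k): super().__init__(); self.k = k
    def tick(self, inputs): self.out = self.k * inputs["in"]

class Network:
    def __init__(self): self.mods, self.wires = {}, []
    def add(self, name, module): self.mods[name] = module
    def connect(self, src, dst, port): self.wires.append((src, dst, port))
    def run(self, ticks):
        order = list(self.mods)  # assume insertion order is a valid schedule
        for _ in range(ticks):
            for name in order:
                ins = {port: self.mods[s].out
                       for s, d, port in self.wires if d == name}
                self.mods[name].tick(ins)

net = Network()
net.add("cam", Source([np.ones(3) * t for t in range(5)]))
net.add("amp", Gain(2.0))
net.connect("cam", "amp", "in")
net.run(5)
```

A real system adds typed ports, a scheduler that topologically sorts the graph, and hardware I/O modules, but the tick-and-route loop is the essence of the paradigm.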
Affiliation(s)
- Birger Johansson
- Lund University Cognitive Science, Lund University, Lund, Sweden
74
Pals M, Stewart TC, Akyürek EG, Borst JP. A functional spiking-neuron model of activity-silent working memory in humans based on calcium-mediated short-term synaptic plasticity. PLoS Comput Biol 2020; 16:e1007936. [PMID: 32516337 PMCID: PMC7282629 DOI: 10.1371/journal.pcbi.1007936] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2019] [Accepted: 05/07/2020] [Indexed: 11/19/2022] Open
Abstract
In this paper, we present a functional spiking-neuron model of human working memory (WM). This model combines neural firing for the encoding of information with activity-silent maintenance. While it used to be widely assumed that information in WM is maintained through persistent recurrent activity, recent studies have shown that information can be maintained without persistent firing; instead, information can be stored in activity-silent states. A candidate mechanism underlying this type of storage is short-term synaptic plasticity (STSP), by which the strength of connections between neurons rapidly changes to encode new information. To demonstrate that STSP can lead to functional behavior, we integrated STSP by means of calcium-mediated synaptic facilitation in a large-scale spiking-neuron model and added a decision mechanism. The model was used to simulate a recent study that measured behavior and EEG activity of participants in three delayed-response tasks. In these tasks, one or two visual gratings had to be maintained in WM and compared to subsequent probes. The original study demonstrated that WM contents and their priority status could be decoded from neural activity elicited by a task-irrelevant stimulus displayed during the activity-silent maintenance period. In support of our model, we show that it can perform these tasks, and that both its behavior and its neural representations are in agreement with the human data. We conclude that information in WM can be effectively maintained in activity-silent states by means of calcium-mediated STSP. Mentally maintaining information for short periods of time in working memory is crucial for human adaptive behavior. It was recently shown that the human brain does not only store information through neural firing, as was widely believed, but also maintains information in activity-silent states. Here, we present a detailed neural model of how this could happen in our brain through short-term synaptic plasticity: rapidly adapting the connection strengths between neurons in response to incoming information. By reactivating the adapted network, the stored information can be read out later. We show that our model can perform three working memory tasks as accurately as human participants can, while using similar mental representations. We conclude that our model is a plausible and effective neural implementation of human working memory.
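The calcium-mediated facilitation underlying this kind of model can be sketched with a single variable u, in the style of Mongillo-type STSP: each presynaptic spike drives u toward 1 (calcium influx), and u decays back to its baseline U with a time constant of a second or two, long enough to bridge an activity-silent delay. A simplified, facilitation-only numpy sketch follows (the full mechanism also tracks neurotransmitter depletion, and all parameter values here are illustrative):

```python
import numpy as np

# Simplified calcium-mediated short-term facilitation:
# u jumps toward 1 at each presynaptic spike and decays back to baseline U.
U, tau_f, dt = 0.2, 1.5, 0.001   # baseline, facilitation time constant (s), step

def simulate(spike_times, t_end=3.0):
    steps = int(t_end / dt)
    u = np.empty(steps)
    u_now = U
    spikes = {int(t / dt) for t in spike_times}
    for i in range(steps):
        u_now += dt * (U - u_now) / tau_f      # decay toward baseline
        if i in spikes:
            u_now += U * (1.0 - u_now)         # calcium influx on a spike
        u[i] = u_now
    return u

u = simulate(spike_times=[0.1, 0.15, 0.2, 0.25])
peak = u[int(0.26 / dt)]   # just after the burst: u is strongly elevated
late = u[int(2.9 / dt)]    # seconds later: decayed, but still above baseline
```

Because u outlasts the spikes that set it, the synapse itself carries the memory trace during the silent period; a later, task-irrelevant "ping" through the facilitated weights can then reveal the stored content.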
Affiliation(s)
- Matthijs Pals
- Bernoulli Institute, University of Groningen, Groningen, The Netherlands
- Terrence C. Stewart
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, Ontario, Canada
- Elkan G. Akyürek
- Department of Experimental Psychology, University of Groningen, Groningen, The Netherlands
- Jelmer P. Borst
- Bernoulli Institute, University of Groningen, Groningen, The Netherlands
- Groningen Cognitive Systems and Materials Center, University of Groningen, Groningen, The Netherlands
75
Riley SN, Davies J. A spiking neural network model of spatial and visual mental imagery. Cogn Neurodyn 2020; 14:239-251. [PMID: 32226565 PMCID: PMC7090122 DOI: 10.1007/s11571-019-09566-5] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2019] [Revised: 09/30/2019] [Accepted: 11/26/2019] [Indexed: 12/18/2022] Open
Abstract
Mental imagery has long been of interest to the cognitive and neurosciences, but how it manifests itself in the mind and brain still remains unresolved. In pursuit of this, we built a spiking neural model that can perform mental rotation and mental map scanning using strategies informed by the psychology and neuroscience literature. Results: when performing mental map scanning, reaction times (RTs) for our model closely match behavioural studies (approx. 50 ms/cm), and replicate the cognitive penetrability of the task. When performing mental rotation, our model's RTs once again closely match behavioural studies (model: 55-65°/s; studies: 60°/s), and the model performed the task using the same strategy (whole-unit rotation of simple and familiar objects through intermediary points). Overall, our model suggests that: (1) vector-based approaches to neuro-cognitive modelling are well equipped to reproduce behavioural findings, and (2) the cognitive (in)penetrability of imagery tasks may depend on whether or not the task makes use of (non)symbolic processing.
Affiliation(s)
- Sean N. Riley
- Institute of Cognitive Science, Carleton University, 2201 Dunton Tower, 1125 Colonel By Drive, Ottawa, ON K1S 5B6, Canada
- Jim Davies
- Institute of Cognitive Science, Carleton University, 2201 Dunton Tower, 1125 Colonel By Drive, Ottawa, ON K1S 5B6, Canada
76
Levy SD. Robustness Through Simplicity: A Minimalist Gateway to Neurorobotic Flight. Front Neurorobot 2020; 14:16. [PMID: 32231529 PMCID: PMC7088445 DOI: 10.3389/fnbot.2020.00016] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/26/2019] [Accepted: 02/27/2020] [Indexed: 11/13/2022] Open
Abstract
In attempting to build neurorobotic systems based on flying animals, engineers have come to rely on existing firmware and simulation tools designed for miniature aerial vehicles (MAVs). Although they provide a valuable platform for the collection of data for Deep Learning and related AI approaches, such tools are deliberately designed to be general (supporting air, ground, and water vehicles) and feature-rich. The sheer amount of code required to support such broad capabilities can make it a daunting task to adapt these tools to building neurorobotic systems for flight. In this paper we present a complementary pair of simple, object-oriented software tools (multirotor flight-control firmware and simulation platform), each consisting of a core of a few thousand lines of C++ code, that we offer as a candidate solution to this challenge. By providing a minimalist application programming interface (API) for sensors and PID controllers, our software tools make it relatively painless for engineers to prototype neuromorphic approaches to MAV sensing and navigation. We conclude our discussion by presenting a simple PID controller we built using the popular Nengo neural simulator in conjunction with our flight-simulation platform.
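The PID controllers this minimalist API exposes can be sketched in a few lines. The following generic discrete-time version (hypothetical gains and a hypothetical first-order plant, not the paper's firmware code) shows the entire surface a flight stack needs for one control loop:

```python
class PID:
    """Minimal discrete PID controller of the kind used in MAV control loops."""
    def __init__(self, kp, ki, kd, dt):
        self.kp, self.ki, self.kd, self.dt = kp, ki, kd, dt
        self.integral = 0.0
        self.prev_error = None

    def update(self, setpoint, measured):
        error = setpoint - measured
        self.integral += error * self.dt
        deriv = 0.0 if self.prev_error is None else \
            (error - self.prev_error) / self.dt
        self.prev_error = error
        return self.kp * error + self.ki * self.integral + self.kd * deriv

# Drive a toy integrator plant (altitude responds to commanded climb rate)
# toward a 1.0 m setpoint; illustrative gains only.
dt = 0.01
pid = PID(kp=2.0, ki=0.5, kd=0.1, dt=dt)
altitude = 0.0
for _ in range(2000):          # 20 s of simulated flight
    command = pid.update(1.0, altitude)
    altitude += command * dt   # plant: altitude integrates the command
```

Swapping the arithmetic inside `update` for a spiking implementation, as the paper does with Nengo, leaves the surrounding loop untouched, which is the point of keeping the interface this small.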
Affiliation(s)
- Simon D. Levy
- Computer Science Department, Washington and Lee University, Lexington, VA, United States
77
Calmus R, Wilson B, Kikuchi Y, Petkov CI. Structured sequence processing and combinatorial binding: neurobiologically and computationally informed hypotheses. Philos Trans R Soc Lond B Biol Sci 2020; 375:20190304. [PMID: 31840585 PMCID: PMC6939361 DOI: 10.1098/rstb.2019.0304] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 11/04/2019] [Indexed: 12/13/2022] Open
Abstract
Understanding how the brain forms representations of structured information distributed in time is a challenging endeavour for the neuroscientific community, requiring computationally and neurobiologically informed approaches. The neural mechanisms for segmenting continuous streams of sensory input and establishing representations of dependencies remain largely unknown, as do the transformations and computations occurring between the brain regions involved in these aspects of sequence processing. We propose a blueprint for a neurobiologically informed and informing computational model of sequence processing (entitled: Vector-symbolic Sequencing of Binding INstantiating Dependencies, or VS-BIND). This model is designed to support the transformation of serially ordered elements in sensory sequences into structured representations of bound dependencies, readily operates on multiple timescales, and encodes or decodes sequences with respect to chunked items wherever dependencies occur in time. The model integrates established vector symbolic additive and conjunctive binding operators with neurobiologically plausible oscillatory dynamics, and is compatible with modern spiking neural network simulation methods. We show that the model is capable of simulating previous findings from structured sequence processing tasks that engage fronto-temporal regions, specifying mechanistic roles for regions such as prefrontal areas 44/45 and the frontal operculum during interactions with sensory representations in temporal cortex. Finally, we are able to make predictions based on the configuration of the model alone that underscore the importance of serial position information, which requires input from time-sensitive cells, known to reside in the hippocampus and dorsolateral prefrontal cortex. This article is part of the theme issue 'Towards mechanistic models of meaning composition'.
Affiliation(s)
- Ryan Calmus
- Newcastle University Medical School, Framlington Place, Newcastle upon Tyne, UK
78
Brian2GeNN: accelerating spiking neural network simulations with graphics hardware. Sci Rep 2020; 10:410. [PMID: 31941893 PMCID: PMC6962409 DOI: 10.1038/s41598-019-54957-7] [Citation(s) in RCA: 18] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2019] [Accepted: 11/21/2019] [Indexed: 12/05/2022] Open
Abstract
“Brian” is a popular Python-based simulator for spiking neural networks, commonly used in computational neuroscience. GeNN is a C++-based meta-compiler for accelerating spiking neural network simulations using consumer or high performance grade graphics processing units (GPUs). Here we introduce a new software package, Brian2GeNN, that connects the two systems so that users can make use of GeNN GPU acceleration when developing their models in Brian, without requiring any technical knowledge about GPUs, C++ or GeNN. The new Brian2GeNN software uses a pipeline of code generation to translate Brian scripts into C++ code that can be used as input to GeNN, and subsequently can be run on suitable NVIDIA GPU accelerators. From the user’s perspective, the entire pipeline is invoked by adding two simple lines to their Brian scripts. We have shown that using Brian2GeNN, two non-trivial models from the literature can run tens to hundreds of times faster than on CPU.
79
Malik SA, Mir AH. Synchronization of Hindmarsh Rose Neurons. Neural Netw 2019; 123:372-380. [PMID: 31901566 DOI: 10.1016/j.neunet.2019.11.024] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/26/2019] [Revised: 10/22/2019] [Accepted: 11/25/2019] [Indexed: 11/26/2022]
Abstract
Modeling and implementation of biological neurons are key to a fundamental understanding of neural network architectures in the brain and its cognitive behavior. Synchronization of neuronal models plays a significant role in neural signal processing, as it is very difficult to identify the actual interactions between neurons in a living brain; the synchronization of these neuronal architectures has therefore received extensive attention from researchers. Higher biological accuracy of these neuronal units demands more computational overhead and requires more hardware resources for implementation. This paper presents a hardware implementation of two coupled Hindmarsh-Rose (HR) neurons, a mathematically simple model that nevertheless mimics several behaviors of a real biological neuron. The neurons are synchronized using an exponential coupling function, and the coupled system shows several behaviors depending upon the parameters of the HR model and the coupling function. An approximation of the coupling function is also provided to reduce the hardware cost. Both simulations and low-cost hardware implementations of the exponential synaptic coupling function and its approximation are carried out for comparison. Hardware implementation on a field-programmable gate array (FPGA) of the approximated coupling function shows that the coupled network produces different dynamical behaviors with acceptable error, and that the approximated coupling function has a significantly lower implementation cost. A spiking neural network based on HR neurons is also shown as a practical application of the coupled HR network; the spiking network successfully encodes and decodes a time-varying input.
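The standard three-variable Hindmarsh-Rose model behind this work is dx/dt = y + 3x^2 - x^3 - z + I, dy/dt = 1 - 5x^2 - y, dz/dt = r(s(x - x_R) - z). A forward-Euler numpy sketch of a single uncoupled neuron follows, as a software reference point only (the paper's contribution is the FPGA implementation and the exponential coupling, neither of which is shown here; parameter values are the usual textbook ones):

```python
import numpy as np

def hindmarsh_rose(I=3.0, r=0.006, s=4.0, x_r=-1.6, dt=0.01, steps=100_000):
    """Single Hindmarsh-Rose neuron, forward-Euler integration of x, y, z."""
    x, y, z = -1.6, -10.0, 2.0
    xs = np.empty(steps)
    for i in range(steps):
        dx = y + 3 * x**2 - x**3 - z + I   # fast membrane variable
        dy = 1 - 5 * x**2 - y              # fast recovery variable
        dz = r * (s * (x - x_r) - z)       # slow adaptation (bursting) variable
        x, y, z = x + dt * dx, y + dt * dy, z + dt * dz
        xs[i] = x
    return xs

xs = hindmarsh_rose()
# With I = 3.0 the model bursts: x repeatedly spikes above 1 yet stays bounded.
spike_count = int(np.sum((xs[1:] > 1.0) & (xs[:-1] <= 1.0)))
```

Coupling two such neurons means adding a coupling current (in the paper, an exponential function of the voltage difference) to each neuron's dx equation.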
Affiliation(s)
- Malik SA
- Machine Learning Lab, Department of Electronics and Communication Engineering, National Institute of Technology, Srinagar, India
- Mir AH
- Machine Learning Lab, Department of Electronics and Communication Engineering, National Institute of Technology, Srinagar, India
80
Gast R, Rose D, Salomon C, Möller HE, Weiskopf N, Knösche TR. PyRates-A Python framework for rate-based neural simulations. PLoS One 2019; 14:e0225900. [PMID: 31841550 PMCID: PMC6913930 DOI: 10.1371/journal.pone.0225900] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2019] [Accepted: 11/14/2019] [Indexed: 12/13/2022] Open
Abstract
In neuroscience, computational modeling has become an important source of insight into brain states and dynamics. A basic requirement for computational modeling studies is the availability of efficient software for setting up models and performing numerical simulations. While many such tools exist for different families of neural models, there is a lack of tools allowing for both a generic model definition and efficiently parallelized simulations. In this work, we present PyRates, a Python framework that provides the means to build a large variety of rate-based neural models. PyRates provides intuitive access to and modification of all mathematical operators in a graph, thus allowing for a highly generic model definition. For computational efficiency and parallelization, the model is translated into a compute graph. Using the example of two different neural models belonging to the family of rate-based population models, we explain the mathematical formalism, software structure and user interfaces of PyRates. We show via numerical simulations that the behavior of the PyRates model implementations is consistent with the literature. Finally, we demonstrate the computational capacities and scalability of PyRates via a number of benchmark simulations of neural networks differing in size and connectivity.
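The family of rate-based population models PyRates targets can be illustrated with a classic two-population Wilson-Cowan network. The following is a generic numpy sketch with illustrative parameters; it is not PyRates' API, which instead expresses the same mathematics as a graph of operators compiled into a compute graph:

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def wilson_cowan(steps=20_000, dt=0.001, tau=0.01,
                 w_ee=12.0, w_ei=10.0, w_ie=9.0, w_ii=3.0, p=1.5):
    """Two coupled rate populations (excitatory E, inhibitory I), Euler steps."""
    E = I = 0.1
    trace = np.empty((steps, 2))
    for t in range(steps):
        dE = (-E + sigmoid(w_ee * E - w_ei * I + p)) / tau
        dI = (-I + sigmoid(w_ie * E - w_ii * I)) / tau
        E, I = E + dt * dE, I + dt * dI
        trace[t] = E, I
    return trace

trace = wilson_cowan()
```

Each population here is one "node" with one sigmoid operator; a framework like PyRates generalises exactly this structure so that the operators, couplings, and numbers of populations can all be swapped without rewriting the integration loop.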
Affiliation(s)
- Richard Gast
- MEG and Cortical Networks Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Saxony, Germany
- Nuclear Magnetic Resonance Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Saxony, Germany
- Neurophysics Department, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Saxony, Germany
- Daniel Rose
- Neurophysics Department, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Saxony, Germany
- Christoph Salomon
- MEG and Cortical Networks Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Saxony, Germany
- Institute for Biomedical Engineering and Informatics, TU Ilmenau, Ilmenau, Thuringia, Germany
- Harald E. Möller
- Nuclear Magnetic Resonance Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Saxony, Germany
- Nikolaus Weiskopf
- Neurophysics Department, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Saxony, Germany
- Thomas R. Knösche
- MEG and Cortical Networks Group, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Saxony, Germany
- Institute for Biomedical Engineering and Informatics, TU Ilmenau, Ilmenau, Thuringia, Germany
81
Poldrack RA, Feingold F, Frank MJ, Gleeson P, de Hollander G, Huys QJM, Love BC, Markiewicz CJ, Moran R, Ritter P, Rogers TT, Turner BM, Yarkoni T, Zhan M, Cohen JD. The importance of standards for sharing of computational models and data. COMPUTATIONAL BRAIN & BEHAVIOR 2019; 2:229-232. [PMID: 32440654 PMCID: PMC7241435 DOI: 10.1007/s42113-019-00062-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
Abstract
The Target Article by Lee et al. (2019) highlights the ways in which ongoing concerns about research reproducibility extend to model-based approaches in cognitive science. Whereas Lee et al. focus primarily on the importance of research practices to improve model robustness, we propose that the transparent sharing of model specifications, including their inputs and outputs, is also essential to improving the reproducibility of model-based analyses. We outline an ongoing effort (within the context of the Brain Imaging Data Structure community) to develop standards for the sharing of the structure of computational models and their outputs.
Affiliation(s)
- Petra Ritter
- Charité Universitätsmedizin Berlin & Berlin Institute of Health
- Ming Zhan
- National Institute of Mental Health, NIH
82
Mirus F, Blouw P, Stewart TC, Conradt J. An Investigation of Vehicle Behavior Prediction Using a Vector Power Representation to Encode Spatial Positions of Multiple Objects and Neural Networks. Front Neurorobot 2019; 13:84. [PMID: 31680925 PMCID: PMC6805696 DOI: 10.3389/fnbot.2019.00084] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2019] [Accepted: 09/26/2019] [Indexed: 11/13/2022] Open
Abstract
Predicting the future behavior and positions of other traffic participants from observations is a key problem that human drivers and automated vehicles alike must solve to safely navigate their environment and reach their desired goal. In this paper, we expand on previous work on an automotive environment model based on vector symbolic architectures (VSAs). We investigate a vector representation that encapsulates the spatial information of multiple objects based on a convolutive power encoding. Assuming that future positions of vehicles are influenced not only by their own past positions and dynamics (e.g., velocity and acceleration) but also by the behavior of the other traffic participants in the vehicle's surroundings, our motivation is 3-fold: we hypothesize that our structured vector representation will be able to capture these relations and the mutual influence between multiple traffic participants. Furthermore, the dimension of the encoding vectors remains fixed and independent of the number of other vehicles encoded in addition to the target vehicle. Finally, a VSA-based encoding allows us to combine symbol-like processing with the advantages of neural network learning. In this work, we use our vector representation as input for a long short-term memory (LSTM) network for sequence-to-sequence prediction of vehicle positions. In an extensive evaluation, we compare this approach to other LSTM-based benchmark systems using alternative data encoding schemes, simple feed-forward neural networks, as well as a simple linear prediction model for reference. We analyze the advantages and drawbacks of the presented methods and identify specific driving situations where our approach performs best. We use the characteristics specifying such situations as a foundation for an online-learning mixture-of-experts prototype, which chooses at run time between several available predictors, depending on the current driving situation, to achieve the best possible forecast.
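The convolutive power encoding at the core of this representation raises a fixed unitary base vector to a (possibly fractional) power in the Fourier domain: nearby positions then map to similar vectors, distant ones to nearly orthogonal vectors, and the dimensionality stays fixed no matter how many objects are superposed into one trace. A numpy sketch of the operator follows (illustrative parameters only, not the paper's full two-dimensional encoder):

```python
import numpy as np

rng = np.random.default_rng(2)
D = 256

# Unitary base vector: a real vector whose FFT coefficients all have magnitude 1,
# so repeated binding neither grows nor shrinks the result.
f = np.fft.fft(rng.standard_normal(D))
base_fft = f / np.abs(f)

def encode(p):
    """Convolutive power: the base vector bound with itself p times (p may be real)."""
    return np.real(np.fft.ifft(base_fft ** p))

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

# Nearby positions produce similar vectors; distant ones are nearly orthogonal.
near = cosine(encode(2.0), encode(2.1))
far = cosine(encode(2.0), encode(7.0))
```

Encoding a scene is then a superposition of object vectors bound to the powers encoding their positions, which is why the trace's dimension never depends on the object count.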
Affiliation(s)
- Florian Mirus: BMW Group, Research, New Technologies, Garching, Germany; Department of Electrical and Computer Engineering, Technical University of Munich, Munich, Germany
- Peter Blouw: Applied Brain Research Inc., Waterloo, ON, Canada
- Jörg Conradt: Department of Computational Science and Technology, KTH Royal Institute of Technology, Stockholm, Sweden
83
84
Stille CM, Bekolay T, Blouw P, Kröger BJ. Natural Language Processing in Large-Scale Neural Models for Medical Screenings. Front Robot AI 2019; 6:62. PMID: 33501077; PMCID: PMC7805752; DOI: 10.3389/frobt.2019.00062.
Abstract
Many medical screenings used for the diagnosis of neurological, psychological or language and speech disorders access the language and speech processing system. Specifically, patients are asked to fulfill a task (perception) and then requested to give answers verbally or by writing (production). To analyze cognitive or higher-level linguistic impairments or disorders it is thus expected that specific parts of the language and speech processing system of patients are working correctly or that verbal instructions are replaced by pictures (avoiding auditory perception) or oral answers by pointing (avoiding speech articulation). The first goal of this paper is to propose a large-scale neural model which comprises cognitive and lexical levels of the human neural system, and which is able to simulate the human behavior occurring in medical screenings. The second goal of this paper is to relate (microscopic) neural deficits introduced into the model to corresponding (macroscopic) behavioral deficits resulting from the model simulations. The Neural Engineering Framework and the Semantic Pointer Architecture are used to develop the large-scale neural model. Parts of two medical screenings are simulated: (1) a screening of word naming for the detection of developmental problems in lexical storage and lexical retrieval; and (2) a screening of cognitive abilities for the detection of mild cognitive impairment and early dementia. Both screenings include cognitive, language, and speech processing, and for both screenings the same model is simulated with and without neural deficits (physiological case vs. pathological case). While the simulation of both screenings results in the expected normal behavior in the physiological case, the simulations clearly show a deviation of behavior, e.g., an increase in errors in the pathological case. Moreover, specific types of neural dysfunctions resulting from different types of neural defects lead to differences in the type and strength of the observed behavioral deficits.
Affiliation(s)
- Catharina Marie Stille: Department for Phoniatrics, Pedaudiology, and Communication Disorders, Medical Faculty RWTH Aachen University, Aachen, Germany
- Trevor Bekolay: Applied Brain Research, Waterloo, ON, Canada; Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Peter Blouw: Applied Brain Research, Waterloo, ON, Canada; Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Bernd J. Kröger: Department for Phoniatrics, Pedaudiology, and Communication Disorders, Medical Faculty RWTH Aachen University, Aachen, Germany
85
Jordan J, Weidel P, Morrison A. A Closed-Loop Toolchain for Neural Network Simulations of Learning Autonomous Agents. Front Comput Neurosci 2019; 13:46. PMID: 31427939; PMCID: PMC6687756; DOI: 10.3389/fncom.2019.00046.
Abstract
Neural network simulation is an important tool for generating and evaluating hypotheses on the structure, dynamics, and function of neural circuits. For scientific questions addressing organisms operating autonomously in their environments, in particular where learning is involved, it is crucial to be able to operate such simulations in a closed-loop fashion. In such a set-up, the neural agent continuously receives sensory stimuli from the environment and provides motor signals that manipulate the environment or move the agent within it. So far, most studies requiring such functionality have been conducted with custom simulation scripts and manually implemented tasks. This makes it difficult for other researchers to reproduce and build upon previous work and nearly impossible to compare the performance of different learning architectures. In this work, we present a novel approach to solve this problem, connecting benchmark tools from the field of machine learning and state-of-the-art neural network simulators from computational neuroscience. The resulting toolchain enables researchers in both fields to make use of well-tested high-performance simulation software supporting biologically plausible neuron, synapse and network models and allows them to evaluate and compare their approach on the basis of standardized environments with various levels of complexity. We demonstrate the functionality of the toolchain by implementing a neuronal actor-critic architecture for reinforcement learning in the NEST simulator and successfully training it on two different environments from the OpenAI Gym. We compare its performance to a previously suggested neural network model of reinforcement learning in the basal ganglia and a generic Q-learning algorithm.
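The closed-loop pattern the toolchain standardizes (observe, act, step the environment, learn) can be sketched with a Gym-style interface and the generic tabular Q-learning baseline the authors compare against. This is a hedged illustration only: `CorridorEnv` is an invented toy task, not one of the OpenAI Gym environments used in the paper, and all hyperparameters are illustrative.

```python
import random

class CorridorEnv:
    """Toy Gym-style environment: walk right along a corridor to the goal.
    Mimics the reset()/step() closed-loop interface of the OpenAI Gym."""
    def __init__(self, length=6):
        self.length = length

    def reset(self):
        self.pos = 0
        return self.pos

    def step(self, action):  # action: 0 = left, 1 = right
        self.pos = max(0, min(self.length - 1, self.pos + (1 if action == 1 else -1)))
        done = self.pos == self.length - 1
        return self.pos, (1.0 if done else 0.0), done, {}

def q_learning(env, episodes=300, alpha=0.5, gamma=0.95, eps=0.5, seed=0):
    """Generic tabular Q-learning driving the closed loop:
    observe state -> choose action -> step the environment -> update."""
    rng = random.Random(seed)
    Q = [[0.0, 0.0] for _ in range(env.length)]
    for _ in range(episodes):
        s, done, steps = env.reset(), False, 0
        while not done and steps < 100:
            if rng.random() < eps or Q[s][0] == Q[s][1]:
                a = rng.randrange(2)               # explore (or break ties)
            else:
                a = 0 if Q[s][0] > Q[s][1] else 1  # exploit
            s2, r, done, _ = env.step(a)
            Q[s][a] += alpha * (r + gamma * max(Q[s2]) - Q[s][a])
            s, steps = s2, steps + 1
    return Q

Q = q_learning(CorridorEnv())
policy = [0 if Q[s][0] > Q[s][1] else 1 for s in range(5)]
```

The point of the toolchain is that the neural agent (here a Q-table, in the paper a NEST actor-critic network) is swappable behind the same reset/step loop.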
Affiliation(s)
- Jakob Jordan: Department of Physiology, University of Bern, Bern, Switzerland; Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA-Institute Brain Structure Function Relationship (JBI 1/INM-10), Research Centre Jülich, Jülich, Germany
- Philipp Weidel: Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA-Institute Brain Structure Function Relationship (JBI 1/INM-10), Research Centre Jülich, Jülich, Germany; aiCTX, Zurich, Switzerland; Department of Computer Science, RWTH Aachen University, Aachen, Germany
- Abigail Morrison: Institute of Neuroscience and Medicine (INM-6) & Institute for Advanced Simulation (IAS-6) & JARA-Institute Brain Structure Function Relationship (JBI 1/INM-10), Research Centre Jülich, Jülich, Germany; Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr-University Bochum, Bochum, Germany
86
Mozafari M, Ganjtabesh M, Nowzari-Dalini A, Masquelier T. SpykeTorch: Efficient Simulation of Convolutional Spiking Neural Networks With at Most One Spike per Neuron. Front Neurosci 2019; 13:625. PMID: 31354403; PMCID: PMC6640212; DOI: 10.3389/fnins.2019.00625.
Abstract
Application of deep convolutional spiking neural networks (SNNs) to artificial intelligence (AI) tasks has recently gained a lot of interest since SNNs are hardware-friendly and energy-efficient. Unlike the non-spiking counterparts, most of the existing SNN simulation frameworks are not practically efficient enough for large-scale AI tasks. In this paper, we introduce SpykeTorch, an open-source high-speed simulation framework based on PyTorch. This framework simulates convolutional SNNs with at most one spike per neuron and the rank-order encoding scheme. In terms of learning rules, both spike-timing-dependent plasticity (STDP) and reward-modulated STDP (R-STDP) are implemented, but other rules could be implemented easily. Apart from the aforementioned properties, SpykeTorch is highly generic and capable of reproducing the results of various studies. Computations in the proposed framework are tensor-based and totally done by PyTorch functions, which in turn brings the ability of just-in-time optimization for running on CPUs, GPUs, or Multi-GPU platforms.
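Two ingredients named above — intensity-to-latency rank-order coding (at most one spike per neuron) and a simplified, order-based STDP rule — can be sketched in numpy. This is a conceptual illustration, not SpykeTorch's tensor-based API; the function names, the `w*(1-w)` stabilizer, and the learning rates are assumptions for the sketch.

```python
import numpy as np

def rank_order_encode(x):
    """Intensity-to-latency coding: stronger inputs spike earlier.
    Returns one spike time (its rank) per input; at most one spike each."""
    order = np.argsort(-x)            # earliest spike = largest intensity
    times = np.empty_like(order)
    times[order] = np.arange(len(x))
    return times

def simplified_stdp(w, pre_times, post_time, a_plus=0.05, a_minus=0.04):
    """Order-based STDP: potentiate inputs that fired at or before the
    post-synaptic spike, depress the rest. w*(1-w) keeps w inside [0, 1]."""
    causal = pre_times <= post_time
    dw = np.where(causal, a_plus, -a_minus) * w * (1.0 - w)
    return w + dw

rng = np.random.default_rng(1)
x = rng.random(8)                     # one input patch of intensities
t_pre = rank_order_encode(x)
w = np.full(8, 0.5)                   # synapses start at mid-range
post_t = 3                            # post-neuron fires after 4 input spikes
w_new = simplified_stdp(w, t_pre, post_t)
```

Because the rule depends only on spike order rather than exact timing differences, a whole layer's update reduces to comparisons and elementwise arithmetic, which is what makes the tensor formulation fast.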
Affiliation(s)
- Milad Mozafari: Department of Computer Science, School of Mathematics, Statistics, and Computer Science, University of Tehran, Tehran, Iran; CERCO UMR 5549, CNRS - Université Toulouse 3, Toulouse, France
- Mohammad Ganjtabesh: Department of Computer Science, School of Mathematics, Statistics, and Computer Science, University of Tehran, Tehran, Iran
- Abbas Nowzari-Dalini: Department of Computer Science, School of Mathematics, Statistics, and Computer Science, University of Tehran, Tehran, Iran
87
Wärnberg E, Kumar A. Perturbing low dimensional activity manifolds in spiking neuronal networks. PLoS Comput Biol 2019; 15:e1007074. PMID: 31150376; PMCID: PMC6586365; DOI: 10.1371/journal.pcbi.1007074.
Abstract
Several recent studies have shown that neural activity in vivo tends to be constrained to a low-dimensional manifold. Such activity does not arise in simulated neural networks with homogeneous connectivity and it has been suggested that it is indicative of some other connectivity pattern in neuronal networks. In particular, this connectivity pattern appears to be constraining learning so that only neural activity patterns falling within the intrinsic manifold can be learned and elicited. Here, we use three different models of spiking neural networks (echo-state networks, the Neural Engineering Framework and Efficient Coding) to demonstrate how the intrinsic manifold can be made a direct consequence of the circuit connectivity. Using this relationship between the circuit connectivity and the intrinsic manifold, we show that learning of patterns outside the intrinsic manifold corresponds to much larger changes in synaptic weights than learning of patterns within the intrinsic manifold. Assuming larger changes to synaptic weights requires extensive learning, this observation provides an explanation of why learning is easier when it does not require the neural activity to leave its intrinsic manifold.
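The core relationship — an intrinsic manifold made a direct consequence of circuit connectivity — can be illustrated with a linear rate network whose recurrent weights have rank k: activity then collapses onto a k-dimensional subspace. This is a hedged toy sketch (none of the paper's three spiking models); the network size, rank, and input signals are invented for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)
N, k, T, dt = 50, 3, 4000, 0.01

# Rank-k recurrent weights: W = 0.9 * U @ U.T with orthonormal U (N x k),
# so the recurrence (and here also the input) lives in a k-dim subspace.
U, _ = np.linalg.qr(rng.standard_normal((N, k)))
W = 0.9 * U @ U.T

x = rng.standard_normal(N)                       # arbitrary initial state
states = []
for t in range(T):
    c = np.sin(0.01 * np.arange(1, k + 1) * t)   # k independent input signals
    x = x + dt * (-x + W @ x + U @ c)            # leaky linear rate dynamics
    if t > T // 2:                               # drop the initial transient
        states.append(x.copy())

S = np.array(states)
sv = np.linalg.svd(S - S.mean(axis=0), compute_uv=False)
var_topk = float((sv[:k] ** 2).sum() / (sv ** 2).sum())  # ~1: activity is k-dim
```

Components of the state orthogonal to the column space of U decay and are never re-excited, so after the transient the top k principal components capture essentially all the variance; perturbations off the manifold would require changing W itself, mirroring the paper's argument about learning cost.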
Affiliation(s)
- Emil Wärnberg: Dept. of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden; Dept. of Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Arvind Kumar: Dept. of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden
88
Verduzco-Flores S, De Schutter E. Draculab: A Python Simulator for Firing Rate Neural Networks With Delayed Adaptive Connections. Front Neuroinform 2019; 13:18. PMID: 31001101; PMCID: PMC6454197; DOI: 10.3389/fninf.2019.00018.
Abstract
Draculab is a neural simulator with a particular use scenario: firing rate units with delayed connections, using custom-made unit and synapse models, possibly controlling simulated physical systems. Draculab also has a particular design philosophy. It aims to blur the line between users and developers. Three factors help to achieve this: a simple design using Python's data structures, extensive use of standard libraries, and profusely commented source code. This paper is an introduction to Draculab's architecture and philosophy. After presenting some example networks it explains basic algorithms and data structures that constitute the essence of this approach. The relation with other simulators is discussed, as well as the reasons why connection delays and interaction with simulated physical systems are emphasized.
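The simulator's defining ingredient, a firing-rate unit receiving input through a connection with transmission delay, can be sketched with a plain Python delay line (a ring buffer pre-filled with zeros). This is a minimal sketch of the idea, not Draculab's actual classes; the pulse protocol and parameters are invented.

```python
from collections import deque

def simulate(delay_steps=15, T=100, dt=0.1, w=2.0):
    """Unit 2 receives unit 1's rate through a connection whose
    transmission delay is delay_steps * dt."""
    buf = deque([0.0] * delay_steps)   # delay line, pre-filled with zeros
    r2_trace, r2 = [], 0.0
    for t in range(T):
        r1 = 1.0 if t == 10 else 0.0   # unit 1 emits a single pulse at step 10
        delayed = buf.popleft()        # what unit 1 emitted delay_steps ago
        buf.append(r1)
        r2 = r2 + dt * (-r2 + w * delayed)   # leaky integration of delayed input
        r2_trace.append(r2)
    return r2_trace

r2_trace = simulate()
first_response = next(i for i, r in enumerate(r2_trace) if r > 0)
```

The pulse sent at step 10 first affects the receiving unit exactly `delay_steps` later, which is the property that matters when such networks drive simulated physical systems with feedback latencies.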
Affiliation(s)
- Sergio Verduzco-Flores: Computational Neuroscience Unit, Okinawa Institute of Science and Technology, Okinawa, Japan
- Erik De Schutter: Computational Neuroscience Unit, Okinawa Institute of Science and Technology, Okinawa, Japan
89
Huang F, Ching S. Spiking networks as efficient distributed controllers. Biol Cybern 2019; 113:179-190. PMID: 29951907; DOI: 10.1007/s00422-018-0769-7.
Abstract
In the brain, networks of neurons produce activity that is decoded into perceptions and actions. How the dynamics of neural networks support this decoding is a major scientific question. That is, while we understand the basic mechanisms by which neurons produce activity in the form of spikes, whether these dynamics reflect an overlying functional objective is not understood. In this paper, we examine neuronal dynamics from a first-principles control-theoretic viewpoint. Specifically, we postulate an objective wherein neuronal spiking activity is decoded into a control signal that subsequently drives a linear system. Then, using a recently proposed principle from theoretical neuroscience, we optimize the production of spikes so that the linear system in question achieves reference tracking. It turns out that such optimization leads to a recurrent network architecture wherein each neuron possesses integrative dynamics. The network amounts to an efficient, distributed event-based controller where each neuron (node) produces a spike if doing so improves tracking performance. Moreover, the dynamics provide inherent robustness properties, so that if some neurons fail, others will compensate by increasing their activity so that the tracking objective is met.
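The spike-if-it-helps principle can be sketched in a few lines: the decoded estimate is a leaky sum of spikes, and neuron i fires only when adding its decoding weight reduces the squared tracking error, i.e. when d_i(x - x̂) > d_i²/2. This is a hedged 1-D sketch in the spirit of the efficient-coding rule the paper builds on, with invented weights and time constants, not the paper's controller.

```python
import numpy as np

# Decoded estimate x_hat = leaky sum of spikes weighted by each neuron's
# decoding kernel d_i. A neuron spikes only if that reduces the squared
# tracking error: d_i * (x - x_hat) > d_i**2 / 2.
d = np.array([0.1, 0.1, -0.1, -0.1])    # decoding weights (two "on", two "off")
dt, lam, T = 1e-3, 10.0, 4000
x_hat, err, spikes = 0.0, [], 0

for t in range(T):
    x = np.sin(2 * np.pi * t * dt)       # reference signal to track
    x_hat *= (1.0 - lam * dt)            # leaky decay of the estimate
    gain = d * (x - x_hat) - d ** 2 / 2  # error reduction if neuron i spikes
    i = int(np.argmax(gain))
    if gain[i] > 0:                      # greedy: spike only when it helps
        x_hat += d[i]
        spikes += 1
    err.append(abs(x - x_hat))

mean_err = float(np.mean(err[T // 2:]))  # tracking error stays ~|d_i|/2
```

The robustness claim falls out of the same rule: if one neuron is silenced, the error its spikes would have cancelled simply triggers the threshold of another neuron with a similar decoding weight.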
Affiliation(s)
- Fuqiang Huang: Department of Electrical and Systems Engineering, Washington University, St. Louis, MO 63130, USA
- ShiNung Ching: Department of Electrical and Systems Engineering, Washington University, St. Louis, MO 63130, USA
90
Chen G, Bing Z, Rohrbein F, Conradt J, Huang K, Cheng L, Jiang Z, Knoll A. Toward Brain-Inspired Learning With the Neuromorphic Snake-Like Robot and the Neurorobotic Platform. IEEE Trans Cogn Dev Syst 2019. DOI: 10.1109/tcds.2017.2712712.
91
Hazan H, Saunders DJ, Khan H, Patel D, Sanghavi DT, Siegelmann HT, Kozma R. BindsNET: A Machine Learning-Oriented Spiking Neural Networks Library in Python. Front Neuroinform 2018; 12:89. PMID: 30631269; PMCID: PMC6315182; DOI: 10.3389/fninf.2018.00089.
Abstract
The development of spiking neural network simulation software is a critical component enabling the modeling of neural systems and the development of biologically inspired algorithms. Existing software frameworks support a wide range of neural functionality, software abstraction levels, and hardware devices, yet are typically not suitable for rapid prototyping or application to problems in the domain of machine learning. In this paper, we describe a new Python package for the simulation of spiking neural networks, specifically geared toward machine learning and reinforcement learning. Our software, called BindsNET, enables rapid building and simulation of spiking networks and features user-friendly, concise syntax. BindsNET is built on the PyTorch deep neural networks library, facilitating the implementation of spiking neural networks on fast CPU and GPU computational platforms. Moreover, the BindsNET framework can be adjusted to utilize other existing computing and hardware backends; e.g., TensorFlow and SpiNNaker. We provide an interface with the OpenAI gym library, allowing for training and evaluation of spiking networks on reinforcement learning environments. We argue that this package facilitates the use of spiking networks for large-scale machine learning problems and show some simple examples by using BindsNET in practice.
Affiliation(s)
- Hananel Hazan: Biologically Inspired Neural and Dynamical Systems Laboratory, College of Computer and Information Sciences, University of Massachusetts Amherst, Amherst, MA, United States
- Daniel J. Saunders: Biologically Inspired Neural and Dynamical Systems Laboratory, College of Computer and Information Sciences, University of Massachusetts Amherst, Amherst, MA, United States
92
Andalibi V, Hokkanen H, Vanni S. Controlling Complexity of Cerebral Cortex Simulations-I: CxSystem, a Flexible Cortical Simulation Framework. Neural Comput 2018; 31:1048-1065. PMID: 30148703; DOI: 10.1162/neco_a_01120.
Abstract
Simulation of the cerebral cortex requires a combination of extensive domain-specific knowledge and efficient software. However, when the complexity of the biological system is combined with that of the software, the likelihood of coding errors increases, which slows model adjustments. Moreover, few life scientists are familiar with software engineering and would benefit from simplicity in the form of a high-level abstraction of the biological model. Our primary aim was to build a scalable cortical simulation framework for personal computers. We isolated an adjustable part of the domain-specific knowledge from the software. Next, we designed a framework that reads the model parameters from comma-separated value files and creates the necessary code for Brian2 model simulation. This separation allows rapid exploration of complex cortical circuits while decreasing the likelihood of coding errors and automatically using efficient hardware devices. Next, we tested the system on a simplified version of the neocortical microcircuit proposed by Markram and colleagues (2015). Our results indicate that the framework can efficiently perform simulations using Python, C++, and GPU devices. The most efficient device varied with computer hardware and the duration and scale of the simulated system. The speed of Brian2 was retained despite an overlying layer of software. However, the Python and C++ devices inherited the single core limitation of Brian2. The CxSystem framework supports exploration of complex models on personal computers and thus has the potential to facilitate research on cortical networks and systems.
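The separation the framework makes — model parameters in CSV files, simulation code generated from them — can be illustrated with stdlib `csv`. The column names and schema below are invented for the sketch; they are not CxSystem's actual file format.

```python
import csv, io

# Hypothetical anatomy file: one row per neuron group (not CxSystem's schema).
anatomy_csv = """group,model,n,threshold_mV
L4_exc,LIF,400,-50
L4_inh,LIF,100,-48
"""

def load_groups(text):
    """Parse group parameters from CSV into plain dicts, converting numeric
    columns, so the simulation code never hard-codes model details."""
    groups = {}
    for row in csv.DictReader(io.StringIO(text)):
        groups[row["group"]] = {
            "model": row["model"],
            "n": int(row["n"]),
            "threshold_mV": float(row["threshold_mV"]),
        }
    return groups

groups = load_groups(anatomy_csv)
```

Editing a row in the CSV changes the simulated circuit without touching the generator code, which is the error-reduction argument the abstract makes.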
Affiliation(s)
- Vafa Andalibi: Clinical Neurosciences, Neurology, University of Helsinki and Helsinki University Hospital, Helsinki 00029, Finland; School of Informatics, Computing and Engineering, Indiana University Bloomington, IN 47408, USA
- Henri Hokkanen: Clinical Neurosciences, Neurology, University of Helsinki and Helsinki University Hospital, Helsinki 00029, Finland
- Simo Vanni: Clinical Neurosciences, Neurology, University of Helsinki and Helsinki University Hospital, Helsinki 00029, Finland
93
Bing Z, Meschede C, Röhrbein F, Huang K, Knoll AC. A Survey of Robotics Control Based on Learning-Inspired Spiking Neural Networks. Front Neurorobot 2018; 12:35. PMID: 30034334; PMCID: PMC6043678; DOI: 10.3389/fnbot.2018.00035.
Abstract
Biological intelligence processes information using impulses or spikes, which makes those living creatures able to perceive and act in the real world exceptionally well and outperform state-of-the-art robots in almost every aspect of life. To make up the deficit, emerging hardware technologies and software knowledge in the fields of neuroscience, electronics, and computer science have made it possible to design biologically realistic robots controlled by spiking neural networks (SNNs), inspired by the mechanism of brains. However, a comprehensive review on controlling robots based on SNNs is still missing. In this paper, we survey the developments of the past decade in the field of spiking neural networks for control tasks, with particular focus on the fast emerging robotics-related applications. We first highlight the primary impetuses of SNN-based robotics tasks in terms of speed, energy efficiency, and computation capabilities. We then classify those SNN-based robotic applications according to different learning rules and explicate those learning rules with their corresponding robotic applications. We also briefly present some existing platforms that offer an interaction between SNNs and robotics simulations for exploration and exploitation. Finally, we conclude our survey with a forecast of future challenges and some associated potential research topics in terms of controlling robots based on SNNs.
Affiliation(s)
- Zhenshan Bing: Chair of Robotics, Artificial Intelligence and Real-time Systems, Department of Informatics, Technical University of Munich, Munich, Germany
- Claus Meschede: Chair of Robotics, Artificial Intelligence and Real-time Systems, Department of Informatics, Technical University of Munich, Munich, Germany
- Florian Röhrbein: Chair of Robotics, Artificial Intelligence and Real-time Systems, Department of Informatics, Technical University of Munich, Munich, Germany
- Kai Huang: Department of Data and Computer Science, Sun Yat-Sen University, Guangzhou, China
- Alois C. Knoll: Chair of Robotics, Artificial Intelligence and Real-time Systems, Department of Informatics, Technical University of Munich, Munich, Germany
94
Taghia J, Cai W, Ryali S, Kochalka J, Nicholas J, Chen T, Menon V. Uncovering hidden brain state dynamics that regulate performance and decision-making during cognition. Nat Commun 2018; 9:2505. PMID: 29950686; PMCID: PMC6021386; DOI: 10.1038/s41467-018-04723-6.
Abstract
Human cognition is influenced not only by external task demands but also latent mental processes and brain states that change over time. Here, we use a novel Bayesian switching dynamical systems algorithm to identify hidden brain states and determine that these states are only weakly aligned with external task conditions. We compute state transition probabilities and demonstrate how dynamic transitions between hidden states allow flexible reconfiguration of functional brain circuits. Crucially, we identify latent transient brain states and dynamic functional circuits that are optimal for cognition and show that failure to engage these states in a timely manner is associated with poorer task performance and weaker decision-making dynamics. We replicate findings in a large sample (N = 122) and reveal a robust link between cognition and flexible latent brain state dynamics. Our study demonstrates the power of switching dynamical systems models for investigating hidden dynamic brain states and functional interactions underlying human cognition.
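The paper's model is a fully Bayesian switching dynamical system, but the basic quantity it reports — state transition probabilities — can be illustrated with the simple count-and-normalize estimate computed from a decoded hidden-state sequence. The toy sequence below is invented for the sketch.

```python
import numpy as np

def transition_matrix(states, n_states):
    """Maximum-likelihood estimate of P[i, j] = P(next = j | current = i)
    from a sequence of decoded hidden-state labels."""
    counts = np.zeros((n_states, n_states))
    for a, b in zip(states[:-1], states[1:]):
        counts[a, b] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# A sequence that mostly stays in state 0 and occasionally visits state 1.
seq = [0, 0, 0, 1, 0, 0, 0, 1, 0, 0]
P = transition_matrix(seq, 2)
```

High diagonal entries correspond to persistent (dwelling) states, and off-diagonal entries quantify how readily the brain reconfigures between states.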
Affiliation(s)
- Jalil Taghia: Department of Psychiatry & Behavioral Sciences, Stanford University School of Medicine, Stanford, CA 94305, USA
- Weidong Cai: Department of Psychiatry & Behavioral Sciences, Stanford University School of Medicine, Stanford, CA 94305, USA
- Srikanth Ryali: Department of Psychiatry & Behavioral Sciences, Stanford University School of Medicine, Stanford, CA 94305, USA
- John Kochalka: Department of Psychiatry & Behavioral Sciences, Stanford University School of Medicine, Stanford, CA 94305, USA
- Jonathan Nicholas: Department of Psychiatry & Behavioral Sciences, Stanford University School of Medicine, Stanford, CA 94305, USA
- Tianwen Chen: Department of Psychiatry & Behavioral Sciences, Stanford University School of Medicine, Stanford, CA 94305, USA
- Vinod Menon: Department of Psychiatry & Behavioral Sciences, Stanford University School of Medicine, Stanford, CA 94305, USA; Department of Neurology & Neurological Sciences, Stanford University School of Medicine, Stanford, CA 94305, USA; Stanford Neuroscience Institute, Stanford University School of Medicine, Stanford, CA 94305, USA
95
Senft V, Stewart TC, Bekolay T, Eliasmith C, Kröger BJ. Inhibiting Basal Ganglia Regions Reduces Syllable Sequencing Errors in Parkinson's Disease: A Computer Simulation Study. Front Comput Neurosci 2018; 12:41. PMID: 29928197; PMCID: PMC5997929; DOI: 10.3389/fncom.2018.00041.
Abstract
Background: Parkinson's disease affects many motor processes including speech. Besides drug treatment, deep brain stimulation (DBS) in the subthalamic nucleus (STN) and globus pallidus internus (GPi) has developed as an effective therapy. Goal: We present a neural model that simulates a syllable repetition task and evaluate its performance when varying the level of dopamine in the striatum, and the level of activity reduction in the STN or GPi. Method: The Neural Engineering Framework (NEF) is used to build a model of syllable sequencing through a cortico-basal ganglia-thalamus-cortex circuit. The model is able to simulate a failing substantia nigra pars compacta (SNc), as occurs in Parkinson's patients. We simulate syllable sequencing parameterized by (i) the tonic dopamine level in the striatum and (ii) average neural activity in STN or GPi. Results: With decreased dopamine levels, the model produces syllable sequencing errors in the form of skipping and swapping syllables, repeating the same syllable, breaking and restarting in the middle of a sequence, and cessation (“freezing”) of sequences. We also find that reducing (inhibiting) activity in either STN or GPi reduces the occurrence of syllable sequencing errors. Conclusion: The model predicts that inhibiting activity in STN or GPi can reduce syllable sequencing errors in Parkinson's patients. Since DBS also reduces syllable sequencing errors in Parkinson's patients, we therefore suggest that STN or GPi inhibition is one mechanism through which DBS reduces syllable sequencing errors in Parkinson's patients.
Affiliation(s)
- Terrence C Stewart: Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Chris Eliasmith: Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Bernd J Kröger: Department for Phoniatrics, Pedaudiology, and Communication Disorders, RWTH Aachen University, Aachen, Germany
96
Antolík J, Davison AP. Arkheia: Data Management and Communication for Open Computational Neuroscience. Front Neuroinform 2018; 12:6. PMID: 29556187; PMCID: PMC5845131; DOI: 10.3389/fninf.2018.00006.
Abstract
Two trends have been unfolding in computational neuroscience during the last decade. First, a shift of focus to increasingly complex and heterogeneous neural network models, with a concomitant increase in the level of collaboration within the field (whether direct or in the form of building on top of existing tools and results). Second, a general trend in science toward more open communication, both internally, with other potential scientific collaborators, and externally, with the wider public. This multi-faceted development toward more integrative approaches and more intense communication within and outside of the field poses major new challenges for modelers, as currently there is a severe lack of tools to help with automatic communication and sharing of all aspects of a simulation workflow to the rest of the community. To address this important gap in the current computational modeling software infrastructure, here we introduce Arkheia. Arkheia is a web-based open science platform for computational models in systems neuroscience. It provides an automatic, interactive, graphical presentation of simulation results, experimental protocols, and interactive exploration of parameter searches, in a web browser-based application. Arkheia is focused on automatic presentation of these resources with minimal manual input from users. Arkheia is written in a modular fashion with a focus on future development of the platform. The platform is designed in an open manner, with a clearly defined and separated API for database access, so that any project can write its own backend translating its data into the Arkheia database format. Arkheia is not a centralized platform, but allows any user (or group of users) to set up their own repository, either for public access by the general population, or locally for internal use. Overall, Arkheia provides users with an automatic means to communicate information about not only their models but also individual simulation results and the entire experimental context in an approachable graphical manner, thus facilitating the user's ability to collaborate in the field and outreach to a wider audience.
Affiliation(s)
- Ján Antolík: Institut National de la Santé et de la Recherche Médicale UMRI S 968; Sorbonne Universités, UPMC Univ Paris 06, UMR S 968; Centre National de la Recherche Scientifique, UMR 7210, Institut de la Vision, Paris, France; Unité de Neurosciences, Information et Complexité, Centre National de la Recherche Scientifique UPR 3293, Gif-sur-Yvette, France
- Andrew P Davison: Unité de Neurosciences, Information et Complexité, Centre National de la Recherche Scientifique UPR 3293, Gif-sur-Yvette, France
97
Jordan J, Ippen T, Helias M, Kitayama I, Sato M, Igarashi J, Diesmann M, Kunkel S. Extremely Scalable Spiking Neuronal Network Simulation Code: From Laptops to Exascale Computers. Front Neuroinform 2018; 12:2. PMID: 29503613; PMCID: PMC5820465; DOI: 10.3389/fninf.2018.00002.
Abstract
State-of-the-art software tools for neuronal network simulations scale to the largest computing systems available today and enable investigations of large-scale networks of up to 10% of the human cortex at a resolution of individual neurons and synapses. Due to an upper limit on the number of incoming connections of a single neuron, network connectivity becomes extremely sparse at this scale. To manage computational costs, simulation software ultimately targeting the brain scale needs to fully exploit this sparsity. Here we present a two-tier connection infrastructure and a framework for directed communication among compute nodes accounting for the sparsity of brain-scale networks. We demonstrate the feasibility of this approach by implementing the technology in the NEST simulation code and we investigate its performance in different scaling scenarios of typical network simulations. Our results show that the new data structures and communication scheme prepare the simulation kernel for post-petascale high-performance computing facilities without sacrificing performance in smaller systems.
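The sparsity argument can be made concrete with a back-of-the-envelope calculation (an illustrative sketch, not code from the paper; the in-degree value is an assumed biological constant): because each neuron receives a roughly fixed number of synapses K regardless of network size N, the connection density K/N vanishes as simulations approach brain scale.

```python
# Illustrative only: fixed in-degree implies vanishing connection density.
# K ~ 10,000 synapses per neuron is an assumed, biologically motivated constant.

def connection_density(n_neurons, k_per_neuron=10_000):
    """Fraction of all possible neuron pairs that are actually connected."""
    return k_per_neuron / n_neurons

for n in (10**5, 10**7, 10**9):  # laptop-, cluster-, and brain-scale networks
    print(f"N = {n:.0e}: density = {connection_density(n):.1e}")
```

At 10⁹ neurons the density is 10⁻⁵, which is why connection data structures that are efficient for dense laptop-scale networks waste memory at brain scale, motivating the two-tier infrastructure described above.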
Affiliation(s)
- Jakob Jordan
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany
- Tammo Ippen
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway
- Moritz Helias
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany
- Itaru Kitayama
- Advanced Institute for Computational Science, RIKEN, Kobe, Japan
- Mitsuhisa Sato
- Advanced Institute for Computational Science, RIKEN, Kobe, Japan
- Jun Igarashi
- Computational Engineering Applications Unit, RIKEN, Wako, Japan
- Markus Diesmann
- Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and JARA Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, Jülich, Germany; Department of Physics, Faculty 1, RWTH Aachen University, Aachen, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Susanne Kunkel
- Department of Computational Science and Technology, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden; Simulation Laboratory Neuroscience - Bernstein Facility for Simulation and Database Technology, Jülich Research Centre, Jülich, Germany
98
Neuromorphic photonic networks using silicon photonic weight banks. Sci Rep 2017; 7:7430. [PMID: 28784997] [PMCID: PMC5547135] [DOI: 10.1038/s41598-017-07754-z] [Received: 02/16/2017] [Accepted: 06/29/2017]
Abstract
Photonic systems for high-performance information processing have attracted renewed interest. Neuromorphic silicon photonics has the potential to integrate processing functions that vastly exceed the capabilities of electronics. We report the first observations of a recurrent silicon photonic neural network, in which connections are configured by microring weight banks. A mathematical isomorphism between the silicon photonic circuit and a continuous neural network model is demonstrated through dynamical bifurcation analysis. Exploiting this isomorphism, a simulated 24-node silicon photonic neural network is programmed using a "neural compiler" to solve a differential system emulation task. A 294-fold acceleration against a conventional benchmark is predicted. We also propose and derive a power consumption analysis for modulator-class neurons that, as opposed to laser-class neurons, are compatible with silicon photonic platforms. At increased scale, neuromorphic silicon photonics could access new regimes of ultrafast information processing for radio, control, and scientific computing.
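The weight-bank idea can be abstracted as a weighted sum over wavelength channels followed by a modulator nonlinearity (a hedged numerical sketch, not the paper's device model; the sigmoid transfer function and weight range are illustrative assumptions):

```python
# Illustrative abstraction of a broadcast-and-weight photonic neuron:
# each wavelength carries one input signal, a tunable microring sets its
# weight, and the summed photodetector current drives a modulator whose
# response is idealised here as a sigmoid.
import math

def photonic_neuron(signals, ring_weights):
    """Weighted sum of per-wavelength signals, then a modulator nonlinearity."""
    current = sum(w * s for w, s in zip(ring_weights, signals))
    return 1.0 / (1.0 + math.exp(-current))  # idealised modulator response

out = photonic_neuron([0.2, 0.8, 0.5], [1.0, -0.5, 0.3])
```

The isomorphism the authors demonstrate is between networks of such continuous-valued units and the measured dynamics of the physical circuit.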
99
Rasmussen D, Voelker A, Eliasmith C. A neural model of hierarchical reinforcement learning. PLoS One 2017; 12:e0180234. [PMID: 28683111] [PMCID: PMC5500327] [DOI: 10.1371/journal.pone.0180234] [Received: 12/01/2016] [Accepted: 06/12/2017]
Abstract
We develop a novel, biologically detailed neural model of reinforcement learning (RL) processes in the brain. This model incorporates a broad range of biological features that pose challenges to neural RL, such as temporally extended action sequences, continuous environments involving unknown time delays, and noisy/imprecise computations. Most significantly, we expand the model into the realm of hierarchical reinforcement learning (HRL), which divides the RL process into a hierarchy of actions at different levels of abstraction. Here we implement all the major components of HRL in a neural model that captures a variety of known anatomical and physiological properties of the brain. We demonstrate the performance of the model in a range of different environments, in order to emphasize the aim of understanding the brain’s general reinforcement learning ability. These results show that the model compares well to previous modelling work and demonstrates improved performance as a result of its hierarchical ability. We also show that the model’s behaviour is consistent with available data on human hierarchical RL, and generate several novel predictions.
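The core hierarchical decomposition can be illustrated in miniature (a generic HRL sketch under stated assumptions, not the authors' spiking implementation: the 1-D corridor environment, policies, and subgoal rule are all invented for illustration): a high-level policy selects subgoals, and a low-level policy issues primitive actions until each subgoal is reached.

```python
# Toy two-level action hierarchy on a 1-D corridor: the high level picks
# intermediate subgoals, the low level steps one unit at a time toward them.

def high_level_policy(state, goal):
    """Pick a subgoal roughly halfway to the goal (the goal itself when close)."""
    return goal if abs(goal - state) <= 1 else (state + goal) // 2

def low_level_policy(state, subgoal):
    """Primitive action: step one unit toward the current subgoal."""
    return 1 if subgoal > state else -1

def run_episode(start, goal, max_steps=100):
    state, trajectory = start, [start]
    for _ in range(max_steps):
        if state == goal:
            break
        subgoal = high_level_policy(state, goal)
        while state != subgoal:  # low level runs until the subgoal is met
            state += low_level_policy(state, subgoal)
            trajectory.append(state)
    return trajectory

print(run_episode(0, 8))  # visits every state from 0 through 8
```

The benefit the abstract reports is analogous: decomposing a long task into subgoals shortens the effective horizon each level must learn over.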
Affiliation(s)
- Aaron Voelker
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Chris Eliasmith
- Applied Brain Research, Inc., Waterloo, ON, Canada
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
100
Serruya MD. Connecting the Brain to Itself through an Emulation. Front Neurosci 2017; 11:373. [PMID: 28713235] [PMCID: PMC5492113] [DOI: 10.3389/fnins.2017.00373] [Received: 02/01/2017] [Accepted: 06/15/2017]
Abstract
Pilot clinical trials of human patients implanted with devices that can chronically record and stimulate ensembles of hundreds to thousands of individual neurons offer the possibility of expanding the substrate of cognition. Parallel trains of firing rate activity can be delivered in real-time to an array of intermediate external modules that in turn can trigger parallel trains of stimulation back into the brain. These modules may be built in software, VLSI firmware, or biological tissue as in vitro culture preparations or in vivo ectopic construct organoids. Arrays of modules can be constructed as early stage whole brain emulators, following canonical intra- and inter-regional circuits. By using machine learning algorithms and classic tasks known to activate quasi-orthogonal functional connectivity patterns, bedside testing can rapidly identify ensemble tuning properties and in turn cycle through a sequence of external module architectures to explore which can causatively alter perception and behavior. Whole brain emulation both (1) serves to augment human neural function, compensating for disease and injury as an auxiliary parallel system, and (2) has its independent operation bootstrapped by a human-in-the-loop to identify optimal micro- and macro-architectures, update synaptic weights, and entrain behaviors. In this manner, closed-loop brain-computer interface pilot clinical trials can advance strong artificial intelligence development and forge new therapies to restore independence in children and adults with neurological conditions.
Affiliation(s)
- Mijail D Serruya
- Neurology, Thomas Jefferson University, Philadelphia, PA, United States