101
Aleksin SG, Zheng K, Rusakov DA, Savtchenko LP. ARACHNE: A neural-neuroglial network builder with remotely controlled parallel computing. PLoS Comput Biol 2017; 13:e1005467. [PMID: 28362877] [PMCID: PMC5393895] [DOI: 10.1371/journal.pcbi.1005467] [Received: 08/05/2016] [Revised: 04/17/2017] [Accepted: 03/20/2017] Open Access
Abstract
Creating and running realistic models of neural networks has hitherto been a task for computing professionals rather than experimental neuroscientists. This is mainly because such networks usually engage substantial computational resources, the handling of which requires specific programming skills. Here we put forward a newly developed simulation environment, ARACHNE: it enables an investigator to build and explore cellular networks of arbitrary biophysical and architectural complexity using the logic of NEURON and a simple interface on a local computer or a mobile device. The interface can control, through the internet, an optimized computational kernel installed on a remote computer cluster. ARACHNE can combine neuronal (wired) and astroglial (extracellular volume-transmission driven) network types and adopt realistic cell models from the NEURON library. The program and documentation (current version) are available at the GitHub repository https://github.com/LeonidSavtchenko/Arachne under the MIT License.
Affiliation(s)
- Sergey G. Aleksin
- AMC Bridge LLC, Waltham MA, United States of America and Dnipro, Ukraine
- Kaiyu Zheng
- UCL Institute of Neurology, University College London, London WC1N 3BG, United Kingdom
- Dmitri A. Rusakov
- UCL Institute of Neurology, University College London, London WC1N 3BG, United Kingdom
- * E-mail: (LPS); (DAR)
- Leonid P. Savtchenko
- UCL Institute of Neurology, University College London, London WC1N 3BG, United Kingdom
- Institute of Neuroscience, University of Nizhny Novgorod, Nizhny Novgorod, Russia
- * E-mail: (LPS); (DAR)
102
Kajić I, Gosmann J, Stewart TC, Wennekers T, Eliasmith C. A Spiking Neuron Model of Word Associations for the Remote Associates Test. Front Psychol 2017; 8:99. [PMID: 28210234] [PMCID: PMC5288385] [DOI: 10.3389/fpsyg.2017.00099] [Received: 08/13/2016] [Accepted: 01/16/2017] Open Access
Abstract
Generating associations is important for cognitive tasks including language acquisition and creative problem solving. It remains an open question how the brain represents and processes associations. The Remote Associates Test (RAT) is a task, originally used in creativity research, that is heavily dependent on generating associations in a search for the solutions to individual RAT problems. In this work we present a model that solves the test. Compared to earlier modeling work on the RAT, our hybrid (i.e., non-developmental) model is implemented in a spiking neural network by means of the Neural Engineering Framework (NEF), demonstrating that it is possible for spiking neurons to be organized to store the employed representations and to manipulate them. In particular, the model shows that distributed representations can support sophisticated linguistic processing. The model was validated on human behavioral data including the typical length of response sequences and similarity relationships in produced responses. These data suggest two cognitive processes that are involved in solving the RAT: one process generates potential responses and a second process filters the responses.
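The two-process account suggested by this abstract (one process generates candidate responses, a second filters them) can be sketched outside the spiking substrate. The sketch below uses a toy association matrix with made-up words and strengths, not the paper's free-association norms, and no NEF neurons:

```python
import numpy as np

# Toy free-association strengths (rows: cues, cols: candidate responses).
# Words and values are illustrative, not taken from published norms.
words = ["cheese", "sky", "blue", "cake", "cottage"]
idx = {w: i for i, w in enumerate(words)}
A = np.array([
    #  cheese  sky  blue  cake  cottage
    [0.0, 0.0, 0.3, 0.2, 0.5],   # cheese
    [0.0, 0.0, 0.7, 0.0, 0.0],   # sky
    [0.3, 0.4, 0.0, 0.1, 0.2],   # blue
    [0.2, 0.0, 0.1, 0.0, 0.3],   # cake
    [0.5, 0.0, 0.2, 0.3, 0.0],   # cottage
])

def solve_rat(cues, n_responses=3):
    """Generate candidates by pooled association strength (process 1),
    then filter out responses not associated with every cue (process 2)."""
    cue_ids = [idx[c] for c in cues]
    strength = A[cue_ids].sum(axis=0)          # generation: pooled activation
    responses = []
    for j in np.argsort(-strength):
        if words[j] in cues:
            continue
        if all(A[i, j] > 0 for i in cue_ids):  # filtering: must relate to all cues
            responses.append(words[j])
        if len(responses) == n_responses:
            break
    return responses

answer = solve_rat(["cheese", "sky", "cake"])[0]
```

Here "blue" wins because it is the only word associated with all three cues; "cottage" is generated strongly but filtered out by the second process.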
Affiliation(s)
- Ivana Kajić
- School of Computing, Electronics and Mathematics, University of Plymouth, Plymouth, UK
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Jan Gosmann
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Terrence C. Stewart
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Thomas Wennekers
- School of Computing, Electronics and Mathematics, University of Plymouth, Plymouth, UK
- Chris Eliasmith
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
103
Aubin S, Voelker AR, Eliasmith C. Improving With Practice: A Neural Model of Mathematical Development. Top Cogn Sci 2016; 9:6-20. [PMID: 28019687] [DOI: 10.1111/tops.12242] [Received: 10/07/2016] [Revised: 10/28/2016] [Accepted: 10/19/2016]
Abstract
The ability to improve in speed and accuracy as a result of repeating some task is an important hallmark of intelligent biological systems. Although gradual behavioral improvements from practice have been modeled in spiking neural networks, few such models have attempted to explain the cognitive development of a task as complex as addition. In this work, we model the progression from a counting-based strategy for addition to a recall-based strategy. The model consists of two networks working in parallel: a slower basal ganglia loop and a faster cortical network. The slow network methodically computes the count from one digit given another, corresponding to the addition of two digits, whereas the fast network gradually "memorizes" the output from the slow network. The faster network eventually learns how to add the same digits that initially drove the behavior of the slower network. Performance of this model is demonstrated by simulating a fully spiking neural network that includes basal ganglia, thalamus, and various cortical areas. The model thus incorporates neuroanatomical data, in terms of the brain areas used for calculation, and makes psychologically testable predictions related to the frequency of rehearsal. Furthermore, the model replicates the developmental progression through addition strategies in terms of reaction times and accuracy, and naturally explains observed symptoms of dyscalculia.
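The slow-counting-versus-fast-recall progression described in this abstract can be sketched without the spiking substrate. In the toy model below the two "networks" are plain functions, and the practice threshold is an assumed parameter, not one from the paper:

```python
memory = {}          # fast network: memorized digit sums
CONFIDENCE = 3       # practice needed before trusting recall (assumed)

def count_add(a, b):
    """Slow process: add by counting up from a, one step at a time."""
    total, steps = a, 0
    for _ in range(b):
        total += 1
        steps += 1
    return total, steps

def add(a, b):
    """Recall the answer once the pair is well practiced; else count."""
    hits, value = memory.get((a, b), (0, None))
    if hits >= CONFIDENCE:
        return value, 1                   # recall: one fast step
    value, steps = count_add(a, b)
    memory[(a, b)] = (hits + 1, value)    # fast network memorizes the result
    return value, steps

# Practice shifts the strategy from counting (b steps) to recall (1 step),
# mirroring the reaction-time speed-up reported in the paper.
before = add(4, 3)[1]
for _ in range(3):
    add(4, 3)
after = add(4, 3)[1]
```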
Affiliation(s)
- Sean Aubin
- Centre for Theoretical Neuroscience, University of Waterloo
104
Pastur-Romay LA, Cedrón F, Pazos A, Porto-Pazos AB. Deep Artificial Neural Networks and Neuromorphic Chips for Big Data Analysis: Pharmaceutical and Bioinformatics Applications. Int J Mol Sci 2016; 17:E1313. [PMID: 27529225] [PMCID: PMC5000710] [DOI: 10.3390/ijms17081313] [Received: 05/16/2016] [Revised: 07/14/2016] [Accepted: 07/25/2016] Open Access
Abstract
Over the past decade, Deep Artificial Neural Networks (DNNs) have become the state-of-the-art algorithms in Machine Learning (ML), speech recognition, computer vision, natural language processing and many other tasks. This was made possible by advances in Big Data, Deep Learning (DL) and drastically increased chip processing power, especially general-purpose graphical processing units (GPGPUs). All this has created a growing interest in making the most of the potential offered by DNNs in almost every field. This work presents an overview of the main DNN architectures and their usefulness in pharmacology and bioinformatics. The featured applications are: drug design, virtual screening (VS), Quantitative Structure-Activity Relationship (QSAR) research, protein structure prediction and genomics (and other omics) data mining. The future need for neuromorphic hardware for DNNs is also discussed, and the two most advanced chips are reviewed: IBM TrueNorth and SpiNNaker. In addition, this review points out the importance of considering not only neurons: DNNs and neuromorphic chips should also include glial cells, given the proven contribution of astrocytes, a type of glial cell, to information processing in the brain. Deep Artificial Neuron-Astrocyte Networks (DANANs) could overcome the difficulties in architecture design, learning process and scalability of current ML methods.
Affiliation(s)
- Lucas Antón Pastur-Romay
- Department of Information and Communications Technologies, University of A Coruña, A Coruña 15071, Spain.
- Francisco Cedrón
- Department of Information and Communications Technologies, University of A Coruña, A Coruña 15071, Spain.
- Alejandro Pazos
- Department of Information and Communications Technologies, University of A Coruña, A Coruña 15071, Spain.
- Instituto de Investigación Biomédica de A Coruña (INIBIC), Complexo Hospitalario Universitario de A Coruña (CHUAC), A Coruña 15006, Spain.
- Ana Belén Porto-Pazos
- Department of Information and Communications Technologies, University of A Coruña, A Coruña 15071, Spain.
- Instituto de Investigación Biomédica de A Coruña (INIBIC), Complexo Hospitalario Universitario de A Coruña (CHUAC), A Coruña 15006, Spain.
105
Kröger BJ, Crawford E, Bekolay T, Eliasmith C. Modeling Interactions between Speech Production and Perception: Speech Error Detection at Semantic and Phonological Levels and the Inner Speech Loop. Front Comput Neurosci 2016; 10:51. [PMID: 27303287] [PMCID: PMC4885855] [DOI: 10.3389/fncom.2016.00051] [Received: 03/29/2016] [Accepted: 05/17/2016] Open Access
Abstract
Production and comprehension of speech are closely interwoven. For example, the ability to detect an error in one's own speech, halt speech production, and finally correct the error can be explained by assuming an inner speech loop which continuously compares the word representations induced by production to those induced by perception at various cognitive levels (e.g., conceptual, word, or phonological levels). Because spontaneous speech errors are relatively rare, a picture naming and halt paradigm can be used to evoke them. In this paradigm, picture presentation (target word initiation) is followed by an auditory stop signal (distractor word) for halting speech production. The current study seeks to understand the neural mechanisms governing self-detection of speech errors by developing a biologically inspired neural model of the inner speech loop. The neural model is based on the Neural Engineering Framework (NEF) and consists of a network of about 500,000 spiking neurons. In the first experiment we induce simulated speech errors semantically and phonologically. In the second experiment, we simulate a picture naming and halt task. Target-distractor word pairs were balanced with respect to variation of phonological and semantic similarity. The results of the first experiment show that speech errors are successfully detected by a monitoring component in the inner speech loop. The results of the second experiment show that the model correctly reproduces human behavioral data on the picture naming and halt task. In particular, the halting rate in the production of target words was lower for phonologically similar words than for semantically similar or fully dissimilar distractor words. We thus conclude that the neural architecture proposed here to model the inner speech loop reflects important interactions in production and perception at phonological and semantic levels.
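The monitoring component described in this abstract compares production-induced and perception-induced word representations. A minimal sketch of that comparator, using random vectors in place of the model's 500,000-neuron NEF network, with an assumed similarity threshold:

```python
import numpy as np

rng = np.random.default_rng(0)

def unit(v):
    return v / np.linalg.norm(v)

# Random unit vectors stand in for word representations at one level
# (e.g. the word level); the lexicon here is purely illustrative.
lexicon = {w: unit(rng.standard_normal(64)) for w in ["cat", "dog", "mat"]}

def monitor(intended, perceived, threshold=0.8):
    """Inner-speech-loop comparator: low similarity between the
    production-induced and perception-induced representations flags an
    error (the threshold is an assumed parameter)."""
    sim = float(lexicon[intended] @ lexicon[perceived])
    return sim < threshold          # True => halt and correct

halt_on_match = monitor("cat", "cat")   # same word: no error detected
halt_on_error = monitor("cat", "dog")   # mismatch: error detected
```

A phonological-level version would use vectors whose similarity tracks sound overlap rather than random vectors, which is how the model produces lower halting rates for phonologically similar distractors.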
Affiliation(s)
- Bernd J. Kröger
- Neurophonetics Group, Department of Phoniatrics, Pedaudiology, and Communication Disorders, Medical School, RWTH Aachen University, Aachen, Germany
- Eric Crawford
- Reasoning and Learning Lab, School of Computer Science, McGill University, Montreal, QC, Canada
- Trevor Bekolay
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Chris Eliasmith
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
106
Stewart TC, Kleinhans A, Mundy A, Conradt J. Serendipitous Offline Learning in a Neuromorphic Robot. Front Neurorobot 2016; 10:1. [PMID: 26913002] [PMCID: PMC4753383] [DOI: 10.3389/fnbot.2016.00001] [Received: 09/13/2015] [Accepted: 01/26/2016] Open Access
Abstract
We demonstrate a hybrid neuromorphic learning paradigm that learns complex sensorimotor mappings based on a small set of hard-coded reflex behaviors. A mobile robot is first controlled by a basic set of reflexive hand-designed behaviors. All sensor data is provided via a spike-based silicon retina camera (eDVS), and all control is implemented via spiking neurons simulated on neuromorphic hardware (SpiNNaker). Given this control system, the robot is capable of simple obstacle avoidance and random exploration. To train the robot to perform more complex tasks, we observe the robot and find instances where the robot accidentally performs the desired action. Data recorded from the robot during these times is then used to update the neural control system, increasing the likelihood of the robot performing that task in the future, given a similar sensor state. As an example application of this general-purpose method of training, we demonstrate the robot learning to respond to novel sensory stimuli (a mirror) by turning right if it is present at an intersection, and otherwise turning left. In general, this system can learn arbitrary relations between sensory input and motor behavior.
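The training scheme described in this abstract (log sensor-motor data whenever the desired action occurs by accident, then update the controller offline) can be sketched without the SpiNNaker/eDVS hardware. Everything below, including the "mirror" feature and the least-squares readout, is an illustrative stand-in for the paper's spiking implementation:

```python
import numpy as np

rng = np.random.default_rng(1)

# Hidden rule the robot should acquire: turn right iff the first sensor
# channel (a stand-in for 'mirror detected') exceeds 0.5. Illustrative only.
def desired_action(sensors):
    return 1.0 if sensors[0] > 0.5 else -1.0   # +1 = right, -1 = left

# Phase 1: random reflexive exploration; log the serendipitous successes.
X, y = [], []
for _ in range(500):
    sensors = rng.random(5)
    reflex_action = rng.choice([-1.0, 1.0])
    if reflex_action == desired_action(sensors):
        X.append(np.append(sensors, 1.0))      # sensor state + bias term
        y.append(reflex_action)
X, y = np.array(X), np.array(y)

# Phase 2: offline update -- fit a linear readout on the logged data
# (the paper updates spiking decoders; the principle is the same).
w, *_ = np.linalg.lstsq(X, y, rcond=None)

# The controller now reproduces behavior it previously only stumbled into.
test = rng.random((200, 5))
pred = np.sign(np.column_stack([test, np.ones(len(test))]) @ w)
truth = np.array([desired_action(s) for s in test])
accuracy = float((pred == truth).mean())
```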
Affiliation(s)
- Terrence C Stewart
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Ashley Kleinhans
- Mobile Intelligent Autonomous Systems Group, Council for Scientific and Industrial Research, Pretoria, South Africa
- Andrew Mundy
- School of Computer Science, University of Manchester, Manchester, UK
- Jörg Conradt
- Department of Electrical and Computer Engineering, Technische Universität München, München, Germany
107
Gosmann J, Eliasmith C. Optimizing Semantic Pointer Representations for Symbol-Like Processing in Spiking Neural Networks. PLoS One 2016; 11:e0149928. [PMID: 26900931] [PMCID: PMC4762696] [DOI: 10.1371/journal.pone.0149928] [Received: 10/07/2015] [Accepted: 02/05/2016] Open Access
Abstract
The Semantic Pointer Architecture (SPA) is a proposal for specifying the computations and architectural elements needed to account for cognitive functions. By means of the Neural Engineering Framework (NEF), this proposal can be realized in a spiking neural network. However, in any such network each SPA transformation will accumulate noise. By increasing the accuracy of common SPA operations, the overall network performance can be increased considerably. In addition, the representations in such networks present a trade-off between representing all possible values and representing only the most likely values, but with higher accuracy. We derive a heuristic to find the near-optimal point in this trade-off. This allows us to improve the accuracy of common SPA operations by up to 25 times. Ultimately, it allows for a reduction in neuron number and a more efficient use of both traditional and neuromorphic hardware, which we demonstrate here.
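The core SPA operation whose accuracy is at stake here is binding by circular convolution of high-dimensional vectors, computable via the FFT. A minimal NumPy sketch of binding and approximate unbinding (no spiking neurons, and the dimensionality is an arbitrary choice for this sketch):

```python
import numpy as np

rng = np.random.default_rng(0)
D = 512                      # arbitrary dimensionality for this sketch

def sp(d):
    """Random unit-length vector standing in for a semantic pointer."""
    v = rng.standard_normal(d)
    return v / np.linalg.norm(v)

def bind(a, b):
    """Circular convolution, the SPA binding operation (via FFT)."""
    return np.fft.irfft(np.fft.rfft(a) * np.fft.rfft(b), n=len(a))

def unbind(c, b):
    """Approximate inverse: bind with the involution of b."""
    b_inv = np.concatenate(([b[0]], b[:0:-1]))
    return bind(c, b_inv)

dog, agent = sp(D), sp(D)
trace = bind(dog, agent)            # e.g. encode an agent/filler pair
recovered = unbind(trace, agent)    # noisy reconstruction of 'dog'
similarity = float(dog @ recovered /
                   (np.linalg.norm(dog) * np.linalg.norm(recovered)))
```

The recovered vector is only similar to the original, not identical; that residual noise is exactly what accumulates across chained SPA transformations and what the paper's optimized representations reduce.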
Affiliation(s)
- Jan Gosmann
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, Ontario, Canada
- Chris Eliasmith
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, Ontario, Canada
108
Stewart TC, DeWolf T, Kleinhans A, Eliasmith C. Closed-Loop Neuromorphic Benchmarks. Front Neurosci 2015; 9:464. [PMID: 26696820] [PMCID: PMC4678234] [DOI: 10.3389/fnins.2015.00464] [Received: 08/16/2015] [Accepted: 11/23/2015] Open Access
Abstract
Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled.
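The error-driven learning result described in this abstract can be illustrated with a one-joint stand-in for the paper's benchmarks: a point mass under an unknown constant external force, with an adaptive bias term updated from the tracking error. The plant, gains, and learning rate below are assumptions for the sketch, not the paper's setup:

```python
import numpy as np

rng = np.random.default_rng(2)

# One-dimensional plant: point mass with an unknown constant external
# force, to be driven to position 0 (a toy stand-in for the paper's
# randomly generated multi-joint closed-loop simulations).
unknown_force = rng.uniform(-1.0, 1.0)
dt, k_learn = 0.01, 5.0

pos, vel, w_adapt = 1.0, 0.0, 0.0
errors = []
for _ in range(2000):                      # 20 s of simulated time
    error = pos                            # target position is 0
    u = -5.0 * pos - 2.0 * vel + w_adapt   # PD control + learned bias
    w_adapt -= k_learn * error * dt        # error-driven adaptation
    vel += (u + unknown_force) * dt        # plant with hidden force
    pos += vel * dt
    errors.append(abs(error))

early = float(np.mean(errors[:200]))       # first 2 s
late = float(np.mean(errors[-200:]))       # last 2 s
```

The adaptive term converges toward cancelling the unknown force, so late-trial error falls well below early-trial error, which is the pattern the benchmarks are designed to measure.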
Affiliation(s)
- Terrence C Stewart
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Travis DeWolf
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Ashley Kleinhans
- Mobile Intelligent Autonomous Systems Group, Council for Scientific and Industrial Research, Pretoria, South Africa
- Chris Eliasmith
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
109
Bekolay T, Stewart TC, Eliasmith C. Benchmarking neuromorphic systems with Nengo. Front Neurosci 2015; 9:380. [PMID: 26539076] [PMCID: PMC4609756] [DOI: 10.3389/fnins.2015.00380] [Received: 07/30/2015] [Accepted: 10/02/2015] Open Access
Abstract
Nengo is a software package for designing and simulating large-scale neural models. Nengo is architected such that the same Nengo model can be simulated on any of several Nengo backends with few to no modifications. Backends translate a model to specific platforms, which include GPUs and neuromorphic hardware. Nengo also contains a large test suite that can be run with any backend and focuses primarily on functional performance. We propose that Nengo's large test suite can be used to benchmark neuromorphic hardware's functional performance and simulation speed in an efficient, unbiased, and future-proof manner. We implement four benchmark models and show that Nengo can collect metrics across five different backends that identify situations in which some backends perform more accurately or quickly.
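The benchmarking idea, running one model specification across interchangeable backends and collecting both functional-accuracy and speed metrics, can be sketched without Nengo installed. The two "backends" below are toy stand-ins (a reference implementation and a quantized one mimicking low-precision hardware), not real Nengo backends:

```python
import time
import numpy as np

# Stand-in 'backends' that run the same model specification; real Nengo
# backends (reference, GPU, neuromorphic) slot into the harness the same way.
def backend_reference(signal):
    return signal.copy()

def backend_quantized(signal):
    return np.round(signal * 16) / 16     # mimics low-precision hardware

def run_benchmark(backend, signal):
    """Collect the two metric families the paper's test suite reports:
    functional accuracy (RMSE vs. the ideal output) and wall-clock time."""
    t0 = time.perf_counter()
    out = backend(signal)
    elapsed = time.perf_counter() - t0
    rmse = float(np.sqrt(np.mean((out - signal) ** 2)))
    return {"rmse": rmse, "seconds": elapsed}

signal = np.sin(np.linspace(0.0, 2.0 * np.pi, 1000))
results = {name: run_benchmark(fn, signal)
           for name, fn in [("reference", backend_reference),
                            ("quantized", backend_quantized)]}
```

The point of this harness design is that the benchmark and the backend are decoupled: the same test runs unchanged on each backend, so the collected metrics are directly comparable.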
110
Vitay J, Dinkelbach HÜ, Hamker FH. ANNarchy: a code generation approach to neural simulations on parallel hardware. Front Neuroinform 2015; 9:19. [PMID: 26283957] [PMCID: PMC4521356] [DOI: 10.3389/fninf.2015.00019] [Received: 03/31/2015] [Accepted: 07/13/2015] Open Access
Abstract
Many modern neural simulators focus on the simulation of networks of spiking neurons on parallel hardware. Another important framework in computational neuroscience, rate-coded neural networks, is mostly difficult or impossible to implement using these simulators. We present here the ANNarchy (Artificial Neural Networks architect) neural simulator, which allows users to easily define and simulate rate-coded and spiking networks, as well as combinations of both. The Python interface has been designed to be close to the PyNN interface, while the definition of neuron and synapse models can be specified using an equation-oriented mathematical description similar to the Brian neural simulator. This information is used to generate C++ code that will efficiently perform the simulation on the chosen parallel hardware (multi-core system or graphical processing unit). Several numerical methods are available to transform ordinary differential equations into efficient C++ code. We compare the parallel performance of the simulator to existing solutions.
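The code-generation idea, turning an equation-oriented model description into an efficient update rule, can be illustrated for a single leaky membrane equation, tau * dv/dt = E - v. The exponential-Euler step below is one of the numerical methods such simulators can emit, here as a Python closure rather than generated C++:

```python
import numpy as np

def generate_update(tau, E, dt):
    """Turn 'tau * dv/dt = E - v' into a one-multiply-add update rule
    (exponential Euler), the kind of inner loop a code generator emits.
    Parameter values below are illustrative, not from the paper."""
    decay = np.exp(-dt / tau)          # precomputed once, outside the loop
    def update(v):
        return E + (v - E) * decay
    return update

tau, E, dt = 10.0, -65.0, 0.1
update = generate_update(tau, E, dt)

v = 0.0
for _ in range(1000):                  # 100 ms of simulated time
    v = update(v)

# For this linear ODE the exponential-Euler step matches the exact solution.
exact = E + (0.0 - E) * np.exp(-100.0 / tau)
err = abs(v - exact)
```

Precomputing the decay factor is the payoff of code generation: the symbolic equation is analyzed once, and the simulation loop reduces to a multiply-add per neuron per step.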
Affiliation(s)
- Julien Vitay
- Department of Computer Science, Chemnitz University of Technology, Chemnitz, Germany
- Helge Ü Dinkelbach
- Department of Computer Science, Chemnitz University of Technology, Chemnitz, Germany
- Fred H Hamker
- Department of Computer Science, Chemnitz University of Technology, Chemnitz, Germany; Bernstein Center for Computational Neuroscience, Charité University Medicine Berlin, Germany
111
A spiking neural integrator model of the adaptive control of action by the medial prefrontal cortex. J Neurosci 2014; 34:1892-902. [PMID: 24478368] [DOI: 10.1523/jneurosci.2421-13.2014] Open Access
Abstract
Subjects performing simple reaction-time tasks can improve reaction times by learning the expected timing of action-imperative stimuli and preparing movements in advance. Success or failure on the previous trial is often an important factor for determining whether a subject will attempt to time the stimulus or wait for it to occur before initiating action. The medial prefrontal cortex (mPFC) has been implicated in enabling the top-down control of action depending on the outcome of the previous trial. Analysis of spike activity from the rat mPFC suggests that neural integration is a key mechanism for adaptive control in precisely timed tasks. We show through simulation that a spiking neural network consisting of coupled neural integrators captures the neural dynamics of the experimentally recorded mPFC. Errors lead to deviations in the normal dynamics of the system, a process that could enable learning from past mistakes. We expand on this coupled integrator network to construct a spiking neural network that performs a reaction-time task by following either a cue-response or timing strategy, and show that it performs the task with similar reaction times as experimental subjects while maintaining the same spiking dynamics as the experimentally recorded mPFC.
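The integrate-to-threshold mechanism at the heart of this model can be sketched as a drift-diffusion-style integrator: stronger preparatory drift yields faster responses. This single-unit sketch omits the paper's coupling between integrators, and all parameter values are illustrative:

```python
import numpy as np

def run_trial(drift, threshold=1.0, dt=0.001, noise=0.02, seed=0):
    """Integrate noisy evidence until threshold; the time to cross is the
    reaction time. (The paper couples several spiking integrators; a
    single abstract unit suffices to show the timing behavior.)"""
    rng = np.random.default_rng(seed)
    x, t = 0.0, 0.0
    while x < threshold:
        x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return t

# Strong preparatory drift (timing strategy, e.g. after a successful
# trial) responds faster than weak drift (waiting for the cue).
fast_rt = run_trial(drift=2.0)
slow_rt = run_trial(drift=0.5)
```

An error on the previous trial would correspond to perturbing the integrator's state or drift, shifting the subject back toward the slower cue-response strategy.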