1. Spreizer S, Senk J, Rotter S, Diesmann M, Weyers B. NEST Desktop, an Educational Application for Neuroscience. eNeuro 2021; 8:ENEURO.0274-21.2021. PMID: 34764188; PMCID: PMC8638679; DOI: 10.1523/eneuro.0274-21.2021.
Abstract
Simulation software for spiking neuronal network models has matured over the past decades in terms of performance and flexibility. But the entry barrier remains high for students and early-career scientists in computational neuroscience, since these simulators typically require programming skills and a complex installation. Here, we describe an installation-free graphical user interface (GUI) running in the web browser, which is distinct from the simulation engine that can run anywhere: on the student's laptop or on a supercomputer. This architecture provides robustness against technological changes in the software stack and simplifies deployment for self-education and for teachers. Our new open-source tool, NEST Desktop, comprises graphical elements for creating and configuring network models, running simulations, and visualizing and analyzing the results. NEST Desktop allows students to explore important concepts in computational neuroscience without first having to learn a simulator control language. Our experiences so far highlight that NEST Desktop helps advance both the quality and intensity of teaching in computational neuroscience in regular university courses. We view the availability of the tool on public resources such as EBRAINS, the European ICT infrastructure for neuroscience, as a contribution to equal opportunities.
Affiliation(s)
- Sebastian Spreizer: Faculty of Biology, University of Freiburg, 79104 Freiburg, Germany; Bernstein Center Freiburg, University of Freiburg, 79104 Freiburg, Germany; Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and Jülich Aachen Research Alliance (JARA)-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, 52428 Jülich, Germany; Department of Computer Science, University of Trier, 54296 Trier, Germany
- Johanna Senk: Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and Jülich Aachen Research Alliance (JARA)-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, 52428 Jülich, Germany
- Stefan Rotter: Faculty of Biology, University of Freiburg, 79104 Freiburg, Germany; Bernstein Center Freiburg, University of Freiburg, 79104 Freiburg, Germany
- Markus Diesmann: Institute of Neuroscience and Medicine (INM-6) and Institute for Advanced Simulation (IAS-6) and Jülich Aachen Research Alliance (JARA)-Institute Brain Structure-Function Relationships (INM-10), Jülich Research Centre, 52428 Jülich, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, Rheinisch-Westfälische Technische Hochschule Aachen University, 52074 Aachen, Germany; Department of Physics, Faculty 1, Rheinisch-Westfälische Technische Hochschule Aachen University, 52074 Aachen, Germany
- Benjamin Weyers: Department of Computer Science, University of Trier, 54296 Trier, Germany
2. Crook SM, Davison AP, McDougal RA, Plesser HE. Editorial: Reproducibility and Rigour in Computational Neuroscience. Front Neuroinform 2020; 14:23. PMID: 32536859; PMCID: PMC7267030; DOI: 10.3389/fninf.2020.00023.
Affiliation(s)
- Sharon M Crook: School of Mathematical and Statistical Sciences, School of Life Sciences, Arizona State University, Tempe, AZ, United States
- Andrew P Davison: Department of Integrative and Computational Neuroscience, Paris-Saclay Institute of Neuroscience, CNRS/Université Paris-Saclay, Gif-sur-Yvette, France
- Robert A McDougal: Department of Biostatistics and Center for Medical Informatics, Yale University, New Haven, CT, United States
- Hans Ekkehard Plesser: Faculty of Science and Technology, Norwegian University of Life Sciences, Ås, Norway; Institute of Neuroscience and Medicine (INM-6), Jülich Research Centre, Jülich, Germany
3. Iyengar RS, Pithapuram MV, Singh AK, Raghavan M. Curated Model Development Using NEUROiD: A Web-Based NEUROmotor Integration and Design Platform. Front Neuroinform 2019; 13:56. PMID: 31440153; PMCID: PMC6693358; DOI: 10.3389/fninf.2019.00056.
Abstract
Decades of research on neuromotor circuits and systems have provided valuable information on the neuronal control of movement. Computational models of several elements of the neuromotor system have been developed at various scales, from subcellular to system level. While many small models abound, their structured integration is the key to building larger and more biologically realistic models that can predict the behavior of the system in different scenarios. This effort calls for the integration of elements across neuroscience and musculoskeletal biomechanics. There is also a need for methods and tools for structured integration that yield larger in silico models demonstrating a set of desired system responses. We take a small step in this direction with the NEUROmotor Integration and Design (NEUROiD) platform. NEUROiD helps integrate results from motor systems anatomy, physiology, and biomechanics into an integrated neuromotor system model. Simulation and visualization of the model across multiple scales is supported. Standard electrophysiological operations such as slicing, current injection, and recording of membrane potential and local field potential are part of NEUROiD. The platform allows traceability of model parameters to the primary literature. We illustrate the power and utility of NEUROiD by building a simple ankle model and its controlling neural circuitry by curating a set of published components. NEUROiD allows researchers to utilize remote high-performance computers for simulation while controlling the model using a web browser.
Affiliation(s)
- Raghu Sesha Iyengar: Spine Labs, Department of Biomedical Engineering, Indian Institute of Technology, Hyderabad, India
- Madhav Vinodh Pithapuram: Spine Labs, Department of Biomedical Engineering, Indian Institute of Technology, Hyderabad, India
- Avinash Kumar Singh: Spine Labs, Department of Biomedical Engineering, Indian Institute of Technology, Hyderabad, India
- Mohan Raghavan: Spine Labs, Department of Biomedical Engineering, Indian Institute of Technology, Hyderabad, India
4. Knight JC, Nowotny T. GPUs Outperform Current HPC and Neuromorphic Solutions in Terms of Speed and Energy When Simulating a Highly-Connected Cortical Model. Front Neurosci 2018; 12:941. PMID: 30618570; PMCID: PMC6299048; DOI: 10.3389/fnins.2018.00941.
Abstract
While neuromorphic systems may be the ultimate platform for deploying spiking neural networks (SNNs), their distributed nature and optimization for specific types of models make them unwieldy tools for developing such models. Instead, SNN models tend to be developed and simulated on computers or clusters of computers with standard von Neumann CPU architectures. Over the last decade, as well as becoming a common fixture in many workstations, NVIDIA GPU accelerators have entered the High Performance Computing field and are now used in 50% of the top 10 supercomputing sites worldwide. In this paper we use our GeNN code generator to re-implement two neocortex-inspired, circuit-scale, point neuron network models on GPU hardware. We verify the correctness of our GPU simulations against prior results obtained with NEST running on traditional HPC hardware, and compare the performance with respect to speed and energy consumption against published data from CPU-based HPC and neuromorphic hardware. A full-scale model of a cortical column can be simulated at speeds approaching 0.5× real-time using a single NVIDIA Tesla V100 accelerator, faster than is currently possible using a CPU-based cluster or the SpiNNaker neuromorphic system. In addition, we find that, across a range of GPU systems, the energy to solution as well as the energy per synaptic event of the microcircuit simulation is as much as 14× lower than on either SpiNNaker or CPU-based simulations. Besides the speed and energy consumption of the simulation itself, efficient initialization of models is also a crucial concern, particularly in a research context where repeated runs and parameter-space exploration are required. Therefore, we also introduce in this paper some of the novel parallel initialization methods implemented in the latest version of GeNN and demonstrate how they can enable further speed and energy advantages.
Affiliation(s)
- James C. Knight: Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom
5. Blundell I, Brette R, Cleland TA, Close TG, Coca D, Davison AP, Diaz-Pier S, Fernandez Musoles C, Gleeson P, Goodman DFM, Hines M, Hopkins MW, Kumbhar P, Lester DR, Marin B, Morrison A, Müller E, Nowotny T, Peyser A, Plotnikov D, Richmond P, Rowley A, Rumpe B, Stimberg M, Stokes AB, Tomkins A, Trensch G, Woodman M, Eppler JM. Code Generation in Computational Neuroscience: A Review of Tools and Techniques. Front Neuroinform 2018; 12:68. PMID: 30455637; PMCID: PMC6230720; DOI: 10.3389/fninf.2018.00068.
Abstract
Advances in experimental techniques and computational power, allowing researchers to gather anatomical and electrophysiological data at unprecedented levels of detail, have fostered the development of increasingly complex models in computational neuroscience. Large-scale, biophysically detailed cell models pose a particular set of computational challenges, and this has led to the development of a number of domain-specific simulators. At the other end of the spectrum of detail, the ever-growing variety of point neuron models raises the implementation barrier even for those based on the relatively simple integrate-and-fire neuron model. Independently of model complexity, all modeling methods crucially depend on an efficient and accurate transformation of mathematical model descriptions into efficiently executable code. Neuroscientists usually publish model descriptions in terms of the underlying mathematical equations. However, actually simulating such models requires that these equations be translated into code. This can cause problems: errors may be introduced if the translation is carried out by hand, and code written by neuroscientists may not be very computationally efficient. Furthermore, the translated code may target different hardware platforms or operating system variants, or be written in different languages, and thus cannot easily be combined or even compared. Two main approaches to addressing these issues have been followed. The first is to limit users to a fixed set of optimized models, which limits flexibility. The second is to allow model definitions in a high-level interpreted language, although this may limit performance. Recently, a third approach has become increasingly popular: using code generation to automatically translate high-level descriptions into efficient low-level code, combining the best of the previous approaches. This approach also greatly enriches efforts to standardize simulator-independent model description languages.
In the past few years, a number of code generation pipelines have been developed in the computational neuroscience community, which differ considerably in aim, scope, and functionality. This article provides an overview of existing pipelines currently used within the community and contrasts their capabilities and the technologies and concepts behind them.
Affiliation(s)
- Inga Blundell: Forschungszentrum Jülich, Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich, Germany
- Romain Brette: Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Thomas A. Cleland: Department of Psychology, Cornell University, Ithaca, NY, United States
- Thomas G. Close: Monash Biomedical Imaging, Monash University, Melbourne, VIC, Australia
- Daniel Coca: Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, United Kingdom
- Andrew P. Davison: Unité de Neurosciences, Information et Complexité, CNRS FRE 3693, Gif sur Yvette, France
- Sandra Diaz-Pier: Forschungszentrum Jülich, Simulation Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich, Germany
- Carlos Fernandez Musoles: Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, United Kingdom
- Padraig Gleeson: Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom
- Dan F. M. Goodman: Department of Electrical and Electronic Engineering, Imperial College London, London, United Kingdom
- Michael Hines: Department of Neurobiology, School of Medicine, Yale University, New Haven, CT, United States
- Michael W. Hopkins: Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- Pramod Kumbhar: Blue Brain Project, EPFL, Campus Biotech, Geneva, Switzerland
- David R. Lester: Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- Bóris Marin: Department of Neuroscience, Physiology and Pharmacology, University College London, London, United Kingdom; Centro de Matemática, Computação e Cognição, Universidade Federal do ABC, São Bernardo do Campo, Brazil
- Abigail Morrison: Forschungszentrum Jülich, Institute of Neuroscience and Medicine (INM-6), Institute for Advanced Simulation (IAS-6), JARA BRAIN Institute I, Jülich, Germany; Forschungszentrum Jülich, Simulation Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich, Germany; Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr-University Bochum, Bochum, Germany
- Eric Müller: Kirchhoff-Institute for Physics, Universität Heidelberg, Heidelberg, Germany
- Thomas Nowotny: Centre for Computational Neuroscience and Robotics, School of Engineering and Informatics, University of Sussex, Brighton, United Kingdom
- Alexander Peyser: Forschungszentrum Jülich, Simulation Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich, Germany
- Dimitri Plotnikov: Forschungszentrum Jülich, Simulation Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich, Germany; RWTH Aachen University, Software Engineering, Jülich Aachen Research Alliance, Aachen, Germany
- Paul Richmond: Department of Computer Science, University of Sheffield, Sheffield, United Kingdom
- Andrew Rowley: Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- Bernhard Rumpe: RWTH Aachen University, Software Engineering, Jülich Aachen Research Alliance, Aachen, Germany
- Marcel Stimberg: Sorbonne Université, INSERM, CNRS, Institut de la Vision, Paris, France
- Alan B. Stokes: Advanced Processor Technologies Group, School of Computer Science, University of Manchester, Manchester, United Kingdom
- Adam Tomkins: Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield, United Kingdom
- Guido Trensch: Forschungszentrum Jülich, Simulation Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich, Germany
- Marmaduke Woodman: Institut de Neurosciences des Systèmes, Aix Marseille Université, Marseille, France
- Jochen Martin Eppler: Forschungszentrum Jülich, Simulation Lab Neuroscience, Jülich Supercomputing Centre, Institute for Advanced Simulation, Jülich Aachen Research Alliance, Jülich, Germany
6. Sen-Bhattacharya B, James S, Rhodes O, Sugiarto I, Rowley A, Stokes AB, Gurney K, Furber SB. Building a Spiking Neural Network Model of the Basal Ganglia on SpiNNaker. IEEE Trans Cogn Dev Syst 2018. DOI: 10.1109/tcds.2018.2797426.
7. Cope AJ, Vasilaki E, Minors D, Sabo C, Marshall JAR, Barron AB. Abstract concept learning in a simple neural network inspired by the insect brain. PLoS Comput Biol 2018; 14:e1006435. PMID: 30222735; PMCID: PMC6160224; DOI: 10.1371/journal.pcbi.1006435.
Abstract
The capacity to learn abstract concepts such as 'sameness' and 'difference' is considered a higher-order cognitive function, typically thought to depend on top-down neocortical processing. It is therefore surprising that honey bees apparently have this capacity. Here we report a model of the structures of the honey bee brain that can learn sameness and difference, as well as a range of complex and simple associative learning tasks. Our model is constrained by the known connections and properties of the mushroom body, including the protocerebral tract, and provides a good fit to the learning rates and performances of real bees in all tasks, including learning sameness and difference. The model proposes a novel mechanism for learning the abstract concepts of 'sameness' and 'difference' that is compatible with the insect brain and does not depend on top-down or executive control processing.
Affiliation(s)
- Alex J. Cope: Department of Computer Science, University of Sheffield, Sheffield, UK; Sheffield Robotics, University of Sheffield, Sheffield, UK
- Eleni Vasilaki: Department of Computer Science, University of Sheffield, Sheffield, UK; Sheffield Robotics, University of Sheffield, Sheffield, UK
- Dorian Minors: Department of Biological Sciences, Macquarie University, Sydney, Australia
- Chelsea Sabo: Department of Computer Science, University of Sheffield, Sheffield, UK; Sheffield Robotics, University of Sheffield, Sheffield, UK
- James A. R. Marshall: Department of Computer Science, University of Sheffield, Sheffield, UK; Sheffield Robotics, University of Sheffield, Sheffield, UK
- Andrew B. Barron: Department of Biological Sciences, Macquarie University, Sydney, Australia
8. James SS, Papapavlou C, Blenkinsop A, Cope AJ, Anderson SR, Moustakas K, Gurney KN. Integrating Brain and Biomechanical Models - A New Paradigm for Understanding Neuro-muscular Control. Front Neurosci 2018; 12:39. PMID: 29467606; PMCID: PMC5808253; DOI: 10.3389/fnins.2018.00039.
Abstract
To date, realistic models of how the central nervous system governs behavior have been restricted in scope to the brain, brainstem, or spinal column, as if these existed as disembodied organs. Further, such models are often exercised in relation to an in vivo physiological experiment, with input comprising an impulse, a periodic signal, or constant activation, and output as a pattern of neural activity in one or more neural populations. Any link to behavior is inferred only indirectly via these activity patterns. We argue that to discover the principles of operation of neural systems, it is necessary to express their behavior in terms of physical movements of a realistic motor system, and to supply inputs that mimic sensory experience. To do this with confidence, we must connect our brain models to neuromuscular models and provide relevant visual and proprioceptive feedback signals, thereby closing the loop of the simulation. This paper describes an effort to develop just such an integrated brain and biomechanical system using a number of pre-existing models. It describes a model of the saccadic oculomotor system incorporating a neuromuscular model of the eye and its six extraocular muscles. The position of the eye determines how illumination of a retinotopic input population projects information about the location of a saccade target into the system. A pre-existing saccadic burst generator model was incorporated into the system, generating motoneuron activity patterns suitable for driving the biomechanical eye. The model was demonstrated to make accurate saccades to a target luminance under a set of environmental constraints. Challenges encountered in the development of this model showed the importance of this integrated modeling approach: we exposed shortcomings in individual model components that were only apparent when these were supplied with the more plausible inputs available in a closed-loop design, and consequently we were able to suggest missing functionality that the system would require to reproduce more realistic behavior. The construction of such closed-loop animal models constitutes a new paradigm of computational neurobehavior and promises a more thoroughgoing approach to our understanding of the brain's function as a controller for movement and behavior.
Affiliation(s)
- Sebastian S. James: Adaptive Behaviour Research Group, Department of Psychology, The University of Sheffield, Sheffield, United Kingdom; Insigneo Institute for In-Silico Medicine, The University of Sheffield, Sheffield, United Kingdom
- Chris Papapavlou: Department of Electrical and Computer Engineering, The University of Patras, Patras, Greece
- Alexander Blenkinsop: Adaptive Behaviour Research Group, Department of Psychology, The University of Sheffield, Sheffield, United Kingdom; Insigneo Institute for In-Silico Medicine, The University of Sheffield, Sheffield, United Kingdom
- Alexander J. Cope: Department of Computer Science, The University of Sheffield, Sheffield, United Kingdom
- Sean R. Anderson: Insigneo Institute for In-Silico Medicine, The University of Sheffield, Sheffield, United Kingdom; Department of Automatic Control Systems Engineering, The University of Sheffield, Sheffield, United Kingdom
- Konstantinos Moustakas: Department of Electrical and Computer Engineering, The University of Patras, Patras, Greece
- Kevin N. Gurney: Adaptive Behaviour Research Group, Department of Psychology, The University of Sheffield, Sheffield, United Kingdom; Insigneo Institute for In-Silico Medicine, The University of Sheffield, Sheffield, United Kingdom
9. Buxton D, Bracci E, Overton PG, Gurney K. Striatal Neuropeptides Enhance Selection and Rejection of Sequential Actions. Front Comput Neurosci 2017; 11:62. PMID: 28798678; PMCID: PMC5529366; DOI: 10.3389/fncom.2017.00062.
Abstract
The striatum is the primary input nucleus of the basal ganglia and receives glutamatergic afferents from the cortex. Under the hypothesis that the basal ganglia perform action selection, these cortical afferents encode potential “action requests.” Previous studies have suggested the striatum may utilize a mutually inhibitory network of medium spiny neurons (MSNs) to filter these requests so that only those of high salience are selected. However, the mechanisms enabling the striatum to perform clean, rapid switching between distinct actions that form part of a learned action sequence are still poorly understood. Substance P (SP) and enkephalin are neuropeptides co-released with GABA in MSNs preferentially expressing D1 or D2 dopamine receptors, respectively. SP has a facilitatory effect on subsequent glutamatergic inputs to target MSNs, while enkephalin has an inhibitory effect. Blocking the action of SP in the striatum is also known to affect behavioral transitions. We constructed phenomenological models of the effects of SP and enkephalin, and integrated these into a hybrid model of the basal ganglia comprising a spiking striatal microcircuit and rate-coded populations representing the other major structures. We demonstrated that diffuse neuropeptide connectivity enhanced the selection of unordered action requests, and that for true action sequences, where action semantics define a fixed structure, a patterning of the SP connectivity reflecting this ordering enhanced selection of actions presented in the correct sequential order and suppressed incorrect ordering. We also showed that selective pruning of SP connections allowed context-sensitive inhibition of specific undesirable requests that otherwise interfered with selection of an action group.
Our model suggests that the interaction of SP and enkephalin enhances the contrast between selection and rejection of action requests, and that patterned SP connectivity in the striatum allows the “chunking” of actions and improves selection of sequences. Efficient execution of action sequences may therefore result from a combination of ordered cortical inputs and patterned neuropeptide connectivity within the striatum.
Affiliation(s)
- David Buxton: Adaptive Behaviour Research Group, Department of Psychology, The University of Sheffield, Sheffield, United Kingdom
- Enrico Bracci: Adaptive Behaviour Research Group, Department of Psychology, The University of Sheffield, Sheffield, United Kingdom
- Paul G Overton: Adaptive Behaviour Research Group, Department of Psychology, The University of Sheffield, Sheffield, United Kingdom
- Kevin Gurney: Adaptive Behaviour Research Group, Department of Psychology, The University of Sheffield, Sheffield, United Kingdom
10. Neuronify: An Educational Simulator for Neural Circuits. eNeuro 2017; 4:eN-MNT-0022-17. PMID: 28321440; PMCID: PMC5355897; DOI: 10.1523/eneuro.0022-17.2017.
Abstract
Educational software (apps) can improve science education by providing an interactive way of learning about complicated topics that are hard to explain with text and static illustrations. However, few educational apps are available for the simulation of neural networks. Here, we describe an educational app, Neuronify, allowing the user to easily create and explore neural networks in a plug-and-play simulation environment. The user can pick network elements with adjustable parameters from a menu: synaptically connected neurons modelled as integrate-and-fire neurons, various stimulators (current sources, spike generators, visual, and touch), and recording devices (voltmeter, spike detector, and loudspeaker). We aim to provide a low entry point to simulation-based neuroscience by allowing students with no programming experience to create and simulate neural networks. To facilitate the use of Neuronify in teaching, a set of premade common network motifs is provided, performing functions such as input summation, gain control by inhibition, and detection of the direction of stimulus movement. Neuronify is developed in C++ and QML using the cross-platform application framework Qt and runs on smartphones (Android, iOS) and tablet computers as well as personal computers (Windows, Mac, Linux).
11. A computational model of the integration of landmarks and motion in the insect central complex. PLoS One 2017; 12:e0172325. PMID: 28241061; PMCID: PMC5328262; DOI: 10.1371/journal.pone.0172325.
Abstract
The insect central complex (CX) is an enigmatic structure whose computational function has evaded inquiry, but has been implicated in a wide range of behaviours. Recent experimental evidence from the fruit fly (Drosophila melanogaster) and the cockroach (Blaberus discoidalis) has demonstrated the existence of neural activity corresponding to the animal's orientation within a virtual arena (a neural 'compass'), and this provides an insight into one component of the CX structure. There are two key features of the compass activity: an offset between the angle represented by the compass and the true angular position of visual features in the arena, and the remapping of the 270° visual arena onto an entire circle of neurons in the compass. Here we present a computational model which can reproduce this experimental evidence in detail, and predicts the computational mechanisms that underlie the data. We predict that both the offset and remapping of the fly's orientation onto the neural compass can be explained by plasticity in the synaptic weights between segments of the visual field and the neurons representing orientation. Furthermore, we predict that this learning is reliant on the existence of neural pathways that detect rotational motion across the whole visual field and uses this rotation signal to drive the rotation of activity in a neural ring attractor. Our model also reproduces the 'transitioning' between visual landmarks seen when rotationally symmetric landmarks are presented. This model can provide the basis for further investigation into the role of the central complex, which promises to be a key structure for understanding insect behaviour, as well as suggesting approaches towards creating fully autonomous robotic agents.