1. Wärnberg E, Kumar A. Feasibility of dopamine as a vector-valued feedback signal in the basal ganglia. Proc Natl Acad Sci U S A 2023; 120:e2221994120. PMID: 37527344; PMCID: PMC10410740; DOI: 10.1073/pnas.2221994120.
Abstract
It is well established that midbrain dopaminergic neurons support reinforcement learning (RL) in the basal ganglia by transmitting a reward prediction error (RPE) to the striatum. In particular, different computational models and experiments have shown that a striatum-wide RPE signal can support RL over a small discrete set of actions (e.g., go/no-go, choose left/right). However, there is accumulating evidence that the basal ganglia function not as a selector between predefined actions but rather as a dynamical system with graded, continuous outputs. To reconcile this view with RL, there is a need to explain how dopamine could support learning of continuous outputs, rather than discrete action values. Inspired by recent observations that, besides RPE, the firing rates of midbrain dopaminergic neurons correlate with motor and cognitive variables, we propose a model in which the dopamine signal in the striatum carries a vector-valued error feedback signal (a loss gradient) instead of a homogeneous scalar error (a loss). We implement a local, "three-factor" corticostriatal plasticity rule involving the presynaptic firing rate, a postsynaptic factor, and the unique dopamine concentration perceived by each striatal neuron. With this learning rule, we show that such a vector-valued feedback signal results in an increased capacity to learn a multidimensional series of real-valued outputs. Crucially, we demonstrate that this plasticity rule does not require precise nigrostriatal synapses but remains compatible with experimental observations of random placement of varicosities and diffuse volume transmission of dopamine.
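The three-factor rule described here is a simple product of local terms. As a minimal sketch only (the function, rates, and dopamine values below are hypothetical illustrations, not the paper's implementation), this contrasts a scalar RPE broadcast to all synapses with a vector-valued signal that gives each striatal neuron its own error component:

```python
def three_factor_update(w, pre_rate, post_factor, dopamine, lr=0.01):
    # Local corticostriatal update: presynaptic rate x postsynaptic factor
    # x the dopamine concentration perceived by this particular neuron.
    return w + lr * pre_rate * post_factor * dopamine

weights = [0.5, 0.5, 0.5, 0.5]
scalar_rpe = 0.8                      # homogeneous scalar error (a loss)
vector_err = [0.8, -0.3, 0.1, -0.6]   # per-neuron components (a loss gradient)

# Scalar broadcast: every synapse moves in the same direction.
scalar_new = [three_factor_update(w, 1.0, 1.0, scalar_rpe) for w in weights]
# Vector feedback: each neuron's synapses follow their own error component,
# so some weights grow while others shrink on the same trial.
vector_new = [three_factor_update(w, 1.0, 1.0, d)
              for w, d in zip(weights, vector_err)]
```

The difference in learning capacity argued in the paper comes from exactly this: a vector signal can push different output dimensions in different directions simultaneously.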
Affiliation(s)
- Emil Wärnberg: Department of Neuroscience, Karolinska Institutet, 171 77 Stockholm, Sweden; Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 114 28 Stockholm, Sweden
- Arvind Kumar: Division of Computational Science and Technology, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, 114 28 Stockholm, Sweden
2. Dumont NSY, Furlong PM, Orchard J, Eliasmith C. Exploiting semantic information in a spiking neural SLAM system. Front Neurosci 2023; 17:1190515. PMID: 37476829; PMCID: PMC10354246; DOI: 10.3389/fnins.2023.1190515.
Abstract
To navigate in new environments, an animal must be able to keep track of its position while simultaneously creating and updating an internal map of features in the environment, a problem formulated as simultaneous localization and mapping (SLAM) in the field of robotics. This requires integrating information from different domains, including self-motion cues, sensory information, and semantic information. Several specialized neuron classes have been identified in the mammalian brain as being involved in solving SLAM. While biology has inspired a whole class of SLAM algorithms, the use of semantic information has not been explored in such work. We present a novel, biologically plausible SLAM model called SSP-SLAM: a spiking neural network designed using tools for large-scale cognitive modeling. Our model uses a vector representation of continuous spatial maps, which can be encoded via spiking neural activity and bound with other features (continuous and discrete) to create compressed structures containing semantic information from multiple domains (e.g., spatial, temporal, visual, conceptual). We demonstrate that the dynamics of these representations can be implemented with a hybrid oscillatory-interference and continuous attractor network of head direction cells. The estimated self-position from this network is used to learn an associative memory between semantically encoded landmarks and their positions, i.e., an environment map, which is used for loop closure. Our experiments demonstrate that environment maps can be learned accurately and their use greatly improves self-position estimation. Furthermore, grid cells, place cells, and object vector cells are observed in this model. We also run our path integrator network on the NengoLoihi neuromorphic emulator to demonstrate feasibility of a full neuromorphic implementation for energy-efficient SLAM.
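Binding continuous spatial representations with discrete features in vector-symbolic models of this kind is typically done with circular convolution, with circular correlation as the approximate inverse. A rough pure-Python sketch (the dimensionality and the names `bind`/`unbind` are illustrative, not this paper's API):

```python
import math
import random

def bind(a, b):
    # Circular convolution: the binding operator of holographic/VSA models.
    n = len(a)
    return [sum(a[j] * b[(i - j) % n] for j in range(n)) for i in range(n)]

def unbind(a, c):
    # Circular correlation: approximate inverse, recovers b from bind(a, b).
    n = len(a)
    return [sum(a[j] * c[(i + j) % n] for j in range(n)) for i in range(n)]

def cosine(a, b):
    num = sum(x * y for x, y in zip(a, b))
    return num / math.sqrt(sum(x * x for x in a) * sum(y * y for y in b))

random.seed(1)
n = 256

def rand_vec():
    return [random.gauss(0.0, 1.0 / math.sqrt(n)) for _ in range(n)]

landmark, position, other = rand_vec(), rand_vec(), rand_vec()
memory = bind(landmark, position)      # store "landmark at position"
recovered = unbind(landmark, memory)   # query the landmark's position
# `recovered` is a noisy copy of `position`: far more similar to it
# than to an unrelated vector.
```

A production system would use FFTs for the convolutions; the O(n^2) loops here are just for clarity.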
3. Coarse-Grained Neural Network Model of the Basal Ganglia to Simulate Reinforcement Learning Tasks. Brain Sci 2022; 12:262. PMID: 35204025; PMCID: PMC8870197; DOI: 10.3390/brainsci12020262.
Abstract
Computational models of the basal ganglia (BG) provide a mechanistic account of different phenomena observed during reinforcement learning tasks performed by healthy individuals, as well as by patients with various nervous or mental disorders. The aim of the present work was to develop a BG model that represents a good compromise between simplicity and completeness. Based on more complex (fine-grained neural network, FGNN) models, we developed a new (coarse-grained neural network, CGNN) model by replacing layers of neurons with single nodes that represent the collective behavior of a given layer, while preserving the fundamental anatomical structures of the BG. We then compared the functionality of the FGNN and CGNN models on several reinforcement learning tasks that engage BG circuitry, such as the Probabilistic Selection Task, the Probabilistic Reversal Learning Task, and the Instructed Probabilistic Selection Task. We showed that the CGNN retains functionality that mirrors behavior in the reinforcement learning tasks most often used in human studies. The simplification of the CGNN model reduces its flexibility but improves the readability of the signal flow in comparison to more detailed FGNN models and can thus better support translation between clinical neuroscience and computational modeling.
4. Humphries MD, Gurney K. Making decisions in the dark basement of the brain: A look back at the GPR model of action selection and the basal ganglia. Biol Cybern 2021; 115:323-329. PMID: 34272969; DOI: 10.1007/s00422-021-00887-5.
Abstract
How does your brain decide what you will do next? Over the past few decades compelling evidence has emerged that the basal ganglia, a collection of nuclei in the fore- and mid-brain of all vertebrates, are vital to action selection. Gurney, Prescott, and Redgrave published an influential computational account of this idea in Biological Cybernetics in 2001. Here we take a look back at this pair of papers, outlining the "GPR" model contained therein, the context of that model's development, and the influence it has had over the past twenty years. Tracing its lineage into models and theories still emerging now, we are encouraged that the GPR model is that rare thing, a computational model of a brain circuit whose advances were directly built on by others.
5.
Abstract
To improve the understanding of cognitive processing stages, we combined two prominent traditions in cognitive science: evidence accumulation models and stage discovery methods. While evidence accumulation models have been applied to a wide variety of tasks, they are limited to tasks in which decision-making effects can be attributed to a single processing stage. Here, we propose a new method that first uses machine learning to discover processing stages in EEG data and then applies evidence accumulation models to characterize the duration effects in the identified stages. To evaluate this method, we applied it to a previously published associative recognition task (Application 1) and a previously published random dot motion task with a speed-accuracy trade-off manipulation (Application 2). In both applications, the evidence accumulation models accounted better for the data when we first applied the stage-discovery method, and the resulting parameter estimates were generally in line with psychological theories. In addition, in Application 1 the results shed new light on target-foil effects in associative recognition, while in Application 2 the stage discovery method identified an additional stage in the accuracy-focused condition, challenging standard evidence accumulation accounts. We conclude that the new framework provides a powerful new tool to investigate processing stages.
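The evidence accumulation models referred to here belong to the drift-diffusion family. A minimal sketch (all parameter values are illustrative, not from the study) of how a higher drift rate, i.e. an easier decision, produces more correct bound crossings:

```python
import math
import random

def ddm_trial(drift, threshold=1.0, noise=1.0, dt=0.001, max_t=10.0):
    # Accumulate noisy evidence until one of two symmetric bounds is crossed.
    x, t = 0.0, 0.0
    while abs(x) < threshold and t < max_t:
        x += drift * dt + noise * math.sqrt(dt) * random.gauss(0.0, 1.0)
        t += dt
    return x >= threshold, t  # (upper "correct" bound reached, decision time)

random.seed(2)

def accuracy(drift, trials=300):
    return sum(ddm_trial(drift)[0] for _ in range(trials)) / trials

easy, hard = accuracy(2.0), accuracy(0.5)
# Higher drift yields more correct responses; lowering `threshold` instead
# would trade accuracy for speed, as in the speed-accuracy manipulation above.
```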
6. Berberyan HS, van Maanen L, van Rijn H, Borst J. EEG-based Identification of Evidence Accumulation Stages in Decision-Making. J Cogn Neurosci 2020; 33:510-527. PMID: 33326329; DOI: 10.1162/jocn_a_01663.
Abstract
Dating back to the 19th century, the discovery of processing stages has been of great interest to researchers in cognitive science. The goal of this paper is to demonstrate the validity of a recently developed method, hidden semi-Markov model multivariate pattern analysis (HsMM-MVPA), for discovering stages directly from EEG data, in contrast to classical reaction-time-based methods. To test the validity of stages discovered with the HsMM-MVPA method, we applied it to two relatively simple tasks in which the interpretation of processing stages is straightforward. In these EEG experiments on visual discrimination, perceptual processing and decision difficulty were manipulated. The HsMM-MVPA revealed that participants progressed through five cognitive processing stages while performing these tasks. The brain activation of one of those stages depended on perceptual processing, whereas the brain activation and the duration of two other stages depended on decision difficulty. In addition, evidence accumulation models (EAMs) were used to assess to what extent the results of the HsMM-MVPA are comparable to standard reaction-time-based methods. Consistent with the HsMM-MVPA results, EAMs showed that nondecision time varied with perceptual difficulty and drift rate varied with decision difficulty. Moreover, nondecision and decision time of the EAMs correlated highly with the first two and last three stages of the HsMM-MVPA, respectively, indicating that the HsMM-MVPA gives a more detailed description of the stages inferred with the classical method. The results demonstrate that cognitive stages can be robustly inferred with the HsMM-MVPA.
7. Kelly MA, Arora N, West RL, Reitter D. Holographic Declarative Memory: Distributional Semantics as the Architecture of Memory. Cogn Sci 2020; 44:e12904. PMID: 33140517; DOI: 10.1111/cogs.12904.
Abstract
We demonstrate that the key components of cognitive architectures (declarative and procedural memory) and their key capabilities (learning, memory retrieval, probability judgment, and utility estimation) can be implemented as algebraic operations on vectors and tensors in a high-dimensional space using a distributional semantics model. High-dimensional vector spaces underlie the success of modern machine learning techniques based on deep learning. However, while neural networks have an impressive ability to process data to find patterns, they do not typically model high-level cognition, and it is often unclear how they work. Symbolic cognitive architectures can capture the complexities of high-level cognition and provide human-readable, explainable models, but scale poorly to naturalistic, non-symbolic, or big data. Vector-symbolic architectures, where symbols are represented as vectors, bridge the gap between the two approaches. We posit that cognitive architectures, if implemented in a vector-space model, represent a useful, explanatory model of the internal representations of otherwise opaque neural architectures. Our proposed model, Holographic Declarative Memory (HDM), is a vector-space model based on distributional semantics. HDM accounts for primacy and recency effects in free recall, the fan effect in recognition, probability judgments, and human performance on an iterated decision task. HDM provides a flexible, scalable alternative to symbolic cognitive architectures at a level of description that bridges symbolic, quantum, and neural models of cognition.
Affiliation(s)
- Mary Alexandria Kelly: Department of Computer Science, Bucknell University; College of Information Sciences and Computing, The Pennsylvania State University
- Nipun Arora: Department of Cognitive Science, Carleton University
- Robert L West: Department of Cognitive Science, Carleton University
- David Reitter: College of Information Sciences and Computing, The Pennsylvania State University; Google Research
8. González-Redondo Á, Naveros F, Ros E, Garrido JA. A Basal Ganglia Computational Model to Explain the Paradoxical Sensorial Improvement in the Presence of Huntington's Disease. Int J Neural Syst 2020; 30:2050057. PMID: 32840409; DOI: 10.1142/s0129065720500574.
Abstract
The basal ganglia (BG) represent a critical center of the nervous system for sensorial discrimination. Although it is known that Huntington's disease (HD) affects this brain area, it remains unclear how HD patients achieve paradoxical improvement in sensorial discrimination tasks. This paper presents a computational model of the BG including the main nuclei and the typical firing properties of their neurons. The BG model has been embedded within an auditory signal detection task. We have emulated the effect that altered dopamine levels and the degree of HD affectation have on information processing at different layers of the BG, and how these aspects shape transient and steady states differently throughout the selection task. Extracting the independent components of BG activity in different populations shows that early and medium stages of HD affectation may enhance transient activity in the striatum and the substantia nigra pars reticulata. These results represent a possible explanation for the paradoxical improvement that HD patients present in discrimination task performance. Thus, this paper provides a novel understanding of how the fast dynamics of the BG network at different layers interact and enable transient states to emerge throughout the successive neuron populations.
Affiliation(s)
- Francisco Naveros: Department of Computer Architecture and Technology, University of Granada, Granada, Spain
- Eduardo Ros: Department of Computer Architecture and Technology, University of Granada, Granada, Spain
- Jesús A Garrido: Department of Computer Architecture and Technology, University of Granada, Granada, Spain
9. Stille CM, Bekolay T, Blouw P, Kröger BJ. Modeling the Mental Lexicon as Part of Long-Term and Working Memory and Simulating Lexical Access in a Naming Task Including Semantic and Phonological Cues. Front Psychol 2020; 11:1594. PMID: 32774315; PMCID: PMC7381331; DOI: 10.3389/fpsyg.2020.01594.
Abstract
BACKGROUND: To produce and understand words, humans access the mental lexicon. From a functional perspective, the long-term memory component of the mental lexicon comprises three levels: the concept level, the lemma level, and the phonological level. At each level, different kinds of word information are stored. Semantic as well as phonological cues can help to facilitate word access during a naming task, especially when neural dysfunctions are present. The processing corresponding to word access occurs in specific parts of working memory. Neural models for simulating speech processing help to uncover the complex relationships between neural dysfunctions and the corresponding behavioral patterns. METHODS: The Neural Engineering Framework (NEF) and the Semantic Pointer Architecture (SPA) are used to develop a quantitative neural model of the mental lexicon and its access during speech processing. By simulating a picture-naming task (WWT 6-10), the influence of cues is investigated by introducing neural dysfunctions into the model at different levels of the mental lexicon. RESULTS: First, the neural model is able to simulate the test behavior of normal children that exhibit no lexical dysfunction. Second, the model shows worse test performance as larger degrees of dysfunction are introduced. Third, if the severity of dysfunction is not too high, phonological and semantic cues lead to an increase in the number of correctly named words, with phonological cues being more effective than semantic cues. CONCLUSION: Our simulation results are in line with human experimental data. Specifically, phonological cues seem not only to activate phonologically similar items within the phonological level but also to support higher-level processing during access of the mental lexicon. Thus, the neural model introduced in this paper offers a promising approach to modeling the mental lexicon and to incorporating it into a complex model of language processing.
Affiliation(s)
- Catharina Marie Stille: Department for Phoniatrics, Pedaudiology, and Communication Disorders, Faculty of Medicine, RWTH Aachen University, Aachen, Germany
- Trevor Bekolay: Applied Brain Research, Waterloo, ON, Canada; Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Peter Blouw: Applied Brain Research, Waterloo, ON, Canada; Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Bernd J. Kröger: Department for Phoniatrics, Pedaudiology, and Communication Disorders, Faculty of Medicine, RWTH Aachen University, Aachen, Germany
10. Girard B, Lienard J, Gutierrez CE, Delord B, Doya K. A biologically constrained spiking neural network model of the primate basal ganglia with overlapping pathways exhibits action selection. Eur J Neurosci 2020; 53:2254-2277. PMID: 32564449; PMCID: PMC8246891; DOI: 10.1111/ejn.14869.
Abstract
Action selection has been hypothesized to be a key function of the basal ganglia, yet the nuclei involved, their interactions, and the importance of direct/indirect pathway segregation in this process remain debated. Here, we design a spiking computational model of the monkey basal ganglia derived from a previously published population model, initially parameterized to reproduce electrophysiological activity at rest and to embody as much quantitative anatomical data as possible. As a particular feature, both models exhibit the strong overlap between the direct and indirect pathways that has been documented in non-human primates. Here, we first show how the translation from a population to an individual neuron model was achieved, with the addition of a minimal number of parameters. We then show that our model performs action selection, even though it was built without any assumption on the activity carried out during behaviour. We investigate the mechanisms of this selection through circuit disruptions and find an instrumental role of the off-centre/on-surround structure of the MSN-STN-GPi circuit, as well as of the MSN-MSN and FSI-MSN projections, validating their potency in enabling selection. We finally study the pervasive centromedian and parafascicular thalamic inputs that reach all basal ganglia nuclei and whose influence is therefore difficult to anticipate. Our model predicts that these inputs modulate the responsiveness of action selection, making them a candidate for the regulation of the speed-accuracy trade-off during decision-making.
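The off-centre/on-surround selection mechanism can be caricatured in a few lines (a toy rate sketch with made-up gains, not the spiking model itself): focused direct-pathway inhibition subtracts each channel's own salience from the tonic GPi output, while diffuse STN-like excitation adds a term shared by all channels, so only the most salient channel is driven below baseline and disinhibits its target:

```python
def gpi_output(saliences, baseline=1.0, focused_gain=1.0, diffuse_gain=0.4):
    # Each channel: tonic baseline - focused (off-centre) inhibition by its
    # own salience + diffuse (on-surround) excitation from the summed input.
    surround = diffuse_gain * sum(saliences)
    return [max(0.0, baseline - focused_gain * s + surround) for s in saliences]

saliences = [0.2, 0.9, 0.4]
gpi = gpi_output(saliences)
selected = min(range(len(gpi)), key=gpi.__getitem__)
# The channel with the highest salience has the lowest GPi output,
# i.e. its target is disinhibited: the action is selected.
```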
Affiliation(s)
- Benoît Girard: Institut des Systèmes Intelligents et de Robotique (ISIR), Sorbonne Université, CNRS, Paris, France
- Jean Lienard: Neural Computation Unit, Okinawa Institute of Science and Technology, Kunigami-gun, Japan
- Bruno Delord: Institut des Systèmes Intelligents et de Robotique (ISIR), Sorbonne Université, CNRS, Paris, France
- Kenji Doya: Neural Computation Unit, Okinawa Institute of Science and Technology, Kunigami-gun, Japan
11. Pals M, Stewart TC, Akyürek EG, Borst JP. A functional spiking-neuron model of activity-silent working memory in humans based on calcium-mediated short-term synaptic plasticity. PLoS Comput Biol 2020; 16:e1007936. PMID: 32516337; PMCID: PMC7282629; DOI: 10.1371/journal.pcbi.1007936.
Abstract
In this paper, we present a functional spiking-neuron model of human working memory (WM). This model combines neural firing for encoding of information with activity-silent maintenance. While it used to be widely assumed that information in WM is maintained through persistent recurrent activity, recent studies have shown that information can be maintained without persistent firing; instead, information can be stored in activity-silent states. A candidate mechanism underlying this type of storage is short-term synaptic plasticity (STSP), by which the strength of connections between neurons rapidly changes to encode new information. To demonstrate that STSP can lead to functional behavior, we integrated STSP by means of calcium-mediated synaptic facilitation in a large-scale spiking-neuron model and added a decision mechanism. The model was used to simulate a recent study that measured behavior and EEG activity of participants in three delayed-response tasks. In these tasks, one or two visual gratings had to be maintained in WM, and compared to subsequent probes. The original study demonstrated that WM contents and their priority status could be decoded from neural activity elicited by a task-irrelevant stimulus displayed during the activity-silent maintenance period. In support of our model, we show that it can perform these tasks, and that both its behavior as well as its neural representations are in agreement with the human data. We conclude that information in WM can be effectively maintained in activity-silent states by means of calcium-mediated STSP.

Mentally maintaining information for short periods of time in working memory is crucial for human adaptive behavior. It was recently shown that the human brain does not only store information through neural firing, as was widely believed, but also maintains information in activity-silent states. Here, we present a detailed neural model of how this could happen in our brain through short-term synaptic plasticity: rapidly adapting the connection strengths between neurons in response to incoming information. By reactivating the adapted network, the stored information can be read out later. We show that our model can perform three working memory tasks as accurately as human participants can, while using similar mental representations. We conclude that our model is a plausible and effective neural implementation of human working memory.
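Calcium-mediated synaptic facilitation of the kind used here follows the Mongillo/Tsodyks-Markram short-term plasticity equations. A rough Euler-integration sketch (parameter values are assumed for illustration, not taken from the paper):

```python
def stsp_step(u, x, spiked, dt=0.001, U=0.2, tau_facil=1.5, tau_rec=0.2):
    # u: utilization (~residual presynaptic calcium; decays slowly -> facilitation)
    # x: available neurotransmitter resources (recover quickly -> depression)
    u += dt * (U - u) / tau_facil
    x += dt * (1.0 - x) / tau_rec
    if spiked:
        u += U * (1.0 - u)   # calcium influx boosts release probability
        x -= u * x           # this spike consumes resources in proportion u*x
    return u, x

# Encode an item with a 200 ms, ~50 Hz burst, then stay silent for 500 ms.
u, x = 0.2, 1.0
for step in range(700):
    t = step * 0.001
    spiked = t < 0.2 and step % 20 == 0
    u, x = stsp_step(u, x, spiked)
# After the silent delay the firing is gone, but u remains well above its
# baseline U while x has recovered: an activity-silent trace in the synapses
# that a later read-out pulse can reactivate.
```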
Affiliation(s)
- Matthijs Pals: Bernoulli Institute, University of Groningen, Groningen, The Netherlands
- Terrence C. Stewart: Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, Ontario, Canada
- Elkan G. Akyürek: Department of Experimental Psychology, University of Groningen, Groningen, The Netherlands
- Jelmer P. Borst: Bernoulli Institute, University of Groningen, Groningen, The Netherlands; Groningen Cognitive Systems and Materials Center, University of Groningen, Groningen, The Netherlands
12. A general method to generate artificial spike train populations matching recorded neurons. J Comput Neurosci 2020; 48:47-63. PMID: 31974719; DOI: 10.1007/s10827-020-00741-w.
Abstract
We developed a general method to generate populations of artificial spike trains (ASTs) that match the statistics of recorded neurons. The method is based on computing a Gaussian local rate function of the recorded spike trains, which results in rate templates from which ASTs are drawn as gamma distributed processes with a refractory period. Multiple instances of spike trains can be sampled from the same rate templates. Importantly, we can manipulate rate-covariances between spike trains by performing simple algorithmic transformations on the rate templates, such as filtering or amplifying specific frequency bands, and adding behavior related rate modulations. The method was examined for accuracy and limitations using surrogate data such as sine wave rate templates, and was then verified for recorded spike trains from cerebellum and cerebral cortex. We found that ASTs generated with this method can closely follow the firing rate and local as well as global spike time variance and power spectrum. The method is primarily intended to generate well-controlled spike train populations as inputs for dynamic clamp studies or biophysically realistic multicompartmental models. Such inputs are essential to study detailed properties of synaptic integration with well-controlled input patterns that mimic the in vivo situation while allowing manipulation of input rate covariances at different time scales.
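The recipe can be sketched as follows (a simplification of the paper's method: the rate template is evaluated at the start of each interval rather than via exact time rescaling, and the shape and refractory values are illustrative):

```python
import math
import random

def artificial_spike_train(rate_fn, duration, shape=3.0, refractory=0.002):
    # Inter-spike intervals: an absolute refractory period plus a gamma-
    # distributed interval (order `shape`) whose mean tracks the local rate.
    t, spikes = 0.0, []
    while True:
        rate = max(rate_fn(t), 1e-9)
        t += refractory + random.gammavariate(shape, 1.0 / (rate * shape))
        if t >= duration:
            return spikes
        spikes.append(t)

random.seed(3)
# A sine-wave rate template, as used to validate the method in the paper.
def template(t):
    return 20.0 + 10.0 * math.sin(2 * math.pi * t)

train = artificial_spike_train(template, duration=10.0)
# Multiple independent trains can be drawn from the same template, and rate
# covariances introduced by transforming the template itself (filtering,
# amplifying frequency bands, adding shared modulations).
```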
13. Experimental Study of Reinforcement Learning in Mobile Robots Through Spiking Architecture of Thalamo-Cortico-Thalamic Circuitry of Mammalian Brain. Robotica 2019. DOI: 10.1017/s0263574719001632.
Abstract
In this paper, the behavioral learning of robots through spiking neural networks is studied, with the network architecture based on the thalamo-cortico-thalamic circuitry of the mammalian brain. The Izhikevich single-neuron model is used to represent the behaviors of the various neuron types. One thousand and ninety spiking neurons are considered in the network. The spiking model of the proposed architecture is derived and prepared for the robot learning problem. The reinforcement learning algorithm is based on spike-timing-dependent plasticity with dopamine release as a reward; it strengthens the synaptic weights of the neurons that are involved in the robot's proper performance. Sensory and motor neurons are placed in the thalamic and cortical modules, respectively. The inputs of the thalamo-cortico-thalamic circuitry are signals related to the distance of the target from the robot, and the outputs are the velocities of the actuators. A target attraction task is used to validate the proposed method, in which dopamine is released when the robot catches the target. Simulation studies, as well as an experimental implementation, are carried out on a mobile robot named Tabrizbot. The experiments show that after successful learning, the mean time to catch the target decreases by about 36%. These results show that, with the proposed method, the thalamo-cortical structure can be trained successfully to perform various robotic tasks.
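Reward-modulated STDP of the kind used here is commonly implemented with an eligibility trace. A minimal caricature (the full STDP timing window is collapsed into a single pre/post coincidence, and all constants are made up for illustration):

```python
def da_stdp_step(w, elig, pre, post, dopamine, dt=0.001,
                 tau_e=1.0, bump=0.1, lr=1.0):
    # Coincident pre/post activity marks the synapse with a decaying
    # eligibility trace; a later dopamine pulse converts the remaining
    # trace into an actual weight change (solving credit assignment
    # across the reward delay).
    elig -= dt * elig / tau_e
    if pre and post:
        elig += bump
    w += lr * dopamine * elig * dt
    return w, elig

def run(reward_time):
    w, elig = 0.5, 0.0
    for step in range(1000):                  # 1 s at 1 ms resolution
        t = step * 0.001
        coincident = abs(t - 0.1) < 5e-4      # one pre/post pair at t = 0.1 s
        in_reward = (reward_time is not None
                     and reward_time <= t < reward_time + 0.1)
        dopamine = 1.0 if in_reward else 0.0
        w, elig = da_stdp_step(w, elig, coincident, coincident, dopamine)
    return w

rewarded = run(0.6)     # dopamine pulse 0.5 s after the coincidence
unrewarded = run(None)  # identical activity, no reward
# Only the rewarded synapse is strengthened.
```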
14. Stille CM, Bekolay T, Blouw P, Kröger BJ. Natural Language Processing in Large-Scale Neural Models for Medical Screenings. Front Robot AI 2019; 6:62. PMID: 33501077; PMCID: PMC7805752; DOI: 10.3389/frobt.2019.00062.
Abstract
Many medical screenings used for the diagnosis of neurological, psychological, or language and speech disorders access the language and speech processing system. Specifically, patients are asked to fulfill a task (perception) and then requested to give answers verbally or in writing (production). Analyzing cognitive or higher-level linguistic impairments thus presupposes either that specific parts of the patient's language and speech processing system are working correctly, or that verbal instructions are replaced by pictures (avoiding auditory perception) and oral answers by pointing (avoiding speech articulation). The first goal of this paper is to propose a large-scale neural model which comprises the cognitive and lexical levels of the human neural system and which is able to simulate the human behavior occurring in medical screenings. The second goal is to relate (microscopic) neural deficits introduced into the model to the corresponding (macroscopic) behavioral deficits resulting from the model simulations. The Neural Engineering Framework and the Semantic Pointer Architecture are used to develop the large-scale neural model. Parts of two medical screenings are simulated: (1) a screening of word naming for the detection of developmental problems in lexical storage and lexical retrieval; and (2) a screening of cognitive abilities for the detection of mild cognitive impairment and early dementia. Both screenings include cognitive, language, and speech processing, and for both screenings the same model is simulated with and without neural deficits (physiological vs. pathological case). While the simulation of both screenings results in the expected normal behavior in the physiological case, the simulations clearly show deviations of behavior, e.g., an increase in errors, in the pathological case. Moreover, specific types of neural dysfunctions resulting from different types of neural defects lead to differences in the type and strength of the observed behavioral deficits.
Affiliation(s)
- Catharina Marie Stille
- Department for Phoniatrics, Pedaudiology, and Communication Disorders, Medical Faculty, RWTH Aachen University, Aachen, Germany
- Trevor Bekolay
- Applied Brain Research, Waterloo, ON, Canada
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Peter Blouw
- Applied Brain Research, Waterloo, ON, Canada
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Bernd J. Kröger
- Department for Phoniatrics, Pedaudiology, and Communication Disorders, Medical Faculty, RWTH Aachen University, Aachen, Germany

15
Yang S, Wang J, Deng B, Liu C, Li H, Fietkiewicz C, Loparo KA. Real-Time Neuromorphic System for Large-Scale Conductance-Based Spiking Neural Networks. IEEE TRANSACTIONS ON CYBERNETICS 2019; 49:2490-2503. [PMID: 29993922 DOI: 10.1109/tcyb.2018.2823730] [Citation(s) in RCA: 35] [Impact Index Per Article: 7.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
The investigation of human intelligence, cognitive systems, and the functional complexity of the human brain is significantly facilitated by high-performance computational platforms. In this paper, we present a real-time digital neuromorphic system for the simulation of large-scale conductance-based spiking neural networks (LaCSNN), which offers both high biological realism and large network scale. Using this system, a detailed large-scale cortico-basal ganglia-thalamocortical loop is simulated on a scalable 3-D network-on-chip (NoC) topology, with six Altera Stratix III field-programmable gate arrays simulating 1 million neurons. A novel router architecture is presented to handle the communication of multiple data flows in the multinuclei neural network, a problem not solved in previous NoC studies. At the single-neuron level, cost-efficient conductance-based neuron models are proposed, requiring on average 95% fewer memory resources and no DSP resources thanks to a multiplier-less realization, which is the foundation of the large-scale implementation. An analysis of the modified models is conducted, including an investigation of bifurcation behaviors and ionic dynamics, demonstrating that the required range of dynamics is preserved at a much reduced resource cost. The proposed LaCSNN system is shown to outperform alternative state-of-the-art approaches previously used to implement large-scale spiking neural networks, and enables a broad range of potential applications due to its real-time computational power.
16
Suryanarayana SM, Hellgren Kotaleski J, Grillner S, Gurney KN. Roles for globus pallidus externa revealed in a computational model of action selection in the basal ganglia. Neural Netw 2018; 109:113-136. [PMID: 30414556 DOI: 10.1016/j.neunet.2018.10.003] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/27/2018] [Revised: 08/28/2018] [Accepted: 10/09/2018] [Indexed: 01/12/2023]
Abstract
The basal ganglia are considered vital to action selection - a hypothesis supported by several biologically plausible computational models. Of the several subnuclei of the basal ganglia, the globus pallidus externa (GPe) has been thought of largely as a relay nucleus, and its intrinsic connectivity has not been incorporated in significant detail, in any model thus far. Here, we incorporate newly revealed subgroups of neurons within the GPe into an existing computational model of the basal ganglia, and investigate their role in action selection. Three main results ensued. First, using previously used metrics for selection, the new extended connectivity improved the action selection performance of the model. Second, low frequency theta oscillations were observed in the subpopulation of the GPe (the TA or 'arkypallidal' neurons) which project exclusively to the striatum. These oscillations were suppressed by increased dopamine activity - revealing a possible link with symptoms of Parkinson's disease. Third, a new phenomenon was observed in which the usual monotonic relationship between input to the basal ganglia and its output within an action 'channel' was, under some circumstances, reversed. Thus, at high levels of input, further increase of this input to the channel could cause an increase of the corresponding output rather than the more usually observed decrease. Moreover, this phenomenon was associated with the prevention of multiple channel selection, thereby assisting in optimal action selection. Examination of the mechanistic origin of our results showed the so-called 'prototypical' GPe neurons to be the principal subpopulation influencing action selection. They control the striatum via the arkypallidal neurons and are also able to regulate the output nuclei directly. Taken together, our results highlight the role of the GPe as a major control hub of the basal ganglia, and provide a mechanistic account for its control function.
Affiliation(s)
- Jeanette Hellgren Kotaleski
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden; Science for Life Laboratory, School of Electrical Engineering and Computer Science, KTH Royal Institute of Technology, Stockholm, Sweden.
- Sten Grillner
- Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden.
- Kevin N Gurney
- Department of Psychology, University of Sheffield, Sheffield, UK.

17
Zeng Y, Wang G, Xu B. A Basal Ganglia Network Centric Reinforcement Learning Model and Its Application in Unmanned Aerial Vehicle. IEEE Trans Cogn Dev Syst 2018. [DOI: 10.1109/tcds.2017.2649564] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
18
Baladron J, Nambu A, Hamker FH. The subthalamic nucleus‐external globus pallidus loop biases exploratory decisions towards known alternatives: a neuro‐computational study. Eur J Neurosci 2017; 49:754-767. [DOI: 10.1111/ejn.13666] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2017] [Revised: 07/11/2017] [Accepted: 07/25/2017] [Indexed: 11/27/2022]
Affiliation(s)
- Javier Baladron
- Computer Science, Chemnitz University of Technology, Straße der Nationen 62, Chemnitz, Germany
- Atsushi Nambu
- Division of System Neurophysiology, National Institute for Physiological Sciences, Okazaki, Japan
- Department of Physiological Sciences, SOKENDAI (The Graduate University for Advanced Studies), Okazaki, Japan
- Fred H. Hamker
- Computer Science, Chemnitz University of Technology, Straße der Nationen 62, Chemnitz, Germany

19
Rasmussen D, Voelker A, Eliasmith C. A neural model of hierarchical reinforcement learning. PLoS One 2017; 12:e0180234. [PMID: 28683111 PMCID: PMC5500327 DOI: 10.1371/journal.pone.0180234] [Citation(s) in RCA: 28] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2016] [Accepted: 06/12/2017] [Indexed: 11/19/2022] Open
Abstract
We develop a novel, biologically detailed neural model of reinforcement learning (RL) processes in the brain. This model incorporates a broad range of biological features that pose challenges to neural RL, such as temporally extended action sequences, continuous environments involving unknown time delays, and noisy/imprecise computations. Most significantly, we expand the model into the realm of hierarchical reinforcement learning (HRL), which divides the RL process into a hierarchy of actions at different levels of abstraction. Here we implement all the major components of HRL in a neural model that captures a variety of known anatomical and physiological properties of the brain. We demonstrate the performance of the model in a range of different environments, in order to emphasize the aim of understanding the brain’s general reinforcement learning ability. These results show that the model compares well to previous modelling work and demonstrates improved performance as a result of its hierarchical ability. We also show that the model’s behaviour is consistent with available data on human hierarchical RL, and generate several novel predictions.
Affiliation(s)
- Aaron Voelker
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Chris Eliasmith
- Applied Brain Research, Inc., Waterloo, ON, Canada
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada

20
Untangling Basal Ganglia Network Dynamics and Function: Role of Dopamine Depletion and Inhibition Investigated in a Spiking Network Model. eNeuro 2017; 3:eN-NWR-0156-16. [PMID: 28101525 PMCID: PMC5228592 DOI: 10.1523/eneuro.0156-16.2016] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/10/2016] [Revised: 11/22/2016] [Accepted: 11/27/2016] [Indexed: 12/30/2022] Open
Abstract
The basal ganglia are a crucial brain system for behavioral selection, and their function is disturbed in Parkinson's disease (PD), where neurons exhibit inappropriate synchronization and oscillations. We present a spiking neural model of basal ganglia including plausible details on synaptic dynamics, connectivity patterns, neuron behavior, and dopamine effects. Recordings of neuronal activity in the subthalamic nucleus and Type A (TA; arkypallidal) and Type I (TI; prototypical) neurons in globus pallidus externa were used to validate the model. Simulation experiments predict that both local inhibition in striatum and the existence of an indirect pathway are important for basal ganglia to function properly over a large range of cortical drives. The dopamine depletion-induced increase of AMPA efficacy in corticostriatal synapses to medium spiny neurons (MSNs) with dopamine receptor D2 synapses (CTX-MSN D2) and the reduction of MSN lateral connectivity (MSN-MSN) were found to contribute significantly to the enhanced synchrony and oscillations seen in PD. Additionally, reversing the dopamine depletion-induced changes to CTX-MSN D1, CTX-MSN D2, TA-MSN, and MSN-MSN couplings could improve or restore basal ganglia action selection ability. In summary, we found multiple changes of parameters for synaptic efficacy and neural excitability that could improve action selection ability and at the same time reduce oscillations. Identification of such targets could potentially generate ideas for treatments of PD and increase our understanding of the relation between network dynamics and network function.
21
Berthet P, Lindahl M, Tully PJ, Hellgren-Kotaleski J, Lansner A. Functional Relevance of Different Basal Ganglia Pathways Investigated in a Spiking Model with Reward Dependent Plasticity. Front Neural Circuits 2016; 10:53. [PMID: 27493625 PMCID: PMC4954853 DOI: 10.3389/fncir.2016.00053] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/03/2015] [Accepted: 07/06/2016] [Indexed: 11/13/2022] Open
Abstract
The brain enables animals to behaviorally adapt in order to survive in a complex and dynamic environment, but how reward-oriented behaviors are achieved and computed by its underlying neural circuitry is an open question. To address this concern, we have developed a spiking model of the basal ganglia (BG) that learns to dis-inhibit the action leading to a reward despite ongoing changes in the reward schedule. The architecture of the network features the two pathways commonly described in BG, the direct (denoted D1) and the indirect (denoted D2) pathway, as well as a loop involving striatum and the dopaminergic system. The activity of these dopaminergic neurons conveys the reward prediction error (RPE), which determines the magnitude of synaptic plasticity within the different pathways. All plastic connections implement a versatile four-factor learning rule derived from Bayesian inference that depends upon pre- and post-synaptic activity, receptor type, and dopamine level. Synaptic weight updates occur in the D1 or D2 pathways depending on the sign of the RPE, and an efference copy informs upstream nuclei about the action selected. We demonstrate successful performance of the system in a multiple-choice learning task with a transiently changing reward schedule. We simulate lesioning of the various pathways and show that a condition without the D2 pathway fares worse than one without D1. Additionally, we simulate the degeneration observed in Parkinson's disease (PD) by decreasing the number of dopaminergic neurons during learning. The results suggest that the D1 pathway impairment in PD might have been overlooked. Furthermore, an analysis of the alterations in the synaptic weights shows that using the absolute reward value instead of the RPE leads to a larger change in D1.
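The sign-dependent routing of plasticity into the D1 and D2 pathways described above can be illustrated with a deliberately reduced sketch: a scalar-RPE softmax learner on a toy multiple-choice task, not the paper's spiking network or its Bayesian four-factor rule. All sizes, rates, and the reward schedule are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy multiple-choice task: 3 cues, 3 actions; action i is rewarded for cue i.
n_cues, n_actions = 3, 3
w_d1 = np.zeros((n_cues, n_actions))   # direct-pathway ("Go") weights
w_d2 = np.zeros((n_cues, n_actions))   # indirect-pathway ("No-Go") weights
value = np.zeros(n_cues)               # critic: expected reward per cue
eta = 0.1

def select(cue, temp=0.2):
    """Softmax over the Go/No-Go balance for this cue."""
    logits = (w_d1[cue] - w_d2[cue]) / temp
    p = np.exp(logits - logits.max())
    return rng.choice(n_actions, p=p / p.sum())

for trial in range(3000):
    cue = rng.integers(n_cues)
    act = select(cue)
    reward = 1.0 if act == cue else 0.0
    rpe = reward - value[cue]          # dopamine-like reward prediction error
    value[cue] += eta * rpe
    if rpe >= 0:
        w_d1[cue, act] += eta * rpe    # positive RPE: potentiate the D1 pathway
    else:
        w_d2[cue, act] -= eta * rpe    # negative RPE: potentiate the D2 pathway

greedy = np.argmax(w_d1 - w_d2, axis=1)   # learned policy per cue
```

After a few thousand trials the greedy policy recovers the rewarded action for each cue, with "Go" weights concentrated on correct choices and "No-Go" weights on punished ones.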
Affiliation(s)
- Pierre Berthet
- Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
- Department of Computational Biology, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
- Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden
- Mikael Lindahl
- Department of Computational Biology, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
- Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden
- Philip J. Tully
- Department of Computational Biology, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
- Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden
- Institute for Adaptive and Neural Computation, School of Informatics, University of Edinburgh, Edinburgh, UK
- Jeanette Hellgren-Kotaleski
- Department of Computational Biology, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
- Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden
- Department of Neuroscience, Karolinska Institute, Stockholm, Sweden
- Anders Lansner
- Numerical Analysis and Computer Science, Stockholm University, Stockholm, Sweden
- Department of Computational Biology, School of Computer Science and Communication, KTH Royal Institute of Technology, Stockholm, Sweden
- Stockholm Brain Institute, Karolinska Institute, Stockholm, Sweden

22
Stewart TC, DeWolf T, Kleinhans A, Eliasmith C. Closed-Loop Neuromorphic Benchmarks. Front Neurosci 2015; 9:464. [PMID: 26696820 PMCID: PMC4678234 DOI: 10.3389/fnins.2015.00464] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/16/2015] [Accepted: 11/23/2015] [Indexed: 11/15/2022] Open
Abstract
Evaluating the effectiveness and performance of neuromorphic hardware is difficult. It is even more difficult when the task of interest is a closed-loop task; that is, a task where the output from the neuromorphic hardware affects some environment, which then in turn affects the hardware's future input. However, closed-loop situations are one of the primary potential uses of neuromorphic hardware. To address this, we present a methodology for generating closed-loop benchmarks that makes use of a hybrid of real physical embodiment and a type of “minimal” simulation. Minimal simulation has been shown to lead to robust real-world performance, while still maintaining the practical advantages of simulation, such as making it easy for the same benchmark to be used by many researchers. This method is flexible enough to allow researchers to explicitly modify the benchmarks to identify specific task domains where particular hardware excels. To demonstrate the method, we present a set of novel benchmarks that focus on motor control for an arbitrary system with unknown external forces. Using these benchmarks, we show that an error-driven learning rule can consistently improve motor control performance across a randomly generated family of closed-loop simulations, even when there are up to 15 interacting joints to be controlled.
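The kind of error-driven rule these benchmarks exercise can be sketched as a PES-style online decoder update (Δd ∝ −error · activity) on a toy one-dimensional mapping. The tuning curves, target function, and learning rate below are illustrative assumptions, not the benchmark suite itself.

```python
import numpy as np

rng = np.random.default_rng(1)

# Population of rectified-linear "neurons" encoding a scalar x.
n_neurons = 50
enc = rng.choice([-1.0, 1.0], n_neurons)     # encoders (preferred directions)
gain = rng.uniform(0.5, 2.0, n_neurons)
bias = rng.uniform(-1.0, 1.0, n_neurons)

def activity(x):
    return np.maximum(0.0, gain * enc * x + bias)

target = lambda x: -0.5 * x                  # stand-in "unknown plant" mapping

dec = np.zeros(n_neurons)                    # linear decoders, adapted online
kappa = 1e-3                                 # learning rate

for step in range(50000):
    x = rng.uniform(-1, 1)
    a = activity(x)
    err = dec @ a - target(x)                # scalar online error signal
    dec -= kappa * err * a                   # PES-style decoder update

xs = np.linspace(-1, 1, 21)
est = np.array([dec @ activity(x) for x in xs])
rmse = np.sqrt(np.mean((est - target(xs)) ** 2))
```

The same update applied inside a closed loop is what lets a controller adapt to unknown external forces: the error term is simply replaced by the measured tracking error.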
Affiliation(s)
- Terrence C Stewart
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Travis DeWolf
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Ashley Kleinhans
- Mobile Intelligent Autonomous Systems group, Council for Scientific and Industrial Research, Pretoria, South Africa
- Chris Eliasmith
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada

23
Bekolay T, Stewart TC, Eliasmith C. Benchmarking neuromorphic systems with Nengo. Front Neurosci 2015; 9:380. [PMID: 26539076 PMCID: PMC4609756 DOI: 10.3389/fnins.2015.00380] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2015] [Accepted: 10/02/2015] [Indexed: 11/20/2022] Open
Abstract
Nengo is a software package for designing and simulating large-scale neural models. Nengo is architected such that the same Nengo model can be simulated on any of several Nengo backends with few to no modifications. Backends translate a model to specific platforms, which include GPUs and neuromorphic hardware. Nengo also contains a large test suite that can be run with any backend and focuses primarily on functional performance. We propose that Nengo's large test suite can be used to benchmark neuromorphic hardware's functional performance and simulation speed in an efficient, unbiased, and future-proof manner. We implement four benchmark models and show that Nengo can collect metrics across five different backends that identify situations in which some backends perform more accurately or quickly.
24
Baladron J, Hamker FH. A spiking neural network based on the basal ganglia functional anatomy. Neural Netw 2015; 67:1-13. [DOI: 10.1016/j.neunet.2015.03.002] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2014] [Revised: 01/29/2015] [Accepted: 03/03/2015] [Indexed: 10/23/2022]
25
Mandali A, Rengaswamy M, Chakravarthy VS, Moustafa AA. A spiking Basal Ganglia model of synchrony, exploration and decision making. Front Neurosci 2015; 9:191. [PMID: 26074761 PMCID: PMC4444758 DOI: 10.3389/fnins.2015.00191] [Citation(s) in RCA: 44] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/31/2014] [Accepted: 05/12/2015] [Indexed: 12/31/2022] Open
Abstract
To make an optimal decision we need to weigh all the available options, compare them with the current goal, and choose the most rewarding one. Depending on the situation, an optimal decision could be to “explore,” to “exploit,” or “not to take any action,” and the Basal Ganglia (BG) is considered to be a key neural substrate for this choice. In an attempt to expand this classical picture of BG function, we had earlier hypothesized that the Indirect Pathway (IP) of the BG could be the subcortical substrate for exploration. In this study we build a spiking network model to relate exploration to synchrony levels in the BG (which are a neural marker for tremor in Parkinson's disease). Key BG nuclei such as the Subthalamic Nucleus (STN), Globus Pallidus externus (GPe), and Globus Pallidus internus (GPi) were modeled as Izhikevich spiking neurons, whereas the striatal output was modeled as Poisson spikes. The model is cast in a reinforcement learning framework, with the dopamine signal representing the reward prediction error. We apply the model to two decision making tasks: a binary action selection task (similar to the one used by Humphries et al., 2006) and an n-armed bandit task (Bourdaud et al., 2008). The model shows that exploration levels could be controlled by the STN's lateral connection strength, which also influenced the synchrony levels in the STN-GPe circuit. An increase in the STN's lateral strength led to a decrease in exploration, which can be thought of as a possible explanation for reduced exploratory levels in Parkinson's patients. Our simulations also show that on complete removal of the IP, the model exhibits only Go and No-Go behaviors, demonstrating the crucial role of the IP in exploration. Our model provides a unified account of synchronization, action selection, and explorative behavior.
Affiliation(s)
- Alekhya Mandali
- Computational Neuroscience Lab, Department of Biotechnology, Bhupat and Mehta School of BioSciences, Indian Institute of Technology Madras, Chennai, India
- Maithreye Rengaswamy
- Computational Neuroscience Lab, Department of Biotechnology, Bhupat and Mehta School of BioSciences, Indian Institute of Technology Madras, Chennai, India
- V Srinivasa Chakravarthy
- Computational Neuroscience Lab, Department of Biotechnology, Bhupat and Mehta School of BioSciences, Indian Institute of Technology Madras, Chennai, India
- Ahmed A Moustafa
- Marcs Institute for Brain and Behaviour and School of Social Sciences and Psychology, University of Western Sydney, Sydney, NSW, Australia

26
Rombouts JO, Bohte SM, Roelfsema PR. How attention can create synaptic tags for the learning of working memories in sequential tasks. PLoS Comput Biol 2015; 11:e1004060. [PMID: 25742003 PMCID: PMC4351255 DOI: 10.1371/journal.pcbi.1004060] [Citation(s) in RCA: 41] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/15/2013] [Accepted: 11/24/2014] [Indexed: 11/17/2022] Open
Abstract
Intelligence is our ability to learn appropriate responses to new stimuli and situations. Neurons in association cortex are thought to be essential for this ability. During learning these neurons become tuned to relevant features and start to represent them with persistent activity during memory delays. This learning process is not well understood. Here we develop a biologically plausible learning scheme that explains how trial-and-error learning induces neuronal selectivity and working memory representations for task-relevant information. We propose that the response selection stage sends attentional feedback signals to earlier processing levels, forming synaptic tags at those connections responsible for the stimulus-response mapping. Globally released neuromodulators then interact with tagged synapses to determine their plasticity. The resulting learning rule endows neural networks with the capacity to create new working memory representations of task relevant information as persistent activity. It is remarkably generic: it explains how association neurons learn to store task-relevant information for linear as well as non-linear stimulus-response mappings, how they become tuned to category boundaries or analog variables, depending on the task demands, and how they learn to integrate probabilistic evidence for perceptual decisions.
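The tag-then-modulate scheme described above can be sketched in a few lines of numpy: feedback from the selected response tags exactly the synapses responsible for it, and a single, globally broadcast neuromodulatory signal then converts those tags into weight changes. The network sizes, constants, and one-trial simplification are illustrative, not the paper's full model.

```python
import numpy as np

rng = np.random.default_rng(2)

n_in, n_out = 5, 3
w = rng.normal(0.0, 0.1, (n_in, n_out))   # association weights

x = np.zeros(n_in); x[1] = 1.0            # presynaptic activity: one active input
y = x @ w                                 # feedforward pass
chosen = int(np.argmax(y))                # response selection
fb = np.zeros(n_out); fb[chosen] = 1.0    # attentional feedback to the winner
tags = np.outer(x, fb)                    # synaptic tags = pre x feedback

delta = 1.0 - y[chosen]                   # globally broadcast RPE (reward = 1.0)
dw = 0.1 * delta * tags                   # plasticity confined to tagged synapses
w += dw

changed = np.argwhere(dw != 0.0)          # which synapses actually moved
```

Even though the modulator is released globally, only the single synapse connecting the active input to the chosen response is updated, which is what makes the rule local and credit-assigning at the same time.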
Affiliation(s)
- Jaldert O. Rombouts
- Department of Life Sciences, Centrum Wiskunde & Informatica, Amsterdam, The Netherlands
- Sander M. Bohte
- Department of Life Sciences, Centrum Wiskunde & Informatica, Amsterdam, The Netherlands
- Pieter R. Roelfsema
- Department of Vision & Cognition, Netherlands Institute for Neurosciences, an institute of the Royal Netherlands Academy of Arts and Sciences (KNAW), Amsterdam, The Netherlands
- Department of Integrative Neurophysiology, Centre for Neurogenomics and Cognitive Research, VU University, Amsterdam, The Netherlands
- Psychiatry Department, Academic Medical Center, Amsterdam, The Netherlands

27
Action Selection and Operant Conditioning: A Neurorobotic Implementation. JOURNAL OF ROBOTICS 2015. [DOI: 10.1155/2015/643869] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
Action selection (AS) is thought to be the mechanism natural agents use when deciding what the next move or action should be. Is there a functional elementary core sustaining this cognitive process? Could we reproduce the mechanism with an artificial agent, and more specifically in a neurorobotic paradigm? Unsupervised autonomous robots may require a decision-making skill to evolve in the real world, and the bioinspired approach is the avenue explored in this paper. We propose simulating an AS process by using a small spiking neural network (SNN), as found in lower neural organisms, to control virtual and physical robots. We base our AS process on a simple central pattern generator (CPG), decision neurons, sensory neurons, and motor neurons as the main circuit components. As a novelty, this study targets a specific operant conditioning (OC) context which is relevant in an AS process: choices do influence future sensory feedback. Using a simple adaptive scenario, we show the complementary interaction of both phenomena. We also suggest that this AS kernel could be a fast-track model for efficiently designing complex SNNs that include a growing number of input stimuli and motor outputs. Our results demonstrate that merging AS and OC brings flexibility to the behavior in generic dynamical situations.
28
Bekolay T, Bergstra J, Hunsberger E, Dewolf T, Stewart TC, Rasmussen D, Choo X, Voelker AR, Eliasmith C. Nengo: a Python tool for building large-scale functional brain models. Front Neuroinform 2014; 7:48. [PMID: 24431999 PMCID: PMC3880998 DOI: 10.3389/fninf.2013.00048] [Citation(s) in RCA: 111] [Impact Index Per Article: 11.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/06/2013] [Accepted: 12/18/2013] [Indexed: 11/13/2022] Open
Abstract
Neuroscience currently lacks a comprehensive theory of how cognitive processes can be implemented in a biological substrate. The Neural Engineering Framework (NEF) proposes one such theory, but has not yet gathered significant empirical support, partly due to the technical challenge of building and simulating large-scale models with the NEF. Nengo is a software tool that can be used to build and simulate large-scale models based on the NEF; currently, it is the primary resource for both teaching how the NEF is used, and for doing research that generates specific NEF models to explain experimental data. Nengo 1.4, which was implemented in Java, was used to create Spaun, the world's largest functional brain model (Eliasmith et al., 2012). Simulating Spaun highlighted limitations in Nengo 1.4's ability to support model construction with simple syntax, to simulate large models quickly, and to collect large amounts of data for subsequent analysis. This paper describes Nengo 2.0, which is implemented in Python and overcomes these limitations. It uses simple and extendable syntax, simulates a benchmark model on the scale of Spaun 50 times faster than Nengo 1.4, and has a flexible mechanism for collecting simulation results.
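The core NEF computation that Nengo automates — encoding a value into heterogeneous neurons and solving least-squares linear decoders for a function of that value — can be sketched in plain numpy. The tuning parameters and regularization constant are illustrative; Nengo's actual neuron models and decoder solvers differ.

```python
import numpy as np

rng = np.random.default_rng(3)

# 100 heterogeneous rectified-linear neurons encoding a scalar on [-1, 1].
n = 100
enc = rng.choice([-1.0, 1.0], n)             # encoders
gain = rng.uniform(0.5, 2.0, n)
bias = rng.uniform(-1.0, 1.0, n)

def rates(x):
    """Firing rates for input(s) x; output shape (..., n)."""
    return np.maximum(0.0, np.multiply.outer(x, enc * gain) + bias)

xs = np.linspace(-1, 1, 200)
A = rates(xs)                                # (200, n) activity matrix
target = xs ** 2                             # function to compute "in neurons"

# Regularized least-squares decoders, as in NEF decoder solving.
d = np.linalg.solve(A.T @ A + 0.1 * np.eye(n), A.T @ target)

err = np.sqrt(np.mean((A @ d - target) ** 2))
```

In Nengo, specifying a function on a connection between two ensembles triggers exactly this kind of decoder solve behind the scenes; the user never manipulates the activity matrix directly.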
Affiliation(s)
- Trevor Bekolay
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- James Bergstra
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Eric Hunsberger
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Travis Dewolf
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Terrence C Stewart
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Daniel Rasmussen
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Xuan Choo
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada
- Chris Eliasmith
- Centre for Theoretical Neuroscience, University of Waterloo, Waterloo, ON, Canada

29
Eliasmith C, Trujillo O. The use and abuse of large-scale brain models. Curr Opin Neurobiol 2013; 25:1-6. [PMID: 24709593 DOI: 10.1016/j.conb.2013.09.009] [Citation(s) in RCA: 60] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2013] [Revised: 09/11/2013] [Accepted: 09/17/2013] [Indexed: 11/28/2022]
Abstract
We provide an overview and comparison of several recent large-scale brain models. In addition to discussing challenges involved with building large neural models, we identify several expected benefits of pursuing such a research program. We argue that these benefits are only likely to be realized if two basic guidelines are made central to the pursuit. The first is that such models need to be intimately tied to behavior. The second is that models, and more importantly their underlying methods, should provide mechanisms for varying the level of simulated detail. Consequently, we express concerns with models that insist on a 'correct' amount of detail while expecting interesting behavior to simply emerge.
Affiliation(s)
- Chris Eliasmith
- Centre for Theoretical Neuroscience, University of Waterloo, 200 University Avenue West, Waterloo, Ontario, Canada N2L 3G1
- Oliver Trujillo
- Centre for Theoretical Neuroscience, University of Waterloo, 200 University Avenue West, Waterloo, Ontario, Canada N2L 3G1

30
Seger CA, Peterson EJ. Categorization = decision making + generalization. Neurosci Biobehav Rev 2013; 37:1187-200. [PMID: 23548891 PMCID: PMC3739997 DOI: 10.1016/j.neubiorev.2013.03.015] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2012] [Revised: 03/21/2013] [Accepted: 03/22/2013] [Indexed: 11/22/2022]
Abstract
We rarely, if ever, repeatedly encounter exactly the same situation. This makes generalization crucial for real world decision making. We argue that categorization, the study of generalizable representations, is a type of decision making, and that categorization learning research would benefit from approaches developed to study the neuroscience of decision making. Similarly, methods developed to examine generalization and learning within the field of categorization may enhance decision making research. We first discuss perceptual information processing and integration, with an emphasis on accumulator models. We then examine learning the value of different decision making choices via experience, emphasizing reinforcement learning modeling approaches. Next we discuss how value is combined with other factors in decision making, emphasizing the effects of uncertainty. Finally, we describe how a final decision is selected via thresholding processes implemented by the basal ganglia and related regions. We also consider how memory related functions in the hippocampus may be integrated with decision making mechanisms and contribute to categorization.
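The accumulator models emphasized above can be illustrated with a minimal race model: each option integrates noisy evidence at its own drift rate, and the first accumulator to reach the bound determines both the choice and the reaction time. The drift rates, noise level, and bound below are illustrative assumptions.

```python
import numpy as np

rng = np.random.default_rng(4)

def race_trial(drifts=(0.12, 0.08), noise=0.1, bound=1.0, max_t=10000):
    """Noisy accumulators race to a bound; returns (choice, reaction time)."""
    x = np.zeros(len(drifts))
    for t in range(1, max_t + 1):
        x += np.asarray(drifts) + noise * rng.standard_normal(len(drifts))
        x = np.maximum(x, 0.0)           # evidence cannot go negative
        if x.max() >= bound:
            return int(np.argmax(x)), t  # first accumulator to the bound wins
    return -1, max_t

results = [race_trial() for _ in range(500)]
choices = np.array([c for c, _ in results])
rts = np.array([t for _, t in results])
p_faster_option = float(np.mean(choices == 0))   # option 0 has the higher drift
```

Over many trials the higher-drift option wins most races, and the same bound parameter that sets accuracy also sets the speed of responding, which is the speed-accuracy tradeoff these models attribute to basal ganglia thresholding.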
Affiliation(s)
- Carol A Seger
- Department of Psychology, Colorado State University Fort Collins, CO 80523, USA.
31
Thibeault CM, Srinivasa N. Using a hybrid neuron in physiologically inspired models of the basal ganglia. Front Comput Neurosci 2013; 7:88. [PMID: 23847524 PMCID: PMC3701869 DOI: 10.3389/fncom.2013.00088] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2013] [Accepted: 06/15/2013] [Indexed: 11/15/2022] Open
Abstract
Our current understanding of the basal ganglia (BG) has facilitated the creation of computational models that have contributed novel theories, explored new functional anatomy, and demonstrated results complementing physiological experiments. However, the utility of these models extends beyond these applications, particularly in neuromorphic engineering, where the basal ganglia's role in computation matters for applications such as power-efficient autonomous agents and model-based control strategies. The neurons used in existing computational models of the BG, however, are not amenable to many low-power hardware implementations. Motivated by the need for more hardware-accessible networks, we replicate four published models of the BG, spanning single neurons and small networks, replacing the more computationally expensive neuron models with an Izhikevich hybrid neuron. We begin with a network modeling action selection, reproducing its basal activity levels and its ability to select the most salient input. A Parkinson's disease model is then explored under normal conditions, Parkinsonian conditions, and subthalamic nucleus deep brain stimulation (DBS). The resulting network replicates the loss of thalamic relay capabilities in the Parkinsonian state and their restoration under DBS; this is also demonstrated using a network capable of action selection. Finally, we present a study of correlation transfer under different patterns of Parkinsonian activity. These networks successfully capture the significant results of the original studies. This not only creates a foundation for neuromorphic hardware implementations but may also support the development of large-scale biophysical models: the former could improve the efficacy of DBS, while the latter would allow efficient simulation of larger, more comprehensive networks.
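The hardware-friendly substitute the abstract describes, the Izhikevich hybrid neuron, is a two-variable quadratic model with a discontinuous spike reset. A minimal Euler-integration sketch in Python (the regular-spiking parameter values, time step, and function name are illustrative assumptions, not taken from the paper):

```python
# Minimal Euler-integration sketch of the Izhikevich hybrid neuron:
# two coupled variables (v, u) plus a hard reset when v crosses threshold.
# a, b, c, d are the standard regular-spiking parameters; I is input current.
def izhikevich(I, a=0.02, b=0.2, c=-65.0, d=8.0, dt=0.5, t_max=1000.0):
    """Simulate one neuron under constant current I; return spike times (ms)."""
    v, u = c, b * c               # membrane potential (mV), recovery variable
    spikes, t = [], 0.0
    while t < t_max:
        v += dt * (0.04 * v * v + 5.0 * v + 140.0 - u + I)
        u += dt * a * (b * v - u)
        if v >= 30.0:             # spike: record time, then hard reset
            spikes.append(t)
            v, u = c, u + d
        t += dt
    return spikes
```

With a suprathreshold current (e.g. I = 10) the model fires tonically, while at I = 0 it settles to rest. The cheap update rule and discontinuous reset are what make this model attractive for the low-power hardware implementations the study targets.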
Affiliation(s)
- Corey M Thibeault
- Center for Neural and Emergent Systems, Information and System Sciences Laboratory, HRL Laboratories LLC, Malibu, CA, USA; Department of Electrical and Biomedical Engineering, University of Nevada, Reno, NV, USA; Department of Computer Science and Engineering, University of Nevada, Reno, NV, USA
32
Meehan TP, Bressler SL. Neurocognitive networks: Findings, models, and theory. Neurosci Biobehav Rev 2012; 36:2232-47. [DOI: 10.1016/j.neubiorev.2012.08.002] [Citation(s) in RCA: 53] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2012] [Revised: 07/27/2012] [Accepted: 08/08/2012] [Indexed: 11/26/2022]