1. Golosio B, De Luca C, Capone C, Pastorelli E, Stegel G, Tiddia G, De Bonis G, Paolucci PS. Thalamo-cortical spiking model of incremental learning combining perception, context and NREM-sleep. PLoS Comput Biol 2021; 17:e1009045. PMID: 34181642; PMCID: PMC8270441; DOI: 10.1371/journal.pcbi.1009045.
Abstract
The brain exhibits the capability of fast incremental learning from a few noisy examples, as well as the ability to associate similar memories in autonomously created categories and to combine contextual hints with sensory perceptions. Together with sleep, these mechanisms are thought to be key components of many high-level cognitive functions. Yet, little is known about the underlying processes and the specific roles of different brain states. In this work, we exploited the combination of context and perception in a thalamo-cortical model based on a soft winner-take-all circuit of excitatory and inhibitory spiking neurons. After calibrating this model to express awake and deep-sleep states with features comparable to biological measures, we demonstrate its capability of fast incremental learning from a few examples, its resilience when presented with noisy perceptions and contextual signals, and an improvement in visual classification after sleep due to induced synaptic homeostasis and the association of similar memories. We created a thalamo-cortical spiking model (ThaCo) to demonstrate a link between two phenomena that we believe to be essential for the brain's capability of efficient incremental learning from few examples in noisy environments. Grounded in two experimental observations (the first about the effects of deep sleep on pre- and post-sleep firing-rate distributions, the second about the combination of perceptual and contextual information in pyramidal neurons), our model joins these two ingredients. ThaCo alternates phases of incremental learning, classification and deep sleep. Memories of handwritten digit examples are learned through thalamo-cortical and cortico-cortical plastic synapses. In the absence of noise, the combination of contextual information with perception enables fast incremental learning. Deep sleep becomes crucial when noisy inputs are considered.
We observed in ThaCo both homeostatic and associative processes: deep sleep fights noise in perceptual and internal knowledge, and it supports the categorical association of examples belonging to the same digit class through reinforcement of class-specific cortico-cortical synapses. The distributions of pre-sleep and post-sleep firing rates during classification change in a manner similar to that seen in experimental observations. These changes promote energetic efficiency during recall of memories, better representation of individual memories and categories, and higher classification performance.
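The soft winner-take-all motif at the core of this model can be illustrated with a minimal rate-based sketch: excitatory units with weak self-excitation sharing one global inhibitory unit. This is a deliberate simplification under assumed dynamics; the function name, constants, and the use of rates rather than spikes are illustrative, not ThaCo's implementation.

```python
import numpy as np

def soft_wta(inputs, steps=500, dt=0.1, w_self=0.5, w_inh=1.0, tau=1.0):
    """Rate-based sketch of a soft winner-take-all circuit (illustrative)."""
    inputs = np.asarray(inputs, dtype=float)
    r = np.zeros_like(inputs)        # excitatory firing rates
    g = 0.0                          # global inhibitory unit
    for _ in range(steps):
        drive = inputs + w_self * r - w_inh * g
        r += (dt / tau) * (-r + np.maximum(drive, 0.0))   # rectified rate dynamics
        g += (dt / tau) * (-g + r.mean())                 # inhibition pools all rates
    return r
```

With inputs [1.0, 0.8, 0.2] the strongest unit dominates, the runner-up stays partially active (the "soft" part), and the weakest unit is silenced by the shared inhibition.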
Affiliation(s)
- Bruno Golosio
- Dipartimento di Fisica, Università di Cagliari, Cagliari, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Cagliari, Cagliari, Italy
- Chiara De Luca
- Ph.D. Program in Behavioural Neuroscience, “Sapienza” Università di Roma, Rome, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
- Cristiano Capone
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
- Elena Pastorelli
- Ph.D. Program in Behavioural Neuroscience, “Sapienza” Università di Roma, Rome, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
- Giovanni Stegel
- Dipartimento di Chimica e Farmacia, Università di Sassari, Sassari, Italy
- Gianmarco Tiddia
- Dipartimento di Fisica, Università di Cagliari, Cagliari, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Cagliari, Cagliari, Italy
- Giulia De Bonis
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
2. Burylko O, Kazanovich Y, Borisyuk R. Winner-take-all in a phase oscillator system with adaptation. Sci Rep 2018; 8:416. PMID: 29323149; PMCID: PMC5765106; DOI: 10.1038/s41598-017-18666-3.
Abstract
We consider a system of generalized phase oscillators with a central element and radial connections. In contrast to conventional phase oscillators of the Kuramoto type, the dynamic variables in our system include not only the phase of each oscillator but also the natural frequency of the central oscillator and the connection strengths from the peripheral oscillators to the central oscillator. With appropriate parameter values the system demonstrates winner-take-all behavior in terms of the competition between peripheral oscillators for synchronization with the central oscillator. Conditions for the winner-take-all regime are derived for stationary and non-stationary types of system dynamics. Bifurcation analysis of the transition from stationary to non-stationary winner-take-all dynamics is presented. A new bifurcation type, called a Saddle Node on Invariant Torus (SNIT) bifurcation, was observed and is described in detail. Computer simulations of the system allow an optimal choice of parameters for winner-take-all implementation.
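The competition described here can be caricatured with a small Kuramoto-style simulation in which the central oscillator's natural frequency and its incoming coupling strengths are themselves dynamic variables. The equations and constants below are illustrative assumptions, not the paper's exact model:

```python
import numpy as np

def simulate_wta_oscillators(omega_p, steps=20000, dt=0.01,
                             eps_w=0.5, eps_k=0.5, k0=1.0, seed=0):
    """Caricature of a central-peripheral phase-oscillator system with adaptation.

    phi[i] : phase of peripheral oscillator i relative to the center
    w      : adaptive natural frequency of the central oscillator
    k[i]   : adaptive coupling strength from peripheral i to the center
    """
    omega_p = np.asarray(omega_p, dtype=float)
    rng = np.random.default_rng(seed)
    phi = rng.uniform(0.0, 2.0 * np.pi, omega_p.size)
    w = 0.0
    k = np.full(omega_p.size, k0)
    for _ in range(steps):
        pull = np.sum(k * np.sin(phi)) / omega_p.size  # net pull on the center
        dphi = (omega_p - w) - pull - np.sin(phi)      # relative-phase dynamics
        w += dt * eps_w * pull                          # center tracks the pull
        k += dt * eps_k * (np.cos(phi) - k)             # reward phase alignment
        phi = (phi + dt * dphi) % (2.0 * np.pi)
    return phi, w, k
```

The intent is that the oscillator the center adapts toward keeps cos(phi) near 1, and hence a strong coupling, while drifting oscillators see cos(phi) average toward zero; whether and when such a winner-take-all regime is actually reached depends on the parameters, as the paper's bifurcation analysis makes precise.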
Affiliation(s)
- Oleksandr Burylko
- Institute of Mathematics, National Academy of Sciences of Ukraine, Tereshchenkivska 3, 01601, Kyiv, Ukraine
- Yakov Kazanovich
- Institute of Mathematical Problems of Biology, The Branch of Keldysh Institute of Applied Mathematics of Russian Academy of Sciences, 142290, Pushchino, Russia
- Roman Borisyuk
- Institute of Mathematical Problems of Biology, The Branch of Keldysh Institute of Applied Mathematics of Russian Academy of Sciences, 142290, Pushchino, Russia
- School of Computing and Mathematics, Plymouth University, PL4 8AA, Plymouth, United Kingdom
3. Chen Y. Mechanisms of Winner-Take-All and Group Selection in Neuronal Spiking Networks. Front Comput Neurosci 2017; 11:20. PMID: 28484384; PMCID: PMC5399521; DOI: 10.3389/fncom.2017.00020.
Abstract
A major function of central nervous systems is to discriminate different categories or types of sensory input. Neuronal networks accomplish such tasks by learning different sensory maps at several stages of the neural hierarchy, such that different neurons fire selectively to reflect different internal or external patterns and states. The exact mechanisms of such map formation processes in the brain are not completely understood. Here we study the mechanism by which a simple recurrent/reentrant neuronal network accomplishes group selection and discrimination of different inputs in order to generate sensory maps. We describe the conditions and mechanism of the transition from a rhythmic epileptic state (in which all neurons fire synchronously and indiscriminately to any input) to a winner-take-all state in which only a subset of neurons fires for a specific input. We prove an analytic condition under which a stable bump solution and a winner-take-all state can emerge from the local recurrent excitation-inhibition interactions in a three-layer spiking network with distinct excitatory and inhibitory populations, and demonstrate the importance of surround-inhibition connection topology for the stability of dynamic patterns in the spiking neural network.
4. McKinstry JL, Fleischer JG, Chen Y, Gall WE, Edelman GM. Imagery May Arise from Associations Formed through Sensory Experience: A Network of Spiking Neurons Controlling a Robot Learns Visual Sequences in Order to Perform a Mental Rotation Task. PLoS One 2016; 11:e0162155. PMID: 27653977; PMCID: PMC5031450; DOI: 10.1371/journal.pone.0162155.
Abstract
Mental imagery occurs “when a representation of the type created during the initial phases of perception is present but the stimulus is not actually being perceived.” How does the capability to perform mental imagery arise? Extending the idea that imagery arises from learned associations, we propose that mental rotation, a specific form of imagery, could arise through the mechanism of sequence learning: that is, by learning to regenerate the sequence of mental images perceived while passively observing a rotating object. To demonstrate the feasibility of this proposal, we constructed a simulated nervous system and embedded it within a behaving humanoid robot. By observing a rotating object, the system learns the sequence of neural activity patterns generated by the visual system in response to the object. After learning, it can internally regenerate a similar sequence of neural activations upon briefly viewing the static object. This system learns to perform a mental rotation task in which the subject must determine whether two objects are identical despite differences in orientation. As with human subjects, the time taken to respond is proportional to the angular difference between the two stimuli. Moreover, as reported in humans, the system fills in intermediate angles during the task, and this putative mental rotation activates the same pathways that are activated when the system views physical rotation. This work supports the proposal that mental rotation arises through sequence learning and the idea that mental imagery aids perception through learned associations, and suggests testable predictions for biological experiments.
Affiliation(s)
- Jeffrey L. McKinstry
- The Neurosciences Institute, La Jolla, California, United States of America
- Jason G. Fleischer
- The Neurosciences Institute, La Jolla, California, United States of America
- Yanqing Chen
- The Neurosciences Institute, La Jolla, California, United States of America
- W. Einar Gall
- The Neurosciences Institute, La Jolla, California, United States of America
- Gerald M. Edelman
- The Neurosciences Institute, La Jolla, California, United States of America
5. Chou TS, Bucci LD, Krichmar JL. Learning touch preferences with a tactile robot using dopamine modulated STDP in a model of insular cortex. Front Neurorobot 2015; 9:6. PMID: 26257639; PMCID: PMC4510776; DOI: 10.3389/fnbot.2015.00006.
Abstract
Neurorobots enable researchers to study how behaviors are produced by neural mechanisms in an uncertain, noisy, real-world environment. To investigate how the somatosensory system processes noisy, real-world touch inputs, we introduce a neurorobot called CARL-SJR, which has a full-body tactile sensory area. The design of CARL-SJR is such that it encourages people to communicate with it through gentle touch. CARL-SJR provides feedback to users by displaying bright colors on its surface. In the present study, we show that CARL-SJR is capable of learning associations between conditioned stimuli (CS; a color pattern on its surface) and unconditioned stimuli (US; a preferred touch pattern) by applying a spiking neural network (SNN) with neurobiologically inspired plasticity. Specifically, we modeled the primary somatosensory cortex, prefrontal cortex, striatum, and the insular cortex, which is important for hedonic touch, to process noisy data generated directly from CARL-SJR's tactile sensory area. To facilitate learning, we applied dopamine-modulated Spike Timing Dependent Plasticity (STDP) to our simulated prefrontal cortex, striatum, and insular cortex. To cope with noisy, varying inputs, the SNN was tuned to produce traveling waves of activity that carried spatiotemporal information. Despite the noisy tactile sensors, spike trains, and variations in subject hand swipes, the learning was quite robust. Further, insular cortex activity in the incremental pathway of the dopaminergic reward system allowed us to control CARL-SJR's preference for touch direction without heavily pre-processed inputs. The emergent behaviors of this model match animal behaviors, wherein animals prefer touch in particular areas and directions. Thus, the results in this paper could serve as an explanation of the underlying neural mechanisms for the development of tactile preferences and hedonic touch.
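Dopamine-modulated STDP of the kind used here is commonly implemented with an eligibility trace: spike pairings tag the synapse, and the tag is converted into a weight change only while a reward signal is present. A minimal single-synapse sketch, with illustrative constants rather than the paper's parameters:

```python
import numpy as np

def da_modulated_stdp(pre_spikes, post_spikes, reward_times,
                      T=1.0, dt=0.001, a_plus=0.01, a_minus=0.012,
                      tau=0.02, tau_e=0.5, tau_d=0.2, w0=0.5):
    """Single-synapse sketch of dopamine-modulated STDP with an eligibility trace.

    STDP pairings write into an eligibility trace e(t); the weight only
    moves while a dopamine signal d(t) overlaps the trace: dw = e * d.
    """
    w, e, d = w0, 0.0, 0.0
    x_pre, x_post = 0.0, 0.0                    # low-pass spike traces
    pre = {round(t / dt) for t in pre_spikes}
    post = {round(t / dt) for t in post_spikes}
    rew = {round(t / dt) for t in reward_times}
    for i in range(int(T / dt)):
        x_pre *= np.exp(-dt / tau)
        x_post *= np.exp(-dt / tau)
        e *= np.exp(-dt / tau_e)
        d *= np.exp(-dt / tau_d)
        if i in pre:
            e -= a_minus * x_post               # post-before-pre tagged as LTD
            x_pre += 1.0
        if i in post:
            e += a_plus * x_pre                 # pre-before-post tagged as LTP
            x_post += 1.0
        if i in rew:
            d += 1.0                            # dopamine burst
        w += dt * e * d                         # DA converts the tag into change
    return w
```

A pre-before-post pairing followed by a dopamine burst potentiates the synapse; the same pairing with no reward leaves the weight exactly where it started, which is what makes the rule reward-gated rather than purely Hebbian.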
Affiliation(s)
- Ting-Shuo Chou
- Department of Computer Sciences, University of California, Irvine, Irvine, CA, USA
- Liam D Bucci
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, USA
- Jeffrey L Krichmar
- Department of Computer Sciences, University of California, Irvine, Irvine, CA, USA
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, USA
6. Shoemaker PA. Neuronal networks with NMDARs and lateral inhibition implement winner-takes-all. Front Comput Neurosci 2015; 9:12. PMID: 25741276; PMCID: PMC4332340; DOI: 10.3389/fncom.2015.00012.
Abstract
A neural circuit that relies on the electrical properties of NMDA synaptic receptors is shown by numerical and theoretical analysis to be capable of realizing the winner-takes-all function, a powerful computational primitive that is often attributed to biological nervous systems. This biophysically-plausible model employs global lateral inhibition in a simple feedback arrangement. As its inputs increase, high-gain and then bi- or multi-stable equilibrium states may be assumed in which there is significant depolarization of a single neuron and hyperpolarization or very weak depolarization of other neurons in the network. The state of the winning neuron conveys analog information about its input. The winner-takes-all characteristic depends on the nonmonotonic current-voltage relation of NMDA receptor ion channels, as well as neural thresholding, and the gain and nature of the inhibitory feedback. Dynamical regimes vary with input strength. Fixed points may become unstable as the network enters a winner-takes-all regime, which can lead to entrained oscillations. Under some conditions, oscillatory behavior can be interpreted as winner-takes-all in nature. Stable winner-takes-all behavior is typically recovered as inputs increase further, but with still larger inputs, the winner-takes-all characteristic is ultimately lost. Network stability may be enhanced by biologically plausible mechanisms.
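The nonmonotonic current-voltage relation this circuit relies on comes from the voltage-dependent magnesium block of the NMDA receptor channel. The standard Jahr-Stevens form makes the negative-slope region easy to see (the conductance and Mg2+ concentration below are illustrative placeholders):

```python
import math

def nmda_current(v_mv, g=1.0, e_rev=0.0, mg_mM=1.0):
    """NMDA channel current with the Jahr-Stevens magnesium block.

    The voltage-dependent unblocked fraction b(V) multiplies the ohmic
    driving force, yielding the nonmonotonic I-V curve that the
    winner-takes-all circuit exploits.
    """
    b = 1.0 / (1.0 + (mg_mM / 3.57) * math.exp(-0.062 * v_mv))  # unblocked fraction
    return g * b * (v_mv - e_rev)
```

Between roughly -80 and -30 mV the inward current grows with depolarization (a negative-slope-conductance region), then shrinks again toward the reversal potential; it is this N-shaped branch that supports the bi- and multi-stable states described in the abstract.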
7. Bauer R, Zubler F, Pfister S, Hauri A, Pfeiffer M, Muir DR, Douglas RJ. Developmental self-construction and -configuration of functional neocortical neuronal networks. PLoS Comput Biol 2014; 10:e1003994. PMID: 25474693; PMCID: PMC4256067; DOI: 10.1371/journal.pcbi.1003994.
Abstract
The prenatal development of neural circuits must provide sufficient configuration to support at least a set of core postnatal behaviors. Although knowledge of various genetic and cellular aspects of development is accumulating rapidly, there is less systematic understanding of how these various processes play together in order to construct such functional networks. Here we make some steps toward such understanding by demonstrating through detailed simulations how a competitive-cooperative (‘winner-take-all’, WTA) network architecture can arise by development from a single precursor cell. This precursor is granted a simplified gene regulatory network that directs cell mitosis, differentiation, migration, neurite outgrowth and synaptogenesis. Once initial axonal connection patterns are established, their synaptic weights undergo homeostatic unsupervised learning that is shaped by wave-like input patterns. We demonstrate how this autonomous, genetically directed developmental sequence can give rise to self-calibrated WTA networks, and compare our simulation results with biological data. Models of learning in artificial neural networks generally assume that the neurons and an approximate network are given, and that learning then tunes the synaptic weights. By contrast, we address the question of how an entire functional neuronal network containing many differentiated neurons and connections can develop from only a single progenitor cell. We chose a winner-take-all network as the developmental target because it is a computationally powerful circuit and a candidate motif of neocortical networks. The key aspect of this challenge is that the developmental mechanisms must be locally autonomous, as in biology: they cannot depend on global knowledge or supervision.
We have explored this developmental process by simulating in physical detail the fundamental biological behaviors, such as cell proliferation, neurite growth and synapse formation, that give rise to the structural connectivity observed in the superficial layers of the neocortex. These differentiated, approximately connected neurons then adapt their synaptic weights homeostatically to obtain uniform electrical signaling activity before going on to organize themselves according to the fundamental correlations embedded in a noisy, wave-like input signal. In this way the precursor expands itself, through development and unsupervised learning, into winner-take-all functionality and orientation selectivity in a biologically plausible manner.
Affiliation(s)
- Roman Bauer
- Institute of Neuroinformatics, University/ETH Zürich, Zürich, Switzerland
- School of Computing Science, Newcastle University, Newcastle upon Tyne, United Kingdom
- Frédéric Zubler
- Institute of Neuroinformatics, University/ETH Zürich, Zürich, Switzerland
- Department of Neurology, Inselspital Bern, Bern University Hospital, University of Bern, Bern, Switzerland
- Sabina Pfister
- Institute of Neuroinformatics, University/ETH Zürich, Zürich, Switzerland
- Andreas Hauri
- Institute of Neuroinformatics, University/ETH Zürich, Zürich, Switzerland
- Michael Pfeiffer
- Institute of Neuroinformatics, University/ETH Zürich, Zürich, Switzerland
- Dylan R. Muir
- Institute of Neuroinformatics, University/ETH Zürich, Zürich, Switzerland
- Biozentrum, University of Basel, Basel, Switzerland
- Rodney J. Douglas
- Institute of Neuroinformatics, University/ETH Zürich, Zürich, Switzerland
8. Binas J, Rutishauser U, Indiveri G, Pfeiffer M. Learning and stabilization of winner-take-all dynamics through interacting excitatory and inhibitory plasticity. Front Comput Neurosci 2014; 8:68. PMID: 25071538; PMCID: PMC4086298; DOI: 10.3389/fncom.2014.00068.
Abstract
Winner-Take-All (WTA) networks are recurrently connected populations of excitatory and inhibitory neurons that represent promising candidate microcircuits for implementing cortical computation. WTAs can perform powerful computations, ranging from signal-restoration to state-dependent processing. However, such networks require fine-tuned connectivity parameters to keep the network dynamics within stable operating regimes. In this article, we show how such stability can emerge autonomously through an interaction of biologically plausible plasticity mechanisms that operate simultaneously on all excitatory and inhibitory synapses of the network. A weight-dependent plasticity rule is derived from the triplet spike-timing dependent plasticity model, and its stabilization properties in the mean-field case are analyzed using contraction theory. Our main result provides simple constraints on the plasticity rule parameters, rather than on the weights themselves, which guarantee stable WTA behavior. The plastic network we present is able to adapt to changing input conditions, and to dynamically adjust its gain, therefore exhibiting self-stabilization mechanisms that are crucial for maintaining stable operation in large networks of interconnected subunits. We show how distributed neural assemblies can adjust their parameters for stable WTA function autonomously while respecting anatomical constraints on neural wiring.
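The weight-dependent rule here is derived from the triplet STDP model, which augments the classic pair-based rule with slow second traces so that spike triplets also contribute. A single-synapse sketch of the triplet rule itself (all-to-all trace interactions; the constants are illustrative, not fitted values):

```python
def triplet_stdp(pre_spikes, post_spikes, T=0.5, dt=0.0001,
                 a2_plus=5e-3, a3_plus=6e-3, a2_minus=7e-3, a3_minus=2e-4,
                 tau_plus=0.0168, tau_minus=0.0337, tau_x=0.101, tau_y=0.125):
    """Single-synapse sketch of a Pfister-Gerstner-style triplet STDP rule.

    r1/r2 are fast/slow presynaptic traces, o1/o2 fast/slow postsynaptic
    traces; the slow traces add triplet contributions on top of the
    classic pair terms. Traces are read just before each spike updates them.
    """
    r1 = r2 = o1 = o2 = 0.0
    dw = 0.0
    pre = {round(t / dt) for t in pre_spikes}
    post = {round(t / dt) for t in post_spikes}
    for i in range(int(T / dt)):
        if i in pre:
            dw -= o1 * (a2_minus + a3_minus * r2)  # pair LTD + pre-triplet term
            r1 += 1.0
            r2 += 1.0
        if i in post:
            dw += r1 * (a2_plus + a3_plus * o2)    # pair LTP + post-triplet term
            o1 += 1.0
            o2 += 1.0
        r1 -= dt * r1 / tau_plus                   # Euler decay of all traces
        r2 -= dt * r2 / tau_x
        o1 -= dt * o1 / tau_minus
        o2 -= dt * o2 / tau_y
    return dw
```

The pairwise limit behaves like standard STDP (pre-before-post potentiates, post-before-pre depresses), while the a3 terms make the outcome depend on recent spike history, which is what gives the derived rule its weight-dependent stabilization properties.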
Affiliation(s)
- Jonathan Binas
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Ueli Rutishauser
- Department of Neurosurgery and Department of Neurology, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Computation and Neural Systems Program, Division of Biology and Biological Engineering, California Institute of Technology, Pasadena, CA, USA
- Giacomo Indiveri
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
- Michael Pfeiffer
- Institute of Neuroinformatics, University of Zurich and ETH Zurich, Zurich, Switzerland
Collapse
9. Avery MC, Dutt N, Krichmar JL. A large-scale neural network model of the influence of neuromodulatory levels on working memory and behavior. Front Comput Neurosci 2013; 7:133. PMID: 24106474; PMCID: PMC3789270; DOI: 10.3389/fncom.2013.00133.
Abstract
The dorsolateral prefrontal cortex (dlPFC), which is regarded as the primary site for visuospatial working memory in the brain, is significantly modulated by dopamine (DA) and norepinephrine (NE). DA and NE originate in the ventral tegmental area (VTA) and locus coeruleus (LC), respectively, and have been shown to have an “inverted-U” dose-response profile in dlPFC, where the level of arousal and decision-making performance is a function of DA and NE concentrations. Moreover, there appears to be a sweet spot, in terms of the level of DA and NE activation, that allows for optimal working memory and behavioral performance. When either DA or NE is too high, input to the PFC is essentially blocked. When either DA or NE is too low, PFC network dynamics become noisy and activity levels diminish. Mechanisms for how this occurs have been suggested; however, they have not been tested in a large-scale model with neurobiologically plausible network dynamics. Furthermore, DA and NE levels have not been manipulated simultaneously in experiments, which is difficult in vivo because of the strong bi-directional connections between the VTA and LC. To address these issues, we built a spiking neural network model that includes D1, α2A, and α1 receptors. The model was able to match the inverted-U profiles that have been shown experimentally for differing levels of DA and NE. Furthermore, we were able to make predictions about what working memory and behavioral deficits may occur during simultaneous manipulation of DA and NE outside of their optimal levels. Specifically, when DA levels were low and NE levels were high, cues could not be held in working memory due to increased noise. On the other hand, when DA levels were high and NE levels were low, incorrect decisions were made due to weak overall network activity. We also show that lateral inhibition in working memory may play a more important role in increasing signal-to-noise ratio than increasing recurrent excitatory input.
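The inverted-U idea can be summarized by a toy dose-response curve; taking the product of one such curve per neuromodulator gives a performance surface that degrades whenever either DA or NE leaves its sweet spot. This is a purely illustrative functional form, not the receptor-level model in the paper:

```python
import math

def inverted_u(level, optimum=1.0):
    """Toy inverted-U dose-response curve: peaks at `optimum` with value 1.0."""
    x = level / optimum
    return x * math.exp(1.0 - x)

def combined_performance(da, ne):
    """Illustrative combined DA/NE performance surface: optimal only when
    both modulators sit near their respective sweet spots."""
    return inverted_u(da) * inverted_u(ne)
```

Both too little (level well below the optimum) and too much (well above it) reduce the output, matching the qualitative shape the abstract describes.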
Affiliation(s)
- Michael C Avery
- Department of Cognitive Sciences, University of California, Irvine, Irvine, CA, USA
10. McKinstry JL, Edelman GM. Temporal sequence learning in winner-take-all networks of spiking neurons demonstrated in a brain-based device. Front Neurorobot 2013; 7:10. PMID: 23760804; PMCID: PMC3674315; DOI: 10.3389/fnbot.2013.00010.
Abstract
Animal behavior often involves a temporally ordered sequence of actions learned from experience. Here we describe simulations of interconnected networks of spiking neurons that learn to generate patterns of activity in correct temporal order. The simulation consists of large-scale networks of thousands of excitatory and inhibitory neurons that exhibit short-term synaptic plasticity and spike-timing dependent synaptic plasticity. The neural architecture within each area is arranged to evoke winner-take-all (WTA) patterns of neural activity that persist for tens of milliseconds. In order to generate and switch between consecutive firing patterns in correct temporal order, a reentrant exchange of signals between these areas was necessary. To demonstrate the capacity of this arrangement, we used the simulation to train a brain-based device to respond to visual input by autonomously generating temporal sequences of motor actions.