1. Bianchi S, Muñoz-Martin I, Covi E, Bricalli A, Piccolboni G, Regev A, Molas G, Nodin JF, Andrieu F, Ielmini D. A self-adaptive hardware with resistive switching synapses for experience-based neurocomputing. Nat Commun 2023; 14:1565. PMID: 36944647; PMCID: PMC10030830; DOI: 10.1038/s41467-023-37097-5.
Abstract
Neurobiological systems continually interact with the surrounding environment to refine their behaviour toward the best possible reward. Achieving such learning by experience is one of the main challenges of artificial intelligence, but currently it is hindered by the lack of hardware capable of plastic adaptation. Here, we propose a bio-inspired recurrent neural network, mastered by a digital system on chip with resistive-switching synaptic arrays of memory devices, which exploits homeostatic Hebbian learning for improved efficiency. All the results are discussed experimentally and theoretically, proposing a conceptual framework for benchmarking the main outcomes in terms of accuracy and resilience. To test the proposed architecture for reinforcement learning tasks, we study the autonomous exploration of continually evolving environments and verify the results for the Mars rover navigation. We also show that, compared to conventional deep learning techniques, our in-memory hardware has the potential to achieve a significant boost in speed and power-saving.
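The homeostatic Hebbian learning mentioned above can be sketched in a few lines: a Hebbian outer-product update followed by multiplicative synaptic scaling that pins each neuron's total input drive to a target value. This is a generic rate-based illustration (function name, rates, learning rate, and target are all assumptions), not the paper's memristive-hardware rule.

```python
import numpy as np

def hebbian_homeostatic_step(W, x, y, lr=0.1, target=0.5):
    """One Hebbian update followed by homeostatic synaptic scaling.

    W: (post, pre) weights; x: presynaptic rates; y: postsynaptic rates.
    Rescaling each row keeps every neuron's summed input drive near
    `target`, preventing runaway Hebbian growth."""
    W = W + lr * np.outer(y, x)      # Hebb: strengthen co-active pairs
    drive = W @ x                    # input drive per postsynaptic neuron
    return W * (target / np.maximum(drive, 1e-9))[:, None]

rng = np.random.default_rng(0)
W = rng.uniform(0.0, 1.0, size=(4, 6))
x, y = rng.uniform(0.0, 1.0, size=6), rng.uniform(0.0, 1.0, size=4)
for _ in range(20):
    W = hebbian_homeostatic_step(W, x, y)
print(np.allclose(W @ x, 0.5))  # True: drive pinned at the homeostatic target
```

The scaling step is what keeps the Hebbian term from growing without bound, which is the role homeostasis plays in the learning rule described by the abstract.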
Affiliation(s)
- S Bianchi
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano and IUNET, Milano, 20133, Italy
- Infineon Technologies, Villach, Austria
- I Muñoz-Martin
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano and IUNET, Milano, 20133, Italy
- Infineon Technologies, Villach, Austria
- E Covi
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano and IUNET, Milano, 20133, Italy
- NaMLab gGmbH, Dresden, Germany
- A Regev
- Weebit Nano, Hod Hasharon, Israel
- G Molas
- Weebit Nano, Hod Hasharon, Israel
- J F Nodin
- Univ. Grenoble Alpes, CEA, Leti, F-38000, Grenoble, France
- F Andrieu
- Univ. Grenoble Alpes, CEA, Leti, F-38000, Grenoble, France
- D Ielmini
- Dipartimento di Elettronica, Informazione e Bioingegneria, Politecnico di Milano and IUNET, Milano, 20133, Italy
2. Barkdoll K, Lu Y, Barranca VJ. New insights into binocular rivalry from the reconstruction of evolving percepts using model network dynamics. Front Comput Neurosci 2023; 17:1137015. PMID: 37034441; PMCID: PMC10079880; DOI: 10.3389/fncom.2023.1137015.
Abstract
When the two eyes are presented with highly distinct stimuli, the resulting visual percept generally switches every few seconds between the two monocular images in an irregular fashion, giving rise to a phenomenon known as binocular rivalry. While a host of theoretical studies have explored potential mechanisms for binocular rivalry in the context of evoked model dynamics in response to simple stimuli, here we investigate binocular rivalry directly through complex stimulus reconstructions based on the activity of a two-layer neuronal network model with competing downstream pools driven by disparate monocular stimuli composed of image pixels. To estimate the dynamic percept, we derive a linear input-output mapping rooted in the non-linear network dynamics and iteratively apply compressive sensing techniques for signal recovery. Utilizing a dominance metric, we are able to identify when percept alternations occur and use data collected during each dominance period to generate a sequence of percept reconstructions. We show that despite the approximate nature of the input-output mapping and the significant reduction in neurons downstream relative to stimulus pixels, the dominant monocular image is well-encoded in the network dynamics and improvements are garnered when realistic spatial receptive field structure is incorporated into the feedforward connectivity. Our model demonstrates gamma-distributed dominance durations and well obeys Levelt's four laws for how dominance durations change with stimulus strength, agreeing with key recurring experimental observations often used to benchmark rivalry models. In light of evidence that individuals with autism exhibit relatively slow percept switching in binocular rivalry, we corroborate the ubiquitous hypothesis that autism manifests from reduced inhibition in the brain by systematically probing our model alternation rate across choices of inhibition strength. 
We exhibit sufficient conditions for producing binocular rivalry in the context of natural scene stimuli, opening a clearer window into the dynamic brain computations that vary with the generated percept and a potential path toward further understanding neurological disorders.
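The alternation mechanism underlying rivalry models of this kind can be illustrated with a minimal two-pool rate model: mutual inhibition creates winner-take-all dominance, and slow adaptation of the dominant pool forces periodic switches. This sketch (all parameters illustrative) is far simpler than the paper's two-layer spiking network with pixel inputs, but it produces the same qualitative alternations.

```python
import numpy as np

def rivalry_sim(T=20000, dt=0.001, I=1.0, beta=1.1, g=0.5, tau=0.01, tau_a=1.0):
    """Two mutually inhibiting pools with slow adaptation: a minimal
    rate model whose dominance alternates, as in binocular rivalry."""
    r = np.array([0.6, 0.4])   # pool firing rates (one per eye)
    a = np.zeros(2)            # slow adaptation variables
    dominant = []
    for _ in range(T):
        inp = I - beta * r[::-1] - g * a           # drive minus cross-inhibition and adaptation
        r += dt / tau * (-r + np.maximum(inp, 0))  # fast rate dynamics
        a += dt / tau_a * (-a + r)                 # adaptation tracks the dominant pool
        dominant.append(int(r[1] > r[0]))
    return np.array(dominant)

d = rivalry_sim()
switches = np.count_nonzero(np.diff(d))
print(switches > 0)  # True: adaptation forces percept alternations
```

With inhibition strong enough that only one pool stays active, adaptation slowly erodes the winner's advantage until the suppressed pool escapes, which is the dynamic the dominance metric in the paper detects.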
3. Brain-inspired meta-reinforcement learning cognitive control in conflictual inhibition decision-making task for artificial agents. Neural Netw 2022; 154:283-302. DOI: 10.1016/j.neunet.2022.06.020.
4. Yan Y, Burgess N, Bicanski A. A model of head direction and landmark coding in complex environments. PLoS Comput Biol 2021; 17:e1009434. PMID: 34570749; PMCID: PMC8496825; DOI: 10.1371/journal.pcbi.1009434.
Abstract
Environmental information is required to stabilize estimates of head direction (HD) based on angular path integration. However, it is unclear how this happens in real-world (visually complex) environments. We present a computational model of how visual feedback can stabilize HD information in environments that contain multiple cues of varying stability and directional specificity. We show how combinations of feature-specific visual inputs can generate a stable unimodal landmark bearing signal, even in the presence of multiple cues and ambiguous directional specificity. This signal is associated with the retrosplenial HD signal (inherited from thalamic HD cells) and conveys feedback to the subcortical HD circuitry. The model predicts neurons with a unimodal encoding of the egocentric orientation of the array of landmarks, rather than any one particular landmark. The relationship between these abstract landmark bearing neurons and head direction cells is reminiscent of the relationship between place cells and grid cells. Their unimodal encoding is formed from visual inputs via a modified version of Oja's Subspace Algorithm. The rule allows the landmark bearing signal to disconnect from directionally unstable or ephemeral cues, incorporate newly added stable cues, support orientation across many different environments (high memory capacity), and is consistent with recent empirical findings on bidirectional HD firing reported in the retrosplenial cortex. Our account of visual feedback for HD stabilization provides a novel perspective on neural mechanisms of spatial navigation within richer sensory environments, and makes experimentally testable predictions.
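Oja's subspace rule, which the model modifies to learn the landmark-bearing signal, has a compact standard form worth recalling: Hebbian growth plus a decay term that keeps the weights normalized. The sketch below (one output unit, synthetic inputs, illustrative learning rate) shows the textbook rule extracting the dominant input direction; the paper's modified version adds mechanisms for disconnecting from unstable cues.

```python
import numpy as np

def oja_step(W, x, lr=0.1):
    """One step of Oja's subspace rule: the rows of W (k, n) converge to an
    (approximately) orthonormal basis of the top-k principal subspace of the
    inputs. The decay term -(y y^T) W replaces explicit normalization."""
    y = W @ x
    return W + lr * (np.outer(y, x) - np.outer(y, y) @ W)

rng = np.random.default_rng(1)
v = np.array([1.0, 0.0, 0.0, 0.0])       # dominant input direction
W = rng.normal(scale=0.1, size=(1, 4))   # one output unit, random start
for _ in range(3000):
    x = rng.normal() * v + 0.3 * rng.normal(size=4)  # signal plus isotropic noise
    W = oja_step(W, x)
w = W[0] / np.linalg.norm(W[0])
print(abs(w @ v) > 0.9)  # learned weight vector aligns with the dominant direction
```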
Affiliation(s)
- Yijia Yan
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- Nuffield Department of Clinical Neurosciences, University of Oxford, Oxford, United Kingdom
- Neil Burgess
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- Andrej Bicanski
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom
- School of Psychology, Newcastle University, Newcastle upon Tyne, United Kingdom
5. Golosio B, De Luca C, Capone C, Pastorelli E, Stegel G, Tiddia G, De Bonis G, Paolucci PS. Thalamo-cortical spiking model of incremental learning combining perception, context and NREM-sleep. PLoS Comput Biol 2021; 17:e1009045. PMID: 34181642; PMCID: PMC8270441; DOI: 10.1371/journal.pcbi.1009045.
Abstract
The brain exhibits capabilities of fast incremental learning from few noisy examples, as well as the ability to associate similar memories in autonomously-created categories and to combine contextual hints with sensory perceptions. Together with sleep, these mechanisms are thought to be key components of many high-level cognitive functions. Yet, little is known about the underlying processes and the specific roles of different brain states. In this work, we exploited the combination of context and perception in a thalamo-cortical model based on a soft winner-take-all circuit of excitatory and inhibitory spiking neurons. After calibrating this model to express awake and deep-sleep states with features comparable with biological measures, we demonstrate the model capability of fast incremental learning from few examples, its resilience when proposed with noisy perceptions and contextual signals, and an improvement in visual classification after sleep due to induced synaptic homeostasis and association of similar memories. We created a thalamo-cortical spiking model (ThaCo) with the purpose of demonstrating a link among two phenomena that we believe to be essential for the brain capability of efficient incremental learning from few examples in noisy environments. Grounded in two experimental observations—the first about the effects of deep-sleep on pre- and post-sleep firing rate distributions, the second about the combination of perceptual and contextual information in pyramidal neurons—our model joins these two ingredients. ThaCo alternates phases of incremental learning, classification and deep-sleep. Memories of handwritten digit examples are learned through thalamo-cortical and cortico-cortical plastic synapses. In absence of noise, the combination of contextual information with perception enables fast incremental learning. Deep-sleep becomes crucial when noisy inputs are considered. 
We observed in ThaCo both homeostatic and associative processes: deep-sleep fights noise in perceptual and internal knowledge and it supports the categorical association of examples belonging to the same digit class, through reinforcement of class-specific cortico-cortical synapses. The distributions of pre-sleep and post-sleep firing rates during classification change in a manner similar to those of experimental observation. These changes promote energetic efficiency during recall of memories, better representation of individual memories and categories and higher classification performances.
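The soft winner-take-all circuit at the heart of ThaCo can be caricatured with rate units and one shared inhibitory unit: strong inputs survive the competition at graded levels while weak ones are suppressed. A hedged sketch (rate-based, illustrative parameters), not the spiking implementation:

```python
import numpy as np

def soft_wta(inputs, g=0.5, steps=200, dt=0.05):
    """Soft winner-take-all via a single shared inhibitory unit: the
    inhibition level settles where only sufficiently strong inputs
    remain above threshold, and those stay active at graded rates."""
    r = np.zeros_like(inputs, dtype=float)
    inh = 0.0
    for _ in range(steps):
        r += dt * (-r + np.maximum(inputs - inh, 0.0))  # excitatory units
        inh += dt * (-inh + g * r.sum())                # shared inhibition
    return r

out = soft_wta(np.array([1.0, 0.8, 0.2]))
print(out[0] > out[1] > out[2])  # True: graded competition, weakest input silenced
```

Because the competition is soft rather than hard, several units can remain co-active, which is what allows combining contextual and perceptual evidence rather than forcing a single premature winner.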
Affiliation(s)
- Bruno Golosio
- Dipartimento di Fisica, Università di Cagliari, Cagliari, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Cagliari, Cagliari, Italy
- Chiara De Luca
- Ph.D. Program in Behavioural Neuroscience, “Sapienza” Università di Roma, Rome, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
- Cristiano Capone
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
- Elena Pastorelli
- Ph.D. Program in Behavioural Neuroscience, “Sapienza” Università di Roma, Rome, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
- Giovanni Stegel
- Dipartimento di Chimica e Farmacia, Università di Sassari, Sassari, Italy
- Gianmarco Tiddia
- Dipartimento di Fisica, Università di Cagliari, Cagliari, Italy
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Cagliari, Cagliari, Italy
- Giulia De Bonis
- Istituto Nazionale di Fisica Nucleare (INFN), Sezione di Roma, Rome, Italy
6. Toomey E, Segall K, Castellani M, Colangelo M, Lynch N, Berggren KK. Superconducting Nanowire Spiking Element for Neural Networks. Nano Lett 2020; 20:8059-8066. PMID: 32965119; DOI: 10.1021/acs.nanolett.0c03057.
Abstract
As the limits of traditional von Neumann computing come into view, the brain's ability to communicate vast quantities of information using low-power spikes has become an increasing source of inspiration for alternative architectures. Key to the success of these large-scale neural networks is a power-efficient spiking element that is scalable and easily interfaced with traditional control electronics. In this work, we present a spiking element fabricated from superconducting nanowires that has pulse energies on the order of ∼10 aJ. We demonstrate that the device reproduces essential characteristics of biological neurons, such as a refractory period and a firing threshold. Through simulations using experimentally measured device parameters, we show how nanowire-based networks may be used for inference in image recognition and that the probabilistic nature of nanowire switching may be exploited for modeling biological processes and for applications that rely on stochasticity.
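The two neuronal features the nanowire element reproduces, a firing threshold and an absolute refractory period, are exactly those of a textbook leaky integrate-and-fire (LIF) unit. The sketch below is a software LIF illustration with arbitrary units and parameters, not a circuit model of the superconducting device:

```python
import numpy as np

def lif_spikes(current, thresh=1.0, tau=20.0, refrac=5, dt=1.0):
    """Leaky integrate-and-fire with an absolute refractory period.
    Returns the time steps at which the membrane crossed threshold."""
    v, hold, spikes = 0.0, 0, []
    for t, i_in in enumerate(current):
        if hold > 0:               # refractory: clamp the membrane, count down
            hold -= 1
            v = 0.0
            continue
        v += dt / tau * (-v + i_in * tau)  # leaky integration of the input
        if v >= thresh:                    # firing threshold
            spikes.append(t)
            v, hold = 0.0, refrac          # reset and enter refractory period
    return spikes

spikes = lif_spikes(np.full(200, 0.15))
gaps = np.diff(spikes)
print(len(spikes) > 1 and gaps.min() > 5)  # True: no two spikes within the refractory period
```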
Affiliation(s)
- E Toomey
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, United States
- K Segall
- Department of Physics and Astronomy, Colgate University, Hamilton, New York 13346, United States
- M Castellani
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, United States
- M Colangelo
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, United States
- N Lynch
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, United States
- K K Berggren
- Department of Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, Massachusetts 02139, United States
7. Bianchi S, Muñoz-Martin I, Ielmini D. Bio-Inspired Techniques in a Fully Digital Approach for Lifelong Learning. Front Neurosci 2020; 14:379. PMID: 32425749; PMCID: PMC7203347; DOI: 10.3389/fnins.2020.00379.
Abstract
Lifelong learning deeply underpins the resilience of biological organisms in the face of a constantly changing environment. This flexibility has allowed the evolution of parallel-distributed systems able to merge past information with new stimuli for accurate and efficient brain computation. Nowadays, there are strong efforts to reproduce such intelligent systems in standard artificial neural networks (ANNs). However, despite some great results in specific tasks, ANNs still appear too rigid and static in real-life settings compared with biological systems. Thus, it is necessary to define a new neural paradigm capable of merging the lifelong resilience of biological organisms with the high accuracy of ANNs. Here, we present a digital implementation of a novel mixed supervised-unsupervised neural network capable of performing lifelong learning. The network uses a set of convolutional filters to extract features from the input images of the MNIST and Fashion-MNIST training datasets. By transfer learning, this information defines an original combination of responses of both trained and non-trained classes. The responses are then used in subsequent unsupervised learning based on spike-timing-dependent plasticity (STDP). This procedure allows the clustering of non-trained information thanks to bio-inspired mechanisms such as neuronal redundancy and spike-frequency adaptation. We demonstrate the implementation of the neural network in a fully digital environment, namely the Xilinx Zynq-7000 System on Chip (SoC). We illustrate a user-friendly interface to test the network by choosing the number and type of non-trained classes, or by drawing a custom pattern on a tablet. Finally, we compare this work with networks based on memristive synaptic devices capable of continual learning, highlighting the main differences and capabilities of a fully digital approach.
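The pair-based STDP rule driving the unsupervised stage has a standard exponential-window form; the amplitudes and time constant below are generic assumptions, not the values of the paper's digital implementation:

```python
import numpy as np

def stdp_dw(dt_ms, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Pair-based STDP window: potentiate when pre precedes post
    (dt_ms > 0), depress otherwise, with exponentially decaying
    magnitude as the spike pair moves apart in time."""
    if dt_ms > 0:
        return a_plus * np.exp(-dt_ms / tau)    # pre-before-post: LTP
    return -a_minus * np.exp(dt_ms / tau)       # post-before-pre: LTD

print(stdp_dw(10.0) > 0)    # True: potentiation
print(stdp_dw(-10.0) < 0)   # True: depression
```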
Affiliation(s)
- Daniele Ielmini
- Dipartimento di Elettronica, Informazione e Bioingegneria (DEIB), Politecnico di Milano, Milan, Italy
8. Medial Prefrontal Cortical Modulation of Whisker Thalamic Responses in Anesthetized Rats. Neuroscience 2019; 406:626-636. DOI: 10.1016/j.neuroscience.2019.01.059.
9. Barranca VJ, Huang H, Kawakita G. Network structure and input integration in competing firing rate models for decision-making. J Comput Neurosci 2019; 46:145-168. PMID: 30661144; DOI: 10.1007/s10827-018-0708-6.
Abstract
Making a decision among numerous alternatives is a pervasive and central undertaking encountered by mammals in natural settings. While decision making for two-option tasks has been studied extensively both experimentally and theoretically, characterizing decision making in the face of a large set of alternatives remains challenging. We explore this issue by formulating a scalable mechanistic network model for decision making and analyzing the dynamics evoked given various potential network structures. In the case of a fully-connected network, we provide an analytical characterization of the model fixed points and their stability with respect to winner-take-all behavior for fair tasks. We compare several means of input integration, demonstrating a more gradual sigmoidal transfer function is likely evolutionarily advantageous relative to binary gain commonly utilized in engineered systems. We show via asymptotic analysis and numerical simulation that sigmoidal transfer functions with smaller steepness yield faster response times but depreciation in accuracy. However, in the presence of noise or degradation of connections, a sigmoidal transfer function garners significantly more robust and accurate decision-making dynamics. For fair tasks and sigmoidal gain, our model network also exhibits a stable parameter regime that produces high accuracy and persists across tasks with diverse numbers of alternatives and difficulties, satisfying physiological energetic constraints. In the case of more sparse and structured network topologies, including random, regular, and small-world connectivity, we show the high-accuracy parameter regime persists for biologically realistic connection densities. Our work shows how neural system architecture is potentially optimal in making economic, reliable, and advantageous decisions across tasks.
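The model class discussed here, competing pools with self-excitation, lateral inhibition, and sigmoidal input integration, can be sketched directly. With illustrative parameters (not the paper's), the pool receiving the largest input wins the competition:

```python
import numpy as np

def decide(inputs, steps=400, dt=0.05, beta=8.0, g=1.0):
    """Fully connected competing-pool rate model: each pool excites
    itself and inhibits all others through a sigmoidal transfer
    function, producing winner-take-all decision dynamics."""
    f = lambda u: 1.0 / (1.0 + np.exp(-beta * (u - 0.5)))  # sigmoidal transfer
    r = np.zeros(len(inputs))
    for _ in range(steps):
        u = inputs + r - g * (r.sum() - r)  # self-excitation minus lateral inhibition
        r += dt * (-r + f(u))
    return r

r = decide(np.array([0.6, 0.5, 0.4]))
print(int(np.argmax(r)))  # → 0 (the pool with the largest input wins)
```

A steeper sigmoid makes the transfer closer to the binary gain of engineered systems; the paper's point is that a more gradual sigmoid trades a little speed for substantially more robustness to noise and connection damage.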
Affiliation(s)
- Han Huang
- Swarthmore College, 500 College Avenue, Swarthmore, PA, 19081, USA
- Genji Kawakita
- Swarthmore College, 500 College Avenue, Swarthmore, PA, 19081, USA
10. Matheus Gauy M, Lengler J, Einarsson H, Meier F, Weissenberger F, Yanik MF, Steger A. A Hippocampal Model for Behavioral Time Acquisition and Fast Bidirectional Replay of Spatio-Temporal Memory Sequences. Front Neurosci 2018; 12:961. PMID: 30618583; PMCID: PMC6306028; DOI: 10.3389/fnins.2018.00961.
Abstract
The hippocampus is known to play a crucial role in the formation of long-term memory. For this, fast replays of previously experienced activities during sleep or after reward experiences are believed to be crucial. But how such replays are generated is still unclear. In this paper we propose a possible mechanism: we present a model that can store experienced trajectories on a behavioral timescale after a single run, and can subsequently replay such trajectories bidirectionally, omitting specifics of the original behavior such as speed, while allowing repetitions of events, even with different subsequent events. Our solution builds on well-known concepts, one-shot learning and synfire chains, enhancing them with additional mechanisms based on global inhibition and disinhibition. For replays, our approach relies on dendritic spikes and cholinergic modulation, as supported by experimental data. We also hypothesize a functional role of disinhibition as a pacemaker during behavioral time.
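The chain-like storage and bidirectional replay can be reduced to a toy skeleton: one-shot storage writes successor links between states, and replay walks those links forward or, by transposing the weight matrix, backward. All timing machinery (behavioral timescale, dendritic spikes, disinhibition) is deliberately omitted; the state encoding and names are assumptions:

```python
import numpy as np

def store_sequence(seq, n):
    """One-shot storage of a trajectory as a chain: W[j, i] = 1 when
    state i was immediately followed by state j (a synfire-chain-like
    skeleton built from a single run)."""
    W = np.zeros((n, n))
    for i, j in zip(seq, seq[1:]):
        W[j, i] = 1.0
    return W

def replay(W, start, steps, backward=False):
    """Deterministic replay: repeatedly jump to the strongest successor
    (or predecessor, when replaying in reverse via the transpose)."""
    M = W.T if backward else W
    s, out = start, [start]
    for _ in range(steps):
        s = int(np.argmax(M[:, s]))
        out.append(s)
    return out

W = store_sequence([0, 3, 1, 4, 2], n=5)
print(replay(W, 0, 4))                 # → [0, 3, 1, 4, 2]  (forward replay)
print(replay(W, 2, 4, backward=True))  # → [2, 4, 1, 3, 0]  (reverse replay)
```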
Affiliation(s)
- Marcelo Matheus Gauy
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich, Switzerland
- Johannes Lengler
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich, Switzerland
- Hafsteinn Einarsson
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich, Switzerland
- Florian Meier
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich, Switzerland
- Felix Weissenberger
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich, Switzerland
- Mehmet Fatih Yanik
- Department of Information Technology and Electrical Engineering, Institute for Neuroinformatics, ETH Zurich, Zurich, Switzerland
- Angelika Steger
- Department of Computer Science, Institute of Theoretical Computer Science, ETH Zurich, Zurich, Switzerland
11. Saglietti L, Gerace F, Ingrosso A, Baldassi C, Zecchina R. From statistical inference to a differential learning rule for stochastic neural networks. Interface Focus 2018; 8:20180033. PMID: 30443331; PMCID: PMC6227809; DOI: 10.1098/rsfs.2018.0033.
Abstract
Stochastic neural networks are a prototypical computational device able to build a probabilistic representation of an ensemble of external stimuli. Building on the relationship between inference and learning, we derive a synaptic plasticity rule that relies only on delayed activity correlations, and that shows a number of remarkable features. Our delayed-correlations matching (DCM) rule satisfies some basic requirements for biological feasibility: finite and noisy afferent signals, Dale’s principle and asymmetry of synaptic connections, locality of the weight update computations. Nevertheless, the DCM rule is capable of storing a large, extensive number of patterns as attractors in a stochastic recurrent neural network, under general scenarios without requiring any modification: it can deal with correlated patterns, a broad range of architectures (with or without hidden neuronal states), one-shot learning with the palimpsest property, all the while avoiding the proliferation of spurious attractors. When hidden units are present, our learning rule can be employed to construct Boltzmann machine-like generative models, exploiting the addition of hidden neurons in feature extraction and classification tasks.
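The structure of the delayed-correlations matching (DCM) rule can be sketched schematically: each weight moves by the difference between the one-step-delayed correlation measured with the data clamped and the one measured in the free-running model. This captures only the rule's shape; the biological constraints the paper adds (Dale's principle, asymmetric connections, locality) are not modelled here, and all names and values are assumptions.

```python
import numpy as np

def dcm_update(W, data_states, model_states, lr=0.1):
    """Schematic DCM step: match the model's delayed correlations to the
    data's. S is a (T, N) array of +/-1 network states over time."""
    def delayed_corr(S):
        # correlation between states at time t+1 and states at time t
        return S[1:].T @ S[:-1] / (len(S) - 1)
    return W + lr * (delayed_corr(data_states) - delayed_corr(model_states))

rng = np.random.default_rng(0)
S = rng.choice([-1.0, 1.0], size=(50, 4))
W = np.zeros((4, 4))
print(np.allclose(dcm_update(W, S, S), 0.0))  # True: matched statistics, no update
```

When the model's free-running delayed correlations already match the data, the update vanishes, which is the fixed-point condition the learning rule drives the network toward.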
Affiliation(s)
- Luca Saglietti
- Microsoft Research New England, Cambridge, MA, USA
- Italian Institute for Genomic Medicine, Torino, Italy
- Federica Gerace
- Italian Institute for Genomic Medicine, Torino, Italy
- Politecnico di Torino, DISAT, Torino, Italy
- Carlo Baldassi
- Italian Institute for Genomic Medicine, Torino, Italy
- Bocconi Institute for Data Science and Analytics, Bocconi University, Milano, Italy
- Istituto Nazionale di Fisica Nucleare, Torino, Italy
- Riccardo Zecchina
- Italian Institute for Genomic Medicine, Torino, Italy
- Bocconi Institute for Data Science and Analytics, Bocconi University, Milano, Italy
- International Centre for Theoretical Physics, Trieste, Italy
12. Marić M, Domijan D. A Neurodynamic Model of Feature-Based Spatial Selection. Front Psychol 2018; 9:417. PMID: 29643826; PMCID: PMC5883145; DOI: 10.3389/fpsyg.2018.00417.
Abstract
Huang and Pashler (2007) suggested that feature-based attention creates a special form of spatial representation, which is termed a Boolean map. It partitions the visual scene into two distinct and complementary regions: selected and not selected. Here, we developed a model of a recurrent competitive network that is capable of state-dependent computation. It selects multiple winning locations based on a joint top-down cue. We augmented a model of the WTA circuit that is based on linear-threshold units with two computational elements: dendritic non-linearity that acts on the excitatory units and activity-dependent modulation of synaptic transmission between excitatory and inhibitory units. Computer simulations showed that the proposed model could create a Boolean map in response to a featured cue and elaborate it using the logical operations of intersection and union. In addition, it was shown that in the absence of top-down guidance, the model is sensitive to bottom-up cues such as saliency and abrupt visual onset.
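The Boolean-map operations the network elaborates, intersection and union of selected-location maps, are plain set operations on binary masks. This tiny sketch (with hypothetical feature maps) shows the target computation, not the recurrent circuit that implements it:

```python
import numpy as np

# Boolean maps partition the scene into selected / not-selected locations.
scene_red  = np.array([[1, 0], [1, 0]], dtype=bool)  # locations containing red
scene_vert = np.array([[1, 1], [0, 0]], dtype=bool)  # locations containing vertical

print((scene_red & scene_vert).tolist())  # → [[True, False], [False, False]]  ("red AND vertical")
print((scene_red | scene_vert).tolist())  # → [[True, True], [True, False]]    ("red OR vertical")
```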
Affiliation(s)
- Mateja Marić
- Department of Psychology, Faculty of Humanities and Social Sciences, University of Rijeka, Rijeka, Croatia
- Dražen Domijan
- Department of Psychology, Faculty of Humanities and Social Sciences, University of Rijeka, Rijeka, Croatia
13. Burylko O, Kazanovich Y, Borisyuk R. Winner-take-all in a phase oscillator system with adaptation. Sci Rep 2018; 8:416. PMID: 29323149; PMCID: PMC5765106; DOI: 10.1038/s41598-017-18666-3.
Abstract
We consider a system of generalized phase oscillators with a central element and radial connections. In contrast to conventional phase oscillators of the Kuramoto type, the dynamic variables in our system include not only the phase of each oscillator but also the natural frequency of the central oscillator, and the connection strengths from the peripheral oscillators to the central oscillator. With appropriate parameter values the system demonstrates winner-take-all behavior in terms of the competition between peripheral oscillators for the synchronization with the central oscillator. Conditions for the winner-take-all regime are derived for stationary and non-stationary types of system dynamics. Bifurcation analysis of the transition from stationary to non-stationary winner-take-all dynamics is presented. A new bifurcation type called a Saddle Node on Invariant Torus (SNIT) bifurcation was observed and is described in detail. Computer simulations of the system allow an optimal choice of parameters for winner-take-all implementation.
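The winner-take-all competition among peripheral oscillators can be sketched with simplified stand-in equations: peripheral phases advance at fixed rates, while the central oscillator's phase, natural frequency, and incoming connection strengths all adapt. The equations and parameters below are illustrative assumptions, not the paper's generalized model:

```python
import numpy as np

def wta_oscillators(omega_per, omega0=1.2, T=40000, dt=0.001, eps=0.5, alpha=0.1):
    """Radial phase-oscillator network with an adaptive centre: the
    peripheral oscillator closest in frequency phase-locks the centre,
    and its connection is reinforced while the others decay."""
    th = np.zeros(len(omega_per))          # peripheral phases
    th0 = 0.0                              # central phase
    w = np.full(len(omega_per), 0.5)       # peripheral-to-centre strengths
    for _ in range(T):
        delta = th - th0
        coupling = np.mean(w * np.sin(delta))
        th0 += dt * (omega0 + coupling)
        th += dt * omega_per
        omega0 += dt * eps * coupling          # central natural frequency adapts
        w += dt * alpha * (np.cos(delta) - w)  # strengthen in-phase connections
    return omega0, w

omega0, w = wta_oscillators(np.array([1.0, 3.0]))
print(w[0] > w[1])  # True: the nearby-frequency peripheral wins the competition
```

The winner's connection saturates near its in-phase value while the loser's time-averages toward zero, which is the winner-take-all regime the paper analyses via bifurcation theory.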
Affiliation(s)
- Oleksandr Burylko
- Institute of Mathematics, National Academy of Sciences of Ukraine, Tereshchenkivska 3, 01601, Kyiv, Ukraine
- Yakov Kazanovich
- Institute of Mathematical Problems of Biology, The Branch of Keldysh Institute of Applied Mathematics of Russian Academy of Sciences, 142290, Pushchino, Russia
- Roman Borisyuk
- Institute of Mathematical Problems of Biology, The Branch of Keldysh Institute of Applied Mathematics of Russian Academy of Sciences, 142290, Pushchino, Russia
- School of Computing and Mathematics, Plymouth University, PL4 8AA, Plymouth, United Kingdom
15. Kornfeld J, Benezra SE, Narayanan RT, Svara F, Egger R, Oberlaender M, Denk W, Long MA. EM connectomics reveals axonal target variation in a sequence-generating network. eLife 2017; 6:e24364. PMID: 28346140; PMCID: PMC5400503; DOI: 10.7554/elife.24364.
Abstract
The sequential activation of neurons has been observed in various areas of the brain, but in no case is the underlying network structure well understood. Here we examined the circuit anatomy of zebra finch HVC, a cortical region that generates sequences underlying the temporal progression of the song. We combined serial block-face electron microscopy with light microscopy to determine the cell types targeted by HVC(RA) neurons, which control song timing. Close to their soma, axons almost exclusively targeted inhibitory interneurons, consistent with what had been found with electrical recordings from pairs of cells. Conversely, far from the soma the targets were mostly other excitatory neurons, about half of these being other HVC(RA) cells. Both observations are consistent with the notion that the neural sequences that pace the song are generated by global synaptic chains in HVC embedded within local inhibitory networks.
Affiliation(s)
- Sam E Benezra
- NYU Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, United States
- Center for Neural Science, New York University, New York, United States
- Rajeevan T Narayanan
- Computational Neuroanatomy Group, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
- Center of Advanced European Studies and Research, Bonn, Germany
- Fabian Svara
- Max Planck Institute of Neurobiology, Martinsried, Germany
- Robert Egger
- NYU Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, United States
- Center for Neural Science, New York University, New York, United States
- Marcel Oberlaender
- Computational Neuroanatomy Group, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Bernstein Center for Computational Neuroscience, Tübingen, Germany
- Center of Advanced European Studies and Research, Bonn, Germany
- Winfried Denk
- Max Planck Institute of Neurobiology, Martinsried, Germany
- Michael A Long
- NYU Neuroscience Institute and Department of Otolaryngology, New York University Langone Medical Center, New York, United States
- Center for Neural Science, New York University, New York, United States
|
16
|
Halassa MM, Acsády L. Thalamic Inhibition: Diverse Sources, Diverse Scales. Trends Neurosci 2016; 39:680-693. [PMID: 27589879 DOI: 10.1016/j.tins.2016.08.001] [Citation(s) in RCA: 132] [Impact Index Per Article: 16.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2016] [Revised: 07/31/2016] [Accepted: 08/02/2016] [Indexed: 12/11/2022]
Abstract
The thalamus is the major source of cortical inputs shaping sensation, action, and cognition. Thalamic circuits are targeted by two major inhibitory systems: the thalamic reticular nucleus (TRN) and extrathalamic inhibitory (ETI) inputs. A unifying framework of how these systems operate is currently lacking. Here, we propose that TRN circuits are specialized to exert thalamic control at different spatiotemporal scales. Local inhibition of thalamic spike rates prevails during attentional selection, whereas global inhibition more likely prevails during sleep. In contrast, the ETI (arising from basal ganglia, zona incerta (ZI), anterior pretectum, and pontine reticular formation) provides temporally precise and focal inhibition, impacting spike timing. Together, these inhibitory systems allow graded control of thalamic output, enabling thalamocortical operations to dynamically match ongoing behavioral demands.
Collapse
Affiliation(s)
- Michael M Halassa
- New York University Neuroscience Institute and the Departments of Psychiatry, Neuroscience and Physiology, New York University Langone Medical Center, New York, 10016, USA; Center for Neural Science, New York University, New York, 10016, USA.
| | - László Acsády
- Laboratory of Thalamus Research, Institute of Experimental Medicine, Hungarian Academy of Sciences, Budapest, 1083 Hungary.
| |
Collapse
|
17
|
Grajski KA. Emergent Spatial Patterns of Excitatory and Inhibitory Synaptic Strengths Drive Somatotopic Representational Discontinuities and their Plasticity in a Computational Model of Primary Sensory Cortical Area 3b. Front Comput Neurosci 2016; 10:72. [PMID: 27504086 PMCID: PMC4958931 DOI: 10.3389/fncom.2016.00072] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/12/2016] [Accepted: 06/29/2016] [Indexed: 11/13/2022] Open
Abstract
Mechanisms underlying the emergence and plasticity of representational discontinuities in the mammalian primary somatosensory cortical representation of the hand are investigated in a computational model. The model consists of an input lattice organized as a three-digit hand forward-connected to a lattice of cortical columns each of which contains a paired excitatory and inhibitory cell. Excitatory and inhibitory synaptic plasticity of feedforward and lateral connection weights is implemented as a simple covariance rule and competitive normalization. Receptive field properties are computed independently for excitatory and inhibitory cells and compared within and across columns. Within digit representational zones intracolumnar excitatory and inhibitory receptive field extents are concentric, single-digit, small, and unimodal. Exclusively in representational boundary-adjacent zones, intracolumnar excitatory and inhibitory receptive field properties diverge: excitatory cell receptive fields are single-digit, small, and unimodal; and the paired inhibitory cell receptive fields are bimodal, double-digit, and large. In simulated syndactyly (webbed fingers), boundary-adjacent intracolumnar receptive field properties reorganize to within-representation type; divergent properties are reacquired following syndactyly release. This study generates testable hypotheses for assessment of cortical laminar-dependent receptive field properties and plasticity within and between cortical representational zones. For computational studies, present results suggest that concurrent excitatory and inhibitory plasticity may underlie novel emergent properties.
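The plasticity scheme described in this abstract (a covariance Hebbian rule followed by competitive normalization of each column's incoming weights) can be sketched in a few lines. This is a minimal illustrative sketch, not the paper's actual model: the array sizes, learning rate, and function names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

n_inputs, n_cols = 16, 8
# feedforward weights from the input lattice to the cortical columns
W = rng.uniform(0.1, 1.0, size=(n_cols, n_inputs))

def covariance_update(W, pre, post, lr=0.01):
    """Covariance rule: dw ~ (pre - <pre>) * (post - <post>),
    followed by competitive normalization of each column's weights."""
    dpre = pre - pre.mean()
    dpost = post - post.mean()
    W = W + lr * np.outer(dpost, dpre)
    W = np.clip(W, 0.0, None)               # keep weights non-negative
    W = W / W.sum(axis=1, keepdims=True)    # each column competes for a fixed total
    return W

pre = rng.random(n_inputs)          # stand-in for tactile input activity
post = W @ pre                      # stand-in for column responses
W = covariance_update(W, pre, post)
```

The normalization step is what makes the rule competitive: a synapse can only grow at the expense of the other synapses converging on the same column.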
Collapse
|
18
|
A winner-take-all approach to emotional neural networks with universal approximation property. Inf Sci (N Y) 2016. [DOI: 10.1016/j.ins.2016.01.055] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/17/2023]
|
19
|
Spike-Based Bayesian-Hebbian Learning of Temporal Sequences. PLoS Comput Biol 2016; 12:e1004954. [PMID: 27213810 PMCID: PMC4877102 DOI: 10.1371/journal.pcbi.1004954] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/29/2015] [Accepted: 04/28/2016] [Indexed: 11/25/2022] Open
Abstract
Many cognitive and motor functions are enabled by the temporal representation and processing of stimuli, but it remains an open issue how neocortical microcircuits can reliably encode and replay such sequences of information. To better understand this, a modular attractor memory network is proposed in which meta-stable sequential attractor transitions are learned through changes to synaptic weights and intrinsic excitabilities via the spike-based Bayesian Confidence Propagation Neural Network (BCPNN) learning rule. We find that the formation of distributed memories, embodied by increased periods of firing in pools of excitatory neurons, together with asymmetrical associations between these distinct network states, can be acquired through plasticity. The model’s feasibility is demonstrated using simulations of adaptive exponential integrate-and-fire model neurons (AdEx). We show that the learning and speed of sequence replay depend on a confluence of biophysically relevant parameters including stimulus duration, level of background noise, ratio of synaptic currents, and strengths of short-term depression and adaptation. Moreover, sequence elements are shown to flexibly participate multiple times in the sequence, suggesting that spiking attractor networks of this type can support an efficient combinatorial code. The model provides a principled approach towards understanding how multiple interacting plasticity mechanisms can coordinate hetero-associative learning in unison.
From one moment to the next, in an ever-changing world, and awash in a deluge of sensory data, the brain fluidly guides our actions throughout an astonishing variety of tasks. Processing this ongoing bombardment of information is a fundamental problem faced by its underlying neural circuits. Given that the structure of our actions along with the organization of the environment in which they are performed can be intuitively decomposed into sequences of simpler patterns, an encoding strategy reflecting the temporal nature of these patterns should offer an efficient approach for assembling more complex memories and behaviors. We present a model that demonstrates how activity could propagate through recurrent cortical microcircuits as a result of a learning rule based on neurobiologically plausible time courses and dynamics. The model predicts that the interaction between several learning and dynamical processes constitute a compound mnemonic engram that can flexibly generate sequential step-wise increases of activity within neural populations.
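The core of the BCPNN rule named in this abstract is that weights are derived from running probability estimates of pre- and postsynaptic activity: w_ij = log(p_ij / (p_i p_j)). A rate-based sketch under illustrative assumptions (population sizes, time constants, and the regularizing `eps` are not the paper's parameters, and the paper's spike-based trace cascade is simplified to a single low-pass filter):

```python
import numpy as np

rng = np.random.default_rng(1)
n_pre, n_post = 4, 3
tau, dt, eps = 0.1, 0.001, 1e-6

# probability traces, initialised to uninformative values
p_i = np.full(n_pre, 0.5)
p_j = np.full(n_post, 0.5)
p_ij = np.full((n_pre, n_post), 0.25)

def bcpnn_step(s_pre, s_post, p_i, p_j, p_ij):
    """One rate-based BCPNN update: low-pass filter activities into
    probability traces, then derive weights and biases from them."""
    k = dt / tau
    p_i = p_i + k * (s_pre - p_i)
    p_j = p_j + k * (s_post - p_j)
    p_ij = p_ij + k * (np.outer(s_pre, s_post) - p_ij)
    w = np.log((p_ij + eps) / (np.outer(p_i, p_j) + eps))   # log odds of co-activation
    bias = np.log(p_j + eps)                                 # intrinsic excitability term
    return p_i, p_j, p_ij, w, bias

for _ in range(100):
    s_pre = (rng.random(n_pre) < 0.3).astype(float)
    s_post = (rng.random(n_post) < 0.3).astype(float)
    p_i, p_j, p_ij, w, bias = bcpnn_step(s_pre, s_post, p_i, p_j, p_ij)
```

Units that fire together more often than chance end up with positive weights, units that are anti-correlated with negative ones; the bias term is what the abstract refers to as a learned intrinsic excitability.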
Collapse
|
20
|
Gilson M, Savin C, Zenke F. Editorial: Emergent Neural Computation from the Interaction of Different Forms of Plasticity. Front Comput Neurosci 2015; 9:145. [PMID: 26648864 PMCID: PMC4663259 DOI: 10.3389/fncom.2015.00145] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/02/2015] [Accepted: 11/13/2015] [Indexed: 11/13/2022] Open
Affiliation(s)
- Matthieu Gilson
- Computational Neuroscience Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Barcelona, Spain
| | - Cristina Savin
- Institute of Science and Technology Austria, Klosterneuburg, Austria
| | - Friedemann Zenke
- Neural Dynamics and Computation Lab, Department of Applied Physics, Stanford University, Stanford, CA, USA
| |
Collapse
|
21
|
Galluppi F, Lagorce X, Stromatias E, Pfeiffer M, Plana LA, Furber SB, Benosman RB. A framework for plasticity implementation on the SpiNNaker neural architecture. Front Neurosci 2015; 8:429. [PMID: 25653580 PMCID: PMC4299433 DOI: 10.3389/fnins.2014.00429] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2014] [Accepted: 12/07/2014] [Indexed: 11/21/2022] Open
Abstract
Many of the precise biological mechanisms of synaptic plasticity remain elusive, but simulations of neural networks have greatly enhanced our understanding of how specific global functions arise from the massively parallel computation of neurons and local Hebbian or spike-timing dependent plasticity rules. For simulating large portions of neural tissue, this has created an increasingly strong need for large scale simulations of plastic neural networks on special purpose hardware platforms, because synaptic transmissions and updates are poorly matched to the computing style supported by current architectures. Because of the great diversity of biological plasticity phenomena and the corresponding diversity of models, there is a strong need for testing various hypotheses about plasticity before committing to one hardware implementation. Here we present a novel framework for investigating different plasticity approaches on the SpiNNaker distributed digital neural simulation platform. The key innovation of the proposed architecture is to exploit the reconfigurability of the ARM processors inside SpiNNaker, dedicating a subset of them exclusively to process synaptic plasticity updates, while the rest perform the usual neural and synaptic simulations. We demonstrate the flexibility of the proposed approach by showing the implementation of a variety of spike- and rate-based learning rules, including standard Spike-Timing Dependent Plasticity (STDP), voltage-dependent STDP, and the rate-based BCM rule. We analyze their performance and validate them by running classical learning experiments in real time on a 4-chip SpiNNaker board. The result is an efficient, modular, flexible and scalable framework, which provides a valuable tool for the fast and easy exploration of learning models of very different kinds on the parallel and reconfigurable SpiNNaker system.
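The standard pair-based STDP rule mentioned in this abstract has a simple closed form: the weight change decays exponentially with the pre/post spike-time difference, with opposite signs for the two orderings. A minimal sketch (not SpiNNaker's actual implementation; the amplitudes and time constants below are conventional illustrative values):

```python
import numpy as np

A_plus, A_minus = 0.01, 0.012      # LTP / LTD amplitudes (illustrative)
tau_plus, tau_minus = 20.0, 20.0   # decay time constants in ms

def stdp_dw(delta_t):
    """Weight change for one pre/post spike pair.
    delta_t = t_post - t_pre in ms; positive means pre fired first."""
    if delta_t >= 0:
        return A_plus * np.exp(-delta_t / tau_plus)     # potentiation
    return -A_minus * np.exp(delta_t / tau_minus)       # depression

dw_ltp = stdp_dw(10.0)    # pre 10 ms before post -> positive change
dw_ltd = stdp_dw(-10.0)   # post 10 ms before pre -> negative change
```

A slight asymmetry (A_minus > A_plus) is a common choice that keeps the rule depression-dominated and the weights bounded in practice; hardware implementations typically realize the same curve with per-synapse exponential traces rather than explicit pair enumeration.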
Collapse
Affiliation(s)
- Francesco Galluppi
- Equipe de Vision et Calcul Naturel, Vision Institute, Université Pierre et Marie Curie, Unité Mixte de Recherche S968 Inserm, l'Université Pierre et Marie Curie, Centre National de la Recherche Scientifique Unité Mixte de Recherche 7210, Centre Hospitalier National d'Ophtalmologie des Quinze-Vingts, Paris, France
| | - Xavier Lagorce
- Equipe de Vision et Calcul Naturel, Vision Institute, Université Pierre et Marie Curie, Unité Mixte de Recherche S968 Inserm, l'Université Pierre et Marie Curie, Centre National de la Recherche Scientifique Unité Mixte de Recherche 7210, Centre Hospitalier National d'Ophtalmologie des Quinze-Vingts, Paris, France
| | - Evangelos Stromatias
- Advanced Processors Technology Group, School of Computer Science, University of Manchester, Manchester, UK
| | - Michael Pfeiffer
- Institute of Neuroinformatics, University of Zürich and ETH Zürich, Zürich, Switzerland
| | - Luis A. Plana
- Advanced Processors Technology Group, School of Computer Science, University of Manchester, Manchester, UK
| | - Steve B. Furber
- Advanced Processors Technology Group, School of Computer Science, University of Manchester, Manchester, UK
| | - Ryad B. Benosman
- Equipe de Vision et Calcul Naturel, Vision Institute, Université Pierre et Marie Curie, Unité Mixte de Recherche S968 Inserm, l'Université Pierre et Marie Curie, Centre National de la Recherche Scientifique Unité Mixte de Recherche 7210, Centre Hospitalier National d'Ophtalmologie des Quinze-Vingts, Paris, France
| |
Collapse
|