1
Kwon JY, Kim JE, Kim JS, Chun SY, Soh K, Yoon JH. Artificial sensory system based on memristive devices. Exploration (Beijing) 2024; 4:20220162. [PMID: 38854486] [PMCID: PMC10867403] [DOI: 10.1002/exp.20220162]
Abstract
In the biological nervous system, the integration and cooperation of parallel systems of receptors, neurons, and synapses allow efficient detection and processing of intricate and disordered external information. Such systems acquire and process environmental data in real time, efficiently handling complex tasks with minimal energy consumption. Memristors can mimic typical biological receptors, neurons, and synapses by implementing key features of neuronal signal-processing functions such as selective adaptation in receptors, leaky integrate-and-fire in neurons, and synaptic plasticity in synapses. External stimuli are sensitively detected and filtered by "artificial receptors," encoded into spike signals via "artificial neurons," and integrated and stored through "artificial synapses." The high operational speed, low power consumption, and superior scalability of memristive devices make their integration with high-performance sensors a promising approach for creating integrated artificial sensory systems. Such integrated systems can extract useful information from large volumes of raw data, facilitating real-time detection and processing of environmental information. This review explores recent advances in memristor-based artificial sensory systems. The authors begin with the requirements of artificial sensory elements and then present an in-depth review of such elements demonstrated by memristive devices. Finally, the major challenges and opportunities in the development of memristor-based artificial sensory systems are discussed.
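The leaky integrate-and-fire behavior that the "artificial neurons" above emulate can be illustrated with a minimal simulation (a sketch for orientation only; the time constant, threshold, and drive values are illustrative and not taken from the review):

```python
import numpy as np

def lif_encode(stimulus, tau=20.0, v_th=1.0, v_reset=0.0, dt=1.0):
    """Encode an analog input trace into spike times with a leaky
    integrate-and-fire unit: dv/dt = (-v + input) / tau, fire at v_th."""
    v = 0.0
    spike_times = []
    for step, i_in in enumerate(stimulus):
        v += dt * (-v + i_in) / tau      # leaky integration toward the input
        if v >= v_th:                    # threshold crossing: emit spike, reset
            spike_times.append(step * dt)
            v = v_reset
    return spike_times

# a stronger stimulus is encoded as a higher spike rate
weak = lif_encode(np.full(1000, 1.5))
strong = lif_encode(np.full(1000, 3.0))
```

The encoding is rate-like for constant inputs: sub-threshold drive produces no spikes at all, which is exactly the filtering role the review assigns to artificial receptors and neurons.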
Affiliation(s)
- Ju Young Kwon
- Electronic Materials Research Center, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- Ji Eun Kim
- Electronic Materials Research Center, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- Department of Materials Science and Engineering, Korea University, Seoul, Republic of Korea
- Jong Sung Kim
- Electronic Materials Research Center, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- Department of Materials Science and Engineering, Korea University, Seoul, Republic of Korea
- Suk Yeop Chun
- Electronic Materials Research Center, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- KU-KIST Graduate School of Converging Science and Technology, Korea University, Seoul, Republic of Korea
- Keunho Soh
- Electronic Materials Research Center, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
- Department of Materials Science and Engineering, Korea University, Seoul, Republic of Korea
- Jung Ho Yoon
- Electronic Materials Research Center, Korea Institute of Science and Technology (KIST), Seoul, Republic of Korea
2
Moro F, Hardy E, Fain B, Dalgaty T, Clémençon P, De Prà A, Esmanhotto E, Castellani N, Blard F, Gardien F, Mesquida T, Rummens F, Esseni D, Casas J, Indiveri G, Payvand M, Vianello E. Neuromorphic object localization using resistive memories and ultrasonic transducers. Nat Commun 2022; 13:3506. [PMID: 35717413] [PMCID: PMC9206646] [DOI: 10.1038/s41467-022-31157-y]
Abstract
Real-world sensory-processing applications require compact, low-latency, and low-power computing systems. Enabled by their in-memory event-driven computing abilities, hybrid memristive-Complementary Metal-Oxide Semiconductor neuromorphic architectures provide an ideal hardware substrate for such tasks. To demonstrate the full potential of such systems, we propose and experimentally demonstrate an end-to-end sensory processing solution for a real-world object localization application. Drawing inspiration from the barn owl's neuroanatomy, we developed a bio-inspired, event-driven object localization system that couples state-of-the-art piezoelectric micromachined ultrasound transducer sensors to a neuromorphic resistive memories-based computational map. We present measurement results from the fabricated system comprising resistive memories-based coincidence detectors, delay line circuits, and a full-custom ultrasound sensor. We use these experimental results to calibrate our system-level simulations. These simulations are then used to estimate the angular resolution and energy efficiency of the object localization model. The results reveal the potential of our approach: the system is orders of magnitude more energy efficient than a microcontroller performing the same task.
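The barn-owl-style scheme the abstract describes, delay lines feeding a map of coincidence detectors each tuned to one interaural time difference, can be sketched in software. The sensor spacing, time step, and spike statistics below are assumptions for illustration, not values from the paper:

```python
import numpy as np

SOUND_SPEED = 343.0    # m/s
SENSOR_GAP = 0.2       # m, assumed spacing between the two transducers
DT = 1e-6              # s, assumed spike-train time step

def localize(left, right):
    """Jeffress-style map: slide one spike train past the other (delay lines)
    and let the lag with the most coincidences give the time difference."""
    max_lag = int(SENSOR_GAP / SOUND_SPEED / DT)
    best_lag, best_count = 0, -1
    for lag in range(-max_lag, max_lag + 1):
        # each 'coincidence detector' counts spikes aligned at its preferred delay
        count = int(np.sum(left & np.roll(right, lag)))
        if count > best_count:
            best_lag, best_count = lag, count
    itd = best_lag * DT
    # convert the winning time difference into a source angle
    return float(np.degrees(np.arcsin(np.clip(itd * SOUND_SPEED / SENSOR_GAP, -1.0, 1.0))))

rng = np.random.default_rng(0)
left = (rng.random(4000) < 0.05).astype(int)
right = np.roll(left, 5)   # the right sensor hears the same echo 5 steps later
```

In the paper this winner-take-all over delays is done by the RRAM coincidence detectors in analog hardware; the software loop above only mirrors the computational structure.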
Affiliation(s)
- Filippo Moro
- CEA, LETI, Université Grenoble Alpes, 38054, Grenoble, France
- Emmanuel Hardy
- CEA, LETI, Université Grenoble Alpes, 38054, Grenoble, France
- Bruno Fain
- CEA, LETI, Université Grenoble Alpes, 38054, Grenoble, France
- Thomas Dalgaty
- CEA, LETI, Université Grenoble Alpes, 38054, Grenoble, France
- CEA, LIST, Université Grenoble Alpes, 38054, Grenoble, France
- Paul Clémençon
- CEA, LETI, Université Grenoble Alpes, 38054, Grenoble, France
- Insect Biology Research Institute, Université de Tours, 37020, Tours, France
- Alessio De Prà
- CEA, LETI, Université Grenoble Alpes, 38054, Grenoble, France
- DPIA, Università degli Studi di Udine, 33100, Udine, Italy
- François Blard
- CEA, LETI, Université Grenoble Alpes, 38054, Grenoble, France
- Thomas Mesquida
- CEA, LIST, Université Grenoble Alpes, 38054, Grenoble, France
- David Esseni
- DPIA, Università degli Studi di Udine, 33100, Udine, Italy
- Jérôme Casas
- Insect Biology Research Institute, Université de Tours, 37020, Tours, France
- Giacomo Indiveri
- Institute for Neuroinformatics, University of Zürich and ETH Zürich, 8057, Zürich, Switzerland
- Melika Payvand
- Institute for Neuroinformatics, University of Zürich and ETH Zürich, 8057, Zürich, Switzerland
- Elisa Vianello
- CEA, LETI, Université Grenoble Alpes, 38054, Grenoble, France
3
Kreutzer E, Senn W, Petrovici MA. Natural-gradient learning for spiking neurons. eLife 2022; 11:e66526. [PMID: 35467527] [PMCID: PMC9038192] [DOI: 10.7554/eLife.66526]
Abstract
In many normative theories of synaptic plasticity, weight updates implicitly depend on the chosen parametrization of the weights. This problem relates, for example, to neuronal morphology: synapses which are functionally equivalent in terms of their impact on somatic firing can differ substantially in spine size due to their different positions along the dendritic tree. Classical theories based on Euclidean-gradient descent can easily lead to inconsistencies due to such parametrization dependence. These issues are resolved in the framework of Riemannian geometry, in which we propose that plasticity instead follows natural-gradient descent. Under this hypothesis, we derive a synaptic learning rule for spiking neurons that couples functional efficiency with the explanation of several well-documented biological phenomena such as dendritic democracy, multiplicative scaling, and heterosynaptic plasticity. We therefore suggest that in its search for functional synaptic plasticity, evolution might have come up with its own version of natural-gradient descent.
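The parametrization independence at the heart of natural-gradient learning can be checked numerically on the simplest possible model, a single Bernoulli unit. This is an illustrative sketch, not the paper's spiking-neuron derivation: the Fisher-preconditioned update moves the probability by the same amount whether the parameter is the probability itself or its logit, while plain Euclidean steps do not.

```python
import math

def sigmoid(z):
    return 1.0 / (1.0 + math.exp(-z))

def dlogp(p, y):
    """d/dp log Bernoulli(y | p)."""
    return y / p - (1 - y) / (1 - p)

def natural_step_p(p, y, eta):
    # Fisher information in p is 1 / (p(1-p)), so F^-1 * grad = p(1-p) * grad
    return p + eta * p * (1 - p) * dlogp(p, y)

def natural_step_z(z, y, eta):
    # the same rule written in logit coordinates p = sigmoid(z)
    p = sigmoid(z)
    grad_z = dlogp(p, y) * p * (1 - p)    # chain rule
    fisher_z = p * (1 - p)                # F_z = F_p * (dp/dz)^2
    return z + eta * grad_z / fisher_z

p0, y, eta = 0.3, 1, 1e-3
z0 = math.log(p0 / (1 - p0))
p_after_p_step = natural_step_p(p0, y, eta)
p_after_z_step = sigmoid(natural_step_z(z0, y, eta))
# Euclidean gradient steps, for contrast, do depend on the parametrization
p_euclid_p = p0 + eta * dlogp(p0, y)
p_euclid_z = sigmoid(z0 + eta * dlogp(p0, y) * p0 * (1 - p0))
```

To first order in the learning rate the two natural-gradient updates agree, which is the invariance property the abstract invokes for functionally equivalent synapses of different spine sizes.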
Affiliation(s)
- Elena Kreutzer
- Department of Physiology, University of Bern, Bern, Switzerland
- Walter Senn
- Department of Physiology, University of Bern, Bern, Switzerland
- Mihai A Petrovici
- Department of Physiology, University of Bern, Bern, Switzerland
- Kirchhoff-Institute for Physics, Heidelberg University, Heidelberg, Germany
4
Human group coordination in a sensorimotor task with neuron-like decision-making. Sci Rep 2020; 10:8226. [PMID: 32427875] [PMCID: PMC7237467] [DOI: 10.1038/s41598-020-64091-4]
Abstract
The formation of cooperative groups of agents with limited information-processing capabilities to solve complex problems together is a fundamental building principle that cuts through multiple scales in biology, from groups of cells to groups of humans. Here, we study an experimental paradigm where a group of humans is joined together to solve a common sensorimotor task that cannot be achieved by a single agent but relies on the cooperation of the group. In particular, each human acts as a neuron-like binary decision-maker that determines in each moment of time whether to be active or not. Inspired by the population vector method for movement decoding, each neuron-like decision-maker is assigned a preferred movement direction that is unknown to the decision-maker. From the population vector reflecting the group activity, the movement of a cursor is determined, and the task for the group is to steer the cursor into a predefined target. As the preferred movement directions are unknown and players are not allowed to communicate, the group has to learn a control strategy on the fly from the shared visual feedback. Performance is analyzed by learning speed and accuracy, action synchronization, and group coherence. We study four different computational models of the observed behavior: a perceptron model, a reinforcement learning model, a Bayesian inference model, and a Thompson sampling model that efficiently approximates Bayes-optimal behavior. The Bayesian and especially the Thompson model predict the human group behavior better than the other models, suggesting that internal models are crucial for adaptive coordination. We discuss the benefits and limitations of our paradigm for a better understanding of distributed information processing.
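The population-vector readout described above is easy to sketch: each binary "neuron" contributes its hidden preferred direction whenever it is active, and the cursor follows the normalized sum. The player count and random directions below are illustrative, not the study's configuration:

```python
import numpy as np

rng = np.random.default_rng(0)
N_PLAYERS = 20
# each player is covertly assigned a preferred movement direction (unit vector)
angles = rng.uniform(0.0, 2.0 * np.pi, N_PLAYERS)
preferred = np.stack([np.cos(angles), np.sin(angles)], axis=1)

def cursor_step(active):
    """Population vector: the cursor moves along the normalized sum of the
    preferred directions of all currently active players."""
    v = preferred[active].sum(axis=0)
    norm = np.linalg.norm(v)
    return v / norm if norm > 0 else v

# if only players with rightward-pointing (positive-x) preferences activate,
# the cursor moves to the right
step = cursor_step(preferred[:, 0] > 0.0)
```

The group's learning problem is precisely that each player must discover, from cursor feedback alone, which `preferred` row is theirs.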
5
Abstract
Many aspects of the brain's design can be understood as the result of evolutionary drive toward metabolic efficiency. In addition to the energetic costs of neural computation and transmission, experimental evidence indicates that synaptic plasticity is metabolically demanding as well. As synaptic plasticity is crucial for learning, we examine how these metabolic costs enter into learning. We find that when synaptic plasticity rules are naively implemented, training neural networks requires extremely large amounts of energy when storing many patterns. We propose that this is avoided by precisely balancing labile forms of synaptic plasticity with more stable forms. This algorithm, termed synaptic caching, boosts energy efficiency many-fold and can be used with any plasticity rule, including back-propagation. Our results yield a novel interpretation of the multiple forms of neural synaptic plasticity observed experimentally, including synaptic tagging and capture phenomena. Furthermore, our results are relevant for energy-efficient neuromorphic designs.

The brain expends a lot of energy. While the organ accounts for only about 2% of a person's body weight, it is responsible for about 20% of our energy use at rest. Neurons use some of this energy to communicate with each other and to process information, but much of the energy is likely used to support learning. A study in fruit flies showed that insects that learned to associate two stimuli and then had their food supply cut off died 20% earlier than untrained flies. This is thought to be because learning used up the insects' energy reserves. If learning a single association requires so much energy, how does the brain manage to store vast amounts of data? Li and van Rossum offer an explanation based on a computer model of neural networks. The advantage of using such a model is that it is possible to control and measure conditions more precisely than in the living brain. Analysing the model confirmed that learning many new associations requires large amounts of energy. This is particularly true if the memories must be stored with a high degree of accuracy, and if the neural network contains many stored memories already. The reason that learning consumes so much energy is that forming long-term memories requires neurons to produce new proteins. Using the computer model, Li and van Rossum show that neural networks can overcome this limitation by storing memories initially in a transient form that does not require protein synthesis. Doing so reduces energy requirements by as much as 10-fold. Studies in living brains have shown that transient memories of this type do in fact exist. The current results hence offer a hypothesis as to how the brain can learn in an energy-efficient way. Energy consumption is thought to have placed constraints on brain evolution, and it is also often a bottleneck in computers. By revealing how the brain encodes memories energy efficiently, the current findings could thus also inspire new engineering solutions.
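The synaptic-caching idea, cheap transient changes with expensive consolidation only when warranted, fits in a few lines. The cost constants and threshold here are made-up illustrative numbers, not the authors' fitted values:

```python
def train_cached(deltas, threshold=1.0, c_transient=0.01, c_stable=1.0):
    """Accumulate updates in a cheap labile cache; pay the expensive
    consolidation cost only when the cached change crosses a threshold."""
    w, cache, energy = 0.0, 0.0, 0.0
    for d in deltas:
        cache += d
        energy += c_transient * abs(d)       # transient plasticity: low cost
        if abs(cache) >= threshold:          # consolidate into stable storage
            w += cache
            energy += c_stable * abs(cache)
            cache = 0.0
    return w + cache, energy

def train_naive(deltas, c_stable=1.0):
    """Consolidate every single update immediately."""
    w, energy = 0.0, 0.0
    for d in deltas:
        w += d
        energy += c_stable * abs(d)
    return w, energy

updates = [0.1, -0.05, 0.2, 0.15, 0.3, 0.4, -0.1, 0.2]
w_cached, e_cached = train_cached(updates)
w_naive, e_naive = train_naive(updates)
```

Both schemes end at the same weight, but caching pays the stable-storage cost once for the net change instead of once per update, and opposite-signed updates cancel inside the cache for free.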
Affiliation(s)
- Ho Ling Li
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
- Mark CW van Rossum
- School of Psychology, University of Nottingham, Nottingham, United Kingdom
- School of Mathematical Sciences, University of Nottingham, Nottingham, United Kingdom
6
Beyond STDP: towards diverse and functionally relevant plasticity rules. Curr Opin Neurobiol 2018; 54:12-19. [PMID: 30056261] [DOI: 10.1016/j.conb.2018.06.011]
Abstract
Synaptic plasticity, induced by the close temporal association of two neural signals, supports associative forms of learning. However, the millisecond timescales for association often do not match the much longer delays of the behaviorally relevant signals that supervise learning. In particular, information about the behavioral outcome of neural activity can be delayed, leading to a problem of temporal credit assignment. Recent studies suggest that synaptic plasticity can have temporal rules that not only accommodate the delays relevant to the circuit but are also precisely tuned to the behavior the circuit supports. These discoveries highlight the diversity of plasticity rules, whose temporal requirements may depend on circuit delays and the contingencies of behavior.
7
Gilra A, Gerstner W. Predicting non-linear dynamics by stable local learning in a recurrent spiking neural network. eLife 2017; 6:e28295. [PMID: 29173280] [PMCID: PMC5730383] [DOI: 10.7554/eLife.28295]
Abstract
The brain needs to predict how the body reacts to motor commands, but how a network of spiking neurons can learn non-linear body dynamics using local, online and stable learning rules is unclear. Here, we present a supervised learning scheme for the feedforward and recurrent connections in a network of heterogeneous spiking neurons. The error in the output is fed back through fixed random connections with a negative gain, causing the network to follow the desired dynamics. The rule for Feedback-based Online Local Learning Of Weights (FOLLOW) is local in the sense that weight changes depend on the presynaptic activity and the error signal projected onto the postsynaptic neuron. We provide examples of learning linear, non-linear and chaotic dynamics, as well as the dynamics of a two-link arm. Under reasonable approximations, we show, using the Lyapunov method, that FOLLOW learning is uniformly stable, with the error going to zero asymptotically.
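The local character of the FOLLOW update, presynaptic activity times the error projected onto the postsynaptic neuron, can be sketched for a single linear readout. This strips away the spiking network and feedback dynamics of the paper, keeping only the weight-update structure as a rough analogy; all sizes and rates are illustrative:

```python
import numpy as np

def follow_like_training(n_pre=50, steps=5000, eta=1e-3, seed=0):
    """Learn a target readout with a local rule: each weight changes in
    proportion to (presynaptic activity) x (output error fed back to the unit)."""
    rng = np.random.default_rng(seed)
    w_target = rng.normal(0.0, 1.0, n_pre)    # readout to be mimicked
    w = np.zeros(n_pre)
    for _ in range(steps):
        r = rng.normal(0.0, 1.0, n_pre)       # presynaptic activity this step
        err = w_target @ r - w @ r            # error, fed back with negative gain
        w += eta * err * r                    # local update: pre activity x error
    return np.linalg.norm(w - w_target), np.linalg.norm(w_target)
```

In the full model the error feedback also clamps the network onto the desired trajectory while it learns; here it only drives the weight change, but the update itself has the same pre-times-projected-error form.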
Affiliation(s)
- Aditya Gilra
- Brain-Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- School of Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Wulfram Gerstner
- Brain-Mind Institute, School of Life Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- School of Computer and Communication Sciences, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
8
Albers C, Westkott M, Pawelzik K. Learning of Precise Spike Times with Homeostatic Membrane Potential Dependent Synaptic Plasticity. PLoS One 2016; 11:e0148948. [PMID: 26900845] [PMCID: PMC4763343] [DOI: 10.1371/journal.pone.0148948]
Abstract
Precise spatio-temporal patterns of neuronal action potentials underlie, for example, sensory representations and the control of muscle activity. However, it is not known how the synaptic efficacies in the neuronal networks of the brain adapt such that they can reliably generate spikes at specific points in time. Existing activity-dependent plasticity rules like spike-timing-dependent plasticity are agnostic to the goal of learning spike times. On the other hand, existing formal and supervised learning algorithms perform a temporally precise comparison of projected activity with the target, but there is no known biologically plausible implementation of this comparison. Here, we propose a simple and local unsupervised synaptic plasticity mechanism that is derived from the requirement of a balanced membrane potential. Since the relevant signal for synaptic change is the postsynaptic voltage rather than spike times, we call the plasticity rule Membrane Potential Dependent Plasticity (MPDP). Combining our plasticity mechanism with spike after-hyperpolarization makes synaptic change sensitive to pre- and postsynaptic spike times, which can reproduce the Hebbian spike-timing-dependent plasticity found experimentally at inhibitory synapses. In addition, the sensitivity of MPDP to the time course of the voltage when generating a spike allows it to distinguish between weak (spurious) and strong (teacher) spikes, which therefore provides a neuronal basis for the comparison of actual and target activity. For spatio-temporal input spike patterns, our conceptually simple plasticity rule achieves a surprisingly high storage capacity for spike associations. The sensitivity of MPDP to the subthreshold membrane potential during training allows robust memory retrieval after learning, even in the presence of activity corrupted by noise. We propose that MPDP represents a biophysically plausible mechanism for learning temporal target activity patterns.
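The core of MPDP, synaptic change driven by the deviation of the postsynaptic potential from a balanced target rather than by spike times, can be sketched with a toy synapse. The target voltage, rates, and the instantaneous-voltage simplification are illustrative assumptions, not the paper's conductance-based model:

```python
def balance_voltage(v_target=0.5, pre_rate=1.0, lr=0.05, steps=200):
    """MPDP-style homeostasis: potentiate while the postsynaptic potential is
    below its balanced target, depress while it is above."""
    w = 0.0
    for _ in range(steps):
        v = w * pre_rate                     # toy instantaneous postsynaptic potential
        w += lr * pre_rate * (v_target - v)  # change opposes the voltage deviation
    return w * pre_rate                      # settled potential

v_final = balance_voltage()
```

The update has a stable fixed point where the membrane potential sits at its balanced value; the paper's spike-time sensitivity then emerges once after-hyperpolarization shapes the voltage trace around each spike.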
Affiliation(s)
- Christian Albers
- Institute for Theoretical Physics, University of Bremen, Bremen, Germany
- Maren Westkott
- Institute for Theoretical Physics, University of Bremen, Bremen, Germany
- Klaus Pawelzik
- Institute for Theoretical Physics, University of Bremen, Bremen, Germany
9
Bayati M, Valizadeh A, Abbassian A, Cheng S. Self-organization of synchronous activity propagation in neuronal networks driven by local excitation. Front Comput Neurosci 2015; 9:69. [PMID: 26089794] [PMCID: PMC4454885] [DOI: 10.3389/fncom.2015.00069]
Abstract
Many experimental and theoretical studies have suggested that the reliable propagation of synchronous neural activity is crucial for neural information processing. The propagation of synchronous firing activity in so-called synfire chains has been studied extensively in feed-forward networks of spiking neurons. However, it remains unclear how such neural activity could emerge in recurrent neuronal networks through synaptic plasticity. In this study, we investigate whether local excitation, i.e., neurons that fire at a higher frequency than the other, spontaneously active neurons in the network, can shape a network to allow for synchronous activity propagation. We use two-dimensional, locally connected and heterogeneous neuronal networks with spike-timing-dependent plasticity (STDP). We find that, in our model, local excitation drives profound network changes within seconds. In the emergent network, neural activity originates from the site of the local excitation and propagates synchronously through the network. The synchronous propagation persists even when the local excitation is removed, since it derives from the synaptic weight matrix; importantly, once this connectivity is established, it remains stable even in the presence of spontaneous activity. Our results suggest that synfire-chain-like activity can emerge in a relatively simple way in realistic neural networks by locally exciting the desired origin of the neuronal sequence.
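Pair-based STDP of the kind driving this self-organization weights each pre/post spike pairing by an asymmetric exponential window. The amplitudes and time constant below are generic textbook values, not this study's parameters:

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau=20.0):
    """Weight change for one spike pair, dt = t_post - t_pre (ms):
    pre-before-post (dt > 0) potentiates, post-before-pre depresses."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau)
    return -a_minus * math.exp(dt / tau)
```

Because locally excited neurons reliably fire just before their neighbors, this window systematically strengthens outward-pointing connections from the excitation site, which is the mechanism behind the feed-forward structure the abstract describes.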
Affiliation(s)
- Mehdi Bayati
- Mercator Research Group "Structure of Memory", Ruhr-Universität Bochum, Bochum, Germany
- Alireza Valizadeh
- Department of Physics, Institute for Advanced Studies in Basic Sciences, Zanjan, Iran
- School of Cognitive Sciences, Institute for Research in Fundamental Sciences, Tehran, Iran
- Sen Cheng
- Mercator Research Group "Structure of Memory", Ruhr-Universität Bochum, Bochum, Germany
- Department of Psychology, Ruhr-Universität Bochum, Bochum, Germany
10
Sountsov P, Miller P. Spiking neuron network Helmholtz machine. Front Comput Neurosci 2015; 9:46. [PMID: 25954191] [PMCID: PMC4405618] [DOI: 10.3389/fncom.2015.00046]
Abstract
An increasing amount of behavioral and neurophysiological data suggests that the brain performs optimal (or near-optimal) probabilistic inference and learning during perception and other tasks. Although many machine learning algorithms exist that perform inference and learning in an optimal way, a complete description of how any of these algorithms (or a novel algorithm) could be implemented in the brain is still lacking. Many proposed solutions address how neurons can perform optimal inference, but the question of how synaptic plasticity can implement optimal learning is rarely addressed. This paper aims to unify the two fields of probabilistic inference and synaptic plasticity by using a neuronal network of realistic model spiking neurons to implement a well-studied computational model called the Helmholtz machine. The Helmholtz machine is amenable to neural implementation because the algorithm it uses to learn its parameters, the wake-sleep algorithm, relies on a local delta learning rule. Our spiking-neuron network implements both the delta rule and a small example of a Helmholtz machine. This neuronal network can learn an internal model of continuous-valued training data without supervision. The network can also perform inference on the learned internal models. We show how various biophysical features of the neural implementation constrain the parameters of the wake-sleep algorithm, such as the durations of the wake and sleep phases of learning and the minimal sample duration. We examine the deviations from optimal performance and tie them to the properties of the synaptic plasticity rule.
Affiliation(s)
- Pavel Sountsov
- Neuroscience Graduate Program, Brandeis University, Waltham, MA, USA
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
- Paul Miller
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
- Department of Biology, Brandeis University, Waltham, MA, USA
11
Hussain S, Liu SC, Basu A. Hardware-amenable structural learning for spike-based pattern classification using a simple model of active dendrites. Neural Comput 2015; 27:845-97. [PMID: 25734494] [DOI: 10.1162/neco_a_00713]
Abstract
This letter presents a spike-based model that employs neurons with functionally distinct dendritic compartments for classifying high-dimensional binary patterns. The synaptic inputs arriving on each dendritic subunit are nonlinearly processed before being linearly integrated at the soma, giving the neuron the capacity to perform a large number of input-output mappings. The model uses sparse synaptic connectivity, where each synapse takes a binary value. The optimal connection pattern of a neuron is learned by using a simple hardware-friendly, margin-enhancing learning algorithm inspired by the mechanism of structural plasticity in biological neurons. The learning algorithm groups correlated synaptic inputs on the same dendritic branch. Since the learning results in modified connection patterns, it can be incorporated into current event-based neuromorphic systems with little overhead. This work also presents a branch-specific spike-based version of this structural plasticity rule. The proposed model is evaluated on benchmark binary classification problems, and its performance is compared against that achieved using support vector machine and extreme learning machine techniques. Our proposed method attains comparable performance while using 10% to 50% fewer computational resources than the other reported techniques.
Affiliation(s)
- Shaista Hussain
- School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798
12
Abstract
Recent modeling of spike-timing-dependent plasticity indicates that, besides pre- and postsynaptic firing times, plasticity involves a local dendritic potential as a third factor. We present a simple compartmental neuron model together with a non-Hebbian, biologically plausible learning rule for dendritic synapses in which plasticity is modulated by these three factors. In functional terms, the rule seeks to minimize discrepancies between somatic firings and a local dendritic potential. Such prediction errors can arise in our model from stochastic fluctuations as well as from synaptic input that directly targets the soma. Depending on the nature of this direct input, our plasticity rule subserves supervised or unsupervised learning. When a reward signal modulates the learning rate, reinforcement learning results. Hence a single plasticity rule supports diverse learning paradigms.
13
Yuan WJ, Zhou JF, Zhou C. Network evolution induced by asynchronous stimuli through spike-timing-dependent plasticity. PLoS One 2013; 8:e84644. [PMID: 24391971] [PMCID: PMC3877323] [DOI: 10.1371/journal.pone.0084644]
Abstract
In sensory neural systems, external asynchronous stimuli play an important role in perceptual learning, associative memory, and map development. However, how external asynchronous stimuli organize the structure and dynamics of neural networks is not well understood. Spike-timing-dependent plasticity (STDP) is a typical form of synaptic plasticity that has been found extensively in sensory systems and has received much theoretical attention. This synaptic plasticity is highly sensitive to correlations between pre- and postsynaptic firings. Thus, STDP is expected to play an important role in the response to external asynchronous stimuli, which can induce segregated pre- and postsynaptic firings. In this paper, we study the impact of external asynchronous stimuli on the organization of structure and dynamics of neural networks through STDP. We construct a two-dimensional spatial neural network model with local, sparse connectivity, and apply external currents alternately to different spatial layers; these alternating currents can be regarded as external asynchronous stimuli. Through extensive numerical simulations, we focus on the effects of stimulus number and inter-stimulus timing on the synaptic connection weights and on the propagation dynamics in the resulting network structure. Interestingly, the feedforward structure induced by stimulus-dependent asynchronous firings and its propagation dynamics both reflect the underlying properties of STDP. The results imply a possible important role of STDP in generating the feedforward structure and collective propagation activity required for experience-dependent map plasticity in developing in vivo sensory pathways and cortices. The relevance of the results to cue-triggered recall of learned temporal sequences, an important cognitive function, is briefly discussed. This finding also suggests a potential application: examining STDP by measuring neural population activity in a cultured neural network.
Affiliation(s)
- Wu-Jie Yuan
- College of Physics and Electronic Information, Huaibei Normal University, Huaibei, China
- Department of Physics, Centre for Nonlinear Studies and the Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Kowloon Tong, Hong Kong
- Jian-Fang Zhou
- College of Physics and Electronic Information, Huaibei Normal University, Huaibei, China
- Changsong Zhou
- Department of Physics, Centre for Nonlinear Studies and the Beijing-Hong Kong-Singapore Joint Centre for Nonlinear and Complex Systems (Hong Kong), Institute of Computational and Theoretical Studies, Hong Kong Baptist University, Kowloon Tong, Hong Kong
- Beijing Computational Science Research Center, Beijing, China
- Research Centre, HKBU Institute of Research and Continuing Education, Virtual University Park Building, South Area Hi-tech Industrial Park, Shenzhen, China
14
Fung PK, Haber AL, Robinson PA. Neural field theory of plasticity in the cerebral cortex. J Theor Biol 2012; 318:44-57. [PMID: 23036915] [DOI: 10.1016/j.jtbi.2012.09.030]
Abstract
A generalized timing-dependent plasticity rule is incorporated into a recent neural field theory to explore synaptic plasticity in the cerebral cortex, with both excitatory and inhibitory populations included. Analysis in the time and frequency domains reveals that cortical network behavior gives rise to a saddle-node bifurcation and resonant frequencies, including a gamma-band resonance. These system resonances constrain cortical synaptic dynamics and divide it into four classes, which depend on the type of synaptic plasticity window. Depending on the dynamical class, synaptic strengths can either have a stable fixed point, or can diverge in the absence of a separate saturation mechanism. Parameter exploration shows that time-asymmetric plasticity windows, which are signatures of spike-timing dependent plasticity, enable the richest variety of synaptic dynamics to occur. In particular, we predict a zone in parameter space which may allow brains to attain the marginal stability phenomena observed experimentally, although additional regulatory mechanisms may be required to maintain these parameters.
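A time-asymmetric plasticity window of the kind the abstract identifies as the STDP signature can be written as a pair of exponentials with unequal amplitudes and time constants (an illustrative form; the paper's generalized rule and its parameters may differ):

```python
import numpy as np

def plasticity_window(t, a_p=1.0, a_m=1.1, tau_p=15.0, tau_m=30.0):
    """Plasticity window H(t) with t = t_post - t_pre in ms.
    Time-asymmetry (a_p != a_m, tau_p != tau_m) is the STDP signature."""
    t = np.asarray(t, dtype=float)
    return np.where(t >= 0,
                    a_p * np.exp(-t / tau_p),   # causal side: potentiation
                    -a_m * np.exp(t / tau_m))   # acausal side: depression
```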
Affiliation(s)
- P K Fung
- School of Physics, The University of Sydney, NSW 2006, Australia.
15
Bayati M, Valizadeh A. Effect of synaptic plasticity on the structure and dynamics of disordered networks of coupled neurons. Phys Rev E Stat Nonlin Soft Matter Phys 2012; 86:011925. [PMID: 23005470 DOI: 10.1103/physreve.86.011925] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/14/2012] [Revised: 06/08/2012] [Indexed: 06/01/2023]
Abstract
In an all-to-all network of integrate-and-fire neurons with disorder in the neurons' intrinsic oscillation frequencies, we show that through spike-timing-dependent plasticity the synapses whose presynaptic neurons are high-frequency tend to be potentiated, while the links originating from low-frequency neurons are weakened. The emergent effective flow of directed connections makes the high-frequency neurons the more influential elements in the network and facilitates synchronization by decreasing the synaptic cost for the onset of synchronization.
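The direction of this effect can be illustrated with a toy nearest-neighbour STDP calculation on fixed spike trains (a hypothetical sketch; the paper simulates a full all-to-all integrate-and-fire network):

```python
import numpy as np

def nn_stdp_drive(pre, post, a_plus=0.01, a_minus=0.01, tau=20.0):
    """Net weight change from nearest-neighbour STDP pairing (times in ms).
    Each post spike pairs with the latest earlier pre spike (potentiation);
    each pre spike pairs with the latest earlier post spike (depression)."""
    dw = 0.0
    for t_post in post:
        earlier = [t for t in pre if t < t_post]
        if earlier:
            dw += a_plus * np.exp(-(t_post - max(earlier)) / tau)
    for t_pre in pre:
        earlier = [t for t in post if t < t_pre]
        if earlier:
            dw -= a_minus * np.exp(-(t_pre - max(earlier)) / tau)
    return dw
```

When the presynaptic neuron consistently leads (e.g. `pre = [0, 20, 40, 60]`, `post = [2, 22, 42, 62]`) the net drive is positive, and swapping the roles makes it negative, consistent with the potentiation of links whose presynaptic neuron leads in phase, as the faster oscillators in the network tend to do.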
Affiliation(s)
- M Bayati
- Institute for Advanced Studies in Basic Sciences, PO Box 45195-1159, Zanjan, Iran
16
Blättler F, Hahnloser RHR. An efficient coding hypothesis links sparsity and selectivity of neural responses. PLoS One 2011; 6:e25506. [PMID: 22022405 PMCID: PMC3192758 DOI: 10.1371/journal.pone.0025506] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/13/2011] [Accepted: 09/05/2011] [Indexed: 11/18/2022] Open
Abstract
To what extent are sensory responses in the brain compatible with first-order principles? The efficient coding hypothesis posits that neurons use as few spikes as possible to faithfully represent natural stimuli. However, many sparsely firing neurons in higher brain areas seem to violate this hypothesis in that they respond more to familiar stimuli than to nonfamiliar stimuli. We reconcile this discrepancy by showing that efficient sensory responses give rise to stimulus selectivity that depends on the stimulus-independent firing threshold and the balance between excitatory and inhibitory inputs. We construct a cost function that enforces minimal firing rates in model neurons by linearly punishing suprathreshold synaptic currents. By contrast, subthreshold currents are punished quadratically, which allows us to optimally reconstruct sensory inputs from elicited responses. We train synaptic currents on many renditions of a particular bird's own song (BOS) and few renditions of conspecific birds' songs (CONs). During training, model neurons develop a response selectivity with complex dependence on the firing threshold. At low thresholds, they fire densely and prefer CON and the reverse BOS (REV) over BOS. However, at high thresholds or when hyperpolarized, they fire sparsely and prefer BOS over REV and over CON. Based on this selectivity reversal, our model suggests that preference for a highly familiar stimulus corresponds to a high-threshold or strong-inhibition regime of an efficient coding strategy. Our findings apply to songbird mirror neurons, and in general, they suggest that the brain may be endowed with simple mechanisms to rapidly change the selectivity of neural responses to focus sensory processing on either familiar or nonfamiliar stimuli. In summary, we find support for the efficient coding hypothesis and provide new insights into the interplay between the sparsity and selectivity of neural responses.
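The cost function described above, linear above the firing threshold and quadratic below it, can be sketched as follows (an illustrative reading of the abstract; `theta` and the trade-off weight `lam` are placeholder parameters, not values from the paper):

```python
import numpy as np

def coding_cost(currents, theta=1.0, lam=0.5):
    """Cost on synaptic currents: quadratic below threshold theta
    (supports faithful reconstruction of the input) and linear above it
    (punishes suprathreshold, i.e. spike-driving, currents)."""
    c = np.asarray(currents, dtype=float)
    sub = np.minimum(c, theta)           # subthreshold part, capped at theta
    supra = np.maximum(c - theta, 0.0)   # amount above threshold
    return float(np.sum(sub ** 2) + lam * np.sum(supra))
```

The linear suprathreshold term charges spiking roughly in proportion to firing, which is what enforces minimal firing rates, while the quadratic subthreshold term keeps the non-spiking currents informative about the stimulus.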
Affiliation(s)
- Florian Blättler
- Institute of Neuroinformatics, University of Zurich/ETH Zurich, Zurich, Switzerland
17
Weisswange TH, Rothkopf CA, Rodemann T, Triesch J. Bayesian cue integration as a developmental outcome of reward mediated learning. PLoS One 2011; 6:e21575. [PMID: 21750717 PMCID: PMC3130032 DOI: 10.1371/journal.pone.0021575] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2011] [Accepted: 06/03/2011] [Indexed: 11/19/2022] Open
Abstract
Average human behavior in cue combination tasks is well predicted by Bayesian inference models. As this capability is acquired over developmental timescales, the question arises of how it is learned. Here we investigated whether reward-dependent learning, which is well established at the computational, behavioral, and neuronal levels, could contribute to this development. It is shown that a model-free reinforcement learning algorithm can indeed learn to do cue integration, i.e., weight uncertain cues according to their respective reliabilities, and even do so if the reliabilities are changing. We also consider the case of causal inference, where multimodal signals can originate from one or multiple separate objects and should not always be integrated. In this case, the learner is shown to develop a behavior that is closest to Bayesian model averaging. We conclude that reward-mediated learning could be a driving force for the development of cue integration and causal inference.
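For two independent Gaussian cues, the Bayesian optimum the learner approaches is inverse-variance (reliability) weighting. This is a standard result, sketched here for reference rather than taken from the paper's model:

```python
def integrate_cues(mu1, var1, mu2, var2):
    """Combine two independent Gaussian cue estimates by inverse-variance
    weighting: the more reliable (lower-variance) cue gets the larger weight."""
    w1 = (1.0 / var1) / (1.0 / var1 + 1.0 / var2)
    mu = w1 * mu1 + (1.0 - w1) * mu2
    var = 1.0 / (1.0 / var1 + 1.0 / var2)  # combined estimate is more reliable
    return mu, var
```

Equally reliable cues at 0 and 10 combine to an estimate of 5 with half the single-cue variance; if the second cue is four times noisier, the combined estimate shifts toward the first (to 2.0 in that example), which is the reliability re-weighting the reinforcement learner has to discover.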