1. Wang X, Jin Y, Du W, Wang J. Evolving Dual-Threshold Bienenstock-Cooper-Munro Learning Rules in Echo State Networks. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:1572-1583. PMID: 35763483. DOI: 10.1109/tnnls.2022.3184004.
Abstract
In the existing Bienenstock-Cooper-Munro (BCM) learning rule, the strengthening and weakening of synaptic strength are determined by a sliding long-term potentiation (LTP) modification threshold and the afferent synaptic activities. However, synaptic long-term depression (LTD) then affects even weakly active synapses during the induction of synaptic plasticity, which may lead to information loss. Biological experiments have identified a second, LTD threshold that can induce potentiation, depression, or no change, even at activated synapses. In addition, existing BCM learning rules use a single fixed set of rule parameters, which is biologically implausible and too inflexible in practice to learn the structural information of input signals. In this article, an evolved dual-threshold BCM learning rule is proposed to regulate the internal reservoir connection weights of the echo state network (ESN); by introducing a different optimal LTD threshold for each postsynaptic neuron, it helps to alleviate information loss and enhance learning performance. Our experimental results show that the evolved dual-threshold BCM learning rule leads to synergistic learning among different plasticity rules and effectively improves the learning performance of an ESN compared with existing neural plasticity learning rules and several state-of-the-art ESN variants on three widely used benchmark tasks and the prediction of an esterification process.
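To make the mechanism concrete, below is a minimal NumPy sketch of a dual-threshold BCM update applied to an ESN reservoir. It assumes the standard rate-based BCM form with a sliding LTP threshold and simply gates depression below a per-neuron LTD threshold; the network size, learning rate, and threshold values are illustrative, not the paper's evolved settings.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 100                                   # reservoir size (illustrative)
W = rng.normal(0.0, 0.1, (n, n))          # internal reservoir weights
x = rng.uniform(0.0, 1.0, n)              # presynaptic rates at time t
y = np.tanh(W @ x)                        # postsynaptic activations

eta = 1e-3                                # learning rate (illustrative)
theta_ltp = np.mean(y ** 2)               # classic sliding BCM (LTP) threshold
theta_ltd = np.full(n, 0.05)              # per-neuron LTD thresholds; the paper
                                          # evolves one per postsynaptic neuron

# Classic BCM: dw = eta * y * (y - theta_ltp) * x gives LTP for y > theta_ltp
# and LTD otherwise. The dual-threshold variant zeroes the update for weakly
# active neurons (y <= theta_ltd), protecting their synapses from depression.
phi = np.where(y > theta_ltd, y * (y - theta_ltp), 0.0)
W += eta * np.outer(phi, x)
```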
2. Wang X, Jin Y, Hao K. Computational Modeling of Structural Synaptic Plasticity in Echo State Networks. IEEE Transactions on Cybernetics 2022; 52:11254-11266. PMID: 33760748. DOI: 10.1109/tcyb.2021.3060466.
Abstract
Most existing studies on the computational modeling of neural plasticity have focused on synaptic plasticity. However, regulating the internal weights of a reservoir through synaptic plasticity alone often results in unstable learning dynamics. In this article, a structural synaptic plasticity learning rule is proposed that trains the weights and adds or removes neurons within the reservoir; it is shown both to alleviate the instability of synaptic plasticity and to increase the memory capacity of the network. Our experimental results also reveal that a few stronger connections may persist for a longer period in a constantly changing network structure and are relatively resistant to decay or disruption during learning. These results are consistent with evidence observed in biological systems. Finally, we show that an echo state network (ESN) using the proposed structural plasticity rule outperforms an ESN using synaptic plasticity and three state-of-the-art ESNs on four benchmark tasks.
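As a rough illustration of what growing and pruning a reservoir can look like (a generic sketch, not the paper's actual rule; the threshold and connection densities are invented for the example):

```python
import numpy as np

rng = np.random.default_rng(1)
W = rng.normal(0.0, 0.1, (50, 50))        # reservoir weights

# Prune: delete connections whose magnitude has decayed below a threshold.
prune_thresh = 0.02                        # illustrative value
W[np.abs(W) < prune_thresh] = 0.0

# Grow: append one neuron with sparse random incoming/outgoing connections.
n = W.shape[0]
w_in = np.where(rng.random(n) < 0.1, rng.normal(0.0, 0.1, n), 0.0)
w_out = np.where(rng.random(n) < 0.1, rng.normal(0.0, 0.1, n), 0.0)
W = np.block([[W, w_in[:, None]],
              [w_out[None, :], np.zeros((1, 1))]])
```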
3. Feketa P, Meurer T, Kohlstedt H. Structural plasticity driven by task performance leads to criticality signatures in neuromorphic oscillator networks. Sci Rep 2022; 12:15321. PMID: 36096910. PMCID: PMC9468161. DOI: 10.1038/s41598-022-19386-z.
Abstract
Oscillator networks are rapidly becoming one of the most promising vehicles for energy-efficient computing due to their intrinsic parallelism of execution. The criticality property of oscillator-based networks is regarded as essential for performing complex tasks. Numerous bio-inspired synaptic and structural plasticity mechanisms are available, especially for spiking neural networks, that can drive a network towards criticality. However, there is no solid connection between these self-adaptation mechanisms and task performance, and it is not clear how and why particular self-adaptation mechanisms contribute to the solution of a task, even though their relation to criticality is understood. Here we propose an evolutionary approach to structural plasticity that relies solely on task performance and contains no task-independent adaptation mechanisms, which usually are what drive a network towards criticality. As the driver for structural plasticity, we use a direct binary search guided by the performance of a classification task, which can be interpreted as an interaction of the network with its environment. Remarkably, this interaction brings the network to criticality, even though criticality was not among the objectives of the employed structural plasticity mechanism. This observation confirms a duality between criticality and task performance, and legitimizes internal activity-dependent plasticity mechanisms, from the viewpoint of evolution, as mechanisms contributing to task performance via this dual route. Finally, we analyze the trained network against task-independent information-theoretic measures and identify the entropy of the interconnection graph as an essential ingredient for both classification performance and network criticality.
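The paper's search procedure is not reproduced here, but a direct binary search over the wiring, guided purely by task performance, could look roughly like the following sketch (the `task_accuracy` evaluator is a placeholder standing in for a full oscillator-network simulation; the network size and density are illustrative):

```python
import numpy as np

rng = np.random.default_rng(2)

def task_accuracy(adjacency):
    """Placeholder for evaluating the oscillator network on the
    classification task; stands in for the paper's simulator."""
    return rng.random()   # replace with an actual network evaluation

n = 16
A = (rng.random((n, n)) < 0.2).astype(int)   # initial random wiring
best = task_accuracy(A)

# Direct binary search: flip one candidate edge at a time and keep the
# flip only if task performance does not degrade.
for i in range(n):
    for j in range(n):
        if i == j:
            continue
        A[i, j] ^= 1                 # toggle the edge
        score = task_accuracy(A)
        if score >= best:
            best = score             # keep the flip
        else:
            A[i, j] ^= 1             # revert
```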
Affiliation(s)
- Petro Feketa: Chair of Automation and Control, Kiel University, Kaiserstraße 2, 24143 Kiel, Germany; Kiel Nano, Surface and Interface Science KiNSIS, Kiel University, Christian-Albrechts-Platz 4, 24118 Kiel, Germany
- Thomas Meurer: Chair of Automation and Control, Kiel University, Kaiserstraße 2, 24143 Kiel, Germany; Kiel Nano, Surface and Interface Science KiNSIS, Kiel University, Christian-Albrechts-Platz 4, 24118 Kiel, Germany
- Hermann Kohlstedt: Chair of Nanoelectronics, Kiel University, Kaiserstraße 2, 24143 Kiel, Germany; Kiel Nano, Surface and Interface Science KiNSIS, Kiel University, Christian-Albrechts-Platz 4, 24118 Kiel, Germany
4. Wang X, Jin Y, Hao K. Synergies between synaptic and intrinsic plasticity in echo state networks. Neurocomputing 2021. DOI: 10.1016/j.neucom.2020.12.007.
5.
6. Florescu D, Coca D. Learning with Precise Spike Times: A New Decoding Algorithm for Liquid State Machines. Neural Comput 2019; 31:1825-1852. PMID: 31335291. DOI: 10.1162/neco_a_01218.
Abstract
There is extensive evidence that biological neural networks encode information in the precise timing of the spikes generated and transmitted by neurons, which offers several advantages over rate-based codes. Here we adopt a vector-space formulation of spike train sequences and introduce a new liquid state machine (LSM) network architecture and a new forward orthogonal regression algorithm to learn an input-output signal mapping or to decode brain activity. The proposed algorithm uses precise spike timing to select the presynaptic neurons relevant to each learning task. We show that using precise spike timing to train the LSM and to select the readout's presynaptic neurons leads to a significant increase in performance on binary classification tasks, in decoding neural activity from multielectrode array recordings, and in a speech recognition task, compared with the standard architecture and training methods.
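Forward orthogonal regression is, at its core, a greedy feature-selection scheme; a generic version (not the paper's spike-timing-specific algorithm, and with invented data) might be sketched as follows:

```python
import numpy as np

def forward_select(Phi, y, k):
    """Greedy forward orthogonal regression: pick k columns of Phi (one
    per candidate presynaptic neuron) that best explain the target y."""
    selected, residual = [], y.astype(float).copy()
    Q = np.zeros((Phi.shape[0], 0))
    for _ in range(k):
        # Orthogonalize candidates against the already-selected directions.
        P = Phi - Q @ (Q.T @ Phi)
        norms = np.linalg.norm(P, axis=0) + 1e-12
        scores = np.abs(P.T @ residual) / norms
        scores[selected] = -np.inf           # don't pick a column twice
        j = int(np.argmax(scores))
        selected.append(j)
        q = P[:, j] / norms[j]
        Q = np.hstack([Q, q[:, None]])
        residual -= q * (q @ residual)
    return selected

rng = np.random.default_rng(5)
Phi = rng.normal(size=(200, 40))   # features from 40 candidate neurons
y = Phi[:, [3, 17]] @ np.array([1.0, -0.5]) + 0.01 * rng.normal(size=200)
print(forward_select(Phi, y, k=2))   # expected to recover columns 3 and 17
```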
Affiliation(s)
- Dorian Florescu: Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield S1 3JD, UK
- Daniel Coca: Department of Automatic Control and Systems Engineering, University of Sheffield, Sheffield S1 3JD, UK
7. Srinivasan G, Panda P, Roy K. SpiLinC: Spiking Liquid-Ensemble Computing for Unsupervised Speech and Image Recognition. Front Neurosci 2018; 12:524. PMID: 30190670. PMCID: PMC6116788. DOI: 10.3389/fnins.2018.00524.
Abstract
In this work, we propose a Spiking Neural Network (SNN) consisting of input neurons sparsely connected by plastic synapses to a randomly interlinked liquid, referred to as Liquid-SNN, for unsupervised speech and image recognition. We adapt the strength of the synapses interconnecting the input and liquid using Spike Timing Dependent Plasticity (STDP), which enables the neurons to self-learn a general representation of unique classes of input patterns. The presented unsupervised learning methodology makes it possible to infer the class of a test input directly from the liquid's neuronal spiking activity. This is in contrast to standard Liquid State Machines (LSMs), which have fixed synaptic connections between the input and liquid, followed by a readout layer (trained in a supervised manner) that extracts the liquid states and infers the class of the input patterns. Moreover, the utility of LSMs has primarily been demonstrated for speech recognition. We find that training such LSMs is challenging for complex pattern recognition tasks because of the information loss incurred by fixed input-to-liquid synaptic connections. We show that our Liquid-SNN is capable of efficiently recognizing both speech and image patterns by learning the rich temporal information contained in the respective input patterns. However, the need to enlarge the liquid to improve accuracy introduces scalability challenges and training inefficiencies. We therefore propose SpiLinC, which is composed of an ensemble of multiple liquids operating in parallel. We use a “divide and learn” strategy for SpiLinC, where each liquid is trained on a unique segment of the input patterns, causing its neurons to self-learn distinctive input features. SpiLinC effectively recognizes a test pattern by combining the spiking activity of the constituent liquids, each of which identifies characteristic input features. As a result, SpiLinC offers classification accuracy competitive with the Liquid-SNN, with added sparsity in synaptic connectivity and faster training convergence, both of which lead to improved energy efficiency in neuromorphic hardware implementations. We validate the efficacy of the proposed Liquid-SNN and SpiLinC on the entire digit subset of the TI46 speech corpus and on handwritten digits from the MNIST dataset.
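For reference, the pair-based exponential STDP rule commonly used in such work updates a synapse from the relative timing of pre- and postsynaptic spikes; a minimal sketch (the amplitudes and time constants are illustrative, and the paper's exact variant may differ):

```python
import numpy as np

A_plus, A_minus = 0.01, 0.012     # illustrative potentiation/depression amplitudes
tau_plus, tau_minus = 20.0, 20.0  # time constants in ms

def stdp_dw(t_pre, t_post):
    """Weight change for one pre/post spike pair."""
    dt = t_post - t_pre
    if dt >= 0:   # pre before post -> potentiation
        return A_plus * np.exp(-dt / tau_plus)
    else:         # post before pre -> depression
        return -A_minus * np.exp(dt / tau_minus)

w = 0.5
w += stdp_dw(t_pre=10.0, t_post=15.0)    # causal pair strengthens the synapse
w = float(np.clip(w, 0.0, 1.0))          # keep the weight bounded
```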
Affiliation(s)
- Kaushik Roy: Department of ECE, Purdue University, West Lafayette, IN, United States
8. Xue F, Li Q, Li X. The combination of circle topology and leaky integrator neurons remarkably improves the performance of echo state network on time series prediction. PLoS One 2017; 12:e0181816. PMID: 28759581. PMCID: PMC5536322. DOI: 10.1371/journal.pone.0181816.
Abstract
Recently, the echo state network (ESN) has attracted a great deal of attention due to its high accuracy and efficient learning performance. Compared with the traditional random structure and classical sigmoid units, a simple circle topology and leaky integrator neurons offer advantages for reservoir computing in ESNs. In this paper, we propose a new ESN model with both a circle reservoir structure and leaky integrator units. By comparing the prediction capability of four ESN models (the classical ESN, circle ESN, traditional leaky-integrator ESN, and circle leaky-integrator ESN) on the Mackey-Glass chaotic time series, we find that the circle leaky-integrator ESN performs significantly better than the other ESNs, reducing the predictive error by roughly two orders of magnitude. Moreover, this model has a stronger ability to approximate nonlinear dynamics and resist noise than the conventional ESN and ESNs with only a circle structure or only leaky integrator neurons. Our results show that the combination of circle topology and leaky integrator neurons can remarkably increase dynamical diversity while decreasing the correlation of reservoir states, both of which contribute to the significant improvement in the computational performance of the ESN on time series prediction.
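The two ingredients are easy to state precisely: a circle reservoir connects each neuron only to its ring neighbour, and a leaky-integrator unit blends its previous state into the update, x ← (1 − a)·x + a·tanh(Wx + W_in·u). A minimal sketch combining the two (the size, leak rate, and ring weight are illustrative, not the paper's tuned values):

```python
import numpy as np

rng = np.random.default_rng(3)

n, a, r = 200, 0.3, 0.9                  # reservoir size, leak rate, ring weight
W = r * np.roll(np.eye(n), 1, axis=1)    # circle topology: a single ring of weight r
W_in = rng.uniform(-0.1, 0.1, (n, 1))    # input weights

def step(x, u):
    """Leaky-integrator reservoir update:
    x <- (1 - a) * x + a * tanh(W x + W_in u)."""
    return (1 - a) * x + a * np.tanh(W @ x + W_in @ np.atleast_1d(u))

x = np.zeros(n)
for u in np.sin(np.linspace(0, 10, 500)):   # toy input sequence
    x = step(x, u)
```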
Affiliation(s)
- Fangzheng Xue: Key Laboratory of Dependable Service Computing in Cyber Physical Society of Ministry of Education, Chongqing University, Chongqing 400044, China; College of Automation, Chongqing University, Chongqing 400044, China
- Qian Li: Key Laboratory of Dependable Service Computing in Cyber Physical Society of Ministry of Education, Chongqing University, Chongqing 400044, China; College of Automation, Chongqing University, Chongqing 400044, China
- Xiumin Li: Key Laboratory of Dependable Service Computing in Cyber Physical Society of Ministry of Education, Chongqing University, Chongqing 400044, China; College of Automation, Chongqing University, Chongqing 400044, China
9. Han HG, Zhang S, Qiao JF. An adaptive growing and pruning algorithm for designing recurrent neural network. Neurocomputing 2017. DOI: 10.1016/j.neucom.2017.02.038.
10. Ghani A, See CH, Migdadi H, Asif R, Abd-Alhameed RAA, Noras JM. Reconfigurable neurons - making the most of configurable logic blocks (CLBs). 2015 Internet Technologies and Applications (ITA), 2015. DOI: 10.1109/itecha.2015.7317451.
11. Chrol-Cannon J, Jin Y. Learning structure of sensory inputs with synaptic plasticity leads to interference. Front Comput Neurosci 2015; 9:103. PMID: 26300769. PMCID: PMC4525052. DOI: 10.3389/fncom.2015.00103.
Abstract
Synaptic plasticity is often explored as a form of unsupervised adaptation in cortical microcircuits to learn the structure of complex sensory inputs and thereby improve performance of classification and prediction. The question of whether the specific structure of the input patterns is encoded in the structure of neural networks has been largely neglected. Existing studies that have analyzed input-specific structural adaptation have used simplified, synthetic inputs in contrast to complex and noisy patterns found in real-world sensory data. In this work, input-specific structural changes are analyzed for three empirically derived models of plasticity applied to three temporal sensory classification tasks that include complex, real-world visual and auditory data. Two forms of spike-timing dependent plasticity (STDP) and the Bienenstock-Cooper-Munro (BCM) plasticity rule are used to adapt the recurrent network structure during the training process before performance is tested on the pattern recognition tasks. It is shown that synaptic adaptation is highly sensitive to specific classes of input pattern. However, plasticity does not improve the performance on sensory pattern recognition tasks, partly due to synaptic interference between consecutively presented input samples. The changes in synaptic strength produced by one stimulus are reversed by the presentation of another, thus largely preventing input-specific synaptic changes from being retained in the structure of the network. To solve the problem of interference, we suggest that models of plasticity be extended to restrict neural activity and synaptic modification to a subset of the neural circuit, which is increasingly found to be the case in experimental neuroscience.
Affiliation(s)
- Joseph Chrol-Cannon: Department of Computer Science, Faculty of Engineering and Physical Sciences, University of Surrey, Guildford, UK
- Yaochu Jin: Department of Computer Science, Faculty of Engineering and Physical Sciences, University of Surrey, Guildford, UK
12. Schmidhuber J. Deep learning in neural networks: an overview. Neural Netw 2014; 61:85-117. PMID: 25462637. DOI: 10.1016/j.neunet.2014.09.003.
Abstract
In recent years, deep artificial neural networks (including recurrent ones) have won numerous contests in pattern recognition and machine learning. This historical survey compactly summarizes relevant work, much of it from the previous millennium. Shallow and Deep Learners are distinguished by the depth of their credit assignment paths, which are chains of possibly learnable, causal links between actions and effects. I review deep supervised learning (also recapitulating the history of backpropagation), unsupervised learning, reinforcement learning & evolutionary computation, and indirect search for short programs encoding deep and large networks.
Affiliation(s)
- Jürgen Schmidhuber: Swiss AI Lab IDSIA, Istituto Dalle Molle di Studi sull'Intelligenza Artificiale, University of Lugano & SUPSI, Galleria 2, 6928 Manno-Lugano, Switzerland
13. Chrol-Cannon J, Jin Y. On the correlation between reservoir metrics and performance for time series classification under the influence of synaptic plasticity. PLoS One 2014; 9:e101792. PMID: 25010415. PMCID: PMC4092026. DOI: 10.1371/journal.pone.0101792.
Abstract
Reservoir computing provides a simpler paradigm for training recurrent networks by initialising and adapting the recurrent connections separately from a supervised linear readout. This creates a problem, though. As the recurrent weights and topology are no longer adapted to the task, the burden falls on the reservoir designer to construct an effective network that happens to produce state vectors that can be mapped linearly onto the desired outputs. Guidance in forming a reservoir can come from established metrics that link a number of theoretical properties of the reservoir computing paradigm to quantitative measures for evaluating the effectiveness of a given design. We provide a comprehensive empirical study of four metrics: class separation, kernel quality, the Lyapunov exponent, and spectral radius. These metrics are each compared over a number of repeated runs, for different reservoir computing set-ups that include three types of network topology and three mechanisms of weight adaptation through synaptic plasticity. Each combination of these methods is tested on two time-series classification problems. We find that the two metrics that correlate most strongly with classification performance are the Lyapunov exponent and kernel quality. It is also evident from the comparisons that these two metrics measure a similar property of the reservoir dynamics. We also find that class separation and spectral radius are both less reliable and less effective in predicting performance.
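Two of the four metrics are straightforward to compute directly from the reservoir; the following sketch shows the spectral radius and a rank-based estimate of kernel quality (the reservoir construction, input streams, and rank tolerance are all illustrative):

```python
import numpy as np

rng = np.random.default_rng(4)
n = 100
W = rng.normal(0.0, 1.0, (n, n)) / np.sqrt(n)   # illustrative reservoir
w_in = rng.uniform(-0.1, 0.1, n)                # input weights

# Spectral radius: the largest absolute eigenvalue of the reservoir matrix.
rho = np.max(np.abs(np.linalg.eigvals(W)))

def run(u_seq):
    """Drive the reservoir with a scalar input stream; return the final state."""
    x = np.zeros(n)
    for u in u_seq:
        x = np.tanh(W @ x + w_in * u)
    return x

# Kernel quality: rank of the matrix of states produced by distinct input
# streams -- a higher rank indicates richer separation of inputs.
states = np.stack([run(rng.uniform(-1.0, 1.0, 50)) for _ in range(30)])
kernel_quality = np.linalg.matrix_rank(states, tol=1e-6)
print(f"spectral radius = {rho:.3f}, kernel quality = {kernel_quality}")
```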
Affiliation(s)
- Yaochu Jin: Department of Computing, University of Surrey, Guildford, United Kingdom
14. Chrol-Cannon J, Jin Y. Computational modeling of neural plasticity for self-organization of neural networks. Biosystems 2014; 125:43-54. PMID: 24769242. DOI: 10.1016/j.biosystems.2014.04.003.
Abstract
Self-organization in biological nervous systems during the lifetime is known to occur largely through a process of plasticity that depends on the spike-timing activity of connected neurons. In the field of computational neuroscience, much effort has been dedicated to building computational models of neural plasticity that replicate experimental data. Most recently, increasing attention has been paid to understanding the role of neural plasticity in functional and structural neural self-organization, as well as its influence on the learning performance of neural networks on machine learning tasks such as classification and regression. Although many ideas and hypotheses have been suggested, the relationship between the structure, dynamics, and learning performance of neural networks remains elusive. The purpose of this article is to review the most important computational models of neural plasticity and to discuss various ideas about neural plasticity's role. Finally, we suggest a few promising research directions, in particular those that combine findings in computational neuroscience and systems biology and their synergistic roles in understanding learning, memory, and cognition, thereby bridging the gap between computational neuroscience, systems biology, and computational intelligence.
Affiliation(s)
- Joseph Chrol-Cannon: Department of Computing, University of Surrey, Guildford GU2 7XH, United Kingdom
- Yaochu Jin: Department of Computing, University of Surrey, Guildford GU2 7XH, United Kingdom