1. Yang L, Wang Z, Wang G, Liang L, Liu M, Wang J. Brain-inspired modular echo state network for EEG-based emotion recognition. Front Neurosci 2024;18:1305284. PMID: 38495107; PMCID: PMC10940514; DOI: 10.3389/fnins.2024.1305284.
Abstract
Previous studies have successfully applied a lightweight recurrent neural network (RNN) called the Echo State Network (ESN) to EEG-based emotion recognition. These studies use intrinsic plasticity (IP) and synaptic plasticity (SP) to tune the hidden reservoir layer of the ESN, yet they require extra training procedures and are often computationally complex. Recent neuroscientific research reveals that the brain is modular, consisting of internally dense and externally sparse subnetworks. Furthermore, it has been shown that this modular topology facilitates information-processing efficiency in both biological and artificial neural networks (ANNs). Motivated by these findings, we propose the Modular Echo State Network (M-ESN), in which the hidden layer of the ESN is directly initialized with a more efficient modular structure. In this paper, we first describe our novel implementation method, which enables us to find the optimal number of modules and the optimal local and global connectivity. Then, the M-ESN is benchmarked on the DEAP dataset. Lastly, we explain why network modularity improves model performance. We demonstrate that modular organization leads to a more diverse distribution of node degrees, which increases network heterogeneity and subsequently improves classification accuracy. On the emotion arousal, valence, and stress/calm classification tasks, our M-ESN outperforms a regular ESN by 5.44, 5.90, and 5.42%, respectively, while the corresponding differences compared with adaptation-rule-tuned ESNs are 0.77, 5.49, and 0.95%. Notably, our results are obtained using an M-ESN with a much smaller reservoir size and a simpler training process.
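The block-structured initialization summarized in this abstract (dense within modules, sparse between them) can be sketched roughly as follows. This is an illustrative sketch, not the paper's implementation: the module count, connection densities, and spectral-radius target are placeholder values.

```python
import numpy as np

def modular_reservoir(n=100, n_modules=4, p_local=0.3, p_global=0.01,
                      spectral_radius=0.9, seed=0):
    """Build a block-structured reservoir weight matrix: dense within
    modules, sparse between them, rescaled to a target spectral radius."""
    rng = np.random.default_rng(seed)
    labels = np.repeat(np.arange(n_modules), n // n_modules)  # module id per neuron
    same = labels[:, None] == labels[None, :]                 # same-module mask
    p = np.where(same, p_local, p_global)                     # per-pair density
    mask = rng.random((n, n)) < p
    W = rng.uniform(-1.0, 1.0, (n, n)) * mask
    np.fill_diagonal(W, 0.0)                                  # no self-loops
    radius = np.abs(np.linalg.eigvals(W)).max()
    if radius > 0:
        W *= spectral_radius / radius                         # echo-state scaling
    return W, labels

W, labels = modular_reservoir()
```

Such a matrix can replace the usual uniformly sparse random reservoir of an ESN; sweeping `n_modules`, `p_local`, and `p_global` is one way to search for the optimal modularity the abstract refers to.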
Affiliation(s)
- Liuyi Yang, College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
- Zhaoze Wang, School of Engineering and Applied Science, University of Pennsylvania, Philadelphia, PA, United States
- Guoyu Wang, Department of Automation, Tiangong University, Tianjin, China
- Lixin Liang, College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
- Meng Liu, College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
- Junsong Wang, College of Big Data and Internet, Shenzhen Technology University, Shenzhen, China
2. Pan W, Zhao F, Zeng Y, Han B. Adaptive structure evolution and biologically plausible synaptic plasticity for recurrent spiking neural networks. Sci Rep 2023;13:16924. PMID: 37805632; PMCID: PMC10560283; DOI: 10.1038/s41598-023-43488-x.
Abstract
The architecture design and multi-scale learning principles of the human brain, which evolved over hundreds of millions of years, are crucial to realizing human-like intelligence. The spiking-neural-network-based Liquid State Machine (LSM) is a suitable architecture for studying brain-inspired intelligence because of its brain-inspired structure and its potential for integrating multiple biological principles. Existing research on LSMs focuses on particular perspectives, including high-dimensional encoding or optimization of the liquid layer, network architecture search, and application to hardware devices, but still draws little in-depth inspiration from the learning and structural evolution mechanisms of the brain. Considering these limitations, this paper presents a novel LSM learning model that integrates adaptive structural evolution and multi-scale biological learning rules. For structural evolution, an adaptively evolvable LSM model is developed to optimize the neural architecture design of the liquid layer with respect to its separation property. For brain-inspired learning of the LSM, we propose a dopamine-modulated Bienenstock-Cooper-Munro (DA-BCM) method that incorporates global long-term dopamine regulation and local trace-based BCM synaptic plasticity. Comparative experimental results on different decision-making tasks show that introducing structural evolution of the liquid layer, together with DA-BCM regulation of the liquid and readout layers, improves the decision-making ability of the LSM and allows it to adapt flexibly to rule reversal. This work explores how evolution can help to design more appropriate network architectures and how multi-scale neuroplasticity principles can be coordinated to enable the optimization and learning of LSMs for relatively complex decision-making tasks.
Affiliation(s)
- Wenxuan Pan, Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Feifei Zhao, Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Yi Zeng, Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China; School of Future Technology, University of Chinese Academy of Sciences, Beijing, China; Center for Excellence in Brain Science and Intelligence Technology, Chinese Academy of Sciences, Shanghai, China
- Bing Han, Brain-inspired Cognitive Intelligence Lab, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
3. Optimizing the Neural Structure and Hyperparameters of Liquid State Machines Based on Evolutionary Membrane Algorithm. Mathematics 2022. DOI: 10.3390/math10111844.
Abstract
As an important field of artificial intelligence, brain-like computing attempts to give machines a higher level of intelligence by studying and simulating the cognitive principles of the human brain. A spiking neural network (SNN) is one research direction in brain-like computing, characterized by greater biological plausibility and stronger computing power than traditional neural networks. A liquid state machine (LSM) is a neural computing model with a recurrent network structure based on an SNN. In this paper, a learning algorithm based on an evolutionary membrane algorithm is proposed to optimize the neural structure and hyperparameters of an LSM. First, the objects of the proposed algorithm are designed according to the neural structure and hyperparameters of the LSM. Second, the reaction rules of the proposed algorithm are employed to discover the best neural structure and hyperparameters of the LSM. Third, a membrane structure in which the skin membrane contains several elementary membranes is used to speed up the search of the proposed algorithm. In the simulation experiments, effectiveness is verified on the MNIST and KTH datasets. On MNIST, the best test results of the proposed algorithm with 500, 1000, and 2000 spiking neurons are 86.8%, 90.6%, and 90.8%, respectively. The best test results on KTH with 500, 1000, and 2000 spiking neurons are 82.9%, 85.3%, and 86.3%, respectively. The simulation results show that the proposed algorithm is more competitive than the other algorithms evaluated.
4. Iranmehr E, Shouraki SB, Faraji M. Developing a structural-based local learning rule for classification tasks using ionic liquid space-based reservoir. Neural Comput Appl 2022. DOI: 10.1007/s00521-022-07345-8.
5. Tian S, Qu L, Wang L, Hu K, Li N, Xu W. A neural architecture search based framework for liquid state machine design. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.02.076.
6. Weddell S, Ayyagari S, Jones RD. Reservoir computing approaches to microsleep detection. J Neural Eng 2020;18. PMID: 33205754; DOI: 10.1088/1741-2552/abcb7f.
Abstract
The detection of microsleeps in a wide range of professionals working in high-risk occupations is very important to workplace safety. A microsleep classifier is presented that employs a reservoir computing (RC) methodology. Specifically, echo state networks (ESNs) are used to improve on previous benchmark performances in microsleep detection. A clustered design using a novel ESN-based leaky integrator is presented. The effectiveness of this design lies in the simplicity of using a fine-grained architecture, containing up to 8 neurons per cluster, to capture individualized state dynamics and achieve optimal performance. This is the first study to implement and evaluate RC models for EEG-based microsleep detection. Microsleep state detection was achieved using a cascaded ESN classifier with leaky-integrator neurons employing 60 principal components from 544 power spectral features. This resulted in a leave-one-subject-out average detection performance of Φ = 0.51 ± 0.07 (mean ± SE), AUC-ROC = 0.88 ± 0.03, and AUC-PR = 0.44 ± 0.09. Although the performance of EEG-based microsleep detection systems is still considered modest, this refined method achieved a new benchmark in microsleep detection.
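For context, the leaky-integrator ESN neuron used in designs like the one above follows the standard update x' = (1 - a)x + a tanh(Wx + W_in u). A minimal sketch, where the reservoir size, leak rate, and input scaling are illustrative placeholders rather than the paper's configuration:

```python
import numpy as np

def esn_step(x, u, W, W_in, leak=0.3):
    """One leaky-integrator ESN update:
    x' = (1 - a) * x + a * tanh(W @ x + W_in @ u)."""
    return (1.0 - leak) * x + leak * np.tanh(W @ x + W_in @ u)

rng = np.random.default_rng(1)
n, d = 50, 3                                  # reservoir size, input dimension
W = rng.normal(0, 1, (n, n))
W *= 0.9 / np.abs(np.linalg.eigvals(W)).max() # scale to spectral radius 0.9
W_in = rng.uniform(-0.5, 0.5, (n, d))
x = np.zeros(n)
for t in range(100):                          # drive with random input
    x = esn_step(x, rng.normal(0, 1, d), W, W_in)
```

A readout (e.g. ridge regression from the collected states x to class labels) would then be trained on top; the leak rate `a` controls how slowly each neuron integrates, which matters for the second-scale dynamics of microsleep events.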
Affiliation(s)
- Steve Weddell, Electrical and Computer Engineering, University of Canterbury College of Engineering, Christchurch, New Zealand
- Sudhanshu Ayyagari, Electrical and Computer Engineering, University of Canterbury College of Engineering, Christchurch, New Zealand
- Richard D Jones, Neurotechnology Research Programme, New Zealand Brain Research Institute, 66 Stewart Street, Christchurch, New Zealand
7. Iranmehr E, Shouraki SB, Faraji MM, Bagheri N, Linares-Barranco B. Bio-Inspired Evolutionary Model of Spiking Neural Networks in Ionic Liquid Space. Front Neurosci 2019;13:1085. PMID: 31787863; PMCID: PMC6856051; DOI: 10.3389/fnins.2019.01085.
Abstract
One of the biggest struggles when working with artificial neural networks is coming up with models that closely match biological observations. Biological neural networks seem to be capable of creating and pruning dendritic spines, leading to synapses being changed, which results in higher learning capability. The latter forms the basis of the present study, in which a new ionic model for reservoir-like networks consisting of spiking neurons is introduced. The high plasticity of this model makes learning possible with fewer neurons. In order to study the effect of an applied stimulus in an ionic liquid space through time, a diffusion operator is used, which compensates for the separation between spatial and temporal coding in spiking neural networks and therefore makes the model suitable for spatiotemporal patterns. Inspired by partial structural changes in the human brain over the years, the proposed model evolves during the learning process. The effect of topological evolution on the proposed model's performance on several classification problems is studied in this paper. Several datasets have been used to evaluate the performance of the proposed model compared to the original LSM. Classification results, via separation and accuracy values, have shown that the proposed ionic liquid outperforms the original LSM.
Affiliation(s)
- Ensieh Iranmehr, Artificial Creatures Laboratory, Electrical Engineering Department, Sharif University of Technology, Tehran, Iran
- Saeed Bagheri Shouraki, Artificial Creatures Laboratory, Electrical Engineering Department, Sharif University of Technology, Tehran, Iran
- Mohammad Mahdi Faraji, Artificial Creatures Laboratory, Electrical Engineering Department, Sharif University of Technology, Tehran, Iran
- Nasim Bagheri, Artificial Creatures Laboratory, Electrical Engineering Department, Sharif University of Technology, Tehran, Iran
8. Taherkhani A, Belatreche A, Li Y, Cosma G, Maguire LP, McGinnity TM. A review of learning in biologically plausible spiking neural networks. Neural Netw 2019;122:253-272. PMID: 31726331; DOI: 10.1016/j.neunet.2019.09.036.
Abstract
Artificial neural networks have been used as powerful processing tools in various areas such as pattern recognition, control, robotics, and bioinformatics. Their wide applicability has encouraged researchers to improve artificial neural networks by investigating the biological brain. Neurological research has progressed significantly in recent years and continues to reveal new characteristics of biological neurons. New technologies can now capture temporal changes in the internal activity of the brain in more detail and help clarify the relationship between brain activity and the perception of a given stimulus. This new knowledge has led to a new type of artificial neural network, the Spiking Neural Network (SNN), which draws more faithfully on biological properties to provide higher processing abilities. A review of recent developments in the learning of spiking neurons is presented in this paper. First, the biological background of SNN learning algorithms is reviewed. The important elements of a learning algorithm, such as the neuron model, synaptic plasticity, information encoding, and SNN topologies, are then presented. Next, a critical review of the state-of-the-art learning algorithms for SNNs using single and multiple spikes is given. Additionally, deep spiking neural networks are reviewed, and challenges and opportunities in the SNN field are discussed.
Affiliation(s)
- Aboozar Taherkhani, School of Computer Science and Informatics, Faculty of Computing, Engineering and Media, De Montfort University, Leicester, UK
- Ammar Belatreche, Department of Computer and Information Sciences, Northumbria University, Newcastle upon Tyne, UK
- Yuhua Li, School of Computer Science and Informatics, Cardiff University, Cardiff, UK
- Georgina Cosma, Department of Computer Science, Loughborough University, Loughborough, UK
- Liam P Maguire, Intelligent Systems Research Centre, Ulster University, Derry, Northern Ireland, UK
- T M McGinnity, Intelligent Systems Research Centre, Ulster University, Derry, Northern Ireland, UK; School of Science and Technology, Nottingham Trent University, Nottingham, UK
9. Zeng G, Huang X, Jiang T, Yu S. Short-term synaptic plasticity expands the operational range of long-term synaptic changes in neural networks. Neural Netw 2019;118:140-147. DOI: 10.1016/j.neunet.2019.06.002.
10. Rodriguez N, Izquierdo E, Ahn YY. Optimal modularity and memory capacity of neural reservoirs. Netw Neurosci 2019;3:551-566. PMID: 31089484; PMCID: PMC6497001; DOI: 10.1162/netn_a_00082.
Abstract
The neural network is a powerful computing framework that has been exploited by biological evolution and by humans for solving diverse problems. Although the computational capabilities of neural networks are determined by their structure, the current understanding of the relationships between a neural network's architecture and function is still primitive. Here we reveal that a neural network's modular architecture plays a vital role in determining the neural dynamics and memory performance of networks of threshold neurons. In particular, we demonstrate that there exists an optimal modularity for memory performance, at which a balance between local cohesion and global connectivity is established, allowing optimally modular networks to remember longer. Our results suggest that insights from dynamical analysis of neural networks and information-spreading processes can be leveraged to better design neural networks, and may shed light on the brain's modular organization.
Affiliation(s)
- Nathaniel Rodriguez, School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, USA
- Eduardo Izquierdo, School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, USA; Cognitive Science Program, Indiana University, Bloomington, IN, USA
- Yong-Yeol Ahn, School of Informatics, Computing, and Engineering, Indiana University, Bloomington, IN, USA; Indiana University Network Science Institute, Bloomington, IN, USA
11. Liquid computing of spiking neural network with multi-clustered and active-neuron-dominant structure. Neurocomputing 2017. DOI: 10.1016/j.neucom.2017.03.022.
12. Familiarity Detection is an Intrinsic Property of Cortical Microcircuits with Bidirectional Synaptic Plasticity. eNeuro 2017;4:eN-NWR-0361-16. PMID: 28534043; PMCID: PMC5439184; DOI: 10.1523/eneuro.0361-16.2017.
Abstract
Humans instantly recognize a previously seen face as “familiar.” To deepen our understanding of familiarity-novelty detection, we simulated biologically plausible neural network models of generic cortical microcircuits consisting of spiking neurons with random recurrent synaptic connections. NMDA receptor (NMDAR)-dependent synaptic plasticity was implemented to allow for unsupervised learning and bidirectional modifications. Network spiking activity evoked by sensory inputs consisting of face images altered synaptic efficacy, which resulted in the network responding more strongly to a previously seen face than a novel face. Network size determined how many faces could be accurately recognized as familiar. When the simulated model became sufficiently complex in structure, multiple familiarity traces could be retained in the same network by forming partially-overlapping subnetworks that differ slightly from each other, thereby resulting in a high storage capacity. Fisher’s discriminant analysis was applied to identify critical neurons whose spiking activity predicted familiar input patterns. Intriguingly, as sensory exposure was prolonged, the selected critical neurons tended to appear at deeper layers of the network model, suggesting recruitment of additional circuits in the network for incremental information storage. We conclude that generic cortical microcircuits with bidirectional synaptic plasticity have an intrinsic ability to detect familiar inputs. This ability does not require a specialized wiring diagram or supervision and can therefore be expected to emerge naturally in developing cortical circuits.
13. Roy S, Basu A. An Online Structural Plasticity Rule for Generating Better Reservoirs. Neural Comput 2016;28:2557-2584. PMID: 27626967; DOI: 10.1162/neco_a_00886.
Abstract
In this letter, we propose a novel neuro-inspired, low-resolution, online unsupervised learning rule to train the reservoir, or liquid, of liquid state machines. The liquid is a huge, sparsely interconnected recurrent network of spiking neurons. The proposed learning rule is inspired by structural plasticity and trains the liquid by forming and eliminating synaptic connections. Hence, the learning involves rewiring of the reservoir connections, similar to the structural plasticity observed in biological neural networks. The network connections can be stored as a connection matrix and updated in memory using address event representation (AER) protocols, which are generally employed in neuromorphic systems. On investigating the pairwise separation property, we find that trained liquids provide 1.36 ± 0.18 times more interclass separation while retaining similar intraclass separation compared to random liquids. Moreover, analysis of the linear separation property reveals that trained liquids are 2.05 ± 0.27 times better than random liquids. Furthermore, we show that our liquids are able to retain the generalization ability and generality of random liquids. A memory analysis shows that trained liquids have 83.67 ± 5.79 ms longer fading memory than random liquids, which show 92.8 ± 5.03 ms fading memory for a particular type of spike train input. We also shed some light on the dynamics of the evolution of recurrent connections within the liquid. Moreover, compared to 'separation-driven synaptic modification', a recently proposed algorithm for iteratively refining reservoirs, our learning rule provides 9.30%, 15.21%, and 12.52% more liquid separation and 2.8%, 9.1%, and 7.9% better classification accuracy for 4-, 8-, and 12-class pattern recognition tasks, respectively.
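The core mechanism above, training the liquid by forming and eliminating connections stored in a connection matrix, can be illustrated with a toy sketch. This is not Roy and Basu's actual rule: the random score matrix stands in for whatever activity-based fitness criterion drives the rewiring, and `k` is an arbitrary rewiring budget.

```python
import numpy as np

def rewire(C, score, k=5):
    """One toy structural-plasticity step on a binary connection matrix C:
    eliminate the k lowest-scoring existing synapses and form k new ones
    at the highest-scoring vacant slots, keeping the synapse count fixed."""
    C = C.copy()
    on = np.argwhere(C == 1)                          # existing connections
    off = np.argwhere(C == 0)                         # vacant slots
    worst = on[np.argsort(score[tuple(on.T)])[:k]]
    C[tuple(worst.T)] = 0                             # prune the weakest
    best = off[np.argsort(score[tuple(off.T)])[-k:]]
    C[tuple(best.T)] = 1                              # grow the strongest candidates
    return C

rng = np.random.default_rng(2)
C = (rng.random((20, 20)) < 0.2).astype(int)          # sparse random liquid
score = rng.random((20, 20))                          # stand-in for an activity-based fitness
C2 = rewire(C, score)
```

Because only a binary connection matrix changes, each step maps naturally onto the AER-style in-memory connection updates the abstract mentions.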
Affiliation(s)
- Subhrajit Roy, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798
- Arindam Basu, School of Electrical and Electronic Engineering, Nanyang Technological University, Singapore 639798
14. Wang H, Lam K, Fung CCA, Wong KYM, Wu S. Rich spectrum of neural field dynamics in the presence of short-term synaptic depression. Phys Rev E Stat Nonlin Soft Matter Phys 2015;92:032908. PMID: 26465541; DOI: 10.1103/physreve.92.032908.
Abstract
In continuous attractor neural networks (CANNs), spatially continuous information such as orientation, head direction, and spatial location is represented by Gaussian-like tuning curves that can be displaced continuously in the space of the preferred stimuli of the neurons. We investigate how short-term synaptic depression (STD) can reshape the intrinsic dynamics of the CANN model and its responses to a single static input. In particular, CANNs with STD can support various complex firing patterns and chaotic behaviors. These chaotic behaviors have the potential to encode various stimuli in the neuronal system.
Affiliation(s)
- He Wang, Department of Physics, Hong Kong University of Science and Technology, Hong Kong, China
- Kin Lam, Department of Physics, Hong Kong University of Science and Technology, Hong Kong, China
- C C Alan Fung, Department of Physics, Hong Kong University of Science and Technology, Hong Kong, China
- K Y Michael Wong, Department of Physics, Hong Kong University of Science and Technology, Hong Kong, China
- Si Wu, State Key Laboratory of Cognitive Neuroscience and Learning, IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China
15.
Abstract
The ability to process complex spatiotemporal information is a fundamental process underlying the behavior of all higher organisms. However, how the brain processes information in the temporal domain remains incompletely understood. We have explored the spatiotemporal information-processing capability of networks formed from dissociated rat E18 cortical neurons growing in culture. By combining optogenetics with microelectrode array recording, we show that these randomly organized cortical microcircuits are able to process complex spatiotemporal information, allowing the identification of a large number of temporal sequences and classification of musical styles. These experiments uncovered spatiotemporal memory processes lasting several seconds. Neural network simulations indicated that both short-term synaptic plasticity and recurrent connections are required for the emergence of this capability. Interestingly, NMDA receptor function is not a requisite for these short-term spatiotemporal memory processes. Indeed, blocking the NMDA receptor with the antagonist APV significantly improved the temporal processing ability of the networks, by reducing spontaneously occurring network bursts. These highly synchronized events have disastrous effects on spatiotemporal information processing, by transiently erasing short-term memory. These results show that the ability to process and integrate complex spatiotemporal information is an intrinsic property of generic cortical networks that does not require specifically designed circuits.
16. Roy S, Banerjee A, Basu A. Liquid state machine with dendritically enhanced readout for low-power, neuromorphic VLSI implementations. IEEE Trans Biomed Circuits Syst 2014;8:681-695. PMID: 25361513; DOI: 10.1109/tbcas.2014.2362969.
Abstract
In this paper, we describe a new neuro-inspired, hardware-friendly readout stage for the liquid state machine (LSM), a popular model for reservoir computing. Compared to the parallel perceptron architecture trained by the p-delta algorithm, which is the state of the art in terms of performance of readout stages, our readout architecture and learning algorithm can attain better performance with significantly less synaptic resources making it attractive for VLSI implementation. Inspired by the nonlinear properties of dendrites in biological neurons, our readout stage incorporates neurons having multiple dendrites with a lumped nonlinearity (two compartment model). The number of synaptic connections on each branch is significantly lower than the total number of connections from the liquid neurons and the learning algorithm tries to find the best 'combination' of input connections on each branch to reduce the error. Hence, the learning involves network rewiring (NRW) of the readout network similar to structural plasticity observed in its biological counterparts. We show that compared to a single perceptron using analog weights, this architecture for the readout can attain, even by using the same number of binary valued synapses, up to 3.3 times less error for a two-class spike train classification problem and 2.4 times less error for an input rate approximation task. Even with 60 times larger synapses, a group of 60 parallel perceptrons cannot attain the performance of the proposed dendritically enhanced readout. An additional advantage of this method for hardware implementations is that the 'choice' of connectivity can be easily implemented exploiting address event representation (AER) protocols commonly used in current neuromorphic systems where the connection matrix is stored in memory. Also, due to the use of binary synapses, our proposed method is more robust against statistical variations.