1. Multi-scale full spike pattern for semantic segmentation. Neural Netw 2024; 176:106330. PMID: 38688068; DOI: 10.1016/j.neunet.2024.106330. Received 08/17/2023; revised 02/08/2024; accepted 04/19/2024.
Abstract
Spiking neural networks (SNNs), as the brain-inspired neural networks, encode information in spatio-temporal dynamics. They have the potential to serve as low-power alternatives to artificial neural networks (ANNs) due to their sparse and event-driven nature. However, existing SNN-based models for pixel-level semantic segmentation tasks suffer from poor performance and high memory overhead, failing to fully exploit the computational effectiveness and efficiency of SNNs. To address these challenges, we propose the multi-scale and full spike segmentation network (MFS-Seg), which is based on a directly trained deep SNN and represents the first attempt to train a deep SNN with surrogate gradients for semantic segmentation. Specifically, we design an efficient fully-spike residual block (EFS-Res) to alleviate representation issues caused by spiking noise on different channels. EFS-Res utilizes depthwise separable convolution to improve the distributions of spiking feature maps. The visualization shows that our model can effectively extract the edge features of segmented objects. Furthermore, it can significantly reduce the memory overhead and energy consumption of the network. In addition, we theoretically analyze and prove that EFS-Res can avoid the degradation problem based on block dynamical isometry theory. Experimental results on the Camvid dataset, the DDD17 dataset, and the DSEC-Semantic dataset show that our model achieves comparable performance to the mainstream UNet network with up to 31× fewer parameters, while significantly reducing power consumption by over 13×. Overall, our MFS-Seg model demonstrates promising results in terms of performance, memory efficiency, and energy consumption, showcasing the potential of deep SNNs for semantic segmentation tasks. Our code is available at https://github.com/BICLab/MFS-Seg.
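The surrogate-gradient training mentioned above can be illustrated with a minimal sketch; the specific EFS-Res block is not reproduced here, and the rectangular surrogate, time constant, and threshold below are illustrative assumptions rather than the paper's settings:

```python
def spike_fn(v, v_th=1.0):
    """Forward pass: Heaviside step; emit a binary spike at threshold."""
    return 1.0 if v >= v_th else 0.0

def surrogate_grad(v, v_th=1.0, width=1.0):
    """Backward pass: rectangular surrogate for the Heaviside derivative,
    nonzero only in a window around the threshold."""
    return 1.0 / width if abs(v - v_th) < width / 2 else 0.0

def lif_step(v, x, tau=2.0, v_th=1.0):
    """One discrete-time leaky integrate-and-fire update with hard reset."""
    v = v + (x - v) / tau        # leaky integration of input current x
    s = spike_fn(v, v_th)        # fire if the threshold is reached
    return v * (1.0 - s), s      # hard reset where a spike occurred

inputs = [0.2, 0.9, 1.5]         # three channels with increasing drive
v = [0.0] * 3
counts = [0.0] * 3
for _ in range(5):
    for i, x in enumerate(inputs):
        v[i], s = lif_step(v[i], x)
        counts[i] += s
```

During training, `spike_fn` is used in the forward pass and `surrogate_grad` stands in for its zero-almost-everywhere derivative in backpropagation, which is what makes direct training of deep SNNs possible.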
2. Rapid diagnosis of systemic lupus erythematosus by Raman spectroscopy combined with spiking neural network. Spectrochim Acta A Mol Biomol Spectrosc 2024; 310:123904. PMID: 38262298; DOI: 10.1016/j.saa.2024.123904. Received 09/04/2023; revised 11/30/2023; accepted 01/15/2024.
Abstract
Systemic lupus erythematosus (SLE) is an autoimmune inflammatory connective tissue disease that affects multiple organs. If not diagnosed and treated in a timely manner, it can lead to nephritis and damage to the blood system and, in severe cases, the patient's death. Therefore, correct and timely diagnosis and treatment are essential for patients. In this study, a framework based on neural network algorithms and Raman spectroscopy was established to diagnose SLE patients. First, we pre-processed the acquired Raman data by three methods, baseline correction, smoothing, and normalization, before using it as input for the models; ANN, ResNet, and SNN classification models were then established. The respective classification accuracies of the three models for SLE patients were 89.61%, 85.71%, and 95.65%, with corresponding AUC values of 0.8772, 0.8100, and 0.9555. The experimental results indicate that the SNN possesses a good classification effect, and its number of model parameters is only 525,826, which is 414,221 fewer than that of the ResNet model. Since the network transmits information using only 0s and 1s and requires only basic operations such as summation, replacing the floating-point multiplications of second-generation artificial neural networks with additions, it has low energy consumption and is suitable for embedding in a portable Raman spectrometer for clinical diagnosis. This research highlights the significant potential for quick and precise SLE patient discrimination offered by Raman spectroscopy in conjunction with spiking neural networks.
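The three preprocessing steps listed above (baseline correction, smoothing, normalization) can be sketched as follows; the linear baseline model and window size are simplifying assumptions for illustration, not the paper's exact method:

```python
def preprocess_spectrum(intensity, window=5):
    """Illustrative Raman preprocessing pipeline: linear baseline
    correction, moving-average smoothing, then min-max normalization."""
    n = len(intensity)
    # 1. Baseline correction: subtract the straight line through the endpoints
    slope = (intensity[-1] - intensity[0]) / (n - 1)
    corrected = [y - (intensity[0] + slope * i) for i, y in enumerate(intensity)]
    # 2. Smoothing: centered moving average with edge clamping
    half = window // 2
    smoothed = []
    for i in range(n):
        seg = corrected[max(0, i - half):min(n, i + half + 1)]
        smoothed.append(sum(seg) / len(seg))
    # 3. Normalization: rescale to [0, 1]
    lo, hi = min(smoothed), max(smoothed)
    return [(y - lo) / (hi - lo) for y in smoothed]

# Synthetic spectrum: linear baseline plus a small peak around channel 100
spectrum = [0.01 * i for i in range(200)]
for offset, bump in [(-2, 0.2), (-1, 0.6), (0, 1.0), (1, 0.6), (2, 0.2)]:
    spectrum[100 + offset] += bump
processed = preprocess_spectrum(spectrum)
```

In practice, baseline removal is often done with polynomial or asymmetric least-squares fits and smoothing with Savitzky-Golay filters; the pipeline structure is the same.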
3. Trainable Spiking-YOLO for low-latency and high-performance object detection. Neural Netw 2024; 172:106092. PMID: 38211460; DOI: 10.1016/j.neunet.2023.106092. Received 04/28/2023; revised 12/06/2023; accepted 12/26/2023.
Abstract
Spiking neural networks (SNNs) are considered an attractive option for edge-side applications due to their sparse, asynchronous and event-driven characteristics. However, the application of SNNs to object detection tasks faces challenges in achieving good detection accuracy and high detection speed. To overcome the aforementioned challenges, we propose an end-to-end Trainable Spiking-YOLO (Tr-Spiking-YOLO) for low-latency and high-performance object detection. We evaluate our model not only on the frame-based PASCAL VOC dataset but also on the event-based GEN1 Automotive Detection dataset, and investigate the impacts of different decoding methods on detection performance. The experimental results show that our model achieves competitive or better performance in terms of accuracy, latency and energy consumption compared to similar artificial neural network (ANN) and conversion-based SNN object detection models. Furthermore, when deployed on an edge device, our model achieves a processing speed of approximately 14 to 39 FPS while maintaining a desirable mean Average Precision (mAP), making it capable of real-time detection on resource-constrained platforms.
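The decoding methods the abstract compares can be illustrated with a minimal sketch of two common schemes for reading out spiking outputs, rate decoding and first-spike (latency) decoding; the exact decoders studied in the paper may differ:

```python
def rate_decode(spikes):
    """spikes: list of per-timestep lists, one entry per output neuron.
    Predict the class with the highest total spike count."""
    counts = [sum(col) for col in zip(*spikes)]
    return counts.index(max(counts))

def first_spike_decode(spikes):
    """Predict the class whose output neuron fires earliest; if no neuron
    fires at all, fall back to the rate decision."""
    T = len(spikes)
    first = []
    for col in zip(*spikes):
        t = next((i for i, s in enumerate(col) if s), T)  # T means "never"
        first.append(t)
    if min(first) == T:
        return rate_decode(spikes)
    return first.index(min(first))

# 4 timesteps, 3 output neurons: neuron 1 fires most, neuron 2 fires first
spikes = [[0, 0, 1],
          [0, 1, 0],
          [0, 1, 0],
          [0, 1, 1]]
```

First-spike decoding can terminate inference as soon as any output fires, trading a little accuracy for lower latency, which is the trade-off such comparisons probe.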
4. An efficient intrusion detection model based on convolutional spiking neural network. Sci Rep 2024; 14:7054. PMID: 38528084; DOI: 10.1038/s41598-024-57691-x. Received 01/25/2024; accepted 03/20/2024. Open access.
Abstract
Many intrusion detection techniques have been developed to ensure that the target system can function properly under the established rules. With the boom in Internet of Things (IoT) applications, the resource-constrained nature of IoT devices makes it urgent to explore lightweight and high-performance intrusion detection models. Recent years have seen a particularly active application of deep learning (DL) techniques. The spiking neural network (SNN), a type of artificial intelligence associated with sparse computation and inherent temporal dynamics, has been viewed as a potential candidate for the next generation of DL. It should be noted, however, that current research into SNNs has largely focused on scenarios where limited computational resources and insufficient power sources are not considered. Consequently, even state-of-the-art SNN solutions tend to be inefficient. In this paper, a lightweight and effective detection model is proposed. With the help of rational algorithm design, the model integrates the advantages of SNNs as well as convolutional neural networks (CNNs). In addition to reducing resource usage, it maintains a high level of classification accuracy. The proposed model was evaluated against current state-of-the-art models using a comprehensive set of metrics. Based on the experimental results, the model demonstrated improved adaptability to environments with limited computational resources and energy sources.
5. A novel stochastic resonance based deep residual network for fault diagnosis of rolling bearing system. ISA Trans 2024: S0019-0578(24)00128-9. PMID: 38582635; DOI: 10.1016/j.isatra.2024.03.020. Received 12/26/2023; revised 03/21/2024; accepted 03/22/2024.
Abstract
Rolling bearings constitute one of the most vital components in mechanical equipment, and monitoring and diagnosing their condition is essential to ensure safe operation. In actual production, the collected fault signals typically contain noise and cannot be accurately identified. In this paper, stochastic resonance (SR) is introduced into a spiking neural network (SNN) as a feature enhancement method for fault signals with varying noise intensities, combining deep learning with SR to enhance classification accuracy. The output signal-to-noise ratio (SNR) can be enhanced by the SR effect when the noise-affected fault signal is input into the neurons. Validation of the method is carried out through experiments on the CWRU dataset, achieving a classification accuracy of 99.9%. In high-noise environments, with an SNR of -8 dB, the proposed SR-based deep residual networks (SRDNs) achieve over 92% accuracy, exhibiting better robustness and adaptability.
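The stochastic resonance effect described above is commonly modeled with the overdamped bistable system, where an appropriate amount of noise helps a sub-threshold signal drive transitions between the two wells. A minimal Euler-Maruyama sketch, with illustrative parameters rather than the paper's configuration:

```python
import math, random

def bistable_sr(signal, noise_std, a=1.0, b=1.0, dt=0.01, seed=0):
    """Euler-Maruyama integration of the overdamped bistable system
    dx/dt = a*x - b*x**3 + s(t) + noise, the textbook stochastic
    resonance model."""
    rng = random.Random(seed)
    x = [0.0] * len(signal)
    for i in range(1, len(signal)):
        drift = a * x[i-1] - b * x[i-1]**3 + signal[i-1]
        x[i] = x[i-1] + drift * dt + noise_std * math.sqrt(dt) * rng.gauss(0, 1)
    return x

# Sub-threshold sinusoid: too weak to cross between the wells on its own
t = [0.01 * i for i in range(5000)]
weak = [0.3 * math.sin(0.5 * ti) for ti in t]
out = bistable_sr(weak, noise_std=0.5)
```

Sweeping `noise_std` and measuring the output SNR at the signal frequency reveals the characteristic resonance peak: too little noise and the signal stays trapped in one well, too much and the output is dominated by noise.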
6. Hierarchical rhythmic propagation of corticothalamic interactions for consciousness: A computational study. Comput Biol Med 2024; 169:107843. PMID: 38141448; DOI: 10.1016/j.compbiomed.2023.107843. Received 03/29/2023; revised 11/22/2023; accepted 12/11/2023.
Abstract
Clarifying the mechanisms of loss and recovery of consciousness in the brain is a major challenge in neuroscience, and research on the spatiotemporal organization of rhythms at the brain-region scale at different levels of consciousness remains scarce. By applying computational neuroscience, an extended corticothalamic network model was developed in this study to simulate the altered states of consciousness induced by different concentration levels of propofol. The cortical area containing oscillations spread from posterior to anterior in four successive time stages, defining four groups of brain regions. A quantitative analysis showed that hierarchical rhythm propagation was mainly due to heterogeneity in the inter-region connections. These results indicate that the proposed model is an anatomically data-driven testbed and a simulation platform with millisecond resolution. It facilitates understanding of activity coordination across multiple areas of the conscious brain and the mechanisms of action of anesthetics in terms of brain regions.
7. An exact mapping from ReLU networks to spiking neural networks. Neural Netw 2023; 168:74-88. PMID: 37742533; DOI: 10.1016/j.neunet.2023.09.011. Received 01/24/2023; revised 08/31/2023; accepted 09/04/2023.
Abstract
Deep spiking neural networks (SNNs) offer the promise of low-power artificial intelligence. However, training deep SNNs from scratch or converting deep artificial neural networks to SNNs without loss of performance has been a challenge. Here we propose an exact mapping from a network with Rectified Linear Units (ReLUs) to an SNN that fires exactly one spike per neuron. For our constructive proof, we assume that an arbitrary multi-layer ReLU network with or without convolutional layers, batch normalization and max pooling layers was trained to high performance on some training set. Furthermore, we assume that we have access to a representative example of input data used during training and to the exact parameters (weights and biases) of the trained ReLU network. The mapping from deep ReLU networks to SNNs causes zero percent drop in accuracy on CIFAR10, CIFAR100 and the ImageNet-like data sets Places365 and PASS. More generally our work shows that an arbitrary deep ReLU network can be replaced by an energy-efficient single-spike neural network without any loss of performance.
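The idea of representing a ReLU activation with exactly one spike can be caricatured with a simple time-to-first-spike code, where larger activations fire earlier in a fixed coding window; this linear toy code and the `T_MAX` window are assumptions for illustration, far simpler than the paper's exact mapping:

```python
T_MAX = 10.0   # end of the coding window (an illustrative choice)

def relu(x):
    return max(x, 0.0)

def encode_ttfs(a):
    """Time-to-first-spike encoding: larger activations fire earlier;
    a zero activation spikes only at the end of the window."""
    return T_MAX - relu(a)

def decode_ttfs(t):
    """Recover the activation value from the spike time."""
    return T_MAX - t

acts = [0.0, 0.7, 2.5]
times = [encode_ttfs(a) for a in acts]
recovered = [decode_ttfs(t) for t in times]
```

Because each neuron fires at most once, the energy cost scales with the number of neurons rather than with the number of timesteps, which is the appeal of single-spike networks.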
8. Sparser spiking activity can be better: Feature Refine-and-Mask spiking neural network for event-based visual recognition. Neural Netw 2023; 166:410-423. PMID: 37549609; DOI: 10.1016/j.neunet.2023.07.008. Received 07/14/2022; revised 02/23/2023; accepted 07/05/2023.
Abstract
Event-based vision, a new visual paradigm with bio-inspired dynamic perception and μs-level temporal resolution, has prominent advantages in many specific visual scenarios and has gained much research interest. The spiking neural network (SNN) is naturally suitable for dealing with event streams due to its temporal information processing capability and event-driven nature. However, existing SNN works neglect the fact that input event streams are spatially sparse and temporally non-uniform, and simply treat these varying inputs equally. This situation interferes with the effectiveness and efficiency of existing SNNs. In this paper, we propose the feature Refine-and-Mask SNN (RM-SNN), which has the ability of self-adaption to regulate the spiking response in a data-dependent way. We use the Refine-and-Mask (RM) module to refine all features and mask the unimportant ones to optimize the membrane potential of spiking neurons, which in turn reduces the spiking activity. Inspired by the fact that not all events in spatio-temporal streams are task-relevant, we apply the RM module in both the temporal and channel dimensions. Extensive experiments on seven event-based benchmarks (DVS128 Gesture, DVS128 Gait, CIFAR10-DVS, N-Caltech101, DailyAction-DVS, UCF101-DVS, and HMDB51-DVS) demonstrate that, under multi-scale constraints of the input time window, RM-SNN can significantly reduce the network's average spiking activity rate while improving task performance. In addition, by visualizing spiking responses, we analyze why sparser spiking activity can be better.
9. Memristor-based spiking neural network with online reinforcement learning. Neural Netw 2023; 166:512-523. PMID: 37579580; DOI: 10.1016/j.neunet.2023.07.031. Received 09/19/2022; revised 04/28/2023; accepted 07/24/2023.
Abstract
Neural networks implemented in memristor-based hardware can provide fast and efficient in-memory computation, but traditional learning methods such as error back-propagation are hardly feasible in such hardware. Spiking neural networks (SNNs) are highly promising in this regard, as their weights can be changed locally in a self-organized manner without the demand for high-precision changes calculated with the use of information from almost the entire network. This problem is particularly relevant for solving control tasks with neural-network reinforcement learning methods, as those are highly sensitive to any source of stochasticity in a model initialization, training, or decision-making procedure. This paper presents an online reinforcement learning algorithm in which the change of connection weights is carried out after processing each environment state during interaction-with-environment data generation. Another novel feature of the algorithm is that it is applied to SNNs with memristor-based STDP-like learning rules. The plasticity functions are obtained from real memristors based on poly-p-xylylene and CoFeB-LiNbO3 nanocomposite, which were experimentally assembled and analyzed. The SNN is comprised of leaky integrate-and-fire neurons. Environmental states are encoded by the timings of input spikes, and the control action is decoded by the first spike. The proposed learning algorithm solves the Cart-Pole benchmark task successfully. This result could be the first step towards implementing a real-time agent learning procedure in a continuous-time environment that can be run on neuromorphic systems with memristive synapses.
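The STDP-like learning rules mentioned above can be sketched with the textbook pair-based exponential window; the paper's actual plasticity functions were measured from real memristive devices, so the form and constants below are only an idealized approximation:

```python
import math

def stdp_dw(dt, a_plus=0.1, a_minus=0.12, tau_plus=20.0, tau_minus=20.0):
    """Pair-based exponential STDP window, dt = t_post - t_pre in ms.
    Pre-before-post (dt > 0) potentiates; post-before-pre depresses."""
    if dt > 0:
        return a_plus * math.exp(-dt / tau_plus)
    if dt < 0:
        return -a_minus * math.exp(dt / tau_minus)
    return 0.0
```

A memristive synapse realizes this rule physically: the overlap of pre- and post-synaptic voltage pulses across the device produces a conductance change whose sign and magnitude depend on the spike-time difference.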
10. [A bio-inspired hierarchical spiking neural network with biological synaptic plasticity for event camera object recognition]. Sheng Wu Yi Xue Gong Cheng Xue Za Zhi 2023; 40:692-699. PMID: 37666759; PMCID: PMC10477392; DOI: 10.7507/1001-5515.202207040. Received 07/18/2022; revised 07/14/2023.
Abstract
With inherent sparse spike-based coding and asynchronous event-driven computation, the spiking neural network (SNN) is naturally suitable for processing the event stream data of event cameras. In order to improve the feature extraction and classification performance of bio-inspired hierarchical SNNs, an event camera object recognition system based on biological synaptic plasticity is proposed in this paper. In our system, input event streams are first segmented adaptively using the spiking neuron potential to improve the computational efficiency of the system. Multi-layer feature learning and classification are implemented by our bio-inspired hierarchical SNN with synaptic plasticity. After a Gabor filter-based event-driven convolution layer extracts the primary visual features of the event streams, we use a feature learning layer with an unsupervised spike-timing-dependent plasticity (STDP) rule to help the network extract frequent salient features, and a feature learning layer with a reward-modulated STDP rule to help the network learn diagnostic features. The classification accuracies of the proposed network on four benchmark event stream datasets were better than those of existing bio-inspired hierarchical SNNs. Moreover, our method showed good classification ability for short event stream inputs and was robust to input event stream noise. The results show that our method can improve the feature extraction and classification performance of this kind of SNN for event camera object recognition.
11. An unsupervised STDP-based spiking neural network inspired by biologically plausible learning rules and connections. Neural Netw 2023; 165:799-808. PMID: 37418862; DOI: 10.1016/j.neunet.2023.06.019. Received 10/28/2022; revised 06/14/2023; accepted 06/15/2023.
Abstract
The backpropagation algorithm has promoted the rapid development of deep learning, but it relies on a large amount of labeled data, and there is still a large gap with how humans learn. The human brain can quickly learn various conceptual knowledge in a self-organized and unsupervised manner, which is accomplished by coordinating various learning rules and structures in the brain. Spike-timing-dependent plasticity (STDP) is a general learning rule in the brain, but spiking neural networks (SNNs) trained with STDP alone are inefficient and perform poorly. In this paper, taking inspiration from short-term synaptic plasticity, we design an adaptive synaptic filter and introduce an adaptive spiking threshold as neuron plasticity to enrich the representation ability of SNNs. We also introduce an adaptive lateral inhibitory connection to dynamically adjust the spike balance and help the network learn richer features. To speed up and stabilize the training of unsupervised spiking neural networks, we design a samples temporal batch STDP (STB-STDP), which updates weights based on multiple samples and moments. By integrating the above three adaptive mechanisms and STB-STDP, our model greatly accelerates the training of unsupervised SNNs and improves their performance on complex tasks. Our model achieves the current state-of-the-art performance of unsupervised STDP-based SNNs on the MNIST and FashionMNIST datasets. We further tested it on the more complex CIFAR10 dataset, and the results fully illustrate the superiority of our algorithm. Ours is also the first work to apply unsupervised STDP-based SNNs to CIFAR10. At the same time, in the small-sample learning scenario, it far exceeds a supervised ANN with the same structure.
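The adaptive spiking threshold introduced above can be sketched as a LIF neuron whose threshold jumps at each spike and then decays back toward a resting value, a simple homeostatic mechanism that balances firing rates; all constants here are illustrative assumptions, not the paper's:

```python
import math

def adaptive_lif(inputs, tau_v=2.0, v_th0=1.0, th_inc=0.5, tau_th=10.0):
    """LIF neuron with an adaptive spiking threshold: every spike raises
    the threshold, which then decays back toward its resting value,
    dynamically regulating the spike rate."""
    v, v_th = 0.0, v_th0
    spikes = []
    for x in inputs:
        v += (x - v) / tau_v                 # leaky integration
        s = 1 if v >= v_th else 0
        spikes.append(s)
        if s:
            v = 0.0                          # hard reset
            v_th += th_inc                   # homeostatic threshold jump
        # exponential decay of the threshold back to its resting value
        v_th = v_th0 + (v_th - v_th0) * math.exp(-1.0 / tau_th)
    return spikes

adaptive = adaptive_lif([2.0] * 20)
fixed = adaptive_lif([2.0] * 20, th_inc=0.0)   # same neuron, no adaptation
```

Under constant strong drive, the adaptive neuron fires sparsely while the fixed-threshold neuron fires on every step, which is the representational effect the abstract describes.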
12. Prediction and detection of virtual reality induced cybersickness: a spiking neural network approach using spatiotemporal EEG brain data and heart rate variability. Brain Inform 2023; 10:15. PMID: 37438494; DOI: 10.1186/s40708-023-00192-w. Received 12/16/2022; accepted 05/06/2023. Open access.
Abstract
Virtual Reality (VR) allows users to interact with 3D immersive environments and has the potential to be a key technology across many domain applications, including access to a future metaverse. Yet, consumer adoption of VR technology is limited by cybersickness (CS), a debilitating sensation accompanied by a cluster of symptoms, including nausea, oculomotor issues and dizziness. A leading problem is the lack of automated objective tools to predict or detect CS in individuals, which can then be used for resistance training, timely warning systems or clinical intervention. This paper explores the spatiotemporal brain dynamics and heart rate variability involved in cybersickness and uses this information to both predict and detect CS episodes. The present study applies deep learning of EEG in a spiking neural network (SNN) architecture to predict CS prior to using VR (85.9%, F7) and detect it (76.6%, FP1, Cz). ECG-derived sympathetic heart rate variability (HRV) parameters can be used for both prediction (74.2%) and detection (72.6%) but at a lower accuracy than EEG. Multimodal data fusion of EEG and sympathetic HRV does not change this accuracy compared to EEG alone. The study found that Cz (premotor and supplementary motor cortex) and O2 (primary visual cortex) are key hubs in functionally connected networks associated with both CS events and susceptibility to CS. F7 is also suggested as a key area involved in integrating information and implementing responses to incongruent environments that induce cybersickness. Consequently, Cz, O2 and F7 are presented here as promising targets for intervention.
13. Origin of the efficiency of spike timing-based neural computation for processing temporal information. Neural Netw 2023; 160:84-96. PMID: 36621172; DOI: 10.1016/j.neunet.2022.12.017. Received 08/06/2021; revised 10/12/2022; accepted 12/21/2022.
Abstract
Although the advantage of spike timing-based over rate-based network computation has been recognized, the underlying mechanism remains unclear. Using the Tempotron and Perceptron as elementary neural models, we examined the intrinsic difference between spike timing-based and rate-based computations. For a more direct comparison, we modified Tempotron computation into rate-based computation with the retention of some temporal information. Previous studies have shown that spike timing-based computation is computationally more powerful than rate-based computation in terms of the number of computational units required and the capability of classifying random patterns. Our study showed that spike timing-based and rate-based Tempotron computations provided similar capability in classifying random spike patterns, as well as in text sentiment classification and spam text detection. However, spike timing-based computation is superior in performing a task involving discriminating forward vs. reverse sequences of events, i.e., information mainly temporal in nature. Further studies revealed that this superiority required the asymmetry in the profile of the postsynaptic potential (PSP), and that temporal sequence information was converted into a biased spatial distribution of synaptic weight modifications during learning. Thus, the intrinsic PSP asymmetry is a mechanistic basis for the high efficiency of spike timing-based computation for processing temporal information.
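The asymmetric PSP profile identified above as the key mechanism can be sketched with the standard Tempotron kernel, a fast-rise, slow-decay double exponential; the tau = 4 * tau_s ratio is a common convention in the Tempotron literature and an assumption here, not necessarily the paper's choice:

```python
import math

def psp_kernel(t, tau=15.0, tau_s=3.75):
    """Asymmetric Tempotron PSP kernel: fast rise (tau_s), slow decay
    (tau), with the peak normalized to 1."""
    if t < 0:
        return 0.0
    t_peak = tau * tau_s / (tau - tau_s) * math.log(tau / tau_s)
    v0 = 1.0 / (math.exp(-t_peak / tau) - math.exp(-t_peak / tau_s))
    return v0 * (math.exp(-t / tau) - math.exp(-t / tau_s))

def membrane_potential(t, spike_times, weights):
    """Tempotron voltage at time t: weighted sum of the PSPs evoked by
    all afferent spikes."""
    return sum(w * psp_kernel(t - ts) for w, ts in zip(weights, spike_times))

t_peak = 15.0 * 3.75 / (15.0 - 3.75) * math.log(15.0 / 3.75)  # ≈ 6.93 ms
```

Because the kernel rises faster than it decays, reversing the order of two input spikes changes the voltage trace, which is what lets the Tempotron discriminate forward from reverse event sequences.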
14. S3NN: Time step reduction of spiking surrogate gradients for training energy efficient single-step spiking neural networks. Neural Netw 2023; 159:208-219. PMID: 36657226; DOI: 10.1016/j.neunet.2022.12.008. Received 01/27/2022; revised 10/03/2022; accepted 12/13/2022.
Abstract
As the scales of neural networks increase, techniques that enable them to run with low computational cost and energy efficiency are required. From such demands, various efficient neural network paradigms, such as spiking neural networks (SNNs) or binary neural networks (BNNs), have been proposed. However, they have persistent drawbacks, such as degraded inference accuracy and latency. To solve these problems, we propose a single-step spiking neural network (S3NN), an energy-efficient neural network with low computational cost and high precision. The proposed S3NN processes the information between hidden layers by spikes, as SNNs do. Nevertheless, it has no temporal dimension, so there is no latency in the training and inference phases, as in BNNs. Thus, the proposed S3NN has a lower computational cost than SNNs that require time-series processing. However, S3NN cannot adopt naïve backpropagation algorithms due to the non-differentiable nature of spikes. We deduce a suitable neuron model by reducing the surrogate gradient for multi-time step SNNs to a single time step. We experimentally demonstrated that the obtained surrogate gradient allows S3NN to be trained appropriately. We also showed that the proposed S3NN could achieve comparable accuracy to full-precision networks while being highly energy-efficient.
15. Inferring the temporal evolution of synaptic weights from dynamic functional connectivity. Brain Inform 2022; 9:28. PMID: 36480076; PMCID: PMC9732068; DOI: 10.1186/s40708-022-00178-0. Received 10/06/2022; accepted 11/14/2022. Open access.
Abstract
How to capture the temporal evolution of synaptic weights from measures of dynamic functional connectivity between the activity of different simultaneously recorded neurons is an important and open problem in systems neuroscience. Here, we report methodological progress to address this issue. We first simulated recurrent neural network models of spiking neurons with spike timing-dependent plasticity mechanisms that generate time-varying synaptic and functional coupling. We then used these simulations to test analytical approaches that infer fixed and time-varying properties of synaptic connectivity from directed functional connectivity measures, such as cross-covariance and transfer entropy. We found that, while both cross-covariance and transfer entropy provide robust estimates of which synapses are present in the network and their communication delays, dynamic functional connectivity measured via cross-covariance better captures the evolution of synaptic weights over time. We also established how measures of information transmission delays from static functional connectivity computed over long recording periods (i.e., several hours) can improve shorter time-scale estimates of the temporal evolution of synaptic weights from dynamic functional connectivity. These results provide useful information about how to accurately estimate the temporal variation of synaptic strength from spiking activity measures.
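Inferring a transmission delay from cross-covariance, as described above, can be sketched as follows: spike trains are binned into binary sequences and the delay is read off as the lag of the covariance peak. The firing rates, train length, and noise level are illustrative assumptions:

```python
import random

def estimate_delay(pre, post, max_lag=20):
    """Estimate the spike transmission delay as the lag that maximizes
    the cross-covariance between two binned binary spike trains."""
    n = len(pre)
    mp = sum(pre) / n
    mq = sum(post) / n
    best_lag, best_cov = 1, float("-inf")
    for lag in range(1, max_lag + 1):
        cov = sum((pre[i] - mp) * (post[i + lag] - mq) for i in range(n - lag))
        if cov > best_cov:
            best_lag, best_cov = lag, cov
    return best_lag

rng = random.Random(1)
n, true_delay = 5000, 7
pre = [1.0 if rng.random() < 0.05 else 0.0 for _ in range(n)]
# Postsynaptic train: a delayed copy of pre plus independent background spikes
post = [0.0] * true_delay + pre[:-true_delay]
post = [1.0 if (p or rng.random() < 0.01) else 0.0 for p in post]
```

Computing the covariance in sliding windows instead of over the whole recording turns this static estimate into the dynamic functional connectivity measure the abstract uses to track weight changes over time.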
16. Complex spiking neural networks with synaptic time-delay based on anti-interference function. Cogn Neurodyn 2022; 16:1485-1503. PMID: 36408076; PMCID: PMC9666611; DOI: 10.1007/s11571-022-09803-4. Received 09/07/2021; revised 02/13/2022; accepted 03/21/2022. Open access.
Abstract
Research on brain-like models with bio-interpretability is conducive to promoting information processing ability in the field of artificial intelligence. Biological results show that synaptic time-delay can improve the information processing abilities of the nervous system and is an important factor related to the formation of brain cognitive functions. However, the synaptic plasticity with time-delay of brain-like models still lacks bio-interpretability. In this study, combining excitatory and inhibitory synapses, we construct complex spiking neural networks (CSNNs) with synaptic time-delay that better conform to biological characteristics, in which the topology has scale-free and small-world properties and the nodes are represented by an Izhikevich neuron model. Then, the information processing abilities of CSNNs with different types of synaptic time-delay are comparatively evaluated based on the anti-interference function, and the mechanism of this function is discussed. Using two indicators of the anti-interference function and three kinds of noise, our simulation results consistently verify that: (i) from the perspective of the anti-interference function, a CSNN with synaptic random time-delay outperforms a CSNN with synaptic fixed time-delay, which in turn outperforms a CSNN with no synaptic time-delay, implying that brain-like networks with more bio-interpretable synaptic time-delay have stronger information processing abilities; (ii) synaptic plasticity is the intrinsic factor of the anti-interference function of CSNNs with different types of synaptic time-delay; (iii) synaptic random time-delay gives a CSNN better topological characteristics, which can improve the information processing ability of a brain-like network, implying that synaptic time-delay is a factor that affects the anti-interference function at the level of performance.
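The Izhikevich neuron model used for the network nodes can be sketched directly from its published two-variable form; the regular-spiking parameters and 1 ms Euler step below are the standard textbook choices, not necessarily those of the paper:

```python
def izhikevich(I, T=200, a=0.02, b=0.2, c=-65.0, d=8.0):
    """Izhikevich neuron with regular-spiking parameters, 1 ms Euler step:
    v' = 0.04 v^2 + 5 v + 140 - u + I,  u' = a (b v - u).
    Returns the list of spike times (ms) over T ms of constant input I."""
    v, u = -65.0, b * -65.0
    spikes = []
    for t in range(T):
        v += 0.04 * v * v + 5.0 * v + 140.0 - u + I
        u += a * (b * v - u)
        if v >= 30.0:            # spike cutoff, then reset
            spikes.append(t)
            v, u = c, u + d
    return spikes
```

Changing the four parameters (a, b, c, d) reproduces the model's other firing classes (fast-spiking, bursting, chattering), which is why it is a popular node model for large heterogeneous networks.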
17. Continuous learning of spiking networks trained with local rules. Neural Netw 2022; 155:512-522. PMID: 36166978; DOI: 10.1016/j.neunet.2022.09.003. Received 11/12/2021; revised 06/29/2022; accepted 09/02/2022.
Abstract
Artificial neural networks (ANNs) experience catastrophic forgetting (CF) during sequential learning. In contrast, the brain can learn continuously without any signs of catastrophic forgetting. Spiking neural networks (SNNs) are the next generation of ANNs, with many features borrowed from biological neural networks. Thus, SNNs potentially promise better resilience to CF. In this paper, we study the susceptibility of SNNs to CF and test several biologically inspired methods for mitigating catastrophic forgetting. The SNNs are trained with biologically plausible local training rules based on spike-timing-dependent plasticity (STDP). Local training prohibits the direct use of CF prevention methods based on gradients of a global loss function. We developed and tested a method to determine the importance of synapses (weights) based on stochastic Langevin dynamics without the need for gradients. Several other methods of catastrophic forgetting prevention adapted from analog neural networks were tested as well. The experiments were performed on freely available datasets in the SpykeTorch environment.
|
18
|
Hybrid memristor-CMOS neurons for in-situ learning in fully hardware memristive spiking neural networks. Sci Bull (Beijing) 2021; 66:1624-1633. [PMID: 36654296 DOI: 10.1016/j.scib.2021.04.014] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2020] [Revised: 03/03/2021] [Accepted: 03/26/2021] [Indexed: 02/03/2023]
Abstract
Spiking neural networks, inspired by the human brain and consisting of spiking neurons and plastic synapses, are a promising solution for highly efficient data processing in neuromorphic computing. Recently, memristor-based neurons and synapses have become intriguing candidates for building spiking neural networks in hardware, owing to the close resemblance between their device dynamics and their biological counterparts. However, the functionalities of memristor-based neurons are currently very limited, and a hardware demonstration of a fully memristor-based spiking neural network supporting in-situ learning is very challenging. Here, a hybrid spiking neuron combining a memristor with simple digital circuits is designed and implemented in hardware to enhance neuron functions. The hybrid neuron with memristive dynamics not only realizes the basic leaky integrate-and-fire neuron function but also enables in-situ tuning of the connected synaptic weights. Finally, a fully hardware spiking neural network with the hybrid neurons and memristive synapses is experimentally demonstrated for the first time, and in-situ Hebbian learning is achieved with this network. This work opens up a way towards the implementation of spiking neurons supporting in-situ learning for future neuromorphic computing systems.
|
19
|
A neuromimetic realization of hippocampal CA1 for theta wave generation. Neural Netw 2021; 142:548-563. [PMID: 34340189 DOI: 10.1016/j.neunet.2021.07.002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2020] [Revised: 04/29/2021] [Accepted: 07/02/2021] [Indexed: 11/20/2022]
Abstract
Recent advances in neural engineering have allowed the development of neuroprostheses that facilitate functionality in people with neurological problems. In this research, a real-time neuromorphic system is proposed to artificially reproduce the theta wave and the firing patterns of different neuronal populations in the CA1, a sub-region of the hippocampus. Hippocampal theta oscillations (4-12 Hz) are an important electrophysiological rhythm that contributes to various cognitive functions, including navigation, memory, and novelty detection. The proposed CA1 neuromimetic circuit includes 100 linearized Pinsky-Rinzel neurons and 668 excitatory and inhibitory synapses on a field programmable gate array (FPGA). The implemented spiking neural network of the CA1 includes the main neuronal populations for theta rhythm generation: excitatory pyramidal cells, PV+ basket cells, and Oriens Lacunosum-Moleculare (OLM) cells, which are inhibitory interneurons. Moreover, the main inputs to the CA1 region, from the entorhinal cortex via the perforant pathway, the CA3 via Schaffer collaterals, and the medial septum via the fimbria-fornix, are also implemented on the FPGA using a bursting leaky integrate-and-fire (LIF) neuron model. The results of the hardware realization show that the proposed CA1 neuromimetic circuit successfully reconstructs the theta oscillations and functionally illustrates the phase relations between the firing responses of the different neuronal populations. The impact of eliminating the medial septum on the firing patterns of the CA1 neuronal populations and on the theta wave's characteristics is also evaluated. This neuromorphic system can be considered a potential platform that opens opportunities for neuroprosthetic applications in future work.
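The LIF model used above for the CA1 inputs (in its bursting variant) builds on the basic leaky integrate-and-fire dynamics. A minimal, non-bursting LIF sketch in Python; the constants are illustrative, not the paper's FPGA parameters:

```python
def lif_spikes(input_current, dt=0.1, tau_m=10.0, v_rest=-65.0,
               v_thresh=-50.0, v_reset=-70.0, r_m=10.0):
    """Leaky integrate-and-fire: integrate dv/dt = (-(v - v_rest) + R*I)/tau_m
    with forward Euler; emit a spike time and reset when threshold is crossed."""
    v, spikes = v_rest, []
    for step, i_ext in enumerate(input_current):
        v += dt * (-(v - v_rest) + r_m * i_ext) / tau_m
        if v >= v_thresh:
            spikes.append(step * dt)  # spike time in ms
            v = v_reset
    return spikes
```

A constant suprathreshold current produces regular spiking, while zero input leaves the membrane at rest with no spikes.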
|
20
|
Detection of COVID-19 from CT scan images: A spiking neural network-based approach. Neural Comput Appl 2021; 33:12591-12604. [PMID: 33879976 PMCID: PMC8050640 DOI: 10.1007/s00521-021-05910-1] [Citation(s) in RCA: 11] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2020] [Accepted: 03/08/2021] [Indexed: 11/25/2022]
Abstract
The outbreak of a global pandemic caused by a coronavirus has created unprecedented circumstances, resulting in a large number of deaths and a risk of community spread throughout the world. Desperate times have called for desperate measures to detect the disease at an early stage via medically proven methods such as chest computed tomography (CT) scans and chest X-rays, in order to prevent the virus from spreading across the community. Developing deep learning models to analyse these kinds of radiological images is a well-known methodology in the domain of computer-based medical image analysis. However, doing the same by mimicking biological models and leveraging newly developed neuromorphic computing chips might be more economical. These chips have been shown to be more powerful and more efficient than conventional central and graphics processing units. Additionally, they facilitate the implementation of spiking neural networks (SNNs) in real-world scenarios. To this end, in this work, we simulate SNNs using various deep learning libraries and apply them to the classification of chest CT scan images into COVID and non-COVID classes. Our approach achieves a very high F1 score of 0.99 for the potential-based model and outperforms many state-of-the-art models. The working code associated with the present work can be found here.
|
21
|
Necessary conditions for STDP-based pattern recognition learning in a memristive spiking neural network. Neural Netw 2020; 134:64-75. [PMID: 33291017 DOI: 10.1016/j.neunet.2020.11.005] [Citation(s) in RCA: 48] [Impact Index Per Article: 12.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2020] [Revised: 09/19/2020] [Accepted: 11/12/2020] [Indexed: 11/28/2022]
Abstract
This work aims to study experimental and theoretical approaches for finding effective local training rules for unsupervised pattern recognition by high-performance memristor-based Spiking Neural Networks (SNNs). First, the possibility of weight change using Spike-Timing-Dependent Plasticity (STDP) is demonstrated with a pair of hardware analog neurons connected through a (CoFeB)x(LiNbO3)1-x nanocomposite memristor. Next, the learning convergence to a solution of a binary clusterization task is analyzed over a wide range of memristive STDP parameters for a single-layer, fully connected feedforward SNN. The memristive STDP behavior supplying convergence in this simple task is shown to also provide it in the handwritten digit recognition domain using a more complex SNN architecture with Winner-Take-All competition between neurons. To investigate the basic conditions necessary for training convergence, an original probabilistic generative model of a rate-based single-layer network with independent or competing neurons is built and thoroughly analyzed. The main result is the statement of a "correlation growth-anticorrelation decay" principle, which prompts a near-optimal policy for configuring model parameters. This principle is in line with requiring binary clusterization convergence, which can be defined as a necessary condition for optimal learning and used as a simple benchmark for tuning the parameters of various neural network realizations with population-rate information coding. Finally, a heuristic algorithm is described to experimentally find the convergence conditions in a memristive SNN, including robustness to device variability. Due to the generality of the proposed approach, it can be applied to a wide range of memristors and neurons of software- or hardware-based rate-coding single-layer SNNs when searching for local rules that ensure unsupervised learning convergence in a pattern recognition task domain.
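Memristive synapses have a bounded conductance range, which is commonly abstracted as a soft-bounded STDP rule. The sketch below illustrates that general idea only; the functional form and constants are generic assumptions, not the measured characteristics of the nanocomposite device above:

```python
import math

def soft_bound_stdp(w, dt_spike, w_min=0.0, w_max=1.0,
                    a_plus=0.1, a_minus=0.1, tau=20.0):
    """Soft-bounded STDP: potentiation scales with the remaining headroom
    (w_max - w) and depression with (w - w_min), so the weight stays inside
    its physical conductance range. dt_spike = t_post - t_pre (ms)."""
    trace = math.exp(-abs(dt_spike) / tau)
    if dt_spike >= 0:   # pre before post: potentiate
        w += a_plus * (w_max - w) * trace
    else:               # post before pre: depress
        w -= a_minus * (w - w_min) * trace
    return w
```

A weight already at w_max receives no further potentiation, mirroring the saturation of a physical memristor.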
|
22
|
Deterministic characteristics of spontaneous activity detected by multi-fractal analysis in a spiking neural network with long-tailed distributions of synaptic weights. Cogn Neurodyn 2020; 14:829-836. [PMID: 33101534 DOI: 10.1007/s11571-020-09605-6] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2019] [Revised: 05/13/2020] [Accepted: 06/02/2020] [Indexed: 10/24/2022] Open
Abstract
Cortical neural networks maintain autonomous electrical activity called spontaneous activity that represents the brain's dynamic internal state even in the absence of sensory stimuli. The spatio-temporal complexity of spontaneous activity is strongly related to perceptual, learning, and cognitive brain functions; multi-fractal analysis can be utilized to evaluate the complexity of spontaneous activity. Recent studies have shown that the deterministic dynamic behavior of spontaneous activity especially reflects the topological neural network characteristics and changes of neural network structures. However, it remains unclear whether multi-fractal analysis, recently widely utilized for neural activity, is effective for detecting the complexity of the deterministic dynamic process. To verify this point, we focused on the log-normal distribution of excitatory postsynaptic potentials (EPSPs) to evaluate the multi-fractality of spontaneous activity in a spiking neural network with a log-normal distribution of EPSPs. We found that the spiking activities exhibited multi-fractal characteristics. Moreover, to investigate the presence of a deterministic process in the spiking activity, we conducted a surrogate data analysis against the time-series of spiking activity. The results showed that the spontaneous spiking activity included the deterministic dynamic behavior. Overall, the combination of multi-fractal analysis and surrogate data analysis can detect deterministic complex neural activity. The multi-fractal analysis of neural activity used in this study could be widely utilized for brain modeling and evaluation methods for signals obtained by neuroimaging modalities.
|
23
|
Cuneate spiking neural network learning to classify naturalistic texture stimuli under varying sensing conditions. Neural Netw 2020; 123:273-287. [PMID: 31887687 DOI: 10.1016/j.neunet.2019.11.020] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/15/2019] [Revised: 10/22/2019] [Accepted: 11/25/2019] [Indexed: 11/18/2022]
Abstract
We implemented a functional neuronal network that was able to learn and discriminate haptic features from biomimetic tactile sensor inputs, using a two-layer spiking neuron model and a homeostatic synaptic learning mechanism. The first-order neuron model was used to emulate biological tactile afferents and the second-order neuron model to emulate biological cuneate neurons. We evaluated 10 naturalistic textures using a passive touch protocol under varying sensing conditions. Tactile sensor data acquired with five textures under five sensing conditions were used in a synaptic learning process to tune the synaptic weights between tactile afferents and cuneate neurons. Using the post-learning synaptic weights, we evaluated the individual and population cuneate neuron responses by decoding across 10 stimuli under varying sensing conditions, which resulted in high decoding performance. We further validated the decoding performance across stimuli, irrespective of sensing velocity, using a set of 25 cuneate neuron responses; this yielded a median decoding performance of 96% across the set of cuneate neurons. Being able to learn and perform generalized discrimination across tactile stimuli makes this functional spiking tactile system effective and suitable for further robotic applications.
|
24
|
Supervised learning in spiking neural networks: A review of algorithms and evaluations. Neural Netw 2020; 125:258-280. [PMID: 32146356 DOI: 10.1016/j.neunet.2020.02.011] [Citation(s) in RCA: 43] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2019] [Revised: 12/15/2019] [Accepted: 02/20/2020] [Indexed: 01/08/2023]
Abstract
As a new brain-inspired computational model of the artificial neural network, a spiking neural network encodes and processes neural information through precisely timed spike trains. Spiking neural networks are composed of biologically plausible spiking neurons, which have become suitable tools for processing complex temporal or spatiotemporal information. However, because of their intricately discontinuous and implicit nonlinear mechanisms, the formulation of efficient supervised learning algorithms for spiking neural networks is difficult, and has become an important problem in this research field. This article presents a comprehensive review of supervised learning algorithms for spiking neural networks and evaluates them qualitatively and quantitatively. First, a comparison between spiking neural networks and traditional artificial neural networks is provided. The general framework and some related theories of supervised learning for spiking neural networks are then introduced. Furthermore, the state-of-the-art supervised learning algorithms in recent years are reviewed from the perspectives of applicability to spiking neural network architecture and the inherent mechanisms of supervised learning algorithms. A performance comparison of spike train learning of some representative algorithms is also made. In addition, we provide five qualitative performance evaluation criteria for supervised learning algorithms for spiking neural networks and further present a new taxonomy for supervised learning algorithms depending on these five performance evaluation criteria. Finally, some future research directions in this research field are outlined.
|
25
|
Effects of synaptic integration on the dynamics and computational performance of spiking neural network. Cogn Neurodyn 2020; 14:347-357. [PMID: 32399076 DOI: 10.1007/s11571-020-09572-y] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/03/2019] [Revised: 01/07/2020] [Accepted: 02/11/2020] [Indexed: 12/22/2022] Open
Abstract
Neurons in the brain receive thousands of synaptic inputs from other neurons. This afferent information is processed through synaptic integration, an important information processing mechanism in biological neural networks. Synaptic currents integrated from the spike trains of presynaptic neurons have complex nonlinear dynamics, which endow neurons with significant computational abilities. However, in many computational studies of neural networks, the external input current is simply taken as a static direct current. In this paper, the influences of synaptic and noise external currents on the dynamics of a spiking neural network and its computational capability are investigated in detail. Our results show that, due to nonlinear synaptic integration, both fast and slow excitatory synaptic currents have much more complex and oscillatory fluctuations than a noise current of the same average intensity. Thus a network driven by a synaptic external current exhibits remarkably more complex dynamics than one driven by a noise external current. Interestingly, the enhancement of network activity is beneficial for information transmission, which is further supported by two computational tasks conducted on a liquid state machine (LSM) network. An LSM with a synaptic external current displays considerably better performance in both nonlinear fitting and pattern classification than one with a noise external current. Synaptic integration can significantly enhance the entropy of activity patterns and the computational performance of the LSM. Our results demonstrate that the complex dynamics of nonlinear synaptic integration play a critical role in the computational abilities of neural networks and should be more broadly considered in modelling studies of spiking neural networks.
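The contrast drawn above between a flat DC input and a fluctuating synaptic current can be sketched with a simple exponential synapse: each presynaptic spike injects a jump that then decays, so the integrated current tracks the spike train. Time constants and the weight here are illustrative, not the paper's:

```python
import math

def exp_synapse_current(spike_times, t_end, dt=0.1, tau_s=5.0, w=1.0):
    """Exponential synapse: every presynaptic spike adds w to the synaptic
    current, which decays with time constant tau_s (ms). Returns the
    discretized current trace over [0, t_end)."""
    decay = math.exp(-dt / tau_s)
    i_syn, trace = 0.0, []
    spikes, idx = sorted(spike_times), 0
    for step in range(int(t_end / dt)):
        t = step * dt
        i_syn *= decay                       # passive decay each step
        while idx < len(spikes) and spikes[idx] <= t:
            i_syn += w                       # spike-driven jump
            idx += 1
        trace.append(i_syn)
    return trace
```

A single spike at t = 1 ms produces a zero current before the spike, a jump at the spike, and an exponential relaxation afterwards, unlike a static DC input.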
|
26
|
Realistic spiking neural network: Non-synaptic mechanisms improve convergence in cell assembly. Neural Netw 2019; 122:420-433. [PMID: 31841876 DOI: 10.1016/j.neunet.2019.09.038] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2019] [Revised: 09/17/2019] [Accepted: 09/23/2019] [Indexed: 01/26/2023]
Abstract
Learning in neural networks inspired by brain tissue has been studied for machine learning applications. However, existing works have primarily focused on the concept of synaptic weight modulation, and other aspects of neuronal interaction, such as non-synaptic mechanisms, have been neglected. Non-synaptic interaction mechanisms have been shown to play significant roles in the brain, and four classes of these mechanisms can be highlighted: (i) electrotonic coupling; (ii) ephaptic interactions; (iii) electric field effects; and (iv) extracellular ionic fluctuations. In this work, we propose simple learning rules, inspired by recent findings in machine learning, adapted to a realistic spiking neural network. By including extracellular ionic fluctuations, represented by extracellular electrodiffusion, in the network, we show that non-synaptic interaction mechanisms improve cell assembly convergence. Additionally, we observed a variety of electrophysiological patterns of neuronal activity, particularly bursting and synchronism, when the convergence is improved.
|
27
|
Network remodeling induced by transcranial brain stimulation: A computational model of tDCS-triggered cell assembly formation. Netw Neurosci 2019; 3:924-943. [PMID: 31637332 PMCID: PMC6777963 DOI: 10.1162/netn_a_00097] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2018] [Accepted: 05/14/2019] [Indexed: 11/22/2022] Open
Abstract
Transcranial direct current stimulation (tDCS) is a variant of noninvasive neuromodulation, which promises treatment for brain diseases like major depressive disorder. In experiments, long-lasting aftereffects were observed, suggesting that persistent plastic changes are induced. The mechanism underlying the emergence of lasting aftereffects, however, remains elusive. Here we propose a model, which assumes that tDCS triggers a homeostatic response of the network involving growth and decay of synapses. The cortical tissue exposed to tDCS is conceived as a recurrent network of excitatory and inhibitory neurons, with synapses subject to homeostatically regulated structural plasticity. We systematically tested various aspects of stimulation, including electrode size and montage, as well as stimulation intensity and duration. Our results suggest that transcranial stimulation perturbs the homeostatic equilibrium and leads to a pronounced growth response of the network. The stimulated population eventually eliminates excitatory synapses with the unstimulated population, and new synapses among stimulated neurons are grown to form a cell assembly. Strong focal stimulation tends to enhance the connectivity within new cell assemblies, and repetitive stimulation with well-chosen duty cycles can increase the impact of stimulation even further. One long-term goal of our work is to help in optimizing the use of tDCS in clinical applications. Noninvasive brain stimulation techniques like tDCS have the potential to directly interfere with neural activity, but may also trigger activity-dependent plasticity. We propose a model to study the mechanism of tDCS and persistent aftereffects that may be induced as a consequence of homeostatic structural plasticity. 
Based on the idea that tDCS perturbs the ongoing activity of neurons, our model predicts that the stimulation also triggers a rearrangement of synapses among stimulated and unstimulated neurons, eventually leading to network remodeling and cell assembly formation. Focal and strong stimulation leads to stronger cell assemblies, and so does repetitive stimulation with optimized stimulation protocols. This is the first original work studying possible long-lasting aftereffects of transcranial stimulation at the mesoscopic neuronal network level using a computational model.
|
28
|
Locally connected spiking neural networks for unsupervised feature learning. Neural Netw 2019; 119:332-340. [PMID: 31499357 DOI: 10.1016/j.neunet.2019.08.016] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2019] [Revised: 08/08/2019] [Accepted: 08/14/2019] [Indexed: 11/22/2022]
Abstract
In recent years, spiking neural networks (SNNs) have demonstrated great success in completing various machine learning tasks. We introduce a method for learning image features with locally connected layers in SNNs using a spike-timing-dependent plasticity (STDP) rule. In our approach, sub-networks compete via inhibitory interactions to learn features from different locations of the input space. These locally connected SNNs (LC-SNNs) manifest key topological features of the spatial interaction of biological neurons. We explore a biologically inspired n-gram classification approach allowing parallel processing over various patches of the image space. We report the classification accuracy of simple two-layer LC-SNNs on two image datasets, matching state-of-the-art performance on one and providing the first reported results to date on the other. LC-SNNs have the advantage of fast convergence to a dataset representation, and they require fewer learnable parameters than other SNN approaches with unsupervised learning. Robustness tests demonstrate that LC-SNNs exhibit graceful degradation of performance despite the random deletion of large numbers of synapses and neurons. Our results were obtained using the BindsNET library, which allows efficient machine learning implementations of spiking neural networks.
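A locally connected layer assigns each competing sub-network its own patch of the input space. A framework-free sketch of that patch partitioning follows; it illustrates the general idea only and is not the BindsNET implementation:

```python
def image_patches(image, patch, stride):
    """Split a 2-D image (list of rows) into strided local patches, as in a
    locally connected layer where each sub-network sees one region of the
    input. Returns patches in row-major order."""
    h, w = len(image), len(image[0])
    out = []
    for r in range(0, h - patch + 1, stride):
        for c in range(0, w - patch + 1, stride):
            out.append([row[c:c + patch] for row in image[r:r + patch]])
    return out
```

With patch size equal to the stride, the patches tile the image without overlap; a smaller stride yields overlapping receptive fields.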
|
29
|
Indirect and direct training of spiking neural networks for end-to-end control of a lane-keeping vehicle. Neural Netw 2019; 121:21-36. [PMID: 31526952 DOI: 10.1016/j.neunet.2019.05.019] [Citation(s) in RCA: 21] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/14/2018] [Revised: 04/08/2019] [Accepted: 05/20/2019] [Indexed: 11/17/2022]
Abstract
Building spiking neural networks (SNNs) based on biological synaptic plasticity holds promising potential for fast and energy-efficient computing, which is beneficial to mobile robotic applications. However, implementations of SNNs in robotics are limited due to the lack of practical training methods. In this paper, we therefore introduce both indirect and direct end-to-end training methods of SNNs for a lane-keeping vehicle. First, we adopt a policy learned using the Deep Q-Learning (DQN) algorithm and subsequently transfer it to an SNN using supervised learning. Second, we adopt reward-modulated spike-timing-dependent plasticity (R-STDP) for training SNNs directly, since it combines the advantages of both reinforcement learning and the well-known spike-timing-dependent plasticity (STDP). We examine the proposed approaches in three scenarios in which a robot is controlled to keep within lane markings using an event-based neuromorphic vision sensor. We further demonstrate the advantages of the R-STDP approach in terms of lateral localization accuracy and training time steps by comparing it with the other three algorithms presented in this paper.
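R-STDP gates the plasticity update with a reward signal via an eligibility trace: STDP events accumulate in the trace, and only a (possibly delayed) reward converts the trace into actual weight change. A minimal sketch, with names and constants as illustrative assumptions rather than the paper's code:

```python
import math

def r_stdp_update(weights, eligibility, reward, lr=0.01, tau_e=25.0, dt=1.0):
    """One R-STDP step: decay the eligibility traces, then apply the
    reward-gated weight change w += lr * reward * eligibility.
    Returns (new_weights, new_eligibility)."""
    decay = math.exp(-dt / tau_e)
    new_elig = [e * decay for e in eligibility]
    new_w = [w + lr * reward * e for w, e in zip(weights, new_elig)]
    return new_w, new_elig
```

With zero reward the weights stay put no matter what STDP events occurred, which is the defining difference from plain STDP.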
|
30
|
A normative approach to neuromotor control. BIOLOGICAL CYBERNETICS 2019; 113:83-92. [PMID: 30178151 DOI: 10.1007/s00422-018-0777-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/01/2018] [Accepted: 08/20/2018] [Indexed: 06/08/2023]
Abstract
While we can readily observe and model the dynamics of our limbs, analyzing the neurons that drive movement is not nearly as straightforward. As a result, their role in motor behavior (e.g., forward models, state estimators, controllers) remains elusive. Computational explanations of electrophysiological data often rely on firing rate models or deterministic spiking models, yet neither can accurately describe the interactions of neurons that issue spikes probabilistically. Here we take a normative approach by designing a probabilistic spiking network to implement LQR control for a limb model. We find typical results: cosine tuning curves, population vectors that correlate with reaching directions, low-dimensional oscillatory activity for reaches that have no oscillatory movement, and changes in neurons' tuning curves after force-field adaptation. Importantly, while the model is consistent with these empirically derived correlations, we can also analyze it in terms of the known causal mechanism: an LQR controller and the probability distributions of the neurons that encode it. Redesigning the system under a different set of assumptions (e.g., a different controller or network architecture) would yield a new set of testable predictions. We suggest this normative approach can be a framework for examining the motor system, providing testable links between observed neural activity and motor behavior.
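The LQR controller assumed above has a standard closed-form solution. As a hedged illustration of that backbone (the textbook algorithm for a scalar system, not the paper's spiking-network implementation), the finite-horizon gains come from a backward Riccati recursion:

```python
def lqr_gains(a, b, q, r, horizon):
    """Finite-horizon discrete LQR for a scalar system x' = a*x + b*u with
    cost sum(q*x^2 + r*u^2), solved by backward Riccati recursion.
    Returns time-ordered feedback gains k_t for the law u_t = -k_t * x_t."""
    p, gains = q, []          # terminal cost-to-go P_T = q
    for _ in range(horizon):
        k = (b * p * a) / (r + b * p * b)
        p = q + a * p * a - a * p * b * k   # Riccati update
        gains.append(k)
    gains.reverse()           # recursion runs backward in time
    return gains
```

For a = b = q = r = 1 the gains converge to the golden-ratio value (sqrt(5) - 1) / 2 ≈ 0.618, a handy sanity check on the recursion.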
|
31
|
Deep learning in spiking neural networks. Neural Netw 2018; 111:47-63. [PMID: 30682710 DOI: 10.1016/j.neunet.2018.12.002] [Citation(s) in RCA: 205] [Impact Index Per Article: 34.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2018] [Revised: 12/02/2018] [Accepted: 12/03/2018] [Indexed: 12/14/2022]
Abstract
In recent years, deep learning has revolutionized the field of machine learning, for computer vision in particular. In this approach, a deep (multilayer) artificial neural network (ANN) is trained, most often in a supervised manner using backpropagation. Vast amounts of labeled training examples are required, but the resulting classification accuracy is truly impressive, sometimes outperforming humans. Neurons in an ANN are characterized by a single, static, continuous-valued activation. Yet biological neurons use discrete spikes to compute and transmit information, and the spike times, in addition to the spike rates, matter. Spiking neural networks (SNNs) are thus more biologically realistic than ANNs, and are arguably the only viable option if one wants to understand how the brain computes at the neuronal description level. The spikes of biological neurons are sparse in time and space, and event-driven. Combined with bio-plausible local learning rules, this makes it easier to build low-power, neuromorphic hardware for SNNs. However, training deep SNNs remains a challenge. Spiking neurons' transfer function is usually non-differentiable, which prevents using backpropagation. Here we review recent supervised and unsupervised methods to train deep SNNs, and compare them in terms of accuracy and computational cost. The emerging picture is that SNNs still lag behind ANNs in terms of accuracy, but the gap is decreasing, and can even vanish on some tasks, while SNNs typically require many fewer operations and are the better candidates to process spatio-temporal data.
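The non-differentiability problem described above is commonly worked around with surrogate gradients: the Heaviside step is kept in the forward pass, but its ill-defined derivative is replaced by a smooth bump in the backward pass. A minimal sketch; the fast-sigmoid surrogate shown is one common choice among several:

```python
def spike_forward(v, v_th=1.0):
    """Forward pass: Heaviside spike nonlinearity on the membrane potential."""
    return 1.0 if v >= v_th else 0.0

def spike_surrogate_grad(v, v_th=1.0, beta=1.0):
    """Backward pass: derivative of a fast sigmoid, peaked at the threshold,
    used in place of the Heaviside step's derivative so that
    backpropagation through spiking layers is possible."""
    return 1.0 / (beta * abs(v - v_th) + 1.0) ** 2
```

The surrogate is largest exactly at threshold and falls off on both sides, so gradient signal concentrates on neurons that were close to spiking.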
|
32
|
The impact of encoding-decoding schemes and weight normalization in spiking neural networks. Neural Netw 2018; 108:365-378. [PMID: 30261415 DOI: 10.1016/j.neunet.2018.08.024] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2018] [Revised: 07/07/2018] [Accepted: 08/28/2018] [Indexed: 11/23/2022]
Abstract
Spike-timing-dependent plasticity (STDP) is a learning mechanism that can capture causal relationships between events and is considered a foundational element of memory and learning in biological neural networks. Previous research efforts endeavored to understand the functionality of STDP's learning window in spiking neural networks (SNNs). In this study, we investigate the interaction among different encoding/decoding schemes, STDP learning windows, and normalization rules for an SNN classifier, trained and tested on the MNIST, NIST, and ETH80-Contour datasets. The results show that, first, when no normalization rules are applied, classical STDP typically achieves the best performance. Second, first-spike decoding classifiers require much less decoding time than spike-count decoding classifiers. Third, when no normalization rule is applied, classifier accuracy decreases as the encoding duration increases from 10 ms to 34 ms under the count decoding scheme. Finally, normalization of output weights is shown to improve the performance of a first-spike decoding classifier, which reveals the importance of weight normalization for SNNs.
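First-spike (latency) coding and decoding, two of the schemes compared above, can be sketched in a few lines; the linear intensity-to-time mapping and the t_max value are illustrative assumptions, not the paper's exact scheme:

```python
def latency_encode(intensities, t_max=10.0):
    """Latency coding: stronger inputs fire earlier. An intensity in (0, 1]
    maps to spike time t = t_max * (1 - intensity); zero intensity means the
    neuron never fires (None)."""
    return [t_max * (1.0 - x) if x > 0 else None for x in intensities]

def first_spike_decode(spike_times):
    """First-spike decoding: return the index of the output neuron that
    fires earliest, or None if no neuron fired."""
    candidates = [(t, i) for i, t in enumerate(spike_times) if t is not None]
    return min(candidates)[1] if candidates else None
```

Because the decision is made at the earliest output spike, this decoder can stop well before the full encoding window elapses, which is why first-spike decoding needs much less decoding time than spike counting.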
|
33
|
Abstract
Graphical processing units (GPUs) can significantly accelerate spiking neural network (SNN) simulations by exploiting parallelism for independent computations. Both the changes in membrane potential at each time-step, and checking for spiking threshold crossings for each neuron, can be calculated independently. However, because synaptic transmission requires communication between many different neurons, efficient parallel processing may be hindered, either by data transfers between GPU and CPU at each time-step or, alternatively, by running many parallel computations for neurons that do not elicit any spikes. This, in turn, would lower the effective throughput of the simulations. Traditionally, a central processing unit (CPU, host) administers the execution of parallel processes on the GPU (device), such as memory initialization on the device, data transfer between host and device, and starting and synchronizing parallel processes. The parallel computing platform CUDA 5.0 introduced dynamic parallelism, which allows the initiation of new parallel applications within an ongoing parallel kernel. Here, we apply dynamic parallelism for synaptic updating in SNN simulations on a GPU. Our algorithm eliminates the need to start many parallel applications at each time-step, and the associated lags of data transfer between CPU and GPU memories. We report a significant speed-up of SNN simulations, when compared to former accelerated parallelization strategies for SNNs on a GPU.
|
34
|
Bio-inspired spiking neural network for nonlinear systems control. Neural Netw 2018; 104:15-25. [PMID: 29702424 DOI: 10.1016/j.neunet.2018.04.002] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/09/2017] [Revised: 02/08/2018] [Accepted: 04/03/2018] [Indexed: 11/21/2022]
Abstract
Spiking neural networks (SNNs) are the third generation of artificial neural networks and the closest approximation to biological neural networks. SNNs use temporal spike trains to encode inputs and outputs, allowing faster and more complex computation. As demonstrated by biological organisms, they are a potentially good approach to designing controllers for highly nonlinear dynamic systems in which controllers developed by conventional techniques perform unsatisfactorily or are difficult to implement. SNN-based controllers exploit their capacity for online learning and self-adaptation to evolve when transferred from simulation to the real world. The inherently binary and temporal information coding of SNNs makes them easier to implement in hardware than analog neurons, and, as in biological neural networks, they often require fewer neurons than other controllers based on artificial neural networks. In this work, these neuronal systems are imitated to control nonlinear dynamic systems. For this purpose, a control structure based on spiking neural networks has been designed, with particular attention paid to optimizing the structure and size of the network. The proposed structure is able to control dynamic systems with a reduced number of neurons and connections. Controller training is carried out through a supervised learning process using evolutionary algorithms. The efficiency of the proposed network is verified on two examples of dynamic-system control. Simulations show that the proposed SNN-based control exhibits superior performance compared to other approaches based on artificial neural networks and SNNs.
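Evolutionary training of controller weights, as used above, can be illustrated with a minimal (1+1) evolution strategy. This is a generic sketch, not the paper's algorithm: the fitness function, mutation scale `sigma`, and the toy `target` vector are all hypothetical stand-ins for a real controller-evaluation loop:

```python
import random

def evolve(fitness, n_weights=4, generations=50, sigma=0.1, seed=0):
    """Minimal (1+1) evolution strategy: mutate the parent weight vector
    with Gaussian noise and keep the child only if its fitness improves
    (lower fitness = better)."""
    rng = random.Random(seed)
    parent = [rng.uniform(-1, 1) for _ in range(n_weights)]
    best = fitness(parent)
    for _ in range(generations):
        child = [w + rng.gauss(0, sigma) for w in parent]
        f = fitness(child)
        if f < best:              # elitist selection: never accept a worse child
            parent, best = child, f
    return parent, best

# Toy fitness: squared distance of the weights from a known target vector.
# In a real controller, this would be a simulation of the closed-loop system.
target = [0.5, -0.25, 0.0, 0.75]
fit = lambda w: sum((a - b) ** 2 for a, b in zip(w, target))
w, f = evolve(fit)
```

Because selection is elitist, the returned fitness is guaranteed to be no worse than the initial random parent's; real SNN-controller training replaces the toy fitness with a closed-loop simulation score.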
35
|
The effect of an exogenous magnetic field on neural coding in deep spiking neural networks. J Integr Neurosci 2018. [PMID: 29526851 DOI: 10.31083/jin-170046] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022] Open
Abstract
A ten-layer feedforward network was constructed in the presence of an exogenous alternating magnetic field. The results indicate that, for rate coding, the firing rate is increased by the exogenous alternating magnetic field and increases further as the field amplitude grows. For temporal coding, the interspike intervals of the spike sequence are shortened in the presence of the alternating magnetic field, and their distribution tends toward uniformity.
36
|
Robust spike-train learning in spike-event based weight update. Neural Netw 2017; 96:33-46. [PMID: 28957730 DOI: 10.1016/j.neunet.2017.08.010] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/22/2017] [Revised: 08/17/2017] [Accepted: 08/22/2017] [Indexed: 10/18/2022]
Abstract
Existing supervised learning algorithms for spiking neural networks either learn a spike-train pattern for a single neuron receiving input spike trains from multiple synapses, or learn to output the first spike time in a feedforward network setting. In this paper, we build upon a spike-event-based weight update strategy to learn continuous spike trains in a spiking neural network with a hidden layer, using a dead-zone on-off adaptive learning rate rule that ensures convergence of the learning process (in the sense of weight convergence) and robustness to external disturbances. On several benchmark problems, we compare this new method with other relevant spike-train learning algorithms. The results show that both the speed of learning and the rate of successful learning are greatly improved.
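The dead-zone on-off idea can be sketched very simply: updates are switched off whenever the spike-train error is already inside a tolerance band, which is one standard way to gain robustness to bounded disturbances. This is a schematic simplification, assuming a scalar error signal and a plain gradient step; the dead-zone width `delta` and rate `eta` are hypothetical parameters, not the paper's:

```python
def dead_zone_update(w, grad, err, delta=0.05, eta=0.1):
    """One weight update under a dead-zone on-off rule (illustrative):
    adjust the weight only when the spike-train error magnitude exceeds
    the dead-zone width delta; small residual errors (e.g. noise) leave
    the weight untouched."""
    if abs(err) <= delta:      # inside the dead zone: learning switched off
        return w
    return w - eta * grad      # outside the dead zone: ordinary gradient step
```

Freezing updates inside the dead zone prevents small persistent disturbances from dragging the weights away from a converged solution.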
37
|
A spiking neural network model of the midbrain superior colliculus that generates saccadic motor commands. BIOLOGICAL CYBERNETICS 2017; 111:249-268. [PMID: 28528360 PMCID: PMC5506246 DOI: 10.1007/s00422-017-0719-9] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/19/2016] [Accepted: 05/08/2017] [Indexed: 06/07/2023]
Abstract
Single-unit recordings suggest that the midbrain superior colliculus (SC) acts as an optimal controller for saccadic gaze shifts. The SC is proposed to be the site within the visuomotor system where the nonlinear spatial-to-temporal transformation is carried out: the population encodes the intended saccade vector by its location in the motor map (spatial), and its trajectory and velocity by the distribution of firing rates (temporal). The neurons' burst profiles vary systematically with their anatomical positions and intended saccade vectors, to account for the nonlinear main-sequence kinematics of saccades. Yet, the underlying collicular mechanisms that could result in these firing patterns are inaccessible to current neurobiological techniques. Here, we propose a simple spiking neural network model that reproduces the spike trains of saccade-related cells in the intermediate and deep SC layers during saccades. The model assumes that SC neurons have distinct biophysical properties for spike generation that depend on their anatomical position in combination with a center-surround lateral connectivity. Both factors are needed to account for the observed firing patterns. Our model offers a basis for neuronal algorithms for spatiotemporal transformations and bio-inspired optimal controllers.
38
|
Evolving Spiking Neural Networks for Recognition of Aged Voices. J Voice 2017; 31:24-33. [PMID: 27049449 DOI: 10.1016/j.jvoice.2016.02.019] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2015] [Accepted: 02/22/2016] [Indexed: 10/22/2022]
Abstract
The aging of the voice, known as presbyphonia, is a natural process that can greatly change an individual's vocal quality. This is a relevant problem for people who use their voices professionally, and its early identification can help determine a suitable treatment to slow its progress or even eliminate the problem. This work develops a new model for the identification of aged voices (independently of chronological age), using as input attributes parameters extracted from the voice and glottal signals. The proposed model, named Quantum binary-real evolving Spiking Neural Network (QbrSNN), is based on spiking neural networks (SNNs) with an unsupervised training algorithm and a quantum-inspired evolutionary algorithm that automatically determines the most relevant attributes and the optimal parameters configuring the SNN. The QbrSNN model was evaluated on a database of 120 recordings containing samples from three groups of speakers. The results indicate that the proposed model provides better accuracy than other approaches, with fewer input attributes.
39
|
A new bio-inspired stimulator to suppress hyper-synchronized neural firing in a cortical network. J Theor Biol 2016; 410:107-118. [PMID: 27620666 DOI: 10.1016/j.jtbi.2016.09.007] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/25/2016] [Revised: 08/03/2016] [Accepted: 09/08/2016] [Indexed: 12/20/2022]
Abstract
Hyper-synchronous neural oscillations are characteristic of several neurological diseases, such as epilepsy. Glial cells, and particularly astrocytes, can influence neural synchronization. Building on recent research, we therefore propose a new bio-inspired stimulator that is essentially a dynamical implementation of a biophysical astrocyte model. The performance of the new stimulator is investigated on a large-scale cortical network; both excitatory and inhibitory synapses are included in the simulated spiking neural network. The simulation results show that the new stimulator performs well and is able to reduce recurrent abnormal excitability, which in turn prevents hyper-synchronous neural firing in the spiking neural network. The proposed stimulator thus has a demand-controlled characteristic and is a good candidate for deep brain stimulation (DBS) aimed at suppressing neural hyper-synchronization.
40
|
Real-time radionuclide identification in γ-emitter mixtures based on spiking neural network. Appl Radiat Isot 2015; 109:405-409. [PMID: 26706284 DOI: 10.1016/j.apradiso.2015.12.029] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/20/2015] [Accepted: 12/04/2015] [Indexed: 11/29/2022]
Abstract
Portal radiation monitors dedicated to preventing illegal traffic of nuclear materials at international borders need to identify the radionuclides of a potential radiological threat as quickly as possible. Spectrometry techniques for identifying the radionuclides contributing to γ-emitter mixtures are usually based on off-line spectrum analysis. As an alternative to these usual methods, real-time processing based on an artificial neural network and Bayes' rule is proposed for fast radionuclide identification. This real-time approach was validated using γ-emitter spectra ((241)Am, (133)Ba, (207)Bi, (60)Co, (137)Cs) obtained with a high-efficiency well-type NaI(Tl) detector. Initial tests showed that the proposed algorithm enables fast identification of each γ-emitting radionuclide using the information carried by the whole spectrum. Based on an iterative process, the on-line analysis needs only low-statistics spectra, without energy calibration, to identify the nature of a radiological threat.
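The iterative Bayesian part of such a pipeline can be sketched generically: each detected γ event multiplies the current posterior over candidate radionuclides by a per-candidate likelihood and renormalizes. The candidate set and likelihood values below are toy assumptions, not the paper's calibrated spectra:

```python
def bayes_update(prior, likelihoods):
    """One Bayes step: weight each candidate radionuclide's prior by the
    likelihood of the latest gamma event under that candidate, then
    renormalize so the posterior sums to 1."""
    post = {n: prior[n] * likelihoods[n] for n in prior}
    z = sum(post.values())
    return {n: p / z for n, p in post.items()}

# Toy example: two candidates, and three events whose energies are
# (hypothetically) four times likelier under Cs-137 than under Co-60.
prior = {"Cs-137": 0.5, "Co-60": 0.5}
for lk in [{"Cs-137": 0.8, "Co-60": 0.2}] * 3:
    prior = bayes_update(prior, lk)
```

Because the evidence accumulates multiplicatively, even a handful of low-statistics events can drive the posterior sharply toward one candidate, which is what makes on-line identification fast.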
41
|
Towards biological plausibility of electronic noses: A spiking neural network based approach for tea odour classification. Neural Netw 2015; 71:142-9. [PMID: 26356597 DOI: 10.1016/j.neunet.2015.07.014] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2014] [Revised: 05/18/2015] [Accepted: 07/23/2015] [Indexed: 11/24/2022]
Abstract
The paper presents a novel encoding scheme for neuronal code generation for odour recognition using an electronic nose (EN). The scheme is based on channel encoding using multiple Gaussian receptive fields superimposed over the temporal EN responses. The encoded data are then fed to a spiking neural network (SNN) for pattern classification. Two forms of SNN, the back-propagation-based SpikeProp and a dynamic evolving SNN, are used to learn the encoded responses. The effects of information encoding on the performance of the SNNs are investigated, and statistical tests are performed to determine the contributions of the SNN and the encoding scheme to overall odour discrimination. The approach is applied to the classification of orthodox black tea (Kangra, Himachal Pradesh region), demonstrating a biomimetic approach to EN data analysis.
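Gaussian receptive field channel encoding, as used above, is a standard population-coding scheme and can be sketched compactly. This is a generic sketch: the number of fields, the input range, and the latency mapping (`t_max`) are assumed defaults, not the paper's settings:

```python
import math

def grf_encode(x, n_fields=6, lo=0.0, hi=1.0, t_max=10.0):
    """Encode a scalar sensor value with overlapping Gaussian receptive
    fields: each field's response (0..1) is mapped to a first-spike
    latency, so a stronger response yields an earlier spike."""
    centers = [lo + (hi - lo) * i / (n_fields - 1) for i in range(n_fields)]
    sigma = (hi - lo) / (n_fields - 1)      # fields overlap their neighbours
    times = []
    for c in centers:
        r = math.exp(-((x - c) ** 2) / (2 * sigma ** 2))  # field response
        times.append((1.0 - r) * t_max)                   # response -> latency
    return times
```

For an input at the low end of the range, the first field fires immediately and latencies grow with each field's distance from the input, which is exactly the spatio-temporal code SpikeProp-style networks consume.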
42
|
A generalized analog implementation of piecewise linear neuron models using CCII building blocks. Neural Netw 2013; 51:26-38. [PMID: 24365534 DOI: 10.1016/j.neunet.2013.12.004] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2012] [Revised: 11/02/2013] [Accepted: 12/04/2013] [Indexed: 11/28/2022]
Abstract
This paper presents a set of reconfigurable analog implementations of piecewise-linear spiking neuron models using second-generation current conveyor (CCII) building blocks. With the same topology and circuit elements, and without W/L modification (which is impossible after circuit fabrication), these circuits can reproduce different behaviors of biological neurons, both for a single neuron and for a network of neurons, simply by tuning reference current and voltage sources. The models are evaluated in terms of analog implementation feasibility and cost, targeting large-scale hardware implementations. The results show that performance, area, and accuracy can be traded off against one another to obtain the best compromise for a given application. Simulation results are presented for different neuron behaviors in a 350 nm CMOS technology.
43
|
Event management for large scale event-driven digital hardware spiking neural networks. Neural Netw 2013; 45:83-93. [PMID: 23522624 DOI: 10.1016/j.neunet.2013.02.005] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2012] [Revised: 11/10/2012] [Accepted: 02/22/2013] [Indexed: 11/17/2022]
Abstract
The interest in brain-like computation has led to the design of a plethora of innovative neuromorphic systems. Individually, spiking neural networks (SNNs), event-driven simulation, and digital hardware neuromorphic systems each receive a lot of attention; yet despite the popularity of event-driven SNNs in software, very few digital hardware architectures exist, because existing hardware solutions for event management scale badly with the number of events. This paper introduces the structured heap queue, a pipelined digital hardware data structure, and demonstrates its suitability for event management. The structured heap queue scales gracefully with the number of events, allowing the efficient implementation of large-scale digital hardware event-driven SNNs: the scaling is linear in memory, logarithmic in logic resources, and constant in processing time. Its use is demonstrated on a field-programmable gate array (FPGA) with an image segmentation experiment and an SNN of 65,536 neurons and 513,184 synapses. Events are processed at a rate of one every 7 clock cycles, and a 406×158-pixel image is segmented in 200 ms.
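The role a heap queue plays in event-driven SNN simulation can be shown with a small software analogue: spike events are ordered by delivery time and always popped earliest-first. This is only a sketch of the scheduling idea using Python's binary heap, not the paper's pipelined hardware structure; the neuron labels are placeholders:

```python
import heapq

class EventQueue:
    """Minimal event-driven scheduler: spike events are kept in a binary
    heap keyed by delivery time, a software analogue of the structured
    heap queue used for hardware event management."""
    def __init__(self):
        self._heap = []

    def push(self, t, neuron):
        heapq.heappush(self._heap, (t, neuron))   # O(log n) insert

    def pop(self):
        return heapq.heappop(self._heap)          # earliest event first

    def __len__(self):
        return len(self._heap)

# Events arrive out of order but are delivered in time order.
q = EventQueue()
for t, n in [(3.0, "n2"), (1.0, "n0"), (2.0, "n1")]:
    q.push(t, n)
order = [q.pop()[1] for _ in range(len(q))]
```

The hardware version pipelines these O(log n) heap operations so that one event can be accepted or emitted per fixed number of clock cycles regardless of queue size.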
44
|
A case for spiking neural network simulation based on configurable multiple-FPGA systems. Cogn Neurodyn 2011; 5:301-9. [PMID: 22942919 DOI: 10.1007/s11571-011-9170-0] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2011] [Accepted: 08/12/2011] [Indexed: 11/30/2022] Open
Abstract
Recent neuropsychological research has begun to reveal that neurons encode information in the timing of spikes. Spiking neural network simulations are a flexible and powerful method for investigating the behaviour of neuronal systems, but software simulation cannot rapidly generate output spikes for large-scale networks. An alternative approach, hardware implementation, makes it possible to generate independent spikes precisely and to output spike waves simultaneously in real time, provided the spiking neural network can take full advantage of the hardware's inherent parallelism. In this work we introduce a configurable FPGA-oriented hardware platform for spiking neural network simulation. We aim to combine the speed of dedicated hardware with the programmability of software, so that neuroscientists can assemble sophisticated computational experiments with their own models. A feed-forward hierarchical network is developed as a case study to describe the operation of biological neural systems (such as orientation selectivity in visual cortex) and computational models of such systems. This model demonstrates how a feed-forward neural network constructs the circuitry required for orientation selectivity, and provides a platform for reaching a deeper understanding of the primate visual system. In the future, larger-scale models based on this framework can be used to replicate the actual architecture of visual cortex, leading to more detailed predictions and insights into visual perception.