1
Wang J, Gao R, Zheng H, Zhu H, Shi CJR. SSGCNet: A Sparse Spectra Graph Convolutional Network for Epileptic EEG Signal Classification. IEEE Transactions on Neural Networks and Learning Systems 2024; 35:12157-12171. PMID: 37030729. DOI: 10.1109/tnnls.2023.3252569.
Abstract
In this article, we propose a sparse spectra graph convolutional network (SSGCNet) for epileptic electroencephalogram (EEG) signal classification. The goal is to develop a lightweight deep learning model while retaining a high level of classification accuracy. To do so, we propose a weighted neighborhood field graph (WNFG) to represent EEG signals. The WNFG reduces redundant edges between graph nodes and has lower graph generation time and memory usage than the baseline solution. The sequential graph convolutional network is further developed from the WNFG by combining sparse weight pruning with the alternating direction method of multipliers (ADMM). Compared with the state-of-the-art method, our method achieves the same classification accuracy on the public Bonn dataset and on a clinical spike-and-slow-wave (SSW) dataset while using a connection rate ten times smaller.
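The sparsification the abstract describes (weight pruning combined with ADMM) can be sketched generically: alternate a gradient step on the training loss augmented with a quadratic coupling term, a Euclidean projection onto a sparsity set, and a dual update. The numpy sketch below illustrates that pattern only; the function names, the 10% keep ratio, and the quadratic toy loss are assumptions, not the paper's implementation.

```python
import numpy as np

def project_sparse(W, keep_ratio=0.1):
    """Euclidean projection onto matrices with at most keep_ratio * W.size
    nonzeros: keep the largest-magnitude entries, zero the rest."""
    k = max(1, int(keep_ratio * W.size))
    thresh = np.sort(np.abs(W), axis=None)[-k]
    return np.where(np.abs(W) >= thresh, W, 0.0)

def admm_prune_step(W, Z, U, rho, grad_loss, lr=1e-2, keep_ratio=0.1):
    """One ADMM iteration for: minimize loss(W) subject to W sparse.
    W-step: gradient descent on loss(W) + (rho/2)||W - Z + U||^2;
    Z-step: projection of W + U onto the sparse set;
    U-step: dual ascent on the constraint W = Z."""
    W = W - lr * (grad_loss(W) + rho * (W - Z + U))
    Z = project_sparse(W + U, keep_ratio)
    U = U + W - Z
    return W, Z, U
```

Iterating these three steps drives the dense iterate `W` toward its sparse copy `Z`, so the final network keeps only the retained connections.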
2
Lai X, Cao J, Lin Z. An Accelerated Maximally Split ADMM for a Class of Generalized Ridge Regression. IEEE Transactions on Neural Networks and Learning Systems 2023; 34:958-972. PMID: 34437070. DOI: 10.1109/tnnls.2021.3104840.
Abstract
Ridge regression (RR) is widely used in machine learning but faces computational challenges in big-data applications. To meet these challenges, this article develops a highly parallel new algorithm, an accelerated maximally split alternating direction method of multipliers (A-MS-ADMM), for a class of generalized RR (GRR) that allows different regularization factors for different regression coefficients. Linear convergence of the new algorithm, along with its convergence ratio, is established. Optimal algorithm parameters are derived for the GRR with a particular set of regularization factors, and a parameter-selection scheme for the GRR with general regularization factors is also discussed. The new algorithm is then applied to the training of single-layer feedforward neural networks. Experiments on real-world benchmark datasets for regression and classification, together with comparisons against existing methods, demonstrate the fast convergence, low computational complexity, and high parallelism of the new algorithm.
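For a concrete picture of the model class, the GRR objective with per-coefficient regularization factors admits a direct closed-form solve via the normal equations. The paper's contribution is computing this at scale with a maximally split ADMM; the small numpy sketch below is only an illustrative direct solve (function and variable names are assumptions):

```python
import numpy as np

def grr_fit(X, y, lam):
    """Generalized ridge regression:
    minimize ||X w - y||^2 + sum_j lam[j] * w[j]^2,
    where lam is a vector of per-coefficient regularization factors."""
    A = X.T @ X + np.diag(lam)   # normal equations with a diagonal penalty
    return np.linalg.solve(A, X.T @ y)
```

With `lam` set to a constant vector this reduces to ordinary ridge regression; letting the factors differ per coefficient is what makes the problem "generalized".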
3
Dai Y, Zhang Y, Wu Q. Over-relaxed multi-block ADMM algorithms for doubly regularized support vector machines. Neurocomputing 2023. DOI: 10.1016/j.neucom.2023.01.082.
4
Zhang S, Wang T, Cao J, Liu J. Multichannel Matrix Randomized Autoencoder. Neural Processing Letters 2022. DOI: 10.1007/s11063-022-11134-8.
5
An accelerated optimization algorithm for the elastic-net extreme learning machine. International Journal of Machine Learning and Cybernetics 2022. DOI: 10.1007/s13042-022-01636-1.
6
Li J, Hu J, Zhao G, Huang S, Liu Y. Tensor based stacked fuzzy neural network for efficient data regression. Soft Computing 2022; 27:1-30. PMID: 35992191. PMCID: PMC9382627. DOI: 10.1007/s00500-022-07402-3.
Abstract
The random vector functional link network and the extreme learning machine have previously been extended with type-2 fuzzy sets using vector-stacking methods; this extension suggests a natural way to use tensors to construct the learning structure of a type-2 fuzzy-sets-based framework. In this paper, the type-2 fuzzy-sets-based random vector functional link network, the type-2 fuzzy-sets-based extreme learning machine, and the Tikhonov-regularized extreme learning machine are fused into one network, and a tensor-based way of stacking data is used to incorporate the nonlinear mappings induced by the type-2 fuzzy sets. The network learns each substructure with that substructure's own algorithm, and the three substructures are merged into one tensor structure via the type-2 fuzzy mapping results. For the stacked fuzzy neural network, the consequent-part parameters are learned by unfolding-tensor-based matrix regression. The proposed stacked fuzzy neural network offers a new way to design hybrid fuzzy neural networks with higher-order fuzzy sets and higher-order data structures. Its effectiveness is verified on classical testing benchmarks with several statistical testing methods.
Affiliation(s)
- Jie Li, College of Electronic Information Engineering, Inner Mongolia University, Hohhot, 010021, China
- Jiale Hu, College of Electronic Information Engineering, Inner Mongolia University, Hohhot, 010021, China
- Guoliang Zhao, College of Electronic Information Engineering, Inner Mongolia University, Hohhot, 010021, China; State Key Laboratory of Reproductive Regulation and Breeding of Grassland Livestock, Inner Mongolia University, Hohhot, 010021, China
- Sharina Huang, College of Science, Inner Mongolia Agricultural University, Hohhot, 010018, China
- Yang Liu, College of Electronic Information Engineering, Inner Mongolia University, Hohhot, 010021, China
7
Zheng Y, Chen B, Wang S, Wang W, Qin W. Mixture Correntropy-Based Kernel Extreme Learning Machines. IEEE Transactions on Neural Networks and Learning Systems 2022; 33:811-825. PMID: 33079685. DOI: 10.1109/tnnls.2020.3029198.
Abstract
The kernel-based extreme learning machine (KELM), a natural extension of the ELM to kernel learning, has achieved outstanding performance on various regression and classification problems. Compared with the basic ELM, KELM generalizes better because it requires neither the number of hidden nodes to be fixed beforehand nor a random projection mechanism. However, since KELM is derived under the minimum mean square error (MMSE) criterion, which assumes Gaussian noise, its performance may deteriorate seriously in non-Gaussian cases. To improve the robustness of KELM, this article proposes a mixture correntropy-based KELM (MC-KELM) that adopts the recently proposed maximum mixture correntropy criterion as the optimization criterion in place of MMSE. In addition, an online sequential version of MC-KELM (MCOS-KELM) is developed to handle data that arrive sequentially, one-by-one or chunk-by-chunk. Experimental results on regression and classification datasets validate the performance superiority of the new methods.
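The idea of replacing MMSE with a mixture correntropy criterion can be sketched as an iteratively reweighted KELM solve: residuals are scored by a two-component mixture of Gaussian kernels, and the resulting sample weights enter a weighted kernel ridge system. The sketch below illustrates that general mechanism only; all names, kernel widths, and the specific reweighting schedule are assumptions rather than the paper's exact derivation.

```python
import numpy as np

def gauss_kernel(A, B, sigma=0.3):
    """Gaussian (RBF) kernel matrix between row-sample matrices A and B."""
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2))

def mc_kelm_fit(X, y, C=1000.0, sigma=0.3, sigmas=(0.5, 2.0), mix=0.5, iters=10):
    """KELM trained by iteratively reweighted least squares; sample weights
    come from a two-component mixture of Gaussian correntropy kernels
    evaluated on the residuals."""
    K = gauss_kernel(X, X, sigma)
    n = len(y)
    w = np.ones(n)
    alpha = np.zeros(n)
    for _ in range(iters):
        # weighted kernel ridge system: (diag(w) K + I/C) alpha = diag(w) y
        alpha = np.linalg.solve(w[:, None] * K + np.eye(n) / C, w * y)
        e = y - K @ alpha
        w = (mix * np.exp(-e ** 2 / (2 * sigmas[0] ** 2))
             + (1 - mix) * np.exp(-e ** 2 / (2 * sigmas[1] ** 2)))
    return alpha

def kelm_predict(alpha, Xtr, Xte, sigma=0.3):
    return gauss_kernel(Xte, Xtr, sigma) @ alpha
```

Large residuals receive near-zero weight under both mixture components, which is what gives the correntropy criterion its robustness to non-Gaussian noise.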
8
Ma R, Wang T, Cao J, Dong F. Minimum error entropy criterion-based randomised autoencoder. Cognitive Computation and Systems 2021. DOI: 10.1049/ccs2.12030.
Affiliation(s)
- Rongzhi Ma, Machine Learning and I-health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, China
- Tianlei Wang, Machine Learning and I-health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, China; Artificial Intelligence Institute, Hangzhou Dianzi University, Zhejiang, China
- Jiuwen Cao, Machine Learning and I-health International Cooperation Base of Zhejiang Province, Hangzhou Dianzi University, China; Artificial Intelligence Institute, Hangzhou Dianzi University, Zhejiang, China; Research Center for Intelligent Sensing, Zhejiang Lab, Hangzhou, China
- Fang Dong, School of Information and Electrical Engineering, Zhejiang University City College, China
9
Wang T, Cao J, Lai X, Wu QMJ. Hierarchical One-Class Classifier With Within-Class Scatter-Based Autoencoders. IEEE Transactions on Neural Networks and Learning Systems 2021; 32:3770-3776. PMID: 32822309. DOI: 10.1109/tnnls.2020.3015860.
Abstract
Autoencoding is a vital branch of representation learning in deep neural networks (DNNs). The extreme learning machine-based autoencoder (ELM-AE) has been recently developed and has gained popularity for its fast learning speed and ease of implementation. However, the ELM-AE uses random hidden node parameters without tuning, which may generate meaningless encoded features. In this brief, we first propose a within-class scatter information constraint-based AE (WSI-AE) that minimizes both the reconstruction error and the within-class scatter of the encoded features. We then build stacked WSI-AEs into a one-class classification (OCC) algorithm based on the hierarchical regularized least-squared method. The effectiveness of our approach was experimentally demonstrated in comparisons with several state-of-the-art AEs and OCC algorithms. The evaluations were performed on several benchmark data sets.
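As context for the abstract above, the baseline ELM-AE it starts from has a simple closed form: a random, untuned hidden layer followed by a regularized least-squares decoder, with the transposed decoder reused as the encoder. A minimal numpy sketch follows (names are illustrative; the proposed WSI-AE additionally penalizes the within-class scatter of the encoded features, which is omitted here):

```python
import numpy as np

def elm_ae_fit(X, n_hidden=64, C=1e3, seed=0):
    """Baseline ELM autoencoder: random hidden layer, then a regularized
    least-squares decoder that reconstructs the input."""
    rng = np.random.default_rng(seed)
    W = rng.normal(size=(X.shape[1], n_hidden))   # random input weights (untuned)
    b = rng.normal(size=n_hidden)                 # random biases
    H = np.tanh(X @ W + b)                        # hidden activations
    # decoder weights: argmin ||H beta - X||^2 + ||beta||^2 / C
    beta = np.linalg.solve(H.T @ H + np.eye(n_hidden) / C, H.T @ X)
    return beta

def elm_ae_encode(X, beta):
    """ELM-AE convention: reuse the transposed decoder as the encoder."""
    return X @ beta.T
```

Because the hidden parameters are random, the quality of the encoded features depends entirely on the decoder solve, which is the weakness the within-class scatter constraint is meant to address.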
10
Hu D, Cao J, Lai X, Liu J, Wang S, Ding Y. Epileptic Signal Classification Based on Synthetic Minority Oversampling and Blending Algorithm. IEEE Transactions on Cognitive and Developmental Systems 2021. DOI: 10.1109/tcds.2020.3009020.
11
Cao J, Zhu J, Hu W, Kummert A. Epileptic Signal Classification With Deep EEG Features by Stacked CNNs. IEEE Transactions on Cognitive and Developmental Systems 2020. DOI: 10.1109/tcds.2019.2936441.
12
Yang J, Cao J, Wang T, Xue A, Chen B. Regularized correntropy criterion based semi-supervised ELM. Neural Networks 2019; 122:117-129. PMID: 31677440. DOI: 10.1016/j.neunet.2019.09.030.
Abstract
With the explosive growth of data, semi-supervised learning has attracted increasing attention in recent years for its power in exploiting unlabeled data and mining knowledge. As an emerging method built on the ELM, the semi-supervised extreme learning machine (SSELM) has been developed for data classification and has shown advantages in learning efficiency and accuracy. However, the optimization of SSELM, like that of most other ELMs, is generally based on the mean square error (MSE) criterion, which has been shown to be less effective in dealing with non-Gaussian noise. In this paper, a robust regularized correntropy criterion-based SSELM (RC-SSELM) is developed. The output weight matrix of RC-SSELM is optimized by a fixed-point iteration approach, and a convergence analysis of the proposed method is presented based on the half-quadratic optimization technique. Experimental results on 4 synthetic datasets and 13 benchmark UCI datasets show the superiority of RC-SSELM over SSELM and other state-of-the-art methods.
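The fixed-point/half-quadratic scheme the abstract mentions can be illustrated on a plain linear model: alternate Gaussian (correntropy-style) weights on the residuals with a weighted ridge solve. This is a generic sketch of the technique under assumed names and parameters, not RC-SSELM itself:

```python
import numpy as np

def correntropy_regression(X, y, sigma=1.0, lam=1e-3, iters=20):
    """Robust linear regression via the half-quadratic fixed-point iteration
    used with correntropy criteria: each pass computes Gaussian weights on
    the residuals, then solves a weighted ridge system."""
    d = X.shape[1]
    # warm start from the ordinary ridge solution
    beta = np.linalg.solve(X.T @ X + lam * np.eye(d), X.T @ y)
    for _ in range(iters):
        e = y - X @ beta
        w = np.exp(-e ** 2 / (2 * sigma ** 2))   # half-quadratic auxiliary weights
        Xw = w[:, None] * X                      # diag(w) @ X without forming diag
        beta = np.linalg.solve(X.T @ Xw + lam * np.eye(d), Xw.T @ y)
    return beta
```

Samples with large residuals get exponentially small weights, so outliers are effectively excluded from each least-squares pass, which is the robustness mechanism shared by the correntropy-based ELM variants in this list.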
Affiliation(s)
- Jie Yang, Key Lab for IOT and Information Fusion Technology of Zhejiang, Hangzhou Dianzi University, Zhejiang, 310018, China
- Jiuwen Cao, Key Lab for IOT and Information Fusion Technology of Zhejiang, Hangzhou Dianzi University, Zhejiang, 310018, China; Artificial Intelligence Institute, Hangzhou Dianzi University, Zhejiang, 310018, China
- Tianlei Wang, Key Lab for IOT and Information Fusion Technology of Zhejiang, Hangzhou Dianzi University, Zhejiang, 310018, China
- Anke Xue, Key Lab for IOT and Information Fusion Technology of Zhejiang, Hangzhou Dianzi University, Zhejiang, 310018, China
- Badong Chen, School of Electronic and Information Engineering, Xi'an Jiaotong University, Xi'an, 710049, China