1. Wu C, Wang Y, Qiu S, He H. A bimodal deep learning network based on CNN for fine motor imagery. Cogn Neurodyn 2024; 18:3791-3804. PMID: 39712133; PMCID: PMC11655732; DOI: 10.1007/s11571-024-10159-0.
Abstract
Motor imagery (MI) is an important brain-computer interface (BCI) paradigm. The traditional MI paradigm (imagining movements of different limbs) limits intuitive control of external devices, while the fine MI paradigm (imagining movements of different joints of the same limb) can control a mechanical arm without cognitive disconnection. However, the decoding performance of fine MI limits its application. Electroencephalography (EEG) and functional near-infrared spectroscopy (fNIRS) are widely used in BCI systems because of their portability and ease of operation. In this study, a fine MI paradigm with four classes (hand, wrist, shoulder, and rest) was designed, and EEG-fNIRS bimodal brain activity data were collected from 12 subjects. Event-related desynchronization (ERD) in the EEG signals shows contralateral dominance, and the ERD differs across the four classes. For the fNIRS signal, time periods with significant differences can be observed in the activation patterns of the four MI tasks; spatially, brain topographic maps based on signal peaks also show differences among the four tasks. The EEG and fNIRS signals of the four classes are therefore distinguishable. A bimodal fusion network is proposed to improve decoding performance on the fine MI tasks: the features of the two modalities are extracted separately by two feature extractors based on convolutional neural networks (CNNs). The proposed bimodal method significantly improved recognition performance compared with the single-modal networks, outperformed all comparison methods, and achieved a four-class accuracy of 58.96%. This paper demonstrates the feasibility of EEG-fNIRS bimodal BCI systems for fine MI and shows the effectiveness of the proposed bimodal fusion method, providing theoretical and technical support for fine MI-based BCI systems.
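The abstract does not give layer-level details of the fusion network, so the sketch below is only a minimal, hypothetical PyTorch illustration of the general scheme it describes: two CNN feature extractors, one per modality, whose outputs are concatenated and fed to a shared classifier. All channel counts, kernel sizes, and the class name BimodalFusionNet are assumptions for illustration, not the authors' architecture.

```python
# Hypothetical sketch of a two-branch EEG/fNIRS fusion classifier (not the authors' exact model).
import torch
import torch.nn as nn

class BimodalFusionNet(nn.Module):
    def __init__(self, eeg_channels=30, eeg_samples=1000,
                 fnirs_channels=20, fnirs_samples=100, n_classes=4):
        super().__init__()
        # EEG branch: temporal convolution, then a spatial convolution across electrodes.
        self.eeg_branch = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 64), padding=(0, 32), bias=False),
            nn.BatchNorm2d(8),
            nn.Conv2d(8, 16, kernel_size=(eeg_channels, 1), bias=False),
            nn.BatchNorm2d(16), nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 16)), nn.Flatten(),
        )
        # fNIRS branch: the slower haemodynamic signal gets shorter temporal kernels (assumed).
        self.fnirs_branch = nn.Sequential(
            nn.Conv2d(1, 8, kernel_size=(1, 8), padding=(0, 4), bias=False),
            nn.BatchNorm2d(8),
            nn.Conv2d(8, 16, kernel_size=(fnirs_channels, 1), bias=False),
            nn.BatchNorm2d(16), nn.ELU(),
            nn.AdaptiveAvgPool2d((1, 16)), nn.Flatten(),
        )
        # Feature-level fusion: concatenate both embeddings, then classify the four MI classes.
        self.classifier = nn.Linear(16 * 16 * 2, n_classes)

    def forward(self, eeg, fnirs):
        # eeg: (batch, 1, eeg_channels, eeg_samples); fnirs: (batch, 1, fnirs_channels, fnirs_samples)
        feats = torch.cat([self.eeg_branch(eeg), self.fnirs_branch(fnirs)], dim=1)
        return self.classifier(feats)

# Example forward pass with random data shaped like a small EEG/fNIRS trial batch.
logits = BimodalFusionNet()(torch.randn(2, 1, 30, 1000), torch.randn(2, 1, 20, 100))
print(logits.shape)  # torch.Size([2, 4])
```

The published network may fuse the modalities quite differently; the point of the sketch is only the separate per-modality extractors followed by feature-level fusion.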
Affiliation(s)
- Chenyao Wu
- Laboratory of Brain Atlas and Brain-Inspired Intelligence, Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190 China
- University of Chinese Academy of Sciences, Beijing, 100049 China
- Yu Wang
- University of Chinese Academy of Sciences, Beijing, 100049 China
- National Engineering & Technology Research Center for ASIC Design, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190 China
- Shuang Qiu
- Laboratory of Brain Atlas and Brain-Inspired Intelligence, Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190 China
- University of Chinese Academy of Sciences, Beijing, 100049 China
- Huiguang He
- Laboratory of Brain Atlas and Brain-Inspired Intelligence, Key Laboratory of Brain Cognition and Brain-inspired Intelligence Technology, Institute of Automation, Chinese Academy of Sciences, Beijing, 100190 China
- University of Chinese Academy of Sciences, Beijing, 100049 China

2. Wang X, Yang W, Qi W, Wang Y, Ma X, Wang W. STaRNet: A spatio-temporal and Riemannian network for high-performance motor imagery decoding. Neural Netw 2024; 178:106471. PMID: 38945115; DOI: 10.1016/j.neunet.2024.106471.
Abstract
Brain-computer interfaces (BCIs), representing a transformative form of human-computer interaction, empower users to interact directly with external environments through brain signals. In response to the demands for high accuracy, robustness, and end-to-end capabilities within BCIs based on motor imagery (MI), this paper introduces STaRNet, a novel model that integrates multi-scale spatio-temporal convolutional neural networks (CNNs) with Riemannian geometry. Initially, STaRNet uses a multi-scale spatio-temporal feature extraction module that captures both global and local features, facilitating the construction of Riemannian manifolds from these comprehensive spatio-temporal features. Subsequently, a matrix logarithm operation transforms the manifold-based features into the tangent space, followed by a dense layer for classification. Without preprocessing, STaRNet surpasses state-of-the-art (SOTA) models, achieving an average decoding accuracy of 83.29% and a kappa value of 0.777 on the BCI Competition IV 2a dataset, and 95.45% accuracy with a kappa value of 0.939 on the High Gamma Dataset. Additionally, a comparative analysis between STaRNet and several SOTA models, focusing on the most challenging subjects from both datasets, highlights the exceptional robustness of STaRNet. Finally, visualizations of the learned frequency bands demonstrate that the temporal convolutions have learned MI-related frequency bands, and t-SNE analyses of features across multiple layers of STaRNet exhibit strong feature extraction capabilities. We believe that the accurate, robust, and end-to-end capabilities of STaRNet will facilitate the advancement of BCIs.
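The Riemannian step described here (spatial covariance matrices mapped into the tangent space by a matrix logarithm and then classified by a dense layer) can be illustrated with a short, self-contained sketch. The code below is not STaRNet; the input shape, shrinkage term, and the TangentSpaceClassifier module are assumptions that only demonstrate the covariance → matrix-logarithm → dense-layer pipeline the abstract names.

```python
# Minimal sketch of a Riemannian tangent-space classification step
# (covariance -> matrix logarithm -> flatten -> dense layer). Not the authors' code.
import torch
import torch.nn as nn

def spd_logm(spd, eps=1e-6):
    """Matrix logarithm of symmetric positive-definite matrices via eigendecomposition."""
    eigvals, eigvecs = torch.linalg.eigh(spd)
    log_eigvals = torch.log(eigvals.clamp_min(eps))
    return eigvecs @ torch.diag_embed(log_eigvals) @ eigvecs.transpose(-1, -2)

class TangentSpaceClassifier(nn.Module):
    def __init__(self, n_features=22, n_classes=4):
        super().__init__()
        self.fc = nn.Linear(n_features * n_features, n_classes)

    def forward(self, x):
        # x: (batch, features, time) -- e.g. spatio-temporal feature maps or raw EEG channels.
        x = x - x.mean(dim=-1, keepdim=True)
        cov = x @ x.transpose(-1, -2) / (x.shape[-1] - 1)   # (batch, F, F) sample covariance
        cov = cov + 1e-5 * torch.eye(cov.shape[-1])         # small shrinkage keeps it SPD
        tangent = spd_logm(cov)                             # map the manifold point to the tangent space
        return self.fc(tangent.flatten(start_dim=1))        # dense layer for classification

logits = TangentSpaceClassifier()(torch.randn(4, 22, 1000))
print(logits.shape)  # torch.Size([4, 4])
```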
Affiliation(s)
- Xingfu Wang
- CAS Key Laboratory of Space Manufacturing Technology, Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Wenjie Yang
- CAS Key Laboratory of Space Manufacturing Technology, Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Wenxia Qi
- CAS Key Laboratory of Space Manufacturing Technology, Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Yu Wang
- National Engineering and Technology Research Center for ASIC Design, Institute of Automation, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Xiaojun Ma
- National Engineering and Technology Research Center for ASIC Design, Institute of Automation, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Wei Wang
- CAS Key Laboratory of Space Manufacturing Technology, Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China.

3. Akuthota S, K R, Ravichander J. Artifact removal and motor imagery classification in EEG using advanced algorithms and modified DNN. Heliyon 2024; 10:e27198. PMID: 38560190; PMCID: PMC10980936; DOI: 10.1016/j.heliyon.2024.e27198.
Abstract
This paper presents an advanced approach to EEG artifact removal and motor imagery classification that combines Four-Class Iterative Filtering (FCIF) and a Four-Class Filter Bank Common Spatial Pattern (FC-FBCSP) algorithm with a modified deep neural network (DNN) classifier. The research aims to enhance the accuracy and reliability of BCI systems by addressing the challenges posed by EEG artifacts and complex motor imagery tasks. The methodology first introduces FCIF, a novel technique for ocular artifact removal based on iterative filtering and filter banks; its mathematical formulation allows effective artifact mitigation, thereby improving the quality of the EEG data. In tandem, the FC-FBCSP algorithm extends the Filter Bank Common Spatial Pattern approach to four-class motor imagery classification, and the modified DNN classifier enhances the discriminatory power of the FC-FBCSP features, optimizing the classification process. The paper presents a comprehensive experimental setup using BCI Competition IV Datasets 2a and 2b, with detailed preprocessing steps, including filtering and feature extraction, described with mathematical rigor. Results demonstrate the artifact removal capability of FCIF and the classification performance of FC-FBCSP combined with the modified DNN classifier. Comparative analysis highlights the superiority of the proposed approach over baseline methods, with a mean accuracy of 98.575%.
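As a rough illustration of the filter-bank CSP idea that FC-FBCSP builds on (bandpass sub-bands, CSP spatial filters per band, log-variance features), here is a generic two-class sketch in NumPy/SciPy. It is not the paper's FCIF or FC-FBCSP code: the band edges, filter order, and the one-vs-rest extension to four classes plus the modified DNN classifier are all left out or assumed.

```python
# Hedged sketch of a filter-bank CSP feature extractor in the spirit of FBCSP
# (bandpass sub-bands -> CSP spatial filters -> log-variance features).
import numpy as np
from scipy.signal import butter, sosfiltfilt
from scipy.linalg import eigh

def bandpass(x, low, high, fs):
    sos = butter(4, [low, high], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x, axis=-1)

def csp_filters(class_a, class_b, n_pairs=2):
    """CSP via generalized eigendecomposition of the two class covariances."""
    cov = lambda trials: np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
    ca, cb = cov(class_a), cov(class_b)
    eigvals, eigvecs = eigh(ca, ca + cb)                # eigenvalues sorted ascending
    order = np.argsort(eigvals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]    # most discriminative filters from both ends
    return eigvecs[:, picks].T                          # (2 * n_pairs, channels)

def fbcsp_features(trials_a, trials_b, fs=250,
                   bands=((8, 12), (12, 16), (16, 20), (20, 24), (24, 28), (28, 32))):
    feats_a, feats_b = [], []
    for low, high in bands:
        fa = [bandpass(t, low, high, fs) for t in trials_a]
        fb = [bandpass(t, low, high, fs) for t in trials_b]
        W = csp_filters(fa, fb)
        logvar = lambda trials: [np.log(np.var(W @ t, axis=1)) for t in trials]
        feats_a.append(logvar(fa)); feats_b.append(logvar(fb))
    return np.hstack(feats_a), np.hstack(feats_b)       # per-trial feature vectors across all bands

rng = np.random.default_rng(0)
Xa, Xb = rng.standard_normal((20, 22, 1000)), rng.standard_normal((20, 22, 1000))
Fa, Fb = fbcsp_features(list(Xa), list(Xb))
print(Fa.shape)  # (20, 24): 6 bands x 4 CSP filters
```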
Affiliation(s)
- Srinath Akuthota
- Department of Electronics & Communication Engineering, SR University, Warangal-506371, Telangana, India
- RajKumar K
- Department of Electronics & Communication Engineering, SR University, Warangal-506371, Telangana, India
- Janapati Ravichander
- Department of Electronics & Communication Engineering, SR University, Warangal-506371, Telangana, India

4. Deng H, Li M, Zuo H, Zhou H, Qi E, Wu X, Xu G. Personalized motor imagery prediction model based on individual difference of ERP. J Neural Eng 2024; 21:016027. PMID: 38359457; DOI: 10.1088/1741-2552/ad29d6.
Abstract
Objective. Motor imagery-based brain-computer interface (MI-BCI) is a novel method of interaction between humans and the external environment that can assist individuals with motor disorders in rehabilitation. However, individual differences limit the utility of MI-BCI. In this study, a personalized MI prediction model based on individual differences in event-related potentials (ERPs) is proposed to address this problem. Approach. A novel paradigm, the action observation-based multi-delayed matching posture task, evokes ERPs during a delayed matching posture task phase through picture stimuli and videos, and generates MI electroencephalogram signals through action observation and autonomous imagery in an action observation-based motor imagery phase. Based on the correlation between ERP and MI, a logistic regression-based personalized MI prediction model is built to predict each individual's most suitable MI action. Thirty-two subjects performed the MI task with or without the help of the prediction model to select the MI action, and the classification accuracy of the MI task was used to evaluate the proposed model against three traditional MI methods. Main results. The personalized MI prediction model successfully predicts the suitable action among three sets of daily actions. Under the suitable MI action, the individual's ERP amplitude and event-related desynchronization (ERD) intensity are largest, which helps improve accuracy by 14.25%. Significance. The personalized MI prediction model, which uses temporal ERP features to predict the classification accuracy of MI, is feasible for improving an individual's MI-BCI performance, providing a new personalized solution to individual differences and for practical BCI application.
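The abstract specifies only that a logistic regression model maps temporal ERP features to a prediction of the suitable MI action; the snippet below is a hypothetical scikit-learn sketch of that kind of predictor. The feature set (ERP peak amplitude, latency, ERD intensity) and the labeling rule are invented for illustration and are not taken from the paper.

```python
# Hypothetical sketch: logistic regression on per-action ERP features to predict
# whether an MI action is "suitable" (likely to yield high decoding accuracy).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_samples = 96                      # e.g. 32 subjects x 3 candidate actions (assumed layout)
# Assumed features per (subject, action): ERP peak amplitude, peak latency, mean ERD intensity.
X = rng.standard_normal((n_samples, 3))
# Assumed binary label: 1 if that action gave above-median MI accuracy for the subject.
y = (X[:, 0] + 0.5 * X[:, 2] + 0.3 * rng.standard_normal(n_samples) > 0).astype(int)

model = LogisticRegression()
print("CV accuracy:", cross_val_score(model, X, y, cv=5).mean())

# For a new subject, the fitted model scores each candidate action and the
# highest-probability action would be recommended for MI training.
model.fit(X, y)
new_subject_actions = rng.standard_normal((3, 3))
print("Recommended action index:", model.predict_proba(new_subject_actions)[:, 1].argmax())
```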
Affiliation(s)
- Haodong Deng
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, 300132 Tianjin, People's Republic of China
- School of Health Sciences and Biomedical Engineering, Hebei University of Technology, 300132 Tianjin, People's Republic of China
- Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, 300132 Tianjin, People's Republic of China
- Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, 300132 Tianjin, People's Republic of China
- Mengfan Li
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, 300132 Tianjin, People's Republic of China
- School of Health Sciences and Biomedical Engineering, Hebei University of Technology, 300132 Tianjin, People's Republic of China
- Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, 300132 Tianjin, People's Republic of China
- Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, 300132 Tianjin, People's Republic of China
- Haoxin Zuo
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, 300132 Tianjin, People's Republic of China
- School of Health Sciences and Biomedical Engineering, Hebei University of Technology, 300132 Tianjin, People's Republic of China
- Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, 300132 Tianjin, People's Republic of China
- Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, 300132 Tianjin, People's Republic of China
- Huihui Zhou
- Peng Cheng Laboratory, 518000 Shenzhen, People's Republic of China
- Enming Qi
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, 300132 Tianjin, People's Republic of China
- School of Health Sciences and Biomedical Engineering, Hebei University of Technology, 300132 Tianjin, People's Republic of China
- Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, 300132 Tianjin, People's Republic of China
- Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, 300132 Tianjin, People's Republic of China
- Xue Wu
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, 300132 Tianjin, People's Republic of China
- School of Health Sciences and Biomedical Engineering, Hebei University of Technology, 300132 Tianjin, People's Republic of China
- Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, 300132 Tianjin, People's Republic of China
- Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, 300132 Tianjin, People's Republic of China
- Guizhi Xu
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, 300132 Tianjin, People's Republic of China
- School of Health Sciences and Biomedical Engineering, Hebei University of Technology, 300132 Tianjin, People's Republic of China
- Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, 300132 Tianjin, People's Republic of China
- Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, 300132 Tianjin, People's Republic of China

5. Wang X, Wang Y, Qi W, Kong D, Wang W. BrainGridNet: A two-branch depthwise CNN for decoding EEG-based multi-class motor imagery. Neural Netw 2024; 170:312-324. PMID: 38006734; DOI: 10.1016/j.neunet.2023.11.037.
Abstract
Brain-computer interfaces (BCIs) based on motor imagery (MI) enable disabled people to interact with the world through brain signals. To meet the demands of real-time, stable, and diverse interaction, it is crucial to develop lightweight networks that can accurately and reliably decode multi-class MI tasks. In this paper, we introduce BrainGridNet, a convolutional neural network (CNN) framework that integrates two intersecting depthwise CNN branches with 3D electroencephalography (EEG) data to decode a five-class MI task. BrainGridNet attains competitive results in both the time and frequency domains, with superior performance in the frequency domain, achieving an accuracy of 80.26% and a kappa value of 0.753 and surpassing the state-of-the-art (SOTA) model. Additionally, BrainGridNet shows optimal computational efficiency, excels in decoding the most challenging subject, and maintains robust accuracy despite the random loss of 16 electrode signals. Finally, visualizations demonstrate that BrainGridNet learns discriminative features and identifies the critical brain regions and frequency bands corresponding to each MI class. The combination of BrainGridNet's strong feature extraction capability, high decoding accuracy, steady decoding efficacy, and low computational cost makes it an appealing choice for facilitating the development of BCIs.
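The abstract does not describe the branch kernels or how the 3D (electrode grid x time) EEG representation is organized, so the following PyTorch sketch is only a loose, assumption-heavy illustration of a two-branch depthwise CNN over grid-arranged EEG. The grid size, the treatment of time points as channels, and the TwoBranchDepthwiseNet module are hypothetical and should not be read as the published BrainGridNet.

```python
# Hedged sketch of a two-branch depthwise CNN over grid-arranged EEG (assumptions throughout).
import torch
import torch.nn as nn

class TwoBranchDepthwiseNet(nn.Module):
    def __init__(self, n_times=256, n_classes=5):
        super().__init__()
        def branch(kernel):
            # Depthwise convolutions: groups == in_channels, one spatial filter per channel.
            return nn.Sequential(
                nn.Conv2d(n_times, n_times, kernel_size=kernel, padding="same",
                          groups=n_times, bias=False),
                nn.BatchNorm2d(n_times), nn.ELU(),
                nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            )
        # Two branches with differently oriented kernels scan the electrode grid.
        self.branch_rows = branch((1, 3))
        self.branch_cols = branch((3, 1))
        self.classifier = nn.Linear(2 * n_times, n_classes)

    def forward(self, x):
        # x: (batch, n_times, grid_h, grid_w) -- each time point treated as a channel
        # of the 2D electrode grid (an assumed data layout, not the paper's).
        return self.classifier(torch.cat([self.branch_rows(x), self.branch_cols(x)], dim=1))

logits = TwoBranchDepthwiseNet()(torch.randn(2, 256, 9, 9))
print(logits.shape)  # torch.Size([2, 5])
```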
Affiliation(s)
- Xingfu Wang
- CAS Key Laboratory of Space Manufacturing Technology, Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Yu Wang
- Neural Computation and Brain Computer Interaction (NeuBCI) Research Center for Brain-inspired Intelligence, National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Wenxia Qi
- CAS Key Laboratory of Space Manufacturing Technology, Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China
- Delin Kong
- CAS Key Laboratory of Space Manufacturing Technology, Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing, China
- Wei Wang
- CAS Key Laboratory of Space Manufacturing Technology, Technology and Engineering Center for Space Utilization, Chinese Academy of Sciences, Beijing, China; University of Chinese Academy of Sciences, Beijing, China.