1
Cai M, Hong J. Joint multi-feature extraction and transfer learning in motor imagery brain computer interface. Comput Methods Biomech Biomed Engin 2024:1-12. [PMID: 39286921 DOI: 10.1080/10255842.2024.2404541] [Received: 05/20/2024] [Revised: 08/14/2024] [Accepted: 09/10/2024] [Indexed: 09/19/2024]
Abstract
Motor imagery brain-computer interface (BCI) systems are considered one of the most important BCI paradigms and have received extensive attention from researchers worldwide. However, the non-stationarity of EEG signals across subjects makes subject-to-subject transfer a substantial challenge for robust BCI operation. To address this issue, this paper proposes a novel approach that integrates joint multi-feature extraction, combining common spatial patterns (CSP) and wavelet packet transform (WPT), with transfer learning (TL) in motor imagery BCI systems. The approach leverages the time-frequency characteristics of WPT and the spatial characteristics of CSP, while transfer learning facilitates EEG identification for target subjects based on knowledge acquired from non-target subjects. On dataset IVa from BCI Competition III, the proposed approach achieves an average classification accuracy of 93.4%, outperforming five state-of-the-art approaches. Furthermore, it allows various auxiliary problems to be designed to learn different aspects of the target problem from unlabeled data through transfer learning, facilitating the implementation of new ideas within the proposed approach. The results demonstrate that integrating CSP and WPT while transferring knowledge from other subjects is highly effective in improving the average classification accuracy of EEG signals, and they provide a novel solution to subject-to-subject transfer challenges in motor imagery BCI systems.
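The CSP stage of such a pipeline can be sketched as follows. This is a generic two-class CSP implementation via whitening plus eigendecomposition; the function names and formulation are illustrative assumptions, not the authors' code, and the WPT and transfer-learning stages are omitted.

```python
import numpy as np

def csp_filters(cov_a, cov_b, n_pairs=2):
    """Two-class CSP: whiten the composite covariance, then diagonalize the
    whitened class-A covariance. cov_a, cov_b are class-averaged spatial
    covariance matrices (channels x channels). Returns 2*n_pairs filters
    as rows."""
    evals, evecs = np.linalg.eigh(cov_a + cov_b)
    W = evecs @ np.diag(1.0 / np.sqrt(np.maximum(evals, 1e-12))) @ evecs.T
    # Extreme eigenvectors of the whitened class-A covariance maximize
    # variance for one class while minimizing it for the other.
    s_evals, s_evecs = np.linalg.eigh(W @ cov_a @ W.T)
    order = np.argsort(s_evals)
    pick = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return s_evecs[:, pick].T @ W

def csp_features(trial, filters):
    """Normalized log-variance features of a spatially filtered trial
    (channels x samples) -- the usual input to a linear classifier."""
    z = filters @ trial
    var = z.var(axis=1)
    return np.log(var / var.sum())
```

By construction the filters jointly whiten the two class covariances, so `F @ (cov_a + cov_b) @ F.T` is the identity; the log-variance features then separate the classes linearly.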
Affiliation(s)
- Miao Cai
- Department of Integrated Traditional Chinese and Western Medicine, Xi'an Children's Hospital, Xi'an, China
- Jie Hong
- School of Mechanical Engineering, Northwestern Polytechnical University, Xi'an, China
2
Yu S, Wang Z, Wang F, Chen K, Yao D, Xu P, Zhang Y, Wang H, Zhang T. Multiclass classification of motor imagery tasks based on multi-branch convolutional neural network and temporal convolutional network model. Cereb Cortex 2024; 34:bhad511. [PMID: 38183186 DOI: 10.1093/cercor/bhad511] [Received: 11/05/2023] [Revised: 12/06/2023] [Accepted: 12/08/2023] [Indexed: 01/07/2024]
Abstract
Motor imagery (MI) is a cognitive process wherein an individual mentally rehearses a specific movement without physically executing it. MI-based brain-computer interfaces (BCIs) have recently attracted widespread attention; however, accurate decoding of MI and understanding of its neural mechanisms still face substantial challenges, which seriously hinder the clinical application and development of MI-based BCI systems. It is therefore necessary to develop new methods for decoding MI tasks. In this work, we propose a multi-branch convolutional neural network (MBCNN) combined with a temporal convolutional network (TCN), an end-to-end deep learning framework for decoding multi-class MI tasks. MBCNN first captures temporal- and spectral-domain information from MI electroencephalography signals through convolutional kernels of different sizes; TCN then extracts more discriminative features. A within-subject cross-session strategy is used to validate classification performance on the BCI Competition IV-2a dataset. The proposed MBCNN-TCN-Net achieves an average accuracy of 75.08% on the four-class MI task, outperforming several state-of-the-art approaches and showing that the framework captures discriminative features and decodes MI tasks effectively, thereby improving the performance of MI-BCIs. Our findings could help improve the clinical application and development of MI-based BCI systems.
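For readers sizing a similar TCN stage: the receptive field of stacked dilated causal convolutions grows exponentially with depth. The helper below assumes the common TCN design of two dilated causal convolutions per residual block with dilation doubling at each level; this is the standard formulation, not necessarily the exact configuration of this paper.

```python
def tcn_receptive_field(kernel_size: int, num_levels: int) -> int:
    """Receptive field (in samples) of a TCN with `num_levels` residual
    blocks, two causal convolutions per block, and dilation 2**i at
    level i."""
    return 1 + 2 * (kernel_size - 1) * sum(2**i for i in range(num_levels))
```

For example, kernel size 3 with four levels already covers 61 input samples, which is why a shallow TCN can summarize the long temporal context of an EEG trial.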
Affiliation(s)
- Shiqi Yu
- Microecology Research Center, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Mental Health Education Center and School of Science, Xihua University, Chengdu 610039, China
- Zedong Wang
- Microecology Research Center, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Fei Wang
- School of Computer and Software, Chengdu Jincheng College, Chengdu 610097, China
- Kai Chen
- Mental Health Education Center and School of Science, Xihua University, Chengdu 610039, China
- Dezhong Yao
- Key Laboratory for Neuroinformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China
- Peng Xu
- Key Laboratory for Neuroinformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China
- Yong Zhang
- Microecology Research Center, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Hesong Wang
- Microecology Research Center, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Tao Zhang
- Microecology Research Center, Baiyun Branch, Nanfang Hospital, Southern Medical University, Guangzhou 510515, China
- Mental Health Education Center and School of Science, Xihua University, Chengdu 610039, China
- Key Laboratory for Neuroinformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China
3
Niu L, Bin J, Wang JKS, Zhan G, Jia J, Zhang L, Gan Z, Kang X. Effect of 3D paradigm synchronous motion for SSVEP-based hybrid BCI-VR system. Med Biol Eng Comput 2023; 61:2481-2495. [PMID: 37191865 DOI: 10.1007/s11517-023-02845-8] [Received: 11/25/2022] [Accepted: 05/05/2023] [Indexed: 05/17/2023]
Abstract
A brain-computer interface (BCI) system and virtual reality (VR) are integrated into a more interactive hybrid system (BCI-VR) that allows the user to steer a virtual car. A virtual scene matching the physical environment is built in the VR system, and the object's movement can be observed in the VR scene. A four-class three-dimensional (3D) paradigm is designed that moves synchronously in virtual reality; according to the experimenters' feedback, this dynamic paradigm may affect subjects' attention. Fifteen subjects in our experiment steered the car along specified motion trajectories. Our online experimental results show that different motion trajectories of the paradigm affect the system's performance differently, and that training can mitigate this adverse effect. Moreover, the hybrid system performs better with stimulation frequencies between 5 and 10 Hz than with lower or higher frequencies. The experiments show a maximum average accuracy of 0.956 and a maximum information transfer rate (ITR) of 41.033 bits/min, suggesting that the hybrid system provides a high-performance means of brain-computer interaction. This research could encourage more applications combining BCI and VR technologies.
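The reported ITR can be sanity-checked with the standard Wolpaw formula. The sketch below is generic; the trial duration is a free parameter, since the paper's selection time is not given here.

```python
import math

def wolpaw_itr(n_classes: int, accuracy: float, trial_seconds: float) -> float:
    """Wolpaw information transfer rate in bits per minute.
    `accuracy` must lie in (0, 1]."""
    n, p = n_classes, accuracy
    if p >= 1.0:
        bits = math.log2(n)  # perfect accuracy: full log2(N) bits per selection
    else:
        bits = (math.log2(n)
                + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / trial_seconds
```

With four classes and accuracy 0.956 this gives about 1.67 bits per selection, so the reported 41.033 bits/min would correspond to roughly a 2.4-second selection time (an inference from the formula, not a figure stated in the abstract).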
Affiliation(s)
- Lan Niu
- Laboratory for Neural Interface and Brain Computer Interface, Engineering Research Center of AI & Robotics, Shanghai Engineering Research Center of AI & Robotics, MOE Frontiers Center for Brain Science, State Key Laboratory of Medical Neurobiology, Institute of AI & Robotics, Institute of Meta-Medical, Academy for Engineering & Technology, Ministry of Education, Fudan University, Shanghai, China
- Ji Hua Laboratory, Foshan, 528000, Guangdong Province, China
- Jianxiong Bin
- Laboratory for Neural Interface and Brain Computer Interface, Engineering Research Center of AI & Robotics, Shanghai Engineering Research Center of AI & Robotics, MOE Frontiers Center for Brain Science, State Key Laboratory of Medical Neurobiology, Institute of AI & Robotics, Institute of Meta-Medical, Academy for Engineering & Technology, Ministry of Education, Fudan University, Shanghai, China
- Ji Hua Laboratory, Foshan, 528000, Guangdong Province, China
- Gege Zhan
- Ji Hua Laboratory, Foshan, 528000, Guangdong Province, China
- Jie Jia
- Department of Rehabilitation Medicine, Huashan Hospital, Fudan University, Shanghai, 200040, China
- Lihua Zhang
- Laboratory for Neural Interface and Brain Computer Interface, Engineering Research Center of AI & Robotics, Shanghai Engineering Research Center of AI & Robotics, MOE Frontiers Center for Brain Science, State Key Laboratory of Medical Neurobiology, Institute of AI & Robotics, Institute of Meta-Medical, Academy for Engineering & Technology, Ministry of Education, Fudan University, Shanghai, China
- Ji Hua Laboratory, Foshan, 528000, Guangdong Province, China
- Zhongxue Gan
- Laboratory for Neural Interface and Brain Computer Interface, Engineering Research Center of AI & Robotics, Shanghai Engineering Research Center of AI & Robotics, MOE Frontiers Center for Brain Science, State Key Laboratory of Medical Neurobiology, Institute of AI & Robotics, Institute of Meta-Medical, Academy for Engineering & Technology, Ministry of Education, Fudan University, Shanghai, China
- Ji Hua Laboratory, Foshan, 528000, Guangdong Province, China
- Xiaoyang Kang
- Laboratory for Neural Interface and Brain Computer Interface, Engineering Research Center of AI & Robotics, Shanghai Engineering Research Center of AI & Robotics, MOE Frontiers Center for Brain Science, State Key Laboratory of Medical Neurobiology, Institute of AI & Robotics, Institute of Meta-Medical, Academy for Engineering & Technology, Ministry of Education, Fudan University, Shanghai, China.
- Ji Hua Laboratory, Foshan, 528000, Guangdong Province, China.
- Yiwu Research Institute of Fudan University, Chengbei Road, Yiwu City, 322000, Zhejiang, China.
- Research Center for Intelligent Sensing, Zhejiang Lab, Hangzhou, 311100, China.