1
Junqueira B, Aristimunha B, Chevallier S, de Camargo RY. A systematic evaluation of Euclidean alignment with deep learning for EEG decoding. J Neural Eng 2024; 21:036038. [PMID: 38776898] [DOI: 10.1088/1741-2552/ad4f18]
Abstract
Objective: Electroencephalography signals are frequently used for various brain-computer interface (BCI) tasks. While deep learning (DL) techniques have shown promising results, they are hindered by substantial data requirements. By leveraging data from multiple subjects, transfer learning enables more effective training of DL models. One technique gaining popularity is Euclidean alignment (EA), owing to its ease of use, low computational complexity, and compatibility with DL models. However, few studies evaluate its impact on the training performance of shared and individual DL models. In this work, we systematically evaluate the effect of EA combined with DL for decoding BCI signals. Approach: We used EA as a pre-processing step to train shared DL models with data from multiple subjects and evaluated their transferability to new subjects. Main results: Our experimental results show that EA improves decoding accuracy in the target subject by 4.33% and decreases convergence time by more than 70%. We also trained individual models for each subject to use as a majority-voting ensemble classifier. In this scenario, using EA improved the three-model ensemble accuracy by 3.71%. However, compared to the shared model with EA, the ensemble accuracy was 3.62% lower. Significance: EA succeeds in improving transfer learning performance with DL models and could be used as a standard pre-processing technique.
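Euclidean alignment itself is compact enough to show in a few lines: each trial is whitened by the inverse square root of the subject's mean spatial covariance. A minimal NumPy sketch, assuming a trial array of shape (trials, channels, samples); variable names are illustrative, not the authors' code:

```python
import numpy as np

def euclidean_alignment(trials):
    """Align EEG trials so the mean spatial covariance becomes identity.

    trials: array of shape (n_trials, n_channels, n_samples).
    """
    covs = np.stack([x @ x.T / x.shape[1] for x in trials])
    r_bar = covs.mean(axis=0)                          # reference covariance
    w, v = np.linalg.eigh(r_bar)                       # symmetric eigendecomposition
    r_inv_sqrt = v @ np.diag(1.0 / np.sqrt(w)) @ v.T   # R^{-1/2}
    return np.array([r_inv_sqrt @ x for x in trials])

rng = np.random.default_rng(0)
trials = rng.standard_normal((20, 8, 256))             # fake subject: 20 trials
aligned = euclidean_alignment(trials)
mean_cov = np.mean([a @ a.T / a.shape[1] for a in aligned], axis=0)
print(np.allclose(mean_cov, np.eye(8)))                # True: mean covariance is identity
```

Because every subject's aligned data has identity mean covariance, trials from different subjects land in a comparable space, which is why the transform helps cross-subject training.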
Affiliation(s)
- Bruna Junqueira
- University of São Paulo, Sao Paulo, Brazil
- Université Paris-Saclay, Inria TAU team, LISN-CNRS, Orsay, France
- Bruno Aristimunha
- Université Paris-Saclay, Inria TAU team, LISN-CNRS, Orsay, France
- Federal University of ABC, Santo Andre, Brazil
2
Ferdi AY, Ghazli A. Authentication with a one-dimensional CNN model using EEG-based brain-computer interface. Comput Methods Biomech Biomed Engin 2024:1-12. [PMID: 38767327] [DOI: 10.1080/10255842.2024.2355490]
Abstract
Brain-computer interface (BCI) technology uses electroencephalogram (EEG) signals to create a direct interaction between the human body and its surroundings. Motor imagery (MI) classification using EEG signals is an important application that can help a rehabilitated or motor-impaired stroke patient perform certain tasks. Robust classification of these signals is an important step toward making EEG more practical in many applications and less dependent on trained professionals. Deep learning methods have produced impressive results in BCI in recent years, especially with the availability of large EEG data sets. Dealing with EEG-MI signals is difficult because noise and other signal sources can interfere with the brain's electrical activity, and the limited generalization ability of classifiers makes them hard to improve. To address these issues, this paper presents a methodology based on one-dimensional convolutional neural networks (1-D CNN) for MI recognition of the right hand, left hand, feet, and a sedentary task. The proposed model is lightweight, with fewer parameters, and achieves an accuracy of 91.75%. The four output classes are then exploited for authentication: people with disabilities who cannot use conventional security measures, such as entering a secret code, can use the classified MI outputs as password codes. The same scheme also offers healthy users a unique authentication system that is more secure and less vulnerable to theft.
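The core operation of such a 1-D CNN is a temporal convolution slid along each EEG channel. A minimal NumPy sketch of that building block only (the paper's actual architecture, kernel sizes, and training procedure are not reproduced here):

```python
import numpy as np

def conv1d_valid(signal, kernel):
    """'Valid' 1-D cross-correlation, the core op of a CNN's Conv1D layer."""
    n = len(signal) - len(kernel) + 1
    return np.array([signal[i:i + len(kernel)] @ kernel for i in range(n)])

sig = np.array([0.0, 1.0, 2.0, 3.0, 2.0, 1.0, 0.0])   # toy single-channel signal
edge = np.array([1.0, -1.0])                          # first-difference kernel
feat = conv1d_valid(sig, edge)
print(feat)  # [-1. -1. -1.  1.  1.  1.]
```

A real model stacks many such learned kernels, interleaved with nonlinearities and pooling, and feeds the resulting feature maps to a classifier over the four MI classes.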
Affiliation(s)
- Ahmed Yassine Ferdi
- University of Tahri Mohammed, Bechar, Algeria
- Laboratory of LTIT, Tahri Mohammed University of Bechar, Algeria
3
Deng H, Li M, Li J, Guo M, Xu G. A robust multi-branch multi-attention-mechanism EEGNet for motor imagery BCI decoding. J Neurosci Methods 2024; 405:110108. [PMID: 38458260] [DOI: 10.1016/j.jneumeth.2024.110108]
Abstract
BACKGROUND: Motor-imagery-based brain-computer interface (MI-BCI) is a promising technology to assist communication, movement, and neurological rehabilitation for motor-impaired individuals. Electroencephalography (EEG) decoding techniques using deep learning (DL) possess noteworthy advantages due to automatic feature extraction and end-to-end learning. However, DL-based EEG decoding models tend to show large variations due to intersubject variability of EEG, which results from inconsistencies in different subjects' optimal hyperparameters. NEW METHODS: This study proposes a multi-branch multi-attention-mechanism EEGNet model (MBMANet) for robust decoding. It applies the multi-branch EEGNet structure to achieve varied feature extraction, and the different attention mechanisms introduced in each branch attain diverse adaptive weight adjustments. This combination of multi-branch and multi-attention mechanisms allows multi-level feature fusion that provides robust decoding for different subjects. RESULTS: The MBMANet model achieves a four-class accuracy of 83.18% and a kappa of 0.776 on the BCI Competition IV-2a dataset, outperforming eight other CNN-based decoding models. This consistently satisfactory performance across all nine subjects indicates that the proposed model is robust. CONCLUSIONS: The combination of multi-branch and multi-attention mechanisms empowers DL-based models to adaptively learn different EEG features, which provides a feasible solution for dealing with data variability. It also gives the MBMANet model more accurate decoding of motion intentions and lower training costs, thus improving the MI-BCI's utility and robustness.
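The weighting-and-fusion idea behind a multi-branch attention design can be sketched independently of EEGNet. A toy NumPy illustration, assuming each branch produces a feature vector and using a simple per-branch energy statistic as the attention score (a trained model would learn these scores from data; this is not MBMANet's actual mechanism):

```python
import numpy as np

def attention_fuse(branch_feats):
    """Softmax-weighted fusion of per-branch feature vectors.

    branch_feats: (n_branches, n_features). The attention score here is a
    simple energy statistic; a trained model would learn it.
    """
    scores = (branch_feats ** 2).mean(axis=1)
    weights = np.exp(scores) / np.exp(scores).sum()   # softmax over branches
    return weights @ branch_feats                     # weighted sum

feats = np.array([[1.0, 0.0],    # branch 1 features
                  [0.0, 2.0],    # branch 2 features
                  [1.0, 1.0]])   # branch 3 features
fused = attention_fuse(feats)
print(fused.shape)  # (2,)
```

The point of the softmax weighting is that branches whose features are more informative for a given subject dominate the fused representation, which is how such models adapt across subjects without per-subject hyperparameter tuning.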
Affiliation(s)
- Haodong Deng
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China; Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, Tianjin 300132, China; Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Hebei University of Technology, Tianjin 300132, China; School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China
- Mengfan Li
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China; Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, Tianjin 300132, China; Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Hebei University of Technology, Tianjin 300132, China; School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China
- Jundi Li
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China; Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, Tianjin 300132, China; Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Hebei University of Technology, Tianjin 300132, China; School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China
- Miaomiao Guo
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China; Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, Tianjin 300132, China; Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Hebei University of Technology, Tianjin 300132, China; School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China
- Guizhi Xu
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China; Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, Tianjin 300132, China; Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Hebei University of Technology, Tianjin 300132, China; School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin 300132, China
4
Ferrero L, Soriano-Segura P, Navarro J, Jones O, Ortiz M, Iáñez E, Azorín JM, Contreras-Vidal JL. Brain-machine interface based on deep learning to control asynchronously a lower-limb robotic exoskeleton: a case-of-study. J Neuroeng Rehabil 2024; 21:48. [PMID: 38581031] [PMCID: PMC10996198] [DOI: 10.1186/s12984-024-01342-9]
Abstract
BACKGROUND: This research focused on the development of a motor imagery (MI) based brain-machine interface (BMI) using deep learning algorithms to control a lower-limb robotic exoskeleton. The study aimed to overcome the limitations of traditional BMI approaches by leveraging the advantages of deep learning, such as automated feature extraction and transfer learning. The experimental protocol to evaluate the BMI was designed as asynchronous, allowing subjects to perform mental tasks at will. METHODS: Five healthy, able-bodied subjects were enrolled in a series of experimental sessions. The brain signals from two of these sessions were used to develop a generic deep learning model through transfer learning. This model was then fine-tuned during the remaining sessions and evaluated. Three distinct deep learning approaches were compared: one without fine-tuning, one that fine-tuned all layers of the model, and one that fine-tuned only the last three layers. The evaluation phase involved exclusive closed-loop control of the exoskeleton by the participants' neural activity, using the second deep learning approach for decoding. RESULTS: The three deep learning approaches were compared with an approach based on spatial features that was trained for each subject and experimental session, and demonstrated superior performance. Interestingly, the deep learning approach without fine-tuning achieved performance comparable to the features-based approach, indicating that a generic model trained on data from different individuals and previous sessions can yield similar efficacy. Among the three deep learning approaches, fine-tuning all layer weights demonstrated the highest performance. CONCLUSION: This research represents an initial stride toward future calibration-free methods. Despite the efforts to diminish calibration time by leveraging data from other subjects, complete elimination proved unattainable. The findings hold notable significance for advancing calibration-free approaches, offering the promise of minimizing the need for training trials. Furthermore, the experimental evaluation protocol aimed to replicate real-life scenarios, granting participants greater autonomy in decisions such as walking or stopping gait.
Affiliation(s)
- Laura Ferrero
- Brain-Machine Interface Systems Lab, Miguel Hernández University of Elche, Elche, Spain
- Instituto de Investigación en Ingeniería de Elche-I3E, Miguel Hernández University of Elche, Elche, Spain
- International Affiliate NSF IUCRC BRAIN Site, Miguel Hernández University of Elche, Elche, Spain
- NSF IUCRC BRAIN, University of Houston, Houston, USA
- Non-Invasive Brain Machine Interface Systems, University of Houston, Houston, TX, USA
- Paula Soriano-Segura
- Brain-Machine Interface Systems Lab, Miguel Hernández University of Elche, Elche, Spain
- Instituto de Investigación en Ingeniería de Elche-I3E, Miguel Hernández University of Elche, Elche, Spain
- International Affiliate NSF IUCRC BRAIN Site, Miguel Hernández University of Elche, Elche, Spain
- Jacobo Navarro
- NSF IUCRC BRAIN, University of Houston, Houston, USA
- International Affiliate NSF IUCRC BRAIN Site, Tecnológico de Monterrey, Monterrey, Mexico
- Non-Invasive Brain Machine Interface Systems, University of Houston, Houston, TX, USA
- Oscar Jones
- NSF IUCRC BRAIN, University of Houston, Houston, USA
- Non-Invasive Brain Machine Interface Systems, University of Houston, Houston, TX, USA
- Mario Ortiz
- Brain-Machine Interface Systems Lab, Miguel Hernández University of Elche, Elche, Spain
- Instituto de Investigación en Ingeniería de Elche-I3E, Miguel Hernández University of Elche, Elche, Spain
- International Affiliate NSF IUCRC BRAIN Site, Miguel Hernández University of Elche, Elche, Spain
- Eduardo Iáñez
- Brain-Machine Interface Systems Lab, Miguel Hernández University of Elche, Elche, Spain
- Instituto de Investigación en Ingeniería de Elche-I3E, Miguel Hernández University of Elche, Elche, Spain
- International Affiliate NSF IUCRC BRAIN Site, Miguel Hernández University of Elche, Elche, Spain
- José M Azorín
- Brain-Machine Interface Systems Lab, Miguel Hernández University of Elche, Elche, Spain
- Instituto de Investigación en Ingeniería de Elche-I3E, Miguel Hernández University of Elche, Elche, Spain
- International Affiliate NSF IUCRC BRAIN Site, Miguel Hernández University of Elche, Elche, Spain
- Valencian Graduate School and Research Network of Artificial Intelligence-valgrAI, Valencia, Spain
- José L Contreras-Vidal
- NSF IUCRC BRAIN, University of Houston, Houston, USA
- Non-Invasive Brain Machine Interface Systems, University of Houston, Houston, TX, USA
5
Zhang S, Wang Q, Zhang B, Liang Z, Zhang L, Li L, Huang G, Zhang Z, Feng B, Yu T. Cauchy non-convex sparse feature selection method for the high-dimensional small-sample problem in motor imagery EEG decoding. Front Neurosci 2023; 17:1292724. [PMID: 38027478] [PMCID: PMC10654780] [DOI: 10.3389/fnins.2023.1292724]
Abstract
Introduction: The time, frequency, and space information of electroencephalogram (EEG) signals is crucial for motor imagery decoding. However, these temporal-frequency-spatial features are high-dimensional small-sample data, which poses significant challenges for motor imagery decoding. Sparse regularization is an effective method for addressing this issue. However, the sparse regularization models most commonly employed in motor imagery decoding, such as the least absolute shrinkage and selection operator (LASSO), are biased estimation methods and lead to the loss of target feature information. Methods: In this paper, we propose a non-convex sparse regularization model that employs the Cauchy function. By designing a proximal gradient algorithm, our proposed model achieves closer-to-unbiased estimation than existing sparse models. Therefore, it can learn more accurate, discriminative, and effective feature information. Additionally, the proposed method can perform feature selection and classification simultaneously, without requiring additional classifiers. Results: We conducted experiments on two publicly available motor imagery EEG datasets. The proposed method achieved average classification accuracies of 82.98% and 64.45% under subject-dependent and subject-independent decoding assessments, respectively. Conclusion: The experimental results show that the proposed method can significantly improve the performance of motor imagery decoding, with better classification performance than existing feature selection and deep learning methods. Furthermore, the proposed model shows better generalization capability, with parameter consistency across datasets and robust classification across training sample sizes. Compared with existing sparse regularization methods, the proposed method converges faster and requires shorter model training time.
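For contrast with the Cauchy penalty, the LASSO baseline the authors mention can be minimized with the same proximal-gradient template (ISTA); only the prox step differs between penalties. A self-contained NumPy sketch on synthetic data (the Cauchy prox itself is not reproduced here):

```python
import numpy as np

def ista_lasso(A, y, lam, n_iter=500):
    """Proximal gradient (ISTA) for min 0.5*||A w - y||^2 + lam*||w||_1.

    The soft-threshold prox shrinks every coefficient by lam/L; this
    uniform shrinkage is the estimation bias a non-convex penalty reduces.
    """
    L = np.linalg.norm(A, 2) ** 2        # Lipschitz constant of the gradient
    w = np.zeros(A.shape[1])
    for _ in range(n_iter):
        z = w - A.T @ (A @ w - y) / L    # gradient step on the smooth term
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # prox step
    return w

rng = np.random.default_rng(1)
A = rng.standard_normal((50, 20))
w_true = np.zeros(20)
w_true[[2, 7]] = [3.0, -2.0]             # two active features
w_hat = ista_lasso(A, A @ w_true, lam=0.1)
print(np.flatnonzero(np.abs(w_hat) > 0.5))  # the two active features are recovered
```

Swapping the soft-threshold line for the proximal operator of a Cauchy-type penalty gives the non-convex variant; the rest of the loop is unchanged, which is why such methods are easy to drop into existing sparse decoders.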
Affiliation(s)
- Shaorong Zhang
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen, China
- School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, China
- Qihui Wang
- School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, China
- Benxin Zhang
- School of Electronic Engineering and Automation, Guilin University of Electronic Technology, Guilin, China
- Zhen Liang
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen, China
- Li Zhang
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen, China
- Linling Li
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen, China
- Gan Huang
- Guangdong Key Laboratory for Biomedical Measurements and Ultrasound Imaging, National-Regional Key Technology Engineering Laboratory for Medical Ultrasound, School of Biomedical Engineering, Shenzhen University Medical School, Shenzhen, China
- Zhiguo Zhang
- Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen, China
- Bao Feng
- School of Electronic Information and Automation, Guilin University of Aerospace Technology, Guilin, China
- Tianyou Yu
- School of Automation Science and Engineering, South China University of Technology, Guangzhou, China
6
Sun C, Mou C. Survey on the research direction of EEG-based signal processing. Front Neurosci 2023; 17:1203059. [PMID: 37521708] [PMCID: PMC10372445] [DOI: 10.3389/fnins.2023.1203059]
Abstract
Electroencephalography (EEG) is increasingly important in brain-computer interface (BCI) systems due to its portability and simplicity. In this paper, we provide a comprehensive review of research on EEG signal processing techniques since 2021, with a focus on preprocessing, feature extraction, and classification methods. We analyzed 61 research articles retrieved from academic search engines, including CNKI, PubMed, Nature, IEEE Xplore, and Science Direct. For preprocessing, we focus on newly proposed preprocessing methods, channel selection, and data augmentation. Data augmentation is classified into conventional methods (sliding windows, segmentation and recombination, and noise injection) and deep learning methods [Generative Adversarial Networks (GAN) and Variational AutoEncoders (VAE)]. We also examine the application of deep learning and multi-method fusion approaches, including both fusion among conventional algorithms and fusion between conventional algorithms and deep learning. Our analysis identifies 35 (57.4%), 18 (29.5%), and 37 (60.7%) studies in the directions of preprocessing, feature extraction, and classification, respectively. We find that preprocessing methods are widely used in EEG classification (96.7% of reviewed papers), and some studies conducted comparative experiments to validate preprocessing. We also discuss the adoption of channel selection and data augmentation and draw several noteworthy conclusions about data augmentation. Furthermore, deep learning methods have shown great promise in EEG classification, with convolutional neural networks (CNNs) being the main structure of deep neural networks (92.3% of deep learning papers). We summarize and analyze several innovative neural networks, including CNNs and multi-structure fusion. However, we also identify several problems and limitations of current deep learning techniques in EEG classification, including inappropriate input, low cross-subject accuracy, an imbalance between parameter counts and time costs, and a lack of interpretability. Finally, we highlight the emerging trend of multi-method fusion approaches (49.2% of reviewed papers) and analyze the data with some examples. We also provide insights into some challenges of multi-method fusion. Our review lays a foundation for future studies to improve EEG classification performance.
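Of the conventional augmentation methods listed, sliding windows is the simplest to illustrate: one long trial is cut into overlapping crops, multiplying the training set. A minimal NumPy sketch, assuming a single trial shaped (channels, samples); the window length and stride are arbitrary example values:

```python
import numpy as np

def sliding_windows(trial, win_len, stride):
    """Cut one EEG trial of shape (channels, samples) into overlapping crops."""
    starts = range(0, trial.shape[1] - win_len + 1, stride)
    return np.stack([trial[:, s:s + win_len] for s in starts])

trial = np.arange(2000, dtype=float).reshape(2, 1000)  # 2 channels, 1000 samples
crops = sliding_windows(trial, win_len=500, stride=125)
print(crops.shape)  # (5, 2, 500): one trial becomes five training examples
```

Each crop keeps the trial's label, so the augmentation is label-preserving as long as the windows stay within the task period.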
7
Zhang J, Liu D, Chen W, Pei Z, Wang J. Deep Convolutional Neural Network for EEG-Based Motor Decoding. Micromachines 2022; 13:1485. [PMID: 36144108] [PMCID: PMC9504902] [DOI: 10.3390/mi13091485]
Abstract
Brain-machine interfaces (BMIs) have been applied as a pattern recognition system for neuromodulation and neurorehabilitation. Decoding brain signals (e.g., EEG) with high accuracy is a prerequisite to building a reliable and practical BMI. This study presents a deep convolutional neural network (CNN) for EEG-based motor decoding. Both upper-limb and lower-limb motor imagery were detected with this end-to-end learning approach on four datasets. The framework achieved an average classification accuracy of 93.36 ± 1.68% across the four datasets. We compared the proposed approach with two other models, i.e., a multilayer perceptron and the state-of-the-art framework combining common spatial patterns and a support vector machine, and observed that the CNN-based framework performed significantly better than both. Feature visualization was further conducted to evaluate the discriminative channels employed for the decoding. We demonstrated the feasibility of the proposed architecture to decode motor imagery from raw EEG data without manually designed features. With the advances in the fields of computer vision and speech recognition, deep learning can not only boost EEG decoding performance but also help us gain more insight from the data, which may further broaden the knowledge of neuroscience for brain mapping.
Affiliation(s)
- Jing Zhang
- School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China
- Center of Artificial Intelligence, Hangzhou Innovation Institute, Beihang University, Hangzhou 310051, China
- Dong Liu
- School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China
- ByteDance, Hangzhou 311100, China
- Weihai Chen
- School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China
- Center of Artificial Intelligence, Hangzhou Innovation Institute, Beihang University, Hangzhou 310051, China
- Zhongcai Pei
- School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China
- Center of Artificial Intelligence, Hangzhou Innovation Institute, Beihang University, Hangzhou 310051, China
- Jianhua Wang
- School of Automation Science and Electrical Engineering, Beihang University, Beijing 100191, China
- Center of Artificial Intelligence, Hangzhou Innovation Institute, Beihang University, Hangzhou 310051, China