1
Miao M, Yang Z, Sheng Z, Xu B, Zhang W, Cheng X. Multi-source deep domain adaptation ensemble framework for cross-dataset motor imagery EEG transfer learning. Physiol Meas 2024; 45:055024. [PMID: 38772402 DOI: 10.1088/1361-6579/ad4e95]
Abstract
Objective. Electroencephalography (EEG) is an important bioelectric signal for measuring the physiological activity of the brain, and motor imagery (MI) EEG has significant clinical application prospects. Convolutional neural networks have become a mainstream approach to MI EEG classification; however, the lack of subject-specific data considerably restricts their decoding accuracy and generalization performance. To address this challenge, this paper proposes a novel transfer learning (TL) framework that uses an auxiliary dataset to improve MI EEG classification performance for a target subject. Approach. We developed a multi-source deep domain adaptation ensemble framework (MSDDAEF) for cross-dataset MI EEG decoding. The proposed MSDDAEF comprises three main components: model pre-training, deep domain adaptation, and multi-source ensemble. Moreover, for each component, different designs were examined to verify the robustness of MSDDAEF. Main results. Bidirectional validation experiments were performed on two large public MI EEG datasets (openBMI and GIST). The highest average classification accuracy of MSDDAEF reaches 74.28% when openBMI serves as the target dataset and GIST as the source dataset, and 69.85% when the roles are reversed. In addition, the classification performance of MSDDAEF surpasses several well-established studies and state-of-the-art algorithms. Significance. These results show that cross-dataset TL is feasible for left/right-hand MI EEG decoding, and further indicate that MSDDAEF is a promising solution for addressing MI EEG cross-dataset variability.
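The abstract does not specify the alignment loss used in the deep domain adaptation component. As a hedged, generic illustration only (not the authors' implementation; the function name `rbf_mmd2` and the `sigma` bandwidth are our own), deep domain adaptation frameworks of this kind often penalize the maximum mean discrepancy (MMD) between source and target feature batches:

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Squared maximum mean discrepancy (MMD) with an RBF kernel,
    a distribution-alignment loss commonly used in deep domain
    adaptation. X, Y: (n_samples, n_features) feature batches."""
    def k(A, B):
        # Pairwise squared Euclidean distances, then the RBF kernel.
        d2 = np.sum(A**2, 1)[:, None] + np.sum(B**2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-d2 / (2 * sigma**2))
    # Unbiased-in-spirit estimator: within-source + within-target - 2*cross.
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()
```

In a multi-source setting like the one described, one such term per source dataset could be added to the classification loss; the ensemble step would then combine the per-source models' predictions.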
Affiliation(s)
- Minmin Miao
- School of Information Engineering, Huzhou University, Huzhou, People's Republic of China
- Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, People's Republic of China
- Zhong Yang
- School of Information Engineering, Huzhou University, Huzhou, People's Republic of China
- Zhenzhen Sheng
- School of Information Engineering, Huzhou University, Huzhou, People's Republic of China
- Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, People's Republic of China
- Baoguo Xu
- School of Instrument Science and Engineering, Southeast University, Nanjing, People's Republic of China
- Wenbin Zhang
- College of Computer Science and Software Engineering, Hohai University, Nanjing, Jiangsu Province, People's Republic of China
- Xinmin Cheng
- School of Information Engineering, Huzhou University, Huzhou, People's Republic of China
- Zhejiang Province Key Laboratory of Smart Management & Application of Modern Agricultural Resources, Huzhou University, Huzhou, People's Republic of China
2
Pan J, Liang R, He Z, Li J, Liang Y, Zhou X, He Y, Li Y. ST-SCGNN: A Spatio-Temporal Self-Constructing Graph Neural Network for Cross-Subject EEG-Based Emotion Recognition and Consciousness Detection. IEEE J Biomed Health Inform 2024; 28:777-788. [PMID: 38015677 DOI: 10.1109/jbhi.2023.3335854]
Abstract
In this paper, a novel spatio-temporal self-constructing graph neural network (ST-SCGNN) is proposed for cross-subject emotion recognition and consciousness detection. For spatio-temporal feature generation, activation and connection pattern features are first extracted and then combined to leverage their complementary emotion-related information. Next, a self-constructing graph neural network with a spatio-temporal model is presented. Specifically, the graph structure of the network is dynamically updated by a self-constructing module driven by the input signal. Experiments on the SEED and SEED-IV datasets showed that the model achieved average accuracies of 85.90% and 76.37%, respectively, both exceeding state-of-the-art results under the same protocol. Moreover, in clinical practice, patients with disorders of consciousness (DOC) have severe brain injuries, and sufficient training data for EEG-based emotion recognition cannot be collected from them. We therefore made a first attempt to apply ST-SCGNN to cross-subject emotion recognition by training on ten healthy subjects and testing on eight patients with DOC. Two patients obtained accuracies significantly above chance level and showed neural patterns similar to those of healthy subjects, demonstrating covert consciousness and emotion-related abilities in these two patients. The proposed ST-SCGNN could thus be a promising tool for consciousness detection in DOC patients.
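The "self-constructing" idea can be sketched roughly as follows (our own illustration, not the authors' code; function and parameter names are hypothetical): the adjacency matrix is derived from the input's node features themselves, so every sample induces its own graph instead of using one fixed a priori.

```python
import numpy as np

def self_construct_adjacency(H, temperature=1.0):
    """Derive a graph from the input itself: edge weights are a
    row-wise softmax over pairwise feature similarities, so the
    structure is re-computed per sample rather than fixed a priori.
    H: (nodes, features) node-feature matrix for one sample."""
    sim = (H @ H.T) / temperature                     # pairwise similarity
    e = np.exp(sim - sim.max(axis=1, keepdims=True))  # numerically stable softmax
    return e / e.sum(axis=1, keepdims=True)           # each row sums to 1
```

A graph neural network layer would then aggregate node features with this sample-specific adjacency in place of a static one.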
3
Sireesha V, Tallapragada VVS, Naresh M, Pradeep Kumar GV. EEG-BCI-based motor imagery classification using double attention convolutional network. Comput Methods Biomech Biomed Engin 2024:1-20. [PMID: 38164118 DOI: 10.1080/10255842.2023.2298369]
Abstract
This article aims to improve and diversify the signal processing techniques used to build a brain-computer interface (BCI) based on the neurological phenomena observed when performing motor tasks with motor imagery (MI). Noise present in the original data, such as intermodulation noise, crosstalk, and other unwanted components, is removed in the pre-processing stage by a Modified Least Mean Square (M-LMS) filter; traditional LMS filters were unable to remove all of this noise. After pre-processing, the required features, such as statistical and entropy features, are extracted using the Common Spatial Pattern (CSP) and Pearson's Correlation Coefficient (PCC), instead of a traditional single-feature extraction model. Because the standard arithmetic optimization algorithm cannot select features accurately and fails to reduce the feature dimensionality of the data, an Extended Arithmetic operation Optimization (ExAo) algorithm is used to select the most significant attributes from the extracted features. The proposed model uses Double Attention Convolutional Neural Networks (DAttnConvNet) to classify EEG signals based on the optimally selected features; the attention mechanism selects and refines features to improve the classification accuracy and efficiency of the model. On EEG motor imagery datasets, the proposed model was analyzed per class, obtaining accuracies of 99.98% for the Baseline (B) class, 99.82% for imagined movement of the right fist (R), and 99.61% for imagined movement of both fists (RL). Overall, the proposed model obtains a high accuracy of 97.94% compared with other models on EEG datasets.
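CSP itself is a standard two-class algorithm; a minimal NumPy/SciPy sketch of CSP filter learning and log-variance feature extraction (the generic textbook version, not the authors' pipeline; `n_pairs` is our own parameter name) is:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(X1, X2, n_pairs=2):
    """Common Spatial Pattern filters for two-class EEG.
    X1, X2: (trials, channels, samples) arrays for each class.
    Returns (2*n_pairs, channels) spatial filters."""
    def avg_cov(X):
        # Trace-normalized average spatial covariance per class.
        covs = [x @ x.T / np.trace(x @ x.T) for x in X]
        return np.mean(covs, axis=0)
    C1, C2 = avg_cov(X1), avg_cov(X2)
    # Generalized eigenproblem C1 w = lambda (C1 + C2) w; the extreme
    # eigenvectors maximize variance for one class vs. the other.
    vals, vecs = eigh(C1, C1 + C2)
    order = np.argsort(vals)
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, picks].T

def csp_features(W, x):
    """Normalized log-variance features of one filtered trial
    x: (channels, samples)."""
    z = W @ x
    var = z.var(axis=1)
    return np.log(var / var.sum())
```

The resulting feature vectors would then feed the feature selection and classification stages described above.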
Affiliation(s)
- V Sireesha
- Department of Computer Science and Engineering, School of Technology, GITAM University, Hyderabad, India
- M Naresh
- Department of ECE, Matrusri Engineering College, Saidabad, Hyderabad, India
- G V Pradeep Kumar
- Department of ECE, Chaitanya Bharathi Institute of Technology, Hyderabad, India
4
Sun H, Jin J, Daly I, Huang Y, Zhao X, Wang X, Cichocki A. Feature learning framework based on EEG graph self-attention networks for motor imagery BCI systems. J Neurosci Methods 2023; 399:109969. [PMID: 37683772 DOI: 10.1016/j.jneumeth.2023.109969]
Abstract
Learning distinguishable features from raw EEG signals is crucial for accurate classification of motor imagery (MI) tasks. To incorporate spatial relationships between EEG sources, we developed a feature set based on an EEG graph. In this graph, EEG channels represent the nodes, with power spectral density (PSD) features defining their properties and the edges preserving the spatial information. We designed an EEG-based graph self-attention network (EGSAN) to learn a low-dimensional embedding vector for the EEG graph, which can be used as a distinguishable feature for motor imagery task classification. We evaluated our EGSAN model on two publicly available MI EEG datasets, each containing different types of motor imagery tasks. Our experiments demonstrate that the proposed model effectively extracts distinguishable features from EEG graphs, achieving significantly higher classification accuracies than existing state-of-the-art methods.
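The graph construction described (channels as nodes, PSD node features, edges carrying spatial information) can be sketched as follows. This is a hedged illustration: the abstract does not define the edges precisely, so the inter-channel correlation adjacency below is our assumption, not necessarily the paper's.

```python
import numpy as np
from scipy.signal import welch

def eeg_graph(trial, fs=250, bands=((4, 8), (8, 13), (13, 30))):
    """Build a simple EEG graph from one trial (channels, samples):
    each channel is a node whose features are band-averaged PSD
    values (theta/alpha/beta by default), and the adjacency matrix
    is the absolute inter-channel correlation."""
    f, pxx = welch(trial, fs=fs, nperseg=fs, axis=-1)
    feats = np.stack([pxx[:, (f >= lo) & (f < hi)].mean(axis=1)
                      for lo, hi in bands], axis=1)  # (channels, n_bands)
    adj = np.abs(np.corrcoef(trial))                 # (channels, channels)
    np.fill_diagonal(adj, 0.0)                       # no self-loops
    return feats, adj
```

A graph attention network such as the EGSAN described would then operate on `(feats, adj)` pairs to produce the trial embedding.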
Affiliation(s)
- Hao Sun
- Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai, China
- Jing Jin
- Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai, China; Shenzhen Research Institute of East China University of Science and Technology, Shenzhen 518063, China
- Ian Daly
- Brain-Computer Interfacing and Neural Engineering Laboratory, School of Computer Science and Electronic Engineering, University of Essex, Colchester CO4 3SQ, United Kingdom
- Yitao Huang
- Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai, China
- Xueqing Zhao
- Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai, China
- Xingyu Wang
- Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai, China
- Andrzej Cichocki
- RIKEN Brain Science Institute, Wako 351-0198, Japan; Nicolaus Copernicus University (UMK), 87-100 Torun, Poland
5
Siviero I, Menegaz G, Storti SF. Functional Connectivity and Feature Fusion Enhance Multiclass Motor-Imagery Brain-Computer Interface Performance. Sensors (Basel) 2023; 23:7520. [PMID: 37687976 PMCID: PMC10490741 DOI: 10.3390/s23177520]
Abstract
(1) Background: In the field of motor-imagery brain-computer interfaces (MI-BCIs), obtaining discriminative features among multiple MI tasks poses a significant challenge. Typically, features are extracted from single electroencephalography (EEG) channels, neglecting their interconnections, which leads to limited results. To address this limitation, there has been growing interest in leveraging functional brain connectivity (FC) as a feature in MI-BCIs. However, the high inter- and intra-subject variability has so far limited its effectiveness in this domain. (2) Methods: We propose a novel signal processing framework that addresses this challenge. We extracted translation-invariant features (TIFs) obtained from a scattering convolution network (SCN) and brain connectivity features (BCFs). Through a feature fusion approach, we combined features extracted from selected channels and functional connectivity features, capitalizing on the strength of each component. Moreover, we employed a multiclass support vector machine (SVM) model to classify the extracted features. (3) Results: Using a public dataset (IIa of BCI Competition IV), we demonstrated that the feature fusion approach outperformed existing state-of-the-art methods. Notably, the best results were achieved by merging TIFs with BCFs, rather than considering TIFs alone. (4) Conclusions: Our proposed framework could be key to improving the performance of a multiclass MI-BCI system.
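The fusion step itself amounts to feature-level concatenation. A hedged sketch (the per-set z-scoring is our own choice, not stated in the abstract; the paper then feeds the fused vector to a multiclass SVM, which is omitted here):

```python
import numpy as np

def fuse(tifs, bcfs):
    """Feature-level fusion: z-score each feature set separately,
    then concatenate, so neither set dominates the combined
    representation. tifs: (trials, d1) scattering-network features;
    bcfs: (trials, d2) connectivity features (names illustrative)."""
    z = lambda X: (X - X.mean(axis=0)) / (X.std(axis=0) + 1e-12)
    return np.concatenate([z(tifs), z(bcfs)], axis=1)
```

The fused `(trials, d1 + d2)` matrix would then be classified, per the abstract, with a multiclass SVM.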
Affiliation(s)
- Ilaria Siviero
- Department of Computer Science, University of Verona, Strada Le Grazie 15, 37134 Verona, Italy
- Gloria Menegaz
- Department of Engineering for Innovation Medicine, University of Verona, Strada Le Grazie 15, 37134 Verona, Italy
- Silvia Francesca Storti
- Department of Engineering for Innovation Medicine, University of Verona, Strada Le Grazie 15, 37134 Verona, Italy
6
Miao M, Zheng L, Xu B, Yang Z, Hu W. A multiple frequency bands parallel spatial–temporal 3D deep residual learning framework for EEG-based emotion recognition. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104141]
7
Proverbio AM, Tacchini M, Jiang K. Event-related brain potential markers of visual and auditory perception: A useful tool for brain computer interface systems. Front Behav Neurosci 2022; 16:1025870. [PMID: 36523756 PMCID: PMC9744781 DOI: 10.3389/fnbeh.2022.1025870]
Abstract
OBJECTIVE A majority of BCI systems, enabling communication with patients with locked-in syndrome, are based on electroencephalogram (EEG) frequency analysis (e.g., linked to motor imagery) or P300 detection. Only recently has the use of event-related brain potentials (ERPs) received much attention, especially for face or music recognition, but neuro-engineering research into this new approach has not yet been carried out. The aim of this study was to provide a variety of reliable ERP markers of visual and auditory perception for the development of new and more complex mind-reading systems that reconstruct mental content from brain activity. METHODS A total of 30 participants were shown 280 color pictures (adult, infant, and animal faces; human bodies; written words; checkerboards; and objects) and 120 auditory files (speech, music, and affective vocalizations). The paradigm did not involve target selection, to avoid artifactual waves linked to decision-making and response preparation (e.g., P300 and motor potentials) masking the neural signature of semantic representation. Overall, 12,000 ERP waveforms × 126 electrode channels (1,512,000 ERP waveforms in total) were processed and artifact-rejected. RESULTS Clear and distinct category-dependent markers of perceptual and cognitive processing were identified through statistical analyses, some of which were novel to the literature. Results are discussed in light of current knowledge of ERP functional properties and with respect to machine learning classification methods previously applied to similar data. CONCLUSION Statistical analyses discriminated the perceptual categories eliciting the various electrical potentials with a high level of accuracy (p ≤ 0.01). The ERP markers identified in this study could therefore be significant tools for optimizing BCI systems [pattern recognition or artificial intelligence (AI) algorithms] applied to EEG/ERP signals.
Affiliation(s)
- Alice Mado Proverbio
- Laboratory of Cognitive Electrophysiology, Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Marta Tacchini
- Laboratory of Cognitive Electrophysiology, Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Kaijun Jiang
- Laboratory of Cognitive Electrophysiology, Department of Psychology, University of Milano-Bicocca, Milan, Italy
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
8
Cao L, Wu H, Chen S, Dong Y, Zhu C, Jia J, Fan C. A Novel Deep Learning Method Based on an Overlapping Time Window Strategy for Brain-Computer Interface-Based Stroke Rehabilitation. Brain Sci 2022; 12:1502. [PMID: 36358428 PMCID: PMC9688819 DOI: 10.3390/brainsci12111502]
Abstract
Globally, stroke is a leading cause of death and disability. The classification of motor intentions from brain activity is an important task in the rehabilitation of stroke patients using brain-computer interfaces (BCIs). This paper presents a new method for model training in EEG-based BCI rehabilitation that uses overlapping time windows. To this end, three different models, a convolutional neural network (CNN), a graph isomorphism network (GIN), and a long short-term memory network (LSTM), are used for the classification task of motor attempt (MA). We conducted several experiments with different time window lengths, and the results showed that the deep learning approach based on overlapping time windows improved classification accuracy, with the LSTM combined with a vote-counting strategy (VS) achieving the highest average classification accuracy of 90.3% at a window size of 70. The results verify that the overlapping time window strategy is useful for increasing the efficiency of BCI rehabilitation.
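The overlapping-window strategy itself is simple to sketch (a generic illustration following the description above; the function names and the specific window/step values are ours, not the paper's): each trial is cut into overlapping segments that serve as extra training examples, and at test time per-window predictions are merged by majority vote.

```python
import numpy as np

def sliding_windows(trial, win, step):
    """Segment one EEG trial (channels, samples) into overlapping
    time windows of length `win` advanced by `step` samples.
    Returns (n_windows, channels, win)."""
    n = trial.shape[-1]
    starts = range(0, n - win + 1, step)
    return np.stack([trial[:, s:s + win] for s in starts])

def vote(window_preds):
    """Majority vote over per-window class predictions."""
    return np.bincount(window_preds).argmax()
```

With 100-sample trials, `win=70`, `step=10` yields four windows per trial, quadrupling the training set while keeping label assignment trivial.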
Affiliation(s)
- Lei Cao
- Department of Artificial Intelligence, Shanghai Maritime University, Shanghai 201306, China
- Hailiang Wu
- Department of Artificial Intelligence, Shanghai Maritime University, Shanghai 201306, China
- Shugeng Chen
- Department of Rehabilitation Medicine, Huashan Hospital, Fudan University, Shanghai 200040, China
- Yilin Dong
- Department of Artificial Intelligence, Shanghai Maritime University, Shanghai 201306, China
- Changming Zhu
- Department of Artificial Intelligence, Shanghai Maritime University, Shanghai 201306, China
- Jie Jia
- Department of Rehabilitation Medicine, Huashan Hospital, Fudan University, Shanghai 200040, China
- Chunjiang Fan
- Department of Rehabilitation Medicine, Wuxi Rehabilitation Hospital, Wuxi 214001, China
9
Cao L, Wang W, Huang C, Xu Z, Wang H, Jia J, Chen S, Dong Y, Fan C, de Albuquerque VHC. An Effective Fusing Approach by Combining Connectivity Network Pattern and Temporal-Spatial Analysis for EEG-Based BCI Rehabilitation. IEEE Trans Neural Syst Rehabil Eng 2022; 30:2264-2274. [PMID: 35969547 DOI: 10.1109/tnsre.2022.3198434]
Abstract
Motor-modality-based brain-computer interfaces (BCIs) can promote neural rehabilitation in stroke patients, and temporal-spatial analysis is commonly used for pattern recognition in this task. This paper introduces a novel connectivity network analysis for EEG-based feature selection. The network features of the connectivity pattern not only capture the spatial activity responding to the motor task but also mine the interaction patterns among the involved cerebral regions. Furthermore, combining temporal-spatial analysis with network analysis was evaluated and shown to improve BCI classification performance (81.7%), raising classification accuracies for most patients (6 of 7). The proposed method is meaningful for developing effective BCI training programs for stroke rehabilitation.
10
Peng Y, Jin F, Kong W, Nie F, Lu BL, Cichocki A. OGSSL: A Semi-Supervised Classification Model Coupled With Optimal Graph Learning for EEG Emotion Recognition. IEEE Trans Neural Syst Rehabil Eng 2022; 30:1288-1297. [PMID: 35576431 DOI: 10.1109/tnsre.2022.3175464]
Abstract
Electroencephalogram (EEG) signals are generated by the central nervous system and are difficult to disguise, which underlies their popularity in emotion recognition. Recently, semi-supervised learning has exhibited promising emotion recognition performance by involving unlabeled EEG data in model training. However, if we first build a graph to characterize the sample similarities and then perform label propagation on this graph, the two steps cannot collaborate well with each other. In this paper, we propose an Optimal Graph coupled Semi-Supervised Learning (OGSSL) model for EEG emotion recognition that unifies adaptive graph learning and emotion recognition into a single objective. Besides, we improve the label indicator matrix of unlabeled samples in order to directly obtain their emotional states. Moreover, the key EEG frequency bands and brain regions in emotion expression are automatically identified by the projection matrix of OGSSL. Experimental results on the SEED-IV dataset demonstrate that 1) OGSSL achieves excellent average accuracies of 76.51%, 77.08%, and 81.29% in three cross-session emotion recognition tasks, 2) OGSSL is competent for discriminative EEG feature selection in emotion recognition, and 3) the Gamma frequency band and the left/right temporal, prefrontal, and (central) parietal lobes are identified as more correlated with the occurrence of emotions.
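OGSSL's unified objective is not reproduced here; for orientation only, the classical two-step baseline that the paper argues against (fixed graph built first, label propagation run second) can be sketched as follows. All names and the `alpha`/`iters` parameters are our own.

```python
import numpy as np

def label_propagation(W, Y, alpha=0.9, iters=200):
    """Classical two-step semi-supervised label propagation.
    W: (n, n) nonnegative symmetric similarity graph (fixed a priori);
    Y: (n, k) one-hot labels, with all-zero rows for unlabeled samples.
    Iterates F <- alpha * S @ F + (1 - alpha) * Y with the symmetrically
    normalized graph S = D^{-1/2} W D^{-1/2}, then predicts by argmax."""
    d = W.sum(axis=1)
    S = W / np.sqrt(np.outer(d, d))
    F = Y.astype(float).copy()
    for _ in range(iters):
        F = alpha * S @ F + (1 - alpha) * Y
    return F.argmax(axis=1)
```

Because `W` is fixed before propagation starts, a poorly built graph cannot be corrected by the labels; unifying graph learning and propagation in one objective, as OGSSL does, is precisely the remedy the abstract describes.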