1
Rong F, Yang B, Guan C. Decoding Multi-Class Motor Imagery From Unilateral Limbs Using EEG Signals. IEEE Trans Neural Syst Rehabil Eng 2024; 32:3399-3409. [PMID: 39236133] [DOI: 10.1109/tnsre.2024.3454088]
Abstract
EEG is a widely used neural signal source, particularly in motor imagery-based brain-computer interfaces (MI-BCI), and offers distinct advantages in applications such as stroke rehabilitation. Current research concentrates predominantly on bilateral-limb paradigms and their decoding, yet stroke rehabilitation scenarios typically involve a single (unilateral) upper limb. Decoding multiple MI tasks from one limb is challenging because the tasks' spatial neural activities overlap. This study formulates a novel multi-task MI-BCI experimental paradigm for unilateral limbs. The paradigm encompasses four imagined movement directions: top-bottom, left-right, top right-bottom left, and top left-bottom right. Forty-six healthy subjects participated in the experiment. Commonly used machine learning methods, such as FBCSP, EEGNet, deepConvNet, and FBCNet, were employed for evaluation. To improve decoding accuracy, we propose an MVCA method that introduces temporal convolution and an attention mechanism to capture temporal features from multiple views. With the MVCA model, we achieved 40.6% and 64.89% classification accuracies for the four-class and two-class (top right-bottom left versus top left-bottom right) scenarios, respectively. Conclusion: This is the first study demonstrating that motor imagery of multiple directions in a unilateral limb can be decoded. In particular, decoding the two diagonal directions, top right-bottom left and top left-bottom right, yields the best accuracy, which sheds light on future studies. This study advances the development of MI-BCI paradigms, offering preliminary evidence for the feasibility of decoding multiple directional information from EEG and, in turn, expanding the dimensionality of MI control commands.
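The MVCA architecture itself is not reproduced in this listing; as a hedged illustration only, the core idea of weighting temporally convolved EEG features with an attention mechanism can be sketched in a few lines of NumPy (all shapes, kernels, and names here are invented for the example, not taken from the paper):

```python
import numpy as np

def temporal_conv(x, kernel):
    """Channel-wise temporal convolution: x is (channels, time), kernel is 1-D."""
    return np.stack([np.convolve(ch, kernel, mode="valid") for ch in x])

def attention_pool(feats, query):
    """Softmax attention over the time axis of feats (channels, time)."""
    scores = query @ feats                      # (time,) relevance per time step
    weights = np.exp(scores - scores.max())
    weights /= weights.sum()                    # softmax -> weights sum to 1
    return feats @ weights, weights             # attended (channels,) summary

rng = np.random.default_rng(0)
eeg = rng.standard_normal((8, 256))             # 8 channels, 256 samples
feats = temporal_conv(eeg, np.ones(16) / 16)    # smoothing kernel, length 16
summary, w = attention_pool(feats, rng.standard_normal(8))
```

In a trained network the convolution kernel and attention query would be learned; here they are fixed only to show the data flow.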
2
Lee M, Park HY, Park W, Kim KT, Kim YH, Jeong JH. Multi-Task Heterogeneous Ensemble Learning-Based Cross-Subject EEG Classification Under Stroke Patients. IEEE Trans Neural Syst Rehabil Eng 2024; 32:1767-1778. [PMID: 38683717] [DOI: 10.1109/tnsre.2024.3395133]
Abstract
Robot-assisted motor training is applied for neurorehabilitation in stroke patients, using motor imagery (MI) as a representative brain-computer interface paradigm to offer real-life assistance to individuals facing movement challenges. However, the effectiveness of MI training may vary depending on the location of the stroke lesion, which should be taken into account. This paper introduces multi-task electroencephalogram-based heterogeneous ensemble learning (MEEG-HEL), specifically designed for cross-subject training. In the proposed framework, common spatial patterns were used for feature extraction, features were shared and selected according to stroke lesion through sequential forward floating selection, and heterogeneous ensembles were used as classifiers. Nine patients with chronic ischemic stroke participated, engaging in MI and motor execution (ME) paradigms involving finger tapping. The classification criteria for the multi-task setting were established in two ways, taking the characteristics of stroke patients into account. In the cross-subject session, the first was a direction-recognition task for two-handed classification, achieving a performance of 0.7419 (±0.0811) in MI and 0.7061 (±0.1270) in ME. The second was a motor-assessment task for lesion location, with a performance of 0.7457 (±0.1317) in MI and 0.6791 (±0.1253) in ME. In the subject-specific session, performance on both tasks, except for ME on the motor-assessment task, was significantly higher than in the cross-subject session. Furthermore, classification performance in the cross-subject session was similar to or statistically higher than that of the baseline models. The proposed MEEG-HEL holds promise for improving the practicality of neurorehabilitation in clinical settings and for facilitating the detection of lesions.
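The paper uses sequential forward *floating* selection (SFFS); as a hedged sketch of the underlying idea only, plain greedy forward selection (without the floating backtracking step) can be written in a few lines, with an invented class-separation score standing in for the real objective:

```python
import numpy as np

def forward_select(X, y, n_keep, score_fn):
    """Greedy forward selection: repeatedly add the feature that most
    improves score_fn on the currently selected subset."""
    selected = []
    remaining = list(range(X.shape[1]))
    while len(selected) < n_keep:
        best = max(remaining, key=lambda j: score_fn(X[:, selected + [j]], y))
        selected.append(best)
        remaining.remove(best)
    return selected

def separation_score(Xs, y):
    """Toy two-class score: distance of class means over pooled spread."""
    m0, m1 = Xs[y == 0].mean(axis=0), Xs[y == 1].mean(axis=0)
    return np.linalg.norm(m0 - m1) / (Xs.std() + 1e-12)

rng = np.random.default_rng(1)
y = np.repeat([0, 1], 50)
X = rng.standard_normal((100, 10))
X[:, 3] += 3.0 * y                      # feature 3 is strongly class-informative
chosen = forward_select(X, y, n_keep=3, score_fn=separation_score)
```

SFFS additionally tries removing previously chosen features after each addition, which helps escape nested-subset traps; the greedy core above is the same.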
3
Pei Y, Xu J, Chen Q, Wang C, Yu F, Zhang L, Luo W. DTP-Net: Learning to Reconstruct EEG Signals in Time-Frequency Domain by Multi-Scale Feature Reuse. IEEE J Biomed Health Inform 2024; 28:2662-2673. [PMID: 38277252] [DOI: 10.1109/jbhi.2024.3358917]
Abstract
Electroencephalography (EEG) signals are prone to contamination by noise, such as ocular and muscle artifacts. Minimizing these artifacts is crucial for EEG-based downstream applications like disease diagnosis and brain-computer interface (BCI). This paper presents a new EEG denoising model, DTP-Net. It is a fully convolutional neural network comprising Densely-connected Temporal Pyramids (DTPs) placed between two learnable time-frequency transformations. In the time-frequency domain, DTPs facilitate efficient propagation of multi-scale features extracted from EEG signals of any length, leading to effective noise reduction. Comprehensive experiments on two public semi-simulated datasets demonstrate that the proposed DTP-Net consistently outperforms existing state-of-the-art methods on metrics including relative root mean square error (RRMSE) and signal-to-noise ratio improvement (ΔSNR). Moreover, the proposed DTP-Net is applied to a BCI classification task, yielding an improvement of up to 5.55% in accuracy. This confirms the potential of DTP-Net for applications in the fields of EEG-based neuroscience and neuro-engineering. An in-depth analysis further illustrates the representation learning behavior of each module in DTP-Net, demonstrating its robustness and reliability.
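The paper's exact metric implementations are not given here; the following is a sketch of the standard definitions of RRMSE and SNR improvement as usually used in EEG-denoising benchmarks, with an invented toy artifact for demonstration:

```python
import numpy as np

def rrmse(denoised, clean):
    """Relative RMSE: RMS of the residual divided by RMS of the clean reference."""
    return np.sqrt(np.mean((denoised - clean) ** 2)) / np.sqrt(np.mean(clean ** 2))

def snr_db(signal, noise):
    """SNR in dB: 10*log10(signal power / noise power)."""
    return 10.0 * np.log10(np.mean(signal ** 2) / np.mean(noise ** 2))

def delta_snr(noisy, denoised, clean):
    """SNR improvement: SNR after denoising minus SNR before."""
    return snr_db(clean, denoised - clean) - snr_db(clean, noisy - clean)

t = np.linspace(0.0, 1.0, 512)
clean = np.sin(2 * np.pi * 10 * t)                      # 10 Hz "EEG" component
noisy = clean + 0.5 * np.sin(2 * np.pi * 50 * t)        # 50 Hz artifact added
denoised = clean + 0.05 * np.sin(2 * np.pi * 50 * t)    # hypothetical denoiser output
```

Lower RRMSE and a positive ΔSNR both indicate successful artifact removal.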
4
Jeong JH, Cho JH, Lee BH, Lee SW. Real-Time Deep Neurolinguistic Learning Enhances Noninvasive Neural Language Decoding for Brain-Machine Interaction. IEEE Trans Cybern 2023; 53:7469-7482. [PMID: 36251899] [DOI: 10.1109/tcyb.2022.3211694]
Abstract
Electroencephalogram (EEG)-based brain-machine interfaces (BMIs) have been used to help patients regain motor function and have recently been validated in healthy people because of their ability to decipher human intentions directly. In particular, neurolinguistic research using EEG has been investigated as an intuitive and naturalistic communication tool between humans and machines. In this study, neural language was decoded directly from speech imagery using the proposed deep neurolinguistic learning. Through real-time experiments, we evaluated whether BMI-based cooperative tasks between multiple users could be accomplished using a variety of neural languages. We successfully demonstrated a BMI system that supports a variety of scenarios, such as essential activity, collaborative play, and emotional interaction. This outcome presents a novel BMI frontier that can interact at the level of human-like intelligence in real time and extends the boundaries of the communication paradigm.
5
Liu T, Li B, Zhang C, Chen P, Zhao W, Yan B. Real-Time Classification of Motor Imagery Using Dynamic Window-Level Granger Causality Analysis of fMRI Data. Brain Sci 2023; 13:1406. [PMID: 37891775] [PMCID: PMC10604978] [DOI: 10.3390/brainsci13101406]
Abstract
This article presents a method for extracting neural signal features to identify imagined left- and right-hand grasping movements. A functional magnetic resonance imaging (fMRI) experiment is employed to identify four brain regions with significant activation during motor imagery (MI), and the effective connections between these regions of interest (ROIs) are calculated using Dynamic Window-level Granger Causality (DWGC). A real-time fMRI (rt-fMRI) classification system for left- and right-hand MI is then developed on the Open-NFT platform. We conducted data acquisition and processing on three subjects, all of whom were recruited from a local college. Using a Support Vector Machine (SVM) classifier, the maximum accuracy on real-time three-class classification (rest, left hand, and right hand) with effective connections is 69.3%, which is 3% higher on average than that of traditional multivoxel pattern classification analysis. Moreover, the approach significantly improves classification accuracy during the initial stage of MI tasks while reducing latency effects in real-time decoding. The study suggests that the effective connections obtained through the DWGC method serve as valuable features for real-time decoding of MI from fMRI and are more sensitive to changes in brain states. This research offers theoretical support and technical guidance for extracting neural signal features in fMRI-based studies.
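The DWGC specifics are beyond this summary; as a hedged sketch of the underlying idea, plain pairwise Granger causality compares a restricted autoregressive model (target predicted from its own past) against a full model (target predicted from its own past plus the candidate driver's past), and a windowed variant simply repeats this over sliding windows. All names and parameters below are invented for illustration:

```python
import numpy as np

def ar_residual_var(target, predictors, order):
    """Least-squares AR fit; returns residual variance of the target series."""
    n = len(target)
    rows = []
    for t in range(order, n):
        row = []
        for p in predictors:
            row.extend(p[t - order:t])          # lagged values of each predictor
        rows.append(row)
    A = np.asarray(rows)
    b = target[order:]
    coef, *_ = np.linalg.lstsq(A, b, rcond=None)
    return np.var(b - A @ coef)

def granger(x, y, order=2):
    """Granger causality x -> y: log ratio of restricted to full residual variance."""
    restricted = ar_residual_var(y, [y], order)      # y from its own past only
    full = ar_residual_var(y, [y, x], order)         # y from past of y and x
    return np.log(restricted / full)

rng = np.random.default_rng(2)
x = rng.standard_normal(2000)
y = np.zeros(2000)
for t in range(1, 2000):                             # x drives y with lag 1
    y[t] = 0.8 * x[t - 1] + 0.1 * rng.standard_normal()
```

A large positive value in one direction and a near-zero value in the other indicates a directed (effective) connection.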
Affiliation(s)
- Bin Yan
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou 450001, China; (T.L.)
6
Liu K, Yang M, Xing X, Yu Z, Wu W. SincMSNet: a Sinc filter convolutional neural network for EEG motor imagery classification. J Neural Eng 2023; 20:056024. [PMID: 37683664] [DOI: 10.1088/1741-2552/acf7f4]
Abstract
Objective. Motor imagery (MI) is widely used in brain-computer interfaces (BCIs). However, decoding MI-EEG with convolutional neural networks (CNNs) remains challenging due to individual variability. Approach. We propose a fully end-to-end CNN called SincMSNet to address this issue. SincMSNet employs Sinc filters to extract subject-specific frequency-band information and mixed-depth convolution to extract multi-scale temporal information for each band. It then applies a spatial convolutional block to extract spatial features and a temporal log-variance block to obtain classification features. SincMSNet is trained under the joint supervision of cross-entropy and center loss to achieve inter-class separable and intra-class compact representations of EEG signals. Main results. We evaluated the performance of SincMSNet on the BCIC-IV-2a (four-class) and OpenBMI (two-class) datasets. SincMSNet surpasses the benchmark methods: in four-class and two-class inter-session analysis it achieves average accuracies of 80.70% and 71.50%, respectively, and in four-class and two-class single-session analysis, 84.69% and 76.99%, respectively. Additionally, visualizations of the band-pass filter bands learned by the Sinc filters demonstrate the network's ability to extract subject-specific frequency-band information from EEG. Significance. This study highlights the potential of SincMSNet for improving MI-EEG decoding and designing more robust MI-BCIs. The source code for SincMSNet is available at https://github.com/Want2Vanish/SincMSNet.
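In the SincNet-style parameterization such a network builds on, each first-layer kernel is a band-pass filter defined entirely by two learnable cutoff frequencies: the difference of two windowed ideal low-pass (sinc) filters. A minimal sketch with fixed (not learned) cutoffs, all parameter values invented:

```python
import numpy as np

def sinc_bandpass(f1, f2, length=129, fs=250.0):
    """Windowed-sinc band-pass kernel with cutoffs f1 < f2 (Hz), SincNet-style."""
    n = np.arange(length) - (length - 1) / 2            # centered time axis
    lowpass = lambda fc: 2 * fc / fs * np.sinc(2 * fc / fs * n)
    h = (lowpass(f2) - lowpass(f1)) * np.hamming(length)  # band-pass = LP(f2) - LP(f1)
    return h / np.abs(h).sum()

fs = 250.0
h = sinc_bandpass(8.0, 30.0, fs=fs)                     # mu/beta band for MI-EEG
H = np.abs(np.fft.rfft(h, 1024))                        # magnitude response
freqs = np.fft.rfftfreq(1024, d=1.0 / fs)
in_band = H[(freqs > 10) & (freqs < 28)].mean()
out_band = H[(freqs < 2) | (freqs > 60)].mean()
```

Because only the two cutoffs are learned per filter, the layer has far fewer parameters than a free convolution and yields directly interpretable frequency bands.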
Affiliation(s)
- Ke Liu
- Chongqing Key Laboratory of Computational Intelligence, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
- Key Laboratory of Big Data Intelligent Computing, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
- Mingzhao Yang
- Chongqing Key Laboratory of Computational Intelligence, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
- Xin Xing
- Chongqing Key Laboratory of Computational Intelligence, Chongqing University of Posts and Telecommunications, Chongqing 400065, People's Republic of China
- Zhuliang Yu
- College of Automation Science and Engineering, South China University of Technology, Guangzhou 510641, People's Republic of China
- Wei Wu
- Alto Neuroscience, Inc., Los Altos, CA 94022, United States of America
7
Ma J, Yang B, Qiu W, Zhang J, Yan L, Wang W. Recognizable Rehabilitation Movements of Multiple Unilateral Upper Limb: an fMRI Study of Motor Execution and Motor Imagery. J Neurosci Methods 2023; 392:109861. [PMID: 37075914] [DOI: 10.1016/j.jneumeth.2023.109861]
Abstract
BACKGROUND This paper investigates the recognizability of multiple unilateral upper-limb movements for stroke rehabilitation. METHODS A functional magnetic resonance imaging (fMRI) experiment is employed to study motor execution (ME) and motor imagery (MI) of four movements of the unilateral upper limb: hand-grasping, hand-handling, arm-reaching, and wrist-twisting. The fMRI images of the ME and MI tasks are statistically analyzed to delineate regions of interest (ROIs). Parameter estimates associated with the ROIs are then evaluated for each ME and MI task, and differences in ROIs across movements are compared using analysis of covariance (ANCOVA). RESULTS All ME and MI tasks activate motor areas of the brain, and there are significant differences (p < 0.05) in the ROIs evoked by different movements; the activation area is largest for the hand-grasping task. CONCLUSION The four proposed movements can be adopted as MI tasks, especially for stroke rehabilitation, since they are highly recognizable and activate more brain areas during MI and ME.
Affiliation(s)
- Jun Ma
- School of Mechatronic Engineering and Automation, School of Medicine, Research Center of Brain Computer Engineering, Shanghai University, Shanghai 200441, China
- Banghua Yang
- School of Mechatronic Engineering and Automation, School of Medicine, Research Center of Brain Computer Engineering, Shanghai University, Shanghai 200441, China; Engineering Research Center of Traditional Chinese Medicine Intelligent Rehabilitation, Ministry of Education, Shanghai 201203, China
- Wenzheng Qiu
- School of Mechatronic Engineering and Automation, School of Medicine, Research Center of Brain Computer Engineering, Shanghai University, Shanghai 200441, China
- Jian Zhang
- Shanghai Universal Medical Imaging Diagnostic Center, Shanghai University, Shanghai 200441, China
- Linfeng Yan
- Department of Radiology & Functional and Molecular Imaging Key Lab of Shaanxi Province, Tangdu Hospital, Fourth Military Medical University, Shaanxi 710038, China
- Wen Wang
- Department of Radiology & Functional and Molecular Imaging Key Lab of Shaanxi Province, Tangdu Hospital, Fourth Military Medical University, Shaanxi 710038, China
8
Gwon D, Won K, Song M, Nam CS, Jun SC, Ahn M. Review of public motor imagery and execution datasets in brain-computer interfaces. Front Hum Neurosci 2023; 17:1134869. [PMID: 37063105] [PMCID: PMC10101208] [DOI: 10.3389/fnhum.2023.1134869]
Abstract
The demand for public datasets has increased as data-driven methodologies have been introduced in the field of brain-computer interfaces (BCIs). Indeed, many BCI datasets are available in various platforms and repositories on the web, and the number of studies employing them appears to be increasing. Motor imagery is one of the major control paradigms in the BCI field, and many datasets related to motor tasks are already open to the public. To the best of our knowledge, however, these datasets themselves have not been systematically investigated and evaluated, although data quality is essential for reliable results and for the design of subject- or system-independent BCIs. In this study, we thoroughly investigated motor imagery/execution EEG datasets recorded from healthy participants and published over the past 13 years. The 25 datasets were collected from six repositories and subjected to a meta-analysis. In particular, we reviewed the recording settings and experimental designs, and evaluated data quality as classification accuracy from standard algorithms such as Common Spatial Pattern (CSP) and Linear Discriminant Analysis (LDA) for comparison and compatibility across the datasets. We found that various stimulation types, such as text, figures, or arrows, were used to instruct subjects what to imagine, and that trial length also differed, ranging from 2.5 to 29 s with a mean of 9.8 s. Typically, each trial consisted of multiple sections: pre-rest (2.38 s), imagination ready (1.64 s), imagination (4.26 s, ranging from 1 to 10 s), and post-rest (3.38 s). In a meta-analysis of the 861 sessions from all datasets, the mean classification accuracy on the two-class (left-hand vs. right-hand motor imagery) problem was 66.53%, and the proportion of BCI poor performers, those unable to reach proficiency with a BCI system, was 36.27% according to the estimated accuracy distribution. Further, we analyzed the CSP features and found that each dataset forms a cluster and that some datasets overlap in the feature space, indicating greater similarity among them. Finally, we checked the minimal essential information (continuous signals, event type/latency, and channel information) that a dataset should include for convenient use, and found that only 71% of the datasets met those criteria. Our attempt to evaluate and compare the public datasets is timely, and these results will contribute to understanding dataset quality and recording settings, as well as to the use of public datasets in future BCI work.
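The standard two-class CSP baseline used in such evaluations can be computed as a generalized eigenproblem on the class-average covariance matrices. A minimal sketch (not code from the review; data and shapes invented) using SciPy's generalized symmetric eigensolver:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b):
    """Two-class CSP: trials_* is (n_trials, channels, time).
    Solves C_a w = lambda (C_a + C_b) w; the extreme eigenvectors give spatial
    filters that maximize the variance ratio between the two classes."""
    cov = lambda trials: np.mean(
        [t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
    ca, cb = cov(trials_a), cov(trials_b)
    eigvals, eigvecs = eigh(ca, ca + cb)          # generalized eigendecomposition
    return eigvecs[:, np.argsort(eigvals)[::-1]]  # columns sorted high -> low

rng = np.random.default_rng(3)
mix_a = np.diag([3.0, 1.0, 1.0])                  # class A: channel 0 is strong
mix_b = np.diag([1.0, 1.0, 3.0])                  # class B: channel 2 is strong
trials_a = mix_a @ rng.standard_normal((20, 3, 200))
trials_b = mix_b @ rng.standard_normal((20, 3, 200))
W = csp_filters(trials_a, trials_b)
va = np.mean([np.var(W[:, 0] @ t) for t in trials_a])
vb = np.mean([np.var(W[:, 0] @ t) for t in trials_b])
```

Log-variances of a few top and bottom filter outputs are then typically fed to LDA, which is the CSP+LDA pipeline the review uses as its quality benchmark.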
Affiliation(s)
- Daeun Gwon
- Department of Computer Science and Electrical Engineering, Handong Global University, Pohang, Republic of Korea
- Kyungho Won
- School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Gwangju, Republic of Korea
- Minseok Song
- Department of Computer Science and Electrical Engineering, Handong Global University, Pohang, Republic of Korea
- Chang S. Nam
- Edward P. Fitts Department of Industrial and Systems Engineering, North Carolina State University, Raleigh, NC, United States
- Department of Industrial and Management Systems Engineering, Kyung Hee University, Yongin-si, Republic of Korea
- Sung Chan Jun
- School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Gwangju, Republic of Korea
- AI Graduate School, Gwangju Institute of Science and Technology, Gwangju, Republic of Korea
- Minkyu Ahn
- Department of Computer Science and Electrical Engineering, Handong Global University, Pohang, Republic of Korea
- School of Computer Science and Electrical Engineering, Handong Global University, Pohang, Republic of Korea
9
Kansal S, Garg D, Upadhyay A, Mittal S, Talwar GS. A novel deep learning approach to predict subject arm movements from EEG-based signals. Neural Comput Appl 2023. [DOI: 10.1007/s00521-023-08310-9]
10
Zhang R, Chen Y, Xu Z, Zhang L, Hu Y, Chen M. Recognition of single upper limb motor imagery tasks from EEG using multi-branch fusion convolutional neural network. Front Neurosci 2023; 17:1129049. [PMID: 36908782] [PMCID: PMC9992961] [DOI: 10.3389/fnins.2023.1129049]
Abstract
Motor imagery-based brain-computer interfaces (MI-BCI) have important applications in neurorehabilitation and robot control. At present, most MI-BCIs use bilateral upper-limb motor tasks, and relatively few studies address single upper-limb MI tasks. In this work, we studied the recognition of right-upper-limb motor imagery EEG signals and proposed a multi-branch fusion convolutional neural network (MF-CNN) that simultaneously learns features from the raw EEG signals and from their two-dimensional time-frequency maps. The dataset contained three motor imagery tasks, extending the arm, rotating the wrist, and grasping an object, from 25 subjects. In the binary classification between the grasping and arm-extending tasks, MF-CNN achieved an average classification accuracy of 78.52% with a kappa value of 0.57; when all three tasks were classified, the accuracy and kappa value were 57.06% and 0.36, respectively. MF-CNN outperformed single-branch CNN baselines in both the binary and three-class settings. In conclusion, MF-CNN makes full use of the time-domain and frequency-domain features of EEG, improves decoding accuracy for single-limb motor imagery tasks, and contributes to applying MI-BCI in motor-function rehabilitation training after stroke.
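The 2-D time-frequency maps such a multi-branch network consumes are conventionally produced with a short-time Fourier transform. A hedged sketch using SciPy (all parameter values and the toy signal are invented, not the paper's preprocessing):

```python
import numpy as np
from scipy.signal import stft

def time_frequency_maps(eeg, fs=250.0, nperseg=64):
    """Per-channel STFT magnitude maps: returns (channels, freq_bins, frames)."""
    maps = []
    for ch in eeg:
        f, t, Z = stft(ch, fs=fs, nperseg=nperseg)
        maps.append(np.abs(Z))
    return f, t, np.stack(maps)

rng = np.random.default_rng(4)
fs, n = 250.0, 1000
rhythm = np.sin(2 * np.pi * 12 * np.arange(n) / fs)   # 12 Hz mu-band rhythm
eeg = 0.1 * rng.standard_normal((3, n)) + rhythm      # 3 noisy channels
f, t, tf = time_frequency_maps(eeg, fs=fs)
peak_bin = np.argmax(tf[0].mean(axis=1))              # strongest frequency bin
```

The raw-signal branch and the map branch would then be fused (e.g., by concatenating their learned features) before the classifier head.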
Affiliation(s)
- Rui Zhang
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, China
- Yadi Chen
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, China
- Zongxin Xu
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, China
- Lipeng Zhang
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, China
- Yuxia Hu
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, China
- Mingming Chen
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, China
11
Cho JH, Jeong JH, Lee SW. NeuroGrasp: Real-Time EEG Classification of High-Level Motor Imagery Tasks Using a Dual-Stage Deep Learning Framework. IEEE Trans Cybern 2022; 52:13279-13292. [PMID: 34748509] [DOI: 10.1109/tcyb.2021.3122969]
Abstract
Brain-computer interfaces (BCIs) have been widely employed to identify and estimate a user's intention to trigger a robotic device by decoding motor imagery (MI) from an electroencephalogram (EEG). However, developing a BCI system driven by MI related to natural hand-grasp tasks is challenging due to its high complexity. Although numerous BCI studies have successfully decoded the movement intention of large body parts, such as both hands, the arms, or the legs, decoding MI of high-level behaviors such as hand grasping is essential to further expand the versatility of MI-based BCIs. In this study, we propose NeuroGrasp, a dual-stage deep learning framework that decodes multiple hand-grasp types from EEG signals under the MI paradigm. The proposed method effectively combines EEG- and electromyography (EMG)-based learning, such that EEG-only inference becomes possible at test time. The EMG guidance during model training allows the BCI to predict hand-grasp types from EEG signals accurately. Consequently, NeuroGrasp improved classification performance offline and demonstrated stable classification performance online. Across 12 subjects, we obtained an average offline classification accuracy of 0.68 (±0.09) in four-grasp-type classification and 0.86 (±0.04) in two-grasp-category classification. In addition, we obtained an average online classification accuracy of 0.65 (±0.09) and 0.79 (±0.09) across six high-performance subjects. Because the proposed method demonstrated stable classification performance both online and offline, we expect that it could contribute to different BCI applications, including robotic hands and neuroprosthetics for handling everyday objects.
12
M3CV: A multi-subject, multi-session, and multi-task database for EEG-based biometrics challenge. Neuroimage 2022; 264:119666. [PMID: 36206939] [DOI: 10.1016/j.neuroimage.2022.119666]
Abstract
EEG signals exhibit commonality and variability across subjects, sessions, and tasks, yet most existing EEG studies focus on mean group effects (commonality) by averaging signals over trials and subjects; the substantial intra- and inter-subject variability of EEG has often been overlooked. Recent technological advances in machine learning, especially deep learning, have brought innovations to EEG signal applications in many respects, but great challenges remain in cross-session, cross-task, and cross-subject EEG decoding. In this work, an EEG-based biometric competition based on the large-scale M3CV (A Multi-subject, Multi-session, and Multi-task Database for investigation of EEG Commonality and Variability) database was launched to better characterize and harness intra- and inter-subject variability and to promote the development of machine learning algorithms in this field. In the M3CV database, EEG signals were recorded from 106 subjects, of whom 95 repeated two sessions of the experiments on different days. The experiment comprised six paradigms, resting-state, transient-state sensory, steady-state sensory, cognitive oddball, motor execution, and steady-state sensory with selective attention, yielding 14 types of EEG signals and 120,000 epochs. Two learning tasks (identification and verification), performance metrics, and baseline methods were introduced in the competition. In general, the proposed M3CV dataset and the EEG-based biometric competition aim to provide an opportunity to develop advanced machine learning algorithms and to achieve an in-depth understanding of the commonality and variability of EEG signals across subjects, sessions, and tasks.
13
Zhu F, Li Y, Shi Z, Shi W. TV-NARX and Coiflets WPT based time-frequency Granger causality with application to corticomuscular coupling in hand-grasping. Front Neurosci 2022; 16:1014495. [PMID: 36248661] [PMCID: PMC9560889] [DOI: 10.3389/fnins.2022.1014495]
Abstract
Studying the synchronization characteristics and functional connections between the motor cortex and muscles during hand-grasping movements is important for basic research, clinical disease diagnosis, and rehabilitation evaluation. EEG and electromyographic (EMG) signals from 15 healthy participants were used to analyze corticomuscular coupling while grasping three different objects (card, ball, and cup), using a time-frequency Granger causality method based on a time-varying nonlinear autoregressive with exogenous input (TV-NARX) model and the Coiflets wavelet packet transform. The results show a bidirectional coupling between cortex and muscles during grasping, mainly in the beta and gamma frequency bands: across the different grasping actions there is a statistically significant difference (p < 0.05) in the beta band during movement execution, and a statistically significant difference (p < 0.1) in the gamma band during movement preparation. The proposed method effectively characterizes EEG-EMG synchronization features and functional connections in different frequency bands during the movement preparation and execution phases in the time-frequency domain, and reveals how the sensorimotor system controls hand-grasping by regulating the intensity of synchronized neuronal oscillations.
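A common first-pass measure of corticomuscular coupling (simpler than, and distinct from, the paper's time-frequency Granger analysis) is magnitude-squared coherence between EEG and EMG. A sketch with SciPy on a toy pair of signals sharing a 20 Hz (beta-band) drive; all parameters are invented:

```python
import numpy as np
from scipy.signal import coherence

rng = np.random.default_rng(5)
fs = 1000.0
t = np.arange(0, 10, 1 / fs)                    # 10 s of "recordings"
common = np.sin(2 * np.pi * 20 * t)             # shared 20 Hz beta-band drive
eeg = common + rng.standard_normal(t.size)      # cortical channel + noise
emg = common + rng.standard_normal(t.size)      # muscle channel + noise

# Magnitude-squared coherence, averaged over Welch segments.
f, coh = coherence(eeg, emg, fs=fs, nperseg=1024)
beta_coh = coh[(f >= 18) & (f <= 22)].max()     # coupling near the shared drive
baseline = coh[(f >= 60) & (f <= 80)].mean()    # band with no shared drive
```

Coherence is symmetric, which is why directed methods such as Granger causality are needed to establish the direction of the cortex-muscle interaction.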
Affiliation(s)
- Feifei Zhu
- College of Electrical Engineering and Automation, Fuzhou University, Fuzhou, China
- Fujian Provincial Key Laboratory of Medical Instrument and Pharmaceutical Technology, Fuzhou University, Fuzhou, China
- Yurong Li
- College of Electrical Engineering and Automation, Fuzhou University, Fuzhou, China
- Fujian Provincial Key Laboratory of Medical Instrument and Pharmaceutical Technology, Fuzhou University, Fuzhou, China
- *Correspondence: Yurong Li
- Zhengyi Shi
- College of Electrical Engineering and Automation, Fuzhou University, Fuzhou, China
- Fujian Provincial Key Laboratory of Medical Instrument and Pharmaceutical Technology, Fuzhou University, Fuzhou, China
- Wuxiang Shi
- College of Electrical Engineering and Automation, Fuzhou University, Fuzhou, China
- Fujian Provincial Key Laboratory of Medical Instrument and Pharmaceutical Technology, Fuzhou University, Fuzhou, China
14
Jeong JH, Cho JH, Lee YE, Lee SH, Shin GH, Kweon YS, Millán JDR, Müller KR, Lee SW. 2020 International brain-computer interface competition: A review. Front Hum Neurosci 2022; 16:898300. [PMID: 35937679] [PMCID: PMC9354666] [DOI: 10.3389/fnhum.2022.898300]
Abstract
The brain-computer interface (BCI) has been investigated as a communication tool between the brain and external devices, and BCIs have been extended beyond communication and control over the years. The 2020 international BCI competition aimed to provide high-quality, openly accessible neuroscientific data that could be used to evaluate the current degree of technical advance in BCI. Although a variety of challenges remain for future BCI advances, we discuss some of the more recent application directions: (i) few-shot EEG learning, (ii) micro-sleep detection, (iii) imagined speech decoding, (iv) cross-session classification, and (v) EEG (+ear-EEG) detection in an ambulatory environment. Not only scientists from the BCI field but also scholars with a broad variety of backgrounds and nationalities competed to address these challenges. Each dataset was prepared and split into training and validation sets released to the competitors, followed by a test set. Remarkable BCI advances were identified through the 2020 competition, indicating several trends of interest to BCI researchers.
Affiliation(s)
- Ji-Hoon Jeong
- School of Computer Science, Chungbuk National University, Cheongju, South Korea
- Jeong-Hyun Cho
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Young-Eun Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Seo-Hyun Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Gi-Hwan Shin
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Young-Seok Kweon
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- José del R. Millán
- Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, TX, United States
- Klaus-Robert Müller
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Machine Learning Group, Department of Computer Science, Berlin Institute of Technology, Berlin, Germany
- Max Planck Institute for Informatics, Saarbrücken, Germany
- Department of Artificial Intelligence, Korea University, Seoul, South Korea
- Seong-Whan Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Department of Artificial Intelligence, Korea University, Seoul, South Korea
15
A new 2-class unilateral upper limb motor imagery tasks for stroke rehabilitation training. Medicine in Novel Technology and Devices 2022. [DOI: 10.1016/j.medntd.2021.100100]
16
Lee DY, Jeong JH, Lee BH, Lee SW. Motor Imagery Classification Using Inter-Task Transfer Learning via a Channel-Wise Variational Autoencoder-Based Convolutional Neural Network. IEEE Trans Neural Syst Rehabil Eng 2022; 30:226-237. [PMID: 35041605] [DOI: 10.1109/tnsre.2022.3143836]
Abstract
Highly sophisticated control based on a brain-computer interface (BCI) requires decoding kinematic information from brain signals. The forearm is a region of the upper limb that is often used in everyday life, but intuitive movements within the same limb have rarely been investigated in previous BCI studies. In this study, we focused on decoding various forearm movements from electroencephalography (EEG) signals using a small number of samples. Ten healthy participants took part in an experiment and performed motor execution (ME) and motor imagery (MI) of intuitive movement tasks (Dataset I). We propose a convolutional neural network using a channel-wise variational autoencoder (CVNet) based on inter-task transfer learning. Our approach is that training on the reconstructed ME-EEG signals together with only a small amount of MI-EEG signals can still achieve sufficient classification performance. The proposed CVNet was validated on our own Dataset I and on a public dataset, BNCI Horizon 2020 (Dataset II). The classification accuracies for the various movements are 0.83 (±0.04) and 0.69 (±0.04) for Datasets I and II, respectively. The results show that the proposed method improves performance by approximately 0.09-0.27 and 0.08-0.24 over the conventional models for Datasets I and II, respectively. The outcomes suggest that a model for decoding imagined movements can be trained using ME data and only a small number of MI samples. Hence, this demonstrates the feasibility of BCI learning strategies in which deep networks are trained with only a small calibration dataset and little calibration time, while maintaining stable performance.
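The core idea of this entry, pretraining on plentiful motor-execution (ME) data and then fine-tuning on a few motor-imagery (MI) trials, can be illustrated with a toy warm-start sketch. This is not the authors' CVNet: a plain logistic regression stands in for the network, and all features, dimensions, and shift parameters below are hypothetical, chosen only to show the inter-task warm-start loop.

```python
import numpy as np

rng = np.random.default_rng(0)

def train_logreg(X, y, w=None, lr=0.1, epochs=200):
    """Gradient-descent logistic regression; pass w to warm-start from it."""
    if w is None:
        w = np.zeros(X.shape[1])          # cold start
    for _ in range(epochs):
        z = np.clip(X @ w, -30, 30)       # clip to avoid exp overflow
        p = 1.0 / (1.0 + np.exp(-z))
        w = w - lr * X.T @ (p - y) / len(y)
    return w

def accuracy(w, X, y):
    return float(np.mean((X @ w > 0) == y))

# Hypothetical 16-D features; the MI class direction is a shifted version
# of the ME one, mimicking related-but-different tasks.
direction = rng.normal(size=16)

def make(n, shift):
    y = rng.integers(0, 2, n)
    X = rng.normal(size=(n, 16)) + np.outer(2 * y - 1, direction + shift)
    return X, y

X_me, y_me = make(500, shift=0.0)    # plentiful ME trials
X_mi, y_mi = make(20, shift=0.3)     # few MI calibration trials
X_te, y_te = make(500, shift=0.3)    # held-out MI test set

w_me = train_logreg(X_me, y_me)                    # pretrain on ME
w_warm = train_logreg(X_mi, y_mi, w=w_me.copy())   # fine-tune on MI
w_cold = train_logreg(X_mi, y_mi)                  # MI-only baseline

print("warm:", accuracy(w_warm, X_te, y_te),
      "cold:", accuracy(w_cold, X_te, y_te))
```

The design point the sketch makes is the same as the abstract's: the MI-only model sees just 20 trials, while the warm-started model inherits structure learned from 500 ME trials, so the small MI set only has to nudge the decision boundary rather than learn it from scratch.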
17
Jeong JH, Cho JH, Shim KH, Kwon BH, Lee BH, Lee DY, Lee DH, Lee SW. Multimodal signal dataset for 11 intuitive movement tasks from single upper extremity during multiple recording sessions. Gigascience 2020; 9:giaa098. [PMID: 33034634] [PMCID: PMC7539536] [DOI: 10.1093/gigascience/giaa098]
Abstract
BACKGROUND Non-invasive brain-computer interfaces (BCIs) have been developed for realizing natural bi-directional interaction between users and external robotic systems. However, the communication between users and BCI systems through artificial matching is a critical issue. Recently, BCIs have been developed to adopt intuitive decoding, which is the key to solving several problems such as a small number of classes and manually matching BCI commands with device control. Unfortunately, the advances in this area have been slow owing to the lack of large and uniform datasets. This study provides a large intuitive dataset for 11 different upper extremity movement tasks obtained during multiple recording sessions. The dataset includes 60-channel electroencephalography, 7-channel electromyography, and 4-channel electro-oculography of 25 healthy participants collected over 3-day sessions for a total of 82,500 trials across all the participants. FINDINGS We validated our dataset via neurophysiological analysis. We observed clear sensorimotor de-/activation and spatial distribution related to real-movement and motor imagery, respectively. Furthermore, we demonstrated the consistency of the dataset by evaluating the classification performance of each session using a baseline machine learning method. CONCLUSIONS The dataset includes the data of multiple recording sessions, various classes within the single upper extremity, and multimodal signals. This work can be used to (i) compare the brain activities associated with real movement and imagination, (ii) improve the decoding performance, and (iii) analyze the differences among recording sessions. Hence, this study, as a Data Note, has focused on collecting data required for further advances in the BCI technology.
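The totals in this abstract factor cleanly: 82,500 trials over 25 participants and 3 sessions gives 1,100 trials per session, or 100 per task per session across the 11 tasks (that per-task figure is derived here, not stated in the abstract). The channel layout (60 EEG + 7 EMG + 4 EOG) is from the abstract; the sampling rate and epoch length below are placeholders, and the continuous recording is synthetic, used only to show the trial bookkeeping and epoching shape.

```python
import numpy as np

# Channel counts stated in the abstract.
N_EEG, N_EMG, N_EOG = 60, 7, 4
N_CH = N_EEG + N_EMG + N_EOG                      # 71 channels per trial

# Stated totals: 25 participants, 3 sessions, 11 tasks, 82,500 trials.
participants, sessions, tasks, total = 25, 3, 11, 82_500
per_session = total // (participants * sessions)  # 1,100 trials/session
per_task = per_session // tasks                   # 100 trials/task/session (derived)

# Placeholder acquisition parameters (not from the abstract).
fs, epoch_s = 250, 4                              # 250 Hz, 4 s trials
samples = fs * epoch_s

# Epoch a synthetic continuous multimodal recording into per-trial windows:
# (channels, time) -> (trials, channels, samples_per_trial).
rng = np.random.default_rng(0)
continuous = rng.normal(size=(N_CH, per_task * samples))
epochs = continuous.reshape(N_CH, per_task, samples).transpose(1, 0, 2)

print(N_CH, per_session, per_task, epochs.shape)
```

The trials-first `(trials, channels, samples)` layout is the shape most EEG toolchains expect for classification, which is why the reshape is followed by a transpose rather than slicing in a Python loop.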
Affiliation(s)
- Ji-Hoon Jeong
- Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
- Jeong-Hyun Cho
- Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
- Kyung-Hwan Shim
- Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
- Byoung-Hee Kwon
- Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
- Byeong-Hoo Lee
- Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
- Do-Yeun Lee
- Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
- Dae-Hyeok Lee
- Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
- Seong-Whan Lee
- Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
- Department of Artificial Intelligence, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea