1. AL-Quraishi MS, Tan WH, Elamvazuthi I, Ooi CP, Saad NM, Al-Hiyali MI, Karim H, Azhar Ali SS. Cortical signals analysis to recognize intralimb mobility using modified RNN and various EEG quantities. Heliyon 2024; 10:e30406. PMID: 38726180; PMCID: PMC11079093; DOI: 10.1016/j.heliyon.2024.e30406.
Abstract
Electroencephalogram (EEG) signals are critical in interpreting sensorimotor activities for predicting body movements. However, their efficacy in identifying intralimb movements, such as dorsiflexion and plantar flexion of the foot, remains suboptimal. This study explores whether various EEG signal quantities can effectively recognize intralimb movements to facilitate the development of brain-computer interface (BCI) devices for foot rehabilitation. Twenty-two healthy, right-handed participants took part. EEG data were collected using 21 electrodes positioned over the motor cortex, while two electromyography (EMG) electrodes recorded the onset of ankle joint movements. The study focused on analyzing slow cortical potentials (SCP) and sensorimotor rhythms (SMR) in the alpha and beta bands of the EEG. Five key features were extracted: fourth-order autoregressive coefficients, variance, waveform length, standard deviation, and permutation entropy. A modified recurrent neural network (RNN) comprising Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) algorithms was developed for movement recognition and compared against conventional machine learning algorithms, including nonlinear support vector machine (SVM) and k-nearest neighbour (kNN) classifiers. The performance of the proposed models was assessed using two data schemes: within-subject and across-subjects. The GRU and LSTM models significantly outperformed the traditional machine learning algorithms in recognizing the different EEG signal quantities for intralimb movement: LSTM achieved accuracies of 98.87 ± 1.80% (within-subject) and 87.38 ± 0.86% (across-subjects), while GRU achieved 99.18 ± 1.28% and 86.44 ± 0.69%, respectively. This advancement could significantly benefit the development of BCI devices aimed at foot rehabilitation, suggesting a new avenue for enhancing physical therapy outcomes.
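The time-domain quantities named in this abstract are simple to compute. The sketch below is a minimal numpy illustration of four of them (variance, standard deviation, waveform length, permutation entropy) plus a least-squares AR(4) fit; it is my own illustrative code with invented function names, not the authors' implementation.

```python
import math
import numpy as np

def waveform_length(x):
    """Cumulative absolute difference between consecutive samples."""
    return np.sum(np.abs(np.diff(x)))

def permutation_entropy(x, order=3, delay=1):
    """Normalised permutation entropy (0 = fully predictable ordering)."""
    n = len(x) - (order - 1) * delay
    patterns = np.array([tuple(np.argsort(x[i:i + order * delay:delay]))
                         for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    return float(-np.sum(p * np.log2(p)) / np.log2(math.factorial(order)))

def ar_coefficients(x, order=4):
    """Least-squares AR model: x[t] ~ sum_k a_k * x[t-k]."""
    A = np.column_stack([x[order - k:len(x) - k] for k in range(1, order + 1)])
    coef, *_ = np.linalg.lstsq(A, x[order:], rcond=None)
    return coef

def feature_vector(epoch):
    """One feature vector per single-channel EEG epoch (1-D array)."""
    return np.r_[ar_coefficients(epoch), np.var(epoch), waveform_length(epoch),
                 np.std(epoch), permutation_entropy(epoch)]
```

In a 21-channel setup such as the one described, one such vector would be computed per channel and concatenated before classification.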
Affiliation(s)
- Maged S. AL-Quraishi
- Interdisciplinary Research Center for Smart Mobility and Logistics (IRC-SML), King Fahd University of Petroleum & Minerals (KFUPM), Dhahran, 31261, Saudi Arabia
- Wooi Haw Tan
- Center of Digital Home, Faculty of Engineering, Multimedia University, 63100, Cyberjaya, Selangor, Malaysia
- Irraivan Elamvazuthi
- Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 36210, Perak, Malaysia
- Chee Pun Ooi
- Center of Digital Home, Faculty of Engineering, Multimedia University, 63100, Cyberjaya, Selangor, Malaysia
- Naufal M. Saad
- Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 36210, Perak, Malaysia
- Mohammed Isam Al-Hiyali
- Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 36210, Perak, Malaysia
- H.A. Karim
- Center of Digital Home, Faculty of Engineering, Multimedia University, 63100, Cyberjaya, Selangor, Malaysia
- Syed Saad Azhar Ali
- Interdisciplinary Research Center for Smart Mobility and Logistics (IRC-SML), King Fahd University of Petroleum & Minerals (KFUPM), Dhahran, 31261, Saudi Arabia
- Aerospace Engineering Department, King Fahd University of Petroleum & Minerals (KFUPM), Dhahran, 31261, Saudi Arabia
2. Lee M, Park HY, Park W, Kim KT, Kim YH, Jeong JH. Multi-Task Heterogeneous Ensemble Learning-Based Cross-Subject EEG Classification Under Stroke Patients. IEEE Trans Neural Syst Rehabil Eng 2024; 32:1767-1778. PMID: 38683717; DOI: 10.1109/tnsre.2024.3395133.
Abstract
Robot-assisted motor training is applied for neurorehabilitation in stroke patients, using motor imagery (MI), a representative brain-computer interface paradigm, to offer real-life assistance to individuals facing movement challenges. However, the effectiveness of MI training may vary depending on the location of the stroke lesion, which should be taken into account. This paper introduces multi-task electroencephalogram-based heterogeneous ensemble learning (MEEG-HEL) specifically designed for cross-subject training. In the proposed framework, common spatial patterns were used for feature extraction, and features grouped by stroke lesion were shared and selected through sequential forward floating selection. Heterogeneous ensembles were used as classifiers. Nine patients with chronic ischemic stroke participated, engaging in MI and motor execution (ME) paradigms involving finger tapping. The classification criteria for the multi-task setting were established in two ways, taking into account the characteristics of stroke patients. In the cross-subject session, the first task, direction recognition for two-handed classification, achieved a performance of 0.7419 (±0.0811) in MI and 0.7061 (±0.1270) in ME. The second task, motor assessment for lesion location, achieved 0.7457 (±0.1317) in MI and 0.6791 (±0.1253) in ME. In the subject-specific session, performance on both tasks was significantly higher than in the cross-subject session, except for ME on the motor assessment task. Furthermore, classification performance in cross-subject sessions was similar to or statistically higher than that of baseline models. The proposed MEEG-HEL holds promise for improving the practicality of neurorehabilitation in clinical settings and facilitating the detection of lesions.
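Of the pipeline stages listed in this abstract, the common-spatial-patterns step is the most self-contained. The following is a hedged numpy/scipy sketch of two-class CSP with log-variance features; the SFFS and heterogeneous-ensemble stages are omitted, and all function names are mine rather than the paper's.

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(class_a, class_b, n_filters=4):
    """Spatial filters maximising the variance ratio between two classes.

    class_a, class_b: (trials, channels, samples) arrays.
    Returns a (n_filters, channels) filter matrix.
    """
    def mean_cov(trials):
        covs = [t @ t.T / np.trace(t @ t.T) for t in trials]
        return np.mean(covs, axis=0)

    Sa, Sb = mean_cov(class_a), mean_cov(class_b)
    # Generalised eigenproblem:  Sa w = lambda (Sa + Sb) w
    vals, vecs = eigh(Sa, Sa + Sb)
    order = np.argsort(vals)
    # Keep filters from both ends of the eigenvalue spectrum
    picks = np.r_[order[:n_filters // 2], order[-(n_filters - n_filters // 2):]]
    return vecs[:, picks].T

def log_var_features(trials, W):
    """Log of normalised variance of each spatially filtered component."""
    Z = np.einsum('fc,tcs->tfs', W, trials)
    v = Z.var(axis=2)
    return np.log(v / v.sum(axis=1, keepdims=True))
```

The resulting log-variance features would then feed whatever classifier follows (here, the heterogeneous ensembles).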
3. Padfield N, Agius Anastasi A, Camilleri T, Fabri S, Bugeja M, Camilleri K. BCI-controlled wheelchairs: end-users' perceptions, needs, and expectations, an interview-based study. Disabil Rehabil Assist Technol 2024; 19:1539-1551. PMID: 37166297; DOI: 10.1080/17483107.2023.2211602.
Abstract
PURPOSE Brain-computer interface (BCI)-controlled wheelchairs have the potential to improve the independence of people with mobility impairments. The low uptake of BCI devices has been linked to researchers' limited knowledge of the end-user needs that should influence BCI development. MATERIALS AND METHODS This study used semi-structured interviews to learn about the perceptions, needs, and expectations of spinal cord injury (SCI) patients with regard to a BCI-controlled wheelchair. Topics discussed in the interviews included paradigms, shared control, safety, robustness, channel selection, hardware, and experimental design. The interviews were recorded and transcribed, and analysis was carried out using coding based on grounded-theory principles. RESULTS The majority of participants had a positive view of BCI-controlled wheelchair technology and were willing to use it. Core issues were raised regarding safety, cost, and aesthetics. Interview discussions were linked to state-of-the-art BCI technology. The results challenge researchers' current reliance on the motor-imagery paradigm by suggesting that end-users expect highly intuitive paradigms. There also needs to be a stronger focus on obstacle avoidance and safety features in BCI wheelchairs. Finally, the development of control approaches that can be personalized for individual users may be instrumental for widespread adoption of these devices. CONCLUSIONS This study, based on interviews with SCI patients, indicates that BCI-controlled wheelchairs are a promising assistive technology that would be well received by end-users. Recommendations for a more person-centred design of BCI-controlled wheelchairs are made, and clear avenues for future research are identified.
Affiliation(s)
- Natasha Padfield
- Centre for Biomedical Cybernetics, University of Malta, Msida, Malta
- Tracey Camilleri
- Department of Systems and Control Engineering, University of Malta, Msida, Malta
- Simon Fabri
- Department of Systems and Control Engineering, University of Malta, Msida, Malta
- Marvin Bugeja
- Department of Systems and Control Engineering, University of Malta, Msida, Malta
- Kenneth Camilleri
- Centre for Biomedical Cybernetics, University of Malta, Msida, Malta
- Department of Systems and Control Engineering, University of Malta, Msida, Malta
4. Mammone N, Ieracitano C, Spataro R, Guger C, Cho W, Morabito FC. A Few-Shot Transfer Learning Approach for Motion Intention Decoding from Electroencephalographic Signals. Int J Neural Syst 2024; 34:2350068. PMID: 38073546; DOI: 10.1142/s0129065723500685.
Abstract
In this study, a few-shot transfer learning approach was introduced to decode movement intention from electroencephalographic (EEG) signals, allowing new tasks to be recognized with minimal adaptation. To this end, a dataset of EEG signals recorded during the preparation of complex sub-movements was created from a publicly available data collection. The dataset was divided into two parts with no overlap in classes: the source-domain dataset (including 5 classes) and the support (target-domain) dataset (including 2 classes). The proposed methodology consists of projecting EEG signals into the space-frequency-time domain, processing such projections (rearranged as channels × frequency frames) by means of a custom EEG-based deep neural network (denoted EEGframeNET5), and then adapting the system to recognize new tasks through few-shot transfer learning. The proposed method achieved an average accuracy of 72.45 ± 4.19% in the 5-way classification of samples from the source-domain dataset, outperforming comparable studies in the literature. In the second phase of the study, the few-shot transfer learning approach was used to adapt the neural system to recognize new tasks in the support dataset. The system adapted to and recognized the new tasks with an average accuracy of 80 ± 0.12% in discriminating hand opening/closing preparation, outperforming results reported in the literature. This study suggests the effectiveness of EEG in capturing information related to the motor preparation of complex movements, potentially paving the way for BCI systems based on motion-planning decoding. The proposed methodology could be straightforwardly extended to advanced EEG signal processing in other scenarios, such as motor imagery or neural disorder classification.
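Few-shot adaptation is often implemented by comparing embeddings of new samples to class prototypes built from the small support set. The abstract does not specify the adaptation rule used here, so the sketch below shows one common scheme (nearest class prototype) purely as an assumption-laden illustration, with invented function names.

```python
import numpy as np

def prototypes(features, labels):
    """Class prototypes: mean embedding per class over the support set."""
    classes = np.unique(labels)
    protos = np.stack([features[labels == c].mean(axis=0) for c in classes])
    return classes, protos

def predict(features, classes, protos):
    """Assign each query sample to the nearest prototype (Euclidean)."""
    d = np.linalg.norm(features[:, None, :] - protos[None, :, :], axis=2)
    return classes[d.argmin(axis=1)]
```

With only a handful of labeled target-domain samples, this kind of rule lets a frozen feature extractor generalize to classes it never saw during source-domain training.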
Affiliation(s)
- Nadia Mammone
- DICEAM, University Mediterranea of Reggio Calabria, Via Zehender, Loc. Feo di Vito, Reggio Calabria, 89122, Italy
- Cosimo Ieracitano
- DICEAM, University Mediterranea of Reggio Calabria, Via Zehender, Loc. Feo di Vito, Reggio Calabria, 89122, Italy
- Rossella Spataro
- ALS Clinical Research Center, BiND, University of Palermo, Palermo, Italy
- Intensive Rehabilitation Unit, Villa delle Ginestre Hospital, Palermo, Italy
- Woosang Cho
- g.tec Medical Engineering GmbH, 4521, Schiedlberg, Austria
- Francesco Carlo Morabito
- DICEAM, University Mediterranea of Reggio Calabria, Via Zehender, Loc. Feo di Vito, Reggio Calabria, 89122, Italy
5. Dong R, Zhang X, Li H, Masengo G, Zhu A, Shi X, He C. EEG generation mechanism of lower limb active movement intention and its virtual reality induction enhancement: a preliminary study. Front Neurosci 2024; 17:1305850. PMID: 38352938; PMCID: PMC10861750; DOI: 10.3389/fnins.2023.1305850.
Abstract
Introduction Active rehabilitation requires active neurological participation when users operate rehabilitation equipment. A brain-computer interface (BCI) is a direct communication channel for detecting changes in the nervous system. Individuals with dyskinesia have unclear intentions to initiate movement due to physical or psychological factors, which hinders detection. Virtual reality (VR) technology is a potential tool for enhancing movement intention in pre-movement neural signals during clinical exercise therapy, but its effect on electroencephalogram (EEG) signals is not yet known. The objective of this paper is therefore to construct a model of the EEG generation mechanism of lower limb active movement intention and then investigate whether VR induction can improve EEG-based movement intention detection. Methods First, a neural dynamic model of lower limb active movement intention generation was established from the perspective of signal transmission and information processing. Second, the movement-related EEG signal was calculated based on the model, and the effect of VR induction was simulated. Movement-related cortical potential (MRCP) and event-related desynchronization (ERD) features were extracted to analyze the enhancement of movement intention. Finally, we recorded EEG signals of 12 subjects in normal and VR environments to verify the effectiveness and feasibility of the model and of VR-induction enhancement of lower limb active movement intention for individuals with dyskinesia. Results Simulation and experimental results show that VR induction can effectively enhance the EEG features of subjects and improve the detectability of movement intention. Discussion The proposed model can simulate the EEG signal of lower limb active movement intention, and VR induction can enhance the early and accurate detection of lower limb active movement intention, laying the foundation for robot control based on the actual needs of users.
Affiliation(s)
- Runlin Dong
- School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an, Shaanxi, China
- Xiaodong Zhang
- School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an, Shaanxi, China
- Shaanxi Key Laboratory of Intelligent Robots, Xi’an Jiaotong University, Xi’an, Shaanxi, China
- Hanzhe Li
- School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an, Shaanxi, China
- Gilbert Masengo
- School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an, Shaanxi, China
- Aibin Zhu
- School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an, Shaanxi, China
- Shaanxi Key Laboratory of Intelligent Robots, Xi’an Jiaotong University, Xi’an, Shaanxi, China
- Xiaojun Shi
- School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an, Shaanxi, China
- Chen He
- General Department, AVIC Creative Robotics Co., Ltd., Xi’an, Shaanxi, China
6. Bi J, Chu M, Wang G, Gao X. TSPNet: a time-spatial parallel network for classification of EEG-based multiclass upper limb motor imagery BCI. Front Neurosci 2023; 17:1303242. PMID: 38161801; PMCID: PMC10754979; DOI: 10.3389/fnins.2023.1303242.
Abstract
The classification of electroencephalogram (EEG) motor imagery signals has emerged as a prominent research focus within the realm of brain-computer interfaces. Nevertheless, the conventional, limited categories (typically just two or four) offered by brain-computer interfaces fail to provide an extensive array of control modes. To address this challenge, we propose the Time-Spatial Parallel Network (TSPNet) for recognizing six distinct categories of upper limb motor imagery. Within TSPNet, temporal and spatial features are extracted separately by a time-dimension feature extractor and a spatial-dimension feature extractor. The Time-Spatial Parallel Feature Extractor is then employed to decouple the connection between temporal and spatial features, diminishing feature redundancy; it deploys a gating mechanism to optimize weight distribution and parallelize time-spatial features. Additionally, we introduce a feature visualization algorithm based on signal occlusion frequency to facilitate a qualitative analysis of TSPNet. In a six-category scenario, TSPNet achieved an accuracy of 49.1% ± 0.043 on our dataset and 49.7% ± 0.029 on a public dataset. Experimental results establish that TSPNet outperforms other deep learning methods in classifying data from these two datasets. Moreover, visualization results illustrate that the proposed framework generates distinctive classifier patterns for multiple categories of upper limb motor imagery, discerned through signals of varying frequencies. These findings underscore that, in comparison to other deep learning methods, TSPNet excels in intention recognition, which bears immense significance for non-invasive brain-computer interfaces.
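The gating mechanism described can be pictured as a sigmoid gate that mixes the two parallel feature branches element-wise. The toy sketch below assumes flattened feature vectors and invented weight shapes; TSPNet itself is a trained deep network, so this is only a conceptual illustration, not the paper's architecture.

```python
import numpy as np

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def gated_fusion(f_time, f_space, W_gate, b_gate):
    """Mix temporal and spatial branch features with a learned sigmoid gate.

    The gate g is computed from both branches, then the output is the
    element-wise convex combination  g * f_time + (1 - g) * f_space.
    """
    g = sigmoid(np.concatenate([f_time, f_space]) @ W_gate + b_gate)
    return g * f_time + (1.0 - g) * f_space
```

In a real network, W_gate and b_gate would be learned end-to-end, letting the model decide per feature how much to trust each branch.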
Affiliation(s)
- Jingfeng Bi
- School of Automation, Beijing University of Posts and Telecommunications, Beijing, China
- Ming Chu
- School of Automation, Beijing University of Posts and Telecommunications, Beijing, China
- Gang Wang
- School of Automation, Beijing University of Posts and Telecommunications, Beijing, China
- Xiaoshan Gao
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, China
7. Jeong JH, Cho JH, Lee BH, Lee SW. Real-Time Deep Neurolinguistic Learning Enhances Noninvasive Neural Language Decoding for Brain-Machine Interaction. IEEE Trans Cybern 2023; 53:7469-7482. PMID: 36251899; DOI: 10.1109/tcyb.2022.3211694.
Abstract
Electroencephalogram (EEG)-based brain-machine interfaces (BMIs) have been utilized to help patients regain motor function and have recently been validated for use by healthy people because of their ability to directly decipher human intentions. In particular, neurolinguistic research using EEG has been investigated as an intuitive and naturalistic communication tool between humans and machines. In this study, neural languages based on speech imagery were decoded directly from brain activity using the proposed deep neurolinguistic learning. Through real-time experiments, we evaluated whether BMI-based cooperative tasks between multiple users could be accomplished using a variety of neural languages. We successfully demonstrated a BMI system that allows a variety of scenarios, such as essential activity, collaborative play, and emotional interaction. This outcome presents a novel BMI frontier that can interact at the level of human-like intelligence in real time and extends the boundaries of the communication paradigm.
8. Jia H, Feng F, Caiafa CF, Duan F, Zhang Y, Sun Z, Sole-Casals J. Multi-Class Classification of Upper Limb Movements With Filter Bank Task-Related Component Analysis. IEEE J Biomed Health Inform 2023; 27:3867-3877. PMID: 37227915; DOI: 10.1109/jbhi.2023.3278747.
Abstract
The classification of limb movements can provide control commands in non-invasive brain-computer interfaces. Previous studies have focused on classifying left versus right limbs; the classification of different types of upper limb movements has often been ignored, even though it would provide more actively evoked control commands. Moreover, few machine learning methods stand out as state of the art for the multi-class classification of limb movements. This work focuses on the multi-class classification of upper limb movements and proposes the multi-class filter bank task-related component analysis (mFBTRCA) method, which consists of three steps: spatial filtering, similarity measuring, and filter bank selection. The spatial filter, namely task-related component analysis, is first used to remove noise from the EEG signals. Canonical correlation then measures the similarity of the spatially filtered signals and is used for feature extraction; the correlation features are extracted from multiple low-frequency filter banks. Minimum-redundancy maximum-relevance selects the essential features from all the correlation features, and finally a support vector machine classifies the selected features. The proposed method was evaluated on two datasets against previously used models. mFBTRCA achieved classification accuracies of 0.4193 ± 0.0780 (7 classes) and 0.4032 ± 0.0714 (5 classes), improving on the best accuracies of the compared methods (0.3590 ± 0.0645 and 0.3159 ± 0.0736, respectively). The proposed method is expected to provide more control commands in non-invasive brain-computer interface applications.
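The similarity-measuring step names canonical correlation. As a hedged sketch (a generic QR/SVD implementation of CCA, not the authors' mFBTRCA code), canonical correlations between, say, a spatially filtered trial and a class-average template can be computed as:

```python
import numpy as np

def canonical_correlations(X, Y):
    """Canonical correlations between two multichannel signals.

    X: (samples, p), Y: (samples, q). Returns min(p, q) correlations,
    each in [0, 1], usable as trial-vs-template similarity features.
    """
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    # Orthonormal bases of the centred column spaces
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    # Singular values of Qx' Qy are the canonical correlations
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return np.clip(s, 0.0, 1.0)
```

Collecting these correlations across sub-bands, then pruning them with a feature selector, mirrors the pipeline the abstract outlines.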
9. Batistić L, Lerga J, Stanković I. Detection of motor imagery based on short-term entropy of time-frequency representations. Biomed Eng Online 2023; 22:41. PMID: 37143020; PMCID: PMC10157970; DOI: 10.1186/s12938-023-01102-1.
Abstract
BACKGROUND Motor imagery is the cognitive process of imagining the performance of a motor task without actual muscle movement. It is often used in rehabilitation and in assistive technologies to control a brain-computer interface (BCI). This paper compares different time-frequency representations (TFR) and their Rényi and Shannon entropies for sensorimotor rhythm (SMR)-based motor imagery control signals in electroencephalographic (EEG) data. The motor imagery task was guided by visual guidance, combined visual and vibrotactile (somatosensory) guidance, or a visual cue only. RESULTS When TFR-based entropy features were used as input for classifying different interaction intentions, higher accuracies were achieved (up to 99.87%) than with regular time-series amplitude features (up to 85.91%), an increase over existing methods. The highest accuracy was achieved for classifying motor imagery versus baseline (rest state) using Shannon entropy with the reassigned pseudo Wigner-Ville time-frequency representation. CONCLUSIONS Our findings suggest that the quantity of useful classifiable motor imagery information (entropy output) changes during the motor imagery period relative to baseline. As a result, entropy features yield higher classification accuracy and F1 scores than amplitude features, manifesting as an improved ability to detect motor imagery.
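The short-term entropy idea can be sketched cheaply with an ordinary spectrogram standing in for the paper's more sophisticated TFRs (the reassigned pseudo Wigner-Ville distribution is not in scipy): each time slice of the TFR is normalised to a probability distribution and its Shannon entropy taken. Function names and parameters below are illustrative assumptions.

```python
import numpy as np
from scipy.signal import spectrogram

def short_term_shannon_entropy(x, fs, nperseg=128):
    """Shannon entropy (bits) of each spectrogram time slice.

    Concentrated spectra (e.g. a pure oscillation) give low entropy;
    flat, noise-like spectra give high entropy.
    """
    _, _, S = spectrogram(x, fs=fs, nperseg=nperseg)
    P = S / (S.sum(axis=0, keepdims=True) + 1e-12)
    return -(P * np.log2(P + 1e-12)).sum(axis=0)
```

A sliding entropy trace like this is what the classifier would consume in place of raw amplitude samples.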
Affiliation(s)
- Luka Batistić
- University of Rijeka - Faculty of Engineering, Vukovarska 58, 51000, Rijeka, Croatia
- Center for Artificial Intelligence and Cybersecurity, University of Rijeka, R. Matejčić 2, 51000, Rijeka, Croatia
- Jonatan Lerga
- University of Rijeka - Faculty of Engineering, Vukovarska 58, 51000, Rijeka, Croatia
- Center for Artificial Intelligence and Cybersecurity, University of Rijeka, R. Matejčić 2, 51000, Rijeka, Croatia
- Isidora Stanković
- University of Montenegro, Džordža Vašingtona bb, 81000, Podgorica, Montenegro
10. Said RR, Heyat MBB, Song K, Tian C, Wu Z. A Systematic Review of Virtual Reality and Robot Therapy as Recent Rehabilitation Technologies Using EEG-Brain-Computer Interface Based on Movement-Related Cortical Potentials. Biosensors 2022; 12:1134. PMID: 36551100; PMCID: PMC9776155; DOI: 10.3390/bios12121134.
Abstract
To enhance the treatment of motor function impairment, allowing patients to control an external tool with their own brain signals may be an extraordinarily promising option. For the past 10 years, researchers and clinicians in the brain-computer interface (BCI) field have used the movement-related cortical potential (MRCP) as a control signal in neurorehabilitation applications to induce plasticity by monitoring the intention of action and providing feedback. Here, we reviewed research on robot therapy (RT) and virtual reality (VR) MRCP-based BCI rehabilitation technologies as recent advancements in human healthcare. A list of 18 full-text studies suitable for qualitative review, out of 322 articles published between 2000 and 2022, was identified based on inclusion and exclusion criteria. We used PRISMA guidelines for the systematic review, while the PEDro scale was used for quality evaluation. Bibliometric analysis was conducted using the VOSviewer software to identify the relationships and trends of key items. In this review, 4 studies used VR-MRCP and 14 used RT-MRCP-based BCI neurorehabilitation approaches. The total number of subjects across all identified studies was 107, with a mean of 4.375 ± 6.3627 patient subjects and 6.5455 ± 3.0855 healthy subjects per study. The type of electrodes, the epoch, the classifiers, and the performance information used in the RT- and VR-MRCP-based BCI rehabilitation applications are provided in this review, which also describes the challenges facing this field, potential solutions, and future directions of these smart human health rehabilitation technologies. Key-item relationship and trend analysis showed that motor control, rehabilitation, and upper limb are important key items in the MRCP-based BCI field. Despite the potential of these rehabilitation technologies, there is a great scarcity of literature on RT and VR-MRCP-based BCI. Nevertheless, the information on these rehabilitation methods can be beneficial in developing RT and VR-MRCP-based BCI rehabilitation devices to induce brain plasticity and restore motor function. This review will therefore provide the basis and references for the MRCP-based BCI used in rehabilitation applications for further clinical and research development.
Affiliation(s)
- Ramadhan Rashid Said
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China
- Md Belal Bin Heyat
- IoT Research Center, College of Computer Science and Software Engineering, Shenzhen University, Shenzhen 518060, China
- Keer Song
- Franklin College of Arts and Science, University of Georgia, Athens, GA 30602, USA
- Chao Tian
- Department of Women’s Health, Sichuan Cancer Hospital, Chengdu 610044, China
- Zhe Wu
- School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu 611731, China
11. Cho JH, Jeong JH, Lee SW. NeuroGrasp: Real-Time EEG Classification of High-Level Motor Imagery Tasks Using a Dual-Stage Deep Learning Framework. IEEE Trans Cybern 2022; 52:13279-13292. PMID: 34748509; DOI: 10.1109/tcyb.2021.3122969.
Abstract
Brain-computer interfaces (BCIs) have been widely employed to identify and estimate a user's intention to trigger a robotic device by decoding motor imagery (MI) from an electroencephalogram (EEG). However, developing a BCI system driven by MI related to natural hand-grasp tasks is challenging due to its high complexity. Although numerous BCI studies have successfully decoded movement intentions of large body parts, such as both hands, arms, or legs, research on MI decoding of high-level behaviors such as hand grasping is essential to further expand the versatility of MI-based BCIs. In this study, we propose NeuroGrasp, a dual-stage deep learning framework that decodes multiple hand-grasping tasks from EEG signals under the MI paradigm. The proposed method combines EEG- and electromyography (EMG)-based learning during training, such that EEG-only inference becomes possible at test time; the EMG guidance allows the model to predict hand-grasp types from EEG signals accurately. Consequently, NeuroGrasp improved classification performance offline and demonstrated stable classification performance online. Across 12 subjects, we obtained an average offline classification accuracy of 0.68 (±0.09) in four-grasp-type classification and 0.86 (±0.04) in two-grasp-category classification. In addition, we obtained average online classification accuracies of 0.65 (±0.09) and 0.79 (±0.09) across six high-performance subjects. Because the proposed method has demonstrated stable classification performance both online and offline, we expect that it could contribute to different BCI applications, including robotic hands or neuroprosthetics for handling everyday objects.
12. Jia H, Sun Z, Duan F, Zhang Y, Caiafa CF, Solé-Casals J. Improving pre-movement pattern detection with filter bank selection. J Neural Eng 2022; 19. PMID: 36317288; DOI: 10.1088/1741-2552/ac9e75.
Abstract
Objective. Pre-movement decoding plays an important role in detecting the onsets of actions using low-frequency electroencephalography (EEG) signals before the movement of an upper limb. In this work, a binary classification method is proposed between two different states.Approach. The proposed method, referred to as filter bank standard task-related component analysis (FBTRCA), is to incorporate filter bank selection into the standard task-related component analysis (STRCA) method. In FBTRCA, the EEG signals are first divided into multiple sub-bands which start at specific fixed frequencies and end frequencies that follow in an arithmetic sequence. The STRCA method is then applied to the EEG signals in these bands to extract CCPs. The minimum redundancy maximum relevance feature selection method is used to select essential features from these correlation patterns in all sub-bands. Finally, the selected features are classified using the binary support vector machine classifier. A convolutional neural network (CNN) is an alternative approach to select canonical correlation patterns.Main Results. Three methods were evaluated using EEG signals in the time window from 2 s before the movement onset to 1 s after the movement onset. In the binary classification between a movement state and the resting state, the FBTRCA achieved an average accuracy of 0.8968 ± 0.0847 while the accuracies of STRCA and CNN were 0.8228 ± 0.1149 and 0.8828 ± 0.0917, respectively. In the binary classification between two actions, the accuracies of STRCA, CNN, and FBTRCA were 0.6611 ± 0.1432, 0.6993 ± 0.1271, 0.7178 ± 0.1274, respectively. Feature selection using filter banks, as in FBTRCA, produces comparable results to STRCA.Significance. The proposed method provides a way to select filter banks in pre-movement decoding, and thus it improves the classification performance. 
The improved pre-movement decoding of single upper limb movements is expected to provide people with severe motor disabilities with a more natural, non-invasive control of their external devices.
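The filter-bank layout described in the abstract — every sub-band sharing a fixed start frequency while the end frequencies form an arithmetic sequence — can be sketched as follows. The function name and default frequencies are illustrative assumptions, not the authors' implementation.

```python
def filter_bank_edges(start_hz=0.5, first_end_hz=4.0, step_hz=4.0, n_bands=6):
    """Return (low, high) sub-band edges: every band shares the same low
    edge, and the high edges form an arithmetic sequence."""
    return [(start_hz, first_end_hz + k * step_hz) for k in range(n_bands)]

bands = filter_bank_edges()
# bands[0] == (0.5, 4.0), bands[1] == (0.5, 8.0), bands[2] == (0.5, 12.0)
```

Each band would then be band-pass filtered and fed to STRCA independently, with feature selection pooling the resulting correlation patterns across bands.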
Affiliation(s)
- Hao Jia
- Data and Signal Processing Research Group, University of Vic-Central University of Catalonia, Vic, Catalonia, Spain
- Zhe Sun
- Computational Engineering Applications Unit, Head Office for Information Systems and Cybersecurity, RIKEN, Saitama, Japan
- Feng Duan
- Tianjin Key Laboratory of Brain Science and Intelligent Rehabilitation, College of Artificial Intelligence, Nankai University, Tianjin, People's Republic of China
- Yu Zhang
- Department of Bioengineering, Lehigh University, Bethlehem, PA 18015, United States of America; Department of Electrical and Computer Engineering, Lehigh University, Bethlehem, PA 18015, United States of America
- Cesar F Caiafa
- Instituto Argentino de Radioastronomía, CONICET CCT La Plata/CIC-PBA/UNLP, V. Elisa, Argentina
- Jordi Solé-Casals
- Data and Signal Processing Research Group, University of Vic-Central University of Catalonia, Vic, Catalonia, Spain; Department of Psychiatry, University of Cambridge, Cambridge CB2 3EB, United Kingdom
13
Zhang L, Ren H, Zhang R, Chen M, Li R, Shi L, Yao D, Gao J, Hu Y. Time-estimation process could cause the disappearance of readiness potential. Cogn Neurodyn 2022; 16:1003-1011. [PMID: 36237414 PMCID: PMC9508310 DOI: 10.1007/s11571-021-09766-y] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2021] [Revised: 11/22/2021] [Accepted: 12/05/2021] [Indexed: 11/03/2022] Open
Abstract
Generally, the readiness potential (RP) is considered to be the scalp electroencephalography (EEG) activity preceding movement. In our previous study, we found that the early RP was absent in approximately half of the subjects during instructed action, but we did not identify the mechanism causing the disappearance of the RP. In this study, we investigated whether the time-estimation process could cause the disappearance of the RP. First, we designed experiments consisting of motor execution (ME), motor execution after time estimation (MEATE), and time estimation (TE) tasks, and we collected and preprocessed the EEG data of 16 subjects. Second, we compared the event-related potential (ERP) waveforms and scalp topographies between the ME and MEATE tasks. Then, to explore the influence of time estimation, we analyzed the difference in ERP between the MEATE and TE tasks. Finally, we used source imaging to probe the activation of brain regions during the three tasks, and we calculated the average activation amplitude of eight motor-related brain regions. We found that the RP occurred in the ME task but not in the MEATE task. We also found that the waveform of the ERP difference between the MEATE and TE tasks was similar to that of the ME task. The source imaging results indicated that, compared to the ME task, the activation amplitude of the supplementary motor area (SMA) decreased significantly in the MEATE task. Our results suggest that the time-estimation process could cause the disappearance of the RP. This phenomenon might be caused by the counteraction of neural electrical activity related to time estimation and motor preparation in the SMA.
Affiliation(s)
- Lipeng Zhang
- School of Electrical Engineering, Zhengzhou University, Zhengzhou, China
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou, China
- Haikun Ren
- School of Electrical Engineering, Zhengzhou University, Zhengzhou, China
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou, China
- Rui Zhang
- School of Electrical Engineering, Zhengzhou University, Zhengzhou, China
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou, China
- Mingming Chen
- School of Electrical Engineering, Zhengzhou University, Zhengzhou, China
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou, China
- Ruiqi Li
- School of Electrical Engineering, Zhengzhou University, Zhengzhou, China
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou, China
- Li Shi
- Department of Automation, Tsinghua University, Beijing, China
- Beijing National Research Center for Information Science and Technology, Beijing, China
- Dezhong Yao
- School of Electrical Engineering, Zhengzhou University, Zhengzhou, China
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou, China
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, Chengdu, China
- Jinfeng Gao
- School of Electrical Engineering, Zhengzhou University, Zhengzhou, China
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou, China
- Yuxia Hu
- School of Electrical Engineering, Zhengzhou University, Zhengzhou, China
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou, China
14
Jeong JH, Cho JH, Lee YE, Lee SH, Shin GH, Kweon YS, Millán JDR, Müller KR, Lee SW. 2020 International brain-computer interface competition: A review. Front Hum Neurosci 2022; 16:898300. [PMID: 35937679 PMCID: PMC9354666 DOI: 10.3389/fnhum.2022.898300] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2022] [Accepted: 07/01/2022] [Indexed: 11/16/2022] Open
Abstract
The brain-computer interface (BCI) has been investigated as a communication tool between the brain and external devices, and BCIs have been extended beyond communication and control over the years. The 2020 international BCI competition aimed to provide high-quality, open-access neuroscientific data that could be used to evaluate the current degree of technical advances in BCI. Although a variety of challenges remain for future BCI advances, we discuss some of the more recent application directions: (i) few-shot EEG learning, (ii) micro-sleep detection, (iii) imagined speech decoding, (iv) cross-session classification, and (v) EEG (+ear-EEG) detection in an ambulatory environment. Not only did scientists from the BCI field compete, but scholars with a broad variety of backgrounds and nationalities also participated in the competition to address these challenges. Each dataset was prepared and separated into three subsets that were released to the competitors as training and validation sets followed by a test set. Remarkable BCI advances were identified through the 2020 competition, indicating some trends of interest to BCI researchers.
Affiliation(s)
- Ji-Hoon Jeong
- School of Computer Science, Chungbuk National University, Cheongju, South Korea
- Jeong-Hyun Cho
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Young-Eun Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Seo-Hyun Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Gi-Hwan Shin
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Young-Seok Kweon
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- José del R. Millán
- Department of Electrical and Computer Engineering, University of Texas at Austin, Austin, TX, United States
- Klaus-Robert Müller
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Machine Learning Group, Department of Computer Science, Berlin Institute of Technology, Berlin, Germany
- Max Planck Institute for Informatics, Saarbrucken, Germany
- Department of Artificial Intelligence, Korea University, Seoul, South Korea
- Seong-Whan Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Department of Artificial Intelligence, Korea University, Seoul, South Korea
15
Musellim S, Han DK, Jeong JH, Lee SW. Prototype-based Domain Generalization Framework for Subject-Independent Brain-Computer Interfaces. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:711-714. [PMID: 36086535 DOI: 10.1109/embc48229.2022.9871434] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Brain-computer interfaces (BCIs) are challenging to use in practice due to the inter- and intra-subject variability of electroencephalography (EEG). A BCI system, in general, necessitates a calibration procedure to obtain subject/session-specific data in order to tune the model each time the system is used. This issue is acknowledged as a key hindrance to BCI, and a new strategy based on domain generalization has recently evolved to address it. In light of this, we have concentrated on developing an EEG classification framework that can be applied directly to data from unknown domains (i.e. subjects), using only data previously acquired from separate subjects. For this purpose, we propose a framework that employs the open-set recognition technique as an auxiliary task to learn subject-specific style features from the source dataset, while helping the shared feature extractor map the features of the unseen target dataset as a new unseen domain. Our aim is to impose cross-instance style invariance within the same domain and reduce the open-space risk on the potential unseen subject in order to improve the generalization ability of the shared feature extractor. Our experiments showed that using the domain information as an auxiliary network increases the generalization performance. Clinical relevance: This study suggests a strategy to improve the performance of subject-independent BCI systems. Our framework can help reduce the need for further calibration and can be utilized for a range of mental-state monitoring tasks (e.g. neurofeedback, identification of epileptic seizures, and sleep disorders).
16
Lee DY, Jeong JH, Lee BH, Lee SW. Motor Imagery Classification Using Inter-Task Transfer Learning via A Channel-Wise Variational Autoencoder-based Convolutional Neural Network. IEEE Trans Neural Syst Rehabil Eng 2022; 30:226-237. [PMID: 35041605 DOI: 10.1109/tnsre.2022.3143836] [Citation(s) in RCA: 9] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Highly sophisticated control based on a brain-computer interface (BCI) requires decoding kinematic information from brain signals. The forearm is a region of the upper limb that is often used in everyday life, but intuitive movements within the same limb have rarely been investigated in previous BCI studies. In this study, we focused on decoding various forearm movements from electroencephalography (EEG) signals using a small number of samples. Ten healthy participants took part in an experiment and performed motor execution (ME) and motor imagery (MI) of intuitive movement tasks (Dataset I). We propose a convolutional neural network using a channel-wise variational autoencoder (CVNet) based on inter-task transfer learning. Our approach is that training on the reconstructed ME-EEG signals together with the MI data can achieve sufficient classification performance with only a small amount of MI-EEG signals. The proposed CVNet was validated on our own Dataset I and on a public dataset, BNCI Horizon 2020 (Dataset II). The classification accuracies for the various movements were 0.83 (±0.04) and 0.69 (±0.04) for Datasets I and II, respectively. The results show that the proposed method improves performance by approximately 0.09~0.27 and 0.08~0.24 compared with conventional models for Datasets I and II, respectively. The outcomes suggest that a model for decoding imagined movements can be trained using ME data and only a small number of MI samples. Hence, this demonstrates the feasibility of BCI learning strategies in which deep learning models are trained with stable performance using only a small calibration dataset and little calibration time.
17
Motor Imagination of Lower Limb Movements at Different Frequencies. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2021:4073739. [PMID: 34976324 PMCID: PMC8716247 DOI: 10.1155/2021/4073739] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/06/2021] [Revised: 11/10/2021] [Accepted: 11/20/2021] [Indexed: 11/26/2022]
Abstract
Motor imagination (MI) is the mental process of only imagining an action without actual movement. Research on MI has made significant progress in feature detection and machine-learning decoding algorithms, but problems remain, such as a low overall recognition rate and large differences in individual execution effects, which have led the development of MI into a bottleneck. Aiming to solve this bottleneck, the current study optimized the quality of the original MI signal by enhancing the difficulty of the imagination tasks, conducted qualitative and quantitative analyses of EEG rhythm characteristics, and used quantitative indicators such as mean ERD value and recognition rate. A comparative analysis of lower-limb MI was conducted for two different tasks: high-frequency motor imagination (HFMI) and low-frequency motor imagination (LFMI). The results validate the following: the average ERD of HFMI (−1.827) is less than that of LFMI (−1.3487) in the alpha band, and likewise in the beta band (−3.4756 < −2.2891). In the alpha and beta characteristic frequency bands, the average ERD of HFMI is smaller than that of LFMI, and the ERD values of the two are significantly different (p=0.0074 < 0.01; r = 0.945). The standard deviations of ERD intensity for HFMI are less than those of LFMI, which suggests that the individual difference in ERD intensity among subjects is smaller in the HFMI mode than in the LFMI mode. The average recognition rate of HFMI is higher than that of LFMI (87.84% > 76.46%), and the recognition rates of the two modes are significantly different (p=0.0034 < 0.01; r = 0.429). In summary, this research optimizes the quality of MI brain-signal sources by enhancing the difficulty of the imagination tasks, thereby improving the overall recognition rate of lower-limb MI and reducing the differences in individual execution effects and signal quality among subjects.
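The ERD values compared in this abstract follow the standard band-power definition, in which more negative values indicate stronger desynchronization. A minimal sketch (the explicit baseline/task power split is an assumption, not this paper's exact pipeline):

```python
def erd_percent(baseline_power, task_power):
    """Event-related (de)synchronization as the relative band-power change
    from a baseline period to the task period; negative values indicate
    desynchronization (ERD), positive values synchronization (ERS)."""
    return 100.0 * (task_power - baseline_power) / baseline_power

# A band-power drop from 2.0 in baseline to 1.0 during the task is -50% (ERD).
```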
18
Autthasan P, Chaisaen R, Sudhawiyangkul T, Rangpong P, Kiatthaveephong S, Dilokthanakul N, Bhakdisongkhram G, Phan H, Guan C, Wilaiprasitporn T. MIN2Net: End-to-End Multi-Task Learning for Subject-Independent Motor Imagery EEG Classification. IEEE Trans Biomed Eng 2021; 69:2105-2118. [PMID: 34932469 DOI: 10.1109/tbme.2021.3137184] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
OBJECTIVE Advances in motor imagery (MI)-based brain-computer interfaces (BCIs) allow control of several applications by decoding neurophysiological phenomena, usually recorded non-invasively by electroencephalography (EEG). Despite significant advances in MI-based BCI, EEG rhythms are specific to a subject and vary over time. These issues pose significant challenges to enhancing classification performance, especially in a subject-independent manner. METHODS To overcome these challenges, we propose MIN2Net, a novel end-to-end multi-task learning approach to tackle this task. We integrate deep metric learning into a multi-task autoencoder to learn a compact and discriminative latent representation from EEG and perform classification simultaneously. RESULTS This approach reduces pre-processing complexity and results in a significant performance improvement on EEG classification. Experimental results in a subject-independent manner show that MIN2Net outperforms state-of-the-art techniques, achieving F1-score improvements of 6.72% and 2.23% on the SMR-BCI and OpenBMI datasets, respectively. CONCLUSION We demonstrate that MIN2Net improves the discriminative information in the latent representation. SIGNIFICANCE This study indicates the possibility and practicality of using this model to develop MI-based BCI applications for new users without calibration.
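The deep metric learning component integrated into the autoencoder can be illustrated with a plain triplet loss over latent embeddings; this is a generic sketch of the technique, not MIN2Net's exact loss terms or weighting.

```python
import math

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Hinge on d(a, p) - d(a, n) + margin: pulls same-class embeddings
    together and pushes different-class embeddings at least `margin` apart."""
    d = lambda u, v: math.dist(u, v)  # Euclidean distance between embeddings
    return max(0.0, d(anchor, positive) - d(anchor, negative) + margin)
```

In a multi-task setup, a weighted sum of this metric loss, the autoencoder reconstruction error, and the classification cross-entropy would be minimized jointly.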
19
Lee YE, Shin GH, Lee M, Lee SW. Mobile BCI dataset of scalp- and ear-EEGs with ERP and SSVEP paradigms while standing, walking, and running. Sci Data 2021; 8:315. [PMID: 34930915 PMCID: PMC8688416 DOI: 10.1038/s41597-021-01094-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2021] [Accepted: 11/08/2021] [Indexed: 11/24/2022] Open
Abstract
We present a mobile dataset obtained from electroencephalography (EEG) of the scalp and around the ear as well as from locomotion sensors by 24 participants moving at four different speeds while performing two brain-computer interface (BCI) tasks. The data were collected from 32-channel scalp-EEG, 14-channel ear-EEG, 4-channel electrooculography, and 9-channel inertial measurement units placed at the forehead, left ankle, and right ankle. The recording conditions were as follows: standing, slow walking, fast walking, and slight running at speeds of 0, 0.8, 1.6, and 2.0 m/s, respectively. For each speed, two different BCI paradigms, event-related potential and steady-state visual evoked potential, were recorded. To evaluate the signal quality, scalp- and ear-EEG data were qualitatively and quantitatively validated during each speed. We believe that the dataset will facilitate BCIs in diverse mobile environments to analyze brain activities and evaluate the performance quantitatively for expanding the use of practical BCIs.
Affiliation(s)
- Young-Eun Lee
- Korea University, Department of Brain and Cognitive Engineering, Seoul, 02841, Republic of Korea
- Gi-Hwan Shin
- Korea University, Department of Brain and Cognitive Engineering, Seoul, 02841, Republic of Korea
- Minji Lee
- Korea University, Department of Brain and Cognitive Engineering, Seoul, 02841, Republic of Korea
- Seong-Whan Lee
- Korea University, Department of Brain and Cognitive Engineering, Seoul, 02841, Republic of Korea; Korea University, Department of Artificial Intelligence, Seoul, 02841, Republic of Korea
20
Leeuwis N, Yoon S, Alimardani M. Functional Connectivity Analysis in Motor-Imagery Brain Computer Interfaces. Front Hum Neurosci 2021; 15:732946. [PMID: 34720907 PMCID: PMC8555469 DOI: 10.3389/fnhum.2021.732946] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2021] [Accepted: 09/03/2021] [Indexed: 11/25/2022] Open
Abstract
Motor imagery (MI) BCI systems have a high rate of users who are not capable of modulating their brain activity accurately enough to communicate with the system. Several studies have identified psychological, cognitive, and neurophysiological measures that might explain this MI-BCI inefficiency. Traditional research has focused on mu suppression in the sensorimotor area in order to classify imagery, but this does not reflect the true dynamics that underlie motor imagery. Functional connectivity reflects the interaction between brain regions during the MI task and the resting-state network, and it is a promising tool for improving MI-BCI classification. In this study, 54 novice MI-BCI users were split into two groups based on their accuracy, and their functional connectivity was compared at three network scales (global, large, and local) during the resting state, the left- vs. right-hand motor imagery task, and the transition between the two phases. Our comparison of high and low BCI performers showed that, in the alpha band, functional connectivity in the right hemisphere was increased in high- compared to low-aptitude MI-BCI users during motor imagery. These findings add to the existing literature indicating that connectivity might indeed be a valuable feature for MI-BCI classification and for solving the MI-BCI inefficiency problem.
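At the channel level, functional connectivity is often estimated as pairwise association between band-filtered signals. A minimal numpy sketch, with Pearson correlation standing in for whichever FC metric a given study uses:

```python
import numpy as np

def functional_connectivity(eeg):
    """Pairwise Pearson correlation between channels as a simple
    functional-connectivity estimate. eeg: (n_channels, n_samples)."""
    return np.corrcoef(eeg)

# Identical channels correlate at 1.0; a sign-flipped channel at -1.0.
```

Summaries of this matrix (e.g. mean connectivity within a hemisphere or frequency band) are the kind of feature compared between high- and low-aptitude groups.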
Affiliation(s)
- Nikki Leeuwis
- Department of Cognitive Science and Artificial Intelligence, Tilburg University, Tilburg, Netherlands
21
Duan F, Jia H, Sun Z, Zhang K, Dai Y, Zhang Y. Decoding Premovement Patterns with Task-Related Component Analysis. Cognit Comput 2021. [DOI: 10.1007/s12559-021-09941-7] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
22
Xu B, Wang Y, Deng L, Wu C, Zhang W, Li H, Song A. Decoding Hand Movement Types and Kinematic Information From Electroencephalogram. IEEE Trans Neural Syst Rehabil Eng 2021; 29:1744-1755. [PMID: 34428142 DOI: 10.1109/tnsre.2021.3106897] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Brain-computer interfaces (BCIs) have achieved successful control of assistive devices, e.g. neuroprostheses or robotic arms. Previous research based on hand-movement electroencephalogram (EEG) signals has shown limited success in precise and natural control. In this study, we explored the possibilities of decoding movement types and kinematic information for three reach-and-execute actions using movement-related cortical potentials (MRCPs). EEG signals were acquired from 12 healthy subjects during the execution of pinch, palmar, and precision disk rotation actions that involved two levels of speed and force. For discrimination between hand movement types under each of four different kinematic conditions, we obtained average peak accuracies of 83.44% and 73.83% for the binary and three-class classification, respectively. For discrimination between different movement kinematics for each of the three actions, average peak accuracies of 82.9% and 58.2% were achieved for the two- and four-class scenarios. In both cases, peak decoding performance was significantly higher than the subject-specific chance level. We found that hand movement types could all be classified when these actions were executed at four different kinematic parameters. Meanwhile, for each of the three hand movements, we successfully decoded the movement parameters. Furthermore, the feasibility of decoding hand movements during the hand-retraction process was also demonstrated. These findings are of great importance for controlling a neuroprosthesis or other rehabilitation device in a fine and natural way, which would drastically increase acceptance by motor-impaired users.
23
Ieracitano C, Morabito FC, Hussain A, Mammone N. A Hybrid-Domain Deep Learning-Based BCI For Discriminating Hand Motion Planning From EEG Sources. Int J Neural Syst 2021; 31:2150038. [PMID: 34376121 DOI: 10.1142/s0129065721500386] [Citation(s) in RCA: 24] [Impact Index Per Article: 8.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
Abstract
In this paper, a hybrid-domain deep learning (DL)-based neural system is proposed to decode hand movement preparation phases from electroencephalographic (EEG) recordings. The system exploits information extracted from the temporal-domain and time-frequency-domain, as part of a hybrid strategy, to discriminate the temporal windows (i.e. EEG epochs) preceding hand sub-movements (open/close) and the resting state. To this end, for each EEG epoch, the associated cortical source signals in the motor cortex and the corresponding time-frequency (TF) maps are estimated via beamforming and Continuous Wavelet Transform (CWT), respectively. Two Convolutional Neural Networks (CNNs) are designed: specifically, the first CNN is trained over a dataset of temporal (T) data (i.e. EEG sources), and is referred to as T-CNN; the second CNN is trained over a dataset of TF data (i.e. TF-maps of EEG sources), and is referred to as TF-CNN. Two sets of features denoted as T-features and TF-features, extracted from T-CNN and TF-CNN, respectively, are concatenated in a single features vector (denoted as TTF-features vector) which is used as input to a standard multi-layer perceptron for classification purposes. Experimental results show a significant performance improvement of our proposed hybrid-domain DL approach as compared to temporal-only and time-frequency-only-based benchmark approaches, achieving an average accuracy of [Formula: see text]%.
Affiliation(s)
- Cosimo Ieracitano
- DICEAM, University Mediterranea of Reggio Calabria, Via Graziella Feo di Vito, Reggio Calabria, 89124, Italy
- Francesco Carlo Morabito
- DICEAM, University Mediterranea of Reggio Calabria, Via Graziella Feo di Vito, Reggio Calabria, 89124, Italy
- Amir Hussain
- School of Computing, Edinburgh Napier University, Edinburgh EH10 5DT, Scotland, UK
- Nadia Mammone
- DICEAM, University Mediterranea of Reggio Calabria, Via Graziella Feo di Vito, Reggio Calabria, 89124, Italy
24
Lee M, Jeong JH, Kim YH, Lee SW. Decoding Finger Tapping With the Affected Hand in Chronic Stroke Patients During Motor Imagery and Execution. IEEE Trans Neural Syst Rehabil Eng 2021; 29:1099-1109. [PMID: 34101595 DOI: 10.1109/tnsre.2021.3087506] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
In stroke rehabilitation, motor imagery based on a brain-computer interface is an extremely useful method to control an external device and utilize neurofeedback. Many studies have reported on the classification performance of motor imagery to decode individual fingers in stroke patients compared with healthy controls. However, classification performance for a given limb is still low because the differences between patients owing to brain reorganization after stroke are not considered. We used electroencephalography signals from eleven healthy controls and eleven stroke patients in this study. The subjects performed a finger tapping task during motor execution, and motor imagery was performed with the dominant and affected hands in the healthy controls and stroke patients, respectively. All fingers except for the thumb were classified using the proposed framework based on a voting module. The averaged four-class accuracies during motor execution and motor imagery were 53.16 ± 8.42% and 46.94 ± 5.99% for the healthy controls and 53.17 ± 14.09% and 66.00 ± 14.96% for the stroke patients, respectively. Importantly, the classification accuracies in the stroke patients were statistically higher than those in healthy controls during motor imagery. However, there was no significant difference between the accuracies of motor execution and motor imagery. These findings show the potential for high classification performance for a given limb during motor imagery in stroke patients. These results can also provide insights into controlling an external device on the basis of a brain-computer interface.
25
Multilinear Discriminative Spatial Patterns for Movement-Related Cortical Potential Based on EEG Classification with Tensor Representation. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2021; 2021:6634672. [PMID: 34135952 PMCID: PMC8175166 DOI: 10.1155/2021/6634672] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/29/2020] [Revised: 04/09/2021] [Accepted: 05/15/2021] [Indexed: 11/17/2022]
Abstract
The discriminative spatial patterns (DSP) algorithm is a classical and effective feature extraction technique for decoding of voluntary finger premovements from electroencephalography (EEG). As a purely data-driven subspace learning algorithm, DSP essentially is a spatial-domain filter and does not make full use of the information in frequency domain. The paper presents multilinear discriminative spatial patterns (MDSP) to derive multiple interrelated lower dimensional discriminative subspaces of low frequency movement-related cortical potential (MRCP). Experimental results on two finger movement tasks' EEG datasets demonstrate the effectiveness of the proposed MDSP method.
26
Wang J, Bi L, Fei W, Guan C. Decoding Single-Hand and Both-Hand Movement Directions From Noninvasive Neural Signals. IEEE Trans Biomed Eng 2020; 68:1932-1940. [PMID: 33108279 DOI: 10.1109/tbme.2020.3034112] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Decoding human movement parameters from electroencephalography (EEG) signals is of great value for human-machine collaboration. However, existing studies on hand movement direction decoding concentrate on decoding a single-hand movement direction from EEG signals while the opposite hand is kept still. In practice, cooperative movement of both hands is common. In this paper, we investigated the neural signatures and decoding of single-hand and both-hand movement directions from EEG signals. The potentials of EEG signals and the power sums in the low-frequency band of EEG signals from 24 channels were used as decoding features. Linear discriminant analysis (LDA) and support vector machine (SVM) classifiers were used for decoding. Experimental results showed a significant difference in the negative offset maximums of movement-related cortical potentials (MRCPs) at electrode Cz between single-hand and both-hand movements. The recognition accuracy for six-class classification, including two single-hand and four both-hand movement directions, reached 70.29% ± 10.85% using EEG potentials as features with the SVM classifier. These findings show the feasibility of decoding single-hand and both-hand movement directions. This work can lay a foundation for the future development of active human-machine collaboration systems based on EEG signals and open a new research direction in decoding hand movement parameters from EEG signals.
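The LDA classifier named in the abstract can be sketched from scratch for the two-class case; feature extraction (MRCP potentials / low-frequency power sums) is omitted, and all names and the regularization constant are illustrative, not the authors' implementation.

```python
import numpy as np

def fit_lda(X0, X1):
    """Minimal two-class Fisher LDA on feature matrices of shape
    (n_trials, n_features). Returns a weight vector and bias."""
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    Sw = np.cov(X0, rowvar=False) + np.cov(X1, rowvar=False)  # within-class scatter
    w = np.linalg.solve(Sw + 1e-6 * np.eye(X0.shape[1]), m1 - m0)
    b = -w @ (m0 + m1) / 2.0  # threshold halfway between the class means
    return w, b

def predict_lda(w, b, X):
    """Return 1 for trials on the class-1 side of the hyperplane, else 0."""
    return (X @ w + b > 0).astype(int)
```

Multi-class direction decoding is then typically built from such binary discriminants (e.g. one-vs-rest) or handled directly by a multi-class SVM.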
27
Cho JH, Jeong JH, Lee SW. Decoding of Grasp Motions from EEG Signals Based on a Novel Data Augmentation Strategy. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2020:3015-3018. [PMID: 33018640 DOI: 10.1109/embc44109.2020.9175784] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Electroencephalogram (EEG)-based brain-computer interface (BCI) systems are useful tools for clinical applications such as neural prostheses. In this study, we collected EEG signals related to grasp motions. Five healthy subjects participated in the experiment, executing and imagining five sustained-grasp actions. We propose a novel data augmentation method that increases the amount of training data using labels obtained from electromyogram (EMG) signal analysis; to implement it, we recorded EEG and EMG simultaneously. Augmenting the original EEG data yielded higher classification accuracy than competing methods: the average accuracy was 52.49 (±8.74)% for motor execution (ME) and 40.36 (±3.39)% for motor imagery (MI), which are 9.30% and 6.19% higher, respectively, than the results of comparable methods. Moreover, the proposed method could minimize the need for a calibration session, a requirement that limits the practicality of most BCIs. These results are encouraging, and the proposed method could potentially be used in future applications such as BCI-driven robot control for handling various everyday objects.
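The core idea in the abstract is multiplying labelled EEG trials using event labels derived from a second modality (EMG). The sketch below shows one simple way such an augmentation can work: given onset times (here a given array standing in for EMG-derived labels), each event is re-cut at several small temporal offsets, turning one labelled trial into five. The window length, jitter values, and channel counts are illustrative assumptions, not the authors' actual pipeline.

```python
import numpy as np

rng = np.random.default_rng(1)

# Simulated continuous 8-channel EEG at 100 Hz for 60 s; `onsets` stands
# in for grasp-onset sample indices obtained from EMG analysis.
fs, dur = 100, 60
eeg = rng.normal(0, 1, (8, fs * dur))
onsets = np.arange(5, dur - 5, 10) * fs      # one grasp every 10 s

def augment_by_jitter(eeg, onsets, win=200, jitters=(-20, -10, 0, 10, 20)):
    """Cut one fixed-length window per labelled onset, then multiply the
    training set by re-cutting each event at small temporal offsets --
    extra labelled trials at no extra recording cost."""
    trials = []
    for on in onsets:
        for j in jitters:
            s = on + j
            trials.append(eeg[:, s:s + win])
    return np.stack(trials)

orig = augment_by_jitter(eeg, onsets, jitters=(0,))
aug = augment_by_jitter(eeg, onsets)
print(orig.shape, aug.shape)   # five times more trials after augmentation
```

The same pattern generalizes to any label source: the augmentation only needs event times and a class per event, which is exactly what the EMG analysis in the paper supplies.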
|
28
|
Jeong JH, Cho JH, Shim KH, Kwon BH, Lee BH, Lee DY, Lee DH, Lee SW. Multimodal signal dataset for 11 intuitive movement tasks from single upper extremity during multiple recording sessions. Gigascience 2020; 9:giaa098. [PMID: 33034634 PMCID: PMC7539536 DOI: 10.1093/gigascience/giaa098] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2020] [Revised: 08/12/2020] [Accepted: 09/07/2020] [Indexed: 02/02/2023] Open
Abstract
BACKGROUND Non-invasive brain-computer interfaces (BCIs) have been developed to realize natural bi-directional interaction between users and external robotic systems. However, communication between users and BCI systems through artificially matched commands remains a critical issue. Recently, BCIs have begun to adopt intuitive decoding, which is key to solving several problems, such as the small number of classes and the manual matching of BCI commands to device controls. Unfortunately, progress in this area has been slow owing to the lack of large, uniform datasets. This study provides a large intuitive dataset of 11 different upper-extremity movement tasks obtained during multiple recording sessions. The dataset includes 60-channel electroencephalography, 7-channel electromyography, and 4-channel electro-oculography from 25 healthy participants collected over 3-day sessions, for a total of 82,500 trials across all participants. FINDINGS We validated the dataset via neurophysiological analysis. We observed clear sensorimotor activation/deactivation and the corresponding spatial distributions for real movement and motor imagery, respectively. Furthermore, we demonstrated the consistency of the dataset by evaluating the classification performance of each session using a baseline machine learning method. CONCLUSIONS The dataset comprises multiple recording sessions, various classes within the single upper extremity, and multimodal signals. It can be used to (i) compare the brain activities associated with real movement and imagination, (ii) improve decoding performance, and (iii) analyze differences among recording sessions. As a Data Note, this study has focused on collecting the data required for further advances in BCI technology.
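The dataset's headline numbers fit together as follows; the sketch below checks the arithmetic and shows the array shapes one multimodal trial might take. The per-session and per-task splits assume an even division, and the sampling rate and trial length are illustrative guesses, not figures from the Data Note.

```python
import numpy as np

# Stated figures: 25 participants, 3 sessions, 11 tasks, 82,500 trials total.
participants, sessions, tasks, total_trials = 25, 3, 11, 82_500

per_participant = total_trials // participants   # trials per participant
per_session = per_participant // sessions        # trials per session
per_task = per_session // tasks                  # per task, if split evenly

# Illustrative shapes for one trial's three modalities
# (60-ch EEG, 7-ch EMG, 4-ch EOG); fs and trial length are assumptions.
fs, secs = 250, 3
eeg = np.zeros((60, fs * secs))
emg = np.zeros((7, fs * secs))
eog = np.zeros((4, fs * secs))

print(per_participant, per_session, per_task)
```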
Affiliation(s)
- Ji-Hoon Jeong — Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
- Jeong-Hyun Cho — Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
- Kyung-Hwan Shim — Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
- Byoung-Hee Kwon — Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
- Byeong-Hoo Lee — Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
- Do-Yeun Lee — Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
- Dae-Hyeok Lee — Department of Brain and Cognitive Engineering, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
- Seong-Whan Lee — Department of Brain and Cognitive Engineering and Department of Artificial Intelligence, Korea University, 145 Anam-ro, Seongbuk-gu, Seoul 02841, South Korea
|
29
|
Lee M, Yoon JG, Lee SW. Predicting Motor Imagery Performance From Resting-State EEG Using Dynamic Causal Modeling. Front Hum Neurosci 2020; 14:321. [PMID: 32903663 PMCID: PMC7438792 DOI: 10.3389/fnhum.2020.00321] [Citation(s) in RCA: 26] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2020] [Accepted: 07/20/2020] [Indexed: 11/22/2022] Open
Abstract
Motor imagery-based brain-computer interfaces (MI-BCIs) send commands to a computer using the brain activity registered when a subject imagines, but does not perform, a given movement. However, MI-BCI performance is inconsistent owing to variations in brain signals across subjects and experiments, which is considered a significant problem for practical BCI. Moreover, some subjects exhibit a phenomenon referred to as "BCI-inefficiency," in which they are unable to generate the brain signals needed for BCI control; these subjects have significant difficulty using BCIs. The primary goal of this study is to identify the connections of the resting-state network that affect MI performance and to predict MI performance from these connections. We used a public MI database that includes the results of psychological questionnaires and pre-experiment resting-state recordings taken over two sessions on different days. A dynamic causal model was used to calculate directional coupling strengths between brain regions. Specifically, we investigated the resting-state motor network, including the dorsolateral prefrontal cortex, which performs motor planning. We observed a significant difference in the connectivity strength from the supplementary motor area to the right dorsolateral prefrontal cortex between the low- and high-MI-performance groups: this coupling, measured in the resting state, is significantly stronger in the high-performance group than in the low-performance group. The connection strength is positively correlated with MI-BCI performance (Session 1: r = 0.54; Session 2: r = 0.42). We also predicted MI performance using linear regression based on this connection (r-squared = 0.31). The proposed predictors, based on dynamic causal modeling, can inform new strategies for improving BCI performance. These findings further our understanding of BCI-inefficiency and can help BCI users save time and cost.
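The final prediction step in the abstract is an ordinary linear regression of MI-BCI accuracy on one resting-state coupling strength. The sketch below reproduces that step on simulated numbers (the coupling values, accuracies, and noise level are all made up; only the fitting and r-squared computation are shown).

```python
import numpy as np

rng = np.random.default_rng(2)

# Simulated per-subject data: DCM coupling strength (SMA -> right DLPFC)
# and MI-BCI accuracy in percent. These are illustrative values only.
n = 30
coupling = rng.normal(0.5, 0.2, n)
accuracy = 55 + 40 * coupling + rng.normal(0, 4, n)

# Least-squares fit of accuracy = slope * coupling + intercept
A = np.column_stack([coupling, np.ones(n)])
(slope, intercept), *_ = np.linalg.lstsq(A, accuracy, rcond=None)

# Coefficient of determination on the fitted data
pred = A @ np.array([slope, intercept])
ss_res = np.sum((accuracy - pred) ** 2)
ss_tot = np.sum((accuracy - accuracy.mean()) ** 2)
r2 = 1 - ss_res / ss_tot
print(f"slope = {slope:.1f} %/unit coupling, r^2 = {r2:.2f}")
```

A positive slope here corresponds to the study's finding that stronger SMA-to-DLPFC coupling accompanies higher MI performance; the study's actual r-squared on real data was 0.31.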
Affiliation(s)
- Minji Lee — Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Jae-Geun Yoon — Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Seong-Whan Lee — Department of Artificial Intelligence, Korea University, Seoul, South Korea
|
30
|
Shin GH, Lee M, Kim HJ, Lee SW. Prediction of Event Related Potential Speller Performance Using Resting-State EEG. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2020:2973-2976. [PMID: 33018630 DOI: 10.1109/embc44109.2020.9175914] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
An event-related potential (ERP) speller can be used for device control and communication by locked-in or severely injured patients. However, problems such as inter-subject performance instability and ERP-illiteracy remain unresolved. It is therefore desirable to predict classification performance before running an ERP speller so that it can be used efficiently. In this study, we investigated correlations between resting-state EEG recorded before an ERP speller session and speller performance. Specifically, we examined spectral power and functional connectivity across four brain regions and five frequency bands. Delta power in the frontal region and functional connectivity in the delta, alpha, and gamma bands were significantly correlated with ERP speller performance, and we predicted speller performance from these resting-state EEG features. These findings may contribute to investigating ERP-illiteracy and to identifying appropriate alternatives for each user.
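One of the two feature families in this abstract is band-limited spectral power of resting-state EEG. The sketch below computes such a feature for a single channel via a plain periodogram; the band edges (0.5-4 Hz delta, 8-13 Hz alpha), sampling rate, and test signal are illustrative assumptions, not the study's exact preprocessing.

```python
import numpy as np

rng = np.random.default_rng(3)

# Simulated single-channel resting-state EEG: a dominant 2 Hz (delta-band)
# component plus white noise, 10 s at 250 Hz.
fs = 250
t = np.arange(fs * 10) / fs
sig = np.sin(2 * np.pi * 2 * t) + 0.3 * rng.normal(size=t.size)

def band_power(x, fs, lo, hi):
    """Mean power of x in the [lo, hi] Hz band from the periodogram."""
    freqs = np.fft.rfftfreq(x.size, 1 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / x.size
    mask = (freqs >= lo) & (freqs <= hi)
    return psd[mask].mean()

delta = band_power(sig, fs, 0.5, 4.0)
alpha = band_power(sig, fs, 8.0, 13.0)
print(delta > alpha)   # the 2 Hz component dominates the delta band
```

In a study like this one, such band powers would be averaged within each region of interest and then correlated with per-subject speller accuracy.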
|
31
|
Jeong JH, Shim KH, Kim DJ, Lee SW. Brain-Controlled Robotic Arm System Based on Multi-Directional CNN-BiLSTM Network Using EEG Signals. IEEE Trans Neural Syst Rehabil Eng 2020; 28:1226-1238. [DOI: 10.1109/tnsre.2020.2981659] [Citation(s) in RCA: 83] [Impact Index Per Article: 20.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|