1. Pan H, Ding P, Wang F, Li T, Zhao L, Nan W, Fu Y, Gong A. Comprehensive evaluation methods for translating BCI into practical applications: usability, user satisfaction and usage of online BCI systems. Front Hum Neurosci 2024; 18:1429130. PMID: 38903409; PMCID: PMC11188342; DOI: 10.3389/fnhum.2024.1429130.
Abstract
Although the brain-computer interface (BCI) is considered a revolutionary advance in human-computer interaction and has achieved significant progress, a considerable gap remains between current technological capabilities and practical applications. To promote the translation of BCI into practical applications, some studies have proposed gold standards for the online evaluation of BCI classification algorithms. However, few studies have proposed a more comprehensive evaluation method for the entire online BCI system, and such methods have not yet received sufficient attention from the BCI research and development community. This article therefore elaborates the qualitative leap from analyzing and modeling offline BCI data to constructing online BCI systems and optimizing their performance, emphasizes user-centered design, and then details and reviews comprehensive evaluation methods for translating BCI into practical applications: evaluation of the usability (including the effectiveness and efficiency of systems), the user satisfaction (including BCI-related aspects), and the usage (including the match between system and user) of online BCI systems. Finally, the challenges faced in evaluating the usability and user satisfaction of online BCI systems, the efficacy of online BCI systems, and the integration of BCI with artificial intelligence (AI) and/or virtual reality (VR) and other technologies to enhance system intelligence and user experience are discussed. It is expected that the evaluation methods for online BCI systems elaborated in this review will promote the translation of BCI into practical applications.
Affiliation(s)
- He Pan
  - Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China
  - Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China
- Peng Ding
  - Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China
  - Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China
- Fan Wang
  - Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China
  - Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China
- Tianwen Li
  - Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China
  - Faculty of Science, Kunming University of Science and Technology, Kunming, China
- Lei Zhao
  - Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China
  - Faculty of Science, Kunming University of Science and Technology, Kunming, China
- Wenya Nan
  - Department of Psychology, School of Education, Shanghai Normal University, Shanghai, China
- Yunfa Fu
  - Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, China
  - Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, China
- Anmin Gong
  - School of Information Engineering, Chinese People's Armed Police Force Engineering University, Xi'an, China
2. AL-Quraishi MS, Tan WH, Elamvazuthi I, Ooi CP, Saad NM, Al-Hiyali MI, Karim H, Azhar Ali SS. Cortical signals analysis to recognize intralimb mobility using modified RNN and various EEG quantities. Heliyon 2024; 10:e30406. PMID: 38726180; PMCID: PMC11079093; DOI: 10.1016/j.heliyon.2024.e30406.
Abstract
Electroencephalogram (EEG) signals are critical in interpreting sensorimotor activity to predict body movements. However, their efficacy in identifying intralimb movements, such as dorsiflexion and plantar flexion of the foot, remains suboptimal. This study explores whether various EEG signal quantities can effectively recognize intralimb movements to facilitate the development of Brain-Computer Interface (BCI) devices for foot rehabilitation. This research involved twenty-two healthy, right-handed participants. EEG data were collected using 21 electrodes positioned over the motor cortex, while two electromyography (EMG) electrodes recorded the onset of ankle joint movements. The study focused on analyzing slow cortical potentials (SCP) and sensorimotor rhythms (SMR) in the alpha and beta bands of the EEG. Five key features were extracted: a fourth-order autoregressive feature, variance, waveform length, standard deviation, and permutation entropy. A modified Recurrent Neural Network (RNN), including Long Short-Term Memory (LSTM) and Gated Recurrent Unit (GRU) algorithms, was developed for movement recognition. These were compared against conventional machine learning algorithms, including nonlinear Support Vector Machine (SVM) and k-Nearest Neighbor (kNN) classifiers. The performance of the proposed models was assessed using two data schemes: within-subject and across-subjects. The findings demonstrated that the GRU and LSTM models significantly outperformed traditional machine learning algorithms in recognizing different EEG signal quantities for intralimb movement: the accuracies of LSTM within and across subjects were 98.87 ± 1.80% and 87.38 ± 0.86%, respectively, whereas those of GRU were 99.18 ± 1.28% and 86.44 ± 0.69%, respectively. The study indicates that deep learning models, particularly GRU and LSTM, hold superior potential over standard machine learning techniques for identifying intralimb movements using EEG signals. This advancement could significantly benefit the development of BCI devices aimed at foot rehabilitation, suggesting a new avenue for enhancing physical therapy outcomes.
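The five per-window features named in this abstract are standard time-series descriptors and can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' code; the least-squares AR fit and the permutation-entropy order are my assumptions.

```python
import math

import numpy as np

def eeg_window_features(x, ar_order=4, pe_order=3):
    """Variance, waveform length, standard deviation, permutation entropy,
    and 4th-order AR coefficients for one EEG window."""
    x = np.asarray(x, dtype=float)
    var = x.var()
    std = x.std()
    wl = np.abs(np.diff(x)).sum()  # waveform length: summed absolute first difference
    # AR(ar_order) coefficients by least squares on lagged copies of the signal.
    lags = np.column_stack([x[ar_order - k - 1 : len(x) - k - 1] for k in range(ar_order)])
    ar = np.linalg.lstsq(lags, x[ar_order:], rcond=None)[0]
    # Permutation entropy: Shannon entropy of ordinal patterns, normalized to [0, 1].
    patterns = np.array([x[i : i + pe_order].argsort() for i in range(len(x) - pe_order + 1)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    pe = -(p * np.log(p)).sum() / math.log(math.factorial(pe_order))
    return np.concatenate([[var, wl, std, pe], ar])
```

A feature vector like this (one per sliding window and channel) would then feed the LSTM/GRU or SVM/kNN classifiers the abstract compares.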
Affiliation(s)
- Maged S. AL-Quraishi
  - Interdisciplinary Research Center for Smart Mobility and Logistics (IRC-SML), King Fahd University of Petroleum & Minerals (KFUPM), Dhahran, 31261, Saudi Arabia
- Wooi Haw Tan
  - Center of Digital Home, Faculty of Engineering, Multimedia University, 63100, Cyberjaya, Selangor, Malaysia
- Irraivan Elamvazuthi
  - Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 36210, Perak, Malaysia
- Chee Pun Ooi
  - Center of Digital Home, Faculty of Engineering, Multimedia University, 63100, Cyberjaya, Selangor, Malaysia
- Naufal M. Saad
  - Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 36210, Perak, Malaysia
- Mohammed Isam Al-Hiyali
  - Department of Electrical and Electronic Engineering, Universiti Teknologi PETRONAS, Bandar Seri Iskandar, 36210, Perak, Malaysia
- H.A. Karim
  - Center of Digital Home, Faculty of Engineering, Multimedia University, 63100, Cyberjaya, Selangor, Malaysia
- Syed Saad Azhar Ali
  - Interdisciplinary Research Center for Smart Mobility and Logistics (IRC-SML), King Fahd University of Petroleum & Minerals (KFUPM), Dhahran, 31261, Saudi Arabia
  - Aerospace Engineering Department, King Fahd University of Petroleum & Minerals (KFUPM), Dhahran, 31261, Saudi Arabia
3. Zhang X, Zhang T, Jiang Y, Zhang W, Lu Z, Wang Y, Tao Q. A novel brain-controlled prosthetic hand method integrating AR-SSVEP augmentation, asynchronous control, and machine vision assistance. Heliyon 2024; 10:e26521. PMID: 38463871; PMCID: PMC10920167; DOI: 10.1016/j.heliyon.2024.e26521.
Abstract
Background and objective The brain-computer interface (BCI) system based on steady-state visual evoked potentials (SSVEP) is expected to help disabled patients achieve alternative prosthetic hand assistance. However, existing studies still have shortcomings in interaction aspects such as the stimulus paradigm and control logic. The purpose of this study is to innovate the visual stimulus paradigm and the asynchronous decoding/control strategy by integrating augmented reality (AR) technology, and to propose an asynchronous pattern recognition algorithm, thereby improving the interaction logic and practical applicability of a prosthetic hand driven by the BCI system. Methods An asynchronous visual stimulus paradigm based on an AR interface was proposed, with 8 control modes: Grasp, Put down, Pinch, Point, Fist, Palm push, Hold pen, and Initial. According to the attentional orienting characteristics of the paradigm, a novel asynchronous pattern recognition algorithm combining center extended canonical correlation analysis and a support vector machine (Center-ECCA-SVM) was proposed. This study then proposed an intelligent BCI system switch based on a deep learning object detection algorithm (YOLOv4) to improve the level of user interaction. Finally, two experiments were designed to test the performance of the brain-controlled prosthetic hand system and its practical performance in real scenarios. Results Under the AR paradigm of this study, compared with the liquid crystal display (LCD) paradigm, the average SSVEP spectrum amplitude across subjects increased by 17.41% and the signal-to-noise ratio (SNR) increased by 3.52%. The average stimulus pattern recognition accuracy was 96.71 ± 3.91%, which was 2.62% higher than under the LCD paradigm. With a data analysis window of 2 s, the Center-ECCA-SVM classifier obtained asynchronous pattern recognition accuracies of 94.66 ± 3.87% and 97.40 ± 2.78% under the Normal metric and the Tolerant metric, respectively, and the YOLOv4-tiny model achieved a speed of 25.29 fps and 96.4% confidence for the prosthetic hand in real-time detection. Finally, the brain-controlled prosthetic hand helped the subjects complete 4 kinds of daily-life tasks in a real scene, with completion times all within an acceptable range, verifying the effectiveness and practicability of the system. Conclusion This research improves the user-interaction level of a prosthetic hand driven by a BCI system, with improvements in the SSVEP paradigm, asynchronous pattern recognition, interaction, and control logic. It also provides support for BCI research on alternative prosthetic control and movement disorder rehabilitation programs.
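Center-ECCA extends canonical correlation analysis (CCA), the standard SSVEP frequency-recognition baseline. A minimal sketch of plain CCA-based SSVEP recognition follows; it is not the authors' Center-ECCA-SVM pipeline, and the harmonic count and sine/cosine reference design are assumptions.

```python
import numpy as np

def cca_max_corr(X, Y):
    """Largest canonical correlation between two multichannel signals (samples x channels)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    # Canonical correlations are the singular values of Qx^T Qy.
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_frequency(eeg, fs, candidate_freqs, n_harmonics=2):
    """Classify an SSVEP window (samples x channels) as the candidate flicker
    frequency whose sine/cosine reference set correlates best with the EEG."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in candidate_freqs:
        refs = np.column_stack(
            [fn(2 * np.pi * (h + 1) * f * t)
             for h in range(n_harmonics) for fn in (np.sin, np.cos)]
        )
        scores.append(cca_max_corr(eeg, refs))
    return candidate_freqs[int(np.argmax(scores))]
```

An asynchronous variant would additionally threshold the winning correlation to reject windows in which the user is not attending any stimulus.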
Affiliation(s)
- Xiaodong Zhang
  - School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
  - Shaanxi Key Laboratory of Intelligent Robot, Xi'an, Shaanxi, 710049, China
- Teng Zhang
  - Zhejiang Normal University, Jinhua, Zhejiang, 321004, China
  - Shaanxi Key Laboratory of Intelligent Robot, Xi'an, Shaanxi, 710049, China
- Yongyu Jiang
  - School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
- Weiming Zhang
  - School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
- Zhufeng Lu
  - School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
- Yu Wang
  - School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
- Qing Tao
  - School of Mechanical Engineering, Xinjiang University, Wulumuqi, Xinjiang, 830000, China
4. Degirmenci M, Yuce YK, Perc M, Isler Y. EEG-based finger movement classification with intrinsic time-scale decomposition. Front Hum Neurosci 2024; 18:1362135. PMID: 38505099; PMCID: PMC10948500; DOI: 10.3389/fnhum.2024.1362135.
Abstract
Introduction Brain-computer interfaces (BCIs) are systems that acquire the brain's electrical activity and provide control of external devices. Since electroencephalography (EEG) is the simplest non-invasive method to capture the brain's electrical activity, EEG-based BCIs are very popular designs. Aside from classifying extremity movements, recent BCI studies have focused on accurately coding the finger movements of the same hand by classifying them with machine learning techniques. State-of-the-art studies have coded five finger movements while neglecting the brain's idle case (i.e., the state in which the brain is not performing any mental task). This can easily cause more false positives and dramatically degrade classification performance and, thus, the performance of BCIs. This study aims to propose a more realistic system to decode the movements of five fingers and the no mental task (NoMT) case from EEG signals. Methods A novel praxis for feature extraction is utilized: features for classification are extracted from the Proper Rotational Components (PRCs) computed through Intrinsic Time-Scale Decomposition (ITD), which has recently been applied successfully to different biomedical signals. These features were then fed to well-known classifiers and their different implementations to discriminate among the six classes. The highest classifier performances obtained in both subject-independent and subject-dependent cases are reported. In addition, ANOVA-based feature selection was examined to determine whether statistically significant features affect classifier performance. Results The Ensemble Learning classifier achieved the highest accuracy of 55.0% among the tested classifiers, and ANOVA-based feature selection increased classifier performance on five-finger movement determination in EEG-based BCI systems. Discussion Compared with similar studies, the proposed praxis achieved a modest yet significant improvement in classification performance even though the number of classes was incremented by one (i.e., NoMT).
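The ANOVA-based feature selection the abstract mentions can be sketched as ranking features by their one-way ANOVA F statistic (between-class over within-class variance). This is a generic illustration, not the study's implementation:

```python
import numpy as np

def anova_f_scores(X, y):
    """One-way ANOVA F statistic for each feature column of X given labels y."""
    X = np.asarray(X, dtype=float)
    classes = np.unique(y)
    overall = X.mean(axis=0)
    ss_between = np.zeros(X.shape[1])
    ss_within = np.zeros(X.shape[1])
    for c in classes:
        Xc = X[y == c]
        ss_between += len(Xc) * (Xc.mean(axis=0) - overall) ** 2
        ss_within += ((Xc - Xc.mean(axis=0)) ** 2).sum(axis=0)
    df_between = len(classes) - 1
    df_within = len(X) - len(classes)
    return (ss_between / df_between) / (ss_within / df_within)

def select_top_k(X, y, k):
    """Indices of the k features with the largest F scores."""
    return np.argsort(anova_f_scores(X, y))[::-1][:k]
```

Only the retained columns would then be passed to the downstream classifiers; the F-score threshold or k is a tuning choice.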
Affiliation(s)
- Murside Degirmenci
  - Department of Biomedical Technologies, Izmir Katip Celebi University, Izmir, Türkiye
- Yilmaz Kemal Yuce
  - Department of Computer Engineering, Alanya Alaaddin Keykubat University, Alanya, Antalya, Türkiye
- Matjaž Perc
  - Faculty of Natural Sciences and Mathematics, University of Maribor, Maribor, Slovenia
  - Department of Medical Research, China Medical University Hospital, China Medical University, Taichung, Taiwan
  - Complexity Science Hub Vienna, Vienna, Austria
  - Department of Physics, Kyung Hee University, Seoul, Republic of Korea
- Yalcin Isler
  - Department of Biomedical Engineering, Izmir Katip Celebi University, Izmir, Türkiye
5. Guan S, Yuan Z, Wang F, Li J, Kang X, Lu B. Multi-class Motor Imagery Recognition of Single Joint in Upper Limb Based on Multi-domain Feature Fusion. Neural Process Lett 2023. DOI: 10.1007/s11063-023-11185-5.
6. Siribunyaphat N, Punsawad Y. Brain-Computer Interface Based on Steady-State Visual Evoked Potential Using Quick-Response Code Pattern for Wheelchair Control. Sensors (Basel) 2023; 23:2069. PMID: 36850667; PMCID: PMC9964090; DOI: 10.3390/s23042069.
Abstract
Brain-computer interfaces (BCIs) are widely utilized in control applications for people with severe physical disabilities, and several researchers have aimed to develop practical brain-controlled wheelchairs. This study utilized a quick-response (QR) code visual stimulus pattern to make an existing electroencephalogram (EEG)-based BCI that uses steady-state visually evoked potentials (SSVEP) for device control more robust. Four commands were generated using the proposed visual stimulation pattern with four flickering frequencies. Moreover, we employed a relative power spectral density (PSD) method for SSVEP feature extraction and compared it with an absolute PSD method. We designed experiments to verify the efficiency of the proposed system. The results revealed that the proposed SSVEP method and algorithm yielded an average classification accuracy of approximately 92% in real-time processing. For the wheelchair simulated via independent-based control, the proposed BCI control required approximately five times longer than keyboard control in real time. The proposed SSVEP method using a QR code pattern can thus be used for BCI-based wheelchair control, although it suffers from visual fatigue during long continuous control. We will verify and enhance the proposed system for wheelchair control by people with severe physical disabilities.
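The relative-PSD feature compared in this abstract can be sketched as the spectral power near each flicker frequency divided by the total power in an analysis band; the absolute variant would use the numerator alone. The band limits and bandwidth below are my assumptions, not the paper's parameters:

```python
import numpy as np

def periodogram(x, fs):
    """One-sided power spectrum of a 1-D signal via the FFT."""
    x = np.asarray(x, dtype=float) - np.mean(x)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    freqs = np.fft.rfftfreq(len(x), 1.0 / fs)
    return freqs, psd

def relative_psd(x, fs, f_target, band=(4.0, 40.0), bw=0.5):
    """Power within +/- bw of f_target divided by total power in `band`."""
    freqs, psd = periodogram(x, fs)
    target = psd[(freqs >= f_target - bw) & (freqs <= f_target + bw)].sum()
    total = psd[(freqs >= band[0]) & (freqs <= band[1])].sum()
    return target / total

def classify_ssvep(x, fs, candidate_freqs):
    """Choose the flicker frequency with the largest relative PSD."""
    return max(candidate_freqs, key=lambda f: relative_psd(x, fs, f))
```

Normalizing by band power makes the feature less sensitive to broadband amplitude differences across subjects and sessions, which is the usual motivation for preferring relative over absolute PSD.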
Affiliation(s)
- Yunyong Punsawad
  - School of Informatics, Walailak University, Nakhon Si Thammarat 80160, Thailand
  - Informatics Innovative Center of Excellence, Walailak University, Nakhon Si Thammarat 80160, Thailand
7. Lyu X, Ding P, Li S, Dong Y, Su L, Zhao L, Gong A, Fu Y. Human factors engineering of BCI: an evaluation for satisfaction of BCI based on motor imagery. Cogn Neurodyn 2023; 17:105-118. PMID: 36704636; PMCID: PMC9871150; DOI: 10.1007/s11571-022-09808-z.
Abstract
Existing brain-computer interface (BCI) research has made great progress in improving the accuracy and information transfer rate (ITR) of BCI systems. However, practical BCI use remains difficult to achieve, in part because human factors are not fully considered in BCI research and development; as a result, BCI systems have not yet met users' expectations. In this study, we investigate a motor imagery BCI system for synchronous lower-limb rehabilitation as an example. From the perspective of human factors engineering of BCI, a comprehensive evaluation method for BCI system development is proposed based on the concept of human-centered design and evaluation. It combines subjects' satisfaction ratings for the BCI sensors on a visual analog scale (VAS), satisfaction ratings for the BCI system as a whole, mental workload ratings for operating the system, and interview/follow-up evaluation of satisfaction with the motor imagery BCI (MI-BCI) system. The methods and concepts proposed in this study provide useful insights for the design of personalized MI-BCI. We expect that human factors engineering of BCI can be applied to the design and satisfaction evaluation of MI-BCI, so as to promote the practical application of this kind of BCI.
Affiliation(s)
- Xiaotong Lyu
  - Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan, China
  - Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, Yunnan, China
- Peng Ding
  - Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan, China
  - Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, Yunnan, China
- Siyu Li
  - Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan, China
  - Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, Yunnan, China
- Yuyang Dong
  - Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan, China
  - Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, Yunnan, China
- Lei Su
  - Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan, China
- Lei Zhao
  - Faculty of Science, Kunming University of Science and Technology, Kunming, Yunnan, China
- Anmin Gong
  - School of Information Engineering, Chinese People's Armed Police Force Engineering University, Xi'an, Shaanxi, China
- Yunfa Fu
  - Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming, Yunnan, China
  - Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming, Yunnan, China
8. An EEG-based subject-independent emotion recognition model using a differential-evolution-based feature selection algorithm. Knowl Inf Syst 2022. DOI: 10.1007/s10115-022-01762-w.
9. Peterson SM, Rao RPN, Brunton BW. Learning neural decoders without labels using multiple data streams. J Neural Eng 2022; 19. PMID: 35905727; DOI: 10.1088/1741-2552/ac857c.
Abstract
OBJECTIVE Recent advances in neural decoding have accelerated the development of brain-computer interfaces aimed at assisting users with everyday tasks such as speaking, walking, and manipulating objects. However, current approaches for training neural decoders commonly require large quantities of labeled data, which can be laborious or infeasible to obtain in real-world settings. Alternatively, self-supervised models that share self-generated pseudo-labels between two data streams have shown exceptional performance on unlabeled audio and video data, but it remains unclear how well they extend to neural decoding. APPROACH We learn neural decoders without labels by leveraging multiple simultaneously recorded data streams, including neural, kinematic, and physiological signals. Specifically, we apply cross-modal, self-supervised deep clustering to train decoders that can classify movements from brain recordings. After training, we then isolate the decoders for each input data stream and compare the accuracy of decoders trained using cross-modal deep clustering against supervised and unimodal, self-supervised models. MAIN RESULTS We find that sharing pseudo-labels between two data streams during training substantially increases decoding performance compared to unimodal, self-supervised models, with accuracies approaching those of supervised decoders trained on labeled data. Next, we extend cross-modal decoder training to three or more modalities, achieving state-of-the-art neural decoding accuracy that matches or slightly exceeds the performance of supervised models. SIGNIFICANCE We demonstrate that cross-modal, self-supervised decoding can be applied to train neural decoders when few or no labels are available and extend the cross-modal framework to share information among three or more data streams, further improving self-supervised training.
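The core idea of cross-modal pseudo-labeling, clustering one data stream and using the resulting labels to supervise a decoder on another, can be sketched with plain k-means and a nearest-centroid decoder. The authors use deep clustering; this toy version only illustrates the label-sharing scheme, and the farthest-point initialization is my choice for determinism:

```python
import numpy as np

def kmeans(X, k, iters=50):
    """Plain Lloyd k-means with deterministic farthest-point initialization."""
    centers = [X[0]]
    for _ in range(k - 1):
        d = np.min([((X - c) ** 2).sum(axis=1) for c in centers], axis=0)
        centers.append(X[int(np.argmax(d))])
    centers = np.stack(centers).astype(float)
    labels = np.zeros(len(X), dtype=int)
    for _ in range(iters):
        labels = np.argmin(((X[:, None, :] - centers[None]) ** 2).sum(axis=2), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centers[j] = X[labels == j].mean(axis=0)
    return labels

def cross_modal_train(neural, kinematic, k):
    """Cluster the kinematic stream; its pseudo-labels supervise a
    nearest-centroid decoder that afterwards uses neural data alone."""
    pseudo = kmeans(kinematic, k)
    centroids = np.stack([neural[pseudo == j].mean(axis=0) for j in range(k)])

    def decode(x):
        return int(np.argmin(((centroids - x) ** 2).sum(axis=1)))

    return decode, pseudo
```

After training, the decoder needs only the neural stream, which mirrors the paper's setup where the auxiliary modalities are available during training but not at decoding time.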
Affiliation(s)
- Steven M Peterson
  - Biology, University of Washington, 4000 15th Ave NE, Seattle, Washington, 98195, United States
- Rajesh P N Rao
  - Department of Computer Science and Engineering, College of Engineering, University of Washington, Box 352350, Seattle, Washington, 98195, United States
- Bingni W Brunton
  - University of Washington, 4000 15th Ave NE, Seattle, Washington, 98195, United States
10. Qu J, Guo H, Wang W, Dang S. Prediction of Human-Computer Interaction Intention Based on Eye Movement and Electroencephalograph Characteristics. Front Psychol 2022; 13:816127. PMID: 35496176; PMCID: PMC9039167; DOI: 10.3389/fpsyg.2022.816127.
Abstract
To address unsmooth and inefficient human-computer interaction in the information age, a method for predicting human-computer interaction intention based on electroencephalograph (EEG) signals and eye movement signals is proposed. This approach differs from previous methods, in which researchers made predictions from human-computer interaction data and a single physiological signal; here, the eye movement and EEG features that clearly characterize interaction intention serve as the basis for prediction. In addition, the approach is tested not only with multiple human-computer interaction intentions but also with operators in different cognitive states. The experimental results show that this method has advantages over methods proposed by other researchers. In Experiment 1, using the eye movement features fixation point abscissa (Position X, PX), fixation point ordinate (Position Y, PY), and saccade amplitude (SA) to judge interaction intention, the accuracy reached 92%. In Experiment 2, relying only on the eye movement features pupil size (PS) and fixation duration (FD) could not identify the operator's cognitive state with high accuracy, so EEG signals were added. Combining the screened EEG parameter Rα/β with pupil diameter and fixation duration, the cognitive state was identified with an accuracy of 91.67%. The combination of eye movement and EEG signal features can thus be used to predict the operator's interaction intention and cognitive state.
Affiliation(s)
- Jue Qu
  - School of Aeronautics, Northwestern Polytechnical University, Xi'an, China
  - Air and Missile Defense College, Air Force Engineering University, Xi'an, China
- Hao Guo
  - Air and Missile Defense College, Air Force Engineering University, Xi'an, China
- Wei Wang
  - Air and Missile Defense College, Air Force Engineering University, Xi'an, China
- Sina Dang
  - Air and Missile Defense College, Air Force Engineering University, Xi'an, China
11. A novel classification framework using multiple bandwidth method with optimized CNN for brain–computer interfaces with EEG-fNIRS signals. Neural Comput Appl 2021. DOI: 10.1007/s00521-021-06202-4.
12. Noorbasha SK, Florence Sudha G. Novel approach to remove Electrical Shift and Linear Trend artifact from single channel EEG. Biomed Phys Eng Express 2021; 7. PMID: 34584019; DOI: 10.1088/2057-1976/ac2aee.
Abstract
Electroencephalogram (EEG) signals are crucial to brain-computer interfacing (BCI). However, they are vulnerable to a variety of unintended artifacts that can negatively impact precise assessment of brain function. This paper provides a new algorithm to eliminate the Electrical Shift and Linear Trend (ESLT) artifact in EEG using Singular Spectrum Analysis (SSA) and Enhanced local Polynomial (LP) Approximation-based Total Variation (EPATV). The contaminated single-channel EEG is subdivided into multiple frequency-band components by SSA. EPATV filtering is then applied to the contaminated frequency-band component to acquire its LP and TV components, and a filtered sub-signal is obtained by subtracting both the LP and TV components from that contaminated component. Adding the filtered sub-signal to the remaining SSA frequency-band components yields the final denoised EEG signal. The effectiveness of the proposed method is evaluated using data from three databases and compared with existing methods. Extensive simulation results show that the proposed algorithm outperforms existing methods, exhibiting the highest average correlation coefficient (CC) of 0.9534, an average signal-to-noise ratio (SNR) of 10.2208 dB, the lowest average relative root mean square error (RRMSE) of 0.2787, and an average mean absolute error (MAE) in the α band of 0.0557. The algorithm presented in this paper may be a viable choice for removing the ESLT artifact from a small streaming section of EEG without requiring initial calibration or enormous amounts of EEG data.
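The SSA step that splits a single-channel EEG into additive components can be sketched as embedding the signal in a Hankel trajectory matrix, taking its SVD, and Hankelizing each rank-1 term back into a series. The window length L is an assumption, and the EPATV filtering stage is not reproduced:

```python
import numpy as np

def ssa_components(x, L):
    """Split a 1-D signal into additive SSA components (one per singular value)."""
    x = np.asarray(x, dtype=float)
    N, K = len(x), len(x) - L + 1
    # Trajectory (Hankel) matrix: lagged windows of the signal as columns.
    T = np.column_stack([x[i : i + L] for i in range(K)])
    U, s, Vt = np.linalg.svd(T, full_matrices=False)
    comps = []
    for i in range(len(s)):
        Ti = s[i] * np.outer(U[:, i], Vt[i])
        # Diagonal averaging (Hankelization) turns the rank-1 matrix back into a series.
        comps.append(np.array([Ti[::-1].diagonal(k).mean() for k in range(-L + 1, K)]))
    return np.stack(comps)
```

Because the SVD and diagonal averaging are both linear and exact, the components sum back to the original signal; an artifact-removal pipeline would filter or discard the components carrying the shift/trend before re-summing.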
Affiliation(s)
- Sayedu Khasim Noorbasha
  - Department of Electronics and Communication Engineering, Pondicherry Engineering College, Puducherry-605014, India
- Gnanou Florence Sudha
  - Department of Electronics and Communication Engineering, Pondicherry Engineering College, Puducherry-605014, India
13. Controlling an Anatomical Robot Hand Using the Brain-Computer Interface Based on Motor Imagery. Advances in Human-Computer Interaction 2021. DOI: 10.1155/2021/5515759.
Abstract
More than one billion people worldwide face disabilities, according to the World Health Organization (WHO). In Sri Lanka, thousands of people suffer from a variety of disabilities, especially hand disabilities, due to the civil war in the country; the Ministry of Health of Sri Lanka reports that by 2025 the number of people with disabilities in Sri Lanka will grow by 24.2%. In the field of robotics, new technologies are now being built to make the lives of handicapped people simple and effective. The aim of this research is to develop a 3-finger anatomical robot hand model for handicapped people and to control flexion and extension of the robot hand using motor imagery. Eight EEG electrodes were used to extract EEG signals from the primary motor cortex. Data collection and testing were performed over a 42 s timespan. According to the test results, eight EEG electrodes were sufficient to acquire the motor imagery for flexion and extension finger movements. The overall accuracy of the experiments was 89.34% (mean = 22.32) at 0.894 precision. We also observed that the proposed design provided promising results for performing the grab, hold, and release activities of hand-disabled persons.
14
Shahbakhti M, Rodrigues AS, Augustyniak P, Broniec-Wójcik A, Sološenko A, Beiramvand M, Marozas V. SWT-kurtosis based algorithm for elimination of electrical shift and linear trend from EEG signals. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102373] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
15
Miften FS, Diykh M, Abdulla S, Siuly S, Green JH, Deo RC. A new framework for classification of multi-category hand grasps using EMG signals. Artif Intell Med 2020; 112:102005. [PMID: 33581825 DOI: 10.1016/j.artmed.2020.102005] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2020] [Revised: 12/10/2020] [Accepted: 12/23/2020] [Indexed: 11/26/2022]
Abstract
Electromyogram (EMG) signals have had a great impact on many applications, including prosthetic and rehabilitation devices, human-machine interaction, and clinical and biomedical areas. In recent years, EMG signals have been used as a popular tool to generate device control commands for rehabilitation equipment, such as robotic prostheses. The intention of this study was to design an EMG signal-based expert model for hand-grasp classification that could enhance prosthetic hand movements for people with disabilities. The study, thus, aimed to introduce an innovative framework for recognising hand movements using EMG signals. The proposed framework consists of a logarithmic spectrogram-based graph signal (LSGS), AdaBoost k-means (AB-k-means) and an ensemble of feature selection (FS) techniques. First, the LSGS model is applied to analyse and extract the desirable features from EMG signals. Then, to assist in selecting the most influential features, an ensemble FS is added to the design. Finally, in the classification phase, a novel classification model, named AB-k-means, is developed to classify the selected EMG features into different hand grasps. The proposed hybrid LSGS-based scheme is evaluated with a publicly available EMG hand movement dataset from the UCI repository. Using the same dataset, the LSGS-AB-k-means design model is also benchmarked against several classifiers, including state-of-the-art algorithms. The results demonstrate that the proposed model achieves a high classification rate and superior results compared to several previous research works. This study, therefore, establishes that the proposed model can accurately classify EMG hand grasps and can be implemented as a control unit with low cost and a high classification rate.
Affiliation(s)
- Mohammed Diykh
- School of Sciences, University of Southern Queensland, Australia; University of Thi-Qar, College of Education for Pure Science, Iraq.
- Shahab Abdulla
- USQ College, University of Southern Queensland, Australia.
- Siuly Siuly
- Institute for Sustainable Industries & Liveable Cities, Victoria University, Australia.
- Jonathan H Green
- USQ College, University of Southern Queensland, Australia; Faculty of the Humanities, University of the Free State, South Africa.
- Ravinesh C Deo
- School of Sciences, University of Southern Queensland, Australia.
16
Gannouni S, Belwafi K, Aboalsamh H, AlSamhan Z, Alebdi B, Almassad Y, Alobaedallah H. EEG-Based BCI System to Detect Fingers Movements. Brain Sci 2020; 10:brainsci10120965. [PMID: 33321915 PMCID: PMC7763179 DOI: 10.3390/brainsci10120965] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2020] [Revised: 11/24/2020] [Accepted: 12/08/2020] [Indexed: 11/16/2022] Open
Abstract
Advances in assistive technologies toward restoring the mobility of paralyzed and/or amputated limbs will go a long way. Herein, we propose a system that adopts brain-computer interface technology to control prosthetic fingers with the use of brain signals. To predict the movements of each finger, complex electroencephalogram (EEG) signal processing algorithms must be applied to remove outliers, extract features, and handle each of the five human fingers separately. The proposed method deals with a multi-class classification problem. Our machine learning strategy to solve this problem is built on an ensemble of one-class classifiers, each of which is dedicated to predicting the intention to move a specific finger. Regions of the brain that are sensitive to the movements of the fingers are identified and located. The average accuracy of the proposed EEG signal processing chain reached 81% for five subjects. Unlike the majority of existing prototypes, which allow only a single finger to be controlled and only one movement to be performed at a time, the proposed system will enable multiple fingers to perform movements simultaneously. Although the proposed system classifies five tasks, the obtained accuracy remains high even when compared with binary classification systems. The proposed system contributes to the advancement of a novel prosthetic solution that allows people with severe disabilities to perform daily tasks easily.
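The ensemble-of-one-class-classifiers strategy described above can be illustrated with a toy numpy sketch. The centroid-plus-radius detector below is a hypothetical stand-in for whatever one-class model the authors actually used; all names and data are illustrative. The key property shown is that each detector fires independently, so several fingers can be flagged at once.

```python
import numpy as np

class OneClassCentroid:
    """Toy one-class detector: accept a sample if it lies within a
    scaled distance of the training centroid."""
    def fit(self, X):
        self.center = X.mean(axis=0)
        d = np.linalg.norm(X - self.center, axis=1)
        self.radius = d.mean() + 2 * d.std()
        return self

    def score(self, x):
        # Higher score = more typical of this class; > 0 means "fires".
        return self.radius - np.linalg.norm(x - self.center)

def fit_ensemble(class_data):
    """One detector per finger, each trained only on its own class."""
    return {label: OneClassCentroid().fit(X) for label, X in class_data.items()}

def predict_active(models, x):
    # Every detector with a positive score fires, so multiple
    # finger intentions can be detected simultaneously.
    return [label for label, m in models.items() if m.score(x) > 0]
```

A multi-class decision (exactly one finger) would instead pick the label with the highest score.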
17
Iwama S, Tsuchimoto S, Hayashi M, Mizuguchi N, Ushiba J. Scalp electroencephalograms over ipsilateral sensorimotor cortex reflect contraction patterns of unilateral finger muscles. Neuroimage 2020; 222:117249. [PMID: 32798684 DOI: 10.1016/j.neuroimage.2020.117249] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2020] [Revised: 08/02/2020] [Accepted: 08/06/2020] [Indexed: 12/17/2022] Open
Abstract
A variety of neural substrates are implicated in the initiation, coordination, and stabilization of voluntary movements underpinned by adaptive contraction and relaxation of agonist and antagonist muscles. To achieve such flexible and purposeful control of the human body, brain systems exhibit extensive modulation during the transition from resting state to motor execution and in maintaining proper joint impedance. However, the neural structures contributing to such sensorimotor control under unconstrained and naturalistic conditions are not fully characterized. To elucidate which brain regions are implicated in generating and coordinating voluntary movements, we employed a physiologically inspired, two-stage method to decode relaxation and three patterns of contraction in unilateral finger muscles (i.e., extension, flexion, and co-contraction) from high-density scalp electroencephalograms (EEG). The decoder consisted of two parts employed in series. The first discriminated between relaxation and contraction. If the EEG data were discriminated as contraction, the second stage then discriminated among the three contraction patterns. Despite the difficulty of dissociating detailed contraction patterns of muscles within a limb from scalp EEG signals, the decoder performance was 2-fold higher than chance level in the four-class classification. Moreover, weighted features in the trained decoders revealed EEG features differentially contributing to decoding performance. During the first stage, consistent with previous reports, weighted features were localized around the sensorimotor cortex (SM1) contralateral to the activated fingers, while those during the second stage were localized around ipsilateral SM1. The loci of these weighted features suggest that the coordination of unilateral finger muscles induced different signaling patterns in ipsilateral SM1 contributing to motor control. Weighted EEG features thus enabled a deeper understanding of human sensorimotor processing as well as more naturalistic control of brain-computer interfaces.
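The serial two-stage structure of the decoder (rest vs. contraction, then which contraction pattern) can be sketched as follows. The nearest-centroid models are illustrative stand-ins for the actual per-stage decoders, and all class names and features here are hypothetical.

```python
import numpy as np

class NearestCentroid:
    """Minimal classifier: predict the class with the closest centroid."""
    def fit(self, X, y):
        y = np.asarray(y)
        self.labels = sorted(set(y))
        self.centroids = {c: X[y == c].mean(axis=0) for c in self.labels}
        return self

    def predict(self, x):
        return min(self.labels,
                   key=lambda c: np.linalg.norm(x - self.centroids[c]))

class TwoStageDecoder:
    """Stage 1 separates rest from contraction; stage 2, trained only
    on contraction trials, resolves the contraction pattern."""
    def fit(self, X, y, rest_label='rest'):
        y = np.asarray(y)
        self.rest_label = rest_label
        coarse = np.where(y == rest_label, 'rest', 'move')
        self.stage1 = NearestCentroid().fit(X, coarse)
        move = y != rest_label
        self.stage2 = NearestCentroid().fit(X[move], y[move])
        return self

    def predict(self, x):
        if self.stage1.predict(x) == 'rest':
            return self.rest_label
        return self.stage2.predict(x)
```

Splitting the problem this way lets each stage specialize, which is the design rationale the abstract describes.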
Affiliation(s)
- Seitaro Iwama
- School of Fundamental Science and Technology, Graduate School of Keio University, Kanagawa, Japan
- Shohei Tsuchimoto
- School of Fundamental Science and Technology, Graduate School of Keio University, Kanagawa, Japan; Center of Assistive Robotics and Rehabilitation for Longevity and Good Health, National Center for Geriatrics and Gerontology, Aichi, Japan
- Masaaki Hayashi
- School of Fundamental Science and Technology, Graduate School of Keio University, Kanagawa, Japan
- Nobuaki Mizuguchi
- Center of Assistive Robotics and Rehabilitation for Longevity and Good Health, National Center for Geriatrics and Gerontology, Aichi, Japan; Department of Biosciences and informatics, Faculty of Science and Technology, Keio University, 3-14-1 Hiyoshi, Kouhoku-ku, Yokohama, Kanagawa 223-8522, Japan
- Junichi Ushiba
- Department of Biosciences and informatics, Faculty of Science and Technology, Keio University, 3-14-1 Hiyoshi, Kouhoku-ku, Yokohama, Kanagawa 223-8522, Japan.
18
Feng N, Hu F, Wang H, Gouda MA. Decoding of voluntary and involuntary upper-limb motor imagery based on graph fourier transform and cross-frequency coupling coefficients. J Neural Eng 2020; 17:056043. [PMID: 33045685 DOI: 10.1088/1741-2552/abc024] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
OBJECTIVE Brain-computer interface (BCI) technology based on motor imagery (MI) control has become a research hotspot but continues to encounter numerous challenges. BCI can assist in the recovery of stroke patients and serve as a key technology in robot control. Current research on MI focuses almost exclusively on the hands, feet, and tongue. Therefore, the purpose of this paper is to establish a four-class MI BCI system in which the four classes are the four articulations within the right upper limb: the shoulder, elbow, wrist, and hand. APPROACH Ten subjects were chosen to perform nine upper-limb analytic movements, after which the differences were compared in P300, movement-related potentials (MRPs), and event-related desynchronization/event-related synchronization under voluntary MI (V-MI) and involuntary MI (INV-MI). Next, the cross-frequency coupling (CFC) coefficient based on mutual information was extracted from the electrodes and frequency bands of interest. Combined with the graph Fourier transform and a twin bounded support vector machine classifier, four kinds of electroencephalography data were classified, and the classifier's parameters were optimized using a genetic algorithm. MAIN RESULTS The results were encouraging, with an average accuracy of 93.2% and 92.2% for V-MI and INV-MI, respectively, and over 95% for any three classes and any two classes. In most cases, the accuracy of feature extraction based on the proximal articulations was relatively high and performed better. SIGNIFICANCE This paper discussed four types of MI according to three aspects under two modes and classified them by combining the graph Fourier transform and CFC. Accordingly, the theoretical discussion and classification methods may provide a fundamental theoretical basis for BCI interface applications.
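Cross-frequency coupling of the kind used above can be illustrated with a common estimator. The sketch below computes a Tort-style, entropy-based phase-amplitude modulation index rather than the authors' mutual-information-based CFC coefficient; the FFT-based analytic signal, function names, and toy signals are all assumptions.

```python
import numpy as np

def analytic_signal(x):
    """FFT-based analytic signal (the construction behind the Hilbert
    transform): zero the negative frequencies, double the positive ones."""
    n = len(x)
    spec = np.fft.fft(x)
    h = np.zeros(n)
    h[0] = 1.0
    if n % 2 == 0:
        h[n // 2] = 1.0
        h[1:n // 2] = 2.0
    else:
        h[1:(n + 1) // 2] = 2.0
    return np.fft.ifft(spec * h)

def pac_modulation_index(phase_sig, amp_sig, n_bins=18):
    """Entropy-based coupling between the phase of a slow band and the
    amplitude of a fast band, normalized to [0, 1] (0 = no coupling)."""
    phase = np.angle(analytic_signal(phase_sig))
    amp = np.abs(analytic_signal(amp_sig))
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    bins = np.clip(np.digitize(phase, edges) - 1, 0, n_bins - 1)
    mean_amp = np.array([amp[bins == b].mean() for b in range(n_bins)])
    p = mean_amp / mean_amp.sum()
    # KL divergence of the amplitude distribution from uniform.
    return (np.log(n_bins) + (p * np.log(p)).sum()) / np.log(n_bins)
```

In practice the phase and amplitude signals would be band-pass filtered versions of the same EEG channel.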
Affiliation(s)
- Naishi Feng
- Department of Mechanical Engineering and Automation, Northeastern University, Shenyang City, Liaoning, People's Republic of China
19
Wu C, Qiu S, Xing J, He H. A CNN-based compare network for classification of SSVEPs in human walking. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2020:2986-2990. [PMID: 33018633 DOI: 10.1109/embc44109.2020.9176649] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Brain-computer interface (BCI) can provide a way for the disabled to interact with the outside world. Steady-state visual evoked potential (SSVEP), which is evoked by visual stimulation, is one of the important BCI paradigms. In a laboratory environment, the classification accuracy of SSVEPs is excellent. However, during motion, the accuracy is greatly affected and decreases substantially. In this paper, in order to improve the classification accuracy of SSVEP signals during motion, we collected SSVEP data for five targets at three speeds: 0 km/h, 2.5 km/h, and 5 km/h. A compare network based on a convolutional neural network (CNN) was proposed to learn the relationship between the EEG signal and the template corresponding to each stimulus frequency, and to perform classification. Compared with traditional methods (i.e., CCA, FBCCA and SVM) and a state-of-the-art method (CNN) on the collected SSVEP datasets of 20 subjects, the proposed method performed best at all speeds. These results validate the effectiveness of the method. In addition, compared with 0 km/h, the accuracy of the compare network at a high walking speed (5 km/h) did not decrease much and still maintained good performance.
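The CCA baseline mentioned above can be sketched in numpy: an SSVEP segment is classified by the canonical correlation between the multichannel EEG and sine/cosine templates at each candidate stimulus frequency. Function names and the toy data are illustrative, not from the paper.

```python
import numpy as np

def cca_max_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    # Singular values of Qx^T Qy are the canonical correlations.
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

def ssvep_reference(freq, fs, n_samples, n_harmonics=2):
    """Sine/cosine templates at a stimulus frequency and its harmonics."""
    t = np.arange(n_samples) / fs
    cols = []
    for h in range(1, n_harmonics + 1):
        cols.append(np.sin(2 * np.pi * h * freq * t))
        cols.append(np.cos(2 * np.pi * h * freq * t))
    return np.column_stack(cols)

def classify_ssvep(eeg, stim_freqs, fs):
    """eeg: (n_samples, n_channels). Pick the stimulus frequency whose
    template is most canonically correlated with the EEG segment."""
    n = eeg.shape[0]
    scores = [cca_max_corr(eeg, ssvep_reference(f, fs, n)) for f in stim_freqs]
    return stim_freqs[int(np.argmax(scores))]
```

The compare network in the paper replaces this fixed correlation measure with a learned similarity between the EEG and each template.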
20
Kato M, Kanoga S, Hoshino T, Fukami T. Motor Imagery Classification of Finger Motions Using Multiclass CSP. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2020; 2020:2991-2994. [PMID: 33018634 DOI: 10.1109/embc44109.2020.9176612] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Electroencephalogram (EEG) data during motor imagery tasks involving small-scale physical dynamics, such as finger motions, have low discriminability because capturing the spatial difference of the motions is difficult. We assumed that more discriminative features can be captured if spatial filters maximize the independence of each class's data. This study constructed spatial filters, named multiclass common spatial pattern (CSP), that maximize an approximation of the mutual information between extracted components and class labels, and applied them to a five-class motor-imagery dataset containing finger motion tasks. By applying multiclass CSP, the classification accuracies were improved (mean ± SD: 40.6 ± 10.1%) compared with classical CSP (21.8 ± 2.5%) and the no-spatial-filtering case (38.7 ± 10.0%). In addition, we visualized the learned spatial filters to assess the trend of discriminative features of finger motions. These results make clear that multiclass CSP captured task-specific spatial maps for each finger motion and improved multiclass motor-imagery classification performance by about 2% even when the tasks are small-scale physical dynamics.
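The classical (binary) CSP that the multiclass variant above generalizes can be sketched as follows. This is an illustrative numpy reconstruction of standard CSP via whitening plus eigendecomposition, not the authors' mutual-information-based multiclass extension; all names are assumptions.

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Binary CSP: spatial filters that maximize variance for one class
    while minimizing it for the other. trials_*: (n_trials, n_ch, n_samp)."""
    def avg_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)
    ca, cb = avg_cov(trials_a), avg_cov(trials_b)
    # Whiten the composite covariance ca + cb.
    evals, evecs = np.linalg.eigh(ca + cb)
    p = evecs @ np.diag(evals ** -0.5) @ evecs.T
    # Eigenvectors of the whitened class-A covariance give the filters.
    w_evals, w_evecs = np.linalg.eigh(p @ ca @ p.T)
    w = w_evecs.T @ p
    # Keep filters from both ends of the eigenvalue spectrum.
    order = np.argsort(w_evals)
    keep = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return w[keep]

def csp_features(trials, w):
    """Log of normalized variance of spatially filtered trials, the
    standard feature fed to a motor-imagery classifier."""
    z = np.einsum('fc,tcs->tfs', w, trials)
    var = z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))
```

A multiclass extension replaces the two-class variance criterion with an objective over all class covariances, as the paper describes.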
21
Dietz V. Neural coordination of bilateral power and precision finger movements. Eur J Neurosci 2020; 54:8249-8255. [PMID: 32682343 DOI: 10.1111/ejn.14911] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/04/2020] [Revised: 07/02/2020] [Accepted: 07/03/2020] [Indexed: 11/29/2022]
Abstract
The dexterity of hands and fingers is related to the strength of control by cortico-motoneuronal connections which exclusively exist in primates. The cortical command is associated with a task-specific, rapid proprioceptive adaptation of forces applied by hands and fingers to an object. This neural control differs between "power grip" movements (e.g., reach and grasp of a cup) where hand and fingers act as a unity and "precision grip" movements (e.g., picking up a raspberry) where fingers move independently from the hand. In motor tasks requiring hands and fingers of both sides a "neural coupling" (reflected in bilateral reflex responses to unilateral stimulations) coordinates power grip movements (e.g., opening a bottle). In contrast, during bilateral precision movements, such as playing piano, the fingers of both hands move independently, due to a direct cortico-motoneuronal control, while the hands are coupled (e.g., to maintain the rhythm between the two sides). While most studies on prehension concern unilateral hand movements, many activities of daily life are tackled by bilateral power grips where a neural coupling serves for an automatic movement performance. In primates this mode of motor control is supplemented by a system that enables the uni- or bilateral performance of skilled individual finger movements.
Affiliation(s)
- Volker Dietz
- Spinal Injury Center, University Hospital Balgrist, Zürich, Switzerland
22
A Comprehensive sLORETA Study on the Contribution of Cortical Somatomotor Regions to Motor Imagery. Brain Sci 2019; 9:brainsci9120372. [PMID: 31847114 PMCID: PMC6955896 DOI: 10.3390/brainsci9120372] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2019] [Revised: 12/06/2019] [Accepted: 12/07/2019] [Indexed: 12/02/2022] Open
Abstract
Brain–computer interface (BCI) is a technology used to convert brain signals into control of external devices. Researchers have designed and built many interfaces and applications in the last couple of decades. BCI is used for prevention, detection, diagnosis, rehabilitation, and restoration in healthcare. EEG signals are analyzed in this paper to help paralyzed people in rehabilitation. The electroencephalogram (EEG) signals recorded from five healthy subjects are used in this study. The sensor-level EEG signals are converted to source signals by solving the inverse problem. Then, the cortical sources are calculated using the sLORETA method at nine regions marked by a neurophysiologist. Features are extracted from the cortical sources using the common spatial pattern (CSP) method and classified by a support vector machine (SVM). Both the sensor signals and the computed cortical signals corresponding to motor imagery of the hand and foot are used to train the SVM algorithm. Then, signals outside the training set are used to test the classification performance of the classifier. Band-pass filtered activity in the 0.1–30 Hz band and the mu rhythm band is also analyzed for the EEG signals. The classification performance and recognition of the imagery improved up to 100% under some conditions at the cortical level. The cortical source signals at the regions contributing to motor commands are investigated and used to improve the classification of motor imagery.
23
Joadder M, Siuly S, Kabir E, Wang H, Zhang Y. A New Design of Mental State Classification for Subject Independent BCI Systems. Ing Rech Biomed 2019. [DOI: 10.1016/j.irbm.2019.05.004] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
24
Tonic Cold Pain Detection Using Choi–Williams Time-Frequency Distribution Analysis of EEG Signals: A Feasibility Study. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9163433] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
Abstract
Detecting pain based on analyzing electroencephalography (EEG) signals can enhance the ability of caregivers to characterize and manage clinical pain. However, the subjective nature of pain and the nonstationarity of EEG signals increase the difficulty of pain detection using EEG signals analysis. In this work, we present an EEG-based pain detection approach that analyzes the EEG signals using a quadratic time-frequency distribution, namely the Choi–Williams distribution (CWD). The use of the CWD enables construction of a time-frequency representation (TFR) of the EEG signals to characterize the time-varying spectral components of the EEG signals. The TFR of the EEG signals is analyzed to extract 12 time-frequency features for pain detection. These features are used to train a support vector machine classifier to distinguish between EEG signals that are associated with the no-pain and pain classes. To evaluate the performance of our proposed approach, we have recorded EEG signals for 24 healthy subjects under tonic cold pain stimulus. Moreover, we have developed two performance evaluation procedures—channel- and feature-based evaluation procedures—to study the effect of the utilized EEG channels and time-frequency features on the accuracy of pain detection. The experimental results show that our proposed approach achieved an average classification accuracy of 89.24% in distinguishing between the no-pain and pain classes. In addition, the classification performance achieved using our proposed approach outperforms the classification results reported in several existing EEG-based pain detection approaches.
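The TFR-feature pipeline described above can be sketched with a simple magnitude-squared STFT standing in for the Choi–Williams distribution (a quadratic TFD that requires more specialized code). The summary statistics below are illustrative pooled features, not the paper's 12 time-frequency features; all names are assumptions.

```python
import numpy as np

def stft_tfr(x, fs, win=64, hop=32):
    """Magnitude-squared STFT of a 1-D signal as a simple time-frequency
    representation. Returns (freqs, tfr) with tfr shaped (freq, time)."""
    window = np.hanning(win)
    frames = [x[i:i + win] * window for i in range(0, len(x) - win + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    freqs = np.fft.rfftfreq(win, 1 / fs)
    return freqs, spec.T

def tfr_features(tfr):
    """A few summary statistics commonly pooled from a TFR and fed to a
    classifier such as an SVM."""
    flat = tfr.ravel()
    return np.array([flat.mean(), flat.std(), flat.max(),
                     np.median(flat), flat.sum()])
```

In the paper's setting, one such feature vector would be computed per EEG channel and the channel/feature subsets evaluated separately.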