1
Guan Z, Zhang X, Huang W, Li K, Chen D, Li W, Sun J, Chen L, Mao Y, Sun H, Tang X, Cao L, Li Y. A Method for Detecting Depression in Adolescence Based on an Affective Brain-Computer Interface and Resting-State Electroencephalogram Signals. Neurosci Bull 2024. PMID: 39565521; DOI: 10.1007/s12264-024-01319-7.
Abstract
Depression is increasingly prevalent among adolescents and can profoundly impact their lives. However, the early detection of depression is often hindered by the time-consuming diagnostic process and the absence of objective biomarkers. In this study, we propose a novel approach for depression detection based on an affective brain-computer interface (aBCI) and the resting-state electroencephalogram (EEG). By fusing EEG features associated with both emotional and resting states, our method captures comprehensive depression-related information. The final depression detection model, derived through decision fusion with multiple independent models, further enhances detection efficacy. Our experiments involved 40 adolescents with depression and 40 matched controls. The proposed model achieved an accuracy of 86.54% on cross-validation and 88.20% on the independent test set, demonstrating the efficiency of multimodal fusion. In addition, further analysis revealed distinct brain activity patterns between the two groups across different modalities. These findings hold promise for new directions in depression detection and intervention.
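As a purely illustrative sketch of the decision-fusion idea described in this abstract (not the authors' implementation; the feature sets, classifiers, and averaging rule are assumptions), fusing independent models trained on emotion-task and resting-state features might look like this:

```python
# Hypothetical decision fusion of independent models trained on different
# EEG feature sets (e.g., emotional-state vs. resting-state features).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.svm import SVC

rng = np.random.default_rng(0)
n = 160                                                    # synthetic epochs
y = rng.integers(0, 2, n)                                  # 0 = control, 1 = depression (toy labels)
X_emotion = rng.normal(size=(n, 32)) + y[:, None] * 0.4    # synthetic "aBCI" features
X_resting = rng.normal(size=(n, 20)) + y[:, None] * 0.3    # synthetic resting-state features

tr, te = np.arange(0, 120), np.arange(120, n)              # simple split

# Independent models, one per modality.
clf_emotion = LogisticRegression(max_iter=1000).fit(X_emotion[tr], y[tr])
clf_resting = SVC(probability=True).fit(X_resting[tr], y[tr])

# Decision fusion: average the predicted class probabilities.
p = 0.5 * clf_emotion.predict_proba(X_emotion[te]) \
  + 0.5 * clf_resting.predict_proba(X_resting[te])
y_pred = p.argmax(axis=1)
print("fused accuracy:", (y_pred == y[te]).mean())
```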
Affiliation(s)
- Zijing Guan
- School of Automation Science and Engineering, South China University of Technology, Guangzhou, 510641, China
- Research Center for Brain-Computer Interface, Pazhou Lab, Guangzhou, 510330, China
- Xiaofei Zhang
- The Affiliated Brain Hospital, Guangzhou Medical University, Guangzhou, 510370, China
- Weichen Huang
- Research Center for Brain-Computer Interface, Pazhou Lab, Guangzhou, 510330, China
- Kendi Li
- School of Automation Science and Engineering, South China University of Technology, Guangzhou, 510641, China
- Research Center for Brain-Computer Interface, Pazhou Lab, Guangzhou, 510330, China
- Di Chen
- School of Automation Science and Engineering, South China University of Technology, Guangzhou, 510641, China
- Research Center for Brain-Computer Interface, Pazhou Lab, Guangzhou, 510330, China
- Weiming Li
- The Affiliated Brain Hospital, Guangzhou Medical University, Guangzhou, 510370, China
- Jiaqi Sun
- The Affiliated Brain Hospital, Guangzhou Medical University, Guangzhou, 510370, China
- Lei Chen
- The Affiliated Brain Hospital, Guangzhou Medical University, Guangzhou, 510370, China
- Yimiao Mao
- The Affiliated Brain Hospital, Guangzhou Medical University, Guangzhou, 510370, China
- Huijun Sun
- Research Center for Brain-Computer Interface, Pazhou Lab, Guangzhou, 510330, China
- Xiongzi Tang
- Research Center for Brain-Computer Interface, Pazhou Lab, Guangzhou, 510330, China
- Liping Cao
- The Affiliated Brain Hospital, Guangzhou Medical University, Guangzhou, 510370, China
- Yuanqing Li
- School of Automation Science and Engineering, South China University of Technology, Guangzhou, 510641, China
- Research Center for Brain-Computer Interface, Pazhou Lab, Guangzhou, 510330, China
2
She J, Liu Y, Xu Z, Xiang B, Li N, Liu W, Yan F, Yan L. Long-Lasting Neural Activity Indexed by Cognitive Function Underlying Unconscious Color Perception. IEEE Sens J 2024; 24:37169-37182. DOI: 10.1109/jsen.2024.3444274.
Affiliation(s)
- Jingyang She
- College of Automotive Engineering, Wuhan University of Technology, Wuhan, China
- Yan Liu
- College of Automotive Engineering, Wuhan University of Technology, Wuhan, China
- Zhipeng Xu
- Zhongnan Hospital of Wuhan University, Wuhan, China
- Biao Xiang
- College of Automotive Engineering, Wuhan University of Technology, Wuhan, China
- Ningna Li
- College of Automotive Engineering, Wuhan University of Technology, Wuhan, China
- Wenjiang Liu
- College of Automotive Engineering, Wuhan University of Technology, Wuhan, China
- Fuwu Yan
- College of Automotive Engineering, Wuhan University of Technology, Wuhan, China
- Lirong Yan
- College of Automotive Engineering, Wuhan University of Technology, Wuhan, China
3
An S, Kim S, Chikontwe P, Park SH. Dual Attention Relation Network With Fine-Tuning for Few-Shot EEG Motor Imagery Classification. IEEE Trans Neural Netw Learn Syst 2024; 35:15479-15493. PMID: 37379192; DOI: 10.1109/tnnls.2023.3287181.
Abstract
Recently, motor imagery (MI) electroencephalography (EEG) classification techniques using deep learning have shown improved performance over conventional techniques. However, improving the classification accuracy on unseen subjects is still challenging due to intersubject variability, scarcity of labeled unseen subject data, and low signal-to-noise ratio (SNR). In this context, we propose a novel two-way few-shot network able to efficiently learn how to learn representative features of unseen subject categories and classify them with limited MI EEG data. The pipeline includes an embedding module that learns feature representations from a set of signals, a temporal-attention module to emphasize important temporal features, an aggregation-attention module for key support signal discovery, and a relation module for final classification based on relation scores between a support set and a query signal. In addition to the unified learning of feature similarity and a few-shot classifier, our method can emphasize informative features in support data relevant to the query, which generalizes better on unseen subjects. Furthermore, we propose to fine-tune the model before testing by arbitrarily sampling a query signal from the provided support set to adapt to the distribution of the unseen subject. We evaluate our proposed method with three different embedding modules on cross-subject and cross-dataset classification tasks using brain-computer interface (BCI) competition IV 2a, 2b, and GIST datasets. Extensive experiments show that our model significantly improves over the baselines and outperforms existing few-shot approaches.
4
Wang Z, Hu H, Zhou T, Xu T, Zhao X. Average Time Consumption per Character: A Practical Performance Metric for Generic Synchronous BCI Spellers. IEEE Trans Biomed Eng 2024; 71:2684-2698. PMID: 38602850; DOI: 10.1109/tbme.2024.3387469.
Abstract
OBJECTIVE The information transfer rate (ITR) is widely accepted as a performance metric for generic brain-computer interface (BCI) spellers, yet the communication speed given by the ITR is an upper bound that can never be reached in real systems. A new performance metric is therefore needed. METHODS In this paper, a new metric named average time consumption per character (ATCPC) is proposed. It quantifies how long it takes, on average, to type one character using a typical synchronous BCI speller. To derive the ATCPC analytically, the real typing process is modelled as a random walk on a graph, with misclassification and backspace carefully characterized. A closed-form formula for the ATCPC is obtained by computing the hitting time of the random walk. The new metric is validated through simulated typing experiments and compared with the ITR. RESULTS Firstly, the formula and the simulation show good consistency. Secondly, the ITR always tends to overestimate the communication speed, whereas the ATCPC is more realistic. CONCLUSION The proposed ATCPC metric is valid. SIGNIFICANCE The ATCPC is a qualified substitute for the ITR. It also reveals the great potential of keyboard optimization to further enhance the performance of BCI spellers, which has hardly been investigated before.
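The hitting-time idea behind an ATCPC-style metric can be illustrated with a small absorbing-Markov-chain computation. The sketch below is not the paper's graph model; the error/backspace transitions and timing constants are simplified assumptions.

```python
# Toy ATCPC-style estimate: expected number of selections to commit one
# correct character, via the hitting time of an absorbing Markov chain.
import numpy as np

def expected_selections(p_correct: float) -> float:
    # States: 0 = character committed (absorbing),
    #         1 = need the target character,
    #         2 = a wrong character is pending deletion (backspace needed).
    # Simplifying assumption: a misclassified backspace leaves state 2 unchanged.
    p = p_correct
    Q = np.array([[0.0, 1 - p],      # transient-to-transient part (states 1, 2)
                  [p,   1 - p]])
    # Fundamental matrix N = (I - Q)^-1; expected steps = row sums of N.
    N = np.linalg.inv(np.eye(2) - Q)
    return N.sum(axis=1)[0]          # starting from state 1

p = 0.9                              # per-selection classification accuracy (assumed)
t_selection = 4.0                    # seconds per selection, stimulus plus gaze shift (assumed)
atcpc = expected_selections(p) * t_selection
print(f"expected selections/char: {expected_selections(p):.2f}, ATCPC ~ {atcpc:.1f} s")
```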
5
Cheng Y, Yan L, Shoukat MU, She J, Liu W, Shi C, Wu Y, Yan F. An improved SSVEP-based brain-computer interface with low-contrast visual stimulation and its application in UAV control. J Neurophysiol 2024; 132:809-821. PMID: 38985934; DOI: 10.1152/jn.00029.2024.
Abstract
Efficient communication and regulation are crucial for advancing brain-computer interfaces (BCIs), with the steady-state visual-evoked potential (SSVEP) paradigm demonstrating high accuracy and information transfer rates. However, the conventional SSVEP paradigm encounters challenges related to visual occlusion and fatigue. In this study, we propose an improved SSVEP paradigm that addresses these issues by lowering the contrast of the visual stimulation. The improved paradigms outperform the traditional paradigm in the experiments, substantially reducing the intensity of the visual stimulation. Furthermore, we apply this enhanced paradigm to a BCI navigation system, enabling two-dimensional navigation of unmanned aerial vehicles (UAVs) from a first-person perspective. Experimental results demonstrate that the enhanced SSVEP-based BCI system performs navigation and search tasks accurately. Our findings highlight the feasibility of the enhanced SSVEP paradigm in mitigating visual occlusion and fatigue, presenting a more intuitive and natural approach for BCIs to control external equipment. NEW & NOTEWORTHY In this article, we proposed an improved steady-state visual-evoked potential (SSVEP) paradigm and constructed an SSVEP-based brain-computer interface (BCI) system to navigate an unmanned aerial vehicle (UAV) in two-dimensional (2-D) physical space. We also proposed a modified method for evaluating visual fatigue that includes a subjective score and objective indices. The results indicated that the improved SSVEP paradigm could effectively reduce visual fatigue while maintaining high accuracy.
Affiliation(s)
- Yu Cheng
- Hubei Key Laboratory of Advanced Technology for Automotive Components, Wuhan University of Technology, Wuhan, People's Republic of China
- Lirong Yan
- Hubei Key Laboratory of Advanced Technology for Automotive Components, Wuhan University of Technology, Wuhan, People's Republic of China
- Foshan Xianhu Laboratory of the Advanced Energy Science and Technology Guangdong Laboratory, Foshan, People's Republic of China
- Muhammad Usman Shoukat
- Hubei Key Laboratory of Advanced Technology for Automotive Components, Wuhan University of Technology, Wuhan, People's Republic of China
- Jingyang She
- Hubei Key Laboratory of Advanced Technology for Automotive Components, Wuhan University of Technology, Wuhan, People's Republic of China
- Wenjiang Liu
- Hubei Key Laboratory of Advanced Technology for Automotive Components, Wuhan University of Technology, Wuhan, People's Republic of China
- Foshan Xianhu Laboratory of the Advanced Energy Science and Technology Guangdong Laboratory, Foshan, People's Republic of China
- Changcheng Shi
- Hubei Key Laboratory of Advanced Technology for Automotive Components, Wuhan University of Technology, Wuhan, People's Republic of China
- Foshan Xianhu Laboratory of the Advanced Energy Science and Technology Guangdong Laboratory, Foshan, People's Republic of China
- Yibo Wu
- Wuhan Leishen Special Equipment Co. Ltd., Wuhan, People's Republic of China
- Fuwu Yan
- Hubei Key Laboratory of Advanced Technology for Automotive Components, Wuhan University of Technology, Wuhan, People's Republic of China
- Foshan Xianhu Laboratory of the Advanced Energy Science and Technology Guangdong Laboratory, Foshan, People's Republic of China
6
Liu X, Hu B, Si Y, Wang Q. The role of eye movement signals in non-invasive brain-computer interface typing system. Med Biol Eng Comput 2024; 62:1981-1990. PMID: 38509350; DOI: 10.1007/s11517-024-03070-7.
Abstract
Brain-Computer Interfaces (BCIs) have shown great potential in providing communication and control for individuals with severe motor disabilities. However, traditional BCIs that rely on electroencephalography (EEG) signals suffer from low information transfer rates and high variability across users. Recently, eye movement signals have emerged as a promising alternative due to their high accuracy and robustness. Eye movement signals are the electrical or mechanical signals generated by the movements and behaviors of the eyes, serving to denote the diverse forms of eye movements, such as fixations, smooth pursuit, and other oculomotor activities like blinking. This article presents a review of recent studies on the development of BCI typing systems that incorporate eye movement signals. We first discuss the basic principles of BCI and the recent advancements in text entry. Then, we provide a comprehensive summary of the latest advancements in BCI typing systems that leverage eye movement signals. This includes an in-depth analysis of hybrid BCIs that are built upon the integration of electrooculography (EOG) and eye tracking technology, aiming to enhance the performance and functionality of the system. Moreover, we highlight the advantages and limitations of different approaches, as well as potential future directions. Overall, eye movement signals hold great potential for enhancing the usability and accessibility of BCI typing systems, and further research in this area could lead to more effective communication and control for individuals with motor disabilities.
Affiliation(s)
- Xi Liu
- Key Laboratory of Spectral Imaging Technology, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Key Laboratory of Biomedical Spectroscopy of Xi'an, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China
- Bingliang Hu
- Key Laboratory of Spectral Imaging Technology, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China
- Key Laboratory of Biomedical Spectroscopy of Xi'an, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China
- Yang Si
- Department of Neurology, Sichuan Academy of Medical Science and Sichuan Provincial People's Hospital, Chengdu, 611731, China
- University of Electronic Science and Technology of China, Chengdu, 611731, China
- Quan Wang
- Key Laboratory of Spectral Imaging Technology, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China
- Key Laboratory of Biomedical Spectroscopy of Xi'an, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China
7
Liu H, Wang Z, Li R, Zhao X, Xu T, Zhou T, Hu H. A comparative study of stereo-dependent SSVEP targets and their impact on VR-BCI performance. Front Neurosci 2024; 18:1367932. PMID: 38660227; PMCID: PMC11041379; DOI: 10.3389/fnins.2024.1367932.
Abstract
Steady-state visual evoked potential brain-computer interfaces (SSVEP-BCI) have attracted significant attention due to their ease of deployment and high performance in terms of information transfer rate (ITR) and accuracy, making them a promising candidate for integration with consumer electronics devices. However, as SSVEP characteristics are directly associated with visual stimulus attributes, the influence of stereoscopic vision on SSVEP, a critical visual attribute, has yet to be fully explored. Meanwhile, the promising combination of virtual reality (VR) devices and BCI applications is hampered by the significant disparity between VR environments and traditional 2D displays. This is not only because screen-based SSVEP generally operates under static, stable conditions with simple and unvaried visual stimuli, but also because conventional luminance-modulated stimuli can quickly induce visual fatigue. This study attempts to address these research gaps by designing SSVEP paradigms with stereo-related attributes and conducting a comparative analysis with the traditional 2D planar paradigm in the same VR environment. Two new paradigms are proposed: the 3D paradigm and the 3D-Blink paradigm. The 3D paradigm induces SSVEP by modulating the luminance of spherical targets, while the 3D-Blink paradigm modulates the spheres' opacity instead. The results of offline 4-object selection experiments showed that the accuracies of the 3D and 2D paradigms were 85.67% and 86.17% with canonical correlation analysis (CCA), and 86.17% and 91.73% with filter bank canonical correlation analysis (FBCCA), which is consistent with the reduction in the signal-to-noise ratio (SNR) of the SSVEP harmonics observed for the 3D paradigm in the frequency-domain analysis. The 3D-Blink paradigm achieved 75.00% detection accuracy and an ITR of 27.02 bits/min with 0.8 s of stimulus time and the task-related component analysis (TRCA) algorithm, demonstrating its effectiveness. These findings show that the 3D and 3D-Blink paradigms supported by VR can achieve improved user comfort and satisfactory performance, while further algorithmic optimization and feature analysis are required for the stereo-related paradigms. In conclusion, this study contributes to a deeper understanding of the impact of binocular stereoscopic vision mechanisms on SSVEP paradigms and promotes the application of SSVEP-BCIs in diverse VR environments.
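For readers unfamiliar with the CCA baseline referenced in this abstract, a minimal frequency-detection sketch (generic, not the authors' pipeline; the sampling rate, candidate frequencies, and harmonic count are assumptions) looks like this:

```python
# Minimal CCA-based SSVEP frequency detection on a single multichannel epoch.
import numpy as np
from sklearn.cross_decomposition import CCA

def reference_signals(freq, n_samples, fs, n_harmonics=2):
    t = np.arange(n_samples) / fs
    refs = []
    for h in range(1, n_harmonics + 1):
        refs += [np.sin(2 * np.pi * h * freq * t), np.cos(2 * np.pi * h * freq * t)]
    return np.column_stack(refs)                    # (n_samples, 2 * n_harmonics)

def detect_frequency(epoch, candidate_freqs, fs):
    """epoch: (n_samples, n_channels) EEG segment."""
    scores = []
    for f in candidate_freqs:
        Y = reference_signals(f, epoch.shape[0], fs)
        cca = CCA(n_components=1).fit(epoch, Y)
        u, v = cca.transform(epoch, Y)
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return candidate_freqs[int(np.argmax(scores))], scores

# Synthetic demo: 8-channel epoch containing a 10 Hz component plus noise.
fs, n_samples = 250, 500
t = np.arange(n_samples) / fs
epoch = 0.5 * np.sin(2 * np.pi * 10 * t)[:, None] + np.random.randn(n_samples, 8)
print(detect_frequency(epoch, [8.0, 10.0, 12.0, 15.0], fs)[0])
```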
Affiliation(s)
- Haifeng Liu
- School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai, China
- Zhengyu Wang
- Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai, China
- Ruxue Li
- School of Information Science and Technology, ShanghaiTech University, Shanghai, China
- Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai, China
- Xi Zhao
- School of Microelectronics, Shanghai University, Shanghai, China
- Tianheng Xu
- Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai, China
- Shanghai Frontier Innovation Research Institute, Shanghai, China
- Ting Zhou
- Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai, China
- School of Microelectronics, Shanghai University, Shanghai, China
- Shanghai Frontier Innovation Research Institute, Shanghai, China
- Honglin Hu
- Shanghai Advanced Research Institute, Chinese Academy of Sciences, Shanghai, China
- University of Chinese Academy of Sciences, Beijing, China
8
Pérez-Velasco S, Marcos-Martínez D, Santamaría-Vázquez E, Martínez-Cagigal V, Moreno-Calderón S, Hornero R. Unraveling motor imagery brain patterns using explainable artificial intelligence based on Shapley values. Comput Methods Programs Biomed 2024; 246:108048. PMID: 38308997; DOI: 10.1016/j.cmpb.2024.108048.
Abstract
BACKGROUND AND OBJECTIVE Motor imagery (MI) based brain-computer interfaces (BCIs) are widely used in rehabilitation due to the close relationship that exists between MI and motor execution (ME). However, the underlying brain mechanisms of MI remain poorly understood. Most MI-BCIs use the sensorimotor rhythms elicited in the primary motor cortex (M1) and somatosensory cortex (S1), which consist of an event-related desynchronization followed by an event-related synchronization. Consequently, this has resulted in systems that only record signals around M1 and S1. However, MI could involve a more complex network including sensory, association, and motor areas. In this study, we hypothesize that the superior accuracies achieved by new deep learning (DL) models applied to MI decoding rely on focusing on a broader MI-related activation of the brain. Parallel to the success of DL, the field of explainable artificial intelligence (XAI) has seen continuous development to provide explanations for the success of DL networks. The goal of this study is to use XAI in combination with DL to extract information about MI brain activation patterns from non-invasive electroencephalography (EEG) signals. METHODS We applied an adaptation of Shapley additive explanations (SHAP) to EEGSym, a state-of-the-art DL network with exceptional transfer learning capabilities for inter-subject MI classification. We obtained the SHAP values from two public databases comprising 171 users generating left and right hand MI instances with and without real-time feedback. RESULTS We found that EEGSym based most of its prediction on the signal of the frontal electrodes, i.e., F7 and F8, and on the first 1500 ms of the analyzed imagination period. We also found that MI involves a broad network based not only on M1 and S1, but also on the prefrontal cortex (PFC) and the posterior parietal cortex (PPC). We further applied this knowledge to select an 8-electrode configuration that reached inter-subject accuracies of 86.5% ± 10.6% on the Physionet dataset and 88.7% ± 7.0% on the Carnegie Mellon University dataset. CONCLUSION Our results demonstrate the potential of combining DL and SHAP-based XAI to unravel the brain network involved in producing MI. Furthermore, SHAP values can optimize the requirements for out-of-laboratory BCI applications involving real users.
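As a generic illustration of Shapley-value attribution (not the SHAP adaptation applied to EEGSym in the paper; the masking strategy, model, and data below are assumptions), channel-level importance can be estimated with a simple Monte Carlo permutation scheme:

```python
# Monte Carlo estimate of per-channel Shapley values for a trained classifier.
# "Removing" a channel is approximated by zeroing its features (an assumption).
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(42)
n_trials, n_channels, n_feats = 300, 8, 4          # toy dimensions
X = rng.normal(size=(n_trials, n_channels, n_feats))
y = (X[:, 2, :].mean(axis=1) + X[:, 5, :].mean(axis=1) > 0).astype(int)  # channels 2 and 5 matter

clf = LogisticRegression(max_iter=1000).fit(X.reshape(n_trials, -1), y)

def accuracy_with(channels_on):
    Xm = np.zeros_like(X)
    Xm[:, list(channels_on), :] = X[:, list(channels_on), :]
    return clf.score(Xm.reshape(n_trials, -1), y)

def shapley_values(n_permutations=200):
    phi = np.zeros(n_channels)
    for _ in range(n_permutations):
        order = rng.permutation(n_channels)
        on, prev = set(), accuracy_with(set())
        for ch in order:
            on.add(ch)
            cur = accuracy_with(on)
            phi[ch] += cur - prev                   # marginal contribution of this channel
            prev = cur
    return phi / n_permutations

print(np.round(shapley_values(), 3))                # channels 2 and 5 should stand out
```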
Affiliation(s)
- Sergio Pérez-Velasco
- Biomedical Engineering Group, E.T.S Ingenieros de Telecomunicación, University of Valladolid, Paseo de Belén 15, Valladolid, 47011, Spain; Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain
- Diego Marcos-Martínez
- Biomedical Engineering Group, E.T.S Ingenieros de Telecomunicación, University of Valladolid, Paseo de Belén 15, Valladolid, 47011, Spain; Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain
- Eduardo Santamaría-Vázquez
- Biomedical Engineering Group, E.T.S Ingenieros de Telecomunicación, University of Valladolid, Paseo de Belén 15, Valladolid, 47011, Spain; Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain
- Víctor Martínez-Cagigal
- Biomedical Engineering Group, E.T.S Ingenieros de Telecomunicación, University of Valladolid, Paseo de Belén 15, Valladolid, 47011, Spain; Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain
- Selene Moreno-Calderón
- Biomedical Engineering Group, E.T.S Ingenieros de Telecomunicación, University of Valladolid, Paseo de Belén 15, Valladolid, 47011, Spain
- Roberto Hornero
- Biomedical Engineering Group, E.T.S Ingenieros de Telecomunicación, University of Valladolid, Paseo de Belén 15, Valladolid, 47011, Spain; Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain
9
Hu L, Zhu J, Chen S, Zhou Y, Song Z, Li Y. A Wearable Asynchronous Brain-Computer Interface Based on EEG-EOG Signals With Fewer Channels. IEEE Trans Biomed Eng 2024; 71:504-513. PMID: 37616137; DOI: 10.1109/tbme.2023.3308371.
Abstract
OBJECTIVE Brain-computer interfaces (BCIs) have tremendous application potential in communication, mechatronic control and rehabilitation. However, existing BCI systems are bulky, expensive and require laborious preparation before use. This study proposes a practical and user-friendly BCI system without compromising performance. METHODS A hybrid asynchronous BCI system was developed based on an elaborately designed wearable electroencephalography (EEG) amplifier that is compact, easy to use and offers a high signal-to-noise ratio (SNR). The wearable BCI system can detect P300 signals by processing EEG signals from three channels and operates asynchronously by integrating blink detection. RESULT The wearable EEG amplifier obtains high quality EEG signals and introduces preprocessing capabilities to BCI systems. The wearable BCI system achieves an average accuracy of 94.03±4.65%, an average information transfer rate (ITR) of 31.42±7.39 bits/min and an average false-positive rate (FPR) of 1.78%. CONCLUSION The experimental results demonstrate the feasibility and practicality of the developed wearable EEG amplifier and BCI system. SIGNIFICANCE Wearable asynchronous BCI systems with fewer channels are possible, indicating that BCI applications can be transferred from the laboratory to real-world scenarios.
10
Hancer E, Subasi A. EEG-based emotion recognition using dual tree complex wavelet transform and random subspace ensemble classifier. Comput Methods Biomech Biomed Engin 2023; 26:1772-1784. PMID: 36367337; DOI: 10.1080/10255842.2022.2143714.
Abstract
Emotions are widely acknowledged as a key element in establishing meaningful interactions between humans and computers. Thanks to advances in electroencephalography (EEG), especially the availability of portable and inexpensive wearable EEG devices, the demand for identifying emotions has increased dramatically. However, the overall scientific knowledge and work concerning EEG-based emotion recognition are still limited. To address this issue, we introduce an EEG-based emotion recognition framework in this study. The proposed framework involves the following stages: preprocessing, feature extraction, feature selection, and classification. For the preprocessing stage, multiscale principal component analysis and a Symlets-4 filter are used. A variant of the discrete wavelet transform (DWT), namely the dual-tree complex wavelet transform (DTCWT), is utilized for the feature extraction stage. To reduce the feature dimension, a variety of statistical criteria are employed. For the final stage, we adopt ensemble classifiers due to their promising performance in classification problems. The proposed framework achieves nearly 96.8% accuracy using a random subspace ensemble classifier. These results indicate that the proposed EEG-based framework performs well in identifying emotions.
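To make the pipeline concrete, here is a hedged sketch that uses an ordinary discrete wavelet transform (pywt's sym4) as a simpler stand-in for the paper's dual-tree complex wavelet transform, and scikit-learn's bagging with feature subsampling as a random-subspace-style ensemble; the data, decomposition level, and feature choices are assumptions.

```python
# Wavelet sub-band statistics + random-subspace-style ensemble (illustrative only).
import numpy as np
import pywt
from sklearn.ensemble import BaggingClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.model_selection import cross_val_score

def wavelet_features(signal, wavelet="sym4", level=4):
    coeffs = pywt.wavedec(signal, wavelet, level=level)           # [cA4, cD4, ..., cD1]
    feats = []
    for c in coeffs:
        feats += [np.mean(np.abs(c)), np.std(c), np.mean(c ** 2)] # per-band statistics
    return np.array(feats)

# Synthetic single-channel epochs for two "emotion" classes.
rng = np.random.default_rng(1)
n_epochs, n_samples = 200, 512
y = rng.integers(0, 2, n_epochs)
epochs = rng.normal(size=(n_epochs, n_samples))
t = np.arange(n_samples) / 128.0
epochs[y == 1] += 0.8 * np.sin(2 * np.pi * 10 * t)                # class-dependent rhythm

X = np.vstack([wavelet_features(e) for e in epochs])

# Random subspace ensemble: each tree sees a random subset of the features.
ensemble = BaggingClassifier(DecisionTreeClassifier(), n_estimators=30,
                             max_features=0.5, bootstrap=False,
                             bootstrap_features=True, random_state=0)
print("CV accuracy:", cross_val_score(ensemble, X, y, cv=5).mean())
```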
Affiliation(s)
- Emrah Hancer
- Department of Software Engineering, Bucak Technology Faculty, Mehmet Akif Ersoy University, Burdur, Turkey
- Abdulhamit Subasi
- Institute of Biomedicine, Faculty of Medicine, University of Turku, Turku, Finland
- Department of Computer Science, College of Engineering, Effat University, Jeddah, Saudi Arabia
11
Hag A, Al-Shargie F, Handayani D, Asadi H. Mental Stress Classification Based on Selected Electroencephalography Channels Using Correlation Coefficient of Hjorth Parameters. Brain Sci 2023; 13:1340. PMID: 37759941; PMCID: PMC10527440; DOI: 10.3390/brainsci13091340.
Abstract
Electroencephalography (EEG) signals offer invaluable insights into diverse activities of the human brain, including the intricate physiological and psychological responses associated with mental stress. A major challenge, however, is accurately identifying mental stress while mitigating the limitations associated with a large number of EEG channels. Such limitations encompass computational complexity, potential overfitting, and the prolonged setup time for electrode placement, all of which can hinder practical applications. To address these challenges, this study presents the novel CCHP method, aimed at identifying and ranking commonly optimal EEG channels based on their sensitivity to the mental stress state. This method's uniqueness lies in its ability not only to find common channels, but also to prioritize them according to their responsiveness to stress, ensuring consistency across subjects and making it potentially transformative for real-world applications. From our rigorous examinations, eight channels emerged as universally optimal in detecting stress variances across participants. Leveraging features from the time, frequency, and time-frequency domains of these channels, and employing machine learning algorithms, notably RLDA, SVM, and KNN, our approach achieved a remarkable accuracy of 81.56% with the SVM algorithm outperforming existing methodologies. The implications of this research are profound, offering a stepping stone toward the development of real-time stress detection devices, and consequently, enabling clinicians to make more informed therapeutic decisions based on comprehensive brain activity monitoring.
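For reference, the three Hjorth parameters mentioned here can be computed per channel in a few lines; the correlation-based ranking shown afterwards is only a rough sketch of the idea, not the exact CCHP procedure, and the data are synthetic.

```python
# Hjorth parameters (activity, mobility, complexity) and a simple channel ranking.
import numpy as np

def hjorth(x):
    dx, ddx = np.diff(x), np.diff(np.diff(x))
    activity = np.var(x)
    mobility = np.sqrt(np.var(dx) / np.var(x))
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

# Toy data: (n_epochs, n_channels, n_samples), with binary stress labels.
rng = np.random.default_rng(3)
eeg = rng.normal(size=(120, 8, 256))
labels = rng.integers(0, 2, 120)

# Feature array: Hjorth parameters per epoch and channel.
feats = np.array([[hjorth(epoch[ch]) for ch in range(eeg.shape[1])] for epoch in eeg])

# Rough ranking: correlate each channel's mobility with the stress label
# (a stand-in for the paper's correlation-coefficient-based selection).
ranking = [abs(np.corrcoef(feats[:, ch, 1], labels)[0, 1]) for ch in range(eeg.shape[1])]
print("channels ranked by |corr|:", np.argsort(ranking)[::-1])
```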
Affiliation(s)
- Ala Hag
- School of Computer Science & Engineering, Taylor's University, Jalan Taylors, Subang Jaya 47500, Selangor, Malaysia
- Fares Al-Shargie
- Institute for Intelligent Systems Research and Innovation, Deakin University, Geelong, VIC 3216, Australia
- Dini Handayani
- Department of Electrical Engineering, Abu Dhabi University, Abu Dhabi P.O. Box 59911, United Arab Emirates
- Houshyar Asadi
- Computer Science Department, KICT, International Islamic University Malaysia, Kuala Lumpur 53100, Selangor, Malaysia
12
Ma X, Chen W, Pei Z, Liu J, Huang B, Chen J. A Temporal Dependency Learning CNN With Attention Mechanism for MI-EEG Decoding. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3188-3200. PMID: 37498754; DOI: 10.1109/tnsre.2023.3299355.
Abstract
Deep learning methods have been widely explored in motor imagery (MI)-based brain computer interface (BCI) systems to decode electroencephalography (EEG) signals. However, most studies fail to fully explore temporal dependencies among MI-related patterns generated in different stages during MI tasks, resulting in limited MI-EEG decoding performance. Apart from feature extraction, learning temporal dependencies is equally important to develop a subject-specific MI-based BCI because every subject has their own way of performing MI tasks. In this paper, a novel temporal dependency learning convolutional neural network (CNN) with attention mechanism is proposed to address MI-EEG decoding. The network first learns spatial and spectral information from multi-view EEG data via the spatial convolution block. Then, a series of non-overlapped time windows is employed to segment the output data, and the discriminative feature is further extracted from each time window to capture MI-related patterns generated in different stages. Furthermore, to explore temporal dependencies among discriminative features in different time windows, we design a temporal attention module that assigns different weights to features in various time windows and fuses them into more discriminative features. The experimental results on the BCI Competition IV-2a (BCIC-IV-2a) and OpenBMI datasets show that our proposed network outperforms the state-of-the-art algorithms and achieves the average accuracy of 79.48%, improved by 2.30% on the BCIC-IV-2a dataset. We demonstrate that learning temporal dependencies effectively improves MI-EEG decoding performance. The code is available at https://github.com/Ma-Xinzhi/LightConvNet.
13
Santamaría-Vázquez E, Martínez-Cagigal V, Marcos-Martínez D, Rodríguez-González V, Pérez-Velasco S, Moreno-Calderón S, Hornero R. MEDUSA©: A novel Python-based software ecosystem to accelerate brain-computer interface and cognitive neuroscience research. Comput Methods Programs Biomed 2023; 230:107357. PMID: 36693292; DOI: 10.1016/j.cmpb.2023.107357.
Abstract
BACKGROUND AND OBJECTIVE Neurotechnologies have great potential to transform our society in ways that are yet to be uncovered. The rate of development in this field has increased significantly in recent years, but there are still barriers that need to be overcome before bringing neurotechnologies to the general public. One of these barriers is the difficulty of performing experiments that require complex software, such as brain-computer interface (BCI) or cognitive neuroscience experiments. Current platforms have limitations in terms of functionality and flexibility to meet the needs of researchers, who often need to implement new experimentation settings. This work aimed to propose a novel software ecosystem, called MEDUSA©, to overcome these limitations. METHODS We followed strict development practices to optimize MEDUSA© for research in BCI and cognitive neuroscience, placing special emphasis on the modularity, flexibility and scalability of our solution. Moreover, it was implemented in Python, an open-source programming language that reduces development cost by taking advantage of its high-level syntax and large number of community packages. RESULTS MEDUSA© provides a complete suite of signal processing functions, including several deep learning architectures and connectivity analyses, as well as ready-to-use BCI and neuroscience experiments, making it one of the most complete solutions available today. We also put special effort into providing tools that facilitate the development of custom experiments, which can be easily shared with the community through an app market available on our website to promote reproducibility. CONCLUSIONS MEDUSA© is a novel software ecosystem for modern BCI and neurotechnology experimentation that provides state-of-the-art tools and encourages the participation of the community to advance these fields. Visit the official website at https://www.medusabci.com/ to learn more about this project.
Affiliation(s)
- Eduardo Santamaría-Vázquez
- Biomedical Engineering Group (GIB), E.T.S Ingenieros de Telecomunicación, University of Valladolid, Paseo de Belén 15, Valladolid, 47011, Spain; Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain
- Víctor Martínez-Cagigal
- Biomedical Engineering Group (GIB), E.T.S Ingenieros de Telecomunicación, University of Valladolid, Paseo de Belén 15, Valladolid, 47011, Spain; Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain
- Diego Marcos-Martínez
- Biomedical Engineering Group (GIB), E.T.S Ingenieros de Telecomunicación, University of Valladolid, Paseo de Belén 15, Valladolid, 47011, Spain
- Víctor Rodríguez-González
- Biomedical Engineering Group (GIB), E.T.S Ingenieros de Telecomunicación, University of Valladolid, Paseo de Belén 15, Valladolid, 47011, Spain; Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain
- Sergio Pérez-Velasco
- Biomedical Engineering Group (GIB), E.T.S Ingenieros de Telecomunicación, University of Valladolid, Paseo de Belén 15, Valladolid, 47011, Spain
- Selene Moreno-Calderón
- Biomedical Engineering Group (GIB), E.T.S Ingenieros de Telecomunicación, University of Valladolid, Paseo de Belén 15, Valladolid, 47011, Spain
- Roberto Hornero
- Biomedical Engineering Group (GIB), E.T.S Ingenieros de Telecomunicación, University of Valladolid, Paseo de Belén 15, Valladolid, 47011, Spain; Centro de Investigación Biomédica en Red en Bioingeniería, Biomateriales y Nanomedicina (CIBER-BBN), Spain
14
Mussi MG, Adams KD. EEG hybrid brain-computer interfaces: A scoping review applying an existing hybrid-BCI taxonomy and considerations for pediatric applications. Front Hum Neurosci 2022; 16:1007136. PMID: 36466619; PMCID: PMC9715435; DOI: 10.3389/fnhum.2022.1007136.
Abstract
Most hybrid brain-computer interfaces (hBCI) aim at improving the performance of single-input BCIs. Many combinations are possible when configuring an hBCI, such as using multiple brain input signals, different stimuli, or more than one input system. Multiple studies have been conducted since 2010 in which such interfaces have been tested and analyzed. Results and conclusions are promising, but little has been discussed about the best approach for the pediatric population, should they use an hBCI as an assistive technology. Children may face greater challenges when using BCIs and may benefit from less complex interfaces. Hence, in this scoping review we included 42 papers that developed hBCI systems for controlling assistive devices or communication software, and we analyzed them through the lens of potential use in clinical settings and with children. We extracted taxonomic categories proposed in previous studies to describe the types of interfaces that have been developed. We also proposed interface characteristics that can be observed in different hBCIs, such as the type of target, the number of targets, and the number of steps before selection. We then discussed how each of the extracted characteristics could influence the overall complexity of the system and what the best options for applications with children might be. Effectiveness and efficiency were also collected and included in the analysis. We concluded that the least complex hBCI interfaces might involve a brain input and an external input, a sequential mode of operation, and visual stimuli. Such interfaces might also use a minimal number of targets of the strobic type, with one or two steps before the final selection. We hope this review can be used as a guideline for future hBCI developments and as an incentive to design interfaces that can also serve children who have motor impairments.
Affiliation(s)
- Matheus G. Mussi
- Assistive Technology Laboratory, Faculty of Rehabilitation Medicine, University of Alberta, Edmonton, AB, Canada
- Kim D. Adams
- Assistive Technology Laboratory, Faculty of Rehabilitation Medicine, University of Alberta, Edmonton, AB, Canada
15
Hu H, Pu Z, Li H, Liu Z, Wang P. Learning Optimal Time-Frequency-Spatial Features by the CiSSA-CSP Method for Motor Imagery EEG Classification. Sensors (Basel) 2022; 22:8526. PMID: 36366225; PMCID: PMC9658317; DOI: 10.3390/s22218526.
Abstract
The common spatial pattern (CSP) is a popular method for feature extraction in motor imagery (MI) electroencephalogram (EEG) classification in brain-computer interface (BCI) systems. However, combining temporal and spectral information with CSP-based spatial features is still a challenging issue, which greatly affects the performance of MI-based BCI systems. Here, we propose a novel circulant singular spectrum analysis embedded CSP (CiSSA-CSP) method for learning optimal time-frequency-spatial features to improve MI classification accuracy. Specifically, raw EEG data are first segmented into multiple time segments, and spectrum-specific sub-bands are further derived by CiSSA from each time segment in a set of non-overlapping filter bands. CSP features extracted from all time-frequency segments contain richer time-frequency-spatial information. An experimental study was implemented on a publicly available EEG dataset (BCI Competition III dataset IVa) and a self-collected experimental EEG dataset to validate the effectiveness of the CiSSA-CSP method. Experimental results demonstrate that discriminative and robust features are extracted effectively. Compared with several state-of-the-art methods, the proposed method exhibited optimal accuracies of 96.6% and 95.2% on the public and experimental datasets, respectively, which confirms that it is a promising method for improving the performance of MI-based BCIs.
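As background, the CSP step that CiSSA-CSP builds on can be sketched in a few lines of NumPy/SciPy; this is the textbook two-class CSP on toy data, not the authors' full time-frequency-segmented pipeline.

```python
# Textbook two-class CSP: spatial filters from a generalized eigenvalue problem.
import numpy as np
from scipy.linalg import eigh

def class_covariance(trials):
    # trials: (n_trials, n_channels, n_samples); average normalized covariance.
    covs = []
    for x in trials:
        x = x - x.mean(axis=1, keepdims=True)
        c = x @ x.T
        covs.append(c / np.trace(c))
    return np.mean(covs, axis=0)

def csp_filters(trials_a, trials_b, n_pairs=2):
    Ca, Cb = class_covariance(trials_a), class_covariance(trials_b)
    # Solve Ca w = lambda (Ca + Cb) w; extreme eigenvalues give discriminative filters.
    eigvals, eigvecs = eigh(Ca, Ca + Cb)
    order = np.argsort(eigvals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, picks].T                       # (2 * n_pairs, n_channels)

def log_variance_features(trials, W):
    z = np.einsum("fc,tcs->tfs", W, trials)          # spatially filtered trials
    var = z.var(axis=2)
    return np.log(var / var.sum(axis=1, keepdims=True))

# Synthetic demo: two classes of 16-channel trials with different channel power.
rng = np.random.default_rng(7)
A = rng.normal(size=(40, 16, 250)); A[:, 3] *= 2.0   # class A: extra power on channel 3
B = rng.normal(size=(40, 16, 250)); B[:, 9] *= 2.0   # class B: extra power on channel 9
W = csp_filters(A, B)
print(log_variance_features(A, W).shape)             # (40, 4)
```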
Affiliation(s)
- Peng Wang
- Correspondence: ; Tel.: +86-10-6277-2007
16
Peng R, Zhao C, Jiang J, Kuang G, Cui Y, Xu Y, Du H, Shao J, Wu D. TIE-EEGNet: Temporal Information Enhanced EEGNet for Seizure Subtype Classification. IEEE Trans Neural Syst Rehabil Eng 2022; 30:2567-2576. PMID: 36063519; DOI: 10.1109/tnsre.2022.3204540.
Abstract
Electroencephalogram (EEG) based seizure subtype classification is very important in clinical diagnostics. However, manual seizure subtype classification is expensive and time-consuming, whereas automatic classification usually needs a large number of labeled samples for model training. This paper proposes an EEGNet-based slim deep neural network, which relieves the labeled data requirement in EEG-based seizure subtype classification. A temporal information enhancement module with sinusoidal encoding is used to augment the first convolution layer of EEGNet. A training strategy for automatic hyper-parameter selection is also proposed. Experiments on the public TUSZ dataset and our own CHSZ dataset with infants and children demonstrated that our proposed TIE-EEGNet outperformed several traditional and deep learning models in cross-subject seizure subtype classification. Additionally, it also achieved the best performance in a challenging transfer learning scenario. Both our code and the CHSZ dataset are publicized.
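The temporal-information-enhancement idea (a sinusoidal encoding injected before the first convolution) can be illustrated generically; the encoding below is a standard transformer-style positional encoding and may differ from the exact TIE module, and the scale factor is an assumption.

```python
# Generic sinusoidal temporal encoding added to an EEG tensor (channels x samples).
import numpy as np

def sinusoidal_encoding(n_samples, n_dims):
    pos = np.arange(n_samples)[:, None]                        # time index
    i = np.arange(n_dims)[None, :]
    angle = pos / np.power(10000.0, (2 * (i // 2)) / n_dims)
    enc = np.where(i % 2 == 0, np.sin(angle), np.cos(angle))   # (n_samples, n_dims)
    return enc.T                                               # (n_dims, n_samples)

eeg = np.random.randn(19, 1000)                       # 19 channels, 1000 samples (toy)
eeg_enc = eeg + 0.1 * sinusoidal_encoding(1000, 19)   # encoding scale is an assumption
print(eeg_enc.shape)
```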
17
Kwon H, Lee S. Friend-guard adversarial noise designed for electroencephalogram-based brain-computer interface spellers. Neurocomputing 2022. DOI: 10.1016/j.neucom.2022.06.089.
18
Wang Z, Jin J, Xu R, Liu C, Wang X, Cichocki A. Efficient Spatial Filters Enhance SSVEP Target Recognition Based on Task-Related Component Analysis. IEEE Trans Cogn Dev Syst 2022. DOI: 10.1109/tcds.2021.3096812.
Affiliation(s)
- Zhiqiang Wang
- Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai, China
- Jing Jin
- Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai, China
- Ren Xu
- Guger Technologies OG, Graz, Austria
- Chang Liu
- Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai, China
- Xingyu Wang
- Key Laboratory of Smart Manufacturing in Energy Chemical Process, Ministry of Education, East China University of Science and Technology, Shanghai, China
19
Han Z, Chang H, Zhou X, Wang J, Wang L, Shao Y. E2ENNet: An end-to-end neural network for emotional brain-computer interface. Front Comput Neurosci 2022; 16:942979. PMID: 36034935; PMCID: PMC9413837; DOI: 10.3389/fncom.2022.942979.
Abstract
Objective Emotional brain-computer interfaces can recognize or regulate human emotions for workload detection and the auxiliary diagnosis of mental illness. However, existing EEG emotion recognition is carried out step by step, with separate feature engineering and classification stages, which results in high engineering complexity and limits practical application. We propose an end-to-end neural network, i.e., E2ENNet. Methods Baseline removal and sliding-window slicing were used for preprocessing of the raw EEG signal, convolution blocks extracted features, an LSTM network captured the correlations among features, and a softmax function classified the emotions. Results Extensive experiments under a subject-dependent experimental protocol were conducted to evaluate the performance of the proposed E2ENNet, which achieves state-of-the-art accuracy on three public datasets: 96.28% in the 2-category experiment on the DEAP dataset, 98.1% in the 2-category experiment on the DREAMER dataset, and 41.73% in the 7-category experiment on the MPED dataset. Conclusion Experimental results show that E2ENNet can directly extract discriminative features from raw EEG signals. Significance This study provides a methodology for implementing a plug-and-play emotional brain-computer interface system.
Affiliation(s)
- Zhichao Han
- School of Electronic and Information Engineering, Nanjing University of Information Science and Technology, Nanjing, China
- Hongli Chang
- The Key Laboratory of Child Development and Learning Science of Ministry of Education, Southeast University, Nanjing, China
- Xiaoyan Zhou
- School of Electronic and Information Engineering, Nanjing University of Information Science and Technology, Nanjing, China
- Correspondence: Xiaoyan Zhou
- Jihao Wang
- School of Electronic and Information Engineering, Nanjing University of Information Science and Technology, Nanjing, China
- Lili Wang
- School of Electronic and Information Engineering, Nanjing University of Information Science and Technology, Nanjing, China
- Yongbin Shao
- School of Electronic and Information Engineering, Nanjing University of Information Science and Technology, Nanjing, China
20
Perez-Velasco S, Santamaria-Vazquez E, Martinez-Cagigal V, Marcos-Martinez D, Hornero R. EEGSym: Overcoming Inter-Subject Variability in Motor Imagery Based BCIs With Deep Learning. IEEE Trans Neural Syst Rehabil Eng 2022; 30:1766-1775. PMID: 35759578; DOI: 10.1109/tnsre.2022.3186442.
Abstract
In this study, we present a new Deep Learning (DL) architecture for Motor Imagery (MI) based Brain Computer Interfaces (BCIs) called EEGSym. Our implementation aims to improve previous state-of-the-art performances on MI classification by overcoming inter-subject variability and reducing BCI inefficiency, which has been estimated to affect 10-50% of the population. This convolutional neural network includes the use of inception modules, residual connections and a design that introduces the symmetry of the brain through the mid-sagittal plane into the network architecture. It is complemented with a data augmentation technique that improves the generalization of the model and with the use of transfer learning across different datasets. We compare EEGSym's performance on inter-subject MI classification with ShallowConvNet, DeepConvNet, EEGNet and EEG-Inception. This comparison is performed on 5 publicly available datasets that include left or right hand motor imagery of 280 subjects. This population is the largest that has been evaluated in similar studies to date. EEGSym significantly outperforms the baseline models reaching accuracies of 88.6±9.0 on Physionet, 83.3±9.3 on OpenBMI, 85.1±9.5 on Kaya2018, 87.4±8.0 on Meng2019 and 90.2±6.5 on Stieger2021. At the same time, it allows 95.7% of the tested population (268 out of 280 users) to reach BCI control (≥70% accuracy). Furthermore, these results are achieved using only 16 electrodes of the more than 60 available on some datasets. Our implementation of EEGSym, which includes new advances for EEG processing with DL, outperforms previous state-of-the-art approaches on inter-subject MI classification.
21
Motor Imagery Classification via Kernel-Based Domain Adaptation on an SPD Manifold. Brain Sci 2022; 12:659. PMID: 35625045; PMCID: PMC9139384; DOI: 10.3390/brainsci12050659.
Abstract
Background: Recording the calibration data of a brain–computer interface is a laborious process and is an unpleasant experience for the subjects. Domain adaptation is an effective technology to remedy the shortage of target data by leveraging rich labeled data from the sources. However, most prior methods have needed to extract the features of the EEG signal first, which triggers another challenge in BCI classification, due to small sample sets or a lack of labels for the target. Methods: In this paper, we propose a novel domain adaptation framework, referred to as kernel-based Riemannian manifold domain adaptation (KMDA). KMDA circumvents the tedious feature extraction process by analyzing the covariance matrices of electroencephalogram (EEG) signals. Covariance matrices define a symmetric positive definite space (SPD) that can be described by Riemannian metrics. In KMDA, the covariance matrices are aligned in the Riemannian manifold, and then are mapped to a high dimensional space by a log-Euclidean metric Gaussian kernel, where subspace learning is performed by minimizing the conditional distribution distance between the sources and the target while preserving the target discriminative information. We also present an approach to convert the EEG trials into 2D frames (E-frames) to further lower the dimension of covariance descriptors. Results: Experiments on three EEG datasets demonstrated that KMDA outperforms several state-of-the-art domain adaptation methods in classification accuracy, with an average Kappa of 0.56 for BCI competition IV dataset IIa, 0.75 for BCI competition IV dataset IIIa, and an average accuracy of 81.56% for BCI competition III dataset IVa. Additionally, the overall accuracy was further improved by 5.28% with the E-frames. KMDA showed potential in addressing subject dependence and shortening the calibration time of motor imagery-based brain–computer interfaces.
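To unpack the covariance-based ingredients mentioned here, the following sketch aligns trial covariances with a reference matrix and builds a log-Euclidean Gaussian kernel for a kernel classifier; it uses the Euclidean mean as the reference (a simplification) and is not the full KMDA algorithm.

```python
# Covariance alignment + log-Euclidean Gaussian kernel for EEG trials (illustrative).
import numpy as np
from scipy.linalg import fractional_matrix_power
from sklearn.svm import SVC

def trial_covariances(trials):
    # trials: (n_trials, n_channels, n_samples)
    X = trials - trials.mean(axis=2, keepdims=True)
    return np.einsum("tcs,tds->tcd", X, X) / (trials.shape[2] - 1)

def align(covs):
    R = covs.mean(axis=0)                                   # Euclidean mean as reference
    R_isqrt = fractional_matrix_power(R, -0.5)
    return np.array([R_isqrt @ C @ R_isqrt for C in covs])

def spd_log(C):
    w, V = np.linalg.eigh(C)                                # SPD-safe matrix logarithm
    return V @ np.diag(np.log(np.maximum(w, 1e-12))) @ V.T

def log_euclidean_kernel(covs_a, covs_b, sigma=1.0):
    La = np.array([spd_log(C) for C in covs_a])
    Lb = np.array([spd_log(C) for C in covs_b])
    d2 = ((La[:, None] - Lb[None]) ** 2).sum(axis=(2, 3))   # squared Frobenius distances
    return np.exp(-d2 / (2 * sigma ** 2))

# Toy two-class data with class-dependent variance on one channel.
rng = np.random.default_rng(0)
trials = rng.normal(size=(80, 8, 200))
y = rng.integers(0, 2, 80)
trials[y == 1, 0] *= 1.5

covs = align(trial_covariances(trials))
K = log_euclidean_kernel(covs, covs)
clf = SVC(kernel="precomputed").fit(K, y)
print("train accuracy:", clf.score(K, y))
```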
22
Tang Z, Zhang L, Chen X, Ying J, Wang X, Wang H. Wearable Supernumerary Robotic Limb System Using a Hybrid Control Approach Based on Motor Imagery and Object Detection. IEEE Trans Neural Syst Rehabil Eng 2022; 30:1298-1309. PMID: 35511846; DOI: 10.1109/tnsre.2022.3172974.
Abstract
Motor disorders of the upper limbs seriously affect the daily life of patients with hemiplegia after stroke. We developed a wearable supernumerary robotic limb (SRL) system that uses a hybrid control approach based on motor imagery (MI) and object detection for upper-limb motion assistance. The SRL system comprises an SRL hardware subsystem and a hybrid-control software subsystem. The system obtains the patient's motion intention through an MI electroencephalogram (EEG) recognition method based on a graph convolutional network (GCN) and a gated recurrent unit (GRU) network to control the left and right movements of the SRL, while object detection is used for quick grasping of target objects, compensating for the drawbacks of using MI EEG alone, such as fewer control instructions and lower control efficiency. An offline training experiment was designed to obtain the subjects' MI recognition models and evaluate the feasibility of the MI EEG recognition method; an online control experiment was designed to verify the effectiveness of our wearable SRL system. The results showed that the proposed MI EEG recognition method (GCN+GRU) can effectively improve MI classification accuracy (90.04% ± 2.36%) compared with traditional methods; all subjects were able to complete the target-object grasping tasks within 23 seconds by controlling the SRL, and the highest average grasping success rate reached 90.67% in the bag-grasping task. The SRL system can effectively assist people with upper-limb motor disorders to perform upper-limb tasks in daily life through natural human-robot interaction, improving their independence and confidence.
23
Värbu K, Muhammad N, Muhammad Y. Past, Present, and Future of EEG-Based BCI Applications. Sensors (Basel) 2022; 22:3331. PMID: 35591021; PMCID: PMC9101004; DOI: 10.3390/s22093331.
Abstract
An electroencephalography (EEG)-based brain-computer interface (BCI) is a system that provides a pathway between the brain and external devices by interpreting EEG. EEG-based BCI applications were initially developed for medical purposes, with the aim of facilitating the return of patients to normal life. In addition to this initial aim, EEG-based BCI applications have also gained increasing significance in the non-medical domain, improving the lives of healthy people, for instance by making life more efficient and collaborative and by helping people develop themselves. The objective of this review is to give a systematic overview of the literature on EEG-based BCI applications from 2009 to 2019. The systematic literature review was prepared based on three databases: PubMed, Web of Science and Scopus. The review was conducted following the PRISMA model, and 202 publications were selected based on specific eligibility criteria. The distribution of the research between the medical and non-medical domains has been analyzed and further categorized into fields of research within the reviewed domains. The equipment used for gathering EEG data and the signal processing methods have also been reviewed. Additionally, current challenges in the field and possibilities for the future have been analyzed.
Collapse
Affiliation(s)
- Kaido Värbu
- Institute of Computer Science, University of Tartu, 51009 Tartu, Estonia;
| | - Naveed Muhammad
- Institute of Computer Science, University of Tartu, 51009 Tartu, Estonia;
| | - Yar Muhammad
- Department of Computing & Games, School of Computing, Engineering & Digital Technologies, Teesside University, Middlesbrough TS1 3BX, UK;
| |
Collapse
|
24
|
Huang J, Yang P, Xiong B, Wan B, Su K, Zhang ZQ. Latency Aligning Task-related Component Analysis Using Wave Propagation for Enhancing SSVEP-based BCIs. IEEE Trans Neural Syst Rehabil Eng 2022; 30:851-859. [PMID: 35324445 DOI: 10.1109/tnsre.2022.3162029] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Due to its high robustness to artifacts, the steady-state visual evoked potential (SSVEP) has been widely applied to construct high-speed brain-computer interfaces (BCIs). Thus far, many spatial filtering methods have been proposed to enhance target identification performance for SSVEP-based BCIs, and task-related component analysis (TRCA) is among the most effective. In this paper, we further extend TRCA and propose a new method called Latency Aligning TRCA (LA-TRCA), which aligns visual latencies across channels to obtain accurate phase information from task-related signals. According to SSVEP wave propagation theory, the SSVEP spreads from posterior occipital areas over the cortex with a fixed phase velocity. By estimating the phase velocity from the phase shifts between channels, the visual latencies on different channels can be determined for inter-channel alignment. TRCA is then applied to the aligned data epochs for target recognition. For validation, the classification performance of the proposed LA-TRCA was compared with that of TRCA-based extensions on two different SSVEP datasets. The experimental results showed that LA-TRCA outperformed the other TRCA-based extensions, demonstrating the effectiveness of the proposed approach for enhancing SSVEP detection performance.
Collapse
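As a rough illustration of the latency-alignment idea, the numpy sketch below circularly shifts each channel by an assumed per-channel latency (in samples) before computing a standard TRCA spatial filter for one stimulus class. The latency values, the random data, and the function names are placeholders; the paper's phase-velocity-based latency estimation is not reproduced here.

import numpy as np

def align_latencies(trials, latencies):
    # trials: (n_trials, n_channels, n_samples); latencies: one lag (samples) per channel.
    aligned = np.empty_like(trials)
    for ch, lag in enumerate(latencies):
        aligned[:, ch, :] = np.roll(trials[:, ch, :], -int(lag), axis=-1)
    return aligned

def trca_filter(trials):
    # Leading TRCA spatial filter: maximises inter-trial covariance S against total covariance Q.
    n_trials, n_ch, _ = trials.shape
    centred = trials - trials.mean(axis=-1, keepdims=True)
    S = np.zeros((n_ch, n_ch))
    for i in range(n_trials):
        for j in range(n_trials):
            if i != j:
                S += centred[i] @ centred[j].T
    concat = np.hstack(centred)                      # (n_ch, n_trials * n_samples)
    Q = concat @ concat.T
    eigvals, eigvecs = np.linalg.eig(np.linalg.pinv(Q) @ S)
    return np.real(eigvecs[:, np.argmax(np.real(eigvals))])

rng = np.random.default_rng(0)
trials = rng.standard_normal((6, 9, 250))            # 6 trials, 9 channels, 1 s at 250 Hz
w = trca_filter(align_latencies(trials, latencies=[0, 1, 1, 2, 2, 3, 3, 4, 4]))
print(w.shape)                                       # (9,)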
|
25
|
Ni Z, Xu J, Wu Y, Li M, Xu G, Xu B. Improving Cross-State and Cross-Subject Visual ERP-based BCI with Temporal Modeling and Adversarial Training. IEEE Trans Neural Syst Rehabil Eng 2022; 30:369-379. [PMID: 35133966 DOI: 10.1109/tnsre.2022.3150007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
A brain-computer interface (BCI) is a useful device that enables people to interact with the external world without relying on peripheral nerves and muscles. However, the performance of event-related potential (ERP)-based BCIs declines when they are applied in real environments, especially under cross-state and cross-subject conditions. Here we employ temporal modeling and adversarial training to improve the visual ERP-based BCI under different mental workload states and to alleviate the problems above. The rationale of our method is that the ERP-based BCI relies on electroencephalography (EEG) signals recorded from the scalp's surface, which change continuously with time and are somewhat stochastic. In this paper, we propose a hierarchical recurrent network to encode all ERP signals in each repetition at the same time and model them in a temporal manner to predict which visual event elicited an ERP. The hierarchical architecture is a simple yet effective way of organizing recurrent layers in a deep structure to model long sequential signals. Taking a cue from recent advances in adversarial training, we further apply dynamic adversarial perturbations to create adversarial examples that enhance model performance. We conducted our experiments on one published visual ERP-based BCI task with 15 subjects and 3 different auditory workload states. The results indicate that our hierarchical method can effectively model long raw EEG sequences and outperforms the baselines under most conditions, including cross-state and cross-subject conditions. Finally, we show how deep learning-based methods with limited EEG data can improve ERP-based BCIs through adversarial training. Our code will be released at https://github.com/aispeech-lab/VisBCI.
Collapse
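The sketch below shows one common way to realise adversarial-example training on EEG epochs: a gradient-sign (FGSM-style) perturbation is added to the input, and the model is trained on clean and perturbed batches together. The toy model, epsilon, and random data are assumptions for illustration; the paper's dynamic perturbation scheme and hierarchical recurrent network are not reproduced.

import torch
import torch.nn as nn

def fgsm_perturb(model, x, y, criterion, epsilon=0.01):
    # Return x plus a small gradient-sign perturbation that increases the loss.
    x_adv = x.clone().detach().requires_grad_(True)
    criterion(model(x_adv), y).backward()
    return (x_adv + epsilon * x_adv.grad.sign()).detach()

model = nn.Sequential(nn.Flatten(), nn.Linear(8 * 200, 64), nn.ReLU(), nn.Linear(64, 2))
criterion = nn.CrossEntropyLoss()
optimizer = torch.optim.Adam(model.parameters(), lr=1e-3)

x = torch.randn(16, 8, 200)                   # 16 epochs, 8 channels, 200 samples
y = torch.randint(0, 2, (16,))                # target vs. non-target labels
x_adv = fgsm_perturb(model, x, y, criterion)

optimizer.zero_grad()                         # discard gradients from the perturbation pass
loss = criterion(model(x), y) + criterion(model(x_adv), y)
loss.backward()
optimizer.step()
print(float(loss))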
|
26
|
Fu R, Li Z, Wang J. An optimized GMM algorithm and its application in single-trial motor imagination recognition. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103327] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/02/2022]
|
27
|
Chen L, Chen P, Zhao S, Luo Z, Chen W, Pei Y, Zhao H, Jiang J, Xu M, Yan Y, Yin E. Adaptive asynchronous control system of robotic arm based on augmented reality-assisted brain-computer interface. J Neural Eng 2021; 18. [PMID: 34654000 DOI: 10.1088/1741-2552/ac3044] [Citation(s) in RCA: 24] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2021] [Accepted: 10/15/2021] [Indexed: 11/12/2022]
Abstract
Objective. Brain-controlled robotic arms have shown broad application prospects with the development of robotics and information decoding. However, disadvantages such as poor flexibility restrict their wide application. Approach. To alleviate these drawbacks, this study proposed an asynchronous robotic arm control system based on steady-state visual evoked potentials (SSVEP) in an augmented reality (AR) environment. In the AR environment, participants could see the robotic arm and the visual stimulation interface concurrently through the AR device, so there was no need to switch attention frequently between the visual stimulation interface and the robotic arm. This study proposed a multi-template algorithm based on canonical correlation analysis and task-related component analysis to identify 12 targets. An optimization strategy based on a dynamic window was adopted to adaptively adjust the duration of visual stimulation. Main results. The experimental results showed that the high-frequency SSVEP-based brain-computer interface (BCI) realized the switching of system states, allowing asynchronous control of the robotic arm. The average accuracy of the offline experiment was 94.97%, and the average information transfer rate was 67.37 ± 14.27 bits·min-1. The online results from ten healthy subjects showed that the average selection time for a single online command was 2.04 s, which effectively reduced the subjects' visual fatigue. Each subject could quickly complete the puzzle task. Significance. The experimental results demonstrate the feasibility and potential of this human-computer interaction strategy and provide new ideas for BCI-controlled robots.
Collapse
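For readers unfamiliar with the CCA component of the multi-template algorithm mentioned above, the sketch below shows plain canonical-correlation-based SSVEP target identification against sine/cosine reference templates. The frequencies, harmonic count, and random test window are illustrative; the paper's TRCA templates and dynamic-window strategy are not reproduced here.

import numpy as np
from sklearn.cross_decomposition import CCA

def reference(freq, fs, n_samples, n_harmonics=3):
    # Sine/cosine templates at the stimulation frequency and its harmonics.
    t = np.arange(n_samples) / fs
    return np.column_stack([f(2 * np.pi * (h + 1) * freq * t)
                            for h in range(n_harmonics) for f in (np.sin, np.cos)])

def classify_ssvep(eeg, freqs, fs):
    # eeg: (n_samples, n_channels); returns the index of the most correlated frequency.
    scores = []
    for f in freqs:
        u, v = CCA(n_components=1).fit_transform(eeg, reference(f, fs, eeg.shape[0]))
        scores.append(np.corrcoef(u[:, 0], v[:, 0])[0, 1])
    return int(np.argmax(scores))

rng = np.random.default_rng(1)
window = rng.standard_normal((500, 9))        # 2 s at 250 Hz, 9 occipital channels
print(classify_ssvep(window, freqs=[8.0, 9.0, 10.0, 11.0], fs=250))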
Affiliation(s)
- Lingling Chen
- School of Artificial Intelligence and Data Science, Hebei University of Technology, Tianjin 300130, People's Republic of China; Engineering Research Center of Intelligent Rehabilitation Device and Detection Technology, Ministry of Education, Tianjin 300130, People's Republic of China
| | - Pengfei Chen
- School of Artificial Intelligence and Data Science, Hebei University of Technology, Tianjin 300130, People's Republic of China; Engineering Research Center of Intelligent Rehabilitation Device and Detection Technology, Ministry of Education, Tianjin 300130, People's Republic of China; Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
| | - Shaokai Zhao
- Defense Innovation Institute, Academy of Military Sciences (AMS), Beijing 100071, People's Republic of China; Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
| | - Zhiguo Luo
- Defense Innovation Institute, Academy of Military Sciences (AMS), Beijing 100071, People's Republic of China; Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
| | - Wei Chen
- National Research Center for Rehabilitation Technical Aids, Beijing 100176, People's Republic of China
| | - Yu Pei
- Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
| | - Hongyu Zhao
- Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China; East China University of Science and Technology, Shanghai 200237, People's Republic of China
| | - Jing Jiang
- National Key Laboratory of Human Factors Engineering, China Astronaut Research and Training Center, Beijing 100094, People's Republic of China
| | - Minpeng Xu
- Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China; Tianjin University, Tianjin 300072, People's Republic of China
| | - Ye Yan
- Defense Innovation Institute, Academy of Military Sciences (AMS), Beijing 100071, People's Republic of China; Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
| | - Erwei Yin
- Defense Innovation Institute, Academy of Military Sciences (AMS), Beijing 100071, People's Republic of China; Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
| |
Collapse
|
28
|
Liang Y, Liu B, Zhang H. A Convolutional Neural Network Combined With Prototype Learning Framework for Brain Functional Network Classification of Autism Spectrum Disorder. IEEE Trans Neural Syst Rehabil Eng 2021; 29:2193-2202. [PMID: 34648452 DOI: 10.1109/tnsre.2021.3120024] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
The application of deep learning methods to brain disease diagnosis is becoming a new research hotspot. This study constructed brain functional networks from functional magnetic resonance imaging (fMRI) data and proposed a novel convolutional neural network combined with prototype learning (CNNPL) framework to classify brain functional networks for the diagnosis of autism spectrum disorder (ASD). At the bottom of CNNPL, a traditional CNN was employed as the basic feature extractor, while at the top of CNNPL multiple prototypes were automatically learnt on the features to represent different categories. A generalized prototype loss based on distance cross-entropy was proposed to jointly learn the parameters of the CNN feature extractor and the prototypes, and classification was implemented through prototype matching. A transfer learning strategy was introduced into CNNPL for weight initialization in the subsequent fine-tuning phase to promote model training. We conducted systematic experiments on an aggregated multi-site ASD dataset. The experimental results revealed that our model outperforms current state-of-the-art methods in ASD classification and can reliably learn inter-site biomarkers, indicating its robustness on a large-scale dataset with inter-site variability. Furthermore, our model demonstrated robust learning capability for the high-level organization of brain functionality. Our study also identified important brain regions as biomarkers associated with ASD classification. Together, the proposed model provides a promising solution for learning and classifying brain functional networks, and thus contributes to biomarker extraction and imaging diagnosis of ASD.
Collapse
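A minimal sketch of the prototype-matching idea follows: class logits are taken as negative squared distances between a feature vector and learnable prototypes, so a cross-entropy loss over these distance-based logits jointly trains the feature extractor and the prototypes. The toy extractor, a single prototype per class, and all sizes are assumptions, not the paper's CNNPL architecture.

import torch
import torch.nn as nn
import torch.nn.functional as F

class PrototypeHead(nn.Module):
    def __init__(self, feat_dim=64, n_classes=2):
        super().__init__()
        self.prototypes = nn.Parameter(torch.randn(n_classes, feat_dim))

    def forward(self, features):                  # features: (batch, feat_dim)
        d2 = torch.cdist(features, self.prototypes) ** 2
        return -d2                                # closer prototype => larger logit

extractor = nn.Sequential(nn.Flatten(), nn.Linear(90 * 90, 64), nn.ReLU())
head = PrototypeHead()
optimizer = torch.optim.Adam(list(extractor.parameters()) + list(head.parameters()), lr=1e-3)

x = torch.randn(8, 90, 90)                        # toy functional-connectivity matrices
y = torch.randint(0, 2, (8,))
logits = head(extractor(x))
loss = F.cross_entropy(logits, y)                 # distance-based cross-entropy
loss.backward()
optimizer.step()
print(logits.argmax(dim=1))                       # prototype-matching prediction

Keeping the decision rule as nearest-prototype matching is part of what makes this kind of head attractive for biomarker work: each prototype can be inspected as a representative pattern for its class.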
|
29
|
Li H, Li N, Xing Y, Zhang S, Liu C, Cai W, Hong W, Zhang Q. P300 as a Potential Indicator in the Evaluation of Neurocognitive Disorders After Traumatic Brain Injury. Front Neurol 2021; 12:690792. [PMID: 34566838 PMCID: PMC8458648 DOI: 10.3389/fneur.2021.690792] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/06/2021] [Accepted: 08/12/2021] [Indexed: 11/29/2022] Open
Abstract
Few objective indices are available for evaluating neurocognitive disorders after a traumatic brain injury (TBI). P300 has been widely studied in mental disorders, cognitive dysfunction, and brain injury. Daily life ability and social function are key indices in the assessment of neurocognitive disorders after a TBI. The present study focused on the correlation between P300 and impairment of daily living activity and social function. We enrolled 234 patients with neurocognitive disorders after a TBI according to ICD-10 and 277 age- and gender-matched healthy volunteers. Daily living activity and social function were assessed with the social disability screening schedule (SDSS) scale, the activity of daily living (ADL) scale, and the scale of personality change following a TBI. P300 was evoked by a visual oddball paradigm. The results showed that the scores of the ADL scale, the SDSS scale, and the scale of personality change in the patient group were significantly higher than those in the control group. The amplitudes at Fz, Cz, and Pz in the patient group were significantly lower than those in the control group and were negatively correlated with the scores of the ADL and SDSS scales. In conclusion, a lower P300 amplitude indicates greater impairment of daily life ability and social function, suggesting more severe neurocognitive disorders after a TBI. P300 could therefore be a potential indicator for evaluating the severity of neurocognitive disorders after a TBI.
Collapse
Affiliation(s)
- Haozhe Li
- Shanghai Key Laboratory of Forensic Medicine, Key Lab of Forensic Science, Ministry of Justice, Shanghai Forensic Service Platform, Academy of Forensic Science, Shanghai, China
| | - Ningning Li
- Hongkou District Mental Health Center, Shanghai, China
| | - Yan Xing
- Shanghai Key Laboratory of Forensic Medicine, Key Lab of Forensic Science, Ministry of Justice, Shanghai Forensic Service Platform, Academy of Forensic Science, Shanghai, China
| | - Shengyu Zhang
- Shanghai Key Laboratory of Forensic Medicine, Key Lab of Forensic Science, Ministry of Justice, Shanghai Forensic Service Platform, Academy of Forensic Science, Shanghai, China
| | - Chao Liu
- Shanghai Key Laboratory of Forensic Medicine, Key Lab of Forensic Science, Ministry of Justice, Shanghai Forensic Service Platform, Academy of Forensic Science, Shanghai, China
| | - Weixiong Cai
- Shanghai Key Laboratory of Forensic Medicine, Key Lab of Forensic Science, Ministry of Justice, Shanghai Forensic Service Platform, Academy of Forensic Science, Shanghai, China
| | - Wu Hong
- Shanghai Mental Health Center, Shanghai Jiao Tong University School of Medicine, Shanghai, China
| | - Qinting Zhang
- Shanghai Key Laboratory of Forensic Medicine, Key Lab of Forensic Science, Ministry of Justice, Shanghai Forensic Service Platform, Academy of Forensic Science, Shanghai, China
| |
Collapse
|
30
|
Wang X, Lu H, Shen X, Ma L, Wang Y. Prosthetic control system based on motor imagery. Comput Methods Biomech Biomed Engin 2021; 25:764-771. [PMID: 34533381 DOI: 10.1080/10255842.2021.1977800] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
A brain-computer interface (BCI) can be used for function replacement by controlling devices, such as prostheses, based on the subject's intent identified from brain activity. We process electroencephalography (EEG) signals related to motor imagery to improve the accuracy of intent classification. The original signals are decomposed into three levels using the db4 wavelet basis, and a wavelet soft-threshold denoising method is used to improve the signal-to-noise ratio. The sample entropy algorithm is used to extract features from the denoised and reconstructed signal. In view of the event-related synchronisation/desynchronisation (ERS/ERD) phenomenon, the sample entropy within the motor imagery time periods at C3, C4, and Cz is selected as the feature value. The feature vectors are then used as the input of three classifiers. Of the evaluated classifiers, the backpropagation (BP) neural network provides the best EEG signal classification (93% accuracy) and is therefore selected as the final classifier and used to design a prosthetic control system based on motor imagery. The classification results are transmitted wirelessly to successfully control a prosthesis via commands for hand opening, fist clenching, and external wrist rotation. Such functionality may allow amputees to complete simple activities of daily living. Thus, this study is valuable for subsequent developments in rehabilitation.
Collapse
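The feature pipeline described above can be sketched with PyWavelets and numpy: a three-level db4 decomposition with soft thresholding, followed by sample entropy of the reconstructed signal. The universal-threshold rule and the sample-entropy parameters (m = 2, r = 0.2 times the signal's standard deviation) are common defaults assumed here, not necessarily the authors' settings.

import numpy as np
import pywt

def wavelet_denoise(signal, wavelet="db4", level=3):
    coeffs = pywt.wavedec(signal, wavelet, level=level)
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745            # noise estimate from finest detail
    thr = sigma * np.sqrt(2 * np.log(len(signal)))            # universal threshold (assumed)
    coeffs = [coeffs[0]] + [pywt.threshold(c, thr, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(coeffs, wavelet)[:len(signal)]

def sample_entropy(x, m=2, r=None):
    x = np.asarray(x, dtype=float)
    r = 0.2 * x.std() if r is None else r
    def matches(mm):
        templates = np.array([x[i:i + mm] for i in range(len(x) - mm)])
        d = np.max(np.abs(templates[:, None] - templates[None, :]), axis=-1)
        return np.sum(d <= r) - len(templates)                # exclude self-matches
    return -np.log(matches(m + 1) / matches(m))

rng = np.random.default_rng(2)
c3_segment = rng.standard_normal(500)                         # stand-in for a C3 motor-imagery segment
print(sample_entropy(wavelet_denoise(c3_segment)))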
Affiliation(s)
- Xuemei Wang
- School of Information Science and Technology, Nantong University, Nantong, China
| | - Huiqin Lu
- School of Information Science and Technology, Nantong University, Nantong, China
| | - Xiaoyan Shen
- School of Information Science and Technology, Nantong University, Nantong, China; Collaborative Innovation Center for Nerve Regeneration, Nantong University, Nantong, China
| | - Lei Ma
- School of Information Science and Technology, Nantong University, Nantong, China
| | - Yan Wang
- School of Information Science and Technology, Nantong University, Nantong, China
| |
Collapse
|
31
|
Controlling a Mouse Pointer with a Single-Channel EEG Sensor. SENSORS 2021; 21:s21165481. [PMID: 34450924 PMCID: PMC8400812 DOI: 10.3390/s21165481] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/04/2021] [Revised: 08/03/2021] [Accepted: 08/11/2021] [Indexed: 11/29/2022]
Abstract
(1) Goals: The purpose of this study was to analyze the feasibility of using the information obtained from a one-channel electro-encephalography (EEG) signal to control a mouse pointer. We used a low-cost headset, with one dry sensor placed at the FP1 position, to steer a mouse pointer and make selections through a combination of the user’s attention level with the detection of voluntary blinks. There are two types of cursor movements: spinning and linear displacement. A sequence of blinks allows for switching between these movement types, while the attention level modulates the cursor’s speed. The influence of the attention level on performance was studied. Additionally, Fitts’ model and the evolution of the emotional states of participants, among other trajectory indicators, were analyzed. (2) Methods: Twenty participants distributed into two groups (Attention and No-Attention) performed three runs, on different days, in which 40 targets had to be reached and selected. Target positions and distances from the cursor’s initial position were chosen, providing eight different indices of difficulty (IDs). A self-assessment manikin (SAM) test and a final survey provided information about the system’s usability and the emotions of participants during the experiment. (3) Results: The performance was similar to some brain–computer interface (BCI) solutions found in the literature, with an averaged information transfer rate (ITR) of 7 bits/min. Concerning the cursor navigation, some trajectory indicators showed our proposed approach to be as good as common pointing devices, such as joysticks, trackballs, and so on. Only one of the 20 participants reported difficulty in managing the cursor and, according to the tests, most of them assessed the experience positively. Movement times and hit rates were significantly better for participants belonging to the attention group. (4) Conclusions: The proposed approach is a feasible low-cost solution to manage a mouse pointer.
Collapse
|
32
|
Yang C, Yan X, Wang Y, Chen Y, Zhang H, Gao X. Spatio-temporal equalization multi-window algorithm for asynchronous SSVEP-based BCI. J Neural Eng 2021; 18. [PMID: 34237711 DOI: 10.1088/1741-2552/ac127f] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2021] [Accepted: 07/08/2021] [Indexed: 11/11/2022]
Abstract
Objective. Asynchronous brain-computer interfaces (BCIs) show significant advantages in many practical application scenarios. Compared with the rapid development of synchronous BCI technology, progress in asynchronous BCI research, in terms of handling multiple targets and training-free detection, is still relatively slow. To improve the practicability of the BCI, a spatio-temporal equalization multi-window algorithm (STE-MW) was proposed for asynchronous detection of steady-state visual evoked potentials (SSVEP) without the need for calibration data. Approach. The algorithm used an SIE strategy to intercept EEG signals of different lengths through multiple stacked time windows, making statistical decisions based on Bayesian risk decision-making. Unlike traditional asynchronous algorithms based on 'non-control state detection', this algorithm was based on a 'statistical inspection-rejection decision' mode and did not require a separate classification of non-control states, so it can be effectively applied to detection with large candidate sets. Main results. Online experimental results involving 14 healthy subjects showed that, in continuous-input experiments with 40 targets, the algorithm achieved an average recognition accuracy of 97.2 ± 2.6% and an average information transfer rate (ITR) of 106.3 ± 32.0 bits·min-1. At the same time, the average false alarm rate in the 240 s resting-state test was 0.607 ± 0.602 min-1. In free-spelling experiments involving patients with severe amyotrophic lateral sclerosis, the subjects achieved an accuracy of 92.7% and an average ITR of 43.65 bits·min-1 across two free-spelling experiments. Significance. The algorithm achieves high-performance, high-precision, asynchronous detection of SSVEP signals with low algorithmic complexity and a low false alarm rate under multi-target, training-free conditions, which is helpful for the development of asynchronous BCI systems.
Collapse
Affiliation(s)
- Chen Yang
- School of Electronic Engineering, Beijing University of Posts and Telecommunications; School of Medicine, Tsinghua University
| | - Xinyi Yan
- School of Medicine, Tsinghua University
| | - Yijun Wang
- State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Sciences
| | | | - Hongxin Zhang
- School of Electronic Engineering, Beijing University of Posts and Telecommunications
| | | |
Collapse
|
33
|
Katyal EA, Singla R. EEG-based hybrid QWERTY mental speller with high information transfer rate. Med Biol Eng Comput 2021; 59:633-661. [PMID: 33594631 DOI: 10.1007/s11517-020-02310-w] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/07/2020] [Accepted: 12/30/2020] [Indexed: 11/27/2022]
Abstract
BACKGROUND Brain-computer interface (BCI) spellers detect variations in brain waves to help subjects communicate with the world. This study introduces a P300-SSVEP hybrid BCI-based QWERTY speller. METHODS The proposed hybrid speller combines SSVEP and P300 features using a hybrid paradigm. P300 was used as a time-division multiplexing index, which reduces the number of frequencies required for SSVEP elicitation. Each flickering frequency was also assigned a unique colour to enhance system accuracy. RESULTS On the basis of 20 subjects, an average classification accuracy of 96.42% and a mean information transfer rate (ITR) of 131.0 bits per minute were achieved during the free spelling trial (trial-F). COMPARISON The t-test results revealed that the hybrid QWERTY speller performed significantly better (in terms of mean classification accuracy and ITR) than the traditional P300 speller and the QWERTY SSVEP speller. The time taken to spell a word was also significantly shorter with the hybrid QWERTY speller than with the traditional P300 speller, and almost the same as with the QWERTY SSVEP speller. CONCLUSION The hybrid QWERTY speller outperformed the stereotypical P300 speller as well as the QWERTY SSVEP speller.
Collapse
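The ITR figures reported in this and several other entries are typically computed with the standard Wolpaw formula; the short worked example below uses illustrative numbers (36 targets, 96% accuracy, 2 s per selection) that are not taken from the study.

import math

def itr_bits_per_min(n_targets, accuracy, selection_time_s):
    # Wolpaw ITR: bits per selection scaled to bits per minute.
    p, n = accuracy, n_targets
    bits = math.log2(n)
    if 0 < p < 1:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * 60.0 / selection_time_s

print(round(itr_bits_per_min(36, 0.96, 2.0), 1))   # about 141.7 bits per minute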
Affiliation(s)
- Er Akshay Katyal
- ICE Department, Dr B.R. Ambedkar N.I.T. Jalandhar, GT Road Bye-Pass, Jalandhar, Punjab, 144011, India.
| | - Rajesh Singla
- ICE Department, Dr B.R. Ambedkar N.I.T. Jalandhar, GT Road Bye-Pass, Jalandhar, Punjab, 144011, India
| |
Collapse
|
34
|
Belkhiria C, Peysakhovich V. Electro-Encephalography and Electro-Oculography in Aeronautics: A Review Over the Last Decade (2010-2020). FRONTIERS IN NEUROERGONOMICS 2020; 1:606719. [PMID: 38234309 PMCID: PMC10790927 DOI: 10.3389/fnrgo.2020.606719] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/15/2020] [Accepted: 11/17/2020] [Indexed: 01/19/2024]
Abstract
Electro-encephalography (EEG) and electro-oculography (EOG) are methods of electrophysiological monitoring with potentially fruitful applications in neuroscience, clinical exploration, the aeronautical industry, and other sectors. These methods are often the most straightforward way of evaluating brain oscillations and eye movements, as they use standard laboratory or mobile techniques. This review describes the potential of EEG and EOG systems and the application of these methods in aeronautics. For example, EEG and EOG signals can be used to design brain-computer interfaces (BCI) and to interpret brain activity, such as monitoring the mental state of a pilot to determine their workload. The main objectives of this review are to (i) offer an in-depth review of the literature on the basics of EEG and EOG and their application in aeronautics; (ii) explore the methodology and trends of research in combined EEG-EOG studies over the last decade; and (iii) provide methodological guidelines for beginners and experts applying these methods in environments outside the laboratory, with a particular focus on human factors and aeronautics. The study used databases from the scientific, clinical, and neural engineering fields. The review first introduces the characteristics and applications of both EEG and EOG in aeronautics, drawing on a large body of relevant literature from early to more recent studies. We then built a novel taxonomy model that includes 150 combined EEG-EOG papers published in peer-reviewed scientific journals and conferences from January 2010 to March 2020. Several data elements were reviewed for each study (e.g., pre-processing, extracted features, and performance metrics) and examined to uncover trends in aeronautics and summarize interesting methods from this important body of literature. Finally, the review considers the advantages and limitations of these methods as well as future challenges.
Collapse
|
35
|
Developing a Motor Imagery-Based Real-Time Asynchronous Hybrid BCI Controller for a Lower-Limb Exoskeleton. SENSORS 2020; 20:s20247309. [PMID: 33352714 PMCID: PMC7766128 DOI: 10.3390/s20247309] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/05/2020] [Revised: 12/14/2020] [Accepted: 12/14/2020] [Indexed: 11/18/2022]
Abstract
This study aimed to develop an intuitive gait-related motor imagery (MI)-based hybrid brain-computer interface (BCI) controller for a lower-limb exoskeleton and to investigate the feasibility of the controller under a practical scenario including stand-up, gait-forward, and sit-down. A filter bank common spatial pattern (FBCSP) and mutual information-based best individual feature (MIBIF) selection were used to decode MI electroencephalogram (EEG) signals and extract a feature matrix as input to a support vector machine (SVM) classifier. A successive eye-blink switch was sequentially combined with the EEG decoder to operate the lower-limb exoskeleton. Ten subjects demonstrated more than 80% accuracy in both offline (training) and online sessions. All subjects successfully completed a gait task wearing the lower-limb exoskeleton through the developed real-time BCI controller, which achieved a time ratio of 1.45 compared with a manual smartwatch controller. The developed system can potentially benefit people with neurological disorders who may have difficulty operating manual controls.
Collapse
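The decoding chain above (FBCSP features fed to an SVM) rests on the common spatial pattern transform. The numpy/scikit-learn sketch below computes CSP filters and log-variance features for a single frequency band and trains a linear SVM; the filter bank, the MIBIF selection step, and the eye-blink switch are omitted, and all data and parameters are illustrative.

import numpy as np
from scipy.linalg import eigh
from sklearn.svm import SVC

def csp_filters(trials_a, trials_b, n_pairs=2):
    # trials_*: (n_trials, n_channels, n_samples); returns (2 * n_pairs, n_channels) filters.
    def mean_cov(trials):
        return np.mean([t @ t.T / np.trace(t @ t.T) for t in trials], axis=0)
    Ca, Cb = mean_cov(trials_a), mean_cov(trials_b)
    eigvals, eigvecs = eigh(Ca, Ca + Cb)              # generalized eigendecomposition
    order = np.argsort(eigvals)
    picks = np.concatenate([order[:n_pairs], order[-n_pairs:]])
    return eigvecs[:, picks].T

def log_var_features(trials, W):
    proj = np.einsum("fc,ncs->nfs", W, trials)
    var = proj.var(axis=-1)
    return np.log(var / var.sum(axis=1, keepdims=True))

rng = np.random.default_rng(3)
left, right = rng.standard_normal((2, 30, 16, 400))   # 30 trials per class, 16 channels, 400 samples
W = csp_filters(left, right)
X = np.vstack([log_var_features(left, W), log_var_features(right, W)])
y = np.array([0] * 30 + [1] * 30)
print(SVC(kernel="linear").fit(X, y).score(X, y))     # training accuracy on the toy data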
|
36
|
do Nascimento LMS, Bonfati LV, Freitas MLB, Mendes Junior JJA, Siqueira HV, Stevan SL. Sensors and Systems for Physical Rehabilitation and Health Monitoring-A Review. SENSORS (BASEL, SWITZERLAND) 2020; 20:E4063. [PMID: 32707749 PMCID: PMC7436073 DOI: 10.3390/s20154063] [Citation(s) in RCA: 32] [Impact Index Per Article: 6.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/12/2020] [Revised: 07/09/2020] [Accepted: 07/12/2020] [Indexed: 01/03/2023]
Abstract
The use of wearable equipment and sensing devices to monitor physical activities, whether for well-being, sports monitoring, or medical rehabilitation, has expanded rapidly due to the evolution of sensing techniques, cheaper integrated circuits, and the development of connectivity technologies. In this context, this paper presents a state-of-the-art review of sensors and systems for rehabilitation and health monitoring. Although we acknowledge the increasing importance of data-processing techniques, our focus was on the implementation of sensors and their biomedical applications. Although many themes overlap, we organized this review into three groups: Sensors in Healthcare, Home Medical Assistance, and Continuous Health Monitoring; Systems and Sensors in Physical Rehabilitation; and Assistive Systems.
Collapse
Affiliation(s)
- Lucas Medeiros Souza do Nascimento
- Graduate Program in Electrical Engineering (PPGEE), Federal University of Technology of Parana (UTFPR), Ponta Grossa (PR) 84016-210, Brazil; (L.M.S.d.N.); (L.V.B.); (M.L.B.F.); (H.V.S.)
| | - Lucas Vacilotto Bonfati
- Graduate Program in Electrical Engineering (PPGEE), Federal University of Technology of Parana (UTFPR), Ponta Grossa (PR) 84016-210, Brazil; (L.M.S.d.N.); (L.V.B.); (M.L.B.F.); (H.V.S.)
| | - Melissa La Banca Freitas
- Graduate Program in Electrical Engineering (PPGEE), Federal University of Technology of Parana (UTFPR), Ponta Grossa (PR) 84016-210, Brazil; (L.M.S.d.N.); (L.V.B.); (M.L.B.F.); (H.V.S.)
| | - José Jair Alves Mendes Junior
- Graduate Program in Electrical Engineering and Industrial Informatics (CPGEI), Federal University of Technology of Parana (UTFPR), Curitiba (PR) 80230-901, Brazil;
| | - Hugo Valadares Siqueira
- Graduate Program in Electrical Engineering (PPGEE), Federal University of Technology of Parana (UTFPR), Ponta Grossa (PR) 84016-210, Brazil; (L.M.S.d.N.); (L.V.B.); (M.L.B.F.); (H.V.S.)
| | - Sergio Luiz Stevan
- Graduate Program in Electrical Engineering (PPGEE), Federal University of Technology of Parana (UTFPR), Ponta Grossa (PR) 84016-210, Brazil; (L.M.S.d.N.); (L.V.B.); (M.L.B.F.); (H.V.S.)
| |
Collapse
|
37
|
|
38
|
Jin J, Li S, Daly I, Miao Y, Liu C, Wang X, Cichocki A. The Study of Generic Model Set for Reducing Calibration Time in P300-Based Brain–Computer Interface. IEEE Trans Neural Syst Rehabil Eng 2020; 28:3-12. [DOI: 10.1109/tnsre.2019.2956488] [Citation(s) in RCA: 69] [Impact Index Per Article: 13.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
|