1
Liu X, Hu B, Si Y, Wang Q. The role of eye movement signals in non-invasive brain-computer interface typing system. Med Biol Eng Comput 2024; 62:1981-1990. [PMID: 38509350] [DOI: 10.1007/s11517-024-03070-7]
Abstract
Brain-Computer Interfaces (BCIs) have shown great potential in providing communication and control for individuals with severe motor disabilities. However, traditional BCIs that rely on electroencephalography (EEG) signals suffer from low information transfer rates and high variability across users. Recently, eye movement signals have emerged as a promising alternative due to their high accuracy and robustness. Eye movement signals are the electrical or mechanical signals generated by the movements and behaviors of the eyes, serving to denote the diverse forms of eye movements, such as fixations, smooth pursuit, and other oculomotor activities like blinking. This article presents a review of recent studies on the development of BCI typing systems that incorporate eye movement signals. We first discuss the basic principles of BCI and the recent advancements in text entry. Then, we provide a comprehensive summary of the latest advancements in BCI typing systems that leverage eye movement signals. This includes an in-depth analysis of hybrid BCIs that are built upon the integration of electrooculography (EOG) and eye tracking technology, aiming to enhance the performance and functionality of the system. Moreover, we highlight the advantages and limitations of different approaches, as well as potential future directions. Overall, eye movement signals hold great potential for enhancing the usability and accessibility of BCI typing systems, and further research in this area could lead to more effective communication and control for individuals with motor disabilities.
Affiliation(s)
- Xi Liu
- Key Laboratory of Spectral Imaging Technology, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China
- University of Chinese Academy of Sciences, Beijing, 100049, China
- Key Laboratory of Biomedical Spectroscopy of Xi'an, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China
- Bingliang Hu
- Key Laboratory of Spectral Imaging Technology, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China
- Key Laboratory of Biomedical Spectroscopy of Xi'an, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China
- Yang Si
- Department of Neurology, Sichuan Academy of Medical Science and Sichuan Provincial People's Hospital, Chengdu, 611731, China
- University of Electronic Science and Technology of China, Chengdu, 611731, China
- Quan Wang
- Key Laboratory of Spectral Imaging Technology, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China.
- Key Laboratory of Biomedical Spectroscopy of Xi'an, Xi'an Institute of Optics and Precision Mechanics, Chinese Academy of Sciences, Xi'an, 710119, China.
2
Ma X, Chen W, Pei Z, Zhang Y, Chen J. Attention-based convolutional neural network with multi-modal temporal information fusion for motor imagery EEG decoding. Comput Biol Med 2024; 175:108504. [PMID: 38701593] [DOI: 10.1016/j.compbiomed.2024.108504]
Abstract
Convolutional neural network (CNN) has been widely applied in motor imagery (MI)-based brain computer interface (BCI) to decode electroencephalography (EEG) signals. However, due to the limited perceptual field of convolutional kernel, CNN only extracts features from local region without considering long-term dependencies for EEG decoding. Apart from long-term dependencies, multi-modal temporal information is equally important for EEG decoding because it can offer a more comprehensive understanding of the temporal dynamics of neural processes. In this paper, we propose a novel deep learning network that combines CNN with self-attention mechanism to encapsulate multi-modal temporal information and global dependencies. The network first extracts multi-modal temporal information from two distinct perspectives: average and variance. A shared self-attention module is then designed to capture global dependencies along these two feature dimensions. We further design a convolutional encoder to explore the relationship between average-pooled and variance-pooled features and fuse them into more discriminative features. Moreover, a data augmentation method called signal segmentation and recombination is proposed to improve the generalization capability of the proposed network. The experimental results on the BCI Competition IV-2a (BCIC-IV-2a) and BCI Competition IV-2b (BCIC-IV-2b) datasets show that our proposed method outperforms the state-of-the-art methods and achieves 4-class average accuracy of 85.03% on the BCIC-IV-2a dataset. The proposed method implies the effectiveness of multi-modal temporal information fusion in attention-based deep learning networks and provides a new perspective for MI-EEG decoding. The code is available at https://github.com/Ma-Xinzhi/EEG-TransNet.
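The exact architecture is not reproducible from this summary, but the two pooling branches (average and variance over time windows) feeding one shared self-attention module can be sketched in NumPy. All shapes, window sizes, and weight matrices below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def pool_branches(x, win):
    """Split a (channels, time) EEG trial into fixed windows and pool each
    window two ways: mean and variance along the time axis."""
    c, t = x.shape
    n = t // win
    w = x[:, :n * win].reshape(c, n, win)
    return w.mean(axis=2), w.var(axis=2)          # each (channels, n_windows)

def self_attention(f, wq, wk, wv):
    """Scaled dot-product self-attention over the window axis of f."""
    q, k, v = f @ wq, f @ wk, f @ wv
    scores = q @ k.T / np.sqrt(q.shape[-1])
    scores -= scores.max(axis=-1, keepdims=True)  # numerical stability
    a = np.exp(scores)
    a /= a.sum(axis=-1, keepdims=True)
    return a @ v

rng = np.random.default_rng(0)
trial = rng.standard_normal((22, 1000))           # toy 22-channel EEG trial
avg_f, var_f = pool_branches(trial, win=100)      # (22, 10) each
wq = rng.standard_normal((22, 8))
wk = rng.standard_normal((22, 8))
wv = rng.standard_normal((22, 8))
# The *same* projection weights attend over both feature branches,
# mirroring the "shared self-attention module" described in the abstract.
avg_att = self_attention(avg_f.T, wq, wk, wv)     # (10, 8)
var_att = self_attention(var_f.T, wq, wk, wv)     # (10, 8)
fused = np.concatenate([avg_att, var_att], axis=-1)
```

In the paper the fusion step is a learned convolutional encoder; the plain concatenation here only marks where that encoder would sit.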
Affiliation(s)
- Xinzhi Ma
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, China; Hangzhou Innovation Institute, Beihang University, Hangzhou, China
- Weihai Chen
- School of Electrical Engineering and Automation, Anhui University, Hefei, China.
- Zhongcai Pei
- School of Automation Science and Electrical Engineering, Beihang University, Beijing, China; Hangzhou Innovation Institute, Beihang University, Hangzhou, China
- Yue Zhang
- Hangzhou Innovation Institute, Beihang University, Hangzhou, China
- Jianer Chen
- Department of Geriatric Rehabilitation, Third Affiliated Hospital, Zhejiang Chinese Medical University, Hangzhou, China
3
Yao Q, Qiu B. Algorithm design of a combinatorial mathematical model for computer random signals. PeerJ Comput Sci 2024; 10:e1873. [PMID: 38435588] [PMCID: PMC10909170] [DOI: 10.7717/peerj-cs.1873]
Abstract
To improve the processing effect of computer random signals, the manuscript employs the intelligent signal recognition algorithm to design a combinatorial mathematical model for computer random signals, and studies the parameter estimation of conventional frequency hopping signal (FHS) based on optimizing kernel function (KF). First, the mathematical form and graphical representation of the ambiguity function of the conventional FHS are explored. Furthermore, a new KF is presented according to its fuzzy function (FF) and the parameters of conventional FHSs are estimated according to the time-frequency distribution corresponding to the KF. Then, simulation experiments are carried out in different types of interference noise environments. The proposed combinatorial mathematical model for computer random signals shows a practical impact, and can effectively improve the effect of random signal combination.
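The kernel-function design cannot be reconstructed from the abstract alone. As a much simpler baseline for frequency-hopping parameter estimation, a windowed-FFT peak tracker recovers the hop sequence when the window matches the dwell time; the sampling rate, dwell time, and hop frequencies below are invented for the sketch:

```python
import numpy as np

def stft_hop_track(sig, fs, win):
    """Coarse hop-frequency track: the peak FFT frequency of each
    non-overlapping analysis window."""
    hops = []
    for start in range(0, len(sig) - win + 1, win):
        seg = sig[start:start + win] * np.hanning(win)
        spec = np.abs(np.fft.rfft(seg))
        freqs = np.fft.rfftfreq(win, 1.0 / fs)
        hops.append(freqs[spec.argmax()])
    return hops

fs = 8000
hop_freqs = [1000, 2000, 1500, 2500]      # one hop per 50 ms dwell
dwell = int(0.05 * fs)
t = np.arange(dwell) / fs
sig = np.concatenate([np.sin(2 * np.pi * f * t) for f in hop_freqs])
track = stft_hop_track(sig, fs, dwell)    # recovers the hop sequence
```

The paper's ambiguity-function kernel aims at exactly the regime where this naive tracker degrades: unknown dwell boundaries and heavy interference noise.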
Affiliation(s)
- Qinghua Yao
- Xuchang Vocational College of Ceramic, Xuchang, Henan, China
- Benhua Qiu
- Department of Basic Courses, Zhengzhou University of Science and Technology, Zhengzhou, Henan, China
4
Hu L, Zhu J, Chen S, Zhou Y, Song Z, Li Y. A Wearable Asynchronous Brain-Computer Interface Based on EEG-EOG Signals With Fewer Channels. IEEE Trans Biomed Eng 2024; 71:504-513. [PMID: 37616137] [DOI: 10.1109/tbme.2023.3308371]
Abstract
OBJECTIVE: Brain-computer interfaces (BCIs) have tremendous application potential in communication, mechatronic control and rehabilitation. However, existing BCI systems are bulky, expensive and require laborious preparation before use. This study proposes a practical and user-friendly BCI system without compromising performance. METHODS: A hybrid asynchronous BCI system was developed based on an elaborately designed wearable electroencephalography (EEG) amplifier that is compact, easy to use and offers a high signal-to-noise ratio (SNR). The wearable BCI system can detect P300 signals by processing EEG signals from three channels and operates asynchronously by integrating blink detection. RESULTS: The wearable EEG amplifier obtains high-quality EEG signals and introduces preprocessing capabilities to BCI systems. The wearable BCI system achieves an average accuracy of 94.03±4.65%, an average information transfer rate (ITR) of 31.42±7.39 bits/min and an average false-positive rate (FPR) of 1.78%. CONCLUSION: The experimental results demonstrate the feasibility and practicality of the developed wearable EEG amplifier and BCI system. SIGNIFICANCE: Wearable asynchronous BCI systems with fewer channels are possible, indicating that BCI applications can be transferred from the laboratory to real-world scenarios.
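The reported ITR presumably follows the standard Wolpaw formula; a small helper makes the relationship between target count, accuracy, and selection time concrete (the example parameters are placeholders, not the paper's settings):

```python
import math

def itr_bits_per_min(n_targets, acc, trial_seconds):
    """Wolpaw information transfer rate (bits/min) for an n-target
    selection task with accuracy `acc`, one selection per `trial_seconds`."""
    bits = math.log2(n_targets)
    if 0.0 < acc < 1.0:
        bits += acc * math.log2(acc) \
              + (1 - acc) * math.log2((1 - acc) / (n_targets - 1))
    return bits * 60.0 / trial_seconds

# A perfect binary selection once per second carries 60 bits/min,
# while chance-level (50%) binary accuracy carries no information.
perfect = itr_bits_per_min(2, 1.0, 1.0)
chance = itr_bits_per_min(2, 0.5, 1.0)
```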
5
Peketi S, Dhok SB. Machine Learning Enabled P300 Classifier for Autism Spectrum Disorder Using Adaptive Signal Decomposition. Brain Sci 2023; 13:315. [PMID: 36831857] [PMCID: PMC9954262] [DOI: 10.3390/brainsci13020315]
Abstract
Joint attention skills deficiency in Autism spectrum disorder (ASD) hinders individuals from communicating effectively. The P300 Electroencephalogram (EEG) signal-based brain-computer interface (BCI) helps these individuals in neurorehabilitation training to overcome this deficiency. The detection of the P300 signal is more challenging in ASD as it is noisy, has less amplitude, and has a higher latency than in other individuals. This paper presents a novel application of the variational mode decomposition (VMD) technique in a BCI system involving ASD subjects for P300 signal identification. The EEG signal is decomposed into five modes using VMD. Thirty linear and non-linear time and frequency domain features are extracted for each mode. Synthetic minority oversampling technique data augmentation is performed to overcome the class imbalance problem in the chosen dataset. Then, a comparative analysis of three popular machine learning classifiers is performed for this application. VMD's fifth mode with a support vector machine (fine Gaussian kernel) classifier gave the best performance parameters, namely accuracy, F1-score, and the area under the curve, as 91.12%, 91.18%, and 96.6%, respectively. These results are better when compared to other state-of-the-art methods.
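The class-imbalance step can be sketched in plain Python: SMOTE builds each synthetic minority sample by interpolating between a real minority sample and one of its k nearest minority neighbours. The neighbour count and the toy feature vectors below are arbitrary illustrations:

```python
import random

def smote(minority, n_new, k=3, seed=0):
    """Synthetic minority oversampling: each synthetic point is a random
    interpolation between a minority sample and one of its k nearest
    minority-class neighbours (Euclidean distance)."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    out = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: dist2(base, p))[:k]
        nb = rng.choice(neighbours)
        lam = rng.random()                      # interpolation factor in [0, 1)
        out.append([b + lam * (n - b) for b, n in zip(base, nb)])
    return out

# Four minority samples on the unit square; synthetic points stay inside it.
minority = [[0.0, 0.0], [1.0, 0.0], [0.0, 1.0], [1.0, 1.0]]
synthetic = smote(minority, n_new=4)
```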
6
Han J, Xu M, Xiao X, Yi W, Jung TP, Ming D. A high-speed hybrid brain-computer interface with more than 200 targets. J Neural Eng 2023; 20:016025. [PMID: 36608342] [DOI: 10.1088/1741-2552/acb105]
Abstract
Objective. Brain-computer interfaces (BCIs) have recently made significant strides in expanding their instruction set, which has attracted wide attention from researchers. The number of targets and commands is a key indicator of how well BCIs can decode the brain's intentions. No studies have reported a BCI system with over 200 targets. Approach. This study developed the first high-speed BCI system with up to 216 targets that were encoded by a combination of electroencephalography features, including P300, motion visual evoked potential (mVEP), and steady-state visual evoked potential (SSVEP). Specifically, the hybrid BCI paradigm used the time-frequency division multiple access strategy to elaborately tag targets with P300 and mVEP of different time windows, along with SSVEP of different frequencies. The hybrid features were then decoded by task-discriminant component analysis and linear discriminant analysis. Ten subjects participated in the offline and online cued-guided spelling experiments. Another ten subjects took part in online free-spelling experiments. Main results. The offline results showed that the mVEP and P300 components were prominent in the central, parietal, and occipital regions, while the most distinct SSVEP feature was in the occipital region. The online cued-guided spelling and free-spelling results showed that the proposed BCI system achieved an average accuracy of 85.37% ± 7.49% and 86.00% ± 5.98% for the 216-target classification, resulting in an average information transfer rate (ITR) of 302.83 ± 39.20 bits/min and 204.47 ± 37.56 bits/min, respectively. Notably, the peak ITR could reach up to 367.83 bits/min. Significance. This study developed the first high-speed BCI system with more than 200 targets, which holds promise for extending BCI's application scenarios.
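The time-frequency division multiple access idea — give every target a unique combination of temporal slots and an SSVEP frequency — can be illustrated with a toy code book. The 6 × 6 × 6 split and the frequency list are our own assumptions for the sketch; the paper's actual slot and frequency allocation may differ:

```python
from itertools import product

# Hypothetical allocation: 6 P300 stimulus windows x 6 mVEP motion-onset
# windows x 6 SSVEP frequencies = 216 unique target codes.
p300_slots = range(6)
mvep_slots = range(6)
ssvep_freqs = [8.0, 9.0, 10.0, 11.0, 12.0, 13.0]   # Hz

codebook = {i: code
            for i, code in enumerate(product(p300_slots, mvep_slots, ssvep_freqs))}
inverse = {code: i for i, code in codebook.items()}

def decode(p300_slot, mvep_slot, freq):
    """Map detected (time slot, time slot, frequency) features back to a
    target id by inverting the code book."""
    return inverse[(p300_slot, mvep_slot, freq)]
```

Decoding then reduces to detecting which P300/mVEP window responded and which SSVEP frequency dominates, and looking the triple up.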
Affiliation(s)
- Jin Han
- Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Minpeng Xu
- Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Xiaolin Xiao
- Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Weibo Yi
- Beijing Machine and Equipment Institute, Beijing 100854, People's Republic of China
- Tzyy-Ping Jung
- Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Swartz Centre for Computational Neuroscience, University of California, San Diego, CA, United States of America
- Dong Ming
- Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
7
Cho JH, Jeong JH, Lee SW. NeuroGrasp: Real-Time EEG Classification of High-Level Motor Imagery Tasks Using a Dual-Stage Deep Learning Framework. IEEE Trans Cybern 2022; 52:13279-13292. [PMID: 34748509] [DOI: 10.1109/tcyb.2021.3122969]
Abstract
Brain-computer interfaces (BCIs) have been widely employed to identify and estimate a user's intention to trigger a robotic device by decoding motor imagery (MI) from an electroencephalogram (EEG). However, developing a BCI system driven by MI related to natural hand-grasp tasks is challenging due to its high complexity. Although numerous BCI studies have successfully decoded large body parts, such as the movement intention of both hands, arms, or legs, research on MI decoding of high-level behaviors such as hand grasping is essential to further expand the versatility of MI-based BCIs. In this study, we propose NeuroGrasp, a dual-stage deep learning framework that decodes multiple hand grasping from EEG signals under the MI paradigm. The proposed method effectively uses an EEG and electromyography (EMG)-based learning, such that EEG-based inference at test phase becomes possible. The EMG guidance during model training allows BCIs to predict hand grasp types from EEG signals accurately. Consequently, NeuroGrasp improved classification performance offline, and demonstrated a stable classification performance online. Across 12 subjects, we obtained an average offline classification accuracy of 0.68 (±0.09) in four-grasp-type classifications and 0.86 (±0.04) in two-grasp category classifications. In addition, we obtained an average online classification accuracy of 0.65 (±0.09) and 0.79 (±0.09) across six high-performance subjects. Because the proposed method has demonstrated a stable classification performance when evaluated either online or offline, in the future, we expect that the proposed method could contribute to different BCI applications, including robotic hands or neuroprosthetics for handling everyday objects.
8
Wu JY, Ching CTS, Wang HMD, Liao LD. Emerging Wearable Biosensor Technologies for Stress Monitoring and Their Real-World Applications. Biosensors 2022; 12:1097. [PMID: 36551064] [PMCID: PMC9776100] [DOI: 10.3390/bios12121097]
Abstract
Wearable devices are being developed faster and applied more widely. Wearables have been used to monitor movement-related physiological indices, including heartbeat, movement, and other exercise metrics, for health purposes. People are also paying more attention to mental health issues, such as stress management. Wearable devices can be used to monitor emotional status and provide preliminary diagnoses and guided training functions. The nervous system responds to stress, which directly affects eye movements and sweat secretion. Therefore, the changes in brain potential, eye potential, and cortisol content in sweat could be used to interpret emotional changes, fatigue levels, and physiological and psychological stress. To better assess users, stress-sensing devices can be integrated with applications to improve cognitive function, attention, sports performance, learning ability, and stress release. These application-related wearables can be used in medical diagnosis and treatment, such as for attention-deficit hyperactivity disorder (ADHD), traumatic stress syndrome, and insomnia, thus facilitating precision medicine. However, many factors contribute to data errors and incorrect assessments, including the various wearable devices, sensor types, data reception methods, data processing accuracy and algorithms, application reliability and validity, and actual user actions. Therefore, in the future, medical platforms for wearable devices and applications should be developed, and product implementations should be evaluated clinically to confirm product accuracy and perform reliable research.
Affiliation(s)
- Ju-Yu Wu
- Institute of Biomedical Engineering and Nanomedicine, National Health Research Institutes, Zhunan Township, Miaoli County 35053, Taiwan
- Program in Tissue Engineering and Regenerative Medicine, National Chung Hsing University, South District, Taichung City 402, Taiwan
- Congo Tak-Shing Ching
- Graduate Institute of Biomedical Engineering, National Chung Hsing University, South District, Taichung City 402, Taiwan
- Department of Electrical Engineering, National Chi Nan University, No. 1 University Road, Puli Township, Nantou County 545301, Taiwan
- Hui-Min David Wang
- Program in Tissue Engineering and Regenerative Medicine, National Chung Hsing University, South District, Taichung City 402, Taiwan
- Graduate Institute of Biomedical Engineering, National Chung Hsing University, South District, Taichung City 402, Taiwan
- Lun-De Liao
- Institute of Biomedical Engineering and Nanomedicine, National Health Research Institutes, Zhunan Township, Miaoli County 35053, Taiwan
- Program in Tissue Engineering and Regenerative Medicine, National Chung Hsing University, South District, Taichung City 402, Taiwan
9
Mussi MG, Adams KD. EEG hybrid brain-computer interfaces: A scoping review applying an existing hybrid-BCI taxonomy and considerations for pediatric applications. Front Hum Neurosci 2022; 16:1007136. [DOI: 10.3389/fnhum.2022.1007136]
Abstract
Most hybrid brain-computer interfaces (hBCI) aim at improving the performance of single-input BCI. Many combinations are possible when configuring an hBCI, such as using multiple brain input signals, different stimuli or more than one input system. Multiple studies have been done since 2010 in which such interfaces have been tested and analyzed. Results and conclusions are promising, but little has been discussed as to what is the best approach for the pediatric population, should they use hBCI as an assistive technology. Children might face greater challenges when using BCI and might benefit from less complex interfaces. Hence, in this scoping review we included 42 papers that developed hBCI systems for the purpose of controlling assistive devices or communication software, and we analyzed them through the lenses of potential use in clinical settings and for children. We extracted taxonomic categories proposed in previous studies to describe the types of interfaces that have been developed. We also proposed interface characteristics that could be observed in different hBCI, such as type of target, number of targets and number of steps before selection. We then discussed how each of the extracted characteristics could influence the overall complexity of the system and what might be the best options for applications for children. Effectiveness and efficiency were also collected and included in the analysis. We concluded that the least complex hBCI interfaces might involve a brain input and an external input, with a sequential mode of operation, and visual stimuli. Those interfaces might also use a minimal number of targets of the strobic type, with one or two steps before the final selection. We hope this review can be used as a guideline for future hBCI developments and as an incentive to the design of interfaces that can also serve children who have motor impairments.
10
Prabhakar SK, Ju YG, Rajaguru H, Won DO. Sparse measures with swarm-based pliable hidden Markov model and deep learning for EEG classification. Front Comput Neurosci 2022; 16:1016516. [DOI: 10.3389/fncom.2022.1016516]
Abstract
In comparison to other biomedical signals, electroencephalography (EEG) signals are quite complex in nature, so it requires a versatile model for feature extraction and classification. The structural information that prevails in the originally featured matrix is usually lost when dealing with standard feature extraction and conventional classification techniques. The main intention of this work is to propose a very novel and versatile approach for EEG signal modeling and classification. In this work, a sparse representation model along with the analysis of sparseness measures is done initially for the EEG signals and then a novel convergence of utilizing these sparse representation measures with Swarm Intelligence (SI) techniques based Hidden Markov Model (HMM) is utilized for the classification. The SI techniques utilized to compute the hidden states of the HMM are Particle Swarm Optimization (PSO), Differential Evolution (DE), Whale Optimization Algorithm (WOA), and Backtracking Search Algorithm (BSA), thereby making the HMM more pliable. Later, a deep learning methodology with the help of Convolutional Neural Network (CNN) was also developed with it and the results are compared to the standard pattern recognition classifiers. To validate the efficacy of the proposed methodology, a comprehensive experimental analysis is done over publicly available EEG datasets. The method is supported by strong statistical tests and theoretical analysis and results show that when sparse representation is implemented with deep learning, the highest classification accuracy of 98.94% is obtained and when sparse representation is implemented with SI-based HMM method, a high classification accuracy of 95.70% is obtained.
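The abstract does not name the sparseness measures it analyzes; one widely used choice, Hoyer's l1/l2-based measure, is easy to sketch and shows what such a measure reports (offered here purely as an illustrative stand-in):

```python
import math

def hoyer_sparseness(x):
    """Hoyer's sparseness measure in [0, 1]: 0 for a flat vector,
    1 for a one-hot vector, computed from the l1/l2 norm ratio."""
    n = len(x)
    l1 = sum(abs(v) for v in x)
    l2 = math.sqrt(sum(v * v for v in x))
    return (math.sqrt(n) - l1 / l2) / (math.sqrt(n) - 1)

flat = hoyer_sparseness([1.0, 1.0, 1.0, 1.0])    # maximally dense
peaky = hoyer_sparseness([0.0, 0.0, 5.0, 0.0])   # maximally sparse
```

In the paper's pipeline, measures like this would summarize the sparse representation of each EEG segment before the SI-tuned HMM or CNN stage.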
11
Dong E, Zhang H, Zhu L, Du S, Tong J. A multi-modal brain-computer interface based on threshold discrimination and its application in wheelchair control. Cogn Neurodyn 2022; 16:1123-1133. [PMID: 36237403] [PMCID: PMC9508306] [DOI: 10.1007/s11571-021-09779-7]
Abstract
In this study, we propose a novel multi-modal brain-computer interface (BCI) system based on threshold discrimination, which is proposed for the first time to distinguish between steady-state visual evoked potential (SSVEP) and motor imagery (MI) potentials. The system combines these two heterogeneous signals to increase the number of control commands and improve the performance of asynchronous control of external devices. In this research, an electric wheelchair is controlled as an example. The user can continuously steer the wheelchair left/right through motor imagery by imagining left/right-hand movement, and generate another 6 commands for wheelchair control by focusing on the SSVEP stimulation panel. Ten subjects participated in an MI training session and eight of them completed a mobile obstacle-avoidance experiment in a complex environment requiring high control accuracy for successful manipulation. Compared with a single-modal BCI-controlled wheelchair system, the results demonstrate that the proposed multi-modal method is effective, providing more satisfactory control accuracy, and show the potential of BCI-controlled systems to be applied in complex daily tasks.
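A minimal sketch of the threshold-discrimination idea: if enough narrow-band power sits at the stimulation frequency, route the epoch to the SSVEP decoder, otherwise hand it to the MI decoder. The band width, threshold, and sampling rate here are illustrative, not the paper's values:

```python
import numpy as np

def band_power_fraction(sig, fs, f_lo, f_hi):
    """Fraction of the epoch's spectral power inside [f_lo, f_hi] Hz."""
    spec = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return spec[band].sum() / max(spec.sum(), 1e-12)

def route(sig, fs, ssvep_freq, thresh=0.5):
    """Threshold discrimination between the two heterogeneous signals."""
    p = band_power_fraction(sig, fs, ssvep_freq - 0.5, ssvep_freq + 0.5)
    return "SSVEP" if p > thresh else "MI"

fs = 250
t = np.arange(fs * 2) / fs
ssvep_epoch = np.sin(2 * np.pi * 10 * t)                     # strong 10 Hz component
mi_epoch = np.random.default_rng(1).standard_normal(len(t))  # broadband stand-in
```

A real system would replace the broadband stand-in with genuine sensorimotor-rhythm epochs and tune the threshold per user.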
Collapse
Affiliation(s)
- Enzeng Dong
- Tianjin Key Laboratory of Control Theory and Applications in Complicated Systems, Tianjin University of Technology, Tianjin, 300384 China
- Haoran Zhang
- Tianjin Key Laboratory of Control Theory and Applications in Complicated Systems, Tianjin University of Technology, Tianjin, 300384 China
- Lin Zhu
- China North Industries Group 210 Research Institute, Beijing, China
- Shengzhi Du
- Department of Electrical Engineering, Tshwane University of Technology, Pretoria, 0001 South Africa
- Jigang Tong
- Tianjin Key Laboratory of Control Theory and Applications in Complicated Systems, Tianjin University of Technology, Tianjin, 300384 China
12
High-Frequency Vibrating Stimuli Using the Low-Cost Coin-Type Motors for SSSEP-Based BCI. Biomed Res Int 2022; 2022:4100381. [PMID: 36060141] [PMCID: PMC9436568] [DOI: 10.1155/2022/4100381]
Abstract
Steady-state somatosensory-evoked potential- (SSSEP-) based brain-computer interfaces (BCIs) have been applied for assisting people with physical disabilities since they do not require gaze fixation or long-time training. Despite the advancement of various noninvasive electroencephalogram- (EEG-) based BCI paradigms, research on SSSEP with various frequency ranges and the related classification algorithms remains relatively unsettled. In this study, we investigated the feasibility of classifying the SSSEP within high-frequency vibration stimuli induced by a versatile coin-type eccentric rotating mass (ERM) motor. Seven healthy subjects performed selective attention (SA) tasks with vibration stimuli attached to the left and right index fingers. Three EEG feature extraction methods, each followed by a support vector machine (SVM) classifier, were tested: common spatial pattern (CSP), filter-bank CSP (FBCSP), and mutual information-based best individual feature (MIBIF) selection after the FBCSP. Consequently, the FBCSP showed the highest performance for classifying the left- and right-hand SA tasks among the three methods (i.e., CSP and FBCSP-MIBIF). Based on our findings and approach, high-frequency vibration stimuli using low-cost coin motors with the FBCSP-based feature selection can potentially be applied to developing practical SSSEP-based BCI systems.
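The CSP step common to all three compared pipelines can be sketched in NumPy: whiten the composite covariance, then take the eigenvectors of the whitened class covariance as spatial filters. The toy data below (not the study's recordings) plants excess variance on different channels for the two classes:

```python
import numpy as np

def csp_filters(trials_a, trials_b, n_filters=2):
    """Common spatial patterns via whitening + eigendecomposition.
    trials_*: arrays of shape (n_trials, channels, samples)."""
    def mean_cov(trials):
        covs = [x @ x.T / np.trace(x @ x.T) for x in trials]
        return np.mean(covs, axis=0)

    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Whitening transform for the composite covariance ca + cb.
    evals, evecs = np.linalg.eigh(ca + cb)
    p = evecs @ np.diag(evals ** -0.5) @ evecs.T
    # Eigenvectors of the whitened class-A covariance give the filters.
    evals_a, evecs_a = np.linalg.eigh(p @ ca @ p.T)
    order = np.argsort(evals_a)[::-1]
    w = evecs_a[:, order].T @ p
    # Keep filters from both ends of the eigenvalue spectrum.
    return np.vstack([w[:n_filters], w[-n_filters:]])

rng = np.random.default_rng(0)
# Toy data: class A has high variance on channel 0, class B on channel 1.
a = rng.standard_normal((20, 4, 200)); a[:, 0] *= 5
b = rng.standard_normal((20, 4, 200)); b[:, 1] *= 5
w = csp_filters(a, b)
feat = np.log(np.var(w @ a[0], axis=1))   # classic log-variance CSP features
```

FBCSP simply repeats this per band-pass filter bank and concatenates the resulting log-variance features before the SVM.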
13
Lee KW, Lee DH, Kim SJ, Lee SW. Decoding Neural Correlation of Language-Specific Imagined Speech using EEG Signals. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:1977-1980. [PMID: 36086641] [DOI: 10.1109/embc48229.2022.9871721]
Abstract
Speech impairments due to cerebral lesions and degenerative disorders can be devastating. For humans with severe speech deficits, imagined speech in the brain-computer interface has been a promising hope for reconstructing the neural signals of speech production. However, studies in the EEG-based imagined speech domain still have some limitations due to high variability in spatial and temporal information and low signal-to-noise ratio. In this paper, we investigated the neural signals for two groups of native speakers with two tasks with different languages, English and Chinese. Our assumption was that English, a non-tonal and phonogram-based language, would have spectral differences in neural computation compared to Chinese, a tonal and ideogram-based language. The results showed the significant difference in the relative power spectral density between English and Chinese in specific frequency band groups. Also, the spatial evaluation of Chinese native speakers in the theta band was distinctive during the imagination task. Hence, this paper would suggest the key spectral and spatial information of word imagination with specialized language while decoding the neural signals of speech. Clinical Relevance: Imagined speech-related studies lead to the development of assistive communication technology especially for patients with speech disorders such as aphasia due to brain damage. This study suggests significant spectral features by analyzing cross-language differences of EEG-based imagined speech using two widely used languages.
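Relative power spectral density per canonical EEG band can be computed directly from the FFT. The band edges below are common conventions and the 6 Hz sinusoid is a deterministic stand-in for real EEG, used only to exercise the function:

```python
import numpy as np

def relative_band_power(sig, fs, bands):
    """Relative PSD: each band's share of total (non-DC) spectral power."""
    psd = np.abs(np.fft.rfft(sig)) ** 2
    freqs = np.fft.rfftfreq(len(sig), 1.0 / fs)
    total = psd[freqs > 0].sum()
    return {name: psd[(freqs >= lo) & (freqs < hi)].sum() / total
            for name, (lo, hi) in bands.items()}

bands = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}
fs = 256
t = np.arange(fs * 4) / fs
sig = np.sin(2 * np.pi * 6 * t)        # dominant theta-band oscillation
rel = relative_band_power(sig, fs, bands)
```

Comparing such per-band dictionaries between the English and Chinese imagination conditions is the kind of spectral contrast the abstract describes.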
14
|
Musellim S, Han DK, Jeong JH, Lee SW. Prototype-based Domain Generalization Framework for Subject-Independent Brain-Computer Interfaces. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:711-714. [PMID: 36086535 DOI: 10.1109/embc48229.2022.9871434] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Brain-computer interface (BCI) is challenging to use in practice due to the inter/intra-subject variability of electroencephalography (EEG). The BCI system, in general, necessitates a calibration technique to obtain subject/session-specific data in order to tune the model each time the system is utilized. This issue is acknowledged as a key hindrance to BCI, and a new strategy based on domain generalization has recently evolved to address it. In light of this, we have concentrated on developing an EEG classification framework that can be applied directly to data from unknown domains (i.e., subjects), using only data previously acquired from separate subjects. For this purpose, in this paper, we proposed a framework that employs the open-set recognition technique as an auxiliary task to learn subject-specific style features from the source dataset while helping the shared feature extractor map the features of the unseen target dataset as a new unseen domain. Our aim is to impose cross-instance style invariance in the same domain and reduce the open space risk on the potential unseen subject in order to improve the generalization ability of the shared feature extractor. Our experiments showed that using the domain information as an auxiliary network increases the generalization performance. Clinical relevance-This study suggests a strategy to improve the performance of subject-independent BCI systems. Our framework can help to reduce the need for further calibration and can be utilized for a range of mental state monitoring tasks (e.g., neurofeedback, identification of epileptic seizures, and sleep disorders).
|
15
|
Bang JS, Lee MH, Fazli S, Guan C, Lee SW. Spatio-Spectral Feature Representation for Motor Imagery Classification Using Convolutional Neural Networks. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2022; 33:3038-3049. [PMID: 33449886 DOI: 10.1109/tnnls.2020.3048385] [Citation(s) in RCA: 23] [Impact Index Per Article: 11.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Convolutional neural networks (CNNs) have recently been applied to electroencephalogram (EEG)-based brain-computer interfaces (BCIs). EEG is a noninvasive neuroimaging technique, which can be used to decode user intentions. Because the feature space of EEG data is highly dimensional and signal patterns are specific to the subject, appropriate methods for feature representation are required to enhance the decoding accuracy of the CNN model. Furthermore, neural changes exhibit high variability between sessions, subjects within a single session, and trials within a single subject, resulting in major issues during the modeling stage. In addition, there are many subject-dependent factors, such as frequency ranges, time intervals, and spatial locations at which the signal occurs, which prevent the derivation of a robust model that can achieve the parameterization of these factors for a wide range of subjects. However, previous studies did not attempt to preserve the multivariate structure and dependencies of the feature space. In this study, we propose a method to generate a spatiospectral feature representation that can preserve the multivariate information of EEG data. Specifically, 3-D feature maps were constructed by combining subject-optimized and subject-independent spectral filters and by stacking the filtered data into tensors. In addition, a layer-wise decomposition model was implemented using our 3-D-CNN framework to secure reliable classification results on a single-trial basis. The average accuracies of the proposed model were 87.15% (±7.31), 75.85% (±12.80), and 70.37% (±17.09) for the BCI competition data sets IV_2a, IV_2b, and OpenBMI data, respectively. These results are better than those obtained by state-of-the-art techniques, and the decomposition model obtained the relevance scores for neurophysiologically plausible electrode channels and frequency domains, confirming the validity of the proposed approach.
|
16
|
EOG-Based Human–Computer Interface: 2000–2020 Review. SENSORS 2022; 22:s22134914. [PMID: 35808414 PMCID: PMC9269776 DOI: 10.3390/s22134914] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 05/24/2022] [Revised: 06/23/2022] [Accepted: 06/25/2022] [Indexed: 11/28/2022]
Abstract
Electro-oculography (EOG)-based brain-computer interface (BCI) is a relevant technology influencing physical medicine, daily life, gaming, and even the aeronautics field. EOG-based BCI systems record activity related to users' intention, perception, and motor decisions, convert the bio-physiological signals into commands for external hardware, and execute the operation expected by the user through the output device. The EOG signal is used for identifying and classifying eye movements through active or passive interaction. Both types of interaction have the potential for controlling the output device by performing the user's communication with the environment. In the aeronautical field, EOG-BCI systems are being explored as a relevant tool to replace manual commands and as a communicative tool dedicated to accelerating the user's intention. This paper reviews the last two decades of EOG-based BCI studies and provides a structured design space with a large set of representative papers. Our purpose is to introduce the existing BCI systems based on EOG signals and to inspire the design of new ones. First, we highlight the basic components of EOG-based BCI studies, including EOG signal acquisition, EOG device particularity, extracted features, translation algorithms, and interaction commands. Second, we provide an overview of EOG-based BCI applications in real and virtual environments along with the aeronautical application. We conclude with a discussion of the actual limits of EOG devices regarding existing systems. Finally, we provide suggestions to gain insight for future design inquiries.
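Many of the systems surveyed above translate EOG traces into commands by thresholding rapid voltage changes; a minimal sketch of that idea follows (the slope threshold, sampling rate, and left/right labels are illustrative assumptions, not values from the review):

```python
def detect_eog_events(eog, fs, slope_thresh):
    """Flag fast EOG transitions by thresholding the first difference.
    Returns (sample_index, direction) pairs, one event per threshold crossing."""
    events = []
    in_event = False
    for i in range(1, len(eog)):
        slope = (eog[i] - eog[i - 1]) * fs  # amplitude units per second
        if abs(slope) > slope_thresh:
            if not in_event:
                events.append((i, "right" if slope > 0 else "left"))
                in_event = True
        else:
            in_event = False
    return events

# Example: a rightward saccade (positive step) followed by a return saccade.
trace = [0.0] * 10 + [120.0] * 10 + [0.0] * 10
saccades = detect_eog_events(trace, fs=250, slope_thresh=5000.0)
```

Real systems add per-user calibration of the threshold and separate blink handling on the vertical channel.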
|
17
|
Machine-learning-enabled adaptive signal decomposition for a brain-computer interface using EEG. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103526] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
|
18
|
ENIC: Ensemble and Nature Inclined Classification with Sparse Depiction based Deep and Transfer Learning for Biosignal Classification. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.108416] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
|
19
|
Lee YE, Shin GH, Lee M, Lee SW. Mobile BCI dataset of scalp- and ear-EEGs with ERP and SSVEP paradigms while standing, walking, and running. Sci Data 2021; 8:315. [PMID: 34930915 PMCID: PMC8688416 DOI: 10.1038/s41597-021-01094-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2021] [Accepted: 11/08/2021] [Indexed: 11/24/2022] Open
Abstract
We present a mobile dataset obtained from electroencephalography (EEG) of the scalp and around the ear as well as from locomotion sensors by 24 participants moving at four different speeds while performing two brain-computer interface (BCI) tasks. The data were collected from 32-channel scalp-EEG, 14-channel ear-EEG, 4-channel electrooculography, and 9-channel inertial measurement units placed at the forehead, left ankle, and right ankle. The recording conditions were as follows: standing, slow walking, fast walking, and slight running at speeds of 0, 0.8, 1.6, and 2.0 m/s, respectively. For each speed, two different BCI paradigms, event-related potential and steady-state visual evoked potential, were recorded. To evaluate the signal quality, scalp- and ear-EEG data were qualitatively and quantitatively validated during each speed. We believe that the dataset will facilitate BCIs in diverse mobile environments to analyze brain activities and evaluate the performance quantitatively for expanding the use of practical BCIs.
Affiliation(s)
- Young-Eun Lee
- Korea University, Department of Brain and Cognitive Engineering, Seoul, 02841, Republic of Korea
- Gi-Hwan Shin
- Korea University, Department of Brain and Cognitive Engineering, Seoul, 02841, Republic of Korea
- Minji Lee
- Korea University, Department of Brain and Cognitive Engineering, Seoul, 02841, Republic of Korea
- Seong-Whan Lee
- Korea University, Department of Brain and Cognitive Engineering, Seoul, 02841, Republic of Korea
- Korea University, Department of Artificial Intelligence, Seoul, 02841, Republic of Korea
|
20
|
Mussabayeva A, Jamwal PK, Tahir Akhtar M. Ensemble Learning Approach for Subject-Independent P300 Speller. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2021; 2021:5893-5896. [PMID: 34892460 DOI: 10.1109/embc46164.2021.9629679] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
The P300 speller is a brain-computer interface (BCI) speller system used to enable people with paralyzing disorders, such as amyotrophic lateral sclerosis (ALS), to communicate with the outer world by processing electroencephalography (EEG) signals. Different people have different latencies and amplitudes of the P300 event-related potential (ERP) component, which is used as the main feature for detecting the target character. In order to achieve robust results for different subjects using generic training (GT), ensemble learning classifiers are proposed based on linear discriminant analysis (LDA), support vector machine (SVM), k-nearest neighbors (kNN), and convolutional neural network (CNN). The proposed models are trained using data from healthy subjects and tested on both healthy subjects and ALS patients. The results show that the fusion of LDA, kNN, and SVM provides the most accurate results, achieving an accuracy of 99% for healthy subjects and about 85% for ALS patients.
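The LDA/kNN/SVM fusion reported above amounts to combining the classifiers' per-trial label predictions; a minimal majority-vote sketch (the classifier outputs below are made-up examples, not data from the paper):

```python
from collections import Counter

def majority_vote(*prediction_lists):
    """Fuse several classifiers' label sequences by per-trial majority vote
    (ties resolve to the label encountered first)."""
    return [Counter(trial).most_common(1)[0][0] for trial in zip(*prediction_lists)]

# Example: three classifiers voting on four trials of a binary target/non-target task.
lda = [1, 0, 1, 0]
svm = [1, 1, 0, 0]
knn = [1, 0, 0, 0]
fused = majority_vote(lda, svm, knn)
```

Weighted voting or stacking over classifier scores is the usual next refinement when the base classifiers differ in reliability.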
|
21
|
Kim SH, Yang HJ, Nguyen NAT, Lee SW. AsEmo: Automatic Approach for EEG-Based Multiple Emotional State Identification. IEEE J Biomed Health Inform 2021; 25:1508-1518. [PMID: 33085624 DOI: 10.1109/jbhi.2020.3032678] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
An electroencephalogram (EEG) is the most extensively used physiological signal in emotion recognition using biometric data. However, these EEG data are difficult to analyze, because of their anomalous characteristic where statistical elements vary according to time as well as spatial-temporal correlations. Therefore, new methods that can clearly distinguish emotional states in EEG data are required. In this paper, we propose a new emotion recognition method, named AsEmo. The proposed method extracts effective features boosting classification performance on various emotional states from multi-class EEG data. AsEmo Automatically determines the number of spatial filters needed to extract significant features using the explained variance ratio (EVR) and employs a Subject-independent method for real-time processing of Emotion EEG data. The advantages of this method are as follows: (a) it automatically determines the spatial filter coefficients distinguishing emotional states and extracts the best features; (b) it is very robust for real-time analysis of new data using a subject-independent technique that considers subject sets, and not a specific subject; (c) it can be easily applied to both binary-class and multi-class data. Experimental results on real-world EEG emotion recognition tasks demonstrate that AsEmo outperforms other state-of-the-art methods with a 2-8% improvement in terms of classification accuracy.
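AsEmo's automatic selection of the number of spatial filters via the explained variance ratio (EVR) can be illustrated by the generic cumulative-EVR rule below; the 95% default target and the example variances are assumptions for illustration, not values from the paper.

```python
def n_components_for_evr(variances, target=0.95):
    """Smallest number of variance-ranked components whose cumulative
    explained-variance ratio reaches the target fraction."""
    total = sum(variances)
    cumulative = 0.0
    for k, v in enumerate(sorted(variances, reverse=True), start=1):
        cumulative += v / total
        if cumulative >= target:
            return k
    return len(variances)

# Example: component variances from a hypothetical spatial decomposition.
n_default = n_components_for_evr([4.0, 2.0, 1.0, 0.5, 0.5])          # 95% target
n_loose = n_components_for_evr([4.0, 2.0, 1.0, 0.5, 0.5], target=0.75)
```

The same rule generalizes to any ranked decomposition (PCA, CSP-like filters) where per-component variance is available.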
|
22
|
Lee SH, Lee M, Lee SW. Neural Decoding of Imagined Speech and Visual Imagery as Intuitive Paradigms for BCI Communication. IEEE Trans Neural Syst Rehabil Eng 2021; 28:2647-2659. [PMID: 33232243 DOI: 10.1109/tnsre.2020.3040289] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
Brain-computer interface (BCI) is oriented toward intuitive systems that users can easily operate. Imagined speech and visual imagery are emerging paradigms that can directly convey a user's intention. We investigated the underlying characteristics that affect the decoding performance of these two paradigms. Twenty-two subjects performed imagined speech and visual imagery of twelve words/phrases frequently used for patients' communication. Spectral features were analyzed with thirteen-class classification (including rest class) using EEG filtered in six frequency ranges. In addition, cortical regions relevant to the two paradigms were analyzed by classification using single-channel and pre-defined cortical groups. Furthermore, we analyzed the word properties that affect the decoding performance based on the number of syllables, concrete and abstract concepts, and the correlation between the two paradigms. Finally, we investigated multiclass scalability in both paradigms. The high-frequency band displayed a significantly superior performance to that in the case of any other spectral features in the thirteen-class classification (imagined speech: 39.73 ± 5.64%; visual imagery: 40.14 ± 4.17%). Furthermore, the performance of Broca's and Wernicke's areas and auditory cortex was found to have improved among the cortical regions in both paradigms. As the number of classes increased, the decoding performance decreased moderately. Moreover, every subject exceeded the confidence level performance, implying the strength of the two paradigms in BCI inefficiency. These two intuitive paradigms were found to be highly effective for multiclass communication systems, having considerable similarities between each other. The results could provide crucial information for improving the decoding performance for practical BCI applications.
|
23
|
Lee YE, Kwak NS, Lee SW. A Real-Time Movement Artifact Removal Method for Ambulatory Brain-Computer Interfaces. IEEE Trans Neural Syst Rehabil Eng 2021; 28:2660-2670. [PMID: 33232242 DOI: 10.1109/tnsre.2020.3040264] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Recently, practical brain-computer interfaces (BCIs) have been widely investigated for detecting human intentions in the real world. However, performance differences still exist between laboratory and real-world environments. One of the main reasons for such differences comes from the user's unstable physical states (e.g., human movements are not strictly controlled), which produce unexpected signal artifacts. Hence, to minimize the performance degradation of electroencephalography (EEG)-based BCIs, we present a novel artifact removal method named constrained independent component analysis with online learning (cIOL). The cIOL can find and reject the noise-like components related to human body movements (i.e., movement artifacts) in the EEG signals. To obtain movement information, isolated electrodes are used to block electrical signals from the brain using high-resistance materials. We estimate artifacts with movement information using constrained independent component analysis from EEG signals and then extract artifact-free signals using online learning in each sample. In addition, the cIOL is evaluated by signal processing under 16 different experimental conditions (two types of EEG devices × two BCI paradigms × four different walking speeds). The experimental results show that the cIOL has the highest accuracy in both scalp- and ear-EEG and the highest signal-to-noise ratio in scalp-EEG among the state-of-the-art methods, except for the case of steady-state visual evoked potential at 2.0 m/s with the superposition problem.
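cIOL itself combines constrained ICA with online learning; as a much-simplified sketch of its rejection criterion only, one can drop any separated component that correlates strongly with the isolated movement-reference electrode (the correlation threshold and the toy signals below are illustrative assumptions):

```python
import math

def pearson_r(a, b):
    """Pearson correlation of two equal-length sequences."""
    n = len(a)
    mean_a, mean_b = sum(a) / n, sum(b) / n
    cov = sum((x - mean_a) * (y - mean_b) for x, y in zip(a, b))
    var_a = sum((x - mean_a) ** 2 for x in a)
    var_b = sum((y - mean_b) ** 2 for y in b)
    return cov / math.sqrt(var_a * var_b)

def reject_movement_components(components, movement_ref, r_thresh=0.8):
    """Keep only components weakly correlated with the movement reference."""
    return [c for c in components if abs(pearson_r(c, movement_ref)) < r_thresh]

# Example: one component tracks the movement reference, one does not.
ref = [0.0, 1.0, 2.0, 3.0, 4.0]
artifact = [0.0, 2.0, 4.0, 6.0, 8.0]   # perfectly correlated with ref
neural = [1.0, 0.0, 1.0, 0.0, 1.0]     # uncorrelated with ref
kept = reject_movement_components([artifact, neural], ref)
```

The paper's actual method constrains the ICA decomposition with the reference and updates it sample-by-sample rather than rejecting after the fact.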
|
24
|
Development of a Human-Display Interface with Vibrotactile Feedback for Real-World Assistive Applications. SENSORS 2021; 21:s21020592. [PMID: 33467611 PMCID: PMC7830928 DOI: 10.3390/s21020592] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/07/2020] [Revised: 01/11/2021] [Accepted: 01/11/2021] [Indexed: 12/21/2022]
Abstract
It is important to operate devices with control panels and touch screens assisted by haptic feedback in mobile environments such as driving automobiles and electric power wheelchairs. Considerable care is needed to give accurate haptic feedback; in particular, presenting clear touch feedback to the elderly and people with reduced sensation is a critical issue from healthcare and safety perspectives. In this study, we aimed to identify the perceptual characteristics for the frequency and direction of haptic vibration on a touch screen subject to vehicle-driving vibration and to propose an efficient haptic system based on these characteristics. We demonstrated that the detection threshold shift decreased at frequencies above 210 Hz due to the contact pressure during active touch, but increased below 210 Hz. We found that the detection thresholds were 0.30–0.45 gpeak, with similar sensitivity in the 80–270 Hz range. The haptic system implemented by reflecting the experimental results achieved characteristics suitable for use scenarios in automobiles. Ultimately, it could provide practical guidelines for the development of touch screens that give accurate touch feedback in real-world environments.
|
25
|
Belkhiria C, Peysakhovich V. Electro-Encephalography and Electro-Oculography in Aeronautics: A Review Over the Last Decade (2010-2020). FRONTIERS IN NEUROERGONOMICS 2020; 1:606719. [PMID: 38234309 PMCID: PMC10790927 DOI: 10.3389/fnrgo.2020.606719] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/15/2020] [Accepted: 11/17/2020] [Indexed: 01/19/2024]
Abstract
Electro-encephalography (EEG) and electro-oculography (EOG) are methods of electrophysiological monitoring that have potentially fruitful applications in neuroscience, clinical exploration, the aeronautical industry, and other sectors. These methods are often the most straightforward way of evaluating brain oscillations and eye movements, as they use standard laboratory or mobile techniques. This review describes the potential of EEG and EOG systems and the application of these methods in aeronautics. For example, EEG and EOG signals can be used to design brain-computer interfaces (BCI) and to interpret brain activity, such as monitoring the mental state of a pilot in determining their workload. The main objectives of this review are to: (i) offer an in-depth review of literature on the basics of EEG and EOG and their application in aeronautics; (ii) explore the methodology and trends of research in combined EEG-EOG studies over the last decade; and (iii) provide methodological guidelines for beginners and experts when applying these methods in environments outside the laboratory, with a particular focus on human factors and aeronautics. The study used databases from scientific, clinical, and neural engineering fields. The review first introduces the characteristics and the application of both EEG and EOG in aeronautics, undertaking a large review of relevant literature, from early to more recent studies. We then built a novel taxonomy model that includes 150 combined EEG-EOG papers published in peer-reviewed scientific journals and conferences from January 2010 to March 2020. Several data elements were reviewed for each study (e.g., pre-processing, extracted features and performance metrics), which were then examined to uncover trends in aeronautics and summarize interesting methods from this important body of literature. Finally, the review considers the advantages and limitations of these methods as well as future challenges.
|
26
|
Kwon OY, Lee MH, Guan C, Lee SW. Subject-Independent Brain-Computer Interfaces Based on Deep Convolutional Neural Networks. IEEE TRANSACTIONS ON NEURAL NETWORKS AND LEARNING SYSTEMS 2020; 31:3839-3852. [PMID: 31725394 DOI: 10.1109/tnnls.2019.2946869] [Citation(s) in RCA: 92] [Impact Index Per Article: 23.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/10/2023]
Abstract
For a brain-computer interface (BCI) system, a calibration procedure is required for each individual user before he/she can use the BCI. This procedure requires approximately 20-30 min to collect enough data to build a reliable decoder. It is, therefore, an interesting topic to build a calibration-free, or subject-independent, BCI. In this article, we construct a large motor imagery (MI)-based electroencephalography (EEG) database and propose a subject-independent framework based on deep convolutional neural networks (CNNs). The database is composed of 54 subjects performing the left- and right-hand MI on two different days, resulting in 21 600 trials for the MI task. In our framework, we formulated the discriminative feature representation as a combination of the spectral-spatial input embedding the diversity of the EEG signals, as well as a feature representation learned from the CNN through a fusion technique that integrates a variety of discriminative brain signal patterns. To generate spectral-spatial inputs, we first consider the discriminative frequency bands in an information-theoretic observation model that measures the power of the features in two classes. From discriminative frequency bands, spectral-spatial inputs that include the unique characteristics of brain signal patterns are generated and then transformed into a covariance matrix as the input to the CNN. In the process of feature representations, spectral-spatial inputs are individually trained through the CNN and then combined by a concatenation fusion technique. In this article, we demonstrate that the classification accuracy of our subject-independent (or calibration-free) model outperforms that of subject-dependent models using various methods [common spatial pattern (CSP), common spatiospectral pattern (CSSP), filter bank CSP (FBCSP), and Bayesian spatio-spectral filter optimization (BSSFO)].
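The covariance-matrix input to the CNN described above can be sketched as follows; the tiny two-channel trial is an illustrative stand-in for a band-filtered EEG segment, not data from the study.

```python
def covariance_matrix(trial):
    """Channel-by-channel sample covariance of one (band-filtered) EEG trial,
    given as a channels x samples list of lists."""
    n_ch, n_s = len(trial), len(trial[0])
    means = [sum(ch) / n_s for ch in trial]
    return [[sum((trial[i][t] - means[i]) * (trial[j][t] - means[j])
                 for t in range(n_s)) / (n_s - 1)
             for j in range(n_ch)]
            for i in range(n_ch)]

# Example: channel 1 is exactly twice channel 0, so covariances scale accordingly.
trial = [[1.0, 2.0, 3.0], [2.0, 4.0, 6.0]]
cov = covariance_matrix(trial)
```

In the paper's pipeline, one such matrix per discriminative frequency band is computed and the matrices are stacked as the CNN's spectral-spatial input.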
|
27
|
|
28
|
Hosni SM, Shedeed HA, Mabrouk MS, Tolba MF. EEG-EOG based Virtual Keyboard: Toward Hybrid Brain Computer Interface. Neuroinformatics 2020; 17:323-341. [PMID: 30368637 DOI: 10.1007/s12021-018-9402-0] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Abstract
The past twenty years have ignited a new spark in the research of electroencephalography (EEG), which has been pursued to develop innovative brain-computer interfaces (BCIs) in order to help severely disabled people live a better life with a high degree of independence. Current BCIs are more theoretical than practical and suffer from numerous challenges. New trends of research propose combining EEG with other simple and efficient bioelectric inputs, such as electro-oculography (EOG) resulting from eye movements, to produce more practical and robust hybrid brain-computer interface (hBCI) or brain/neuronal computer interface (BNCI) systems. Working towards this purpose, existing research in EOG-based human-computer interaction (HCI) applications must be organized and surveyed in order to develop a vision of the potential benefits of combining both input modalities and give rise to new designs that maximize these benefits. Our aim is to support and inspire the design of new hBCI systems based on both EEG and EOG signals. In doing so, we first surveyed the current EOG-based HCI systems with a particular focus on EOG-based systems for communication using a virtual keyboard. Then, we surveyed the current EEG-EOG virtual keyboards, highlighting the design protocols employed. We concluded with a discussion of the potential advantages of combining both systems, with recommendations that give deep insight into future design issues for all EEG-EOG hBCI systems. Finally, a general architecture was proposed for a new EEG-EOG hBCI system. The proposed hybrid system completely alters the traditional view of the eye movement features present in the EEG signal as artifacts that should be removed; instead, EOG traces are extracted from EEG in our proposed hybrid architecture and are considered as an additional input modality sharing control according to the chosen design protocol.
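The proposed architecture treats EOG traces recovered from EEG as an extra input modality; one common approximation (an assumption for illustration, not the paper's specification) derives EOG surrogates from frontal EEG channels by bipolar combination:

```python
def derive_eog_surrogates(eeg):
    """Crude EOG surrogates from frontal EEG: horizontal as the F7-F8
    bipolar difference, vertical/blink as the Fp1/Fp2 average.
    Channel names and this derivation are illustrative choices."""
    heog = [a - b for a, b in zip(eeg["F7"], eeg["F8"])]
    veog = [(a + b) / 2.0 for a, b in zip(eeg["Fp1"], eeg["Fp2"])]
    return heog, veog

# Example: a blink raises both prefrontal channels; lateral channels stay flat.
eeg = {
    "F7": [0.0, 0.0, 0.0],
    "F8": [0.0, 0.0, 0.0],
    "Fp1": [0.0, 80.0, 0.0],
    "Fp2": [0.0, 100.0, 0.0],
}
heog, veog = derive_eog_surrogates(eeg)
```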
Affiliation(s)
- Sarah M Hosni
- Faculty of Computer and Information Sciences, Ain Shams University, Cairo, Egypt
- Howida A Shedeed
- Faculty of Computer and Information Sciences, Ain Shams University, Cairo, Egypt
- Mai S Mabrouk
- Biomedical Engineering Department, Misr University for Science and Technology, Giza, Egypt
- Mohamed F Tolba
- Faculty of Computer and Information Sciences, Ain Shams University, Cairo, Egypt
|
29
|
Lee MH, Williamson J, Kee YJ, Fazli S, Lee SW. Robust detection of event-related potentials in a user-voluntary short-term imagery task. PLoS One 2019; 14:e0226236. [PMID: 31877161 PMCID: PMC6932761 DOI: 10.1371/journal.pone.0226236] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2018] [Accepted: 11/24/2019] [Indexed: 11/18/2022] Open
Abstract
Event-related potentials (ERPs) represent neuronal activity in the brain elicited by external visual or auditory stimulation and are widely used in brain-computer interface (BCI) systems. The ERP responses are elicited a few milliseconds after attending to an oddball stimulus; target and non-target stimuli are repeatedly flashed, and the ERP trials are averaged over time in order to improve their decoding accuracy. To reduce this time-consuming process, previous studies have attempted to evoke stronger ERP responses by changing certain experimental parameters like color, size, or the use of a face image as a target symbol. Since these exogenous potentials can be naturally evoked by merely looking at a target symbol, the BCI system could generate unintended commands while subjects are gazing at one of the symbols in a non-intentional mental state. We approached this problem of unintended command generation by assuming that a greater effort by the user in a short-term imagery task would evoke a discriminative ERP response. Three tasks were defined: passive attention, counting, and pitch-imagery. Users were instructed to passively attend to a target symbol, to perform a mental tally of the number of target presentations, or to perform the novel task of imagining a high-pitch tone when the target symbol was highlighted. The decoding accuracies were 71.4%, 83.5%, and 89.2% for passive attention, counting, and pitch-imagery, respectively, after the fourth averaging procedure. We found stronger deflections in the N500 component corresponding to the levels of mental effort (passive attention: -1.094 ±0.88 μV, counting: -2.226 ±0.97 μV, and pitch-imagery: -2.883 ±0.74 μV), which highly influenced the decoding accuracy. In addition, the rate of binary classification between the passive attention and pitch-imagery tasks was 73.5%, an adequate classification rate that motivated us to propose a two-stage classification strategy wherein the target symbol is estimated in the first stage and the passive or active mental state is decoded in the second stage. In this study, we found that the ERP response and the decoding accuracy are highly influenced by the user's voluntary mental tasks. This could lead to a useful approach in practical ERP systems in two respects. Firstly, the user-voluntary tasks can be easily utilized in many different types of BCI systems, and performance enhancement is less dependent on the manipulation of the system's external visual stimulus parameters. Secondly, we propose an ERP system that classifies the brain state as intended or unintended by considering the measurable differences between passively gazing and actively performing the pitch-imagery tasks in the EEG signal, thus minimizing unintended commands to the BCI system.
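The two-stage strategy proposed above, estimate the symbol and then gate it on the active/passive state, can be sketched as a simple decision rule; the symbol scores and the 0.5 state threshold below are illustrative assumptions, not values from the paper.

```python
def two_stage_decode(symbol_scores, active_state_score, state_thresh=0.5):
    """Stage 1: pick the symbol with the strongest ERP evidence.
    Stage 2: emit it only if the user's state is classified as active imagery."""
    best_symbol = max(symbol_scores, key=symbol_scores.get)
    return best_symbol if active_state_score >= state_thresh else None

# Example: 'B' wins stage 1, but the command is issued only in the active state.
scores = {"A": 0.2, "B": 0.9, "C": 0.4}
intended = two_stage_decode(scores, active_state_score=0.8)
ignored = two_stage_decode(scores, active_state_score=0.3)
```

Gating on the state classifier is what suppresses commands during passive gazing at a symbol.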
Affiliation(s)
- Min-Ho Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea
- Department of Computer Science, Nazarbayev University, Nur-Sultan, Kazakhstan
- John Williamson
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea
- Young-Jin Kee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea
- Siamac Fazli
- Department of Computer Science, Nazarbayev University, Nur-Sultan, Kazakhstan
- Seong-Whan Lee
- Department of Brain and Cognitive Engineering, Korea University, Seoul, Korea
|
30
|
Jeong JH, Yu BW, Lee DH, Lee SW. Classification of Drowsiness Levels Based on a Deep Spatio-Temporal Convolutional Bidirectional LSTM Network Using Electroencephalography Signals. Brain Sci 2019; 9:E348. [PMID: 31795445 PMCID: PMC6956039 DOI: 10.3390/brainsci9120348] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/21/2019] [Revised: 11/22/2019] [Accepted: 11/26/2019] [Indexed: 11/16/2022] Open
Abstract
Non-invasive brain-computer interfaces (BCI) have been developed for recognizing human mental states with high accuracy and for decoding various types of mental conditions. In particular, accurately decoding a pilot's mental state is a critical issue as more than 70% of aviation accidents are caused by human factors, such as fatigue or drowsiness. In this study, we report the classification of not only two mental states (i.e., alert and drowsy states) but also five drowsiness levels from electroencephalogram (EEG) signals. To the best of our knowledge, this approach is the first to classify drowsiness levels in detail using only EEG signals. We acquired EEG data from ten pilots in a simulated night flight environment. For accurate detection, we proposed a deep spatio-temporal convolutional bidirectional long short-term memory network (DSTCLN) model. We evaluated the classification performance using Karolinska sleepiness scale (KSS) values for two mental states and five drowsiness levels. The grand-averaged classification accuracies were 0.87 (±0.01) and 0.69 (±0.02), respectively. Hence, we demonstrated the feasibility of classifying five drowsiness levels with high accuracy using deep learning.
Affiliation(s)
- Ji-Hoon Jeong
- Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-ku, Seoul 02841, Korea
- Baek-Woon Yu
- Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-ku, Seoul 02841, Korea
- Dae-Hyeok Lee
- Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-ku, Seoul 02841, Korea
- Seong-Whan Lee
- Department of Brain and Cognitive Engineering, Korea University, Anam-dong, Seongbuk-ku, Seoul 02841, Korea
- Department of Artificial Intelligence, Korea University, Anam-dong, Seongbuk-ku, Seoul 02841, Korea
31
Li Z, Zhang S, Pan J. Advances in Hybrid Brain-Computer Interfaces: Principles, Design, and Applications. Comput Intell Neurosci 2019; 2019:3807670. [PMID: 31687006 PMCID: PMC6800963 DOI: 10.1155/2019/3807670] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Received: 06/20/2019] [Revised: 09/09/2019] [Accepted: 09/17/2019] [Indexed: 11/23/2022]
Abstract
Conventional brain-computer interface (BCI) systems face two fundamental challenges: limited detection performance and a restricted set of control commands. To address these challenges, researchers have proposed hybrid brain-computer interfaces (hBCIs). This paper discusses the research progress of hBCIs and reviews three types: hBCIs based on multiple brain models, multisensory hBCIs, and hBCIs based on multimodal signals. By analyzing the general principles, paradigm designs, experimental results, advantages, and applications of the latest hBCI systems, the review finds that hBCI technology can improve the detection performance of BCIs and achieve multi-degree/multifunctional control, significantly outperforming single-mode BCIs.
Affiliation(s)
- Zina Li
- South China Normal University, Guangzhou 510631, China
- Shuqing Zhang
- South China Normal University, Guangzhou 510631, China
- Jiahui Pan
- South China Normal University, Guangzhou 510631, China
32
Alteration of coupling between brain and heart induced by sedation with propofol and midazolam. PLoS One 2019; 14:e0219238. [PMID: 31314775 PMCID: PMC6636731 DOI: 10.1371/journal.pone.0219238] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Received: 06/07/2017] [Accepted: 06/20/2019] [Indexed: 11/19/2022]
Abstract
For a comprehensive understanding of the nervous system, several previous studies have examined the network connections between the brain and the heart in diverse conditions. In this study, we identified coupling between the brain and the heart along the continuum of sedation levels, rather than at discrete sedation levels (e.g., wakefulness, conscious sedation, and deep sedation). To identify coupling between the brain and the heart during sedation, we induced several depths of sedation using patient-controlled sedation with propofol and midazolam. We performed electroencephalogram (EEG) spectral analysis and extracted the instantaneous heart rate (HR) from the electrocardiogram (ECG). EEG spectral power dynamics and mean HR were compared along the continuum of sedation levels. We found that EEG sigma power was the parameter most sensitive to changes in the sedation level and was correlated with the mean HR under the effect of sedative agents. Moreover, we calculated the Granger causality (GC) value to quantify brain-heart coupling at each sedation level. The GC analysis revealed noticeably different strengths and directions of causality among different sedation levels. In all the sedation levels, GC values from the brain to the heart (GCb→h) were higher than GC values from the heart to the brain (GCh→b). Moreover, the mean GCb→h increased as the sedation became deeper, resulting in higher GCb→h values in deep sedation (1.97 ± 0.18 in propofol, 2.02 ± 0.15 in midazolam) than in pre-sedation (1.71 ± 0.13 in propofol, 1.75 ± 0.11 in midazolam; p < 0.001). These results show that coupling between brain and heart activities becomes stronger as sedation becomes deeper, and that this coupling is more attributable to the brain-heart direction than to the heart-brain direction. These findings provide a better understanding of the relationship between the brain and the heart under specific conditions, namely, different sedation states.
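The directional Granger causality comparison described in this abstract can be sketched with a minimal least-squares estimator: fit an autoregressive model of one signal from its own past, then from its own past plus the other signal's past, and take the log ratio of the residual variances. This is a simplified sketch (no significance testing, fixed model order, assumed zero-mean stationary signals); the study's own estimator may differ in preprocessing and order selection:

```python
import numpy as np

def granger_causality(x, y, order=2):
    """Estimate Granger causality from y to x: the log ratio of the
    residual variance when x is predicted from its own past alone to
    the residual variance when past values of y are also included.
    Minimal least-squares sketch; model order is a free parameter."""
    n = len(x)
    # Lagged design matrices: column k holds the signal delayed by k+1.
    own = np.column_stack([x[order - k - 1:n - k - 1] for k in range(order)])
    joint = np.column_stack(
        [x[order - k - 1:n - k - 1] for k in range(order)]
        + [y[order - k - 1:n - k - 1] for k in range(order)])
    target = x[order:]

    def resid_var(design):
        beta, *_ = np.linalg.lstsq(design, target, rcond=None)
        return np.var(target - design @ beta)

    return np.log(resid_var(own) / resid_var(joint))
```

Because the joint model nests the restricted one, the value is nonnegative, and a larger value in one direction than the other indicates the kind of directional asymmetry (GCb→h > GCh→b) the study reports.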
33
Yu Y, Liu Y, Yin E, Jiang J, Zhou Z, Hu D. An Asynchronous Hybrid Spelling Approach Based on EEG-EOG Signals for Chinese Character Input. IEEE Trans Neural Syst Rehabil Eng 2019; 27:1292-1302. [PMID: 31071045 DOI: 10.1109/tnsre.2019.2914916] [Citation(s) in RCA: 33] [Impact Index Per Article: 6.6] [Indexed: 11/08/2022]
Abstract
In this paper, we presented a novel asynchronous speller for Chinese sinogram input by incorporating electrooculography (EOG) into the conventional electroencephalography (EEG)-based spelling paradigm. An EOG-based brain switch was used to activate a classic row-column P300-based speller only when spelling was needed, enabling an asynchronous operation of the system. Then, the user could input sinograms by alternately performing P300 and double-blink tasks until he or she intended to stop spelling. With the incorporation of an EOG detector, the system achieved rapid sinogram input. In addition to asynchronous operation, the performance of the proposed speller was compared with that achieved by a P300-based method alone across 18 subjects. The proposed system showed a mean communication speed of approximately 2.39 sinograms per minute, an increase of 0.83 sinograms per minute compared with the P300-based method. The preliminary online performance indicated that the proposed paradigm is a very promising approach for practical Chinese sinogram input application. This system may also be expanded to users whose languages are written in logographic scripts to serve as an assistive communication tool.
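The asynchronous control flow this abstract describes — an EOG-detected brain switch that activates the P300 speller only when spelling is needed — can be sketched as a small state machine. The event names and the stop condition below are illustrative assumptions, not the paper's actual detector interface:

```python
class HybridSpeller:
    """Toy state machine for an asynchronous EEG-EOG speller: an EOG
    switch event toggles the P300 speller between idle and active, and
    P300 selections are accepted only while active. Event names are
    illustrative, not taken from the paper."""

    def __init__(self):
        self.active = False   # idle until the EOG brain switch fires
        self.output = []      # characters spelled so far

    def on_event(self, event, payload=None):
        if event == "eog_switch":          # brain switch: start spelling
            self.active = True
        elif event == "stop_spelling":     # user intends to stop; back to idle
            self.active = False
        elif event == "p300_selection" and self.active:
            self.output.append(payload)    # selections while idle are ignored
```

The key property is that spurious P300 detections while the user is not spelling produce no output, which is what makes the operation asynchronous.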