1
Kim H, Won K, Ahn M, Jun SC. Comparison of recognition methods for an asynchronous (un-cued) BCI system: an investigation with 40-class SSVEP dataset. Biomed Eng Lett 2024; 14:617-630. [PMID: 38645586] [PMCID: PMC11026332] [DOI: 10.1007/s13534-024-00357-4]
Abstract
Steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) has demonstrated the potential to manage multi-command targets and achieve high-speed communication. Recent studies on multi-class SSVEP-based BCI have focused on synchronous systems, which rely on predefined timing and task cues; such passive approaches may be less suitable for practical applications. Asynchronous systems recognize the user's intention (whether or not the user is willing to use the system) from brain activity; after recognizing the user's willingness, they begin to operate, switching swiftly to real-time control. Consequently, various methodologies have been proposed to capture the user's intention. However, in-depth investigation of recognition methods in asynchronous BCI systems is lacking. Thus, in this work, three recognition methods used widely in asynchronous SSVEP BCI systems (power spectral density analysis, canonical correlation analysis (CCA), and support vector machine (SVM)) were explored to compare their performance. Further, we categorized asynchronous systems into two approaches (1-stage and 2-stage) based upon the design of the recognition process, and compared their performance. To do so, a 40-class SSVEP dataset collected from 40 subjects was introduced. Finally, we found that the CCA-based method in the 2-stage approach demonstrated statistically significantly higher performance, with a sensitivity of 97.62 ± 2.06%, specificity of 76.50 ± 23.50%, and accuracy of 75.59 ± 10.09%. Thus, the 2-stage approach combining CCA-based recognition with FB-CCA classification is expected to have good potential for implementation in practical asynchronous SSVEP BCI systems.
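The CCA-based recognition this abstract compares is the standard reference-signal approach: correlate the EEG epoch with sine/cosine templates at each candidate stimulation frequency and pick the frequency with the highest canonical correlation. A minimal numpy sketch using the QR/SVD formulation of CCA (not the authors' implementation; channel count, harmonic count, and window length are illustrative):

```python
import numpy as np

def max_canon_corr(X, Y):
    """Largest canonical correlation between the columns of X and Y,
    computed as the top singular value of Qx.T @ Qy (QR/SVD formulation)."""
    Xc = X - X.mean(axis=0)
    Yc = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(Xc)
    Qy, _ = np.linalg.qr(Yc)
    return float(np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0])

def classify_ssvep(eeg, candidate_freqs, fs, n_harmonics=2):
    """Score each candidate frequency by CCA between the EEG epoch
    (n_samples, n_channels) and sine/cosine references with harmonics."""
    t = np.arange(eeg.shape[0]) / fs
    scores = {}
    for f in candidate_freqs:
        ref = np.column_stack([fn(2 * np.pi * (h + 1) * f * t)
                               for h in range(n_harmonics)
                               for fn in (np.sin, np.cos)])
        scores[f] = max_canon_corr(eeg, ref)
    return max(scores, key=scores.get), scores
```

In a 2-stage asynchronous system of the kind the paper describes, a separate intention-detection stage would gate this classifier, so its output is only trusted once the user's willingness to control is recognized.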
Affiliation(s)
- Heegyu Kim
- School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Bukgu, Gwangju, 61005 Korea
- Kyungho Won
- Hybrid Team, Inria, Univ Rennes, IRISA, CNRS, F35000 Rennes, France
- Minkyu Ahn
- School of Computer Science and Electrical Engineering, Handong Global University, Bukgu, Pohang, 37554 Korea
- Sung Chan Jun
- School of Electrical Engineering and Computer Science, Gwangju Institute of Science and Technology, Bukgu, Gwangju, 61005 Korea
- School of Artificial Intelligence, Gwangju Institute of Science and Technology, Bukgu, Gwangju, 61005 Korea
2
Wang R, Zhou T, Li Z, Zhao J, Li X. Using oscillatory and aperiodic neural activity features for identifying idle state in SSVEP-based BCIs reduces false triggers. J Neural Eng 2023; 20:066032. [PMID: 38016453] [DOI: 10.1088/1741-2552/ad1054]
Abstract
Objective. In existing studies, rhythmic (oscillatory) components were used as the main features to identify brain states, such as control and idle states, while non-rhythmic (aperiodic) components were ignored. Recent studies have shown that aperiodic (1/f) activity is functionally related to cognitive processes. It is not clear whether aperiodic activity can distinguish brain states in asynchronous brain-computer interfaces (BCIs) to reduce false triggers. In this paper, we propose an asynchronous method based on the fusion of oscillatory and aperiodic features for steady-state visual evoked potential-based BCIs. Approach. The proposed method first estimates the oscillatory and aperiodic components of control and idle states using irregular-resampling auto-spectral analysis. Oscillatory features are then extracted as the spectral power at the fundamental, second-harmonic, and third-harmonic frequencies of the oscillatory component, and aperiodic features are extracted as the slope and intercept of a first-order polynomial fit to the spectrum of the aperiodic component on log-log axes. This process produces two feature pools (oscillatory and aperiodic features). Next, feature selection (dimensionality reduction) is applied to the feature pools using Bonferroni-corrected p-values from two-way analysis of variance. Last, these spatially specific, statistically significant features are used as input for classification to identify the idle state. Main results. On a 7-target dataset from 15 subjects, the mix of oscillatory and aperiodic features achieved an average accuracy of 88.39%, compared to 83.53% when using oscillatory features alone (a 4.86% improvement). The results demonstrated that the proposed idle-state recognition method achieved enhanced performance by incorporating aperiodic features. Significance. Our results demonstrated that (1) aperiodic features were effective in recognizing idle states and (2) fusing features of the oscillatory and aperiodic components enhanced classification performance by 4.86% compared to oscillatory features alone.
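The two feature types described above can be sketched directly: the aperiodic features are the slope and intercept of a linear fit to the spectrum on log-log axes, and the oscillatory features are the spectral power at the fundamental and its harmonics. This simplified stand-in fits a raw PSD rather than an IRASA-separated component, and the function names are illustrative:

```python
import numpy as np

def aperiodic_features(freqs, psd):
    """Slope and intercept of log10(power) vs. log10(frequency),
    a first-order summary of the aperiodic (1/f) trend."""
    mask = freqs > 0  # log10 is undefined at 0 Hz
    slope, intercept = np.polyfit(np.log10(freqs[mask]), np.log10(psd[mask]), 1)
    return slope, intercept

def oscillatory_features(freqs, psd, f0, n_harmonics=3):
    """Spectral power at the fundamental, second, and third harmonics
    of stimulation frequency f0 (nearest frequency bins)."""
    return [float(psd[np.argmin(np.abs(freqs - (k + 1) * f0))])
            for k in range(n_harmonics)]
```

A pure 1/f spectrum, for example, yields a slope near -1, and a flatter (less negative) slope would indicate relatively more broadband high-frequency power.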
Affiliation(s)
- Rui Wang
- Department of Electrical Engineering and the Key Laboratory of Intelligent Rehabilitation and Neuromodulation of Hebei Province, Yanshan University, Qinhuangdao 066004, People's Republic of China
- Tianyi Zhou
- Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Zhuhai 519087, People's Republic of China
- Zheng Li
- Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Zhuhai 519087, People's Republic of China
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, People's Republic of China
- Jing Zhao
- Department of Electrical Engineering and the Key Laboratory of Intelligent Rehabilitation and Neuromodulation of Hebei Province, Yanshan University, Qinhuangdao 066004, People's Republic of China
- Xiaoli Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, People's Republic of China
3
Mai X, Sheng X, Shu X, Ding Y, Zhu X, Meng J. A Calibration-Free Hybrid Approach Combining SSVEP and EOG for Continuous Control. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3480-3491. [PMID: 37610901] [DOI: 10.1109/tnsre.2023.3307814]
Abstract
While SSVEP-BCIs have been widely developed to control external devices, most of them rely on a discrete control strategy. A continuous SSVEP-BCI enables users to continuously deliver commands and receive real-time feedback from the devices, but it suffers from the transition-state problem, a period of erroneous recognition when users shift their gazes between targets. To resolve this issue, we proposed a novel calibration-free Bayesian approach hybridizing SSVEP and electrooculography (EOG). First, canonical correlation analysis (CCA) was applied to detect the evoked SSVEPs, and the saccade during the gaze shift was detected from EOG data using an adaptive threshold method. Then, the new target after the gaze shift was recognized by a Bayesian optimization approach, which combined the SSVEP and saccade detections and calculated the optimized probability distribution over the targets. Eighteen healthy subjects participated in the offline and online experiments. The offline experiments showed that the proposed hybrid BCI had significantly higher overall continuous accuracy and shorter gaze-shifting time compared to FBCCA, CCA, MEC, and PSDA. In online experiments, the proposed hybrid BCI significantly outperformed the CCA-based SSVEP-BCI in terms of continuous accuracy (77.61 ± 1.36% vs. 68.86 ± 1.08%) and gaze-shifting time (0.93 ± 0.06 s vs. 1.94 ± 0.08 s). Additionally, participants also perceived a significant improvement over the CCA-based SSVEP-BCI when the newly proposed decoding approach was used. These results validated the efficacy of the proposed hybrid Bayesian approach for continuous BCI control without any calibration. This study provides an effective framework for combining SSVEP and EOG, and promotes the potential applications of plug-and-play BCIs in continuous control.
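The adaptive-threshold saccade detection mentioned in this abstract can be approximated by thresholding the rectified EOG first difference against statistics of a baseline window. A rough sketch under assumed parameters (the paper's actual adaptation rule is not reproduced here; `k` and `baseline_s` are illustrative choices):

```python
import numpy as np

def detect_saccades(eog, fs, k=3.0, baseline_s=1.0):
    """Return candidate saccade onsets where the rectified EOG derivative
    crosses an adaptive threshold (baseline mean + k * baseline std)."""
    d = np.abs(np.diff(eog))
    n0 = int(baseline_s * fs)                 # baseline window for adaptation
    thr = d[:n0].mean() + k * d[:n0].std()
    above = d > thr
    # Onsets are samples where the derivative first rises above threshold.
    onsets = np.flatnonzero(above[1:] & ~above[:-1]) + 1
    return onsets, thr
```

In the hybrid scheme described above, each detected saccade would trigger the Bayesian re-estimation of the target distribution, with the CCA scores supplying the SSVEP evidence.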
4
Zhou Y, Yu T, Gao W, Huang W, Lu Z, Huang Q, Li Y. Shared Three-Dimensional Robotic Arm Control Based on Asynchronous BCI and Computer Vision. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3163-3175. [PMID: 37498753] [DOI: 10.1109/tnsre.2023.3299350]
Abstract
Objective. A brain-computer interface (BCI) can be used to translate neuronal activity into commands to control external devices. However, using a noninvasive BCI to control a robotic arm for movements in three-dimensional (3D) environments and to accomplish complicated daily tasks, such as grasping and drinking, remains a challenge. Approach. In this study, a shared robotic arm control system based on a hybrid asynchronous BCI and computer vision was presented. The BCI model, which combines steady-state visual evoked potentials (SSVEPs) and blink-related electrooculography (EOG) signals, allows users to freely choose from fifteen commands in an asynchronous mode corresponding to robot actions in a 3D workspace and to reach targets over a wide movement range, while computer vision can identify objects and assist the robotic arm in completing more precise tasks, such as grasping a target automatically. Results. Ten subjects participated in the experiments and achieved an average accuracy of more than 92% and a high trajectory efficiency for robot movement. All subjects were able to perform the reach-grasp-drink tasks successfully using the proposed shared control method, with fewer error commands and shorter completion times than with direct BCI control. Significance. Our results demonstrated the feasibility and efficiency of generating practical multidimensional control of an intuitive robotic arm by merging a hybrid asynchronous BCI and computer vision-based recognition.
5
Li M, Wu L, Lin F, Guo M, Xu G. Dual stimuli interface with logical division using local move stimuli. Cogn Neurodyn 2023; 17:965-973. [PMID: 37522052] [PMCID: PMC10374500] [DOI: 10.1007/s11571-022-09878-z]
Abstract
Improving the information transfer rate is key to increasing the speed at which an event-related potential-based brain-computer interface outputs instructions. Our previous study designed a dual-stimuli interface that simultaneously presents two different types of stimuli to improve this speed. However, adding more stimuli to this interface makes subjects susceptible to the "flanker effect," which decreases the accuracy of intention recognition. To achieve high recognition accuracy with many stimuli, this study proposes a dual stimuli interface based on whole flash and local move (DS-WL), together with two rules of stimulus arrangement, to induce the brain signals. Twenty subjects participated in the experiment, and their signals were recognized by a back-propagation neural network classifier. The local move induces larger and later target responses that help discriminate the two kinds of stimuli, while the rules reduce the N200 and P300 amplitudes of non-targets, which improves accuracy. This study demonstrates that the DS-WL is a useful way to shorten the instruction output cycle and speed up instruction output through local move stimuli and the arrangement rules.
Affiliation(s)
- Mengfan Li
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300132 China
- Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin, 300132 China
- Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Hebei University of Technology, Tianjin, 300132 China
- Lingyu Wu
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300132 China
- Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin, 300132 China
- Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Hebei University of Technology, Tianjin, 300132 China
- Fang Lin
- Neuracle Technology (Changzhou) Co., Ltd., Beijing, China
- Miaomiao Guo
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300132 China
- Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin, 300132 China
- Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Hebei University of Technology, Tianjin, 300132 China
- Guizhi Xu
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300132 China
- Hebei Key Laboratory of Bioelectromagnetics and Neuroengineering, School of Health Sciences and Biomedical Engineering, Hebei University of Technology, Tianjin, 300132 China
- Tianjin Key Laboratory of Bioelectromagnetic Technology and Intelligent Health, Hebei University of Technology, Tianjin, 300132 China
6
Li R, Zhang Y, Fan G, Li Z, Li J, Fan S, Lou C, Liu X. Design and implementation of high sampling rate and multichannel wireless recorder for EEG monitoring and SSVEP response detection. Front Neurosci 2023; 17:1193950. [PMID: 37457014] [PMCID: PMC10339741] [DOI: 10.3389/fnins.2023.1193950]
Abstract
Introduction. The collection and processing of human brain activity signals play an essential role in developing brain-computer interface (BCI) systems. Portable electroencephalogram (EEG) devices have become important tools for monitoring brain activity and diagnosing mental diseases. However, the miniaturization, portability, and scalability of EEG recorders are the current bottleneck in the research and application of BCI. Methods. For scalp EEG and other applications, the current study designs a 32-channel EEG recorder with a sampling rate of up to 30 kHz and 16-bit accuracy, which can meet the demands of both scalp and intracranial EEG signal recording. A fully integrated electrophysiology microchip, the RHS2116, controlled by an FPGA, is employed to build the EEG recorder, and the design meets the requirements of high sampling rate, high transmission rate, and channel extensibility. Results. The experimental results show that the developed EEG recorder provides a maximum 30 kHz sampling rate and a 58 Mbps wireless transmission rate. Electrophysiological experiments were performed on scalp and intracranial EEG collection. An inflatable helmet with adjustable contact impedance was designed; the pressurization improved the SNR by approximately 4 times, and the average accuracy of steady-state visual evoked potential (SSVEP) recognition was 93.12%. Animal experiments were also performed on rats, and spike activity was captured successfully. Conclusion. The designed multichannel wireless EEG collection system is simple and comfortable, the helmet EEG recorder can capture bioelectric signals without noticeable interference, and it has high measurement performance and great potential for practical application in BCI systems.
Affiliation(s)
- Ruikai Li
- The College of Electronic Information Engineering and the Hebei Key Laboratory of Digital Medical Engineering, Hebei University, Baoding, China
- Information Center, The Affiliated Hospital of Hebei University, Baoding, China
- Yixing Zhang
- The College of Electronic Information Engineering and the Hebei Key Laboratory of Digital Medical Engineering, Hebei University, Baoding, China
- Guangwei Fan
- The College of Electronic Information Engineering and the Hebei Key Laboratory of Digital Medical Engineering, Hebei University, Baoding, China
- Ziteng Li
- The College of Electronic Information Engineering and the Hebei Key Laboratory of Digital Medical Engineering, Hebei University, Baoding, China
- Jun Li
- The College of Electronic Information Engineering and the Hebei Key Laboratory of Digital Medical Engineering, Hebei University, Baoding, China
- Shiyong Fan
- The College of Electronic Information Engineering and the Hebei Key Laboratory of Digital Medical Engineering, Hebei University, Baoding, China
- Cunguang Lou
- The College of Electronic Information Engineering and the Hebei Key Laboratory of Digital Medical Engineering, Hebei University, Baoding, China
- Xiuling Liu
- The College of Electronic Information Engineering and the Hebei Key Laboratory of Digital Medical Engineering, Hebei University, Baoding, China
7
Peksa J, Mamchur D. State-of-the-Art on Brain-Computer Interface Technology. Sensors (Basel) 2023; 23:6001. [PMID: 37447849] [DOI: 10.3390/s23136001]
Abstract
This paper provides a comprehensive overview of the state-of-the-art in brain-computer interfaces (BCI). It begins by providing an introduction to BCIs, describing their main operation principles and most widely used platforms. The paper then examines the various components of a BCI system, such as hardware, software, and signal processing algorithms. Finally, it looks at current trends in research related to BCI use for medical, educational, and other purposes, as well as potential future applications of this technology. The paper concludes by highlighting some key challenges that still need to be addressed before widespread adoption can occur. By presenting an up-to-date assessment of the state-of-the-art in BCI technology, this paper will provide valuable insight into where this field is heading in terms of progress and innovation.
Affiliation(s)
- Janis Peksa
- Department of Information Technologies, Turiba University, Graudu Street 68, LV-1058 Riga, Latvia
- Institute of Information Technology, Riga Technical University, Kalku Street 1, LV-1658 Riga, Latvia
- Dmytro Mamchur
- Department of Information Technologies, Turiba University, Graudu Street 68, LV-1058 Riga, Latvia
- Computer Engineering and Electronics Department, Kremenchuk Mykhailo Ostrohradskyi National University, Pershotravneva 20, 39600 Kremenchuk, Ukraine
8
Wan Z, Li M, Liu S, Huang J, Tan H, Duan W. EEGformer: A transformer-based brain activity classification method using EEG signal. Front Neurosci 2023; 17:1148855. [PMID: 37034169] [PMCID: PMC10079879] [DOI: 10.3389/fnins.2023.1148855]
Abstract
Background. Effective analysis methods for steady-state visual evoked potential (SSVEP) signals are critical in supporting an early diagnosis of glaucoma. Most efforts have focused on adapting existing techniques to the SSVEP-based brain-computer interface (BCI) task rather than proposing new ones specifically suited to the domain. Method. Given that electroencephalogram (EEG) signals possess temporal, regional, and synchronous characteristics of brain activity, we proposed a transformer-based EEG analysis model, EEGformer, to capture these EEG characteristics in a unified manner. We adopted a one-dimensional convolutional neural network (1DCNN) to automatically extract EEG-channel-wise features. The output was fed into the EEGformer, which is sequentially constructed from three components: regional, synchronous, and temporal transformers. In addition to using a large benchmark database (BETA) for the SSVEP-BCI application to validate model performance, we compared the EEGformer to current state-of-the-art deep learning models using two EEG datasets obtained from our previous studies: the SJTU emotion EEG dataset (SEED) and a depressive EEG database (DepEEG). Results. The experimental results show that the EEGformer achieves the best classification performance across the three EEG datasets, indicating that our model architecture and its unified learning of EEG characteristics can improve classification performance. Conclusion. EEGformer generalizes well to different EEG datasets, demonstrating that our approach is potentially suitable for providing accurate brain activity classification and for use in different application scenarios, such as SSVEP-based early glaucoma diagnosis, emotion recognition, and depression discrimination.
Affiliation(s)
- Zhijiang Wan
- The First Affiliated Hospital of Nanchang University, Nanchang University, Nanchang, Jiangxi, China
- School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China
- Industrial Institute of Artificial Intelligence, Nanchang University, Nanchang, Jiangxi, China
- Manyu Li
- School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China
- Shichang Liu
- School of Computer Science, Shaanxi Normal University, Xi’an, Shaanxi, China
- Jiajin Huang
- Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Hai Tan
- School of Computer Science, Nanjing Audit University, Nanjing, Jiangsu, China
- Wenfeng Duan
- The First Affiliated Hospital of Nanchang University, Nanchang University, Nanchang, Jiangxi, China
9
Xu J, Pan J, Cui T, Zhang S, Yang Y, Ren TL. Recent Progress of Tactile and Force Sensors for Human-Machine Interaction. Sensors (Basel) 2023; 23:1868. [PMID: 36850470] [PMCID: PMC9961639] [DOI: 10.3390/s23041868]
Abstract
The Human-Machine Interface (HMI) plays a key role in the interaction between people and machines, allowing people to easily and intuitively control a machine and to immersively experience the virtual world of the metaverse through virtual reality/augmented reality (VR/AR) technology. Currently, wearable skin-integrated tactile and force sensors are widely used in immersive human-machine interactions due to their ultra-thin, ultra-soft, conformal characteristics. In this paper, the recent progress of tactile and force sensors used in HMI is reviewed, including piezoresistive, capacitive, piezoelectric, triboelectric, and other sensors. The paper then discusses how to improve the performance of tactile and force sensors for HMI, summarizes HMI for dexterous robotic manipulation and VR/AR applications, and finally proposes future development trends for HMI.
Affiliation(s)
- Jiandong Xu
- School of Integrated Circuits and Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing 100084, China
- Jiong Pan
- School of Integrated Circuits and Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing 100084, China
- Tianrui Cui
- School of Integrated Circuits and Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing 100084, China
- Sheng Zhang
- Shenzhen International Graduate School, Tsinghua University, Shenzhen 518055, China
- Yi Yang
- School of Integrated Circuits and Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing 100084, China
- Tian-Ling Ren
- School of Integrated Circuits and Beijing National Research Center for Information Science and Technology (BNRist), Tsinghua University, Beijing 100084, China
- Center for Flexible Electronics Technology, Tsinghua University, Beijing 100084, China
10
Zhang Z, Li D, Zhao Y, Fan Z, Xiang J, Wang X, Cui X. A flexible speller based on time-space frequency conversion SSVEP stimulation paradigm under dry electrode. Front Comput Neurosci 2023; 17:1101726. [PMID: 36817318] [PMCID: PMC9929550] [DOI: 10.3389/fncom.2023.1101726]
Abstract
Introduction. The speller is a representative way to demonstrate the performance of a brain-computer interface (BCI) paradigm. Due to its short analysis time and high accuracy, the SSVEP paradigm has been widely used in BCI speller systems based on wet electrodes. Wet-electrode operation is cumbersome, however, and gives subjects a poor experience. In addition, in asynchronous SSVEP systems based on threshold analysis, the stimuli flicker continuously from the beginning to the end of the experiment, which leads to visual fatigue. Dry electrodes are simple to operate and provide a comfortable experience for subjects, and an EOG-based switch can avoid prolonged SSVEP stimulation, thus reducing fatigue. Methods. This study first designed a brain-controlled switch based on a continuous-blink EOG signal and the SSVEP signal to improve the flexibility of the BCI speller. Second, to increase the number of speller instructions, we designed the time-space frequency conversion (TSFC) SSVEP stimulus paradigm, which continually changes the time and space frequencies of SSVEP sub-stimulus blocks, and implemented a speller in a dry-electrode environment. Results. Seven subjects participated in and completed the experiments. The results showed that the accuracy of the brain-controlled switch designed in this study was up to 94.64%, and all subjects could use the speller flexibly. The designed 60-character speller based on the TSFC-SSVEP stimulus paradigm achieved an accuracy of 90.18% and an information transfer rate (ITR) of 117.05 bits/min. All subjects could output the specified characters in a short time. Discussion. This study designed and implemented a multi-instruction SSVEP speller based on dry electrodes. Through the combination of EOG and SSVEP signals, the speller can be controlled flexibly. The TSFC-SSVEP stimulation paradigm recodes the frequency of SSVEP stimulation sub-blocks in time and space, which greatly increases the number of output instructions of a BCI system in a dry-electrode environment. This work tested the stimulus paradigm only with the FBCCA algorithm, which requires a long stimulus time; in the future, we will apply trained algorithms to this stimulus paradigm to improve its overall performance.
11
Zhang J, Gao S, Zhou K, Cheng Y, Mao S. An online hybrid BCI combining SSVEP and EOG-based eye movements. Front Hum Neurosci 2023; 17:1103935. [PMID: 36875236] [PMCID: PMC9978185] [DOI: 10.3389/fnhum.2023.1103935]
Abstract
A hybrid brain-computer interface (hBCI) is a system composed of a single-modality BCI and another system. In this paper, we propose an online hybrid BCI combining steady-state visual evoked potentials (SSVEP) and eye movements to improve the performance of BCI systems. Twenty buttons corresponding to 20 characters are evenly distributed across five regions of the GUI and flash simultaneously to evoke SSVEPs. At the end of the flash, the buttons in four of the regions move in different directions, and the subject continues to gaze at the target, generating the corresponding eye movements. The CCA and FBCCA methods were used to detect SSVEPs, and the electrooculography (EOG) waveform was used to detect eye movements. Based on the EOG features, this paper proposes a decision-making method that combines SSVEP and EOG, further improving the performance of the hybrid BCI system. Ten healthy students took part in the experiment; the average accuracy and information transfer rate of the system were 94.75% and 108.63 bits/min, respectively.
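One simple way to realize a combined SSVEP+EOG decision of the kind this abstract describes is to let the detected eye-movement direction select a GUI region and then pick the highest-scoring SSVEP candidate within that region. This is a hypothetical reconstruction in the spirit of the paper, not the authors' exact method; the fallback rule and data layout are illustrative:

```python
def fuse_decision(cca_scores, eog_region, regions):
    """Pick the target with the highest CCA score among targets belonging
    to the region indicated by the EOG-detected eye movement.

    cca_scores: {target: CCA score}
    eog_region: region label decoded from the EOG waveform
    regions:    {target: region label}
    """
    in_region = {t: s for t, s in cca_scores.items()
                 if regions.get(t) == eog_region}
    # Fall back to the global maximum if EOG points at an empty region.
    pool = in_region or cca_scores
    return max(pool, key=pool.get)
```

The appeal of such a rule is that the EOG evidence prunes the SSVEP candidate set, so a slightly noisy CCA score is only compared against targets in one region rather than all twenty.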
Affiliation(s)
- Jun Zhang
- School of Mechanical and Electrical Engineering and Automation, Shanghai University, Shanghai, China
- Shouwei Gao
- School of Mechanical and Electrical Engineering and Automation, Shanghai University, Shanghai, China
- Kang Zhou
- School of Mechanical and Electrical Engineering and Automation, Shanghai University, Shanghai, China
- Yi Cheng
- School of Mechanical and Electrical Engineering and Automation, Shanghai University, Shanghai, China
- Shujun Mao
- School of Mechanical and Electrical Engineering and Automation, Shanghai University, Shanghai, China
12
Mussi MG, Adams KD. EEG hybrid brain-computer interfaces: A scoping review applying an existing hybrid-BCI taxonomy and considerations for pediatric applications. Front Hum Neurosci 2022; 16:1007136. [DOI: 10.3389/fnhum.2022.1007136]
Abstract
Most hybrid brain-computer interfaces (hBCI) aim at improving the performance of single-input BCIs. Many combinations are possible when configuring an hBCI, such as using multiple brain input signals, different stimuli, or more than one input system. Multiple studies have been done since 2010 in which such interfaces were tested and analyzed. Results and conclusions are promising, but little has been discussed as to the best approach for the pediatric population, should they use an hBCI as an assistive technology. Children might face greater challenges when using BCIs and might benefit from less complex interfaces. Hence, in this scoping review we included 42 papers that developed hBCI systems for the control of assistive devices or communication software, and we analyzed them through the lenses of potential use in clinical settings and with children. We extracted taxonomic categories proposed in previous studies to describe the types of interfaces that have been developed. We also proposed interface characteristics that can be observed across different hBCIs, such as type of target, number of targets, and number of steps before selection. We then discussed how each of the extracted characteristics could influence the overall complexity of the system and what the best options might be for applications for children. Effectiveness and efficiency were also collected and included in the analysis. We concluded that the least complex hBCI interfaces might involve a brain input and an external input, a sequential mode of operation, and visual stimuli. Such interfaces might also use a minimal number of targets of the strobic type, with one or two steps before the final selection. We hope this review can be used as a guideline for future hBCI developments and as an incentive for the design of interfaces that can also serve children who have motor impairments.
|
13
|
Ju J, Feleke AG, Luo L, Fan X. Recognition of Drivers’ Hard and Soft Braking Intentions Based on Hybrid Brain-Computer Interfaces. CYBORG AND BIONIC SYSTEMS 2022. [DOI: 10.34133/2022/9847652] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022] Open
Abstract
In this paper, we propose, for the first time, simultaneous and sequential hybrid brain-computer interfaces (hBCIs) that incorporate electroencephalography (EEG) and electromyography (EMG) signals to classify drivers' hard braking, soft braking, and normal driving intentions to better assist driving. The simultaneous hBCIs adopt a feature-level fusion strategy (hBCI-FL) and classifier-level fusion strategies (hBCIs-CL). The sequential hBCIs include hBCI-SE1, where EEG signals are prioritized to detect hard braking, and hBCI-SE2, where EMG signals are prioritized to detect hard braking. Experimental results show that the proposed hBCI-SE1 with spectral features and the one-vs-rest classification strategy performs best among the hBCIs, with an average system accuracy of 96.37%. This work is valuable for developing human-centric intelligent driving-assistance systems that improve driving safety and comfort, and for promoting the application of BCIs.
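One way to picture the classifier-level fusion strategy (hBCI-CL) described above is a weighted average of the per-class probabilities produced by separate EEG and EMG classifiers. The sketch below is an illustrative assumption — the class names, probability vectors, and equal weights are not values from the paper:

```python
import numpy as np

# Hypothetical classifier-level fusion: average the per-class probability
# vectors from an EEG classifier and an EMG classifier, then pick the
# braking intention with the highest fused score.
CLASSES = ["hard_braking", "soft_braking", "normal_driving"]

def fuse_classifiers(p_eeg, p_emg, w_eeg=0.5, w_emg=0.5):
    """Weighted average of two classifiers' class-probability vectors."""
    p_eeg, p_emg = np.asarray(p_eeg, float), np.asarray(p_emg, float)
    fused = w_eeg * p_eeg + w_emg * p_emg
    fused /= fused.sum()                  # renormalize to a distribution
    return CLASSES[int(np.argmax(fused))], fused

label, fused = fuse_classifiers([0.6, 0.3, 0.1], [0.4, 0.5, 0.1])
```

The sequential variants (hBCI-SE1/SE2) would instead consult one modality first and fall through to the other only when the first is inconclusive.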
Affiliation(s)
- Jiawei Ju
- School of Mechanical Engineering, Beijing Institute of Technology, Beijing, China
- Longxi Luo
- School of Mechanical Engineering, Beijing Institute of Technology, Beijing, China
- Xinan Fan
- Beijing Machine and Equipment Institute, China
|
14
|
Ayoobi N, Sadeghian EB. A Subject-Independent Brain-Computer Interface Framework Based on Supervised Autoencoder. Annu Int Conf IEEE Eng Med Biol Soc 2022; 2022:218-221. [PMID: 36086482 DOI: 10.1109/embc48229.2022.9871590] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
A calibration procedure is required in motor imagery-based brain-computer interfaces (MI-BCI) to tune the system for each new user. This procedure is time-consuming and prevents naive users from using the system immediately. Developing a subject-independent MI-BCI system that removes the calibration phase is still challenging due to the subject-dependent characteristics of MI signals. Many algorithms based on machine learning and deep learning have been developed to extract high-level features from MI signals and improve the subject-to-subject generalization of a BCI system. However, these methods are based on supervised learning and extract features useful for discriminating various MI signals. Hence, these approaches cannot find the common underlying patterns in MI signals, and their generalization is limited. This paper proposes a subject-independent MI-BCI based on a supervised autoencoder (SAE) to circumvent the calibration phase. The suggested framework is validated on dataset 2a from BCI competition IV. The simulation results show that the proposed subject-independent SAE (SISAE) model outperforms the conventional and widely used BCI algorithms, common spatial patterns and filter bank common spatial patterns, in terms of the mean kappa value in eight out of nine subjects.
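The core of a supervised autoencoder is a shared encoder whose latent code feeds both a decoder (reconstruction loss) and a classifier head (cross-entropy), with the two losses summed. The numpy sketch below is a minimal single-sample forward pass under assumed layer sizes and loss weight, not the authors' SISAE architecture:

```python
import numpy as np

# Minimal supervised-autoencoder objective: one shared encoder, a decoder
# branch scored by reconstruction MSE, and a classifier branch scored by
# cross-entropy. All sizes and the weight alpha are illustrative.
rng = np.random.default_rng(0)
n_feat, n_hid, n_cls = 8, 3, 2
W_enc = rng.normal(size=(n_feat, n_hid))
W_dec = rng.normal(size=(n_hid, n_feat))
W_cls = rng.normal(size=(n_hid, n_cls))

def sae_loss(x, y_onehot, alpha=1.0):
    """Reconstruction MSE plus alpha * cross-entropy from the shared code."""
    z = np.tanh(x @ W_enc)               # shared latent code
    x_hat = z @ W_dec                    # decoder branch
    logits = z @ W_cls                   # classifier branch
    p = np.exp(logits - logits.max())
    p /= p.sum()                         # softmax over classes
    recon = np.mean((x - x_hat) ** 2)
    xent = -np.log(p[np.argmax(y_onehot)] + 1e-12)
    return recon + alpha * xent

loss = sae_loss(rng.normal(size=n_feat), np.array([1.0, 0.0]))
```

Because the reconstruction term forces the code to capture structure shared across all MI signals, not only class-discriminative structure, it is the part that plausibly helps subject-to-subject generalization.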
|
15
|
Toward a Brain-Computer Interface- and Internet of Things-Based Smart Ward Collaborative System Using Hybrid Signals. JOURNAL OF HEALTHCARE ENGINEERING 2022; 2022:6894392. [PMID: 35480157 PMCID: PMC9038386 DOI: 10.1155/2022/6894392] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/12/2022] [Accepted: 03/26/2022] [Indexed: 11/24/2022]
Abstract
This study proposes a brain-computer interface (BCI)- and Internet of Things (IoT)-based smart ward collaborative system using hybrid signals. The system is divided into a hybrid asynchronous electroencephalography (EEG)-, electrooculography (EOG)-, and gyro-based BCI control system and an IoT monitoring and management system. The hybrid BCI control system proposes a GUI paradigm with cursor movement. The user operates the gyro to select the cursor area and uses blink-related EOG to control cursor clicks, while attention-related EEG signals are classified with a support-vector machine (SVM) to make the final judgment. Requiring both the cursor-area judgment and the attention-state judgment reduces misjudgments, thereby reducing the false operation rate of the hybrid BCI system. The accuracy of the hybrid BCI control system was 96.65 ± 1.44%, and the false operation rate and command response time were 0.89 ± 0.42 events/min and 2.65 ± 0.48 s, respectively. These results show the application potential of the hybrid BCI control system in daily tasks. In addition, we develop an architecture to connect intelligent things in a smart ward based on narrowband Internet of Things (NB-IoT) technology. The results demonstrate that our system provides superior communication transmission quality.
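The double-judgment idea can be sketched as a simple conjunction: a click is issued only when the gyro-selected cursor area and the SVM's attention estimate both agree. The function name and the threshold value below are hypothetical, not taken from the study:

```python
# Sketch of the double-judgment logic: issue a command only when the
# gyro-selected cursor area and the SVM attention estimate both pass.
# The 0.8 threshold is an assumed value for illustration.
def confirm_click(cursor_in_target, attention_prob, threshold=0.8):
    """Return True only if both the cursor-area and attention judgments pass."""
    return bool(cursor_in_target and attention_prob >= threshold)
```

Requiring both conditions is what trades a little latency for a lower false operation rate.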
|
16
|
Ha J, Park S, Im CH. Novel Hybrid Brain-Computer Interface for Virtual Reality Applications Using Steady-State Visual-Evoked Potential-Based Brain-Computer Interface and Electrooculogram-Based Eye Tracking for Increased Information Transfer Rate. Front Neuroinform 2022; 16:758537. [PMID: 35281718 PMCID: PMC8908008 DOI: 10.3389/fninf.2022.758537] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2021] [Accepted: 01/27/2022] [Indexed: 11/13/2022] Open
Abstract
Brain-computer interfaces (BCIs) based on electroencephalogram (EEG) have recently attracted increasing attention in virtual reality (VR) applications as a promising tool for controlling virtual objects or generating commands in a "hands-free" manner. Video-oculography (VOG) has frequently been used to improve BCI performance by identifying the gaze location on the screen; however, current VOG devices are generally too expensive to be embedded in practical low-cost VR head-mounted display (HMD) systems. In this study, we proposed a novel calibration-free hybrid BCI system combining a steady-state visual-evoked potential (SSVEP)-based BCI and electrooculogram (EOG)-based eye tracking to increase the information transfer rate (ITR) of a nine-target SSVEP-based BCI in a VR environment. Experiments were repeated on three different frequency configurations of pattern-reversal checkerboard stimuli arranged in a 3 × 3 matrix. When a user stared at one of the nine visual stimuli, the column containing the target stimulus was first identified from the user's horizontal eye movement direction (left, middle, or right), classified using horizontal EOG recorded from a pair of electrodes that can readily be incorporated into any existing VR-HMD system. Note that, unlike with a VOG system, the EOG can be recorded using the same amplifier that records the SSVEP. The target visual stimulus was then identified among the three visual stimuli vertically arranged in the selected column using the extension of multivariate synchronization index (EMSI) algorithm, one of the widely used SSVEP detection algorithms. In our experiments with 20 participants wearing a commercial VR-HMD system, both the accuracy and the ITR of the proposed hybrid BCI were significantly increased compared to those of a traditional SSVEP-based BCI in the same VR environment.
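The two-stage selection described above reduces a nine-way SSVEP problem to a three-way one: horizontal EOG picks the column, then the SSVEP detector picks the row within it. The sketch below shows only that decision flow; the target labels and the classifier stubs are assumptions, not the paper's EMSI implementation:

```python
# Two-stage target selection over a 3x3 stimulus matrix: an EOG-derived
# horizontal direction narrows the grid to one column, then an SSVEP
# detector (stubbed here as a row index) picks one of three stimuli.
GRID = [["T1", "T2", "T3"],
        ["T4", "T5", "T6"],
        ["T7", "T8", "T9"]]   # hypothetical labels, row-major

def select_target(eog_direction, ssvep_row):
    """eog_direction in {'left','middle','right'}; ssvep_row in {0,1,2}."""
    col = {"left": 0, "middle": 1, "right": 2}[eog_direction]
    return GRID[ssvep_row][col]
```

Since the SSVEP stage only ever discriminates the three stimuli in one column, each column can reuse the same small set of flicker frequencies, which is what makes the hybrid design raise the ITR.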
Affiliation(s)
- Jisoo Ha
- Department of HY-KIST Bio-Convergence, Hanyang University, Seoul, South Korea
- Seonghun Park
- Department of Electronic Engineering, Hanyang University, Seoul, South Korea
- Chang-Hwan Im
- Department of HY-KIST Bio-Convergence, Hanyang University, Seoul, South Korea
- Department of Electronic Engineering, Hanyang University, Seoul, South Korea
- Department of Biomedical Engineering, Hanyang University, Seoul, South Korea
|
17
|
Chen L, Chen P, Zhao S, Luo Z, Chen W, Pei Y, Zhao H, Jiang J, Xu M, Yan Y, Yin E. Adaptive asynchronous control system of robotic arm based on augmented reality-assisted brain-computer interface. J Neural Eng 2021; 18. [PMID: 34654000 DOI: 10.1088/1741-2552/ac3044] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2021] [Accepted: 10/15/2021] [Indexed: 11/12/2022]
Abstract
Objective. Brain-controlled robotic arms have shown broad application prospects with the development of robotics science and information decoding. However, disadvantages such as poor flexibility restrict their wide application. Approach. To alleviate these drawbacks, this study proposed an asynchronous robotic arm control system based on steady-state visual evoked potentials (SSVEP) in an augmented reality (AR) environment. In the AR environment, participants could see the robotic arm and the visual stimulation interface concurrently through the AR device, so there was no need to switch attention frequently between the visual stimulation interface and the robotic arm. This study proposed a multi-template algorithm based on canonical correlation analysis and task-related component analysis to identify 12 targets, and an optimization strategy based on a dynamic window was adopted to adjust the duration of visual stimulation adaptively. Main results. The high-frequency SSVEP-based brain-computer interface (BCI) realized the switching of the system state, which controlled the robotic arm asynchronously. The average accuracy of the offline experiment was 94.97%, and the average information transfer rate was 67.37 ± 14.27 bits·min-1. The online results from ten healthy subjects showed that the average selection time for a single online command was 2.04 s, which effectively reduced the subjects' visual fatigue. Each subject could quickly complete a puzzle task. Significance. The experimental results demonstrated the feasibility and potential of this human-computer interaction strategy and provide new ideas for BCI-controlled robots.
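Standard CCA-based SSVEP detection is the building block that multi-template CCA/TRCA methods like the one above extend: for each candidate flicker frequency, correlate the multichannel EEG with sin/cos reference signals and pick the best match. This is a minimal sketch under assumed sampling rate, harmonics, and candidate frequencies, not the paper's multi-template algorithm:

```python
import numpy as np

def cca_max_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    s = np.linalg.svd(Qx.T @ Qy, compute_uv=False)
    return float(s[0])

def detect_ssvep_frequency(eeg, freqs, fs, n_harmonics=2):
    """Return the candidate frequency whose sin/cos references best fit eeg.

    eeg: array of shape (n_samples, n_channels)."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        # Reference set: sin/cos at the fundamental and its harmonics.
        ref = np.column_stack([fn(2 * np.pi * f * h * t)
                               for h in range(1, n_harmonics + 1)
                               for fn in (np.sin, np.cos)])
        scores.append(cca_max_corr(eeg, ref))
    return freqs[int(np.argmax(scores))]
```

A dynamic-window strategy, as in the paper, would rerun this detector on a growing data window and stop as soon as the winning score clears a confidence margin.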
Affiliation(s)
- Lingling Chen
- School of Artificial Intelligence and Data Science, Hebei University of Technology, Tianjin 300130, People's Republic of China
- Engineering Research Center of Intelligent Rehabilitation Device and Detection Technology, Ministry of Education, Tianjin 300130, People's Republic of China
- Pengfei Chen
- School of Artificial Intelligence and Data Science, Hebei University of Technology, Tianjin 300130, People's Republic of China
- Engineering Research Center of Intelligent Rehabilitation Device and Detection Technology, Ministry of Education, Tianjin 300130, People's Republic of China
- Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
- Shaokai Zhao
- Defense Innovation Institute, Academy of Military Sciences (AMS), Beijing 100071, People's Republic of China
- Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
- Zhiguo Luo
- Defense Innovation Institute, Academy of Military Sciences (AMS), Beijing 100071, People's Republic of China
- Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
- Wei Chen
- National Research Center for Rehabilitation Technical Aids, Beijing 100176, People's Republic of China
- Yu Pei
- Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
- Hongyu Zhao
- Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
- East China University of Science and Technology, Shanghai 200237, People's Republic of China
- Jing Jiang
- National Key Laboratory of Human Factors Engineering, China Astronaut Research and Training Center, Beijing 100094, People's Republic of China
- Minpeng Xu
- Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
- Tianjin University, Tianjin 300072, People's Republic of China
- Ye Yan
- Defense Innovation Institute, Academy of Military Sciences (AMS), Beijing 100071, People's Republic of China
- Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
- Erwei Yin
- Defense Innovation Institute, Academy of Military Sciences (AMS), Beijing 100071, People's Republic of China
- Tianjin Artificial Intelligence Innovation Center (TAIIC), Tianjin 300450, People's Republic of China
|
18
|
Karaduman M, Karci A. Determining the Demands of Disabled People by Artificial Intelligence Methods. Computer Science 2021. [DOI: 10.53070/bbd.990485] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
|
19
|
Sahonero-Alvarez G, Singh AK, Sayrafian K, Bianchi L, Roman-Gonzalez A. A Functional BCI Model by the P2731 Working Group: Transducer. BRAIN-COMPUTER INTERFACES 2021. [DOI: 10.1080/2326263x.2021.1968633] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/07/2023]
Affiliation(s)
- Kamran Sayrafian
- Information Technology Laboratory, National Institute of Standards & Technology, Gaithersburg, USA
- Luigi Bianchi
- Civil Engineering and Computer Science Engineering Dept., Tor Vergata University of Rome, Rome, Italy
|
20
|
Guan S, Li J, Wang F, Yuan Z, Kang X, Lu B. Discriminating three motor imagery states of the same joint for brain-computer interface. PeerJ 2021; 9:e12027. [PMID: 34513337 PMCID: PMC8395581 DOI: 10.7717/peerj.12027] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2021] [Accepted: 07/29/2021] [Indexed: 11/20/2022] Open
Abstract
The classification of electroencephalography (EEG) induced by the same joint is one of the major challenges for brain-computer interface (BCI) systems. In this paper, we propose a new framework, which includes two parts, feature extraction and classification. Based on local mean decomposition (LMD), cloud model, and common spatial pattern (CSP), a feature extraction method called LMD-CSP is proposed to extract distinguishable features. In order to improve the classification results multi-objective grey wolf optimization twin support vector machine (MOGWO-TWSVM) is applied to discriminate the extracted features. We evaluated the performance of the proposed framework on our laboratory data sets with three motor imagery (MI) tasks of the same joint (shoulder abduction, extension, and flexion), and the average classification accuracy was 91.27%. Further comparison with several widely used methods showed that the proposed method had better performance in feature extraction and pattern classification. Overall, this study can be used for developing high-performance BCI systems, enabling individuals to control external devices intuitively and naturally.
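The CSP step of the LMD-CSP pipeline above has a compact closed form: find spatial filters that maximize variance for one class relative to the other via a whitening transform and an eigendecomposition. The sketch below is the classical two-class CSP computation, not the authors' exact code; trial shapes and the filter count are assumptions:

```python
import numpy as np

# Classical common spatial patterns (CSP): whiten the composite
# covariance C1 + C2, then diagonalize C1 in the whitened space. The
# resulting filters order channels by how strongly their variance
# discriminates class a from class b.
def csp_filters(trials_a, trials_b):
    """trials_*: arrays of shape (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        covs = [X @ X.T / np.trace(X @ X.T) for X in trials]
        return np.mean(covs, axis=0)
    C1, C2 = mean_cov(trials_a), mean_cov(trials_b)
    evals, evecs = np.linalg.eigh(C1 + C2)
    P = np.diag(evals ** -0.5) @ evecs.T      # whitening matrix
    S1 = P @ C1 @ P.T                          # class-a covariance, whitened
    _, B = np.linalg.eigh(S1)                  # eigenvalues ascending
    return (B.T @ P)[::-1]                     # filters, class-a-dominant first
```

In a full pipeline, the log-variance of each trial projected through the top and bottom filters would serve as the feature vector fed to the classifier.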
Affiliation(s)
- Shan Guan
- School of Mechanical Engineering, Northeast Electric Power University, Jilin, China
- Jixian Li
- School of Mechanical Engineering, Northeast Electric Power University, Jilin, China
- Fuwang Wang
- School of Mechanical Engineering, Northeast Electric Power University, Jilin, China
- Zhen Yuan
- School of Mechanical Engineering, Northeast Electric Power University, Jilin, China
- Xiaogang Kang
- School of Mechanical Engineering, Northeast Electric Power University, Jilin, China
- Bin Lu
- School of Mechanical Engineering, Northeast Electric Power University, Jilin, China
|
21
|
Laport F, Iglesia D, Dapena A, Castro PM, Vazquez-Araujo FJ. Proposals and Comparisons from One-Sensor EEG and EOG Human-Machine Interfaces. SENSORS (BASEL, SWITZERLAND) 2021; 21:2220. [PMID: 33810122 PMCID: PMC8004835 DOI: 10.3390/s21062220] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/18/2021] [Revised: 03/15/2021] [Accepted: 03/17/2021] [Indexed: 12/03/2022]
Abstract
Human-Machine Interfaces (HMI) allow users to interact with devices such as computers or home elements. A key part of HMI is the design of simple, non-invasive interfaces that capture the signals associated with the user's intentions. In this work, we designed two different approaches, based on Electroencephalography (EEG) and Electrooculography (EOG). In both cases, signal acquisition is performed using only one electrode, which makes placement more comfortable than in multi-channel systems. We also developed a Graphical User Interface (GUI) that presents objects to the user using two paradigms: one-by-one objects, or rows-columns of objects. Both interfaces and paradigms were compared for several users considering interactions with home elements.
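The two GUI paradigms differ simply in how many highlight steps are needed to cover a set of objects, which a back-of-envelope count makes concrete. The functions below are an illustration of that count only, not code from the study:

```python
# Highlight-step count for the two presentation paradigms: one-by-one
# highlighting visits every object per cycle, while rows-columns first
# highlights each row and then each column of a grid.
def one_by_one_steps(n_objects):
    return n_objects              # each object highlighted once per cycle

def rows_columns_steps(n_rows, n_cols):
    return n_rows + n_cols        # highlight rows, then columns

steps_obo = one_by_one_steps(9)   # e.g. a 3x3 layout of home elements
steps_rc = rows_columns_steps(3, 3)
```

The advantage of rows-columns grows with the number of objects (N versus roughly 2·√N), at the cost of requiring two selections instead of one.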
Affiliation(s)
- Francisco Laport
- CITIC Research Center, University of A Coruña, Campus de Elviña, 15071 A Coruña, Spain; (D.I.); (A.D.); (P.M.C.); (F.J.V.-A.)
|
22
|
Al-Saegh A, Dawwd SA, Abdul-Jabbar JM. Deep learning for motor imagery EEG-based classification: A review. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2020.102172] [Citation(s) in RCA: 27] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/14/2022]
|
23
|
Belkhiria C, Peysakhovich V. Electro-Encephalography and Electro-Oculography in Aeronautics: A Review Over the Last Decade (2010-2020). FRONTIERS IN NEUROERGONOMICS 2020; 1:606719. [PMID: 38234309 PMCID: PMC10790927 DOI: 10.3389/fnrgo.2020.606719] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 09/15/2020] [Accepted: 11/17/2020] [Indexed: 01/19/2024]
Abstract
Electro-encephalography (EEG) and electro-oculography (EOG) are methods of electrophysiological monitoring with potentially fruitful applications in neuroscience, clinical exploration, the aeronautical industry, and other sectors. These methods are often the most straightforward way of evaluating brain oscillations and eye movements, as they use standard laboratory or mobile techniques. This review describes the potential of EEG and EOG systems and the application of these methods in aeronautics. For example, EEG and EOG signals can be used to design brain-computer interfaces (BCI) and to interpret brain activity, such as monitoring the mental state of a pilot to determine their workload. The main objectives of this review are to: (i) offer an in-depth review of the literature on the basics of EEG and EOG and their application in aeronautics; (ii) explore the methodology and trends of research in combined EEG-EOG studies over the last decade; and (iii) provide methodological guidelines for beginners and experts applying these methods in environments outside the laboratory, with a particular focus on human factors and aeronautics. The study used databases from the scientific, clinical, and neural engineering fields. The review first introduces the characteristics and aeronautical applications of both EEG and EOG, drawing on a large body of relevant literature, from early to more recent studies. We then built a novel taxonomy model covering 150 combined EEG-EOG papers published in peer-reviewed scientific journals and conferences from January 2010 to March 2020. Several data elements were reviewed for each study (e.g., pre-processing, extracted features, and performance metrics) and then examined to uncover trends in aeronautics and summarize interesting methods from this important body of literature. Finally, the review considers the advantages and limitations of these methods as well as future challenges.
|
24
|
Zhu Y, Li Y, Lu J, Li P. A Hybrid BCI Based on SSVEP and EOG for Robotic Arm Control. Front Neurorobot 2020; 14:583641. [PMID: 33328950 PMCID: PMC7714925 DOI: 10.3389/fnbot.2020.583641] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/15/2020] [Accepted: 10/26/2020] [Indexed: 11/21/2022] Open
Abstract
Brain-computer interfaces (BCI) for robotic arm control have been studied to improve the quality of life of people with severe motor disabilities. Challenges remain in controlling a robotic arm through a complex task involving a series of actions, where an efficient switch and a timely cancel command are helpful. On this basis, we proposed an asynchronous hybrid BCI in this study. The basic control of a robotic arm with six degrees of freedom was provided by a steady-state visual evoked potential (SSVEP)-based BCI with fifteen target classes. We designed an EOG-based switch that uses a triple blink to either activate or deactivate the flashing of the SSVEP-based BCI; stopping the flashing in the idle state helps reduce visual fatigue and the false activation rate (FAR). Additionally, users were allowed to cancel the current command simply by winking in the feedback phase, to avoid executing an incorrect command. Fifteen subjects participated in and completed the experiments. The cue-based experiment obtained an average accuracy of 92.09%, with an information transfer rate (ITR) of 35.98 bits/min. The mean FAR of the switch was 0.01/min. Furthermore, all subjects succeeded in asynchronously operating the robotic arm to grasp, lift, and move a target object from the initial position to a specific location. The results indicate the feasibility of combining EOG and SSVEP signals and the flexibility of the EOG signal in a BCI for completing a complicated robotic arm control task.
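The triple-blink switch can be pictured as a small state machine: three blink events inside a short time window toggle the SSVEP flashing on or off. The window length and the blink-event interface below are assumptions for illustration, not values from the paper:

```python
# Hypothetical triple-blink switch: three blinks within `window_s`
# seconds toggle the SSVEP flicker state; isolated blinks are ignored,
# which keeps the false activation rate low.
class TripleBlinkSwitch:
    def __init__(self, window_s=1.5):
        self.window_s = window_s
        self.blink_times = []
        self.flicker_on = False

    def on_blink(self, t):
        """Feed a blink timestamp (seconds); return True if state toggled."""
        self.blink_times = [b for b in self.blink_times
                            if t - b <= self.window_s] + [t]
        if len(self.blink_times) >= 3:
            self.flicker_on = not self.flicker_on
            self.blink_times = []        # reset after a toggle
            return True
        return False
```

Demanding a deliberate three-blink pattern, rather than a single blink, is what lets the switch run continuously while staying quiet during normal spontaneous blinking.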
Affiliation(s)
- Yuanlu Zhu
- Wuhan National Laboratory for Optoelectronics, Britton Chance Center of Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Ying Li
- Wuhan National Laboratory for Optoelectronics, Britton Chance Center of Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Jinling Lu
- Wuhan National Laboratory for Optoelectronics, Britton Chance Center of Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Pengcheng Li
- Wuhan National Laboratory for Optoelectronics, Britton Chance Center of Biomedical Photonics, Huazhong University of Science and Technology, Wuhan, China
- MoE Key Laboratory for Biomedical Photonics, School of Engineering Sciences, Huazhong University of Science and Technology, Wuhan, China
- Huazhong University of Science and Technology-Suzhou Institute for Brainsmatics, Suzhou, China
|