1. Chen D, Huang H, Guan Z, Pan J, Li Y. An Intersubject Brain-Computer Interface Based on Domain-Adversarial Training of Convolutional Neural Network. IEEE Trans Biomed Eng 2024;71:2956-2967. PMID: 38781054. DOI: 10.1109/tbme.2024.3404131.
Abstract
OBJECTIVE: Attention decoding plays a vital role in daily life, and electroencephalography (EEG) has been widely used for it. However, training a single model that works well for everyone is impractical due to the substantial interindividual variability in EEG signals. To tackle this challenge, we propose an end-to-end brain-computer interface (BCI) framework, DA-TSnet, which combines a temporal and spatial one-dimensional (1D) convolutional neural network with a domain-adversarial training strategy. METHODS: DA-TSnet extracts temporal and spatial features of EEG and is jointly supervised by a task loss and a domain loss. During training, DA-TSnet maximizes the domain loss while simultaneously minimizing the task loss. We conduct an offline analysis and simulated online experiments on a self-collected dataset of 85 subjects, as well as real online experiments on 22 subjects. MAIN RESULTS: DA-TSnet achieves a leave-one-subject-out (LOSO) cross-validation (CV) classification accuracy of 89.40% ± 9.96%, outperforming several state-of-the-art attention EEG decoding methods. In simulated online experiments, it achieves an outstanding accuracy of 88.07% ± 11.22%; in real online experiments, the average accuracy surpasses 86%. SIGNIFICANCE: An end-to-end network framework does not rely on elaborate preprocessing and feature extraction steps, saving time and human workload. Moreover, our framework uses a domain-adversarial neural network (DANN) to tackle the high interindividual variability in EEG signals, which has significant reference value for other EEG signal decoding problems. Finally, the performance of the DA-TSnet framework in offline and online experiments underscores its potential to facilitate more reliable applications.
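The domain-adversarial strategy described in this abstract (maximize domain loss, minimize task loss) is commonly implemented with a gradient-reversal layer. The sketch below illustrates that general idea only; it is not the authors' DA-TSnet code, and the scalar `feature_update` helper is a hypothetical toy:

```python
class GradReverse:
    """Gradient-reversal layer: identity in the forward pass, scales
    gradients by -lam in the backward pass, so the shared feature
    extractor ascends the domain loss while descending the task loss."""
    def __init__(self, lam=1.0):
        self.lam = lam

    def forward(self, x):
        return x

    def backward(self, grad_from_domain_head):
        return -self.lam * grad_from_domain_head


def feature_update(w, task_grad, domain_grad, lr=0.1, lam=1.0):
    """One toy update of a shared feature parameter w under the joint
    objective: descend the task gradient, ascend the domain gradient."""
    grl = GradReverse(lam)
    return w - lr * (task_grad + grl.backward(domain_grad))
```

When task and domain gradients agree, the reversed domain term cancels part of the update, which is exactly the tension that pushes features toward domain invariance.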
2. Lin C, Zhang C, Xu J, Liu R, Leng Y, Fu C. Neural Correlation of EEG and Eye Movement in Natural Grasping Intention Estimation. IEEE Trans Neural Syst Rehabil Eng 2023;31:4329-4337. PMID: 37883284. DOI: 10.1109/tnsre.2023.3327907.
Abstract
Decoding the user's natural grasp intent enhances the application of wearable robots, improving the daily lives of individuals with disabilities. Electroencephalogram (EEG) and eye movements are two natural representations of grasp intent as users form it, and current studies decode human intent by fusing EEG and eye movement signals. However, the neural correlation between these two signals remains unclear. Thus, this paper explores the consistency between EEG and eye movements in natural grasping intention estimation. Specifically, six grasp intent pairs are decoded by combining feature vectors and using the optimal classifier. Extensive experimental results indicate that the coupling between the EEG and eye movement intent patterns remains intact when the user generates a natural grasp intent, and that the EEG pattern is consistent with the eye movement pattern across the task pairs. Moreover, the findings reveal a solid connection between EEG and eye movements even when considering cortical EEG (originating from the visual cortex or motor cortex) and the presence of a suboptimal classifier. Overall, this work uncovers the coupling between EEG and eye movements and provides a reference for intention estimation.
3. Barnova K, Mikolasova M, Kahankova RV, Jaros R, Kawala-Sterniuk A, Snasel V, Mirjalili S, Pelc M, Martinek R. Implementation of artificial intelligence and machine learning-based methods in brain-computer interaction. Comput Biol Med 2023;163:107135. PMID: 37329623. DOI: 10.1016/j.compbiomed.2023.107135.
Abstract
Brain-computer interfaces are used for direct two-way communication between the human brain and the computer. Brain signals contain valuable information about the mental state and brain activity of the examined subject. However, due to their non-stationarity and susceptibility to various types of interference, their processing, analysis and interpretation are challenging. For these reasons, the research in the field of brain-computer interfaces is focused on the implementation of artificial intelligence, especially in five main areas: calibration, noise suppression, communication, mental condition estimation, and motor imagery. The use of algorithms based on artificial intelligence and machine learning has proven to be very promising in these application domains, especially due to their ability to predict and learn from previous experience. Therefore, their implementation within medical technologies can contribute to more accurate information about the mental state of subjects, alleviate the consequences of serious diseases or improve the quality of life of disabled patients.
Affiliation(s)
- Katerina Barnova: Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB-Technical University of Ostrava, Czechia
- Martina Mikolasova: Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB-Technical University of Ostrava, Czechia
- Radana Vilimkova Kahankova: Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB-Technical University of Ostrava, Czechia
- Rene Jaros: Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB-Technical University of Ostrava, Czechia
- Aleksandra Kawala-Sterniuk: Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Poland
- Vaclav Snasel: Department of Computer Science, Faculty of Electrical Engineering and Computer Science, VSB-Technical University of Ostrava, Czechia
- Seyedali Mirjalili: Centre for Artificial Intelligence Research and Optimisation, Torrens University Australia, Australia
- Mariusz Pelc: Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Poland; School of Computing and Mathematical Sciences, University of Greenwich, London, UK
- Radek Martinek: Department of Cybernetics and Biomedical Engineering, Faculty of Electrical Engineering and Computer Science, VSB-Technical University of Ostrava, Czechia; Faculty of Electrical Engineering, Automatic Control and Informatics, Opole University of Technology, Poland
4. Chen D, Huang H, Bao X, Pan J, Li Y. An EEG-based attention recognition method: fusion of time domain, frequency domain, and non-linear dynamics features. Front Neurosci 2023;17:1194554. PMID: 37502681. PMCID: PMC10368951. DOI: 10.3389/fnins.2023.1194554.
Abstract
Introduction: Attention is a complex cognitive function of the human brain that plays a vital role in our daily lives. Electroencephalography (EEG) is used to measure and analyze attention due to its high temporal resolution. Although several attention recognition brain-computer interfaces (BCIs) have been proposed, there is a scarcity of studies with a sufficient number of subjects, valid paradigms, and reliable cross-subject recognition analysis. Methods: In this study, we proposed a novel attention paradigm and a feature fusion method that combines time domain, frequency domain, and nonlinear dynamics features, and we constructed an attention recognition framework for 85 subjects. Results and discussion: We achieved an intra-subject average classification accuracy of 85.05% ± 6.87% and an inter-subject average classification accuracy of 81.60% ± 9.93%. We further explored the neural patterns underlying attention recognition: attention states showed less activation than non-attention states in the prefrontal and occipital areas in the α, β, and θ bands. This work explores, for the first time, the fusion of time domain, frequency domain, and nonlinear dynamics features for attention recognition, providing a new understanding of attention recognition.
Affiliation(s)
- Di Chen: School of Automation Science and Engineering, South China University of Technology, Guangzhou, China; Research Center for Brain-Computer Interface, Pazhou Laboratory, Guangzhou, China
- Haiyun Huang: Research Center for Brain-Computer Interface, Pazhou Laboratory, Guangzhou, China; School of Software, South China Normal University, Foshan, China
- Xiaoyu Bao: School of Automation Science and Engineering, South China University of Technology, Guangzhou, China; Research Center for Brain-Computer Interface, Pazhou Laboratory, Guangzhou, China
- Jiahui Pan: Research Center for Brain-Computer Interface, Pazhou Laboratory, Guangzhou, China; School of Software, South China Normal University, Foshan, China
- Yuanqing Li: School of Automation Science and Engineering, South China University of Technology, Guangzhou, China; Research Center for Brain-Computer Interface, Pazhou Laboratory, Guangzhou, China
5. Li R, Zhang Y, Fan G, Li Z, Li J, Fan S, Lou C, Liu X. Design and implementation of high sampling rate and multichannel wireless recorder for EEG monitoring and SSVEP response detection. Front Neurosci 2023;17:1193950. PMID: 37457014. PMCID: PMC10339741. DOI: 10.3389/fnins.2023.1193950.
Abstract
Introduction: The collection and processing of human brain activity signals play an essential role in developing brain-computer interface (BCI) systems. Portable electroencephalogram (EEG) devices have become important tools for monitoring brain activity and diagnosing mental diseases. However, the miniaturization, portability, and scalability of EEG recorders are current bottlenecks in BCI research and application. Methods: For scalp EEG and other applications, this study designs a 32-channel EEG recorder with a sampling rate of up to 30 kHz and 16-bit resolution, which meets the demands of both scalp and intracranial EEG signal recording. A fully integrated electrophysiology microchip (RHS2116) controlled by an FPGA is employed to build the EEG recorder, and the design meets the requirements of a high sampling rate, a high transmission rate, and channel scalability. Results: The experimental results show that the developed EEG recorder provides a maximum 30 kHz sampling rate and a 58 Mbps wireless transmission rate. Electrophysiological experiments were performed for scalp and intracranial EEG collection. An inflatable helmet with adjustable contact impedance was designed; pressurization improved the SNR by approximately 4 times, and the average accuracy of steady-state visual evoked potential (SSVEP) detection was 93.12%. Animal experiments were also performed on rats, and spike activity was captured successfully. Conclusion: The designed multichannel wireless EEG collection system is simple and comfortable; the helmet EEG recorder can capture bioelectric signals without noticeable interference and has high measurement performance and great potential for practical application in BCI systems.
Affiliation(s)
- Ruikai Li: The College of Electronic Information Engineering and the Hebei Key Laboratory of Digital Medical Engineering, Hebei University, Baoding, China; Information Center, The Affiliated Hospital of Hebei University, Baoding, China
- Yixing Zhang: The College of Electronic Information Engineering and the Hebei Key Laboratory of Digital Medical Engineering, Hebei University, Baoding, China
- Guangwei Fan: The College of Electronic Information Engineering and the Hebei Key Laboratory of Digital Medical Engineering, Hebei University, Baoding, China
- Ziteng Li: The College of Electronic Information Engineering and the Hebei Key Laboratory of Digital Medical Engineering, Hebei University, Baoding, China
- Jun Li: The College of Electronic Information Engineering and the Hebei Key Laboratory of Digital Medical Engineering, Hebei University, Baoding, China
- Shiyong Fan: The College of Electronic Information Engineering and the Hebei Key Laboratory of Digital Medical Engineering, Hebei University, Baoding, China
- Cunguang Lou: The College of Electronic Information Engineering and the Hebei Key Laboratory of Digital Medical Engineering, Hebei University, Baoding, China
- Xiuling Liu: The College of Electronic Information Engineering and the Hebei Key Laboratory of Digital Medical Engineering, Hebei University, Baoding, China
6. Rodriguez F, He S, Tan H. The potential of convolutional neural networks for identifying neural states based on electrophysiological signals: experiments on synthetic and real patient data. Front Hum Neurosci 2023;17:1134599. PMID: 37333834. PMCID: PMC10272439. DOI: 10.3389/fnhum.2023.1134599.
Abstract
Adaptive Deep Brain Stimulation (aDBS) and other brain-computer interface (BCI) applications often require processing incoming neural oscillatory signals in real time and decoding relevant behavioral or pathological states from them. Most current approaches first extract a set of predefined features, such as the power in canonical frequency bands or various time-domain features, and then train machine learning systems that use those predefined features as inputs to infer the underlying brain state at each time point. However, whether this algorithmic approach is best suited to extracting all the information contained within the neural waveforms remains an open question. Here, we explore different algorithmic approaches in terms of their potential to improve decoding performance based on neural activity, such as local field potential (LFP) recordings or electroencephalography (EEG). In particular, we explore the potential of end-to-end convolutional neural networks and compare this approach with machine learning methods based on extracting predefined feature sets. To this end, we implement and train a number of machine learning models, based either on manually constructed features or, in the case of deep learning models, on features learnt directly from the data. We benchmark these models on the task of identifying neural states using simulated data that incorporates waveform features previously linked to physiological and pathological functions. We then assess their performance in decoding movements from local field potentials recorded in the motor thalamus of patients with essential tremor.
Our findings, from both simulated and real patient data, suggest that end-to-end deep learning methods may surpass feature-based approaches, particularly when the relevant patterns in the waveform data are unknown or difficult to quantify, or when the predefined feature extraction pipeline may miss features that contribute to decoding performance. The methodologies proposed in this study hold potential for application in aDBS and other brain-computer interface systems.
7. Mao J, Qiu S, Wei W, He H. Cross-modal guiding and reweighting network for multi-modal RSVP-based target detection. Neural Netw 2023;161:65-82. PMID: 36736001. DOI: 10.1016/j.neunet.2023.01.009.
Abstract
Rapid Serial Visual Presentation (RSVP)-based brain-computer interfaces (BCIs) facilitate the high-throughput detection of rare target images by detecting evoked event-related potentials (ERPs). At present, the decoding accuracy of RSVP-based BCI systems limits their practical application. This study introduces eye movements (gaze and pupil information), referred to as the EYE modality, as another useful source of information to combine with EEG-based BCI, forming a novel system for detecting target images in RSVP tasks. We performed an RSVP experiment, recorded EEG signals and eye movements simultaneously during a target detection task, and constructed a multi-modal dataset of 20 subjects. We also propose a cross-modal guiding and fusion network to fully exploit the EEG and EYE modalities for better RSVP decoding performance. In this network, a two-branch backbone extracts features from the two modalities. A Cross-Modal Feature Guiding (CMFG) module guides EYE-modality features to complement the EEG modality for better feature extraction. A Multi-scale Multi-modal Reweighting (MMR) module enhances the multi-modal features by exploring intra- and inter-modal interactions. Finally, a Dual Activation Fusion (DAF) module modulates the enhanced multi-modal features for effective fusion. The proposed network achieved a balanced accuracy of 88.00% (±2.29) on the collected dataset, and ablation studies and visualizations confirmed the effectiveness of the proposed modules. This work demonstrates the value of introducing the EYE modality in RSVP tasks; the proposed network is a promising method for RSVP decoding and further improves the performance of RSVP-based target detection systems.
Affiliation(s)
- Jiayu Mao: Laboratory of Brain Atlas and Brain-Inspired Intelligence, State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Shuang Qiu: Laboratory of Brain Atlas and Brain-Inspired Intelligence, State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
- Wei Wei: Laboratory of Brain Atlas and Brain-Inspired Intelligence, State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China
- Huiguang He: Laboratory of Brain Atlas and Brain-Inspired Intelligence, State Key Laboratory of Multimodal Artificial Intelligence Systems, Institute of Automation, Chinese Academy of Sciences, Beijing, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, China
8. Korkmaz OE, Aydemir O, Oral EA, Ozbek IY. A novel probabilistic and 3D column P300 stimulus presentation paradigm for EEG-based spelling systems. Neural Comput Appl 2023. DOI: 10.1007/s00521-023-08329-y.
9. Du P, Li P, Cheng L, Li X, Su J. Single-trial P300 classification algorithm based on centralized multi-person data fusion CNN. Front Neurosci 2023;17:1132290. PMID: 36908799. PMCID: PMC9992797. DOI: 10.3389/fnins.2023.1132290.
Abstract
Introduction: Detecting single-trial P300 from electroencephalography (EEG) signals remains a challenge. To address the typical problems faced by existing single-trial P300 classification, such as complex, time-consuming, and low-accuracy processing, this paper proposes a single-trial P300 classification algorithm based on a multi-person data fusion convolutional neural network (CNN), which builds a centralized collaborative brain-computer interface (cBCI) for fast and highly accurate classification of P300 EEG signals. Methods: Two multi-person data fusion methods (parallel data fusion and serial data fusion) are used in the data pre-processing stage to fuse the EEG of multiple people stimulated by the same task instructions; the fused data are then fed to the CNN for classification. In building the CNN for single-trial P300 classification, convolutional layers first extract the single-trial P300 features; a max-pooling layer followed by a flatten layer performs secondary feature extraction and dimensionality reduction, simplifying the computation. Finally, batch normalization is used to train on small batches of data in order to generalize the network better and speed up single-trial P300 classification. Results: The algorithm was tested on the Kaggle dataset and the Brain-Computer Interface (BCI) Competition III dataset. Analysis of the P300 waveform features, EEG topography, and four standard evaluation metrics (accuracy, precision, recall, and F1-score) demonstrated that the single-trial P300 classification algorithm with the two multi-person data fusion CNNs significantly outperformed other classification algorithms.
Discussion: The results show that the algorithm significantly outperformed the single-person model; compared with other algorithms, it involves smaller models and fewer training parameters, achieves higher classification accuracy, and more effectively improves the overall P300-cBCI classification rate and practical performance with a small amount of sample data.
Affiliation(s)
- Pu Du: School of Integrated Circuit Science and Engineering, Tianjin University of Technology, Tianjin, China
- Penghai Li: School of Integrated Circuit Science and Engineering, Tianjin University of Technology, Tianjin, China
- Longlong Cheng: School of Integrated Circuit Science and Engineering, Tianjin University of Technology, Tianjin, China; China Electronics Cloud Brain Technology Co., Ltd., Tianjin, China
- Xueqing Li: School of Integrated Circuit Science and Engineering, Tianjin University of Technology, Tianjin, China
- Jianxian Su: School of Integrated Circuit Science and Engineering, Tianjin University of Technology, Tianjin, China
10. Pan J, Chen X, Ban N, He J, Chen J, Huang H. Advances in P300 brain-computer interface spellers: toward paradigm design and performance evaluation. Front Hum Neurosci 2022;16:1077717. PMID: 36618996. PMCID: PMC9810759. DOI: 10.3389/fnhum.2022.1077717.
Abstract
A brain-computer interface (BCI) is a non-muscular communication technology that provides an information exchange channel between our brains and external devices. Over the past decades, BCI has made noticeable progress and has been applied in many fields. One of the most traditional BCI applications is the BCI speller. This article primarily discusses the progress of research into P300 BCI spellers and reviews four types: single-modal P300 spellers, P300 spellers based on multiple brain patterns, P300 spellers with multisensory stimuli, and P300 spellers with multiple intelligent techniques. For each type, we review several representative spellers, including their design principles, paradigms, algorithms, experimental performance, and corresponding advantages. We particularly emphasize paradigm design ideas, including the overall layout, individual symbol shapes, and stimulus forms. Furthermore, several important issues and research directions for the P300 speller are identified. We hope that this review can assist researchers in learning the ideas behind these novel P300 spellers and enhance their practical applicability.
Affiliation(s)
- Jiahui Pan: School of Software, South China Normal University, Guangzhou, China
- XueNing Chen: School of Software, South China Normal University, Guangzhou, China
- Nianming Ban: School of Software, South China Normal University, Guangzhou, China
- JiaShao He: School of Software, South China Normal University, Guangzhou, China
- Jiayi Chen: School of Software, South China Normal University, Guangzhou, China
- Haiyun Huang: School of Software, South China Normal University, Guangzhou, China
11. Chen W, Chen SK, Liu YH, Chen YJ, Chen CS. An Electric Wheelchair Manipulating System Using SSVEP-Based BCI System. Biosensors 2022;12:772. PMID: 36290910. PMCID: PMC9599534. DOI: 10.3390/bios12100772.
Abstract
Most people with motor disabilities use a joystick to control an electric wheelchair. However, those who suffer from multiple sclerosis or amyotrophic lateral sclerosis may require other control methods. This study implements an electroencephalography (EEG)-based brain-computer interface (BCI) system that uses steady-state visual evoked potentials (SSVEPs) to manipulate an electric wheelchair. While operating the human-machine interface, three types of SSVEP scenarios involving a real-time virtual stimulus are displayed on a monitor or mixed reality (MR) goggles to elicit the EEG signals. Canonical correlation analysis (CCA) classifies the EEG signals into the corresponding command class, and the information transfer rate (ITR) is used to evaluate performance. The experimental results show that the proposed SSVEP stimulus reliably evokes classifiable EEG responses, as evidenced by the high classification accuracy of CCA, and this is used to control an electric wheelchair along a specific path. Simultaneous localization and mapping (SLAM), available in the Robot Operating System (ROS) platform, is used as the mapping method for the wheelchair system in this study.
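CCA-based SSVEP classification, as used in this abstract, typically correlates the multichannel EEG segment with sine/cosine reference signals at each candidate stimulus frequency and picks the best-matching frequency. A minimal numpy sketch of that standard recipe (an illustration, not the authors' wheelchair code; function names are hypothetical):

```python
import numpy as np

def max_canon_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y,
    computed from the SVD of the product of their orthonormal bases."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def cca_ssvep_classify(eeg, freqs, fs, harmonics=2):
    """Pick the candidate stimulus frequency whose sine/cosine reference
    set correlates best with the EEG segment (samples x channels)."""
    t = np.arange(eeg.shape[0]) / fs
    scores = [max_canon_corr(
        eeg,
        np.column_stack([f_(2 * np.pi * f * (h + 1) * t)
                         for h in range(harmonics)
                         for f_ in (np.sin, np.cos)]))
        for f in freqs]
    return freqs[int(np.argmax(scores))]
```

On a synthetic two-channel segment oscillating at 10 Hz, the classifier selects 10 Hz from the candidate set even with added noise, which is the property the ITR measurement builds on.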
Affiliation(s)
- Wen Chen: Graduate Institute of Automation Technology, National Taipei University of Technology, Taipei 10608, Taiwan
- Shih-Kang Chen: Department of Mechatronics Control, Industrial Technology Research Institute, Hsinchu 310401, Taiwan
- Yi-Hung Liu: Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Yu-Jen Chen: Department of Radiation Oncology, MacKay Memorial Hospital, Taipei 10449, Taiwan
- Chin-Sheng Chen: Graduate Institute of Automation Technology, National Taipei University of Technology, Taipei 10608, Taiwan. Correspondence: ; Tel.: +886-2-27712171 (ext. 4325)
12. Xu Z, Wang T, Cao J, Bao Z, Jiang T, Gao F. BECT Spike Detection Based on Novel EEG Sequence Features and LSTM Algorithms. IEEE Trans Neural Syst Rehabil Eng 2021;29:1734-1743. PMID: 34428145. DOI: 10.1109/tnsre.2021.3107142.
Abstract
Benign epilepsy with spike waves in the centrotemporal region (BECT) is one of the most common epileptic syndromes in children and seriously threatens children's nervous system development. The most obvious feature of BECT is the presence of a large number of electroencephalogram (EEG) spikes in the Rolandic area during the interictal period, which is an important basis for assisting neurologists in BECT diagnosis. To this end, the paper proposes a novel BECT spike detection algorithm based on time-domain EEG sequence features and a long short-term memory (LSTM) neural network. Three time-domain sequence features that clearly characterize BECT spikes are extracted for EEG representation. The synthetic minority oversampling technique (SMOTE) is applied to address the spike imbalance in the EEGs, and a bi-directional LSTM (BiLSTM) is trained for spike detection. The algorithm is evaluated using EEG data from 15 BECT patients recorded at the Children's Hospital, Zhejiang University School of Medicine (CHZU). The experiments show that the proposed algorithm obtains an average F1 score of 88.54%, sensitivity of 92.04%, and precision of 85.75%, generally outperforming several state-of-the-art spike detection methods.
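The SMOTE step named in this abstract oversamples the minority class (spikes) by interpolating between nearby minority samples. A minimal numpy sketch of that general idea (the BiLSTM training itself is omitted, and this is not the paper's implementation):

```python
import numpy as np

def smote(X_min, n_new, k=5, seed=0):
    """Synthesize n_new minority-class samples: pick a minority sample,
    pick one of its k nearest minority neighbours, and interpolate a
    random fraction of the way between them."""
    rng = np.random.default_rng(seed)
    out = np.empty((n_new, X_min.shape[1]))
    for m in range(n_new):
        i = rng.integers(len(X_min))
        dists = np.linalg.norm(X_min - X_min[i], axis=1)
        nbrs = np.argsort(dists)[1:k + 1]   # skip the sample itself
        j = rng.choice(nbrs)
        out[m] = X_min[i] + rng.random() * (X_min[j] - X_min[i])
    return out
```

Because every synthetic sample lies on a segment between two real minority samples, the new points stay inside the minority class's region of feature space rather than being arbitrary noise.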