1
Liu B, Gao H, Jiang Y, Wu J. Research on a soft saturation nonlinear SSVEP signal feature extraction algorithm. Sci Rep 2024; 14:17043. PMID: 39048655; PMCID: PMC11269718; DOI: 10.1038/s41598-024-67853-6.
Abstract
Brain-computer interfaces (BCIs) based on steady-state visual evoked potentials (SSVEP) have received widespread attention due to their high information transfer rate, high accuracy, and rich instruction set. However, the performance of their recognition methods depends strongly on the amount of calibration data available for within-subject classification. Some studies use deep learning (DL) algorithms for inter-subject classification, which can reduce the calibration process, but performance still lags well behind intra-subject classification. To address these problems, this paper proposes e-SSVEPNet, an efficient deep learning network for SSVEP signal recognition based on a soft saturation nonlinear module. For inputs below zero, the module produces an exponential-style output, improving robustness to noise. Under the conditions of the SSVEP data set, two sliding time window lengths (1 s and 0.5 s), and three training data sizes, this paper evaluates the proposed network model against traditional and deep learning baseline methods, and the classification results of different nonlinear modules are compared. Extensive experimental results show that the proposed network achieves the highest average intra-subject classification accuracy on the SSVEP data set, improves SSVEP signal classification and recognition performance, and retains higher decoding accuracy for short signals, giving it strong potential for high-speed SSVEP-based BCIs.
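The abstract describes the soft saturation module only as "exponential-style output below zero". A minimal NumPy sketch of such an activation (ELU-like; a plausible stand-in, since the authors' exact formulation is not given in the abstract):

```python
import numpy as np

def soft_saturation(x, alpha=1.0):
    """ELU-style soft-saturation nonlinearity (illustrative, not the paper's exact module).

    Positive inputs pass through unchanged; below zero the output saturates
    smoothly toward -alpha via an exponential, bounding the response to large
    negative (noisy) inputs.
    """
    x = np.asarray(x, dtype=float)
    return np.where(x > 0, x, alpha * (np.exp(x) - 1.0))

# Identity for positive inputs; smooth saturation toward -alpha for negative ones.
print(soft_saturation(np.array([-5.0, -1.0, 0.0, 2.0])))
```

The bounded negative branch is what gives the claimed noise robustness: an arbitrarily large negative artifact can shift the output by at most `alpha`.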
Affiliation(s)
- Bo Liu
- Shenyang Ligong University, Shenyang, Liaoning, China
- Hongwei Gao
- Shenyang Ligong University, Shenyang, Liaoning, China
- Yueqiu Jiang
- Shenyang Ligong University, Shenyang, Liaoning, China
- Jiaxuan Wu
- Shenyang Ligong University, Shenyang, Liaoning, China
2
Mwata-Velu T, Zamora E, Vasquez-Gomez JI, Ruiz-Pinales J, Sossa H. Multiclass Classification of Visual Electroencephalogram Based on Channel Selection, Minimum Norm Estimation Algorithm, and Deep Network Architectures. Sensors (Basel) 2024; 24:3968. PMID: 38931751; PMCID: PMC11207572; DOI: 10.3390/s24123968.
Abstract
This work addresses the challenge of classifying multiclass visual EEG signals into 40 classes for brain-computer interface (BCI) applications using deep learning architectures. The visual multiclass approach offers a significant advantage for BCI applications, since each class label can supervise a distinct BCI task, allowing more than one BCI interaction to be supervised at once. However, because of the nonlinearity and nonstationarity of EEG signals, multiclass classification based on EEG features remains a significant challenge for BCI systems. In the present work, mutual-information-based discriminant channel selection and minimum-norm estimate algorithms were implemented to select discriminant channels and enhance the EEG data. Deep EEGNet and convolutional recurrent neural networks were then implemented separately to classify the EEG data for image visualization into 40 labels. Using k-fold cross-validation, average classification accuracies of 94.8% and 89.8% were obtained with these two network architectures. The satisfactory results offer a new implementation opportunity for multitask embedded BCI applications utilizing a reduced number of both channels (<50%) and network parameters (<110 K).
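The mutual-information channel-selection step can be illustrated with scikit-learn's MI estimator on synthetic data; a hedged sketch (the paper's actual feature extraction, estimator settings, and selection threshold may differ):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

rng = np.random.default_rng(0)
n_trials, n_channels = 200, 8
labels = rng.integers(0, 2, n_trials)

# Synthetic per-channel features: only channels 0 and 3 carry class information.
X = rng.normal(size=(n_trials, n_channels))
X[:, 0] += 2.0 * labels
X[:, 3] -= 1.5 * labels

mi = mutual_info_classif(X, labels, random_state=0)
ranked = np.argsort(mi)[::-1]          # channels sorted by estimated MI, descending
selected = ranked[: n_channels // 2]   # keep the top 50% of channels
print(selected)
```

Ranking channels by MI with the class label and keeping the top half mirrors the paper's goal of using fewer than 50% of the electrodes.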
Affiliation(s)
- Tat'y Mwata-Velu
- Robotics and Mechatronics Lab, Centro de Investigación en Computación, Instituto Politécnico Nacional (CIC-IPN), Avenida Juan de Dios Bátiz esquina Miguel Othón de Mendizábal, Colonia Nueva Industrial Vallejo, Gustavo A. Madero, Mexico City 07738, Mexico
- Section Électricité, Institut Supérieur Pédagogique Technique de Kinshasa (I.S.P.T.-KIN), Av. de la Science 5, Gombe, Kinshasa 03287, Democratic Republic of the Congo
- Telematics and Digital Signal Processing Research Groups (CAs), Department of Electronics Engineering, Universidad de Guanajuato, Salamanca 36885, Mexico
- Erik Zamora
- Robotics and Mechatronics Lab, Centro de Investigación en Computación, Instituto Politécnico Nacional (CIC-IPN), Avenida Juan de Dios Bátiz esquina Miguel Othón de Mendizábal, Colonia Nueva Industrial Vallejo, Gustavo A. Madero, Mexico City 07738, Mexico
- Juan Irving Vasquez-Gomez
- Centro de Innovación y Desarrollo Tecnológico en Cómputo, Instituto Politécnico Nacional, Avenida Juan de Dios Bátiz esquina Miguel Othón de Mendizábal, Colonia Nueva Industrial, Gustavo A. Madero, Mexico City 07738, Mexico
- Jose Ruiz-Pinales
- Telematics and Digital Signal Processing Research Groups (CAs), Department of Electronics Engineering, Universidad de Guanajuato, Salamanca 36885, Mexico
- Humberto Sossa
- Robotics and Mechatronics Lab, Centro de Investigación en Computación, Instituto Politécnico Nacional (CIC-IPN), Avenida Juan de Dios Bátiz esquina Miguel Othón de Mendizábal, Colonia Nueva Industrial Vallejo, Gustavo A. Madero, Mexico City 07738, Mexico
3
Hua C, Tao J, Zhou Z, Chai L, Yan Y, Liu J, Fu R. EEG classification model for virtual reality motion sickness based on multi-scale CNN feature correlation. Comput Methods Programs Biomed 2024; 251:108218. PMID: 38728828; DOI: 10.1016/j.cmpb.2024.108218.
Abstract
BACKGROUND: Virtual reality motion sickness (VRMS) is a key issue hindering the development of virtual reality technology, and accurate detection of its occurrence is the first prerequisite for solving it. OBJECTIVE: In this paper, a convolutional neural network (CNN) EEG detection model based on multi-scale feature correlation is proposed for detecting VRMS. METHODS: The model uses multi-scale 1D convolutional layers to extract multi-scale temporal features from multi-lead EEG data, then computes the correlations of the extracted multi-scale features across all leads to form feature adjacency matrices, converting time-domain features into correlation-based brain-network features and thus strengthening the feature representation. The correlation features of each layer are fused, fed into a channel attention module to filter the channels, and classified with a fully connected network. To validate the model, subjects were recruited to experience six virtual roller-coaster scenes, and resting EEG data were collected before and after the task. RESULTS: The accuracy, precision, recall, and F1-score of the model for recognizing VRMS are 98.66%, 98.65%, 98.68%, and 98.66%, respectively. The proposed model outperforms current classic and advanced EEG recognition models. SIGNIFICANCE: The model can be used to recognize VRMS from resting-state EEG.
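The correlation-adjacency step, turning per-lead features into a brain-network-style matrix, can be sketched in a few lines; a hedged illustration with random features (the paper's multi-scale CNN that produces the features is omitted):

```python
import numpy as np

rng = np.random.default_rng(5)
n_leads, n_feat = 30, 64
# One scale's temporal features per EEG lead (stand-in for CNN outputs).
feats = rng.normal(size=(n_leads, n_feat))

# Lead-by-lead correlation of the extracted features: a symmetric adjacency
# matrix that replaces raw time-domain features with network-style features.
adj = np.corrcoef(feats)
print(adj.shape)
```

In the model this matrix (one per scale) is what gets fused and passed to the channel-attention module, rather than the raw temporal features.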
Affiliation(s)
- Chengcheng Hua
- School of Automation, C-IMER, CICAEET, Nanjing University of Information Science & Technology, Nanjing 210044, China
- Jianlong Tao
- School of Automation, C-IMER, CICAEET, Nanjing University of Information Science & Technology, Nanjing 210044, China
- Zhanfeng Zhou
- School of Automation, C-IMER, CICAEET, Nanjing University of Information Science & Technology, Nanjing 210044, China
- Lining Chai
- School of Automation, C-IMER, CICAEET, Nanjing University of Information Science & Technology, Nanjing 210044, China
- Ying Yan
- School of Automation, C-IMER, CICAEET, Nanjing University of Information Science & Technology, Nanjing 210044, China
- Jia Liu
- School of Automation, C-IMER, CICAEET, Nanjing University of Information Science & Technology, Nanjing 210044, China
- Rongrong Fu
- Measurement Technology and Instrumentation Key Laboratory of Hebei Province, Department of Electrical Engineering, Yanshan University, Qinhuangdao 066000, China
4
Chen SY, Chang CM, Chiang KJ, Wei CS. SSVEP-DAN: Cross-Domain Data Alignment for SSVEP-Based Brain-Computer Interfaces. IEEE Trans Neural Syst Rehabil Eng 2024; 32:2027-2037. PMID: 38781061; DOI: 10.1109/tnsre.2024.3404432.
Abstract
Steady-state visual-evoked potential (SSVEP)-based brain-computer interfaces (BCIs) offer a non-invasive means of communication through high-speed speller systems. However, their efficiency is highly dependent on individual training data acquired during time-consuming calibration sessions. To address the challenge of data insufficiency in SSVEP-based BCIs, we introduce SSVEP-DAN, the first dedicated neural network model designed to align SSVEP data across different domains, encompassing various sessions, subjects, or devices. Our experimental results demonstrate the ability of SSVEP-DAN to transform existing source SSVEP data into supplementary calibration data. This results in a significant improvement in SSVEP decoding accuracy while reducing the calibration time. We envision SSVEP-DAN playing a crucial role in future applications of high-performance SSVEP-based BCIs. The source code for this work is available at: https://github.com/CECNL/SSVEP-DAN.
5
Zhang X, Zhang T, Jiang Y, Zhang W, Lu Z, Wang Y, Tao Q. A novel brain-controlled prosthetic hand method integrating AR-SSVEP augmentation, asynchronous control, and machine vision assistance. Heliyon 2024; 10:e26521. PMID: 38463871; PMCID: PMC10920167; DOI: 10.1016/j.heliyon.2024.e26521.
Abstract
Background and objective: Brain-computer interface (BCI) systems based on steady-state visual evoked potentials (SSVEP) are expected to help disabled patients achieve alternative prosthetic hand assistance. However, existing studies still have shortcomings in interaction aspects such as the stimulus paradigm and control logic. The purpose of this study is to innovate the visual stimulus paradigm and asynchronous decoding/control strategy by integrating augmented reality (AR) technology, and to propose an asynchronous pattern recognition algorithm, thereby improving the interaction logic and practical capabilities of a prosthetic hand driven by the BCI system. Methods: An asynchronous visual stimulus paradigm based on an AR interface was proposed, with eight control modes: Grasp, Put down, Pinch, Point, Fist, Palm push, Hold pen, and Initial. According to the attentional orienting characteristics of the paradigm, a novel asynchronous pattern recognition algorithm combining center-extended canonical correlation analysis and a support vector machine (Center-ECCA-SVM) was proposed. The study then introduced an intelligent BCI system switch based on a deep learning object detection algorithm (YOLOv4) to improve the level of user interaction. Finally, two experiments were designed to test the performance of the brain-controlled prosthetic hand system and its practical performance in real scenarios. Results: Under the AR paradigm of this study, compared with a liquid crystal display (LCD) paradigm, the average SSVEP spectrum amplitude across subjects increased by 17.41% and the signal-to-noise ratio (SNR) by 3.52%. The average stimulus pattern recognition accuracy was 96.71 ± 3.91%, 2.62% higher than the LCD paradigm. With a data analysis window of 2 s, the Center-ECCA-SVM classifier achieved 94.66 ± 3.87% and 97.40 ± 2.78% asynchronous pattern recognition accuracy under the Normal and Tolerant metrics, respectively, and the YOLOv4-tiny model reached 25.29 fps with 96.4% confidence when detecting the prosthetic hand in real time. Finally, the brain-controlled prosthetic hand helped subjects complete four daily-life tasks in real scenes within acceptable times, verifying the effectiveness and practicability of the system. Conclusion: This research improves the user-interaction level of a BCI-driven prosthetic hand, with advances in the SSVEP paradigm, asynchronous pattern recognition, interaction, and control logic. It also provides support for BCI-based alternative prosthetic control and movement disorder rehabilitation programs.
Affiliation(s)
- Xiaodong Zhang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
- Shaanxi Key Laboratory of Intelligent Robot, Xi'an, Shaanxi, 710049, China
- Teng Zhang
- Zhejiang Normal University, Jinhua, Zhejiang, 321004, China
- Shaanxi Key Laboratory of Intelligent Robot, Xi'an, Shaanxi, 710049, China
- Yongyu Jiang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
- Weiming Zhang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
- Zhufeng Lu
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
- Yu Wang
- School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, Shaanxi, 710049, China
- Qing Tao
- School of Mechanical Engineering, Xinjiang University, Wulumuqi, Xinjiang, 830000, China
6
Wang R, Zhou T, Li Z, Zhao J, Li X. Using oscillatory and aperiodic neural activity features for identifying idle state in SSVEP-based BCIs reduces false triggers. J Neural Eng 2023; 20:066032. PMID: 38016453; DOI: 10.1088/1741-2552/ad1054.
Abstract
Objective. In existing studies, rhythmic (oscillatory) components were used as the main features to identify brain states, such as control and idle states, while non-rhythmic (aperiodic) components were ignored. Recent studies have shown that aperiodic (1/f) activity is functionally related to cognitive processes. It is not clear whether aperiodic activity can distinguish brain states in asynchronous brain-computer interfaces (BCIs) to reduce false triggers. In this paper, we propose an asynchronous method based on the fusion of oscillatory and aperiodic features for steady-state visual evoked potential-based BCIs. Approach. The proposed method first separates the oscillatory and aperiodic components of the control and idle states using irregular-resampling auto-spectral analysis (IRASA). Oscillatory features are then extracted as the spectral power at the fundamental, second-harmonic, and third-harmonic frequencies of the oscillatory component, and aperiodic features as the slope and intercept of a first-order polynomial fit to the aperiodic component's spectrum on log-log axes. This produces two feature pools (oscillatory and aperiodic features). Next, feature selection (dimensionality reduction) is applied to the feature pools using Bonferroni-corrected p-values from a two-way analysis of variance. Last, these spatially specific, statistically significant features are used as classifier input to identify the idle state. Main results. On a 7-target dataset from 15 subjects, the mix of oscillatory and aperiodic features achieved an average accuracy of 88.39%, compared to 83.53% when using oscillatory features alone (a 4.86% improvement). The results demonstrate that the proposed idle-state recognition method achieves enhanced performance by incorporating aperiodic features. Significance. Our results demonstrated that (1) aperiodic features are effective in recognizing idle states and (2) fusing oscillatory and aperiodic features enhances classification performance by 4.86% compared to oscillatory features alone.
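The aperiodic slope/intercept feature is, at its core, a first-order polynomial fit to the power spectrum on log-log axes. A minimal sketch on a synthetic 1/f spectrum (IRASA, which the paper uses to isolate the aperiodic component first, is omitted here):

```python
import numpy as np

# Synthetic aperiodic spectrum: power = C * f^chi, i.e. a straight line in log-log.
freqs = np.linspace(2.0, 40.0, 200)
true_slope, true_intercept = -1.5, 2.0
psd = 10.0 ** (true_intercept + true_slope * np.log10(freqs))

# Aperiodic features: slope and intercept of a first-order fit in log-log space.
slope, intercept = np.polyfit(np.log10(freqs), np.log10(psd), deg=1)
print(slope, intercept)
```

On real EEG the fit would be applied to the IRASA-separated aperiodic component, and the recovered slope/intercept pair becomes the per-electrode feature described in the abstract.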
Affiliation(s)
- Rui Wang
- Department of Electrical Engineering and the Key Laboratory of Intelligent Rehabilitation and Neuromodulation of Hebei Province, Yanshan University, Qinhuangdao 066004, People's Republic of China
- Tianyi Zhou
- Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Zhuhai 519087, People's Republic of China
- Zheng Li
- Center for Cognition and Neuroergonomics, State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Zhuhai 519087, People's Republic of China
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, People's Republic of China
- Jing Zhao
- Department of Electrical Engineering and the Key Laboratory of Intelligent Rehabilitation and Neuromodulation of Hebei Province, Yanshan University, Qinhuangdao 066004, People's Republic of China
- Xiaoli Li
- State Key Laboratory of Cognitive Neuroscience and Learning, Beijing Normal University, Beijing 100875, People's Republic of China
7
Mai X, Ai J, Wei Y, Zhu X, Meng J. Phase-Locked Time-Shift Data Augmentation Method for SSVEP Brain-Computer Interfaces. IEEE Trans Neural Syst Rehabil Eng 2023; 31:4096-4105. PMID: 37815966; DOI: 10.1109/tnsre.2023.3323351.
Abstract
Steady-state visual evoked potential (SSVEP) based brain-computer interfaces (BCIs) have achieved an information transfer rate (ITR) of over 300 bits/min, but abundant training data are required. The performance of SSVEP algorithms deteriorates greatly under limited data, and the existing time-shift data augmentation method fails to help because the phase-locked requirement between training samples is violated. To address this issue, this study proposes a novel augmentation method, phase-locked time-shift (PLTS), for SSVEP-BCI. The similarity between epochs at different time moments is evaluated, and a unique time-shift step is calculated for each class to augment additional data epochs in each trial. The results showed that PLTS significantly improved the classification performance of SSVEP algorithms on the BETA SSVEP dataset. Moreover, with one calibration block, slightly prolonging the calibration duration (from 48 s to 51.5 s) increased the ITR from 40.88±4.54 bits/min to 122.61±7.05 bits/min with PLTS. This study provides a new perspective on augmenting data epochs for training-based SSVEP-BCI, improves classification accuracy and ITR under limited training data, and thus facilitates real-life applications of SSVEP-based brain spellers.
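The key constraint in PLTS is that each class's time-shift step preserves stimulus phase, i.e. the shift is an integer number of stimulus periods. A simplified sketch (the paper's similarity-based step selection is replaced here by a fixed one-period step; sampling rate and window length are assumptions):

```python
import numpy as np

fs = 250.0            # sampling rate in Hz (assumed)
stim_freq = 10.0      # stimulus frequency of this class, in Hz
period = int(round(fs / stim_freq))   # samples per stimulus period

rng = np.random.default_rng(1)
t = np.arange(1000) / fs
# One recorded trial: 10 Hz SSVEP plus background noise.
epoch = np.sin(2 * np.pi * stim_freq * t) + 0.1 * rng.normal(size=t.size)

def plts_augment(x, step, n_aug, win):
    """Cut extra training epochs at phase-locked offsets (multiples of `step`)."""
    return [x[k * step : k * step + win] for k in range(n_aug)]

augmented = plts_augment(epoch, step=period, n_aug=4, win=500)
print(len(augmented), augmented[0].shape)
```

Because every offset is a whole number of stimulus periods, the SSVEP component has identical phase in all augmented epochs, which is exactly the property that naive (arbitrary-offset) time shifting violates.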
8
Mai X, Sheng X, Shu X, Ding Y, Zhu X, Meng J. A Calibration-Free Hybrid Approach Combining SSVEP and EOG for Continuous Control. IEEE Trans Neural Syst Rehabil Eng 2023; 31:3480-3491. PMID: 37610901; DOI: 10.1109/tnsre.2023.3307814.
Abstract
While SSVEP-BCIs have been widely developed to control external devices, most rely on a discrete control strategy. A continuous SSVEP-BCI enables users to continuously deliver commands and receive real-time feedback from the devices, but it suffers from the transition-state problem, a period of erroneous recognition when users shift their gaze between targets. To resolve this issue, we proposed a novel calibration-free Bayesian approach hybridizing SSVEP and electrooculography (EOG). First, canonical correlation analysis (CCA) was applied to detect the evoked SSVEPs, and saccades during gaze shifts were detected from EOG data using an adaptive threshold method. Then, the new target after the gaze shift was recognized with a Bayesian optimization approach that combined the SSVEP and saccade detections and calculated an optimized probability distribution over the targets. Eighteen healthy subjects participated in offline and online experiments. The offline experiments showed that the proposed hybrid BCI had significantly higher overall continuous accuracy and shorter gaze-shifting time than FBCCA, CCA, MEC, and PSDA. In online experiments, the proposed hybrid BCI significantly outperformed the CCA-based SSVEP-BCI in continuous accuracy (77.61 ± 1.36% vs. 68.86 ± 1.08%) and gaze-shifting time (0.93 ± 0.06 s vs. 1.94 ± 0.08 s). Participants also perceived a significant improvement over the CCA-based SSVEP-BCI when the newly proposed decoding approach was used. These results validate the efficacy of the proposed hybrid Bayesian approach for continuous BCI control without any calibration. This study provides an effective framework for combining SSVEP and EOG and promotes the potential applications of plug-and-play BCIs in continuous control.
9
Mirzabagherian H, Menhaj MB, Suratgar AA, Talebi N, Abbasi Sardari MR, Sajedin A. Temporal-spatial convolutional residual network for decoding attempted movement related EEG signals of subjects with spinal cord injury. Comput Biol Med 2023; 164:107159. PMID: 37531857; DOI: 10.1016/j.compbiomed.2023.107159.
Abstract
Brain-computer interfaces (BCIs) offer a promising approach to restoring hand functionality for people with cervical spinal cord injury (SCI). Reliable classification of brain activities, based on appropriately flexible feature extraction, could enhance BCI system performance. In the present study, based on convolutional layers with temporal-spatial, separable, and depthwise structures, we develop the Temporal-Spatial Convolutional Residual Network (TSCR-Net) and Temporal-Spatial Convolutional Iterative Residual Network (TSCIR-Net) to classify electroencephalogram (EEG) signals. Using EEG signals from five different hand-movement classes of people with SCI, we compare the effectiveness of the TSCIR-Net and TSCR-Net models with several competitive methods. We use a Bayesian hyperparameter optimization algorithm to tune the hyperparameters of the compact convolutional neural networks. To show the high generalizability of the proposed models, we compare their results across different frequency ranges. Our proposed models decoded distinctive characteristics of different movement efforts and obtained higher classification accuracy than previous deep neural networks. Our findings indicate that the TSCIR-Net and TSCR-Net models achieve better classification accuracies than the compared methods in the literature: 71.11% and 64.55% on the EEG_All data set and 57.74% and 67.87% on the EEG_Low frequency data set, respectively.
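The depthwise/separable building block these networks are built from can be illustrated generically: a per-channel temporal convolution followed by a 1x1 pointwise mix across channels. A hedged NumPy sketch (shapes and kernel sizes are illustrative, not the paper's architecture):

```python
import numpy as np

def depthwise_separable_conv1d(x, depth_k, point_w):
    """x: (C, T); depth_k: (C, K) per-channel kernels; point_w: (C_out, C)."""
    C, T = x.shape
    # Depthwise stage: each channel is filtered with its own temporal kernel.
    depth_out = np.stack(
        [np.convolve(x[c], depth_k[c], mode="valid") for c in range(C)]
    )                                  # shape (C, T - K + 1)
    # Pointwise (1x1) stage: spatial mixing across channels at each time step.
    return point_w @ depth_out         # shape (C_out, T - K + 1)

rng = np.random.default_rng(6)
x = rng.normal(size=(8, 100))                      # 8 EEG channels, 100 samples
out = depthwise_separable_conv1d(
    x, rng.normal(size=(8, 5)), rng.normal(size=(4, 8))
)
print(out.shape)
```

Splitting the convolution this way is what keeps such EEG networks compact: C*K + C_out*C weights instead of C_out*C*K for a full convolution.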
Affiliation(s)
- Hamed Mirzabagherian
- Department of Electrical Engineering, Amirkabir University of Technology, Hafez Ave. 15875-4413, Tehran, Iran
- Mohammad Bagher Menhaj
- Department of Electrical Engineering, Amirkabir University of Technology, Hafez Ave. 15875-4413, Tehran, Iran
- Amir Abolfazl Suratgar
- Department of Electrical Engineering, Amirkabir University of Technology, Hafez Ave. 15875-4413, Tehran, Iran
- Nasibeh Talebi
- Department of Biomedical Engineering, Faculty of Engineering, Shahed University, Tehran, Iran
- Atena Sajedin
- Department of Electrical Engineering, Amirkabir University of Technology, Hafez Ave. 15875-4413, Tehran, Iran
10
Du H, Riddell RP, Wang X. A hybrid complex-valued neural network framework with applications to electroencephalogram (EEG). Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104862.
11
|
Wang D, Liu A, Xue B, Wu L, Chen X. Improving the performance of SSVEP-BCI contaminated by physiological noise via adversarial training. Medicine in Novel Technology and Devices 2023. DOI: 10.1016/j.medntd.2023.100213.
12
Chen J, Zhang Y, Pan Y, Xu P, Guan C. A transformer-based deep neural network model for SSVEP classification. Neural Netw 2023; 164:521-534. PMID: 37209444; DOI: 10.1016/j.neunet.2023.04.045.
Abstract
Steady-state visual evoked potential (SSVEP) is one of the most commonly used control signals in brain-computer interface (BCI) systems. However, conventional spatial filtering methods for SSVEP classification depend heavily on subject-specific calibration data, so methods that alleviate this demand are urgently needed. In recent years, developing methods that work in the inter-subject scenario has become a promising new direction. As a popular deep learning model, the Transformer has been used in EEG classification tasks owing to its excellent performance. In this study, we therefore proposed a Transformer-based deep learning model for SSVEP classification in the inter-subject scenario, termed SSVEPformer, the first application of the Transformer to SSVEP classification. Inspired by previous studies, we adopted the complex spectrum features of SSVEP data as the model input, enabling the model to simultaneously explore spectral and spatial information for classification. Furthermore, to fully utilize the harmonic information, an extended SSVEPformer based on filter bank technology (FB-SSVEPformer) was proposed to improve classification performance. Experiments were conducted on two open datasets (Dataset 1: 10 subjects, 12 targets; Dataset 2: 35 subjects, 40 targets). The experimental results show that the proposed models achieve better classification accuracy and information transfer rate than other baseline methods. The proposed models validate the feasibility of Transformer-based deep learning models for SSVEP classification and could serve as potential models to alleviate the calibration procedure in practical applications of SSVEP-based BCI systems.
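A "complex spectrum" input of the kind SSVEPformer uses can be formed by concatenating the real and imaginary parts of each channel's band-limited FFT; a minimal sketch (the exact band limits and normalization used in the paper may differ):

```python
import numpy as np

fs = 250.0
rng = np.random.default_rng(3)
eeg = rng.normal(size=(8, 500))          # channels x samples, a 2 s segment

def complex_spectrum_features(x, fs, f_lo=8.0, f_hi=64.0):
    """Band-limited FFT per channel; real and imaginary parts concatenated."""
    spec = np.fft.rfft(x, axis=-1)
    freqs = np.fft.rfftfreq(x.shape[-1], d=1.0 / fs)
    # Small tolerance so band edges are kept despite floating-point rounding.
    band = (freqs >= f_lo - 1e-6) & (freqs <= f_hi + 1e-6)
    return np.concatenate([spec[:, band].real, spec[:, band].imag], axis=-1)

feats = complex_spectrum_features(eeg, fs)
print(feats.shape)   # one feature row per channel
```

Keeping real and imaginary parts separately (rather than magnitude only) preserves phase, which is what lets the model exploit spectral and spatial structure jointly.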
Affiliation(s)
- Jianbo Chen
- School of Computer Science and Technology, Laboratory for Brain Science and Medical Artificial Intelligence, Southwest University of Science and Technology, Mianyang, China
- Yangsong Zhang
- School of Computer Science and Technology, Laboratory for Brain Science and Medical Artificial Intelligence, Southwest University of Science and Technology, Mianyang, China
- MOE Key Laboratory for NeuroInformation, Clinical Hospital of Chengdu Brain Science Institute, and Center for Information in BioMedicine, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Yudong Pan
- School of Computer Science and Technology, Laboratory for Brain Science and Medical Artificial Intelligence, Southwest University of Science and Technology, Mianyang, China
- Peng Xu
- MOE Key Laboratory for NeuroInformation, Clinical Hospital of Chengdu Brain Science Institute, and Center for Information in BioMedicine, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Cuntai Guan
- School of Computer Science and Engineering, Nanyang Technological University (NTU), Singapore
13
Wang S, Ji B, Shao D, Chen W, Gao K. A Methodology for Enhancing SSVEP Features Using Adaptive Filtering Based on the Spatial Distribution of EEG Signals. Micromachines (Basel) 2023; 14:976. PMID: 37241600; DOI: 10.3390/mi14050976.
Abstract
In this paper, we propose an EEG classification algorithm that integrates canonical correlation analysis (CCA) with adaptive filtering to enhance the detection of steady-state visual evoked potentials (SSVEPs) in a brain-computer interface (BCI) speller. An adaptive filter is placed in front of the CCA algorithm to improve the signal-to-noise ratio (SNR) of SSVEP signals by removing background electroencephalographic (EEG) activity. An ensemble method is developed to integrate recursive least squares (RLS) adaptive filters corresponding to the multiple stimulation frequencies. The method is tested on SSVEP signals recorded from six targets in our own experiment and on the EEG of a public 40-target SSVEP dataset from Tsinghua University. The accuracy of the plain CCA method and the CCA-based integrated RLS filter algorithm (RLS-CCA) are compared. Experimental results show that the proposed RLS-CCA method significantly improves classification accuracy over pure CCA. Its advantage is most pronounced when the number of EEG leads is low (three occipital and five non-occipital electrodes), where accuracy reaches 91.23%, making it well suited to wearable settings where high-density EEG is hard to collect.
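A standard RLS adaptive filter, the building block placed in front of CCA here, can be sketched as follows. This is a generic textbook RLS on synthetic signals, with an idealized reference input; the paper's exact filter configuration (order, forgetting factor, reference construction per stimulation frequency) is not specified in the abstract:

```python
import numpy as np

fs = 250.0
t = np.arange(500) / fs
rng = np.random.default_rng(4)
clean = np.sin(2 * np.pi * 10.0 * t)         # underlying 10 Hz SSVEP component
d = clean + 0.5 * rng.normal(size=t.size)    # observed EEG: SSVEP + background
x = clean + 0.05 * rng.normal(size=t.size)   # reference input (idealized here)

def rls_filter(d, x, order=8, lam=0.99, delta=100.0):
    """Recursive least squares: adapt weights w so w . x_window tracks d.

    Returns the filter output y (tracked component) and the error e = d - y.
    """
    n = len(d)
    w = np.zeros(order)
    P = np.eye(order) / delta                # regularized inverse-correlation estimate
    y = np.zeros(n)
    e = np.zeros(n)
    xbuf = np.zeros(order)                   # sliding window of reference samples
    for k in range(n):
        xbuf = np.roll(xbuf, 1)
        xbuf[0] = x[k]
        pi = P @ xbuf
        g = pi / (lam + xbuf @ pi)           # RLS gain vector
        y[k] = w @ xbuf                      # a priori output
        e[k] = d[k] - y[k]
        w = w + g * e[k]
        P = (P - np.outer(g, pi)) / lam      # update inverse correlation
    return y, e

y, e = rls_filter(d, x)
```

After convergence, `y` tracks the component of the observation correlated with the reference while `e` carries the background activity; running one such filter per stimulation frequency and combining them is the ensemble idea described above.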
Affiliation(s)
- Shengyu Wang
- School of Information Science and Technology, Donghua University, Shanghai 201620, China
| | - Bowen Ji
- Unmanned System Research Institute, Northwestern Polytechnical University, Xi'an 710072, China
- Innovation Center NPU Chongqing, Northwestern Polytechnical University, Chongqing 401135, China
| | - Dian Shao
- Unmanned System Research Institute, Northwestern Polytechnical University, Xi'an 710072, China
| | - Wanru Chen
- School of Information Science and Technology, Donghua University, Shanghai 201620, China
| | - Kunpeng Gao
- School of Information Science and Technology, Donghua University, Shanghai 201620, China
14
Mwata-Velu T, Niyonsaba-Sebigunda E, Avina-Cervantes JG, Ruiz-Pinales J, Velu-A-Gulenga N, Alonso-Ramírez AA. Motor Imagery Multi-Tasks Classification for BCIs Using the NVIDIA Jetson TX2 Board and the EEGNet Network. Sensors (Basel) 2023; 23:4164. [PMID: 37112504 PMCID: PMC10145994 DOI: 10.3390/s23084164]
Abstract
Nowadays, Brain-Computer Interfaces (BCIs) still attract considerable interest because of the advantages they offer in numerous domains, notably assisting people with motor disabilities in communicating with the surrounding environment. However, challenges of portability, instantaneous processing time, and accurate data processing remain for many BCI system setups. This work implements an embedded motor-imagery multi-task classifier based on the EEGNet network integrated into the NVIDIA Jetson TX2 board. Two strategies are developed to select the most discriminant channels: the former uses an accuracy-based classifier criterion, while the latter evaluates electrode mutual information to form discriminant channel subsets. Next, the EEGNet network is implemented to classify the discriminant channel signals. Additionally, a cyclic learning algorithm is implemented at the software level to accelerate model convergence and fully exploit the hardware resources of the NJT2 (the NVIDIA Jetson TX2). Finally, motor imagery Electroencephalogram (EEG) signals provided by HaLT's public benchmark were used, together with the k-fold cross-validation method. Average accuracies of 83.7% and 81.3% were achieved by classifying EEG signals per subject and per motor imagery task, respectively. Each task was processed with an average latency of 48.7 ms. This framework offers an alternative for online EEG-BCI systems requiring short processing times and reliable classification accuracy.
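The mutual-information channel-selection strategy mentioned above can be sketched roughly as below. The per-trial log-variance feature, the histogram MI estimator, and the toy data are illustrative choices, not the paper's.

```python
import numpy as np

def mutual_info(x, y, bins=8):
    """Histogram estimate of mutual information I(x; y) in nats,
    for a continuous feature x and integer class labels y."""
    edges = np.histogram_bin_edges(x, bins=bins)
    xd = np.clip(np.digitize(x, edges[1:-1]), 0, bins - 1)
    joint = np.zeros((bins, int(y.max()) + 1))
    for xi, yi in zip(xd, y):
        joint[xi, yi] += 1
    joint /= joint.sum()
    px = joint.sum(axis=1, keepdims=True)
    py = joint.sum(axis=0, keepdims=True)
    nz = joint > 0
    return float((joint[nz] * np.log(joint[nz] / (px @ py)[nz])).sum())

def rank_channels(trials, labels):
    """Rank channels by MI between per-trial log-variance and labels.
    trials: (n_trials, n_channels, n_samples)."""
    feats = np.log(trials.var(axis=2) + 1e-12)
    mi = [mutual_info(feats[:, c], labels) for c in range(feats.shape[1])]
    return np.argsort(mi)[::-1]  # most informative channel first

# Toy data: channel 0 carries class-dependent power, channels 1-2 are noise.
rng = np.random.default_rng(1)
labels = np.repeat([0, 1], 40)
trials = rng.standard_normal((80, 3, 128))
trials[labels == 1, 0, :] *= 3.0  # class 1 boosts channel 0's amplitude
print(rank_channels(trials, labels)[0])  # → 0
```

The selected subset would then be fed to EEGNet, which is the division of labor the abstract describes.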
Affiliation(s)
- Tat'y Mwata-Velu: Centro de Investigación en Computación, Instituto Politécnico Nacional (CIC-IPN), Avenida Juan de Dios Bátiz Esquina Miguel Othón de Mendizábal Colonia Nueva Industrial Vallejo, Alcaldía Gustavo A. Madero, Ciudad de Mexico C.P. 07738, Mexico; Institut Supérieur Pédagogique Technique de Kinshasa (I.S.P.T.-KIN), Av. de la Science 5, Gombe, Kinshasa 3287, Democratic Republic of the Congo; Telematics and Digital Signal Processing Research Groups (CAs), Department of Electronics Engineering, Universidad de Guanajuato, Salamanca 36885, Mexico
- Edson Niyonsaba-Sebigunda: Institut Supérieur Pédagogique Technique de Kinshasa (I.S.P.T.-KIN), Av. de la Science 5, Gombe, Kinshasa 3287, Democratic Republic of the Congo
- Juan Gabriel Avina-Cervantes: Telematics and Digital Signal Processing Research Groups (CAs), Department of Electronics Engineering, Universidad de Guanajuato, Salamanca 36885, Mexico
- Jose Ruiz-Pinales: Institut Supérieur Pédagogique Technique de Kinshasa (I.S.P.T.-KIN), Av. de la Science 5, Gombe, Kinshasa 3287, Democratic Republic of the Congo
- Narcisse Velu-A-Gulenga: Institut Supérieur Pédagogique de Kikwit (I.S.P. KIKWIT), Av Nzundu 2, Com. Lukolela, Kikwit 8211, Democratic Republic of the Congo
- Adán Antonio Alonso-Ramírez: Instituto Tecnológico Nacional de México en Celaya (TecNM-Celaya), Av. Antonio García Cubas Pte 600, Celaya C.P. 38010, Guanajuato, Mexico
15
Wan Z, Cheng W, Li M, Zhu R, Duan W. GDNet-EEG: An attention-aware deep neural network based on group depth-wise convolution for SSVEP stimulation frequency recognition. Front Neurosci 2023; 17:1160040. [PMID: 37123356 PMCID: PMC10133471 DOI: 10.3389/fnins.2023.1160040]
Abstract
Background: Steady-state visually evoked potential (SSVEP)-based early glaucoma diagnosis requires effective data processing (e.g., deep learning) to provide accurate stimulation frequency recognition. We therefore propose a group depth-wise convolutional neural network (GDNet-EEG), a novel electroencephalography (EEG)-oriented deep learning model tailored to learn the regional and network characteristics of EEG-based brain activity for SSVEP stimulation frequency recognition. Method: Group depth-wise convolution is proposed to extract temporal and spectral features from the EEG signal of each brain region and to represent regional characteristics as diversely as possible. Furthermore, an EEG attention mechanism, consisting of EEG channel-wise attention and specialized network-wise attention, is designed to identify essential brain regions and form significant feature maps as specialized brain functional networks. Two public SSVEP datasets (the large-scale benchmark and the BETA dataset) and their combination are used to validate the classification performance of our model. Results: With input samples of 1 s signal length, GDNet-EEG achieves average classification accuracies of 84.11%, 85.93%, and 93.35% on the benchmark, BETA, and combined datasets, respectively. Compared with the comparison baselines, the average classification accuracy of GDNet-EEG trained on the combined dataset improved by between 1.96% and 18.2%. Conclusion: Our approach is potentially suitable for providing accurate SSVEP stimulation frequency recognition and for use in early glaucoma diagnosis.
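Depth-wise convolution, the building block behind the proposed group depth-wise layers, can be illustrated in a few lines: each channel is filtered independently, with no mixing across channels. The kernel values and shapes here are arbitrary, and the actual model adds region-wise grouping and attention on top.

```python
import numpy as np

def depthwise_conv1d(x, kernels):
    """Depth-wise 1-D convolution: each channel is filtered by its own
    kernel, with no mixing across channels (cf. grouped convolution).
    x: (n_channels, n_samples); kernels: (n_channels, k)."""
    n_ch, n = x.shape
    k = kernels.shape[1]
    out = np.empty((n_ch, n - k + 1))
    for c in range(n_ch):
        # Reverse the kernel so this matches CNN-style cross-correlation.
        out[c] = np.convolve(x[c], kernels[c][::-1], mode="valid")
    return out

x = np.arange(12, dtype=float).reshape(3, 4)   # 3 channels, 4 time points
kernels = np.tile([0.5, 0.5], (3, 1))          # a moving average per channel
print(depthwise_conv1d(x, kernels))            # channel 0 → [0.5, 1.5, 2.5]
```

Grouping channels by brain region, as GDNet-EEG does, amounts to applying such per-channel filters within region-wise channel groups before any cross-channel mixing.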
Affiliation(s)
- Zhijiang Wan: The First Affiliated Hospital of Nanchang University, Nanchang University, Nanchang, Jiangxi, China; School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China; Industrial Institute of Artificial Intelligence, Nanchang University, Nanchang, Jiangxi, China
- Wangxinjun Cheng: Queen Mary College of Nanchang University, Nanchang University, Nanchang, Jiangxi, China
- Manyu Li: School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China
- Renping Zhu: School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China; Industrial Institute of Artificial Intelligence, Nanchang University, Nanchang, Jiangxi, China; School of Information Management, Wuhan University, Wuhan, China
- Wenfeng Duan: The First Affiliated Hospital of Nanchang University, Nanchang University, Nanchang, Jiangxi, China
16
Wan Z, Li M, Liu S, Huang J, Tan H, Duan W. EEGformer: A transformer-based brain activity classification method using EEG signal. Front Neurosci 2023; 17:1148855. [PMID: 37034169 PMCID: PMC10079879 DOI: 10.3389/fnins.2023.1148855]
Abstract
Background: Effective analysis methods for steady-state visual evoked potential (SSVEP) signals are critical in supporting early diagnosis of glaucoma. Most efforts have focused on adapting existing techniques to the SSVEP-based brain-computer interface (BCI) task rather than proposing new ones specifically suited to the domain. Method: Given that electroencephalogram (EEG) signals possess temporal, regional, and synchronous characteristics of brain activity, we propose a transformer-based EEG analysis model, EEGformer, to capture these EEG characteristics in a unified manner. We adopt a one-dimensional convolutional neural network (1DCNN) to automatically extract EEG-channel-wise features. The output is fed into the EEGformer, which is sequentially constructed from three components: regional, synchronous, and temporal transformers. In addition to using a large benchmark database (BETA) for the SSVEP-BCI application to validate model performance, we compare EEGformer to current state-of-the-art deep learning models on two EEG datasets obtained from our previous studies: the SJTU emotion EEG dataset (SEED) and a depressive EEG database (DepEEG). Results: The experimental results show that EEGformer achieves the best classification performance across all three EEG datasets, indicating that our model architecture and the unified learning of EEG characteristics can improve classification performance. Conclusion: EEGformer generalizes well to different EEG datasets, demonstrating that our approach is potentially suitable for providing accurate brain activity classification across application scenarios such as SSVEP-based early glaucoma diagnosis, emotion recognition, and depression discrimination.
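The regional, synchronous, and temporal transformers all rest on ordinary scaled dot-product self-attention. A minimal sketch of that primitive (weights and dimensions invented, not the paper's configuration) is:

```python
import numpy as np

def softmax(z, axis=-1):
    z = z - z.max(axis=axis, keepdims=True)  # numerical stability
    e = np.exp(z)
    return e / e.sum(axis=axis, keepdims=True)

def self_attention(X, Wq, Wk, Wv):
    """Scaled dot-product self-attention over a token sequence X of shape
    (n_tokens, d_model); each token attends to all others."""
    Q, K, V = X @ Wq, X @ Wk, X @ Wv
    scores = Q @ K.T / np.sqrt(K.shape[1])
    A = softmax(scores, axis=-1)  # each row is an attention distribution
    return A @ V, A

rng = np.random.default_rng(0)
X = rng.standard_normal((5, 8))  # e.g. 5 EEG-channel tokens, 8 features each
Wq, Wk, Wv = (rng.standard_normal((8, 8)) for _ in range(3))
out, A = self_attention(X, Wq, Wk, Wv)
print(out.shape)  # (5, 8); rows of A sum to 1
```

In EEGformer terms, the "tokens" fed to each of the three transformer stages are, respectively, channel-wise, synchrony-wise, and time-wise slices of the 1DCNN feature tensor.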
Affiliation(s)
- Zhijiang Wan: The First Affiliated Hospital of Nanchang University, Nanchang University, Nanchang, Jiangxi, China; School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China; Industrial Institute of Artificial Intelligence, Nanchang University, Nanchang, Jiangxi, China
- Manyu Li: School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China
- Shichang Liu: School of Computer Science, Shaanxi Normal University, Xi'an, Shaanxi, China
- Jiajin Huang: Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Hai Tan: School of Computer Science, Nanjing Audit University, Nanjing, Jiangsu, China
- Wenfeng Duan: The First Affiliated Hospital of Nanchang University, Nanchang University, Nanchang, Jiangxi, China
17
Xu D, Tang F, Li Y, Zhang Q, Feng X. An Analysis of Deep Learning Models in SSVEP-Based BCI: A Survey. Brain Sci 2023; 13:483. [PMID: 36979293 PMCID: PMC10046535 DOI: 10.3390/brainsci13030483]
Abstract
The brain–computer interface (BCI), which provides a new way for humans to directly communicate with robots without the involvement of the peripheral nervous system, has recently attracted much attention. Among all the BCI paradigms, BCIs based on steady-state visual evoked potentials (SSVEPs) have the highest information transfer rate (ITR) and the shortest training time. Meanwhile, deep learning has provided an effective and feasible solution for solving complex classification problems in many fields, and many researchers have started to apply deep learning to classify SSVEP signals. However, the designs of deep learning models vary drastically. There are many hyper-parameters that influence the performance of the model in an unpredictable way. This study surveyed 31 deep learning models (2011–2023) that were used to classify SSVEP signals and analyzed their design aspects including model input, model structure, performance measure, etc. Most of the studies that were surveyed in this paper were published in 2021 and 2022. This survey is an up-to-date design guide for researchers who are interested in using deep learning models to classify SSVEP signals.
Affiliation(s)
- Dongcen Xu: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Fengzhen Tang: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China
- Yiping Li: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China
- Qifeng Zhang: State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China
- Xisheng Feng (corresponding author): State Key Laboratory of Robotics, Shenyang Institute of Automation, Chinese Academy of Sciences, Shenyang 110016, China; Institutes for Robotics and Intelligent Manufacturing, Chinese Academy of Sciences, Shenyang 110169, China
18
Convolutional Neural Network with a Topographic Representation Module for EEG-Based Brain-Computer Interfaces. Brain Sci 2023; 13:268. [PMID: 36831811 PMCID: PMC9954538 DOI: 10.3390/brainsci13020268]
Abstract
Convolutional neural networks (CNNs) have shown great potential in the field of brain-computer interfaces (BCIs) due to their ability to directly process raw electroencephalogram (EEG) signals without artificial feature extraction, and some CNNs have achieved better classification accuracy than traditional methods. However, raw EEG signals are usually represented as a two-dimensional (2-D) matrix of channels and time points, ignoring the spatial topological information of the electrodes. Our goal is to enable a CNN that takes raw EEG signals as input to learn spatial topological features and improve its classification performance while essentially preserving its original structure. We propose an EEG topographic representation module (TRM) consisting of (1) a mapping block from the raw EEG signals to a 3-D topographic map and (2) a convolution block from the topographic map to an output of the same size as the input. According to the size of the convolutional kernel used in the convolution block, we design two TRM variants, TRM-(5,5) and TRM-(3,3). We embed the two TRM types into three widely used CNNs (ShallowConvNet, DeepConvNet, and EEGNet) and test them on two publicly available datasets (the Emergency Braking During Simulated Driving Dataset (EBDSDD) and the High Gamma Dataset (HGD)). The results show that the classification accuracies of all three CNNs improve on both datasets when the TRMs are used. With TRM-(5,5), the average classification accuracies of DeepConvNet, EEGNet, and ShallowConvNet improve by 6.54%, 1.72%, and 2.07% on the EBDSDD and by 6.05%, 3.02%, and 5.14% on the HGD, respectively; with TRM-(3,3), they improve by 7.76%, 1.71%, and 2.17% on the EBDSDD and by 7.61%, 5.06%, and 6.28% on the HGD, respectively. These gains indicate that TRMs can mine spatial topological EEG information. More importantly, since the output of a TRM has the same size as its input, CNNs that take raw EEG signals as input can use the module without changing their original structure.
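The mapping block of a TRM, from a channel-by-time matrix to a topographic tensor, can be sketched as follows. The 3x3 grid and electrode coordinates are hypothetical and do not match the paper's montage.

```python
import numpy as np

# Hypothetical electrode -> (row, col) positions on a small scalp grid.
GRID_POS = {"F3": (0, 0), "Fz": (0, 1), "F4": (0, 2),
            "C3": (1, 0), "Cz": (1, 1), "C4": (1, 2),
            "P3": (2, 0), "Pz": (2, 1), "P4": (2, 2)}

def to_topographic_map(eeg, channels, shape=(3, 3)):
    """Map (n_channels, n_samples) EEG onto an (H, W, n_samples)
    topographic tensor; grid cells with no electrode stay zero."""
    topo = np.zeros(shape + (eeg.shape[1],))
    for ch, row in zip(channels, eeg):
        r, c = GRID_POS[ch]
        topo[r, c] = row
    return topo

eeg = np.ones((2, 4))  # two channels, four time points
topo = to_topographic_map(eeg, ["Cz", "P4"])
print(topo.shape)  # (3, 3, 4); Cz lands at (1, 1), P4 at (2, 2)
```

The TRM's convolution block would then run a (5,5) or (3,3) kernel over this grid and project back to the original channels-by-time size, which is what lets the module slot into an unmodified CNN.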
19
Saffari F, Khadem A. A deep learning method for classification of steady-state visual evoked potentials in a brain-computer interface speller. Brain-Computer Interfaces 2023. [DOI: 10.1080/2326263x.2023.2166651]
Affiliation(s)
- Farzad Saffari: Department of Biomedical Engineering, Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran
- Ali Khadem: Department of Biomedical Engineering, Faculty of Electrical Engineering, K. N. Toosi University of Technology, Tehran, Iran
20
Liang N, Wang C, Li S, Xie X, Lin J, Zhong W. The classification of flash visual evoked potential based on deep learning. BMC Med Inform Decis Mak 2023; 23:13. [PMID: 36658545 PMCID: PMC9851116 DOI: 10.1186/s12911-023-02107-5]
Abstract
BACKGROUND: Visual electrophysiology is an objective visual function examination widely used in clinical work and medical identification; it can objectively evaluate visual function and locate lesions according to waveform changes. However, in visual electrophysiological examinations, the flash visual evoked potential (FVEP) varies greatly among individuals, resulting in different waveforms in different normal subjects. Moreover, most FVEP wave labelling is performed automatically by a machine and then manually corrected by professional clinical technicians; these labels may be biased by individual variation among subjects, incomplete clinical examination data, differing professional skills, personal habits, and other factors. Through a retrospective study of big data, an artificial intelligence algorithm is used to maintain high generalization ability in complex situations and improve the accuracy of prescreening. METHODS: A novel multi-input neural network based on convolution and confidence branching (MCAC-Net) for retinitis pigmentosa (RP) recognition and out-of-distribution detection is proposed. The MCAC-Net, with global and local feature extraction, is designed for the FVEP signal, which carries different local and global information, and a confidence branch is added for out-of-distribution sample detection. For the proposed manual features, a new input layer is added. RESULTS: The model is verified on a clinically collected FVEP dataset, achieving an accuracy of 90.7% on the classification task and 93.3% on the out-of-distribution detection task. CONCLUSION: We built a deep learning-based FVEP classification algorithm that promises to be an excellent tool for screening RP by using FVEP signals.
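At inference time, a confidence branch like the one described above amounts to gating the class prediction on a learned confidence score. A schematic sketch (the threshold, shapes, and names are invented, not MCAC-Net's actual heads) is:

```python
import numpy as np

def predict_with_ood(logits, confidence, threshold=0.5):
    """Return the class index when the confidence-branch output is high
    enough, otherwise flag the sample as out-of-distribution (OOD).
    logits: (n_classes,) class scores; confidence: scalar in [0, 1]."""
    if confidence < threshold:
        return "OOD"  # defer: the sample looks unlike the training data
    return int(np.argmax(logits))

print(predict_with_ood(np.array([0.2, 2.1, -1.0]), 0.9))  # → 1
print(predict_with_ood(np.array([0.2, 2.1, -1.0]), 0.1))  # → OOD
```

The point of training a separate confidence head, rather than thresholding the softmax maximum, is that the confidence target can be supervised to be low on distribution-shifted samples even when the classifier itself is overconfident.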
Affiliation(s)
- Na Liang: College of Computer Science, Chongqing University, Chongqing, China
- Chengliang Wang: College of Computer Science, Chongqing University, Chongqing, China
- Shiying Li: Department of Ophthalmology, Xiang'an Hospital of Xiamen University, Xiamen University, Xiamen, China; Department of Ophthalmology, Eye Institute of Xiamen University, Xiamen, China
- Xin Xie: College of Computer Science, Chongqing University, Chongqing, China
- Jun Lin: Department of Ophthalmology, Yongchuan People's Hospital of Chongqing, Chongqing, China
- Wen Zhong: Chongqing Health Statistics Information Center, Chongqing, China
21
Guney OB, Ozkan H. Transfer learning of an ensemble of DNNs for SSVEP BCI spellers without user-specific training. J Neural Eng 2023; 20. [PMID: 36535036 DOI: 10.1088/1741-2552/acacca]
Abstract
Objective: Steady-state visually evoked potentials (SSVEPs), measured with electroencephalogram (EEG), yield decent information transfer rates (ITRs) in brain-computer interface (BCI) spellers. However, the current high-performing SSVEP BCI spellers in the literature require an initial lengthy and tiring user-specific training for each new user for system adaptation, including data collection with EEG experiments, algorithm training, and calibration (all before the actual use of the system). This impedes the widespread use of BCIs. To ensure practicality, we propose a novel target identification method based on an ensemble of deep neural networks (DNNs), which does not require any sort of user-specific training. Approach: We exploit already-existing literature datasets from participants of previously conducted EEG experiments to first train a global target identifier DNN, which is then fine-tuned to each participant. We transfer this ensemble of fine-tuned DNNs to the new user instance, determine the k most representative DNNs according to the participants' statistical similarities to the new user, and predict the target character through a weighted combination of the ensemble predictions. Main results: The proposed method significantly outperforms all state-of-the-art alternatives for all stimulation durations in [0.2-1.0] s on the two large-scale benchmark and BETA datasets, achieving impressive ITRs of 155.51 bits/min and 114.64 bits/min. Code is available for reproducibility: https://github.com/osmanberke/Ensemble-of-DNNs. Significance: Our Ensemble-DNN method has the potential to promote the practical widespread deployment of BCI spellers in daily lives, as we provide the highest performance while enabling immediate system use without any user-specific training.
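The similarity-weighted combination over the k most representative DNNs can be sketched as below; the similarity values here are placeholders for the authors' statistical similarity measure, and the per-model probabilities stand in for DNN outputs.

```python
import numpy as np

def ensemble_predict(prob_list, similarities, k=2):
    """Combine per-model class probabilities with weights proportional to
    each model's similarity to the new user, keeping only the k most
    similar models. prob_list: list of (n_classes,) arrays."""
    sims = np.asarray(similarities, dtype=float)
    top = np.argsort(sims)[::-1][:k]          # indices of the k best models
    w = sims[top] / sims[top].sum()           # normalized similarity weights
    combined = sum(wi * prob_list[i] for wi, i in zip(w, top))
    return int(np.argmax(combined)), combined

probs = [np.array([0.7, 0.3]), np.array([0.2, 0.8]), np.array([0.1, 0.9])]
pred, combined = ensemble_predict(probs, [0.9, 0.8, 0.1], k=2)
print(pred)  # models 0 and 1 are kept; combined ≈ [0.465, 0.535], so → 1
```

Because only the k most similar fine-tuned models vote, a dissimilar participant's model (index 2 above) cannot drag the prediction toward its own idiosyncrasies.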
Affiliation(s)
- Osman Berke Guney: Department of Electrical and Computer Engineering, Boston University, Boston, MA, United States of America
- Huseyin Ozkan: Faculty of Engineering and Natural Sciences, Sabanci University, Istanbul, Turkey
22
Albahri AS, Al-qaysi ZT, Alzubaidi L, Alnoor A, Albahri OS, Alamoodi AH, Bakar AA. A Systematic Review of Using Deep Learning Technology in the Steady-State Visually Evoked Potential-Based Brain-Computer Interface Applications: Current Trends and Future Trust Methodology. Int J Telemed Appl 2023; 2023:7741735. [PMID: 37168809 PMCID: PMC10164869 DOI: 10.1155/2023/7741735]
Abstract
The significance of deep learning techniques in relation to steady-state visually evoked potential- (SSVEP-) based brain-computer interface (BCI) applications is assessed through a systematic review. Three reliable databases, PubMed, ScienceDirect, and IEEE, were considered to gather relevant scientific and theoretical articles. Initially, 125 papers were found between 2010 and 2021 related to this integrated research field. After the filtering process, only 30 articles were identified and classified into five categories based on their type of deep learning methods. The first category, convolutional neural network (CNN), accounts for 70% (n = 21/30). The second category, recurrent neural network (RNN), accounts for 10% (n = 3/30). The third and fourth categories, deep neural network (DNN) and long short-term memory (LSTM), account for 6% (n = 30). The fifth category, restricted Boltzmann machine (RBM), accounts for 3% (n = 1/30). The literature's findings in terms of the main aspects identified in existing applications of deep learning pattern recognition techniques in SSVEP-based BCI, such as feature extraction, classification, activation functions, validation methods, and achieved classification accuracies, are examined. A comprehensive mapping analysis was also conducted, which identified six categories. Current challenges of ensuring trustworthy deep learning in SSVEP-based BCI applications were discussed, and recommendations were provided to researchers and developers. The study critically reviews the current unsolved issues of SSVEP-based BCI applications in terms of development challenges based on deep learning techniques and selection challenges based on multicriteria decision-making (MCDM). A trust proposal solution is presented with three methodology phases for evaluating and benchmarking SSVEP-based BCI applications using fuzzy decision-making techniques. 
Valuable insights and recommendations for researchers and developers in the SSVEP-based BCI and deep learning are provided.
Affiliation(s)
- A. S. Albahri: Iraqi Commission for Computers and Informatics (ICCI), Baghdad, Iraq
- Z. T. Al-qaysi: Department of Computer Science, Computer Science and Mathematics College, Tikrit University, Tikrit, Iraq
- Laith Alzubaidi: School of Mechanical, Medical, and Process Engineering, Queensland University of Technology, Brisbane, QLD 4000, Australia; ARC Industrial Transformation Training Centre—Joint Biomechanics, Queensland University of Technology, Brisbane, QLD 4000, Australia
- O. S. Albahri: Computer Techniques Engineering Department, Mazaya University College, Nasiriyah, Iraq; Department of Computer Science and Information Technology, La Trobe University, Melbourne, VIC, Australia
- A. H. Alamoodi: Faculty of Computing and Meta-Technology (FKMT), Universiti Pendidikan Sultan Idris (UPSI), Perak, Malaysia
23
Rostami E, Ghassemi F, Tabanfar Z. Transfer Learning assisted PodNet for Stimulation Frequency Detection in Steady state visually evoked potential-based BCI Spellers. Brain-Computer Interfaces 2022. [DOI: 10.1080/2326263x.2022.2134623]
Affiliation(s)
- Elham Rostami: Department of Biomedical Engineering, Amirkabir University of Technology, Tehran, Iran
- Farnaz Ghassemi: Department of Biomedical Engineering, Amirkabir University of Technology, Tehran, Iran
- Zahra Tabanfar: Department of Biomedical Engineering, Amirkabir University of Technology, Tehran, Iran
24
Xiao X, Xu L, Yue J, Pan B, Xu M, Ming D. Fixed template network and dynamic template network: novel network designs for decoding steady-state visual evoked potentials. J Neural Eng 2022; 19. [PMID: 36206723 DOI: 10.1088/1741-2552/ac9861]
Abstract
Objective: Decomposition methods are efficient for decoding steady-state visual evoked potentials (SSVEPs). In recent years, the brain-computer interface community has also been developing deep learning networks for decoding SSVEPs. However, there is no clear evidence that current deep learning models outperform decomposition methods on SSVEP decoding tasks, and many studies have lacked a comparison with state-of-the-art decomposition methods in a fair environment. Approach: This study proposes a novel network design motivated by the work on decomposition methods. The fixed template network (FTN) and dynamic template network (DTN) are two novel networks combining the advantages of fixed templates and subject-specific templates. This study also proposes a data augmentation method for SSVEPs and compares the intra-subject classification performance of DTN and FTN with that of state-of-the-art decomposition methods on three public SSVEP datasets. Main results: The results show that both FTN and DTN achieved suboptimal classification performance compared with state-of-the-art decomposition methods. Significance: Both network designs could enhance the decoding performance of SSVEPs, making them promising networks for improving the practicality of SSVEP-based applications.
Affiliation(s)
- Xiaolin Xiao: Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072, People's Republic of China; Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Lichao Xu: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Jin Yue: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Baizhou Pan: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Minpeng Xu: Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072, People's Republic of China; Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Dong Ming: Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072, People's Republic of China; Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
25
Kwon J, Hwang J, Nam H, Im CH. Novel hybrid visual stimuli incorporating periodic motions into conventional flickering or pattern-reversal visual stimuli for steady-state visual evoked potential-based brain-computer interfaces. Front Neuroinform 2022; 16:997068. [PMID: 36213545 PMCID: PMC9534124 DOI: 10.3389/fninf.2022.997068]
Abstract
In this study, we proposed a new type of hybrid visual stimulus for steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs), which incorporates various periodic motions into conventional flickering stimuli (FS) or pattern reversal stimuli (PRS). Furthermore, we investigated the optimal periodic motion for each of FS and PRS to enhance the performance of SSVEP-based BCIs. Periodic motions were implemented by changing the size of the stimulus according to four different temporal functions, denoted none, square, triangular, and sine, yielding a total of eight hybrid visual stimuli. Additionally, we developed an extended version of filter bank canonical correlation analysis (FBCCA), a state-of-the-art training-free classification algorithm for SSVEP-based BCIs, to enhance the classification accuracy for PRS-based hybrid visual stimuli. Twenty healthy individuals participated in the SSVEP-based BCI experiment to discriminate four visual stimuli with different frequencies. The average classification accuracy and information transfer rate (ITR) were evaluated to compare the performance of SSVEP-based BCIs across the hybrid visual stimuli. The users' visual fatigue under each hybrid visual stimulus was also evaluated. As a result, for FS, the highest performance was achieved when the periodic motion of the sine waveform was incorporated, for all window sizes except 3 s. For PRS, the periodic motion of the square waveform showed the highest classification accuracies for all tested window sizes. No statistically significant difference in performance was observed between the two best stimuli. The average fatigue scores were 5.3 ± 2.05 and 4.05 ± 1.28 for FS with sine-wave periodic motion and PRS with square-wave periodic motion, respectively.
Consequently, our results demonstrated that FS with sine-wave periodic motion and PRS with square-wave periodic motion can effectively improve BCI performance compared to conventional FS and PRS. In addition, thanks to its low visual fatigue, PRS with square-wave periodic motion can be regarded as the most appropriate visual stimulus for long-term use of SSVEP-based BCIs, particularly for window sizes of 2 s or larger.
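As a rough illustration of the four temporal functions named above, the stimulus-size modulation can be sketched in a few lines of numpy; the modulation depth and the sampling grid are illustrative assumptions, not values from the paper:

```python
import numpy as np

def motion_waveform(kind, freq, t, depth=0.2):
    """Relative stimulus-size modulation for one hybrid visual stimulus.

    kind  : 'none' | 'square' | 'triangular' | 'sine' (as in the paper)
    freq  : flicker/motion frequency in Hz
    t     : time axis in seconds (numpy array)
    depth : modulation depth (illustrative value, not from the paper)
    """
    phase = 2 * np.pi * freq * t
    if kind == "none":
        return np.ones_like(t)                     # static size (no motion)
    if kind == "square":
        return 1 + depth * np.sign(np.sin(phase))  # abrupt size switching
    if kind == "triangular":
        saw = (phase / np.pi) % 2 - 1              # sawtooth in [-1, 1)
        return 1 + depth * (2 * np.abs(saw) - 1)   # folded into a triangle wave
    if kind == "sine":
        return 1 + depth * np.sin(phase)           # smooth periodic motion
    raise ValueError(kind)

t = np.linspace(0, 1, 1000, endpoint=False)
for kind in ("none", "square", "triangular", "sine"):
    w = motion_waveform(kind, 10.0, t)
    print(kind, round(float(w.min()), 2), round(float(w.max()), 2))
```

Each waveform oscillates around the nominal size 1 at the stimulation frequency, which is what lets the periodic motion be frequency-tagged alongside the flicker.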
Affiliation(s)
- Jinuk Kwon: Department of Biomedical Engineering, Hanyang University, Seoul, South Korea; Department of Electronic Engineering, Hanyang University, Seoul, South Korea
- Jihun Hwang: Department of Electronic Engineering, Hanyang University, Seoul, South Korea
- Hyerin Nam: Department of Artificial Intelligence, Hanyang University, Seoul, South Korea
- Chang-Hwan Im (correspondence): Departments of Biomedical Engineering, Electronic Engineering, Artificial Intelligence, and HY-KIST Bio-Convergence, Hanyang University, Seoul, South Korea
26
Zhang Z, Han S, Yi H, Duan F, Kang F, Sun Z, Solé-Casals J, Caiafa CF. A Brain-Controlled Vehicle System Based on Steady State Visual Evoked Potentials. Cognit Comput 2022. [DOI: 10.1007/s12559-022-10051-1]
Abstract
In this paper, we propose a human-vehicle cooperative driving system. The objectives of this research are twofold: (1) providing a feasible brain-controlled vehicle (BCV) mode; (2) providing a human-vehicle cooperative control mode. For the first aim, through a brain-computer interface (BCI), we analyse the EEG signal and obtain the driving intentions of the driver. For the second aim, human-vehicle cooperative control is manifested in the BCV combined with obstacle detection assistance. Considering the potential dangers of driving a real motor vehicle outdoors, an obstacle detection module is essential in the human-vehicle cooperative driving system. Obstacle detection and emergency braking ensure the safety of the driver and the vehicle during driving. An EEG system based on steady-state visual evoked potential (SSVEP) is used in the BCI. Simulation and real-vehicle driving experiment platforms were designed to verify the feasibility of the proposed human-vehicle cooperative driving system. Five subjects participated in the simulation experiment and the real-vehicle driving experiment. The outdoor experimental results show that the average accuracy of intention recognition is 90.68 ± 2.96% on the real vehicle platform. In this paper, we verified the feasibility of the SSVEP-based BCV mode and realised the human-vehicle cooperative driving system.
27
Pan Y, Chen J, Zhang Y, Zhang Y. An efficient CNN-LSTM network with spectral normalization and label smoothing technologies for SSVEP frequency recognition. J Neural Eng 2022; 19. [PMID: 36041426 DOI: 10.1088/1741-2552/ac8dc5]
Abstract
OBJECTIVE Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) have received great interest owing to their high information transfer rate (ITR) and the large number of available targets. However, the performance of frequency recognition methods heavily depends on the amount of calibration data for intra-subject classification. Some studies have adopted deep learning (DL) algorithms to conduct inter-subject classification, which can reduce the calculation procedure, but there is still much room for performance improvement compared with intra-subject classification. APPROACH To address these issues, we proposed an efficient SSVEP DL NETwork (termed SSVEPNET) based on 1D convolution and a long short-term memory (LSTM) module. To enhance the performance of SSVEPNET, we adopted spectral normalization and label smoothing technologies when implementing the network architecture. We evaluated SSVEPNET and compared it with other methods for intra- and inter-subject classification under different conditions, i.e., two datasets, two time-window lengths (1 s and 0.5 s), and three sizes of training data. MAIN RESULTS Under all the experimental settings, the proposed SSVEPNET achieved the highest average accuracy for intra- and inter-subject classification on the two SSVEP datasets when compared with other traditional and DL baseline methods. SIGNIFICANCE The extensive experimental results demonstrate that the proposed DL model holds promise to enhance frequency recognition performance in SSVEP-based BCIs. Besides, mixed network structures with CNN and LSTM, together with spectral normalization and label smoothing, could be useful optimization strategies for designing efficient models for EEG data.
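The two optimization strategies named in this abstract are generic techniques; the following numpy illustration of label smoothing and of spectral normalization (largest singular value estimated by power iteration) is a minimal sketch of the ideas, not the paper's implementation:

```python
import numpy as np

def smooth_labels(y, n_classes, eps=0.1):
    """Soften one-hot targets: the true class keeps 1 - eps (plus its share
    of eps/K), the remaining classes share the rest uniformly."""
    onehot = np.eye(n_classes)[y]
    return (1.0 - eps) * onehot + eps / n_classes

def spectral_normalize(W, n_iter=50):
    """Divide W by an estimate of its largest singular value (power
    iteration), so the linear map becomes approximately 1-Lipschitz."""
    rng = np.random.default_rng(0)
    u = rng.standard_normal(W.shape[0])
    for _ in range(n_iter):
        v = W.T @ u
        v /= np.linalg.norm(v)
        u = W @ v
        u /= np.linalg.norm(u)
    sigma = u @ W @ v          # Rayleigh-quotient estimate of sigma_max
    return W / sigma

targets = smooth_labels(np.array([0, 2]), n_classes=4, eps=0.1)
print(targets)  # rows sum to 1; the true class gets 0.925 here

W_sn = spectral_normalize(np.random.default_rng(1).standard_normal((8, 8)))
print(np.linalg.svd(W_sn, compute_uv=False)[0])  # largest singular value ~ 1
```

Label smoothing regularizes the classifier's confidence; spectral normalization constrains layer gain, both of which plausibly help on small EEG training sets.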
Affiliation(s)
- YuDong Pan: Laboratory for Brain Science and Medical Artificial Intelligence, Southwest University of Science and Technology, Mianyang 621010, China
- Jianbo Chen: Laboratory for Brain Science and Medical Artificial Intelligence, Southwest University of Science and Technology, Mianyang 621010, China
- Yangsong Zhang: School of Computer Science and Technology, Southwest University of Science and Technology, Mianyang 621010, China
- Yu Zhang: Department of Bioengineering, Lehigh University, Bethlehem, PA 18015, USA
28
A L1 normalization enhanced dynamic window method for SSVEP-based BCIs. J Neurosci Methods 2022; 380:109688. [PMID: 35973644 DOI: 10.1016/j.jneumeth.2022.109688]
Abstract
BACKGROUND Filter bank canonical correlation analysis (FBCCA) has been widely applied to detect the frequency components of steady-state visual evoked potentials (SSVEP). FBCCA with dynamic window (FBCCA-DW) was recently proposed to improve its performance. FBCCA-DW adaptively chooses a proper window length based on the signal-to-noise ratio (SNR) of the SSVEP signals, which it evaluates from the output of FBCCA using a softmax function and a cost function. In practice, SSVEP signals always contain task-unrelated electroencephalogram (EEG) activity, which degrades SSVEP detection. When the power of the task-unrelated EEG changes, an offset appears in the output of FBCCA. However, because the softmax function is insensitive to such an offset, the SNR used in FBCCA-DW ignores the interference of the task-unrelated EEG, and FBCCA-DW may therefore analyze SSVEP signals at an inappropriate window length. NEW METHOD To solve this issue, we replace the softmax function with L1 normalization, which yields an SNR that responds appropriately to the offset. Since the proposed method takes the task-unrelated EEG into account, it can choose a more appropriate window length. RESULTS We comprehensively validate the proposed method on three publicly available SSVEP datasets. The results indicate that the proposed method improves performance significantly. COMPARISON WITH EXISTING METHODS The proposed method outperforms FBCCA and FBCCA-DW in terms of information transfer rate (ITR). CONCLUSIONS The proposed method strengthens the correlation between the window length and the credibility of the recognition result, showing its potential for practical applications in complex environments.
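The offset argument above can be reproduced numerically: the softmax of the FBCCA scores is unchanged by a common additive offset, while L1-normalized shares shrink with it. A minimal sketch with hypothetical score values:

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())        # shift for numerical stability
    return e / e.sum()

def l1_normalize(x):
    return x / np.abs(x).sum()

rho = np.array([0.6, 0.2, 0.1, 0.1])  # hypothetical FBCCA scores per frequency
shifted = rho + 0.5                    # task-unrelated EEG raises all scores equally

# Softmax is invariant to the common offset: the "SNR" looks unchanged.
print(np.allclose(softmax(rho), softmax(shifted)))   # True

# L1 normalization is not: the winning share shrinks as the offset grows.
print(l1_normalize(rho).max(), l1_normalize(shifted).max())
```

The winning share drops from 0.6 to about 0.37 after the offset, which is exactly the sensitivity the dynamic-window criterion needs to lengthen the window when task-unrelated EEG is strong.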
29
Zhang X, Qiu S, Zhang Y, Wang K, Wang Y, He H. Bidirectional Siamese correlation analysis method for enhancing the detection of SSVEPs. J Neural Eng 2022; 19. [PMID: 35853437 DOI: 10.1088/1741-2552/ac823e]
Abstract
OBJECTIVE Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) have attracted increasing attention due to their high information transfer rate. To improve the performance of SSVEP detection, we propose a bidirectional Siamese correlation analysis (bi-SiamCA) model. APPROACH In this model, an LSTM-based Siamese architecture is designed to measure the similarity between the SSVEP signal and the template in each frequency and obtain the probability that the SSVEP signal belongs to each frequency. Additionally, a maximize agreement module with a designed contrastive loss is adopted in the Siamese architecture to increase the similarity between the SSVEP signal and the reference signal in the same frequency. Moreover, a two-way signal processing mechanism is built to effectively integrate complementary information from two temporal directions of the input signals. Our model uses raw SSVEPs as inputs and can be trained end-to-end. MAIN RESULTS Experimental results on a 40-class dataset and a 12-class dataset indicate that bi-SiamCA can significantly improve the classification accuracy compared with the prominent traditional and deep learning methods, especially under short data lengths. Feature visualizations show that the similarity between the SSVEP signal and the reference signal in the same frequency gradually improved in our model. CONCLUSION The proposed bi-SiamCA model enhances the performance of SSVEP detection and outperforms the compared methods. SIGNIFICANCE Due to its high decoding accuracy under short signals, our approach has great potential to implement a high-speed SSVEP-based BCI.
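The abstract does not spell out the designed contrastive loss; a minimal sketch of the generic contrastive loss such Siamese architectures build on, with hypothetical pair distances, is:

```python
import numpy as np

def contrastive_loss(d, same, margin=1.0):
    """Generic contrastive loss on pair distances d.

    same=1 pairs (e.g. an SSVEP trial and the reference of the same
    frequency) are pulled together; same=0 pairs are pushed apart until
    their distance exceeds the margin. Values here are illustrative.
    """
    same = same.astype(bool)
    loss = np.where(same, d ** 2, np.maximum(0.0, margin - d) ** 2)
    return float(loss.mean())

d = np.array([0.1, 0.2, 0.9, 1.5])   # distances for four hypothetical pairs
same = np.array([1, 1, 0, 0])        # first two pairs share a frequency
print(contrastive_loss(d, same))
```

Minimizing this term increases the similarity between a trial and its same-frequency reference, which is the behaviour the maximize-agreement module is described as enforcing.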
Affiliation(s)
- Xinyi Zhang: Institute of Automation, Chinese Academy of Sciences, 95 Zhongguancun East Road, Haidian District, Beijing 100190, China
- Shuang Qiu: Institute of Automation, Chinese Academy of Sciences, 95 Zhongguancun East Road, Haidian District, Beijing 100190, China
- Yukun Zhang: Institute of Automation, Chinese Academy of Sciences, 95 Zhongguancun East Road, Haidian District, Beijing 100190, China
- Kangning Wang: Academy of Medical Engineering and Translational Medicine, Tianjin University, 92 Weijin Road, Nankai District, Tianjin 300072, China
- Yijun Wang: State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China
- Huiguang He: Institute of Automation, Chinese Academy of Sciences (CASIA), Beijing 100190, China
30
Yao H, Liu K, Deng X, Tang X, Yu H. FB-EEGNet: A fusion neural network across multi-stimulus for SSVEP target detection. J Neurosci Methods 2022; 379:109674. [PMID: 35842015 DOI: 10.1016/j.jneumeth.2022.109674]
Abstract
BACKGROUND Steady-state visual evoked potential (SSVEP) is a prevalent paradigm of brain-computer interfaces (BCIs). Recently, deep neural networks (DNNs) have been employed for SSVEP target recognition. However, current DNN models cannot fully extract information from the SSVEP harmonic components, and they ignore the influence of non-target stimuli. NEW METHOD To employ information from multiple sub-bands and non-target stimulus data, we propose a DNN model for SSVEP target detection, i.e., FB-EEGNet, which fuses features of multiple neural networks. Additionally, we design a multi-label for each sample and optimize the parameters of FB-EEGNet across multiple stimuli to incorporate information from non-target stimuli. RESULTS Under the subject-specific condition, FB-EEGNet achieved average classification accuracies (information transfer rates (ITRs)) of 76.75% (50.70 bits/min) and 89.14% (70.45 bits/min) in a time window of 0.7 s on the public 12-target dataset and our experimental 9-target dataset, respectively. Under the cross-subject condition, FB-EEGNet achieved mean accuracies (ITRs) of 81.72% (67.99 bits/min) and 92.15% (76.12 bits/min) on the public and experimental datasets in a time window of 1 s, respectively. COMPARISON WITH EXISTING METHODS FB-EEGNet outperforms CCNN, EEGNet, CCA, and FBCCA for both subject-dependent and subject-independent SSVEP target recognition. CONCLUSION FB-EEGNet can effectively extract information from multiple sub-bands and cross-stimulus targets, providing a promising way to extract deep features from SSVEP using neural networks.
Affiliation(s)
- Huiming Yao: College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Ke Liu: College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Xin Deng: College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Xianlun Tang: College of Automation, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
- Hong Yu: College of Computer Science and Technology, Chongqing University of Posts and Telecommunications, Chongqing 400065, China
31
Improving Motor Imagery EEG Classification Based on Channel Selection Using a Deep Learning Architecture. Mathematics 2022. [DOI: 10.3390/math10132302]
Abstract
Recently, motor imagery EEG signals have been widely applied in Brain–Computer Interfaces (BCI). These signals are typically observed in the primary motor cortex of the brain, resulting from the imagination of body limb movements. For non-invasive BCI systems, it is not obvious how to place the electrodes to optimize accuracy for a given task. This study proposes a comparative analysis of channel signals exploiting the Deep Learning (DL) technique and a public dataset to locate the most discriminant channels. EEG channels are usually selected based on the function and nomenclature of electrode locations from international standards. Instead, the most suitable configuration for a given paradigm must be determined by analyzing the proper selection of channels. Therefore, an EEGNet network was implemented to classify signals from different channel locations using the accuracy metric. The achieved results were then contrasted with the state of the art. As a result, the proposed method improved BCI classification accuracy.
32
Du Y, Liu J. IENet: a robust convolutional neural network for EEG based brain-computer interfaces. J Neural Eng 2022; 19. [PMID: 35605585 DOI: 10.1088/1741-2552/ac7257]
Abstract
OBJECTIVE Brain-computer interfaces (BCIs) based on electroencephalogram (EEG) are developing into novel application areas with more complex scenarios, which puts higher requirements on the robustness of EEG signal processing algorithms. Deep learning can automatically extract discriminative features and potential dependencies via deep structures, demonstrating strong analytical capabilities in numerous domains such as computer vision (CV) and natural language processing (NLP). Making full use of deep learning to design a robust algorithm capable of analyzing EEG across BCI paradigms is the main work of this paper. APPROACH Inspired by the InceptionV4 and InceptionTime architectures, we introduce a neural network ensemble named InceptionEEG-Net (IENet), where multi-scale convolutional layers and convolutions of length 1 enable the model to extract rich high-dimensional features with limited parameters. In addition, we propose the average receptive field gain for convolutional neural networks (CNNs), which optimizes IENet to detect long patterns at a smaller cost. We compare with the current state-of-the-art methods across five EEG-BCI paradigms: steady-state visual evoked potentials, epilepsy EEG, overt attention P300 visual evoked potentials, covert attention P300 visual evoked potentials, and movement-related cortical potentials. MAIN RESULTS The classification results show that the generalizability of IENet is on par with the state-of-the-art paradigm-agnostic models on the test datasets. Furthermore, the feature explainability analysis of IENet illustrates its capability to extract neurophysiologically interpretable features for different BCI paradigms, ensuring the reliability of the algorithm. SIGNIFICANCE Our results show that IENet can generalize to different BCI paradigms, and that increasing the receptive field size is essential for deep CNNs.
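The "average receptive field gain" is this paper's own metric, but the underlying receptive-field bookkeeping for a stack of 1D convolutions is standard and can be sketched as follows (the layer configuration is illustrative, not IENet's actual architecture):

```python
def receptive_field(layers):
    """Receptive field (in input samples) of one output unit of a stack of
    1D convolution layers, given as (kernel_size, stride) pairs."""
    rf = 1      # a unit in the input sees exactly itself
    jump = 1    # spacing between adjacent units of the current layer, in input samples
    for kernel, stride in layers:
        rf += (kernel - 1) * jump
        jump *= stride
    return rf

# Illustrative stack: two length-11 convs, a stride-2 conv, another length-11 conv.
print(receptive_field([(11, 1), (11, 1), (3, 2), (11, 1)]))  # → 43
```

The strided layer doubles the `jump`, so later kernels widen the receptive field twice as fast per tap, which is why detecting long temporal patterns cheaply hinges on where strides (or pooling) sit in the stack.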
Affiliation(s)
- Yipeng Du: SCCE, University of Science and Technology Beijing, 30 Xueyuan Road, Haidian District, Beijing 100083, China
- Jian Liu: SCCE, University of Science and Technology Beijing, 30 Xueyuan Road, Haidian District, Beijing 100083, China
33
Ji Y, Li F, Fu B, Li Y, Zhou Y, Niu Y, Zhang L, Chen Y, Shi G. Spatial-temporal Network for Fine-grained-level Emotion EEG Recognition. J Neural Eng 2022; 19. [PMID: 35523129 DOI: 10.1088/1741-2552/ac6d7d]
Abstract
Electroencephalogram (EEG)-based affective computing brain-computer interfaces provide the capability for machines to understand human intentions. In practice, people are more concerned with the strength of a certain emotional state over a short period of time, which we call fine-grained-level emotion in this paper. In this study, we built a fine-grained-level emotion EEG dataset that contains two coarse-grained emotions and four corresponding fine-grained-level emotions. To fully extract the features of the EEG signals, we proposed a corresponding fine-grained emotion EEG network (FG-emotionNet) for spatial-temporal feature extraction. Each feature extraction layer is linked to the raw EEG signals to alleviate overfitting and ensure that the spatial features of each scale can be extracted from the raw signals. Moreover, all previous scale features are fused before the current spatial-feature layer to enhance the scale features in the spatial block. Additionally, long short-term memory is adopted as the temporal block to extract temporal features based on the spatial features and to classify the category of fine-grained emotions. Subject-dependent and cross-session experiments demonstrated that the performance of the proposed method is superior to that of representative emotion recognition methods and of methods with similar structures.
Affiliation(s)
- Youshuo Ji: Xidian University, No. 2 South Taibai Road, Xi'an, Shaanxi 710071, China
- Fu Li: Xidian University, No. 2 South Taibai Road, Xi'an, Shaanxi 710071, China
- Boxun Fu: Xidian University, No. 2 South Taibai Road, Xi'an, Shaanxi 710071, China
- Yang Li: Xidian University, No. 2 South Taibai Road, Xi'an, Shaanxi 710071, China
- YiJin Zhou: Xidian University, No. 2 South Taibai Road, Xi'an, Shaanxi 710071, China
- Yi Niu: Xidian University, No. 2 South Taibai Road, Xi'an, Shaanxi 710071, China
- Lijian Zhang: Beijing Institute of Mechanical Equipment, No. 50 Yongding Road, Haidian District, Beijing 100854, China
- Yuanfang Chen: Beijing Institute of Mechanical Equipment, No. 50 Yongding Road, Haidian District, Beijing 100854, China
34
Mahmood M, Kim N, Mahmood M, Kim H, Kim H, Rodeheaver N, Sang M, Yu KJ, Yeo WH. VR-enabled portable brain-computer interfaces via wireless soft bioelectronics. Biosens Bioelectron 2022; 210:114333. [DOI: 10.1016/j.bios.2022.114333]
35
Gao D, Zheng W, Wang M, Wang L, Xiao Y, Zhang Y. A Zero-Padding Frequency Domain Convolutional Neural Network for SSVEP Classification. Front Hum Neurosci 2022; 16:815163. [PMID: 35370578 PMCID: PMC8967947 DOI: 10.3389/fnhum.2022.815163]
Abstract
The brain-computer interface (BCI) based on steady-state visual evoked potential (SSVEP) is one of the fundamental modes of human-computer communication. The main challenge is that the relationship between different SSVEP responses may be nonlinear across states. To improve the performance of SSVEP BCIs, a novel CNN algorithm model is proposed in this study. When computing the signal's power spectral density (PSD) via the discrete Fourier transform, we zero-pad the signal in the time domain to refine the PSD. In this way, the frequency-bin interval in the PSD of the SSVEP matches the minimum gap between the stimulation frequencies. Combining this with the nonlinear transformation capabilities of CNNs in deep learning, a zero-padding frequency domain convolutional neural network (ZPFDCNN) model is proposed. Extensive experiments based on the SSVEP dataset validate the effectiveness of our method. The study verifies that the proposed ZPFDCNN method can improve the ITR of high-speed SSVEP-based BCIs, and it has massive potential in BCI applications.
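The core trick described here, zero-padding the time-domain signal so that the PSD bins become fine enough to resolve closely spaced stimulation frequencies, can be reproduced with numpy; the sampling rate and frequencies below are illustrative, not from the paper:

```python
import numpy as np

fs = 250.0
# 1 s of an 8.2 Hz SSVEP-like tone: with a 1 s window the raw FFT bins
# are 1 Hz apart, too coarse to land on 8.2 Hz.
x = np.sin(2 * np.pi * 8.2 * np.arange(int(fs)) / fs)

f_raw = np.fft.rfftfreq(len(x), 1 / fs)              # 1.0 Hz bin spacing
peak_raw = f_raw[np.argmax(np.abs(np.fft.rfft(x)))]  # snaps to nearest bin

x_pad = np.pad(x, (0, 4 * len(x)))                   # zero-pad to 5x length
f_pad = np.fft.rfftfreq(len(x_pad), 1 / fs)          # 0.2 Hz bin spacing
peak_pad = f_pad[np.argmax(np.abs(np.fft.rfft(x_pad)))]

print(peak_raw, peak_pad)  # → 8.0 8.2
```

Zero-padding interpolates the spectrum rather than adding information, but it lets the PSD's bin grid align with the stimulation-frequency grid, which is what the CNN's frequency-domain input needs.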
Affiliation(s)
- Dongrui Gao: School of Computer Science, Chengdu University of Information Technology, Chengdu, China; School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Wenyin Zheng: School of Computer Science, Chengdu University of Information Technology, Chengdu, China
- Manqing Wang: School of Computer Science, Chengdu University of Information Technology, Chengdu, China; School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Lutao Wang: School of Computer Science, Chengdu University of Information Technology, Chengdu, China
- Yi Xiao: National Key Laboratory of Human Factors Engineering, China Astronaut Research and Training Center, Beijing, China
- Yongqing Zhang (correspondence): School of Computer Science, Chengdu University of Information Technology, Chengdu, China; School of Computer Science and Engineering, University of Electronic Science and Technology of China, Chengdu, China
36
Komolovaitė D, Maskeliūnas R, Damaševičius R. Deep Convolutional Neural Network-Based Visual Stimuli Classification Using Electroencephalography Signals of Healthy and Alzheimer's Disease Subjects. Life (Basel) 2022; 12:374. [PMID: 35330125 PMCID: PMC8950142 DOI: 10.3390/life12030374]
Abstract
Visual perception is an important part of human life. In the context of facial recognition, it allows us to distinguish between emotions and important facial features that distinguish one person from another. However, subjects suffering from memory loss face significant facial processing problems. If the perception of facial features is affected by memory impairment, then it is possible to classify visual stimuli using brain activity data from the visual processing regions of the brain. This study differentiates the aspects of familiarity and emotion by the inversion effect of the face and uses convolutional neural network (CNN) models (EEGNet, EEGNet SSVEP (steady-state visual evoked potentials), and DeepConvNet) to learn discriminative features from raw electroencephalography (EEG) signals. Due to the limited number of available EEG data samples, Generative Adversarial Networks (GAN) and Variational Autoencoders (VAE) are introduced to generate synthetic EEG signals. The generated data are used to pretrain the models, and the learned weights are initialized to train them on the real EEG data. We investigate minor facial characteristics in brain signals and the ability of deep CNN models to learn them. The effect of face inversion was studied, and it was observed that the N170 component has a considerable and sustained delay. As a result, emotional and familiarity stimuli were divided into two categories based on the posture of the face. The categories of upright and inverted stimuli have the smallest incidences of confusion. The model’s ability to learn the face-inversion effect is demonstrated once more.
Affiliation(s)
- Dovilė Komolovaitė: Department of Multimedia Engineering, Kaunas University of Technology, 51368 Kaunas, Lithuania
- Rytis Maskeliūnas (correspondence): Department of Multimedia Engineering, Kaunas University of Technology, 51368 Kaunas, Lithuania
- Robertas Damaševičius: Department of Applied Informatics, Vytautas Magnus University, 44404 Kaunas, Lithuania
37
Ravi A, Lu J, Pearce S, Jiang N. Enhanced System Robustness of Asynchronous BCI in Augmented Reality using Steady-state Motion Visual Evoked Potential. IEEE Trans Neural Syst Rehabil Eng 2022; 30:85-95. [PMID: 34990366 DOI: 10.1109/tnsre.2022.3140772]
Abstract
This study evaluated the effect of a change in background on steady-state visual evoked potential (SSVEP) and steady-state motion visual evoked potential (SSMVEP) based brain-computer interfaces (BCIs) in a small-profile augmented reality (AR) headset. A four-target SSVEP and SSMVEP BCI was implemented using the Cognixion AR headset prototype. An active background (AB) and a non-active background (NB) were evaluated. The signal characteristics and classification performance of the two BCI paradigms were studied. Offline analysis was performed using canonical correlation analysis (CCA) and a complex-spectrum-based convolutional neural network (C-CNN). Finally, the asynchronous pseudo-online performance of the SSMVEP BCI was evaluated. Signal analysis revealed that the SSMVEP stimulus was more robust to a change in background than the SSVEP stimulus in AR. The decoding results revealed that the C-CNN method outperformed CCA for both stimulus types and the NB background, in agreement with results in the literature. The average offline accuracies of C-CNN for W = 1 s were (NB vs. AB): SSVEP: 82% ± 15% vs. 60% ± 21%; SSMVEP: 71.4% ± 22% vs. 63.5% ± 18%. Additionally, for W = 2 s, the AR-SSMVEP BCI with the C-CNN method reached 83.3% ± 27% (NB) and 74.1% ± 22% (AB). The results suggest that with the C-CNN method, the AR-SSMVEP BCI is both robust to changes in background conditions and provides higher decoding accuracy than the AR-SSVEP BCI. This study presents novel results that highlight the robustness and practical applicability of SSMVEP BCIs developed with a low-cost AR headset.
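The CCA baseline used in the offline analysis can be sketched with numpy: correlate the multi-channel EEG with sinusoidal reference templates at each candidate frequency and pick the best match. The signal parameters below are illustrative, not from the study:

```python
import numpy as np

def cca_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    Qx, _ = np.linalg.qr(X - X.mean(axis=0))
    Qy, _ = np.linalg.qr(Y - Y.mean(axis=0))
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def reference(freq, fs, n_samples, n_harmonics=2):
    """Sine/cosine templates at freq and its harmonics (classic CCA refs)."""
    t = np.arange(n_samples) / fs
    return np.column_stack(
        [f(2 * np.pi * h * freq * t)
         for h in range(1, n_harmonics + 1)
         for f in (np.sin, np.cos)])

fs, n = 250, 500
rng = np.random.default_rng(0)
# Two-channel synthetic EEG: a 12 Hz response with channel-dependent phase, plus noise.
eeg = np.column_stack(
    [np.sin(2 * np.pi * 12 * np.arange(n) / fs + p) for p in (0.0, 0.7)])
eeg += 0.5 * rng.standard_normal(eeg.shape)

scores = {f: cca_corr(eeg, reference(f, fs, n)) for f in (10, 12, 15)}
best = max(scores, key=scores.get)
print(best)  # → 12
```

The C-CNN in the study replaces these fixed sinusoidal templates with learned spectral features, which is where its robustness to the active background plausibly comes from.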
38
Kumari N, Anwar S, Bhattacharjee V. Automated visual stimuli evoked multi-channel EEG signal classification using EEGCapsNet. Pattern Recognit Lett 2022. [DOI: 10.1016/j.patrec.2021.11.019]
39
Wong CM, Wang Z, Nakanishi M, Wang B, Rosa A, Chen CLP, Jung TP, Wan F. Online Adaptation Boosts SSVEP-Based BCI Performance. IEEE Trans Biomed Eng 2021; 69:2018-2028. [PMID: 34882542 DOI: 10.1109/tbme.2021.3133594]
Abstract
OBJECTIVE A user-friendly steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) prefers no calibration for its target recognition algorithm; however, the existing calibration-free schemes still perform far behind their calibration-based counterparts. To tackle this issue, learning online from the subject's unlabeled data is investigated as a potential approach to boost the performance of calibration-free SSVEP-based BCIs. METHODS An online adaptation scheme is developed that tunes the spatial filters using the unlabeled data from previous online trials, yielding the online adaptive canonical correlation analysis (OACCA) method. RESULTS A simulation study on two public SSVEP datasets (Datasets I and II) with a total of 105 subjects demonstrated that the proposed online adaptation scheme can boost CCA's average information transfer rate (ITR) from 94.60 to 158.87 bits/min on Dataset I and from 85.80 to 123.91 bits/min on Dataset II. Furthermore, in our online experiment it boosted CCA's ITR from 55.81 bits/min to 95.73 bits/min. More importantly, this online adaptation scheme can easily be combined with any spatial-filtering-based algorithm to achieve online learning. CONCLUSION With online adaptation, the proposed OACCA performed much better than the calibration-free CCA, and comparably to the calibration-based algorithms. SIGNIFICANCE This work provides a general way for SSVEP-based BCIs to learn online from unlabeled data and thus avoid calibration.
40
Ding W, Shan J, Fang B, Wang C, Sun F, Li X. Filter Bank Convolutional Neural Network for Short Time-Window Steady-State Visual Evoked Potential Classification. IEEE Trans Neural Syst Rehabil Eng 2021; 29:2615-2624. [PMID: 34851830 DOI: 10.1109/tnsre.2021.3132162]
Abstract
Convolutional neural networks (CNNs) have gradually been applied to steady-state visual evoked potential (SSVEP) based brain-computer interfaces (BCIs). Frequency-domain features extracted by the fast Fourier transform (FFT) or time-domain signals are used as the network input. In the frequency-domain representation, the features at short time-windows are not obvious, and the phase information of each electrode channel may be ignored. Hence, we propose a time-domain-based CNN method (tCNN) that uses the time-domain signal as the network input, and further propose the filter bank tCNN (FB-tCNN) to improve its performance at short time-windows. We compare FB-tCNN with canonical correlation analysis (CCA) methods and other CNN methods on our dataset and a public dataset. FB-tCNN shows superior performance at short time-windows in the intra-individual test: at the 0.2 s time-window, its accuracy reaches 88.36 ± 4.89% on our dataset, and 77.78 ± 2.16% and 79.21 ± 1.80% in the two sessions of the public dataset, higher than the other methods. The impacts of the number of training subjects and of the data length in the cross-individual setting are also studied, and FB-tCNN shows potential for implementing inter-individual BCIs. Further analysis shows that the deep learning method makes implementing an asynchronous BCI system easier than the training-data-driven CCA. The code is available for reproducibility at https://github.com/DingWenl/FB-tCNN.
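A filter bank front-end of the kind FB-tCNN uses splits the signal into sub-bands before feeding the network. A crude FFT-mask sketch of that decomposition follows; the sub-band edges are illustrative, and real implementations typically use IIR band-pass filters rather than spectral masking:

```python
import numpy as np

def fft_bandpass(x, fs, lo, hi):
    """Zero out spectral content outside [lo, hi] Hz (a crude band-pass,
    standing in for one band-pass filter of a filter bank)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1 / fs)
    X[(f < lo) | (f > hi)] = 0.0
    return np.fft.irfft(X, len(x))

fs, n = 250, 1000
t = np.arange(n) / fs
# Synthetic signal: a 10 Hz fundamental plus a weaker 30 Hz harmonic-band component.
x = np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 30 * t)

bands = [(8, 18), (18, 28), (28, 38)]   # illustrative sub-band edges in Hz
sub = np.stack([fft_bandpass(x, fs, lo, hi) for lo, hi in bands])

# Energy survives only in the sub-bands that contain a component.
print([float(np.round((s ** 2).mean(), 3)) for s in sub])  # → [0.5, 0.0, 0.125]
```

Stacking the sub-band signals as extra input channels gives the CNN separate views of the fundamental and harmonic energy, which is the motivation for the filter bank design.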
41
Guney OB, Oblokulov M, Ozkan H. A Deep Neural Network for SSVEP-based Brain-Computer Interfaces. IEEE Trans Biomed Eng 2021; 69:932-944. [PMID: 34495825] [DOI: 10.1109/tbme.2021.3110440]
Abstract
OBJECTIVE Target identification in brain-computer interface (BCI) spellers refers to electroencephalogram (EEG) classification for predicting the target character that the subject intends to spell. When the visual stimulus of each character is tagged with a distinct frequency, the EEG records steady-state visually evoked potentials (SSVEP) whose spectrum is dominated by the harmonics of the target frequency. In this setting, we address target identification and propose a novel deep neural network (DNN) architecture. METHOD The proposed DNN processes the multi-channel SSVEP with convolutions across the sub-bands of harmonics, channels, and time, and classifies at the fully connected layer. We test on two publicly available large-scale datasets (the benchmark and BETA datasets) consisting of 105 subjects in total with 40 characters. Our first training stage learns a global model by exploiting the statistical commonalities among all subjects, and the second stage fine-tunes to each subject separately by exploiting the individualities. RESULTS Our DNN achieves impressive information transfer rates (ITRs) on both datasets, 265.23 bits/min and 196.59 bits/min, respectively, with only 0.4 seconds of stimulation. The code is available for reproducibility at https://github.com/osmanberke/Deep-SSVEP-BCI. CONCLUSION The presented DNN strongly outperforms the state-of-the-art techniques, as our accuracy and ITR rates are the highest performance results ever reported on these datasets. SIGNIFICANCE Due to its unprecedentedly high speller ITRs and flawless applicability to general SSVEP systems, our technique has great potential in various biomedical engineering settings of BCIs such as communication, rehabilitation and control.
42
Gao Z, Sun X, Liu M, Dang W, Ma C, Chen G. Attention-Based Parallel Multiscale Convolutional Neural Network for Visual Evoked Potentials EEG Classification. IEEE J Biomed Health Inform 2021; 25:2887-2894. [PMID: 33591923] [DOI: 10.1109/jbhi.2021.3059686]
Abstract
Electroencephalography (EEG) decoding is an important part of visual evoked potential-based brain-computer interfaces (BCIs) and directly determines their performance. However, long-time attention to repetitive visual stimuli can cause physical and psychological fatigue, resulting in a weaker reliable response and stronger noise interference, which exacerbates the difficulty of visual evoked potential EEG decoding. In this state, subjects cannot concentrate their attention sufficiently and the frequency response of their brains becomes less reliable. To solve these problems, we propose an attention-based parallel multiscale convolutional neural network (AMS-CNN). Specifically, the AMS-CNN first extracts robust temporal representations via two parallel convolutional layers with small and large temporal filters, respectively. Then, we employ two sequential convolution blocks for spatial fusion and temporal fusion to extract advanced feature representations. Further, we use an attention mechanism to weight the features at different moments according to output-related interest. Finally, we employ a fully connected layer with a softmax activation function for classification. Two fatigue datasets collected in our lab are used to validate the superior classification performance of the proposed method compared with state-of-the-art methods. Analysis reveals the competitiveness of the multiscale convolution and attention mechanism. These results suggest that the proposed framework is a promising solution for improving the decoding performance of visual evoked potential BCIs.
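The attention step this abstract describes (weighting features at different moments by output-related interest, then pooling) can be sketched in NumPy. The convolutional network itself is omitted; the scoring vector `v` stands in for the learned attention parameters, so all names and shapes here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def softmax(z):
    """Numerically stable softmax along the last axis."""
    e = np.exp(z - z.max(axis=-1, keepdims=True))
    return e / e.sum(axis=-1, keepdims=True)

def temporal_attention(features, v):
    """features: (n_steps, d) frame-wise feature map; v: (d,) scoring vector.

    Scores each moment, normalizes the scores into attention weights,
    and returns the weighted sum over time plus the weights themselves.
    """
    w = softmax(features @ v)   # (n_steps,) weights, non-negative, sum to 1
    return w @ features, w      # (d,) attended summary vector
```

In the actual network such a pooled vector would feed the final fully connected softmax classifier.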
43
Xu L, Xu M, Jung TP, Ming D. Review of brain encoding and decoding mechanisms for EEG-based brain-computer interface. Cogn Neurodyn 2021; 15:569-584. [PMID: 34367361] [PMCID: PMC8286913] [DOI: 10.1007/s11571-021-09676-z]
Abstract
A brain-computer interface (BCI) can connect humans and machines directly and has achieved successful applications in the past few decades. Many new BCI paradigms and algorithms have been developed in recent years. Therefore, it is necessary to review new progress in BCIs. This paper summarizes progress for EEG-based BCIs from the perspective of encoding paradigms and decoding algorithms, which are two key elements of BCI systems. Encoding paradigms are grouped by their underlying neural mechanisms, namely sensory- and motor-related, vision-related, cognition-related and hybrid paradigms. Decoding algorithms are reviewed in four categories, namely decomposition algorithms, Riemannian geometry, deep learning and transfer learning. This review provides a comprehensive overview of both modern primary paradigms and algorithms, making it helpful for those who are developing BCI systems.
Affiliation(s)
- Lichao Xu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Minpeng Xu
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, China
- Tzyy-Ping Jung
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, China
- Swartz Center for Computational Neuroscience, University of California, San Diego, USA
- Dong Ming
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, China
44
Hong J, Qin X. Signal processing algorithms for SSVEP-based brain computer interface: State-of-the-art and recent developments. J Intell Fuzzy Syst 2021. [DOI: 10.3233/jifs-201280]
Abstract
Over the past two decades, steady-state visual evoked potential (SSVEP)-based brain computer interface (BCI) systems have been extensively developed. Signal processing algorithms play an important role in this type of BCI; however, there is no comprehensive review of their latest development for SSVEP-based BCIs. By analyzing papers published in authoritative journals in the past five years, signal processing algorithms for the preprocessing, feature extraction and classification modules are discussed in detail. In addition, other aspects of this BCI are mentioned. The following key questions are addressed. (1) In recent years, which signal processing algorithms have been frequently used in each module? (2) Which signal processing algorithms have attracted more attention in recent years? (3) Which modules are the key to signal processing in the BCI field? This information is very important for choosing appropriate algorithms and can also be considered a reference for further research. We hope that this work can provide relevant BCI researchers with valuable information about the latest trends in signal processing algorithms for SSVEP-based BCI systems.
Affiliation(s)
- Jie Hong
- School of Mechanical Engineering, Northwestern Polytechnical University, Xi’an, Shaanxi, China
- Xiansheng Qin
- School of Mechanical Engineering, Northwestern Polytechnical University, Xi’an, Shaanxi, China
45
Sun Q, Chen M, Zhang L, Li C, Kang W. Similarity-constrained task-related component analysis for enhancing SSVEP detection. J Neural Eng 2021; 18. [PMID: 33946051] [DOI: 10.1088/1741-2552/abfdfa]
Abstract
Objective. Task-related component analysis (TRCA) is a representative subject-specific training algorithm in steady-state visual evoked potential (SSVEP)-based brain-computer interfaces. Task-related components (TRCs), extracted by the TRCA-based spatial filtering from electroencephalogram (EEG) signals through maximizing the reproducibility across trials, may contain some task-related inherent noise that is still trial-reproducible. Approach. To address this problem, this study proposed a similarity-constrained TRCA (scTRCA) algorithm to remove the task-related noise and extract TRCs maximally correlated with SSVEPs for enhancing SSVEP detection. Similarity constraints, which were created by introducing covariance matrices between EEG training data and an artificial SSVEP template, were added to the objective function of TRCA. Therefore, a better spatial filter was obtained by maximizing not only the reproducibility across trials but also the similarity between TRCs and SSVEPs. The proposed scTRCA was compared with TRCA, multi-stimulus TRCA, and the sine-cosine reference signal based on two public datasets. Main results. The performance of TRCA in target identification of SSVEPs is improved by introducing similarity constraints. The proposed scTRCA significantly outperformed the other three methods, and the improvement was more significant especially with insufficient training data. Significance. The proposed scTRCA algorithm is promising for enhancing SSVEP detection considering its better performance and robustness against insufficient calibration.
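The plain TRCA objective that scTRCA extends finds a spatial filter maximizing the covariance of the filtered signal across trials relative to its total covariance, a generalized eigenvalue problem. A NumPy sketch of unconstrained TRCA on synthetic trials (the similarity constraints the paper adds are not shown; all data and names are illustrative):

```python
import numpy as np

def trca_filter(trials):
    """Plain TRCA spatial filter.

    trials: (n_trials, n_channels, n_samples) EEG epochs of one stimulus.
    Returns w (n_channels,), the filter maximizing inter-trial reproducibility.
    """
    n_trials, n_channels, _ = trials.shape
    trials = trials - trials.mean(axis=-1, keepdims=True)  # center per channel
    # Q: covariance of all trials concatenated in time
    concat = trials.transpose(1, 0, 2).reshape(n_channels, -1)
    Q = concat @ concat.T
    # S: sum of cross-trial covariances, via (sum_i X_i)(sum_i X_i)^T - sum_i X_i X_i^T
    trial_sum = trials.sum(axis=0)
    S = trial_sum @ trial_sum.T - sum(x @ x.T for x in trials)
    # Leading eigenvector of Q^{-1} S solves max_w (w^T S w) / (w^T Q w)
    vals, vecs = np.linalg.eig(np.linalg.solve(Q, S))
    return np.real(vecs[:, np.argmax(np.real(vals))])
```

scTRCA would add covariance terms between the training data and an artificial SSVEP template to the numerator of this same ratio.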
Affiliation(s)
- Qiang Sun
- State Key Laboratory of Power Transmission Equipment & System Security and New Technology, School of Electrical Engineering, Chongqing University, Chongqing 400044, People's Republic of China
- Minyou Chen
- State Key Laboratory of Power Transmission Equipment & System Security and New Technology, School of Electrical Engineering, Chongqing University, Chongqing 400044, People's Republic of China
- Li Zhang
- State Key Laboratory of Power Transmission Equipment & System Security and New Technology, School of Electrical Engineering, Chongqing University, Chongqing 400044, People's Republic of China
- Changsheng Li
- State Key Laboratory of Power Transmission Equipment & System Security and New Technology, School of Electrical Engineering, Chongqing University, Chongqing 400044, People's Republic of China
- Wenfa Kang
- State Key Laboratory of Power Transmission Equipment & System Security and New Technology, School of Electrical Engineering, Chongqing University, Chongqing 400044, People's Republic of China
46
Ko W, Jeon E, Jeong S, Suk HI. Multi-Scale Neural Network for EEG Representation Learning in BCI. IEEE Comput Intell Mag 2021. [DOI: 10.1109/mci.2021.3061875]
47
Maymandi H, Perez Benitez JL, Gallegos-Funes F, Perez Benitez JA. A novel monitor for practical brain-computer interface applications based on visual evoked potential. Brain-Computer Interfaces 2021. [DOI: 10.1080/2326263x.2021.1900032]
Affiliation(s)
- Hamidreza Maymandi
- Laboratorio de Electromagnetismo Aplicado (LENDE), Escuela Superior de Ingeniería Mecánica y Eléctrica (ESIME), Instituto Politécnico Nacional (IPN), CDMX, Mexico
- Jorge Luis Perez Benitez
- Laboratorio de Electromagnetismo Aplicado (LENDE), Escuela Superior de Ingeniería Mecánica y Eléctrica (ESIME), Instituto Politécnico Nacional (IPN), CDMX, Mexico
- F. Gallegos-Funes
- Laboratorio de Electromagnetismo Aplicado (LENDE), Escuela Superior de Ingeniería Mecánica y Eléctrica (ESIME), Instituto Politécnico Nacional (IPN), CDMX, Mexico
- J. A. Perez Benitez
- Laboratorio de Electromagnetismo Aplicado (LENDE), Escuela Superior de Ingeniería Mecánica y Eléctrica (ESIME), Instituto Politécnico Nacional (IPN), CDMX, Mexico
48
Petrosyan A, Sinkin M, Lebedev MA, Ossadtchi A. Decoding and interpreting cortical signals with a compact convolutional neural network. J Neural Eng 2021; 18. [PMID: 33524962] [DOI: 10.1088/1741-2552/abe20e]
Abstract
OBJECTIVE Brain-computer interfaces (BCIs) decode information from neural activity and send it to external devices. The use of deep learning approaches for decoding allows for automatic feature engineering within the specific decoding task. Physiologically plausible interpretation of the network parameters ensures the robustness of the learned decision rules and opens the exciting opportunity for automatic knowledge discovery. APPROACH We describe a compact convolutional network-based architecture for adaptive decoding of electrocorticographic (ECoG) data into finger kinematics. We also propose a novel, theoretically justified approach to interpreting the spatial and temporal weights in architectures that combine adaptation in both space and time. The obtained spatial and frequency patterns characterizing the neuronal populations pivotal to the specific decoding task can then be interpreted by fitting appropriate spatial and dynamical models. MAIN RESULTS We first tested our solution using realistic Monte Carlo simulations. Then, when applied to the ECoG data from the Berlin BCI Competition IV dataset, our architecture performed comparably to the competition winners without requiring explicit feature engineering. Using the proposed approach to interpreting the network weights, we could unravel the spatial and spectral patterns of the neuronal processes underlying the successful decoding of finger kinematics from an ECoG dataset. Finally, we applied the entire pipeline to the analysis of a 32-channel EEG motor-imagery dataset and observed physiologically plausible patterns specific to the task. SIGNIFICANCE We described a compact and interpretable CNN architecture derived from basic principles and encompassing knowledge in the field of neural electrophysiology. For the first time in the context of such multibranch architectures with factorized spatial and temporal processing, we presented theoretically justified weight interpretation rules. We verified our recipes using simulations and real data and demonstrated that the proposed solution offers a good decoder and a tool for investigating the neural mechanisms of motor control.
Affiliation(s)
- Artur Petrosyan
- Center for Bioelectric Interfaces, Higher School of Economics, Krivokolennyi per., 3, Moscow, 101000, Russian Federation
- Mikhail Sinkin
- A I Yevdokimov Moscow State University of Medicine and Dentistry, Faculty of Dentistry, Delegatskaya St., 20, p. 1, Moscow, 127473, Russian Federation
- M A Lebedev
- Neurobiology, Duke University, Hudson Hall 136, Durham, NC 27708-0281, USA
- Alexei Ossadtchi
- Center for Bioelectric Interfaces, Higher School of Economics, Krivokolennyi per., 3, Moscow, 101000, Russian Federation
49
Chen X, Tao X, Wang FL, Xie H. Global research on artificial intelligence-enhanced human electroencephalogram analysis. Neural Comput Appl 2021. [DOI: 10.1007/s00521-020-05588-x]
50
Gong S, Xing K, Cichocki A, Li J. Deep Learning in EEG: Advance of the Last Ten-Year Critical Period. IEEE Trans Cogn Dev Syst 2021. [DOI: 10.1109/tcds.2021.3079712]