1
Li X, Yang S, Fei N, Wang J, Huang W, Hu Y. A Convolutional Neural Network for SSVEP Identification by Using a Few-Channel EEG. Bioengineering (Basel) 2024; 11:613. PMID: 38927850; PMCID: PMC11200714; DOI: 10.3390/bioengineering11060613.
Abstract
The application of wearable electroencephalogram (EEG) devices in brain-computer interfaces (BCIs) is growing owing to their wearability and portability. Compared with conventional devices, wearable devices typically support fewer EEG channels. Few-channel EEG devices have been shown to be viable for steady-state visual evoked potential (SSVEP)-based BCIs; however, using fewer channels can degrade BCI performance. To address this issue, an attention-based complex spectrum convolutional neural network (atten-CCNN) is proposed in this study, which combines a CNN with a squeeze-and-excitation block and uses the spectrum of the EEG signal as its input. The proposed model was assessed on a wearable 40-class dataset and a public 12-class dataset under subject-independent and subject-dependent conditions. The results show that, whether using a three-channel or a single-channel EEG for SSVEP identification, atten-CCNN outperformed the baseline models, indicating that the new model can effectively enhance the performance of SSVEP-BCIs with few-channel EEGs. This SSVEP identification algorithm is therefore particularly well suited to wearable EEG devices.
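The squeeze-and-excitation mechanism that atten-CCNN combines with its CNN is a standard channel-reweighting block. A minimal NumPy sketch of the idea follows; the function and weight names are illustrative, not the paper's implementation:

```python
import numpy as np

def squeeze_excite(x, w1, w2):
    """Squeeze-and-excitation over a (channels, length) feature map.

    Squeeze: global average pool per channel; excite: a two-layer
    bottleneck (ReLU then sigmoid) produces a gate in (0, 1) that
    rescales each channel. w1: (r, channels), w2: (channels, r).
    """
    z = x.mean(axis=1)                        # squeeze: one scalar per channel
    h = np.maximum(w1 @ z, 0.0)               # bottleneck + ReLU
    gate = 1.0 / (1.0 + np.exp(-(w2 @ h)))    # sigmoid gate per channel
    return x * gate[:, None]                  # recalibrate channel responses

# Toy usage: 4 feature channels, reduction ratio r = 2
rng = np.random.default_rng(0)
x = rng.standard_normal((4, 100))
w1 = rng.standard_normal((2, 4))
w2 = rng.standard_normal((4, 2))
y = squeeze_excite(x, w1, w2)
```

Because the gate lies in (0, 1), the block can only attenuate channels, which is what lets the network emphasize the few electrodes carrying SSVEP energy.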
Affiliation(s)
- Xiaodong Li
  - Orthopedics Center, The University of Hong Kong-Shenzhen Hospital, Shenzhen 518053, China
  - Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong SAR, China
- Shuoheng Yang
  - Orthopedics Center, The University of Hong Kong-Shenzhen Hospital, Shenzhen 518053, China
  - Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong SAR, China
- Ningbo Fei
  - Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong SAR, China
- Junlin Wang
  - Orthopedics Center, The University of Hong Kong-Shenzhen Hospital, Shenzhen 518053, China
  - Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong SAR, China
- Wei Huang
  - Department of Rehabilitation, The Second Affiliated Hospital of Guangdong Medical University, Zhanjiang 524003, China
- Yong Hu
  - Orthopedics Center, The University of Hong Kong-Shenzhen Hospital, Shenzhen 518053, China
  - Department of Orthopaedics and Traumatology, The University of Hong Kong, Hong Kong SAR, China
  - Department of Rehabilitation, The Second Affiliated Hospital of Guangdong Medical University, Zhanjiang 524003, China
2
Qin K, Xu R, Li S, Wang X, Cichocki A, Jin J. A Time-Local Weighted Transformation Recognition Framework for Steady State Visual Evoked Potentials Based Brain-Computer Interfaces. IEEE Trans Neural Syst Rehabil Eng 2024; 32:1596-1605. PMID: 38598402; DOI: 10.1109/tnsre.2024.3386763.
Abstract
Canonical correlation analysis (CCA), the multivariate synchronization index (MSI), and their extensions have been widely used for target recognition in brain-computer interfaces (BCIs) based on steady-state visual evoked potentials (SSVEPs), and covariance calculation is an important step in these algorithms. Some studies have shown that embedding time-local information into the covariance can improve the recognition performance of these algorithms. However, the improvement can only be observed in the recognition results; the principle by which time-local information helps cannot be explained. We therefore propose a time-local weighted transformation (TT) recognition framework that embeds time-local information directly into the electroencephalography signal through a weighted transformation. The influence of time-local information on the SSVEP signal can then be observed in the frequency domain: low-frequency noise is suppressed at the cost of sacrificing part of the SSVEP fundamental-frequency energy, and the harmonic energy of the SSVEP is enhanced at the cost of introducing a small amount of high-frequency noise. The experimental results show that the TT recognition framework significantly improves the recognition ability of the algorithms and the separability of the extracted features. Its enhancement effect is significantly better than that of the traditional time-local covariance extraction method, giving it considerable application potential.
3
Yin X, Lin M. Multi-information improves the performance of CCA-based SSVEP classification. Cogn Neurodyn 2024; 18:165-172. PMID: 38406193; PMCID: PMC10881948; DOI: 10.1007/s11571-022-09923-x.
Abstract
The target recognition algorithm based on canonical correlation analysis (CCA) has been widely used in steady-state visual evoked potential (SSVEP)-based brain-computer interfaces. To reduce visual fatigue and improve the information transfer rate (ITR), improving algorithm accuracy within a short time window has become one of the main challenges. Filter bank CCA (FBCCA), individual template CCA (ITCCA), and temporally local CCA (TCCA) each improve the CCA algorithm from a different aspect. This paper proposes considering individual, frequency, and time information simultaneously, so as to extract features more effectively. The various methods were compared on a benchmark dataset, with classification accuracy and ITR used for performance evaluation. Among the different extensions of CCA, the method incorporating all three kinds of information simultaneously achieved the best performance within a short time window. This study explores the effect of combining multiple kinds of information to improve the CCA algorithm.
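The core CCA recipe these extensions build on correlates the multichannel EEG with sine-cosine reference templates at each candidate frequency and picks the frequency with the largest canonical correlation. A minimal sketch of that standard formulation (plain CCA only, without the FB/IT/TC extensions the paper combines; all names and parameters here are illustrative):

```python
import numpy as np

def max_canonical_corr(X, Y):
    """Largest canonical correlation between X (samples, p) and Y (samples, q)."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    # Canonical correlations are the singular values of Qx^T Qy
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def reference_signals(freq, fs, n_samples, n_harmonics=2):
    """Sine/cosine templates at freq and its harmonics: (samples, 2*n_harmonics)."""
    t = np.arange(n_samples) / fs
    return np.column_stack([trig(2 * np.pi * h * freq * t)
                            for h in range(1, n_harmonics + 1)
                            for trig in (np.sin, np.cos)])

def cca_classify(eeg, candidate_freqs, fs):
    """Return the index of the candidate frequency with the highest correlation."""
    scores = [max_canonical_corr(eeg, reference_signals(f, fs, eeg.shape[0]))
              for f in candidate_freqs]
    return int(np.argmax(scores))

# Toy usage: a noisy 10 Hz SSVEP on two channels, candidates 8/10/12 Hz
fs, n = 250, 500
t = np.arange(n) / fs
rng = np.random.default_rng(1)
eeg = np.column_stack([np.sin(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(n),
                       np.cos(2 * np.pi * 10 * t) + 0.5 * rng.standard_normal(n)])
picked = cca_classify(eeg, [8.0, 10.0, 12.0], fs)
```

FBCCA then repeats this over several band-pass-filtered sub-bands and combines the scores, while ITCCA swaps the artificial templates for subject-specific averaged trials.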
Affiliation(s)
- Xiangguo Yin
  - National Demonstration Center for Experimental Mechanical Engineering Education (Shandong University), Key Laboratory of High-efficiency and Clean Mechanical Manufacture of Ministry of Education, School of Mechanical Engineering, Shandong University, Jinan 250061, China
  - University of Health and Rehabilitation Sciences, Qingdao 266071, China
- Mingxing Lin
  - National Demonstration Center for Experimental Mechanical Engineering Education (Shandong University), Key Laboratory of High-efficiency and Clean Mechanical Manufacture of Ministry of Education, School of Mechanical Engineering, Shandong University, Jinan 250061, China
4
Park S, Kim M, Nam H, Kwon J, Im CH. In-Car Environment Control Using an SSVEP-Based Brain-Computer Interface with Visual Stimuli Presented on Head-Up Display: Performance Comparison with a Button-Press Interface. Sensors (Basel) 2024; 24:545. PMID: 38257638; PMCID: PMC10819861; DOI: 10.3390/s24020545.
Abstract
Controlling the in-car environment, including temperature and ventilation, is necessary for a comfortable driving experience. However, it often distracts the driver's attention, potentially causing critical car accidents. In the present study, we implemented an in-car environment control system utilizing a brain-computer interface (BCI) based on steady-state visual evoked potentials (SSVEPs). In the experiment, four visual stimuli were displayed on a laboratory-made head-up display (HUD). This allowed the participants to control the in-car environment by simply staring at a target visual stimulus, i.e., without pressing a button or averting their eyes from the road. Driving performance in two realistic driving tests (obstacle avoidance and car following) was then compared between the manual control condition and the SSVEP-BCI control condition using a driving simulator. In the obstacle avoidance test, where participants needed to stop the car when obstacles suddenly appeared, participants showed a significantly shorter response time in the SSVEP-BCI control condition (1.42 ± 0.26 s) than in the manual control condition (1.79 ± 0.27 s). The no-response rate, defined as the proportion of obstacles the participants did not react to, was also significantly lower in the SSVEP-BCI control condition (4.6 ± 14.7%) than in the manual control condition (20.5 ± 25.2%). In the car-following test, where participants were instructed to follow a preceding car running at a sinusoidally changing speed, participants showed a significantly smaller speed difference from the preceding car in the SSVEP-BCI control condition (15.65 ± 7.04 km/h) than in the manual control condition (19.54 ± 11.51 km/h). The in-car environment control system using the SSVEP-based BCI thus shows potential to contribute to safer driving by keeping the driver's focus on the road and thereby enhancing overall driving performance.
Affiliation(s)
- Seonghun Park
  - Department of Electronic Engineering, Hanyang University, Seoul 04763, Republic of Korea
- Minsu Kim
  - Department of Artificial Intelligence, Hanyang University, Seoul 04763, Republic of Korea
- Hyerin Nam
  - Department of Artificial Intelligence, Hanyang University, Seoul 04763, Republic of Korea
- Jinuk Kwon
  - Department of Electronic Engineering, Hanyang University, Seoul 04763, Republic of Korea
- Chang-Hwan Im
  - Department of Electronic Engineering, Hanyang University, Seoul 04763, Republic of Korea
  - Department of Artificial Intelligence, Hanyang University, Seoul 04763, Republic of Korea
  - Department of Biomedical Engineering, Hanyang University, Seoul 04763, Republic of Korea
5
Zhao S, Wang R, Bao R, Yang L. Spatially-coded SSVEP BCI without pre-training based on FBCCA. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104717.
6
Chen J, Zhang Y, Pan Y, Xu P, Guan C. A transformer-based deep neural network model for SSVEP classification. Neural Netw 2023; 164:521-534. PMID: 37209444; DOI: 10.1016/j.neunet.2023.04.045.
Abstract
The steady-state visual evoked potential (SSVEP) is one of the most commonly used control signals in brain-computer interface (BCI) systems. However, conventional spatial filtering methods for SSVEP classification depend heavily on subject-specific calibration data, so methods that reduce the demand for calibration data are urgently needed. In recent years, developing methods that work in the inter-subject scenario has become a promising direction. As a popular deep learning model, the Transformer has been used in EEG classification tasks owing to its excellent performance. In this study, we therefore propose a Transformer-based deep learning model for SSVEP classification in the inter-subject scenario, termed SSVEPformer, the first application of the Transformer to SSVEP classification. Inspired by previous studies, we adopted the complex spectrum features of SSVEP data as the model input, enabling the model to simultaneously explore spectral and spatial information for classification. Furthermore, to fully utilize the harmonic information, an extended SSVEPformer based on filter bank technology (FB-SSVEPformer) was proposed to improve classification performance. Experiments were conducted on two open datasets (Dataset 1: 10 subjects, 12 targets; Dataset 2: 35 subjects, 40 targets). The results show that the proposed models achieve better classification accuracy and information transfer rates than the baseline methods. They validate the feasibility of Transformer-based deep learning models for SSVEP classification and could help alleviate the calibration procedure in practical SSVEP-based BCI systems.
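The complex spectrum input that SSVEPformer adopts concatenates the real and imaginary parts of each channel's FFT over the stimulation band, so phase as well as amplitude reaches the network. A minimal sketch of that feature construction (the band limits and shapes here are illustrative assumptions, not the paper's exact configuration):

```python
import numpy as np

def complex_spectrum_features(eeg, fs, f_lo=8.0, f_hi=64.0):
    """Build complex spectrum features from EEG of shape (channels, samples).

    Returns (channels, 2 * n_bins): the FFT real parts over [f_lo, f_hi]
    concatenated with the matching imaginary parts, preserving phase.
    """
    spectrum = np.fft.rfft(eeg, axis=1)
    freqs = np.fft.rfftfreq(eeg.shape[1], d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return np.concatenate([spectrum[:, band].real,
                           spectrum[:, band].imag], axis=1)

# Toy usage: 1 s of 3-channel data at 250 Hz (1 Hz bins, 57 bins in 8-64 Hz)
rng = np.random.default_rng(2)
features = complex_spectrum_features(rng.standard_normal((3, 250)), fs=250)
```

Feeding real and imaginary parts rather than the magnitude spectrum is what keeps the inter-channel phase relations available to the downstream attention layers.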
Affiliation(s)
- Jianbo Chen
  - School of Computer Science and Technology, Laboratory for Brain Science and Medical Artificial Intelligence, Southwest University of Science and Technology, Mianyang, China
- Yangsong Zhang
  - School of Computer Science and Technology, Laboratory for Brain Science and Medical Artificial Intelligence, Southwest University of Science and Technology, Mianyang, China
  - MOE Key Laboratory for NeuroInformation, Clinical Hospital of Chengdu Brain Science Institute, and Center for Information in BioMedicine, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Yudong Pan
  - School of Computer Science and Technology, Laboratory for Brain Science and Medical Artificial Intelligence, Southwest University of Science and Technology, Mianyang, China
- Peng Xu
  - MOE Key Laboratory for NeuroInformation, Clinical Hospital of Chengdu Brain Science Institute, and Center for Information in BioMedicine, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Cuntai Guan
  - School of Computer Science and Engineering, Nanyang Technological University (NTU), Singapore
7
Wan Z, Li M, Liu S, Huang J, Tan H, Duan W. EEGformer: A transformer-based brain activity classification method using EEG signal. Front Neurosci 2023; 17:1148855. PMID: 37034169; PMCID: PMC10079879; DOI: 10.3389/fnins.2023.1148855.
Abstract
Background: Effective analysis methods for steady-state visual evoked potential (SSVEP) signals are critical in supporting an early diagnosis of glaucoma. Most efforts have focused on adapting existing techniques to the SSVEP-based brain-computer interface (BCI) task rather than proposing new ones specifically suited to the domain.
Method: Given that electroencephalogram (EEG) signals possess temporal, regional, and synchronous characteristics of brain activity, we propose a transformer-based EEG analysis model, EEGformer, to capture these characteristics in a unified manner. A one-dimensional convolutional neural network (1DCNN) automatically extracts channel-wise EEG features; its output is fed into the EEGformer, which is constructed sequentially from three components: regional, synchronous, and temporal transformers. In addition to validating model performance on a large benchmark database (BETA) for the SSVEP-BCI application, we compared EEGformer with current state-of-the-art deep learning models on two EEG datasets obtained from our previous studies: the SJTU emotion EEG dataset (SEED) and a depressive EEG database (DepEEG).
Results: EEGformer achieves the best classification performance across the three EEG datasets, indicating that the model architecture and learning EEG characteristics in a unified manner improve classification performance.
Conclusion: EEGformer generalizes well to different EEG datasets, demonstrating that the approach can potentially provide accurate brain activity classification in different application scenarios, such as SSVEP-based early glaucoma diagnosis, emotion recognition, and depression discrimination.
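The channel-wise 1DCNN front end described above convolves each EEG channel independently, so no spatial mixing happens before the transformer stages. A minimal sketch of that step (kernel sizes and counts are illustrative assumptions, not the paper's values):

```python
import numpy as np

def channelwise_conv1d(eeg, kernels):
    """Apply the same bank of 1-D kernels independently to every EEG channel.

    eeg: (channels, samples); kernels: (n_kernels, width).
    Returns (channels, n_kernels, samples - width + 1): one feature sequence
    per channel per kernel ('valid' cross-correlation, no mixing across
    channels), ready to be handed to downstream attention stages.
    """
    n_ch, n_s = eeg.shape
    n_k, w = kernels.shape
    out = np.empty((n_ch, n_k, n_s - w + 1))
    for c in range(n_ch):
        for k in range(n_k):
            # np.convolve flips its second argument; reversing the kernel
            # first turns this into cross-correlation, as in CNN layers
            out[c, k] = np.convolve(eeg[c], kernels[k][::-1], mode="valid")
    return out

# Toy usage: 8 channels, 250 samples, 16 kernels of width 9
rng = np.random.default_rng(4)
feats = channelwise_conv1d(rng.standard_normal((8, 250)),
                           rng.standard_normal((16, 9)))
```

Keeping channels separate at this stage is what lets the later "regional" and "synchronous" transformers model spatial relations explicitly instead of having the convolution absorb them.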
Affiliation(s)
- Zhijiang Wan
  - The First Affiliated Hospital of Nanchang University, Nanchang University, Nanchang, Jiangxi, China
  - School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China
  - Industrial Institute of Artificial Intelligence, Nanchang University, Nanchang, Jiangxi, China
- Manyu Li
  - School of Information Engineering, Nanchang University, Nanchang, Jiangxi, China
- Shichang Liu
  - School of Computer Science, Shaanxi Normal University, Xi'an, Shaanxi, China
- Jiajin Huang
  - Faculty of Information Technology, Beijing University of Technology, Beijing, China
- Hai Tan
  - School of Computer Science, Nanjing Audit University, Nanjing, Jiangsu, China
- Wenfeng Duan
  - The First Affiliated Hospital of Nanchang University, Nanchang University, Nanchang, Jiangxi, China
8
Chen W, Chen SK, Liu YH, Chen YJ, Chen CS. An Electric Wheelchair Manipulating System Using SSVEP-Based BCI System. Biosensors (Basel) 2022; 12:772. PMID: 36290910; PMCID: PMC9599534; DOI: 10.3390/bios12100772.
Abstract
Most people with motor disabilities use a joystick to control an electric wheelchair. However, those who suffer from multiple sclerosis or amyotrophic lateral sclerosis may require other control methods. This study implements an electroencephalography (EEG)-based brain-computer interface (BCI) system that uses steady-state visual evoked potentials (SSVEPs) to manipulate an electric wheelchair. While operating the human-machine interface, three types of SSVEP scenarios involving a real-time virtual stimulus are displayed on a monitor or mixed reality (MR) goggles to elicit the EEG signals. Canonical correlation analysis (CCA) classifies the EEG signals into the corresponding command class, and the information transfer rate (ITR) is used to evaluate performance. The experimental results show that the proposed SSVEP stimulus reliably elicits the EEG signals, as evidenced by the high CCA classification accuracy, and this is used to control an electric wheelchair along a specific path. Simultaneous localization and mapping (SLAM), as available in the Robot Operating System (ROS) platform, is the mapping method used for the wheelchair system in this study.
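The information transfer rate used to evaluate such systems is usually the Wolpaw ITR, computed from the number of classes N, the accuracy P, and the time per selection T. A sketch of that standard formula (the trial timing in the paper is not reproduced here; the example numbers are illustrative):

```python
from math import log2

def wolpaw_itr(n_classes, accuracy, trial_seconds):
    """Wolpaw information transfer rate in bits per minute.

    bits/trial = log2(N) + P*log2(P) + (1 - P)*log2((1 - P)/(N - 1)),
    clamped to 0 when accuracy is at or below chance level.
    """
    n, p = n_classes, accuracy
    if p <= 1.0 / n:
        return 0.0                      # at or below chance: no information
    bits = log2(n) + p * log2(p)
    if p < 1.0:                         # avoid log2(0) at perfect accuracy
        bits += (1.0 - p) * log2((1.0 - p) / (n - 1))
    return bits * 60.0 / trial_seconds

# Toy usage: 4 commands, 90% accuracy, 3 s per selection
rate = wolpaw_itr(4, 0.90, 3.0)
```

Note the formula assumes equiprobable targets and uniformly distributed errors; violations of either assumption make the reported ITR optimistic.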
Affiliation(s)
- Wen Chen
  - Graduate Institute of Automation Technology, National Taipei University of Technology, Taipei 10608, Taiwan
- Shih-Kang Chen
  - Department of Mechatronics Control, Industrial Technology Research Institute, Hsinchu 310401, Taiwan
- Yi-Hung Liu
  - Department of Mechanical Engineering, National Taiwan University of Science and Technology, Taipei 106335, Taiwan
- Yu-Jen Chen
  - Department of Radiation Oncology, MacKay Memorial Hospital, Taipei 10449, Taiwan
- Chin-Sheng Chen
  - Graduate Institute of Automation Technology, National Taipei University of Technology, Taipei 10608, Taiwan
  - Correspondence: Tel.: +886-2-27712171 (ext. 4325)
9
An SSVEP-based BCI with LEDs visual stimuli using dynamic window CCA algorithm. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2022.103727.
10
Israsena P, Pan-Ngum S. A CNN-Based Deep Learning Approach for SSVEP Detection Targeting Binaural Ear-EEG. Front Comput Neurosci 2022; 16:868642. PMID: 35664916; PMCID: PMC9160186; DOI: 10.3389/fncom.2022.868642.
Abstract
This paper discusses a machine learning approach for detecting SSVEPs at both ears with minimal channels. The SSVEP is a robust EEG signal suitable for many BCI applications. It is strong at the visual cortex around the occipital area, but the SNR worsens when it is detected from other areas of the head. To make use of SSVEPs measured around the ears following the ear-EEG concept, especially for practical binaural implementation, we propose a CNN structure coupled with regressed softmax outputs to improve accuracy. Evaluating on a public dataset, we studied classification performance for both subject-dependent and subject-independent training. With the proposed structure and a group training approach, 69.21% accuracy was achievable. An ITR of 6.42 bit/min at 63.49% accuracy was recorded while monitoring data from only T7 and T8. This represents a 12.47% improvement over a single-ear implementation and illustrates the potential of the approach to enhance performance in practical wearable EEG implementations.
Affiliation(s)
- Pasin Israsena
  - National Electronics and Computer Technology Center (NECTEC), National Science and Technology Development Agency (NSTDA), Pathumthani, Thailand
  - Correspondence: Pasin Israsena
- Setha Pan-Ngum
  - Department of Computer Engineering, Faculty of Engineering, Chulalongkorn University, Bangkok, Thailand
11
A Human-Machine Interface Based on an EOG and a Gyroscope for Humanoid Robot Control and Its Application to Home Services. J Healthc Eng 2022; 2022:1650387. PMID: 35345662; PMCID: PMC8957419; DOI: 10.1155/2022/1650387.
Abstract
The human-machine interface (HMI) has been studied for robot teleoperation with the aim of empowering people with motor disabilities to increase their interaction with the physical environment. The challenge for an HMI for robot control is to produce control commands rapidly, accurately, and in sufficient number. In this paper, an asynchronous HMI based on an electrooculogram (EOG) and a gyroscope is proposed using two self-paced, endogenous features: double blinks and head rotation. Through a multilevel graphical user interface (GUI), the user rotates the head to move the GUI cursor and double-blinks to trigger the selected button. The proposed HMI supplies a sufficient number of commands while maintaining high accuracy (ACC) and low response time (RT). In a trigger task with sixteen healthy subjects, the target was clicked from 20 options with an ACC of 99.2% and an RT of 2.34 s. Furthermore, a continuous strategy that uses motion-start and motion-stop commands to produce a given robot motion is proposed to control a humanoid robot based on the HMI. This avoids having to combine several commands to achieve one motion, or to map each motion directly onto a single command. In the home service experiment, all subjects operated a humanoid robot to change the state of a switch, grasp a key, and put it into a box. The time ratio between HMI control and manual control was 1.22, and the ratio of command counts was 1.18. The results demonstrate that the continuous strategy and the proposed HMI can improve performance in humanoid robot control.
12
Tong C, Wang H, Wang Y. Relation of canonical correlation analysis and multivariate synchronization index in SSVEP detection. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2021.103345.
13
Ma P, Dong C, Lin R, Ma S, Jia T, Chen X, Xiao Z, Qi Y. A classification algorithm of an SSVEP brain-computer interface based on CCA fusion wavelet coefficients. J Neurosci Methods 2022; 371:109502. DOI: 10.1016/j.jneumeth.2022.109502.
14
Zheng X, Xu G, Han C, Tian P, Zhang K, Liang R, Jia Y, Yan W, Du C, Zhang S. Enhancing Performance of SSVEP-Based Visual Acuity via Spatial Filtering. Front Neurosci 2021; 15:716051. PMID: 34489633; PMCID: PMC8417433; DOI: 10.3389/fnins.2021.716051.
Abstract
The purpose of this study was to enhance the performance of steady-state visual evoked potential (SSVEP)-based visual acuity assessment with spatial filtering methods. Using vertical sinusoidal gratings at six spatial frequency steps as the visual stimuli for 11 subjects, SSVEPs were recorded from six occipital electrodes (O1, Oz, O2, PO3, POz, and PO4). Ten commonly used training-free spatial filtering methods, i.e., native combination (single electrode), bipolar combination, Laplacian combination, average combination, common average reference (CAR), minimum energy combination (MEC), maximum contrast combination (MCC), canonical correlation analysis (CCA), multivariate synchronization index (MSI), and partial least squares (PLS), were compared for multielectrode signal combination in SSVEP visual acuity assessment using statistical analyses, e.g., Bland–Altman analysis and repeated-measures ANOVA. The SSVEP signal characteristics of each spatial filtering method were compared, and CCA and MSI were selected for further signal processing because they performed better than the native combination. Under the visual acuity threshold estimation criterion, the agreement between the subjective Freiburg Visual Acuity and Contrast Test (FrACT) and SSVEP visual acuity was good for the native combination (0.253 logMAR), CCA (0.202 logMAR), and MSI (0.208 logMAR), and the difference between FrACT and SSVEP visual acuity was acceptable for all three: the native combination (-0.095 logMAR), CCA (0.039 logMAR), and MSI (-0.080 logMAR). CCA-based SSVEP visual acuity performed best and the native combination worst. The study showed that the performance of SSVEP-based visual acuity assessment can be enhanced by the CCA and MSI spatial filtering methods, and it recommends CCA for multielectrode signal combination in SSVEP visual acuity assessment.
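Of the compared filters, the multivariate synchronization index measures how far the joint correlation matrix of the EEG and a sine-cosine reference departs from independence, via the entropy of its normalized eigenvalues. A minimal sketch of that standard MSI computation (the shapes and toy signals below are illustrative, not this study's configuration):

```python
import numpy as np

def _inv_sqrt(c):
    """Inverse matrix square root of a symmetric positive-definite matrix."""
    w, v = np.linalg.eigh(c)
    return v @ np.diag(1.0 / np.sqrt(w)) @ v.T

def msi(X, Y):
    """Multivariate synchronization index between X (p, samples) and Y (q, samples).

    Whitens the within-set blocks of the joint covariance, then returns
    1 + sum(l * log l) / log(M) over the normalized eigenvalues l, so the
    index is near 0 for independent signals and approaches 1 for
    synchronized ones.
    """
    p = X.shape[0]
    C = np.cov(np.vstack([X, Y]))
    U = np.zeros_like(C)
    U[:p, :p] = _inv_sqrt(C[:p, :p])      # block-diagonal whitening
    U[p:, p:] = _inv_sqrt(C[p:, p:])
    R = U @ C @ U.T
    lam = np.linalg.eigvalsh(R)
    lam = np.clip(lam / lam.sum(), 1e-12, None)   # normalize; guard log(0)
    return 1.0 + float((lam * np.log(lam)).sum() / np.log(len(lam)))

# Toy usage: a 10 Hz sinusoid is highly synchronized with a noisy copy
# of itself; independent noise is not
t = np.arange(1000) / 250.0
sig = np.sin(2 * np.pi * 10 * t)[None, :]
rng = np.random.default_rng(3)
high = msi(sig, sig + 0.01 * rng.standard_normal(sig.shape))
low = msi(rng.standard_normal((1, 1000)), rng.standard_normal((2, 1000)))
```

Unlike CCA, MSI needs no explicit spatial weights to be solved for, which is why both appear here as training-free candidates.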
Affiliation(s)
- Xiaowei Zheng
  - School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Guanghua Xu
  - School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
  - State Key Laboratory for Manufacturing Systems Engineering, Xi'an Jiaotong University, Xi'an, China
- Chengcheng Han
  - School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Peiyuan Tian
  - School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Kai Zhang
  - School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Renghao Liang
  - School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Yaguang Jia
  - School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Wenqiang Yan
  - School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Chenghang Du
  - School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China
- Sicong Zhang
  - School of Mechanical Engineering, Xi'an Jiaotong University, Xi'an, China