1. Khabti J, AlAhmadi S, Soudani A. Optimal Channel Selection of Multiclass Motor Imagery Classification Based on Fusion Convolutional Neural Network with Attention Blocks. Sensors (Basel) 2024; 24:3168. [PMID: 38794022] [DOI: 10.3390/s24103168]
Abstract
Motor imagery (MI) is a widely adopted paradigm in brain-computer interfaces (BCIs), enabling improved communication between humans and machines. EEG signals derived from MI are inherently difficult to classify, making it hard to identify the intended task of a specific participant. A further issue is that BCI recordings can contain noisy data and redundant channels, which in turn increase equipment and computational costs. To address these problems, optimal channel selection for multiclass MI classification based on a fusion convolutional neural network with attention blocks (FCNNA) is proposed. In this study, we developed a CNN model consisting of convolutional blocks with multiple spatial and temporal filters. These filters are designed to capture the distribution and relationships of signal features across electrode locations, as well as the evolution of these features over time. Following these layers, a Convolutional Block Attention Module (CBAM) is used to further enhance EEG feature extraction. For channel selection, a genetic algorithm selects the optimal set of channels, using a new technique that can deliver either a fixed or a variable channel set for all participants. The proposed methodology is validated, showing a 6.41% improvement in multiclass classification over most baseline models. Notably, we achieved the highest accuracy of 93.09% for the binary classification of left-hand versus right-hand movements. In addition, the cross-subject strategy for multiclass classification yielded an accuracy of 68.87%. Following channel selection, multiclass classification accuracy improved to 84.53%. Overall, our experiments illustrate the efficiency of the proposed EEG MI model in both channel selection and classification, showing superior results with either a full channel set or a reduced number of channels.
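The genetic-algorithm channel search described above can be sketched in a few lines. This is a minimal illustration, not the authors' FCNNA pipeline: the `fitness` function below is a hypothetical stand-in (a per-channel informativeness score minus a cost per selected channel) for the classifier accuracy the paper would evaluate on each candidate channel subset.

```python
import numpy as np

rng = np.random.default_rng(0)

def fitness(mask, scores):
    # Hypothetical objective: reward informative channels, penalize set size.
    # In the paper this role is played by classification accuracy on the subset.
    if mask.sum() == 0:
        return -np.inf
    return scores[mask.astype(bool)].sum() - 0.5 * mask.sum()

def ga_select(scores, pop_size=20, generations=40, p_mut=0.05):
    n = len(scores)
    population = rng.integers(0, 2, size=(pop_size, n))  # binary channel masks
    for _ in range(generations):
        fit = np.array([fitness(ind, scores) for ind in population])
        # elitism: keep the fitter half as parents
        parents = population[np.argsort(fit)[::-1][: pop_size // 2]]
        children = []
        for _ in range(pop_size - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = int(rng.integers(1, n))            # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child[rng.random(n) < p_mut] ^= 1        # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, np.array(children)])
    fit = np.array([fitness(ind, scores) for ind in population])
    return population[int(np.argmax(fit))]

channel_scores = np.array([0.9, 0.1, 0.8, 0.05, 0.7, 0.02, 0.6, 0.01])
best_mask = ga_select(channel_scores)  # 1 = keep channel, 0 = drop it
```

Because selection is elitist, the best mask found never degrades across generations; the surviving subset trades off channel informativeness against channel count, which is the core of the paper's fixed/variable channel-selection step.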
Affiliation(s)
- Joharah Khabti: Department of Computer Science, College of Computer and Information Sciences (CCIS), King Saud University, Riyadh 11543, Saudi Arabia
- Saad AlAhmadi: Department of Computer Science, College of Computer and Information Sciences (CCIS), King Saud University, Riyadh 11543, Saudi Arabia
- Adel Soudani: Department of Computer Science, College of Computer and Information Sciences (CCIS), King Saud University, Riyadh 11543, Saudi Arabia
2. Liu Y, Dai W, Liu Y, Hu D, Yang B, Zhou Z. An SSVEP-based BCI with 112 targets using frequency spatial multiplexing. J Neural Eng 2024; 21:036004. [PMID: 38639058] [DOI: 10.1088/1741-2552/ad4091]
Abstract
Objective. Brain-computer interface (BCI) systems with large, directly accessible instruction sets remain one of the difficulties in BCI research. Research to achieve high target resolution (≥100 targets) has not yet entered a rapid development stage, which contradicts the application requirements. Steady-state visual evoked potential (SSVEP) based BCIs have an advantage in terms of the number of targets, but the competitive mechanism between a target stimulus and its neighboring stimuli is a key challenge that prevents the target resolution from being improved significantly. Approach. In this paper, we reverse this competitive mechanism and propose a frequency spatial multiplexing method to produce more targets with limited frequencies. In the proposed paradigm, we replicated each flicker stimulus as a 2 × 2 matrix and arranged the matrices of all frequencies in a tiled fashion to form the interaction interface. With different arrangements, we designed and tested three example paradigms with different layouts. Further, we designed a graph neural network that distinguishes between targets of the same frequency by recognizing the different electroencephalography (EEG) response distribution patterns evoked by each target and its neighboring targets. Main results. Extensive experiments employing eleven subjects were performed to verify the validity of the proposed method. The average classification accuracies in the offline validation experiments for the three paradigms were 89.16%, 91.38%, and 87.90%, with information transfer rates (ITRs) of 51.66, 53.96, and 50.55 bits/min, respectively. Significance. This study utilized the positional relationship between stimuli rather than circumventing the competing-response problem. Therefore, other state-of-the-art methods that focus on enhancing the efficiency of SSVEP detection can be combined with the present method to achieve very promising improvements.
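The replicate-and-tile construction can be illustrated directly. A sketch under stated assumptions: `tiled_layout` is a hypothetical helper (not from the paper) that builds the stimulus grid, with each frequency occupying a 2 × 2 block so every frequency yields four position-distinguishable targets.

```python
import numpy as np

def tiled_layout(freqs, blocks_per_row):
    """Replicate each flicker frequency as a 2x2 block of targets and tile
    the blocks row-major: len(freqs) * 4 targets from len(freqs) frequencies."""
    n_blocks = len(freqs)
    n_rows = -(-n_blocks // blocks_per_row)  # ceiling division
    grid = np.zeros((2 * n_rows, 2 * blocks_per_row))
    for i, f in enumerate(freqs):
        r, c = divmod(i, blocks_per_row)
        grid[2 * r:2 * r + 2, 2 * c:2 * c + 2] = f
    return grid

# six flicker frequencies -> a 4 x 6 grid of 24 targets
grid = tiled_layout([8.0, 9.0, 10.0, 11.0, 12.0, 13.0], blocks_per_row=3)
```

Targets sharing a frequency are then told apart by spatial position; in the paper this disambiguation is done by a graph neural network over the EEG response patterns of neighboring targets.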
Affiliation(s)
- Yaru Liu: College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410000, People's Republic of China
- Wei Dai: College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410000, People's Republic of China
- Yadong Liu: College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410000, People's Republic of China
- Dewen Hu: College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410000, People's Republic of China
- Banghua Yang: School of Mechatronic Engineering and Automation, School of Medicine, Research Center of Brain-Computer Engineering, Shanghai University, Shanghai 200444, People's Republic of China
- Zongtan Zhou: College of Intelligence Science and Technology, National University of Defense Technology, Changsha 410000, People's Republic of China
3. Barmpas K, Panagakis Y, Zoumpourlis G, Adamos DA, Laskaris N, Zafeiriou S. A causal perspective on brainwave modeling for brain-computer interfaces. J Neural Eng 2024; 21:036001. [PMID: 38621380] [DOI: 10.1088/1741-2552/ad3eb5]
Abstract
Objective. Machine learning (ML) models have opened up enormous opportunities in the field of brain-computer interfaces (BCIs). Despite their great success, they usually face severe limitations when employed in real-life applications outside a controlled laboratory setting. Approach. Combining causal reasoning (identifying causal relationships between variables of interest) with brainwave modeling can change one's viewpoint on several of these major challenges, which arise at various stages of the ML pipeline, from data collection and pre-processing to training methods and techniques. Main results. In this work, we employ causal reasoning and present a framework that aims to break down and analyze important challenges of brainwave modeling for BCIs. Significance. Furthermore, we show how general ML practices as well as brainwave-specific techniques can be used to solve some of these identified challenges. Finally, we discuss appropriate evaluation schemes to measure these techniques' performance and compare them efficiently with other methods developed in the future.
Affiliation(s)
- Konstantinos Barmpas: Department of Computing, Imperial College London, London SW7 2RH, United Kingdom; Cogitat Ltd, London, United Kingdom
- Yannis Panagakis: Department of Informatics and Telecommunications, National and Kapodistrian University of Athens, Athens 15784, Greece; Archimedes Research Unit, Research Center Athena, Athens 15125, Greece; Cogitat Ltd, London, United Kingdom
- Dimitrios A Adamos: Department of Computing, Imperial College London, London SW7 2RH, United Kingdom; Cogitat Ltd, London, United Kingdom
- Nikolaos Laskaris: School of Informatics, Aristotle University of Thessaloniki, Thessaloniki 54124, Greece; Cogitat Ltd, London, United Kingdom
- Stefanos Zafeiriou: Department of Computing, Imperial College London, London SW7 2RH, United Kingdom; Cogitat Ltd, London, United Kingdom
4. Yan S, Hu Y, Zhang R, Qi D, Hu Y, Yao D, Shi L, Zhang L. Multilayer network-based channel selection for motor imagery brain-computer interface. J Neural Eng 2024; 21:016029. [PMID: 38295419] [DOI: 10.1088/1741-2552/ad2496]
Abstract
Objective. The number of electrode channels in a motor imagery-based brain-computer interface (MI-BCI) system influences not only its decoding performance but also its convenience in practical applications. Although many channel selection methods have been proposed in the literature, they are usually based on univariate features of single channels. This ignores both the interactions between channels and the exchange of information between networks operating in different frequency bands. Approach. We integrate brain networks from four frequency bands into a multilayer network framework and propose a multilayer network-based channel selection (MNCS) method for MI-BCI systems. A graph learning-based method estimates the multilayer network from electroencephalogram (EEG) data filtered into multiple frequency bands. The multilayer participation coefficient of the multilayer network is then computed to select EEG channels that do not contain redundant information. The common spatial pattern (CSP) method is then used to extract effective features. Finally, a support vector machine classifier with a linear kernel is trained to identify MI tasks. Main results. We used three publicly available datasets from the BCI Competition, containing data from 12 healthy subjects, and one dataset containing data from 15 stroke patients to validate the effectiveness of the proposed method. The results showed that the channel subsets selected by the proposed MNCS method outperformed the full channel sets (85.8% vs. 93.1%, 84.4% vs. 89.0%, 71.7% vs. 79.4%, and 72.7% vs. 84.0%). Moreover, the method achieved significantly higher decoding accuracies on MI-BCI systems than state-of-the-art methods (paired t-tests, p < 0.05). Significance. The experimental results show that the proposed MNCS method can select appropriate channels to improve both the decoding performance and the practical convenience of MI-BCI systems.
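The multilayer participation coefficient at the heart of the MNCS selection step has a standard closed form: P_i = M/(M-1) * (1 - sum_a (k_ia / o_i)^2), where k_ia is the degree of channel i in layer (frequency band) a and o_i its total degree across layers. A minimal numpy sketch of that formula follows; the graph-learning estimation of the networks themselves is not reproduced here.

```python
import numpy as np

def multilayer_participation(degrees):
    """Multilayer participation coefficient.

    degrees: (M, N) array, degree of each of N channels in each of M layers
    (one layer per frequency band). Returns an (N,) array in [0, 1]:
    1 when a channel's connections spread evenly over all layers,
    0 when they concentrate in a single layer (or the channel is isolated).
    """
    M, N = degrees.shape
    total = degrees.sum(axis=0)                   # o_i, total degree per channel
    safe_total = np.where(total > 0, total, 1.0)  # avoid 0/0 for isolated nodes
    ratios = degrees / safe_total                 # k_ia / o_i
    p = M / (M - 1) * (1.0 - (ratios ** 2).sum(axis=0))
    return np.where(total > 0, p, 0.0)

# 4 frequency-band layers, 3 channels: evenly spread / two-band / isolated
deg = np.array([[2.0, 2.0, 0.0],
                [2.0, 0.0, 0.0],
                [2.0, 0.0, 0.0],
                [2.0, 2.0, 0.0]])
p = multilayer_participation(deg)
```

Channels with high participation integrate information across bands; MNCS then keeps a non-redundant subset before CSP feature extraction and the linear SVM.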
Affiliation(s)
- Shaoting Yan: School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, People's Republic of China; Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou, People's Republic of China; Institute of Neuroscience, Zhengzhou University, Zhengzhou, People's Republic of China
- Yuxia Hu: School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, People's Republic of China; Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou, People's Republic of China; Institute of Neuroscience, Zhengzhou University, Zhengzhou, People's Republic of China
- Rui Zhang: School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, People's Republic of China; Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou, People's Republic of China; Institute of Neuroscience, Zhengzhou University, Zhengzhou, People's Republic of China
- Daowei Qi: School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, People's Republic of China; Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou, People's Republic of China
- Yubo Hu: The No.3 Provincial People's Hospital of Henan Province, Zhengzhou, People's Republic of China
- Dezhong Yao: School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, People's Republic of China; Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou, People's Republic of China
- Li Shi: Department of Automation, Tsinghua University, Beijing, People's Republic of China; Beijing National Research Center for Information Science and Technology, Beijing, People's Republic of China
- Lipeng Zhang: School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou, People's Republic of China; Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, Zhengzhou, People's Republic of China; Institute of Neuroscience, Zhengzhou University, Zhengzhou, People's Republic of China
5. Wu X, Zhang D, Li G, Gao X, Metcalfe B, Chen L. Data augmentation for invasive brain-computer interfaces based on stereo-electroencephalography (SEEG). J Neural Eng 2024; 21:016026. [PMID: 38237174] [DOI: 10.1088/1741-2552/ad200e]
Abstract
Objective. Deep learning is increasingly used for brain-computer interfaces (BCIs). However, the quantity of available data is sparse, especially for invasive BCIs. Data augmentation (DA) methods, such as generative models, can help to address this sparseness. However, all the existing studies on brain signals were based on convolutional neural networks and ignored temporal dependence. This paper attempts to enhance generative models by capturing the temporal relationship from a time-series perspective. Approach. A conditional generative network, the conditional transformer-based generative adversarial network (cTGAN), was proposed. The proposed method was tested using a stereo-electroencephalography (SEEG) dataset recorded from eight epileptic patients performing five different movements. Three other commonly used DA methods were also implemented: noise injection (NI), variational autoencoder (VAE), and conditional Wasserstein generative adversarial network with gradient penalty (cWGANGP). Artificial SEEG data were generated with the proposed method, and several metrics were used to compare data quality, including visual inspection, cosine similarity (CS), Jensen-Shannon distance (JSD), and the effect on the performance of a deep learning-based classifier. Main results. Both the proposed cTGAN and the cWGANGP methods were able to generate realistic data, while NI and VAE produced inferior samples when visualized as raw sequences and in a lower-dimensional space. The cTGAN generated the best samples in terms of CS and JSD and significantly outperformed cWGANGP in enhancing the performance of a deep learning-based classifier (yielding significant improvements of 6% and 3.4%, respectively). Significance. This is the first time that DA methods have been applied to invasive BCIs based on SEEG. In addition, this study demonstrates the advantages of a model that preserves temporal dependence from a time-series perspective.
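The two sample-quality metrics named above, cosine similarity and Jensen-Shannon distance, are easy to state concretely. A minimal sketch; the exact binning and real-vs-generated pairing choices used in the paper may differ.

```python
import numpy as np

def cosine_similarity(x, y):
    """CS between two signals (1.0 = identical direction, -1.0 = opposite)."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    return float(x @ y / (np.linalg.norm(x) * np.linalg.norm(y)))

def jensen_shannon_distance(p, q, eps=1e-12):
    """JSD between two histograms (e.g. amplitude distributions of real vs.
    generated SEEG); base-2 logs, so the value lies in [0, 1]."""
    p = np.asarray(p, float); q = np.asarray(q, float)
    p, q = p / p.sum(), q / q.sum()
    m = 0.5 * (p + q)

    def kl(a, b):  # Kullback-Leibler divergence in bits
        a, b = np.clip(a, eps, None), np.clip(b, eps, None)
        return float(np.sum(a * np.log2(a / b)))

    # JS distance is the square root of the JS divergence
    return float(np.sqrt(0.5 * kl(p, m) + 0.5 * kl(q, m)))
```

CS compares generated and real traces sample-by-sample, while JSD compares their value distributions, so the two metrics capture complementary notions of realism.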
Affiliation(s)
- Xiaolong Wu: The Centre for Autonomous Robotics (CENTAUR), Department of Electronic & Electrical Engineering, University of Bath, Bath, United Kingdom
- Dingguo Zhang: The Centre for Autonomous Robotics (CENTAUR), Department of Electronic & Electrical Engineering, University of Bath, Bath, United Kingdom
- Guangye Li: School of Mechanical Engineering, Shanghai Jiao Tong University, People's Republic of China
- Xin Gao: The Centre for Autonomous Robotics (CENTAUR), Department of Electronic & Electrical Engineering, University of Bath, Bath, United Kingdom
- Benjamin Metcalfe: The Centre for Autonomous Robotics (CENTAUR), Department of Electronic & Electrical Engineering, University of Bath, Bath, United Kingdom
- Liang Chen: Huashan Hospital, Fudan University, People's Republic of China
6. Papadopoulos S, Szul MJ, Congedo M, Bonaiuto JJ, Mattout J. Beta bursts question the ruling power for brain-computer interfaces. J Neural Eng 2024; 21:016010. [PMID: 38167234] [DOI: 10.1088/1741-2552/ad19ea]
Abstract
Objective: Current efforts to build reliable brain-computer interfaces (BCIs) span multiple axes, from hardware and software to more sophisticated experimental protocols and personalized approaches. However, despite these abundant efforts, there is still room for significant improvement. We argue that a rather overlooked direction lies in linking BCI protocols with recent advances in fundamental neuroscience. Approach: In light of these advances, particularly the characterization of the burst-like nature of beta-band activity and the diversity of beta bursts, we revisit the role of beta activity in 'left vs. right hand' motor imagery (MI) tasks. Current decoding approaches for such tasks take advantage of the fact that MI generates time-locked changes in induced power in the sensorimotor cortex, and rely on band-passed power changes in single or multiple channels. Although little is known about the dynamics of beta burst activity during MI, we hypothesized that beta bursts should be modulated in a way analogous to their activity during the performance of real upper-limb movements. Main results and significance: We show that classification features based on patterns of beta burst modulations yield decoding results equivalent to or better than the typically used beta power, across multiple open electroencephalography datasets, thus providing insights into the specificity of these biomarkers.
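A beta burst, in the simplest reading, is a transient in which the band-limited amplitude envelope exceeds a threshold. The sketch below uses an FFT band-limit, an analytic-signal envelope, and a median-based threshold; this is a common burst-detection heuristic, not the authors' pipeline, which characterizes burst waveforms in far more detail.

```python
import numpy as np

def beta_bursts(x, fs, band=(13.0, 30.0), thresh_mult=2.0):
    """Return (n_bursts, envelope): band-limit x in the frequency domain,
    take the analytic-signal envelope, and count rising edges of the
    supra-threshold mask (threshold = thresh_mult * median envelope)."""
    n = len(x)
    freqs = np.fft.fftfreq(n, d=1.0 / fs)
    X = np.fft.fft(x)
    X[(np.abs(freqs) < band[0]) | (np.abs(freqs) > band[1])] = 0.0
    X[freqs < 0] = 0.0          # analytic signal: drop negative freqs...
    X[freqs > 0] *= 2.0         # ...and double the positive ones
    envelope = np.abs(np.fft.ifft(X))
    above = envelope > thresh_mult * np.median(envelope)
    n_bursts = int(np.sum(np.diff(above.astype(int)) == 1) + int(above[0]))
    return n_bursts, envelope

# synthetic trace: low-amplitude noise plus a 0.2 s, 20 Hz burst around t = 1 s
rng = np.random.default_rng(0)
fs = 250
t = np.arange(int(fs * 2.0)) / fs
x = 0.1 * rng.standard_normal(t.size)
x += (np.abs(t - 1.0) < 0.1) * np.sin(2 * np.pi * 20.0 * t)
n_bursts, env = beta_bursts(x, fs)
```

Burst-based features would then summarize the timing, rate, and shape of such events per trial, instead of averaging band power over the whole MI window.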
Affiliation(s)
- Sotirios Papadopoulos: University Lyon 1, Lyon, France; Lyon Neuroscience Research Center, CRNL, INSERM U1028, CNRS, UMR5292, Lyon, France; Institut de Sciences Cognitives Marc Jeannerod, CNRS, UMR5229, Lyon, France
- Maciej J Szul: University Lyon 1, Lyon, France; Institut de Sciences Cognitives Marc Jeannerod, CNRS, UMR5229, Lyon, France
- Marco Congedo: GIPSA-lab, University Grenoble Alpes, CNRS, Grenoble-INP, Grenoble, France
- James J Bonaiuto: University Lyon 1, Lyon, France; Institut de Sciences Cognitives Marc Jeannerod, CNRS, UMR5229, Lyon, France
- Jérémie Mattout: University Lyon 1, Lyon, France; Lyon Neuroscience Research Center, CRNL, INSERM U1028, CNRS, UMR5292, Lyon, France
|
7. Fuentes-Martinez VJ, Romero S, Lopez-Gordo MA, Minguillon J, Rodríguez-Álvarez M. Low-Cost EEG Multi-Subject Recording Platform for the Assessment of Students' Attention and the Estimation of Academic Performance in Secondary School. Sensors (Basel) 2023; 23:9361. [PMID: 38067731] [PMCID: PMC10708847] [DOI: 10.3390/s23239361]
Abstract
The level of student attention in class greatly affects academic performance. Teachers typically rely on visual inspection to react to students' attention in time, but this subjective method leads to inconsistencies across classes. Online education exacerbates the issue, as students can turn off their cameras and microphones to preserve their privacy. To address this, we present a novel, low-cost EEG-based platform for assessing students' attention and estimating their academic performance. In a study involving 34 secondary school students (aged 14 to 16), participants watched an academic video and answered evaluation questions while their EEG activity was recorded using a commercial headset. The results demonstrate a significant correlation (r = 0.53, p = 0.003) between the power spectral density (PSD) of the EEG beta band (12-30 Hz) and students' academic performance. Additionally, there was a notable difference in beta-band PSD between high and low academic performers. These findings support the use of beta-band PSD for the immediate and objective assessment of both student attention and subsequent academic performance. The platform offers valuable, objective feedback to teachers, enhancing the effectiveness of both face-to-face and online teaching and learning environments.
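The core measurement, mean beta-band (12-30 Hz) PSD correlated against a performance score, fits in a few lines. The cohort below is synthetic (beta amplitude deliberately scaled with the score) purely to exercise the computation, and the estimator is a plain periodogram rather than whatever the platform uses internally.

```python
import numpy as np

def band_psd(x, fs, band=(12.0, 30.0)):
    """Mean periodogram power inside a frequency band."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))
    sel = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[sel].mean())

rng = np.random.default_rng(1)
fs, n = 128, 512
scores = np.linspace(40.0, 95.0, 20)  # hypothetical exam scores, one per student
# synthetic EEG: a 20 Hz (beta) component whose amplitude grows with the score
signals = [(s / 95.0) * np.sin(2 * np.pi * 20.0 * np.arange(n) / fs)
           + 0.3 * rng.standard_normal(n)
           for s in scores]
beta_power = np.array([band_psd(x, fs) for x in signals])
r = float(np.corrcoef(beta_power, scores)[0, 1])  # Pearson correlation
```

On real recordings one would average the PSD over electrodes and time windows per student before computing the correlation; the arithmetic is otherwise the same.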
Affiliation(s)
- Victor Juan Fuentes-Martinez: Department of Computer Engineering, Automation and Robotics, Research Centre for Information and Communication Technologies (CITIC-UGR), University of Granada, 18014 Granada, Spain; Department of Signal Theory, Telematics and Communications, Research Centre for Information and Communication Technologies (CITIC-UGR), University of Granada, 18014 Granada, Spain; Neuroengineering and Computation Lab, Research Centre for Information and Communication Technologies (CITIC-UGR), University of Granada, 18014 Granada, Spain
- Samuel Romero: Department of Computer Engineering, Automation and Robotics, Research Centre for Information and Communication Technologies (CITIC-UGR), University of Granada, 18014 Granada, Spain; Neuroengineering and Computation Lab, Research Centre for Information and Communication Technologies (CITIC-UGR), University of Granada, 18014 Granada, Spain
- Miguel Angel Lopez-Gordo: Department of Signal Theory, Telematics and Communications, Research Centre for Information and Communication Technologies (CITIC-UGR), University of Granada, 18014 Granada, Spain; Neuroengineering and Computation Lab, Research Centre for Information and Communication Technologies (CITIC-UGR), University of Granada, 18014 Granada, Spain
- Jesus Minguillon: Department of Signal Theory, Telematics and Communications, Research Centre for Information and Communication Technologies (CITIC-UGR), University of Granada, 18014 Granada, Spain; Neuroengineering and Computation Lab, Research Centre for Information and Communication Technologies (CITIC-UGR), University of Granada, 18014 Granada, Spain
- Manuel Rodríguez-Álvarez: Department of Computer Engineering, Automation and Robotics, Research Centre for Information and Communication Technologies (CITIC-UGR), University of Granada, 18014 Granada, Spain
8. Luo R, Xiao X, Chen E, Meng L, Jung TP, Xu M, Ming D. Almost free of calibration for SSVEP-based brain-computer interfaces. J Neural Eng 2023; 20:066013. [PMID: 37948768] [DOI: 10.1088/1741-2552/ad0b8f]
Abstract
Objective. The steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) is a promising technology that can achieve a high information transfer rate (ITR) with supervised algorithms such as ensemble task-related component analysis (eTRCA) and task-discriminant component analysis (TDCA). However, training individual models requires a tedious and time-consuming calibration process, which hinders the real-life use of SSVEP-BCIs. A recent data augmentation method, called source aliasing matrix estimation (SAME), can generate new EEG samples from a few calibration trials. However, SAME does not exploit information across stimuli and only reduces the number of calibration trials per command, so it still has limitations. Approach. This study proposes an extended version of SAME, called multi-stimulus SAME (msSAME), which exploits the similarity of the aliasing matrix across frequencies to enhance the performance of SSVEP-BCIs with insufficient calibration trials. We also propose a semi-supervised approach based on msSAME that can further reduce the number of SSVEP frequencies needed for calibration. We evaluate our method on two public datasets, Benchmark and BETA, and in an online experiment. Main results. The results show that msSAME outperforms SAME for both eTRCA and TDCA on the public datasets. Moreover, the semi-supervised msSAME-based method achieves performance comparable to fully calibrated methods and outperforms conventional calibration-free methods. Remarkably, our method needs only 24 s to calibrate 40 targets in the online experiment and achieves an average ITR of 213.8 bits/min with a peak of 242.6 bits/min. Significance. This study significantly reduces the calibration effort for individual SSVEP-BCIs, which is beneficial for developing practical plug-and-play SSVEP-BCIs.
Affiliation(s)
- Ruixin Luo: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, People's Republic of China; College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, People's Republic of China
- Xiaolin Xiao: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, People's Republic of China; College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, People's Republic of China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin, People's Republic of China
- Enze Chen: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, People's Republic of China
- Lin Meng: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, People's Republic of China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin, People's Republic of China
- Tzyy-Ping Jung: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, People's Republic of China; College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, People's Republic of China; The Swartz Center for Computational Neuroscience, University of California, San Diego, CA, United States of America
- Minpeng Xu: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, People's Republic of China; College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, People's Republic of China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin, People's Republic of China
- Dong Ming: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, People's Republic of China; College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, People's Republic of China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin, People's Republic of China
9. Pan L, Wang K, Xu L, Sun X, Yi W, Xu M, Ming D. Riemannian geometric and ensemble learning for decoding cross-session motor imagery electroencephalography signals. J Neural Eng 2023; 20:066011. [PMID: 37931299] [DOI: 10.1088/1741-2552/ad0a01]
Abstract
Objective. Brain-computer interfaces (BCIs) enable a direct communication pathway between the human brain and external devices, without relying on the peripheral nervous and musculoskeletal systems. Motor imagery (MI)-based BCIs have attracted significant interest for their potential in motor rehabilitation. However, current algorithms fail to account for the cross-session variability of electroencephalography (EEG) signals, limiting their practical application. Approach. We propose a Riemannian geometry-based adaptive boosting and voting ensemble (RAVE) algorithm to address this issue. Our approach segments the MI period into multiple sub-datasets using a sliding window and extracts features from each sub-dataset using Riemannian geometry. We then train adaptive boosting (AdaBoost) ensemble classifiers for each sub-dataset, with the final BCI output determined by majority voting across all classifiers. We tested the proposed RAVE algorithm and eight competing algorithms on four datasets (Pan2023, BNCI001-2014, BNCI001-2015, BNCI004-2015). Main results. In the cross-session scenario, the RAVE algorithm significantly outperformed the eight competing algorithms under different within-session training sample sizes. Compared to traditional algorithms that require a large number of training samples, the RAVE algorithm achieved similar or even better classification performance on the datasets (Pan2023, BNCI001-2014, BNCI001-2015), even when it used few or no within-session training samples. Significance. These findings indicate that our cross-session decoding strategy could enable MI-BCI applications that require no or minimal training.
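The Riemannian feature-extraction step, mapping trial covariance matrices into a tangent space before feeding a Euclidean classifier such as AdaBoost, can be sketched with numpy alone. This is the standard tangent-space construction, not the full RAVE pipeline (no sliding windows, boosting, or voting here); `C_ref` would typically be the Riemannian or arithmetic mean of the training covariances.

```python
import numpy as np

def _eig_fun(C, fun):
    """Apply a scalar function to a symmetric matrix via eigendecomposition."""
    w, V = np.linalg.eigh(C)
    return (V * fun(w)) @ V.T

def tangent_space(covs, C_ref):
    """Map SPD covariance matrices to tangent-space feature vectors:
    S_i = logm(C_ref^{-1/2} @ C_i @ C_ref^{-1/2}), upper triangle vectorized.
    The reference point itself maps to the zero vector."""
    inv_sqrt = _eig_fun(C_ref, lambda w: w ** -0.5)
    rows, cols = np.triu_indices(C_ref.shape[0])
    feats = []
    for C in covs:
        S = _eig_fun(inv_sqrt @ C @ inv_sqrt, np.log)
        feats.append(S[rows, cols])
    return np.array(feats)

# toy 2-channel example: the reference covariance and a uniformly scaled copy
C_ref = np.array([[2.0, 0.5],
                  [0.5, 1.0]])
feats = tangent_space([C_ref, 2.0 * C_ref], C_ref)
```

These vectors live in an ordinary Euclidean space, which is what makes boosting and majority voting over per-window classifiers straightforward downstream.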
Affiliation(s)
- Lincong Pan: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China; School of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Kun Wang: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, People's Republic of China
- Lichao Xu: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China
- Xinwei Sun: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China; School of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Weibo Yi: Beijing Machine and Equipment Institute, Beijing 100192, People's Republic of China
- Minpeng Xu: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China; School of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072, People's Republic of China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, People's Republic of China
- Dong Ming: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China; School of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin 300072, People's Republic of China; Haihe Laboratory of Brain-computer Interaction and Human-machine Integration, Tianjin 300392, People's Republic of China
|
10
|
Lun X, Zhang Y, Zhu M, Lian Y, Hou Y. A Combined Virtual Electrode-Based ESA and CNN Method for MI-EEG Signal Feature Extraction and Classification. Sensors (Basel) 2023; 23:8893. [PMID: 37960592] [PMCID: PMC10649179] [DOI: 10.3390/s23218893]
Abstract
A Brain-Computer Interface (BCI) is a medium for communication between the human brain and computers that does not rely on other human neural tissues; instead, it decodes Electroencephalography (EEG) signals and converts them into commands to control external devices. Motor Imagery (MI) is an important BCI paradigm that generates spontaneous EEG signals without external stimulation by imagining limb movements to strengthen the brain's compensatory function, and it has a promising future in computer-aided diagnosis and rehabilitation technology for brain diseases. However, research on motor imagery-based brain-computer interface (MI-BCI) systems faces a series of technical difficulties: large individual differences between subjects and poor performance of cross-subject classification models; a low signal-to-noise ratio of EEG signals and poor classification accuracy; and poor online performance of MI-BCI systems. To address these problems, this paper proposed a combined virtual electrode-based EEG Source Analysis (ESA) and Convolutional Neural Network (CNN) method for MI-EEG signal feature extraction and classification. The outcomes reveal that an online MI-BCI system developed with this method improves the decoding of multi-task MI-EEG after training, learns generalized features from multiple subjects in cross-subject experiments, adapts to some extent to the individual differences of new subjects, and can decode EEG intent online to realize brain control of an intelligent cart, providing a new idea for research on online MI-BCI systems.
Affiliation(s)
- Yimin Hou
- School of Automation Engineering, Northeast Electric Power University, Jilin 132012, China; (X.L.); (Y.Z.); (M.Z.); (Y.L.)
11
Chowdhury RR, Muhammad Y, Adeel U. Enhancing Cross-Subject Motor Imagery Classification in EEG-Based Brain-Computer Interfaces by Using Multi-Branch CNN. Sensors (Basel) 2023; 23:7908. [PMID: 37765965] [PMCID: PMC10536894] [DOI: 10.3390/s23187908]
Abstract
A brain-computer interface (BCI) is a computer-based system that allows for communication between the brain and the outer world, enabling users to interact with computers using neural activity. These brain signals are obtained from electroencephalogram (EEG) recordings. A significant obstacle to the development of EEG-based BCIs is the classification of subject-independent motor imagery data, since EEG data are highly individualized. Deep learning techniques such as the convolutional neural network (CNN) have demonstrated their value in feature extraction for increasing classification accuracy. In this paper, we present a multi-branch (five-branch) 2D convolutional neural network that employs different hyperparameters for every branch. The proposed model achieved promising results for cross-subject classification and outperformed EEGNet, ShallowConvNet, DeepConvNet, MMCNN, and EEGNet_Fusion on three public datasets. Our proposed model, EEGNet Fusion V2, achieves 89.6% and 87.8% accuracy for the actual and imagined motor activity of the eegmmidb dataset, and scores of 74.3% and 84.1% for the BCI IV-2a and IV-2b datasets, respectively. However, the proposed model has a higher computational cost, taking around 3.5 times more computation time per sample than EEGNet_Fusion.
Affiliation(s)
- Radia Rayan Chowdhury
- Department of Computing & Games, School of Computing, Engineering & Digital Technologies, Teesside University, Middlesbrough TS1 3BX, UK
- Yar Muhammad
- Department of Computing & Games, School of Computing, Engineering & Digital Technologies, Teesside University, Middlesbrough TS1 3BX, UK
- Department of Computer Science, School of Physics, Engineering & Computer Science, University of Hertfordshire, Hatfield AL10 9AB, UK
- Usman Adeel
- Department of Computing & Games, School of Computing, Engineering & Digital Technologies, Teesside University, Middlesbrough TS1 3BX, UK
12
Antony MJ, Sankaralingam BP, Khan S, Almjally A, Almujally NA, Mahendran RK. Brain-Computer Interface: The HOL-SSA Decomposition and Two-Phase Classification on the HGD EEG Data. Diagnostics (Basel) 2023; 13:2852. [PMID: 37685390] [PMCID: PMC10486696] [DOI: 10.3390/diagnostics13172852]
Abstract
An efficient processing approach is essential for increasing identification accuracy, since the electroencephalogram (EEG) signals produced by Brain-Computer Interface (BCI) apparatus are nonlinear, nonstationary, and time-varying. The interpretation of scalp EEG recordings can be hampered by nonbrain contributions to the EEG signals, referred to as artifacts. Common disturbances in the capture of EEG signals include electrooculogram (EOG), electrocardiogram (ECG), electromyogram (EMG), and other artifacts, which have a significant impact on the extraction of meaningful information. This study suggests integrating the Singular Spectrum Analysis (SSA) and Independent Component Analysis (ICA) methods to preprocess the EEG data. The key objective of our research was to employ Higher-Order Linear-Moment-based SSA (HOL-SSA) to decompose EEG signals into multivariate components, followed by extracting source signals using Online Recursive ICA (ORICA). This approach effectively improves artifact rejection. Experimental results using the motor imagery High-Gamma Dataset validate our method's ability to identify and remove artifacts such as EOG, ECG, and EMG from EEG data while preserving essential brain activity.
Affiliation(s)
- Mary Judith Antony
- Department of Computer Science & Engineering, Panimalar College of Engineering, Chennai 600123, India
- Baghavathi Priya Sankaralingam
- Department of Computer Science & Engineering, Amrita School of Computing, Amrita Vishwa Vidyapeetham, Chennai 601103, India
- Shakir Khan
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia; (S.K.); (A.A.)
- University Centre for Research and Development, Department of Computer Science and Engineering, Chandigarh University, Mohali 140413, India
- Abrar Almjally
- College of Computer and Information Sciences, Imam Mohammad Ibn Saud Islamic University (IMSIU), Riyadh 11432, Saudi Arabia; (S.K.); (A.A.)
- Nouf Abdullah Almujally
- Department of Information Systems, College of Computer and Information Sciences, Princess Nourah bint Abdulrahman University, Riyadh 11671, Saudi Arabia
- Rakesh Kumar Mahendran
- Department of Computer Science and Engineering, Rajalakshmi Engineering College, Chennai 602105, India
13
Venkatesh S, Miranda ER, Braund E. SSVEP-based brain-computer interface for music using a low-density EEG system. Assist Technol 2023; 35:378-388. [PMID: 35713603] [DOI: 10.1080/10400435.2022.2084182]
Abstract
In this paper, we present a bespoke brain-computer interface (BCI) developed for a person with severe motor impairments, previously a violinist, to allow her to perform and compose music at home. It uses steady-state visually evoked potentials (SSVEP) and adopts a dry, low-density, wireless electroencephalogram (EEG) headset. In this study, we investigated two parameters, (1) placement of the EEG headset and (2) inter-stimulus distance, and found that the former significantly improved the information transfer rate (ITR). To analyze the EEG, we adopted canonical correlation analysis (CCA) without weight calibration. The BCI for musical performance achieved a high ITR of 37.59 ± 9.86 bits/min and a mean accuracy of 88.89 ± 10.09%. The BCI for musical composition obtained an ITR of 14.91 ± 2.87 bits/min and a mean accuracy of 95.83 ± 6.97%. The BCI was successfully deployed to the person with severe motor impairments. She regularly uses it for musical composition at home, demonstrating how BCIs can be translated from laboratories to real-world scenarios.
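The ITR values quoted above are conventionally computed with the Wolpaw formula from the number of selectable targets N, the selection accuracy P, and the selection rate. A small sketch (the target count and selection rate below are illustrative, not taken from this study):

```python
import math

def wolpaw_itr(n_targets, accuracy, selections_per_min):
    """Information transfer rate (bits/min) via the Wolpaw formula:
    B = log2 N + P log2 P + (1 - P) log2((1 - P) / (N - 1))."""
    n, p = n_targets, accuracy
    bits = math.log2(n)
    if 0.0 < p < 1.0:
        bits += p * math.log2(p) + (1 - p) * math.log2((1 - p) / (n - 1))
    return bits * selections_per_min

# e.g. a 4-target speller at 90% accuracy, 10 selections per minute
print(round(wolpaw_itr(4, 0.90, 10), 2))  # → 13.73
```

Note that at chance-level accuracy (P = 1/N) the formula yields 0 bits/min, and at P = 1 it reduces to log2(N) bits per selection.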
Affiliation(s)
- Satvik Venkatesh
- Interdisciplinary Centre for Computer Music Research (ICCMR), University of Plymouth, Plymouth, UK
- Eduardo Reck Miranda
- Interdisciplinary Centre for Computer Music Research (ICCMR), University of Plymouth, Plymouth, UK
- Edward Braund
- Interdisciplinary Centre for Computer Music Research (ICCMR), University of Plymouth, Plymouth, UK
14
Liang L, Zhang Q, Zhou J, Li W, Gao X. Dataset Evaluation Method and Application for Performance Testing of SSVEP-BCI Decoding Algorithm. Sensors (Basel) 2023; 23:6310. [PMID: 37514603] [PMCID: PMC10385518] [DOI: 10.3390/s23146310]
Abstract
Steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) systems have been extensively researched over the past two decades, and multiple standard datasets have been published and widely used. However, sample distributions and collection equipment differ across datasets, and a unified evaluation method is lacking. Most new SSVEP decoding algorithms are tested on self-collected data or verified offline using one or two previous datasets, which can lead to performance differences in actual application scenarios. To address these issues, this paper proposed an SSVEP dataset evaluation method and analyzed six datasets with frequency- and phase-modulation paradigms to form an SSVEP algorithm evaluation dataset system. Finally, based on these datasets, performance tests were carried out on four existing SSVEP decoding algorithms. The findings reveal that the performance of the same algorithm varies significantly when tested on different datasets, with substantial variation between the best- and worst-performing subjects. These results demonstrate that the proposed evaluation method can integrate six datasets into an SSVEP algorithm performance-testing dataset system. Such a system can test and verify SSVEP decoding algorithms from different perspectives, such as different subjects, environments, and equipment, which is helpful for research on new SSVEP decoding algorithms and has significant reference value for other BCI application fields.
Affiliation(s)
- Liyan Liang
- China Academy of Information and Communications Technology, Beijing 100161, China
- Qian Zhang
- China Academy of Information and Communications Technology, Beijing 100161, China
- Jie Zhou
- China Academy of Information and Communications Technology, Beijing 100161, China
- Wenyu Li
- China Academy of Information and Communications Technology, Beijing 100161, China
- Xiaorong Gao
- Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, China
15
Fernández-Rodríguez Á, Ron-Angevin R, Velasco-Álvarez F, Diaz-Pineda J, Letouzé T, André JM. Evaluation of Single-Trial Classification to Control a Visual ERP-BCI under a Situation Awareness Scenario. Brain Sci 2023; 13:886. [PMID: 37371365] [DOI: 10.3390/brainsci13060886]
Abstract
An event-related potential (ERP)-based brain-computer interface (BCI) can be used to monitor a user's cognitive state during a surveillance task in a situational-awareness context. The present study explores the use of an ERP-BCI for detecting new planes by an air traffic controller (ATC). Two experiments were conducted to evaluate the impact of different visual factors on target detection. Experiment 1 validated the type of stimulus used and the effect of not knowing its appearance location in an ERP-BCI scenario. Experiment 2 evaluated the effects of the size of the target-stimulus appearance area and of stimulus salience in an ATC scenario. The main results demonstrate that the size of the plane appearance area had a negative impact on detection performance and on the amplitude of the P300 component. Future studies should address this issue to improve the stimulus-detection performance of an ATC using an ERP-BCI.
Affiliation(s)
- Álvaro Fernández-Rodríguez
- Departamento de Tecnología Electrónica, Instituto Universitario de Investigación en Telecomunicación de la Universidad de Málaga (TELMA), Universidad de Málaga, 29071 Malaga, Spain
- Ricardo Ron-Angevin
- Departamento de Tecnología Electrónica, Instituto Universitario de Investigación en Telecomunicación de la Universidad de Málaga (TELMA), Universidad de Málaga, 29071 Malaga, Spain
- Francisco Velasco-Álvarez
- Departamento de Tecnología Electrónica, Instituto Universitario de Investigación en Telecomunicación de la Universidad de Málaga (TELMA), Universidad de Málaga, 29071 Malaga, Spain
- Théodore Letouzé
- Laboratoire IMS, CNRS UMR 5218, Cognitive Team, Bordeaux INP-ENSC, 33400 Talence, France
- Jean-Marc André
- Laboratoire IMS, CNRS UMR 5218, Cognitive Team, Bordeaux INP-ENSC, 33400 Talence, France
16
Abdulghani MM, Walters WL, Abed KH. Imagined Speech Classification Using EEG and Deep Learning. Bioengineering (Basel) 2023; 10:649. [PMID: 37370580] [DOI: 10.3390/bioengineering10060649]
Abstract
In this paper, we propose an imagined speech-based brain wave pattern recognition approach using deep learning. Multiple features were extracted concurrently from eight-channel electroencephalography (EEG) signals. To obtain classifiable EEG data with fewer sensors, we placed the EEG sensors on carefully selected spots on the scalp. To reduce the dimensionality and complexity of the EEG dataset and to avoid overfitting during deep learning, we utilized the wavelet scattering transformation. A low-cost 8-channel EEG headset was used with MATLAB 2023a to acquire the EEG data. A long short-term memory recurrent neural network (LSTM-RNN) was used to decode the identified EEG signals into four audio commands: up, down, left, and right. The wavelet scattering transformation was applied to extract the most stable features by passing the EEG dataset through a series of filtration processes, implemented for each individual command in the EEG datasets. The proposed imagined speech-based brain wave pattern recognition approach achieved a 92.50% overall classification accuracy, which is promising for designing trustworthy imagined speech-based brain-computer interface (BCI) real-time systems in the future. For a fuller evaluation of the classification performance, other metrics were also considered: we obtained 92.74%, 92.50%, and 92.62% for precision, recall, and F1-score, respectively.
Affiliation(s)
- Mokhles M Abdulghani
- Department of Electrical & Computer Engineering and Computer Science, College of Sciences, Engineering & Technology, Jackson State University, Jackson, MS 39217, USA
- Wilbur L Walters
- Department of Electrical & Computer Engineering and Computer Science, College of Sciences, Engineering & Technology, Jackson State University, Jackson, MS 39217, USA
- Khalid H Abed
- Department of Electrical & Computer Engineering and Computer Science, College of Sciences, Engineering & Technology, Jackson State University, Jackson, MS 39217, USA
17
Shuqfa Z, Belkacem AN, Lakas A. Decoding Multi-Class Motor Imagery and Motor Execution Tasks Using Riemannian Geometry Algorithms on Large EEG Datasets. Sensors (Basel) 2023; 23:5051. [PMID: 37299779] [DOI: 10.3390/s23115051]
Abstract
The use of Riemannian geometry decoding algorithms to classify electroencephalography-based motor-imagery brain-computer interface (BCI) trials is relatively new and promises to outperform current state-of-the-art methods by overcoming the noise and nonstationarity of electroencephalography signals. However, the related literature reports high classification accuracy only on relatively small BCI datasets. The aim of this paper is to study the performance of a novel implementation of the Riemannian geometry decoding algorithm on large BCI datasets. We apply several Riemannian geometry decoding algorithms to a large offline dataset using four adaptation strategies: baseline, rebias, supervised, and unsupervised. Each adaptation strategy is applied to motor execution and motor imagery in two scenarios: 64 electrodes and 29 electrodes. The dataset comprises four-class bilateral and unilateral motor imagery and motor execution from 109 subjects. Across several classification experiments, the best accuracy is obtained with the baseline minimum distance to Riemannian mean, with mean accuracy reaching up to 81.5% for motor execution and up to 76.4% for motor imagery. Accurate classification of EEG trials helps realize successful BCI applications that allow effective control of devices.
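The best-performing decoder named above, minimum distance to Riemannian mean (MDM), can be illustrated compactly. The sketch below is a didactic stand-in using the log-Euclidean metric on 2×2 SPD covariance matrices, where the matrix logarithm has a closed form; the class names and toy covariances are hypothetical, and the paper's implementation operates on full-size matrices:

```python
import math

def eig_sym2(m):
    # closed-form eigen-decomposition of a symmetric 2x2 matrix [[a, b], [b, c]]
    a, b, c = m[0][0], m[0][1], m[1][1]
    d = math.sqrt(((a - c) / 2.0) ** 2 + b * b)
    l1, l2 = (a + c) / 2.0 + d, (a + c) / 2.0 - d
    if abs(b) < 1e-12:
        v1 = (1.0, 0.0) if a >= c else (0.0, 1.0)
    else:
        n = math.hypot(l1 - c, b)
        v1 = ((l1 - c) / n, b / n)
    return (l1, l2), (v1, (-v1[1], v1[0]))

def logm_spd2(m):
    # matrix logarithm of an SPD matrix via its eigen-decomposition
    (l1, l2), (v1, v2) = eig_sym2(m)
    out = [[0.0, 0.0], [0.0, 0.0]]
    for lam, v in ((math.log(l1), v1), (math.log(l2), v2)):
        for i in range(2):
            for j in range(2):
                out[i][j] += lam * v[i] * v[j]
    return out

def mdm_fit(covs, labels):
    # per-class log-Euclidean mean: average the matrix logs
    means = {}
    for y in set(labels):
        logs = [logm_spd2(c) for c, l in zip(covs, labels) if l == y]
        means[y] = [[sum(m[i][j] for m in logs) / len(logs)
                     for j in range(2)] for i in range(2)]
    return means

def mdm_predict(means, cov):
    # assign the trial to the class whose mean is nearest in the log domain
    lc = logm_spd2(cov)
    dist = lambda p, q: sum((p[i][j] - q[i][j]) ** 2
                            for i in range(2) for j in range(2))
    return min(means, key=lambda y: dist(lc, means[y]))

# hypothetical trial covariances for two imagined-movement classes
covs = [[[2.0, 0.5], [0.5, 1.0]], [[2.2, 0.6], [0.6, 1.1]],
        [[1.0, -0.4], [-0.4, 2.0]], [[1.1, -0.5], [-0.5, 2.2]]]
labels = ["left", "left", "right", "right"]
means = mdm_fit(covs, labels)
```

The appeal of MDM is that it has no hyperparameters: training is just computing one geometric mean per class, which also makes the rebias/supervised/unsupervised adaptation strategies easy to express as updates of those means.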
Affiliation(s)
- Zaid Shuqfa
- Connected Autonomous Intelligent Systems Laboratory, Department of Computer and Network Engineering, College of IT (CIT), United Arab Emirates University (UAEU), Al Ain 15551, United Arab Emirates
- Abdelkader Nasreddine Belkacem
- Connected Autonomous Intelligent Systems Laboratory, Department of Computer and Network Engineering, College of IT (CIT), United Arab Emirates University (UAEU), Al Ain 15551, United Arab Emirates
- Abderrahmane Lakas
- Connected Autonomous Intelligent Systems Laboratory, Department of Computer and Network Engineering, College of IT (CIT), United Arab Emirates University (UAEU), Al Ain 15551, United Arab Emirates
18
Perpetuini D, Günal M, Chiou N, Koyejo S, Mathewson K, Low KA, Fabiani M, Gratton G, Chiarelli AM. Fast Optical Signals for Real-Time Retinotopy and Brain Computer Interface. Bioengineering (Basel) 2023; 10:553. [PMID: 37237623] [PMCID: PMC10215195] [DOI: 10.3390/bioengineering10050553]
Abstract
A brain-computer interface (BCI) allows users to control external devices through brain activity. Portable neuroimaging techniques, such as near-infrared (NIR) imaging, are suitable for this goal. NIR imaging has been used to measure rapid changes in brain optical properties associated with neuronal activation, namely fast optical signals (FOS), with good spatiotemporal resolution. However, FOS have a low signal-to-noise ratio, limiting their BCI application. Here, FOS were acquired with a frequency-domain optical system from the visual cortex during visual stimulation consisting of a rotating checkerboard wedge flickering at 5 Hz. We used measures of photon count (Direct Current, DC light intensity) and time of flight (phase) at two NIR wavelengths (690 nm and 830 nm), combined with a machine learning approach, for fast estimation of the stimulated visual-field quadrant. The input features of a cross-validated support vector machine classifier were computed as the average modulus of the wavelet coherence between each channel and the average response among all channels in 512 ms time windows. Above-chance performance was obtained when differentiating visual stimulation quadrants (left vs. right or top vs. bottom), with the best classification accuracy of ~63% (information transfer rate of ~6 bits/min) when classifying the superior and inferior stimulation quadrants using DC at 830 nm. This method is a first attempt to provide generalizable retinotopy classification relying on FOS, paving the way for the use of FOS in real-time BCIs.
Affiliation(s)
- David Perpetuini
- Department of Neuroscience, Imaging and Clinical Sciences, G. D’Annunzio University of Chieti-Pescara, 66100 Chieti, Italy
- Institute for Advanced Biomedical Technologies, G. D’Annunzio University of Chieti-Pescara, 66100 Chieti, Italy
- Mehmet Günal
- Beckman Institute, University of Illinois at Urbana Champaign, Urbana, IL 61801, USA
- Nicole Chiou
- Department of Computer Science, Stanford University, Stanford, CA 94305, USA
- Sanmi Koyejo
- Department of Computer Science, Stanford University, Stanford, CA 94305, USA
- Kyle Mathewson
- Department of Psychology, Faculty of Science, University of Alberta, Edmonton, AB T6G 2R3, Canada
- Kathy A. Low
- Beckman Institute, University of Illinois at Urbana Champaign, Urbana, IL 61801, USA
- Monica Fabiani
- Beckman Institute, University of Illinois at Urbana Champaign, Urbana, IL 61801, USA
- Psychology Department, University of Illinois at Urbana Champaign, Champaign, IL 61820, USA
- Gabriele Gratton
- Beckman Institute, University of Illinois at Urbana Champaign, Urbana, IL 61801, USA
- Psychology Department, University of Illinois at Urbana Champaign, Champaign, IL 61820, USA
- Antonio Maria Chiarelli
- Department of Neuroscience, Imaging and Clinical Sciences, G. D’Annunzio University of Chieti-Pescara, 66100 Chieti, Italy
- Institute for Advanced Biomedical Technologies, G. D’Annunzio University of Chieti-Pescara, 66100 Chieti, Italy
19
Ortega-Rodríguez J, Gómez-González JF, Pereda E. Selection of the Minimum Number of EEG Sensors to Guarantee Biometric Identification of Individuals. Sensors (Basel) 2023; 23:4239. [PMID: 37177443] [PMCID: PMC10181121] [DOI: 10.3390/s23094239]
Abstract
Biometric identification uses person recognition techniques based on the extraction of physical or biological properties that characterize and differentiate one person from another, providing critical information suitable for application in security systems. The extraction of information from the electrical biosignals of the human brain has received a great deal of attention in recent years. Analysis of EEG signals has been widely used over the last century in medicine and as a basis for brain-machine interfaces (BMIs), and the application of EEG signals to biometric recognition has recently been demonstrated. In this context, EEG-based biometric systems are often considered for two different applications: identification (one-to-many classification) and authentication (one-to-one or true/false classification). In this article, we establish a methodology for selecting and reducing the minimum number of EEG sensors necessary to carry out effective biometric identification of individuals. Two methodologies were applied to reduce the number of electrodes, one based on principal component analysis and the other on the Wilcoxon signed-rank test. This allowed us to identify, for each methodology, the areas of the cerebral cortex that permit selection of the minimum number of electrodes necessary to identify individuals. The methodologies were applied to two databases: one with 13 people whose recordings were self-collected using low-cost EEG equipment (EMOTIV EPOC+), and a publicly available database with recordings from 109 people provided by PhysioNet BCI.
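A simplified stand-in for the PCA-based electrode-reduction idea described above (not the authors' exact pipeline) is to rank channels by the magnitude of their loading on the first principal component, obtained here by power iteration on the channel covariance matrix; the toy recording matrix is hypothetical:

```python
def pca_first_component(X, n_iter=100):
    # X: samples x channels; power iteration on the sample covariance matrix
    n, d = len(X), len(X[0])
    means = [sum(row[j] for row in X) / n for j in range(d)]
    Xc = [[row[j] - means[j] for j in range(d)] for row in X]
    cov = [[sum(Xc[i][a] * Xc[i][b] for i in range(n)) / (n - 1)
            for b in range(d)] for a in range(d)]
    v = [1.0] * d
    for _ in range(n_iter):
        w = [sum(cov[a][b] * v[b] for b in range(d)) for a in range(d)]
        norm = sum(x * x for x in w) ** 0.5
        v = [x / norm for x in w]
    return v

# toy data: channels 0 and 2 carry correlated signal, channel 1 is nearly flat
X = [[1.0, 0.1, 2.0], [2.0, 0.1, 4.1], [3.0, 0.2, 6.0], [4.0, 0.1, 7.9]]
v = pca_first_component(X)
# rank channels by |loading|; keep the top-k as the reduced montage
ranking = sorted(range(3), key=lambda j: -abs(v[j]))
```

Here `ranking` puts the near-constant channel last, which is the intuition behind discarding electrodes that contribute little variance to the dominant component.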
Affiliation(s)
- Jordan Ortega-Rodríguez
- Department of Industrial Engineering, University of La Laguna, 38200 San Cristóbal de La Laguna, Spain
- IACTEC Medical Technology Group, Instituto de Astrofísica de Canarias (IAC), 38320 San Cristóbal de La Laguna, Spain
- Ernesto Pereda
- Department of Industrial Engineering, University of La Laguna, 38200 San Cristóbal de La Laguna, Spain
20
Zafar A, Hussain SJ, Ali MU, Lee SW. Metaheuristic Optimization-Based Feature Selection for Imagery and Arithmetic Tasks: An fNIRS Study. Sensors (Basel) 2023; 23:3714. [PMID: 37050774] [PMCID: PMC10098559] [DOI: 10.3390/s23073714]
Abstract
In recent decades, the brain-computer interface (BCI) has emerged as a leading area of research. Feature selection is vital to reduce a dataset's dimensionality, increase computational efficiency, and enhance BCI performance. Using activity-related features leads to a high classification rate among the desired tasks. This study presents a wrapper-based metaheuristic feature selection framework for BCI applications using functional near-infrared spectroscopy (fNIRS). Here, temporal statistical features (i.e., the mean, slope, maximum, skewness, and kurtosis) were computed from all available channels to form a training vector. Seven metaheuristic optimization algorithms were tested for their classification performance using a k-nearest neighbor-based cost function: particle swarm optimization, cuckoo search optimization, the firefly algorithm, the bat algorithm, flower pollination optimization, whale optimization, and grey wolf optimization (GWO). The presented approach was validated on a publicly available online dataset of motor imagery (MI) and mental arithmetic (MA) tasks from 29 healthy subjects. The results showed that classification accuracy was significantly improved by utilizing the features selected by the metaheuristic optimization algorithms relative to the full feature set. All of the metaheuristic algorithms improved the classification accuracy and reduced the feature vector size. The GWO yielded the highest average classification rates (p < 0.01) of 94.83 ± 5.5%, 92.57 ± 6.9%, and 85.66 ± 7.3% for the MA, MI, and four-class (left- and right-hand MI, MA, and baseline) tasks, respectively. The presented framework may be helpful in the training phase for selecting the appropriate features for robust fNIRS-based BCI applications.
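The wrapper principle described above — score candidate feature subsets with a k-NN-based cost and keep the fittest — can be sketched with a random-search stand-in for the seven metaheuristics (toy data; the actual optimizers update candidate masks with swarm- or pack-inspired rules rather than uniform sampling):

```python
import random

def loo_1nn_accuracy(X, y, mask):
    # leave-one-out 1-nearest-neighbour accuracy on the selected features
    idx = [j for j, m in enumerate(mask) if m]
    if not idx:
        return 0.0
    correct = 0
    for i in range(len(X)):
        best, best_d = None, float("inf")
        for k in range(len(X)):
            if k == i:
                continue
            d = sum((X[i][j] - X[k][j]) ** 2 for j in idx)
            if d < best_d:
                best, best_d = k, d
        correct += y[best] == y[i]
    return correct / len(X)

def wrapper_select(X, y, n_iter=200, seed=0):
    # random search over binary feature masks, scored by the k-NN cost
    rng = random.Random(seed)
    n_feat = len(X[0])
    best_mask = [1] * n_feat
    best_fit = loo_1nn_accuracy(X, y, best_mask)
    for _ in range(n_iter):
        mask = [rng.randint(0, 1) for _ in range(n_feat)]
        fit = loo_1nn_accuracy(X, y, mask)
        # prefer higher accuracy; break ties with fewer features
        if (fit, -sum(mask)) > (best_fit, -sum(best_mask)):
            best_mask, best_fit = mask, fit
    return best_mask, best_fit

# toy data: feature 0 separates the classes, feature 1 is noise
X = [[0.0, 5.0], [0.1, -3.0], [0.2, 9.0], [1.0, 4.0], [1.1, -2.0], [1.2, 8.0]]
y = [0, 0, 0, 1, 1, 1]
best_mask, best_fit = wrapper_select(X, y)
```

On this toy problem the search discards the noise feature, illustrating how a wrapper both shrinks the feature vector and raises the cost-function score.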
Affiliation(s)
- Amad Zafar
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Shaik Javeed Hussain
- Department of Electrical and Electronics, Global College of Engineering and Technology, Muscat 112, Oman
- Muhammad Umair Ali
- Department of Intelligent Mechatronics Engineering, Sejong University, Seoul 05006, Republic of Korea
- Seung Won Lee
- Department of Precision Medicine, School of Medicine, Sungkyunkwan University, Suwon 16419, Republic of Korea
21
Hashem HA, Abdulazeem Y, Labib LM, Elhosseini MA, Shehata M. An Integrated Machine Learning-Based Brain Computer Interface to Classify Diverse Limb Motor Tasks: Explainable Model. Sensors (Basel) 2023; 23:3171. [PMID: 36991884] [PMCID: PMC10053613] [DOI: 10.3390/s23063171]
Abstract
Terminal neurological conditions can affect millions of people worldwide and hinder them from doing their daily tasks and movements normally. Brain computer interface (BCI) is the best hope for many individuals with motor deficiencies. It will help many patients interact with the outside world and handle their daily tasks without assistance. Therefore, machine learning-based BCI systems have emerged as non-invasive techniques for reading out signals from the brain and interpreting them into commands to help those people to perform diverse limb motor tasks. This paper proposes an innovative and improved machine learning-based BCI system that analyzes EEG signals obtained from motor imagery to distinguish among various limb motor tasks based on BCI competition III dataset IVa. The proposed framework pipeline for EEG signal processing performs the following major steps. The first step uses a meta-heuristic optimization technique, called the whale optimization algorithm (WOA), to select the optimal features for discriminating between neural activity patterns. The pipeline then uses machine learning models such as LDA, k-NN, DT, RF, and LR to analyze the chosen features to enhance the precision of EEG signal analysis. The proposed BCI system, which merges the WOA as a feature selection method and the optimized k-NN classification model, demonstrated an overall accuracy of 98.6%, outperforming other machine learning models and previous techniques on the BCI competition III dataset IVa. Additionally, the EEG feature contribution in the ML classification model is reported using Explainable AI (XAI) tools, which provide insights into the individual contributions of the features in the predictions made by the model. By incorporating XAI techniques, the results of this study offer greater transparency and understanding of the relationship between the EEG features and the model's predictions. 
The proposed method shows potential for controlling diverse limb motor tasks, helping people with limb impairments and enhancing their quality of life.
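As a rough illustration of the wrapper-style feature selection described above, the sketch below scores candidate binary feature masks with a small k-NN classifier on toy data. A random-search loop stands in for the whale optimization algorithm, whose encircling/spiral update rules are omitted here; all data and parameters are illustrative, not from the paper.

```python
import math
import random

def knn_accuracy(train, test, mask, k=3):
    """Score a binary feature mask by k-NN accuracy on held-out data.
    train/test are lists of (feature_vector, label); `mask` selects the
    feature subset under evaluation, as in wrapper-style selection."""
    def dist(a, b):
        return math.sqrt(sum((x - y) ** 2 for x, y, m in zip(a, b, mask) if m))
    correct = 0
    for xq, yq in test:
        neigh = sorted(train, key=lambda t: dist(t[0], xq))[:k]
        votes = [y for _, y in neigh]
        if max(set(votes), key=votes.count) == yq:
            correct += 1
    return correct / len(test)

def random_search_selection(train, test, n_features, iters=50, seed=0):
    """Toy stand-in for the whale optimization algorithm: sample random
    masks and keep the best-scoring one. WOA would instead update a
    population of masks via moves toward the current best solution."""
    rng = random.Random(seed)
    all_on = [1] * n_features
    best_mask, best_acc = all_on, knn_accuracy(train, test, all_on)
    for _ in range(iters):
        mask = [rng.randint(0, 1) for _ in range(n_features)]
        if not any(mask):
            continue  # at least one feature must stay selected
        acc = knn_accuracy(train, test, mask)
        if acc > best_acc:
            best_mask, best_acc = mask, acc
    return best_mask, best_acc
```

On data where only the first feature is informative, the search discards the noise feature, which is the intended effect of wrapper selection.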
Affiliation(s)
- Hend A. Hashem
  - Computers and Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
  - Nile Higher Institute of Engineering and Technology, Mansoura University, Mansoura 35516, Egypt
- Yousry Abdulazeem
  - Computer Engineering Department, MISR Higher Institute for Engineering and Technology, Mansoura University, Mansoura 35516, Egypt
- Labib M. Labib
  - Computers and Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
- Mostafa A. Elhosseini
  - Computers and Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
  - College of Computer Science and Engineering, Taibah University, Yanbu 46421, Saudi Arabia
- Mohamed Shehata
  - Computers and Systems Engineering Department, Faculty of Engineering, Mansoura University, Mansoura 35516, Egypt
  - Computer Science and Engineering Department, Speed School of Engineering, University of Louisville, Louisville, KY 40292, USA
22
Tao T, Gao Y, Jia Y, Chen R, Li P, Xu G. A Multi-Channel Ensemble Method for Error-Related Potential Classification Using 2D EEG Images. Sensors (Basel) 2023; 23:2863. PMID: 36905065; PMCID: PMC10007400; DOI: 10.3390/s23052863.
Abstract
An error-related potential (ErrP) occurs when a person's expectation is not consistent with the actual outcome. Accurately detecting ErrPs when a human interacts with a BCI is key to improving these BCI systems. In this paper, we propose a multi-channel method for error-related potential detection using a 2D convolutional neural network, in which multiple channel classifiers are integrated to make the final decision. Specifically, every 1D EEG signal from the anterior cingulate cortex (ACC) is transformed into a 2D waveform image, which is then classified by a proposed attention-based convolutional neural network (AT-CNN). In addition, we propose a multi-channel ensemble approach to effectively integrate the decisions of the channel classifiers. This ensemble approach can learn the nonlinear relationship between each channel and the label, and it obtains 5.27% higher accuracy than a majority-voting ensemble. We conducted a new experiment and validated the proposed method on the Monitoring Error-Related Potential dataset and on our own dataset, obtaining an accuracy, sensitivity, and specificity of 86.46%, 72.46%, and 90.17%, respectively. The results show that the proposed AT-CNNs-2D can effectively improve the accuracy of ErrP classification and provide new ideas for the study of ErrP classification in brain-computer interfaces.
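The fusion step the abstract contrasts, combining per-channel decisions, can be sketched in a few lines. The weighted fusion below is a simplified, hand-weighted stand-in for the paper's learned (neural-network) combiner; all names and numbers are illustrative.

```python
from collections import Counter

def majority_vote(channel_preds):
    """Baseline fusion: each per-channel classifier casts one equal vote."""
    return Counter(channel_preds).most_common(1)[0][0]

def weighted_vote(channel_probs, weights):
    """Weighted fusion: channels whose classifiers are more reliable get
    larger weights. `channel_probs` holds each channel's estimated
    P(label == 1); the weights here are hypothetical, whereas the paper
    learns the channel-to-label relationship with a neural network."""
    score = sum(w * p for w, p in zip(weights, channel_probs)) / sum(weights)
    return 1 if score >= 0.5 else 0
```

A confident, highly weighted channel can overrule two weak channels, which equal-vote majority voting cannot express.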
Affiliation(s)
- Tangfei Tao
  - Key Laboratory of Education Ministry for Modern Design & Rotor-Bearing System, Xi’an Jiaotong University, Xi’an 710049, China
  - School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Yuxiang Gao
  - School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Yaguang Jia
  - School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Ruiquan Chen
  - School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049, China
- Ping Li
  - School of Foreign Studies, Xi’an Jiaotong University, Xi’an 710049, China
- Guanghua Xu
  - School of Mechanical Engineering, Xi’an Jiaotong University, Xi’an 710049, China
  - State Key Laboratory for Manufacturing Systems Engineering, Xi’an Jiaotong University, Xi’an 710049, China
23
Saibene A, Caglioni M, Corchs S, Gasparini F. EEG-Based BCIs on Motor Imagery Paradigm Using Wearable Technologies: A Systematic Review. Sensors (Basel) 2023; 23:2798. PMID: 36905004; PMCID: PMC10007053; DOI: 10.3390/s23052798.
Abstract
In recent decades, the automatic recognition and interpretation of brain waves acquired by electroencephalographic (EEG) technologies have undergone remarkable growth, leading to a consequent rapid development of brain-computer interfaces (BCIs). EEG-based BCIs are non-invasive systems that allow communication between a human being and an external device by interpreting brain activity directly. Thanks to advances in neurotechnologies, and especially in the field of wearable devices, BCIs are now also employed outside medical and clinical applications. Within this context, this paper proposes a systematic review of EEG-based BCIs, focusing on one of the most promising paradigms, motor imagery (MI), and limiting the analysis to applications that adopt wearable devices. This review aims to evaluate the maturity level of these systems, from both the technological and computational points of view. The selection of papers followed the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines, leading to 84 publications from the last ten years (2012 to 2022) being considered. Besides technological and computational aspects, this review also systematically lists experimental paradigms and available datasets in order to identify benchmarks and guidelines for the development of new applications and computational models.
Affiliation(s)
- Aurora Saibene
  - Department of Informatics, Systems and Communication, University of Milano-Bicocca, Viale Sarca 336, 20126 Milano, Italy
  - NeuroMI, Milan Center for Neuroscience, Piazza dell’Ateneo Nuovo 1, 20126 Milano, Italy
- Mirko Caglioni
  - Department of Informatics, Systems and Communication, University of Milano-Bicocca, Viale Sarca 336, 20126 Milano, Italy
- Silvia Corchs
  - NeuroMI, Milan Center for Neuroscience, Piazza dell’Ateneo Nuovo 1, 20126 Milano, Italy
  - Department of Theoretical and Applied Sciences, University of Insubria, Via J. H. Dunant 3, 21100 Varese, Italy
- Francesca Gasparini
  - Department of Informatics, Systems and Communication, University of Milano-Bicocca, Viale Sarca 336, 20126 Milano, Italy
  - NeuroMI, Milan Center for Neuroscience, Piazza dell’Ateneo Nuovo 1, 20126 Milano, Italy
24
Peketi S, Dhok SB. Machine Learning Enabled P300 Classifier for Autism Spectrum Disorder Using Adaptive Signal Decomposition. Brain Sci 2023; 13:315. PMID: 36831857; PMCID: PMC9954262; DOI: 10.3390/brainsci13020315.
Abstract
A deficiency in joint attention skills in autism spectrum disorder (ASD) hinders individuals from communicating effectively. The P300 electroencephalogram (EEG) signal-based brain-computer interface (BCI) helps these individuals in neurorehabilitation training to overcome this deficiency. Detection of the P300 signal is more challenging in ASD because it is noisier, has a lower amplitude, and has a higher latency than in other individuals. This paper presents a novel application of the variational mode decomposition (VMD) technique in a BCI system involving ASD subjects for P300 signal identification. The EEG signal is decomposed into five modes using VMD, and thirty linear and non-linear time- and frequency-domain features are extracted for each mode. Synthetic minority oversampling technique (SMOTE) data augmentation is performed to overcome the class imbalance problem in the chosen dataset. Then, a comparative analysis of three popular machine learning classifiers is performed for this application. VMD's fifth mode with a support vector machine (fine Gaussian kernel) classifier gave the best performance, with an accuracy, F1-score, and area under the curve of 91.12%, 91.18%, and 96.6%, respectively. These results compare favorably with other state-of-the-art methods.
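The class-balancing step can be sketched compactly: in SMOTE-style oversampling, each synthetic minority sample is a random interpolation between a real minority sample and one of its nearest minority neighbours. This is a simplified stand-in for the full SMOTE algorithm; the data and parameters below are illustrative.

```python
import random

def smote_like(minority, n_new, k=2, seed=0):
    """Generate n_new synthetic samples from a list of minority-class
    feature tuples by interpolating toward one of the k nearest
    minority neighbours (SMOTE-style, simplified)."""
    rng = random.Random(seed)

    def dist2(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    synthetic = []
    for _ in range(n_new):
        base = rng.choice(minority)
        neighbours = sorted((p for p in minority if p is not base),
                            key=lambda p: dist2(p, base))[:k]
        nb = rng.choice(neighbours)
        lam = rng.random()  # interpolation factor in [0, 1)
        synthetic.append(tuple(b + lam * (n - b) for b, n in zip(base, nb)))
    return synthetic
```

Because each synthetic point lies on a segment between two real minority points, it stays inside the minority class's convex hull rather than being arbitrary noise.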
25
Cattan GH, Quemy A. Case-Based and Quantum Classification for ERP-Based Brain-Computer Interfaces. Brain Sci 2023; 13:303. PMID: 36831846; PMCID: PMC9954540; DOI: 10.3390/brainsci13020303.
Abstract
Low transfer rates are a major bottleneck for brain-computer interfaces based on electroencephalography (EEG). This problem has led to the development of more robust and accurate classifiers. In this study, we investigated the performance of variational quantum, quantum-enhanced support vector, and hypergraph case-based reasoning classifiers in the binary classification of EEG data from a P300 experiment. On the one hand, quantum classification is a promising technology to reduce computational time and improve learning outcomes. On the other hand, case-based reasoning has excellent potential to simplify the preprocessing steps of EEG analysis. We found that the balanced training (prediction) accuracies of these three classifiers were 56.95% (51.83%), 83.17% (50.25%), and 71.10% (52.04%), respectively. In addition, case-based reasoning performed significantly worse with a simplified preprocessing pipeline (49.78%). These results demonstrate that all classifiers were able to learn from the data and that quantum classification of EEG data is implementable; however, more research is required to achieve greater prediction accuracy, because none of the classifiers were able to generalize from the data. This could be achieved by improving the configuration of the quantum classifiers (e.g., increasing the number of shots) and by increasing the number of trials for hypergraph case-based reasoning classifiers through transfer learning.
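Balanced accuracy, the metric reported above, is the unweighted mean of per-class recalls, which keeps a majority class (e.g., non-target epochs in P300 data) from inflating the score. A minimal sketch:

```python
def balanced_accuracy(y_true, y_pred):
    """Unweighted mean of per-class recalls. With heavily imbalanced
    labels, a classifier that always predicts the majority class scores
    only 1/n_classes here, unlike plain accuracy."""
    classes = sorted(set(y_true))
    recalls = []
    for c in classes:
        idx = [i for i, y in enumerate(y_true) if y == c]
        recalls.append(sum(1 for i in idx if y_pred[i] == c) / len(idx))
    return sum(recalls) / len(recalls)
```

For example, always predicting class 0 on an 80/20 split gives 80% plain accuracy but only 50% balanced accuracy, close to the ~50% prediction figures quoted above.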
Affiliation(s)
- Alexandre Quemy
  - Faculty of Computer Sciences, Poznań University of Technology, 60-965 Poznań, Poland
26
Liang X, Liu Y, Yu Y, Liu K, Liu Y, Zhou Z. Convolutional Neural Network with a Topographic Representation Module for EEG-Based Brain-Computer Interfaces. Brain Sci 2023; 13:268. PMID: 36831811; DOI: 10.3390/brainsci13020268.
Abstract
Convolutional neural networks (CNNs) have shown great potential in the field of brain-computer interfaces (BCIs) due to their ability to directly process raw electroencephalogram (EEG) signals without artificial feature extraction, and some CNNs have achieved better classification accuracy than traditional methods. However, raw EEG signals are usually represented as a two-dimensional (2-D) matrix of channels and time points, which ignores the spatial topological information of the electrodes. Our goal is to enable a CNN that takes raw EEG signals as input to learn spatial topological features and improve its classification performance while essentially maintaining its original structure. We propose an EEG topographic representation module (TRM) consisting of (1) a mapping block from the raw EEG signals to a 3-D topographic map and (2) a convolution block from the topographic map to an output of the same size as the input. According to the size of the convolutional kernel used in the convolution block, we design two types of TRM, namely TRM-(5,5) and TRM-(3,3). We embed the two TRM types into three widely used CNNs (ShallowConvNet, DeepConvNet and EEGNet) and test them on two publicly available datasets (the Emergency Braking During Simulated Driving Dataset (EBDSDD) and the High Gamma Dataset (HGD)). The results show that the classification accuracies of all three CNNs improve on both datasets when the TRMs are used. With TRM-(5,5), the average classification accuracies of DeepConvNet, EEGNet and ShallowConvNet improve by 6.54%, 1.72% and 2.07% on the EBDSDD and by 6.05%, 3.02% and 5.14% on the HGD, respectively; with TRM-(3,3), they improve by 7.76%, 1.71% and 2.17% on the EBDSDD and by 7.61%, 5.06% and 6.28% on the HGD, respectively. These results indicate that TRMs have the capability to mine spatial topological EEG information. More importantly, since the output of a TRM has the same size as its input, CNNs that take raw EEG signals as input can use the module without changing their original structures.
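The mapping block of such a module can be illustrated with a toy montage: each electrode's value at one time point is written into its scalp position in a 2-D grid, and stacking one grid per time point yields a 3-D topographic map. The 3x3 montage below is hypothetical, not the paper's layout.

```python
# Hypothetical 3x3 montage: electrode name -> (row, col) scalp grid position.
MONTAGE = {
    "F3": (0, 0), "Fz": (0, 1), "F4": (0, 2),
    "C3": (1, 0), "Cz": (1, 1), "C4": (1, 2),
    "P3": (2, 0), "Pz": (2, 1), "P4": (2, 2),
}

def to_topographic(sample, rows=3, cols=3, fill=0.0):
    """Map one time point of raw EEG (channel name -> value) onto a 2-D
    grid mirroring electrode positions on the scalp. Grid cells without
    an electrode are filled with `fill` (zero by default)."""
    grid = [[fill] * cols for _ in range(rows)]
    for ch, value in sample.items():
        r, c = MONTAGE[ch]
        grid[r][c] = value
    return grid
```

A plain channels x time matrix places C3 and C4 on adjacent rows or far-apart rows arbitrarily; the grid form preserves their left/right scalp relationship for the convolution that follows.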
27
Arı E, Taçgın E. Input Shape Effect on Classification Performance of Raw EEG Motor Imagery Signals with Convolutional Neural Networks for Use in Brain-Computer Interfaces. Brain Sci 2023; 13:240. PMID: 36831784; PMCID: PMC9954790; DOI: 10.3390/brainsci13020240.
Abstract
EEG signals are interpreted, analyzed and classified by many researchers for use in brain-computer interfaces. Although there are many different EEG signal acquisition methods, one of the most interesting is motor imagery signals. Many different signal processing methods, machine learning and deep learning models have been developed for the classification of motor imagery signals, and among these, convolutional neural network (CNN) models generally achieve better results than other models. Because the size and shape of the data are important for training CNN models and discovering the right relationships, researchers have designed and experimented with many different input shape structures. However, no study in the literature has evaluated the effect of different input shapes on model performance and accuracy. In this study, we investigated these effects in the classification of EEG motor imagery signals. In addition, signal preprocessing methods, which take a long time before classification, were not used; rather, two CNN models were developed for training and classification using raw data. Two different datasets, BCI Competition IV 2A and 2B, were used in the classification processes. For different input shapes, classification accuracies of 53.03-89.29% with epoch times of 2-23 s were obtained for the 2A dataset, and classification accuracies of 64.84-84.94% with epoch times of 4-10 s were obtained for the 2B dataset. This study showed that the input shape has a significant effect on classification performance; when the correct input shape is selected and the correct CNN architecture is developed, the CNN architecture can perform feature extraction and classification well without any signal preprocessing.
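The kind of input-shape variation studied here can be sketched as re-packing a channels x time-points trial into alternative 2-D layouts with the same element count; the shapes below are illustrative, not the paper's exact configurations.

```python
def reshape_trial(trial, shape):
    """Flatten a channels x time-points EEG trial (list of per-channel
    lists) and re-pack it into the requested (rows, cols) CNN input
    shape. The total element count must match, which is the basic
    constraint when comparing candidate input shapes."""
    flat = [v for channel in trial for v in channel]
    h, w = shape
    if h * w != len(flat):
        raise ValueError("shape %r incompatible with %d values" % (shape, len(flat)))
    return [flat[r * w:(r + 1) * w] for r in range(h)]
```

The same raw samples arranged as (channels, time) versus one long row give a convolution kernel very different local neighbourhoods, which is why the shape choice affects what the CNN can learn.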
Affiliation(s)
- Emre Arı (corresponding author)
  - Department of Mechanical Engineering, Faculty of Engineering, Marmara University, Istanbul 34840, Turkey
  - Department of Mechanical Engineering, Faculty of Engineering, Dicle University, Diyarbakır 21280, Turkey
- Ertuğrul Taçgın
  - Department of Mechanical Engineering, Faculty of Engineering, Doğuş University, Istanbul 34775, Turkey
28
Ron-Angevin R, Fernández-Rodríguez Á, Dupont C, Maigrot J, Meunier J, Tavard H, Lespinet-Najib V, André JM. Comparison of Two Paradigms Based on Stimulation with Images in a Spelling Brain-Computer Interface. Sensors (Basel) 2023; 23:1304. PMID: 36772343; PMCID: PMC9920351; DOI: 10.3390/s23031304.
Abstract
A P300-based speller can be used to control a home automation system via brain activity. Evaluation of the visual stimuli used in a P300-based speller is a common topic in the field of brain-computer interfaces (BCIs). The aim of the present work is to compare, using the usability approach, two types of stimuli that have provided high performance in previous studies. Twelve participants controlled a BCI under two conditions, which varied in terms of the type of stimulus employed: a red famous face surrounded by a white rectangle (RFW) and a range of neutral pictures (NPs). The usability approach included variables related to effectiveness (accuracy and information transfer rate), efficiency (stress and fatigue), and satisfaction (pleasantness and System Usability Scale and Affect Grid questionnaires). The results indicated that there were no significant differences in effectiveness, but the system that used NPs was reported as significantly more pleasant. Hence, since satisfaction variables should also be considered in systems that potential users are likely to employ regularly, the use of different NPs may be a more suitable option than the use of a single RFW for the development of a home automation system based on a visual P300-based speller.
Affiliation(s)
- Ricardo Ron-Angevin
  - Departamento de Tecnología Electrónica, Universidad de Málaga, 29071 Malaga, Spain
- Jean-Marc André
  - Laboratoire IMS, CNRS UMR 5218, Cognitive Team, Bordeaux INP-ENSC, 33400 Talence, France
29
Ma Z, Wang K, Xu M, Yi W, Xu F, Ming D. Transformed common spatial pattern for motor imagery-based brain-computer interfaces. Front Neurosci 2023; 17:1116721. PMID: 36960172; PMCID: PMC10028145; DOI: 10.3389/fnins.2023.1116721.
Abstract
Objective: The motor imagery (MI)-based brain-computer interface (BCI) is one of the most popular BCI paradigms. Common spatial pattern (CSP) is an effective algorithm for decoding MI-related electroencephalogram (EEG) patterns, but it depends highly on the selection of EEG frequency bands. To address this problem, previous researchers often used a filter bank to decompose EEG signals into multiple frequency bands before applying the traditional CSP. Approach: This study proposed a novel method, transformed common spatial pattern (tCSP), to extract discriminant EEG features from multiple frequency bands after, rather than before, CSP. To verify its effectiveness, we tested tCSP on a dataset collected by our team and on a public dataset from BCI competition III. We also performed an online evaluation of the proposed method. Main results: For the dataset collected by our team, the classification accuracy of tCSP was significantly higher than that of CSP by about 8% and than that of filter bank CSP (FBCSP) by about 4.5%. The combination of tCSP and CSP further improved the system performance, with an average accuracy of 84.77% and a peak accuracy of 100%. For dataset IVa in BCI competition III, the combination method achieved an average accuracy of 94.55%, the best among all the presented CSP-based methods. In the online evaluation, tCSP and the combination method achieved average accuracies of 80.00% and 84.00%, respectively. Significance: The results demonstrate that, for MI-based BCIs, frequency band selection after CSP is better than before it. This study provides a promising approach for decoding MI EEG patterns, which is significant for the development of BCIs.
Affiliation(s)
- Zhen Ma
  - School of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, China
- Kun Wang
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Minpeng Xu (corresponding author)
  - School of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, China
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
  - International School for Optoelectronic Engineering, Qilu University of Technology (Shandong Academy of Sciences), Jinan, China
- Weibo Yi
  - Beijing Machine and Equipment Institute, Beijing, China
- Fangzhou Xu
  - International School for Optoelectronic Engineering, Qilu University of Technology (Shandong Academy of Sciences), Jinan, China
- Dong Ming (corresponding author)
  - School of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, China
  - Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
30
Syrov N, Yakovlev L, Miroshnikov A, Kaplan A. Beyond passive observation: feedback anticipation and observation activate the mirror system in virtual finger movement control via P300-BCI. Front Hum Neurosci 2023; 17:1180056. PMID: 37213933; PMCID: PMC10192585; DOI: 10.3389/fnhum.2023.1180056.
Abstract
Action observation (AO) is widely used as a post-stroke therapy to activate sensorimotor circuits through the mirror neuron system. However, passive observation is often considered less effective and less interactive than goal-directed movement observation, leading to the suggestion that observation of goal-directed actions may have stronger therapeutic potential, as goal-directed AO has been shown to activate mechanisms for monitoring action errors. Some studies have also suggested the use of AO as a form of brain-computer interface (BCI) feedback. In this study, we investigated the potential of observing virtual hand movements, presented as feedback within a P300-based BCI, to activate the mirror neuron system. We also explored the role of feedback anticipation and estimation mechanisms during movement observation. Twenty healthy subjects participated in the study. We analyzed event-related desynchronization and synchronization (ERD/S) of sensorimotor EEG rhythms and error-related potentials (ErrPs) during observation of virtual hand finger flexion presented as feedback in the P300-BCI loop, and compared the dynamics of ERD/S and ErrPs between observation of correct feedback and of errors. We also analyzed these EEG markers during passive AO under two conditions: when subjects anticipated the action demonstration and when the action was unexpected. A pre-action mu-ERD was found both before passive AO and during action anticipation within the BCI loop. Furthermore, a significant increase in beta-ERS was found during AO within incorrect BCI feedback trials. We suggest that BCI feedback may amplify the passive-AO effect, as it engages feedback anticipation and estimation mechanisms as well as movement error monitoring simultaneously. The results of this study provide insights into the potential of P300-BCI with AO feedback as a tool for neurorehabilitation.
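ERD/ERS, the quantity analyzed above, is conventionally expressed as the percent change of band power relative to a pre-event baseline: negative values indicate desynchronization (ERD), positive values synchronization (ERS). A minimal sketch, with illustrative power values:

```python
def erd_ers(power, baseline_power):
    """Percent band-power change relative to a pre-event baseline.
    `power` is a sequence of band-power values over post-event time
    windows; negative output = ERD, positive = ERS."""
    return [(p - baseline_power) / baseline_power * 100.0 for p in power]
```

For example, mu-band power dropping to half its baseline during movement observation reads as -50% (ERD), while a post-event beta rebound above baseline reads as positive (ERS).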
Affiliation(s)
- Nikolay Syrov (corresponding author)
  - V. Zelman Center for Neurobiology and Brain Rehabilitation, Skolkovo Institute of Science and Technology, Moscow, Russia
  - Baltic Center for Neurotechnology and Artificial Intelligence, Immanuel Kant Baltic Federal University, Kaliningrad, Russia
- Lev Yakovlev
  - V. Zelman Center for Neurobiology and Brain Rehabilitation, Skolkovo Institute of Science and Technology, Moscow, Russia
  - Baltic Center for Neurotechnology and Artificial Intelligence, Immanuel Kant Baltic Federal University, Kaliningrad, Russia
- Andrei Miroshnikov
  - Baltic Center for Neurotechnology and Artificial Intelligence, Immanuel Kant Baltic Federal University, Kaliningrad, Russia
- Alexander Kaplan
  - V. Zelman Center for Neurobiology and Brain Rehabilitation, Skolkovo Institute of Science and Technology, Moscow, Russia
  - Baltic Center for Neurotechnology and Artificial Intelligence, Immanuel Kant Baltic Federal University, Kaliningrad, Russia
  - Department of Human and Animal Physiology, Faculty of Biology, Lomonosov Moscow State University, Moscow, Russia
31
Ma Y, Gong A, Nan W, Ding P, Wang F, Fu Y. Personalized Brain-Computer Interface and Its Applications. J Pers Med 2022; 13:46. PMID: 36675707; PMCID: PMC9861730; DOI: 10.3390/jpm13010046.
Abstract
Brain-computer interfaces (BCIs) are a new technology that subverts traditional human-computer interaction: the control signal source comes directly from the user's brain. When a general BCI is used in practical applications, it is difficult for it to meet the needs of different individuals, because users differ in their physiological and mental states, sensations, perceptions, imageries, cognitive thinking activities, and brain structures and functions. For this reason, it is necessary to customize personalized BCIs for specific users. So far, few studies have elaborated on the key scientific and technical issues involved in personalized BCIs. In this study, we focus on personalized BCIs: we give a definition and detail their design, development, evaluation methods, and applications. Finally, the challenges and future directions of personalized BCIs are discussed. It is expected that this study will provide useful ideas for innovative studies and practical applications of personalized BCIs.
Affiliation(s)
- Yixin Ma
  - Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
  - Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming 650500, China
- Anmin Gong
  - School of Information Engineering, Chinese People’s Armed Police Force Engineering University, Xian 710086, China
- Wenya Nan
  - Department of Psychology, College of Education, Shanghai Normal University, Shanghai 200234, China
- Peng Ding
  - Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
  - Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming 650500, China
- Fan Wang
  - Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
  - Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming 650500, China
- Yunfa Fu
  - Faculty of Information Engineering and Automation, Kunming University of Science and Technology, Kunming 650500, China
  - Brain Cognition and Brain-Computer Intelligence Integration Group, Kunming University of Science and Technology, Kunming 650500, China
32
Fernández-Rodríguez Á, Darves-Bornoz A, Velasco-Álvarez F, Ron-Angevin R. Effect of Stimulus Size in a Visual ERP-Based BCI under RSVP. Sensors (Basel) 2022; 22:9505. PMID: 36502205; PMCID: PMC9741214; DOI: 10.3390/s22239505.
Abstract
Rapid serial visual presentation (RSVP) is currently one of the most suitable paradigms for use with a visual brain-computer interface based on event-related potentials (ERP-BCI) by patients who lack ocular motility. However, gaze-independent paradigms have not been studied as closely as gaze-dependent ones, and variables such as the size of the presented stimuli have not yet been explored under RSVP. Hence, the aim of the present work is to assess whether stimulus size has an impact on ERP-BCI performance under the RSVP paradigm. Twelve participants tested the ERP-BCI under RSVP using three different stimulus sizes: small (0.1 × 0.1 cm), medium (1.9 × 1.8 cm), and large (20.05 × 19.9 cm), viewed at 60 cm. The results showed significant differences in accuracy between the conditions: the larger the stimulus, the better the accuracy obtained. It was also shown that these differences were not due to incorrect perception of the stimuli, since there was no effect of size in a perceptual discrimination task. The present work therefore shows that stimulus size has an impact on the performance of an ERP-BCI under RSVP. This finding should be considered by future ERP-BCI proposals aimed at users who need gaze-independent systems.
33
Colucci A, Vermehren M, Cavallo A, Angerhöfer C, Peekhaus N, Zollo L, Kim WS, Paik NJ, Soekadar SR. Brain-Computer Interface-Controlled Exoskeletons in Clinical Neurorehabilitation: Ready or Not? Neurorehabil Neural Repair 2022; 36:747-756. PMID: 36426541; PMCID: PMC9720703; DOI: 10.1177/15459683221138751.
Abstract
The development of brain-computer interface-controlled exoskeletons promises new treatment strategies for neurorehabilitation after stroke or spinal cord injury. By converting brain/neural activity into control signals for wearable actuators, brain/neural exoskeletons (B/NEs) enable the execution of movements despite impaired motor function. Beyond their use as assistive devices, it was shown that, upon repeated use over several weeks, B/NEs can trigger motor recovery, even in chronic paralysis. Recent development of lightweight robotic actuators, comfortable and portable real-world brain recordings, and reliable brain/neural control strategies has paved the way for B/NEs to enter clinical care. Although B/NEs are now technically ready for broader clinical use, their promotion will critically depend on early adopters, for example, research-oriented physiotherapists or clinicians who are open to innovation. Data collected by early adopters will further elucidate the underlying mechanisms of B/NE-triggered motor recovery and play a key role in increasing the efficacy of personalized treatment strategies. Moreover, early adopters will provide the indispensable feedback to manufacturers that is necessary to further improve the robustness, applicability, and adoption of B/NEs into existing therapy plans.
Affiliation(s)
- Annalisa Colucci
- Clinical Neurotechnology Laboratory, Neurowissenschaftliches Forschungszentrum (NWFZ), Department of Psychiatry and Neurosciences, Charité Campus Mitte (CCM), Charité – Universitätsmedizin Berlin, Charitéplatz 1, Berlin, Germany
- Mareike Vermehren
- Clinical Neurotechnology Laboratory, Neurowissenschaftliches Forschungszentrum (NWFZ), Department of Psychiatry and Neurosciences, Charité Campus Mitte (CCM), Charité – Universitätsmedizin Berlin, Charitéplatz 1, Berlin, Germany
- Alessia Cavallo
- Clinical Neurotechnology Laboratory, Neurowissenschaftliches Forschungszentrum (NWFZ), Department of Psychiatry and Neurosciences, Charité Campus Mitte (CCM), Charité – Universitätsmedizin Berlin, Charitéplatz 1, Berlin, Germany
- Cornelius Angerhöfer
- Clinical Neurotechnology Laboratory, Neurowissenschaftliches Forschungszentrum (NWFZ), Department of Psychiatry and Neurosciences, Charité Campus Mitte (CCM), Charité – Universitätsmedizin Berlin, Charitéplatz 1, Berlin, Germany
- Niels Peekhaus
- Clinical Neurotechnology Laboratory, Neurowissenschaftliches Forschungszentrum (NWFZ), Department of Psychiatry and Neurosciences, Charité Campus Mitte (CCM), Charité – Universitätsmedizin Berlin, Charitéplatz 1, Berlin, Germany
- Loredana Zollo
- Unit of Advanced Robotics and Human-Centred Technologies (CREO Lab), University Campus Bio-Medico of Rome, Roma RM, Italy
- Won-Seok Kim
- Department of Rehabilitation Medicine, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Bundang-gu, Seongnam-si, Gyeonggi-do, Republic of Korea
- Nam-Jong Paik
- Department of Rehabilitation Medicine, Seoul National University College of Medicine, Seoul National University Bundang Hospital, Bundang-gu, Seongnam-si, Gyeonggi-do, Republic of Korea
- Surjo R. Soekadar
- Clinical Neurotechnology Laboratory, Neurowissenschaftliches Forschungszentrum (NWFZ), Department of Psychiatry and Neurosciences, Charité Campus Mitte (CCM), Charité – Universitätsmedizin Berlin, Charitéplatz 1, Berlin, Germany. Correspondence: Surjo R. Soekadar, Charité – Universitätsmedizin Berlin, Charitéplatz 1, 10117 Berlin, Germany.
34.
Emsawas T, Morita T, Kimura T, Fukui KI, Numao M. Multi-Kernel Temporal and Spatial Convolution for EEG-Based Emotion Classification. Sensors (Basel) 2022;22:8250. PMID: 36365948; PMCID: PMC9654218; DOI: 10.3390/s22218250.
Abstract
Deep learning using an end-to-end convolutional neural network (ConvNet) has been applied to several electroencephalography (EEG)-based brain-computer interface tasks to extract feature maps and classify the target output. However, EEG analysis remains challenging, since it requires consideration of various architectural design components that influence the representational ability of the extracted features. This study proposes an EEG-based emotion classification model called the multi-kernel temporal and spatial convolution network (MultiT-S ConvNet). The model uses multi-scale kernels to learn various time resolutions, and separable convolutions to find related spatial patterns. In addition, we enhanced both the temporal and spatial filters with a lightweight gating mechanism. To validate the performance and classification accuracy of MultiT-S ConvNet, we conducted subject-dependent and subject-independent experiments on the EEG-based emotion datasets DEAP and SEED. MultiT-S ConvNet outperforms existing methods, achieving higher accuracy with fewer trainable parameters. Moreover, the proposed multi-scale module in temporal filtering enables the extraction of a wide range of EEG representations, covering short- to long-wavelength components. This module could be implemented in any EEG-based convolutional network, where it could potentially improve the model's learning capacity.
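The multi-scale temporal idea can be illustrated with plain NumPy: convolve the same trace with kernels of several lengths and stack the resulting feature maps. This is only a sketch of the concept, using simple moving-average kernels and illustrative kernel sizes, not the learned filters or the actual MultiT-S ConvNet architecture:

```python
import numpy as np

def multi_kernel_temporal(eeg: np.ndarray, kernel_sizes=(5, 25, 75)) -> np.ndarray:
    """Convolve one EEG channel with temporal kernels of several lengths and
    stack the same-length outputs: short kernels track fast fluctuations,
    long kernels track slow trends."""
    maps = [np.convolve(eeg, np.ones(k) / k, mode="same") for k in kernel_sizes]
    return np.stack(maps)  # shape: (n_kernels, n_samples)

rng = np.random.default_rng(0)
features = multi_kernel_temporal(rng.standard_normal(256))
print(features.shape)  # (3, 256)
```

In a learned model each kernel's weights would be trained rather than fixed, but the shape of the computation is the same.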
Affiliation(s)
- Taweesak Emsawas
- Graduate School of Information Science and Technology, Osaka University, Osaka 565-0871, Japan
- Takashi Morita
- The Institute of Scientific and Industrial Research (ISIR), Osaka University, Osaka 567-0047, Japan
- Tsukasa Kimura
- The Institute of Scientific and Industrial Research (ISIR), Osaka University, Osaka 567-0047, Japan
- Ken-ichi Fukui
- The Institute of Scientific and Industrial Research (ISIR), Osaka University, Osaka 567-0047, Japan
- Masayuki Numao
- The Institute of Scientific and Industrial Research (ISIR), Osaka University, Osaka 567-0047, Japan
35.
Du Y, Liu J. IENet: a robust convolutional neural network for EEG based brain-computer interfaces. J Neural Eng 2022;19. PMID: 35605585; DOI: 10.1088/1741-2552/ac7257.
Abstract
OBJECTIVE Brain-computer interfaces (BCIs) based on electroencephalogram (EEG) are developing into novel application areas with more complex scenarios, which puts forward higher requirements for the robustness of EEG signal processing algorithms. Deep learning can automatically extract discriminative features and potential dependencies via deep structures, demonstrating strong analytical capabilities in numerous domains such as computer vision (CV) and natural language processing (NLP). Our main goal in this paper is to make full use of deep learning to design a robust algorithm capable of analyzing EEG across BCI paradigms. APPROACH Inspired by the InceptionV4 and InceptionTime architectures, we introduce a neural network ensemble named InceptionEEG-Net (IENet), in which multi-scale convolutional layers and convolutions of length 1 enable the model to extract rich, high-dimensional features with limited parameters. In addition, we propose the average receptive field gain for convolutional neural networks (CNNs), which optimizes IENet to detect long patterns at a smaller cost. We compare IENet with current state-of-the-art methods across five EEG-BCI paradigms: steady-state visual evoked potentials, epilepsy EEG, overt attention P300 visual-evoked potentials, covert attention P300 visual-evoked potentials, and movement-related cortical potentials. MAIN RESULTS The classification results show that the generalizability of IENet is on par with state-of-the-art paradigm-agnostic models on the test datasets. Furthermore, a feature-explainability analysis of IENet illustrates its capability to extract neurophysiologically interpretable features for different BCI paradigms, ensuring the reliability of the algorithm. Significance. Our results show that IENet can generalize to different BCI paradigms, and that it is essential for deep CNNs to increase the receptive field size using the average receptive field gain.
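The receptive-field bookkeeping behind that argument is standard: each stacked convolution grows the receptive field by (k − 1)·j, where j is the cumulative stride. A small helper makes the point (the layer configurations below are hypothetical, not IENet's actual architecture):

```python
def receptive_field(layers):
    """Receptive field (in input samples) of stacked 1-D convolutions.

    `layers` is a list of (kernel_size, stride) pairs; the recurrence is
    r <- r + (k - 1) * j and j <- j * s, starting from r = j = 1 at the input."""
    r, j = 1, 1
    for k, s in layers:
        r += (k - 1) * j  # each layer widens the field by (k-1) strides
        j *= s            # striding dilates the effective step in input samples
    return r

# Three small kernels with striding already see 17 input samples:
print(receptive_field([(5, 1), (5, 2), (5, 2)]))  # 17
```

This is why strided or multi-scale designs can detect long EEG patterns without resorting to very long kernels.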
Affiliation(s)
- Yipeng Du
- SCCE, University of Science and Technology Beijing, 30 Xueyuan Road, Haidian District, Beijing 100083, P. R. China
- Jian Liu
- SCCE, University of Science and Technology Beijing, 30 Xueyuan Road, Haidian District, Beijing 100083, P. R. China
36.
Värbu K, Muhammad N, Muhammad Y. Past, Present, and Future of EEG-Based BCI Applications. Sensors (Basel) 2022;22:3331. PMID: 35591021; PMCID: PMC9101004; DOI: 10.3390/s22093331.
Abstract
An electroencephalography (EEG)-based brain-computer interface (BCI) is a system that provides a pathway between the brain and external devices by interpreting EEG. EEG-based BCI applications were initially developed for medical purposes, with the aim of facilitating the return of patients to normal life. Beyond this initial aim, EEG-based BCI applications have also gained increasing significance in the non-medical domain, improving the lives of healthy people, for instance by making work more efficient and collaborative and by supporting self-development. The objective of this review is to give a systematic overview of the literature on EEG-based BCI applications from 2009 to 2019. The systematic literature review was prepared based on three databases: PubMed, Web of Science, and Scopus, and was conducted following the PRISMA model. In this review, 202 publications were selected based on specific eligibility criteria. The distribution of the research between the medical and non-medical domains has been analyzed and further categorized into fields of research within the reviewed domains. The equipment used for gathering EEG data and the signal processing methods have also been reviewed. Additionally, current challenges in the field and possibilities for the future have been analyzed.
Affiliation(s)
- Kaido Värbu
- Institute of Computer Science, University of Tartu, 51009 Tartu, Estonia
- Naveed Muhammad
- Institute of Computer Science, University of Tartu, 51009 Tartu, Estonia
- Yar Muhammad
- Department of Computing & Games, School of Computing, Engineering & Digital Technologies, Teesside University, Middlesbrough TS1 3BX, UK
37.
Phadikar S, Sinha N, Ghosh R, Ghaderpour E. Automatic Muscle Artifacts Identification and Removal from Single-Channel EEG Using Wavelet Transform with Meta-Heuristically Optimized Non-Local Means Filter. Sensors (Basel) 2022;22:2948. PMID: 35458940; DOI: 10.3390/s22082948.
Abstract
Electroencephalogram (EEG) signals are easily contaminated by muscle artifacts, which may lead to wrong interpretation in brain–computer interface (BCI) systems as well as in various medical diagnoses. The main objective of this paper is to remove muscle artifacts without distorting the information contained in the EEG. A novel multi-stage EEG denoising method is proposed, for the first time, in which wavelet packet decomposition (WPD) is combined with a modified non-local means (NLM) algorithm. First, the artifactual EEG signal is identified through a pre-trained classifier. Next, the identified EEG signal is decomposed into wavelet coefficients and corrected through a modified NLM filter. Finally, the artifact-free EEG is reconstructed from the corrected wavelet coefficients through inverse WPD. To optimize the filter parameters, two meta-heuristic algorithms are used in this paper for the first time. The proposed system is first validated on simulated EEG data and then tested on real EEG data, on which it achieved an average mutual information (MI) of 2.9684 ± 0.7045. The results reveal that the proposed system outperforms recently developed denoising techniques, with a higher average MI indicating better quality of reconstruction; the approach is also fully automatic.
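The core of the NLM step can be sketched in a few lines: each sample is replaced by a weighted average of samples whose surrounding patches look similar, so repetitive EEG structure reinforces itself while uncorrelated noise averages out. A simplified 1-D version, applied directly to a noisy trace here; the paper applies a modified NLM to wavelet-packet coefficients, with the patch, search, and bandwidth parameters tuned meta-heuristically rather than fixed as below:

```python
import numpy as np

def nlm_1d(signal: np.ndarray, patch: int = 5, search: int = 21, h: float = 0.5) -> np.ndarray:
    """Simplified 1-D non-local means filter."""
    n, half_p, half_s = len(signal), patch // 2, search // 2
    padded = np.pad(signal, half_p, mode="reflect")
    out = np.empty(n)
    for i in range(n):
        p_i = padded[i:i + patch]                     # patch centred on sample i
        js = list(range(max(0, i - half_s), min(n, i + half_s + 1)))
        d2 = np.array([np.mean((p_i - padded[j:j + patch]) ** 2) for j in js])
        w = np.exp(-d2 / h ** 2)                      # similar patches get large weights
        out[i] = np.dot(w, signal[js]) / w.sum()
    return out

rng = np.random.default_rng(0)
t = np.linspace(0, 1, 200, endpoint=False)
clean = np.sin(2 * np.pi * 5 * t)
noisy = clean + 0.3 * rng.standard_normal(t.size)
denoised = nlm_1d(noisy)
```

The denoised trace is visibly smoother than the input while the 5 Hz oscillation is preserved, which is the property that makes NLM attractive for artifact removal.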
38.
Kim S, Shin DY, Kim T, Lee S, Hyun JK, Park SM. Enhanced Recognition of Amputated Wrist and Hand Movements by Deep Learning Method Using Multimodal Fusion of Electromyography and Electroencephalography. Sensors (Basel) 2022;22:680. PMID: 35062641; PMCID: PMC8778369; DOI: 10.3390/s22020680.
Abstract
Motion classification can be performed using biometric signals recorded by electroencephalography (EEG) or electromyography (EMG) with noninvasive surface electrodes for the control of prosthetic arms. However, current single-modal EEG- and EMG-based motion classification techniques are limited owing to the complexity and noise of EEG signals, the electrode placement bias, and the low resolution of EMG signals. We herein propose a novel system of two-dimensional (2D) input image feature multimodal fusion based on an EEG/EMG-signal transfer learning (TL) paradigm for the detection of hand movements in transforearm amputees. A feature extraction method in the frequency domain of the EEG and EMG signals was adopted to establish a 2D image. The input images were used to train a model based on a convolutional neural network algorithm and TL, which requires 2D images as input data. For data acquisition, five transforearm amputees and nine healthy controls were recruited. Compared with the conventional single-modal EEG-trained models, the proposed multimodal fusion method significantly improved classification accuracy in both the control and patient groups. When the two signals were combined and used in the pretrained model for EEG TL, the classification accuracy increased by 4.18-4.35% in the control group and by 2.51-3.00% in the patient group.
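The key preprocessing step described here, turning a 1-D signal into a 2-D frequency-domain image for a CNN, can be sketched generically as an STFT magnitude binned to a fixed input size. This is a generic illustration only: the paper's actual feature extraction, image layout, and EEG/EMG fusion scheme are not reproduced, and the window and image sizes below are arbitrary:

```python
import numpy as np

def spectrogram_image(signal, fs, win=64, hop=32, size=(32, 32)):
    """Turn a 1-D EEG/EMG trace into a small 2-D time-frequency image:
    a plain STFT magnitude, coarsely binned to a fixed CNN input size."""
    frames = [signal[i:i + win] * np.hanning(win)
              for i in range(0, len(signal) - win + 1, hop)]
    spec = np.abs(np.fft.rfft(np.array(frames), axis=1)).T  # (freq, time)
    # crude nearest-neighbor resize to the desired input size
    fi = np.linspace(0, spec.shape[0] - 1, size[0]).astype(int)
    ti = np.linspace(0, spec.shape[1] - 1, size[1]).astype(int)
    return spec[np.ix_(fi, ti)]

rng = np.random.default_rng(0)
img = spectrogram_image(rng.standard_normal(1024), fs=250)
print(img.shape)  # (32, 32)
```

Images of this form can be fed to any image-pretrained CNN, which is what makes the transfer-learning approach workable.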
Affiliation(s)
- Sehyeon Kim
- Department of Convergence IT Engineering, Pohang University of Science and Technology, Pohang 37673, Korea
- Dae Youp Shin
- Department of Rehabilitation Medicine, College of Medicine, Dankook University, Cheonan 31116, Korea
- Taekyung Kim
- Department of Medical Device Management and Research, SAIHST, Sungkyunkwan University, Seoul 03063, Korea
- Sangsook Lee
- Department of Rehabilitation Medicine, Daejeon Hospital, Daejeon 34383, Korea
- Jung Keun Hyun
- Department of Rehabilitation Medicine, College of Medicine, Dankook University, Cheonan 31116, Korea
- Department of Nanobiomedical Science & BK21 NBM Global Research Center for Regenerative Medicine, Dankook University, Cheonan 31116, Korea
- Institute of Tissue Regeneration Engineering (ITREN), Dankook University, Cheonan 31116, Korea
- Correspondence: J.K.H. and S.-M.P.; Tel.: +82-10-2293-3415 (J.K.H.), +82-10-7208-7740 (S.-M.P.)
- Sung-Min Park
- Department of Convergence IT Engineering, Pohang University of Science and Technology, Pohang 37673, Korea
- Department of Electrical Engineering, Pohang University of Science and Technology, Pohang 37673, Korea
- Department of Mechanical Engineering, Pohang University of Science and Technology, Pohang 37673, Korea
39.
Vekety B, Logemann A, Takacs ZK. Mindfulness Practice with a Brain-Sensing Device Improved Cognitive Functioning of Elementary School Children: An Exploratory Pilot Study. Brain Sci 2022;12:103. PMID: 35053846; PMCID: PMC8774020; DOI: 10.3390/brainsci12010103.
Abstract
This is the first pilot study with children that has assessed the effects of a brain-computer interface-assisted mindfulness program on neural mechanisms and associated cognitive performance. The participants were 31 children aged 9-10 years who were randomly assigned to either an eight-session mindfulness training with EEG-feedback or a passive control group. Mindfulness-related brain activity was measured during the training, while cognitive tests and resting-state brain activity were measured pre- and post-test. The within-group measurement of calm/focused brain states and mind-wandering revealed a significant linear change. Significant positive changes were detected in children's inhibition, information processing, and resting-state brain activity (alpha, theta) compared to the control group. Elevated baseline alpha activity was associated with less reactivity in reaction time on a cognitive test. Our exploratory findings show some preliminary support for a potential executive function-enhancing effect of mindfulness supplemented with EEG-feedback, which may have some important implications for children's self-regulated learning and academic achievement.
Affiliation(s)
- Boglarka Vekety
- Doctoral School of Education, Faculty of Education and Psychology, ELTE Eötvös Loránd University, 1075 Budapest, Hungary
- MTA-ELTE Lendület Adaptation Research Group, 1064 Budapest, Hungary
- Alexander Logemann
- Institute of Psychology, Faculty of Education and Psychology, ELTE Eötvös Loránd University, 1064 Budapest, Hungary
- Zsofia K. Takacs
- Clinical Psychology, School of Health in Social Science, University of Edinburgh, Edinburgh EH8 9AG, UK
40.
Hag A, Handayani D, Altalhi M, Pillai T, Mantoro T, Kit MH, Al-Shargie F. Enhancing EEG-Based Mental Stress State Recognition Using an Improved Hybrid Feature Selection Algorithm. Sensors (Basel) 2021;21:8370. PMID: 34960469; DOI: 10.3390/s21248370.
Abstract
In real-life applications, electroencephalogram (EEG) signals for mental stress recognition require a conventional wearable device. This, in turn, requires an efficient number of EEG channels and an optimal feature set. This study aims to identify an optimal feature subset that can discriminate between mental stress states while enhancing overall classification performance. We extracted multi-domain features in the time domain, frequency domain, and time-frequency domain, together with network connectivity features, to form a prominent feature vector space for stress. We then proposed a hybrid feature selection (FS) method using minimum redundancy maximum relevance with particle swarm optimization and support vector machines (mRMR-PSO-SVM) to select the optimal feature subset. The performance of the proposed method is evaluated and verified using four datasets: EDMSS, DEAP, SEED, and EDPMSC. To further consolidate these results, the effectiveness of the proposed method is compared with that of state-of-the-art metaheuristic methods. The proposed model reduced the feature vector space by an average of 70% compared with the state-of-the-art methods while significantly increasing overall detection performance.
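The mRMR half of the pipeline is easy to sketch: greedily add the feature with the best relevance-minus-redundancy score. The sketch below scores both terms with absolute Pearson correlation as a cheap stand-in for mutual information and omits the PSO search and SVM wrapper entirely, so it illustrates only the selection criterion, not the authors' full mRMR-PSO-SVM method:

```python
import numpy as np

def greedy_mrmr(X: np.ndarray, y: np.ndarray, n_select: int) -> list:
    """Greedy minimum-redundancy maximum-relevance selection, using |Pearson r|
    in place of mutual information."""
    n_feat = X.shape[1]
    rel = np.array([abs(np.corrcoef(X[:, j], y)[0, 1]) for j in range(n_feat)])
    selected = [int(np.argmax(rel))]           # start with the most relevant feature
    while len(selected) < n_select:
        scores = {}
        for j in set(range(n_feat)) - set(selected):
            red = np.mean([abs(np.corrcoef(X[:, j], X[:, s])[0, 1]) for s in selected])
            scores[j] = rel[j] - red           # relevance minus mean redundancy
        selected.append(max(scores, key=scores.get))
    return selected

rng = np.random.default_rng(0)
f0 = rng.standard_normal(300)
f2 = rng.standard_normal(300)
X = np.column_stack([f0,                                    # relevant
                     f0 + 0.01 * rng.standard_normal(300),  # near-copy: redundant
                     f2,                                    # relevant, independent
                     rng.standard_normal(300)])             # pure noise
y = f0 + f2
picked = greedy_mrmr(X, y, n_select=2)
print(picked)
```

The criterion skips the near-copy of an already selected feature in favor of the independent relevant one, which is exactly the behavior that shrinks the channel and feature count.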
41.
Iliopoulos AC, Papasotiriou I. Functional Complex Networks Based on Operational Architectonics: Application on Electroencephalography-Brain-computer Interface for Imagined Speech. Neuroscience 2021;484:98-118. PMID: 34871742; DOI: 10.1016/j.neuroscience.2021.11.045.
Abstract
A new method for analyzing complex brain dynamics and states is presented. This method constructs functional brain graphs and rests on two pillars: (a) the operational architectonics (OA) concept of brain and mind functioning, and (b) network neuroscience. In particular, the algorithm uses the OA framework for a non-parametric segmentation of EEGs, which leads to the identification of change points, namely abrupt jumps in EEG amplitude, called rapid transition processes (RTPs). Subsequently, the time coordinates of the RTPs are used to generate undirected weighted complex networks fulfilling a scale-free topology criterion, from which various network metrics of brain connectivity are estimated. These metrics form feature vectors that can be used in machine learning algorithms for classification and/or prediction. The method is tested on classification problems from an EEG-based BCI data set acquired from individuals during imagined pronunciation of various words/vowels. The classification results, based on a naïve Bayes classifier, show that the overall accuracies were above chance level in all tested cases. The method was also compared with other state-of-the-art computational approaches commonly used for functional network generation, exhibiting competitive performance. It can be useful to neuroscientists wishing to enhance their repository of brain research algorithms.
Affiliation(s)
- A C Iliopoulos
- Research Genetic Cancer Centre S.A., Industrial Area of Florina, 53100 Florina, Greece
- I Papasotiriou
- Research Genetic Cancer Centre International GmbH, Zug 6300, Switzerland
42.
Martínez-Cagigal V, Thielen J, Santamaría-Vázquez E, Pérez-Velasco S, Desain P, Hornero R. Brain-computer interfaces based on code-modulated visual evoked potentials (c-VEP): a literature review. J Neural Eng 2021;18. PMID: 34763331; DOI: 10.1088/1741-2552/ac38cf.
Abstract
Objective. Code-modulated visual evoked potentials (c-VEP) have been consolidated in recent years as robust control signals capable of providing non-invasive brain-computer interfaces (BCIs) for reliable, high-speed communication. Their usefulness for communication and control purposes has been reflected in an exponential increase of related articles in the last decade. The aim of this review is to provide a comprehensive overview of the literature on c-VEP-based BCIs, from their inception (1984) until today (2021), and to identify promising future research lines. Approach. The literature review was conducted according to the Preferred Reporting Items for Systematic reviews and Meta-Analysis guidelines. After assessing the eligibility of journal manuscripts, conference papers, book chapters, and non-indexed documents, a total of 70 studies were included. The main characteristics and design choices of c-VEP-based BCIs were comprehensively analyzed and discussed, including stimulation paradigms, signal processing, response modeling, and applications. Main results. The literature review showed that state-of-the-art c-VEP-based BCIs are able to provide accurate control of the system with a large number of commands and high selection speeds, even without calibration. In general, a lack of validation in real setups was observed, especially regarding validation with disabled populations. Future work should focus on developing self-paced, portable c-VEP-based BCIs applied in real-world environments that could exploit the unique benefits of c-VEP paradigms. Some aspects, such as asynchrony, unsupervised training, and code optimization, still require further research and development. Significance. Despite the growing popularity of c-VEP-based BCIs, to the best of our knowledge this is the first literature review on the topic. In addition to providing a joint discussion of the advances in the field, some future lines of research are suggested to contribute to the development of reliable plug-and-play c-VEP-based BCIs.
Affiliation(s)
- Víctor Martínez-Cagigal
- Biomedical Engineering Group, E.T.S. Ingenieros de Telecomunicación, Paseo de Belén, 15, University of Valladolid, Valladolid, Spain; Biomedical Research Networking Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Madrid, Spain
- Jordy Thielen
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- Eduardo Santamaría-Vázquez
- Biomedical Engineering Group, E.T.S. Ingenieros de Telecomunicación, Paseo de Belén, 15, University of Valladolid, Valladolid, Spain; Biomedical Research Networking Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Madrid, Spain
- Sergio Pérez-Velasco
- Biomedical Engineering Group, E.T.S. Ingenieros de Telecomunicación, Paseo de Belén, 15, University of Valladolid, Valladolid, Spain
- Peter Desain
- Radboud University, Donders Institute for Brain, Cognition and Behaviour, Nijmegen, The Netherlands
- Roberto Hornero
- Biomedical Engineering Group, E.T.S. Ingenieros de Telecomunicación, Paseo de Belén, 15, University of Valladolid, Valladolid, Spain; Biomedical Research Networking Center in Bioengineering, Biomaterials and Nanomedicine (CIBER-BBN), Madrid, Spain
43.
Si X, Li S, Xiang S, Yu J, Ming D. Imagined speech increases the hemodynamic response and functional connectivity of the dorsal motor cortex. J Neural Eng 2021;18. PMID: 34507311; DOI: 10.1088/1741-2552/ac25d9.
Abstract
Objective. Decoding imagined speech from brain signals could provide a more natural, user-friendly way for developing the next generation of the brain-computer interface (BCI). With the advantages of non-invasive, portable, relatively high spatial resolution and insensitivity to motion artifacts, the functional near-infrared spectroscopy (fNIRS) shows great potential for developing the non-invasive speech BCI. However, there is a lack of fNIRS evidence in uncovering the neural mechanism of imagined speech. Our goal is to investigate the specific brain regions and the corresponding cortico-cortical functional connectivity features during imagined speech with fNIRS.Approach. fNIRS signals were recorded from 13 subjects' bilateral motor and prefrontal cortex during overtly and covertly repeating words. Cortical activation was determined through the mean oxygen-hemoglobin concentration changes, and functional connectivity was calculated by Pearson's correlation coefficient.Main results. (a) The bilateral dorsal motor cortex was significantly activated during the covert speech, whereas the bilateral ventral motor cortex was significantly activated during the overt speech. (b) As a subregion of the motor cortex, sensorimotor cortex (SMC) showed a dominant dorsal response to covert speech condition, whereas a dominant ventral response to overt speech condition. (c) Broca's area was deactivated during the covert speech but activated during the overt speech. (d) Compared to overt speech, dorsal SMC(dSMC)-related functional connections were enhanced during the covert speech.Significance. We provide fNIRS evidence for the involvement of dSMC in speech imagery. dSMC is the speech imagery network's key hub and is probably involved in the sensorimotor information processing during the covert speech. This study could inspire the BCI community to focus on the potential contribution of dSMC during speech imagery.
Affiliation(s)
- Xiaopeng Si
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China; Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China; Tianjin International Engineering Institute, Tianjin University, Tianjin 300072, People's Republic of China; Institute of Applied Psychology, Tianjin University, Tianjin 300350, People's Republic of China
- Sicheng Li
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China; Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China
- Shaoxin Xiang
- Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China; Tianjin International Engineering Institute, Tianjin University, Tianjin 300072, People's Republic of China
- Jiayue Yu
- Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China; Tianjin International Engineering Institute, Tianjin University, Tianjin 300072, People's Republic of China
- Dong Ming
- Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin 300072, People's Republic of China; Tianjin Key Laboratory of Brain Science and Neural Engineering, Tianjin University, Tianjin 300072, People's Republic of China
44.
Barria P, Pino A, Tovar N, Gomez-Vargas D, Baleta K, Díaz CAR, Múnera M, Cifuentes CA. BCI-Based Control for Ankle Exoskeleton T-FLEX: Comparison of Visual and Haptic Stimuli with Stroke Survivors. Sensors (Basel) 2021;21:6431. PMID: 34640750; DOI: 10.3390/s21196431.
Abstract
The brain–computer interface (BCI) remains an emerging tool that seeks to improve patient interaction with therapeutic mechanisms and to progressively generate neuroplasticity through neuromotor abilities. Motor imagery (MI) analysis is the most widely used paradigm, based on the motor cortex's electrical activity, for detecting movement intention. It has been shown that motor imagery mental practice with movement-associated stimuli may offer an effective strategy to facilitate motor recovery in brain injury patients. In this sense, this study presents a BCI associated with visual and haptic stimuli to facilitate MI generation and to control the T-FLEX ankle exoskeleton. To achieve this, five post-stroke patients (55–63 years) were subjected to three different strategies using T-FLEX: stationary therapy (ST) without motor imagination, motor imagination with visual stimulation (MIV), and motor imagination with visual-haptic inducement (MIVH). The two BCI stimulation strategies were characterized quantitatively through the motor imagery accuracy rate, electroencephalographic (EEG) analysis during the MI active periods, statistical analysis, and the patients' subjective perception. The preliminary results demonstrated the viability of the BCI-controlled ankle exoskeleton system based on the beta rebound, in terms of the patients' performance during MI active periods and their satisfaction outcomes. Accuracy was higher with the haptic stimulus, averaging 68% compared with 50.7% for the visual stimulus alone. However, the power spectral density (PSD) showed no prominent change in activation of the MI band, although it did show significant variations in laterality. In this way, the visual and haptic stimuli improved the subjects' MI accuracy but did not generate differential brain activity over the affected hemisphere. Hence, long-term sessions with a larger sample and a more robust algorithm should be carried out to evaluate the impact of the proposed system on neuronal and motor evolution after stroke.
45
Haddix C, Al-Bakri AF, Sunderam S. Prediction of isometric handgrip force from graded event-related desynchronization of the sensorimotor rhythm. J Neural Eng 2021; 18. [PMID: 34479215 DOI: 10.1088/1741-2552/ac23c0] [Received: 10/01/2020] [Accepted: 09/03/2021] [Indexed: 11/12/2022]
Abstract
Objective. Brain-computer interfaces (BCIs) show promise as a direct line of communication between the brain and the outside world that could benefit those with impaired motor function. But the commands available for BCI operation are often limited by the ability of the decoder to differentiate between the many distinct motor or cognitive tasks that can be visualized or attempted. Simple binary command signals (e.g. right hand at rest versus movement) are therefore used due to their ability to produce large observable differences in neural recordings. At the same time, frequent command switching can impose greater demands on the subject's focus and takes time to learn. Here, we attempt to decode the degree of effort in a specific movement task to produce a graded and more flexible command signal. Approach. Fourteen healthy human subjects (nine male, five female) responded to visual cues by squeezing a hand dynamometer to different levels of predetermined force, guided by continuous visual feedback, while the electroencephalogram (EEG) and grip force were monitored. Movement-related EEG features were extracted and modeled to predict exerted force. Main results. We found that event-related desynchronization (ERD) of the 8-30 Hz mu-beta sensorimotor rhythm of the EEG is separable for different degrees of motor effort. Upon four-fold cross-validation, linear classifiers were found to predict grip force from an ERD vector with mean accuracies across subjects of 53% and 55% for the dominant and non-dominant hand, respectively. ERD amplitude increased with target force but appeared to pass through a trough that hinted at non-monotonic behavior. Significance. Our results suggest that modeling and interactive feedback based on the intended level of motor effort is feasible. The observed ERD trends suggest that different mechanisms may govern intermediate versus low and high degrees of motor effort. This may have utility in rehabilitative protocols for motor impairments.
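The mu-beta ERD measure described in this abstract can be sketched in a few lines. The periodogram-based band-power estimate and the function names below are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def bandpower(x, fs, f_lo, f_hi):
    """Mean periodogram power of x within the [f_lo, f_hi] Hz band."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / len(x)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return psd[band].mean()

def erd_percent(task, ref, fs, band=(8.0, 30.0)):
    """ERD of a task window relative to a resting reference window, in
    percent; negative values indicate desynchronization (power loss)."""
    p_task = bandpower(task, fs, *band)
    p_ref = bandpower(ref, fs, *band)
    return 100.0 * (p_task - p_ref) / p_ref
```

Halving the oscillation amplitude during the task quarters the band power, so this sketch reports an ERD of −75% for that case; a vector of such values per channel or sub-band is what a linear classifier like the one in the study would consume.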
Affiliation(s)
- Chase Haddix: F. Joseph Halcomb III, MD, Department of Biomedical Engineering, University of Kentucky, Lexington, KY 40506, United States of America
- Amir F Al-Bakri: F. Joseph Halcomb III, MD, Department of Biomedical Engineering, University of Kentucky, Lexington, KY 40506, United States of America; Department of Biomedical Engineering, University of Babylon, Babylon, Iraq
- Sridhar Sunderam: F. Joseph Halcomb III, MD, Department of Biomedical Engineering, University of Kentucky, Lexington, KY 40506, United States of America
46
Kumar A, Pirogova E, Mahmoud SS, Fang Q. Classification of error-related potentials evoked during stroke rehabilitation training. J Neural Eng 2021; 18. [PMID: 34384052 DOI: 10.1088/1741-2552/ac1d32] [Received: 12/01/2020] [Accepted: 08/12/2021] [Indexed: 01/22/2023]
Abstract
Objective. Error-related potentials (ErrPs) are elicited in the human brain following the perception of an error. Recently, ErrPs have been observed in a novel task situation, i.e. when stroke patients perform upper-limb rehabilitation exercises. These ErrPs can be used to develop assist-as-needed (AAN) robotic stroke rehabilitation systems. However, to date, there is no reported research assessing the feasibility of using ErrPs to implement the AAN approach. Hence, in this study, we evaluated and compared the single-trial classification of novel ErrPs using various classical machine learning and deep learning approaches. Approach. Electroencephalogram data of 13 stroke patients, recorded while performing an upper-limb physical rehabilitation exercise, were used. Two classification approaches were utilized: one combining xDAWN spatial filtering and support vector machines, and the other using a convolutional neural network-based double transfer learning. Main results. Results showed that the ErrPs could be detected with a mean area under the receiver operating characteristic curve of 0.838 and a mean accuracy of 0.842, 0.257 above the chance level (p < 0.05), for within-subject classification. The results indicated the feasibility of using ErrP signals in real-time AAN robot therapy, with evidence from the conducted latency analysis, cross-subject classification, and three-class asynchronous classification. Significance. The findings presented support our proposed approach of using ErrPs as a measure to trigger and/or modulate, as required, the robotic assistance in a real-time human-in-the-loop robotic stroke rehabilitation system.
Affiliation(s)
- Akshay Kumar: Department of Biomedical Engineering, College of Engineering, Shantou University, Guangdong, People's Republic of China
- Elena Pirogova: School of Engineering, Royal Melbourne Institute of Technology University, Melbourne, Australia
- Seedahmed S Mahmoud: Department of Biomedical Engineering, College of Engineering, Shantou University, Guangdong, People's Republic of China
- Qiang Fang: Department of Biomedical Engineering, College of Engineering, Shantou University, Guangdong, People's Republic of China
47
Kim S, Lee S, Kang H, Kim S, Ahn M. P300 Brain-Computer Interface-Based Drone Control in Virtual and Augmented Reality. Sensors (Basel) 2021; 21:5765. [PMID: 34502655 DOI: 10.3390/s21175765] [Received: 07/27/2021] [Revised: 08/19/2021] [Accepted: 08/24/2021] [Indexed: 01/01/2023]
Abstract
Since the emergence of head-mounted displays (HMDs), researchers have attempted to introduce virtual and augmented reality (VR, AR) into brain–computer interface (BCI) studies. However, there is a lack of studies that incorporate both AR and VR to compare performance in the two environments. Therefore, it is necessary to develop a BCI application that can be used in both VR and AR so that BCI performance can be compared between the two environments. In this study, we developed an open-source drone control application using a P300-based BCI, which can be used in both VR and AR. Twenty healthy subjects participated in the experiment with this application. They were asked to control the drone in the two environments and filled out questionnaires before and after the experiment. We found no significant (p > 0.05) difference in online performance (classification accuracy and amplitude/latency of the P300 component) or user experience (satisfaction with time length, program, environment, interest, difficulty, immersion, and feeling of self-control) between VR and AR. This indicates that the P300 BCI paradigm is relatively reliable and may work well in various situations.
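At its core, a P300 selection like the one in this study reduces to picking the item whose trial-averaged epoch shows the strongest positive deflection in the P300 window. A minimal sketch with an assumed epoch layout (the abstract does not specify the authors' pipeline):

```python
import numpy as np

def detect_p300_target(epochs_by_item, fs, window=(0.25, 0.50)):
    """Return the item whose trial-averaged epoch has the largest mean
    amplitude inside the P300 window (seconds after stimulus onset).

    epochs_by_item maps an item label to an (n_trials, n_samples) array
    of single-trial EEG epochs from one channel (e.g. Pz)."""
    lo, hi = int(window[0] * fs), int(window[1] * fs)
    scores = {
        item: np.asarray(trials).mean(axis=0)[lo:hi].mean()
        for item, trials in epochs_by_item.items()
    }
    return max(scores, key=scores.get)
```

Averaging across trials suppresses background EEG while the time-locked P300 survives, which is why the attended item's score dominates even at poor single-trial signal-to-noise ratios.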
48
Li F, Chao W, Li Y, Fu B, Ji Y, Wu H, Shi G. Decoding imagined speech from EEG signals using hybrid-scale spatial-temporal dilated convolution network. J Neural Eng 2021; 18. [PMID: 34256357 DOI: 10.1088/1741-2552/ac13c0] [Received: 04/12/2021] [Accepted: 07/13/2021] [Indexed: 11/12/2022]
Abstract
Objective. Directly decoding imagined speech from electroencephalogram (EEG) signals has attracted much interest in brain-computer interface applications, because it provides a natural and intuitive communication method for locked-in patients. Several methods have been applied to imagined speech decoding, but how to construct spatial-temporal dependencies and capture long-range contextual cues in EEG signals to better decode imagined speech should be considered. Approach. In this study, we propose a novel model called hybrid-scale spatial-temporal dilated convolution network (HS-STDCN) for EEG-based imagined speech recognition. HS-STDCN integrates feature learning from temporal and spatial information into a unified end-to-end model. To characterize the temporal dependencies of the EEG sequences, we adopted a hybrid-scale temporal convolution layer to capture temporal information at multiple levels. A depthwise spatial convolution layer was then designed to construct intrinsic spatial relationships of EEG electrodes, which can produce a spatial-temporal representation of the input EEG data. Based on the spatial-temporal representation, dilated convolution layers were further employed to learn long-range discriminative features for the final classification. Main results. To evaluate the proposed method, we compared the HS-STDCN with other existing methods on our collected dataset. The HS-STDCN achieved an average classification accuracy of 54.31% for decoding eight imagined words, which is significantly better than the other methods at a significance level of 0.05. Significance. The proposed HS-STDCN model provides an effective approach to exploiting both the temporal and spatial dependencies of the input EEG signals for imagined speech recognition. We also visualized word semantic differences to analyze the impact of word semantics on imagined speech recognition, investigated the important regions in the decoding process, and explored the use of fewer electrodes to achieve comparable performance.
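Dilated convolutions capture the long-range context this abstract refers to because the receptive field grows with the dilation rate, not just the depth. For a stack of stride-1 layers the arithmetic is simple; the kernel sizes and dilation rates below are illustrative, not the HS-STDCN configuration:

```python
def receptive_field(kernel_sizes, dilations):
    """Receptive field, in input samples, of a stack of stride-1 dilated
    1-D convolutions: each layer widens it by (kernel - 1) * dilation."""
    rf = 1
    for k, d in zip(kernel_sizes, dilations):
        rf += (k - 1) * d
    return rf
```

Three kernel-3 layers with dilations 1, 2, 4 already span 15 input samples, versus 7 for the same stack without dilation, which is why dilated stacks reach long-range EEG context with few parameters.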
Affiliation(s)
- Fu Li: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, People's Republic of China
- Weibing Chao: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, People's Republic of China
- Yang Li: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, People's Republic of China
- Boxun Fu: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, People's Republic of China
- Youshuo Ji: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, People's Republic of China
- Hao Wu: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, People's Republic of China
- Guangming Shi: Key Laboratory of Intelligent Perception and Image Understanding of Ministry of Education, School of Artificial Intelligence, Xidian University, Xi'an, People's Republic of China
49
Zhou X, Xu M, Xiao X, Wang Y, Jung TP, Ming D. Detection of fixation points using a small visual landmark for brain-computer interfaces. J Neural Eng 2021; 18. [PMID: 34130268 DOI: 10.1088/1741-2552/ac0b51] [Received: 02/04/2021] [Accepted: 06/15/2021] [Indexed: 11/12/2022]
Abstract
Objective. The speed of visual brain-computer interfaces (v-BCIs) has been greatly improved in recent years. However, traditional v-BCI paradigms require users to gaze directly at intensively flickering items, which causes severe problems such as visual fatigue and excessive visual resource consumption in practical applications. Therefore, it is imperative to develop a user-friendly v-BCI. Approach. According to the retina-cortical relationship, this study developed a novel BCI paradigm to detect the fixation point of the eyes using a small visual stimulus that subtended only 0.6° of visual angle and lay outside the central visual field. Specifically, the visual stimulus was treated as a landmark to judge the eccentricity and polar angle of the fixation point. Sixteen different fixation points were selected around the visual landmark, i.e. different combinations of two eccentricities (2° and 4°) and eight polar angles (0, π/4, π/2, 3π/4, π, 5π/4, 3π/2, and 7π/4). Twelve subjects participated in this study, and they were asked to gaze at one of the 16 points in each trial. A multi-class discriminative canonical pattern matching (Multi-DCPM) algorithm was proposed to decode the user's fixation point. Main results. We found that the visual stimulation landmark elicited different spatial event-related potential patterns for different fixation points. Multi-DCPM achieved an average accuracy of 66.2% with a standard deviation of 15.8% for the classification of the sixteen fixation points, significantly higher than traditional algorithms (p ≤ 0.001). The experimental results demonstrate the feasibility of using a small visual stimulus as a landmark to track the relative position of the fixation point. Significance. The proposed paradigm offers a potential approach to alleviating the problem of irritating stimuli in v-BCIs, which can broaden the applications of BCIs.
Affiliation(s)
- Xiaoyu Zhou: The Laboratory of Neural Engineering & Rehabilitation, Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, People's Republic of China
- Minpeng Xu: The Laboratory of Neural Engineering & Rehabilitation, Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, People's Republic of China; The Tianjin International Joint Research Center for Neural Engineering, Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, People's Republic of China
- Xiaolin Xiao: The Laboratory of Neural Engineering & Rehabilitation, Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, People's Republic of China; The Tianjin International Joint Research Center for Neural Engineering, Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, People's Republic of China
- Yijun Wang: The State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Sciences, Beijing, People's Republic of China
- Tzyy-Ping Jung: The Laboratory of Neural Engineering & Rehabilitation, Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, People's Republic of China; The Swartz Center for Computational Neuroscience, University of California, San Diego, CA, United States of America
- Dong Ming: The Laboratory of Neural Engineering & Rehabilitation, Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, People's Republic of China; The Tianjin International Joint Research Center for Neural Engineering, Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, People's Republic of China
50
Chen Y, Yang C, Ye X, Chen X, Wang Y, Gao X. Implementing a calibration-free SSVEP-based BCI system with 160 targets. J Neural Eng 2021; 18. [PMID: 34134091 DOI: 10.1088/1741-2552/ac0bfa] [Received: 04/01/2021] [Accepted: 06/16/2021] [Indexed: 11/11/2022]
Abstract
Objective. Steady-state visual evoked potential (SSVEP) is an essential paradigm of electroencephalogram-based brain-computer interfaces (BCIs). Previous studies in the BCI research field mostly focused on enhancing classification accuracy and reducing stimulus duration. This study, however, concentrated on increasing the number of available targets in BCI systems without calibration. Approach. Motivated by the idea of multiple frequency sequential coding, we developed a calibration-free SSVEP-BCI system implementing 160 targets by four continuous sinusoidal stimuli lasting four seconds in total. Taking advantage of the benchmark dataset of SSVEP-BCI, this study optimized an arrangement of stimulus sequences, maximizing the response distance between different stimuli. We proposed an effective classification algorithm based on filter bank canonical correlation analysis. To evaluate the performance of this system, we conducted offline and online experiments using cue-guided selection tasks. Eight subjects participated in the offline experiments, and 12 subjects participated in the online experiments with real-time feedback. Main results. Offline experiments indicated the feasibility of the stimulation selection and detection algorithms. Furthermore, the online system achieved an average accuracy of 87.16 ± 11.46% and an information transfer rate of 78.84 ± 15.59 bits/min. Specifically, seven of the 12 subjects accomplished the online experiments with accuracy higher than 90%. This study proposed an intact solution for applying numerous targets to SSVEP-based BCIs. The experimental results confirmed the utility and efficiency of the system. Significance. This study is the first to provide a calibration-free SSVEP-BCI speller system that enables more than 100 commands. This system could significantly expand the application scenarios of SSVEP-based BCIs. Meanwhile, the design criterion can hopefully enhance the overall performance of BCI systems. The demo video can be found in the supplementary material available online at stacks.iop.org/JNE/18/046094/mmedia.
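The canonical-correlation step at the heart of such calibration-free SSVEP decoders compares the multichannel EEG with sine/cosine reference templates at each candidate frequency and selects the frequency with the largest canonical correlation. A minimal single-bank sketch (the full filter-bank method adds sub-band filtering and a weighted combination, omitted here; all names are illustrative):

```python
import numpy as np

def cca_max_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    qx, _ = np.linalg.qr(X)
    qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(qx.T @ qy, compute_uv=False)[0]

def ssvep_detect(eeg, fs, freqs, n_harmonics=2):
    """Classify an (n_samples, n_channels) EEG segment as the candidate
    stimulus frequency whose sine/cosine reference set (fundamental plus
    harmonics) yields the highest canonical correlation with the data."""
    t = np.arange(eeg.shape[0]) / fs
    best_f, best_r = None, -1.0
    for f in freqs:
        ref = np.column_stack([
            fn(2 * np.pi * f * h * t)
            for h in range(1, n_harmonics + 1)
            for fn in (np.sin, np.cos)
        ])
        r = cca_max_corr(eeg, ref)
        if r > best_r:
            best_f, best_r = f, r
    return best_f
```

Because the reference templates are generated analytically, no per-user calibration data are needed, which is the property the 160-target system above exploits.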
Affiliation(s)
- Yonghao Chen: School of Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, People's Republic of China; Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, People's Republic of China
- Chen Yang: School of Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, People's Republic of China
- Xiaochen Ye: School of Electronic Engineering, Beijing University of Posts and Telecommunications, Beijing 100876, People's Republic of China
- Xiaogang Chen: Institute of Biomedical Engineering, Chinese Academy of Medical Sciences and Peking Union Medical College, Tianjin 300192, People's Republic of China
- Yijun Wang: State Key Laboratory on Integrated Optoelectronics, Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, People's Republic of China
- Xiaorong Gao: Department of Biomedical Engineering, School of Medicine, Tsinghua University, Beijing 100084, People's Republic of China