1. Wu H, Xie Q, Yu Z, Zhang J, Liu S, Long J. Unsupervised heterogeneous domain adaptation for EEG classification. J Neural Eng 2024. [PMID: 38968936] [DOI: 10.1088/1741-2552/ad5fbd] [Indexed: 07/07/2024]
Abstract
Objective. Domain adaptation has been recognized as a potent solution to the challenge of limited training data for electroencephalography (EEG) classification tasks. Existing studies primarily focus on homogeneous environments; however, the heterogeneous properties of EEG data arising from device diversity cannot be overlooked. This motivates the development of heterogeneous domain adaptation methods that can fully exploit the knowledge from an auxiliary heterogeneous domain for EEG classification. Approach. In this article, we propose a novel model named Informative Representation Fusion (IRF) to tackle the problem of unsupervised heterogeneous domain adaptation in the context of EEG data. In IRF, we consider different perspectives of the data, i.e., independent and identically distributed (iid) and non-iid, to learn different representations. Specifically, from the non-iid perspective, IRF models high-order correlations among data with hypergraphs and develops hypergraph encoders to obtain data representations for each domain. From the iid perspective, applying multi-layer perceptron networks to the source and target domain data yields another type of representation for both domains. Subsequently, an attention mechanism fuses these two types of representations to yield informative features. To learn transferable representations, the Maximum Mean Discrepancy (MMD) is used to align the distributions of the source and target domains based on the fused features. Main results. Experimental results on several real-world datasets demonstrate the effectiveness of the proposed model. Significance. This article handles an EEG classification setting in which the source and target EEG data lie in different spaces and, moreover, the target domain is unlabeled. This setting is practical in the real world but barely studied in the literature. The proposed model achieves high classification accuracy, and this study is important for the commercial applications of EEG-based BCIs.
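The distribution-alignment quantity that IRF relies on, Maximum Mean Discrepancy, is standard and compact enough to sketch. The following is a minimal illustration with our own helper name `rbf_mmd2` and an assumed RBF bandwidth, not the authors' implementation:

```python
import numpy as np

def rbf_mmd2(X, Y, gamma=0.05):
    """Squared Maximum Mean Discrepancy between sample sets X and Y
    under an RBF kernel k(a, b) = exp(-gamma * ||a - b||^2).
    gamma is a bandwidth choice (a heuristic, our assumption)."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-gamma * d2)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.normal(0.0, 1.0, size=(200, 8))       # fused source features
tgt_near = rng.normal(0.0, 1.0, size=(200, 8))  # target: same distribution
tgt_far = rng.normal(3.0, 1.0, size=(200, 8))   # target: shifted distribution
mmd_near = rbf_mmd2(src, tgt_near)
mmd_far = rbf_mmd2(src, tgt_far)
```

Minimizing such a term over the fused features pulls the two domains' feature distributions together; here the shifted target yields a much larger MMD than the matched one.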
2. Wu X, Wellington S, Fu Z, Zhang D. Speech decoding from stereo-electroencephalography (sEEG) signals using advanced deep learning methods. J Neural Eng 2024; 21:036055. [PMID: 38885688] [DOI: 10.1088/1741-2552/ad593a] [Received: 01/22/2024] [Accepted: 06/17/2024] [Indexed: 06/20/2024]
Abstract
Objective. Brain-computer interfaces (BCIs) are technologies that bypass damaged or disrupted neural pathways and directly decode brain signals to perform intended actions. BCIs for speech have the potential to restore communication by decoding intended speech directly. Many studies have demonstrated promising results using invasive micro-electrode arrays and electrocorticography. However, the use of stereo-electroencephalography (sEEG) for speech decoding has not been fully explored. Approach. In this research, recently released sEEG data were used to decode Dutch words spoken by epileptic participants. We decoded speech waveforms from sEEG data using advanced deep-learning methods. Three methods were implemented: linear regression, a recurrent neural network (RNN)-based sequence-to-sequence model, and a transformer model. Main results. The RNN and transformer models significantly outperformed linear regression, while no significant difference was found between the two deep-learning methods. Further investigation of individual electrodes showed that the same decoding result can be obtained using only a few of the electrodes. Significance. This study demonstrated that decoding speech from sEEG signals is possible and that the location of the electrodes is critical to the decoding performance.
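The linear-regression baseline in speech-decoding pipelines of this kind is typically a regularized linear map from neural features to audio features. A minimal sketch with our own names; the 64 sEEG feature channels and 23 mel bins are illustrative assumptions, not taken from the paper:

```python
import numpy as np

def ridge_decoder(X, Y, alpha=1.0):
    """Closed-form ridge regression: W minimizing ||XW - Y||^2 + alpha*||W||^2.
    X: (frames, n_neural_features), Y: (frames, n_audio_features)."""
    d = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(d), X.T @ Y)

rng = np.random.default_rng(7)
X = rng.normal(size=(1000, 64))            # e.g. sEEG high-gamma features
W_true = rng.normal(size=(64, 23))         # hidden linear mapping (synthetic)
Y = X @ W_true + 0.1 * rng.normal(size=(1000, 23))  # noisy "spectrogram"
W = ridge_decoder(X, Y, alpha=1.0)
Y_hat = X @ W                              # decoded audio features
```

Deep sequence models replace this single linear map with learned nonlinear, temporally aware mappings, which is where the reported gains come from.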
3. Li X, Yang S, Fei N, Wang J, Huang W, Hu Y. A Convolutional Neural Network for SSVEP Identification by Using a Few-Channel EEG. Bioengineering (Basel) 2024; 11:613. [PMID: 38927850] [PMCID: PMC11200714] [DOI: 10.3390/bioengineering11060613] [Received: 04/03/2024] [Revised: 05/11/2024] [Accepted: 06/08/2024] [Indexed: 06/28/2024]
Abstract
The application of wearable electroencephalogram (EEG) devices is growing in brain-computer interfaces (BCIs) owing to their good wearability and portability. Compared with conventional devices, wearable devices typically support fewer EEG channels. Few-channel EEG has been shown to be feasible for steady-state visual evoked potential (SSVEP)-based BCI. However, using fewer channels can degrade BCI performance. To address this issue, an attention-based complex spectrum-convolutional neural network (atten-CCNN) is proposed in this study, which combines a CNN with a squeeze-and-excitation block and uses the spectrum of the EEG signal as the input. The proposed model was assessed on a wearable 40-class dataset and a public 12-class dataset under subject-independent and subject-dependent conditions. The results show that, whether using three-channel or single-channel EEG for SSVEP identification, atten-CCNN outperformed the baseline models, indicating that the new model can effectively enhance the performance of SSVEP-BCI with few-channel EEG. This SSVEP identification algorithm is therefore particularly suitable for use with wearable EEG devices.
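The "complex spectrum" input that atten-CCNN consumes can be illustrated with a plain FFT: stack the real and imaginary parts of each channel's spectrum over the SSVEP band as a channels-by-bins feature map. A minimal sketch; the sampling rate and the 8-64 Hz band limits are our assumptions:

```python
import numpy as np

def complex_spectrum_input(trial, fs=250, f_lo=8.0, f_hi=64.0):
    """Stack real and imaginary FFT parts of each channel, restricted to
    [f_lo, f_hi], as a (channels x 2*bins) feature map -- one common form
    of 'complex spectrum' input for SSVEP CNNs (band is our assumption)."""
    n_ch, n_samp = trial.shape
    spec = np.fft.rfft(trial, axis=1) / n_samp
    freqs = np.fft.rfftfreq(n_samp, d=1.0 / fs)
    band = (freqs >= f_lo) & (freqs <= f_hi)
    return np.concatenate([spec[:, band].real, spec[:, band].imag], axis=1)

fs = 250
t = np.arange(500) / fs                      # 2 s trial
trial = np.vstack([np.sin(2 * np.pi * 12.0 * t),          # 12 Hz SSVEP-like channel
                   0.1 * np.random.default_rng(1).normal(size=500)])  # noise channel
feat = complex_spectrum_input(trial, fs=fs)
```

With a 2 s window the bins are 0.5 Hz apart, so the 12 Hz response lands cleanly in one bin; the CNN then learns frequency-selective patterns from such maps.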
4. Khabti J, AlAhmadi S, Soudani A. Optimal Channel Selection of Multiclass Motor Imagery Classification Based on Fusion Convolutional Neural Network with Attention Blocks. Sensors (Basel) 2024; 24:3168. [PMID: 38794022] [PMCID: PMC11125262] [DOI: 10.3390/s24103168] [Received: 04/03/2024] [Revised: 05/08/2024] [Accepted: 05/14/2024] [Indexed: 05/26/2024]
Abstract
The widely adopted paradigm in brain-computer interfaces (BCIs) involves motor imagery (MI), enabling improved communication between humans and machines. EEG signals derived from MI present several challenges due to their inherent characteristics, which complicate classifying and identifying the potential tasks of a specific participant. Another issue is that BCI systems can produce noisy data and redundant channels, which in turn increase equipment and computational costs. To address these problems, optimal channel selection for multiclass MI classification based on a Fusion Convolutional Neural Network with Attention blocks (FCNNA) is proposed. In this study, we developed a CNN model consisting of convolutional blocks with multiple spatial and temporal filters. These filters are designed to capture the distribution and relationships of signal features across different electrode locations, as well as to analyze the evolution of these features over time. Following these layers, a Convolutional Block Attention Module (CBAM) is used to further enhance EEG feature extraction. For channel selection, a genetic algorithm selects the optimal set of channels using a new technique that delivers both fixed and variable channel sets for all participants. The proposed methodology is validated, showing a 6.41% improvement in multiclass classification compared to most baseline models. Notably, we achieved the highest result of 93.09% for binary classes involving left-hand and right-hand movements. In addition, the cross-subject strategy for multiclass classification yielded an accuracy of 68.87%. Following channel selection, multiclass classification accuracy was enhanced, reaching 84.53%. Overall, our experiments illustrate the efficiency of the proposed EEG MI model in both channel selection and classification, showing superior results with either a full channel set or a reduced number of channels.
5. Liu Y, Dai W, Liu Y, Hu D, Yang B, Zhou Z. An SSVEP-based BCI with 112 targets using frequency spatial multiplexing. J Neural Eng 2024; 21:036004. [PMID: 38639058] [DOI: 10.1088/1741-2552/ad4091] [Received: 12/13/2023] [Accepted: 04/15/2024] [Indexed: 04/20/2024]
Abstract
Objective. Brain-computer interface (BCI) systems with large, directly accessible instruction sets remain one of the difficulties in BCI research. Research aiming at high target resolution (⩾100 targets) has not yet entered a rapid development stage, which contradicts the application requirements. Steady-state visual evoked potential (SSVEP)-based BCIs have an advantage in the number of targets, but the competitive mechanism between a target stimulus and its neighboring stimuli is a key challenge that prevents the target resolution from being improved significantly. Approach. In this paper, we reverse the competitive mechanism and propose a frequency spatial multiplexing method to produce more targets with a limited number of frequencies. In the proposed paradigm, we replicated each flicker stimulus as a 2 × 2 matrix and arranged the matrices of all frequencies in a tiled fashion to form the interaction interface. With different arrangements, we designed and tested three example paradigms with different layouts. Furthermore, we designed a graph neural network that distinguishes between targets of the same frequency by recognizing the different electroencephalography (EEG) response distribution patterns evoked by each target and its neighboring targets. Main results. Extensive experiments with eleven subjects verified the validity of the proposed method. The average classification accuracies in the offline validation experiments for the three paradigms were 89.16%, 91.38%, and 87.90%, with information transfer rates (ITRs) of 51.66, 53.96, and 50.55 bits/min, respectively. Significance. This study utilized the positional relationship between stimuli and did not circumvent the competing-response problem. Therefore, other state-of-the-art methods focusing on enhancing the efficiency of SSVEP detection can be combined with the present method to achieve very promising improvements.
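The ITR figures quoted in these entries follow the standard Wolpaw formula, which combines the number of targets N, classification accuracy P, and time per selection T. A minimal sketch; the helper name and the 4 s example window are ours, not taken from the paper:

```python
import math

def itr_bits_per_min(n_targets, accuracy, selection_time_s):
    """Wolpaw information transfer rate in bits/min for an N-target BCI
    with accuracy P and one selection every T seconds."""
    n, p, t = n_targets, accuracy, selection_time_s
    if p >= 1.0:
        bits = math.log2(n)                  # perfect accuracy: log2(N) bits
    else:
        bits = (math.log2(n) + p * math.log2(p)
                + (1 - p) * math.log2((1 - p) / (n - 1)))
    return bits * 60.0 / t

# e.g. a 112-target paradigm at ~91% accuracy; the 4 s selection window
# here is an illustrative assumption.
example = itr_bits_per_min(112, 0.9138, 4.0)
```

The formula makes the trade-off explicit: adding targets raises log2(N), but any accuracy loss from stimulus competition eats directly into the achievable bit rate.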
6. Barmpas K, Panagakis Y, Zoumpourlis G, Adamos DA, Laskaris N, Zafeiriou S. A causal perspective on brainwave modeling for brain-computer interfaces. J Neural Eng 2024; 21:036001. [PMID: 38621380] [DOI: 10.1088/1741-2552/ad3eb5] [Received: 09/06/2023] [Accepted: 04/15/2024] [Indexed: 04/17/2024]
Abstract
Objective. Machine learning (ML) models have opened up enormous opportunities in the field of brain-computer interfaces (BCIs). Despite their great success, they usually face severe limitations when deployed in real-life applications outside a controlled laboratory setting. Approach. Mixing causal reasoning, which identifies causal relationships between variables of interest, with brainwave modeling can change one's viewpoint on some of the major challenges found at various stages of the ML pipeline, ranging from data collection and pre-processing to training methods and techniques. Main results. In this work, we employ causal reasoning and present a framework aiming to break down and analyze important challenges of brainwave modeling for BCIs. Significance. Furthermore, we show how general ML practices as well as brainwave-specific techniques can be utilized to solve some of these identified challenges. Finally, we discuss appropriate evaluation schemes to measure these techniques' performance and efficiently compare them with other methods that will be developed in the future.
7. Yan S, Hu Y, Zhang R, Qi D, Hu Y, Yao D, Shi L, Zhang L. Multilayer network-based channel selection for motor imagery brain-computer interface. J Neural Eng 2024; 21:016029. [PMID: 38295419] [DOI: 10.1088/1741-2552/ad2496] [Received: 08/17/2023] [Accepted: 01/31/2024] [Indexed: 02/02/2024]
Abstract
Objective. The number of electrode channels in a motor imagery-based brain-computer interface (MI-BCI) system influences not only its decoding performance but also its convenience for practical use. Although many channel selection methods have been proposed in the literature, they are usually based on univariate features of single channels. This neglects the interaction between channels and the exchange of information between networks operating in different frequency bands. Approach. We integrate brain networks from four frequency bands into a multilayer network framework and propose a multilayer network-based channel selection (MNCS) method for MI-BCI systems. A graph learning-based method is used to estimate the multilayer network from electroencephalogram (EEG) data filtered in multiple frequency bands. The multilayer participation coefficient of the multilayer network is then computed to select EEG channels that do not contain redundant information. The common spatial pattern (CSP) method is subsequently used to extract effective features. Finally, a support vector machine classifier with a linear kernel is trained to identify MI tasks. Main results. We used three publicly available BCI Competition datasets containing data from 12 healthy subjects and one dataset containing data from 15 stroke patients to validate the proposed method. The results showed that the proposed MNCS method outperformed the use of all channels (85.8% vs. 93.1%, 84.4% vs. 89.0%, 71.7% vs. 79.4%, and 72.7% vs. 84.0%). Moreover, it achieved significantly higher decoding accuracies than state-of-the-art methods (paired t-tests, p < 0.05). Significance. The experimental results showed that the proposed MNCS method can select appropriate channels to improve both the decoding performance and the usability of MI-BCI systems.
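The CSP feature-extraction step in this pipeline is a well-established algorithm: spatial filters are obtained from a generalized eigendecomposition of the two class covariance matrices. A minimal version with our own function name; real pipelines band-pass filter the EEG before computing covariances:

```python
import numpy as np
from scipy.linalg import eigh

def csp_filters(trials_a, trials_b, n_pairs=2):
    """Common Spatial Patterns: filters maximizing variance for one class
    while minimizing it for the other. trials_* shape:
    (n_trials, n_channels, n_samples)."""
    def mean_cov(trials):
        return np.mean([np.cov(t) for t in trials], axis=0)
    ca, cb = mean_cov(trials_a), mean_cov(trials_b)
    # Generalized eigenproblem ca w = lambda (ca + cb) w; large lambda
    # favors class A variance, small lambda favors class B.
    vals, vecs = eigh(ca, ca + cb)
    order = np.argsort(vals)                      # ascending eigenvalues
    picks = np.r_[order[:n_pairs], order[-n_pairs:]]
    return vecs[:, picks].T                       # (2*n_pairs, n_channels)

rng = np.random.default_rng(0)
# Synthetic 8-channel trials: class A has extra variance on channel 0,
# class B on channel 7.
a = rng.normal(size=(30, 8, 200)); a[:, 0] *= 3.0
b = rng.normal(size=(30, 8, 200)); b[:, 7] *= 3.0
W = csp_filters(a, b)
feat = np.log(np.var(W @ a[0], axis=1))           # log-variance CSP features
```

The log-variance of each filtered trial is the classic CSP feature vector that then feeds the linear SVM.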
8. Wu X, Zhang D, Li G, Gao X, Metcalfe B, Chen L. Data augmentation for invasive brain-computer interfaces based on stereo-electroencephalography (SEEG). J Neural Eng 2024; 21:016026. [PMID: 38237174] [DOI: 10.1088/1741-2552/ad200e] [Received: 06/06/2023] [Accepted: 01/18/2024] [Indexed: 02/23/2024]
Abstract
Objective. Deep learning is increasingly used for brain-computer interfaces (BCIs). However, the quantity of available data is sparse, especially for invasive BCIs. Data augmentation (DA) methods, such as generative models, can help to address this sparseness. However, existing studies on brain signals have been based on convolutional neural networks and have ignored temporal dependence. This paper attempts to enhance generative models by capturing the temporal relationship from a time-series perspective. Approach. A conditional generative network, a conditional transformer-based generative adversarial network (cTGAN), was proposed. The proposed method was tested using a stereo-electroencephalography (SEEG) dataset recorded from eight epileptic patients performing five different movements. Three other commonly used DA methods were also implemented: noise injection (NI), variational autoencoder (VAE), and conditional Wasserstein generative adversarial network with gradient penalty (cWGAN-GP). The artificial SEEG data generated by the proposed method were compared against these baselines using several metrics, including visual inspection, cosine similarity (CS), Jensen-Shannon distance (JSD), and the effect on the performance of a deep learning-based classifier. Main results. Both the proposed cTGAN and the cWGAN-GP methods were able to generate realistic data, while NI and VAE produced inferior samples when visualized as raw sequences and in a lower-dimensional space. The cTGAN generated the best samples in terms of CS and JSD and significantly outperformed cWGAN-GP in enhancing the performance of a deep learning-based classifier (yielding significant improvements of 6% and 3.4%, respectively). Significance. This is the first time that DA methods have been applied to invasive BCIs based on SEEG. In addition, this study demonstrates the advantages of a model that preserves temporal dependence from a time-series perspective.
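Of the compared baselines, noise injection is the simplest to reproduce. A minimal sketch with our own function name; the 10% noise scale, trial shapes, and label layout are illustrative assumptions:

```python
import numpy as np

def noise_injection(trials, labels, n_copies=2, scale=0.1, seed=0):
    """Noise-injection augmentation: each copy adds Gaussian noise whose
    std is `scale` times the per-channel std of the original trial.
    trials shape: (n_trials, n_channels, n_samples)."""
    rng = np.random.default_rng(seed)
    aug_x, aug_y = [trials], [labels]
    for _ in range(n_copies):
        sigma = trials.std(axis=-1, keepdims=True)          # per-channel std
        noise = rng.normal(0.0, 1.0, size=trials.shape) * scale * sigma
        aug_x.append(trials + noise)
        aug_y.append(labels)
    return np.concatenate(aug_x), np.concatenate(aug_y)

x = np.random.default_rng(1).normal(size=(20, 8, 100))      # trials x ch x time
y = np.arange(20) % 5                                       # 5 movement classes
xa, ya = noise_injection(x, y)
```

NI preserves each trial's waveform almost exactly, which is precisely why it adds little distributional diversity compared with generative models such as cTGAN.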
9. Papadopoulos S, Szul MJ, Congedo M, Bonaiuto JJ, Mattout J. Beta bursts question the ruling power for brain-computer interfaces. J Neural Eng 2024; 21:016010. [PMID: 38167234] [DOI: 10.1088/1741-2552/ad19ea] [Received: 09/15/2023] [Accepted: 01/02/2024] [Indexed: 01/05/2024]
Abstract
Objective. Current efforts to build reliable brain-computer interfaces (BCIs) span multiple axes, from hardware, to software, to more sophisticated experimental protocols and personalized approaches. However, despite these abundant efforts, there is still room for significant improvement. We argue that a rather overlooked direction lies in linking BCI protocols with recent advances in fundamental neuroscience. Approach. In light of these advances, and particularly the characterization of the burst-like nature of beta-band activity and the diversity of beta bursts, we revisit the role of beta activity in 'left vs. right hand' motor imagery (MI) tasks. Current decoding approaches for such tasks take advantage of the fact that MI generates time-locked changes in induced power in the sensorimotor cortex, and they rely on band-passed power changes in single or multiple channels. Although little is known about the dynamics of beta burst activity during MI, we hypothesized that beta bursts should be modulated in a way analogous to their activity during performance of real upper-limb movements. Main results and Significance. We show that classification features based on patterns of beta burst modulations yield decoding results that are equivalent to or better than typically used beta power across multiple open electroencephalography datasets, thus providing insights into the specificity of these biomarkers.
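A common simple way to operationalize beta bursts, not necessarily the authors' exact pipeline, is to band-pass the signal, take the Hilbert amplitude envelope, and mark supra-threshold intervals. A minimal sketch; the 2x-median threshold and band edges are our assumptions:

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def beta_bursts(x, fs=250, band=(13.0, 30.0), thresh_factor=2.0):
    """Detect beta bursts as intervals where the beta-band amplitude
    envelope exceeds thresh_factor times its median (a common simple
    criterion; burst-detection studies also use wavelet-based methods)."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    env = np.abs(hilbert(filtfilt(b, a, x)))       # amplitude envelope
    above = env > thresh_factor * np.median(env)
    onsets = np.flatnonzero(np.diff(above.astype(int)) == 1)  # rising edges
    return above, onsets

fs = 250
t = np.arange(5 * fs) / fs
x = 0.2 * np.random.default_rng(2).normal(size=t.size)
burst_mask = (t > 2.0) & (t < 2.4)                 # inject one 20 Hz burst
x[burst_mask] += np.sin(2 * np.pi * 20.0 * t[burst_mask])
above, onsets = beta_bursts(x, fs=fs)
```

Burst rate, timing, and duration statistics derived from such masks are the kind of features the study contrasts with plain trial-averaged beta power.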
10. Fuentes-Martinez VJ, Romero S, Lopez-Gordo MA, Minguillon J, Rodríguez-Álvarez M. Low-Cost EEG Multi-Subject Recording Platform for the Assessment of Students' Attention and the Estimation of Academic Performance in Secondary School. Sensors (Basel) 2023; 23:9361. [PMID: 38067731] [PMCID: PMC10708847] [DOI: 10.3390/s23239361] [Received: 10/19/2023] [Revised: 11/15/2023] [Accepted: 11/20/2023] [Indexed: 12/18/2023]
Abstract
The level of student attention in class greatly affects academic performance. Teachers typically rely on visual inspection to react to students' attention in time, but this subjective method leads to inconsistencies across classes. Online education exacerbates the issue, as students can turn off their cameras and microphones to protect their privacy. To address this, we present a novel, low-cost EEG-based platform for assessing students' attention and estimating their academic performance. In a study involving 34 secondary school students (aged 14 to 16), participants watched an academic video and answered evaluation questions while their EEG activity was recorded using a commercial headset. The results demonstrate a significant correlation (0.53, p-value = 0.003) between the power spectral density (PSD) of the EEG beta band (12-30 Hz) and students' academic performance. Additionally, there was a notable difference in PSD-beta between high and low academic performers. These findings support the use of PSD-beta for the immediate and objective assessment of both student attention and subsequent academic performance. The platform offers valuable, objective feedback to teachers, enhancing the effectiveness of both face-to-face and online teaching and learning environments.
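The PSD-beta marker used in this study is straightforward to compute with Welch's method. A minimal sketch; the sampling rate, window length, and the synthetic "attentive" signal are our assumptions, while the 12-30 Hz band follows the abstract:

```python
import numpy as np
from scipy.signal import welch

def beta_band_power(x, fs=256, band=(12.0, 30.0)):
    """Mean power spectral density in the beta band (12-30 Hz),
    estimated with Welch's method (2 s Hann segments)."""
    freqs, psd = welch(x, fs=fs, nperseg=fs * 2)
    sel = (freqs >= band[0]) & (freqs <= band[1])
    return psd[sel].mean()

fs = 256
t = np.arange(30 * fs) / fs
rng = np.random.default_rng(3)
# Synthetic contrast: one signal with extra 20 Hz (beta) activity, one without.
attentive = rng.normal(size=t.size) + 2.0 * np.sin(2 * np.pi * 20.0 * t)
baseline = rng.normal(size=t.size)
p_hi = beta_band_power(attentive, fs)
p_lo = beta_band_power(baseline, fs)
```

A per-student stream of such band-power values is the kind of objective, continuously updated attention index the platform feeds back to the teacher.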
11. Luo R, Xiao X, Chen E, Meng L, Jung TP, Xu M, Ming D. Almost free of calibration for SSVEP-based brain-computer interfaces. J Neural Eng 2023; 20:066013. [PMID: 37948768] [DOI: 10.1088/1741-2552/ad0b8f] [Received: 05/24/2023] [Accepted: 11/10/2023] [Indexed: 11/12/2023]
Abstract
Objective. Steady-state visual evoked potential (SSVEP)-based brain-computer interfaces (BCIs) are a promising technology that can achieve a high information transfer rate (ITR) with supervised algorithms such as ensemble task-related component analysis (eTRCA) and task-discriminant component analysis (TDCA). However, training individual models requires a tedious and time-consuming calibration process, which hinders the real-life use of SSVEP-BCIs. A recent data augmentation method, called source aliasing matrix estimation (SAME), can generate new EEG samples from a few calibration trials. However, SAME does not exploit information across stimuli and only reduces the number of calibration trials per command, so it still has limitations. Approach. This study proposes an extended version of SAME, called multi-stimulus SAME (msSAME), which exploits the similarity of the aliasing matrix across frequencies to enhance the performance of SSVEP-BCI with insufficient calibration trials. We also propose a semi-supervised approach based on msSAME that can further reduce the number of SSVEP frequencies needed for calibration. We evaluated our method on two public datasets, Benchmark and BETA, and in an online experiment. Main results. The results show that msSAME outperforms SAME for both eTRCA and TDCA on the public datasets. Moreover, the semi-supervised msSAME-based method achieves performance comparable to fully calibrated methods and outperforms conventional calibration-free methods. Remarkably, our method needs only 24 s to calibrate 40 targets in the online experiment and achieves an average ITR of 213.8 bits/min with a peak of 242.6 bits/min. Significance. This study significantly reduces the calibration effort for individual SSVEP-BCIs, which is beneficial for developing practical plug-and-play SSVEP-BCIs.
12. Pan L, Wang K, Xu L, Sun X, Yi W, Xu M, Ming D. Riemannian geometric and ensemble learning for decoding cross-session motor imagery electroencephalography signals. J Neural Eng 2023; 20:066011. [PMID: 37931299] [DOI: 10.1088/1741-2552/ad0a01] [Received: 07/27/2023] [Accepted: 11/06/2023] [Indexed: 11/08/2023]
Abstract
Objective. Brain-computer interfaces (BCIs) enable a direct communication pathway between the human brain and external devices, without relying on the traditional peripheral nervous and musculoskeletal systems. Motor imagery (MI)-based BCIs have attracted significant interest for their potential in motor rehabilitation. However, current algorithms fail to account for the cross-session variability of electroencephalography signals, limiting their practical application. Approach. We proposed a Riemannian geometry-based adaptive boosting and voting ensemble (RAVE) algorithm to address this issue. Our approach segments the MI period into multiple sub-datasets using a sliding window and extracts features from each sub-dataset using Riemannian geometry. We then train adaptive boosting (AdaBoost) ensemble classifiers for each sub-dataset, with the final BCI output determined by majority voting across all classifiers. We tested the proposed RAVE algorithm and eight competing algorithms on four datasets (Pan2023, BNCI001-2014, BNCI001-2015, BNCI004-2015). Main results. Our results showed that, in the cross-session scenario, the RAVE algorithm significantly outperformed the eight competing algorithms under different within-session training sample sizes. Compared to traditional algorithms that require a large number of training samples, the RAVE algorithm achieved similar or even better classification performance on the Pan2023, BNCI001-2014, and BNCI001-2015 datasets, even when it used no or only a small number of within-session training samples. Significance. These findings indicate that our cross-session decoding strategy could enable MI-BCI applications that require no or minimal training.
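The Riemannian feature-extraction step common to such pipelines maps each trial's spatial covariance matrix into the tangent space at a reference point, where ordinary classifiers apply. A minimal sketch; we use the Euclidean mean as the reference for brevity, whereas Riemannian pipelines typically use the geometric mean:

```python
import numpy as np
from scipy.linalg import inv, logm, sqrtm

def tangent_space_features(covs, ref):
    """Project SPD covariance matrices onto the tangent space at `ref`:
    S_i -> upper triangle of log(ref^-1/2 S_i ref^-1/2)."""
    r_inv_sqrt = inv(sqrtm(ref)).real
    iu = np.triu_indices(ref.shape[0])
    feats = []
    for s in covs:
        m = logm(r_inv_sqrt @ s @ r_inv_sqrt).real   # matrix logarithm
        feats.append(m[iu])
    return np.array(feats)

rng = np.random.default_rng(4)
trials = rng.normal(size=(10, 4, 200))               # 10 trials, 4 channels
covs = np.array([np.cov(t) for t in trials])         # per-trial covariances
ref = covs.mean(axis=0)                              # simple reference point
feats = tangent_space_features(covs, ref)
```

In RAVE-style pipelines, one such feature vector per sliding window feeds each AdaBoost classifier, and the window-wise decisions are combined by majority vote.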
13. Lun X, Zhang Y, Zhu M, Lian Y, Hou Y. A Combined Virtual Electrode-Based ESA and CNN Method for MI-EEG Signal Feature Extraction and Classification. Sensors (Basel) 2023; 23:8893. [PMID: 37960592] [PMCID: PMC10649179] [DOI: 10.3390/s23218893] [Received: 10/08/2023] [Revised: 10/27/2023] [Accepted: 10/30/2023] [Indexed: 11/15/2023]
Abstract
A Brain-Computer Interface (BCI) is a medium for communication between the human brain and computers that does not rely on other human neural tissues; it decodes Electroencephalography (EEG) signals and converts them into commands to control external devices. Motor Imagery (MI) is an important BCI paradigm that generates a spontaneous EEG signal without external stimulation by imagining limb movements to strengthen the brain's compensatory function, and it has a promising future in computer-aided diagnosis and rehabilitation technology for brain diseases. However, research on motor imagery-based brain-computer interface (MI-BCI) systems faces a series of technical difficulties: large individual differences between subjects and poor performance of cross-subject classification models; a low signal-to-noise ratio of EEG signals and poor classification accuracy; and poor online performance of MI-BCI systems. To address these problems, this paper proposed a method combining virtual electrode-based EEG Source Analysis (ESA) and a Convolutional Neural Network (CNN) for MI-EEG signal feature extraction and classification. The outcomes reveal that the online MI-BCI system developed with this method improves the decoding of multi-task MI-EEG after training. In cross-subject experiments, it learns generalized features from multiple subjects and shows some adaptability to the individual differences of new subjects. It can also decode EEG intent online and realize brain control of an intelligent cart, providing a new direction for research on online MI-BCI systems.
14. Chowdhury RR, Muhammad Y, Adeel U. Enhancing Cross-Subject Motor Imagery Classification in EEG-Based Brain-Computer Interfaces by Using Multi-Branch CNN. Sensors (Basel) 2023; 23:7908. [PMID: 37765965] [PMCID: PMC10536894] [DOI: 10.3390/s23187908] [Received: 07/11/2023] [Revised: 08/23/2023] [Accepted: 09/11/2023] [Indexed: 09/29/2023]
Abstract
A brain-computer interface (BCI) is a computer-based system that allows communication between the brain and the outer world, enabling users to interact with computers using neural activity, typically obtained from electroencephalogram (EEG) signals. A significant obstacle to the development of EEG-based BCIs is the classification of subject-independent motor imagery data, since EEG data are highly individualized. Deep learning techniques such as the convolutional neural network (CNN) have demonstrated their influence on feature extraction and can increase classification accuracy. In this paper, we present a multi-branch (five-branch) 2D convolutional neural network that employs different hyperparameters for each branch. The proposed model achieved promising results for cross-subject classification and outperformed EEGNet, ShallowConvNet, DeepConvNet, MMCNN, and EEGNet_Fusion on three public datasets. Our proposed model, EEGNet Fusion V2, achieves 89.6% and 87.8% accuracy for the actual and imagined motor activity of the eegmmidb dataset, and scores of 74.3% and 84.1% for the BCI IV-2a and IV-2b datasets, respectively. However, the proposed model has a somewhat higher computational cost, taking around 3.5 times more computation time per sample than EEGNet_Fusion.
15. Antony MJ, Sankaralingam BP, Khan S, Almjally A, Almujally NA, Mahendran RK. Brain-Computer Interface: The HOL-SSA Decomposition and Two-Phase Classification on the HGD EEG Data. Diagnostics (Basel) 2023; 13:2852. [PMID: 37685390] [PMCID: PMC10486696] [DOI: 10.3390/diagnostics13172852] [Received: 07/28/2023] [Revised: 08/25/2023] [Accepted: 08/28/2023] [Indexed: 09/10/2023]
Abstract
An efficient processing approach is essential for increasing identification accuracy, since the electroencephalogram (EEG) signals produced by Brain-Computer Interface (BCI) apparatus are nonlinear, nonstationary, and time-varying. The interpretation of scalp EEG recordings can be hampered by nonbrain contributions to the signals, referred to as artifacts. Common disturbances in the capture of EEG signals include the electrooculogram (EOG), electrocardiogram (ECG), electromyogram (EMG), and other artifacts, which have a significant impact on the extraction of meaningful information. This study suggests integrating the Singular Spectrum Analysis (SSA) and Independent Component Analysis (ICA) methods to preprocess the EEG data. The key objective of our research was to employ Higher-Order Linear-Moment-based SSA (HOL-SSA) to decompose EEG signals into multivariate components, followed by extracting source signals using Online Recursive ICA (ORICA). This approach effectively improves artifact rejection. Experimental results using the motor imagery High-Gamma Dataset validate our method's ability to identify and remove artifacts such as EOG, ECG, and EMG from EEG data, while preserving essential brain activity.
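The HOL-SSA step builds on classical SSA, which is compact enough to sketch in full: embed the signal into a trajectory matrix, take its SVD, and Hankelize each rank-one term back into an additive component. A vanilla-SSA illustration (the window length is our choice; the paper's higher-order-moment variant is not reproduced here):

```python
import numpy as np

def ssa_components(x, window):
    """Basic Singular Spectrum Analysis: embed a 1-D signal into a
    trajectory matrix, SVD it, and return one additive component per
    singular value via diagonal averaging (Hankelization)."""
    n = len(x)
    k = n - window + 1
    traj = np.column_stack([x[i:i + window] for i in range(k)])  # (window, k)
    u, s, vt = np.linalg.svd(traj, full_matrices=False)
    comps = []
    for i in range(len(s)):
        elem = s[i] * np.outer(u[:, i], vt[i])          # rank-1 piece
        # Average each anti-diagonal (entries with row+col = j) back to x[j].
        comp = np.array([np.mean(elem[::-1].diagonal(j - window + 1))
                         for j in range(n)])
        comps.append(comp)
    return np.array(comps)

t = np.arange(400) / 100.0
x = np.sin(2 * np.pi * 1.0 * t) + 0.05 * np.random.default_rng(5).normal(size=t.size)
comps = ssa_components(x, window=40)
```

Because the SVD is exact, the components sum back to the original signal; artifact rejection then amounts to discarding the components attributed to EOG/ECG/EMG before the ICA stage.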
16. Venkatesh S, Miranda ER, Braund E. SSVEP-based brain-computer interface for music using a low-density EEG system. Assist Technol 2023; 35:378-388. [PMID: 35713603] [DOI: 10.1080/10400435.2022.2084182] [Accepted: 05/21/2022] [Indexed: 10/18/2022]
Abstract
In this paper, we present a bespoke brain-computer interface (BCI), developed for a person with severe motor impairments who was previously a violinist, to allow performing and composing music at home. It uses the steady-state visually evoked potential (SSVEP) and adopts a dry, low-density, wireless electroencephalogram (EEG) headset. In this study, we investigated two parameters, (1) placement of the EEG headset and (2) inter-stimulus distance, and found that the former significantly improved the information transfer rate (ITR). To analyze the EEG, we adopted canonical correlation analysis (CCA) without weight calibration. The BCI for musical performance realized a high ITR of 37.59 ± 9.86 bits min⁻¹ and a mean accuracy of 88.89 ± 10.09%. The BCI for musical composition obtained an ITR of 14.91 ± 2.87 bits min⁻¹ and a mean accuracy of 95.83 ± 6.97%. The BCI was successfully deployed to the person with severe motor impairments. She regularly uses it for musical composition at home, demonstrating how BCIs can be translated from laboratories to real-world scenarios.
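Calibration-free CCA scoring of SSVEP, as used in this entry, can be sketched generically: each candidate frequency gets a sine/cosine reference template, and the frequency whose template correlates best with the EEG window wins. This is a minimal NumPy sketch, not the authors' exact pipeline; sampling rate, harmonics, and frequencies are illustrative:

```python
import numpy as np

def canonical_corr(X, Y):
    """Largest canonical correlation between the column spaces of X and Y."""
    X = X - X.mean(axis=0)
    Y = Y - Y.mean(axis=0)
    Qx, _ = np.linalg.qr(X)
    Qy, _ = np.linalg.qr(Y)
    return np.linalg.svd(Qx.T @ Qy, compute_uv=False)[0]

def ssvep_classify(eeg, freqs, fs, n_harmonics=2):
    """Pick the stimulus frequency whose reference best matches eeg (samples x channels)."""
    t = np.arange(eeg.shape[0]) / fs
    scores = []
    for f in freqs:
        # Sine/cosine reference signals at the fundamental and harmonics
        ref = np.column_stack([fn(2 * np.pi * f * h * t)
                               for h in range(1, n_harmonics + 1)
                               for fn in (np.sin, np.cos)])
        scores.append(canonical_corr(eeg, ref))
    return freqs[int(np.argmax(scores))]

# Synthetic 2 s, 3-channel recording dominated by a 10 Hz SSVEP
rng = np.random.default_rng(0)
fs = 250
t = np.arange(2 * fs) / fs
eeg = np.column_stack([np.sin(2 * np.pi * 10 * t + ph) for ph in (0.0, 0.5, 1.0)])
eeg = eeg + 0.3 * rng.standard_normal(eeg.shape)
pred = ssvep_classify(eeg, freqs=[8.0, 10.0, 12.0], fs=fs)
```

Because the references are analytic, no per-user weight calibration is needed, which is what makes this approach attractive for a home-deployed system.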
|
17
|
Liang L, Zhang Q, Zhou J, Li W, Gao X. Dataset Evaluation Method and Application for Performance Testing of SSVEP-BCI Decoding Algorithm. SENSORS (BASEL, SWITZERLAND) 2023; 23:6310. [PMID: 37514603 PMCID: PMC10385518 DOI: 10.3390/s23146310] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/25/2023] [Revised: 06/24/2023] [Accepted: 07/07/2023] [Indexed: 07/30/2023]
Abstract
Steady-state visual evoked potential (SSVEP)-based brain-computer interface (BCI) systems have been extensively researched over the past two decades, and multiple sets of standard datasets have been published and widely used. However, there are differences in sample distribution and collection equipment across datasets, and there is a lack of a unified evaluation method. Most new SSVEP decoding algorithms are tested on self-collected data or verified offline using one or two previous datasets, which can lead to performance differences in actual application scenarios. To address these issues, this paper proposes an SSVEP dataset evaluation method and analyzes six datasets with frequency- and phase-modulation paradigms to form an SSVEP algorithm evaluation dataset system. Finally, based on these datasets, performance tests were carried out on four existing SSVEP decoding algorithms. The findings reveal that the performance of the same algorithm varies significantly when tested on diverse datasets, and substantial performance variations were observed among subjects. These results demonstrate that the proposed evaluation method can integrate six datasets into an SSVEP algorithm performance-testing dataset system. This system can test and verify SSVEP decoding algorithms from different perspectives, such as different subjects, environments, and equipment, which is helpful for research on new SSVEP decoding algorithms and has significant reference value for other BCI application fields.
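When comparing decoders across datasets that differ in trial length and class count, as this entry does, the standard Wolpaw information transfer rate (ITR) is the usual common yardstick. A minimal sketch follows; the dataset names and numbers in `results` are illustrative, not taken from the paper:

```python
import numpy as np

def itr_bits_per_min(n_classes, accuracy, trial_seconds):
    """Wolpaw information transfer rate for an N-class BCI."""
    N, P = n_classes, accuracy
    if P <= 1.0 / N:                      # at or below chance: no information
        bits = 0.0
    elif P >= 1.0:                        # perfect accuracy: log2(N) bits per trial
        bits = np.log2(N)
    else:
        bits = np.log2(N) + P * np.log2(P) + (1 - P) * np.log2((1 - P) / (N - 1))
    return bits * 60.0 / trial_seconds

# Illustrative comparison: (n_classes, accuracy, seconds per trial)
results = {"dataset_A": (40, 0.90, 4.0), "dataset_B": (12, 0.85, 2.0)}
itrs = {name: itr_bits_per_min(*cfg) for name, cfg in results.items()}
```

Note how a lower-accuracy decoder on shorter trials can still achieve a competitive ITR, which is exactly why cross-dataset evaluation systems report it alongside raw accuracy.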
|
18
|
Fernández-Rodríguez Á, Ron-Angevin R, Velasco-Álvarez F, Diaz-Pineda J, Letouzé T, André JM. Evaluation of Single-Trial Classification to Control a Visual ERP-BCI under a Situation Awareness Scenario. Brain Sci 2023; 13:886. [PMID: 37371365 DOI: 10.3390/brainsci13060886] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/11/2023] [Revised: 05/15/2023] [Accepted: 05/29/2023] [Indexed: 06/29/2023] Open
Abstract
An event-related potential (ERP)-based brain-computer interface (BCI) can be used to monitor a user's cognitive state during a surveillance task in a situational-awareness context. The present study explores the use of an ERP-BCI for detecting new planes by an air traffic controller (ATC). Two experiments were conducted to evaluate the impact of different visual factors on target detection. Experiment 1 validated the type of stimulus used and the effect of not knowing its location of appearance in an ERP-BCI scenario. Experiment 2 evaluated the effects of the size of the target-stimulus appearance area and of the stimulus salience in an ATC scenario. The main results demonstrate that the size of the plane appearance area had a negative impact on detection performance and on the amplitude of the P300 component. Future studies should address this issue to improve an ATC's stimulus-detection performance using an ERP-BCI.
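Single-trial ERP detection of the kind evaluated in this entry is commonly done with shrinkage-regularized linear discriminant analysis (LDA). The sketch below runs on synthetic P300-like features; the shrinkage value, feature layout, and data are illustrative assumptions, not the authors' setup:

```python
import numpy as np

def fit_lda(X, y, shrinkage=0.1):
    """Binary shrinkage-LDA: returns (weights, bias) for the rule sign(X @ w + b)."""
    X0, X1 = X[y == 0], X[y == 1]
    m0, m1 = X0.mean(axis=0), X1.mean(axis=0)
    # Pooled within-class covariance, shrunk toward a scaled identity
    S = np.cov(np.vstack([X0 - m0, X1 - m1]).T)
    S = (1 - shrinkage) * S + shrinkage * (np.trace(S) / len(S)) * np.eye(len(S))
    w = np.linalg.solve(S, m1 - m0)
    b = -w @ (m0 + m1) / 2
    return w, b

# Synthetic trials: 16 features (e.g. mean amplitudes per time window),
# with target trials shifted upward, mimicking a P300 deflection
rng = np.random.default_rng(0)
n, d = 200, 16
y = rng.integers(0, 2, n)
X = rng.standard_normal((n, d)) + y[:, None] * 1.5
w, b = fit_lda(X, y)
acc = np.mean(((X @ w + b) > 0).astype(int) == y)
```

Shrinkage matters in ERP work because the number of features often approaches the number of trials, making the raw covariance estimate ill-conditioned.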
|
19
|
Abdulghani MM, Walters WL, Abed KH. Imagined Speech Classification Using EEG and Deep Learning. Bioengineering (Basel) 2023; 10:649. [PMID: 37370580 DOI: 10.3390/bioengineering10060649] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2023] [Revised: 05/23/2023] [Accepted: 05/25/2023] [Indexed: 06/29/2023] Open
Abstract
In this paper, we propose imagined-speech brain-wave pattern recognition using deep learning. Multiple features were extracted concurrently from eight-channel electroencephalography (EEG) signals. To obtain classifiable EEG data with fewer sensors, we placed the EEG sensors on carefully selected spots on the scalp. To decrease the dimensionality and complexity of the EEG dataset and to avoid overfitting during deep learning, we utilized the wavelet scattering transformation, which extracts the most stable features by passing the EEG dataset through a series of filtration processes; filtration was implemented for each individual command in the EEG datasets. A low-cost 8-channel EEG headset was used with MATLAB 2023a to acquire the EEG data. A long short-term memory recurrent neural network (LSTM-RNN) was used to decode the identified EEG signals into four audio commands: up, down, left, and right. The proposed imagined-speech brain-wave pattern recognition approach achieved a 92.50% overall classification accuracy, which is promising for designing trustworthy imagined-speech-based brain-computer interface (BCI) real-time systems. For a better evaluation of the classification performance, other metrics were considered, and we obtained 92.74%, 92.50%, and 92.62% for precision, recall, and F1-score, respectively.
|
20
|
Shuqfa Z, Belkacem AN, Lakas A. Decoding Multi-Class Motor Imagery and Motor Execution Tasks Using Riemannian Geometry Algorithms on Large EEG Datasets. SENSORS (BASEL, SWITZERLAND) 2023; 23:5051. [PMID: 37299779 DOI: 10.3390/s23115051] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/13/2023] [Revised: 05/11/2023] [Accepted: 05/15/2023] [Indexed: 06/12/2023]
Abstract
The use of Riemannian geometry decoding algorithms for classifying trials in electroencephalography-based motor-imagery brain-computer interfaces (BCIs) is relatively new and promises to outperform current state-of-the-art methods by overcoming the noise and nonstationarity of electroencephalography signals. However, the related literature reports high classification accuracy only on relatively small BCI datasets. The aim of this paper is to study the performance of a novel implementation of the Riemannian geometry decoding algorithm on large BCI datasets. In this study, we apply several Riemannian geometry decoding algorithms to a large offline dataset using four adaptation strategies: baseline, rebias, supervised, and unsupervised. Each adaptation strategy is applied to motor execution and motor imagery for two scenarios: 64 electrodes and 29 electrodes. The dataset is composed of four-class bilateral and unilateral motor imagery and motor execution from 109 subjects. We ran several classification experiments, and the results show that the best classification accuracy is obtained with the baseline minimum-distance-to-Riemannian-mean strategy. Mean accuracy reached up to 81.5% for motor execution and up to 76.4% for motor imagery. Accurate classification of EEG trials helps realize successful BCI applications that allow effective control of devices.
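The minimum-distance-to-Riemannian-mean (MDM) classifier that performs best in this study can be sketched as follows. This sketch uses the affine-invariant distance and, for brevity, a log-Euclidean mean as a stand-in for the true iterative Riemannian mean; the data are synthetic:

```python
import numpy as np

def spd_dist(A, B):
    """Affine-invariant Riemannian distance between SPD matrices A and B."""
    # Eigenvalues of A^{-1}B are real and positive for SPD inputs
    w = np.linalg.eigvals(np.linalg.solve(A, B)).real
    return np.sqrt(np.sum(np.log(w) ** 2))

def log_euclid_mean(covs):
    """Log-Euclidean mean of SPD matrices (cheap stand-in for the Riemannian mean)."""
    logs = []
    for C in covs:
        w, V = np.linalg.eigh(C)
        logs.append(V @ np.diag(np.log(w)) @ V.T)
    w, V = np.linalg.eigh(np.mean(logs, axis=0))
    return V @ np.diag(np.exp(w)) @ V.T

def mdm_predict(cov, class_means):
    """Assign the class whose mean covariance is closest to the trial covariance."""
    return int(np.argmin([spd_dist(cov, M) for M in class_means]))

# Synthetic 4-channel trials: class 1 has a stronger first channel
rng = np.random.default_rng(0)
def trial_cov(scales):
    X = rng.standard_normal((200, 4)) * scales   # 200 samples x 4 channels
    return np.cov(X.T)

class0 = [trial_cov(np.array([1.0, 1, 1, 1])) for _ in range(10)]
class1 = [trial_cov(np.array([3.0, 1, 1, 1])) for _ in range(10)]
means = [log_euclid_mean(class0), log_euclid_mean(class1)]
pred = mdm_predict(trial_cov(np.array([3.0, 1, 1, 1])), means)
```

The rebias and supervised/unsupervised adaptation strategies in the paper would additionally re-center or update these class means as new sessions arrive.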
|
21
|
Perpetuini D, Günal M, Chiou N, Koyejo S, Mathewson K, Low KA, Fabiani M, Gratton G, Chiarelli AM. Fast Optical Signals for Real-Time Retinotopy and Brain Computer Interface. Bioengineering (Basel) 2023; 10:553. [PMID: 37237623 PMCID: PMC10215195 DOI: 10.3390/bioengineering10050553] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/23/2023] [Revised: 04/28/2023] [Accepted: 05/03/2023] [Indexed: 05/28/2023] Open
Abstract
A brain-computer interface (BCI) allows users to control external devices through brain activity. Portable neuroimaging techniques, such as near-infrared (NIR) imaging, are suitable for this goal. NIR imaging has been used to measure rapid changes in brain optical properties associated with neuronal activation, namely fast optical signals (FOS), with good spatiotemporal resolution. However, FOS have a low signal-to-noise ratio, limiting their BCI application. Here, FOS were acquired with a frequency-domain optical system from the visual cortex during visual stimulation consisting of a rotating checkerboard wedge flickering at 5 Hz. We used measures of photon count (direct current, DC, light intensity) and time of flight (phase) at two NIR wavelengths (690 nm and 830 nm), combined with a machine learning approach, for fast estimation of visual-field quadrant stimulation. The input features of a cross-validated support vector machine classifier were computed as the average modulus of the wavelet coherence between each channel and the average response among all channels in 512 ms time windows. Above-chance performance was obtained when differentiating visual stimulation quadrants (left vs. right or top vs. bottom), with the best classification accuracy of ~63% (information transfer rate of ~6 bits/min) when classifying the superior and inferior stimulation quadrants using DC at 830 nm. This method is the first attempt to provide generalizable retinotopy classification relying on FOS, paving the way for the use of FOS in real-time BCIs.
|
22
|
Ortega-Rodríguez J, Gómez-González JF, Pereda E. Selection of the Minimum Number of EEG Sensors to Guarantee Biometric Identification of Individuals. SENSORS (BASEL, SWITZERLAND) 2023; 23:4239. [PMID: 37177443 PMCID: PMC10181121 DOI: 10.3390/s23094239] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/24/2023] [Revised: 04/12/2023] [Accepted: 04/21/2023] [Indexed: 05/15/2023]
Abstract
Biometric identification uses person-recognition techniques based on the extraction of physical or biological properties that make it possible to characterize and differentiate one person from another, providing irreplaceable and critical information suitable for application in security systems. The extraction of information from the electrical biosignals of the human brain has received a great deal of attention in recent years. Analysis of EEG signals has been widely used over the last century in medicine and as a basis for brain-machine interfaces (BMIs); in addition, the application of EEG signals for biometric recognition has recently been demonstrated. In this context, EEG-based biometric systems are often considered for two different applications: identification (one-to-many classification) and authentication (one-to-one or true/false classification). In this article, we establish a methodology for selecting and reducing the minimum number of EEG sensors necessary to carry out effective biometric identification of individuals. Two methodologies were applied to reduce the number of electrodes, one based on principal component analysis and the other on the Wilcoxon signed-rank test. This allowed us to identify, according to the methodology used, the areas of the cerebral cortex that would allow selection of the minimum number of electrodes necessary for the identification of individuals. The methodologies were applied to two databases: one with self-collected recordings from 13 people using low-cost EEG equipment (EMOTIV EPOC+), and another publicly available database with recordings from 109 people provided by PhysioNet BCI.
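The PCA-based electrode-reduction idea can be sketched by ranking channels according to their loadings on the leading principal components and keeping only the top-ranked ones. This is an illustrative criterion on synthetic data, not the authors' exact procedure:

```python
import numpy as np

def rank_channels_pca(X, n_components=3):
    """Rank EEG channels by loading magnitude on the top principal components.
    X: (samples, channels). Returns channel indices, most informative first."""
    Xc = X - X.mean(axis=0)
    _, s, Vt = np.linalg.svd(Xc, full_matrices=False)
    # Weight each component's absolute loadings by its singular value, then sum
    loadings = (s[:n_components, None] * np.abs(Vt[:n_components])).sum(axis=0)
    return np.argsort(loadings)[::-1]

# Synthetic 8-channel recording: channels 2 and 5 carry strong structure
rng = np.random.default_rng(1)
n, ch = 1000, 8
X = 0.1 * rng.standard_normal((n, ch))
X[:, 2] += np.sin(np.linspace(0, 40 * np.pi, n))
X[:, 5] += 0.5 * np.sin(np.linspace(0, 40 * np.pi, n))
order = rank_channels_pca(X)
```

A biometric pipeline would then retrain the classifier on progressively fewer top-ranked channels until identification accuracy starts to drop, yielding the minimum viable sensor set.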
|
23
|
Zafar A, Hussain SJ, Ali MU, Lee SW. Metaheuristic Optimization-Based Feature Selection for Imagery and Arithmetic Tasks: An fNIRS Study. SENSORS (BASEL, SWITZERLAND) 2023; 23:3714. [PMID: 37050774 PMCID: PMC10098559 DOI: 10.3390/s23073714] [Citation(s) in RCA: 6] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/02/2023] [Revised: 03/23/2023] [Accepted: 03/30/2023] [Indexed: 06/01/2023]
Abstract
In recent decades, the brain-computer interface (BCI) has emerged as a leading area of research. Feature selection is vital to reduce dataset dimensionality, increase computational efficiency, and enhance BCI performance. Using activity-related features leads to a high classification rate among the desired tasks. This study presents a wrapper-based metaheuristic feature-selection framework for BCI applications using functional near-infrared spectroscopy (fNIRS). Here, temporal statistical features (i.e., the mean, slope, maximum, skewness, and kurtosis) were computed from all available channels to form a training vector. Seven metaheuristic optimization algorithms were tested for their classification performance using a k-nearest-neighbor-based cost function: particle swarm optimization, cuckoo search optimization, the firefly algorithm, the bat algorithm, flower pollination optimization, whale optimization, and grey wolf optimization (GWO). The presented approach was validated on an available online dataset of motor imagery (MI) and mental arithmetic (MA) tasks from 29 healthy subjects. The results showed that classification accuracy was significantly improved by utilizing the features selected by the metaheuristic optimization algorithms relative to the full set of features. All of the abovementioned metaheuristic algorithms improved the classification accuracy and reduced the feature-vector size. GWO yielded the highest average classification rates (p < 0.01) of 94.83 ± 5.5%, 92.57 ± 6.9%, and 85.66 ± 7.3% for the MA, MI, and four-class (left- and right-hand MI, MA, and baseline) tasks, respectively. The presented framework may be helpful in the training phase for selecting appropriate features for robust fNIRS-based BCI applications.
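A skeleton of grey wolf optimization, the best-performing metaheuristic in this study, is shown below on a toy continuous objective. In the wrapper setting the abstract describes, `cost` would instead threshold a wolf's position into a binary feature mask and return the k-NN classification error; all parameters here are illustrative:

```python
import numpy as np

def gwo_minimize(cost, dim, n_wolves=12, iters=100, lb=-1.0, ub=1.0, seed=0):
    """Grey wolf optimizer: wolves track the three best solutions (alpha, beta, delta)."""
    rng = np.random.default_rng(seed)
    pos = rng.uniform(lb, ub, (n_wolves, dim))
    for it in range(iters):
        fit = np.array([cost(p) for p in pos])
        alpha, beta, delta = pos[np.argsort(fit)[:3]]   # three leading wolves
        a = 2.0 - 2.0 * it / iters                      # exploration -> exploitation
        for i in range(n_wolves):
            new = np.zeros(dim)
            for leader in (alpha, beta, delta):
                r1, r2 = rng.random(dim), rng.random(dim)
                A, C = 2 * a * r1 - a, 2 * r2
                new += leader - A * np.abs(C * leader - pos[i])
            pos[i] = np.clip(new / 3.0, lb, ub)         # average pull toward leaders
    fit = np.array([cost(p) for p in pos])
    return pos[np.argmin(fit)], fit.min()

# Toy objective: the sphere function, minimized at the origin
best, best_cost = gwo_minimize(lambda p: float(np.sum(p ** 2)), dim=5)
```

Swapping the sphere function for a k-NN error wrapped around thresholded feature masks turns this directly into the wrapper-based selection scheme the study evaluates.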
|
24
|
Hashem HA, Abdulazeem Y, Labib LM, Elhosseini MA, Shehata M. An Integrated Machine Learning-Based Brain Computer Interface to Classify Diverse Limb Motor Tasks: Explainable Model. SENSORS (BASEL, SWITZERLAND) 2023; 23:3171. [PMID: 36991884 PMCID: PMC10053613 DOI: 10.3390/s23063171] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/10/2022] [Revised: 02/27/2023] [Accepted: 03/13/2023] [Indexed: 06/19/2023]
Abstract
Terminal neurological conditions affect millions of people worldwide and hinder them from performing their daily tasks and movements normally. The brain-computer interface (BCI) is the best hope for many individuals with motor deficiencies, as it can help patients interact with the outside world and handle their daily tasks without assistance. Machine learning-based BCI systems have therefore emerged as non-invasive techniques for reading out signals from the brain and interpreting them into commands that help such people perform diverse limb motor tasks. This paper proposes an innovative and improved machine learning-based BCI system that analyzes EEG signals obtained from motor imagery to distinguish among various limb motor tasks based on BCI Competition III dataset IVa. The proposed framework pipeline for EEG signal processing performs the following major steps. The first step uses a metaheuristic optimization technique, the whale optimization algorithm (WOA), to select the optimal features for discriminating between neural activity patterns. The pipeline then uses machine learning models such as linear discriminant analysis (LDA), k-nearest neighbors (k-NN), decision tree (DT), random forest (RF), and logistic regression (LR) to analyze the chosen features and enhance the precision of EEG signal analysis. The proposed BCI system, which merges the WOA as a feature-selection method with the optimized k-NN classification model, demonstrated an overall accuracy of 98.6%, outperforming other machine learning models and previous techniques on the BCI Competition III dataset IVa. Additionally, the EEG feature contributions in the ML classification model are reported using Explainable AI (XAI) tools, which provide insights into the individual contributions of the features to the predictions made by the model. By incorporating XAI techniques, the results of this study offer greater transparency and understanding of the relationship between the EEG features and the model's predictions. The proposed method shows potential for controlling diverse limb motor tasks to help people with limb impairments and support them while enhancing their quality of life.
|
25
|
Tao T, Gao Y, Jia Y, Chen R, Li P, Xu G. A Multi-Channel Ensemble Method for Error-Related Potential Classification Using 2D EEG Images. SENSORS (BASEL, SWITZERLAND) 2023; 23:2863. [PMID: 36905065 PMCID: PMC10007400 DOI: 10.3390/s23052863] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 01/09/2023] [Revised: 02/19/2023] [Accepted: 03/02/2023] [Indexed: 06/18/2023]
Abstract
An error-related potential (ErrP) occurs when people's expectations are not consistent with the actual outcome. Accurately detecting the ErrP when a human interacts with a BCI is the key to improving these BCI systems. In this paper, we propose a multi-channel method for error-related potential detection using a 2D convolutional neural network, in which multiple channel classifiers are integrated to make the final decision. Specifically, every 1D EEG signal from the anterior cingulate cortex (ACC) is transformed into a 2D waveform image and classified by a model named the attention-based convolutional neural network (AT-CNN). In addition, we propose a multi-channel ensemble approach to effectively integrate the decisions of the individual channel classifiers. This ensemble approach can learn the nonlinear relationship between each channel and the label, and obtains 5.27% higher accuracy than a majority-voting ensemble. We conducted a new experiment and validated the proposed method on the Monitoring Error-Related Potential dataset and our own dataset, obtaining an accuracy, sensitivity, and specificity of 86.46%, 72.46%, and 90.17%, respectively. The results show that the proposed AT-CNNs-2D can effectively improve the accuracy of ErrP classification and provide new ideas for the study of ErrP classification in brain-computer interfaces.
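The claim that a learned combination of per-channel decisions can beat majority voting is easy to illustrate with simulated channel-classifier scores. Below, a least-squares combiner stands in for the paper's nonlinear ensemble; the channel reliabilities and dataset sizes are invented for the demonstration:

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_channels = 400, 6
y = rng.integers(0, 2, n_trials)

# Simulated per-channel classifier scores: three informative channels, three noise
reliab = np.array([0.9, 0.8, 0.7, 0.5, 0.5, 0.5])
scores = np.where(rng.random((n_trials, n_channels)) < reliab,
                  y[:, None], 1 - y[:, None]).astype(float)
scores += 0.1 * rng.standard_normal(scores.shape)

# Baseline: unweighted majority vote over hard per-channel decisions
majority = (scores.round().clip(0, 1).mean(axis=1) > 0.5).astype(int)

# Learned combination: least-squares weights over channel scores (plus bias)
A = np.column_stack([scores, np.ones(n_trials)])
W = np.linalg.lstsq(A, y, rcond=None)[0]
learned = (A @ W > 0.5).astype(int)

acc_majority = np.mean(majority == y)
acc_learned = np.mean(learned == y)
```

The learned combiner wins because it downweights the unreliable channels, which the majority vote treats as equals; the paper's AT-CNN ensemble pushes the same idea further with a nonlinear mapping.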
|