1. Tseng HC, Tai KY, Ma YZ, Van LD, Ko LW, Jung TP. Accurate Mental Stress Detection Using Sequential Backward Selection and Adaptive Synthetic Methods. IEEE Trans Neural Syst Rehabil Eng 2024;32:3095-3103. [PMID: 39167520] [DOI: 10.1109/tnsre.2024.3447274]
Abstract
The daily experience of mental stress profoundly influences our health and work performance while concurrently triggering alterations in brain electrical activity. Electroencephalography (EEG) is a widely adopted method for assessing cognitive and affective states. This study delves into the EEG correlates of stress and the potential use of resting EEG in evaluating stress levels. Over 13 weeks, our longitudinal study focused on the real-life experiences of college students, collecting data from each of the 18 participants across multiple days in classroom settings. To tackle the complexity arising from the multitude of EEG features and the imbalance in data samples across stress levels, we use the sequential backward selection (SBS) method for feature selection and the adaptive synthetic (ADASYN) sampling algorithm for imbalanced data. Our findings reveal that delta and theta features account for approximately 50% of the features selected through the SBS process. In leave-one-out (LOO) cross-validation, the combination of band power and pair-wise coherence (COH) achieves a maximum balanced accuracy of 94.8% in stress-level detection on this daily stress dataset. Notably, the ADASYN and borderline synthetic minority over-sampling technique (borderline-SMOTE) methods enhance model accuracy compared with the traditional SMOTE approach. These results provide valuable insights into using EEG signals for assessing stress levels in real-life scenarios, shedding light on potential strategies for managing stress more effectively.
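The SBS-plus-ADASYN pipeline described above maps onto standard library calls. Below is a minimal, hypothetical sketch using scikit-learn and imbalanced-learn, with a random placeholder matrix standing in for the paper's band-power and coherence features; the classifier choice and dimensions are assumptions, not the authors' exact setup.

```python
# Hypothetical sketch of the ADASYN + sequential backward selection (SBS) steps.
import numpy as np
from imblearn.over_sampling import ADASYN
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.svm import SVC

X = np.random.randn(200, 60)           # placeholder for band-power/coherence features
y = np.r_[np.zeros(160), np.ones(40)]  # imbalanced stress labels

# ADASYN synthesizes extra minority-class samples, focusing on hard-to-learn regions.
X_bal, y_bal = ADASYN(random_state=0).fit_resample(X, y)

# SBS: start from all features and greedily drop the least useful ones.
sbs = SequentialFeatureSelector(SVC(), n_features_to_select=30, direction="backward", cv=5)
X_sel = sbs.fit_transform(X_bal, y_bal)
print(X_sel.shape)  # (n_balanced_samples, 30)
```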
2. Ahuja C, Sethia D. Harnessing Few-Shot Learning for EEG signal classification: a survey of state-of-the-art techniques and future directions. Front Hum Neurosci 2024;18:1421922. [PMID: 39050382] [PMCID: PMC11266297] [DOI: 10.3389/fnhum.2024.1421922]
Abstract
This paper presents a systematic literature review, providing a comprehensive taxonomy of Data Augmentation (DA), Transfer Learning (TL), and Self-Supervised Learning (SSL) techniques within the context of Few-Shot Learning (FSL) for EEG signal classification. EEG signals have shown significant potential in various paradigms, including Motor Imagery, Emotion Recognition, Visual Evoked Potentials, Steady-State Visually Evoked Potentials, Rapid Serial Visual Presentation, Event-Related Potentials, and Mental Workload. However, challenges such as limited labeled data, noise, and inter/intra-subject variability have impeded the effectiveness of traditional machine learning (ML) and deep learning (DL) models. This review methodically explores how FSL approaches, incorporating DA, TL, and SSL, can address these challenges and enhance classification performance in specific EEG paradigms. It also delves into the open research challenges related to these techniques in EEG signal classification. Specifically, the review examines the identification of DA strategies tailored to various EEG paradigms, the creation of TL architectures for efficient knowledge transfer, and the formulation of SSL methods for unsupervised representation learning from EEG data. Addressing these challenges is crucial for enhancing the efficacy and robustness of FSL-based EEG signal classification. By presenting a structured taxonomy of FSL techniques and discussing the associated research challenges, this systematic review offers valuable insights for future investigations in EEG signal classification. The findings aim to guide and inspire researchers, promoting advancements in applying FSL methodologies for improved EEG signal analysis and classification in real-world settings.
Affiliation(s)
- Chirag Ahuja: Department of Computer Science and Engineering, Delhi Technological University, New Delhi, India
- Divyashikha Sethia: Department of Software Engineering, Delhi Technological University, New Delhi, India
3. Zhang X, Xu K, Zhang L, Zhao R, Wei W, She Y. Optimal channel dynamic selection for constructing lightweight data EEG-based emotion recognition. Heliyon 2024;10:e30174. [PMID: 38694096] [PMCID: PMC11061731] [DOI: 10.1016/j.heliyon.2024.e30174]
Abstract
At present, most methods for improving the accuracy of emotion recognition based on electroencephalogram (EEG) do so by increasing the number of channels and feature types. Such big-data training of the classification model also increases code complexity and consumes a large amount of computing time. We propose an Ant Colony Optimization with Convolutional Neural Networks and Long Short-Term Memory (ACO-CNN-LSTM) method that attains dynamically optimal channels for lightweight data. First, the time-domain EEG signal is transformed to the frequency domain by the Fast Fourier Transform (FFT), and the differential entropy (DE) of three frequency bands (α, β, and γ) is extracted as the feature data. Then, based on the DE feature dataset, ACO is employed to plan a path through the electrode locations on the brain map; the classification accuracy of the CNN-LSTM serves as the objective function for path determination, and the electrodes on the optimal path are taken as the optimal channels. Next, the initial learning rate and batch size are matched to the data characteristics to obtain their best values. Finally, the SJTU Emotion EEG Dataset (SEED) is used for emotion recognition with the ACO-CNN-LSTM. The experimental results show that the average accuracy of three-class classification (positive, neutral, negative) reaches 96.59% on the lightweight data produced by the proposed ACO-CNN-LSTM, while the computing time is reduced: computational efficiency improves by 15.85% compared with the traditional CNN-LSTM method, and accuracy remains above 90% when the data volume is reduced to 50%. In summary, the proposed ACO-CNN-LSTM method achieves higher efficiency and accuracy.
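The differential entropy (DE) feature used here has a closed form for band-limited Gaussian signals, DE = 0.5·ln(2πeσ²). A small sketch of the band-filter-then-DE stage, assuming scipy and illustrative band edges and sampling rate rather than the paper's exact values:

```python
# Hypothetical sketch of band-wise differential entropy (DE) feature extraction.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 200  # assumed sampling rate (Hz)
bands = {"alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}  # illustrative edges

def de_features(eeg):  # eeg: (n_channels, n_samples)
    feats = []
    for lo, hi in bands.values():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        x = filtfilt(b, a, eeg, axis=1)
        # DE of a Gaussian signal has the closed form 0.5 * ln(2 * pi * e * variance).
        feats.append(0.5 * np.log(2 * np.pi * np.e * x.var(axis=1)))
    return np.concatenate(feats)  # one DE value per channel per band

print(de_features(np.random.randn(62, fs * 4)).shape)  # (62 * 3,)
```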
Affiliation(s)
- Xiaodan Zhang, Kemeng Xu, Lu Zhang, Rui Zhao, Wei Wei: School of Electronics and Information, Xi'an Polytechnic University, Xi'an, Shaanxi 710600, China
- Yichong She: School of Life Sciences, Xidian University, Xi'an, Shaanxi 710126, China
4. Faraji P, Khodabakhshi MB. CollectiveNet-AltSpec: A collective concurrent CNN architecture of alternate specifications for EEG media perception and emotion tracing aided by multi-domain feature-augmentation. Neural Netw 2023;167:502-516. [PMID: 37690212] [DOI: 10.1016/j.neunet.2023.08.031]
Abstract
Enhancing the computability of cerebral recordings and of interfaces with the human (or non-human) brain is an ongoing effort that is expected to accelerate in our era. One effective contribution toward these ends is improving the accuracy of attempts at discerning the intricate phenomena taking place within the human brain. Here, in two different experimental capacities, we attempt to distinguish the cerebral perceptions shaped and the affective states surfaced during observation of media samples with distinct audio-visual and emotional contents, employing EEG-recorded sessions from two reputable datasets, DEAP and SEED. We introduce AltSpec(E3), the inceptive form of the CollectiveNet family of intelligent computational architectures, which employs collective and concurrent multi-spec analysis to exploit complex patterns in complex data structures. This processing technique uses a full array of diversification protocols with multifarious parts, enabling surgical levels of optimization while integrating a holistic analysis of patterns. The data structures designed here contain multi-electrode neuroinformatic and neurocognitive features for studying emotion reactions and attentive patterns. These spatially and temporally featured 2D/3D constructs of domain-augmented data are then AI-processed, and the outputs are defragmented to form one definitive judgement. The media-perception tracing is arguably the first of its kind, at least as implemented on the mentioned datasets. Backed by this multi-directional approach, in subject-independent configurations for perception tracing on a 5-media-class basis, mean accuracies of 81.00% and 68.93% were obtained on DEAP and SEED, respectively. We also classified emotions with accuracies of 61.59% and 66.21% in cross-dataset validation, followed by 81.47% and 88.12% in cross-subject validation settings trained on DEAP and SEED, respectively.
Affiliation(s)
- Parham Faraji: Department of Biomedical Engineering, Hamedan University of Technology, Hamedan 6516913733, Iran
5. Xia Y, Liu Y. EEG-Based Emotion Recognition with Consideration of Individual Difference. Sensors (Basel) 2023;23:7749. [PMID: 37765808] [PMCID: PMC10535213] [DOI: 10.3390/s23187749]
Abstract
Electroencephalograms (EEGs) are often used for emotion recognition through trained EEG-to-emotion models. The training samples are EEG signals recorded while participants receive external induction, labeled with various emotions. Individual differences, such as in emotion intensity and response time, exist under the same external emotional induction, and these differences can lower the accuracy of emotion classification models in practical applications. The EEG-based emotion recognition model proposed in this paper is able to account for these individual differences. The proposed model comprises an emotion classification module and an individual difference module (IDM). The emotion classification module captures the spatial and temporal features of the EEG data, while the IDM introduces personalized adjustments to specific emotional features by treating participant-specific variations as a form of interference. This approach aims to enhance the classification performance of EEG-based emotion recognition for diverse participants. The results of our comparative experiments indicate that the proposed method obtains a maximum accuracy of 96.43% for binary classification on DEAP data. Furthermore, it performs better in scenarios with significant individual differences, where it reaches a maximum accuracy of 98.92%.
Affiliation(s)
- Yuxiao Xia: College of Automation, Qingdao University, Qingdao 266071, China
- Yinhua Liu: Institute for Future, Qingdao University, Qingdao 266071, China
6. Quan J, Li Y, Wang L, He R, Yang S, Guo L. EEG-based cross-subject emotion recognition using multi-source domain transfer learning. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2023.104741]
7. Common Mental Disorders in Smart City Settings and Use of Multimodal Medical Sensor Fusion to Detect Them. Diagnostics (Basel) 2023;13:1082. [PMID: 36980390] [PMCID: PMC10047202] [DOI: 10.3390/diagnostics13061082]
Abstract
Cities have undergone numerous permanent transformations at times of severe disruption. The Lisbon earthquake of 1755, for example, sparked the development of seismic construction rules. In 1848, when cholera spread through London, the first health law in the United Kingdom was passed. The Chicago fire of 1871 brought stricter building rules, which in turn led to taller skyscrapers that were less likely to catch fire. Along similar lines, the COVID-19 pandemic may have a lasting effect, having pushed the global shift towards greener, more digital, and more inclusive cities. The pandemic highlighted the significance of smart/remote healthcare. Specifically, the elderly delayed seeking medical help for fear of contracting the infection. As a result, remote medical services were seen as a key way to keep healthcare services running smoothly. When it comes to both human and environmental health, cities play a critical role. By concentrating people and resources in a single location, the urban environment generates both health risks and opportunities to improve health. In this manuscript, we have identified the most common mental disorders and their prevalence rates in cities. We have also identified the factors that contribute to the development of mental health issues in urban spaces. Through careful analysis, we have found that multimodal feature fusion is the best method for measuring and analysing multiple signal types in real time. However, when utilizing multimodal signals, the most important issue is how we might combine them; this is an area of burgeoning research interest. To this end, we have highlighted ways to combine multimodal features for detecting and predicting mental issues such as anxiety, mood state recognition, suicidal tendencies, and substance abuse.
8. Emotional State Classification from MUSIC-Based Features of Multichannel EEG Signals. Bioengineering (Basel) 2023;10:99. [PMID: 36671671] [PMCID: PMC9854769] [DOI: 10.3390/bioengineering10010099]
Abstract
Electroencephalogram (EEG)-based emotion recognition is a computationally challenging issue in the field of medical data science that has interesting applications in cognitive state disclosure. Generally, EEG signals are classified from frequency-based features that are often extracted using non-parametric models such as Welch's power spectral density (PSD). These non-parametric methods are computationally expensive, with high complexity and extended run times. The main purpose of this work is to apply the multiple signal classification (MUSIC) model, a parametric frequency-spectrum-estimation technique, to extract features from multichannel EEG signals for emotional state classification on the SEED dataset. The main challenge of using MUSIC for EEG feature extraction is tuning its parameters to obtain features that discriminate between classes, which is a significant contribution of this work. Another contribution is to expose, for the first time, some flaws of this dataset that contributed to the high classification accuracies reported in previous research. This work used MUSIC features to classify three emotional states and achieved 97% accuracy on average using an artificial neural network. The proposed MUSIC model reduces run time by 95-96% compared with the conventional non-parametric technique (Welch's PSD) for feature extraction.
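For readers unfamiliar with MUSIC as a spectrum estimator, its pseudospectrum is the reciprocal of the steering vector's energy in the noise subspace of the autocorrelation matrix. A hypothetical numpy sketch under simplifying assumptions; the embedding dimension m, source count p, and frequency grid are illustrative, not the tuned values the paper refers to:

```python
# Hypothetical numpy sketch of a MUSIC pseudospectrum for one EEG channel.
import numpy as np

def music_spectrum(x, fs, m=32, p=4, n_grid=256):
    X = np.lib.stride_tricks.sliding_window_view(x, m)  # m-dimensional embedding
    R = (X.T @ X) / X.shape[0]                          # autocorrelation matrix
    _, V = np.linalg.eigh(R)                            # eigenvalues in ascending order
    En = V[:, : m - p]                                  # noise subspace (smallest m - p)
    freqs = np.linspace(0, fs / 2, n_grid)
    spec = np.empty(n_grid)
    for i, f in enumerate(freqs):
        a = np.exp(-2j * np.pi * f / fs * np.arange(m))  # steering vector at frequency f
        v = En.T @ a
        spec[i] = 1.0 / np.real(np.vdot(v, v))           # 1 / ||En^H a||^2
    return freqs, spec

t = np.arange(800) / 200
f, s = music_spectrum(np.sin(2 * np.pi * 10 * t) + 0.1 * np.random.randn(800), fs=200)
print(f[np.argmax(s)])  # peak should sit near 10 Hz
```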
9. Multimodal EEG Emotion Recognition Based on the Attention Recurrent Graph Convolutional Network. Information 2022;13:550. [DOI: 10.3390/info13110550]
Abstract
EEG-based emotion recognition has become an important part of human–computer interaction. To address the problem that single-modal features are not complete enough, in this paper we propose a multimodal emotion recognition method based on an attention recurrent graph convolutional neural network, denoted Mul-AT-RGCN. The method explores the relationship between multiple modal feature channels of EEG and peripheral physiological signals, converts one-dimensional sequence features into two-dimensional map features for modeling, and then extracts spatiotemporal and frequency-space features from the obtained multimodal features. These two types of features are input into a recurrent graph convolutional network with a convolutional block attention module for deep semantic feature extraction and emotion classification. To reduce differences between subjects, a domain adaptation module is also introduced in the cross-subject experimental verification. The proposed method performs feature learning in the three dimensions of time, space, and frequency by mining the complementary relationships of different modal data, so that the learned deep emotion-related features are more discriminative. The proposed method was tested on the multimodal dataset DEAP, and the average within-subject classification accuracies for valence and arousal reached 93.19% and 91.82%, respectively, improvements of 5.1% and 4.69% over the EEG-only modality, which is also superior to most current methods. The cross-subject experiment also obtained better classification accuracy, verifying the effectiveness of the proposed method in multimodal EEG emotion recognition.
10. Zhang J, Zhang X, Chen G, Huang L, Sun Y. EEG emotion recognition based on cross-frequency Granger causality feature extraction and fusion in the left and right hemispheres. Front Neurosci 2022;16:974673. [PMID: 36161187] [PMCID: PMC9491730] [DOI: 10.3389/fnins.2022.974673]
Abstract
EEG emotion recognition based on Granger causality (GC) brain networks mainly focuses on EEG signals within the same frequency band; however, causal relationships also exist between EEG signals in different frequency bands. Considering the functional asymmetry of the left and right hemispheres in emotional responses, this paper proposes an EEG emotion recognition scheme based on cross-frequency GC feature extraction and fusion in the two hemispheres. First, we calculate the GC relationships of EEG signals by frequency band and hemisphere, focusing on the causality of cross-frequency EEG signals in the left and right hemispheres. Then, to remove redundant connections from the GC brain network, an adaptive two-stage decorrelation feature extraction scheme is proposed that preserves the best emotion recognition performance. Finally, a multi-GC feature fusion scheme is designed to balance the recognition accuracy and the feature count of each GC feature, comprehensively weighing recognition accuracy against computational complexity. Experimental results on the DEAP emotion dataset show that the proposed scheme achieves an average accuracy of 84.91% for four-class classification, improving classification accuracy by up to 8.43% over traditional same-frequency-band GC features.
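Pairwise Granger causality of the kind described can be prototyped with statsmodels by pairing one channel's band-limited signal with a different band of another channel. A hedged sketch; the band choices, filter order, and lag order are placeholders rather than the authors' settings:

```python
# Hypothetical sketch of a cross-frequency Granger causality test between two channels.
import numpy as np
from scipy.signal import butter, filtfilt
from statsmodels.tsa.stattools import grangercausalitytests

fs = 128

def bandpass(x, lo, hi):
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    return filtfilt(b, a, x)

left = np.random.randn(fs * 60)    # placeholder left-hemisphere channel
right = np.random.randn(fs * 60)   # placeholder right-hemisphere channel

theta_left = bandpass(left, 4, 8)        # theta band of the left channel
gamma_right = bandpass(right, 30, 45)    # gamma band of the right channel

# Column order matters: the test asks whether the 2nd column helps predict the 1st,
# i.e., whether right-hemisphere gamma Granger-causes left-hemisphere theta.
res = grangercausalitytests(np.column_stack([theta_left, gamma_right]), maxlag=10, verbose=False)
print(res[10][0]["ssr_ftest"])  # (F statistic, p-value, df_denom, df_num)
```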
11. Li J, Wu X, Zhang Y, Yang H, Wu X. DRS-Net: A spatial–temporal affective computing model based on multichannel EEG data. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103660]
12. Yang H, Huang S, Guo S, Sun G. Multi-Classifier Fusion Based on MI-SFFS for Cross-Subject Emotion Recognition. Entropy 2022;24:705. [PMID: 35626587] [PMCID: PMC9141183] [DOI: 10.3390/e24050705]
Abstract
With the widespread use of emotion recognition, cross-subject emotion recognition based on EEG signals has become a hot topic in affective computing. Electroencephalography (EEG) can be used to detect the brain's electrical activity associated with different emotions. The aim of this research is to improve accuracy by enhancing the generalization of features. A multi-classifier fusion method based on mutual information with sequential forward floating selection (MI-SFFS) is proposed. The dataset used in this paper is DEAP, a multimodal open dataset containing 32 EEG channels and multiple other physiological signals. First, high-dimensional features are extracted from 15 EEG channels of DEAP after slicing the data with a 10 s time window. Second, MI and SFFS are integrated as a novel feature-selection method. Then, a support vector machine (SVM), k-nearest neighbors (KNN), and a random forest (RF) are employed to classify positive and negative emotions, and the classifiers' output probabilities are used as weighted features for further classification. To evaluate model performance, leave-one-out cross-validation is adopted. Finally, cross-subject classification accuracies of 0.7089, 0.7106, and 0.7361 are achieved by the SVM, KNN, and RF classifiers, respectively. The results demonstrate the feasibility of the model, which splices the output probabilities of different classifiers into a portion of the weighted features.
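The fusion stage, splicing each base classifier's output probabilities into meta-features for a final decision, is easy to sketch with scikit-learn. The MI ranking is shown, the SFFS refinement is omitted, and all names and sizes here are illustrative assumptions:

```python
# Hypothetical sketch of probability-level fusion of SVM, KNN, and RF outputs.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import RandomForestClassifier
from sklearn.feature_selection import mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.neighbors import KNeighborsClassifier
from sklearn.svm import SVC

X, y = make_classification(n_samples=300, n_features=40, random_state=0)

# Mutual-information ranking (the SFFS refinement from the paper is not reproduced).
top = np.argsort(mutual_info_classif(X, y, random_state=0))[-20:]
X = X[:, top]

bases = [SVC(probability=True), KNeighborsClassifier(), RandomForestClassifier(random_state=0)]
# In practice the probabilities should come from held-out folds to avoid leakage.
probas = [clf.fit(X, y).predict_proba(X) for clf in bases]

meta = np.hstack(probas)            # splice output probabilities as weighted meta-features
fused = LogisticRegression().fit(meta, y)
print(fused.score(meta, y))
```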
Affiliation(s)
- Haihui Yang, Shiguo Huang, Shengwei Guo, Guobing Sun: College of Electronic Engineering, Heilongjiang University, Harbin 150080, China; Key Laboratory of Information Fusion Estimation and Detection, Harbin 150080, China
- Correspondence: Tel.: +86-18946119665
13. Generator-based Domain Adaptation Method with Knowledge Free for Cross-subject EEG Emotion Recognition. Cognit Comput 2022. [DOI: 10.1007/s12559-022-10016-4]
14. Pusarla AN, Singh BA, Tripathi CS. Learning DenseNet features from EEG based spectrograms for subject independent emotion recognition. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.103485]
15. Zhu M, Wang Q, Luo J. Emotion Recognition Based on Dynamic Energy Features Using a Bi-LSTM Network. Front Comput Neurosci 2022;15:741086. [PMID: 35264939] [PMCID: PMC8900638] [DOI: 10.3389/fncom.2021.741086]
Abstract
Among deep-learning-based electroencephalogram (EEG) emotion recognition methods, most have difficulty training a high-quality model due to the low resolution and small sample size of EEG images. To solve this problem, this study proposes a deep network model based on dynamic energy features. In this method, first, to reduce the noise superposition caused by feature analysis and extraction, the concept of an energy sequence is proposed. Second, to obtain a feature set reflecting the temporal persistence and multicomponent complexity of EEG signals, a construction method for the dynamic energy feature set is given. Finally, to make the network model suitable for small datasets, we used fully connected layers and bidirectional long short-term memory (Bi-LSTM) networks. To verify the effectiveness of the proposed method, we used leave-one-subject-out (LOSO) and 10-fold cross-validation (CV) strategies in experiments on the SEED and DEAP datasets. The experimental results show that the accuracy of the proposed method reaches 89.42% (SEED) and 77.34% (DEAP).
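A compact Keras rendering of a fully connected plus Bi-LSTM classifier of the kind described above; the layer sizes and input shape are placeholder assumptions, not the paper's architecture:

```python
# Hypothetical Keras sketch of a dense + bidirectional LSTM emotion classifier.
import tensorflow as tf

n_steps, n_feats, n_classes = 62, 10, 3  # placeholder dynamic-energy feature shape

model = tf.keras.Sequential([
    tf.keras.layers.Input(shape=(n_steps, n_feats)),
    tf.keras.layers.Dense(64, activation="relu"),             # per-step fully connected layer
    tf.keras.layers.Bidirectional(tf.keras.layers.LSTM(32)),  # forward + backward pass over time
    tf.keras.layers.Dropout(0.5),
    tf.keras.layers.Dense(n_classes, activation="softmax"),
])
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```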
Affiliation(s)
- Meili Zhu: Modern Animation Technology Engineering Research Center of Jilin Higher Learning Institutions, Jilin Animation Institute, Changchun, China
16. Apicella A, Arpaia P, Mastrati G, Moccaldi N. EEG-based detection of emotional valence towards a reproducible measurement of emotions. Sci Rep 2021;11:21615. [PMID: 34732756] [PMCID: PMC8566577] [DOI: 10.1038/s41598-021-00812-7]
Abstract
A methodological contribution toward a reproducible measurement of emotions with an EEG-based system is proposed, with emotional valence detection as the use case. Valence detection occurs along the interval scale theorized by the circumplex model of emotions; the binary choice, positive versus negative valence, represents a first step towards the adoption of a metric scale with finer resolution. EEG signals were acquired through an 8-channel dry-electrode cap. An implicit, more-controlled EEG paradigm was employed to elicit emotional valence through the passive viewing of standardized visual stimuli (the OASIS dataset) in 25 volunteers without depressive disorders. Results from the Self-Assessment Manikin questionnaire confirmed the compatibility of the experimental sample with that of OASIS. Two strategies for feature extraction were compared: (i) one based on a priori knowledge (hemispheric asymmetry theories), and (ii) an automated one (a pipeline of a custom 12-band filter bank and common spatial pattern). An average within-subject accuracy of 96.1% was obtained by a shallow artificial neural network, while k-nearest neighbors achieved a cross-subject accuracy of 80.2%.
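The automated pipeline, a custom 12-band filter bank followed by common spatial pattern (CSP), can be approximated with MNE's CSP decoder. A hedged sketch; the band grid, filter order, and component count are assumptions, not the authors' design:

```python
# Hypothetical sketch of a filter-bank CSP feature extractor for epoched EEG.
import numpy as np
from mne.decoding import CSP
from scipy.signal import butter, filtfilt

fs = 250
epochs = np.random.randn(40, 8, fs * 2)  # (n_epochs, n_channels, n_times), 8 dry electrodes
labels = np.random.randint(0, 2, 40)     # positive vs negative valence

bands = [(4 + 2 * k, 8 + 2 * k) for k in range(12)]  # illustrative 12-band grid
features = []
for lo, hi in bands:
    b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
    filt = filtfilt(b, a, epochs, axis=2)
    csp = CSP(n_components=4, log=True)              # log-variance of spatial components
    features.append(csp.fit_transform(filt, labels))

X = np.hstack(features)                              # (40, 12 bands * 4 components)
print(X.shape)
```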
Affiliation(s)
- Andrea Apicella, Pasquale Arpaia, Giovanna Mastrati, Nicola Moccaldi: Laboratory of Augmented Reality for Health Monitoring (ARHeMLab), Department of Electrical Engineering and Information Technology, University of Naples Federico II, Naples, Italy
- Pasquale Arpaia: Interdepartmental Center for Research on Management and Innovation in Healthcare (CIRMIS), University of Naples Federico II, Naples, Italy
17. Hatipoglu Yilmaz B, Kose C. A novel signal to image transformation and feature level fusion for multimodal emotion recognition. Biomed Tech (Berl) 2021;66:353-362. [PMID: 33823091] [DOI: 10.1515/bmt-2020-0229]
Abstract
Emotion is one of the most complex and difficult expressions to predict. Nowadays, many recognition systems using classification methods have focused on different types of emotion recognition problems. In this paper, we propose a multimodal fusion method between electroencephalography (EEG) and electrooculography (EOG) signals for emotion recognition. Before the feature extraction stage, we applied angle-amplitude transformations to the EEG and EOG signals. These transformations take arbitrary time-domain signals and convert them into two-dimensional images called angle-amplitude graphs (AAGs). We then extracted image-based features using the scale-invariant feature transform, fused the features originating from EEG and EOG, and classified them with support vector machines. To verify the validity of the proposed methods, we performed experiments on the multimodal DEAP dataset, a benchmark widely used for emotion analysis with physiological signals, applying the proposed emotion recognition procedure to the arousal and valence dimensions. After fusion, we achieved 91.53% accuracy for the arousal space and 90.31% for the valence space. The experimental results show that combining AAG image features of EEG and EOG signals in this baseline angle-amplitude transformation approach enhances classification performance on the DEAP dataset.
Affiliation(s)
- Cemal Kose: Department of Computer Engineering, Karadeniz Technical University, Trabzon, Turkey
18. Gao Q, Yang Y, Kang Q, Tian Z, Song Y. EEG-based Emotion Recognition with Feature Fusion Networks. Int J Mach Learn Cybern 2021. [DOI: 10.1007/s13042-021-01414-5]
19. Muhammad G, Shamim Hossain M. COVID-19 and Non-COVID-19 Classification using Multi-layers Fusion From Lung Ultrasound Images. Inf Fusion 2021;72:80-88. [PMID: 33649704] [PMCID: PMC7904462] [DOI: 10.1016/j.inffus.2021.02.013]
Abstract
COVID-19 and related viral outbreaks must be detected and managed without hesitation, since the virus spreads very rapidly. Often with insufficient human and electronic resources, symptomatic patients need to be distinguished from stable patients using vital signs, radiographic images, or ultrasound images. Vital signs often do not offer a reliable result, and radiographic images have a variety of other problems. Lung ultrasound (LUS) images can provide good screening without many complications. This paper proposes a convolutional neural network (CNN) model that has fewer learnable parameters yet achieves strong accuracy. The model has five main convolutional blocks. A multi-layer fusion of the features of each block is proposed to improve the efficiency of COVID-19 screening with the proposed model. Experiments were conducted using freely accessible LUS image and video datasets. The proposed fusion method achieves 92.5% precision, 91.8% accuracy, and 93.2% recall on this data collection; these metrics are considerably higher than those of state-of-the-art CNN models.
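The multi-layer fusion idea, tapping the features of every convolutional block and fusing them before the decision layer, can be sketched with the Keras functional API; the block widths and input size below are placeholders rather than the paper's exact design:

```python
# Hypothetical Keras sketch of multi-layer (multi-block) feature fusion in a CNN.
import tensorflow as tf
from tensorflow.keras import layers

inp = tf.keras.Input(shape=(224, 224, 3))  # lung ultrasound frame, assumed size
x, pooled = inp, []
for filters in (16, 32, 64, 128, 256):     # five convolutional blocks
    x = layers.Conv2D(filters, 3, padding="same", activation="relu")(x)
    x = layers.MaxPooling2D()(x)
    pooled.append(layers.GlobalAveragePooling2D()(x))  # tap each block's features

fused = layers.Concatenate()(pooled)                 # fuse features from all five blocks
out = layers.Dense(2, activation="softmax")(fused)   # COVID-19 vs non-COVID-19

model = tf.keras.Model(inp, out)
model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
model.summary()
```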
Affiliation(s)
- Ghulam Muhammad: Chair of Pervasive and Mobile Computing, and Department of Computer Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
- M Shamim Hossain: Chair of Pervasive and Mobile Computing, and Department of Software Engineering, College of Computer and Information Sciences, King Saud University, Riyadh 11543, Saudi Arabia
20. Ni T, Ni Y, Xue J, Wang S. A Domain Adaptation Sparse Representation Classifier for Cross-Domain Electroencephalogram-Based Emotion Classification. Front Psychol 2021;12:721266. [PMID: 34393958] [PMCID: PMC8358659] [DOI: 10.3389/fpsyg.2021.721266]
Abstract
The brain-computer interface (BCI) interprets the physiological information of the human brain during conscious activity, building a direct information transmission channel between the brain and the outside world. As the most common non-invasive BCI modality, electroencephalography (EEG) plays an important role in BCI-based emotion recognition; however, due to the individual variability and non-stationarity of EEG signals, constructing EEG-based emotion classifiers across different subjects, sessions, and devices is an important research direction. Domain adaptation utilizes data or knowledge from more than one domain and focuses on transferring knowledge from the source domain (SD) to the target domain (TD), where the EEG data may be collected from different subjects, sessions, or devices. In this study, a new domain adaptation sparse representation classifier (DASRC) is proposed to address cross-domain EEG-based emotion classification. To reduce the differences in domain distribution, a locality-preserving criterion is exploited to project the samples from SD and TD into a shared subspace. A common domain-invariant dictionary is learned in the projected subspace so that an inherent connection can be built between SD and TD. Both principal component analysis (PCA) and Fisher criteria are exploited to promote the recognition ability of the learned dictionary, and an optimization method is proposed that alternately updates the subspace and the dictionary. Comparative experiments demonstrate the feasibility and competitive performance of the proposed approach on cross-subject and cross-dataset EEG-based emotion classification problems.
Affiliation(s)
- Tongguang Ni: School of Computer Science and Artificial Intelligence, Changzhou University, Changzhou, China
- Yuyao Ni: School of Electrical Engineering, Xi'an Jiaotong University, Xi'an, China
- Jing Xue: Department of Nephrology, Affiliated Wuxi People's Hospital of Nanjing Medical University, Wuxi, China
- Suhong Wang: Department of Clinical Psychology, The Third Affiliated Hospital of Soochow University, Changzhou, China
21. Ahmad IS, Zhang S, Saminu S, Wang L, Isselmou AEK, Cai Z, Javaid I, Kamhi S, Kulsum U. Deep Learning Based on CNN for Emotion Recognition Using EEG Signal. WSEAS Trans Signal Process 2021;17:28-40. [DOI: 10.37394/232014.2021.17.4]
Abstract
Emotion recognition based on brain-computer interfaces (BCI) has attracted significant research attention despite its difficulty. Emotion plays a vital role in human cognition and helps in decision making. Many researchers use electroencephalogram (EEG) signals to study emotion because they are easy and convenient to acquire. Deep learning has been employed for emotion recognition systems that classify emotion from single or multiple modalities, with visual or music stimuli shown on a screen. In this article, a convolutional neural network (CNN) model is introduced to simultaneously learn features and recognize positive, neutral, and negative emotional states from pure EEG signals in a single model, based on the SJTU Emotion EEG Dataset (SEED), with ResNet50 and the Adam optimizer. The dataset is shuffled, divided into training and testing sets, and then fed to the CNN model. Negative emotion yields the highest accuracy at 94.86%, followed by neutral emotion at 94.29% and positive emotion at 93.25%, with an average accuracy of 94.13%. The results show the excellent classification ability of the model and its potential to improve emotion recognition.
Affiliation(s)
- Isah Salim Ahmad, Shuai Zhang, Sani Saminu, Lingyue Wang, Abd El Kader Isselmou, Ziliang Cai, Imran Javaid, Souha Kamhi, Ummay Kulsum: State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin 300130, P.R. China
22. Optimizing Residual Networks and VGG for Classification of EEG Signals: Identifying Ideal Channels for Emotion Recognition. J Healthc Eng 2021;2021:5599615. [PMID: 33859808] [PMCID: PMC8024101] [DOI: 10.1155/2021/5599615]
Abstract
Emotion is a crucial aspect of human health, and emotion recognition systems serve important roles in the development of neurofeedback applications. Most of the emotion recognition methods proposed in previous research take predefined EEG features as input to the classification algorithms. This paper investigates the less-studied approach of using plain EEG signals as the classifier input, with residual networks (ResNet) as the classifier of interest. ResNet, having excelled at automated hierarchical feature extraction in raw data domains with vast numbers of samples (e.g., image processing), is potentially promising for EEG as the number of publicly available EEG databases grows. The architecture of the original ResNet, designed for image processing, is restructured for optimal performance on EEG signals, and the arrangement of convolutional kernel dimensions is demonstrated to largely affect the model's performance on EEG signal processing. The study is conducted on the Shanghai Jiao Tong University Emotion EEG Dataset (SEED), with our proposed ResNet18 architecture achieving 93.42% accuracy on the 3-class emotion classification, compared with the original ResNet18 at 87.06%, while also reducing the model parameters by 52.22%. We have also compared the importance of different subsets of EEG channels, from a total of 62 channels, for emotion recognition. The channels placed near the anterior poles of the temporal lobes appeared to be the most emotionally relevant, which agrees with the location of emotion-processing brain structures such as the insular cortex and amygdala.
23. A Comparative Study of Window Size and Channel Arrangement on EEG-Emotion Recognition Using Deep CNN. Sensors (Basel) 2021;21:1678. [PMID: 33804366] [PMCID: PMC7957771] [DOI: 10.3390/s21051678]
Abstract
Emotion recognition based on electroencephalograms has become an active research area. Yet identifying emotions from brainwaves alone is still very challenging, especially in the subject-independent task. Numerous studies have proposed recognition methods, including machine learning techniques such as the convolutional neural network (CNN). Since the CNN has shown potential for generalization to unseen subjects, manipulating CNN hyperparameters like the window size and electrode order might be beneficial. To our knowledge, this is the first work to extensively observe the effect of parameter selection on the CNN. The temporal information in distinct window sizes was found to significantly affect recognition performance, and the CNN was more responsive to changing window sizes than the support vector machine. Arousal classification performed best with a window size of ten seconds, obtaining 56.85% accuracy and a Matthews correlation coefficient (MCC) of 0.1369; valence recognition performed best with a window length of eight seconds, at 73.34% accuracy and an MCC of 0.4669. Spatial information from varying the electrode order had a small effect on classification. Overall, valence results were much better than arousal results, perhaps influenced by features related to brain-activity asymmetry between the left and right hemispheres.
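Window-size comparisons such as this one hinge on how continuous recordings are segmented. A small numpy sketch of non-overlapping windowing, assuming DEAP-style preprocessed trials at 128 Hz (shapes here are illustrative):

```python
# Hypothetical sketch of segmenting continuous EEG into fixed-length windows.
import numpy as np

fs = 128                              # DEAP's preprocessed sampling rate
trial = np.random.randn(32, 60 * fs)  # (n_channels, n_samples), one 60 s trial

def segment(x, win_sec):
    win = int(win_sec * fs)
    n = x.shape[1] // win             # number of whole windows; remainder is dropped
    return x[:, : n * win].reshape(x.shape[0], n, win).transpose(1, 0, 2)

for w in (8, 10):                     # the paper's best valence/arousal window lengths
    print(w, segment(trial, w).shape) # (n_windows, n_channels, win_samples)
```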
24. Kumar S, Sharma R, Sharma A. OPTICAL+: a frequency-based deep learning scheme for recognizing brain wave signals. PeerJ Comput Sci 2021;7:e375. [PMID: 33817023] [PMCID: PMC7959638] [DOI: 10.7717/peerj-cs.375]
Abstract
A human-computer interaction (HCI) system can be used to detect different categories of brain wave signals, which can be beneficial for neurorehabilitation, seizure detection, and sleep stage classification. Research on developing HCI systems using brain wave signals has progressed greatly over the years; however, real-time implementation, computational complexity, and accuracy are still concerns. In this work, we address the problem of selecting the appropriate filtering frequency band while also achieving good system performance by proposing a frequency-based approach using a long short-term memory (LSTM) network for recognizing different brain wave signals. Adaptive filtering using a genetic algorithm is incorporated into a hybrid system utilizing common spatial pattern and the LSTM network. The proposed method (OPTICAL+) achieved an overall average classification error rate of 30.41% and a kappa coefficient of 0.398, outperforming state-of-the-art methods. The proposed OPTICAL+ predictor can be used to develop improved HCI systems that will aid in neurorehabilitation and may also be beneficial for sleep stage classification and seizure detection.
Affiliation(s)
- Shiu Kumar, Ronesh Sharma: School of Electrical and Electronic Engineering, Fiji National University, Suva, Fiji
- Alok Sharma: STEMP, University of the South Pacific, Suva, Fiji; Institute for Integrated and Intelligent Systems, Griffith University, Brisbane, Australia; Laboratory for Medical Science Mathematics, RIKEN Center for Integrative Medical Sciences, Yokohama, Kanagawa, Japan
25. Fdez J, Guttenberg N, Witkowski O, Pasquali A. Cross-Subject EEG-Based Emotion Recognition Through Neural Networks With Stratified Normalization. Front Neurosci 2021;15:626277. [PMID: 33613187] [PMCID: PMC7888301] [DOI: 10.3389/fnins.2021.626277]
Abstract
Due to a large number of potential applications, a good deal of effort has recently been made toward creating machine learning models that can recognize evoked emotions from one's physiological recordings. In particular, researchers are investigating the use of EEG as a low-cost, non-invasive method. However, the poor homogeneity of EEG activity across participants hinders the implementation of such a system, requiring a time-consuming calibration stage. In this study, we introduce a new participant-based feature normalization method, named stratified normalization, for training deep neural networks in the task of cross-subject emotion classification from EEG signals. The new method subtracts inter-participant variability while maintaining the emotion information in the data. We carried out our analysis on the SEED dataset, which contains 62-channel EEG recordings collected from 15 participants watching film clips. Results demonstrate that networks trained with stratified normalization significantly outperformed standard training with batch normalization. In addition, the highest model performance was achieved when extracting EEG features with the multitaper method, reaching a classification accuracy of 91.6% for two emotion categories (positive and negative) and 79.6% for three (adding neutral). This analysis provides great insight into the potential benefits of stratified normalization when developing any cross-subject model based on EEG.
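The core of stratified normalization, standardizing features within each participant rather than across the pooled data, takes only a few lines of numpy; the feature shape and grouping below are assumptions for illustration:

```python
# Hypothetical sketch of participant-wise (stratified) feature normalization.
import numpy as np

X = np.random.randn(1500, 310)            # features, e.g., 62 channels x 5 bands
subject = np.random.randint(0, 15, 1500)  # participant ID for each sample

def stratified_normalize(X, groups):
    Xn = np.empty_like(X)
    for g in np.unique(groups):
        m = groups == g
        mu, sd = X[m].mean(axis=0), X[m].std(axis=0) + 1e-8
        Xn[m] = (X[m] - mu) / sd           # z-score within this participant only
    return Xn

Xn = stratified_normalize(X, subject)
print(np.allclose(Xn[subject == 0].mean(axis=0), 0, atol=1e-6))  # True
```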
Affiliation(s)
- Javier Fdez: Cross Labs, Cross Compass Ltd., Tokyo, Japan
26. Harris I, Küssner MB. Come on Baby, Light My Fire: Sparking Further Research in Socio-Affective Mechanisms of Music Using Computational Advancements. Front Psychol 2020;11:557162. [PMID: 33363492] [PMCID: PMC7753094] [DOI: 10.3389/fpsyg.2020.557162]
Affiliation(s)
- Ilana Harris: Centre for Music and Science, Faculty of Music, University of Cambridge, Cambridge, United Kingdom
- Mats B Küssner: Department of Musicology and Media Studies, Humboldt-Universität zu Berlin, Berlin, Germany
27. Jin L, Kim EY. Interpretable Cross-Subject EEG-Based Emotion Recognition Using Channel-Wise Features. Sensors (Basel) 2020;20:6719. [PMID: 33255374] [PMCID: PMC7727848] [DOI: 10.3390/s20236719]
Abstract
Electroencephalogram (EEG)-based emotion recognition is receiving significant attention in research on brain-computer interfaces (BCI) and health care. To recognize cross-subject emotion from EEG data accurately, a technique capable of finding an effective representation robust to the subject-specific variability of EEG data collection is necessary. In this paper, a new method to predict cross-subject emotion using time-series analysis and spatial correlation is proposed. To represent the spatial connectivity between brain regions, a channel-wise feature is proposed that can effectively handle the correlation between all channels. The channel-wise feature is defined by a symmetric matrix whose elements are the Pearson correlation coefficients between each pair of channels, which complementarily handles subject-specific variability. The channel-wise features are then fed to a two-layer stacked long short-term memory (LSTM) network, which extracts temporal features and learns an emotional model. Extensive experiments on two publicly available datasets, the Database for Emotion Analysis using Physiological Signals (DEAP) and the SJTU (Shanghai Jiao Tong University) Emotion EEG Dataset (SEED), demonstrate the effectiveness of the combined use of channel-wise features and LSTM. Experimental results achieve state-of-the-art classification rates of 98.93% and 99.10% for two-class classification of valence and arousal on DEAP, respectively, and an accuracy of 99.63% for three-class classification on SEED.
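The channel-wise feature reduces to a per-window, channel-by-channel Pearson correlation matrix, which numpy computes directly; the window length and channel count below are assumptions:

```python
# Hypothetical sketch of the channel-wise Pearson correlation feature per EEG window.
import numpy as np

fs, n_channels = 128, 32
window = np.random.randn(n_channels, fs * 2)  # one 2 s window, (channels, samples)

C = np.corrcoef(window)                       # (32, 32) symmetric correlation matrix
assert np.allclose(C, C.T) and np.allclose(np.diag(C), 1.0)

# A sequence of such matrices (one per window) would feed the stacked LSTM;
# keeping only the upper triangle removes the redundant symmetric entries.
iu = np.triu_indices(n_channels, k=1)
feature_vector = C[iu]                        # (32 * 31 / 2,) = (496,)
print(feature_vector.shape)
```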
28. Cimtay Y, Ekmekcioglu E. Investigating the Use of Pretrained Convolutional Neural Network on Cross-Subject and Cross-Dataset EEG Emotion Recognition. Sensors (Basel) 2020;20:2034. [PMID: 32260445] [PMCID: PMC7181114] [DOI: 10.3390/s20072034]
Abstract
The electroencephalogram (EEG) holds great attraction in emotion recognition studies due to its resistance to the deceptive actions of humans, one of the most significant advantages of brain signals over visual or speech signals in this context. A major challenge in EEG-based emotion recognition is that EEG recordings exhibit varying distributions for different people as well as for the same person at different times. This non-stationary nature of EEG limits its accuracy when subject independence is the priority. The aim of this study is to increase subject-independent recognition accuracy by exploiting pretrained state-of-the-art convolutional neural network (CNN) architectures. Unlike similar studies that extract spectral band-power features from the EEG readings, our study uses raw EEG data after windowing, pre-adjustment, and normalization. Removing manual feature extraction from the training system avoids the risk of eliminating hidden features in the raw data and helps leverage the deep network's power in uncovering unknown features. To further improve classification accuracy, a median filter is used to eliminate false detections along a prediction interval of emotions. The method yields mean cross-subject accuracies of 86.56% and 78.34% on the Shanghai Jiao Tong University Emotion EEG Dataset (SEED) for two and three emotion classes, respectively, and mean cross-subject accuracies of 72.81% on the Database for Emotion Analysis using Physiological Signals (DEAP) and 81.8% on the Loughborough University Multimodal Emotion Dataset (LUMED) for two emotion classes. Furthermore, the recognition model trained on SEED and tested on DEAP yields a mean prediction accuracy of 58.1% across all subjects and emotion classes. In terms of classification accuracy, the proposed approach is superior to, or on par with, the reference subject-independent EEG emotion recognition studies identified in the literature, with limited complexity due to eliminating the need for feature extraction.
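The post-hoc smoothing step, median-filtering the per-window predictions over an interval, is a one-liner with scipy; the kernel size and the toy prediction sequence are assumptions:

```python
# Hypothetical sketch of median-filtering a sequence of per-window emotion predictions.
import numpy as np
from scipy.signal import medfilt

preds = np.array([1, 1, 0, 1, 1, 1, 2, 1, 1, 0, 0, 0, 1, 0, 0])  # class per window
smoothed = medfilt(preds, kernel_size=5)  # suppresses isolated false detections
print(smoothed)
```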
29. Wu D, Xu Y, Lu BL. Transfer Learning for EEG-Based Brain-Computer Interfaces: A Review of Progress Made Since 2016. IEEE Trans Cogn Dev Syst 2020. [DOI: 10.1109/tcds.2020.3007453]