1
Gunda NK, Khalaf MI, Bhatnagar S, Quraishi A, Gudala L, Venkata AKP, Alghayadh FY, Alsubai S, Bhatnagar V. Lightweight attention mechanisms for EEG emotion recognition for brain computer interface. J Neurosci Methods 2024; 410:110223. [PMID: 39032522 DOI: 10.1016/j.jneumeth.2024.110223]
Abstract
BACKGROUND: In the realm of brain-computer interfaces (BCI), identifying emotions from electroencephalogram (EEG) data is difficult because of the volume of data, the intricacy of the signals, and the many channels that compose them. NEW METHODS: A lightweight network using dual-stream structure scaling and multiple attention mechanisms (LDMGEEG) is proposed to maximize the accuracy and performance of EEG-based emotion identification. The aim is to reduce the number of computational parameters while maintaining the current level of classification accuracy. The network employs a symmetric dual-stream architecture to separately assess time-domain and frequency-domain spatio-temporal maps constructed from differential entropy features of EEG signals. RESULTS: After substantially lowering the number of parameters, the model achieved state-of-the-art performance, with 95.18% accuracy on the SEED dataset. COMPARISON WITH EXISTING METHODS: The model reduces the number of parameters by 98% relative to existing models. CONCLUSION: The proposed channel-time/frequency-space multiple attention and post-attention mechanisms enhance the model's ability to aggregate features and yield a lightweight model.
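The differential entropy (DE) features this entry builds on have a simple closed form when each band-filtered EEG segment is treated as approximately Gaussian. A minimal sketch (the 200 Hz toy rate and unit-variance noise segment are illustrative assumptions, not details from the paper):

```python
import numpy as np

def differential_entropy(band_signal):
    # For an approximately Gaussian band-filtered segment,
    # h = 0.5 * ln(2 * pi * e * sigma^2).
    return 0.5 * np.log(2 * np.pi * np.e * np.var(band_signal))

# Toy example: one channel, one band, one 1-s segment at an assumed 200 Hz.
rng = np.random.default_rng(0)
segment = rng.normal(0.0, 1.0, 200)  # unit-variance noise stands in for a band signal
de = differential_entropy(segment)   # near 0.5*ln(2*pi*e) for unit variance
```

In practice one such value is computed per channel and per frequency band, then arranged into the spatio-temporal maps the network consumes.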
Affiliation(s)
- Naresh Kumar Gunda
- Information Technology Management, Campbellsville University, Campbellsville, KY, United States.
- Mohammed I Khalaf
- Department of Computer Science, Al Maarif University College, Al Anbar 31001, Iraq.
- Shaleen Bhatnagar
- Department of Computer Science and Engineering, Manipal Institute of Technology Bengaluru, Manipal Academy of Higher Education, Manipal, India.
- Aadam Quraishi
- M. D. Research, Interventional Treatment Institute, Al Anbar, TX, United States.
- Faisal Yousef Alghayadh
- Computer Science and Information Systems Department, College of Applied Sciences, AlMaarefa University, Riyadh, Saudi Arabia.
- Shtwai Alsubai
- Department of Computer Science, College of Computer Engineering and Sciences in Al-Kharj, Prince Sattam bin Abdulaziz University, P.O. Box 151, Al-Kharj 11942, Saudi Arabia.
- Vaibhav Bhatnagar
- Department of Computer Applications, Manipal University Jaipur, India.
2
Jin H, Gao Y, Wang T, Gao P. DAST: A Domain-Adaptive Learning Combining Spatio-Temporal Dynamic Attention for Electroencephalography Emotion Recognition. IEEE J Biomed Health Inform 2024; 28:2512-2523. [PMID: 37607151 DOI: 10.1109/jbhi.2023.3307606]
Abstract
Multimodal emotion recognition based on EEG has become mainstream in affective computing. However, previous studies mainly focus on perceived emotions (conveyed by posture, speech, or facial expression) of different subjects, while the lack of research on induced emotions (elicited by video or music) has limited the study of both directions of emotion. To address this problem, we propose a multimodal domain-adaptive method based on EEG and music, called DAST, which uses spatio-temporal adaptive attention (STA-attention) to globally model the EEG and dynamically maps all embeddings into a high-dimensional space with an adaptive space encoder (ASE). Adversarial training is then performed with a domain discriminator and the ASE to learn invariant emotion representations. We conduct extensive experiments on the DEAP dataset, and the results show that our method can further explore the relationship between induced and perceived emotions and provide a reliable reference for exploring the potential correlation between EEG and music stimulation.
3
Vaidya A, Chen RJ, Williamson DFK, Song AH, Jaume G, Yang Y, Hartvigsen T, Dyer EC, Lu MY, Lipkova J, Shaban M, Chen TY, Mahmood F. Demographic bias in misdiagnosis by computational pathology models. Nat Med 2024; 30:1174-1190. [PMID: 38641744 DOI: 10.1038/s41591-024-02885-z]
Abstract
Despite increasing numbers of regulatory approvals, deep learning-based computational pathology systems often overlook the impact of demographic factors on performance, potentially leading to biases. This concern is all the more important as computational pathology has leveraged large public datasets that underrepresent certain demographic groups. Using publicly available data from The Cancer Genome Atlas and the EBRAINS brain tumor atlas, as well as internal patient data, we show that whole-slide image classification models display marked performance disparities across different demographic groups when used to subtype breast and lung carcinomas and to predict IDH1 mutations in gliomas. For example, when using common modeling approaches, we observed performance gaps (in area under the receiver operating characteristic curve) between white and Black patients of 3.0% for breast cancer subtyping, 10.9% for lung cancer subtyping and 16.0% for IDH1 mutation prediction in gliomas. We found that richer feature representations obtained from self-supervised vision foundation models reduce performance variations between groups. These representations provide improvements upon weaker models even when those weaker models are combined with state-of-the-art bias mitigation strategies and modeling choices. Nevertheless, self-supervised vision foundation models do not fully eliminate these discrepancies, highlighting the continuing need for bias mitigation efforts in computational pathology. Finally, we demonstrate that our results extend to other demographic factors beyond patient race. Given these findings, we encourage regulatory and policy agencies to integrate demographic-stratified evaluation into their assessment guidelines.
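The performance gaps reported here are differences in group-stratified AUROC, which are straightforward to compute once predictions are scored per demographic group. A hedged sketch on synthetic data (the group labels and score construction are invented for illustration):

```python
import numpy as np
from sklearn.metrics import roc_auc_score

def auroc_gap(y_true, y_score, group):
    # AUROC per demographic group, plus the max pairwise gap between groups.
    aurocs = {g: roc_auc_score(y_true[group == g], y_score[group == g])
              for g in np.unique(group)}
    vals = list(aurocs.values())
    return aurocs, max(vals) - min(vals)

# Toy data: scores are informative for group "a" but uninformative for group "b".
rng = np.random.default_rng(1)
y = rng.integers(0, 2, 400)
g = np.where(np.arange(400) < 200, "a", "b")
noise = rng.normal(size=400)
score = np.where(g == "a", y + 0.1 * noise, noise)
per_group, gap = auroc_gap(y, score, g)
```

The same stratification applies to any metric; the paper's point is that this breakdown should be a routine part of model evaluation.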
Affiliation(s)
- Anurag Vaidya
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Health Sciences and Technology, Harvard-MIT, Cambridge, MA, USA
- Richard J Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Drew F K Williamson
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology and Laboratory Medicine, Emory University School of Medicine, Atlanta, GA, USA
- Andrew H Song
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Guillaume Jaume
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Yuzhe Yang
- Electrical Engineering and Computer Science, MIT, Cambridge, MA, USA
- Thomas Hartvigsen
- School of Data Science, University of Virginia, Charlottesville, VA, USA
- Emma C Dyer
- T.H. Chan School of Public Health, Harvard University, Cambridge, MA, USA
- Ming Y Lu
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Electrical Engineering and Computer Science, MIT, Cambridge, MA, USA
- Jana Lipkova
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Muhammad Shaban
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Tiffany Y Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Faisal Mahmood
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.
- Cancer Program, Broad Institute of Harvard and MIT, Cambridge, MA, USA.
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA.
- Harvard Data Science Initiative, Harvard University, Cambridge, MA, USA.
4
Zhang R, Guo H, Xu Z, Hu Y, Chen M, Zhang L. MGFKD: A semi-supervised multi-source domain adaptation algorithm for cross-subject EEG emotion recognition. Brain Res Bull 2024; 208:110901. [PMID: 38355058 DOI: 10.1016/j.brainresbull.2024.110901]
Abstract
Currently, most models in cross-subject EEG emotion recognition rarely consider the negative-transfer problem. To solve this, this paper proposes a semi-supervised domain-adaptive algorithm based on a few labeled samples from the target subject, called multi-domain geodesic flow kernel dynamic distribution alignment (MGFKD). It consists of three modules: 1) GFK common feature extractor: projects the feature distributions of source and target subjects onto the Grassmann manifold and obtains their latent common features via the GFK method. 2) Source domain selector: obtains pseudo-labels of the target subject through a weak classifier and finds "golden source subjects" using the few known labels of the target subject. 3) Label corrector: uses a dynamic distribution-balance strategy to correct the pseudo-labels of the target subject. We conducted comparison experiments on the SEED and SEED-IV datasets; the results show that MGFKD outperforms unsupervised and semi-supervised domain adaptation algorithms, achieving average accuracies of 87.51±7.68% and 68.79±8.25% on SEED and SEED-IV with only one labeled sample per video for the target subject. When the number of source domains is set to 6 and the number of known labels to 5, accuracy increases to 90.20±7.57% and 69.99±7.38%, respectively. These results show that the proposed algorithm can efficiently improve cross-subject EEG emotion classification. Because it needs only a small number of labeled samples from new subjects, it has strong application value for future EEG-based emotion recognition.
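The "golden source subjects" idea, scoring each source subject's weak classifier on the few labeled target samples and keeping the best matches, can be sketched in isolation. Everything below (three synthetic "subjects", the feature shifts, and a logistic-regression weak classifier) is an illustrative assumption, not the paper's full GFK pipeline:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

def select_golden_sources(source_sets, X_tgt, y_tgt, top_k=2):
    # Score a per-source weak classifier on the few labeled target samples
    # and keep the top_k best-matching ("golden") source subjects.
    scores = [LogisticRegression(max_iter=1000).fit(Xs, ys).score(X_tgt, y_tgt)
              for Xs, ys in source_sets]
    return np.argsort(scores)[::-1][:top_k], scores

# Toy setup: three source "subjects"; subject 0 matches the target distribution.
rng = np.random.default_rng(2)
def subject(shift):
    X = rng.normal(shift, 1.0, (100, 4))
    return X, (X[:, 0] > shift).astype(int)
sources = [subject(0.0), subject(3.0), subject(-3.0)]
X_target, y_target = subject(0.0)
golden, scores = select_golden_sources(sources, X_target[:10], y_target[:10])
```

The well-matched source ranks at or near the top even with only ten labeled target samples, which is the regime the paper targets.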
Affiliation(s)
- Rui Zhang
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou 450001, PR China
- Huifeng Guo
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou 450001, PR China
- Zongxin Xu
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou 450001, PR China
- Yuxia Hu
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou 450001, PR China
- Mingming Chen
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou 450001, PR China
- Lipeng Zhang
- Henan Key Laboratory of Brain Science and Brain-Computer Interface Technology, School of Electrical and Information Engineering, Zhengzhou University, Zhengzhou 450001, PR China.
5
Zhang G, Zhang A, Liu H, Luo J, Chen J. Positional multi-length and mutual-attention network for epileptic seizure classification. Front Comput Neurosci 2024; 18:1358780. [PMID: 38333103 PMCID: PMC10850335 DOI: 10.3389/fncom.2024.1358780]
Abstract
The automatic classification of epileptic electroencephalogram (EEG) signals plays a crucial role in diagnosing neurological diseases. Although deep learning methods have achieved promising results on this task, capturing the minute abnormal characteristics, contextual information, and long dependencies of EEG signals remains a challenge. To address this challenge, a positional multi-length and mutual-attention (PMM) network is proposed for the automatic classification of epileptic EEG signals. The PMM network incorporates a positional feature encoding process that extracts minute abnormal characteristics from the EEG signal and utilizes a multi-length feature learning process with a hierarchical residual dilated LSTM (RDLSTM) to capture long contextual dependencies. Furthermore, a mutual-attention feature reinforcement process is employed to learn global and relative feature dependencies and enhance the discriminative ability of the network. To validate the effectiveness of the PMM network, we conduct extensive experiments on a public dataset, and the results demonstrate its superior performance compared to state-of-the-art methods.
Affiliation(s)
- Guokai Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Aiming Zhang
- School of Optical-Electrical and Computer Engineering, University of Shanghai for Science and Technology, Shanghai, China
- Huan Liu
- Department of Hematology, Affiliated Qingdao Central Hospital of Qingdao University, Qingdao Cancer Hospital, Qingdao, China
- Jihao Luo
- School of Computing, National University of Singapore, Singapore, Singapore
- Jianqing Chen
- Department of Otolaryngology, Head and Neck Surgery, Shanghai Ninth People's Hospital, Affiliated to Shanghai Jiaotong University School of Medicine, Shanghai, China
6
Tao J, Dan Y, Zhou D. Local domain generalization with low-rank constraint for EEG-based emotion recognition. Front Neurosci 2023; 17:1213099. [PMID: 38027525 PMCID: PMC10662311 DOI: 10.3389/fnins.2023.1213099]
Abstract
As an important branch of affective computing, emotion recognition based on electroencephalography (EEG) faces a long-standing challenge due to individual diversity. To address this challenge, domain adaptation (DA) or domain generalization (DG, i.e., DA without the target domain in the training stage) techniques have been introduced into EEG-based emotion recognition to eliminate the distribution discrepancy between different subjects. Preceding DA or DG methods mainly focus on aligning the global distribution shift between source and target domains, without considering the correlations between subdomains within the source domain and the target domain of interest. Because ignoring fine-grained distribution information in the source may still limit DG performance on EEG datasets with multimodal structure, multiple patches (or subdomains) should be reconstructed from the source domain, on which multiple classifiers can be learned collaboratively. Accurately aligning relevant subdomains by excavating multiple distribution patterns within the source domain is expected to further boost the learning performance of DG/DA. We therefore propose a novel DG method for EEG-based emotion recognition, i.e., Local Domain Generalization with low-rank constraint (LDG). Specifically, the source domain is first partitioned into multiple local domains, each of which contains one positive sample, its positive neighbors, and its k2 negative neighbors. Multiple subject-invariant classifiers on different subdomains are then co-learned in a unified framework by minimizing local regression loss with low-rank regularization while considering the shared knowledge among local domains. In the inference stage, the learned local classifiers are discriminatively selected according to their importance for adaptation.
Extensive experiments are conducted on two benchmark databases (DEAP and SEED) under two cross-validation evaluation protocols, i.e., cross-subject within-dataset and cross-dataset within-session. The experimental results under the 5-fold cross-validation demonstrate the superiority of the proposed method compared with several state-of-the-art methods.
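The local-domain construction step described above (one positive anchor plus its nearest same-class and opposite-class neighbors) can be sketched with a standard nearest-neighbor search; the neighbor counts and toy features below are illustrative assumptions, not the paper's settings:

```python
import numpy as np
from sklearn.neighbors import NearestNeighbors

def local_domains(X, y, k_pos=3, k_neg=2):
    # For each positive anchor, collect its k_pos nearest same-class
    # neighbors (plus itself) and k_neg nearest opposite-class neighbors.
    pos, neg = np.where(y == 1)[0], np.where(y == 0)[0]
    nn_pos = NearestNeighbors(n_neighbors=k_pos + 1).fit(X[pos])
    nn_neg = NearestNeighbors(n_neighbors=k_neg).fit(X[neg])
    domains = []
    for i in pos:
        _, pi = nn_pos.kneighbors(X[i:i + 1])   # first hit is the anchor itself
        _, ni = nn_neg.kneighbors(X[i:i + 1])
        domains.append(np.concatenate([pos[pi[0]], neg[ni[0]]]))
    return domains

rng = np.random.default_rng(6)
X = rng.normal(size=(40, 5))
y = np.array([1] * 20 + [0] * 20)
doms = local_domains(X, y)   # one index set per positive anchor
```

A separate classifier would then be trained on each index set, with the low-rank regularizer tying them together.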
Affiliation(s)
- Jianwen Tao
- Institute of Artificial Intelligence Application, Ningbo Polytechnic, Zhejiang, China
- Yufang Dan
- Institute of Artificial Intelligence Application, Ningbo Polytechnic, Zhejiang, China
- Di Zhou
- Industrial Technological Institute of Intelligent Manufacturing, Sichuan University of Arts and Science, Dazhou, China
7
Chen RJ, Wang JJ, Williamson DFK, Chen TY, Lipkova J, Lu MY, Sahai S, Mahmood F. Algorithmic fairness in artificial intelligence for medicine and healthcare. Nat Biomed Eng 2023; 7:719-742. [PMID: 37380750 PMCID: PMC10632090 DOI: 10.1038/s41551-023-01056-8]
Abstract
In healthcare, the development and deployment of insufficiently fair systems of artificial intelligence (AI) can undermine the delivery of equitable care. Assessments of AI models stratified across subpopulations have revealed inequalities in how patients are diagnosed, treated and billed. In this Perspective, we outline fairness in machine learning through the lens of healthcare, and discuss how algorithmic biases (in data acquisition, genetic variation and intra-observer labelling variability, in particular) arise in clinical workflows and the resulting healthcare disparities. We also review emerging technology for mitigating biases via disentanglement, federated learning and model explainability, and their role in the development of AI-based software as a medical device.
Affiliation(s)
- Richard J Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Judy J Wang
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Boston University School of Medicine, Boston, MA, USA
- Drew F K Williamson
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA
- Tiffany Y Chen
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA
- Jana Lipkova
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA
- Ming Y Lu
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA
- Electrical Engineering and Computer Science, Massachusetts Institute of Technology, Cambridge, MA, USA
- Sharifa Sahai
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA
- Department of Biomedical Informatics, Harvard Medical School, Boston, MA, USA
- Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA
- Department of Systems Biology, Harvard Medical School, Boston, MA, USA
- Faisal Mahmood
- Department of Pathology, Brigham and Women's Hospital, Harvard Medical School, Boston, MA, USA.
- Cancer Program, Broad Institute of Harvard and Massachusetts Institute of Technology, Cambridge, MA, USA.
- Cancer Data Science Program, Dana-Farber Cancer Institute, Boston, MA, USA.
- Department of Pathology, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA.
- Harvard Data Science Initiative, Harvard University, Cambridge, MA, USA.
8
Jeon E, Ko W, Yoon JS, Suk HI. Mutual Information-Driven Subject-Invariant and Class-Relevant Deep Representation Learning in BCI. IEEE Trans Neural Netw Learn Syst 2023; 34:739-749. [PMID: 34357871 DOI: 10.1109/tnnls.2021.3100583]
Abstract
In recent years, deep learning-based feature representation methods have shown a promising impact on electroencephalography (EEG)-based brain-computer interfaces (BCI). Nonetheless, owing to high intra- and inter-subject variability, many studies on decoding EEG were designed in a subject-specific manner using calibration samples, with little concern for practical use, which is hampered by time-consuming calibration and large data requirements. To this end, recent studies have adopted transfer learning strategies, especially domain adaptation techniques, and adversarial learning-based transfer learning has shown potential in BCIs. However, adversarial learning-based domain adaptation methods are prone to negative transfer, which disrupts learning feature representations generalizable across diverse domains, for example, subjects or sessions in BCIs. In this article, we propose a novel framework that learns class-relevant and subject-invariant feature representations in an information-theoretic manner, without adversarial learning. Specifically, we devise two operational components in a deep network that explicitly estimate mutual information between feature representations: 1) decomposing features in an intermediate layer into class-relevant and class-irrelevant ones, and 2) enriching class-discriminative feature representations. On two large EEG datasets, we validated the effectiveness of the proposed framework by comparing its performance with several competing methods. Furthermore, we conducted rigorous analyses through an ablation study of the network's components, explained the model's decisions on input EEG signals via layer-wise relevance propagation, and visualized the distribution of learned features via t-SNE.
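The core measurement in this framework is mutual information between features and class labels; a toy sketch of that measuring step, using scikit-learn's nonparametric estimator as a stand-in (the paper trains its own neural estimators, and the synthetic features below are invented for illustration):

```python
import numpy as np
from sklearn.feature_selection import mutual_info_classif

# Toy features: column 0 is class-relevant, column 1 is class-irrelevant noise.
rng = np.random.default_rng(5)
y = rng.integers(0, 2, 1000)
X = np.column_stack([y + 0.3 * rng.normal(size=1000),  # tracks the label
                     rng.normal(size=1000)])            # ignores the label
mi = mutual_info_classif(X, y, random_state=0)  # MI estimate per feature, in nats
```

The estimator assigns much higher mutual information to the class-relevant column, which is exactly the signal the decomposition component exploits.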
9
Extracting a Novel Emotional EEG Topographic Map Based on a Stacked Autoencoder Network. J Healthc Eng 2023; 2023:9223599. [PMID: 36714412 PMCID: PMC9879679 DOI: 10.1155/2023/9223599]
Abstract
Emotion recognition based on brain signals has become an increasingly attractive way to evaluate humans' internal emotional states. Conventional emotion recognition studies focus on developing machine learning methods and classifiers. However, most of these methods do not provide information on the involvement of different brain areas in emotion. Brain mapping is considered one of the most distinctive ways of showing the involvement of different brain areas in performing an activity. Most mapping techniques rely on projecting and visualizing only one electroencephalogram (EEG) subband feature onto brain regions. The present study aims to develop a new EEG-based brain mapping that combines several features to provide more complete and useful information on a single map instead of several conventional maps. The optimal combination of EEG features for each channel was extracted using a stacked autoencoder (SAE) network and visualized as a topographic map. The research hypothesis is that autoencoders can extract optimal features for quantitative EEG (QEEG) brain mapping. The DEAP EEG database was employed to extract topographic maps. The accuracy of image classification using a convolutional neural network (CNN) served as a criterion for evaluating how well the maps obtained by the stacked autoencoder topographic map (SAETM) method distinguish different emotions. Average classification accuracies of 0.8173 and 0.8037 were obtained in the valence and arousal dimensions, respectively. The extracted maps were also ranked by a team of experts against conventional maps. Quantitative and qualitative evaluation showed that the maps obtained by SAETM carry more information than conventional ones.
10
Sorinas J, Troyano JCF, Ferrández JM, Fernandez E. Unraveling the Development of an Algorithm for Recognizing Primary Emotions Through Electroencephalography. Int J Neural Syst 2023; 33:2250057. [PMID: 36495049 DOI: 10.1142/s0129065722500575]
Abstract
The large range of potential applications of affective brain-computer interfaces (aBCI), not only for patients but also for healthy people, makes the need for a commonly accepted protocol for real-time EEG-based emotion recognition more pressing. Using wavelet packets for spectral feature extraction, in keeping with the nature of the EEG signal, we have specified some of the main parameters needed to implement robust positive- and negative-emotion classification. Twelve seconds emerged as the most appropriate sliding-window size; from that, a set of 20 target frequency-location variables was proposed as the most relevant features carrying emotional information. Lastly, QDA and KNN classifiers and a population rating criterion for stimulus labeling were identified as the most suitable approaches for EEG-based emotion recognition. The proposed model reached mean accuracies of 98% (s.d. 1.4) and 98.96% (s.d. 1.28) in a subject-dependent (SD) approach for the QDA and KNN classifiers, respectively. This model represents a step toward real-time classification. New insights regarding a subject-independent (SI) approximation are also discussed, although those results were not conclusive.
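The recipe above (fixed-length sliding windows, spectral features per window, then a QDA or KNN classifier) can be sketched end to end. Everything below is an illustrative assumption rather than the paper's setup: a 128 Hz rate, Welch band powers standing in for the wavelet-packet features, and synthetic alpha-modulated signals:

```python
import numpy as np
from scipy.signal import welch
from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis
from sklearn.neighbors import KNeighborsClassifier

FS = 128  # assumed sampling rate

def band_powers(window, fs=FS, bands=((4, 8), (8, 13), (13, 30), (30, 45))):
    # Mean spectral power per band for one single-channel window.
    f, pxx = welch(window, fs=fs, nperseg=fs * 2)
    return [pxx[(f >= lo) & (f < hi)].mean() for lo, hi in bands]

# Toy dataset: "positive" windows carry extra alpha (10 Hz) power.
rng = np.random.default_rng(3)
t = np.arange(12 * FS) / FS  # 12-s window, the size the paper recommends
def window(alpha_amp):
    return alpha_amp * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1, t.size)
X = np.array([band_powers(window(a)) for a in [2.0] * 30 + [0.0] * 30])
y = np.array([1] * 30 + [0] * 30)

qda = QuadraticDiscriminantAnalysis().fit(X[::2], y[::2])
knn = KNeighborsClassifier(n_neighbors=3).fit(X[::2], y[::2])
qda_acc = qda.score(X[1::2], y[1::2])
knn_acc = knn.score(X[1::2], y[1::2])
```

On real EEG the separation is far weaker, but the pipeline shape (window, featurize, classify) is the same.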
Affiliation(s)
- Jennifer Sorinas
- Institute of Bioengineering, University Miguel Hernandez and CIBER BBN, Elche 03202, Spain
- Juan C Fernandez Troyano
- Department of Electronics and Computer Technology, University of Cartagena, Cartagena 30202, Spain
- Jose Manuel Ferrández
- Department of Electronics and Computer Technology, University of Cartagena, Cartagena 30202, Spain
- Eduardo Fernandez
- Institute of Bioengineering, University Miguel Hernandez and CIBER BBN, Elche 03202, Spain
11
Cavazza J, Ahmed W, Volpi R, Morerio P, Bossi F, Willemse C, Wykowska A, Murino V. Understanding action concepts from videos and brain activity through subjects' consensus. Sci Rep 2022; 12:19073. [PMID: 36351956 PMCID: PMC9646846 DOI: 10.1038/s41598-022-23067-2]
Abstract
In this paper, we investigate brain activity associated with complex visual tasks, showing that electroencephalography (EEG) data can help computer vision reliably recognize actions from the video footage used to stimulate human observers. Notably, we consider not only typical "explicit" video action benchmarks but also more complex sequences in which action concepts are only implicitly referred to. To this end, we consider a challenging action recognition benchmark, Moments in Time, whose video sequences do not explicitly visualize actions but only implicitly refer to them (e.g., fireworks in the sky as an extreme example of "flying"). We employ such videos as stimuli and involve a large sample of subjects to collect high-definition, multimodal EEG and video data designed for understanding action concepts. We discover an agreement among the brain activities of different subjects stimulated by the same video footage. We name this subjects' consensus, and we design a computational pipeline to transfer knowledge from EEG to video, sharply boosting recognition performance.
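At its simplest, the subjects' consensus can be operationalized as the mean pairwise correlation of subjects' responses to the same footage; a minimal sketch (the shared-component toy data is an illustrative assumption, not the authors' EEG pipeline):

```python
import numpy as np

def subjects_consensus(responses):
    # responses: subjects x time feature traces for one video clip.
    # Mean pairwise Pearson correlation; high values indicate agreement.
    r = np.corrcoef(responses)
    iu = np.triu_indices_from(r, k=1)
    return r[iu].mean()

# Toy data: five subjects sharing a stimulus-driven component vs. pure noise.
rng = np.random.default_rng(4)
common = rng.normal(size=500)
shared = np.array([common + 0.5 * rng.normal(size=500) for _ in range(5)])
independent = rng.normal(size=(5, 500))
high = subjects_consensus(shared)        # dominated by the shared component
low = subjects_consensus(independent)    # near zero for unrelated traces
```

Clips whose consensus is high are exactly those for which the EEG carries transferable information about the stimulus.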
Affiliation(s)
- Jacopo Cavazza
- Pattern Analysis & Computer Vision (PAVIS), Istituto Italiano di Tecnologia (IIT), Via Enrico Melen 83, 16152 Genova, Italy
- Waqar Ahmed
- Pattern Analysis & Computer Vision (PAVIS), Istituto Italiano di Tecnologia (IIT), Via Enrico Melen 83, 16152 Genova, Italy
- Riccardo Volpi
- Pattern Analysis & Computer Vision (PAVIS), Istituto Italiano di Tecnologia (IIT), Via Enrico Melen 83, 16152 Genova, Italy; Naver Labs Europe, 6 Chemin de Maupertuis, Meylan, 38240 Grenoble, France
- Pietro Morerio
- Pattern Analysis & Computer Vision (PAVIS), Istituto Italiano di Tecnologia (IIT), Via Enrico Melen 83, 16152 Genova, Italy
- Francesco Bossi
- IMT School for Advanced Studies Lucca, Piazza San Francesco 19, 55100 Lucca, Italy; Social Cognition in Human-Robot Interaction (S4HRI), Istituto Italiano di Tecnologia (IIT), Via Enrico Melen 83, 16152 Genova, Italy
- Cesco Willemse
- Social Cognition in Human-Robot Interaction (S4HRI), Istituto Italiano di Tecnologia (IIT), Via Enrico Melen 83, 16152 Genova, Italy
- Agnieszka Wykowska
- Social Cognition in Human-Robot Interaction (S4HRI), Istituto Italiano di Tecnologia (IIT), Via Enrico Melen 83, 16152 Genova, Italy
- Vittorio Murino
- Pattern Analysis & Computer Vision (PAVIS), Istituto Italiano di Tecnologia (IIT), Via Enrico Melen 83, 16152 Genova, Italy; Department of Computer Science, University of Verona, Strada Le Grazie 15, 37134 Verona, Italy
12
EEG Signals to Digit Classification Using Deep Learning-Based One-Dimensional Convolutional Neural Network. Arab J Sci Eng 2022. [DOI: 10.1007/s13369-022-07313-3]
13
Liang S, Su L, Fu Y, Wu L. Multi-source joint domain adaptation for cross-subject and cross-session emotion recognition from electroencephalography. Front Hum Neurosci 2022; 16:921346. [PMID: 36188181] [PMCID: PMC9520599] [DOI: 10.3389/fnhum.2022.921346]
Abstract
As an important component in promoting the development of affective brain–computer interfaces, the study of emotion recognition based on electroencephalography (EEG) has encountered a difficult challenge: the distribution of EEG data changes among different subjects and at different time periods. Domain adaptation methods can effectively alleviate the generalization problem of EEG emotion recognition models. However, most of them treat multiple source domains, with significantly different distributions, as one single source domain, and only adapt the cross-domain marginal distribution while ignoring the joint distribution difference between the domains. To gain the advantages of multiple source distributions and better match the distributions of the source and target domains, this paper proposes a novel multi-source joint domain adaptation (MSJDA) network. We first map all domains to a shared feature space and then align the joint distributions of the further extracted private representations and the corresponding classification predictions for each pair of source and target domains. Extensive cross-subject and cross-session experiments on the benchmark dataset, SEED, demonstrate the effectiveness of the proposed model, where more significant classification results are obtained on the more difficult cross-subject emotion recognition task.
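The per-pair source–target alignment described above is typically driven by a discrepancy measure such as maximum mean discrepancy (MMD). A minimal pure-Python sketch of the criterion with a linear kernel (an illustration of the general idea, not the MSJDA network itself):

```python
def mmd2_linear(X, Y):
    """Squared MMD with a linear kernel: squared distance between
    the feature means of the two sample sets."""
    d = len(X[0])
    mean_x = [sum(row[j] for row in X) / len(X) for j in range(d)]
    mean_y = [sum(row[j] for row in Y) / len(Y) for j in range(d)]
    return sum((a - b) ** 2 for a, b in zip(mean_x, mean_y))

def multi_source_alignment_loss(sources, target):
    """Sum the source-target discrepancy separately over each source
    domain, rather than pooling all sources into one source domain."""
    return sum(mmd2_linear(S, target) for S in sources)
```

Treating each source separately, as above, is what distinguishes the multi-source setting from the pooled single-source baseline criticized in the abstract.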
14
Feature matching as improved transfer learning technique for wearable EEG. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2022.104009]
15
Xu DQ, Li MA. A dual alignment-based multi-source domain adaptation framework for motor imagery EEG classification. Appl Intell 2022; 53:10766-10788. [PMID: 36039116] [PMCID: PMC9402410] [DOI: 10.1007/s10489-022-04077-z]
Affiliation(s)
- Dong-qin Xu
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Ming-ai Li
- Faculty of Information Technology, Beijing University of Technology, Beijing 100124, China
- Beijing Key Laboratory of Computational Intelligence and Intelligent System, Beijing 100124, China
- Engineering Research Center of Digital Community, Ministry of Education, Beijing 100124, China
16
Wang M, Yin X, Zhu Y, Hu J. Representation Learning and Pattern Recognition in Cognitive Biometrics: A Survey. Sensors (Basel) 2022; 22:5111. [PMID: 35890799] [PMCID: PMC9320620] [DOI: 10.3390/s22145111]
Abstract
Cognitive biometrics is an emerging branch of biometric technology. Recent research has demonstrated great potential for using cognitive biometrics in versatile applications, including biometric recognition and cognitive and emotional state recognition. There is a major need to summarize the latest developments in this field. Existing surveys have mainly focused on a small subset of cognitive biometric modalities, such as EEG and ECG. This article provides a comprehensive review of cognitive biometrics, covering all the major biosignal modalities and applications. A taxonomy is designed to structure the corresponding knowledge and guide the survey from signal acquisition and pre-processing to representation learning and pattern recognition. We provide a unified view of the methodological advances in these four aspects across various biosignals and applications, facilitating interdisciplinary research and knowledge transfer across fields. Furthermore, this article discusses open research directions in cognitive biometrics and proposes future prospects for developing reliable and secure cognitive biometric systems.
Affiliation(s)
- Min Wang
- School of Engineering and Information Technology, University of New South Wales, Canberra, ACT 2612, Australia
- Xuefei Yin
- School of Engineering and Information Technology, University of New South Wales, Canberra, ACT 2612, Australia
- Yanming Zhu
- School of Computer Science and Engineering, University of New South Wales, Sydney, NSW 2052, Australia
- Jiankun Hu
- School of Engineering and Information Technology, University of New South Wales, Canberra, ACT 2612, Australia
17
Tavakkoli H, Motie Nasrabadi A. A Spherical Phase Space Partitioning Based Symbolic Time Series Analysis (SPSP-STSA) for Emotion Recognition Using EEG Signals. Front Hum Neurosci 2022; 16:936393. [PMID: 35845249] [PMCID: PMC9276988] [DOI: 10.3389/fnhum.2022.936393]
Abstract
Emotion recognition systems have been of interest to researchers for a long time. Improvements in brain-computer interface systems currently make EEG-based emotion recognition more attractive. These systems try to develop strategies that are capable of recognizing emotions automatically. Many approaches exist, owing to the variety of feature extraction methods for analyzing EEG signals. Still, since the brain is assumed to be a nonlinear dynamic system, nonlinear dynamic analysis tools may yield more suitable results. A novel approach in symbolic time series analysis (STSA) for signal phase space partitioning and symbol sequence generation is introduced in this study. Symbolic sequences are produced by spherical partitioning of the phase space; they are then compared and classified based on the maximum value of a similarity index. Automatic, subject-independent EEG-based emotion recognition has long been discussed because emotional content is subject-dependent. Here we introduce a subject-independent protocol to address this generalization problem. To demonstrate our method's effectiveness, we used the DEAP dataset and reached an accuracy of 98.44% for classifying happiness from sadness (two emotion groups). Accuracy was 93.75% for three (happiness, sadness, and joy), 89.06% for four (happiness, sadness, joy, and terrible), and 85% for five emotional groups (happiness, sadness, joy, terrible, and mellow). According to these results, our subject-independent method is more accurate than many other methods in different studies, and such a subject-independent protocol is not considered in most studies in this field.
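The spherical partitioning step above can be illustrated with a toy symbolizer: delay-embed the signal into 3-D points, convert each point to spherical coordinates, and bin radius, azimuth, and elevation into a discrete symbol. The delay, bin counts, and normalization below are arbitrary choices for illustration, not the paper's parameters:

```python
import math

def embed3(x, tau=1):
    """3-D delay embedding of a scalar time series."""
    return [(x[i], x[i + tau], x[i + 2 * tau]) for i in range(len(x) - 2 * tau)]

def spherical_symbols(x, tau=1, n_r=2, n_az=4, n_el=2):
    """Map each embedded point to one integer symbol via spherical
    binning of radius, azimuth, and elevation."""
    pts = embed3(x, tau)
    radii = [math.sqrt(a * a + b * b + c * c) for a, b, c in pts]
    r_max = max(radii) or 1.0           # normalize radius bins to the series
    symbols = []
    for (a, b, c), r in zip(pts, radii):
        ir = min(int(n_r * r / r_max), n_r - 1)
        az = math.atan2(b, a)           # azimuth in [-pi, pi]
        ia = min(int(n_az * (az + math.pi) / (2 * math.pi)), n_az - 1)
        cos_el = 0.0 if r == 0 else max(-1.0, min(1.0, c / r))
        ie = min(int(n_el * (math.acos(cos_el) / math.pi)), n_el - 1)
        symbols.append((ir * n_az + ia) * n_el + ie)  # flatten to one integer
    return symbols
```

The resulting symbol sequences could then be compared across trials with any similarity index, as in the abstract.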
18
Abstract
Non-stationarity of EEG signals leads to high variability across sessions, which results in low classification accuracy. To reduce inter-session variability, an unsupervised domain adaptation method is proposed. The arithmetic mean and covariance are exploited to represent the data distribution. First, overall mean alignment is conducted between the source and target data. Then, the data in the target domain are labeled by a classifier trained with the source data. The per-class mean and covariance of the target data are estimated based on the predicted labels. Next, an alignment from the source domain to the target domain is performed according to the covariance of each class in the target domain. Finally, per-class mean adaptation is required after covariance alignment to remove the shift of the data distribution caused by covariance alignment. Two public BCI competition datasets, namely BCI competition III dataset IVa and BCI competition IV dataset IIa, were used to evaluate the proposed method. On both datasets, the proposed method effectively improved classification accuracy.
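The alignment steps in this abstract can be sketched with a diagonal (per-feature) simplification; full covariance alignment would additionally whiten with matrix square roots. This is a hypothetical illustration in which the target pseudo-labels are assumed to come from a source-trained classifier, as the abstract describes:

```python
def feature_means(X):
    d = len(X[0])
    return [sum(row[j] for row in X) / len(X) for j in range(d)]

def feature_stds(X, mu):
    d, n = len(X[0]), len(X)
    return [max((sum((row[j] - mu[j]) ** 2 for row in X) / n) ** 0.5, 1e-12)
            for j in range(d)]

def global_mean_align(S, T):
    """Step 1: shift the source so its overall mean matches the target."""
    ms, mt = feature_means(S), feature_means(T)
    return [[x + (b - a) for x, a, b in zip(row, ms, mt)] for row in S]

def per_class_align(S, y_s, T, y_t_pred):
    """Steps 3-5 (diagonal form): per class, rescale the source spread to
    the pseudo-labeled target spread, then re-center on the target class
    mean to remove the shift the rescaling introduced."""
    out = []
    for cls in set(y_s):
        Sc = [row for row, y in zip(S, y_s) if y == cls]
        Tc = [row for row, y in zip(T, y_t_pred) if y == cls]
        ms, mt = feature_means(Sc), feature_means(Tc)
        ss, st = feature_stds(Sc, ms), feature_stds(Tc, mt)
        for row in Sc:
            out.append(([(x - a) * (u / v) + b
                         for x, a, b, v, u in zip(row, ms, mt, ss, st)], cls))
    return out
```

After `per_class_align`, each source class has the same (diagonal) mean and spread as the corresponding pseudo-labeled target class.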
19
Dan Y, Tao J, Zhou D. Multi-Model Adaptation Learning With Possibilistic Clustering Assumption for EEG-Based Emotion Recognition. Front Neurosci 2022; 16:855421. [PMID: 35600616] [PMCID: PMC9114636] [DOI: 10.3389/fnins.2022.855421]
Abstract
In the machine learning community, graph-based semi-supervised learning (GSSL) approaches have attracted extensive research due to their elegant mathematical formulation and good performance. However, one factor affecting the performance of GSSL methods is that the training data and test data need to be independent and identically distributed (IID), yet any individual user may show completely different electroencephalogram (EEG) data in the same situation; the EEG data may therefore be non-IID. In addition, sensitivity to noise/outliers still exists in GSSL approaches. To these ends, we propose in this paper a novel clustering method based on a structural risk minimization model, called multi-model adaptation learning with possibilistic clustering assumption for EEG-based emotion recognition (MA-PCA). It can effectively minimize the influence of noise/outlier samples across different EEG-based data distributions in a reproducing kernel Hilbert space. Our main ideas are as follows: (1) reducing the negative impact of noise/outlier patterns through fuzzy entropy regularization, (2) handling both IID and non-IID training and test data to obtain better performance through multi-model adaptation learning, and (3) providing the algorithm implementation and a convergence theorem. A large number of experiments and deep analyses on the real DEAP and SEED datasets were carried out. The results show that the MA-PCA method has superior or comparable robustness and generalization performance for EEG-based emotion recognition.
Affiliation(s)
- Yufang Dan
- Institute of Artificial Intelligence Application, Ningbo Polytechnic, Ningbo, China
- Key Laboratory of 3D Printing Equipment and Manufacturing in Colleges and Universities of Fujian Province, Fujian, China
- Jianwen Tao
- Institute of Artificial Intelligence Application, Ningbo Polytechnic, Ningbo, China
- Di Zhou
- Industrial Technological Institute of Intelligent Manufacturing, Sichuan University of Arts and Science, Dazhou, China
20
Heremans ERM, Phan H, Borzée P, Buyse B, Testelmans D, De Vos M. From unsupervised to semi-supervised adversarial domain adaptation in EEG-based sleep staging. J Neural Eng 2022; 19. [PMID: 35508121] [DOI: 10.1088/1741-2552/ac6ca8]
Abstract
OBJECTIVE The recent breakthrough of wearable sleep monitoring devices results in large amounts of sleep data. However, as limited labels are available, interpreting these data requires automated sleep stage classification methods with a small need for labeled training data. Transfer learning and domain adaptation offer possible solutions by enabling models to learn on a source dataset and adapt to a target dataset. APPROACH In this paper, we investigate adversarial domain adaptation applied to real use cases with wearable sleep datasets acquired from diseased patient populations. Different practical aspects of the adversarial domain adaptation framework are examined, including the added value of (pseudo-)labels from the target dataset and the influence of domain mismatch between the source and target data. The method is also implemented for personalization to specific patients. MAIN RESULTS The results show that adversarial domain adaptation is effective in the application of sleep staging on wearable data. When compared to a model applied on a target dataset without any adaptation, the domain adaptation method in its simplest form achieves relative gains of 7%-27% in accuracy. The performance on the target domain is further boosted by adding pseudo-labels and real target domain labels when available, and by choosing an appropriate source dataset. Furthermore, unsupervised adversarial domain adaptation can also personalize a model, improving the performance by 1%-2% compared to a non-personal model. SIGNIFICANCE In conclusion, adversarial domain adaptation provides a flexible framework for semi-supervised and unsupervised transfer learning. This is particularly useful in sleep staging and other wearable EEG applications.
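The pseudo-labeling idea examined above can be illustrated independently of the adversarial network: label the target set with a source-trained model, keep only confident predictions, and retrain on the union. A hypothetical sketch with a nearest-centroid classifier standing in for the paper's neural model:

```python
import math

def centroids(X, y):
    """Per-class mean vectors of a labeled sample set."""
    cents = {}
    for cls in set(y):
        rows = [r for r, lab in zip(X, y) if lab == cls]
        cents[cls] = [sum(col) / len(rows) for col in zip(*rows)]
    return cents

def predict_with_margin(cents, x):
    """Return (label, margin): margin = runner-up distance - best distance,
    used here as a crude confidence score."""
    dists = sorted((math.dist(x, c), cls) for cls, c in cents.items())
    best, runner = dists[0], dists[1]
    return best[1], runner[0] - best[0]

def pseudo_label_retrain(Xs, ys, Xt, margin_thresh=1.0):
    """Train on source, pseudo-label confident target samples, retrain."""
    cents = centroids(Xs, ys)
    keep_x, keep_y = list(Xs), list(ys)
    for x in Xt:
        lab, margin = predict_with_margin(cents, x)
        if margin >= margin_thresh:      # keep only confident pseudo-labels
            keep_x.append(x)
            keep_y.append(lab)
    return centroids(keep_x, keep_y)     # retrained model
```

The confidence threshold is the key design choice: too low and wrong pseudo-labels pollute the model, too high and no target information is used.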
Affiliation(s)
- Elisabeth Roxane Marie Heremans
- Department of Electrical Engineering, KU Leuven Science Engineering and Technology Group, Kasteelpark Arenberg 10, Leuven, 3001, Belgium
- Huy Phan
- School of Electronic Engineering and Computer Science, Queen Mary University of London, Mile End Rd, Bethnal Green, London, E1 4NS, United Kingdom
- Pascal Borzée
- Department of Pneumology, KU Leuven University Hospitals Leuven, Herestraat 49, Leuven, 3000, Belgium
- Bertien Buyse
- Department of Pneumology, KU Leuven University Hospitals Leuven, Herestraat 49, Leuven, Flanders, 3000, Belgium
- Dries Testelmans
- Department of Pneumology, KU Leuven University Hospitals Leuven, Herestraat 49, Leuven, 3000, Belgium
- Maarten De Vos
- Department of Electrical Engineering, KU Leuven Science Engineering and Technology Group, Kasteelpark Arenberg 10, Leuven, 3000, Belgium
21
Tao J, Dan Y, Zhou D, He S. Robust Latent Multi-Source Adaptation for Encephalogram-Based Emotion Recognition. Front Neurosci 2022; 16:850906. [PMID: 35573289] [PMCID: PMC9091911] [DOI: 10.3389/fnins.2022.850906]
Abstract
In practical electroencephalogram (EEG)-based machine learning, different subjects can be represented by many different EEG patterns, which can, to some extent, degrade the performance of extant subject-independent classifiers obtained from cross-subject datasets. To this end, in this paper, we present a robust Latent Multi-source Adaptation (LMA) framework for cross-subject/dataset emotion recognition with EEG signals by uncovering multiple domain-invariant latent subspaces. Specifically, by jointly aligning the statistical and semantic distribution discrepancies between each source and target pair, multiple domain-invariant classifiers can be trained collaboratively in a unified framework. This framework can fully utilize the correlated knowledge among multiple sources with a novel low-rank regularization term. Comprehensive experiments on the DEAP and SEED datasets demonstrate the superior or comparable performance of LMA with the state of the art in EEG-based emotion recognition.
Affiliation(s)
- Jianwen Tao
- Institute of Artificial Intelligence Application, Ningbo Polytechnic, Ningbo, China
- Yufang Dan
- Institute of Artificial Intelligence Application, Ningbo Polytechnic, Ningbo, China
- Di Zhou
- Industrial Technological Institute of Intelligent Manufacturing, Sichuan University of Arts and Science, Dazhou, China
- Songsong He
- Institute of Artificial Intelligence Application, Ningbo Polytechnic, Ningbo, China
22
Olamat A, Ozel P, Atasever S. Deep Learning Methods for Multi-Channel EEG-Based Emotion Recognition. Int J Neural Syst 2022; 32:2250021. [DOI: 10.1142/s0129065722500216]
Abstract
Currently, Fourier-based, wavelet-based, and Hilbert-based time–frequency techniques have generated considerable interest in classification studies for emotion recognition in human–computer interface investigations. Empirical mode decomposition (EMD), one of the Hilbert-based time–frequency techniques, has been developed as a tool for adaptive signal processing. Its multi-variate version strongly influences the design of the common oscillation structure of a multi-channel signal by utilizing the common instantaneous concepts of frequency and bandwidth. Additionally, electroencephalographic (EEG) signals are strongly preferred for understanding emotion recognition perspectives in human–machine interactions. This study presents an emotion detection design via EEG signal decomposition using multi-variate empirical mode decomposition (MEMD). For emotion recognition, the SJTU emotion EEG dataset (SEED) is classified using deep learning methods. Convolutional neural networks (AlexNet, DenseNet-201, ResNet-101, and ResNet-50) and AutoKeras architectures are selected for image classification. The proposed framework reaches 99% and 100% classification accuracy when transfer learning methods and the AutoKeras method are used, respectively.
Affiliation(s)
- Ali Olamat
- Biomedical Engineering Department, Yildiz Technical University, Istanbul, Turkey
- Pinar Ozel
- Biomedical Engineering Department, Nevsehir Hacı Bektas Veli University, Nevsehir, Turkey
- Sema Atasever
- Computer Engineering Department, Nevsehir Hacı Bektas Veli University, Nevsehir, Turkey
23
Chen C, Vong CM, Wang S, Wang H, Pang M. Easy Domain Adaptation for cross-subject multi-view emotion recognition. Knowl Based Syst 2022. [DOI: 10.1016/j.knosys.2021.107982]
24
Bhosale S, Chakraborty R, Kopparapu SK. Calibration free meta learning based approach for subject independent EEG emotion recognition. Biomed Signal Process Control 2022. [DOI: 10.1016/j.bspc.2021.103289]
25
Linda Rose G, Punithavalli M. Stress emotion classification using optimized convolutional neural network for online transfer learning dataset. Comput Methods Biomech Biomed Engin 2022; 25:1576-1587. [PMID: 35098835] [DOI: 10.1080/10255842.2021.2024169]
Abstract
Nowadays, deep learning methods with transfer learning (TL) ease stress-emotion classification tasks. Among them, an optimized convolutional neural network with TL (OCNNTL) performs OCNN-based classification on emotion and stress data domains to learn high-level features at the top layers. However, it fails to handle abrupt concept drift in real time, and it incurs a large time overhead in gathering the required data and transforming it. To tackle these concerns, a novel online OCNNTL (O2CNNTL) model is proposed, in which the OCNNTL process is initiated in the stress-emotion domain using prior knowledge acquired by learning from training data in both the stress and emotion domains. Moreover, in the O2CNNTL model, concept-drifting data streams are taken into account when solving online classification with the OCNN classifier, and a regularization learning technique is applied on varied feature spaces to enhance learning efficiency. The proposed O2CNNTL thus achieves higher efficiency than state-of-the-art models.
Affiliation(s)
- G Linda Rose
- Department of Computer Science, Bharathiar University, Coimbatore, Tamil Nadu, India
- M Punithavalli
- Department of Computer Applications, Bharathiar University, Coimbatore, Tamil Nadu, India
26
Jiang L, Liu S, Ma Z, Lei W, Chen C. Regularized RKHS-Based Subspace Learning for Motor Imagery Classification. Entropy (Basel) 2022; 24:195. [PMID: 35205490] [PMCID: PMC8870989] [DOI: 10.3390/e24020195]
Abstract
Brain–computer interface (BCI) technology allows people with disabilities to communicate with the physical environment. One of the most promising signals is the non-invasive electroencephalogram (EEG). However, due to the non-stationary nature of EEG, a subject's signal may change over time, which poses a challenge for models that work across time. Recently, domain adaptive learning (DAL) has shown superior performance in various classification tasks. In this paper, we propose a regularized reproducing kernel Hilbert space (RKHS) subspace learning algorithm with K-nearest neighbors (KNN) as a classifier for the task of motor imagery signal classification. First, we reformulate the framework of RKHS subspace learning with rigorous mathematical inference. Second, since the commonly used maximum mean discrepancy (MMD) criterion measures the distribution variance based on the mean value only and ignores the local information of the distribution, a regularization term of source-domain linear discriminant analysis (SLDA) is proposed for the first time, which reduces the variance of similar data and increases the variance of dissimilar data to optimize the distribution of the source domain data. Finally, the RKHS subspace framework is constructed sparsely, considering the sensitivity of BCI data. We first test the proposed algorithm on four standard datasets, and the experimental results show that the baseline algorithms improve their average accuracy by 2–9% after adding SLDA. In the motor imagery classification experiments, the average accuracy of our algorithm is 3% higher than that of the other algorithms, demonstrating the adaptability and effectiveness of the proposed algorithm.
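The SLDA regularizer described above penalizes within-class spread while encouraging between-class spread on the source domain. A diagonal, per-feature sketch of such a Fisher-style criterion (illustrative only, not the paper's RKHS formulation):

```python
def class_partition(X, y):
    groups = {}
    for row, lab in zip(X, y):
        groups.setdefault(lab, []).append(row)
    return groups

def slda_ratio(X, y):
    """Within-class scatter divided by between-class scatter (lower is
    better), measured per feature and summed over classes/features."""
    groups = class_partition(X, y)
    overall = [sum(col) / len(X) for col in zip(*X)]
    within = between = 0.0
    for rows in groups.values():
        mu = [sum(col) / len(rows) for col in zip(*rows)]
        within += sum((v - m) ** 2
                      for row in rows for v, m in zip(row, mu))
        between += len(rows) * sum((m - o) ** 2
                                   for m, o in zip(mu, overall))
    return within / between
```

Minimizing such a ratio over a learned projection is what pulls similar source samples together and pushes dissimilar ones apart, complementing the mean-only MMD term.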
Affiliation(s)
- Linzhi Jiang
- School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510006, China
- Shuyu Liu
- Public Experimental Teaching Center, Sun Yat-sen University, Guangzhou 510006, China
- Zhengming Ma
- School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510006, China
- Wenjie Lei
- School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510006, China
- Cheng Chen
- School of Electronics and Information Technology, Sun Yat-sen University, Guangzhou 510006, China
27
Feedback Artificial Shuffled Shepherd Optimization-Based Deep Maxout Network for Human Emotion Recognition Using EEG Signals. Int J Telemed Appl 2022; 2022:3749413. [PMID: 35282409] [PMCID: PMC8904914] [DOI: 10.1155/2022/3749413]
Abstract
Emotion recognition is very important for humans in order to enhance self-awareness and react correctly to the actions around them. Owing to the complexity and variety of emotions, EEG-based emotion recognition remains a difficult problem. Hence, an effective emotion recognition approach is designed using the proposed feedback artificial shuffled shepherd optimization- (FASSO-) based deep maxout network (DMN) for recognizing emotions from EEG signals. The proposed technique incorporates the feedback artificial tree (FAT) algorithm and the shuffled shepherd optimization algorithm (SSOA). Here, a median filter is used in preprocessing to remove the noise present in the EEG signals. Features such as DWT coefficients, spectral flatness, logarithmic band power, fluctuation index, spectral decrease, spectral roll-off, and relative energy are extracted for further processing. Based on the augmented data, emotion recognition is accomplished using the DMN, where the training of the DMN is performed with the proposed FASSO method. Furthermore, the experimental results and performance analysis show that the proposed algorithm performs efficiently with respect to accuracy, specificity, and sensitivity, with maximal values of 0.889, 0.89, and 0.886, respectively.
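Two of the spectral features listed above can be computed directly from a power spectrum. A self-contained sketch using a naive DFT (O(n²), fine for short illustrative windows; a real pipeline would use an FFT and the paper's own parameterization):

```python
import math

def power_spectrum(x):
    """Naive DFT power spectrum of a real-valued window."""
    n = len(x)
    spec = []
    for k in range(n):
        re = sum(x[t] * math.cos(2 * math.pi * k * t / n) for t in range(n))
        im = -sum(x[t] * math.sin(2 * math.pi * k * t / n) for t in range(n))
        spec.append(re * re + im * im)
    return spec

def spectral_flatness(power):
    """Geometric mean over arithmetic mean of spectral power.
    Close to 1.0 for a flat (noise-like) spectrum, near 0 for a tone."""
    eps = 1e-20
    log_gm = sum(math.log(p + eps) for p in power) / len(power)
    return math.exp(log_gm) / (sum(power) / len(power) + eps)

def log_band_power(power, lo, hi):
    """Logarithmic band power over spectral bins [lo, hi)."""
    return math.log(sum(power[lo:hi]) + 1e-20)
```

An impulse (flat spectrum) scores a flatness near 1, while a pure cosine concentrates its energy in two bins and scores near 0.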
28
Chen H, Jin M, Li Z, Fan C, Li J, He H. MS-MDA: Multisource Marginal Distribution Adaptation for Cross-Subject and Cross-Session EEG Emotion Recognition. Front Neurosci 2021; 15:778488. [PMID: 34949983] [PMCID: PMC8688841] [DOI: 10.3389/fnins.2021.778488]
Abstract
As an essential element for the diagnosis and rehabilitation of psychiatric disorders, electroencephalogram (EEG)-based emotion recognition has achieved significant progress due to its high precision and reliability. However, one obstacle to practicality lies in the variability between subjects and sessions. Although several studies have adopted domain adaptation (DA) approaches to tackle this problem, most of them treat multiple EEG data from different subjects and sessions together as a single source domain for transfer, which either fails to satisfy the assumption of domain adaptation that the source has a certain marginal distribution, or increases the difficulty of adaptation. We therefore propose multi-source marginal distribution adaptation (MS-MDA) for EEG emotion recognition, which takes both domain-invariant and domain-specific features into consideration. First, we assume that different EEG data share the same low-level features; then we construct independent branches for multiple EEG source domains to adopt one-to-one domain adaptation and extract domain-specific features. Finally, the inference is made by the multiple branches. We evaluate our method on SEED and SEED-IV for recognizing three and four emotions, respectively. Experimental results show that MS-MDA outperforms the comparison methods and state-of-the-art models in cross-session and cross-subject transfer scenarios in our settings. Code is available at https://github.com/VoiceBeer/MS-MDA.
Affiliation(s)
- Hao Chen
- HwaMei Hospital, University of Chinese Academy of Sciences, Ningbo, China
- Center for Pattern Recognition and Intelligent Medicine, Ningbo Institute of Life and Health Industry, University of Chinese Academy of Sciences, Ningbo, China
- Ming Jin
- HwaMei Hospital, University of Chinese Academy of Sciences, Ningbo, China
- Center for Pattern Recognition and Intelligent Medicine, Ningbo Institute of Life and Health Industry, University of Chinese Academy of Sciences, Ningbo, China
- Zhunan Li
- HwaMei Hospital, University of Chinese Academy of Sciences, Ningbo, China
- Center for Pattern Recognition and Intelligent Medicine, Ningbo Institute of Life and Health Industry, University of Chinese Academy of Sciences, Ningbo, China
- Cunhang Fan
- Anhui Province Key Laboratory of Multimodal Cognitive Computation, School of Computer Science and Technology, Anhui University, Hefei, China
- Jinpeng Li
- HwaMei Hospital, University of Chinese Academy of Sciences, Ningbo, China
- Center for Pattern Recognition and Intelligent Medicine, Ningbo Institute of Life and Health Industry, University of Chinese Academy of Sciences, Ningbo, China
- Huiguang He
- Research Center for Brain-inspired Intelligence and National Laboratory of Pattern Recognition, Institute of Automation, Chinese Academy of Sciences, Beijing, China
29
Cai J, Xiao R, Cui W, Zhang S, Liu G. Application of Electroencephalography-Based Machine Learning in Emotion Recognition: A Review. Front Syst Neurosci 2021; 15:729707. [PMID: 34887732] [PMCID: PMC8649925] [DOI: 10.3389/fnsys.2021.729707]
Abstract
Emotion recognition has become increasingly prominent in the medical field and human-computer interaction. When people's emotions change under external stimuli, various physiological signals of the human body fluctuate. Electroencephalography (EEG) is closely related to brain activity, making it possible to judge a subject's emotional changes through EEG signals. Meanwhile, machine learning algorithms, which are good at digging out data features from a statistical perspective and making judgments, have developed by leaps and bounds. Therefore, using machine learning to extract feature vectors related to emotional states from EEG signals and constructing a classifier to separate emotions into discrete states has broad development prospects. This paper introduces the acquisition, preprocessing, feature extraction, and classification of EEG signals in sequence, following the progress of EEG-based machine learning algorithms for emotion recognition. It may help beginners who will use EEG-based machine learning algorithms for emotion recognition to understand the development status of this field. The articles we selected were all retrieved from the Web of Science platform, and the publication dates of most of them are concentrated in 2016–2021.
Affiliation(s)
- Jing Cai
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
- Ruolan Xiao
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
- Wenjie Cui
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
- Shang Zhang
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
- Guangda Liu
- College of Instrumentation and Electrical Engineering, Jilin University, Changchun, China
30
He W, Ye Y, Li Y, Pan T, Lu L. Online Cross-subject Emotion Recognition from ECG via Unsupervised Domain Adaptation. Annu Int Conf IEEE Eng Med Biol Soc 2021; 2021:1001-1005. [PMID: 34891457] [DOI: 10.1109/embc46164.2021.9630433]
Abstract
Performing cross-subject emotion recognition (ER) from electrocardiogram (ECG) signals is challenging, since the inter-subject discrepancy (caused by individual differences) between source and target subjects (new subjects) may hinder generalization to new subjects. Recently, several ER methods based on unsupervised domain adaptation (UDA) have been proposed to address inter-subject discrepancy. However, when applied in online scenarios with time-varying ECG, existing methods may suffer performance degradation because they neglect the intra-subject discrepancy (caused by time-varying ECG) within target subjects, or they need to re-train the ER model, which is time- and resource-consuming. In this paper, we propose an online cross-subject ER approach from ECG signals via UDA, consisting of two stages. In the training stage, we train a classifier on a shared subspace with lower inter-subject discrepancy. In the online recognition stage, an online data adaptation (ODA) method is introduced to adapt to time-varying ECG by reducing the intra-subject discrepancy, after which online recognition results can be obtained from the trained classifier. Experimental results on the Dreamer and Amigos datasets, with valence and arousal emotions, demonstrate that our proposed approach improves classification accuracy by about 12% compared with the baseline method and is robust to time-varying ECG in online scenarios.
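The entry does not spell out the ODA procedure; as a generic sketch of the underlying idea (adapting to time-varying signal statistics online), a running standardizer based on Welford's algorithm can incrementally track feature mean and variance and z-score incoming samples. This is an illustrative stand-in, not the authors' actual ODA method:

```python
import numpy as np

class RunningStandardizer:
    """Incrementally track per-dimension mean/variance of a feature stream
    (Welford's algorithm) and z-score samples against the running statistics --
    a minimal stand-in for online adaptation to time-varying signals."""
    def __init__(self, dim):
        self.n = 0
        self.mean = np.zeros(dim)
        self.m2 = np.zeros(dim)  # sum of squared deviations

    def update(self, x):
        self.n += 1
        delta = x - self.mean
        self.mean += delta / self.n
        self.m2 += delta * (x - self.mean)

    def transform(self, x):
        std = np.sqrt(self.m2 / max(self.n - 1, 1)) + 1e-8
        return (x - self.mean) / std

# Toy stream of 4-dimensional features with a non-zero offset and scale
rng = np.random.default_rng(1)
stream = rng.normal(loc=5.0, scale=3.0, size=(2000, 4))
std = RunningStandardizer(dim=4)
for sample in stream:
    std.update(sample)
normalized = np.array([std.transform(s) for s in stream])
```

After adaptation the stream is roughly zero-mean, unit-variance regardless of the original offset/scale.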
31
Zhao Y, Dai G, Borghini G, Zhang J, Li X, Zhang Z, Aricò P, Di Flumeri G, Babiloni F, Zeng H. Label-Based Alignment Multi-Source Domain Adaptation for Cross-Subject EEG Fatigue Mental State Evaluation. Front Hum Neurosci 2021; 15:706270. [PMID: 34658814] [PMCID: PMC8519604] [DOI: 10.3389/fnhum.2021.706270]
Abstract
Accurate detection of driving fatigue helps significantly reduce the rate of road traffic accidents. Electroencephalogram (EEG)-based methods are proven efficient for evaluating mental fatigue. Owing to the high non-linearity of EEG, as well as significant individual differences, EEG-based fatigue mental state evaluation across different subjects remains challenging. In this study, we propose Label-based Alignment Multi-Source Domain Adaptation (LA-MSDA) for cross-subject EEG fatigue mental state evaluation. Specifically, LA-MSDA considers the local feature distributions of relevant labels between different domains, which efficiently eliminates the negative impact of significant individual differences by aligning label-based feature distributions. In addition, a global optimization strategy is introduced to address classifier confusion around decision boundaries and improve the generalization ability of LA-MSDA. Experimental results show that LA-MSDA achieves remarkable results on cross-subject EEG-based fatigue mental state evaluation, and it is expected to have wide application prospects in practical brain-computer interaction (BCI), such as online monitoring of driver fatigue or assisting the development of on-board safety systems.
Affiliation(s)
- Yue Zhao
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, China
- Guojun Dai
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, China
- Gianluca Borghini
- Industrial NeuroScience Lab, University of Rome "La Sapienza", Rome, Italy
- Jiaming Zhang
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, China
- Xiufeng Li
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, China
- Zhenyan Zhang
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, China
- Pietro Aricò
- Industrial NeuroScience Lab, University of Rome "La Sapienza", Rome, Italy
- Fabio Babiloni
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, China
- Industrial NeuroScience Lab, University of Rome "La Sapienza", Rome, Italy
- Hong Zeng
- School of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, China
- Key Laboratory of Brain Machine Collaborative Intelligence of Zhejiang Province, Hangzhou, China
32
Luo J, Wu M, Wang Z, Chen Y, Yang Y. Progressive low-rank subspace alignment based on semi-supervised joint domain adaption for personalized emotion recognition. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.05.064]
33
Filali H, Riffi J, Aboussaleh I, Mahraz AM, Tairi H. Meaningful Learning for Deep Facial Emotional Features. Neural Process Lett 2021. [DOI: 10.1007/s11063-021-10636-1]
34
Gu X, Cao Z, Jolfaei A, Xu P, Wu D, Jung TP, Lin CT. EEG-Based Brain-Computer Interfaces (BCIs): A Survey of Recent Studies on Signal Sensing Technologies and Computational Intelligence Approaches and Their Applications. IEEE/ACM Trans Comput Biol Bioinform 2021; 18:1645-1666. [PMID: 33465029] [DOI: 10.1109/tcbb.2021.3052811]
Abstract
Brain-computer interfaces (BCIs) enhance the capability of human brain activities to interact with the environment. Recent advancements in technology and machine learning algorithms have increased interest in electroencephalography (EEG)-based BCI applications. EEG-based intelligent BCI systems can facilitate continuous monitoring of fluctuations in human cognitive states under monotonous tasks, which benefits both people in need of healthcare support and researchers across domains. In this review, we survey the recent literature on EEG signal sensing technologies and computational intelligence approaches in BCI applications, filling the gaps in systematic summaries of the past five years. Specifically, we first review the current status of BCI and of signal sensing technologies for collecting reliable EEG signals. Then, we present state-of-the-art computational intelligence techniques, including fuzzy models and transfer learning in machine learning and deep learning algorithms, to detect, monitor, and maintain human cognitive states and task performance in prevalent applications. Finally, we present several innovative BCI-inspired healthcare applications and discuss future research directions in EEG-based BCI research.
35
Ko W, Jeon E, Jeong S, Phyo J, Suk HI. A Survey on Deep Learning-Based Short/Zero-Calibration Approaches for EEG-Based Brain-Computer Interfaces. Front Hum Neurosci 2021; 15:643386. [PMID: 34140883] [PMCID: PMC8204721] [DOI: 10.3389/fnhum.2021.643386]
Abstract
Brain-computer interfaces (BCIs) utilizing machine learning techniques are an emerging technology that enables a communication pathway between a user and an external system, such as a computer. Owing to its practicality, electroencephalography (EEG) is one of the most widely used measurements for BCI. However, EEG has complex patterns and EEG-based BCIs mostly involve a cost/time-consuming calibration phase; thus, acquiring sufficient EEG data is rarely possible. Recently, deep learning (DL) has had a theoretical/practical impact on BCI research because of its use in learning representations of complex patterns inherent in EEG. Moreover, algorithmic advances in DL facilitate short/zero-calibration in BCI, thereby suppressing the data acquisition phase. Those advancements include data augmentation (DA), increasing the number of training samples without acquiring additional data, and transfer learning (TL), taking advantage of representative knowledge obtained from one dataset to address the so-called data insufficiency problem in other datasets. In this study, we review DL-based short/zero-calibration methods for BCI. Further, we elaborate methodological/algorithmic trends, highlight intriguing approaches in the literature, and discuss directions for further research. In particular, we search for generative model-based and geometric manipulation-based DA methods. Additionally, we categorize TL techniques in DL-based BCIs into explicit and implicit methods. Our systematization reveals advances in the DA and TL methods. Among the studies reviewed herein, ~45% of DA studies used generative model-based techniques, whereas ~45% of TL studies used explicit knowledge transferring strategy. Moreover, based on our literature review, we recommend an appropriate DA strategy for DL-based BCIs and discuss trends of TLs used in DL-based BCIs.
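The survey's data augmentation (DA) category includes simple noise-based manipulations alongside generative models. A minimal sketch of the noise-based family, generating noisy copies of EEG trials at a target signal-to-noise ratio (the SNR value, trial shapes, and helper name are illustrative assumptions, not from any cited paper):

```python
import numpy as np

def augment_with_noise(trials, snr_db=20.0, copies=2, seed=0):
    """Return the original trials plus `copies` noisy versions of each,
    with additive Gaussian noise scaled to the requested SNR (in dB).
    trials: array of shape (n_trials, n_channels, n_samples)."""
    rng = np.random.default_rng(seed)
    out = [trials]
    signal_power = np.mean(trials ** 2, axis=(1, 2), keepdims=True)
    noise_power = signal_power / (10 ** (snr_db / 10))
    for _ in range(copies):
        noise = rng.normal(size=trials.shape) * np.sqrt(noise_power)
        out.append(trials + noise)
    return np.concatenate(out, axis=0)

# 10 toy trials, 62 channels, 200 samples -> 30 trials after augmentation
rng = np.random.default_rng(0)
trials = rng.normal(size=(10, 62, 200))
augmented = augment_with_noise(trials, snr_db=20.0, copies=2)
```

Labels are replicated alongside the trials; the first block of the output is the untouched original data.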
Affiliation(s)
- Wonjun Ko
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Eunjin Jeon
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Seungwoo Jeong
- Department of Artificial Intelligence, Korea University, Seoul, South Korea
- Jaeun Phyo
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Heung-Il Suk
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Department of Artificial Intelligence, Korea University, Seoul, South Korea
36
Tao J, Dan Y. Multi-Source Co-adaptation for EEG-Based Emotion Recognition by Mining Correlation Information. Front Neurosci 2021; 15:677106. [PMID: 34054422] [PMCID: PMC8155359] [DOI: 10.3389/fnins.2021.677106]
Abstract
Since each individual subject may present completely different electroencephalogram (EEG) patterns from other subjects, existing subject-independent emotion classifiers trained on data sampled across subjects or across datasets generally fail to achieve sound accuracy. In this scenario, domain adaptation can be employed to address the problem; it has recently received extensive attention owing to its effectiveness in cross-distribution learning. Focusing on cross-subject and cross-dataset automated emotion recognition with EEG features, we propose in this article a robust multi-source co-adaptation framework that mines diverse correlation information (MACI) among domains and features, with ℓ2,1-norm and correlation-metric regularization. Specifically, by minimizing the statistical and semantic distribution differences between source and target domains, multiple subject-invariant classifiers can be learned together in a joint framework, which allows MACI to use relevant knowledge from multiple sources by exploiting the developed correlation metric function. Comprehensive experimental evidence on the DEAP and SEED datasets verifies the better performance of MACI in EEG-based emotion recognition.
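The ℓ2,1-norm regularizer mentioned above has a simple closed form: the sum of the Euclidean norms of the rows of a weight matrix, which encourages whole rows (i.e., features) to go to zero. A quick sketch of the computation itself, independent of the MACI framework:

```python
import numpy as np

def l21_norm(W):
    """||W||_{2,1} = sum_i ||w_i||_2 -- sum of row-wise Euclidean norms.
    As a regularizer it induces row sparsity (joint feature selection)."""
    return np.sum(np.sqrt(np.sum(W ** 2, axis=1)))

W = np.array([[3.0, 4.0],
              [0.0, 0.0],
              [1.0, 0.0]])
value = l21_norm(W)  # row norms 5 + 0 + 1 = 6
```

Note the zero row contributes nothing, which is exactly why minimizing this norm drives entire rows to zero rather than individual entries.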
37
Ahmad IS, Zhang S, Saminu S, Wang L, Isselmou AEK, Cai Z, Javaid I, Kamhi S, Kulsum U. Deep Learning Based on CNN for Emotion Recognition Using EEG Signal. WSEAS Trans Signal Process 2021; 17:28-40. [DOI: 10.37394/232014.2021.17.4]
Abstract
Emotion recognition based on brain-computer interfaces (BCI) has attracted significant research attention despite its difficulty. It plays a vital role in human cognition and helps in decision-making. Many researchers use electroencephalogram (EEG) signals to study emotion because they are easy and convenient to acquire. Deep learning has been employed for emotion recognition systems, recognizing emotion from single- or multi-modal input, with visual or music stimuli shown on a screen. In this article, a convolutional neural network (CNN) model is introduced to simultaneously learn features and recognize positive, neutral, and negative emotional states from pure EEG signals in a single model, based on the SJTU emotion EEG dataset (SEED), with ResNet50 and the Adam optimizer. The dataset is shuffled, divided into training and testing sets, and then fed to the CNN model. Negative emotion achieved the highest accuracy of 94.86%, followed by neutral emotion with 94.29% and positive emotion with 93.25%, for an average accuracy of 94.13%. The results show the excellent classification ability of the model for emotion recognition.
Affiliation(s)
- Isah Salim Ahmad
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
- Shuai Zhang
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
- Sani Saminu
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
- Lingyue Wang
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
- Abd El Kader Isselmou
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
- Ziliang Cai
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
- Imran Javaid
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
- Souha Kamhi
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
- Ummay Kulsum
- State Key Laboratory of Reliability and Intelligence of Electrical Equipment, Hebei University of Technology, Tianjin, 300130, P.R. China
38
Hagad JL, Kimura T, Fukui KI, Numao M. Learning Subject-Generalized Topographical EEG Embeddings Using Deep Variational Autoencoders and Domain-Adversarial Regularization. Sensors (Basel) 2021; 21:1792. [PMID: 33806712] [PMCID: PMC7961341] [DOI: 10.3390/s21051792]
Abstract
Two of the biggest challenges in building models for detecting emotions from electroencephalography (EEG) devices are the relatively small amount of labeled samples and the strong variability of signal feature distributions between different subjects. In this study, we propose a context-generalized model that tackles the data constraints and subject variability simultaneously using a deep neural network architecture optimized for normally distributed subject-independent feature embeddings. Variational autoencoders (VAEs) at the input level allow the lower feature layers of the model to be trained on both labeled and unlabeled samples, maximizing the use of the limited data resources. Meanwhile, variational regularization encourages the model to learn Gaussian-distributed feature embeddings, resulting in robustness to small dataset imbalances. Subject-adversarial regularization applied to the bi-lateral features further enforces subject-independence on the final feature embedding used for emotion classification. The results from subject-independent performance experiments on the SEED and DEAP EEG-emotion datasets show that our model generalizes better across subjects than other state-of-the-art feature embeddings when paired with deep learning classifiers. Furthermore, qualitative analysis of the embedding space reveals that our proposed subject-invariant bi-lateral variational domain adversarial neural network (BiVDANN) architecture may improve the subject-independent performance by discovering normally distributed features.
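Two building blocks of the VAE component described above, the reparameterization trick and the KL regularizer that pushes embeddings toward a standard Gaussian, can be sketched in a few lines. This is a generic VAE sketch, not the authors' BiVDANN implementation:

```python
import numpy as np

def reparameterize(mu, log_var, rng):
    """Sample z = mu + sigma * eps: the reparameterization trick that lets a
    VAE backpropagate through stochastic latent codes."""
    eps = rng.normal(size=mu.shape)
    return mu + np.exp(0.5 * log_var) * eps

def kl_to_standard_normal(mu, log_var):
    """KL(N(mu, sigma^2) || N(0, 1)) summed over latent dims, averaged over
    the batch: the variational regularizer that encourages Gaussian-distributed
    feature embeddings."""
    return np.mean(0.5 * np.sum(np.exp(log_var) + mu ** 2 - 1.0 - log_var, axis=1))

rng = np.random.default_rng(0)
mu = np.zeros((4, 8))       # batch of 4, latent dim 8
log_var = np.zeros((4, 8))  # unit variance
z = reparameterize(mu, log_var, rng)
kl = kl_to_standard_normal(mu, log_var)  # 0 when the posterior is N(0, 1)
```

The KL term is exactly zero when mu = 0 and log_var = 0, and grows as embeddings drift away from the standard normal, which is the pressure toward normally distributed features the abstract describes.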
Affiliation(s)
- Juan Lorenzo Hagad
- Graduate School of Information Science and Technology, Osaka University, Suita, Osaka 565-0871, Japan
- Institute of Scientific and Industrial Research, Osaka University, Ibaraki, Osaka 567-0047, Japan
- Tsukasa Kimura
- Institute of Scientific and Industrial Research, Osaka University, Ibaraki, Osaka 567-0047, Japan
- Ken-ichi Fukui
- Institute of Scientific and Industrial Research, Osaka University, Ibaraki, Osaka 567-0047, Japan
- Masayuki Numao
- Institute of Scientific and Industrial Research, Osaka University, Ibaraki, Osaka 567-0047, Japan
39
Hu W, Huang G, Li L, Zhang L, Zhang Z, Liang Z. Video-triggered EEG-emotion public databases and current methods: A survey. Brain Sci Adv 2021. [DOI: 10.26599/bsa.2020.9050026]
Abstract
Emotions, formed in the process of perceiving the external environment, directly affect human daily life, including social interaction, work efficiency, physical wellness, and mental health. In recent decades, emotion recognition has become a promising research direction with significant application value. Taking advantage of electroencephalogram (EEG) signals (i.e., high time resolution) and video-based external emotion evoking (i.e., rich media information), video-triggered emotion recognition with EEG signals has proven a useful tool for conducting emotion-related studies in a laboratory environment, providing constructive technical support for establishing real-time emotion interaction systems. In this paper, we focus on video-triggered EEG-based emotion recognition and present a systematic introduction to the currently available video-triggered EEG-based emotion databases with the corresponding analysis methods. First, current video-triggered EEG databases for emotion recognition (e.g., DEAP, MAHNOB-HCI, and the SEED series) are presented in full detail. Then, the commonly used EEG feature extraction, feature selection, and modeling methods in video-triggered EEG-based emotion recognition are systematically summarized, and a brief review of the current state of video-triggered EEG-based emotion studies is provided. Finally, the limitations and possible prospects of the existing video-triggered EEG-emotion databases are fully discussed.
Affiliation(s)
- Wanrou Hu
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, Guangdong, China
- Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, Guangdong, China
- Gan Huang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, Guangdong, China
- Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, Guangdong, China
- Linling Li
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, Guangdong, China
- Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, Guangdong, China
- Li Zhang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, Guangdong, China
- Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, Guangdong, China
- Zhiguo Zhang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, Guangdong, China
- Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, Guangdong, China
- Peng Cheng Laboratory, Shenzhen 518055, Guangdong, China
- Zhen Liang
- School of Biomedical Engineering, Health Science Center, Shenzhen University, Shenzhen 518060, Guangdong, China
- Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, Guangdong, China
40
Fdez J, Guttenberg N, Witkowski O, Pasquali A. Cross-Subject EEG-Based Emotion Recognition Through Neural Networks With Stratified Normalization. Front Neurosci 2021; 15:626277. [PMID: 33613187] [PMCID: PMC7888301] [DOI: 10.3389/fnins.2021.626277]
Abstract
Due to a large number of potential applications, a good deal of effort has been recently made toward creating machine learning models that can recognize evoked emotions from one's physiological recordings. In particular, researchers are investigating the use of EEG as a low-cost, non-invasive method. However, the poor homogeneity of the EEG activity across participants hinders the implementation of such a system by a time-consuming calibration stage. In this study, we introduce a new participant-based feature normalization method, named stratified normalization, for training deep neural networks in the task of cross-subject emotion classification from EEG signals. The new method is able to subtract inter-participant variability while maintaining the emotion information in the data. We carried out our analysis on the SEED dataset, which contains 62-channel EEG recordings collected from 15 participants watching film clips. Results demonstrate that networks trained with stratified normalization significantly outperformed standard training with batch normalization. In addition, the highest model performance was achieved when extracting EEG features with the multitaper method, reaching a classification accuracy of 91.6% for two emotion categories (positive and negative) and 79.6% for three (also neutral). This analysis provides us with great insight into the potential benefits that stratified normalization can have when developing any cross-subject model based on EEG.
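The core of the stratified normalization idea described above, z-scoring features within each participant's own recordings so that inter-participant offset and scale differences are removed, can be sketched as follows (a minimal sketch; the paper's exact procedure may differ in detail):

```python
import numpy as np

def stratified_normalize(features, participant_ids):
    """Z-score each feature column within each participant's own recordings,
    removing inter-participant offset/scale while keeping within-participant
    structure (e.g., emotion-related variation) intact."""
    out = np.empty_like(features, dtype=float)
    for pid in np.unique(participant_ids):
        mask = participant_ids == pid
        mu = features[mask].mean(axis=0)
        sd = features[mask].std(axis=0) + 1e-8  # avoid division by zero
        out[mask] = (features[mask] - mu) / sd
    return out

# Two toy "participants" with very different baselines and scales
rng = np.random.default_rng(0)
features = np.vstack([rng.normal(10.0, 2.0, size=(50, 3)),   # participant A
                      rng.normal(-5.0, 0.5, size=(50, 3))])  # participant B
ids = np.array([0] * 50 + [1] * 50)
normalized = stratified_normalize(features, ids)
```

After normalization both participants' feature blocks are zero-mean and unit-variance, so a classifier no longer sees the participant identity encoded in the feature scale.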
Affiliation(s)
- Javier Fdez
- Cross Labs, Cross Compass Ltd., Tokyo, Japan
41
Bao G, Zhuang N, Tong L, Yan B, Shu J, Wang L, Zeng Y, Shen Z. Two-Level Domain Adaptation Neural Network for EEG-Based Emotion Recognition. Front Hum Neurosci 2021; 14:605246. [PMID: 33551775] [PMCID: PMC7854906] [DOI: 10.3389/fnhum.2020.605246]
Abstract
Emotion recognition plays an important part in human-computer interaction (HCI). Currently, the main challenge in electroencephalogram (EEG)-based emotion recognition is the non-stationarity of EEG signals, which causes the performance of a trained model to degrade over time. In this paper, we propose a two-level domain adaptation neural network (TDANN) to construct a transfer model for EEG-based emotion recognition. Specifically, deep features that preserve topological information from EEG signals are extracted from a topological graph using a deep neural network. These features are then passed through TDANN for two-level domain confusion: the first level uses the maximum mean discrepancy (MMD) to reduce the distribution discrepancy of deep features between source and target domains, and the second uses a domain adversarial neural network (DANN) to force the deep features closer to their corresponding class centers. We evaluated the domain-transfer performance of the model on both our self-built data set and the public data set SEED. In the cross-day transfer experiment, the ability to accurately discriminate joy from other emotions was high: sadness (84%), anger (87.04%), and fear (85.32%) on the self-built data set, and the accuracy reached 74.93% on the SEED data set. In the cross-subject transfer experiment, the ability to accurately discriminate joy from other emotions was equally high: sadness (83.79%), anger (84.13%), and fear (81.72%) on the self-built data set, and the average accuracy reached 87.9% on the SEED data set, higher than WGAN-DA. The experimental results demonstrate that the proposed TDANN can effectively handle the domain transfer problem in EEG-based emotion recognition.
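The MMD criterion used for the first-level domain confusion measures the distance between two feature distributions via kernel means. A minimal sketch with an RBF kernel (the kernel choice, bandwidth, and toy data are illustrative assumptions, not the paper's configuration):

```python
import numpy as np

def mmd_rbf(X, Y, gamma=1.0):
    """Biased estimate of squared maximum mean discrepancy with an RBF kernel:
    MMD^2 = E[k(x,x')] + E[k(y,y')] - 2 E[k(x,y)]. Minimizing this aligns the
    source and target feature distributions."""
    def k(A, B):
        sq = np.sum(A ** 2, 1)[:, None] + np.sum(B ** 2, 1)[None, :] - 2 * A @ B.T
        return np.exp(-gamma * sq)
    return k(X, X).mean() + k(Y, Y).mean() - 2 * k(X, Y).mean()

rng = np.random.default_rng(0)
source = rng.normal(0.0, 1.0, size=(200, 5))
target_same = rng.normal(0.0, 1.0, size=(200, 5))      # same distribution
target_shifted = rng.normal(2.0, 1.0, size=(200, 5))   # mean-shifted domain
close = mmd_rbf(source, target_same)     # small
far = mmd_rbf(source, target_shifted)    # larger: distributions differ
```

In a TDANN-style model this quantity would be computed on deep-feature batches and added to the loss, so gradient descent pulls the two domains' feature distributions together.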
Affiliation(s)
- Guangcheng Bao
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China
- Ning Zhuang
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China
- Li Tong
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China
- Bin Yan
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China
- Jun Shu
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China
- Linyuan Wang
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China
- Ying Zeng
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China
- Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Zhichong Shen
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China
42
A deep multi-source adaptation transfer network for cross-subject electroencephalogram emotion recognition. Neural Comput Appl 2021. [DOI: 10.1007/s00521-020-05670-4]
43
Li W, Huan W, Hou B, Tian Y, Zhang Z, Song A. Can Emotion be Transferred? – A Review on Transfer Learning for EEG-Based Emotion Recognition. IEEE Trans Cogn Dev Syst 2021. [DOI: 10.1109/tcds.2021.3098842]
44
Standardization-refinement domain adaptation method for cross-subject EEG-based classification in imagined speech recognition. Pattern Recognit Lett 2021. [DOI: 10.1016/j.patrec.2020.11.013]
45

46
Xia K, Ni T, Yin H, Chen B. Cross-Domain Classification Model With Knowledge Utilization Maximization for Recognition of Epileptic EEG Signals. IEEE/ACM Trans Comput Biol Bioinform 2021; 18:53-61. [PMID: 32078557] [DOI: 10.1109/tcbb.2020.2973978]
Abstract
Conventional classification models for epileptic EEG signal recognition need sufficient labeled samples as a training dataset. In addition, when training and testing EEG samples are collected from different distributions, for example because of differences in patient groups or acquisition devices, such methods generally cannot perform well. In this paper, a cross-domain classification model with knowledge utilization maximization, called CDC-KUM, is presented, which takes advantage of the global data structure provided by the labeled samples in the related domain and the unlabeled samples in the current domain. After mapping the data into a kernel space, a pairwise-constraint regularization term is combined with the predictive differences of the labeled data in the source domain. Meanwhile, a soft clustering regularization term using quadratic weights and Gini-Simpson diversity is applied to exploit the distribution information of the unlabeled data in the target domain. Experimental results show that the CDC-KUM model outperformed several traditional non-transfer and transfer classification methods for recognition of epileptic EEG signals.
47
Zhang X, Yao L, Wang X, Monaghan JJM, Mcalpine D, Zhang Y. A survey on deep learning-based non-invasive brain signals: recent advances and new frontiers. J Neural Eng 2020; 18. [PMID: 33171452] [DOI: 10.1088/1741-2552/abc902]
Abstract
Brain signals refer to the biometric information collected from the human brain. Research on brain signals aims to discover the underlying neurological or physical status of individuals through signal decoding. Emerging deep learning techniques have significantly improved the study of brain signals in recent years. In this work, we first present a taxonomy of non-invasive brain signals and the basics of deep learning algorithms. Then, we provide a comprehensive survey of the frontiers of applying deep learning to non-invasive brain signal analysis, summarizing a large number of recent publications. Moreover, building on these deep learning-powered brain signal studies, we report potential real-world applications that benefit not only disabled people but also healthy individuals. Finally, we discuss the open challenges and future directions.
Affiliation(s)
- Xiang Zhang
- Harvard University, Cambridge, Massachusetts, UNITED STATES
- Lina Yao
- University of New South Wales, Sydney, New South Wales, AUSTRALIA
- Xianzhi Wang
- Faculty of Engineering and IT, University of Technology Sydney, 81 Broadway, Ultimo, Sydney, New South Wales, 2007, AUSTRALIA
- David McAlpine
- Macquarie University, Sydney, New South Wales, AUSTRALIA
- Yu Zhang
- Stanford University, Stanford, California, 94305-6104, UNITED STATES
|
48
|
Chakraborty J, Nandy A. Discrete wavelet transform based data representation in deep neural network for gait abnormality detection. Biomed Signal Process Control 2020. [DOI: 10.1016/j.bspc.2020.102076] [Citation(s) in RCA: 14] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
|
49
|
O’Donovan SD, Driessens K, Lopatta D, Wimmenauer F, Lukas A, Neeven J, Stumm T, Smirnov E, Lenz M, Ertaylan G, Jennen DGJ, van Riel NAW, Cavill R, Peeters RLM, de Kok TMCM. Use of deep learning methods to translate drug-induced gene expression changes from rat to human primary hepatocytes. PLoS One 2020; 15:e0236392. [PMID: 32780735 PMCID: PMC7418976 DOI: 10.1371/journal.pone.0236392] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/08/2019] [Accepted: 07/06/2020] [Indexed: 11/19/2022] Open
Abstract
In clinical trials, animal and cell line models are often used to evaluate the potential toxic effects of a novel compound or candidate drug before progressing to human trials. However, relating the results of animal and in vitro model exposures to relevant clinical outcomes in the human in vivo system remains challenging, relying on often putative orthologs. In recent years, multiple studies have demonstrated that the repeated dose rodent bioassay, the current gold standard in the field, lacks sufficient sensitivity and specificity in predicting toxic effects of pharmaceuticals in humans. In this study, we evaluate the potential of deep learning techniques to translate the pattern of gene expression measured following an exposure from rodents to humans, circumventing the current reliance on orthologs, and also from in vitro to in vivo experimental designs. Of the deep learning architectures applied in this study, the convolutional neural network (CNN) and a deep artificial neural network with bottleneck architecture significantly outperform classical machine learning techniques in predicting the time series of gene expression in primary human hepatocytes, given a measured time series of gene expression from primary rat hepatocytes following in vitro exposure to a previously unseen compound, across multiple toxicologically relevant gene sets. Across 76 genes that have been shown to be predictive for identifying carcinogenicity, the average mean absolute error is reduced from 0.0172 for a random regression forest to 0.0166 for the CNN model (p < 0.05). These deep learning architectures also perform well when applied to predict time series of in vivo gene expression given measured time series of in vitro gene expression for rats.
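The abstract's CNN operates on gene-expression time series and is scored by mean absolute error. As a minimal sketch of those two ingredients only (a valid-mode 1D convolution over a time series, and the MAE metric), and not the study's actual architecture or data:

```python
def conv1d(series, kernel, bias=0.0):
    # Valid-mode 1D convolution: slide the kernel along the time series
    # and take a weighted sum at each position. This is the core
    # operation a 1D CNN layer applies to an expression time series.
    k = len(kernel)
    return [sum(series[t + i] * kernel[i] for i in range(k)) + bias
            for t in range(len(series) - k + 1)]

def mean_absolute_error(pred, target):
    # The error metric reported in the abstract (e.g. 0.0166 for the CNN):
    # average absolute deviation between predicted and measured values.
    return sum(abs(p - t) for p, t in zip(pred, target)) / len(pred)
```

In the study's setting, the input series would be rat hepatocyte expression over time and the regression target the matched human hepatocyte series; the toy kernel and series here are purely illustrative.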
Affiliation(s)
- Shauna D. O’Donovan
- Maastricht Centre for Systems Biology (MaCSBio), Maastricht University, Maastricht, The Netherlands
- Division of Human Nutrition and Health, Wageningen University and Research, Wageningen, The Netherlands
- Kurt Driessens
- Dept. of Data Science and Knowledge Engineering, Maastricht University, Maastricht, The Netherlands
- Daniel Lopatta
- Dept. of Data Science and Knowledge Engineering, Maastricht University, Maastricht, The Netherlands
- Florian Wimmenauer
- Dept. of Data Science and Knowledge Engineering, Maastricht University, Maastricht, The Netherlands
- Alexander Lukas
- Dept. of Data Science and Knowledge Engineering, Maastricht University, Maastricht, The Netherlands
- Jelmer Neeven
- Dept. of Data Science and Knowledge Engineering, Maastricht University, Maastricht, The Netherlands
- Tobias Stumm
- Dept. of Data Science and Knowledge Engineering, Maastricht University, Maastricht, The Netherlands
- Evgueni Smirnov
- Dept. of Data Science and Knowledge Engineering, Maastricht University, Maastricht, The Netherlands
- Michael Lenz
- Maastricht Centre for Systems Biology (MaCSBio), Maastricht University, Maastricht, The Netherlands
- Institute of Organismic and Molecular Evolution, Johannes Gutenberg University Mainz, Mainz, Germany
- Preventive Cardiology and Preventative Medicine—Center for Cardiology, University Medical Center of the Johannes Gutenberg University Mainz, Mainz, Germany
- Gokhan Ertaylan
- Maastricht Centre for Systems Biology (MaCSBio), Maastricht University, Maastricht, The Netherlands
- Flemish Institute for Technological Research (VITO), Mol, Belgium
- Danyel G. J. Jennen
- Dept. of Toxicogenomics, GROW School for Oncology and Developmental Biology, Maastricht University, Maastricht, The Netherlands
- Natal A. W. van Riel
- Maastricht Centre for Systems Biology (MaCSBio), Maastricht University, Maastricht, The Netherlands
- Dept. of Biomedical Engineering, Eindhoven University of Technology, The Netherlands
- Rachel Cavill
- Dept. of Data Science and Knowledge Engineering, Maastricht University, Maastricht, The Netherlands
- Ralf L. M. Peeters
- Maastricht Centre for Systems Biology (MaCSBio), Maastricht University, Maastricht, The Netherlands
- Dept. of Data Science and Knowledge Engineering, Maastricht University, Maastricht, The Netherlands
- Theo M. C. M. de Kok
- Maastricht Centre for Systems Biology (MaCSBio), Maastricht University, Maastricht, The Netherlands
- Dept. of Toxicogenomics, GROW School for Oncology and Developmental Biology, Maastricht University, Maastricht, The Netherlands
|
50
|
Moon SE, Lee JS. MaDeNet: Disentangling Individuality of EEG Signals through Feature Space Mapping and Detachment. Annu Int Conf IEEE Eng Med Biol Soc 2020; 2020:260-263. [PMID: 33017978 DOI: 10.1109/embc44109.2020.9176301] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
The cross-subject variability, or individuality, of electroencephalography (EEG) signals has often been an obstacle to extracting target-related information from EEG signals when classifying subjects' perceptual states. In this paper, we propose a deep learning-based EEG classification approach that learns a feature space mapping and performs individuality detachment to reduce subject-related information in EEG signals and maximize classification performance. Our experiment on EEG-based video classification shows that our method significantly improves classification accuracy.
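The goal of removing subject-related information can be illustrated with a far simpler baseline than MaDeNet's learned mapping: per-subject mean-centering of feature vectors, which strips each subject's average "signature" from their samples. This is a hypothetical baseline for intuition only, not the paper's method.

```python
def center_per_subject(features, subject_ids):
    # Crude individuality-removal baseline: subtract each subject's mean
    # feature vector from that subject's samples, so that per-subject
    # offsets no longer dominate cross-subject classification.
    by_subject = {}
    for f, s in zip(features, subject_ids):
        by_subject.setdefault(s, []).append(f)
    means = {s: [sum(col) / len(fs) for col in zip(*fs)]
             for s, fs in by_subject.items()}
    return [[x - m for x, m in zip(f, means[s])]
            for f, s in zip(features, subject_ids)]
```

MaDeNet instead learns the mapping end to end, which can remove subject information that a simple mean shift cannot capture.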
|