1. Yuan Z, Zhou Q, Wang B, Zhang Q, Yang Y, Zhao Y, Guo Y, Zhou J, Wang C. PSAEEGNet: pyramid squeeze attention mechanism-based CNN for single-trial EEG classification in RSVP task. Front Hum Neurosci 2024; 18:1385360. PMID: 38756843; PMCID: PMC11097777; DOI: 10.3389/fnhum.2024.1385360.
Abstract
Introduction: Accurate classification of single-trial electroencephalogram (EEG) signals is crucial for EEG-based target image recognition in rapid serial visual presentation (RSVP) tasks. The P300 is an important component of the single-trial EEG in RSVP tasks. However, single-trial EEG signals are usually characterized by a low signal-to-noise ratio and limited sample sizes. Methods: Given these challenges, it is necessary to optimize existing convolutional neural networks (CNNs) to improve the performance of P300 classification. The proposed CNN model, PSAEEGNet, integrates standard convolutional layers, pyramid squeeze attention (PSA) modules, and deep convolutional layers. This approach extracts the temporal and spatial features of the P300 at a finer level of granularity. Results: Compared with several existing single-trial EEG classification methods for RSVP tasks, the proposed model shows significantly improved performance. The mean true positive rate for PSAEEGNet is 0.7949, and the mean area under the receiver operating characteristic curve (AUC) is 0.9341 (p < 0.05). Discussion: These results suggest that the proposed model effectively extracts features from both the temporal and spatial dimensions of the P300, leading to more accurate classification of single-trial EEG during RSVP tasks. This model therefore has the potential to significantly enhance the performance of EEG-based target recognition systems, contributing to the advancement and practical implementation of target recognition in this field.
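The pyramid squeeze attention module described above builds on channel-attention ideas in the squeeze-and-excitation family. As a rough illustration of the underlying mechanism (not the paper's PSA module, whose multi-scale pyramid grouping is not reproduced here), a plain squeeze-and-excitation gate over EEG feature channels might look like this, with randomly initialized weights standing in for learned parameters:

```python
import numpy as np

def se_channel_attention(x, w1, w2):
    """Squeeze-and-excitation channel attention (illustrative).

    x  : (channels, time) feature map
    w1 : (channels//r, channels) squeeze weights
    w2 : (channels, channels//r) excitation weights
    Returns x reweighted per channel by a sigmoid gate.
    """
    z = x.mean(axis=1)                    # squeeze: global average pool -> (channels,)
    h = np.maximum(w1 @ z, 0.0)           # bottleneck + ReLU
    s = 1.0 / (1.0 + np.exp(-(w2 @ h)))   # sigmoid gate in (0, 1), one value per channel
    return x * s[:, None]                 # excite: rescale each channel

rng = np.random.default_rng(0)
C, T, r = 8, 128, 4                       # hypothetical channel count, length, reduction
x = rng.standard_normal((C, T))
w1 = rng.standard_normal((C // r, C)) * 0.1
w2 = rng.standard_normal((C, C // r)) * 0.1
y = se_channel_attention(x, w1, w2)
print(y.shape)  # (8, 128)
```

Because the gate lies strictly between 0 and 1, the module can only attenuate channels, letting the network emphasize informative electrodes relative to the rest.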
Affiliation(s)
- Zijian Yuan: School of Intelligent Medicine and Biotechnology, Guilin Medical University, Guangxi, China; Beijing Institute of Basic Medical Sciences, Beijing, China
- Qian Zhou: Beijing Institute of Basic Medical Sciences, Beijing, China
- Baozeng Wang: Beijing Institute of Basic Medical Sciences, Beijing, China
- Qi Zhang: Beijing Institute of Basic Medical Sciences, Beijing, China
- Yang Yang: Beijing Institute of Basic Medical Sciences, Beijing, China
- Yuwei Zhao: Beijing Institute of Basic Medical Sciences, Beijing, China
- Yong Guo: School of Intelligent Medicine and Biotechnology, Guilin Medical University, Guangxi, China
- Jin Zhou: Beijing Institute of Basic Medical Sciences, Beijing, China; Chinese Institute for Brain Research, Beijing, China
- Changyong Wang: Beijing Institute of Basic Medical Sciences, Beijing, China
2. Olmez Y, Koca GO, Sengur A, Acharya UR. PS-VTS: particle swarm with visit table strategy for automated emotion recognition with EEG signals. Health Inf Sci Syst 2023; 11:22. PMID: 37151916; PMCID: PMC10160266; DOI: 10.1007/s13755-023-00224-z.
Abstract
Recognizing emotions accurately in real life is crucial for human-computer interaction (HCI) systems. Electroencephalogram (EEG) signals have been extensively employed to identify emotions, and researchers have used several EEG-based emotion identification datasets to validate their proposed models. In this paper, we employ a novel metaheuristic optimization approach for accurate emotion classification by applying it to select both the channels and rhythms of EEG data. We propose the particle swarm with visit table strategy (PS-VTS) metaheuristic technique to improve the effectiveness of EEG-based human emotion identification. First, the EEG signals are denoised using a low-pass filter, and rhythm extraction is then performed using the discrete wavelet transform (DWT). The continuous wavelet transform (CWT) transforms each rhythm signal into a rhythm image. The pre-trained MobileNetV2 model is used for deep feature extraction, and a support vector machine (SVM) classifies the emotions. Two models are developed for selecting optimal channel and rhythm sets. In Model 1, optimal channels are selected separately for each rhythm, and global optima are determined during optimization according to the best channel sets of the rhythms. In Model 2, the best rhythms are first determined for each channel, and the optimal channel-rhythm set is then selected. The proposed model obtained accuracies of 99.2871% and 97.8571% for the classification of HA (high arousal)-LA (low arousal) and HV (high valence)-LV (low valence), respectively, on the DEAP dataset, the highest classification accuracy compared with previously reported methods.
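The DWT rhythm-extraction step can be sketched with a hand-rolled Haar wavelet decomposition; the paper does not specify its wavelet family or band assignments here, so the Haar filters, the 128 Hz sampling rate, and the band labels below are illustrative assumptions:

```python
import numpy as np

def haar_dwt_level(x):
    """One level of the Haar DWT: returns (approximation, detail)."""
    x = x[: len(x) // 2 * 2]                 # truncate to even length
    a = (x[0::2] + x[1::2]) / np.sqrt(2.0)   # low-pass half-band
    d = (x[0::2] - x[1::2]) / np.sqrt(2.0)   # high-pass half-band
    return a, d

def rhythm_decompose(x, levels=4):
    """Recursive DWT: detail sub-bands ordered from highest to lowest frequency.

    At fs = 128 Hz the details roughly cover 32-64, 16-32, 8-16, and 4-8 Hz,
    i.e. crude stand-ins for the gamma, beta, alpha, and theta rhythms.
    """
    details, a = [], x
    for _ in range(levels):
        a, d = haar_dwt_level(a)
        details.append(d)
    return details, a                        # a holds the residual low band

fs = 128
t = np.arange(fs * 4) / fs
x = np.sin(2 * np.pi * 10 * t)               # 10 Hz test tone in the alpha range
details, approx = rhythm_decompose(x)
energies = [float(np.sum(d ** 2)) for d in details]
print(int(np.argmax(energies)))              # 2: the 8-16 Hz sub-band dominates
```

The tone's energy lands mostly in the third detail band (index 2, roughly 8-16 Hz), which is the qualitative behavior the rhythm-extraction stage relies on.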
Affiliation(s)
- Yagmur Olmez: Department of Mechatronics Engineering, University of Firat, 23119 Elazig, Turkey
- Gonca Ozmen Koca: Department of Mechatronics Engineering, University of Firat, 23119 Elazig, Turkey
- Abdulkadir Sengur: Department of Electrical and Electronics Engineering, University of Firat, 23119 Elazig, Turkey
- U. Rajendra Acharya: School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia
3. Jafari M, Shoeibi A, Khodatars M, Bagherzadeh S, Shalbaf A, García DL, Gorriz JM, Acharya UR. Emotion recognition in EEG signals using deep learning methods: a review. Comput Biol Med 2023; 165:107450. PMID: 37708717; DOI: 10.1016/j.compbiomed.2023.107450.
Abstract
Emotions are a critical aspect of daily life and serve a crucial role in human decision-making, planning, reasoning, and other mental states. As a result, they are considered a significant factor in human interactions. Human emotions can be identified through various sources, such as facial expressions, speech, behavior (gesture/position), or physiological signals. The use of physiological signals can enhance the objectivity and reliability of emotion detection. Compared with peripheral physiological signals, electroencephalogram (EEG) recordings are directly generated by the central nervous system and are closely related to human emotions. EEG signals have a spatial resolution that facilitates the evaluation of brain functions, making them a popular modality in emotion recognition studies. Emotion recognition using EEG signals presents several challenges, including signal variability due to electrode positioning, individual differences in signal morphology, and the lack of a universal standard for EEG signal processing. Moreover, identifying the appropriate features for emotion recognition from EEG data requires further research. Finally, there is a need to develop more robust artificial intelligence (AI) methods, including conventional machine learning (ML) and deep learning (DL), to handle the complex and diverse EEG signals associated with emotional states. This paper examines the application of DL techniques to emotion recognition from EEG signals and provides a detailed discussion of the relevant articles. The paper explores the significant challenges in emotion recognition using EEG signals, highlights the potential of DL techniques in addressing these challenges, and suggests the scope for future research in emotion recognition using DL techniques. The paper concludes with a summary of its findings.
Affiliation(s)
- Mahboobeh Jafari: Data Science and Computational Intelligence Institute, University of Granada, Spain
- Afshin Shoeibi: Data Science and Computational Intelligence Institute, University of Granada, Spain
- Marjane Khodatars: Data Science and Computational Intelligence Institute, University of Granada, Spain
- Sara Bagherzadeh: Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Ahmad Shalbaf: Department of Biomedical Engineering and Medical Physics, School of Medicine, Shahid Beheshti University of Medical Sciences, Tehran, Iran
- David López García: Data Science and Computational Intelligence Institute, University of Granada, Spain
- Juan M Gorriz: Data Science and Computational Intelligence Institute, University of Granada, Spain; Department of Psychiatry, University of Cambridge, UK
- U Rajendra Acharya: School of Mathematics, Physics and Computing, University of Southern Queensland, Springfield, Australia
4. Kuang D, Michoski C. SEER-net: Simple EEG-based Recognition network. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2023.104620.
5. Quiles Pérez M, Martínez Beltrán ET, López Bernal S, Martínez Pérez G, Huertas Celdrán A. Analyzing the impact of driving tasks when detecting emotions through brain-computer interfaces. Neural Comput Appl 2023. DOI: 10.1007/s00521-023-08343-0.
Abstract
Traffic accidents are the leading cause of death among young people, a problem that today claims an enormous number of victims. Several technologies have been proposed to prevent accidents, with brain-computer interfaces (BCIs) among the most promising. In this context, BCIs have been used to detect emotional states, concentration issues, and stressful situations, which could play a fundamental role on the road since they are directly related to drivers' decisions. However, there is no extensive literature applying BCIs to detect subjects' emotions in driving scenarios. In such a context, some challenges remain to be solved, such as (i) the impact of performing a driving task on emotion detection and (ii) which emotions are more detectable in driving scenarios. To address these challenges, this work proposes a framework focused on detecting emotions using electroencephalography with machine learning and deep learning algorithms. In addition, a use case has been designed with two scenarios. The first scenario consists of listening to sounds as the primary task to perform, while in the second scenario listening to sounds becomes a secondary task, the primary task being the use of a driving simulator. In this way, it is intended to demonstrate whether BCIs are useful in this driving scenario. The results improve those existing in the literature, achieving 99% accuracy for the detection of two emotions (non-stimuli and angry), 93% for three emotions (non-stimuli, angry, and neutral), and 75% for four emotions (non-stimuli, angry, neutral, and joy).
6. Developing an efficient functional connectivity-based geometric deep network for automatic EEG-based visual decoding. Biomed Signal Process Control 2023. DOI: 10.1016/j.bspc.2022.104221.
7. Bouazizi S, Benmohamed E, Ltifi H. Decision-making based on an improved visual analytics approach for emotion prediction. Intelligent Decision Technologies 2023. DOI: 10.3233/idt-220263.
Abstract
The visual analytics approach allows informed and effective decision-making. It assists decision-makers to visually interact with large amounts of data and to computationally learn valuable hidden patterns in that data, which improves decision quality. In this article, we introduce an enhanced visual analytics model combining cognitive-based visual analysis with data mining-based automatic analysis. As emotions are strongly related to human behaviour and society, emotion prediction is widely considered in decision-making activities. Unlike the speech and facial expression modalities, EEG (electroencephalogram) has the advantage of being able to record information about the internal emotional state that is not always translated into perceptible external manifestations. For this reason, we applied the proposed cognitive approach to EEG data to demonstrate its efficiency for predicting emotional reactions to films. For the automatic analysis, we developed the Echo State Network (ESN) technique, considered an efficient machine learning solution due to its straightforward training procedure and high modelling ability for handling time-series problems. Finally, utility and usability tests were performed to evaluate the developed prototype.
Affiliation(s)
- Samar Bouazizi: Research Groups in Intelligent Machines, National Engineering School of Sfax, University of Sfax, Sfax, Tunisia; Computer Sciences and Mathematics Department, Faculty of Sciences and Technology of Sidi Bouzid, University of Kairouan, Kairouan, Tunisia
- Emna Benmohamed: Research Groups in Intelligent Machines, National Engineering School of Sfax, University of Sfax, Sfax, Tunisia
- Hela Ltifi: Research Groups in Intelligent Machines, National Engineering School of Sfax, University of Sfax, Sfax, Tunisia; Computer Sciences and Mathematics Department, Faculty of Sciences and Technology of Sidi Bouzid, University of Kairouan, Kairouan, Tunisia
8. Garg S. A novel convolution bi-directional gated recurrent unit neural network for emotion recognition in multichannel electroencephalogram signals. Technol Health Care 2022: THC220458. PMID: 36617799; DOI: 10.3233/thc-220458.
Abstract
BACKGROUND: Recognizing emotions in humans is a great challenge in the present era and has several applications in affective computing. Deep learning (DL) has proven a successful tool for predicting emotions in different modalities. OBJECTIVE: To predict 3D emotions with high accuracy from multichannel physiological signals, i.e., the electroencephalogram (EEG). METHODS: A hybrid DL model consisting of a CNN and a GRU is proposed in this work for emotion recognition in EEG recordings. A convolutional neural network (CNN) has the capability of learning abstract representations, whereas gated recurrent units (GRUs) can exploit temporal correlations. A bi-directional variant of the GRU is used here to learn features in both directions. Discrete and dimensional emotion indices are recognized in two publicly available datasets, SEED and DREAMER, respectively. Fused features of energy and Shannon entropy (EnSE) and of energy and differential entropy (EnDE) are fed to the proposed classifier to improve the efficiency of the model. RESULTS: The performance of the presented model is measured in terms of average accuracy, which is 86.9% and 93.9% for the SEED and DREAMER datasets, respectively. CONCLUSION: The proposed convolution bi-directional gated recurrent unit neural network (CNN-BiGRU) model outperforms most state-of-the-art and competitive hybrid DL models, which indicates the effectiveness of emotion recognition using EEG signals and provides a scientific basis for the implementation of human-computer interaction (HCI).
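A fused energy/Shannon-entropy feature of the kind described (EnSE) can be illustrated by computing both quantities per sub-band and concatenating them. The normalization of the squared samples into a pseudo-probability distribution is an assumption, as the paper's exact definition may differ:

```python
import numpy as np

def energy_shannon_features(band_signals):
    """Concatenate per-band energy and Shannon entropy (EnSE-style, illustrative).

    band_signals : list of 1-D arrays, one per EEG sub-band.
    Shannon entropy is computed over the normalized squared samples,
    treated as a probability distribution over time.
    """
    feats = []
    for x in band_signals:
        energy = float(np.sum(x ** 2))
        p = x ** 2 / (energy + 1e-12)            # normalized power -> pseudo-probabilities
        entropy = float(-np.sum(p * np.log2(p + 1e-12)))
        feats.extend([energy, entropy])
    return np.array(feats)

rng = np.random.default_rng(1)
bands = [rng.standard_normal(256) for _ in range(4)]  # stand-ins for 4 filtered sub-bands
f = energy_shannon_features(bands)
print(f.shape)  # (8,): [energy, entropy] per band
```

The resulting vector (two values per band) is the sort of compact input the CNN-BiGRU classifier described above would consume.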
9. Wu JY, Ching CTS, Wang HMD, Liao LD. Emerging wearable biosensor technologies for stress monitoring and their real-world applications. Biosensors 2022; 12:1097. PMID: 36551064; PMCID: PMC9776100; DOI: 10.3390/bios12121097.
Abstract
Wearable devices are being developed faster and applied more widely. Wearables have been used to monitor movement-related physiological indices, including heartbeat, movement, and other exercise metrics, for health purposes. People are also paying more attention to mental health issues, such as stress management. Wearable devices can be used to monitor emotional status and provide preliminary diagnoses and guided training functions. The nervous system responds to stress, which directly affects eye movements and sweat secretion. Therefore, the changes in brain potential, eye potential, and cortisol content in sweat could be used to interpret emotional changes, fatigue levels, and physiological and psychological stress. To better assess users, stress-sensing devices can be integrated with applications to improve cognitive function, attention, sports performance, learning ability, and stress release. These application-related wearables can be used in medical diagnosis and treatment, such as for attention-deficit hyperactivity disorder (ADHD), traumatic stress syndrome, and insomnia, thus facilitating precision medicine. However, many factors contribute to data errors and incorrect assessments, including the various wearable devices, sensor types, data reception methods, data processing accuracy and algorithms, application reliability and validity, and actual user actions. Therefore, in the future, medical platforms for wearable devices and applications should be developed, and product implementations should be evaluated clinically to confirm product accuracy and perform reliable research.
Affiliation(s)
- Ju-Yu Wu: Institute of Biomedical Engineering and Nanomedicine, National Health Research Institutes, Zhunan Township, Miaoli County 35053, Taiwan; Program in Tissue Engineering and Regenerative Medicine, National Chung Hsing University, South District, Taichung City 402, Taiwan
- Congo Tak-Shing Ching: Graduate Institute of Biomedical Engineering, National Chung Hsing University, South District, Taichung City 402, Taiwan; Department of Electrical Engineering, National Chi Nan University, No. 1 University Road, Puli Township, Nantou County 545301, Taiwan
- Hui-Min David Wang: Program in Tissue Engineering and Regenerative Medicine, National Chung Hsing University, South District, Taichung City 402, Taiwan; Graduate Institute of Biomedical Engineering, National Chung Hsing University, South District, Taichung City 402, Taiwan
- Lun-De Liao: Institute of Biomedical Engineering and Nanomedicine, National Health Research Institutes, Zhunan Township, Miaoli County 35053, Taiwan; Program in Tissue Engineering and Regenerative Medicine, National Chung Hsing University, South District, Taichung City 402, Taiwan
10. Xefteris VR, Tsanousa A, Georgakopoulou N, Diplaris S, Vrochidis S, Kompatsiaris I. Graph theoretical analysis of EEG functional connectivity patterns and fusion with physiological signals for emotion recognition. Sensors (Basel) 2022; 22:8198. PMID: 36365896; PMCID: PMC9656224; DOI: 10.3390/s22218198.
Abstract
Emotion recognition is a key attribute for realizing advances in human-computer interaction, especially when using non-intrusive physiological sensors such as the electroencephalograph (EEG) and electrocardiograph. Although the functional connectivity of EEG has been utilized for emotion recognition, graph theory analysis of EEG connectivity patterns has not been adequately explored. The exploitation of brain network characteristics could provide valuable information regarding emotions, while the combination of EEG and peripheral physiological signals can reveal correlation patterns of the human internal state. In this work, a graph theoretical analysis of EEG functional connectivity patterns, along with fusion between EEG and peripheral physiological signals, is proposed for emotion recognition. After extracting functional connectivity from the EEG signals, both global and local graph theory features are computed. These features are concatenated with statistical features from the peripheral physiological signals and fed to different classifiers and a Convolutional Neural Network (CNN) for emotion recognition. The average accuracy on the DEAP dataset using the CNN was 55.62% and 57.38% for subject-independent valence and arousal classification, respectively, and 83.94% and 83.87% for subject-dependent classification. These scores rose to 75.44% and 78.77% for subject-independent classification and 88.27% and 90.84% for subject-dependent classification when a feature selection algorithm was used, exceeding the current state-of-the-art results.
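A minimal sketch of the connectivity-plus-graph-features step, assuming Pearson correlation as the connectivity estimator and a fixed binarization threshold (the study's actual connectivity metric and thresholding may differ):

```python
import numpy as np

def connectivity_graph_features(eeg, threshold=0.5):
    """Global and local graph features from a correlation-based connectivity matrix.

    eeg : (channels, samples). Functional connectivity is estimated here with
    Pearson correlation; edges are kept where |r| exceeds an assumed threshold.
    Returns a local feature (node degree per channel) and a global one (density).
    """
    corr = np.corrcoef(eeg)                       # (channels, channels)
    adj = (np.abs(corr) > threshold).astype(float)
    np.fill_diagonal(adj, 0.0)                    # no self-loops
    degree = adj.sum(axis=1)                      # local feature: node degree
    n = adj.shape[0]
    density = adj.sum() / (n * (n - 1))           # global feature: edge density
    return degree, density

rng = np.random.default_rng(2)
base = rng.standard_normal(512)
# channels 0 and 1 share a common source; channels 2 and 3 are independent noise
eeg = np.stack([base + 0.1 * rng.standard_normal(512),
                base + 0.1 * rng.standard_normal(512),
                rng.standard_normal(512),
                rng.standard_normal(512)])
degree, density = connectivity_graph_features(eeg)
print(degree)  # only the two correlated channels acquire an edge
```

Vectors like `degree` (and scalars like `density`) are what would be concatenated with the peripheral-signal statistics before classification.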
11. Use of differential entropy for automated emotion recognition in a virtual reality environment with EEG signals. Diagnostics (Basel) 2022; 12:2508. PMID: 36292197; PMCID: PMC9601226; DOI: 10.3390/diagnostics12102508.
Abstract
Emotion recognition is one of the most important issues in the human-computer interaction (HCI), neuroscience, and psychology fields. It is generally accepted that emotion recognition with neural data such as electroencephalography (EEG) signals, functional magnetic resonance imaging (fMRI), and near-infrared spectroscopy (NIRS) is better than other emotion detection methods such as speech, mimicry, body language, and facial expressions in terms of reliability and accuracy. In particular, EEG signals are bioelectrical signals that are frequently used because of the many advantages they offer in the field of emotion recognition. This study proposes an improved approach for EEG-based emotion recognition on a newly published, publicly available dataset, VREED. Differential entropy (DE) features were extracted from four wavebands (theta 4-8 Hz, alpha 8-13 Hz, beta 13-30 Hz, and gamma 30-49 Hz) to classify two emotional states (positive/negative). Five classifiers, namely Support Vector Machine (SVM), k-Nearest Neighbor (kNN), Naïve Bayesian (NB), Decision Tree (DT), and Logistic Regression (LR), were employed with the DE features for the automated classification of the two emotional states. In this work, we obtained the best average accuracy of 76.22 ± 2.06% with the SVM classifier. Moreover, we observed that the highest average accuracy score was produced with the gamma band, as previously reported in studies of EEG-based emotion recognition.
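Under the common Gaussian assumption, the differential entropy of a band-limited EEG segment has the closed form h = 0.5 * ln(2 * pi * e * sigma^2), so extracting DE per waveband reduces to estimating the band signal's variance. A minimal sketch (the synthetic segment below stands in for a band-filtered recording):

```python
import numpy as np

def differential_entropy(x):
    """DE of a band-limited EEG segment under the usual Gaussian assumption:
    h = 0.5 * ln(2 * pi * e * sigma^2)."""
    var = np.var(x)
    return 0.5 * np.log(2 * np.pi * np.e * var)

# One DE value per waveband as in the study: theta 4-8, alpha 8-13,
# beta 13-30, gamma 30-49 Hz (band filtering itself is omitted here).
rng = np.random.default_rng(3)
segment = rng.standard_normal(1024)      # stand-in for one filtered band
h = differential_entropy(segment)
print(round(h, 2))  # close to 0.5 * ln(2*pi*e) ~ 1.42 for unit variance
```

Computing this over the four bands and all channels yields the DE feature vector fed to the SVM and the other classifiers.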
12. Machine learning models for classification of human emotions using multivariate brain signals. Computers 2022. DOI: 10.3390/computers11100152.
Abstract
Humans can portray expressions contrary to their emotional state of mind, so it is difficult to judge a person's real emotional state simply from physical appearance. Although researchers are working on facial expression analysis, voice recognition, and gesture recognition, the accuracy levels of such analyses are much lower and the results are not reliable. Hence, it becomes vital to have a realistic emotion detector. Electroencephalogram (EEG) signals remain neutral to the external appearance and behavior of the human and help ensure accurate analysis of the state of mind. The EEG signals from electrodes in different scalp regions are studied for performance; hence, EEG has gained attention over time for obtaining accurate results in the classification of emotional states, both for human-machine interaction and for designing a program with which an individual could perform self-analysis of their emotional state. In the proposed scheme, we extract power spectral densities (PSDs) of multivariate EEG signals from different sections of the brain. From the extracted PSD, the features that are better suited for classification are selected and classified using long short-term memory (LSTM) and bi-directional long short-term memory (Bi-LSTM) networks. A 2-D emotion model is considered for classification over the frontal, parietal, temporal, and occipital regions. The region-based classification is performed by considering positive and negative emotions. The accuracies of our previous models, artificial neural network (ANN), support vector machine (SVM), K-nearest neighbor (K-NN), and LSTM, were compared, and 94.95% accuracy was achieved using Bi-LSTM with four prefrontal electrodes.
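PSD band features of the kind described can be sketched with a plain FFT periodogram; the band edges and the choice of periodogram (rather than, say, Welch averaging) are illustrative assumptions:

```python
import numpy as np

def band_power(x, fs, band):
    """Average power of x in a frequency band via the FFT periodogram."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    psd = np.abs(np.fft.rfft(x)) ** 2 / (fs * len(x))
    mask = (freqs >= band[0]) & (freqs < band[1])
    return float(psd[mask].mean())

# Illustrative bands; the study's exact band edges and electrode grouping may differ.
BANDS = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

fs = 128
t = np.arange(fs * 4) / fs
x = np.sin(2 * np.pi * 10 * t) + 0.1 * np.sin(2 * np.pi * 25 * t)
feats = {name: band_power(x, fs, b) for name, b in BANDS.items()}
print(max(feats, key=feats.get))  # alpha: the 10 Hz component dominates
```

Collecting these band powers per channel and region gives the PSD feature set that the LSTM/Bi-LSTM classifiers then operate on.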
13. Zuo X, Zhang C, Hämäläinen T, Gao H, Fu Y, Cong F. Cross-subject emotion recognition using fused entropy features of EEG. Entropy (Basel) 2022; 24:1281. PMID: 36141167; PMCID: PMC9497745; DOI: 10.3390/e24091281.
Abstract
Emotion recognition based on electroencephalography (EEG) has attracted high interest in fields such as health care, user experience evaluation, and human-computer interaction (HCI), as it plays an important role in human daily life. Although various approaches have been proposed to detect emotion states in previous studies, there is still a need to further study the dynamic changes of EEG in different emotions to detect emotion states accurately. Entropy-based features have proven effective in mining the complexity information in EEG in many areas. However, different entropy features vary in revealing the implicit information of EEG. To improve system reliability, in this paper we propose a framework for EEG-based cross-subject emotion recognition using fused entropy features and a Bidirectional Long Short-term Memory (BiLSTM) network. Features including approximate entropy (AE), fuzzy entropy (FE), Rényi entropy (RE), differential entropy (DE), and multi-scale entropy (MSE) are first calculated to study dynamic emotional information. Then, a BiLSTM classifier is trained with the entropy features as inputs to identify different emotions. Our results show that the MSE of EEG is more efficient than the other single-entropy features in recognizing emotions. The performance of the BiLSTM is further improved, to an accuracy of 70.05%, using fused entropy features compared with single-type features.
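Of the entropy features listed, sample entropy and its multiscale extension can be sketched as follows; the embedding dimension m = 2 and tolerance r = 0.2 times the standard deviation are common defaults, not necessarily the paper's settings, and the template counting below is a slight variant of the textbook definition:

```python
import numpy as np

def sample_entropy(x, m=2, r=0.2):
    """Sample entropy: -ln(A/B), where B counts template pairs of length m
    within tolerance (Chebyshev distance) and A counts pairs of length m+1.
    r is given as a fraction of the signal's standard deviation."""
    x = np.asarray(x, dtype=float)
    tol = r * x.std()
    def count_matches(mm):
        templates = np.lib.stride_tricks.sliding_window_view(x, mm)
        c = 0
        for i in range(len(templates) - 1):           # pairs i < j, no self-matches
            dist = np.max(np.abs(templates[i + 1:] - templates[i]), axis=1)
            c += int(np.sum(dist <= tol))
        return c
    b, a = count_matches(m), count_matches(m + 1)
    return np.inf if a == 0 or b == 0 else -np.log(a / b)

def multiscale_entropy(x, scales=(1, 2, 3)):
    """MSE: sample entropy of coarse-grained (non-overlapping mean) versions."""
    out = []
    for s in scales:
        n = len(x) // s
        coarse = x[: n * s].reshape(n, s).mean(axis=1)
        out.append(sample_entropy(coarse))
    return out

rng = np.random.default_rng(4)
noise = rng.standard_normal(400)
tone = np.sin(2 * np.pi * np.arange(400) / 40)
# a regular signal is less "complex" than white noise at scale 1
print(sample_entropy(tone) < sample_entropy(noise))  # True
```

Stacking such values across scales (and across the other entropy types) yields the fused entropy feature vector fed to the BiLSTM.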
Affiliation(s)
- Xin Zuo: School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, China; Faculty of Information Technology, University of Jyväskylä, 40014 Jyväskylä, Finland
- Chi Zhang: School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, China; Liaoning Key Laboratory of Integrated Circuit and Biomedical Electronic System, Dalian 116024, China
- Timo Hämäläinen: Faculty of Information Technology, University of Jyväskylä, 40014 Jyväskylä, Finland
- Hanbing Gao: School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, China
- Yu Fu: School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, China
- Fengyu Cong: School of Biomedical Engineering, Faculty of Electronic Information and Electrical Engineering, Dalian University of Technology, Dalian 116024, China; Faculty of Information Technology, University of Jyväskylä, 40014 Jyväskylä, Finland
14. Joshi VM, Ghongade RB, Joshi AM, Kulkarni RV. Deep BiLSTM neural network model for emotion detection using cross-dataset approach. Biomed Signal Process Control 2022. DOI: 10.1016/j.bspc.2021.103407.
15.
Abstract
As a long-standing research topic in the field of brain-computer interfaces, emotion recognition still suffers from low recognition accuracy. In this research, we present a novel model named DE-CNN-BiLSTM that deeply integrates the complexity of EEG signals, the spatial structure of the brain, and the temporal contexts of emotion formation. First, we extract the complexity properties of the EEG signal by calculating the Differential Entropy in different time slices of different frequency bands to obtain 4D feature tensors arranged according to brain location. Subsequently, the 4D tensors are input into a Convolutional Neural Network to learn the brain structure and output time sequences; Bidirectional Long-Short Term Memory is then used to learn past and future information from these time sequences. Compared with existing emotion recognition models, the new model can decode the EEG signal more deeply and extract key emotional features to improve accuracy. The simulation results show the algorithm achieves an average accuracy of 94% on the DEAP dataset and 94.82% on the SEED dataset, confirming its high accuracy and strong robustness.
16. Tuncer T, Dogan S, Baygin M, Rajendra Acharya U. Tetromino pattern based accurate EEG emotion classification model. Artif Intell Med 2022; 123:102210. PMID: 34998511; DOI: 10.1016/j.artmed.2021.102210.
Abstract
Nowadays, emotion recognition using electroencephalogram (EEG) signals is becoming a hot research topic. The aim of this paper is to classify the emotions in EEG signals with high accuracy using a novel game-based feature generation function. Hence, a multileveled handcrafted-feature-generation automated emotion classification model using EEG signals is presented. A novel textural feature generation method inspired by the Tetris game, called Tetromino, is proposed in this work; the Tetris game is famous worldwide and uses variously shaped pieces. First, the EEG signals are subjected to the discrete wavelet transform (DWT) to create various decomposition levels. Then, novel features are generated from the decomposed DWT sub-bands using the Tetromino method. Next, the maximum relevance minimum redundancy (mRMR) feature selection method is utilized to select the most discriminative features, and the selected features are classified using a support vector machine classifier. Finally, each channel's results (validation predictions) are obtained, and a mode-function-based voting method is used to obtain the overall results. We validated our developed model using three databases (DREAMER, GAMEEMO, and DEAP). We attained 100% accuracy on the DREAMER and GAMEEMO datasets, and over 99% classification accuracy on the DEAP dataset. Thus, our emotion detection model has yielded the best classification accuracy rate compared with state-of-the-art techniques and is ready to be tested for clinical application after validation with more diverse datasets. These results show the success of the presented Tetromino pattern-based EEG signal classification model, validated using three public emotional EEG datasets.
Affiliation(s)
- Turker Tuncer
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey.
- Sengul Dogan
- Department of Digital Forensics Engineering, College of Technology, Firat University, Elazig, Turkey.
- Mehmet Baygin
- Department of Computer Engineering, Faculty of Engineering, Ardahan University, Ardahan, Turkey.
- U Rajendra Acharya
- Ngee Ann Polytechnic, Department of Electronics and Computer Engineering, 599489, Singapore; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore; Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan.
17
Wang H, Zhu X, Chen P, Yang Y, Ma C, Gao Z. A gradient-based automatic optimization CNN framework for EEG state recognition. J Neural Eng 2021; 19. [PMID: 34883472 DOI: 10.1088/1741-2552/ac41ac] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2021] [Accepted: 12/09/2021] [Indexed: 11/12/2022]
Abstract
The electroencephalogram (EEG) signal, as a data carrier that can contain a large amount of information about the human brain in different states, is one of the most widely used metrics for assessing human psychophysiological states. Among a variety of analysis methods, deep learning, especially the convolutional neural network (CNN), has achieved remarkable results in recent years as a means of effectively extracting features from EEG signals. Although deep learning offers automatic feature extraction and effective classification, network structure design remains difficult and requires a great deal of prior knowledge; automating the design of these hyperparameters can therefore save experts' time and effort. Neural architecture search (NAS) techniques have thus emerged. In this paper, we build on an existing gradient-based NAS algorithm, PC-DARTS, with targeted improvements and optimizations for the characteristics of EEG signals. Specifically, we retain the framework of the search algorithm and perform targeted optimization of the model search space, establishing the model architecture step by step based on manually designed deep learning models for EEG discrimination. Corresponding features are extracted separately according to the frequency-domain and time-domain characteristics of the EEG signal and the spatial positions of the EEG electrodes. The architecture was applied to EEG-based emotion recognition and driver drowsiness assessment tasks. The results show that, compared with existing methods, the model architecture obtained in this paper achieves competitive overall accuracy and a better standard deviation on both tasks. This approach is therefore an effective migration of NAS technology into the field of EEG analysis and has great potential to provide high-performance results for other types of classification and prediction tasks, effectively reducing researchers' time costs and facilitating the application of CNNs in more areas.
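PC-DARTS itself relaxes the architecture choice continuously and optimizes it by gradient descent; as a much simpler stand-in, the idea of searching an EEG-oriented space (temporal kernel length crossed with electrode grouping) can be illustrated with an exhaustive discrete search. The search-space values and the scoring table below are invented for the demo; `evaluate` stands in for a real train-and-validate loop:

```python
import itertools

# Hypothetical EEG-oriented search space: kernel length along the time
# axis and the number of spatial (electrode) groups.
SEARCH_SPACE = {
    "temporal_kernel": [32, 64, 128],   # samples along the time axis
    "spatial_groups": [1, 2, 4],        # electrode groupings
}

def evaluate(config):
    """Stand-in for training a candidate CNN and returning validation accuracy."""
    scores = {(64, 2): 0.91, (128, 4): 0.89}
    return scores.get((config["temporal_kernel"], config["spatial_groups"]), 0.80)

def search(space):
    """Exhaustively score every configuration and keep the best one."""
    best, best_score = None, -1.0
    for combo in itertools.product(*space.values()):
        config = dict(zip(space.keys(), combo))
        score = evaluate(config)
        if score > best_score:
            best, best_score = config, score
    return best, best_score
```

A gradient-based NAS replaces this enumeration with a differentiable relaxation, which is what makes it tractable on large search spaces.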
Affiliation(s)
- He Wang
- 26E Academic Building, Tianjin University, Tianjin 300072, China
- Xinshan Zhu
- School of Electrical Engineering and Automation, Tianjin University, Tianjin 300072, China
- Pinyin Chen
- 26E Academic Building, Tianjin University, Tianjin 300072, China
- Yuxuan Yang
- School of Economic Information Engineering, Southwestern University of Finance and Economics, Chengdu, Sichuan 610074, China
- Chao Ma
- 26E Academic Building, Tianjin University, Tianjin 300072, China
- Zhongke Gao
- Tianjin University, Tianjin 300072, China
18
Liu H, Zhang Y, Li Y, Kong X. Review on Emotion Recognition Based on Electroencephalography. Front Comput Neurosci 2021; 15:758212. [PMID: 34658828 PMCID: PMC8518715 DOI: 10.3389/fncom.2021.758212] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2021] [Accepted: 08/31/2021] [Indexed: 11/13/2022] Open
Abstract
Emotions are closely related to human behavior, family, and society. Changes in emotion cause measurable differences in electroencephalography (EEG) signals, which reflect different emotional states and are not easy to disguise. EEG-based emotion recognition has been widely used in human-computer interaction, medical diagnosis, the military, and other fields. In this paper, we describe the common steps of an EEG-based emotion recognition algorithm, from data acquisition, preprocessing, feature extraction, and feature selection to classification. We then review existing EEG-based emotion recognition methods and assess their classification performance. This paper will help researchers quickly understand the basic theory of emotion recognition and provide references for the future development of EEG-based methods. Moreover, emotion is an important construct in safety psychology.
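The common pipeline the review enumerates — acquisition, preprocessing, feature extraction, feature selection, classification — can be sketched end to end with deliberately toy stages. Everything below is illustrative: real systems use filtering and artifact removal for preprocessing, band-power or entropy features, mRMR or PCA for selection, and SVMs or CNNs for classification:

```python
import math

def preprocess(signal):
    """Remove the DC offset -- a stand-in for real artifact removal and filtering."""
    mean = sum(signal) / len(signal)
    return [x - mean for x in signal]

def extract_features(signal):
    """Two toy features: mean power and zero-crossing rate."""
    power = sum(x * x for x in signal) / len(signal)
    zcr = sum(1 for a, b in zip(signal, signal[1:]) if a * b < 0) / (len(signal) - 1)
    return [power, zcr]

def classify(features, centroids):
    """Nearest-centroid classifier over labeled feature centroids."""
    return min(centroids, key=lambda label: math.dist(features, centroids[label]))

# Usage: a high-amplitude, rapidly oscillating trial lands on the
# high-power centroid (labels are invented for the demo).
trial = preprocess([3.0, -3.0, 3.0, -3.0, 3.0, -3.0])
feats = extract_features(trial)
label = classify(feats, {"aroused": [9.0, 1.0], "calm": [0.5, 0.2]})
```

Each stage has a single responsibility, which is what lets the review compare methods stage by stage.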
Affiliation(s)
- Haoran Liu
- The Boiler and Pressure Vessel Safety Inspection Institute of Henan Province, Zhengzhou, China
- Ying Zhang
- Patent Examination Cooperation (Henan) Center of the Patent Office, CNIPA, Zhengzhou, China
- Yujun Li
- The Boiler and Pressure Vessel Safety Inspection Institute of Henan Province, Zhengzhou, China
- Xiangyi Kong
- The Boiler and Pressure Vessel Safety Inspection Institute of Henan Province, Zhengzhou, China
19
PrimePatNet87: Prime pattern and tunable q-factor wavelet transform techniques for automated accurate EEG emotion recognition. Comput Biol Med 2021; 138:104867. [PMID: 34543892 DOI: 10.1016/j.compbiomed.2021.104867] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/24/2021] [Revised: 09/08/2021] [Accepted: 09/08/2021] [Indexed: 11/24/2022]
Abstract
Many deep models have been presented to recognize emotions from electroencephalogram (EEG) signals. These deep models are computationally intensive and take a long time to train, while it is also difficult to achieve high classification performance for emotion classification with conventional machine learning techniques. To overcome these limitations, we present a handcrafted EEG emotion classification network. In this work, we use novel prime pattern and tunable Q-factor wavelet transform (TQWT) techniques to develop an automated model for classifying human emotions. Our proposed cognitive model comprises feature extraction, feature selection, and classification steps. We apply TQWT to the EEG signals to obtain the sub-bands. The prime pattern and a statistical feature generator are employed on the generated sub-bands and the original signal to produce 798 features; 399 of these (half) are selected using the minimum redundancy maximum relevance (mRMR) selector, and the misclassification rate of each signal is evaluated with a support vector machine (SVM) classifier. The proposed network generates 87 feature vectors; hence, the model is named PrimePatNet87. In the last step of feature generation, the best 20 feature vectors, selected according to the calculated misclassification rates, are concatenated. The resulting feature vector is subjected to feature selection, and the most significant 1,000 features are selected using the mRMR selector and then classified with an SVM classifier. In the final phase, iterative majority voting is used to generate the overall result. We developed our proposed model on three publicly available datasets (DEAP, DREAMER, and GAMEEMO), and PrimePatNet87 reached over 99% classification accuracy on all of them with leave-one-subject-out (LOSO) validation. Our results demonstrate that the developed prime pattern network is accurate and ready for real-world applications.
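The "keep the best 20 of 87 feature vectors and concatenate them" step can be sketched generically. This is a minimal illustration of selection by misclassification rate, not the paper's code; `k` is shrunk from 20 for the demo:

```python
def concatenate_best(feature_vectors, error_rates, k=2):
    """Keep the k feature vectors with the lowest misclassification rate
    and concatenate them into a single merged feature vector."""
    ranked = sorted(range(len(feature_vectors)), key=lambda i: error_rates[i])
    kept = sorted(ranked[:k])            # preserve original generation order
    merged = []
    for i in kept:
        merged.extend(feature_vectors[i])
    return merged

# Toy inputs: three candidate feature vectors with their error rates.
vectors = [[1, 2], [3, 4], [5, 6]]
errors = [0.30, 0.05, 0.10]
# vectors[1] and vectors[2] have the lowest errors -> merged is [3, 4, 5, 6]
```

The merged vector would then go through a second round of feature selection (mRMR in the paper) before the final classifier.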