1
Zhang Y, Liao Y, Chen W, Zhang X, Huang L. Emotion recognition of EEG signals based on contrastive learning graph convolutional model. J Neural Eng 2024; 21:046060. [PMID: 39151459 DOI: 10.1088/1741-2552/ad7060] [Received: 04/10/2024] [Accepted: 08/16/2024]
Abstract
Objective. Electroencephalogram (EEG) signals offer invaluable insights into the complexities of emotion generation within the brain. Yet the variability in EEG signals across individuals presents a formidable obstacle for practical implementations. Our research addresses these challenges by focusing on the commonalities within distinct subjects' EEG data. Approach. We introduce a novel approach named the Contrastive Learning Graph Convolutional Network (CLGCN). This method captures the distinctive features and crucial channel nodes related to individuals' emotional states. Specifically, CLGCN merges the dual benefits of contrastive learning's synchronous multi-subject data learning and the GCN's proficiency in deciphering brain connectivity matrices. CLGCN generates a standardized brain-network learning matrix while learning a dataset, which aids understanding of multifaceted brain functions and their information-interchange processes. Main results. Our model underwent rigorous testing on the Database for Emotion Analysis using Physiological Signals (DEAP) and SEED datasets. In the five-fold cross-validation used for the subject-dependent experimental setting, it achieved an accuracy of 97.13% on the DEAP dataset and surpassed 99% on the SEED and SEED_IV datasets. In the incremental learning experiments with the SEED dataset, merely 5% of the data was sufficient to fine-tune the model, resulting in an accuracy of 92.8% for a new subject. These findings validate the model's efficacy. Significance. This work combines contrastive learning with GCNs, improving the accuracy of decoding emotional states from EEG signals and offering valuable insights into the underlying mechanisms of emotional processes in the brain.
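To make the graph-convolution half of such a model concrete, a single symmetrically normalized GCN layer over an EEG channel graph can be sketched as follows (a minimal illustration with made-up shapes and data, not the authors' CLGCN implementation):

```python
import numpy as np

def gcn_layer(H, A, W):
    """One graph-convolution layer: H' = ReLU(D^-1/2 (A+I) D^-1/2 H W).

    H: (n_channels, n_features) node features (e.g. per-channel band features)
    A: (n_channels, n_channels) non-negative connectivity matrix
    W: (n_features, n_out) trainable weights
    """
    A_hat = A + np.eye(A.shape[0])             # add self-loops
    d = A_hat.sum(axis=1)                      # node degrees
    D_inv_sqrt = np.diag(1.0 / np.sqrt(d))     # D^-1/2
    A_norm = D_inv_sqrt @ A_hat @ D_inv_sqrt   # symmetric normalisation
    return np.maximum(A_norm @ H @ W, 0.0)     # ReLU activation

rng = np.random.default_rng(0)
H = rng.standard_normal((32, 5))               # 32 EEG channels, 5 features each
A = np.abs(rng.standard_normal((32, 32)))
A = (A + A.T) / 2                              # symmetric connectivity
W = rng.standard_normal((5, 8))
out = gcn_layer(H, A, W)
print(out.shape)  # (32, 8)
```

Stacking such layers, with the adjacency matrix learned jointly under a contrastive objective across subjects, is the general shape of this model family.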
Affiliation(s)
- Yiling Zhang
- College of Electronic and Optical Engineering & College of Flexible Electronics (Future Technology), Nanjing University of Posts and Telecommunications, Jiangsu 210023, People's Republic of China
- Yuan Liao
- College of Electronic and Optical Engineering & College of Flexible Electronics (Future Technology), Nanjing University of Posts and Telecommunications, Jiangsu 210023, People's Republic of China
- Wei Chen
- College of Electronic and Optical Engineering & College of Flexible Electronics (Future Technology), Nanjing University of Posts and Telecommunications, Jiangsu 210023, People's Republic of China
- Xiruo Zhang
- College of Electronic and Optical Engineering & College of Flexible Electronics (Future Technology), Nanjing University of Posts and Telecommunications, Jiangsu 210023, People's Republic of China
- Liya Huang
- College of Electronic and Optical Engineering & College of Flexible Electronics (Future Technology), Nanjing University of Posts and Telecommunications, Jiangsu 210023, People's Republic of China
2
Hamzah HA, Abdalla KK. EEG-based emotion recognition systems; comprehensive study. Heliyon 2024; 10:e31485. [PMID: 38818173 PMCID: PMC11137547 DOI: 10.1016/j.heliyon.2024.e31485] [Received: 04/19/2024] [Accepted: 05/16/2024]
Abstract
Emotion recognition through EEG signal analysis is currently a fundamental topic in artificial intelligence, with major practical implications for emotional health care, human-computer interaction, and related applications. This paper provides a comprehensive study of methods for extracting electroencephalography (EEG) features for emotion recognition from four perspectives: time-domain features, frequency-domain features, time-frequency features, and nonlinear features. We summarize the pattern recognition methods adopted in most related works and, since the rapid development of deep learning (DL) is attracting the attention of researchers in this field, pay particular attention to deep learning-based studies, analysing their characteristics, advantages, disadvantages, and applicable scenarios. Finally, the current challenges and future development directions in this field are summarized. This paper can help novice researchers gain a systematic understanding of the current status of EEG-based emotion recognition research and provide ideas for subsequent related research.
Affiliation(s)
- Hussein Ali Hamzah
- Electrical Engineering Department, College of Engineering, University of Babylon, Iraq
- Kasim K. Abdalla
- Electrical Engineering Department, College of Engineering, University of Babylon, Iraq
3
Patel P, Balasubramanian S, Annavarapu RN. Cross subject emotion identification from multichannel EEG sub-bands using Tsallis entropy feature and KNN classifier. Brain Inform 2024; 11:7. [PMID: 38441825 PMCID: PMC11358557 DOI: 10.1186/s40708-024-00220-3] [Received: 09/12/2023] [Accepted: 02/05/2024]
Abstract
Human emotion recognition remains a challenging and prominent issue, situated at the convergence of diverse fields such as brain-computer interfaces, neuroscience, and psychology. This study utilizes an EEG data set for investigating human emotion, presenting novel findings and a refined approach for EEG-based emotion detection. Tsallis entropy features, computed for q values of 2, 3, and 4, are extracted from signal bands including theta-θ (4-7 Hz), alpha-α (8-15 Hz), beta-β (16-31 Hz), gamma-γ (32-55 Hz), and the overall frequency range (0-75 Hz). These features are employed to train and test a KNN classifier, aiming for accurate identification of two emotional states: positive and negative. The best average accuracy, 79% with an F-score of 0.81, was achieved in the gamma frequency range for the Tsallis parameter q = 3; the highest individual accuracy and F-score observed were 84% and 0.87. Notably, performance was superior in the anterior and left hemispheres compared with the posterior and right hemispheres. The findings show that the proposed method is a highly competitive alternative to existing techniques. Furthermore, we identify and discuss the shortcomings of the proposed approach, offering valuable insights into potential avenues for improvement.
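The Tsallis entropy feature itself is a one-liner over a normalized band-power distribution, S_q = (1 − Σ p_i^q)/(q − 1); a minimal sketch (illustrative, not the authors' code):

```python
import numpy as np

def tsallis_entropy(p, q):
    """Tsallis entropy S_q = (1 - sum(p_i^q)) / (q - 1), for q != 1.

    p: a probability distribution, e.g. normalised spectral power within
    one EEG sub-band. As q -> 1 it converges to Shannon entropy (nats).
    """
    p = np.asarray(p, dtype=float)
    p = p / p.sum()                          # ensure normalisation
    return (1.0 - np.sum(p ** q)) / (q - 1.0)

# Uniform distributions maximise S_q; fully peaked ones give 0.
print(tsallis_entropy([0.25, 0.25, 0.25, 0.25], q=2))  # 0.75
print(tsallis_entropy([1.0, 0.0, 0.0, 0.0], q=2))      # 0.0
```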
Affiliation(s)
- Pragati Patel
- Department of Physics, Pondicherry University, Puducherry, 605014, India
4
Hancer E, Subasi A. EEG-based emotion recognition using dual tree complex wavelet transform and random subspace ensemble classifier. Comput Methods Biomech Biomed Engin 2023; 26:1772-1784. [PMID: 36367337 DOI: 10.1080/10255842.2022.2143714] [Received: 03/05/2022] [Revised: 08/25/2022] [Accepted: 10/26/2022]
Abstract
Emotions are widely acknowledged as a key ingredient in establishing meaningful interactions between humans and computers. Thanks to advances in electroencephalography (EEG), especially the availability of portable and inexpensive wearable EEG devices, the demand for identifying emotions has increased dramatically. However, scientific work on EEG-based emotion recognition is still limited. To address this, we introduce an EEG-based emotion recognition framework comprising the following stages: preprocessing, feature extraction, feature selection, and classification. For preprocessing, multiscale principal component analysis and a symlet-4 (sym4) filter are used. A variant of the discrete wavelet transform (DWT), the dual-tree complex wavelet transform (DTCWT), is utilized for feature extraction. To reduce the feature dimension, a variety of statistical criteria are employed. For the final stage, we adopt ensemble classifiers due to their promising performance in classification problems. The proposed framework achieves nearly 96.8% accuracy using a random subspace ensemble classifier, showing that it performs well at identifying emotions.
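The wavelet-plus-statistics feature pipeline can be illustrated with a simplified stand-in: a multi-level Haar DWT (rather than the paper's DTCWT) summarized by mean, standard deviation, and energy per subband. All filter and feature choices here are illustrative, not the authors' exact pipeline:

```python
import numpy as np

def haar_dwt(x):
    """One-level Haar DWT: approximation and detail coefficients."""
    x = np.asarray(x, dtype=float)
    if len(x) % 2:                             # pad to even length
        x = np.append(x, x[-1])
    a = (x[0::2] + x[1::2]) / np.sqrt(2)       # low-pass (approximation)
    d = (x[0::2] - x[1::2]) / np.sqrt(2)       # high-pass (detail)
    return a, d

def subband_features(x, levels=3):
    """Statistical summary of each subband, the kind of criteria used
    to shrink raw wavelet coefficients to a small feature vector."""
    feats = []
    for _ in range(levels):
        x, d = haar_dwt(x)
        feats += [d.mean(), d.std(), np.sum(d ** 2)]   # mean, std, energy
    feats += [x.mean(), x.std(), np.sum(x ** 2)]       # final approximation
    return np.array(feats)

sig = np.sin(np.linspace(0, 8 * np.pi, 256))
print(subband_features(sig).shape)  # (12,)
```

Because the Haar transform is orthonormal, the subband energies sum to the signal energy, which is a handy sanity check on such a pipeline.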
Affiliation(s)
- Emrah Hancer
- Department of Software Engineering, Bucak Technology Faculty, Mehmet Akif Ersoy University, Burdur, Turkey
- Abdulhamit Subasi
- Institute of Biomedicine, Faculty of Medicine, University of Turku, Turku, Finland
- Department of Computer Science, College of Engineering, Effat University, Jeddah, Saudi Arabia
5
Qiu X, Wang S, Wang R, Zhang Y, Huang L. A multi-head residual connection GCN for EEG emotion recognition. Comput Biol Med 2023; 163:107126. [PMID: 37327757 DOI: 10.1016/j.compbiomed.2023.107126] [Received: 12/07/2022] [Revised: 03/22/2023] [Accepted: 06/01/2023]
Abstract
Electroencephalography (EEG) emotion recognition is a crucial aspect of human-computer interaction. However, conventional neural networks have limitations in extracting deep EEG emotional features. This paper introduces a novel multi-head residual graph convolutional neural network (MRGCN) model that incorporates complex brain networks and graph convolutional networks. The decomposition of multi-band differential entropy (DE) features exposes the temporal intricacy of emotion-linked brain activity, and the combination of short- and long-distance brain networks can explore complex topological characteristics. Moreover, the residual-based architecture not only enhances performance but also improves classification stability across subjects. Visualization of brain network connectivity offers a practical technique for investigating emotional regulation mechanisms. The MRGCN model achieves average classification accuracies of 95.8% and 98.9% on the DEAP and SEED datasets, respectively, highlighting its excellent performance and robustness.
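The differential entropy (DE) feature used here has a closed form for approximately Gaussian band-filtered EEG, DE = ½ ln(2πeσ²), which makes it cheap to compute per channel and band; a minimal sketch:

```python
import math
import numpy as np

def differential_entropy(x):
    """DE of a band-limited signal under a Gaussian assumption:
    DE = 0.5 * ln(2 * pi * e * var(x)).
    A standard per-band, per-channel EEG feature (equal to log band
    power up to additive constants)."""
    var = np.var(x)
    return 0.5 * math.log(2 * math.pi * math.e * var)

rng = np.random.default_rng(1)
x = rng.normal(scale=2.0, size=100_000)   # sigma = 2, so var = 4
print(differential_entropy(x))            # close to 0.5*ln(2*pi*e*4) ≈ 2.112
```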
Affiliation(s)
- Xiangkai Qiu
- College of Electronic and Optical Engineering & College of Flexible Electronics, Nanjing University of Posts and Telecommunications, Nanjing, China
- Shenglin Wang
- College of Electronic and Optical Engineering & College of Flexible Electronics, Nanjing University of Posts and Telecommunications, Nanjing, China
- Ruqing Wang
- College of Electronic and Optical Engineering & College of Flexible Electronics, Nanjing University of Posts and Telecommunications, Nanjing, China
- Yiling Zhang
- College of Electronic and Optical Engineering & College of Flexible Electronics, Nanjing University of Posts and Telecommunications, Nanjing, China
- Liya Huang
- College of Electronic and Optical Engineering & College of Flexible Electronics, Nanjing University of Posts and Telecommunications, Nanjing, China
- National and Local Joint Engineering Laboratory of RF Integration and Micro-Assembly Technology, Nanjing, China
6
Bhatt P, Sethi A, Tasgaonkar V, Shroff J, Pendharkar I, Desai A, Sinha P, Deshpande A, Joshi G, Rahate A, Jain P, Walambe R, Kotecha K, Jain NK. Machine learning for cognitive behavioral analysis: datasets, methods, paradigms, and research directions. Brain Inform 2023; 10:18. [PMID: 37524933 PMCID: PMC10390406 DOI: 10.1186/s40708-023-00196-6] [Received: 03/03/2023] [Accepted: 06/06/2023]
Abstract
Human behaviour reflects cognitive abilities. Human cognition is fundamentally linked to different experiences or characteristics of consciousness and emotion, such as joy, grief, and anger, which assist in effective communication with others. Detection of and differentiation between thoughts, feelings, and behaviours are paramount in learning to control our emotions and respond more effectively in stressful circumstances. The ability to perceive, analyse, process, interpret, remember, and retrieve information while making judgments to respond correctly is referred to as cognitive behaviour. Having made a significant mark in emotion analysis, deception detection is one of the key areas connecting human behaviour, mainly in the forensic domain. Detection of lies, deception, malicious intent, abnormal behaviour, emotions, and stress plays a significant role in advanced stages of behavioural science. Artificial intelligence and machine learning (AI/ML) have helped a great deal in pattern recognition, data extraction and analysis, and interpretation. The goal of using AI and ML in the behavioural sciences is to infer human behaviour, mainly for mental health or forensic investigations. The presented work provides an extensive review of research on cognitive behaviour analysis. A parametric study is presented based on different physical characteristics, emotional behaviours, data-collection sensing mechanisms, unimodal and multimodal datasets, AI/ML modelling methods, challenges, and future research directions.
Affiliation(s)
- Priya Bhatt
- Symbiosis Institute of Technology, Symbiosis International Deemed University, Pune, India
- Amanrose Sethi
- Symbiosis Institute of Technology, Symbiosis International Deemed University, Pune, India
- Vaibhav Tasgaonkar
- Symbiosis Institute of Technology, Symbiosis International Deemed University, Pune, India
- Jugal Shroff
- Symbiosis Institute of Technology, Symbiosis International Deemed University, Pune, India
- Isha Pendharkar
- Symbiosis Institute of Technology, Symbiosis International Deemed University, Pune, India
- Aditya Desai
- Symbiosis Institute of Technology, Symbiosis International Deemed University, Pune, India
- Pratyush Sinha
- Symbiosis Institute of Technology, Symbiosis International Deemed University, Pune, India
- Aditya Deshpande
- Symbiosis Institute of Technology, Symbiosis International Deemed University, Pune, India
- Gargi Joshi
- Symbiosis Institute of Technology, Symbiosis International Deemed University, Pune, India
- Anil Rahate
- Symbiosis Institute of Technology, Symbiosis International Deemed University, Pune, India
- Priyanka Jain
- Centre for Development of Advanced Computing (C-DAC), Delhi, India
- Rahee Walambe
- Symbiosis Institute of Technology, Symbiosis International Deemed University, Pune, India
- Symbiosis Centre for Applied Artificial Intelligence, Symbiosis International Deemed University, Pune, India
- Ketan Kotecha
- Symbiosis Institute of Technology, Symbiosis International Deemed University, Pune, India
- Symbiosis Centre for Applied Artificial Intelligence, Symbiosis International Deemed University, Pune, India
- UCSI University, Kuala Lumpur, Malaysia
- N K Jain
- Centre for Development of Advanced Computing (C-DAC), Delhi, India
7
Li JW, Lin D, Che Y, Lv JJ, Chen RJ, Wang LJ, Zeng XX, Ren JC, Zhao HM, Lu X. An innovative EEG-based emotion recognition using a single channel-specific feature from the brain rhythm code method. Front Neurosci 2023; 17:1221512. [PMID: 37547144 PMCID: PMC10397731 DOI: 10.3389/fnins.2023.1221512] [Received: 05/12/2023] [Accepted: 06/30/2023]
Abstract
Introduction. Efficiently recognizing emotions is a critical pursuit in brain-computer interfaces (BCI), with many applications in intelligent healthcare services. In this work, an innovative approach inspired by the genetic code in bioinformatics, which utilizes brain rhythm code features consisting of δ, θ, α, β, or γ, is proposed for electroencephalography (EEG)-based emotion recognition. Methods. These features are first extracted with a sequencing technique. After evaluating them using four conventional machine learning classifiers, an optimal channel-specific feature that produces the highest accuracy in each emotional case is identified, so that emotion recognition through minimal data is realized. In this way, the complexity of emotion recognition can be significantly reduced, making it more achievable in practical hardware setups. Results. The best classification accuracies achieved on the DEAP and MAHNOB datasets range from 83% to 92%, and on the SEED dataset the best accuracy is 78%. These results are impressive considering the minimal data employed. Further investigation of the optimal features shows that their representative channels are primarily in the frontal region and that the associated rhythmic characteristics are of multiple kinds. Individual differences are also found, as the optimal feature varies across subjects. Discussion. Compared with previous studies, this work provides insights for designing portable devices, as a single electrode is enough to generate satisfactory performance. It thereby advances the understanding of brain rhythms and offers an innovative solution for classifying EEG signals in diverse BCI applications, including emotion recognition.
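The rhythm "letters" δ, θ, α, β, γ come from band-pass decomposition of each channel; a sketch using conventional band edges (the paper's exact edges may differ, and the test tone is illustrative):

```python
import numpy as np
from scipy.signal import butter, filtfilt

# Conventional EEG rhythm bands in Hz (edges vary between studies).
BANDS = {"delta": (1, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def band_decompose(x, fs):
    """Split one EEG channel into the classical rhythm bands
    with zero-phase 4th-order Butterworth band-pass filters."""
    out = {}
    for name, (lo, hi) in BANDS.items():
        b, a = butter(4, [lo / (fs / 2), hi / (fs / 2)], btype="band")
        out[name] = filtfilt(b, a, x)          # zero-phase filtering
    return out

fs = 200
t = np.arange(0, 4, 1 / fs)
x = np.sin(2 * np.pi * 10 * t)                 # pure 10 Hz (alpha) tone
bands = band_decompose(x, fs)
print(bands["alpha"].std() > 10 * bands["gamma"].std())  # True
```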
Affiliation(s)
- Jia Wen Li
- School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, China
- Engineering Research Center of Big Data Application in Private Health Medicine, Fujian Province University, Putian, China
- Hubei Province Key Laboratory of Intelligent Information Processing and Real-Time Industrial System, Wuhan University of Science and Technology, Wuhan, China
- Guangxi Key Lab of Multi-Source Information Mining and Security, Guangxi Normal University, Guilin, China
- Di Lin
- Engineering Research Center of Big Data Application in Private Health Medicine, Fujian Province University, Putian, China
- New Engineering Industry College, Putian University, Putian, China
- Yan Che
- Engineering Research Center of Big Data Application in Private Health Medicine, Fujian Province University, Putian, China
- New Engineering Industry College, Putian University, Putian, China
- Ju Jian Lv
- School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, China
- Rong Jun Chen
- School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, China
- Lei Jun Wang
- School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, China
- Xian Xian Zeng
- School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, China
- Jin Chang Ren
- School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, China
- National Subsea Centre, Robert Gordon University, Aberdeen, United Kingdom
- Hui Min Zhao
- School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, China
- Xu Lu
- School of Computer Science, Guangdong Polytechnic Normal University, Guangzhou, China
8
Malnar D, Vrankic M. Optimising Time-Frequency Distributions: A Surface Metrology Approach. Sensors (Basel) 2023; 23:5804. [PMID: 37447655 DOI: 10.3390/s23135804] [Received: 05/15/2023] [Revised: 06/17/2023] [Accepted: 06/19/2023]
Abstract
Time-frequency signal processing offers a significant advantage over time-only or frequency-only methods, but representations require optimisation for a given signal. Standard practice is to choose an appropriate time-frequency distribution and fine-tune its parameters, usually via visual inspection and various measures; the most commonly used are based on the Rényi entropies or on Stanković's energy concentration. However, a discrepancy between the observed representation quality and the reported numerical value may arise when the filter kernel has greater adaptability. Herein, a performance measure derived from the Abbott-Firestone curve, similar to the volume parameters in surface metrology, is proposed as the objective function, minimised by a proposed minimalistic differential evolution variant that is parameter-free and uses a population of five members. Tests were conducted on two synthetic signals with different frequency modulations and one real-life signal. The multiform tiltable exponential kernel was optimised according to the Rényi entropy, Stanković's energy concentration, and the proposed measure, and the resulting distributions were mutually evaluated using the same measures and visual inspection. The optimiser demonstrated reliable convergence for all considered measures and signals, while the proposed measure showed consistent alignment between reported numerical values and visual assessments.
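For a normalized non-negative time-frequency distribution C(t, f), the Rényi entropy measure referenced here is R_α = (1/(1 − α)) log2 ΣΣ C^α, with lower values indicating better energy concentration; a minimal sketch on toy distributions (real TFDs can take negative values and need extra care, which is glossed over here):

```python
import numpy as np

def renyi_entropy(tfd, alpha=3):
    """Rényi entropy of a non-negative time-frequency distribution.
    Lower values indicate a more concentrated representation."""
    c = np.asarray(tfd, dtype=float)
    c = c / c.sum()                                 # normalise to unit volume
    return np.log2(np.sum(c ** alpha)) / (1.0 - alpha)

# A distribution concentrated on one cell has entropy 0;
# a uniform one over N cells has the maximum, log2(N).
peaked = np.zeros((8, 8))
peaked[3, 3] = 1.0
uniform = np.ones((8, 8))
print(renyi_entropy(uniform))   # 6.0  (log2 of 64 cells)
```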
Affiliation(s)
- Damir Malnar
- Faculty of Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia
- Miroslav Vrankic
- Faculty of Engineering, University of Rijeka, Vukovarska 58, 51000 Rijeka, Croatia
9
Cittadini R, Tamantini C, Scotto di Luzio F, Lauretti C, Zollo L, Cordella F. Affective state estimation based on Russell's model and physiological measurements. Sci Rep 2023; 13:9786. [PMID: 37328550 PMCID: PMC10275929 DOI: 10.1038/s41598-023-36915-6] [Received: 11/09/2021] [Accepted: 06/12/2023]
Abstract
Affective states are psycho-physiological constructs connecting mental and physiological processes. They can be represented in terms of arousal and valence according to Russell's model and can be extracted from physiological changes in the human body. However, a well-established optimal feature set and a classification method effective in terms of both accuracy and estimation time are not present in the literature. This paper aims to define a reliable and efficient approach for real-time affective state estimation by identifying the optimal physiological feature set and the most effective machine learning algorithm for binary as well as multi-class classification problems. The ReliefF feature selection algorithm was implemented to define a reduced optimal feature set. Supervised learning algorithms, including K-Nearest Neighbors (KNN), cubic and Gaussian Support Vector Machines, and Linear Discriminant Analysis, were implemented to compare their effectiveness in affective state estimation. The developed approach was tested on physiological signals acquired from 20 healthy volunteers during the administration of images belonging to the International Affective Picture System, conceived for inducing different affective states. The ReliefF algorithm reduced the number of physiological features from 23 to 13. Comparison of the machine learning algorithms showed that both accuracy and estimation time benefited from use of the optimal feature set. Furthermore, the KNN algorithm proved the most suitable for affective state estimation: the assessment of arousal and valence states on the 20 participants indicates that the KNN classifier, adopted with the 13 identified optimal features, is the most effective approach for real-time affective state estimation.
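The winning KNN classifier's core logic is compact enough to write from scratch; a toy sketch with made-up two-feature data, not the authors' 13-feature pipeline:

```python
import numpy as np
from collections import Counter

def knn_predict(X_train, y_train, x, k=3):
    """Classify x by majority vote among its k nearest training samples."""
    dists = np.linalg.norm(X_train - x, axis=1)    # Euclidean distances
    nearest = np.argsort(dists)[:k]                # indices of the k closest
    votes = Counter(y_train[i] for i in nearest)
    return votes.most_common(1)[0][0]

# Toy 2-feature data: two well-separated clusters standing in for
# low- and high-arousal physiological feature vectors.
X = np.array([[0.1, 0.2], [0.2, 0.1], [0.0, 0.0],
              [0.9, 1.0], [1.0, 0.9], [1.1, 1.1]])
y = np.array(["low_arousal"] * 3 + ["high_arousal"] * 3)
print(knn_predict(X, y, np.array([0.95, 0.95])))  # high_arousal
```

KNN's appeal for real-time use is exactly this simplicity: no training phase beyond storing the (small, ReliefF-reduced) feature vectors.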
Affiliation(s)
- Roberto Cittadini
- Research Unit of Advanced Robotics and Human-Centred Technologies, Department of Engineering, Università Campus Bio-Medico di Roma, Rome, Italy
- Christian Tamantini
- Research Unit of Advanced Robotics and Human-Centred Technologies, Department of Engineering, Università Campus Bio-Medico di Roma, Rome, Italy
- Francesco Scotto di Luzio
- Research Unit of Advanced Robotics and Human-Centred Technologies, Department of Engineering, Università Campus Bio-Medico di Roma, Rome, Italy
- Clemente Lauretti
- Research Unit of Advanced Robotics and Human-Centred Technologies, Department of Engineering, Università Campus Bio-Medico di Roma, Rome, Italy
- Loredana Zollo
- Research Unit of Advanced Robotics and Human-Centred Technologies, Department of Engineering, Università Campus Bio-Medico di Roma, Rome, Italy
- Francesca Cordella
- Research Unit of Advanced Robotics and Human-Centred Technologies, Department of Engineering, Università Campus Bio-Medico di Roma, Rome, Italy
10
Muhammad F, Hussain M, Aboalsamh H. A Bimodal Emotion Recognition Approach through the Fusion of Electroencephalography and Facial Sequences. Diagnostics (Basel) 2023; 13:977. [PMID: 36900121 PMCID: PMC10000366 DOI: 10.3390/diagnostics13050977] [Received: 12/10/2022] [Revised: 01/26/2023] [Accepted: 02/06/2023]
Abstract
In recent years, human-computer interaction (HCI) systems have become increasingly popular, and some demand better multimodal methods for discriminating actual emotions. In this work, a multimodal emotion recognition method based on deep canonical correlation analysis (DCCA) is presented through the fusion of electroencephalography (EEG) and facial video clips. A two-stage framework is implemented: the first stage extracts relevant features for emotion recognition from each single modality, while the second stage merges the highly correlated features from the two modalities and performs classification. A convolutional neural network (CNN) based ResNet50 and a 1D-CNN (one-dimensional CNN) were utilized to extract features from the facial video clips and EEG modalities, respectively. A DCCA-based approach was used to fuse the highly correlated features, and three basic human emotion categories (happy, neutral, and sad) were classified using a softmax classifier. The proposed approach was evaluated on the publicly available MAHNOB-HCI and DEAP datasets. Experimental results revealed average accuracies of 93.86% and 91.54% on MAHNOB-HCI and DEAP, respectively; the competitiveness of the framework in achieving these accuracies was assessed by comparison with existing work.
11
Cai Y, Li X, Li J. Emotion Recognition Using Different Sensors, Emotion Models, Methods and Datasets: A Comprehensive Review. Sensors (Basel) 2023; 23:2455. [PMID: 36904659 PMCID: PMC10007272 DOI: 10.3390/s23052455] [Received: 01/31/2023] [Revised: 02/18/2023] [Accepted: 02/21/2023]
Abstract
In recent years, the rapid development of sensors and information technology has made it possible for machines to recognize and analyze human emotions, making emotion recognition an important research direction in many fields. Human emotions have many manifestations, so recognition can be realized by analyzing facial expressions, speech, behavior, or physiological signals, each collected by different sensors. Correct recognition of human emotions can promote the development of affective computing. Most existing emotion recognition surveys focus on a single sensor, so comparing different sensors, and unimodal against multimodal approaches, is all the more important. In this survey, we collect and review more than 200 papers on emotion recognition using literature research methods and categorize them according to their innovations. These articles mainly focus on the methods and datasets used for emotion recognition with different sensors. The survey also provides application examples and developments in emotion recognition, and compares the advantages and disadvantages of different sensors. It can help researchers gain a better understanding of existing emotion recognition systems, thus facilitating the selection of suitable sensors, algorithms, and datasets.
12
Bai Z, Liu J, Hou F, Chen Y, Cheng M, Mao Z, Song Y, Gao Q. Emotion recognition with residual network driven by spatial-frequency characteristics of EEG recorded from hearing-impaired adults in response to video clips. Comput Biol Med 2023; 152:106344. [PMID: 36470142 DOI: 10.1016/j.compbiomed.2022.106344] [Received: 08/13/2022] [Revised: 10/31/2022] [Accepted: 11/21/2022]
Abstract
In recent years, emotion recognition based on electroencephalography (EEG) signals has attracted considerable attention, but most existing work has focused on normal or depressed people. Because of their lack of hearing, it is difficult for hearing-impaired people to express emotions through language in social activities. In this work, we collected EEG signals from hearing-impaired subjects while they watched six kinds of emotional video clips (happiness, inspiration, neutral, anger, fear, and sadness) for emotion recognition. The biharmonic spline interpolation method was utilized to convert the traditional frequency-domain features, Differential Entropy (DE), Power Spectral Density (PSD), and Wavelet Entropy (WE), into the spatial domain. The patch embedding (PE) method was used to segment the feature map into equal patches to capture differences in the distribution of emotional information among brain regions. For feature classification, a compact residual network with depthwise convolution (DC) and pointwise convolution (PC) is proposed to separate the spatial and channel mixing dimensions and better extract information between channels. Subject-dependent experiments based on 70% training sets and 30% testing sets were performed. The average classification accuracies of PE (DE), PE (PSD), and PE (WE) were 91.75%, 85.53%, and 75.68%, improvements of 11.77%, 23.54%, and 16.61% over DE, PSD, and WE, respectively. Moreover, comparison experiments carried out on the SEED and DEAP datasets with PE (DE) achieved average accuracies of 90.04% (positive, neutral, and negative) and 88.75% (high valence and low valence). By exploring the emotional brain regions, we found that the frontal, parietal, and temporal lobes of hearing-impaired people were associated with emotional activity, whereas in normal people the main emotional brain area is the frontal lobe.
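The conversion of per-channel features to a spatial map amounts to scattered 2-D interpolation from electrode coordinates onto a grid; SciPy's thin-plate-spline RBF is the 2-D biharmonic spline, so a sketch can look as follows (electrode positions and feature values below are made up, not the paper's montage):

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

# Hypothetical 2-D electrode positions (within a unit head circle)
# and one DE value per channel.
rng = np.random.default_rng(2)
elec_xy = rng.uniform(-1, 1, size=(16, 2))   # 16 channels
de_vals = rng.normal(size=16)                # per-channel DE feature

# Thin-plate spline == 2-D biharmonic spline; smoothing=0 interpolates
# the channel values exactly.
interp = RBFInterpolator(elec_xy, de_vals, kernel="thin_plate_spline")

# Evaluate on a 32x32 grid to obtain an image-like map for the CNN.
g = np.linspace(-1, 1, 32)
grid = np.stack(np.meshgrid(g, g), axis=-1).reshape(-1, 2)
feature_map = interp(grid).reshape(32, 32)
print(feature_map.shape)  # (32, 32)
```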
Affiliation(s)
- Zhongli Bai: Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, School of Electrical Engineering and Automation, Tianjin University of Technology, Tianjin, 300384, China.
- Junjie Liu: Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, School of Electrical Engineering and Automation, Tianjin University of Technology, Tianjin, 300384, China.
- Fazheng Hou: Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, School of Electrical Engineering and Automation, Tianjin University of Technology, Tianjin, 300384, China.
- Yirui Chen: Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, School of Electrical Engineering and Automation, Tianjin University of Technology, Tianjin, 300384, China.
- Meiyi Cheng: Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, School of Electrical Engineering and Automation, Tianjin University of Technology, Tianjin, 300384, China.
- Zemin Mao: Technical College for the Deaf, Tianjin University of Technology, Tianjin, 300384, China.
- Yu Song: Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, School of Electrical Engineering and Automation, Tianjin University of Technology, Tianjin, 300384, China.
- Qiang Gao: Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, TUT Maritime College, Tianjin University of Technology, Tianjin, 300384, China.
13
Yang L, He J, Liu D, Zheng W, Song Z. EEG Microstate Features as an Automatic Recognition Model of High-Density Epileptic EEG Using Support Vector Machine. Brain Sci 2022; 12:1731. [PMID: 36552190] [PMCID: PMC9775561] [DOI: 10.3390/brainsci12121731]
Abstract
Epilepsy is one of the most serious nervous system diseases; it can be diagnosed accurately by video electroencephalography. In this study, we analyzed microstates of epileptic electroencephalogram (EEG) recordings to aid in the diagnosis and identification of epilepsy. We recruited patients with focal epilepsy and healthy participants from the Third Xiangya Hospital and recorded their resting-state EEG data. The EEG data were subjected to microstate analysis, and a support vector machine (SVM) classifier was used for automatic epileptic EEG classification based on features of the EEG microstate series, including microstate parameters (duration, occurrence, and coverage), linear features (median, second quartile, mean, kurtosis, and skewness), and nonlinear features (Petrosian fractal dimension, approximate entropy, sample entropy, fuzzy entropy, and Lempel-Ziv complexity). In the gamma sub-band, the model using microstate parameters was the best for interictal epilepsy recognition, with an accuracy of 87.18%, a recall of 70.59%, and an area under the curve of 94.52%. Features extracted from the EEG microstates over the broad 4-45 Hz band also recognized interictal epilepsy, with an accuracy of 79.55%. With an SVM classifier, microstate parameters and EEG features can thus be used effectively to classify epileptic EEG, and microstate parameters classify it better than the other EEG features.
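The three microstate parameters named above (duration, occurrence, coverage) can be computed directly from a per-sample microstate label sequence. A minimal sketch, with function name and toy sampling rate of our own choosing:

```python
import numpy as np

def microstate_parameters(labels, fs, n_states=4):
    """Duration (s), occurrence (runs/s), and coverage (fraction of samples)
    per microstate, from a 1D sequence of per-sample microstate labels."""
    labels = np.asarray(labels)
    # split the sequence into runs of constant label
    change = np.flatnonzero(np.diff(labels)) + 1
    starts = np.r_[0, change]
    ends = np.r_[change, len(labels)]
    run_label = labels[starts]
    run_len = ends - starts
    total_sec = len(labels) / fs
    params = {}
    for s in range(n_states):
        lens = run_len[run_label == s]
        params[s] = {
            "duration": lens.mean() / fs if lens.size else 0.0,
            "occurrence": lens.size / total_sec,
            "coverage": lens.sum() / len(labels),
        }
    return params
```

For example, the label sequence `[0, 0, 1, 1, 1, 0, 0, 2, 2, 3]` at 10 Hz gives state 0 a mean duration of 0.2 s, an occurrence of 2 runs/s, and a coverage of 0.4.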
Affiliation(s)
- Zhi Song: Correspondence: ; Tel.: +1-39-74-814-092
14
Kaklauskas A, Abraham A, Ubarte I, Kliukas R, Luksaite V, Binkyte-Veliene A, Vetloviene I, Kaklauskiene L. A Review of AI Cloud and Edge Sensors, Methods, and Applications for the Recognition of Emotional, Affective and Physiological States. Sensors (Basel) 2022; 22:7824. [PMID: 36298176] [PMCID: PMC9611164] [DOI: 10.3390/s22207824]
Abstract
Detection and recognition of affective, emotional, and physiological states (AFFECT) by capturing human signals is a fast-growing area that has been applied across numerous domains. The aim of this research is to review publications on how techniques based on brain and biometric sensors can be used for AFFECT recognition, consolidate the findings, provide a rationale for the current methods, compare the effectiveness of existing methods, and quantify how likely they are to address the issues and challenges in the field. In efforts to better achieve the key goals of Society 5.0, Industry 5.0, and human-centered design, the recognition of emotional, affective, and physiological states is progressively becoming an important matter and offers tremendous growth of knowledge and progress in these and other related fields. In this research, a review of AFFECT-recognition brain and biometric sensors, methods, and applications was performed, based on Plutchik's wheel of emotions. Because of the immense variety of existing sensors and sensing systems, this study aimed to analyze the available sensors that can be used to define human AFFECT and to classify them by type of sensing area and efficiency in real implementations. Based on statistical and multiple-criteria analysis across 169 nations, our outcomes reveal a connection between a nation's success, the number of Web of Science articles it has published, and its frequency of citation on AFFECT recognition. The principal conclusions present how this research contributes to the big picture in the field under analysis and explore forthcoming study trends.
Affiliation(s)
- Arturas Kaklauskas: Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Ajith Abraham: Machine Intelligence Research Labs, Scientific Network for Innovation and Research Excellence, Auburn, WA 98071, USA
- Ieva Ubarte: Institute of Sustainable Construction, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Romualdas Kliukas: Department of Applied Mechanics, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Vaida Luksaite: Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Arune Binkyte-Veliene: Institute of Sustainable Construction, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Ingrida Vetloviene: Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
- Loreta Kaklauskiene: Department of Construction Management and Real Estate, Vilnius Gediminas Technical University, Sauletekio Ave. 11, LT-10223 Vilnius, Lithuania
15
Fabio RA, Chiarini L, Canegallo V. Pain in Rett syndrome: a pilot study and a single case study on the assessment of pain and the construction of a suitable measuring scale. Orphanet J Rare Dis 2022; 17:356. [PMID: 36104823] [PMCID: PMC9476284] [DOI: 10.1186/s13023-022-02519-y]
Abstract
BACKGROUND Rett syndrome (RTT) is a severe neurodevelopmental disorder mainly caused by mutations in the MECP2 gene, affecting around 1 in 10,000 female births. Severe physical, language, and social impairments impose a wide range of limitations on the quality of life of patients with RTT. Their varied comorbidities cause considerable pain, but communicating this suffering is difficult for these patients: apraxia prevents them from expressing pain in a timely manner, and their difficulties with expressive language further hinder communication. Two studies, a pilot study and a single case study, investigate the manifestation of pain in patients with RTT and propose a suitable scale to measure it. AIMS OF THIS STUDY The first aim was to describe pain situations in RTT by collecting information from parents; the second was to test and compare existing pain questionnaires for non-communicating disorders, namely the Pain Assessment in Advanced Dementia (PAINAD), the Critical-Care Pain Observation Tool (CPOT), and the Non-communicating Children's Pain Checklist-Revised (NCCPC-R), to assess which best matches the pain behavior of patients with RTT. The third aim was to identify the specific verbal and non-verbal behaviors that characterize pain in girls with Rett syndrome, discriminating them from non-pain behaviors. METHOD Nineteen participants took part: eighteen girls with RTT in the pilot study, and one girl with RTT with 27 manifestations of pain in the single case study. They were video-recorded in both pain and baseline conditions. Two independent observers coded the 90 video recordings (36 and 54, respectively) to describe their behavioral characteristics.
RESULTS The two studies showed that the most significant pain behaviors expressed by the girls with respect to the baseline condition were, at the facial level, a wrinkled forehead, wide eyes, teeth grinding and banging, complaining, making sounds, crying, and screaming; the most common bodily manifestations were tremors, forward and backward movement of the torso, tension in the upper limbs, increased movement of the lower limbs, and a sprawling movement of the whole body. CONCLUSION The results of the two studies helped to create an easy-to-apply scale that healthcare professionals can use to assess pain in patients with Rett syndrome. The scale uses the PAINAD as its basic structure, with some items changed to reflect the behavior of patients with RTT.
Affiliation(s)
- Rosa Angela Fabio: Department of Economy, University of Messina, via Dei Verdi, 75, 98123 Messina, Italy
- Liliana Chiarini: Department of Economy, University of Messina, via Dei Verdi, 75, 98123 Messina, Italy; CARI (Airett Center Innovation and Research), Vicolo Volto S. Luca, 16, 37100 Verona, Italy
- Virginia Canegallo: Vita-Salute San Raffaele University, Via Olgettina, 58, 20132 Milano, MI, Italy
16
Park H, Kim J, Jo S, Kim H, Jo Y, Kim S, Yoo I. Measuring emotional variables in occupational performance: A scoping review. Work 2022; 72:1195-1203. [DOI: 10.3233/wor-205162]
Abstract
BACKGROUND: As interest in job-related psychology has increased, the need to understand workplace stress has been emphasized. Negative emotional states such as anxiety and stress permeate the organization and, if uncontrolled, can negatively impact the health and work performance of workers. Therefore, attempts to analyze various signals to understand human emotional states or attitudes may be important for future technological development. OBJECTIVE: The purpose of this study was to identify which biological variables can discriminate emotions that significantly affect work results. METHODS: Databases (Embase, PsycINFO, PubMed, and CINAHL) were searched for all relevant literature published as of December 31, 2019. RESULTS: Brain activity (BA) and heart rate (HR) or heart rate variability (HRV) are adequate for assessing negative emotions, while BA, galvanic skin response (GSR), and salivary samples (SS) can confirm both positive and negative emotions. CONCLUSION: In the future, researchers should study measurement tools and biological variables while workers perform tasks, and develop intervention strategies to address emotions associated with work. This may enable workers to perform tasks more efficiently, prevent accidents, and satisfy clients.
Affiliation(s)
- Hoojung Park: Department of Occupational Therapy, College of Medical Science, Jeonju University, Jeonju-si, Jeollabuk-do, Republic of Korea
- Jisu Kim: Department of Occupational Therapy, College of Medical Science, Jeonju University, Jeonju-si, Jeollabuk-do, Republic of Korea
- Subeen Jo: Department of Occupational Therapy, College of Medical Science, Jeonju University, Jeonju-si, Jeollabuk-do, Republic of Korea
- Hanseon Kim: Department of Occupational Therapy, College of Medical Science, Jeonju University, Jeonju-si, Jeollabuk-do, Republic of Korea
- Yunjo Jo: Department of Occupational Therapy, College of Medical Science, Jeonju University, Jeonju-si, Jeollabuk-do, Republic of Korea
- Suhyeon Kim: Department of Occupational Therapy, College of Medical Science, Jeonju University, Jeonju-si, Jeollabuk-do, Republic of Korea
- Ingyu Yoo: Department of Occupational Therapy, College of Medical Science, Jeonju University, Jeonju-si, Jeollabuk-do, Republic of Korea
17
Dadebayev D, Goh WW, Tan EX. EEG-based emotion recognition: Review of commercial EEG devices and machine learning techniques. J King Saud Univ Comput Inf Sci 2022. [DOI: 10.1016/j.jksuci.2021.03.009]
18
Li D, Xie L, Chai B, Wang Z, Yang H. Spatial-frequency convolutional self-attention network for EEG emotion recognition. Appl Soft Comput 2022. [DOI: 10.1016/j.asoc.2022.108740] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
19
Wang Y, Zhang L, Xia P, Wang P, Chen X, Du L, Fang Z, Du M. EEG-Based Emotion Recognition Using a 2D CNN with Different Kernels. Bioengineering (Basel) 2022; 9:231. [PMID: 35735474] [PMCID: PMC9219701] [DOI: 10.3390/bioengineering9060231]
Abstract
Emotion recognition is receiving significant attention in research on health care and Human-Computer Interaction (HCI). Because electroencephalogram (EEG) signals correlate strongly with emotion and, unlike external expressions such as the voice and face, are difficult to disguise, EEG-based emotion recognition methods have been widely accepted and applied. Recently, great improvements have been made in machine learning for EEG-based emotion detection. However, previous studies still have some major disadvantages. First, traditional machine learning methods require manual feature extraction, which is time-consuming and relies heavily on human experts. Second, to improve model accuracy, many researchers used subject-dependent models that lack generalization and universality. Moreover, in most studies there is still room for improvement in recognition accuracy. To overcome these shortcomings, a novel EEG-based deep neural network is proposed for emotion classification in this article. The proposed 2D CNN uses two convolutional kernels of different sizes to extract emotion-related features along both the time direction and the spatial direction. To verify the feasibility of the proposed model, the public emotion dataset DEAP is used in experiments. The results show accuracies of up to 99.99% and 99.98% for arousal and valence binary classification, respectively, which are encouraging for research and applications in the emotion recognition field.
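As a toy illustration of the idea of differently shaped kernels scanning the time and spatial (channel) directions, and emphatically not the authors' architecture (their kernel sizes and layer stack are not given here; the shapes below are our assumptions), consider:

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 'valid' 2D cross-correlation (no kernel flip).
    x: (H, W) input, k: (kh, kw) kernel."""
    kh, kw = k.shape
    H, W = x.shape
    out = np.empty((H - kh + 1, W - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

# A channels-by-time EEG segment (e.g. 32 channels, 128 samples).
rng = np.random.default_rng(0)
segment = rng.standard_normal((32, 128))

temporal_k = rng.standard_normal((1, 9))   # slides along the time axis
spatial_k = rng.standard_normal((5, 1))    # slides across neighbouring channels

t_map = conv2d_valid(segment, temporal_k)  # (32, 120) time-direction features
s_map = conv2d_valid(segment, spatial_k)   # (28, 128) space-direction features
```

In a real CNN these two feature maps would come from parallel learned convolutional branches; here fixed random kernels merely show how the two kernel shapes partition the input differently.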
Affiliation(s)
- Yuqi Wang: Institute of Microelectronics of Chinese Academy of Sciences, Beijing 100029, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Lijun Zhang: Institute of Microelectronics of Chinese Academy of Sciences, Beijing 100029, China; University of Chinese Academy of Sciences, Beijing 100049, China
- Pan Xia: University of Chinese Academy of Sciences, Beijing 100049, China; Aerospace Information Research Institute, Chinese Academy of Sciences (AIRCAS), Beijing 100190, China
- Peng Wang: University of Chinese Academy of Sciences, Beijing 100049, China; Aerospace Information Research Institute, Chinese Academy of Sciences (AIRCAS), Beijing 100190, China
- Xianxiang Chen: University of Chinese Academy of Sciences, Beijing 100049, China; Aerospace Information Research Institute, Chinese Academy of Sciences (AIRCAS), Beijing 100190, China
- Lidong Du: University of Chinese Academy of Sciences, Beijing 100049, China; Aerospace Information Research Institute, Chinese Academy of Sciences (AIRCAS), Beijing 100190, China
- Zhen Fang (correspondence): University of Chinese Academy of Sciences, Beijing 100049, China; Aerospace Information Research Institute, Chinese Academy of Sciences (AIRCAS), Beijing 100190, China; Personalized Management of Chronic Respiratory Disease, Chinese Academy of Medical Sciences, Beijing 100190, China
- Mingyan Du (correspondence): Beijing Luhe Hospital, Capital Medical University, Beijing 101199, China
20
García-Martínez B, Fernández-Caballero A, Martínez-Rodrigo A. Entropy and the Emotional Brain: Overview of a Research Field. Artif Intell 2022. [DOI: 10.5772/intechopen.98342]
Abstract
In recent years, there has been a notable increase in the number of studies that assess brain dynamics for the recognition of emotional states by means of nonlinear methodologies. More precisely, different entropy metrics have been applied to the analysis of electroencephalographic recordings for the detection of emotions. Regularity-based entropy metrics, symbolic predictability-based entropy indices, and various multiscale and multilag variants of these methods have been successfully tested in a series of studies on emotion recognition from the EEG recording. This chapter aims to unify all those contributions, summarizing the main discoveries recently achieved in this research field.
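Of the entropy families mentioned, the symbolic predictability-based one is the easiest to sketch. Below is a minimal permutation entropy implementation (ours, for illustration; the `order` and `delay` defaults are arbitrary choices):

```python
import numpy as np
from math import factorial

def permutation_entropy(x, order=3, delay=1, normalize=True):
    """Permutation entropy: Shannon entropy of the distribution of
    ordinal patterns of length `order` found in the signal."""
    x = np.asarray(x, dtype=float)
    n = len(x) - (order - 1) * delay
    # ordinal pattern (rank order) of each embedded vector
    patterns = np.array([np.argsort(x[i:i + order * delay:delay])
                         for i in range(n)])
    _, counts = np.unique(patterns, axis=0, return_counts=True)
    p = counts / counts.sum()
    pe = -np.sum(p * np.log2(p))
    # normalized PE lies in [0, 1]; 0 = fully predictable, 1 = maximally irregular
    return pe / np.log2(factorial(order)) if normalize else pe
```

A monotone ramp produces a single ordinal pattern and hence zero entropy, while white noise approaches the maximum of 1; emotional-state discrimination rests on EEG falling between these extremes in condition-dependent ways.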
21
Sun Y, Chen X. Automatic Detection of Epilepsy Based on Entropy Feature Fusion and Convolutional Neural Network. Oxid Med Cell Longev 2022; 2022:1322826. [PMID: 35602093] [PMCID: PMC9117030] [DOI: 10.1155/2022/1322826]
Abstract
Epilepsy is a neurological disorder caused by various genetic and acquired factors. The electroencephalogram (EEG) is an important means of diagnosing epilepsy. To address the low efficiency of manual clinical diagnosis of epileptic signals, this paper proposes an automatic epilepsy detection algorithm based on multi-feature fusion and a convolutional neural network. First, to retain the spatial information between multiple adjacent channels, a two-dimensional feature matrix is constructed from one-dimensional feature vectors according to the electrode distribution diagram. From the feature matrix, sample entropy (SE), permutation entropy (PE), and fuzzy entropy (FE) were used for feature extraction. The combined entropy features are taken as the input of a three-dimensional convolutional neural network, which performs the automatic detection of epilepsy. Epilepsy detection experiments were performed on the CHB-MIT and TUH datasets, respectively. Experimental results show that the algorithm based on spatial multi-feature fusion and a convolutional neural network achieves excellent performance.
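The channel-to-matrix construction described above can be sketched as follows. The grid coordinates are a hypothetical subset of a 10-20 montage chosen by us for illustration, not the layout used in the paper:

```python
import numpy as np

# Hypothetical coordinates of a few 10-20 electrodes on a 5x5 scalp grid;
# a real montage would map every recording channel the same way.
GRID_POS = {"Fp1": (0, 1), "Fp2": (0, 3), "F3": (1, 1), "F4": (1, 3),
            "C3": (2, 1), "Cz": (2, 2), "C4": (2, 3),
            "P3": (3, 1), "P4": (3, 3), "O1": (4, 1), "O2": (4, 3)}

def to_feature_matrix(channel_feats, shape=(5, 5)):
    """Place one scalar feature per channel (e.g. its sample entropy)
    into a 2D matrix so that spatially adjacent electrodes stay
    adjacent; cells without an electrode remain zero."""
    m = np.zeros(shape)
    for name, value in channel_feats.items():
        r, c = GRID_POS[name]
        m[r, c] = value
    return m
```

Stacking one such matrix per entropy type (SE, PE, FE) yields the multi-channel 2D input that a convolutional network can consume while preserving inter-electrode spatial relationships.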
Affiliation(s)
- Yongxin Sun: College of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, Jilin 130000, China; College of Physics and Electronic Information, Baicheng Normal University, Baicheng, Jilin 137000, China
- Xiaojuan Chen: College of Electronic Information Engineering, Changchun University of Science and Technology, Changchun, Jilin 130000, China
22
Maithri M, Raghavendra U, Gudigar A, Samanth J, Murugappan M, Chakole Y, Acharya UR. Automated emotion recognition: Current trends and future perspectives. Comput Methods Programs Biomed 2022; 215:106646. [PMID: 35093645] [DOI: 10.1016/j.cmpb.2022.106646]
Abstract
BACKGROUND Human emotions greatly affect a person's actions. Automated emotion recognition has applications in multiple domains such as health care, e-learning, and surveillance. The development of computer-aided diagnosis (CAD) tools has led to the automated recognition of human emotions. OBJECTIVE This review provides an insight into the various methods that employ electroencephalogram (EEG), facial, and speech signals, together with multi-modal emotion recognition techniques. In this work, we have reviewed most of the state-of-the-art papers published on this topic. METHOD This study considered the various emotion recognition (ER) models proposed between 2016 and 2021. The papers were analysed based on the methods employed, the classifier used, and the performance obtained. RESULTS There is a significant rise in the application of deep learning techniques for ER. They have been widely applied to EEG, speech, facial expression, and multimodal features to develop accurate ER models. CONCLUSION Our study reveals that most of the proposed machine and deep learning-based systems have yielded good performance for automated ER in a controlled environment. However, high performance still needs to be achieved for ER in uncontrolled environments.
Affiliation(s)
- M Maithri: Department of Mechatronics, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- U Raghavendra: Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Anjan Gudigar: Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- Jyothi Samanth: Department of Cardiovascular Technology, Manipal College of Health Professions, Manipal Academy of Higher Education, Manipal, Karnataka 576104, India
- Murugappan Murugappan: Department of Electronics and Communication Engineering, Kuwait College of Science and Technology, 13133, Kuwait
- Yashas Chakole: Department of Instrumentation and Control Engineering, Manipal Institute of Technology, Manipal Academy of Higher Education, Manipal 576104, India
- U Rajendra Acharya: School of Engineering, Ngee Ann Polytechnic, Clementi 599489, Singapore; Department of Biomedical Informatics and Medical Engineering, Asia University, Taichung, Taiwan; Department of Biomedical Engineering, School of Science and Technology, SUSS University, Singapore
23
Evaluation of a Single-Channel EEG-Based Sleep Staging Algorithm. Int J Environ Res Public Health 2022; 19:2845. [PMID: 35270548] [PMCID: PMC8910622] [DOI: 10.3390/ijerph19052845]
Abstract
Sleep staging is the basis of sleep assessment and plays a crucial role in the early diagnosis of and intervention in sleep disorders. Manual sleep staging by a specialist is time-consuming and influenced by subjective factors, and some automatic sleep staging algorithms are complex and inaccurate. This paper proposes a single-channel EEG-based sleep staging method that provides reliable technical support for diagnosing sleep problems. In this study, 59 features were extracted from single-channel EEG data across three aspects: the time domain, the frequency domain, and nonlinear indexes. Support vector machine, neural network, decision tree, and random forest classifiers were used to classify sleep stages automatically. The results reveal that the random forest classifier has the best sleep staging performance among the four algorithms. The recognition rate of the Wake stage was the highest, at 92.13%, and that of the N1 stage was the lowest, at 73.46%, with an average accuracy of 83.61%. An embedded method was adopted for feature selection. Sleep staging with the 11-dimensional feature set retained after filtering shows that the random forest model achieved 83.51% staging accuracy despite the reduced feature dimension, with a 94.85% coincidence rate with staging that used all features. Our study confirms the robustness of the random forest model in sleep staging, which achieves high classification accuracy with an appropriate classifier even on single-channel EEG data. This study provides a new direction for the portability of clinical EEG monitoring.
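As a sketch of the kind of per-epoch time-domain feature extraction described (our illustration; the paper's exact 59-feature list is not reproduced here), the Hjorth parameters are a typical ingredient:

```python
import numpy as np

def hjorth_parameters(x):
    """Hjorth activity, mobility, and complexity of a 1D EEG epoch."""
    dx = np.diff(x)
    ddx = np.diff(dx)
    activity = np.var(x)                                  # signal power
    mobility = np.sqrt(np.var(dx) / activity)             # mean frequency proxy
    complexity = np.sqrt(np.var(ddx) / np.var(dx)) / mobility
    return activity, mobility, complexity

def epoch_features(x):
    """A small slice of a time-domain feature vector for one 30 s epoch;
    a full pipeline would add frequency-domain and nonlinear indexes."""
    act, mob, comp = hjorth_parameters(x)
    return np.array([x.mean(), x.std(), np.ptp(x), act, mob, comp])
```

One such vector per epoch, per feature family, is what the four classifiers compared in the paper would consume.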
24
Liu H, Zhang Y, Li Y, Kong X. Review on Emotion Recognition Based on Electroencephalography. Front Comput Neurosci 2021; 15:758212. [PMID: 34658828] [PMCID: PMC8518715] [DOI: 10.3389/fncom.2021.758212]
Abstract
Emotions are closely related to human behavior, family, and society. Changes in emotion cause differences in electroencephalography (EEG) signals, which reflect different emotional states and are not easy to disguise. EEG-based emotion recognition has been widely used in human-computer interaction, medical diagnosis, the military, and other fields. In this paper, we describe the common steps of an EEG-based emotion recognition algorithm, from data acquisition, preprocessing, feature extraction, and feature selection to the classifier. We then review existing EEG-based emotion recognition methods and assess their classification performance. This paper will help researchers quickly understand the basic theory of emotion recognition and provide references for the future development of EEG research. Moreover, emotion is an important topic in safety psychology.
Affiliation(s)
- Haoran Liu: The Boiler and Pressure Vessel Safety Inspection Institute of Henan Province, Zhengzhou, China
- Ying Zhang: Patent Examination Cooperation (Henan) Center of the Patent Office, CNIPA, Zhengzhou, China
- Yujun Li: The Boiler and Pressure Vessel Safety Inspection Institute of Henan Province, Zhengzhou, China
- Xiangyi Kong: The Boiler and Pressure Vessel Safety Inspection Institute of Henan Province, Zhengzhou, China
25
Long F, Zhao S, Wei X, Ng SC, Ni X, Chi A, Fang P, Zeng W, Wei B. Positive and Negative Emotion Classification Based on Multi-channel. Front Behav Neurosci 2021; 15:720451. [PMID: 34512288] [PMCID: PMC8428531] [DOI: 10.3389/fnbeh.2021.720451]
Abstract
In this study, the EEG features of different emotions were extracted from multi-channel and forehead-channel recordings. The EEG signals of 26 subjects were collected using the emotional-video evocation method. The results show that band energy ratio and band differential entropy can be used to classify positive and negative emotions effectively, and the best results are achieved with an SVM classifier. When only the forehead-channel signals are used, the highest classification accuracy reaches 66%; when the data of all channels are used, the highest accuracy of the model reaches 82%. After channel selection, the best model of this study achieves an accuracy of more than 86%.
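The band energy ratio used above can be illustrated with a direct FFT computation (a minimal sketch, assuming a simple periodogram and band edges of our choosing rather than whatever estimator and bands the authors used):

```python
import numpy as np

def band_energy_ratio(x, fs, band, total=(1.0, 45.0)):
    """Energy in `band` divided by energy in the full `total` range,
    both taken from the one-sided FFT power spectrum."""
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    power = np.abs(np.fft.rfft(x)) ** 2
    in_band = (freqs >= band[0]) & (freqs < band[1])
    in_total = (freqs >= total[0]) & (freqs < total[1])
    return power[in_band].sum() / power[in_total].sum()

fs = 128
t = np.arange(fs * 4) / fs
x = np.sin(2 * np.pi * 10 * t)            # a pure 10 Hz "alpha-band" tone
alpha_ratio = band_energy_ratio(x, fs, (8, 13))   # close to 1.0
beta_ratio = band_energy_ratio(x, fs, (14, 31))   # close to 0.0
```

For a pure 10 Hz tone essentially all in-range energy falls in the 8-13 Hz band, so the alpha ratio approaches 1 while the beta ratio approaches 0; real EEG spreads the ratios across bands in an emotion-dependent way.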
Affiliation(s)
- Fangfang Long: Department of Psychology, Nanjing University, Nanjing, China
- Shanguang Zhao: Centre for Sport and Exercise Sciences, University of Malaya, Kuala Lumpur, Malaysia
- Xin Wei: Institute of Social Psychology, School of Humanities and Social Sciences, Xi'an Jiaotong University, Xi'an, China; Key & Core Technology Innovation Institute of the Greater Bay Area, Guangdong, China
- Siew-Cheok Ng: Faculty of Engineering, University of Malaya, Kuala Lumpur, Malaysia
- Xiaoli Ni: Institute of Social Psychology, School of Humanities and Social Sciences, Xi'an Jiaotong University, Xi'an, China
- Aiping Chi: School of Sports, Shaanxi Normal University, Xi'an, China
- Peng Fang: Department of the Psychology of Military Medicine, Air Force Medical University, Xi'an, China
- Weigang Zeng: Key & Core Technology Innovation Institute of the Greater Bay Area, Guangdong, China
- Bokun Wei: Xi'an Middle School of Shaanxi Province, Xi'an, China
26
Khare SK, Bajaj V. Time-Frequency Representation and Convolutional Neural Network-Based Emotion Recognition. IEEE Trans Neural Netw Learn Syst 2021; 32:2901-2909. [PMID: 32735536] [DOI: 10.1109/tnnls.2020.3008938]
Abstract
Emotions are composed of cognitive reactions toward various situations. Such mental responses stem from physiological, cognitive, and behavioral changes. Electroencephalogram (EEG) signals provide a noninvasive and nonradioactive means of emotion identification. Accurate and automatic classification of emotions can boost the development of human-computer interfaces. This article proposes automatic feature extraction and classification through different convolutional neural networks (CNNs). First, the proposed method converts the filtered EEG signals into images using a time-frequency representation: the smoothed pseudo-Wigner-Ville distribution transforms the time-domain EEG signals into images. These images are fed to pretrained AlexNet, ResNet50, and VGG16 networks along with a configurable CNN. The performance of the four CNNs is evaluated by measuring accuracy, precision, Matthews correlation coefficient, F1-score, and false-positive rate. The evaluation shows that the configurable CNN requires far fewer learnable parameters while achieving better accuracy. Accuracy scores of 90.98%, 91.91%, 92.71%, and 93.01% obtained by AlexNet, ResNet50, VGG16, and the configurable CNN, respectively, show that the proposed method outperforms the existing methods.
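The signal-to-image step can be sketched as below. Note that this uses a plain short-time Fourier transform for brevity, whereas the paper uses the smoothed pseudo-Wigner-Ville distribution, and the window and hop sizes here are arbitrary assumptions:

```python
import numpy as np

def stft_image(x, fs, win=64, hop=16):
    """Turn a 1D EEG signal into a 2D time-frequency magnitude image.
    (Stand-in for the paper's smoothed pseudo-Wigner-Ville distribution;
    only the signal-to-image idea is illustrated.)"""
    window = np.hanning(win)
    frames = [x[i:i + win] * window for i in range(0, len(x) - win + 1, hop)]
    spec = np.abs(np.fft.rfft(frames, axis=1)).T  # (freq bins, time frames)
    # normalise to [0, 1] so the image can be fed to a pretrained CNN
    return spec / spec.max()

img = stft_image(np.random.default_rng(1).standard_normal(1024), fs=128)
```

The resulting image (frequency on one axis, time on the other) is what gets resized and passed to AlexNet-style networks expecting picture-like input.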
|
27
|
Subasi A, Tuncer T, Dogan S, Tanko D, Sakoglu U. EEG-based emotion recognition using tunable Q wavelet transform and rotation forest ensemble classifier. Biomed Signal Process Control 2021. [DOI: 10.1016/j.bspc.2021.102648] [Citation(s) in RCA: 14] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/17/2023]
|
28
|
Galvão F, Alarcão SM, Fonseca MJ. Predicting Exact Valence and Arousal Values from EEG. SENSORS (BASEL, SWITZERLAND) 2021; 21:3414. [PMID: 34068895 PMCID: PMC8155937 DOI: 10.3390/s21103414] [Citation(s) in RCA: 19] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/29/2021] [Revised: 04/20/2021] [Accepted: 05/11/2021] [Indexed: 11/18/2022]
Abstract
Recognition of emotions from physiological signals, and in particular from electroencephalography (EEG), is a field within affective computing gaining increasing relevance. Although researchers have used these signals to recognize emotions, most of them only identify a limited set of emotional states (e.g., happiness, sadness, anger, etc.) and have not attempted to predict exact values for valence and arousal, which would provide a wider range of emotional states. This paper describes our proposed model for predicting the exact values of valence and arousal in a subject-independent scenario. To create it, we studied the best features, brain waves, and machine learning models that are currently in use for emotion classification. This systematic analysis revealed that the best prediction model uses a KNN regressor (K = 1) with Manhattan distance, features from the alpha, beta and gamma bands, and the differential asymmetry from the alpha band. Results, using the DEAP, AMIGOS and DREAMER datasets, show that our model can predict valence and arousal values with a low error (MAE < 0.06, RMSE < 0.16) and a strong correlation between predicted and expected values (PCC > 0.80), and can identify four emotional classes with an accuracy of 84.4%. The findings of this work show that the features, brain waves and machine learning models, typically used in emotion classification tasks, can be used in more challenging situations, such as the prediction of exact values for valence and arousal.
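The best-performing model reported above (a KNN regressor with K = 1 and Manhattan distance) is straightforward to reproduce with scikit-learn; the synthetic features and target below are placeholders for the band-power and asymmetry features used in the paper.

```python
import numpy as np
from sklearn.neighbors import KNeighborsRegressor

rng = np.random.default_rng(42)
X = rng.standard_normal((200, 12))   # stand-in for alpha/beta/gamma features
y = X[:, 0] * 0.5 + 5.0              # synthetic continuous valence target

# K = 1 with Manhattan (city-block) distance, as in the reported model.
model = KNeighborsRegressor(n_neighbors=1, metric="manhattan")
model.fit(X[:150], y[:150])

pred = model.predict(X[150:])
mae = np.mean(np.abs(pred - y[150:]))
```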
Affiliation(s)
- Manuel J. Fonseca
- LASIGE, Faculdade de Ciências, Universidade de Lisboa, 1749-016 Lisboa, Portugal; (F.G.); (S.M.A.)
|
29
|
Maheshwari D, Ghosh SK, Tripathy RK, Sharma M, Acharya UR. Automated accurate emotion recognition system using rhythm-specific deep convolutional neural network technique with multi-channel EEG signals. Comput Biol Med 2021; 134:104428. [PMID: 33984749 DOI: 10.1016/j.compbiomed.2021.104428] [Citation(s) in RCA: 32] [Impact Index Per Article: 10.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/12/2021] [Revised: 04/15/2021] [Accepted: 04/19/2021] [Indexed: 10/21/2022]
Abstract
Emotion is interpreted as a psycho-physiological process, and it is associated with the personality, behavior, motivation, and character of a person. The objective of affective computing is to recognize different types of emotions for human-computer interaction (HCI) applications. Spatiotemporal brain electrical activity is measured using multi-channel electroencephalogram (EEG) signals. Automated emotion recognition using multi-channel EEG signals is an exciting research topic in cognitive neuroscience and affective computing. This paper proposes a rhythm-specific multi-channel convolutional neural network (CNN) based approach for automated emotion recognition using multi-channel EEG signals. The delta (δ), theta (θ), alpha (α), beta (β), and gamma (γ) rhythms of the EEG signal for each channel are evaluated using band-pass filters. The EEG rhythms from the selected channels, coupled with a deep CNN, are used for emotion classification tasks such as low valence (LV) vs. high valence (HV), low arousal (LA) vs. high arousal (HA), and low dominance (LD) vs. high dominance (HD). The deep CNN architecture considered in the proposed work has eight convolution, three average-pooling, four batch-normalization, three spatial-dropout, two dropout, one global-average-pooling, and three dense layers. We validated the developed model using three publicly available databases: DEAP, DREAMER, and DASPS. The results reveal that the proposed multivariate deep CNN approach coupled with the β-rhythm obtained accuracy values of 98.91%, 98.45%, and 98.69% for the LV vs. HV, LA vs. HA, and LD vs. HD emotion classification strategies, respectively, on the DEAP database with a 10-fold cross-validation (CV) scheme. Similarly, accuracy values of 98.56%, 98.82%, and 98.99% were obtained for the LV vs. HV, LA vs. HA, and LD vs. HD classification schemes, respectively, using the deep CNN and the θ-rhythm. The proposed multi-channel rhythm-specific deep CNN classification model obtained an average accuracy of 57.14% using the α-rhythm and trial-specific CV on the DASPS database. Moreover, for the 8-quadrant emotion classification strategy, the deep CNN classifier obtained an overall accuracy of 24.37% using the γ-rhythms of multi-channel EEG signals. The developed deep CNN model can be used for real-time automated emotion recognition applications.
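The rhythm extraction step described above (band-pass filtering each channel into δ, θ, α, β, and γ) can be sketched as follows; the filter order and sampling rate are illustrative assumptions.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

# Conventional EEG rhythm bands in Hz (upper gamma edge is an assumption).
BANDS = {"delta": (0.5, 4), "theta": (4, 8), "alpha": (8, 13),
         "beta": (13, 30), "gamma": (30, 45)}

def extract_rhythms(x, fs=128, order=4):
    """Split one EEG channel into the five classical rhythms via band-pass filters."""
    return {name: sosfiltfilt(butter(order, (lo, hi), btype="bandpass",
                                     fs=fs, output="sos"), x)
            for name, (lo, hi) in BANDS.items()}

rng = np.random.default_rng(1)
x = rng.standard_normal(4 * 128)     # 4 s of synthetic EEG at 128 Hz
rhythms = extract_rhythms(x)         # dict of five filtered signals
```

Each filtered rhythm would then be fed, per channel, into the deep CNN described in the abstract.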
Affiliation(s)
- Daksh Maheshwari
- Department of Electrical and Electronics Engineering, BITS-Pilani, Hyderabad Campus, Hyderabad, 500078, India
- S K Ghosh
- Department of Electrical and Electronics Engineering, BITS-Pilani, Hyderabad Campus, Hyderabad, 500078, India
- R K Tripathy
- Department of Electrical and Electronics Engineering, BITS-Pilani, Hyderabad Campus, Hyderabad, 500078, India
- Manish Sharma
- Department of Electrical and Computer Science Engineering, IITRAM, Ahmedabad, India
- U Rajendra Acharya
- Department of Electronics and Computer Engineering, Ngee Ann Polytechnic, Singapore; Department of Bioinformatics and Medical Engineering, Asia University, Taichung, Taiwan; International Research Organization for Advanced Science and Technology, Kumamoto University, Kumamoto, Japan
|
30
|
Bao G, Zhuang N, Tong L, Yan B, Shu J, Wang L, Zeng Y, Shen Z. Two-Level Domain Adaptation Neural Network for EEG-Based Emotion Recognition. Front Hum Neurosci 2021; 14:605246. [PMID: 33551775 PMCID: PMC7854906 DOI: 10.3389/fnhum.2020.605246] [Citation(s) in RCA: 20] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/11/2020] [Accepted: 12/22/2020] [Indexed: 11/16/2022] Open
Abstract
Emotion recognition plays an important part in human-computer interaction (HCI). Currently, the main challenge in electroencephalogram (EEG)-based emotion recognition is the non-stationarity of EEG signals, which causes the performance of a trained model to decrease over time. In this paper, we propose a two-level domain adaptation neural network (TDANN) to construct a transfer model for EEG-based emotion recognition. Specifically, deep features from the topological graph, which preserve topological information from EEG signals, are extracted using a deep neural network. These features are then passed through TDANN for two-level domain confusion. The first level uses the maximum mean discrepancy (MMD) to reduce the distribution discrepancy of deep features between the source domain and target domain, and the second uses a domain adversarial neural network (DANN) to force the deep features closer to their corresponding class centers. We evaluated the domain-transfer performance of the model on both our self-built data set and the public data set SEED. In the cross-day transfer experiment, the ability to accurately discriminate joy from other emotions was high: sadness (84%), anger (87.04%), and fear (85.32%) on the self-built data set. The accuracy reached 74.93% on the SEED data set. In the cross-subject transfer experiment, the ability to accurately discriminate joy from other emotions was equally high: sadness (83.79%), anger (84.13%), and fear (81.72%) on the self-built data set. The average accuracy reached 87.9% on the SEED data set, higher than that of WGAN-DA. The experimental results demonstrate that the proposed TDANN can effectively handle the domain transfer problem in EEG-based emotion recognition.
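The first-level MMD term described above can be illustrated with a small NumPy sketch; the RBF kernel and bandwidth are common choices, not necessarily those used in TDANN.

```python
import numpy as np

def rbf_mmd2(X, Y, sigma=1.0):
    """Biased estimate of squared MMD between samples X and Y with an RBF kernel."""
    def k(A, B):
        d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2.0 * sigma ** 2))
    return k(X, X).mean() + k(Y, Y).mean() - 2.0 * k(X, Y).mean()

rng = np.random.default_rng(0)
src = rng.standard_normal((64, 8))            # source-domain "deep features"
tgt_near = rng.standard_normal((64, 8))       # target from the same distribution
tgt_far = rng.standard_normal((64, 8)) + 3.0  # target with a large domain shift
mmd_near = rbf_mmd2(src, tgt_near)
mmd_far = rbf_mmd2(src, tgt_far)              # larger: the distributions differ more
```

In a transfer model this quantity would be added to the training loss so that minimizing it pulls the source and target feature distributions together.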
Affiliation(s)
- Guangcheng Bao
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China
- Ning Zhuang
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China
- Li Tong
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China
- Bin Yan
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China
- Jun Shu
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China
- Linyuan Wang
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China
- Ying Zeng
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China; Key Laboratory for NeuroInformation of Ministry of Education, School of Life Science and Technology, University of Electronic Science and Technology of China, Chengdu, China
- Zhichong Shen
- Henan Key Laboratory of Imaging and Intelligent Processing, PLA Strategic Support Force Information Engineering University, Zhengzhou, China
|
31
|
Ismael AM, Alçin ÖF, Abdalla KH, Şengür A. Two-stepped majority voting for efficient EEG-based emotion classification. Brain Inform 2020; 7:9. [PMID: 32940803 PMCID: PMC7498529 DOI: 10.1186/s40708-020-00111-3] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2020] [Accepted: 09/08/2020] [Indexed: 12/24/2022] Open
Abstract
In this paper, a novel approach based on two-stepped majority voting is proposed for efficient EEG-based emotion classification. Emotion recognition is important for human–machine interactions. Facial features- and body gestures-based approaches have generally been proposed for emotion recognition, while EEG-based approaches have recently become more popular. In the proposed approach, the raw EEG signals are initially low-pass filtered for noise removal, and band-pass filters are used for rhythm extraction. For each rhythm, the best-performing EEG channels are determined based on wavelet-based entropy features and fractal dimension-based features. The k-nearest neighbor (KNN) classifier is used for classification. The best five EEG channels are used in majority voting to obtain the predictions for each EEG rhythm. In the second majority voting step, the predictions from all rhythms are used to obtain a final prediction. The DEAP dataset is used in the experiments, with classification accuracy, sensitivity, and specificity as performance evaluation metrics. The experiments classify the emotions into two binary classes: high valence (HV) vs. low valence (LV) and high arousal (HA) vs. low arousal (LA). The experiments show that 86.3% HV vs. LV discrimination accuracy and 85.0% HA vs. LA discrimination accuracy are obtained. The obtained results are also compared with some existing methods; the comparisons show that the proposed method has potential for EEG-based emotion classification.
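The two-stepped majority voting itself is simple to sketch; the channel counts and labels below are illustrative, and the tie-breaking rule (smallest label wins) is an assumption.

```python
import numpy as np

def majority(votes):
    """Most frequent label among votes (ties broken toward the smallest label)."""
    vals, counts = np.unique(votes, return_counts=True)
    return vals[np.argmax(counts)]

def two_step_vote(per_rhythm_channel_preds):
    """Step 1: vote across the best channels within each rhythm.
    Step 2: vote across the per-rhythm winners."""
    rhythm_preds = [majority(ch_preds) for ch_preds in per_rhythm_channel_preds]
    return majority(rhythm_preds)

# 3 rhythms x 5 best channels of binary HV(1)/LV(0) predictions
preds = [[1, 1, 0, 1, 1],   # rhythm 1 votes -> 1
         [0, 0, 0, 1, 1],   # rhythm 2 votes -> 0
         [1, 0, 1, 1, 0]]   # rhythm 3 votes -> 1
final = two_step_vote(preds)  # second-step vote over [1, 0, 1] -> 1
```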
Affiliation(s)
- Aras M Ismael
- Sulaimani Polytechnic University, Sulaymaniyah, Iraq
- Ömer F Alçin
- Electrical Engineering Department, Engineering and Natural Sciences Faculty, Malatya Turgut Ozal University, 44210, Malatya, Turkey
- Abdulkadir Şengür
- Electrical-Electronics Engineering Department, Technology Faculty, Firat University, Elazig, Turkey
|
32
|
Raheel A, Majid M, Alnowami M, Anwar SM. Physiological Sensors Based Emotion Recognition While Experiencing Tactile Enhanced Multimedia. SENSORS 2020; 20:s20144037. [PMID: 32708056 PMCID: PMC7411620 DOI: 10.3390/s20144037] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/29/2020] [Revised: 05/12/2020] [Accepted: 05/14/2020] [Indexed: 12/18/2022]
Abstract
Emotion recognition has increased the potential of affective computing by providing instant feedback from users and, thereby, a better understanding of their behavior. Physiological sensors have been used to recognize human emotions in response to audio and video content that engages single (auditory) and multiple (auditory and vision) human senses, respectively. In this study, human emotions were recognized using physiological signals observed in response to tactile enhanced multimedia content that engages three (tactile, vision, and auditory) human senses. The aim was to give users an enhanced real-world sensation while engaging with multimedia content. To this end, four videos were selected and synchronized with an electric fan and a heater, based on timestamps within the scenes, to generate tactile enhanced content with cold and hot air effects, respectively. Physiological signals, i.e., electroencephalography (EEG), photoplethysmography (PPG), and galvanic skin response (GSR), were recorded using commercially available sensors while experiencing these tactile enhanced videos. The precision of the acquired physiological signals (including EEG, PPG, and GSR) is enhanced using pre-processing with a Savitzky-Golay smoothing filter. Frequency-domain features (rational asymmetry, differential asymmetry, and correlation) from EEG, time-domain features (variance, entropy, kurtosis, and skewness) from GSR, and heart rate and heart rate variability from PPG data are extracted. The K-nearest neighbor classifier is applied to the extracted features to classify four emotions (happy, relaxed, angry, and sad). Our experimental results show that, among individual modalities, PPG-based features give the highest accuracy of 78.57% as compared to EEG- and GSR-based features. The fusion of EEG, GSR, and PPG features further improved the classification accuracy to 79.76% (for four emotions) when interacting with tactile enhanced multimedia.
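The GSR branch of the feature extraction above (Savitzky-Golay smoothing followed by variance, entropy, kurtosis, and skewness) can be sketched as follows; the smoothing window, polynomial order, and histogram-based entropy estimator are assumptions, as the abstract does not specify them.

```python
import numpy as np
from scipy import stats
from scipy.signal import savgol_filter

def gsr_features(x, window=11, poly=3):
    """Smooth a GSR segment, then compute the four time-domain features
    named in the study: variance, entropy, kurtosis, skewness.
    Entropy here is Shannon entropy over a 16-bin amplitude histogram,
    one plausible estimator among several."""
    x = savgol_filter(x, window, poly)
    hist, _ = np.histogram(x, bins=16)
    p = hist / hist.sum()
    ent = -np.sum(p[p > 0] * np.log2(p[p > 0]))
    return {"variance": np.var(x), "entropy": ent,
            "kurtosis": stats.kurtosis(x), "skewness": stats.skew(x)}

rng = np.random.default_rng(7)
feats = gsr_features(rng.standard_normal(256))   # synthetic GSR segment
```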
Affiliation(s)
- Aasim Raheel
- Department of Computer Engineering, University of Engineering and Technology, Taxila 47050, Pakistan
- Muhammad Majid
- Department of Computer Engineering, University of Engineering and Technology, Taxila 47050, Pakistan
- Majdi Alnowami
- Department of Nuclear Engineering, King Abdulaziz University, Jeddah 21589, Saudi Arabia
- Syed Muhammad Anwar
- Department of Software Engineering, University of Engineering and Technology, Taxila 47050, Pakistan
|
33
|
Pan L, Yin Z, She S, Song A. Emotional State Recognition from Peripheral Physiological Signals Using Fused Nonlinear Features and Team-Collaboration Identification Strategy. ENTROPY 2020; 22:e22050511. [PMID: 33286283 PMCID: PMC7517002 DOI: 10.3390/e22050511] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/25/2020] [Revised: 04/25/2020] [Accepted: 04/27/2020] [Indexed: 11/16/2022]
Abstract
Emotion recognition, which captures human inner perception, has a very important application prospect in human-computer interaction. To improve the accuracy of emotion recognition, a novel method combining fused nonlinear features and a team-collaboration identification strategy was proposed for emotion recognition using physiological signals. Four nonlinear features, namely approximate entropy (ApEn), sample entropy (SaEn), fuzzy entropy (FuEn), and wavelet packet entropy (WpEn), are employed to deeply reflect emotional states within each type of physiological signal. The features of different physiological signals are then fused to represent the emotional states from multiple perspectives. Each classifier has its own advantages and disadvantages. To make full use of the advantages of other classifiers and avoid the limitations of a single classifier, a team-collaboration model is built and a team-collaboration decision-making mechanism is designed according to the proposed team-collaboration identification strategy, which is based on the fusion of a support vector machine (SVM), a decision tree (DT), and an extreme learning machine (ELM). Through analysis, the SVM is selected as the main classifier, with the DT and ELM as auxiliary classifiers. According to the designed decision-making mechanism, the proposed strategy can effectively employ different classification methods based on the characteristics of the samples: for samples that are easy for the SVM to identify, the SVM directly determines the result, whereas for the remaining samples SVM, DT, and ELM collaboratively determine the result, which effectively utilizes the characteristics of each classifier and improves the classification accuracy. The effectiveness and universality of the proposed method are verified using the Augsburg database and the Database for Emotion Analysis using Physiological Signals (DEAP). The experimental results consistently indicate that the proposed method, combining fused nonlinear features and the team-collaboration identification strategy, performs better than existing methods.
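Of the four entropies fused above, sample entropy is representative; a minimal NumPy implementation follows, with the conventional tolerance r = 0.2·std(x) as an assumption (the abstract does not give the parameters).

```python
import numpy as np

def sample_entropy(x, m=2, r=None):
    """Sample entropy SampEn(m, r) of a 1-D signal, one of the four
    nonlinear features fused in the paper. Uses the Chebyshev distance
    between all length-m template vectors."""
    x = np.asarray(x, float)
    if r is None:
        r = 0.2 * x.std()
    def match_count(m):
        templ = np.lib.stride_tricks.sliding_window_view(x, m)
        d = np.max(np.abs(templ[:, None, :] - templ[None, :, :]), axis=-1)
        n = templ.shape[0]
        # matched pairs, excluding self-comparisons on the diagonal
        return (np.sum(d <= r) - n) / 2
    B, A = match_count(m), match_count(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else np.inf

rng = np.random.default_rng(0)
noise = rng.standard_normal(300)                     # irregular signal
regular = np.sin(np.linspace(0, 20 * np.pi, 300))    # highly regular signal
se_noise = sample_entropy(noise)
se_regular = sample_entropy(regular)                 # lower: more regular
```

A more regular signal yields a lower sample entropy, which is what makes the feature useful for characterizing physiological state.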
Affiliation(s)
- Lizheng Pan
- School of Mechanical Engineering, Changzhou University, Changzhou 213164, China; (Z.Y.); (S.S.)
- Remote Measurement and Control Key Lab of Jiangsu Province, School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China
- Zeming Yin
- School of Mechanical Engineering, Changzhou University, Changzhou 213164, China; (Z.Y.); (S.S.)
- Shigang She
- School of Mechanical Engineering, Changzhou University, Changzhou 213164, China; (Z.Y.); (S.S.)
- Aiguo Song
- Remote Measurement and Control Key Lab of Jiangsu Province, School of Instrument Science and Engineering, Southeast University, Nanjing 210096, China
|
34
|
Cimtay Y, Ekmekcioglu E. Investigating the Use of Pretrained Convolutional Neural Network on Cross-Subject and Cross-Dataset EEG Emotion Recognition. SENSORS 2020; 20:s20072034. [PMID: 32260445 PMCID: PMC7181114 DOI: 10.3390/s20072034] [Citation(s) in RCA: 73] [Impact Index Per Article: 18.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/14/2020] [Revised: 04/01/2020] [Accepted: 04/02/2020] [Indexed: 11/16/2022]
Abstract
The electroencephalogram (EEG) has great attraction in emotion recognition studies due to its resistance to deceptive actions, one of the most significant advantages of brain signals over visual or speech signals in the emotion recognition context. A major challenge in EEG-based emotion recognition is that EEG recordings exhibit varying distributions for different people, as well as for the same person at different time instances. This nonstationary nature of EEG limits its accuracy when subject independence is the priority. The aim of this study is to increase the subject-independent recognition accuracy by exploiting pretrained state-of-the-art Convolutional Neural Network (CNN) architectures. Unlike similar studies that extract spectral band-power features from the EEG readings, raw EEG data is used in our study after applying windowing, pre-adjustments, and normalization. Removing manual feature extraction from the training system avoids the risk of eliminating hidden features in the raw data and helps leverage the deep neural network's power in uncovering unknown features. To further improve the classification accuracy, a median filter is used to eliminate false detections along a prediction interval of emotions. This method yields a mean cross-subject accuracy of 86.56% and 78.34% on the Shanghai Jiao Tong University Emotion EEG Dataset (SEED) for two and three emotion classes, respectively. It also yields a mean cross-subject accuracy of 72.81% on the Database for Emotion Analysis using Physiological Signals (DEAP) and 81.8% on the Loughborough University Multimodal Emotion Dataset (LUMED) for two emotion classes. Furthermore, the recognition model trained using the SEED dataset was tested with the DEAP dataset, yielding a mean prediction accuracy of 58.1% across all subjects and emotion classes. The results show that, in terms of classification accuracy, the proposed approach is superior to, or on par with, the reference subject-independent EEG emotion recognition studies identified in the literature, and has limited complexity due to the elimination of the need for feature extraction.
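The post-classification median filtering described above can be sketched in a few lines; the kernel size and the example label sequence are illustrative.

```python
import numpy as np
from scipy.signal import medfilt

# Per-window class predictions along one trial. Isolated single-frame
# flips are treated as false detections and removed by the median filter.
preds = np.array([1, 1, 1, 0, 1, 1, 2, 1, 1, 1, 0, 0, 0, 0])
smoothed = medfilt(preds, kernel_size=3)
# -> the lone 0 and 2 spikes are suppressed; the sustained 0-run survives
```

A larger kernel suppresses longer spurious runs at the cost of delaying genuine transitions between emotional states.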
|
35
|
Oh S, Lee JY, Kim DK. The Design of CNN Architectures for Optimal Six Basic Emotion Classification Using Multiple Physiological Signals. SENSORS (BASEL, SWITZERLAND) 2020; 20:E866. [PMID: 32041226 PMCID: PMC7038703 DOI: 10.3390/s20030866] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/15/2020] [Revised: 02/03/2020] [Accepted: 02/03/2020] [Indexed: 12/27/2022]
Abstract
This study aimed to design an optimal emotion recognition method using multiple physiological signal parameters acquired by bio-signal sensors to improve the accuracy of classifying individual emotional responses. Multiple physiological signals, such as respiration (RSP) and heart rate variability (HRV), were acquired in an experiment from 53 participants while six basic emotion states were induced. Two RSP parameters were acquired from a chest-band respiration sensor, and five HRV parameters were acquired from a finger-clip blood volume pulse (BVP) sensor. A newly designed deep-learning model based on a convolutional neural network (CNN) was adopted to assess the identification accuracy of individual emotions. Additionally, a signal combination of the acquired parameters was proposed to obtain high classification accuracy. Furthermore, a dominant factor influencing the accuracy was found by comparing the relative contributions of the parameters, providing a basis for supporting the results of emotion classification. Users of the proposed model will be able to further improve CNN-based emotion recognition using multimodal physiological signals and their sensors.
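The abstract does not enumerate the five BVP-derived HRV parameters, so the sketch below computes three common ones (mean heart rate, SDNN, RMSSD) from hypothetical RR intervals, purely as an illustration of this kind of feature extraction.

```python
import numpy as np

def hrv_parameters(rr_ms):
    """Basic HRV parameters from a series of RR intervals in milliseconds.

    Illustrative choices only; the study's five parameters are not listed
    in the abstract.
    """
    rr = np.asarray(rr_ms, float)
    diff = np.diff(rr)
    return {
        "mean_hr_bpm": 60000.0 / rr.mean(),          # mean heart rate
        "sdnn_ms": rr.std(ddof=1),                   # overall variability
        "rmssd_ms": np.sqrt(np.mean(diff ** 2)),     # beat-to-beat variability
    }

# Hypothetical RR intervals around 800 ms (~75 bpm)
params = hrv_parameters([800, 810, 790, 805, 795, 820])
```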
Affiliation(s)
- SeungJun Oh
- Department of Sports ICT Convergence, Sangmyung University Graduate School, Seoul 03016, Korea
- Jun-Young Lee
- Department of Psychiatry and Neuroscience Research Institute, Seoul National University College of Medicine, SMG-SNU Boramae Medical Center, Seoul 07061, Korea
- Dong Keun Kim
- Department of Intelligent Engineering Informatics for Human, Institute of Intelligent Informatics Technology, Sangmyung University, Seoul 03016, Korea
|
36
|
Dzedzickis A, Kaklauskas A, Bucinskas V. Human Emotion Recognition: Review of Sensors and Methods. SENSORS (BASEL, SWITZERLAND) 2020; 20:E592. [PMID: 31973140 PMCID: PMC7037130 DOI: 10.3390/s20030592] [Citation(s) in RCA: 99] [Impact Index Per Article: 24.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 12/11/2019] [Revised: 01/10/2020] [Accepted: 01/12/2020] [Indexed: 11/16/2022]
Abstract
Automated emotion recognition (AEE) is an important issue in various fields of activity that use human emotional reactions as a signal for marketing, technical equipment, or human-robot interaction. This paper analyzes scientific research and technical papers on sensor use across the various methods implemented or researched. It covers several classes of sensors, using contactless methods as well as contact and skin-penetrating electrodes, for human emotion detection and the measurement of emotion intensity. The results of the analysis identify applicable methods for each type of emotion and its intensity, and propose their classification. A classification of emotion sensors is presented to reveal the area of application and expected outcomes of each method, as well as its limitations. This paper should be relevant for researchers working on human emotion evaluation and analysis who need to choose a proper method for their purposes or to find alternative solutions. Based on the analyzed human emotion recognition sensors and methods, we developed some practical applications for humanizing the Internet of Things (IoT) and affective computing systems.
Affiliation(s)
- Andrius Dzedzickis
- Faculty of Mechanics, Vilnius Gediminas Technical University, J. Basanaviciaus g. 28, LT-03224 Vilnius, Lithuania
- Artūras Kaklauskas
- Faculty of Civil Engineering, Vilnius Gediminas Technical University, Sauletekio ave. 11, LT-10223 Vilnius, Lithuania
- Vytautas Bucinskas
- Faculty of Mechanics, Vilnius Gediminas Technical University, J. Basanaviciaus g. 28, LT-03224 Vilnius, Lithuania
|
37
|
Alex M, Tariq U, Al-Shargie F, Mir HS, Nashash HA. Discrimination of Genuine and Acted Emotional Expressions Using EEG Signal and Machine Learning. IEEE ACCESS 2020; 8:191080-191089. [DOI: 10.1109/access.2020.3032380] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/05/2024]
Affiliation(s)
- Meera Alex
- Biomedical Engineering Graduate Program, American University of Sharjah, Sharjah, United Arab Emirates
- Usman Tariq
- Biomedical Engineering Graduate Program, American University of Sharjah, Sharjah, United Arab Emirates
- Fares Al-Shargie
- Biomedical Engineering Graduate Program, American University of Sharjah, Sharjah, United Arab Emirates
- Hasan S. Mir
- Biomedical Engineering Graduate Program, American University of Sharjah, Sharjah, United Arab Emirates
- Hasan Al Nashash
- Biomedical Engineering Graduate Program, American University of Sharjah, Sharjah, United Arab Emirates
|
38
|
Tonic Cold Pain Detection Using Choi–Williams Time-Frequency Distribution Analysis of EEG Signals: A Feasibility Study. APPLIED SCIENCES-BASEL 2019. [DOI: 10.3390/app9163433] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 12/27/2022]
Abstract
Detecting pain based on analyzing electroencephalography (EEG) signals can enhance the ability of caregivers to characterize and manage clinical pain. However, the subjective nature of pain and the nonstationarity of EEG signals increase the difficulty of pain detection using EEG signals analysis. In this work, we present an EEG-based pain detection approach that analyzes the EEG signals using a quadratic time-frequency distribution, namely the Choi–Williams distribution (CWD). The use of the CWD enables construction of a time-frequency representation (TFR) of the EEG signals to characterize the time-varying spectral components of the EEG signals. The TFR of the EEG signals is analyzed to extract 12 time-frequency features for pain detection. These features are used to train a support vector machine classifier to distinguish between EEG signals that are associated with the no-pain and pain classes. To evaluate the performance of our proposed approach, we have recorded EEG signals for 24 healthy subjects under tonic cold pain stimulus. Moreover, we have developed two performance evaluation procedures—channel- and feature-based evaluation procedures—to study the effect of the utilized EEG channels and time-frequency features on the accuracy of pain detection. The experimental results show that our proposed approach achieved an average classification accuracy of 89.24% in distinguishing between the no-pain and pain classes. In addition, the classification performance achieved using our proposed approach outperforms the classification results reported in several existing EEG-based pain detection approaches.
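The final stage described above (12 time-frequency features into a support vector machine for no-pain vs. pain) can be sketched with scikit-learn; the synthetic features stand in for the CWD-derived ones, and the RBF kernel is an assumption.

```python
import numpy as np
from sklearn.svm import SVC

# Synthetic 12-D feature vectors standing in for the 12 time-frequency
# features extracted from the Choi-Williams distribution of each trial.
rng = np.random.default_rng(3)
X_nopain = rng.standard_normal((60, 12))
X_pain = rng.standard_normal((60, 12)) + 1.5   # shifted: "pain" class
X = np.vstack([X_nopain, X_pain])
y = np.array([0] * 60 + [1] * 60)              # 0 = no-pain, 1 = pain

clf = SVC(kernel="rbf").fit(X, y)
train_acc = clf.score(X, y)
```

In the actual study, classification would be evaluated per channel and per feature subset, with proper cross-validation rather than training accuracy.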
|
39
|
EEG-based BCI system for decoding finger movements within the same hand. Neurosci Lett 2019; 698:113-120. [DOI: 10.1016/j.neulet.2018.12.045] [Citation(s) in RCA: 38] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2018] [Revised: 12/28/2018] [Accepted: 12/29/2018] [Indexed: 11/18/2022]
|
40
|
Al-Shargie F, Tariq U, Alex M, Mir H, Al-Nashash H. Emotion Recognition Based on Fusion of Local Cortical Activations and Dynamic Functional Networks Connectivity: An EEG Study. IEEE ACCESS 2019; 7:143550-143562. [DOI: 10.1109/access.2019.2944008] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 09/05/2024]
|
41
|
Shanmuga Priya K, Vasanthi S. Emotion classification using EEG signal for women safety application based on deep learning. JOURNAL OF INTELLIGENT & FUZZY SYSTEMS 2023. [DOI: 10.3233/jifs-221825] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 04/03/2023]
Abstract
An emotion is a conscious logical response that varies across different situations in women's lives. These mental responses are caused by physiological, cognitive, and behavioral changes. Gender-based violence undermines the participation of women in decision-making, resulting in a decline in their quality of life. More accurate and automatic classification of women's emotions can enhance human-computer interfaces and security in real time. There are some wearable technologies and mobile applications that claim to ensure the safety of women; however, they rely on limited social action and are ineffective at ensuring women's safety when and where it is needed. In this work, a novel CDB-LSTM network has been proposed to accurately classify the emotions of women into seven classes. The electroencephalogram (EEG) offers a non-radioactive method of identifying emotions. Initially, the EEG signals are preprocessed and converted into images via a time-frequency representation (TFR): a smoothed pseudo-Wigner-Ville distribution (SPWVD) is employed to convert the EEG time-domain signals into input images. These converted images are given as input to a Convolutional Deep Belief Network (CDBN) for extracting the most relevant features. Finally, a bi-directional LSTM is used for classifying the emotions of women into seven classes, namely happy, relax, sad, fear, anxiety, anger, and stress. The proposed CDB-LSTM network achieves a high accuracy of 97.27% in the validation phase and improves the overall accuracy by 6.20%, 32.98%, 6.85%, and 3.30% compared with CNN-LSTM, a multi-domain feature fusion model, GCNN-LSTM, and CNN with SVM and DT, respectively.