2. Zeng X, Zhao X, Wang S, Qin J, Xie J, Zhong X, Chen J, Liu G. Affection of facial artifacts caused by micro-expressions on electroencephalography signals. Front Neurosci 2022; 16:1048199. PMID: 36507351; PMCID: PMC9729706; DOI: 10.3389/fnins.2022.1048199.
Abstract
Macro-expressions are widely used in electroencephalography (EEG)-based emotion recognition because they are intuitive external expressions of emotion. Micro-expressions, as suppressed and brief emotional expressions, can likewise reflect a person's genuine emotional state, and researchers have therefore begun to study emotion recognition based on micro-expressions and EEG. However, in contrast to the known effect of macro-expression artifacts on EEG signals, it is unclear how artifacts generated by micro-expressions affect EEG signals. In this study, we investigated the effects of facial muscle activity caused by micro-expressions during positive emotions on EEG signals. We recorded participants' facial expression images and EEG signals while they watched positive emotion-inducing videos. We then divided the face into 13 regions, extracted main directional mean optical flow features as facial micro-expression image features, and extracted the power spectral densities of the theta, alpha, beta, and gamma frequency bands as EEG features. Multiple linear regression and Granger causality tests were used to quantify the effect of facial muscle artifacts on the EEG signals. The results showed that, on average, 11.5% of the EEG signals were affected by muscle artifacts caused by micro-expressions, with the frontal and temporal regions most strongly affected. After the artifacts were removed from the EEG signals, the affected proportion dropped to 3.7% on average. To the best of our knowledge, this is the first study to investigate the effect of facial artifacts caused by micro-expressions on EEG signals.
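To make the pipeline in this abstract concrete, below is a minimal Python sketch of two of the steps it describes: extracting band power spectral density (PSD) features from an EEG channel and running a Granger causality test between a facial optical-flow feature series and an EEG feature series. The sampling rate, window length, band edges, and all signals are placeholder assumptions, not the authors' actual data or parameter choices.

```python
# Minimal sketch (not the authors' pipeline): PSD band-power EEG features
# and a Granger causality test against a facial optical-flow series.
# All signals and parameters below are synthetic placeholders.
import numpy as np
from scipy.signal import welch
from statsmodels.tsa.stattools import grangercausalitytests

fs = 250                                      # assumed EEG sampling rate (Hz)
rng = np.random.default_rng(0)
eeg = rng.standard_normal(fs * 60)            # 60 s of one synthetic EEG channel
flow = rng.standard_normal(60)                # synthetic optical-flow magnitudes

bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}

def band_powers(signal, fs, win_s=2.0):
    """Mean PSD per band for non-overlapping windows, via Welch's method."""
    win = int(win_s * fs)
    feats = []
    for start in range(0, len(signal) - win + 1, win):
        f, pxx = welch(signal[start:start + win], fs=fs, nperseg=win)
        feats.append([pxx[(f >= lo) & (f < hi)].mean() for lo, hi in bands.values()])
    return np.asarray(feats)                  # shape: (n_windows, 4 bands)

eeg_feats = band_powers(eeg, fs)              # 30 windows x 4 bands
alpha = eeg_feats[:, 1]                       # alpha-band power per window
flow_aligned = flow[:len(alpha)]              # align lengths for the toy example

# Column order for grangercausalitytests: [effect, candidate cause].
res = grangercausalitytests(np.column_stack([alpha, flow_aligned]), maxlag=2)
print({lag: round(r[0]["ssr_ftest"][1], 3) for lag, r in res.items()})  # p-values
```

The same window-level feature matrices could also serve as inputs to the multiple linear regression analysis mentioned in the abstract.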
Affiliation(s)
- Xiaomei Zeng, Xingcong Zhao, Shiyuan Wang, Jian Qin, Jialan Xie, Xinyue Zhong, Jiejia Chen, Guangyuan Liu
- All authors: School of Electronics and Information Engineering, Southwest University, Chongqing, China; Institute of Affective Computing and Information Processing, Southwest University, Chongqing, China; Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing, China
- Correspondence: Guangyuan Liu
4. Review of Automatic Microexpression Recognition in the Past Decade. Machine Learning and Knowledge Extraction 2021. DOI: 10.3390/make3020021.
Abstract
Facial expressions provide important information about a person's emotional state. Unlike regular facial expressions, microexpressions are small, quick facial movements that generally last only 0.05 to 0.2 s. They reflect an individual's subjective emotions and real psychological state more accurately than regular expressions, which can be acted. However, the small range and short duration of the facial movements involved make microexpressions challenging to recognize for humans and machines alike. In the past decade, automatic microexpression recognition has attracted the attention of researchers in psychology, computer science, and security, among others, and a number of specialized microexpression databases have been collected and made publicly available. The purpose of this article is to provide a comprehensive overview of the current state of the art in automatic facial microexpression recognition. Specifically, the features and learning methods used in automatic microexpression recognition, the existing microexpression data sets, the major outstanding challenges, and possible future directions are all discussed.
5. Development of a Robust Multi-Scale Featured Local Binary Pattern for Improved Facial Expression Recognition. Sensors 2020; 20:s20185391. PMID: 32967087; PMCID: PMC7571087; DOI: 10.3390/s20185391.
Abstract
Facial expression recognition (FER) is used in fields such as computer vision, robotics, artificial intelligence, and dynamic texture recognition. However, a critical problem of the traditional local binary pattern (LBP) for FER is the loss of information from neighboring pixels at different scales, which can affect the texture representation of facial images. To overcome this limitation, this study describes an extended LBP method that extracts feature vectors from facial expression images. The proposed method is based on the bitwise AND of two rotational kernels applied to LBP(8,1) and LBP(8,2), and it is evaluated on two publicly available datasets. First, the face is detected and its essential components, such as the eyes, nose, and lips, are located. The facial region is then cropped to reduce the dimensions, and an unsharp-masking kernel is applied to sharpen the image. The filtered images are passed through the feature extraction method and then classified. Four machine learning classifiers were used to verify the proposed method. This study shows that the proposed multi-scale featured local binary pattern (MSFLBP), together with a Support Vector Machine (SVM), outperforms recent LBP-based state-of-the-art approaches, achieving an accuracy of 99.12% on the Extended Cohn–Kanade (CK+) dataset and 89.08% on the Karolinska Directed Emotional Faces (KDEF) dataset.
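The core feature construction lends itself to a short illustration. The sketch below, assuming scikit-image and scikit-learn, combines LBP codes computed at radii 1 and 2 with a bitwise AND after an unsharp-masking step and feeds the resulting histogram to an SVM. The rotational kernels, face detection, and cropping described in the abstract are omitted, and the toy data stand in for CK+/KDEF images, so this is an approximation rather than the authors' exact MSFLBP.

```python
# Simplified multi-scale LBP sketch: AND-combine LBP(8,1) and LBP(8,2)
# codes and classify the code histogram with a linear SVM.
import numpy as np
from skimage.feature import local_binary_pattern
from skimage.filters import unsharp_mask
from sklearn.svm import SVC

def msflbp_histogram(gray_image, n_points=8):
    """Histogram of AND-combined LBP codes at radii 1 and 2."""
    sharp = unsharp_mask(gray_image, radius=1.0, amount=1.0)   # sharpen first
    lbp_r1 = local_binary_pattern(sharp, n_points, 1, method="default").astype(np.uint8)
    lbp_r2 = local_binary_pattern(sharp, n_points, 2, method="default").astype(np.uint8)
    combined = lbp_r1 & lbp_r2                # keep neighbor bits set at both scales
    hist, _ = np.histogram(combined, bins=256, range=(0, 256), density=True)
    return hist

# Toy usage: random "face crops" standing in for real expression images.
rng = np.random.default_rng(0)
X = np.array([msflbp_histogram(rng.random((64, 64))) for _ in range(40)])
y = np.repeat([0, 1], 20)                     # two placeholder expression classes
clf = SVC(kernel="linear").fit(X, y)
print("training accuracy:", clf.score(X, y))
```

The bitwise AND keeps only neighbor bits that are set at both scales, which is one simple way to fuse multi-scale LBP information into a single code image.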