1. Yang H, Xie L, Pan H, Li C, Wang Z, Zhong J. Multimodal Attention Dynamic Fusion Network for Facial Micro-Expression Recognition. Entropy (Basel, Switzerland) 2023; 25:1246. [PMID: 37761545] [PMCID: PMC10528512] [DOI: 10.3390/e25091246]
Abstract
The emotional changes in facial micro-expressions are combinations of action units, and researchers have shown that action unit information can serve as auxiliary data to improve facial micro-expression recognition. Most existing work attempts to fuse image features with action unit information but ignores the impact of action units on the facial image feature extraction process itself. This paper therefore proposes a local detail feature enhancement model based on a multimodal attention dynamic fusion network (MADFN) for micro-expression recognition. The method uses a masked autoencoder based on learnable class tokens to remove local areas with low emotional expressiveness from micro-expression images, then applies an action unit dynamic fusion module to fuse action unit representations and improve the latent representation ability of the image features. The model is evaluated on the SMIC, CASME II, and SAMM datasets and their 3DB-Combined composite. The experimental results demonstrate competitive performance, with accuracy rates of 81.71%, 82.11%, and 77.21% on SMIC, CASME II, and SAMM, respectively, showing that the MADFN model helps improve the discrimination of emotional features in facial images.
Affiliation(s)
- Hongling Yang
- Department of Computer Science, Changzhi University, Changzhi 046011, China
- Lun Xie
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Hang Pan
- Department of Computer Science, Changzhi University, Changzhi 046011, China
- Chiqin Li
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Zhiliang Wang
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing 100083, China
- Jialiang Zhong
- School of Mathematics and Computer Sciences, Nanchang University, Nanchang 330031, China
2. Pan H, Yang H, Xie L, Wang Z. Multi-scale fusion visual attention network for facial micro-expression recognition. Front Neurosci 2023; 17:1216181. [PMID: 37575295] [PMCID: PMC10412924] [DOI: 10.3389/fnins.2023.1216181]
Abstract
Introduction: Micro-expressions are facial muscle movements that betray concealed genuine emotions. To address the challenge of their low intensity, recent studies have attempted to locate localized areas of facial muscle movement; however, this ignores the feature redundancy caused by inaccurate localization of the regions of interest. Methods: This paper proposes a novel multi-scale fusion visual attention network (MFVAN) that learns multi-scale local attention weights to mask regions of redundant features. Specifically, the model extracts multi-scale features from the apex frame of each micro-expression video clip with convolutional neural networks, and the attention mechanism weights local region features in the multi-scale feature maps. Redundant regions in the multi-scale features are then masked, and local features with high attention weights are fused for micro-expression recognition. Self-supervision and transfer learning reduce the influence of individual identity attributes and increase the robustness of the multi-scale feature maps. Finally, a multi-scale classification loss, a mask loss, and an identity-attribute-removal loss jointly optimize the model. Results: The proposed MFVAN is evaluated on the SMIC, CASME II, SAMM, and 3DB-Combined datasets, achieving state-of-the-art performance. Discussion: MFVAN is the first model to combine image generation with visual attention mechanisms to address the combined challenge of individual identity attribute interference and low-intensity facial muscle movements, and it reveals the impact of individual attributes on the localization of local ROIs. The experimental results show that focusing on local regions at multiple scales contributes to micro-expression recognition.
Affiliation(s)
- Hang Pan
- Department of Computer Science, Changzhi University, Changzhi, China
- Hongling Yang
- Department of Computer Science, Changzhi University, Changzhi, China
- Lun Xie
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China
- Zhiliang Wang
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China
3. Gunderson C, ten Brinke L, Sokol-Hessner P. When the body knows: Interoceptive accuracy enhances physiological but not explicit differentiation between liars and truth-tellers. Personality and Individual Differences 2023. [DOI: 10.1016/j.paid.2022.112039]
4. Zhou H, Huang S, Li J, Wang SJ. Dual-ATME: Dual-Branch Attention Network for Micro-Expression Recognition. Entropy (Basel, Switzerland) 2023; 25:460. [PMID: 36981348] [PMCID: PMC10048169] [DOI: 10.3390/e25030460]
Abstract
Micro-expression recognition (MER) is challenging due to the difficulty of capturing the instantaneous and subtle motion changes of micro-expressions (MEs). Early work based on hand-crafted features extracted from prior knowledge showed promising results but has recently been replaced by deep learning methods built on attention mechanisms. However, with limited ME sample sizes, the features extracted by these methods lack discriminative ME representations, leaving MER performance with room for improvement. This paper proposes the Dual-branch Attention Network (Dual-ATME) for MER to address the problem of single-scale features representing MEs ineffectively. Specifically, Dual-ATME consists of two components: Hand-crafted Attention Region Selection (HARS) and Automated Attention Region Selection (AARS). HARS uses prior knowledge to manually extract features from regions of interest (ROIs), while AARS is based on attention mechanisms and extracts hidden information from the data automatically. Finally, through similarity comparison and feature fusion, the dual-scale features can be used to learn ME representations effectively. Experiments on spontaneous ME datasets (CASME II, SAMM, and SMIC) and their composite dataset, MEGC2019-CD, show that Dual-ATME achieves performance better than, or competitive with, state-of-the-art MER methods.
Affiliation(s)
- Haoliang Zhou
- School of Computer, Jiangsu University of Science and Technology, Zhenjiang 212100, China
- Key Laboratory of Behavior Sciences, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
- Shucheng Huang
- School of Computer, Jiangsu University of Science and Technology, Zhenjiang 212100, China
- Jingting Li
- Key Laboratory of Behavior Sciences, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing 100049, China
- Su-Jing Wang
- Key Laboratory of Behavior Sciences, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
- Department of Psychology, University of the Chinese Academy of Sciences, Beijing 100049, China
5. Micro-expression recognition model based on TV-L1 optical flow method and improved ShuffleNet. Sci Rep 2022; 12:17522. [PMID: 36266408] [PMCID: PMC9585088] [DOI: 10.1038/s41598-022-21738-8]
Abstract
A micro-expression is a kind of facial action that reflects a person's real emotional state, giving it high objectivity in emotion detection, so micro-expression recognition has become one of the research hotspots of computer vision in recent years. Convolutional neural networks remain one of the main recognition approaches; they offer high operational efficiency and low computational complexity, but their feature extraction is inherently local. In recent years, more and more plug-and-play self-attention modules have been inserted into convolutional neural networks to improve a model's ability to extract global features from the samples. This paper proposes a ShuffleNet model combined with a miniature self-attention module that has only 1.53 million training parameters. First, the onset frame and apex frame of each sample are taken, and TV-L1 optical flow features are extracted between them. The optical flow features are then fed into the model for pre-training. Finally, the weights obtained from pre-training are used to initialize the model, which is trained on the complete micro-expression samples and classified with an SVM classifier. To evaluate the effectiveness of the method, it was trained and tested on a composite dataset consisting of CASME II, SMIC, and SAMM, where the model achieved results competitive with state-of-the-art methods under leave-one-subject-out cross-validation.
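The leave-one-subject-out cross-validation used here is the standard protocol throughout these micro-expression papers. As a minimal sketch of how the splits are formed (the record layout and function name are illustrative assumptions, not from the paper):

```python
def loso_splits(samples):
    """Yield (held_out_subject, train, test) folds.

    Each sample is a (subject_id, features, label) tuple; every fold
    tests on all clips of one subject and trains on everyone else,
    so no subject appears in both train and test.
    """
    subjects = sorted({s[0] for s in samples})
    for held_out in subjects:
        train = [s for s in samples if s[0] != held_out]
        test = [s for s in samples if s[0] == held_out]
        yield held_out, train, test

# Toy data: 3 subjects with 2 clips each (features elided as None).
data = [(sid, None, lbl) for sid in ("s01", "s02", "s03")
        for lbl in ("positive", "negative")]

folds = list(loso_splits(data))
```

Reporting the average over all folds is what makes results comparable across subjects with very different numbers of clips.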
6. Pan H, Xie L, Wang Z. Spatio-temporal convolutional emotional attention network for spotting macro- and micro-expression intervals in long video sequences. Pattern Recognit Lett 2022. [DOI: 10.1016/j.patrec.2022.09.008]
7. Micro-Expression Recognition Based on Optical Flow and PCANet+. Sensors (Basel) 2022; 22:4296. [PMID: 35684917] [PMCID: PMC9185295] [DOI: 10.3390/s22114296]
Abstract
Micro-expressions are rapid and subtle facial movements. Unlike the ordinary facial expressions of daily life, micro-expressions are very difficult to detect and recognize. In recent years, owing to a wide range of potential applications in many domains, micro-expression recognition has attracted extensive attention in computer vision. Because the available micro-expression datasets are very small, deep neural network models with huge numbers of parameters are prone to over-fitting. This article proposes an OF-PCANet+ method for micro-expression recognition, in which a spatiotemporal feature learning strategy is designed on top of the shallow PCANet+ model: stacked optical flow sequences are fed into the PCANet+ network to learn discriminative spatiotemporal features. Comprehensive experiments on the publicly available SMIC and CASME II datasets show that this lightweight model clearly outperforms popular hand-crafted methods and achieves performance comparable to deep-learning-based methods such as 3D-FCNN and ELRCN.
8. Turi A, Rebeleș MR, Visu-Petra L. The tangled webs they weave: A scoping review of deception detection and production in relation to Dark Triad traits. Acta Psychol (Amst) 2022; 226:103574. [PMID: 35367639] [DOI: 10.1016/j.actpsy.2022.103574]
Abstract
People deceive for different reasons, from avoiding interpersonal conflicts to preserving, protecting, and nurturing interpersonal relationships, and to obtaining social status and power. A growing body of research highlights the role of personality in both deception detection and production, with a particular focus on high Dark Triad (DT) traits (narcissism, Machiavellianism, and psychopathy), given their shared tendency toward unethical self-benefitting behavior despite negative consequences for others. The main goal of this scoping review was to bring together studies investigating self-reported and performance-based deception production and detection in individuals with high DT traits, and to point out the possible contribution of DT to deception research. To do so, we identified the relevant studies documenting the similarities and discrepancies between the three personality traits and presented their results according to the procedure used for deception assessment: subjective or objective measurement of production or detection. We then discussed possible explanatory mechanisms for inter-individual differences in lie detection and production and argued for the contribution of DT to deception research beyond the typical personality models, particularly regarding the antisocial character of deception.
Affiliation(s)
- Andreea Turi
- Research in Individual Differences and Legal Psychology (RIDDLE) Lab, Department of Psychology, Babeș-Bolyai University, Cluj-Napoca, Romania
- Gherla Penitentiary, Andrei Mureșanu 4, 405300 Gherla, Romania
- Mădălina-Raluca Rebeleș
- Research in Individual Differences and Legal Psychology (RIDDLE) Lab, Department of Psychology, Babeș-Bolyai University, Cluj-Napoca, Romania
- Laura Visu-Petra
- Research in Individual Differences and Legal Psychology (RIDDLE) Lab, Department of Psychology, Babeș-Bolyai University, Cluj-Napoca, Romania
9. Facial Micro-Expression Recognition Based on Deep Local-Holistic Network. Applied Sciences (Basel) 2022. [DOI: 10.3390/app12094643]
Abstract
A micro-expression is a subtle, local and brief facial movement. It can reveal the genuine emotions that a person tries to conceal and is considered an important clue for lie detection. The micro-expression research has attracted much attention due to its promising applications in various fields. However, due to the short duration and low intensity of micro-expression movements, micro-expression recognition faces great challenges, and the accuracy still demands improvement. To improve the efficiency of micro-expression feature extraction, inspired by the psychological study of attentional resource allocation for micro-expression cognition, we propose a deep local-holistic network method for micro-expression recognition. Our proposed algorithm consists of two sub-networks. The first is a Hierarchical Convolutional Recurrent Neural Network (HCRNN), which extracts the local and abundant spatio-temporal micro-expression features. The second is a Robust principal-component-analysis-based recurrent neural network (RPRNN), which extracts global and sparse features with micro-expression-specific representations. The extracted effective features are employed for micro-expression recognition through the fusion of sub-networks. We evaluate the proposed method on combined databases consisting of the four most commonly used databases, i.e., CASME, CASME II, CAS(ME)2, and SAMM. The experimental results show that our method achieves a reasonably good performance.
10. Learning two groups of discriminative features for micro-expression recognition. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.12.088]
11. Zak PJ, Barraza JA, Hu X, Zahedzadeh G, Murray J. Predicting Dishonesty When the Stakes Are High: Physiologic Responses During Face-to-Face Interactions Identifies Who Reneges on Promises to Cooperate. Front Behav Neurosci 2022; 15:787905. [PMID: 35177971] [PMCID: PMC8845462] [DOI: 10.3389/fnbeh.2021.787905]
Abstract
Trust is risky. The mere perception of strategically deceptive behavior that disguises intent or conveys unreliable information can inhibit cooperation. As gregariously social creatures, human beings would have evolved physiologic mechanisms to identify likely defectors in cooperative tasks, though these mechanisms may not cross into conscious awareness. We examined trust and trustworthiness in an ecologically valid manner by (i) studying working-age adults, (ii) who make decisions with meaningful stakes, and (iii) permitting participants to discuss their intentions face-to-face prior to making private decisions. To identify why people fulfill or renege on their commitments, we measured neurophysiologic responses in blood and with electrodermal activity while participants interacted. Participants (mean age 32) made decisions in a trust game in which they could earn up to $530. Nearly all interactions produced promises to cooperate, yet first decision-makers in the trust game reneged on 30.7% of their promises and second decision-makers on 28%. First decision-makers who reneged on a promise showed elevated physiologic stress on two measures (the change in adrenocorticotropin hormone and the change in skin conductance levels) during pre-decision communication compared with those who fulfilled their promises, and they reported increased negative affect after their decisions. Neurophysiologic reactivity predicted who would cooperate or defect with 86% accuracy. While self-serving behavior is not rare, those who exhibit it are stressed and unhappy.
Affiliation(s)
- Paul J. Zak
- Center for Neuroeconomics Studies, Claremont Graduate University, Claremont, CA, United States
- Correspondence: Paul J. Zak
- Jorge A. Barraza
- Department of Psychology, University of Southern California, Los Angeles, CA, United States
- Xinbo Hu
- Center for Neuroeconomics Studies, Claremont Graduate University, Claremont, CA, United States
- Giti Zahedzadeh
- Center for Neuroeconomics Studies, Claremont Graduate University, Claremont, CA, United States
- John Murray
- Association for Computing Machinery, New York, NY, United States
12. A comparative study on movement feature in different directions for micro-expression recognition. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.03.063]
13. Wang SJ, He Y, Li J, Fu X. MESNet: A Convolutional Neural Network for Spotting Multi-Scale Micro-Expression Intervals in Long Videos. IEEE Trans Image Process 2021; 30:3956-3969. [PMID: 33788686] [DOI: 10.1109/TIP.2021.3064258]
Abstract
Micro-expression spotting is a fundamental step in micro-expression analysis. This paper proposes a novel convolutional neural network (CNN) based framework for spotting multi-scale spontaneous micro-expression intervals in long videos, named the Micro-Expression Spotting Network (MESNet). It is composed of three modules. The first is a 2+1D spatiotemporal convolutional network, which uses 2D convolution to extract spatial features and 1D convolution to extract temporal features. The second is a clip proposal network, which proposes candidate micro-expression clips. The last is a classification regression network, which classifies the proposed clips as micro-expression or not and further regresses their temporal boundaries. We also propose a novel evaluation metric for spotting micro-expressions. Extensive experiments have been conducted on two long-video datasets, CAS(ME)2 and SAMM, with leave-one-subject-out cross-validation used to evaluate spotting performance. Results show that the proposed MESNet effectively enhances the F1-score and outperforms other state-of-the-art methods, especially on the SAMM dataset.
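The paper's own evaluation metric is not reproduced here, but spotting in this literature is commonly scored by temporal interval overlap, counting a proposal as a true positive when its intersection-over-union (IoU) with a ground-truth interval reaches 0.5 (as in the MEGC spotting challenges). A sketch with assumed function names, not the paper's implementation:

```python
def temporal_iou(pred, truth):
    """Intersection-over-union of two [onset, offset] frame intervals."""
    start = max(pred[0], truth[0])
    end = min(pred[1], truth[1])
    inter = max(0, end - start)
    union = (pred[1] - pred[0]) + (truth[1] - truth[0]) - inter
    return inter / union if union > 0 else 0.0

def spotting_f1(preds, truths, thresh=0.5):
    """F1-score, counting a proposal as true positive at IoU >= thresh."""
    tp = sum(1 for p in preds
             if any(temporal_iou(p, t) >= thresh for t in truths))
    fp = len(preds) - tp
    fn = len(truths) - sum(1 for t in truths
                           if any(temporal_iou(p, t) >= thresh for p in preds))
    precision = tp / (tp + fp) if preds else 0.0
    recall = tp / (tp + fn) if truths else 0.0
    return (2 * precision * recall / (precision + recall)
            if precision + recall else 0.0)
```

Interval-level F1 rewards tight temporal boundaries, which is why the boundary-regression module above matters and frame-level accuracy alone is not reported.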
14. Zong Y, Zheng W, Cui Z, Zhao G, Hu B. Toward Bridging Microexpressions From Different Domains. IEEE Trans Cybern 2020; 50:5047-5060. [PMID: 31180877] [DOI: 10.1109/TCYB.2019.2914512]
Abstract
Recently, microexpression recognition has attracted much attention from researchers due to its challenges and valuable applications. However, most existing methods are evaluated and tested on a single database, which raises the question of whether they remain effective when the training and testing samples belong to different domains, for example, different microexpression databases. In this case, a large feature distribution difference may exist between the training (source) and testing (target) samples, making microexpression recognition tasks more difficult. To solve this challenging cross-domain microexpression recognition problem, this paper proposes an effective method consisting of an auxiliary set selection model (ASSM) and a transductive transfer regression model (TTRM). The ASSM automatically selects an optimal set of samples from the target domain to serve as the auxiliary set for subsequent TTRM training. The TTRM then bridges the feature distribution gap between the source and target domains by learning a joint regression model from the source domain samples and the auxiliary set selected from the target domain. We evaluate TTRM plus ASSM in extensive cross-domain microexpression recognition experiments on the SMIC and CASME II databases. Compared with recent state-of-the-art domain adaptation methods, the proposed method performs more satisfactorily on cross-domain microexpression recognition tasks.
15. FACS-Based Graph Features for Real-Time Micro-Expression Recognition. J Imaging 2020; 6:130. [PMID: 34460527] [PMCID: PMC8321161] [DOI: 10.3390/jimaging6120130]
Abstract
Several studies on micro-expression recognition have contributed mainly to accuracy improvement, while computational complexity has received comparatively less attention, increasing the cost of micro-expression recognition in real-time applications. In addition, the majority of existing approaches require at least two frames (i.e., the onset and apex frames) to compute features for every sample. This paper puts forward new facial graph features based on 68-point landmarks and the Facial Action Coding System (FACS). The proposed feature extraction technique uses facial landmark points to construct a graph for each Action Unit (AU), where the measured distance and gradient of every segment within an AU graph serve as features. Moreover, the proposed technique performs ME recognition from a single input frame. Results indicate that the proposed FACS-based graph features achieve up to 87.33% recognition accuracy with an F1-score of 0.87 under leave-one-subject-out cross-validation on the SAMM dataset, with features computed in 2 ms per sample on a Xeon E5-2650 processor.
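The per-segment distance-and-gradient computation described above can be sketched in a few lines; the landmark indices, edge list, and function name below are illustrative assumptions rather than the paper's implementation:

```python
import math

def segment_features(points, edges):
    """Distance and gradient (slope angle) for each edge of an AU graph.

    `points` maps a landmark index to its (x, y) position; `edges` is a
    list of (i, j) landmark pairs forming the graph for one Action Unit.
    The resulting (distance, angle) pairs are the per-segment features.
    """
    feats = []
    for i, j in edges:
        (x1, y1), (x2, y2) = points[i], points[j]
        dist = math.hypot(x2 - x1, y2 - y1)
        angle = math.atan2(y2 - y1, x2 - x1)  # gradient encoded as an angle
        feats.append((dist, angle))
    return feats

# Toy graph over three hypothetical brow landmarks for one AU.
pts = {17: (0.0, 0.0), 19: (3.0, 4.0), 21: (6.0, 0.0)}
au_edges = [(17, 19), (19, 21)]
features = segment_features(pts, au_edges)
```

Because only distances and angles between landmarks on one frame are needed, this kind of descriptor is cheap enough to explain the millisecond-level per-sample timing reported above.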
16. Zloteanu M, Bull P, Krumhuber EG, Richardson DC. Veracity judgement, not accuracy: Reconsidering the role of facial expressions, empathy, and emotion recognition training on deception detection. Q J Exp Psychol (Hove) 2020; 74:910-927. [PMID: 33234008] [PMCID: PMC8056713] [DOI: 10.1177/1747021820978851]
Abstract
People hold strong beliefs about the role of emotional cues in detecting deception. While research on the diagnostic value of such cues has been mixed, their influence on human veracity judgements is yet to be fully explored. Here, we address the relationship between emotional information and veracity judgements. In Study 1, the role of emotion recognition in the process of detecting naturalistic lies was investigated. Decoders’ veracity judgements were compared based on differences in trait empathy and their ability to recognise microexpressions and subtle expressions. Accuracy was found to be unrelated to facial cue recognition and negatively related to empathy. In Study 2, we manipulated decoders’ emotion recognition ability and the type of lies they saw: experiential or affective (emotional and unemotional). Decoders received either emotion recognition training, bogus training, or no training. In all scenarios, training did not affect veracity judgements. Experiential lies were easier to detect than affective lies; however, affective unemotional lies were overall the hardest to judge. The findings illustrate the complex relationship between emotion recognition and veracity judgements, with abilities for facial cue detection being high yet unrelated to deception accuracy.
Affiliation(s)
- Mircea Zloteanu
- Department of Psychology, Teesside University, Middlesbrough, UK
- Department of Criminology and Sociology, Kingston University, London, UK
- Peter Bull
- Department of Psychology, University of York, York, UK
- Department of Psychology, University of Salford, Salford, UK
- Eva G Krumhuber
- Department of Experimental Psychology, University College London, London, UK
- Daniel C Richardson
- Department of Experimental Psychology, University College London, London, UK
17. The Lie Deflator – The effect of polygraph test feedback on subsequent (dis)honesty. Judgment and Decision Making 2019. [DOI: 10.1017/s1930297500005441]
Abstract
Despite its controversial status, the lie detection test is still a popular organizational instrument for credibility assessment. Given its popularity, we examined the effect of lie-detection test feedback on subsequent moral behavior. In three studies, participants could cheat to increase their monetary payoff in two consecutive phases. Between the two phases the participants underwent a mock polygraph test and were randomly assigned Deception Indicated (DI) or No Deception Indicated (NDI) feedback. Participants then engaged in the second phase of the task, and their level of dishonesty was measured. Study 1 showed that both NDI and DI feedback (but not the control) reduced cheating on the subsequent task. However, Study 2 showed that the mere presence of the lie-detection test (without feedback) did not produce the same effect. When the role of the lie detector as a moral reminder was cancelled out in Study 3, feedback had no effect on the magnitude of cheating. However, cheaters given NDI feedback exhibited a lower level of physiological arousal than cheaters given DI feedback. These results suggest that lie detection tests can be used to promote honesty in the field and that, while feedback type does not affect the magnitude of cheating, NDI feedback may allow people to feel better about cheating.
18. Jensen AM, Stevens RJ, Burls AJ. The Impact of Using Emotionally Arousing Stimuli on Muscle Response Testing Accuracy. Complement Med Res 2019; 26:301-309. [PMID: 30999291] [DOI: 10.1159/000497188]
Abstract
INTRODUCTION: Muscle response testing (MRT) is an assessment method used by 1 million practitioners worldwide, yet its usefulness remains uncertain. The aim of this study, one in a series assessing the accuracy of MRT, was to determine whether emotionally arousing stimuli influence its accuracy compared to neutral stimuli. METHODS: To assess diagnostic test accuracy, 20 MRT practitioners were paired with 20 test patients (TPs). Forty MRTs were performed as TPs made true and false statements about emotionally arousing and neutral pictures. Blocks of MRT alternated with blocks of intuitive guessing (IG). RESULTS: MRT accuracy using emotionally arousing stimuli was no different from that using neutral stimuli; however, MRT accuracy was significantly better than IG and chance. As in previous studies in this series, this study failed to detect any characteristic that consistently influenced MRT accuracy. CONCLUSION: Using emotionally arousing stimuli had no effect on MRT accuracy compared to using neutral stimuli. The study would have been strengthened by using personally relevant lies instead of impersonal stimuli, and a limitation is its lack of generalizability to other applications of MRT. This study shows that a simple yet robust methodology for assessing MRT as a diagnostic tool can be implemented effectively.
Affiliation(s)
- Anne M Jensen
- Department of Continuing Professional Education and Department of Primary Care Health Sciences, University of Oxford, Oxford, United Kingdom
- Richard J Stevens
- Department of Primary Care Health Sciences, Radcliffe Observatory Quarter, University of Oxford, Oxford, United Kingdom
- Amanda J Burls
- School of Health Sciences, City University London, London, United Kingdom
|
19
|
Abstract
Micro-expressions are brief spontaneous facial expressions that appear on a face when a person conceals an emotion, making them different from normal facial expressions in subtlety and duration. Currently, emotion classes within the CASME II dataset (Chinese Academy of Sciences Micro-expression II) are based on Action Units and self-reports, creating conflicts during machine learning training. We show that classifying expressions using Action Units, instead of predicted emotion, removes the potential bias of human reporting. The proposed classes are tested using LBP-TOP (Local Binary Patterns from Three Orthogonal Planes), HOOF (Histograms of Oriented Optical Flow) and HOG 3D (3D Histogram of Oriented Gradient) feature descriptors. The experiments are evaluated on two benchmark FACS (Facial Action Coding System) coded datasets: CASME II and SAMM (A Spontaneous Micro-Facial Movement). The best result achieves 86.35% accuracy when classifying the proposed 5 classes on CASME II using HOG 3D, outperforming the state-of-the-art 5-class emotion-based classification on CASME II. The results indicate that classification based on Action Units provides an objective method to improve micro-expression recognition.
20
Maier BG, Niehaus S, Wachholz S, Volbert R. The Strategic Meaning of CBCA Criteria From the Perspective of Deceivers. Front Psychol 2018; 9:855. [PMID: 29937741] [PMCID: PMC6002523] [DOI: 10.3389/fpsyg.2018.00855] [Received: 03/07/2018] [Accepted: 05/11/2018]
Abstract
In 2014, Volbert and Steller introduced a revised model of Criteria-Based Content Analysis (CBCA) that grouped a modified set of content criteria in closer reference to their assumed latent processes, resulting in three dimensions of memory-related, script-deviant and strategy-based criteria. The model assumes that deceivers try to integrate memory-related criteria (though not as successfully as truth tellers), whereas out of strategic considerations they will avoid expressing the other criteria. The aim of the current study was to test this assumption. A vignette was presented via an online questionnaire asking participants (n = 135) to rate the strategic value of CBCA criteria on a five-point scale. One-sample t-tests showed that participants attribute positive strategic value to most memory-related criteria and negative value to the remaining criteria, except for the criteria self-deprecation and pardoning the perpetrator. Overall, the results corroborated the model's suitability in distinguishing different groups of criteria, some which liars are inclined to integrate and others which they intend to avoid, and in this way provide useful hints for forensic practitioners in appraising the criteria's diagnostic value.
Affiliation(s)
- Susanna Niehaus
- Lucerne University of Applied Sciences and Arts, Lucerne, Switzerland
- Sina Wachholz
- Charité - Universitaetsmedizin Berlin, Institute of Forensic Psychiatry, Berlin, Germany
- Renate Volbert
- Psychologische Hochschule Berlin, Berlin, Germany
- Charité - Universitaetsmedizin Berlin, Institute of Forensic Psychiatry, Berlin, Germany

21
Zong Y, Zheng W, Huang X, Shi J, Cui Z, Zhao G. Domain Regeneration for Cross-Database Micro-Expression Recognition. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2018; 27:2484-2498. [PMID: 29994602] [DOI: 10.1109/tip.2018.2797479]
Abstract
Recently, micro-expression recognition has attracted considerable attention from researchers due to its potential value in many practical applications, e.g., lie detection. In this paper, we investigate an interesting and challenging problem in micro-expression recognition, i.e., cross-database micro-expression recognition, in which the training and testing samples come from different micro-expression databases. Under this setting, the consistent feature distribution between training and testing samples that holds in conventional micro-expression recognition is seriously broken, and hence the performance of most current well-performing micro-expression recognition methods may drop sharply. To overcome this, we propose a simple yet effective framework called Domain Regeneration (DR). The DR framework learns a domain regenerator that regenerates the micro-expression samples from the source and target databases, respectively, such that they follow the same or similar feature distributions. Thus, we are able to use a classifier learned on the labeled source micro-expression samples to predict the labels of the unlabeled target micro-expression samples. To evaluate the proposed DR framework, we conduct extensive cross-database micro-expression recognition experiments designed on the SMIC and CASME II databases. Experimental results show that, compared with recent state-of-the-art cross-database emotion recognition methods, the proposed DR framework achieves more promising performance.
22

23
Zheng H, Zhu J, Yang Z, Jin Z. Effective micro-expression recognition using relaxed K-SVD algorithm. INT J MACH LEARN CYB 2017. [DOI: 10.1007/s13042-017-0684-6]
24
Wang SJ, Yan WJ, Sun T, Zhao G, Fu X. Sparse tensor canonical correlation analysis for micro-expression recognition. Neurocomputing 2016. [DOI: 10.1016/j.neucom.2016.05.083]
25
Kleinberg B, Verschuere B. The role of motivation to avoid detection in reaction time-based concealed information detection. JOURNAL OF APPLIED RESEARCH IN MEMORY AND COGNITION 2016. [DOI: 10.1016/j.jarmac.2015.11.004]
26
Wang SJ, Yan WJ, Li X, Zhao G, Zhou CG, Fu X, Yang M, Tao J. Micro-Expression Recognition Using Color Spaces. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2015; 24:6034-6047. [PMID: 26540689] [DOI: 10.1109/tip.2015.2496314]
Abstract
Micro-expressions are brief involuntary facial expressions that reveal genuine emotions and, thus, help detect lies. Because of their many promising applications, they have attracted the attention of researchers from various fields. Recent research reveals that two perceptual color spaces (CIELab and CIELuv) provide useful information for expression recognition. This paper is an extended version of our International Conference on Pattern Recognition paper, in which we propose a novel color space model, tensor independent color space (TICS), to help recognize micro-expressions. In this paper, we further show that CIELab and CIELuv are also helpful in recognizing micro-expressions, and we indicate why these three color spaces achieve better performance. A micro-expression color video clip is treated as a fourth-order tensor, i.e., a four-dimensional array. The first two dimensions are the spatial information, the third is the temporal information, and the fourth is the color information. We transform the fourth dimension from RGB into TICS, in which the color components are as independent as possible. The combination of dynamic texture and independent color components achieves higher accuracy than RGB. In addition, we define a set of regions of interest (ROIs) based on the Facial Action Coding System and calculate the dynamic texture histograms for each ROI. Experiments are conducted on two micro-expression databases, CASME and CASME 2, and the results show that the performances for TICS, CIELab, and CIELuv are better than those for RGB or gray.
27
Omura Y, Nihrane A, Lu D, Jones MK, Shimotsuura Y, Ohki M. Simple New Method of Detecting Lies By Identifying Invisible Unique Physiological Reflex Response Appearing Often Less Than 10-15 Seconds on the Specific Parts of Face of Lying Person; Quick Screening of Potential Murderers & Problematic Persons. ACUPUNCTURE ELECTRO 2015; 40:101-36. [PMID: 26369253] [DOI: 10.3727/036012915x14381285982921]
Abstract
Frequently, we cannot find any significant visible changes when somebody lies, but we found that there are significant invisible changes appearing in specific areas of the face, whose location often depends on whether the lie is serious and whether physical violence is involved. These abnormalities were detected non-invasively at the following areas: 1) the ear lobules and a small round area of each upper lateral side of the forehead; 2) the skin between the base of the two orifices of the nose and the upper end of the upper lip; and 3) the alae of both sides of the nose. These invisible changes usually last less than 15 seconds after telling a lie. In these areas, the Bi-Digital O-Ring Test (BDORT), which received a U.S. patent in 1993, became significantly weak with an abnormal value of (-)7, and TXB2, measured non-invasively, increased from 0.125-0.5 ng to 12.5-15 ng within the first 5 seconds and then dropped back below 1 ng after 15 seconds. These changes can be documented semi-permanently by photographing the face of a person within as little as 10 seconds after a lying statement; the abnormal responses appear in one or more of the three areas above. At least one abnormal pupil with BDORT of (-)8-(-)12, marked reduction in acetylcholine, and abnormal increases in any of three Alzheimer's disease-associated factors (Apolipoprotein (Apo) E4, β-Amyloid (1-42), Tau protein), as well as viral and bacterial infections, were detected in both pupils and the forehead of murderers and people who often have problems with others. An analysis of well-known recent mass murderers was presented as examples. Using these findings, potential murderers and people who are very likely to develop problems with others can be screened within 5-10 minutes by examining their facial photographs and signatures before school admission or employment.
28
Walczyk JJ, Harris LL, Duck TK, Mulay D. A social-cognitive framework for understanding serious lies: Activation-decision-construction-action theory. NEW IDEAS IN PSYCHOLOGY 2014. [DOI: 10.1016/j.newideapsych.2014.03.001]
29
Hartwig M, Bond CF. Lie Detection from Multiple Cues: A Meta-analysis. APPLIED COGNITIVE PSYCHOLOGY 2014. [DOI: 10.1002/acp.3052]
Affiliation(s)
- Maria Hartwig
- Department of Psychology, John Jay College of Criminal Justice, City University of New York, USA

30
Yan WJ, Wang SJ, Liu YJ, Wu Q, Fu X. For micro-expression recognition: Database and suggestions. Neurocomputing 2014. [DOI: 10.1016/j.neucom.2014.01.029]
31
I want to lie about not knowing you, but my precuneus refuses to cooperate. Sci Rep 2014; 3:1636. [PMID: 23572081] [PMCID: PMC3622132] [DOI: 10.1038/srep01636] [Received: 12/12/2012] [Accepted: 03/19/2013]
Abstract
Previously identified neural correlates of deception, such as the prefrontal, anterior cingulate, and parietal regions, have proven to be unreliable neural markers of deception, most likely because activity in these regions reflects executive processes that are not specific to deception. Herein, we report the first fMRI study that provides strong preliminary evidence that the neural activity associated with perception but not executive processes could offer a better marker of deception with regard to face familiarity. Using a face-recognition task, activity in the left precuneus during the perception of familiar faces accurately marked 11 of 13 subjects who lied about not knowing faces that were in fact familiar to them. This level of classification accuracy is much higher than the level predicted by chance and agrees with other findings by experts in lie detection.
32
Hurley CM, Anker AE, Frank MG, Matsumoto D, Hwang HC. Background factors predicting accuracy and improvement in micro expression recognition. MOTIVATION AND EMOTION 2014. [DOI: 10.1007/s11031-014-9410-9]
33
Lyons M, Healy N, Bruno D. It takes one to know one: Relationship between lie detection and psychopathy. PERSONALITY AND INDIVIDUAL DIFFERENCES 2013. [DOI: 10.1016/j.paid.2013.05.018]
34
Shen XB, Wu Q, Fu XL. Effects of the duration of expressions on the recognition of microexpressions. J Zhejiang Univ Sci B 2012; 13:221-30. [PMID: 22374615] [DOI: 10.1631/jzus.b1100063]
Abstract
OBJECTIVE The purpose of this study was to investigate the effects of the duration of expressions on the recognition of microexpressions, which are closely related to deception. METHODS In two experiments, participants were briefly (from 20 to 300 ms) shown one of six basic expressions and then were asked to identify the expression. RESULTS The results showed that the participants' performance in recognition of microexpressions increased with the duration of the expressions, reaching a turning point at 200 ms before levelling off. The results also indicated that practice could improve the participants' performance. CONCLUSIONS The results of this study suggest that the proper upper limit of the duration of microexpressions might be around 1/5 of a second and confirmed that the ability to recognize microexpressions can be enhanced with practice.
Affiliation(s)
- Xun-bing Shen
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China

35

36
Buckley JP. Detection of deception researchers needs to collaborate with experienced practitioners. JOURNAL OF APPLIED RESEARCH IN MEMORY AND COGNITION 2012. [DOI: 10.1016/j.jarmac.2012.04.002]
37
Bond GD. Focus on basic cognitive mechanisms and strategies in deception research (and remand custody of 'wizards' to Harry Potter movies). JOURNAL OF APPLIED RESEARCH IN MEMORY AND COGNITION 2012. [DOI: 10.1016/j.jarmac.2012.04.003]
38
Porter S, ten Brinke L, Wallace B. Secrets and Lies: Involuntary Leakage in Deceptive Facial Expressions as a Function of Emotional Intensity. JOURNAL OF NONVERBAL BEHAVIOR 2011. [DOI: 10.1007/s10919-011-0120-7]