1
Guhan P, Awasthi N, McDonald K, Bussell K, Reeves G, Manocha D, Bera A. Developing a Machine Learning-Based Automated Patient Engagement Estimator for Telehealth: Algorithm Development and Validation Study. JMIR Form Res 2025; 9:e46390. [PMID: 39832353] [DOI: 10.2196/46390] [Received: 02/09/2023] [Revised: 06/30/2023] [Accepted: 09/03/2024] [Indexed: 01/22/2025]
Abstract
BACKGROUND Patient engagement is a critical but challenging public health priority in behavioral health care. During telehealth sessions, health care providers need to rely predominantly on verbal strategies rather than typical nonverbal cues to effectively engage patients. Hence, the typical patient engagement behaviors are now different, and health care provider training on telehealth patient engagement is unavailable or quite limited. Therefore, we explore the application of machine learning for estimating patient engagement. This can assist psychotherapists in the development of a therapeutic relationship with the patient and enhance patient engagement in the treatment of mental health conditions during tele-mental health sessions. OBJECTIVE This study aimed to examine the ability of machine learning models to estimate patient engagement levels during a tele-mental health session and understand whether the machine learning approach could support therapeutic engagement between the client and psychotherapist. METHODS We proposed a multimodal learning-based approach. We uniquely leveraged latent vectors corresponding to affective and cognitive features frequently used in psychology literature to understand a person's level of engagement. Given the labeled data constraints that exist in health care, we explored a semisupervised learning solution. To support the development of similar technologies for telehealth, we also plan to release a dataset called Multimodal Engagement Detection in Clinical Analysis (MEDICA). This dataset includes 1229 video clips, each lasting 3 seconds. In addition, we present experiments conducted on this dataset, along with real-world tests that demonstrate the effectiveness of our method. RESULTS Our algorithm reports a 40% improvement in root mean square error over state-of-the-art methods for engagement estimation. 
In our real-world tests on 438 video clips from psychotherapy sessions with 20 patients, positive correlations were observed between psychotherapists' Working Alliance Inventory scores and our mean and median engagement level estimates, in contrast to prior methods. This indicates the potential of the proposed model to produce patient engagement estimates that align well with the engagement measures used by psychotherapists. CONCLUSIONS Patient engagement has been identified as important for improving therapeutic alliance. However, limited research has measured it in a telehealth setting, where the therapist lacks conventional cues to make a confident assessment. The algorithm developed here is an attempt to model person-oriented engagement theories within machine learning frameworks to estimate a patient's level of engagement accurately and reliably in telehealth. The results are encouraging and emphasize the value of combining psychology and machine learning to understand patient engagement. Further testing in real-world settings is necessary to fully assess its usefulness in helping therapists gauge patient engagement during online sessions. However, the proposed approach and the creation of the new dataset, MEDICA, open avenues for future research and the development of impactful tools for telehealth.
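The improvement quoted in the results refers to the standard root-mean-square-error metric for continuous engagement estimates, read here as a lower RMSE than the baseline on the same data. As a point of reference only, a minimal sketch of the computation (the engagement values below are hypothetical, not taken from the study):

```python
import math

def rmse(predicted, actual):
    """Root mean square error between predicted and observed values."""
    assert len(predicted) == len(actual) and predicted
    return math.sqrt(
        sum((p - a) ** 2 for p, a in zip(predicted, actual)) / len(predicted)
    )

# Hypothetical engagement levels on a 0-1 scale for four clips.
predicted = [0.8, 0.6, 0.9, 0.4]
observed = [0.7, 0.5, 1.0, 0.4]
error = rmse(predicted, observed)  # about 0.087
```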
Affiliation(s)
- Pooja Guhan
- Department of Computer Science, University of Maryland, College Park, MD, United States
- Naman Awasthi
- Department of Computer Science, University of Maryland, College Park, MD, United States
- Kathryn McDonald
- Department of Psychiatry, Child and Adolescent Division, University of Maryland, Baltimore, MD, United States
- Kristin Bussell
- School of Nursing, University of Maryland, Baltimore, MD, United States
- Gloria Reeves
- Department of Psychiatry, Child and Adolescent Division, University of Maryland, Baltimore, MD, United States
- Dinesh Manocha
- Department of Computer Science, University of Maryland, College Park, MD, United States
- Aniket Bera
- Department of Computer Science, Purdue University, West Lafayette, IN, United States
2
Dong LD, Batool K, Ann Cameron C, Lee K. Smiling, face covering, and rhythmic body rocking in children who cheat versus do not cheat. J Exp Child Psychol 2025; 249:106119. [PMID: 39531991] [DOI: 10.1016/j.jecp.2024.106119] [Received: 05/29/2024] [Revised: 10/01/2024] [Accepted: 10/05/2024] [Indexed: 11/16/2024]
Abstract
Cheating is the behavioral realization of immoral decisions. It is a dynamic process that neither begins nor ends at the moment of cheating itself. However, little research has closely examined the behavioral dynamics of the cheating process. The current study analyzed smiling, face covering, and rhythmic body rocking among 4- to 7-year-old children (N = 120) who participated in a challenging math test. We compared these target expressive behaviors between baseline practice trials and the critical test trial. Compared with children who did not cheat, those who cheated were more likely to smile during the critical test trial and more likely to cover their faces throughout the experiment, even before they had the opportunity to cheat. Rhythmic body rocking did not differ between cheating and non-cheating children. The study identified behavioral differences between children who cheated and those who did not, laying the groundwork for understanding children's cheating from the lens of behavioral dynamics. It also suggests that, with further research, there may be some potential for distinguishing between these groups based on behavioral cues.
Affiliation(s)
- Liyuzhi D Dong
- Dr. Eric Jackman Institute of Child Study, University of Toronto, Toronto, Ontario M5R 2X2, Canada
- Kanza Batool
- Dr. Eric Jackman Institute of Child Study, University of Toronto, Toronto, Ontario M5R 2X2, Canada
- Catherine Ann Cameron
- Department of Psychology, University of British Columbia, Vancouver, British Columbia V6T 1Z4, Canada
- Kang Lee
- Dr. Eric Jackman Institute of Child Study, University of Toronto, Toronto, Ontario M5R 2X2, Canada
3
Pelot A, Gallant A, Mazerolle MP, Roy-Charland A. Methodological Variations to Explore Conflicting Results in the Existing Literature of Masking Smile Judgment. Behav Sci (Basel) 2024; 14:944. [PMID: 39457816] [PMCID: PMC11505263] [DOI: 10.3390/bs14100944] [Received: 07/08/2024] [Revised: 10/08/2024] [Accepted: 10/11/2024] [Indexed: 10/28/2024]
Abstract
Although a smile can serve as an expression of genuine happiness, it can also be generated to conceal negative emotions. The traces of negative emotion present in these types of smiles can produce micro-expressions: subtle movements of the facial muscles manifested in the upper or lower half of the face. Studies examining the judgment of smiles masking negative emotions have mostly employed dichotomous rating measures, while also assuming that dichotomous categorization of a smile as happy or not is synonymous with judgment of the smile's authenticity. The aim of the two studies was to explore the judgment of enjoyment and masking smiles using unipolar and bipolar continuous rating measures and to examine differences in judgment when instructions varied between happiness and authenticity. In Experiment 1, participants rated smiles on 7-point scales of perceived happiness and authenticity. In Experiment 2, participants rated the smiles on bipolar 7-point scales anchored between happiness and a negative emotion label. In both studies, similar patterns were observed: faces with traces of fear were rated significantly less happy/authentic, and those with traces of anger in the brows were rated significantly happier/more authentic. No effect of instruction type was found, indicating that participants perceive and judge enjoyment and masking smiles similarly under the two instructions. Additionally, the use of bipolar scales anchored between a negative emotion label and happiness was not consistently effective in influencing the judgment of masking smiles.
Affiliation(s)
- Annalie Pelot
- School of Psychology, Laurentian University, Sudbury, ON P3E 2C6, Canada
- Adèle Gallant
- École de Psychologie, Université de Moncton, Moncton, NB E1A 3E9, Canada
- Marie-Pier Mazerolle
- École de Psychologie, Université de Moncton, Moncton, NB E1A 3E9, Canada
- Annie Roy-Charland
- École de Psychologie, Université de Moncton, Moncton, NB E1A 3E9, Canada
4
Cash DK, Pazos LA. Masking the truth: the impact of face masks on deception detection. J Soc Psychol 2024; 164:840-853. [PMID: 36987617] [DOI: 10.1080/00224545.2023.2195092] [Received: 04/11/2022] [Accepted: 03/21/2023] [Indexed: 03/30/2023]
Abstract
Because of the pandemic, face masks have become ubiquitous in social interactions, but it remains unclear how face masks influence the ability to discriminate between truthful and deceptive statements. The current study manipulated the presence of face masks, statement veracity, statement valence (positive or negative), and whether the statements had been practiced or not. Despite participants' expectations, face masks generally did not impair detection accuracy. However, participants were more accurate when judging negatively valenced statements when the speaker was not wearing a face mask. Participants were also more likely to believe positively rather than negatively valenced statements.
5
Witkower Z, Tian L, Tracy J, Rule NO. Smile variation leaks personality and increases the accuracy of interpersonal judgments. PNAS Nexus 2024; 3:pgae343. [PMID: 39246668] [PMCID: PMC11378078] [DOI: 10.1093/pnasnexus/pgae343] [Received: 10/04/2023] [Accepted: 08/02/2024] [Indexed: 09/10/2024]
Abstract
People ubiquitously smile during brief interactions and first encounters, and when posing for photos used for virtual dating, social networking, and professional profiles. Yet not all smiles are the same: subtle individual differences emerge in how people display this nonverbal facial expression. We hypothesized that idiosyncrasies in people's smiles can reveal aspects of their personality and guide the personality judgments made by observers, thus enabling a smiling face to serve as a valuable tool in making more precise inferences about an individual's personality. Study 1 (N = 303) supported the hypothesis that smile variation reveals personality, and identified the facial-muscle activations responsible for this leakage. Study 2 (N = 987) found that observers use the subtle distinctions in smiles to guide their personality judgments, consequently forming slightly more accurate judgments of smiling faces than neutral ones. Smiles thus encode traces of personality traits, which perceivers utilize as valid cues of those traits.
Affiliation(s)
- Zachary Witkower
- Department of Psychology, University of Amsterdam, Nieuwe Achtergracht 129-B, Amsterdam 1018 WS, The Netherlands
- Laura Tian
- Department of Psychology, University of Toronto, 100 St. George Street, Toronto, ON M5S 3G3, Canada
- Jessica Tracy
- Department of Psychology, University of British Columbia, 2136 West Mall, Vancouver, BC V6T 1Z4, Canada
- Nicholas O Rule
- Department of Psychology, University of Toronto, 100 St. George Street, Toronto, ON M5S 3G3, Canada
6
Stewart PA, Svetieva E, Mullins JK. The influence of President Trump's micro-expressions during his COVID-19 national address on viewers' emotional response. Politics Life Sci 2024; 43:167-184. [PMID: 38832534] [DOI: 10.1017/pls.2024.8] [Indexed: 06/05/2024]
Abstract
This preregistered study replicates and extends studies concerning emotional response to wartime rally speeches and applies this work to U.S. President Donald Trump's first national address regarding the COVID-19 pandemic on March 11, 2020. We experimentally test the effect of a micro-expression (ME) by Trump associated with appraised threat on change in participant self-reported distress, sadness, anger, affinity, and reassurance while controlling for followership. We find that polarization is perpetuated in emotional response to the address, which focused on portraying the COVID-19 threat as being of Chinese provenance. We also find a significant, albeit slight, effect of Trump's ME on self-reported sadness, suggesting that this facial behavior did not diminish his speech but instead served as a form of nonverbal punctuation. Further exploration of participant responses using the Linguistic Inquiry and Word Count (LIWC) software reinforces and extends these findings.
Affiliation(s)
- Patrick A Stewart
- Department of Political Science, University of Arkansas, Fayetteville, AR, USA
- Elena Svetieva
- Department of Communication, University of Colorado, Colorado Springs, CO, USA
- Jeffrey K Mullins
- Department of Information Systems, University of Arkansas, Fayetteville, AR, USA
7
Ahmad A, Li Z, Iqbal S, Aurangzeb M, Tariq I, Flah A, Blazek V, Prokop L. A comprehensive bibliometric survey of micro-expression recognition system based on deep learning. Heliyon 2024; 10:e27392. [PMID: 38495163] [PMCID: PMC10943397] [DOI: 10.1016/j.heliyon.2024.e27392] [Received: 11/12/2023] [Revised: 02/21/2024] [Accepted: 02/28/2024] [Indexed: 03/19/2024]
Abstract
Micro-expressions (MEs) are rapidly occurring expressions that reveal the true emotions a person is trying to hide, cover, or suppress. These expressions, which reveal a person's actual feelings, have a broad spectrum of applications in public safety and clinical diagnosis. This study provides a comprehensive review of the area of ME recognition. Bibliometric and network analysis techniques are used to compile all the available literature related to ME recognition. A total of 735 publications from the Web of Science (WOS) and Scopus databases were evaluated from December 2012 to December 2022 using all relevant keywords. The first round of data screening produced some basic information, which was further extracted for citation, coupling, co-authorship, co-occurrence, bibliographic, and co-citation analysis. Additionally, a thematic and descriptive analysis was executed to investigate the content of prior research findings and the research techniques used in the literature. Year-wise publication counts indicate that output between 2012 and 2017 was relatively low; by 2021, however, a nearly 24-fold increase brought the total to 154 publications. The three most productive journals and conferences were IEEE Transactions on Affective Computing (n = 20 publications), followed by Neurocomputing (n = 17) and Multimedia Tools and Applications (n = 15). Zhao G was the most prolific author, with 48 publications, and the most influential country was China (620 publications). Citation counts for the top authors ranged from 100 to 1225, and by organization, the University of Oulu had the most published papers (n = 51). Deep learning, facial expression recognition, and emotion recognition were among the most frequently used terms. ME research was found to fall primarily within the discipline of engineering, with China and Malaysia contributing comparatively more.
Affiliation(s)
- Adnan Ahmad
- Key Laboratory of Underwater Acoustic Signal Processing of Ministry of Education, School of Information Science and Engineering, Southeast University, Nanjing, 210096, China
- Zhao Li
- Key Laboratory of Underwater Acoustic Signal Processing of Ministry of Education, School of Information Science and Engineering, Southeast University, Nanjing, 210096, China
- Sheeraz Iqbal
- Department of Electrical Engineering, University of Azad Jammu and Kashmir, Muzaffarabad, 13100, AJK, Pakistan
- Muhammad Aurangzeb
- School of Electrical Engineering, Southeast University, Nanjing, 210096, China
- Irfan Tariq
- Key Laboratory of Underwater Acoustic Signal Processing of Ministry of Education, School of Information Science and Engineering, Southeast University, Nanjing, 210096, China
- Ayman Flah
- College of Engineering, University of Business and Technology (UBT), Jeddah, 21448, Saudi Arabia
- MEU Research Unit, Middle East University, Amman, Jordan
- The Private Higher School of Applied Sciences and Technology of Gabes, University of Gabes, Gabes, Tunisia
- National Engineering School of Gabes, University of Gabes, Gabes, 6029, Tunisia
- Vojtech Blazek
- ENET Centre, VSB—Technical University of Ostrava, Ostrava, Czech Republic
- Lukas Prokop
- ENET Centre, VSB—Technical University of Ostrava, Ostrava, Czech Republic
8
Patterson ML, Fridlund AJ, Crivelli C. Four Misconceptions About Nonverbal Communication. Perspect Psychol Sci 2023; 18:1388-1411. [PMID: 36791676] [PMCID: PMC10623623] [DOI: 10.1177/17456916221148142] [Indexed: 02/17/2023]
Abstract
Research and theory in nonverbal communication have made great advances toward understanding the patterns and functions of nonverbal behavior in social settings. Progress has been hindered, we argue, by presumptions about nonverbal behavior that follow from both received wisdom and faulty evidence. In this article, we document four persistent misconceptions about nonverbal communication: namely, that people communicate using decodable body language; that they have a stable personal space by which they regulate contact with others; that they express emotion using universal, evolved, iconic, categorical facial expressions; and that they can deceive and detect deception using dependable telltale clues. We show how these misconceptions permeate research as well as the practices of popular behavior experts, with consequences that extend from intimate relationships to the boardroom and courtroom and even to the arena of international security. Notwithstanding these misconceptions, existing frameworks of nonverbal communication are being challenged by more comprehensive systems approaches and by virtual technologies that ambiguate the roles and identities of interactants and the contexts of interaction.
Affiliation(s)
- Alan J. Fridlund
- Department of Psychological and Brain Sciences, University of California, Santa Barbara
9
Miolla A, Cardaioli M, Scarpazza C. Padova Emotional Dataset of Facial Expressions (PEDFE): A unique dataset of genuine and posed emotional facial expressions. Behav Res Methods 2023; 55:2559-2574. [PMID: 36002622] [PMCID: PMC10439033] [DOI: 10.3758/s13428-022-01914-4] [Accepted: 06/15/2022] [Indexed: 11/08/2022]
Abstract
Facial expressions are among the most powerful signals human beings use to convey their emotional states. Indeed, emotional facial datasets represent the most effective and controlled method of examining humans' interpretation of and reaction to various emotions. However, scientific research on emotion has mainly relied on static pictures of facial expressions posed (i.e., simulated) by actors, creating a significant bias in the emotion literature. This dataset tries to fill this gap, providing a considerable number (N = 1458) of dynamic genuine (N = 707) and posed (N = 751) clips of the six universal emotions from 56 participants. The dataset is available in two versions: original clips, including participants' body and background, and modified clips, where only the face of participants is visible. Notably, the original dataset has been validated by 122 human raters, while the modified dataset has been validated by 280 human raters. Hit rates for emotion and genuineness, as well as the mean and standard deviation of genuineness and intensity perception, are provided for each clip to allow future users to select the clips most appropriate to their scientific questions.
Affiliation(s)
- A. Miolla
- Department of General Psychology, University of Padua, Padua, Italy
- M. Cardaioli
- Department of Mathematics, University of Padua, Padua, Italy
- GFT Italy, Milan, Italy
- C. Scarpazza
- Department of General Psychology, University of Padua, Padua, Italy
10
LaPalme ML, Barsade SG, Brackett MA, Floman JL. The Meso-Expression Test (MET): A Novel Assessment of Emotion Perception. J Intell 2023; 11:145. [PMID: 37504788] [PMCID: PMC10381771] [DOI: 10.3390/jintelligence11070145] [Received: 03/17/2023] [Revised: 07/13/2023] [Accepted: 07/16/2023] [Indexed: 07/29/2023]
Abstract
Emotion perception is a primary facet of Emotional Intelligence (EI) and the underpinning of interpersonal communication. In this study, we examined meso-expressions: the everyday, moderate-intensity emotions communicated through the face, voice, and body. We theoretically distinguished meso-expressions from other well-known emotion research paradigms (i.e., macro-expressions and micro-expressions). In Study 1, we demonstrated that people can reliably discriminate between meso-expressions, and we created a corpus of 914 unique video displays of meso-expressions across a race- and gender-diverse set of expressors. In Study 2, we developed a novel video-based assessment of emotion perception ability: the Meso-Expression Test (MET). In this study, we found that the MET is psychometrically valid and demonstrated measurement equivalence across Asian, Black, Hispanic, and White perceiver groups and across men and women. In Study 3, we examined the construct validity of the MET and showed that it converged with other well-known measures of emotion perception and diverged from cognitive ability. Finally, in Study 4, we showed that the MET is positively related to important psychosocial outcomes, including social well-being, social connectedness, and empathic concern, and is negatively related to alexithymia, stress, depression, anxiety, and adverse social interactions. We conclude with a discussion of the implications of our findings for EI ability research and the practical applications of the MET.
Affiliation(s)
- Matthew L LaPalme
- Yale Center for Emotional Intelligence, Yale University, New Haven, CT 06511, USA
- Sigal G Barsade
- Wharton, University of Pennsylvania, Philadelphia, PA 19104, USA
- Marc A Brackett
- Yale Center for Emotional Intelligence, Yale University, New Haven, CT 06511, USA
- James L Floman
- Yale Center for Emotional Intelligence, Yale University, New Haven, CT 06511, USA
11
Zheng Y, Blasch E. Facial Micro-Expression Recognition Enhanced by Score Fusion and a Hybrid Model from Convolutional LSTM and Vision Transformer. Sensors (Basel) 2023; 23:5650. [PMID: 37420815] [PMCID: PMC10303532] [DOI: 10.3390/s23125650] [Received: 05/15/2023] [Revised: 06/02/2023] [Accepted: 06/13/2023] [Indexed: 07/09/2023]
Abstract
In the billions of faces that are shaped by thousands of different cultures and ethnicities, one thing remains universal: the way emotions are expressed. To take the next step in human-machine interaction, a machine (e.g., a humanoid robot) must be able to interpret facial emotions. Allowing systems to recognize micro-expressions affords the machine a deeper dive into a person's true feelings, which will take human emotion into account while making optimal decisions. For instance, these machines will be able to detect dangerous situations, alert caregivers to challenges, and provide appropriate responses. Micro-expressions are involuntary and transient facial expressions capable of revealing genuine emotions. We propose a new hybrid neural network (NN) model capable of micro-expression recognition in real-time applications. Several NN models are first compared in this study. Then, a hybrid NN model is created by combining a convolutional neural network (CNN), a recurrent neural network (RNN, e.g., long short-term memory (LSTM)), and a vision transformer. The CNN can extract spatial features (within a neighborhood of an image), whereas the LSTM can summarize temporal features. In addition, a transformer with an attention mechanism can capture sparse spatial relations residing in an image or between frames in a video clip. The inputs of the model are short facial videos, while the outputs are the micro-expressions recognized from the videos. The NN models are trained and tested with publicly available facial micro-expression datasets to recognize different micro-expressions (e.g., happiness, fear, anger, surprise, disgust, sadness). Score fusion and improvement metrics are also presented in our experiments. The results of our proposed models are compared with those of literature-reported methods tested on the same datasets. The proposed hybrid model performs the best, where score fusion can dramatically increase recognition performance.
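The entry above does not spell out its fusion rule; one common form of score-level (late) fusion is a weighted average of the per-class scores each model outputs for the same clip, followed by an argmax. A minimal sketch under that assumption, with hypothetical model names, class labels, and scores:

```python
def fuse_scores(score_lists, weights=None):
    """Score-level (late) fusion: weighted average of per-class scores
    produced by several models evaluated on the same input."""
    n = len(score_lists)
    weights = weights if weights is not None else [1.0 / n] * n
    fused = [0.0] * len(score_lists[0])
    for w, scores in zip(weights, score_lists):
        for i, s in enumerate(scores):
            fused[i] += w * s
    return fused

# Hypothetical per-class scores (happiness, fear, anger) from three models.
cnn = [0.5, 0.3, 0.2]
lstm = [0.6, 0.2, 0.2]
vit = [0.4, 0.4, 0.2]

fused = fuse_scores([cnn, lstm, vit])
predicted_class = max(range(len(fused)), key=fused.__getitem__)  # index 0: happiness
```

Non-uniform weights (e.g., favoring the model with the best validation accuracy) are a common refinement of this equal-weight average.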
Affiliation(s)
- Yufeng Zheng
- Department of Data Science, University of Mississippi Medical Center, Jackson, MS 39216, USA
12
Klingner CM, Guntinas-Lichius O. Facial expression and emotion. Laryngorhinootologie 2023; 102:S115-S125. [PMID: 37130535] [PMCID: PMC10171334] [DOI: 10.1055/a-2003-5687] [Indexed: 05/04/2023]
Abstract
Human facial expressions are unique in their ability to express our emotions and communicate them to others. The mimic expression of basic emotions is very similar across different cultures and also has many features in common with other mammals. This suggests a common genetic origin of the association between facial expressions and emotion. However, recent studies also show cultural influences and differences. The recognition of emotions from facial expressions, as well as the process of expressing one's emotions facially, occurs within an extremely complex cerebral network. Due to the complexity of the cerebral processing system, a variety of neurological and psychiatric disorders can significantly disrupt the coupling of facial expressions and emotions. Wearing masks also limits our ability to convey and recognize emotions through facial expressions. Facial expressions, however, can convey not only "real" emotions but also acted ones, opening up the possibility of faking socially desired expressions and of consciously faking emotions. These pretenses are mostly imperfect, however, and can be accompanied by short-term facial movements that indicate the emotions actually present (microexpressions). Microexpressions are of very short duration and often barely perceptible to humans, but they are an ideal application area for computer-aided analysis. The automatic identification of microexpressions has not only received scientific attention in recent years; its use is also being tested in security-related areas. This article summarizes the current state of knowledge on facial expressions and emotions.
Affiliation(s)
- Carsten M Klingner
- Hans Berger Department of Neurology, Jena University Hospital, Germany
- Biomagnetic Center, Jena University Hospital, Germany
13
Gallant A, Pelot A, Mazerolle MP, Sonier RP, Roy-Charland A. The role of emotion-related individual differences in enjoyment and masking smile judgment. BMC Psychol 2023; 11:132. [PMID: 37098621] [PMCID: PMC10131331] [DOI: 10.1186/s40359-023-01173-8] [Received: 09/06/2022] [Accepted: 04/14/2023] [Indexed: 04/27/2023]
Abstract
BACKGROUND While some research indicates that individuals can accurately judge smile authenticity of enjoyment and masking smile expressions, other research suggest modest judgment rates of masking smiles. The current study explored the role of emotion-related individual differences in the judgment of authenticity and recognition of negative emotions in enjoyment and masking smile expressions as a potential explanation for the differences observed. METHODS Specifically, Experiment 1 investigated the role of emotion contagion (Doherty in J Nonverbal Behav 21:131-154, 1997), emotion intelligence (Schutte et al. in Personality Individ Differ 25:167-177, 1998), and emotion regulation (Gratz and Roemer in J Psychopathol Behav Assess 26:41-54, 2004) in smile authenticity judgment and recognition of negative emotions in masking smiles. Experiment 2 investigated the role of state and trait anxiety (Spielberger et al. in Manual for the state-trait anxiety inventory, Consulting Psychologists Press, Palo Alto, 1983) in smile authenticity judgment and recognition of negative emotions in the same masking smiles. In both experiments, repeated measures ANOVAs were conducted for judgment of authenticity, probability of producing the expected response, for the detection of another emotion, and for emotion recognition. A series of correlations were also calculated between the proportion of expected responses of smile judgement and the scores on the different subscales. RESULTS Results of the smile judgment and recognition tasks were replicated in both studies, and echoed results from prior studies of masking smile judgment: participants rated enjoyment smiles as happier than the masking smiles and, of the masking smiles, participants responded "really happy" more often for the angry-eyes masking smiles and more often categorized fear masking smiles as "not really happy". 
CONCLUSIONS Overall, while the emotion-related individual differences used in our study appear to affect the recognition of basic emotions in the literature, our study suggests that these traits, except for emotional awareness, do not predict performance on the judgment of complex expressions such as masking smiles. These results provide further information regarding the factors that do and do not contribute to more accurate judgment of smile authenticity and recognition of negative emotions in masking smiles.
Affiliation(s)
- Adèle Gallant
- School of Psychology, Université de Moncton, 18 Avenue Antonine-Maillet, Moncton, NB, E1A 3E9, Canada
- Annalie Pelot
- Department of Psychology, Laurentian University, Sudbury, ON, Canada
- Marie-Pier Mazerolle
- School of Psychology, Université de Moncton, 18 Avenue Antonine-Maillet, Moncton, NB, E1A 3E9, Canada
- René-Pierre Sonier
- School of Psychology, Université de Moncton, 18 Avenue Antonine-Maillet, Moncton, NB, E1A 3E9, Canada
- Annie Roy-Charland
- School of Psychology, Université de Moncton, 18 Avenue Antonine-Maillet, Moncton, NB, E1A 3E9, Canada

14
Chamberland JA, Collin CA. Effects of forward mask duration variability on the temporal dynamics
of brief facial expression categorization. Iperception 2023; 14:20416695231162580. [PMID: 36968319 PMCID: PMC10031613 DOI: 10.1177/20416695231162580] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2022] [Accepted: 02/22/2023] [Indexed: 03/24/2023] Open
Abstract
The Japanese and Caucasian Brief Affect Recognition Task (JACBART) has been
proposed as a standardized method for measuring people's ability to accurately
categorize briefly presented images of facial expressions. However, the factors
that impact performance in this task are not entirely understood. The current
study sought to explore the role of the forward mask's duration (i.e., fixed vs.
variable) in brief affect categorization across expressions of the six basic
emotions (i.e., anger, disgust, fear, happiness, sadness, and surprise) and
three presentation times (i.e., 17, 67, and 500 ms). The current findings provide
no evidence that a variable-duration forward mask negatively impacts
brief affect categorization. However, efficiency and necessity thresholds were
observed to vary across the expressions of emotion. Further exploration of the
temporal dynamics of facial affect categorization will therefore require a
consideration of these differences.
Affiliation(s)
- Justin A. Chamberland
- School of Psychology/École de psychologie, University of Ottawa/Université d’Ottawa, Ottawa, Ontario, K1N 6N5, Canada

15
Okubo M, Ishikawa K, Oyama T, Tanaka Y. The look in your eyes: The role of pupil dilation in disguising the perception of trustworthiness. JOURNAL OF TRUST RESEARCH 2023. [DOI: 10.1080/21515581.2023.2165090] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/20/2023]
Affiliation(s)
- Matia Okubo
- Department of Psychology, Senshu University, Kanagawa, Japan
- Kenta Ishikawa
- Department of Psychology, Senshu University, Kanagawa, Japan
- Takato Oyama
- Department of Psychology, Senshu University, Kanagawa, Japan

16
Money V. Demonstrating Anticipatory Deflection and a Preemptive Measure to Manage It: An Extension of Affect Control Theory. SOCIAL PSYCHOLOGY QUARTERLY 2023. [DOI: 10.1177/01902725221132508] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/12/2023]
Abstract
When people visualize a potential for deflection in future interactions, will they lie to prevent it? Affect control theory emphasizes the salience of deflection management in everyday life, otherwise known as an attempted realignment of experiences and expectations in the face of situational incongruency. Traditionally, deflection management is measured post hoc in an individual who, often disconnected from and unassociated with the situation, reconfigures the experience. This does not, however, speak to deflection management during an active interaction or how an individual might change things in anticipation of deflection. Prior to, or during, an active interaction, individuals have a unique opportunity to preemptively alter the definition of the situation based on anticipated sentiments. In essence, they can foresee oncoming deflection and act to avoid it. Using a vignette experiment, I extend affect control theory by highlighting deflection that is anticipated but not yet experienced. I also show that participants have higher odds of lying in interactions where an honest retelling would incur high deflection. To further inform this cognitive process, I present qualitative explanations from participants on why they chose their responses and how the dynamics of their relationship mattered.
17
Yang C, You X, Xie X, Duan Y, Wang B, Zhou Y, Feng H, Wang W, Fan L, Huang G, Shen X. Development of a Chinese werewolf deception database. Front Psychol 2023; 13:1047427. [PMID: 36698609 PMCID: PMC9869050 DOI: 10.3389/fpsyg.2022.1047427] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2022] [Accepted: 12/15/2022] [Indexed: 01/11/2023] Open
Abstract
Although it is important to accurately detect deception, limited research in this area has involved Asian people. We aim to address this gap by studying the identification of deception in Asian participants in realistic environments. In this study, we develop a Chinese Werewolf Deception Database (C2W2D), which consists of 168 video clips (84 deception videos and 84 honest videos). A total of 1,738,760 frames of facial data are recorded. Fifty-eight healthy undergraduates (24 men and 34 women) and 26 drug addicts (26 men) participated in a werewolf game. The development of C2W2D is based on a "werewolf" deception game paradigm in which the participants spontaneously tell the truth or a lie. Two synced high-speed cameras are used to capture the game process. To explore the differences between lying and truth-telling in the database, descriptive statistics (e.g., duration and quantity) and hypothesis tests (e.g., t-tests) are conducted on the action units (AUs) of facial expressions. The C2W2D contributes a relatively sizable number of deceptive and honest samples with high ecological validity. These samples can be used to study the individual differences and underlying mechanisms of lying and truth-telling in drug addicts and healthy people.
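The per-condition hypothesis tests on facial action units described above can be sketched with a Welch's t-test on synthetic data. Note this is an illustrative sketch only: the durations, sample sizes, and the choice of AU are invented placeholders, not values from C2W2D.

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(0)
# Hypothetical per-clip durations (seconds) of one facial action unit
# (e.g. AU12, lip-corner puller) in deceptive vs. honest clips.
au_deceptive = rng.normal(loc=1.2, scale=0.4, size=84)
au_honest = rng.normal(loc=0.9, scale=0.4, size=84)

# Welch's t-test (no equal-variance assumption) comparing the two
# conditions, analogous to the database's AU hypothesis tests.
t_stat, p_value = stats.ttest_ind(au_deceptive, au_honest, equal_var=False)
print(f"t = {t_stat:.2f}, p = {p_value:.4f}")
```

The same comparison would be repeated per AU, with a multiple-comparison correction across the AU set.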
Affiliation(s)
- Chaocao Yang
- Key Laboratory of Psychology of TCM and Brain Science, Jiangxi Administration of Traditional Chinese Medicine, Jiangxi University of Chinese Medicine, Nanchang, China
- School of Psychology, Shaanxi Normal University, Xi’an, China
- Shaanxi Provincial Key Laboratory of Behavior and Cognitive Neuroscience, Shaanxi Normal University, Xi’an, China
- Xuqun You
- School of Psychology, Shaanxi Normal University, Xi’an, China
- Shaanxi Provincial Key Laboratory of Behavior and Cognitive Neuroscience, Shaanxi Normal University, Xi’an, China
- Xudong Xie
- School of Psychology, Shaanxi Normal University, Xi’an, China
- Shaanxi Provincial Key Laboratory of Behavior and Cognitive Neuroscience, Shaanxi Normal University, Xi’an, China
- Yuanyuan Duan
- Key Laboratory of Psychology of TCM and Brain Science, Jiangxi Administration of Traditional Chinese Medicine, Jiangxi University of Chinese Medicine, Nanchang, China
- Buxue Wang
- Key Laboratory of Psychology of TCM and Brain Science, Jiangxi Administration of Traditional Chinese Medicine, Jiangxi University of Chinese Medicine, Nanchang, China
- Yuxi Zhou
- Key Laboratory of Psychology of TCM and Brain Science, Jiangxi Administration of Traditional Chinese Medicine, Jiangxi University of Chinese Medicine, Nanchang, China
- Hong Feng
- Key Laboratory of Psychology of TCM and Brain Science, Jiangxi Administration of Traditional Chinese Medicine, Jiangxi University of Chinese Medicine, Nanchang, China
- Wenjing Wang
- Key Laboratory of Psychology of TCM and Brain Science, Jiangxi Administration of Traditional Chinese Medicine, Jiangxi University of Chinese Medicine, Nanchang, China
- Ling Fan
- Key Laboratory of Psychology of TCM and Brain Science, Jiangxi Administration of Traditional Chinese Medicine, Jiangxi University of Chinese Medicine, Nanchang, China
- Genying Huang
- Key Laboratory of Psychology of TCM and Brain Science, Jiangxi Administration of Traditional Chinese Medicine, Jiangxi University of Chinese Medicine, Nanchang, China
- Xunbing Shen
- Key Laboratory of Psychology of TCM and Brain Science, Jiangxi Administration of Traditional Chinese Medicine, Jiangxi University of Chinese Medicine, Nanchang, China
- *Correspondence: Xunbing Shen

18
Gunderson CA, Baker A, Pence AD, ten Brinke L. Interpersonal Consequences of Deceptive Expressions of Sadness. PERSONALITY AND SOCIAL PSYCHOLOGY BULLETIN 2023; 49:97-109. [PMID: 34906011 PMCID: PMC9684658 DOI: 10.1177/01461672211059700] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2020] [Accepted: 10/26/2021] [Indexed: 11/16/2022]
Abstract
Emotional expressions evoke predictable responses from observers; displays of sadness are commonly met with sympathy and help from others. Accordingly, people may be motivated to feign emotions to elicit a desired response. In the absence of suspicion, we predicted that emotional and behavioral responses to genuine (vs. deceptive) expressers would be guided by empirically valid cues of sadness authenticity. Consistent with this hypothesis, untrained observers (total N = 1,300) reported less sympathy and offered less help to deceptive (vs. genuine) expressers of sadness. This effect was replicated using both posed, low-stakes, laboratory-created stimuli, and spontaneous, real, high-stakes emotional appeals to the public. Furthermore, lens models suggest that sympathy reactions were guided by difficult-to-fake facial actions associated with sadness. Results suggest that naive observers use empirically valid cues to deception to coordinate social interactions, providing novel evidence that people are sensitive to subtle cues to deception.
Affiliation(s)
- Alysha Baker
- Okanagan College, Kelowna, British Columbia, Canada

19
Real-time emotion detection by quantitative facial motion analysis. PLoS One 2023; 18:e0282730. [PMID: 36897921 PMCID: PMC10004542 DOI: 10.1371/journal.pone.0282730] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2022] [Accepted: 02/22/2023] [Indexed: 03/11/2023] Open
Abstract
BACKGROUND Research into mood and emotion has often depended on slow and subjective self-report, highlighting a need for rapid, accurate, and objective assessment tools. METHODS To address this gap, we developed a method using digital image speckle correlation (DISC), which tracks subtle changes in facial expressions invisible to the naked eye, to assess emotions in real-time. We presented ten participants with visual stimuli triggering neutral, happy, and sad emotions and quantified their associated facial responses via detailed DISC analysis. RESULTS We identified key alterations in facial expression (facial maps) that reliably signal changes in mood state across all individuals based on these data. Furthermore, principal component analysis of these facial maps identified regions associated with happy and sad emotions. Compared with commercial deep learning solutions that use individual images to detect facial expressions and classify emotions, such as Amazon Rekognition, our DISC-based classifiers utilize frame-to-frame changes. Our data show that DISC-based classifiers deliver substantially better predictions, and they are inherently free of racial or gender bias. LIMITATIONS Our sample size was limited, and participants were aware their faces were recorded on video. Despite this, our results remained consistent across individuals. CONCLUSIONS We demonstrate that DISC-based facial analysis can be used to reliably identify an individual's emotion and may provide a robust and economic modality for real-time, noninvasive clinical monitoring in the future.
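The frame-to-frame motion features plus principal component analysis described above can be illustrated on synthetic data. This is a sketch under stated assumptions: the grid size, class structure, and linear classifier are invented stand-ins, not the study's actual DISC pipeline.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(1)
# Hypothetical frame-to-frame facial motion maps: each sample is a flattened
# displacement field over a 20x20 facial grid (400 values), standing in for
# the DISC "facial maps" described above.
n_samples, n_features = 200, 400
X = rng.normal(size=(n_samples, n_features))
y = rng.integers(0, 2, size=n_samples)   # toy labels: 0 = sad, 1 = happy
# Inject a class-dependent motion pattern into one block of grid cells.
X[y == 1, :40] += 1.0

# PCA finds the dominant motion components across faces (the analogue of
# the paper's principal component analysis of facial maps).
pca = PCA(n_components=10)
X_pc = pca.fit_transform(X)

# A linear classifier on the components separates the two emotional states.
clf = LogisticRegression().fit(X_pc, y)
print(f"training accuracy: {clf.score(X_pc, y):.2f}")
```

Inspecting the loadings of the discriminative components would indicate which grid regions drive the separation, mirroring the region analysis in the abstract.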
20
Eldesouky L, Guo Y, Bentley K, English T. Decoding the Regulator: Accuracy and Bias in Emotion Regulation Judgments. AFFECTIVE SCIENCE 2022; 3:827-835. [PMID: 36519150 PMCID: PMC9743848 DOI: 10.1007/s42761-022-00144-3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Accepted: 07/25/2022] [Indexed: 12/14/2022]
Abstract
Accurately judging emotion regulation (ER) may help facilitate and maintain social relationships. We investigated the accuracy and bias of ER judgments and their social correlates in a two-part study with 136 married couples (ages 23-85 years). Couples completed trait measures of their own and their partner's suppression, reappraisal, and situation selection. On a separate day, they discussed a conflict, then rated their own and their partner's suppression during the discussion. Couples accurately judged their partner's trait-level use of all ER strategies, but they were most accurate for suppression. In contrast, they did not accurately judge state suppression; they showed a similarity bias, such that their own use of state suppression predicted judgments of their partner's suppression. Greater relationship satisfaction predicted positive biases at the trait level (e.g., overestimating reappraisal, underestimating suppression), but not at the state level. Relationship length did not predict ER accuracy or bias. Findings suggest ER is more detectable at the trait level than the state level and for strategies with more behavioral cues. Greater relationship satisfaction may signal positive perceptions of partners' ER patterns. Supplementary Information The online version contains supplementary material available at 10.1007/s42761-022-00144-3.
Affiliation(s)
- Lameese Eldesouky
- Department of Psychological & Brain Sciences, Washington University in St. Louis, St. Louis, USA
- Present Address: Department of Psychology, The American University in Cairo, New Cairo, Egypt
- Yue Guo
- Department of Psychological & Brain Sciences, Washington University in St. Louis, St. Louis, USA
- Present Address: Department of Psychology, University of Missouri, Columbia, USA
- Katlin Bentley
- Department of Psychological & Brain Sciences, Washington University in St. Louis, St. Louis, USA
- Tammy English
- Department of Psychological & Brain Sciences, Washington University in St. Louis, St. Louis, USA

21
Wu Q, Peng K, Xie Y, Lai Y, Liu X, Zhao Z. An ingroup disadvantage in recognizing micro-expressions. Front Psychol 2022; 13:1050068. [PMID: 36507018 PMCID: PMC9732534 DOI: 10.3389/fpsyg.2022.1050068] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/21/2022] [Accepted: 11/08/2022] [Indexed: 11/27/2022] Open
Abstract
Micro-expression is a fleeting facial expression of emotion that usually occurs in high-stakes situations and reveals the true emotion that a person tries to conceal. Due to its unique nature, recognizing micro-expressions has great application potential for fields like law enforcement, medical treatment, and national security. However, the psychological mechanism of micro-expression recognition is still poorly understood. In the present research, we sought to expand upon previous research to investigate whether the group membership of the expresser influences the recognition process of micro-expressions. By conducting two behavioral studies, we found that, contrary to the widespread ingroup advantage found in macro-expression recognition, there was a robust ingroup disadvantage in micro-expression recognition instead. Specifically, in Studies 1A and 1B, we found that participants were more accurate at recognizing the intense and subtle micro-expressions of their racial outgroups than those of their racial ingroups, and neither training experience nor the duration of micro-expressions moderated this ingroup disadvantage. In Studies 2A and 2B, we further found that mere social categorization alone was sufficient to elicit the ingroup disadvantage for the recognition of intense and subtle micro-expressions, and this effect was also unaffected by the duration of micro-expressions. These results suggest that individuals spontaneously employ the social category information of others to recognize micro-expressions, and that the ingroup disadvantage in micro-expression recognition stems partly from motivated differential processing of ingroup micro-expressions.
Affiliation(s)
- Qi Wu
- Department of Psychology, School of Educational Science, Hunan Normal University, Changsha, China
- Cognition and Human Behavior Key Laboratory of Hunan Province, Hunan Normal University, Changsha, China
- *Correspondence: Qi Wu
- Kunling Peng
- Department of Psychology, School of Educational Science, Hunan Normal University, Changsha, China
- Cognition and Human Behavior Key Laboratory of Hunan Province, Hunan Normal University, Changsha, China
- Yanni Xie
- Department of Psychology, School of Educational Science, Hunan Normal University, Changsha, China
- Cognition and Human Behavior Key Laboratory of Hunan Province, Hunan Normal University, Changsha, China
- Yeying Lai
- Department of Psychology, School of Educational Science, Hunan Normal University, Changsha, China
- Cognition and Human Behavior Key Laboratory of Hunan Province, Hunan Normal University, Changsha, China
- Xuanchen Liu
- Department of Psychology, School of Educational Science, Hunan Normal University, Changsha, China
- Cognition and Human Behavior Key Laboratory of Hunan Province, Hunan Normal University, Changsha, China
- Ziwei Zhao
- Department of Psychology, School of Educational Science, Hunan Normal University, Changsha, China
- Cognition and Human Behavior Key Laboratory of Hunan Province, Hunan Normal University, Changsha, China

22
Zeng X, Zhao X, Wang S, Qin J, Xie J, Zhong X, Chen J, Liu G. Affection of facial artifacts caused by micro-expressions on electroencephalography signals. Front Neurosci 2022; 16:1048199. [PMID: 36507351 PMCID: PMC9729706 DOI: 10.3389/fnins.2022.1048199] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2022] [Accepted: 11/03/2022] [Indexed: 11/25/2022] Open
Abstract
Macro-expressions are widely used in emotion recognition based on electroencephalography (EEG) because of their use as an intuitive external expression. Similarly, micro-expressions, as suppressed and brief emotional expressions, can also reflect a person's genuine emotional state. Therefore, researchers have started to focus on emotion recognition studies based on micro-expressions and EEG. However, compared to the effect of artifacts generated by macro-expressions on the EEG signal, it is not clear how artifacts generated by micro-expressions affect EEG signals. In this study, we investigated the effects of facial muscle activity caused by micro-expressions in positive emotions on EEG signals. We recorded the participants' facial expression images and EEG signals while they watched positive emotion-inducing videos. We then divided the face into 13 regions and extracted the main directional mean optical flow features as facial micro-expression image features, and the power spectral densities of the theta, alpha, beta, and gamma frequency bands as EEG features. Multiple linear regression and Granger causality test analyses were used to determine the extent of the effect of facial muscle activity artifacts on EEG signals. The results showed that the average percentage of EEG signals affected by muscle artifacts caused by micro-expressions was 11.5%, with the frontal and temporal regions being significantly affected. After removing the artifacts from the EEG signal, the average percentage of the affected EEG signal dropped to 3.7%. To the best of our knowledge, this is the first study to investigate the effect of facial artifacts caused by micro-expressions on EEG signals.
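The band-limited EEG features named above (theta, alpha, beta, and gamma power spectral densities) can be computed with Welch's method, as in this minimal sketch. The sampling rate and the synthetic signal are placeholders, and the band edges below are one common convention rather than the paper's exact definition.

```python
import numpy as np
from scipy.signal import welch

fs = 250                          # hypothetical EEG sampling rate (Hz)
rng = np.random.default_rng(2)
eeg = rng.normal(size=10 * fs)    # 10 s of synthetic single-channel EEG

# Welch power spectral density, then total power within each band.
freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)
bands = {"theta": (4, 8), "alpha": (8, 13), "beta": (13, 30), "gamma": (30, 45)}
df = freqs[1] - freqs[0]
band_power = {
    name: psd[(freqs >= lo) & (freqs < hi)].sum() * df
    for name, (lo, hi) in bands.items()
}
print(band_power)
```

Per-band powers computed this way per channel would then serve as the dependent variables in the regression and Granger causality analyses the abstract describes.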
Affiliation(s)
- Xiaomei Zeng
- School of Electronics and Information Engineering, Southwest University, Chongqing, China
- Institute of Affective Computing and Information Processing, Southwest University, Chongqing, China
- Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing, China
- Xingcong Zhao
- School of Electronics and Information Engineering, Southwest University, Chongqing, China
- Institute of Affective Computing and Information Processing, Southwest University, Chongqing, China
- Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing, China
- Shiyuan Wang
- School of Electronics and Information Engineering, Southwest University, Chongqing, China
- Institute of Affective Computing and Information Processing, Southwest University, Chongqing, China
- Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing, China
- Jian Qin
- School of Electronics and Information Engineering, Southwest University, Chongqing, China
- Institute of Affective Computing and Information Processing, Southwest University, Chongqing, China
- Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing, China
- Jialan Xie
- School of Electronics and Information Engineering, Southwest University, Chongqing, China
- Institute of Affective Computing and Information Processing, Southwest University, Chongqing, China
- Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing, China
- Xinyue Zhong
- School of Electronics and Information Engineering, Southwest University, Chongqing, China
- Institute of Affective Computing and Information Processing, Southwest University, Chongqing, China
- Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing, China
- Jiejia Chen
- School of Electronics and Information Engineering, Southwest University, Chongqing, China
- Institute of Affective Computing and Information Processing, Southwest University, Chongqing, China
- Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing, China
- Guangyuan Liu
- School of Electronics and Information Engineering, Southwest University, Chongqing, China
- Institute of Affective Computing and Information Processing, Southwest University, Chongqing, China
- Chongqing Key Laboratory of Nonlinear Circuits and Intelligent Information Processing, Southwest University, Chongqing, China
- *Correspondence: Guangyuan Liu

23
A fixed-point rotation-based feature selection method for micro-expression recognition. Pattern Recognit Lett 2022. [DOI: 10.1016/j.patrec.2022.10.021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022]
24
Zhao X, Liu Y, Chen T, Wang S, Chen J, Wang L, Liu G. Differences in brain activations between micro- and macro-expressions based on electroencephalography. Front Neurosci 2022; 16:903448. [PMID: 36172039 PMCID: PMC9511965 DOI: 10.3389/fnins.2022.903448] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2022] [Accepted: 08/23/2022] [Indexed: 12/04/2022] Open
Abstract
Micro-expressions can reflect an individual's subjective emotions and true mental state and are widely used in the fields of mental health, justice, law enforcement, intelligence, and security. However, current micro-expression recognition technology, based on images and expert assessment, has limitations such as limited application scenarios and time consumption. Therefore, to overcome these limitations, this study is the first to explore the brain mechanisms of micro-expressions, and their differences from macro-expressions, from a neuroscientific perspective. This can serve as a foundation for micro-expression recognition based on EEG signals. We designed a real-time supervision and emotional expression suppression (SEES) experimental paradigm to synchronously collect facial expressions and electroencephalograms. Electroencephalogram signals were analyzed at the scalp and source levels to determine the temporal and spatial neural patterns of micro- and macro-expressions. We found that micro-expressions were more strongly activated in the premotor cortex, supplementary motor cortex, and middle frontal gyrus in frontal regions under positive emotions than macro-expressions. Under negative emotions, micro-expressions were more weakly activated in the somatosensory cortex and corneal gyrus regions than macro-expressions. The activation of the right temporoparietal junction (rTPJ) was stronger in micro-expressions under positive than negative emotions. The reason for this difference may be that the pathways of facial control differ: the production of micro-expressions under positive emotions depends on control of the face, while micro-expressions under negative emotions depend more on the intensity of the emotion.
Affiliation(s)
- Xingcong Zhao
- School of Electronic and Information Engineering, Southwest University, Chongqing, China
- Key Laboratory of Cognition and Personality, Ministry of Education, Southwest University, Chongqing, China
- Ying Liu
- Key Laboratory of Cognition and Personality, Ministry of Education, Southwest University, Chongqing, China
- School of Music, Southwest University, Chongqing, China
- Tong Chen
- School of Electronic and Information Engineering, Southwest University, Chongqing, China
- Key Laboratory of Cognition and Personality, Ministry of Education, Southwest University, Chongqing, China
- Shiyuan Wang
- School of Electronic and Information Engineering, Southwest University, Chongqing, China
- Key Laboratory of Cognition and Personality, Ministry of Education, Southwest University, Chongqing, China
- Jiejia Chen
- School of Electronic and Information Engineering, Southwest University, Chongqing, China
- Key Laboratory of Cognition and Personality, Ministry of Education, Southwest University, Chongqing, China
- Linwei Wang
- Key Laboratory of Cognition and Personality, Ministry of Education, Southwest University, Chongqing, China
- Guangyuan Liu
- School of Electronic and Information Engineering, Southwest University, Chongqing, China
- Key Laboratory of Cognition and Personality, Ministry of Education, Southwest University, Chongqing, China

25
Ben X, Ren Y, Zhang J, Wang SJ, Kpalma K, Meng W, Liu YJ. Video-Based Facial Micro-Expression Analysis: A Survey of Datasets, Features and Algorithms. IEEE TRANSACTIONS ON PATTERN ANALYSIS AND MACHINE INTELLIGENCE 2022; 44:5826-5846. [PMID: 33739920 DOI: 10.1109/tpami.2021.3067464] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Unlike conventional facial expressions, micro-expressions are involuntary and transient facial expressions capable of revealing the genuine emotions that people attempt to hide. Therefore, they can provide important information in a broad range of applications such as lie detection and criminal detection. Since micro-expressions are transient and of low intensity, however, their detection and recognition are difficult and rely heavily on expert experience. Due to its intrinsic particularity and complexity, video-based micro-expression analysis is attractive but challenging, and has recently become an active area of research. Although there have been numerous developments in this area, thus far there has been no comprehensive survey that provides researchers with a systematic overview of these developments together with a unified evaluation. Accordingly, in this survey paper, we first highlight the key differences between macro- and micro-expressions, then use these differences to guide our survey of video-based micro-expression analysis in a cascaded structure, encompassing the neuropsychological basis, datasets, features, spotting algorithms, recognition algorithms, applications and evaluation of state-of-the-art approaches. For each aspect, the basic techniques, advanced developments and major challenges are addressed and discussed. Furthermore, after considering the limitations of existing micro-expression datasets, we present and release a new dataset, called the micro-and-macro expression warehouse (MMEW), containing more video samples and more labeled emotion types. We then perform a unified comparison of representative methods on CAS(ME)² for spotting, and on MMEW and SAMM for recognition. Finally, some potential future research directions are explored and outlined.
26
Classification of emotional states via transdermal cardiovascular spatiotemporal facial patterns using multispectral face videos. Sci Rep 2022; 12:11188. [PMID: 35778591 PMCID: PMC9249872 DOI: 10.1038/s41598-022-14808-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2021] [Accepted: 06/13/2022] [Indexed: 11/17/2022] Open
Abstract
We describe a new method for remote emotional state assessment using multispectral face videos, and present our findings: unique transdermal, cardiovascular and spatiotemporal facial patterns associated with different emotional states. The method does not rely on stereotypical facial expressions but utilizes different wavelength sensitivities (visible spectrum, near-infrared, and long-wave infrared) to gauge correlates of autonomic nervous system activity spatially and temporally distributed across the human face (e.g., blood flow, hemoglobin concentration, and temperature). We conducted an experiment where 110 participants viewed 150 short emotion-eliciting videos and reported their emotional experience, while three cameras recorded facial videos with multiple wavelengths. Spatiotemporal multispectral features from the multispectral videos were used as inputs to a machine learning model that was able to classify participants’ emotional state (i.e., amusement, disgust, fear, sexual arousal, or no emotion) with satisfactory results (average ROC AUC score of 0.75), while providing feature importance analysis that allows the examination of facial occurrences per emotional state. We discuss findings concerning the different spatiotemporal patterns associated with different emotional states as well as the different advantages of the current method over existing approaches to emotion detection.
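The classify-then-inspect workflow described above (a machine learning model over spatiotemporal multispectral features, evaluated by ROC AUC, with a feature-importance analysis) can be sketched on synthetic stand-in data. The model choice, feature dimensions, and effect size below are assumptions for illustration, not the study's actual pipeline.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(3)
# Hypothetical spatiotemporal multispectral features: one row per video,
# e.g. regional blood-flow / temperature statistics across wavelengths.
X = rng.normal(size=(300, 30))
y = rng.integers(0, 2, size=300)   # toy binary state: emotion vs. no emotion
X[y == 1, :5] += 1.0               # class-dependent physiological shift

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X_tr, y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te)[:, 1])
print(f"ROC AUC on held-out videos: {auc:.2f}")

# Per-feature importances support the kind of feature-importance analysis
# the study uses to examine facial occurrences per emotional state.
importances = clf.feature_importances_
```

For the study's five-way emotional-state task, the binary labels would be replaced by multiclass labels and the AUC averaged one-vs-rest.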
27
Wu Q, Xie Y, Liu X, Liu Y. Oxytocin Impairs the Recognition of Micro-Expressions of Surprise and Disgust. Front Psychol 2022; 13:947418. [PMID: 35846599 PMCID: PMC9277341 DOI: 10.3389/fpsyg.2022.947418] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2022] [Accepted: 06/13/2022] [Indexed: 11/13/2022] Open
Abstract
As fleeting facial expressions that reveal the emotion a person tries to conceal, micro-expressions have great application potential in fields such as security, national defense, and medical treatment. However, the physiological basis for the recognition of these facial expressions is poorly understood. In the present research, we used a double-blind, placebo-controlled, mixed-model experimental design to investigate the effects of oxytocin on the recognition of micro-expressions in three behavioral studies. Specifically, in Studies 1 and 2, participants performed a laboratory-based standardized micro-expression recognition task after self-administration of a single dose of intranasal oxytocin (40 IU) or placebo (containing all ingredients except the neuropeptide). In Study 3, we further examined the effects of oxytocin on the recognition of natural micro-expressions. The results showed that intranasal oxytocin decreased the recognition speed for standardized intense micro-expressions of surprise (Study 1) and decreased the recognition accuracy for standardized subtle micro-expressions of disgust (Study 2). Study 3 further revealed that intranasal oxytocin administration significantly reduced the recognition accuracy for natural micro-expressions of surprise and disgust. The present research is the first to investigate the effects of oxytocin on micro-expression recognition. It suggests that oxytocin mainly plays an inhibitory role in the recognition of micro-expressions and that there are fundamental differences in the neurophysiological basis for the recognition of micro-expressions and macro-expressions.
Affiliation(s)
- Qi Wu
- Department of Psychology, School of Educational Science, Hunan Normal University, Changsha, China
- Cognition and Human Behavior Key Laboratory of Hunan Province, Hunan Normal University, Changsha, China
- Correspondence: Qi Wu
- Yanni Xie
- Department of Psychology, School of Educational Science, Hunan Normal University, Changsha, China
- Cognition and Human Behavior Key Laboratory of Hunan Province, Hunan Normal University, Changsha, China
- Xuanchen Liu
- Department of Psychology, School of Educational Science, Hunan Normal University, Changsha, China
- Cognition and Human Behavior Key Laboratory of Hunan Province, Hunan Normal University, Changsha, China
- Yulong Liu
- School of Finance and Management, Changsha Social Work College, Changsha, China

28
Zhao S, Tang H, Liu S, Zhang Y, Wang H, Xu T, Chen E, Guan C. ME-PLAN: A deep prototypical learning with local attention network for dynamic micro-expression recognition. Neural Netw 2022; 153:427-443. [DOI: 10.1016/j.neunet.2022.06.024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2021] [Revised: 05/09/2022] [Accepted: 06/20/2022] [Indexed: 10/17/2022]
29
Collins HK. When Listening is Spoken. Curr Opin Psychol 2022; 47:101402. [DOI: 10.1016/j.copsyc.2022.101402] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2022] [Revised: 06/03/2022] [Accepted: 06/16/2022] [Indexed: 11/17/2022]
30
Wang Y, Zhang L, Xia P, Wang P, Chen X, Du L, Fang Z, Du M. EEG-Based Emotion Recognition Using a 2D CNN with Different Kernels. Bioengineering (Basel) 2022; 9:bioengineering9060231. [PMID: 35735474 PMCID: PMC9219701 DOI: 10.3390/bioengineering9060231] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/02/2022] [Revised: 05/21/2022] [Accepted: 05/23/2022] [Indexed: 11/16/2022] Open
Abstract
Emotion recognition is receiving significant attention in research on health care and human-computer interaction (HCI). Because of their high correlation with emotion and their robustness to deceptive external expressions such as voices and faces, electroencephalogram (EEG) based emotion recognition methods have been widely accepted and applied. Recently, great improvements have been made in the development of machine learning for EEG-based emotion detection. However, previous studies still have some major shortcomings. First, traditional machine learning methods require manual feature extraction, which is time-consuming and relies heavily on human experts. Second, to improve model accuracy, many researchers used user-dependent models that lack generalization and universality. Moreover, there is still room for improvement in the recognition accuracies reported in most studies. To overcome these shortcomings, this article proposes a novel deep neural network for EEG-based emotion classification. The proposed 2D CNN uses two convolutional kernels of different sizes to extract emotion-related features along both the time direction and the spatial direction. To verify the feasibility of the proposed model, experiments were conducted on the public emotion dataset DEAP. The results show accuracies of up to 99.99% and 99.98% for arousal and valence binary classification, respectively, which are encouraging for research and applications in the emotion recognition field.
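The core idea, two differently shaped kernels, one sliding along the time axis and one across electrodes, can be illustrated with a plain NumPy convolution. This is only a sketch of the kernel geometry; the kernel sizes, input shape, and averaging kernels below are assumptions, and the paper's actual model is a CNN trained end to end:

```python
import numpy as np

def conv2d_valid(x, k):
    """Plain 'valid'-mode 2D cross-correlation; a stand-in for one CNN layer."""
    kh, kw = k.shape
    h, w = x.shape
    out = np.empty((h - kh + 1, w - kw + 1))
    for i in range(out.shape[0]):
        for j in range(out.shape[1]):
            out[i, j] = np.sum(x[i:i + kh, j:j + kw] * k)
    return out

eeg = np.random.randn(32, 128)           # hypothetical: 32 electrodes x 128 time samples
temporal_kernel = np.ones((1, 5)) / 5.0  # 1x5 kernel: extracts features along time
spatial_kernel = np.ones((5, 1)) / 5.0   # 5x1 kernel: extracts features across electrodes
time_features = conv2d_valid(eeg, temporal_kernel)   # shape (32, 124)
space_features = conv2d_valid(eeg, spatial_kernel)   # shape (28, 128)
```

The two output maps capture complementary structure: the 1x5 kernel responds to temporal dynamics within each channel, while the 5x1 kernel responds to spatial patterns across neighboring electrodes.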
Affiliation(s)
- Yuqi Wang
- Institute of Microelectronics of Chinese Academy of Sciences, Beijing 100029, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Lijun Zhang
- Institute of Microelectronics of Chinese Academy of Sciences, Beijing 100029, China
- University of Chinese Academy of Sciences, Beijing 100049, China
- Pan Xia
- University of Chinese Academy of Sciences, Beijing 100049, China
- Aerospace Information Research Institute, Chinese Academy of Sciences (AIRCAS), Beijing 100190, China
- Peng Wang
- University of Chinese Academy of Sciences, Beijing 100049, China
- Aerospace Information Research Institute, Chinese Academy of Sciences (AIRCAS), Beijing 100190, China
- Xianxiang Chen
- University of Chinese Academy of Sciences, Beijing 100049, China
- Aerospace Information Research Institute, Chinese Academy of Sciences (AIRCAS), Beijing 100190, China
- Lidong Du
- University of Chinese Academy of Sciences, Beijing 100049, China
- Aerospace Information Research Institute, Chinese Academy of Sciences (AIRCAS), Beijing 100190, China
- Zhen Fang
- University of Chinese Academy of Sciences, Beijing 100049, China
- Aerospace Information Research Institute, Chinese Academy of Sciences (AIRCAS), Beijing 100190, China
- Personalized Management of Chronic Respiratory Disease, Chinese Academy of Medical Sciences, Beijing 100190, China
- Correspondence: (Z.F.); (M.D.)
- Mingyan Du
- Beijing Luhe Hospital, Capital Medical University, Beijing 101199, China
- Correspondence: (Z.F.); (M.D.)

31
Learning two groups of discriminative features for micro-expression recognition. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2021.12.088] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
32
Ji E, Son LK, Kim MS. Emotion Perception Rules Abide by Cultural Display Rules. Exp Psychol 2022; 69:83-103. [PMID: 35929473 DOI: 10.1027/1618-3169/a000550] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/23/2022]
Abstract
The current study compared emotion perception in two cultures where display rules for emotion expression deviate. In Experiment 1, participants from America and Korea played a repeated prisoner's dilemma game with a counterpart, who was, in actuality, a programmed defector. Emotion expressions were exchanged via emoticons at the end of every round. After winning more points by defecting, the counterpart sent either a matching emoticon (a joyful face) or a mismatching emoticon (a regretful face). The results showed that Americans in the matching condition were more likely to defect, or to punish, compared to those in the mismatching condition, suggesting that more weight was given to their counterpart's joyful expression. This difference was smaller for Koreans, suggesting a higher disregard for the outward expression. In a second, supplementary experiment, we found that Korean participants were more likely to cooperate in the mismatching or regretful condition, when they thought their counterpart was a Westerner. Overall, our data suggest that emotion perception rules abide by the display rules of one's culture but are also influenced by the counterpart's culture.
Affiliation(s)
- Eunhee Ji
- Biomedical Institute for Convergence at SKKU (BICS), Sungkyunkwan University, South Korea
- Department of Psychology, Yonsei University, South Korea
- Lisa K Son
- Department of Psychology, Barnard College, New York, NY, USA
- Min-Shik Kim
- Department of Psychology, Yonsei University, South Korea

33
Monaro M, Maldera S, Scarpazza C, Sartori G, Navarin N. Detecting deception through facial expressions in a dataset of videotaped interviews: A comparison between human judges and machine learning models. COMPUTERS IN HUMAN BEHAVIOR 2022. [DOI: 10.1016/j.chb.2021.107063] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022]
34
Guerouaou N, Vaiva G, Aucouturier JJ. The shallow of your smile: the ethics of expressive vocal deep-fakes. Philos Trans R Soc Lond B Biol Sci 2022; 377:20210083. [PMID: 34775820 PMCID: PMC8591385 DOI: 10.1098/rstb.2021.0083] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/31/2021] [Accepted: 07/28/2021] [Indexed: 11/12/2022] Open
Abstract
Rapid technological advances in artificial intelligence are creating opportunities for real-time algorithmic modulations of a person's facial and vocal expressions, or 'deep-fakes'. These developments raise unprecedented societal and ethical questions which, despite much recent public awareness, are still poorly understood from the point of view of moral psychology. We report here on an experimental ethics study conducted on a sample of N = 303 participants (predominantly young, western and educated), who evaluated the acceptability of vignettes describing potential applications of expressive voice transformation technology. We found that vocal deep-fakes were generally well accepted in the population, notably in a therapeutic context and for emotions judged otherwise difficult to control, and surprisingly, even if the user lies to their interlocutors about using them. Unlike other emerging technologies like autonomous vehicles, there was no evidence of social dilemma in which one would, for example, accept for others what they resent for themselves. The only real obstacle to the massive deployment of vocal deep-fakes appears to be situations where they are applied to a speaker without their knowing, but even the acceptability of such situations was modulated by individual differences in moral values and attitude towards science fiction. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part II)'.
Affiliation(s)
- Nadia Guerouaou
- Science and Technology of Music and Sound, IRCAM/CNRS/Sorbonne Université, Paris, France
- Lille Neuroscience and Cognition Center (LiNC), Team PSY, INSERM U-1172/CHRU Lille, France
- Guillaume Vaiva
- Lille Neuroscience and Cognition Center (LiNC), Team PSY, INSERM U-1172/CHRU Lille, France

35
Yamamoto K, Kimura M, Osaka M. Sorry, Not Sorry: Effects of Different Types of Apologies and Self-Monitoring on Non-verbal Behaviors. Front Psychol 2021; 12:689615. [PMID: 34512447 PMCID: PMC8428520 DOI: 10.3389/fpsyg.2021.689615] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/01/2021] [Accepted: 08/03/2021] [Indexed: 11/25/2022] Open
Abstract
This study examines the effects of different types of apologies, and of individual differences in self-monitoring, on the non-verbal behaviors of a server apologizing to a customer. Apologies can be divided into sincere apologies, which reflect genuine recognition of fault, and instrumental apologies, which are made to achieve a personal goal such as avoiding punishment or rejection by others. Two facets of self-monitoring (public-performing and other-directedness) were also examined. Fifty-three female undergraduate students participated in the experiment and were randomly assigned to either a sincere-apology condition or an instrumental-apology condition. They watched a film clip of an interaction between a customer and a server and then role-played how they would apologize if they were the server. Participants' non-verbal behavior during the role-play was videotaped. The results showed an interaction between the apology condition and self-monitoring on non-verbal behaviors. When public-performing was low, gaze avoidance was more likely to occur with a sincere apology than with an instrumental apology; there was no difference when public-performing was high. Facial displays of apology were more apparent in the instrumental apology than in the sincere apology, and this tendency became more conspicuous as public-performing increased. Our results indicate that the higher the public-performing, the more participants tried to convey the feeling of apology by combining a direct gaze and facial displays in an instrumental apology. On the other hand, the results suggest that lower levels of public-performing elicited less immediacy in offering a sincere apology. Further studies are needed to determine whether these results apply to other conflict resolution situations.
Affiliation(s)
- Kyoko Yamamoto
- Department of Psychology, Kobe Gakuin University, Kobe, Japan
- Masanori Kimura
- Department of Psychological and Behavioral Sciences, Kobe College, Nishinomiya, Japan
- Miki Osaka
- Department of Psychological and Behavioral Sciences, Kobe College, Nishinomiya, Japan

36
Döllinger L, Laukka P, Högman LB, Bänziger T, Makower I, Fischer H, Hau S. Training Emotion Recognition Accuracy: Results for Multimodal Expressions and Facial Micro Expressions. Front Psychol 2021; 12:708867. [PMID: 34475841 PMCID: PMC8406528 DOI: 10.3389/fpsyg.2021.708867] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2021] [Accepted: 07/09/2021] [Indexed: 12/22/2022] Open
Abstract
Nonverbal emotion recognition accuracy (ERA) is a central feature of successful communication and interaction, and is of importance for many professions. We developed and evaluated two ERA training programs-one focusing on dynamic multimodal expressions (audio, video, audio-video) and one focusing on facial micro expressions. Sixty-seven subjects were randomized to one of two experimental groups (multimodal, micro expression) or an active control group (emotional working memory task). Participants trained once weekly with a brief computerized training program for three consecutive weeks. Pre-post outcome measures consisted of a multimodal ERA task, a micro expression recognition task, and a task about patients' emotional cues. Post measurement took place approximately a week after the last training session. Non-parametric mixed analyses of variance using the Aligned Rank Transform were used to evaluate the effectiveness of the training programs. Results showed that multimodal training was significantly more effective in improving multimodal ERA compared to micro expression training or the control training; and the micro expression training was significantly more effective in improving micro expression ERA compared to the other two training conditions. Both pre-post effects can be interpreted as large. No group differences were found for the outcome measure about recognizing patients' emotion cues. There were no transfer effects of the training programs, meaning that participants only improved significantly for the specific facet of ERA that they had trained on. Further, low baseline ERA was associated with larger ERA improvements. Results are discussed with regard to methodological and conceptual aspects, and practical implications and future directions are explored.
Affiliation(s)
- Lillian Döllinger
- Department of Psychology, Faculty of Social Sciences, Stockholm University, Stockholm, Sweden
- Petri Laukka
- Department of Psychology, Faculty of Social Sciences, Stockholm University, Stockholm, Sweden
- Lennart Björn Högman
- Department of Psychology, Faculty of Social Sciences, Stockholm University, Stockholm, Sweden
- Tanja Bänziger
- Department of Psychology and Social Work, Mid Sweden University, Sundsvall, Sweden
- Håkan Fischer
- Department of Psychology, Faculty of Social Sciences, Stockholm University, Stockholm, Sweden
- Stephan Hau
- Department of Psychology, Faculty of Social Sciences, Stockholm University, Stockholm, Sweden

37
Bremhorst A, Mills DS, Würbel H, Riemer S. Evaluating the accuracy of facial expressions as emotion indicators across contexts in dogs. Anim Cogn 2021; 25:121-136. [PMID: 34338869 PMCID: PMC8904359 DOI: 10.1007/s10071-021-01532-1] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2021] [Revised: 07/01/2021] [Accepted: 07/07/2021] [Indexed: 11/25/2022]
Abstract
Facial expressions potentially serve as indicators of animal emotions if they are consistently present across situations that (likely) elicit the same emotional state. In a previous study, we used the Dog Facial Action Coding System (DogFACS) to identify facial expressions in dogs associated with conditions presumably eliciting positive anticipation (expectation of a food reward) and frustration (prevention of access to the food). Our first aim here was to identify facial expressions of positive anticipation and frustration in dogs that are context-independent (and thus have potential as emotion indicators) and to distinguish them from expressions that are reward-specific (and thus might relate to a motivational state associated with the expected reward). Therefore, we tested a new sample of 28 dogs with a similar set-up designed to induce positive anticipation (positive condition) and frustration (negative condition) in two reward contexts: food and toys. The previous results were replicated: Ears adductor was associated with the positive condition and Ears flattener, Blink, Lips part, Jaw drop, and Nose lick with the negative condition. Four additional facial actions were also more common in the negative condition. All actions except the Upper lip raiser were independent of reward type. Our second aim was to assess basic measures of diagnostic accuracy for the potential emotion indicators. Ears flattener and Ears downward had relatively high sensitivity but low specificity, whereas the opposite was the case for the other negative correlates. Ears adductor had excellent specificity but low sensitivity. If the identified facial expressions were to be used individually as diagnostic indicators, none would allow consistent correct classifications of the associated emotion. Diagnostic accuracy measures are an essential feature for validity assessments of potential indicators of animal emotion.
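The diagnostic accuracy measures discussed here, sensitivity and specificity, come straight from a 2x2 confusion table over condition episodes. A minimal sketch with made-up counts (not the study's data) showing the "high sensitivity, low specificity" pattern reported for some negative correlates:

```python
def diagnostic_accuracy(tp, fn, fp, tn):
    """Sensitivity: share of target-emotion episodes showing the facial action.
    Specificity: share of other episodes where the action is absent."""
    sensitivity = tp / (tp + fn)
    specificity = tn / (tn + fp)
    return sensitivity, specificity

# Hypothetical counts for one facial action across 40 episodes
sens, spec = diagnostic_accuracy(tp=18, fn=2, fp=12, tn=8)
# sens = 0.9 (the action almost always appears in the target condition),
# spec = 0.4 (but it also appears often elsewhere), so on its own it
# cannot consistently classify the associated emotion
```

This is why the abstract concludes that no single facial action suffices as a diagnostic indicator: an indicator needs both values to be high.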
Affiliation(s)
- A Bremhorst
- Division of Animal Welfare, DCR-VPHI, Vetsuisse Faculty, University of Bern, 3012 Bern, Switzerland
- School of Life Sciences, University of Lincoln, Lincoln, LN6 7DL, UK
- Graduate School for Cellular and Biomedical Sciences (GCB), University of Bern, 3012 Bern, Switzerland
- D S Mills
- School of Life Sciences, University of Lincoln, Lincoln, LN6 7DL, UK
- H Würbel
- Division of Animal Welfare, DCR-VPHI, Vetsuisse Faculty, University of Bern, 3012 Bern, Switzerland
- S Riemer
- Division of Animal Welfare, DCR-VPHI, Vetsuisse Faculty, University of Bern, 3012 Bern, Switzerland

38
A comparative study on movement feature in different directions for micro-expression recognition. Neurocomputing 2021. [DOI: 10.1016/j.neucom.2021.03.063] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/20/2022]
39

40
Park S, Lee SW, Whang M. The Analysis of Emotion Authenticity Based on Facial Micromovements. SENSORS 2021; 21:s21134616. [PMID: 34283146 PMCID: PMC8271774 DOI: 10.3390/s21134616] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 04/15/2021] [Revised: 06/11/2021] [Accepted: 07/02/2021] [Indexed: 11/16/2022]
Abstract
People tend to display fake expressions to conceal their true feelings. False expressions are observable by facial micromovements that occur for less than a second. Systems designed to recognize facial expressions (e.g., social robots, recognition systems for the blind, monitoring systems for drivers) may better understand the user's intent by identifying the authenticity of the expression. The present study investigated the characteristics of real and fake facial expressions of representative emotions (happiness, contentment, anger, and sadness) in a two-dimensional emotion model. Participants viewed a series of visual stimuli designed to induce real or fake emotions and were signaled to produce a facial expression at a set time. From the participant's expression data, feature variables (i.e., the degree and variance of movement, and vibration level) involving the facial micromovements at the onset of the expression were analyzed. The results indicated significant differences in the feature variables between the real and fake expression conditions. The differences varied according to facial regions as a function of emotions. This study provides appraisal criteria for identifying the authenticity of facial expressions that are applicable to future research and the design of emotion recognition systems.
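The feature variables described, degree and variance of movement at expression onset, can be computed from landmark trajectories. A sketch under stated assumptions: the landmark array layout, and the use of frame-to-frame displacement as the movement measure, are mine, not the paper's exact definitions:

```python
import numpy as np

def micromovement_features(landmarks):
    """landmarks: array of shape (T, N, 2), positions of N facial landmarks
    over T frames. Returns per-landmark degree (mean frame-to-frame
    displacement) and variance of movement, two of the feature types the
    study analyzes at expression onset."""
    steps = np.diff(landmarks.astype(float), axis=0)   # (T-1, N, 2) motion vectors
    disp = np.linalg.norm(steps, axis=2)               # (T-1, N) displacement magnitudes
    return disp.mean(axis=0), disp.var(axis=0)

# One landmark gliding smoothly by (3, 4) pixels per frame:
traj = np.array([[[0, 0]], [[3, 4]], [[6, 8]], [[9, 12]]], dtype=float)
degree, variance = micromovement_features(traj)        # degree 5.0, variance 0.0
```

A real micro-movement would show the same degree but nonzero variance, which is exactly the kind of contrast the study uses to separate real from fake expressions.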
Affiliation(s)
- Sung Park
- School of Design, Savannah College of Art and Design, Savannah, GA 31401, USA
- Correspondence:
- Seong Won Lee
- Department of Human Centered Artificial Intelligence, Sangmyung University, Jongno-gu, Seoul 03016, Korea
- Mincheol Whang
- Department of Human Centered Artificial Intelligence, Sangmyung University, Jongno-gu, Seoul 03016, Korea

41
Shen X, Fan G, Niu C, Chen Z. Catching a Liar Through Facial Expression of Fear. Front Psychol 2021; 12:675097. [PMID: 34168597 PMCID: PMC8217652 DOI: 10.3389/fpsyg.2021.675097] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Accepted: 05/17/2021] [Indexed: 11/13/2022] Open
Abstract
High stakes can be stressful whether one is telling the truth or lying. However, liars can feel extra fear from worrying to be discovered than truth-tellers, and according to the "leakage theory," the fear is almost impossible to be repressed. Therefore, we assumed that analyzing the facial expression of fear could reveal deceits. Detecting and analyzing the subtle leaked fear facial expressions is a challenging task for laypeople. It is, however, a relatively easy job for computer vision and machine learning. To test the hypothesis, we analyzed video clips from a game show "The moment of truth" by using OpenFace (for outputting the Action Units (AUs) of fear and face landmarks) and WEKA (for classifying the video clips in which the players were lying or telling the truth). The results showed that some algorithms achieved an accuracy of >80% merely using AUs of fear. Besides, the total duration of AU20 of fear was found to be shorter under the lying condition than that from the truth-telling condition. Further analysis found that the reason for a shorter duration in the lying condition was that the time window from peak to offset of AU20 under the lying condition was less than that under the truth-telling condition. The results also showed that facial movements around the eyes were more asymmetrical when people are telling lies. All the results suggested that facial clues can be used to detect deception, and fear could be a cue for distinguishing liars from truth-tellers.
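The duration analysis here builds on OpenFace's per-frame AU presence output (e.g. the AU20_c column). A sketch of extracting active spans and total duration from a timestamp/presence sequence; the CSV parsing step and the toy frames below are assumptions:

```python
def au_spans(frames):
    """frames: list of (timestamp_sec, present) pairs for one action unit,
    e.g. parsed from OpenFace's per-frame AU20_c column.
    Returns [(onset, offset), ...] spans where the AU is active."""
    spans, onset = [], None
    for t, present in frames:
        if present and onset is None:
            onset = t                      # AU switched on
        elif not present and onset is not None:
            spans.append((onset, t))       # AU switched off
            onset = None
    if onset is not None:                  # still active at the last frame
        spans.append((onset, frames[-1][0]))
    return spans

def total_duration(spans):
    return sum(off - on for on, off in spans)

# Toy sequence sampled at 10 fps: two bursts of AU20 activity
frames = [(0.0, 0), (0.1, 1), (0.2, 1), (0.3, 0), (0.4, 1), (0.5, 0)]
spans = au_spans(frames)                   # [(0.1, 0.3), (0.4, 0.5)]
```

Comparing `total_duration` (and, with AU intensity values, the peak-to-offset window within each span) between lying and truth-telling clips is the kind of comparison the study reports for AU20.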
Affiliation(s)
- Xunbing Shen
- Department of Psychology, Jiangxi University of Chinese Medicine, Nanchang, China
- Gaojie Fan
- Beck Visual Cognition Laboratory, Louisiana State University, Baton Rouge, LA, United States
- Caoyuan Niu
- Department of Psychology, Jiangxi University of Chinese Medicine, Nanchang, China
- Zhencai Chen
- Department of Psychology, Jiangxi University of Chinese Medicine, Nanchang, China

42
Stewart PA, Svetieva E. Micro-Expressions of Fear During the 2016 Presidential Campaign Trail: Their Influence on Trait Perceptions of Donald Trump. Front Psychol 2021; 12:608483. [PMID: 34149502 PMCID: PMC8206780 DOI: 10.3389/fpsyg.2021.608483] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2020] [Accepted: 03/24/2021] [Indexed: 11/21/2022] Open
Abstract
The 2016 United States presidential election was exceptional for many reasons; most notably the extreme division between supporters of Donald Trump and Hillary Clinton. In an election that turned more upon the character traits of the candidates than their policy positions, there is reason to believe that the non-verbal performances of the candidates influenced attitudes toward the candidates. Two studies, before Election Day, experimentally tested the influence of Trump’s micro-expressions of fear during his Republican National Convention nomination acceptance speech on how viewers evaluated his key leadership traits of competence and trustworthiness. Results from Study 1, conducted 3 weeks prior to the election, indicated generally positive effects of Trump’s fear micro-expressions on his trait evaluations, particularly when viewers were first exposed to his opponent, Clinton. In contrast, Study 2, conducted 4 days before Election Day, suggests participants had at that point largely established their trait perceptions and were unaffected by the micro-expressions.
Affiliation(s)
- Patrick A Stewart
- Department of Political Science, University of Arkansas, Fayetteville, AR, United States
- Elena Svetieva
- Department of Communication, University of Colorado Colorado Springs, Colorado Springs, CO, United States

43
Hodges SD, Kezer M. It Is Hard to Read Minds without Words: Cues to Use to Achieve Empathic Accuracy. J Intell 2021; 9:jintelligence9020027. [PMID: 34067669 PMCID: PMC8163163 DOI: 10.3390/jintelligence9020027] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2020] [Revised: 02/09/2021] [Accepted: 05/04/2021] [Indexed: 11/25/2022] Open
Abstract
When faced with the task of trying to “read” a stranger’s thoughts, what cues can perceivers use? We explore two predictors of empathic accuracy (the ability to accurately infer another person’s thoughts): use of stereotypes about the target’s group, and use of the target’s own words. A sample of 326 White American undergraduate students were asked to infer the dynamic thoughts of Middle Eastern male targets, using Ickes’ (Ickes et al. 1990) empathic accuracy paradigm. We predicted use of stereotypes would reduce empathic accuracy because the stereotypes would be negative and inaccurate. However, more stereotypical inferences about the target’s thoughts actually predicted greater empathic accuracy, a pattern in line with past work on the role of stereotypes in empathic accuracy (Lewis et al. 2012), perhaps because the stereotypes of Middle Easterners (collected from a sample of 60 participants drawn from the same population) were less negative than expected. In addition, perceivers who inferred that the targets were thinking thoughts that more closely matched what the target was saying out loud were more empathically accurate. Despite the fact that words can be used intentionally to obscure what a target is thinking, they appear to be a useful cue to empathic accuracy, even in tricky contexts that cross cultural lines.
44
Emotional acknowledgment: How verbalizing others’ emotions fosters interpersonal trust. ORGANIZATIONAL BEHAVIOR AND HUMAN DECISION PROCESSES 2021. [DOI: 10.1016/j.obhdp.2021.02.002] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/09/2023]
45
A Fast Preprocessing Method for Micro-Expression Spotting via Perceptual Detection of Frozen Frames. J Imaging 2021; 7:jimaging7040068. [PMID: 34460518 PMCID: PMC8321339 DOI: 10.3390/jimaging7040068] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2021] [Revised: 03/23/2021] [Accepted: 03/30/2021] [Indexed: 11/17/2022] Open
Abstract
This paper presents a preliminary study concerning a fast preprocessing method for facial microexpression (ME) spotting in video sequences. The rationale is to detect frames containing frozen expressions as a quick warning for the presence of MEs. In fact, those frames can either precede or follow (or both) MEs according to ME type and the subject's reaction. To that end, inspired by the Adelson-Bergen motion energy model and the instinctive nature of the preattentive vision, global visual perception-based features were employed for the detection of frozen frames. Preliminary results achieved on both controlled and uncontrolled videos confirmed that the proposed method is able to correctly detect frozen frames and those revealing the presence of nearby MEs-independently of ME kind and facial region. This property can then contribute to speeding up and simplifying the ME spotting process, especially during long video acquisitions.
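The underlying operation, flagging frames whose motion energy drops near zero, can be sketched with a simple frame-difference statistic. This is a deliberate simplification: the paper's features are perceptual, inspired by the Adelson-Bergen motion energy model, whereas the threshold and mean-absolute-difference measure below are assumptions:

```python
import numpy as np

def frozen_frames(video, thresh=1.0):
    """video: array of shape (T, H, W), grayscale frames. A frame is flagged
    'frozen' when its motion energy (mean absolute difference from the
    previous frame) falls below thresh."""
    energy = np.abs(np.diff(video.astype(float), axis=0)).mean(axis=(1, 2))
    return [i + 1 for i, e in enumerate(energy) if e < thresh]

# Toy clip: frames 1 and 3 repeat their predecessor exactly (frozen),
# frames 2 and 4 change the whole image (motion)
clip = np.stack([np.zeros((4, 4)), np.zeros((4, 4)),
                 np.full((4, 4), 10.0), np.full((4, 4), 10.0),
                 np.zeros((4, 4))])
```

Runs of such low-energy frames before or after a motion burst are the "quick warning" the method uses to narrow down where micro-expression spotting should look.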
|
46
|
Oh G, Ryu J, Jeong E, Yang JH, Hwang S, Lee S, Lim S. DRER: Deep Learning-Based Driver's Real Emotion Recognizer. SENSORS 2021; 21:s21062166. [PMID: 33808922 PMCID: PMC8003797 DOI: 10.3390/s21062166] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 02/01/2021] [Revised: 03/15/2021] [Accepted: 03/16/2021] [Indexed: 12/18/2022]
Abstract
In intelligent vehicles, monitoring the driver’s condition is essential, and recognizing the driver’s emotional state is among the most challenging and important tasks. Most previous studies relied on facial expression recognition to monitor the driver’s emotional state. While driving, however, many factors prevent drivers from revealing their emotions on their faces. To address this problem, we propose the deep learning-based driver’s real emotion recognizer (DRER), an algorithm that recognizes real emotions that cannot be fully identified from facial expressions alone. The proposed algorithm comprises two models: (i) a facial expression recognition model, which follows a state-of-the-art convolutional neural network structure; and (ii) a sensor fusion emotion recognition model, which fuses the recognized facial expression state with electrodermal activity, a bio-physiological signal representing the electrical characteristics of the skin, to recognize the driver’s real emotional state. We categorized the driver’s emotions and conducted human-in-the-loop experiments to acquire the data. Experimental results show that the proposed fusion approach achieves a 114% increase in accuracy compared with using only facial expressions and a 146% increase compared with using only electrodermal activity. In conclusion, our proposed method achieves 86.8% recognition accuracy for the driver’s induced emotion while driving.
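The two-model design can be illustrated with a minimal late-fusion sketch, in which per-class probabilities from a facial-expression model and an EDA-based model are combined by a weighted average (the weight, class count, and probabilities below are illustrative assumptions, not the DRER architecture):

```python
import numpy as np

def late_fusion(face_probs, eda_probs, w_face=0.6):
    """Fuse per-class probabilities from the facial-expression model
    and an EDA-based model via a weighted average, then return the
    index of the winning emotion class (illustrative weighting)."""
    fused = w_face * np.asarray(face_probs) + (1 - w_face) * np.asarray(eda_probs)
    return int(np.argmax(fused))

face = [0.5, 0.3, 0.2]  # facial model leans toward class 0
eda = [0.1, 0.2, 0.7]   # EDA model leans toward class 2
print(late_fusion(face, eda))  # -> 2
```

Here the EDA signal overrides the facial reading, mirroring the paper's motivation: the face alone may mask the driver's real emotion, while the physiological channel does not.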
Affiliation(s)
- Geesung Oh
- Graduate School of Automotive Engineering, Kookmin University, 77, Jeongneung-ro, Seongbuk-gu, Seoul 02707, Korea; (G.O.); (J.R.); (E.J.)
- Junghwan Ryu
- Graduate School of Automotive Engineering, Kookmin University, 77, Jeongneung-ro, Seongbuk-gu, Seoul 02707, Korea; (G.O.); (J.R.); (E.J.)
- Euiseok Jeong
- Graduate School of Automotive Engineering, Kookmin University, 77, Jeongneung-ro, Seongbuk-gu, Seoul 02707, Korea; (G.O.); (J.R.); (E.J.)
- Ji Hyun Yang
- Department of Automobile and IT Convergence, Kookmin University, 77, Jeongneung-ro, Seongbuk-gu, Seoul 02707, Korea;
- Sungwook Hwang
- Chassis System Control Research Lab, Hyundai Motor Group, Hwaseong 18280, Korea; (S.H.); (S.L.)
- Sangho Lee
- Chassis System Control Research Lab, Hyundai Motor Group, Hwaseong 18280, Korea; (S.H.); (S.L.)
- Sejoon Lim
- Department of Automobile and IT Convergence, Kookmin University, 77, Jeongneung-ro, Seongbuk-gu, Seoul 02707, Korea;
- Correspondence: ; Tel.: +82-2-910-5469
|
47
|
Geiger M, Bärwaldt R, Wilhelm O. The Good, the Bad, and the Clever: Faking Ability as a Socio-Emotional Ability? J Intell 2021; 9:13. [PMID: 33806368 PMCID: PMC8006246 DOI: 10.3390/jintelligence9010013] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/02/2020] [Revised: 02/15/2021] [Accepted: 02/23/2021] [Indexed: 11/16/2022] Open
Abstract
Socio-emotional abilities have been proposed as an extension to models of intelligence, but earlier measurement approaches have either not fulfilled criteria of ability measurement or have covered only predominantly receptive abilities. We argue that faking ability, the ability to adjust responses on questionnaires to present oneself in a desired manner, is a socio-emotional ability that can broaden our understanding of these abilities and of intelligence in general. To test this proposal, we developed new instruments to measure the ability to fake bad (malingering) and administered them jointly with established tests of faking good ability in a general sample of n = 134. Participants also completed multiple tests of emotion perception along with tests of emotion expression posing, pain expression regulation, and working memory capacity. We found that individual differences in faking ability tests are best explained by a general factor that correlated strongly with receptive socio-emotional abilities and showed zero to medium-sized correlations with different productive socio-emotional abilities. All correlations remained small after controlling for shared variance with general mental ability, as indicated by tests of working memory capacity. We conclude that faking ability is indeed correlated meaningfully with other socio-emotional abilities and discuss the implications for intelligence research and applied ability assessment.
Affiliation(s)
- Mattis Geiger
- Institute of Psychology and Education, Ulm University, 89069 Ulm, Germany;
- Romy Bärwaldt
- Department of Psychology, University of Münster, D-48149 Münster, Germany;
- Oliver Wilhelm
- Institute of Psychology and Education, Ulm University, 89069 Ulm, Germany;
|
48
|
|
49
|
Validity of Psychiatric Evaluation of Asylum Seekers through Telephone. Case Rep Psychiatry 2021; 2021:8856352. [PMID: 33628562 PMCID: PMC7889332 DOI: 10.1155/2021/8856352] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2020] [Revised: 01/20/2021] [Accepted: 01/22/2021] [Indexed: 12/04/2022] Open
Abstract
The goal of the psychiatric assessment of asylum seekers is to evaluate the asylum seeker's mental health and credibility. The shortage of mental health providers trained in this particular type of evaluation means that in-person evaluation is not always feasible, and telephonic interviews have occasionally been used to fill this void. The validity of such evaluations in assessing credibility has yet to be fully established: in telephonic interviews, evaluators have no access to the facial or body language cues that can indicate deception or honesty. We present the case of a client evaluated via telephone who was deemed credible and eventually released to pursue asylum in the US. Assessment of credibility relied solely on cues obtained from the client's narrative, reported symptoms, and style of interaction with the evaluator. We highlight the findings from the client's interview that supported credibility and discuss the challenges of assessing an asylum seeker's credibility via telephonic interview. Telephonic evaluation of credibility can be considered a valid method despite major challenges, but psychiatric evaluators should be aware of its limitations given the high possibility of secondary gains and deception.
|
50
|
Namba S, Matsui H, Zloteanu M. Distinct temporal features of genuine and deliberate facial expressions of surprise. Sci Rep 2021; 11:3362. [PMID: 33564091 PMCID: PMC7873236 DOI: 10.1038/s41598-021-83077-4] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/28/2020] [Accepted: 01/28/2021] [Indexed: 01/30/2023] Open
Abstract
The physical properties of genuine and deliberate facial expressions remain elusive. This study focuses on observable dynamic differences between genuine and deliberate expressions of surprise, based on the temporal structure of facial parts during emotional expression. Facial expressions of surprise were elicited using multiple methods and video recorded: some senders were filmed as they experienced genuine surprise in response to a jack-in-the-box (Genuine); other senders were asked to produce deliberate surprise with no preparation (Improvised), by mimicking the expression of another person (External), or by reproducing their surprised face after having first experienced genuine surprise (Rehearsed). A total of 127 videos were analyzed, and moment-to-moment movements of the eyelids and eyebrows were annotated with deep learning-based tracking software. Results showed that all surprise displays were mainly composed of eyebrow- and eyelid-raising movements. Genuine displays included horizontal movement in the left part of the face but also showed the weakest movement coupling of all conditions. External displays had faster eyebrow and eyelid movements, while Improvised displays showed the strongest coupling of movements. The findings demonstrate the importance of dynamic information in the encoding of genuine and deliberate expressions of surprise, and of the production method employed in research.
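The "movement coupling" between facial parts can be illustrated with a Pearson correlation between two displacement time series, one per facial part (an illustrative choice of statistic; the paper's exact coupling measure may differ):

```python
import numpy as np

def coupling(brow, lid):
    """Pearson correlation between eyebrow and eyelid displacement
    series, used here as a crude coupling score (illustrative metric)."""
    return float(np.corrcoef(brow, lid)[0, 1])

# Synthetic series: an eyebrow raise and a nearly identical eyelid raise.
t = np.linspace(0, 1, 50)
brow = np.sin(2 * np.pi * t)
lid_coupled = np.sin(2 * np.pi * t) + 0.01 * np.cos(7 * t)
print(coupling(brow, lid_coupled))  # close to 1.0: strongly coupled
```

Under this reading, Improvised displays would yield scores near 1 (parts moving in lockstep), while Genuine displays would yield lower scores, consistent with their weaker coupling.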
Affiliation(s)
- Shushi Namba
- Psychological Process Team, BZP, Robotics Project, RIKEN, Kyoto, 6190288, Japan.
- Hiroshi Matsui
- Center for Human-Nature, Artificial Intelligence, and Neuroscience, Hokkaido University, Hokkaido, 0600808, Japan
- Mircea Zloteanu
- Department of Criminology and Sociology, Kingston University London, Kingston Upon Thames, KT1 2EE, UK
|