1. Pelot A, Gallant A, Mazerolle MP, Roy-Charland A. Methodological Variations to Explore Conflicting Results in the Existing Literature of Masking Smile Judgment. Behav Sci (Basel) 2024; 14:944. [PMID: 39457816] [PMCID: PMC11505263] [DOI: 10.3390/bs14100944]
Abstract
Although a smile can serve as an expression of genuine happiness, it can also be generated to conceal negative emotions. The traces of negative emotion present in these types of smiles can produce micro-expressions, subtle movements of the facial muscles manifested in the upper or lower half of the face. Studies examining the judgment of smiles masking negative emotions have mostly employed dichotomous rating measures, while also assuming that dichotomous categorization of a smile as happy or not is synonymous with judgments of the smile's authenticity. The aim of the two studies was to explore the judgment of enjoyment and masking smiles using unipolar and bipolar continuous rating measures and to examine differences in judgment when instructions varied between judgments of happiness and authenticity. In Experiment 1, participants rated smiles on 7-point scales of perceived happiness and authenticity. In Experiment 2, participants rated the smiles on bipolar 7-point scales anchored between happiness and a negative emotion label. In both studies, similar patterns were observed: faces with traces of fear were rated significantly less happy/authentic, and those with traces of anger in the brows were rated significantly happier/more authentic. Regarding instruction type, no effect was found, indicating that participants perceive and judge enjoyment and masking smiles similarly under both instructions. Additionally, the use of bipolar scales anchored between a negative emotion label and happiness was not consistently effective in influencing the judgment of the masking smile.
Affiliation(s)
- Annalie Pelot: School of Psychology, Laurentian University, Sudbury, ON P3E 2C6, Canada
- Adèle Gallant: École de Psychologie, Université de Moncton, Moncton, NB E1A 3E9, Canada
- Marie-Pier Mazerolle: École de Psychologie, Université de Moncton, Moncton, NB E1A 3E9, Canada
- Annie Roy-Charland: École de Psychologie, Université de Moncton, Moncton, NB E1A 3E9, Canada
2. Obayashi Y, Uehara S, Yuasa A, Otaka Y. The other person's smiling amount affects one's smiling response during face-to-face conversations. Front Behav Neurosci 2024; 18:1420361. [PMID: 39184933] [PMCID: PMC11341491] [DOI: 10.3389/fnbeh.2024.1420361]
Abstract
Introduction: Smiling during conversation occurs interactively between people and is known to build good interpersonal relationships. However, whether and to what extent the amount an individual smiles is influenced by the other person's smile has remained unclear. This study aimed to quantify the amount of two individuals' smiles during conversations and investigate the dependency of one's smile amount (i.e., intensity and frequency) on that of the other. Method: Forty participants (20 female) engaged in three-minute face-to-face conversations as speakers with a listener (male or female) under three conditions in which the amount of the listener's smiling response was controlled to be "less," "moderate," or "greater." The amount of smiling was quantified from facial movements through automated facial expression analysis. Results: The amount of smiling by the speaker changed significantly depending on the listener's smile amount; when listeners smiled to a greater extent, speakers tended to smile more, especially when they were of the same gender (i.e., male-male and female-female pairs). Further analysis revealed that the smiling intensities of the two individuals changed in a temporally synchronized manner. Discussion: These results provide quantitative evidence for the dependence of one person's smile on the other's, and for the differential effect between gender pairs.
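The temporal synchrony described above is, in essence, a lagged correlation between two smile-intensity time series. Below is a minimal illustrative sketch (not the authors' analysis code): the frame rate, the synthetic data, and the use of a plain lagged Pearson correlation are all assumptions.

```python
import numpy as np

def lagged_crosscorr(speaker, listener, fps=30, max_lag_s=2.0):
    """Pearson correlation between two smile-intensity series at a range of lags.

    speaker, listener: 1-D arrays of per-frame smile intensity (assumed to come
    from an automated facial expression analyzer). A positive lag means the
    listener's smile follows the speaker's.
    """
    speaker = (speaker - speaker.mean()) / speaker.std()
    listener = (listener - listener.mean()) / listener.std()
    max_lag = int(max_lag_s * fps)
    lags = np.arange(-max_lag, max_lag + 1)
    corrs = []
    for lag in lags:
        if lag > 0:      # listener follows the speaker by `lag` frames
            r = np.corrcoef(speaker[:-lag], listener[lag:])[0, 1]
        elif lag < 0:    # listener precedes the speaker
            r = np.corrcoef(speaker[-lag:], listener[:lag])[0, 1]
        else:
            r = np.corrcoef(speaker, listener)[0, 1]
        corrs.append(r)
    return lags / fps, np.array(corrs)

# Synthetic example: the listener's trace lags the speaker's by ~0.5 s.
rng = np.random.default_rng(0)
speaker = np.convolve(rng.random(5400), np.ones(30) / 30, mode="same")
listener = np.roll(speaker, 15) + 0.05 * rng.standard_normal(5400)
lags, corrs = lagged_crosscorr(speaker, listener)
print(f"peak r = {corrs.max():.2f} at lag {lags[corrs.argmax()]:+.2f} s")
```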
Affiliation(s)
- Yota Obayashi: Department of Rehabilitation, Fujita Health University Hospital, Aichi, Japan
- Shintaro Uehara: Faculty of Rehabilitation, Fujita Health University School of Health Sciences, Aichi, Japan
- Akiko Yuasa: Department of Rehabilitation Medicine, Fujita Health University School of Medicine, Aichi, Japan; Japan Society for the Promotion of Science, Tokyo, Japan
- Yohei Otaka: Department of Rehabilitation Medicine, Fujita Health University School of Medicine, Aichi, Japan
3. Zhao X, Chen J, Chen T, Liu Y, Wang S, Zeng X, Yan J, Liu G. Micro-Expression Recognition Based on Nodal Efficiency in the EEG Functional Networks. IEEE Trans Neural Syst Rehabil Eng 2024; 32:887-894. [PMID: 38190663] [DOI: 10.1109/tnsre.2023.3347601]
Abstract
Micro-expression recognition based on images has made some progress, yet limitations persist. For instance, image-based recognition of micro-expressions is affected by factors such as ambient light, changes in head posture, and facial occlusion. Electroencephalography (EEG), with its high temporal resolution, can record brain activity associated with micro-expressions and identify them objectively from a neurophysiological standpoint. Accordingly, this study introduces a novel method for recognizing micro-expressions using node efficiency features of brain networks derived from EEG signals. We designed a real-time Supervision and Emotional Expression Suppression (SEES) experimental paradigm to collect video and EEG data reflecting micro- and macro-expression states from 70 participants experiencing positive emotions. By constructing functional brain networks based on graph theory, we analyzed the network efficiencies at both macro- and micro-levels. The participants exhibited lower connection density, global efficiency, and nodal efficiency in the alpha, beta, and gamma networks during micro-expressions compared to macro-expressions. We then selected the optimal subset of nodal efficiency features using a random forest algorithm and applied them to various classifiers, including Support Vector Machine (SVM), Gradient-Boosted Decision Tree (GBDT), Logistic Regression (LR), Random Forest (RF), and eXtreme Gradient Boosting (XGBoost). These classifiers achieved promising accuracy in micro-expression recognition, with SVM exhibiting the highest accuracy of 92.6% when 15 channels were selected. This study provides a new neuroscientific indicator for recognizing micro-expressions based on EEG signals, thereby broadening the potential applications for micro-expression recognition.
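As a rough, simplified illustration of this kind of pipeline (not the authors' implementation), nodal efficiency can be computed from a thresholded connectivity matrix with networkx, and the resulting features passed through random-forest feature ranking to an SVM with scikit-learn. The connectivity values, threshold, channel count, and labels below are synthetic placeholders.

```python
import numpy as np
import networkx as nx
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score
from sklearn.svm import SVC

def nodal_efficiency(conn, threshold=0.5):
    """Nodal efficiency of each channel from a symmetric connectivity matrix."""
    adj = (conn >= threshold).astype(int)
    np.fill_diagonal(adj, 0)
    graph = nx.from_numpy_array(adj)
    n = graph.number_of_nodes()
    eff = np.zeros(n)
    for i in graph.nodes:
        # Mean inverse shortest-path length from channel i to every other channel.
        eff[i] = sum(nx.efficiency(graph, i, j) for j in graph.nodes if j != i) / (n - 1)
    return eff

# Toy data: per-trial random connectivity matrices for 30 "channels" and binary labels.
rng = np.random.default_rng(1)
n_trials, n_channels = 60, 30
conn_mats = rng.random((n_trials, n_channels, n_channels))
conn_mats = (conn_mats + conn_mats.transpose(0, 2, 1)) / 2
X = np.stack([nodal_efficiency(c) for c in conn_mats])
y = rng.integers(0, 2, n_trials)

# Rank channels by random-forest importance, keep the top 15, classify with an SVM.
rf = RandomForestClassifier(n_estimators=200, random_state=0).fit(X, y)
top15 = np.argsort(rf.feature_importances_)[-15:]
print("CV accuracy:", cross_val_score(SVC(kernel="rbf"), X[:, top15], y, cv=5).mean())
```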
4. Ng NKY, Dudeney J, Jaaniste T. Parent-Child Communication Incongruence in Pediatric Healthcare. Children (Basel, Switzerland) 2023; 11:39. [PMID: 38255353] [PMCID: PMC10814587] [DOI: 10.3390/children11010039]
Abstract
Parents play a key role in providing children with health-related information and emotional support. This communication occurs both in their homes and in pediatric healthcare environments, such as hospitals, outpatient clinics, and primary care offices. Often, this occurs within situations entailing heightened stress for both the parent and the child. There is considerable research within the communication literature regarding the nature of both verbal and nonverbal communication, along with the way in which these communication modalities are either similar (i.e., congruent) or dissimilar (i.e., incongruent) to one another. However, less is known about communication congruency/incongruency, specifically in parent-child relationships, or within healthcare environments. In this narrative review, we explore the concept of verbal and nonverbal communication incongruence, specifically within the context of parent-child communication in a pediatric healthcare setting. We present an overview of verbal and nonverbal communication and propose the Communication Incongruence Model to encapsulate how verbal and nonverbal communication streams are used and synthesized by parents and children. We discuss the nature and possible reasons for parental communication incongruence within pediatric settings, along with the consequences of incongruent communication. Finally, we suggest a number of hypotheses derived from the model that can be tested empirically and used to guide future research directions and influence potential clinical applications.
Affiliation(s)
- Nancy Kwun Yiu Ng: Departments of Pain & Palliative Care, Sydney Children’s Hospital, Randwick, NSW 2031, Australia; School of Clinical Medicine, University of New South Wales, Kensington, NSW 2033, Australia
- Joanne Dudeney: Departments of Pain & Palliative Care, Sydney Children’s Hospital, Randwick, NSW 2031, Australia; School of Clinical Medicine, University of New South Wales, Kensington, NSW 2033, Australia; School of Psychological Sciences, Macquarie University, Macquarie Park, NSW 2109, Australia
- Tiina Jaaniste: Departments of Pain & Palliative Care, Sydney Children’s Hospital, Randwick, NSW 2031, Australia; School of Clinical Medicine, University of New South Wales, Kensington, NSW 2033, Australia
5. Patterson ML, Fridlund AJ, Crivelli C. Four Misconceptions About Nonverbal Communication. Perspectives on Psychological Science 2023; 18:1388-1411. [PMID: 36791676] [PMCID: PMC10623623] [DOI: 10.1177/17456916221148142]
Abstract
Research and theory in nonverbal communication have made great advances toward understanding the patterns and functions of nonverbal behavior in social settings. Progress has been hindered, we argue, by presumptions about nonverbal behavior that follow from both received wisdom and faulty evidence. In this article, we document four persistent misconceptions about nonverbal communication, namely: that people communicate using decodable body language; that they have a stable personal space by which they regulate contact with others; that they express emotion using universal, evolved, iconic, categorical facial expressions; and that they can deceive and detect deception, using dependable telltale clues. We show how these misconceptions permeate research as well as the practices of popular behavior experts, with consequences that extend from intimate relationships to the boardroom and courtroom and even to the arena of international security. Notwithstanding these misconceptions, existing frameworks of nonverbal communication are being challenged by more comprehensive systems approaches and by virtual technologies that ambiguate the roles and identities of interactants and the contexts of interaction.
Affiliation(s)
- Alan J. Fridlund: Department of Psychological and Brain Sciences, University of California, Santa Barbara
6. Carmichael CL, Mizrahi M. Connecting cues: The role of nonverbal cues in perceived responsiveness. Curr Opin Psychol 2023; 53:101663. [PMID: 37572551] [DOI: 10.1016/j.copsyc.2023.101663]
Abstract
Nonverbal cues powerfully shape interpersonal experiences with close others; yet, there has been minimal cross-fertilization between the nonverbal behavior and close relationships literatures. Using examples of responsive nonverbal behavior conveyed across vocal, tactile, facial, and bodily channels of communication, we illustrate the utility of assessing and isolating their effects to differentiate the contributions of verbal and nonverbal displays of listening and responsiveness to relationship outcomes. We offer suggestions for methodological approaches to better capture responsive behavior across verbal and nonverbal channels, and discuss theoretical and practical implications of carrying out this work to better clarify what makes people feel understood, validated, listened to, and cared for.
Affiliation(s)
- Cheryl L Carmichael: Department of Psychology, Brooklyn College, CUNY, 2900 Bedford Avenue, Brooklyn, NY 11210, USA
- Moran Mizrahi: Department of Psychology, Ariel University, 3 Kiryat HaMada, Ariel 40700, Israel
7. Miolla A, Cardaioli M, Scarpazza C. Padova Emotional Dataset of Facial Expressions (PEDFE): A unique dataset of genuine and posed emotional facial expressions. Behav Res Methods 2023; 55:2559-2574. [PMID: 36002622] [PMCID: PMC10439033] [DOI: 10.3758/s13428-022-01914-4]
Abstract
Facial expressions are among the most powerful signals for human beings to convey their emotional states. Indeed, emotional facial datasets represent the most effective and controlled method of examining humans' interpretation of and reaction to various emotions. However, scientific research on emotion has mainly relied on static pictures of facial expressions posed (i.e., simulated) by actors, creating a significant bias in the emotion literature. This dataset aims to fill this gap, providing a considerable number (N = 1458) of dynamic genuine (N = 707) and posed (N = 751) clips of the six universal emotions from 56 participants. The dataset is available in two versions: original clips, including participants' body and background, and modified clips, where only the face of participants is visible. Notably, the original dataset has been validated by 122 human raters, while the modified dataset has been validated by 280 human raters. Hit rates for emotion and genuineness, as well as the mean and standard deviation of perceived genuineness and intensity, are provided for each clip to allow future users to select the clips best suited to answering their scientific questions.
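Per-clip validation statistics of this kind lend themselves to straightforward programmatic stimulus selection. The sketch below shows one way a user might filter such a table with pandas; the column names (clip_id, emotion, is_genuine, emotion_hit_rate, genuineness_hit_rate) and thresholds are hypothetical and do not correspond to the actual PEDFE field names.

```python
import pandas as pd

# Hypothetical validation table; the real PEDFE release defines its own columns.
clips = pd.DataFrame({
    "clip_id":              ["c01", "c02", "c03", "c04"],
    "emotion":              ["happiness", "happiness", "surprise", "surprise"],
    "is_genuine":           [True, False, True, False],
    "emotion_hit_rate":     [0.91, 0.84, 0.66, 0.72],
    "genuineness_hit_rate": [0.74, 0.58, 0.69, 0.40],
})

# Keep well-recognized clips whose authenticity was judged above chance,
# then take the best clip per emotion x authenticity cell.
selected = (
    clips[(clips["emotion_hit_rate"] >= 0.70) & (clips["genuineness_hit_rate"] > 0.50)]
    .sort_values("emotion_hit_rate", ascending=False)
    .groupby(["emotion", "is_genuine"])
    .head(1)
)
print(selected[["clip_id", "emotion", "is_genuine"]])
```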
Affiliation(s)
- A. Miolla: Department of General Psychology, University of Padua, Padua, Italy
- M. Cardaioli: Department of Mathematics, University of Padua, Padua, Italy; GFT Italy, Milan, Italy
- C. Scarpazza: Department of General Psychology, University of Padua, Padua, Italy
8. Gallant A, Pelot A, Mazerolle MP, Sonier RP, Roy-Charland A. The role of emotion-related individual differences in enjoyment and masking smile judgment. BMC Psychol 2023; 11:132. [PMID: 37098621] [PMCID: PMC10131331] [DOI: 10.1186/s40359-023-01173-8]
Abstract
BACKGROUND While some research indicates that individuals can accurately judge the smile authenticity of enjoyment and masking smile expressions, other research suggests modest judgment rates for masking smiles. The current study explored the role of emotion-related individual differences in the judgment of authenticity and recognition of negative emotions in enjoyment and masking smile expressions as a potential explanation for the differences observed. METHODS Specifically, Experiment 1 investigated the role of emotion contagion (Doherty in J Nonverbal Behav 21:131-154, 1997), emotional intelligence (Schutte et al. in Personality Individ Differ 25:167-177, 1998), and emotion regulation (Gratz and Roemer in J Psychopathol Behav Assess 26:41-54, 2004) in smile authenticity judgment and recognition of negative emotions in masking smiles. Experiment 2 investigated the role of state and trait anxiety (Spielberger et al. in Manual for the State-Trait Anxiety Inventory, Consulting Psychologists Press, Palo Alto, 1983) in smile authenticity judgment and recognition of negative emotions in the same masking smiles. In both experiments, repeated-measures ANOVAs were conducted for judgment of authenticity, probability of producing the expected response, detection of another emotion, and emotion recognition. A series of correlations was also calculated between the proportion of expected responses in smile judgment and the scores on the different subscales. RESULTS Results of the smile judgment and recognition tasks were replicated in both studies and echoed results from prior studies of masking smile judgment: participants rated enjoyment smiles as happier than the masking smiles and, of the masking smiles, participants responded "really happy" more often for the angry-eyes masking smiles and more often categorized fear masking smiles as "not really happy". CONCLUSIONS Overall, while the emotion-related individual differences used in our study seem to have an impact on recognition of basic emotions in the literature, our study suggests that these traits, with the exception of emotional awareness, do not predict performance on the judgment of complex expressions such as masking smiles. These results provide further information regarding the factors that do and do not contribute to greater judgment of smile authenticity and recognition of negative emotions in masking smiles.
Affiliation(s)
- Adèle Gallant: School of Psychology, Université de Moncton, 18 Avenue Antonine-Maillet, Moncton, NB, E1A 3E9, Canada
- Annalie Pelot: Department of Psychology, Laurentian University, Sudbury, ON, Canada
- Marie-Pier Mazerolle: School of Psychology, Université de Moncton, 18 Avenue Antonine-Maillet, Moncton, NB, E1A 3E9, Canada
- René-Pierre Sonier: School of Psychology, Université de Moncton, 18 Avenue Antonine-Maillet, Moncton, NB, E1A 3E9, Canada
- Annie Roy-Charland: School of Psychology, Université de Moncton, 18 Avenue Antonine-Maillet, Moncton, NB, E1A 3E9, Canada
9. Lee M, Lori A, Langford NA, Rilling JK. The neural basis of smile authenticity judgments and the potential modulatory role of the oxytocin receptor gene (OXTR). Behav Brain Res 2023; 437:114144. [PMID: 36216140] [DOI: 10.1016/j.bbr.2022.114144]
Abstract
Accurate perception of genuine vs. posed smiles is crucial for successful social navigation in humans. While people vary in their ability to assess the authenticity of smiles, little is known about the specific biological mechanisms underlying this variation. We investigated the neural substrates of smile authenticity judgments using functional magnetic resonance imaging (fMRI). We also tested a preliminary hypothesis that a common polymorphism in the oxytocin receptor gene (OXTR) rs53576 would modulate the behavioral and neural indices of accurate smile authenticity judgments. A total of 185 healthy adult participants (Neuroimaging arm: N = 44, Behavioral arm: N = 141) determined the authenticity of dynamic facial expressions of genuine and posed smiles either with or without fMRI scanning. Correctly identified genuine vs. posed smiles activated brain areas involved with reward processing, facial mimicry, and mentalizing. Activation within the inferior frontal gyrus and dorsomedial prefrontal cortex correlated with individual differences in sensitivity (d') and response criterion (C), respectively. Our exploratory genetic analysis revealed that rs53576 G homozygotes in the neuroimaging arm had a stronger tendency to judge posed smiles as genuine than did A allele carriers and showed decreased activation in the medial prefrontal cortex when viewing genuine vs. posed smiles. Yet, OXTR rs53576 did not modulate task performance in the behavioral arm, which calls for further studies to evaluate the legitimacy of this result. Our findings extend previous literature on the biological foundations of smile authenticity judgments, particularly emphasizing the involvement of brain regions implicated in reward, facial mimicry, and mentalizing.
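The behavioral indices mentioned here, sensitivity (d') and response criterion (C), are standard signal detection measures. A minimal computation from hit and false-alarm counts is sketched below with made-up numbers; the log-linear correction is an assumption, not necessarily the authors' procedure.

```python
from scipy.stats import norm

def dprime_and_criterion(hits, misses, false_alarms, correct_rejections):
    """Signal detection indices for 'genuine' responses to genuine vs. posed smiles.

    Uses a log-linear correction (add 0.5 to each cell) so hit or false-alarm
    rates of exactly 0 or 1 do not produce infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z_hit, z_fa = norm.ppf(hit_rate), norm.ppf(fa_rate)
    d_prime = z_hit - z_fa            # sensitivity
    criterion = -0.5 * (z_hit + z_fa) # response bias
    return d_prime, criterion

# Made-up example: 30 genuine and 30 posed smiles judged by one participant.
d, c = dprime_and_criterion(hits=24, misses=6, false_alarms=12, correct_rejections=18)
print(f"d' = {d:.2f}, C = {c:.2f}")
```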
Affiliation(s)
- Adriana Lori: Department of Psychiatry and Behavioral Science, USA
- Nicole A Langford: Department of Psychiatry and Behavioral Science, USA; Nell Hodgson Woodruff School of Nursing, USA
- James K Rilling: Department of Anthropology, USA; Department of Psychiatry and Behavioral Science, USA; Center for Behavioral Neuroscience, USA; Emory National Primate Research Center, USA; Center for Translational Social Neuroscience, USA
10. Gunderson CA, Baker A, Pence AD, ten Brinke L. Interpersonal Consequences of Deceptive Expressions of Sadness. Personality and Social Psychology Bulletin 2023; 49:97-109. [PMID: 34906011] [PMCID: PMC9684658] [DOI: 10.1177/01461672211059700]
Abstract
Emotional expressions evoke predictable responses from observers; displays of sadness are commonly met with sympathy and help from others. Accordingly, people may be motivated to feign emotions to elicit a desired response. In the absence of suspicion, we predicted that emotional and behavioral responses to genuine (vs. deceptive) expressers would be guided by empirically valid cues of sadness authenticity. Consistent with this hypothesis, untrained observers (total N = 1,300) reported less sympathy and offered less help to deceptive (vs. genuine) expressers of sadness. This effect was replicated using both posed, low-stakes, laboratory-created stimuli, and spontaneous, real, high-stakes emotional appeals to the public. Furthermore, lens models suggest that sympathy reactions were guided by difficult-to-fake facial actions associated with sadness. Results suggest that naive observers use empirically valid cues to deception to coordinate social interactions, providing novel evidence that people are sensitive to subtle cues to deception.
Affiliation(s)
- Alysha Baker: Okanagan College, Kelowna, British Columbia, Canada
11. Zhang J, Yin M, Shu D, Liu D. The establishment of the general microexpression recognition ability and its relevant brain activity. Front Hum Neurosci 2022; 16:894702. [PMID: 36569473] [PMCID: PMC9774033] [DOI: 10.3389/fnhum.2022.894702]
Abstract
Microexpressions are very transitory expressions lasting about 1/25∼1/2 s that can reveal the true emotions people try to hide or suppress. The PREMERT (pseudorandom ecological microexpression recognition test) can assess an individual's microexpression recognition ability with six microexpression Ms (the mean accuracy rate for a microexpression type across six expression backgrounds) and six microexpression SDs (the standard deviation of accuracy rates for that microexpression type across the six expression backgrounds), but neither it nor other studies have explored the general microexpression recognition ability (the GMERA) or tested the GMERA effectively. Therefore, the current study proposed and established the GMERA using the behavioral data of the PREMERT. Spontaneous brain activity in the resting state is a stable index of individual cognitive characteristics. Therefore, the current study also explored the resting-state brain activity relevant to the GMERA indicators, using the neuroimaging data of the PREMERT, to show from brain mechanisms that the GMERA is an individual cognitive characteristic. The results showed that (1) there was a three-layer hierarchical structure in human microexpression recognition ability: the GMERA (the highest layer); recognition of a type of microexpression under different expression backgrounds (the second layer); and recognition of a certain microexpression under a certain expression background (the third layer). A common GMERA factor was extracted from the recognition of the six microexpression types in the PREMERT. Four indicators of the GMERA were calculated from the six microexpression Ms and six microexpression SDs, namely GMERAL (level of GMERA), GMERAF (fluctuation of GMERA), GMERAB (background effect of GMERA), and GMERABF (fluctuation of GMERAB), which had good parallel-forms reliability, calibration validity, and ecological validity. The GMERA provides a concise and comprehensive overview of an individual's microexpression recognition ability, and the PREMERT proved to be a good test for measuring the GMERA. (2) ALFFs (amplitudes of low-frequency fluctuations) in both the eyes-closed and eyes-open resting states, as well as the ALFF difference between them, could predict the four indicators of the GMERA. The relevant resting-state brain areas were some areas of the expression recognition network, the microexpression consciousness and attention network, and the motor network for the change from expression backgrounds to microexpression. (3) The brain areas relevant to the GMERA and to the recognition of different microexpression types belonged to the same three cognitive processes, but those relevant to the GMERA were "higher-order" areas, more concise and critical than those for the recognition of individual microexpression types.
Affiliation(s)
- Jianxin Zhang: Jiangsu Province Engineering Research Center of Microexpression Intelligent Sensing and Security Prevention and Control, Nanjing, China; School of Education, Jiangnan University, Wuxi, China
- Ming Yin: Jiangsu Province Engineering Research Center of Microexpression Intelligent Sensing and Security Prevention and Control, Nanjing, China; Jiangsu Police Institute, Nanjing, China
- Deming Shu: School of Education, Soochow University, Soochow, China
- Dianzhi Liu: School of Education, Soochow University, Soochow, China
12. Zhao X, Chen J, Chen T, Wang S, Liu Y, Zeng X, Liu G. Responses of functional brain networks in micro-expressions: An EEG study. Front Psychol 2022; 13:996905. [DOI: 10.3389/fpsyg.2022.996905]
Abstract
Micro-expressions (MEs) can reflect an individual’s subjective emotions and true mental state, and they are widely used in the fields of mental health, justice, law enforcement, intelligence, and security. However, one of the major challenges of working with MEs is that their neural mechanism is not entirely understood. To the best of our knowledge, the present study is the first to use electroencephalography (EEG) to investigate the reorganization of functional brain networks involved in MEs. We aimed to reveal the underlying neural mechanisms that can provide electrophysiological indicators for ME recognition. A real-time supervision and emotional expression suppression experimental paradigm was designed to collect video and EEG data of MEs and no expressions (NEs) from 70 participants expressing positive emotions. Based on graph theory, we analyzed the efficiency of the functional brain networks at the scalp level on both macro and micro scales. The results revealed that, in the presence of MEs compared with NEs, the participants exhibited higher global efficiency and nodal efficiency in the frontal, occipital, and temporal regions. Additionally, using the random forest algorithm to select a subset of functional connectivity features as input, the support vector machine classifier achieved a classification accuracy for MEs and NEs of 0.81, with an area under the curve of 0.85. This finding demonstrates the possibility of using EEG to recognize MEs, with a wide range of application scenarios, such as for persons wearing face masks or patients with expression disorders.
13. Zhao X, Liu Y, Wang S, Chen J, Chen T, Liu G. Electrophysiological evidence for inhibition hypothesis of micro-expressions based on tensor component analysis and Physarum network algorithm. Neurosci Lett 2022; 790:136897. [PMID: 36195299] [DOI: 10.1016/j.neulet.2022.136897]
Abstract
The inhibition hypothesis advocated by Ekman (1985) states that when an emotion is concealed or masked, the true emotion is manifested as a micro-expression (ME), a fleeting expression lasting 40 to 500 ms. However, research on the inhibition hypothesis of MEs from the perspective of electrophysiology is lacking. Here, we report electrophysiological evidence obtained with an electroencephalography (EEG) data analysis method. Specifically, we designed an ME elicitation paradigm to collect ME and EEG data from 70 subjects expressing positive emotions, and proposed a method based on tensor component analysis (TCA) combined with the Physarum network (PN) algorithm to characterize the spatial, temporal, and spectral signatures of dynamic EEG data of MEs. The proposed TCA-PN method revealed two pathways involving dorsal and ventral streams in the functional brain networks of MEs, reflecting the inhibition processing and emotional arousal of MEs. The results provide evidence for the inhibition hypothesis from an electrophysiological standpoint, which allows us to better understand the neural mechanism of MEs.
Affiliation(s)
- Xingcong Zhao: School of Electronic and Information Engineering, Southwest University, 400715, China
- Ying Liu: School of Music, Southwest University, 400715, China
- Shiyuan Wang: School of Electronic and Information Engineering, Southwest University, 400715, China
- Jiejia Chen: School of Electronic and Information Engineering, Southwest University, 400715, China
- Tong Chen: School of Electronic and Information Engineering, Southwest University, 400715, China
- Guangyuan Liu: School of Electronic and Information Engineering, Southwest University, 400715, China; Key Laboratory of Cognition and Personality, Ministry of Education, Southwest University, 400715, China
14. Zhao X, Liu Y, Chen T, Wang S, Chen J, Wang L, Liu G. Differences in brain activations between micro- and macro-expressions based on electroencephalography. Front Neurosci 2022; 16:903448. [PMID: 36172039] [PMCID: PMC9511965] [DOI: 10.3389/fnins.2022.903448]
Abstract
Micro-expressions can reflect an individual's subjective emotions and true mental state and are widely used in the fields of mental health, justice, law enforcement, intelligence, and security. However, current micro-expression recognition technology, based on images and expert assessment, has limitations such as restricted application scenarios and time consumption. Therefore, to overcome these limitations, this study is the first to explore the brain mechanisms of micro-expressions, and their differences from macro-expressions, from a neuroscientific perspective. This can serve as a foundation for micro-expression recognition based on EEG signals. We designed a real-time supervision and emotional expression suppression (SEES) experimental paradigm to synchronously collect facial expressions and electroencephalograms. Electroencephalogram signals were analyzed at the scalp and source levels to determine the temporal and spatial neural patterns of micro- and macro-expressions. We found that, under positive emotions, micro-expressions were associated with stronger activation than macro-expressions in frontal regions, including the premotor cortex, supplementary motor cortex, and middle frontal gyrus. Under negative emotions, micro-expressions were associated with weaker activation than macro-expressions in the somatosensory cortex and corneal gyrus regions. The activation of the right temporoparietal junction (rTPJ) was stronger for micro-expressions under positive than negative emotions. The reason for this difference is that the pathways of facial control are different: the production of micro-expressions under positive emotions depends on control of the face, while micro-expressions under negative emotions depend more on the intensity of the emotion.
Affiliation(s)
- Xingcong Zhao: School of Electronic and Information Engineering, Southwest University, Chongqing, China; Key Laboratory of Cognition and Personality, Ministry of Education, Southwest University, Chongqing, China
- Ying Liu: Key Laboratory of Cognition and Personality, Ministry of Education, Southwest University, Chongqing, China; School of Music, Southwest University, Chongqing, China
- Tong Chen: School of Electronic and Information Engineering, Southwest University, Chongqing, China; Key Laboratory of Cognition and Personality, Ministry of Education, Southwest University, Chongqing, China
- Shiyuan Wang: School of Electronic and Information Engineering, Southwest University, Chongqing, China; Key Laboratory of Cognition and Personality, Ministry of Education, Southwest University, Chongqing, China
- Jiejia Chen: School of Electronic and Information Engineering, Southwest University, Chongqing, China; Key Laboratory of Cognition and Personality, Ministry of Education, Southwest University, Chongqing, China
- Linwei Wang: Key Laboratory of Cognition and Personality, Ministry of Education, Southwest University, Chongqing, China
- Guangyuan Liu: School of Electronic and Information Engineering, Southwest University, Chongqing, China; Key Laboratory of Cognition and Personality, Ministry of Education, Southwest University, Chongqing, China
15. Hartmann TJ, Hartmann JBJ, Friebe-Hoffmann U, Lato C, Janni W, Lato K. Novel Method for Three-Dimensional Facial Expression Recognition Using Self-Normalizing Neural Networks and Mobile Devices. Geburtshilfe Frauenheilkd 2022; 82:955-969. [PMID: 36110895] [PMCID: PMC9470291] [DOI: 10.1055/a-1866-2943]
Abstract
Introduction: To date, most approaches to facial expression recognition rely on two-dimensional images, although advanced approaches using three-dimensional data exist. These, however, demand stationary apparatuses and thus lack portability and the possibility of scaled deployment. As human emotions, intent, and even diseases may manifest in distinct facial expressions or changes therein, a portable yet capable solution is needed. Due to the superior informative value of three-dimensional data on facial morphology, and because certain syndromes find expression in specific facial dysmorphisms, such a solution should allow portable acquisition of true three-dimensional facial scans in real time. In this study we present a novel solution for the three-dimensional acquisition of facial geometry data and the recognition of facial expressions from it. The new technology presented here only requires a smartphone or tablet with an integrated TrueDepth camera and enables real-time acquisition of the geometry and its categorization into distinct facial expressions. Material and Methods: Our approach consisted of two parts. First, training data were acquired by asking a collective of 226 medical students to adopt defined facial expressions while their current facial morphology was captured by our specially developed app running on iPads placed in front of the students. The list of facial expressions to be shown by the participants consisted of "disappointed", "stressed", "happy", "sad", and "surprised". Second, the data were used to train a self-normalizing neural network. A set of all factors describing the facial expression at a given time is referred to as a "snapshot". Results: In total, over half a million snapshots were recorded in the study. Ultimately, the network achieved an overall accuracy of 80.54% after 400 epochs of training. On the test set, an overall accuracy of 81.15% was determined. Recall values differed by snapshot category and ranged from 74.79% for "stressed" to 87.61% for "happy". Precision showed similar results, with "sad" achieving the lowest value at 77.48% and "surprised" the highest at 86.87%. Conclusions: The present work demonstrates that respectable results can be achieved even when using data sets with some challenges. Through various measures, already incorporated into an optimized version of our app, the training results are expected to improve significantly and become more precise in the future. Currently, a follow-up study with the new version of our app, which encompasses the suggested alterations and adaptations, is being conducted. We aim to build a large and open database of facial scans not only for facial expression recognition but also to perform disease recognition and to monitor treatment progress.
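For readers unfamiliar with self-normalizing neural networks, the sketch below shows a minimal Keras classifier of the general kind described (SELU activations, LeCun-normal initialization, AlphaDropout). The input size of 52 assumes ARKit-style TrueDepth blendshape coefficients per snapshot, and the five classes follow the abstract; this is an illustration, not the authors' architecture or training setup.

```python
import numpy as np
import tensorflow as tf

N_FEATURES = 52   # assumed: per-snapshot blendshape coefficients from a TrueDepth camera
N_CLASSES = 5     # "disappointed", "stressed", "happy", "sad", "surprised"

def build_snn(n_features=N_FEATURES, n_classes=N_CLASSES):
    """Self-normalizing MLP: SELU units with LeCun-normal init and AlphaDropout."""
    model = tf.keras.Sequential([
        tf.keras.layers.Input(shape=(n_features,)),
        tf.keras.layers.Dense(128, activation="selu", kernel_initializer="lecun_normal"),
        tf.keras.layers.AlphaDropout(0.05),
        tf.keras.layers.Dense(64, activation="selu", kernel_initializer="lecun_normal"),
        tf.keras.layers.AlphaDropout(0.05),
        tf.keras.layers.Dense(n_classes, activation="softmax"),
    ])
    model.compile(optimizer="adam",
                  loss="sparse_categorical_crossentropy",
                  metrics=["accuracy"])
    return model

# Toy run on random "snapshots"; real training would use the recorded data.
X = np.random.rand(1024, N_FEATURES).astype("float32")
y = np.random.randint(0, N_CLASSES, size=1024)
model = build_snn()
model.fit(X, y, epochs=3, batch_size=64, verbose=0)
print(model.evaluate(X, y, verbose=0))
```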
Affiliation(s)
- Tim Johannes Hartmann: Universitäts-Hautklinik Tübingen, Tübingen, Germany; Universitätsfrauenklinik Ulm, Ulm, Germany
16. Monaro M, Maldera S, Scarpazza C, Sartori G, Navarin N. Detecting deception through facial expressions in a dataset of videotaped interviews: A comparison between human judges and machine learning models. Computers in Human Behavior 2022. [DOI: 10.1016/j.chb.2021.107063]
17. Sitting in Judgment: How Body Posture Influences Deception Detection and Gazing Behavior. Behav Sci (Basel) 2021; 11:bs11060085. [PMID: 34200633] [PMCID: PMC8229315] [DOI: 10.3390/bs11060085]
Abstract
Body postures can affect how we process and attend to information. Here, a novel effect of adopting an open or closed posture on the ability to detect deception was investigated. It was hypothesized that the posture adopted by judges would affect their social acuity, resulting in differences in the detection of nonverbal behavior (i.e., microexpression recognition) and the discrimination of deceptive and truthful statements. In Study 1, adopting an open posture produced higher accuracy for detecting naturalistic lies, but no difference was observed in the recognition of brief facial expressions as compared to adopting a closed posture; trait empathy was found to have an additive effect on posture, with more empathic judges having higher deception detection scores. In Study 2, with the use of an eye-tracker, posture effects on gazing behavior when judging both low-stakes and high-stakes lies were measured. Sitting in an open posture reduced judges’ average dwell times looking at senders, and in particular, the amount and length of time they focused on their hands. The findings suggest that simply shifting posture can impact judges’ attention to visual information and veracity judgments (Mg = 0.40, 95% CI (0.03, 0.78)).
18. Shen X, Fan G, Niu C, Chen Z. Catching a Liar Through Facial Expression of Fear. Front Psychol 2021; 12:675097. [PMID: 34168597] [PMCID: PMC8217652] [DOI: 10.3389/fpsyg.2021.675097]
Abstract
High stakes can be stressful whether one is telling the truth or lying. However, liars can feel more fear than truth-tellers because they worry about being discovered, and according to the "leakage theory," this fear is almost impossible to repress. Therefore, we assumed that analyzing the facial expression of fear could reveal deceit. Detecting and analyzing subtle leaked facial expressions of fear is a challenging task for laypeople; it is, however, a relatively easy job for computer vision and machine learning. To test this hypothesis, we analyzed video clips from the game show "The Moment of Truth" using OpenFace (to output the Action Units (AUs) of fear and face landmarks) and WEKA (to classify the video clips in which the players were lying or telling the truth). The results showed that some algorithms achieved an accuracy of >80% using only the AUs of fear. In addition, the total duration of the fear-related AU20 was found to be shorter under the lying condition than under the truth-telling condition. Further analysis showed that this was because the time window from peak to offset of AU20 was shorter under the lying condition than under the truth-telling condition. The results also showed that facial movements around the eyes were more asymmetrical when people were telling lies. All the results suggest that facial clues can be used to detect deception, and that fear could be a cue for distinguishing liars from truth-tellers.
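OpenFace writes per-frame Action Unit estimates to CSV, so features such as the total duration of AU20 can be derived with a few lines of pandas. The sketch below assumes the standard OpenFace 2.x column names (timestamp, AU20_c for presence, AU20_r for intensity) and a hypothetical file name; it mirrors the kind of feature extraction described, not the authors' exact pipeline.

```python
import pandas as pd

def au20_features(csv_path):
    """Summarize AU20 (lip stretcher, linked to fear) from an OpenFace output CSV."""
    df = pd.read_csv(csv_path)
    df.columns = df.columns.str.strip()          # OpenFace pads column names with spaces
    frame_dt = df["timestamp"].diff().median()   # approximate seconds per frame
    active = df["AU20_c"] == 1                   # frames where AU20 is detected as present
    return {
        "total_au20_duration_s": float(active.sum() * frame_dt),
        "mean_au20_intensity": float(df.loc[active, "AU20_r"].mean()),
        "peak_au20_intensity": float(df["AU20_r"].max()),
    }

# Hypothetical usage on one processed clip:
# print(au20_features("clip_lying_01.csv"))
```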
Affiliation(s)
- Xunbing Shen: Department of Psychology, Jiangxi University of Chinese Medicine, Nanchang, China
- Gaojie Fan: Beck Visual Cognition Laboratory, Louisiana State University, Baton Rouge, LA, United States
- Caoyuan Niu: Department of Psychology, Jiangxi University of Chinese Medicine, Nanchang, China
- Zhencai Chen: Department of Psychology, Jiangxi University of Chinese Medicine, Nanchang, China
19. Banerjee S, Chua AY. Calling out fake online reviews through robust epistemic belief. Information & Management 2021. [DOI: 10.1016/j.im.2021.103445]
20. Namba S, Matsui H, Zloteanu M. Distinct temporal features of genuine and deliberate facial expressions of surprise. Sci Rep 2021; 11:3362. [PMID: 33564091] [PMCID: PMC7873236] [DOI: 10.1038/s41598-021-83077-4]
Abstract
The physical properties of genuine and deliberate facial expressions remain elusive. This study focuses on observable dynamic differences between genuine and deliberate expressions of surprise based on the temporal structure of facial parts during emotional expression. Facial expressions of surprise were elicited using multiple methods and video recorded: senders were filmed as they experienced genuine surprise in response to a jack-in-the-box (Genuine), other senders were asked to produce deliberate surprise with no preparation (Improvised), by mimicking the expression of another (External), or by reproducing the surprised face after having first experienced genuine surprise (Rehearsed). A total of 127 videos were analyzed, and moment-to-moment movements of eyelids and eyebrows were annotated with deep learning-based tracking software. Results showed that all surprise displays were mainly composed of raising eyebrows and eyelids movements. Genuine displays included horizontal movement in the left part of the face, but also showed the weakest movement coupling of all conditions. External displays had faster eyebrow and eyelid movement, while Improvised displays showed the strongest coupling of movements. The findings demonstrate the importance of dynamic information in the encoding of genuine and deliberate expressions of surprise and the importance of the production method employed in research.
Affiliation(s)
- Shushi Namba: Psychological Process Team, BZP, Robotics Project, RIKEN, Kyoto, 6190288, Japan
- Hiroshi Matsui: Center for Human-Nature, Artificial Intelligence, and Neuroscience, Hokkaido University, Hokkaido, 0600808, Japan
- Mircea Zloteanu: Department of Criminology and Sociology, Kingston University London, Kingston Upon Thames, KT1 2EE, UK
21. Yin M, Zhang J, Shu D, Liu D. The relevant resting-state brain activity of ecological microexpression recognition test (EMERT). PLoS One 2020; 15:e0241681. [PMID: 33351809] [PMCID: PMC7755225] [DOI: 10.1371/journal.pone.0241681]
Abstract
Zhang et al. (2017) established the ecological microexpression recognition test (EMERT), but it used only white models' expressions as microexpressions and backgrounds, and no research had examined its relevant brain activity. The current study used white, black, and yellow models' expressions as microexpressions and backgrounds to improve the ecological validity of the EMERT materials, and it used eyes-closed and eyes-open resting-state fMRI to detect the relevant brain activity of the EMERT for the first time. The results showed: (1) Two new recapitulative indexes of the EMERT were adopted, namely microexpression M and microexpression SD. The participants could effectively identify almost all the microexpressions, and each microexpression type had a significant background effect. The EMERT had good retest reliability and calibration validity. (2) ALFFs (amplitudes of low-frequency fluctuations) in both the eyes-closed and eyes-open resting states, as well as the ALFF difference between them, could predict microexpression M. The relevant brain areas of microexpression M were some frontal lobes, insula, cingulate cortex, hippocampus, parietal lobe, caudate nucleus, thalamus, amygdala, occipital lobe, fusiform, temporal lobe, cerebellum, and vermis. (3) ALFFs in both the eyes-closed and eyes-open resting states, as well as the ALFF difference, could predict microexpression SD, and the ALFF difference was more predictive. The relevant brain areas of microexpression SD were some frontal lobes, insula, cingulate cortex, cuneus, amygdala, fusiform, occipital lobe, parietal lobe, precuneus, caudate lobe, putamen lobe, thalamus, temporal lobe, cerebellum, and vermis. (4) There were many similarities and some differences in the relevant brain areas between microexpression M and SD. All these brain areas can be trained to enhance ecological microexpression recognition ability.
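At their core, the "ALFF predicts behavior" analyses reported here are regressions of a behavioral score on region-wise ALFF values. Below is a minimal cross-validated sketch on simulated data; the number of regions, the ridge model, and the scoring are placeholders rather than the authors' statistical procedure.

```python
import numpy as np
from sklearn.linear_model import RidgeCV
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(42)
n_subjects, n_regions = 60, 90        # e.g., 90 atlas regions (assumed)

# Simulated ALFF values per subject and region, plus a behavioral score
# (e.g., microexpression M) that depends weakly on a few regions.
alff = rng.standard_normal((n_subjects, n_regions))
weights = np.zeros(n_regions)
weights[[5, 17, 42]] = [0.8, -0.6, 0.5]
micro_m = alff @ weights + 0.5 * rng.standard_normal(n_subjects)

# Cross-validated prediction of the behavioral score from region-wise ALFF.
model = RidgeCV(alphas=np.logspace(-2, 3, 20))
r2 = cross_val_score(model, alff, micro_m, cv=5, scoring="r2")
print(f"cross-validated R^2: {r2.mean():.2f} +/- {r2.std():.2f}")
```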
Affiliation(s)
- Ming Yin: Jiangsu Police Institute, Nanjing, China
- Jianxin Zhang: School of Humanities, Jiangnan University, Wuxi, China
- Deming Shu: School of Education, Soochow University, Soochow, China
- Dianzhi Liu: School of Education, Soochow University, Soochow, China
22. Zloteanu M, Krumhuber EG, Richardson DC. Acting Surprised: Comparing Perceptions of Different Dynamic Deliberate Expressions. Journal of Nonverbal Behavior 2020. [DOI: 10.1007/s10919-020-00349-9]
Abstract
People are accurate at classifying emotions from facial expressions but much poorer at determining if such expressions are spontaneously felt or deliberately posed. We explored if the method used by senders to produce an expression influences the decoder’s ability to discriminate authenticity, drawing inspiration from two well-known acting techniques: the Stanislavski (internal) and Mimic method (external). We compared spontaneous surprise expressions in response to a jack-in-the-box (genuine condition), to posed displays of senders who either focused on their past affective state (internal condition) or the outward expression (external condition). Although decoders performed better than chance at discriminating the authenticity of all expressions, their accuracy was lower in classifying external surprise compared to internal surprise. Decoders also found it harder to discriminate external surprise from spontaneous surprise and were less confident in their decisions, perceiving these to be similarly intense but less genuine-looking. The findings suggest that senders are capable of voluntarily producing genuine-looking expressions of emotions with minimal effort, especially by mimicking a genuine expression. Implications for research on emotion recognition are discussed.
23.
Abstract
Objective: Investigation of deception within psychotherapy has recently gained attention. Micro expression training software has been suggested to improve deception detection and enhance emotion recognition. The current study examined the effects of micro expression training software on deception detection and emotion recognition. Method: The current study recruited 23 counseling psychology graduate students and 32 undergraduate students and randomly assigned them to either a training group or a control group. The training and control groups received the same materials and measures pre- and post-test, differing only in that the training group received the micro expression training. Results: Findings revealed no significant difference in deception detection between the control group and the training group. The training did, however, produce significant improvement in emotion recognition, specifically for contempt, anger, and fear. State and trait anxiety did not predict deception detection, nor did they mediate emotion recognition. No significant difference was found between graduate trainees and undergraduate students. Conclusion: The use of the F.A.C.E. software was not effective for increasing deception detection but did serve to increase emotion recognition. Implications for training, practice, and research are discussed.
Affiliation(s)
- Drew A Curtis: Department of Psychology and Sociology, Angelo State University, San Angelo, TX, USA
24. Zhang J, Yin M, Shu D, Liu D. The Establishment of Pseudorandom Ecological Microexpression Recognition Test (PREMERT) and Its Relevant Resting-State Brain Activity. Front Hum Neurosci 2020; 14:281. [PMID: 32848665] [PMCID: PMC7406786] [DOI: 10.3389/fnhum.2020.00281]
Abstract
The EMERT (ecological microexpression recognition test) by Zhang et al. (2017) used a between-subjects Latin square block design for backgrounds; therefore, participants' scores were not comparable. The current study used a within-subject pseudorandom design for backgrounds to improve the EMERT into the PREMERT (pseudorandom EMERT) and used eyes-closed and eyes-open resting-state functional magnetic resonance imaging to detect the relevant brain activity of the PREMERT for the first time. The results showed (1) two new recapitulative indexes of the PREMERT were adopted, namely microexpression M and microexpression SD. Using the pseudorandom design, the participants could effectively identify almost all the microexpressions, and each microexpression type had a significant background effect. The PREMERT had good split-half reliability, parallel-forms reliability, criterion validity, and ecological validity. Therefore, it could stably and efficiently detect the participants' microexpression recognition abilities. Because of its pseudorandom design, all participants took the same test, so their scores could be compared with each other. (2) Amplitudes of low-frequency fluctuations (ALFF; 0.01-0.1 Hz) in both the eyes-closed and eyes-open resting states, as well as the ALFF difference between them, could predict microexpression M, and the ALFF difference was less predictive. The relevant resting-state brain areas of microexpression M were some frontal lobes, insula, cingulate cortex, hippocampus, amygdala, fusiform gyrus, parietal lobe, caudate nucleus, precuneus, thalamus, putamen, temporal lobe, and cerebellum. (3) ALFFs in both the eyes-closed and eyes-open resting states, as well as the ALFF difference, could predict microexpression SD, and the ALFF difference was more predictive. The relevant resting-state brain areas of microexpression SD were some frontal lobes, central anterior gyrus, supplementary motor area, insula, hippocampus, amygdala, cuneus, occipital lobe, fusiform gyrus, parietal lobe, caudate nucleus, pallidum, putamen, thalamus, temporal lobe, and cerebellum. (4) Between microexpression M and SD there were many similar relevant resting-state brain areas, such as areas for expression recognition, microexpression consciousness and attention, and the change from expression backgrounds to microexpression, and some different relevant resting-state brain areas, such as the precuneus, insula, and pallidum. The ALFF difference was more sensitive to PREMERT fluctuations.
Affiliation(s)
- Jianxin Zhang: School of Humanities, Jiangnan University, Wuxi, China
- Ming Yin: Jiangsu Police Institute, Nanjing, China
- Deming Shu: School of Education, Soochow University, Suzhou, China
- Dianzhi Liu: School of Education, Soochow University, Suzhou, China
25. Küster D, Krumhuber EG, Steinert L, Ahuja A, Baker M, Schultz T. Opportunities and Challenges for Using Automatic Human Affect Analysis in Consumer Research. Front Neurosci 2020; 14:400. [PMID: 32410956] [PMCID: PMC7199103] [DOI: 10.3389/fnins.2020.00400]
Abstract
The ability to automatically assess emotional responses via contact-free video recording taps into a rapidly growing market aimed at predicting consumer choices. If consumer attention and engagement are measurable in a reliable and accessible manner, relevant marketing decisions could be informed by objective data. Although significant advances have been made in automatic affect recognition, several practical and theoretical issues remain largely unresolved. These concern the lack of cross-system validation, a historical emphasis on posed over spontaneous expressions, and more fundamental issues regarding the weak association between subjective experience and facial expressions. To address these limitations, the present paper argues that extant commercial and free facial expression classifiers should be rigorously validated in cross-system research. Furthermore, academics and practitioners must better leverage fine-grained emotional response dynamics, with stronger emphasis on understanding naturally occurring spontaneous expressions in naturalistic choice settings. We posit that applied consumer research might be better situated to examine facial behavior in socio-emotional contexts rather than in decontextualized laboratory studies, and we highlight how automatic human affect analysis (AHAA) can be successfully employed in this context. Also, facial activity should be considered less as a single outcome variable and more as a starting point for further analyses. Implications of this approach and potential obstacles that need to be overcome are discussed within the context of consumer research.
Collapse
Affiliation(s)
- Dennis Küster
- Cognitive Systems Lab, Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany; Department of Psychology and Methods, Jacobs University Bremen, Bremen, Germany
| | - Eva G Krumhuber
- Department of Experimental Psychology, University College London, London, United Kingdom
| | - Lars Steinert
- Cognitive Systems Lab, Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany
| | - Anuj Ahuja
- Maharaja Surajmal Institute of Technology, Guru Gobind Singh Indraprastha University, New Delhi, India
| | - Marc Baker
- Centre for Situated Action and Communication, Department of Psychology, University of Portsmouth, Portsmouth, United Kingdom
| | - Tanja Schultz
- Cognitive Systems Lab, Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany
| |
Collapse
|
26
|
Levine EE, Wald KA. Fibbing about your feelings: How feigning happiness in the face of personal hardship affects trust. ORGANIZATIONAL BEHAVIOR AND HUMAN DECISION PROCESSES 2020. [DOI: 10.1016/j.obhdp.2019.05.004] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
|
27
|
Anderson CL, Chen S, Ayduk Ö. When does changing emotions harm authenticity? Distinct reappraisal strategies differentially impact subjective and observer-rated authenticity. SELF AND IDENTITY 2019. [DOI: 10.1080/15298868.2019.1645041] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Affiliation(s)
- Craig L. Anderson
- Department of Psychology, University of California, Berkeley, CA, USA
| | - Serena Chen
- Department of Psychology, University of California, Berkeley, CA, USA
| | - Özlem Ayduk
- Department of Psychology, University of California, Berkeley, CA, USA
| |
Collapse
|
28
|
Kihara K, Takeda Y. The Role of Low-Spatial Frequency Components in the Processing of Deceptive Faces: A Study Using Artificial Face Models. Front Psychol 2019; 10:1468. [PMID: 31297078 PMCID: PMC6607955 DOI: 10.3389/fpsyg.2019.01468] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/07/2018] [Accepted: 06/11/2019] [Indexed: 11/13/2022] Open
Abstract
Interpreting another's true emotion is important for social communication, even in the face of deceptive facial cues. Because spatial frequency components provide important clues for recognizing facial expressions, we investigated how spatial frequency information from deceptive faces is used to interpret true emotion. We conducted two tasks: a face-generating experiment in which participants were asked to generate deceptive and genuine faces by tuning the intensity of happy and angry expressions (Experiment 1), and a face-classification task in which participants had to classify presented faces as either deceptive or genuine (Experiment 2). Low- and high-spatial-frequency (LSF and HSF) components were varied independently. The results showed that deceptive happiness (i.e., a happy expression with anger as the hidden emotion) involved different intensities in the LSF and HSF components. These results suggest that we can identify hidden anger by perceiving the imbalance in emotional intensity between the LSF and HSF information contained in deceptive faces.
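The low- versus high-spatial-frequency manipulation described here can be approximated with a standard Gaussian low-pass decomposition. The sketch below is a generic illustration on a synthetic image; the sigma value is an arbitrary assumption, not the cutoff used by the authors.

```python
# Generic LSF/HSF split of an image via Gaussian low-pass filtering (illustrative values).
import numpy as np
from scipy.ndimage import gaussian_filter

def split_spatial_frequencies(image, sigma=6.0):
    """Return (lsf, hsf): a Gaussian-blurred copy and the residual fine detail."""
    img = np.asarray(image, dtype=float)
    lsf = gaussian_filter(img, sigma=sigma)   # coarse luminance structure
    hsf = img - lsf                           # edges and fine detail
    return lsf, hsf

rng = np.random.default_rng(1)
face = rng.random((128, 128))                 # synthetic stand-in for a face image
lsf, hsf = split_spatial_frequencies(face)
assert np.allclose(lsf + hsf, face)           # the two components sum back to the original
```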
Collapse
Affiliation(s)
- Ken Kihara
- Automotive Human Factors Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan
| | - Yuji Takeda
- Automotive Human Factors Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan
| |
Collapse
|
29
|
Matsumoto D, Hwang HC. Commentary: Electrophysiological Evidence Reveals Differences between the Recognition of Microexpressions and Macroexpressions. Front Psychol 2019; 10:1293. [PMID: 31263437 PMCID: PMC6584814 DOI: 10.3389/fpsyg.2019.01293] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2018] [Accepted: 05/16/2019] [Indexed: 11/25/2022] Open
Affiliation(s)
- David Matsumoto
- Department of Psychology, San Francisco State University, San Francisco, CA, United States
- Humintell, El Cerrito, CA, United States
- *Correspondence: David Matsumoto
| | | |
Collapse
|
30
|
McLennan K, Mahmoud M. Development of an Automated Pain Facial Expression Detection System for Sheep ( Ovis Aries). Animals (Basel) 2019; 9:E196. [PMID: 31027279 PMCID: PMC6523241 DOI: 10.3390/ani9040196] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2019] [Revised: 04/15/2019] [Accepted: 04/22/2019] [Indexed: 12/02/2022] Open
Abstract
The use of technology to optimize the production and management of each individual animal is becoming key to good farming. There is a need for the real-time systematic detection and control of disease in animals in order to limit the impact on animal welfare and food supply. Diseases such as footrot and mastitis cause significant pain in sheep, and so early detection is vital to ensuring effective treatment and preventing the spread across the flock. Facial expression scoring to assess pain in humans and non-humans is now well utilized, and the Sheep Pain Facial Expression Scale (SPFES) is a tool that can reliably detect pain in this species. The SPFES currently requires manual scoring, leaving it open to observer bias, and it is also time-consuming. The ability of a computer to automatically detect and direct a producer as to where assessment and treatment are needed would increase the chances of controlling the spread of disease. It would also aid in the prevention of resistance across the individual, farm, and landscape at both national and international levels. In this paper, we present our framework for an integrated novel system based on techniques originally applied for human facial expression recognition that could be implemented at the farm level. To the authors' knowledge, this is the first time that this technology has been applied to sheep to assess pain.
Collapse
Affiliation(s)
- Krista McLennan
- Department of Biological Sciences, University of Chester, Parkgate Rd, Chester CH1 4BJ, UK.
| | - Marwa Mahmoud
- Department of Computer Science and Technology, University of Cambridge, 15 JJ Thomson Avenue, Cambridge CB3 0FD, UK.
| |
Collapse
|
31
|
Yin M, Tian L, Hua W, Zhang J, Liu D. The Establishment of Weak Ecological Microexpressions Recognition Test (WEMERT): An Extension on EMERT. Front Psychol 2019; 10:275. [PMID: 30890973 PMCID: PMC6411658 DOI: 10.3389/fpsyg.2019.00275] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/05/2018] [Accepted: 01/28/2019] [Indexed: 12/01/2022] Open
Abstract
The JACBART (Japanese and Caucasian Brief Affect Recognition Test) examines microexpressions only against a neutral expression background, so its ecological validity is limited. The EMERT (Ecological MicroExpressions Recognition Test) examines six microexpressions under seven backgrounds but does not examine expression intensity. In the current study, a weak ecological microexpression recognition test was established to examine the recognition of six weak microexpressions against all seven high-intensity basic expressions. The results showed that: (1) the test had good retest reliability, criterion validity, and ecological validity; (2) the reliability and validity analyses revealed many characteristics of weak microexpression recognition. Training effects appeared for some weak microexpressions. Weak microexpression recognition was generally positively related to JACBART microexpression recognition but generally negatively related to the recognition of similar ordinary expressions. The main effect of background was significant for all weak microexpressions, and pairwise comparisons showed a wide range of differences between weak microexpressions under different backgrounds. The standard deviation of recognition accuracy across backgrounds was used to index the fluctuation of weak microexpression recognition, and weak microexpression recognition fluctuated considerably. (3) Openness and its subdimensions (O1, O2, O3, and O5) were generally positively related to the recognition of some weak microexpressions, except that O1 was significantly negatively related to surprise under the neutral background. O1 was positively related to the standard deviation of weak anger recognition accuracies, and O6 was negatively related to the standard deviation of weak happiness recognition accuracies in the first measurement.
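Two of the psychometric quantities mentioned above can be illustrated with a short sketch: split-half reliability with the Spearman-Brown correction, and the across-background standard deviation used as a fluctuation index. The accuracy matrices below are synthetic and their sizes are arbitrary assumptions.

```python
# Illustrative reliability and fluctuation indexes on synthetic accuracy data.
import numpy as np

rng = np.random.default_rng(3)
acc = rng.random((30, 14))                    # 30 hypothetical participants x 14 items

odd = acc[:, 0::2].mean(axis=1)               # odd-item half score per participant
even = acc[:, 1::2].mean(axis=1)              # even-item half score per participant
r_half = np.corrcoef(odd, even)[0, 1]
split_half = 2 * r_half / (1 + r_half)        # Spearman-Brown corrected reliability
print(f"split-half reliability = {split_half:.2f}")   # near zero here, since data are random

acc_by_background = rng.random((30, 7))       # accuracy across 7 hypothetical backgrounds
fluctuation = acc_by_background.std(axis=1, ddof=1)   # per-participant fluctuation index
print(f"mean fluctuation = {fluctuation.mean():.2f}")
```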
Collapse
Affiliation(s)
- Ming Yin
- Jiangsu Police Institute, Nanjing, China
| | | | - Wei Hua
- School of Education, Soochow University, Soochow, China
| | - Jianxin Zhang
- School of Humanities, Jiangnan University, Wuxi, China
| | - Dianzhi Liu
- School of Education, Soochow University, Soochow, China
| |
Collapse
|
32
|
Nortje A, Tredoux C. How good are we at detecting deception? A review of current techniques and theories. SOUTH AFRICAN JOURNAL OF PSYCHOLOGY 2019. [DOI: 10.1177/0081246318822953] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
The task of discerning truth from untruth has long interested psychologists; however, methods for doing so accurately remain elusive. In this article, we provide an overview and evaluation of methods of detecting deception used in the laboratory and the field. We identify and discuss three broad approaches to detecting deception: measurement of non-verbal behaviour, verbal interview methods, and statement evaluation by humans and computers. Part of the problem in devising good methods for detecting deception is the absence of a sound understanding of deception in human lives. We thus consider three theories of deception – leakage, reality monitoring, and truth-default – and conclude that although promising, they do not yet provide an adequate foundation. We review 10 extant methods of detecting deception in the second part of the article, focusing at greatest length on the most widely used method in South Africa, the polygraph test of deception. Our conclusion is that non-verbal methods that work by inducing anxiety in interviewees are fundamentally flawed, and that we ought to move away from such methods. Alternate methods of detecting deception, including statement analysis, are considered, but ultimately our view is that there are currently no methods sufficiently accurate for practitioners to rely on. We suspect that a precondition for developing such measures is a coherent and validated theory.
Collapse
Affiliation(s)
- Alicia Nortje
- Department of Psychology, University of Cape Town, South Africa
| | - Colin Tredoux
- Department of Psychology, University of Cape Town, South Africa
| |
Collapse
|
33
|
Matsumoto D, Hwang HC. Microexpressions Differentiate Truths From Lies About Future Malicious Intent. Front Psychol 2018; 9:2545. [PMID: 30618966 PMCID: PMC6305322 DOI: 10.3389/fpsyg.2018.02545] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2018] [Accepted: 11/28/2018] [Indexed: 11/17/2022] Open
Abstract
The few previous studies testing whether or not microexpressions are indicators of deception have produced equivocal findings, which may have resulted from restrictive operationalizations of microexpression duration. In this study, facial expressions of emotion produced by community participants in an initial screening interview in a mock crime experiment were coded for occurrence and duration. Various expression durations were tested for whether they differentiated truthtellers from liars concerning their intent to commit a malicious act in the future. We operationalized microexpressions as expressions occurring for less than the empirically documented duration of spontaneously occurring, non-concealed, non-repressed facial expressions of emotion, that is, ≤0.50 s, and then more systematically as ≤0.40, ≤0.30, and ≤0.20 s. We also compared expressions occurring between 0.50 and 6.00 s and all expressions ≤6.00 s. Microexpressions of negative emotions occurring ≤0.40 and ≤0.50 s differentiated truthtellers and liars. Expressions of negative emotions occurring ≤6.00 s also differentiated truthtellers from liars, but this finding did not survive when expressions ≤1.00 s were filtered from the data. These findings provide the first systematic evidence for the existence of microexpressions at various durations and for their possible ability to differentiate truthtellers from liars about their intent to commit an act of malfeasance in the future.
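The duration-threshold logic described above reduces to filtering coded expressions by a cutoff and tallying them by group, as in the sketch below. The expression records are fabricated; only the ≤0.50 s cutoff is taken from the abstract.

```python
# Counting negative-emotion expressions at or below a duration cutoff, by group (fabricated data).
def count_micro(records, group, cutoff=0.50):
    """Number of coded expressions for `group` lasting no longer than `cutoff` seconds."""
    return sum(1 for duration, g in records if g == group and duration <= cutoff)

# (duration in seconds, group label) for hypothetical coded expressions
expressions = [(0.32, "liar"), (0.45, "liar"), (1.80, "liar"),
               (0.55, "truthteller"), (2.10, "truthteller"),
               (0.38, "liar"), (0.49, "truthteller")]

print(count_micro(expressions, "liar"))         # 3 expressions <= 0.50 s
print(count_micro(expressions, "truthteller"))  # 1 expression <= 0.50 s
```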
Collapse
Affiliation(s)
- David Matsumoto
- Department of Psychology, San Francisco State University, San Francisco, CA, United States
- Humintell, El Cerrito, CA, United States
| | - Hyisung C. Hwang
- Department of Psychology, San Francisco State University, San Francisco, CA, United States
- Humintell, El Cerrito, CA, United States
| |
Collapse
|
34
|
Zeng X, Wu Q, Zhang S, Liu Z, Zhou Q, Zhang M. A False Trail to Follow: Differential Effects of the Facial Feedback Signals From the Upper and Lower Face on the Recognition of Micro-Expressions. Front Psychol 2018; 9:2015. [PMID: 30405497 PMCID: PMC6208096 DOI: 10.3389/fpsyg.2018.02015] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2018] [Accepted: 10/01/2018] [Indexed: 01/24/2023] Open
Abstract
Micro-expressions, as fleeting facial expressions, are very important for judging people’s true emotions and can thus provide an essential behavioral cue for detecting lies and dangerous demeanor. From embodied accounts of cognition, we derived a novel hypothesis that facial feedback from the upper and lower facial regions has differential effects on micro-expression recognition. This hypothesis was tested and supported across three studies. Specifically, the results of Study 1 showed that people became better judges of intense micro-expressions with a duration of 450 ms when the facial feedback from the upper face was enhanced via a restricting gel. The results of Study 2 showed that the recognition accuracy of subtle micro-expressions was significantly impaired under all duration conditions (50, 150, 333, and 450 ms) when facial feedback from the lower face was enhanced. In addition, the results of Study 3 revealed that blocking the facial feedback of the lower face significantly boosted the recognition accuracy of subtle and intense micro-expressions under both duration conditions (150 and 450 ms). Together, these results highlight the role of facial feedback in judging the subtle movements of micro-expressions.
Collapse
Affiliation(s)
- Xuemei Zeng
- Cognition and Human Behavior Key Laboratory of Hunan Province, Department of Psychology, Hunan Normal University, Changsha, China
| | - Qi Wu
- Cognition and Human Behavior Key Laboratory of Hunan Province, Department of Psychology, Hunan Normal University, Changsha, China
| | - Siwei Zhang
- Cognition and Human Behavior Key Laboratory of Hunan Province, Department of Psychology, Hunan Normal University, Changsha, China
| | - Zheying Liu
- Cognition and Human Behavior Key Laboratory of Hunan Province, Department of Psychology, Hunan Normal University, Changsha, China
| | - Qing Zhou
- Cognition and Human Behavior Key Laboratory of Hunan Province, Department of Psychology, Hunan Normal University, Changsha, China
| | - Meishan Zhang
- Cognition and Human Behavior Key Laboratory of Hunan Province, Department of Psychology, Hunan Normal University, Changsha, China
| |
Collapse
|
35
|
Reed LI, Stratton R, Rambeas JD. Face Value and Cheap Talk: How Smiles Can Increase or Decrease the Credibility of Our Words. EVOLUTIONARY PSYCHOLOGY 2018; 16:1474704918814400. [PMID: 30497296 PMCID: PMC10480876 DOI: 10.1177/1474704918814400] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/15/2018] [Accepted: 10/30/2018] [Indexed: 11/17/2022] Open
Abstract
How do our facial expressions affect the credibility of our words? We test whether smiles, either uninhibited or inhibited, affect the credibility of a written statement. Participants viewed a confederate partner displaying a neutral expression, a non-Duchenne smile, a Duchenne smile, or a controlled smile, paired with a written statement. Participants then made a behavioral decision based on how credible they perceived the confederate's statement to be. Experiment 1 found that, compared to a neutral expression, participants were more likely to believe the confederate's statement when it was paired with a deliberate Duchenne smile and less likely to believe it when it was paired with a deliberate controlled smile. Experiment 2 replicated these findings with spontaneously emitted expressions. These findings provide evidence that uninhibited facial expressions can increase the credibility of accompanying statements, while inhibited ones can decrease it.
Collapse
Affiliation(s)
- Lawrence Ian Reed
- Department of Psychology, New York University, New York, NY, USA
- Department of Psychiatry, McLean Hospital, Harvard Medical School, Boston, MA, USA
| | - Rachel Stratton
- Department of Psychology, New York University, New York, NY, USA
| | | |
Collapse
|
36
|
Burgoon JK. Microexpressions Are Not the Best Way to Catch a Liar. Front Psychol 2018; 9:1672. [PMID: 30294288 PMCID: PMC6158306 DOI: 10.3389/fpsyg.2018.01672] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2018] [Accepted: 08/20/2018] [Indexed: 11/13/2022] Open
Affiliation(s)
- Judee K Burgoon
- Center for the Management of Information, University of Arizona, Tucson, AZ, United States
| |
Collapse
|
37
|
Zloteanu M, Krumhuber EG, Richardson DC. Detecting Genuine and Deliberate Displays of Surprise in Static and Dynamic Faces. Front Psychol 2018; 9:1184. [PMID: 30042717 PMCID: PMC6048358 DOI: 10.3389/fpsyg.2018.01184] [Citation(s) in RCA: 24] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2018] [Accepted: 06/19/2018] [Indexed: 11/13/2022] Open
Abstract
People are good at recognizing emotions from facial expressions, but less accurate at determining the authenticity of such expressions. We investigated whether this depends upon the technique that senders use to produce deliberate expressions, and on decoders seeing these in a dynamic or static format. Senders were filmed as they experienced genuine surprise in response to a jack-in-the-box (Genuine). Other senders faked surprise with no preparation (Improvised) or after having first experienced genuine surprise themselves (Rehearsed). Decoders rated the genuineness and intensity of these expressions, and the confidence of their judgment. It was found that both expression type and presentation format impacted decoder perception and accurate discrimination. Genuine surprise achieved the highest ratings of genuineness, intensity, and judgmental confidence (dynamic only), and was fairly accurately discriminated from deliberate surprise expressions. In line with our predictions, Rehearsed expressions were perceived as more genuine (in dynamic presentation), whereas Improvised were seen as more intense (in static presentation). However, both were poorly discriminated as not being genuine. In general, dynamic stimuli improved authenticity discrimination accuracy and perceptual differences between expressions. While decoders could perceive subtle differences between different expressions (especially from dynamic displays), they were not adept at detecting if these were genuine or deliberate. We argue that senders are capable of producing genuine-looking expressions of surprise, enough to fool others as to their veracity.
Collapse
Affiliation(s)
- Mircea Zloteanu
- Department of Computer Science, University College London, London, United Kingdom; Department of Experimental Psychology, University College London, London, United Kingdom
| | - Eva G Krumhuber
- Department of Experimental Psychology, University College London, London, United Kingdom
| | - Daniel C Richardson
- Department of Experimental Psychology, University College London, London, United Kingdom
| |
Collapse
|
38
|
Burgoon JK. Separating the Wheat From the Chaff: Guidance From New Technologies for Detecting Deception in the Courtroom. Front Psychiatry 2018; 9:774. [PMID: 30705646 PMCID: PMC6344437 DOI: 10.3389/fpsyt.2018.00774] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/13/2018] [Accepted: 12/24/2018] [Indexed: 11/17/2022] Open
Affiliation(s)
- Judee K Burgoon
- Center for the Management of Information, University of Arizona, Tucson, AZ, United States
| |
Collapse
|
39
|
Arya R. The Animal Surfaces: The Gaping Mouth in Francis Bacon’s Work. VISUAL ANTHROPOLOGY 2017. [DOI: 10.1080/08949468.2017.1333363] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
40
|
Byrne RW, Cartmill E, Genty E, Graham KE, Hobaiter C, Tanner J. Great ape gestures: intentional communication with a rich set of innate signals. Anim Cogn 2017; 20:755-769. [PMID: 28502063 PMCID: PMC5486474 DOI: 10.1007/s10071-017-1096-4] [Citation(s) in RCA: 99] [Impact Index Per Article: 12.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2016] [Revised: 04/13/2017] [Accepted: 05/02/2017] [Indexed: 02/07/2023]
Abstract
Great apes give gestures deliberately and voluntarily, in order to influence particular target audiences, whose direction of attention they take into account when choosing which type of gesture to use. These facts make the study of ape gesture directly relevant to understanding the evolutionary precursors of human language; here we present an assessment of ape gesture from that perspective, focusing on the work of the "St Andrews Group" of researchers. Intended meanings of ape gestures are relatively few and simple. As with human words, ape gestures often have several distinct meanings, which are effectively disambiguated by behavioural context. Compared to the signalling of most other animals, great ape gestural repertoires are large. Because of this, and the relatively small number of intended meanings they achieve, ape gestures are redundant, with extensive overlaps in meaning. The great majority of gestures are innate, in the sense that the species' biological inheritance includes the potential to develop each gestural form and use it for a specific range of purposes. Moreover, the phylogenetic origin of many gestures is relatively old, since gestures are extensively shared between different genera in the great ape family. Acquisition of an adult repertoire is a process of first exploring the innate species potential for many gestures and then gradual restriction to a final (active) repertoire that is much smaller. No evidence of syntactic structure has yet been detected.
Collapse
Affiliation(s)
- R W Byrne
- Centre for Social Learning and Cognitive Evolution, School of Psychology and Neuroscience, University of St Andrews, St Andrews, Fife, KY16 9JP, UK.
| | - E Cartmill
- Centre for Social Learning and Cognitive Evolution, School of Psychology and Neuroscience, University of St Andrews, St Andrews, Fife, KY16 9JP, UK
- Department of Anthropology, University of California, Los Angeles, 375 Portola Plaza, 341 Haines Hall, Box 951553, Los Angeles, CA, 90095, USA
| | - E Genty
- Centre for Social Learning and Cognitive Evolution, School of Psychology and Neuroscience, University of St Andrews, St Andrews, Fife, KY16 9JP, UK
- Laboratoire de cognition comparée, Institut de Biologie, Université de Neuchâtel, Rue Emile-Argand 11, 2000, Neuchâtel, Switzerland
| | - K E Graham
- Centre for Social Learning and Cognitive Evolution, School of Psychology and Neuroscience, University of St Andrews, St Andrews, Fife, KY16 9JP, UK
| | - C Hobaiter
- Centre for Social Learning and Cognitive Evolution, School of Psychology and Neuroscience, University of St Andrews, St Andrews, Fife, KY16 9JP, UK
| | - J Tanner
- Centre for Social Learning and Cognitive Evolution, School of Psychology and Neuroscience, University of St Andrews, St Andrews, Fife, KY16 9JP, UK
| |
Collapse
|
41
|
ten Brinke L, Porter S, Korva N, Fowler K, Lilienfeld SO, Patrick CJ. An Examination of the Communication Styles Associated with Psychopathy and Their Influence on Observer Impressions. JOURNAL OF NONVERBAL BEHAVIOR 2017. [DOI: 10.1007/s10919-017-0252-5] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
|
42
|
Sariyanidi E, Gunes H, Cavallaro A. Learning Bases of Activity for Facial Expression Recognition. IEEE TRANSACTIONS ON IMAGE PROCESSING : A PUBLICATION OF THE IEEE SIGNAL PROCESSING SOCIETY 2017; 26:1965-1978. [PMID: 28166497 DOI: 10.1109/tip.2017.2662237] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
The extraction of descriptive features from sequences of faces is a fundamental problem in facial expression analysis. Facial expressions are represented by psychologists as a combination of elementary movements known as action units: each movement is localised and its intensity is specified with a score that is small when the movement is subtle and large when the movement is pronounced. Inspired by this approach, we propose a novel data-driven feature extraction framework that represents facial expression variations as a linear combination of localised basis functions, whose coefficients are proportional to movement intensity. We show that the linear basis functions required by this framework can be obtained by training a sparse linear model with Gabor phase shifts computed from facial videos. The proposed framework addresses generalisation issues that are not addressed by existing learnt representations, and achieves, with the same learning parameters, state-of-the-art results in recognising both posed expressions and spontaneous micro-expressions. This performance is confirmed even when the data used to train the model differ from test data in terms of the intensity of facial movements and frame rate.
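As a loose analogue of the learnt-basis idea described here, the sketch below fits scikit-learn's generic DictionaryLearning to random data standing in for Gabor phase-shift features. It illustrates sparse linear coding in general, not the authors' training procedure; the matrix sizes and hyperparameters are arbitrary assumptions.

```python
# Generic sparse dictionary learning on stand-in features (illustrative only).
import numpy as np
from sklearn.decomposition import DictionaryLearning

rng = np.random.default_rng(4)
X = rng.standard_normal((200, 64))            # 200 frames x 64 stand-in phase-shift features

dico = DictionaryLearning(n_components=16, alpha=1.0, max_iter=200, random_state=0)
codes = dico.fit_transform(X)                 # sparse coefficients, one row per frame
basis = dico.components_                      # learned basis functions (rows)
print(codes.shape, basis.shape)               # (200, 16) and (16, 64)
print((codes == 0).mean())                    # fraction of exactly-zero coefficients (sparsity)
```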
Collapse
|
43
|
Serras Pereira M, Cozijn R, Postma E, Shahid S, Swerts M. Comparing a Perceptual and an Automated Vision-Based Method for Lie Detection in Younger Children. Front Psychol 2016; 7:1936. [PMID: 28018271 PMCID: PMC5149550 DOI: 10.3389/fpsyg.2016.01936] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2016] [Accepted: 11/25/2016] [Indexed: 11/13/2022] Open
Abstract
The present study investigates how easily it can be detected whether a child is being truthful or not in a game situation, and it explores the cue validity of bodily movements for this type of classification. To achieve this, we introduce an innovative methodology – the combination of perception studies (in which eye-tracking technology is used) and automated movement analysis. Film fragments from truthful and deceptive children were shown to human judges who were given the task of deciding whether the recorded child was being truthful or not. Results reveal that judges are able to accurately distinguish truthful clips from lying clips in both perception studies. Even though the automated movement analysis for overall and specific body regions did not yield significant differences between the experimental conditions, we did find a positive correlation between the amount of movement in a child and the perception of lies, i.e., the more movement the children exhibited during a clip, the higher the chance that the clip was perceived as a lie. The eye-tracking study revealed that, even when there is movement in different body regions, judges tend to focus their attention mainly on the face region. This is the first study to compare a perceptual and an automated method for the detection of deceptive behavior in children whose data have been elicited through an ecologically valid paradigm.
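One common way to quantify the overall amount of movement in a clip, in the spirit of the automated movement analysis referenced above, is the mean absolute frame-to-frame pixel difference. The sketch below applies it to synthetic frames; it is a generic stand-in, not the authors' tool.

```python
# Generic movement quantification by frame differencing (synthetic frames).
import numpy as np

def movement_amount(frames):
    """Average absolute intensity change between consecutive grayscale frames."""
    frames = np.asarray(frames, dtype=float)
    return np.mean(np.abs(np.diff(frames, axis=0)))

rng = np.random.default_rng(5)
clip = rng.random((100, 48, 64))              # 100 synthetic 48 x 64 frames
print(movement_amount(clip))
# On real data, the reported association would correspond to something like
# np.corrcoef(per_clip_movement, per_clip_lie_judgments)[0, 1] > 0.
```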
Collapse
Affiliation(s)
- Mariana Serras Pereira
- Tilburg Center for Cognition and Communication, Tilburg University, Tilburg, Netherlands
- *Correspondence: Mariana Serras Pereira,
| | - Reinier Cozijn
- Tilburg Center for Cognition and Communication, Tilburg University, Tilburg, Netherlands
| | - Eric Postma
- Tilburg Center for Cognition and Communication, Tilburg University, Tilburg, Netherlands
| | - Suleman Shahid
- Tilburg Center for Cognition and Communication, Tilburg University, Tilburg, Netherlands
- Department of Computer Science, Lahore University of Management Sciences, Lahore, Pakistan
| | - Marc Swerts
- Tilburg Center for Cognition and Communication, Tilburg University, Tilburg, Netherlands
| |
Collapse
|
44
|
Twyman NW, Proudfoot JG, Schuetzler RM, Elkins AC, Derrick DC. Robustness of Multiple Indicators in Automated Screening Systems for Deception Detection. J MANAGE INFORM SYST 2016. [DOI: 10.1080/07421222.2015.1138569] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/21/2022]
|
45
|
Iwasaki M, Noguchi Y. Hiding true emotions: micro-expressions in eyes retrospectively concealed by mouth movements. Sci Rep 2016; 6:22049. [PMID: 26915796 PMCID: PMC4768101 DOI: 10.1038/srep22049] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2015] [Accepted: 02/04/2016] [Indexed: 12/03/2022] Open
Abstract
When we encounter someone we dislike, we may momentarily display a reflexive disgust expression, only to follow up with a forced smile and greeting. Our daily lives are replete with a mixture of true and fake expressions. Nevertheless, are these fake expressions really effective at hiding our true emotions? Here we show that brief emotional changes in the eyes (micro-expressions, thought to reflect true emotions) can be successfully concealed by follow-up mouth movements (e.g. a smile). In the same manner as backward masking, mouth movements of a face inhibited conscious detection of all types of micro-expressions in that face, even when viewers paid full attention to the eye region. This masking works only in a backward direction, however, because no disrupting effect was observed when the mouth change preceded the eye change. These results provide scientific evidence for everyday behaviours like smiling to dissemble, and further clarify a major reason for the difficulty we face in discriminating genuine from fake emotional expressions.
Collapse
Affiliation(s)
- Miho Iwasaki
- Department of Psychology, Graduate School of Humanities, Kobe University, Japan
| | - Yasuki Noguchi
- Department of Psychology, Graduate School of Humanities, Kobe University, Japan
| |
Collapse
|
46
|
|
47
|
ten Brinke L, Adams GS. Saving face? When emotion displays during public apologies mitigate damage to organizational performance. ORGANIZATIONAL BEHAVIOR AND HUMAN DECISION PROCESSES 2015. [DOI: 10.1016/j.obhdp.2015.05.003] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
|
48
|
Niesten IJM, Nentjes L, Merckelbach H, Bernstein DP. Antisocial features and "faking bad": A critical note. INTERNATIONAL JOURNAL OF LAW AND PSYCHIATRY 2015; 41:34-42. [PMID: 25843907 DOI: 10.1016/j.ijlp.2015.03.005] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/04/2023]
Abstract
We critically review the literature on antisocial personality features and symptom fabrication (i.e., faking bad; e.g., malingering). A widespread assumption is that these constructs are intimately related. Some studies have, indeed, found that antisocial individuals score higher on instruments detecting faking bad, but others have been unable to replicate this pattern. In addition, studies exploring whether antisocial individuals are especially talented in faking bad have generally come up with null results. The notion of an intrinsic link between antisocial features and faking bad is difficult to test and research in this domain is sensitive to selection bias. We argue that research on faking bad would profit from further theoretical articulation. One topic that deserves scrutiny is how antisocial features affect the cognitive dissonance typically induced by faking bad. We illustrate our points with preliminary data and discuss their implications.
Collapse
Affiliation(s)
| | - Lieke Nentjes
- Forensic Psychology Section, Maastricht University, The Netherlands; Department of Clinical Psychology, University of Amsterdam, The Netherlands
| | | | - David P Bernstein
- Forensic Psychology Section, Maastricht University, The Netherlands; Forensic Psychiatric Center 'de Rooyse Wissel', The Netherlands
| |
Collapse
|
49
|
Lodder GMA, Scholte RHJ, Goossens L, Engels RCME, Verhagen M. Loneliness and the social monitoring system: Emotion recognition and eye gaze in a real-life conversation. Br J Psychol 2015; 107:135-53. [PMID: 25854912 DOI: 10.1111/bjop.12131] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2014] [Revised: 10/21/2014] [Indexed: 12/01/2022]
Abstract
Based on belongingness regulation theory (Gardner et al., 2005, Pers. Soc. Psychol. Bull., 31, 1549), this study focuses on the relationship between loneliness and social monitoring. Specifically, we examined whether loneliness relates to performance on three emotion recognition tasks and whether lonely individuals show increased gazing towards their conversation partner's face in a real-life conversation. Study 1 examined 170 college students (mean age = 19.26 years; SD = 1.21) who completed an emotion recognition task with dynamic stimuli (morph task) and a micro(-emotion) expression recognition task. Study 2 examined 130 college students (mean age = 19.33 years; SD = 2.00) who completed the Reading the Mind in the Eyes Test and who had a conversation with an unfamiliar peer while their gaze direction was videotaped. In both studies, loneliness was measured using the UCLA Loneliness Scale version 3 (Russell, 1996, J. Pers. Assess., 66, 20). The results showed that loneliness was unrelated to emotion recognition on all three tasks, but that it was related to increased gaze towards the conversation partner's face. Implications for the belongingness regulation system of lonely individuals are discussed.
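The gaze measure implied above, the proportion of eye-tracking samples landing on the conversation partner's face, can be sketched as a simple area-of-interest (AOI) count. The coordinates and the AOI rectangle below are invented for illustration.

```python
# Fraction of gaze samples inside a rectangular face AOI (invented coordinates).
import numpy as np

def proportion_in_aoi(gaze_xy, x_min, x_max, y_min, y_max):
    """Fraction of (x, y) gaze samples falling inside the AOI rectangle."""
    xy = np.asarray(gaze_xy, dtype=float)
    inside = ((xy[:, 0] >= x_min) & (xy[:, 0] <= x_max) &
              (xy[:, 1] >= y_min) & (xy[:, 1] <= y_max))
    return inside.mean()

rng = np.random.default_rng(6)
gaze = rng.uniform(0, 1, size=(500, 2))       # 500 gaze samples in normalized screen coordinates
print(proportion_in_aoi(gaze, 0.35, 0.65, 0.20, 0.60))
```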
Collapse
Affiliation(s)
- Gerine M A Lodder
- Behavioural Science Institute, Radboud University Nijmegen, The Netherlands
| | - Ron H J Scholte
- Behavioural Science Institute, Radboud University Nijmegen, The Netherlands
| | - Luc Goossens
- Research Group School Psychology and Child and Adolescent Development, KU Leuven - University of Leuven, Belgium
| | | | - Maaike Verhagen
- Behavioural Science Institute, Radboud University Nijmegen, The Netherlands
| |
Collapse
|
50
|
Hartwig M, Bond CF. Lie Detection from Multiple Cues: A Meta-analysis. APPLIED COGNITIVE PSYCHOLOGY 2014. [DOI: 10.1002/acp.3052] [Citation(s) in RCA: 101] [Impact Index Per Article: 9.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Affiliation(s)
- Maria Hartwig
- Department of Psychology; John Jay College of Criminal Justice; City University of New York USA
| | | |
Collapse
|