1
Stewart PA, Svetieva E, Mullins JK. The influence of President Trump's micro-expressions during his COVID-19 national address on viewers' emotional response. Politics Life Sci 2024:1-18. [PMID: 38832534] [DOI: 10.1017/pls.2024.8]
Abstract
This preregistered study replicates and extends studies concerning emotional response to wartime rally speeches and applies them to U.S. President Donald Trump's first national address regarding the COVID-19 pandemic on March 11, 2020. We experimentally test the effect of a micro-expression (ME) by Trump associated with appraised threat on change in participant self-reported distress, sadness, anger, affinity, and reassurance while controlling for followership. We find that polarization is perpetuated in emotional response to the address, which focused on portraying the COVID-19 threat as being of Chinese provenance. We also find a significant, albeit slight, effect of Trump's ME on self-reported sadness, suggesting that this facial behavior did not diminish his speech but instead served as a form of nonverbal punctuation. Further exploration of participant responses using the Linguistic Inquiry and Word Count (LIWC) software reinforces and extends these findings.
Affiliation(s)
- Patrick A Stewart
- Department of Political Science, University of Arkansas, Fayetteville, AR, USA
- Elena Svetieva
- Department of Communication, University of Colorado, Colorado Springs, CO, USA
- Jeffrey K Mullins
- Department of Information Systems, University of Arkansas, Fayetteville, AR, USA
2
Coppola G, Christopoulou I, Gkantidis N, Verna C, Pandis N, Kanavakis G. The effect of orthodontic treatment on smile attractiveness: a systematic review. Prog Orthod 2023; 24:4. [PMID: 36740663] [PMCID: PMC9899877] [DOI: 10.1186/s40510-023-00456-5]
Abstract
BACKGROUND Smile attractiveness is a primary reason patients seek orthodontic treatment; however, the topic has not yet been systematically evaluated in the literature. OBJECTIVES To assess the current evidence on the effect of orthodontic treatment on smile attractiveness. SEARCH METHODS Seven electronic databases (MEDLINE, Cochrane Library, Virtual Health Library, SCOPUS, Web of Science, Google Scholar and Embase) were searched on 14 September 2022. SELECTION CRITERIA Studies evaluating smile attractiveness before and after orthodontic treatment, or only after completion of orthodontic treatment. DATA COLLECTION AND ANALYSIS Extracted data included study design and setting, sample size and demographics, malocclusion type, treatment modality and method of outcome assessment. Risk of bias was assessed with the ROBINS-I tool for non-randomised studies. Random-effects meta-analyses of mean differences and their 95% confidence intervals (CIs) were planned a priori. METHODS After elimination of duplicate studies, data extraction and risk-of-bias assessment according to the Cochrane guidelines, the overall evidence was evaluated. The included studies were assessed on the characteristics of their study and control groups and on their main research question. All outcome measures were also standardized to a common assessment scale (0-100) to make the results easier to interpret. RESULTS Ten studies were included in this review, nine assessed as being at serious risk of bias and one at moderate risk. The large heterogeneity between the included studies did not allow a meta-analysis. Orthodontic treatment has a moderately positive effect on smile attractiveness: compared with no treatment, orthodontic treatment with premolar extractions improves smile attractiveness by 22%, and surgical correction of Class III cases increases smile attractiveness by 7.5% more than camouflage treatment. No other significant differences were found between treatment types. CONCLUSION Based on the available data, orthodontic treatment appears to moderately improve smile attractiveness. However, the literature assessing this effect carries significant risk of bias, so the results should be interpreted with caution.
Affiliation(s)
- G. Coppola
- Department of Pediatric Oral Health and Orthodontics, University Center for Dental Medicine Basel (UZB), University of Basel, Mattenstrasse 40, 4058 Basel, Switzerland
- I. Christopoulou
- Department of Orthodontics, School of Dentistry, National and Kapodistrian University of Athens, Athens, Greece
- N. Gkantidis
- Department of Orthodontics and Dentofacial Orthopedics, University of Bern, Bern, Switzerland
- C. Verna
- Department of Pediatric Oral Health and Orthodontics, University Center for Dental Medicine Basel (UZB), University of Basel, Mattenstrasse 40, 4058 Basel, Switzerland
- N. Pandis
- Department of Orthodontics and Dentofacial Orthopedics, University of Bern, Bern, Switzerland; Private Practice, Corfu, Greece
- G. Kanavakis
- Department of Pediatric Oral Health and Orthodontics, University Center for Dental Medicine Basel (UZB), University of Basel, Mattenstrasse 40, 4058 Basel, Switzerland; Department of Orthodontics and Dentofacial Orthopedics, Tufts University School of Dental Medicine, Boston, MA, USA
3
The spatio-temporal features of perceived-as-genuine and deliberate expressions. PLoS One 2022; 17:e0271047. [PMID: 35839208] [PMCID: PMC9286247] [DOI: 10.1371/journal.pone.0271047]
Abstract
Reading the genuineness of facial expressions is important for increasing the credibility of information conveyed by faces. However, it remains unclear which spatio-temporal characteristics of facial movements serve as critical cues to the perceived genuineness of facial expressions. This study focused on observable spatio-temporal differences between perceived-as-genuine and deliberate expressions of happiness and anger. In this experiment, 89 Japanese participants were asked to judge the perceived genuineness of faces in videos showing happiness or anger expressions. To identify diagnostic facial cues to the perceived genuineness of the facial expressions, we analyzed a total of 128 face videos using an automated facial action detection system; thereby, moment-to-moment activations in facial action units were annotated, and nonnegative matrix factorization extracted sparse and meaningful components from all action unit data. The results showed that genuineness judgments decreased when more spatial patterns were observed in facial expressions. As for temporal features, the perceived-as-deliberate expressions of happiness generally had faster onsets to the peak than the perceived-as-genuine expressions of happiness. Moreover, opening the mouth contributed negatively to perceived-as-genuine expressions, irrespective of the type of facial expression. These findings provide the first evidence for dynamic facial cues to the perceived genuineness of happiness and anger expressions.
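As a rough illustration of the decomposition step this abstract describes (automated AU detection followed by nonnegative matrix factorization), here is a minimal Python sketch; the array sizes, component count, and random data are assumptions for demonstration, not the study's actual settings:

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical input: frame-by-frame action-unit (AU) activations for one
# face video, as an automated AU detector might produce them
# (rows = frames, columns = AU intensities; values are non-negative).
rng = np.random.default_rng(0)
au_activations = rng.random((120, 17))  # 120 frames, 17 AUs (illustrative)

# NMF decomposes the AU matrix into a small number of additive "spatial
# patterns" (components over AUs) and their time courses, mirroring the
# sparse-component extraction described in the abstract.
model = NMF(n_components=4, init="nndsvda", max_iter=500, random_state=0)
time_courses = model.fit_transform(au_activations)   # frames x components
spatial_patterns = model.components_                 # components x AUs

print(time_courses.shape, spatial_patterns.shape)
```

The number of spatial patterns active in a clip could then be related to genuineness judgments, as the study does.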
4
Namba S, Sato W, Matsui H. Spatio-Temporal Properties of Amused, Embarrassed, and Pained Smiles. J Nonverbal Behav 2022. [DOI: 10.1007/s10919-022-00404-7]
Abstract
Smiles are universal but nuanced facial expressions that are most frequently used in face-to-face communications, typically indicating amusement but sometimes conveying negative emotions such as embarrassment and pain. Although previous studies have suggested that spatial and temporal properties could differ among these various types of smiles, no study has thoroughly analyzed these properties. This study aimed to clarify the spatiotemporal properties of smiles conveying amusement, embarrassment, and pain using a spontaneous facial behavior database. The results regarding spatial patterns revealed that pained smiles showed less eye constriction and more overall facial tension than amused smiles; no spatial differences were identified between embarrassed and amused smiles. Regarding temporal properties, embarrassed and pained smiles remained in a state of higher facial tension than amused smiles. Moreover, embarrassed smiles showed a more gradual change from tension states to the smile state than amused smiles, and pained smiles had lower probabilities of staying in or transitioning to the smile state compared to amused smiles. By comparing the spatiotemporal properties of these three smile types, this study revealed that the probability of transitioning between discrete states could help distinguish amused, embarrassed, and pained smiles.
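The transition-probability analysis described above can be illustrated with a minimal sketch; the state labels and the toy frame sequence below are invented for demonstration, not taken from the database used in the study:

```python
import numpy as np

# Hypothetical frame-by-frame state sequence for one smile clip, where each
# frame is labeled with a discrete facial state (0 = neutral, 1 = tension,
# 2 = smile). The labels and the sequence are illustrative only.
states = [0, 0, 1, 1, 1, 2, 2, 2, 2, 1, 2, 2, 0]
n_states = 3

# Count transitions between consecutive frames, then row-normalize to get
# the probability of staying in or moving to each state -- the quantity the
# study uses to distinguish smile types.
counts = np.zeros((n_states, n_states))
for a, b in zip(states[:-1], states[1:]):
    counts[a, b] += 1
transition_probs = counts / counts.sum(axis=1, keepdims=True)

print(np.round(transition_probs, 2))
```

Comparing, say, the smile-to-smile entry of this matrix across smile types is one way to operationalize "lower probability of staying in the smile state".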
5
Dobreva D, Gkantidis N, Halazonetis D, Verna C, Kanavakis G. Smile Reproducibility and Its Relationship to Self-Perceived Smile Attractiveness. Biology (Basel) 2022; 11:719. [PMID: 35625447] [PMCID: PMC9138875] [DOI: 10.3390/biology11050719]
Abstract
The reproducibility of facial expressions has been previously explored; however, there is no detailed information regarding the reproducibility of the lip morphology that forms a social smile. In this study, we recruited 93 young adults, aged 21-35 years, who agreed to participate in two consecutive study visits four weeks apart. On each visit, they were asked to perform a social smile, which was captured on a 3D facial image acquired with the 3dMD camera system. Self-perceived smile attractiveness was assessed using a visual analogue scale (VAS). Lip morphology, including smile shape, was described using 62 landmarks and semi-landmarks. A Procrustes superimposition of each set of smiling configurations (first and second visit) was performed, and the Euclidean distance between each landmark set was calculated. A linear regression model was used to test the association between smile consistency and self-perceived smile attractiveness. The results show that the average landmark distance between sessions did not exceed 1.5 mm, indicating high repeatability, and that females presented approximately 15% higher smile consistency than males (p < 0.05). There was no statistically significant association between smile consistency and self-perceived smile attractiveness (η2 = 0.015; p = 0.252) when controlling for the effects of sex and age.
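The consistency measure described above (Procrustes superimposition followed by landmark-to-landmark Euclidean distance) can be sketched as follows; the landmark coordinates are invented, and five 2D points stand in for the study's 62 landmarks on 3D images:

```python
import numpy as np
from scipy.spatial import procrustes

# Hypothetical landmark sets for the same smile captured at two visits.
visit1 = np.array([[0., 0.], [1., 0.], [2., 0.5], [1., 1.], [0., 1.]])
visit2 = visit1 * 1.02 + 0.1  # slightly scaled and shifted "second visit"

# Procrustes superimposition removes translation, scale, and rotation;
# the mean Euclidean distance between corresponding landmarks then
# quantifies smile consistency, as in the analysis described above.
std1, std2, disparity = procrustes(visit1, visit2)
mean_distance = np.linalg.norm(std1 - std2, axis=1).mean()
print(mean_distance, disparity)
```

Because the second configuration here is an exact similarity transform of the first, the residual distance is essentially zero; real repeat visits would leave a nonzero residual like the ~1.5 mm reported.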
Affiliation(s)
- Denitsa Dobreva
- Department of Pediatric Oral Health and Orthodontics, University Center for Dental Medicine UZB, University of Basel, Mattenstrasse 40, 4058 Basel, Switzerland
- Nikolaos Gkantidis
- Department of Orthodontics and Dentofacial Orthopedics, University of Bern, 3001 Bern, Switzerland
- Demetrios Halazonetis
- Department of Orthodontics, School of Dentistry, National and Kapodistrian University of Athens, GR-11527 Athens, Greece
- Carlalberta Verna
- Department of Pediatric Oral Health and Orthodontics, University Center for Dental Medicine UZB, University of Basel, Mattenstrasse 40, 4058 Basel, Switzerland
- Georgios Kanavakis
- Department of Pediatric Oral Health and Orthodontics, University Center for Dental Medicine UZB, University of Basel, Mattenstrasse 40, 4058 Basel, Switzerland
- Department of Orthodontics, Tufts University School of Dental Medicine, Boston, MA 02111, USA
6
Branco LRF, Ehteshami A, Azgomi HF, Faghih RT. Closed-Loop Tracking and Regulation of Emotional Valence State From Facial Electromyogram Measurements. Front Comput Neurosci 2022; 16:747735. [PMID: 35399915] [PMCID: PMC8990324] [DOI: 10.3389/fncom.2022.747735]
Abstract
Affective studies provide essential insights to address emotion recognition and tracking. In traditional open-loop structures, a lack of knowledge about the internal emotional state makes the system incapable of adjusting stimuli parameters and automatically responding to changes in the brain. To address this issue, we propose to use facial electromyogram measurements as biomarkers to infer the internal hidden brain state as feedback to close the loop. In this research, we develop a systematic way to track and control emotional valence, which codes emotions as being pleasant or unpleasant. Hence, we conduct a simulation study by modeling and tracking the subject's emotional valence dynamics using state-space approaches. We employ Bayesian filtering to estimate the person-specific model parameters along with the hidden valence state, using continuous and binary features extracted from experimental electromyogram measurements. Moreover, we utilize a mixed-filter estimator to infer the hidden brain state in a real-time simulation environment. We close the loop with a fuzzy logic controller in two categories of regulation: inhibition and excitation. By designing a control action, we aim to automatically reflect any required adjustments within the simulation and reach the desired emotional state levels. Final results demonstrate that, by making use of physiological data, the proposed controller could effectively regulate the estimated valence state. Ultimately, we envision future outcomes of this research to support alternative forms of self-therapy by using wearable machine interface architectures capable of mitigating periods of pervasive emotions and maintaining daily well-being and welfare.
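As a loose illustration of state-space tracking of a hidden state from noisy measurements, here is a minimal scalar Kalman filter; it is not the authors' mixed-filter estimator, and the dynamics, noise parameters, and simulated "EMG feature" are all assumptions:

```python
import numpy as np

# Simulate a hidden valence-like state and a noisy continuous observation
# (standing in for a facial-EMG amplitude feature). Values are illustrative.
rng = np.random.default_rng(1)
a, q, r = 0.95, 0.05, 0.2          # state transition, process & obs. noise
true_state = np.zeros(200)
obs = np.zeros(200)
for t in range(1, 200):
    true_state[t] = a * true_state[t - 1] + rng.normal(0, np.sqrt(q))
    obs[t] = true_state[t] + rng.normal(0, np.sqrt(r))

# Scalar Kalman filter: predict, then correct with each observation.
x, p = 0.0, 1.0                    # state estimate and its variance
estimates = []
for z in obs:
    x_pred, p_pred = a * x, a * a * p + q       # predict
    k = p_pred / (p_pred + r)                   # Kalman gain
    x = x_pred + k * (z - x_pred)               # update with observation
    p = (1 - k) * p_pred
    estimates.append(x)

# The filtered estimate should track the hidden state better than the raw
# observations do.
err_filt = np.mean((np.array(estimates) - true_state) ** 2)
err_raw = np.mean((obs - true_state) ** 2)
print(round(err_filt, 4), round(err_raw, 4))
```

A closed-loop controller such as the paper's fuzzy logic regulator would then act on the filtered estimate rather than on the raw signal.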
Affiliation(s)
- Luciano R. F. Branco
- Department of Electrical and Computer Engineering, University of Houston, Houston, TX, United States
- Arian Ehteshami
- Department of Electrical and Computer Engineering, University of Houston, Houston, TX, United States
- Hamid Fekri Azgomi
- Department of Electrical and Computer Engineering, University of Houston, Houston, TX, United States
- Department of Neurological Surgery, University of California, San Francisco, San Francisco, CA, United States
- Rose T. Faghih
- Department of Electrical and Computer Engineering, University of Houston, Houston, TX, United States
- Department of Biomedical Engineering, New York University, New York, NY, United States
7
Namba S. Feedback From Facial Expressions Contribute to Slow Learning Rate in an Iowa Gambling Task. Front Psychol 2021; 12:684249. [PMID: 34434141] [PMCID: PMC8381354] [DOI: 10.3389/fpsyg.2021.684249]
Abstract
Facial expressions of emotion can convey information about the world and disambiguate elements of the environment, thus providing direction to other people’s behavior. However, the functions of facial expressions from the perspective of learning patterns over time remain elusive. This study investigated how the feedback of facial expressions influences learning tasks in a context of ambiguity using the Iowa Gambling Task. The results revealed that the learning rate for facial expression feedback was slower in the middle of the learning period than it was for symbolic feedback. No difference was observed in deck selection or computational model parameters between the conditions, and no correlation was observed between task indicators and the results of depressive questionnaires.
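A minimal delta-rule learner for a four-deck task in the spirit of the Iowa Gambling Task might look like this; the payoff values, exploration scheme, and learning rate are illustrative assumptions, not the computational model fitted in the study:

```python
import random

# Delta-rule (Rescorla-Wagner style) learner for a four-deck bandit.
# learning_rate is the kind of parameter compared across feedback
# conditions in studies like the one above; payoffs are invented.
def simulate_igt(learning_rate, n_trials=100, seed=0):
    rng = random.Random(seed)
    payoffs = [-0.5, -0.5, 0.25, 0.25]  # "bad" decks A/B, "good" decks C/D
    values = [0.0] * 4
    choices = []
    for _ in range(n_trials):
        # Greedy choice with occasional random exploration.
        if rng.random() < 0.1:
            deck = rng.randrange(4)
        else:
            deck = max(range(4), key=lambda d: values[d])
        reward = payoffs[deck] + rng.gauss(0, 0.1)
        values[deck] += learning_rate * (reward - values[deck])  # delta rule
        choices.append(deck)
    return choices

choices = simulate_igt(learning_rate=0.3)
good_late = sum(c >= 2 for c in choices[50:]) / 50
print(good_late)  # proportion of good-deck picks in the last 50 trials
```

A slower learning rate, as reported for facial-expression feedback, would show up as a slower rise in the proportion of good-deck choices over trials.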
Affiliation(s)
- Shushi Namba
- Psychological Process Team, Guardian Robot Project, RIKEN, Kyoto, Japan
8
Namba S, Sato W, Osumi M, Shimokawa K. Assessing Automated Facial Action Unit Detection Systems for Analyzing Cross-Domain Facial Expression Databases. Sensors (Basel) 2021; 21:4222. [PMID: 34203007] [PMCID: PMC8235167] [DOI: 10.3390/s21124222]
Abstract
In the field of affective computing, achieving accurate automatic detection of facial movements is an important issue, and great progress has already been made. However, a systematic evaluation of these systems on dynamic facial databases remains an unmet need. This study compared the performance of three systems (FaceReader, OpenFace, AFARtoolbox) that detect facial movements corresponding to action units (AUs) derived from the Facial Action Coding System. All three systems detected the presence of AUs in the dynamic facial database at above-chance levels. Moreover, OpenFace and AFAR provided higher values of the area under the receiver operating characteristic curve than FaceReader. In addition, several confusion biases between facial components (e.g., AU12 and AU14) were observed for each automated AU detection system, and the static mode was superior to the dynamic mode for analyzing the posed facial database. These findings characterize the prediction patterns of each system and provide guidance for research on facial expressions.
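The AUC comparison described above can be sketched as follows; the ground-truth labels and detector scores below are invented for illustration, not taken from any of the three systems:

```python
import numpy as np
from sklearn.metrics import roc_auc_score

# Hypothetical evaluation of one AU detector: ground-truth AU occurrence
# per frame (0/1, e.g. from manual FACS coding) against the detector's
# continuous output. AUC is threshold-free, which is why it suits
# comparisons across detectors with differently scaled outputs.
y_true = np.array([0, 0, 1, 1, 0, 1, 0, 1, 1, 0])
scores = np.array([0.1, 0.4, 0.35, 0.8, 0.2, 0.7, 0.3, 0.9, 0.6, 0.15])

auc = roc_auc_score(y_true, scores)
print(round(auc, 3))
```

Repeating this per AU and per system yields the kind of system-by-AU comparison the study reports.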
Affiliation(s)
- Shushi Namba
- Psychological Process Team, BZP, Robotics Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 6190288, Japan
- Wataru Sato
- Psychological Process Team, BZP, Robotics Project, RIKEN, 2-2-2 Hikaridai, Seika-cho, Soraku-gun, Kyoto 6190288, Japan
- Masaki Osumi
- KOHINATA Limited Liability Company, 2-7-3, Tateba, Naniwa-ku, Osaka 5560020, Japan
- Koh Shimokawa
- KOHINATA Limited Liability Company, 2-7-3, Tateba, Naniwa-ku, Osaka 5560020, Japan
9
Kim S, Hirokawa M, Matsuda S, Funahashi A, Suzuki K. Smiles as a Signal of Prosocial Behaviors Toward the Robot in the Therapeutic Setting for Children With Autism Spectrum Disorder. Front Robot AI 2021; 8:599755. [PMID: 34124170] [PMCID: PMC8187796] [DOI: 10.3389/frobt.2021.599755]
Abstract
We explored how robot-assisted therapy based on smile analysis may facilitate the prosocial behaviors of children with autism spectrum disorder. Prosocial behaviors, which are actions for the benefit of others, are required to belong to society and increase quality of life. As smiling is a candidate for predicting prosocial behaviors in robot-assisted therapy, we measured smiles by annotating behaviors recorded with video cameras and by classifying facial muscle activities recorded with a wearable device. While interacting with a robot, the participants experienced two situations in which prosocial behaviors were expected: supporting the robot as it walked and keeping the robot from falling. We first explored overall smiles at specific timings and prosocial behaviors, then the smiles triggered by the robot and behavior changes before engaging in prosocial behaviors. The results show that smiles at specific timings and prosocial behaviors increased in the second session for children with autism spectrum disorder. Additionally, a smile was followed by a series of behaviors before a prosocial behavior. With a proposed Bayesian model, smiling or heading predicted prosocial behaviors with higher accuracy than other variables. In particular, voluntary prosocial behaviors were observed after smiling. The findings of this exploratory study imply that smiles might be a signal of prosocial behaviors. We also suggest a probabilistic model for predicting prosocial behaviors based on smile analysis, which could be applied to personalized robot-assisted therapy by controlling a robot's movements to elicit smiles and increase the probability that a child with autism spectrum disorder will engage in prosocial behaviors.
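A simple probabilistic predictor in the spirit of the model described above (predicting whether a prosocial behavior follows from binary smile and heading cues) could be sketched as follows; the toy data and the naive Bayes choice are illustrative assumptions, not the authors' model:

```python
import numpy as np
from sklearn.naive_bayes import BernoulliNB

# Invented observations: for each interaction episode, whether the child
# smiled and whether they headed toward the robot just beforehand, and
# whether a prosocial behavior followed (1) or not (0).
X = np.array([  # columns: [smiled, headed_toward_robot]
    [1, 1], [1, 0], [0, 1], [0, 0],
    [1, 1], [1, 1], [0, 0], [0, 1],
])
y = np.array([1, 1, 0, 0, 1, 1, 0, 0])  # 1 = prosocial behavior followed

# Bernoulli naive Bayes learns P(cue | prosocial) for each binary cue and
# combines them with the prior to predict new episodes.
clf = BernoulliNB().fit(X, y)
print(clf.predict([[1, 1], [0, 0]]))  # smiling+heading vs. neither cue
```

In a therapy loop, such a predictor could gate when the robot acts to elicit a smile before an opportunity for prosocial behavior.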
Affiliation(s)
- SunKyoung Kim
- Faculty of Engineering, Information and Systems, University of Tsukuba, Tsukuba, Japan
- Masakazu Hirokawa
- Faculty of Engineering, Information and Systems, University of Tsukuba, Tsukuba, Japan
- Soichiro Matsuda
- Faculty of Human Sciences, University of Tsukuba, Tsukuba, Japan
- Atsushi Funahashi
- Faculty of Sport Science, Nippon Sport Science University, Yokohama, Japan
- Kenji Suzuki
- Faculty of Engineering, Information and Systems, University of Tsukuba, Tsukuba, Japan
10
Gerłowska J, Dmitruk K, Rejdak K. Facial emotion mimicry in older adults with and without cognitive impairments due to Alzheimer's disease. AIMS Neurosci 2021; 8:226-238. [PMID: 33709026] [PMCID: PMC7940111] [DOI: 10.3934/neuroscience.2021012]
Abstract
Facial expression is one of the main channels of everyday human communication. The reported work investigated the pattern of emotional expression in healthy older adults and in older adults with mild cognitive impairment (MCI) or Alzheimer's disease (AD). It focuses on the mimicking of displayed emotional facial expressions in a sample of 25 older adults (healthy, MCI, and AD patients). The adequacy of each participant's facial expressions of the six basic emotions was measured with Kinect 3D recordings of the participants' faces and compared to their own typical emotional facial expressions. The reactions were triggered by mimicking 49 still pictures of emotional facial expressions. No statistically significant differences in the frequency or adequacy of emotional facial expression were found in the healthy and MCI groups. Unique patterns of emotional expression were observed in the AD group. Further investigation of the pattern of older adults' facial expressions may reduce misunderstandings and increase patients' quality of life.
Affiliation(s)
- Justyna Gerłowska
- Department of Educational Psychology and Psychological Assessment, Institute of Psychology, University of Maria Skłodowska-Curie, Lublin, Poland
- Krzysztof Dmitruk
- Institute of IT, University of Maria Skłodowska-Curie, Lublin, Poland
- Konrad Rejdak
- Department of Neurology, Medical University of Lublin, Lublin, Poland