1
Dildine TC, Amir CM, Parsons J, Atlas LY. How Pain-Related Facial Expressions Are Evaluated in Relation to Gender, Race, and Emotion. Affective Science 2023; 4:350-369. PMID: 37293681; PMCID: PMC9982800; DOI: 10.1007/s42761-023-00181-6.
Abstract
Inequities in pain assessment are well-documented; however, the psychological mechanisms underlying such biases are poorly understood. We investigated potential perceptual biases in the judgments of faces displaying pain-related movements. Across five online studies, 956 adult participants viewed images of computer-generated faces ("targets") that varied in features related to race (Black and White) and gender (women and men). Target identity was manipulated across participants, and each target had equivalent facial movements that displayed varying intensities of movement in facial action units related to pain (Studies 1-4) or pain and emotion (Study 5). On each trial, participants provided categorical judgments as to whether a target was in pain (Studies 1-4) or which expression the target displayed (Study 5) and then rated the perceived intensity of the expression. Meta-analyses of Studies 1-4 revealed that movement intensity was positively associated with both categorizing a trial as painful and perceived pain intensity. Target race and gender did not consistently affect pain-related judgments, contrary to well-documented clinical inequities. In Study 5, in which pain was equally likely relative to other emotions, pain was the least frequently selected emotion (5%). Our results suggest that perceivers can utilize facial movements to evaluate pain in other individuals, but perceiving pain may depend on contextual factors. Furthermore, assessments of computer-generated, pain-related facial movements online do not replicate sociocultural biases observed in the clinic. These findings provide a foundation for future studies comparing CGI and real images of pain and emphasize the need for further work on the relationship between pain and emotion.
Supplementary Information: The online version contains supplementary material available at 10.1007/s42761-023-00181-6.
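As a hedged illustration of the meta-analytic result above, one simple sketch fits a per-study logistic regression of the binary pain judgment on movement intensity (with target race and gender as covariates) and pools the intensity coefficient across studies with inverse-variance weights. The inputs and column names (`study_dataframes`, `intensity`, `target_black`, `target_woman`, `judged_painful`) and the fixed-effect pooling are assumptions for illustration, not the authors' exact models.

```python
import numpy as np
import statsmodels.api as sm

# One DataFrame per study; columns are hypothetical stand-ins for the
# trial-level variables described in the abstract.
betas, variances = [], []
for df in study_dataframes:
    X = sm.add_constant(df[["intensity", "target_black", "target_woman"]])
    fit = sm.Logit(df["judged_painful"], X).fit(disp=0)
    betas.append(fit.params["intensity"])
    variances.append(fit.bse["intensity"] ** 2)

# Fixed-effect (inverse-variance) pooling of the movement-intensity effect.
w = 1.0 / np.asarray(variances)
pooled = np.sum(w * np.asarray(betas)) / np.sum(w)
pooled_se = np.sqrt(1.0 / np.sum(w))
print(f"Pooled log-odds per unit intensity: {pooled:.3f} (SE {pooled_se:.3f})")
```

A positive pooled coefficient corresponds to the reported association between movement intensity and categorizing a trial as painful.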
Affiliation(s)
- Troy C. Dildine
- National Center for Complementary and Integrative Health, National Institutes of Health, 10 Center Drive, Bethesda, MD 20892, USA
- Department of Clinical Neuroscience, Karolinska Institute, 171 77 Solna, Sweden
- Carolyn M. Amir
- National Center for Complementary and Integrative Health, National Institutes of Health, 10 Center Drive, Bethesda, MD 20892, USA
- Julie Parsons
- National Center for Complementary and Integrative Health, National Institutes of Health, 10 Center Drive, Bethesda, MD 20892, USA
- Lauren Y. Atlas
- National Center for Complementary and Integrative Health, National Institutes of Health, 10 Center Drive, Bethesda, MD 20892, USA
- National Institute of Mental Health, National Institutes of Health, Bethesda, MD 20892, USA
- National Institute on Drug Abuse, National Institutes of Health, Baltimore, MD 21224, USA
2
Borgalli RA, Surve S. Review on learning framework for facial expression recognition. The Imaging Science Journal 2023. DOI: 10.1080/13682199.2023.2172526.
Affiliation(s)
- Rohan Appasaheb Borgalli
- Department of Electronics Engineering, Fr. Conceicao Rodrigues College of Engineering, Bandra, University of Mumbai, Mumbai, Maharashtra, India
- Sunil Surve
- Department of Computer Engineering, Fr. Conceicao Rodrigues College of Engineering, Bandra, University of Mumbai, Mumbai, Maharashtra, India
3
Face detection and grimace scale prediction of white furred mice. Machine Learning with Applications 2022. DOI: 10.1016/j.mlwa.2022.100312.
4
Research on Voice-Driven Facial Expression Film and Television Animation Based on Compromised Node Detection in Wireless Sensor Networks. Computational Intelligence and Neuroscience 2022; 2022:8563818. PMID: 35111214; PMCID: PMC8803464; DOI: 10.1155/2022/8563818.
Abstract
As living standards rise, film and television animation has become increasingly popular, and emerging technologies now allow speech to drive changes in an AI character's expression. A key difficulty in this transformation is keeping the speech signal and the facial expression synchronized. Building on compromised-node detection in wireless sensor networks, this paper organizes the synchronized traffic flow between speech signals and facial expressions, finds the distribution of facial-motion patterns through unsupervised classification, trains the mapping with neural networks, and uses the prosodic distribution of speech features to realize a one-to-one mapping to facial expressions. This approach avoids the robustness weaknesses of speech recognition, improves its learning ability, and enables speech-driven analysis of facial-expression animation for film and television. Simulation results show that the compromised-node detection in wireless sensor networks is effective and can support the analysis of speech-driven facial-expression animation.
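As a minimal sketch of the two-stage pipeline the abstract describes (unsupervised discovery of facial-motion patterns, then a learned mapping from speech features onto them), the following is illustrative only: the frame-aligned arrays `face_frames`, `speech_frames`, and `new_speech_frames` are assumptions, and the wireless-sensor-network component is omitted.

```python
import numpy as np
from sklearn.cluster import KMeans
from sklearn.neural_network import MLPRegressor

# Step 1: unsupervised classification of per-frame facial features into a
# small codebook of recurring facial-motion patterns.
codebook = KMeans(n_clusters=16, random_state=0).fit(face_frames)

# Step 2: learn a mapping from per-frame speech features (e.g., MFCCs plus
# prosody) to soft weights over the facial codebook.
targets = np.eye(16)[codebook.labels_]          # one-hot cluster targets
net = MLPRegressor(hidden_layer_sizes=(128, 64), max_iter=500, random_state=0)
net.fit(speech_frames, targets)

# Synthesis: predicted weights blend codebook poses into one facial
# configuration per incoming audio frame, keeping voice and face in sync.
weights = net.predict(new_speech_frames)
predicted_faces = weights @ codebook.cluster_centers_
```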
5
Chen J, Wang C, Wang K, Liu M. Lightweight network architecture using difference saliency maps for facial action unit detection. Applied Intelligence 2021. DOI: 10.1007/s10489-021-02755-y.
6
Li Y, Huang X, Zhao G. Micro-expression action unit detection with spatial and channel attention. Neurocomputing 2021. DOI: 10.1016/j.neucom.2021.01.032.
7
Zhi R, Zhou C, Li T, Liu S, Jin Y. Action unit analysis enhanced facial expression recognition by deep neural network evolution. Neurocomputing 2021. DOI: 10.1016/j.neucom.2020.03.036.
8
Ertugrul IO, Yang L, Jeni LA, Cohn JF. D-PAttNet: Dynamic Patch-Attentive Deep Network for Action Unit Detection. Frontiers in Computer Science 2019; 1:11. PMID: 31930192; PMCID: PMC6953909; DOI: 10.3389/fcomp.2019.00011.
Abstract
Facial action units (AUs) relate to specific local facial regions. Recent efforts in automated AU detection have focused on learning facial patch representations to detect specific AUs. These efforts have encountered three hurdles. First, they implicitly assume that facial patches are robust to head rotation, yet non-frontal rotation is common. Second, mappings between AUs and patches are defined a priori, which ignores co-occurrences among AUs. Third, the dynamics of AUs are either ignored or modeled sequentially rather than simultaneously, as in human perception. Inspired by recent advances in human perception, we propose a dynamic patch-attentive deep network, called D-PAttNet, for AU detection that (i) controls for 3D head and face rotation, (ii) learns mappings of patches to AUs, and (iii) models spatiotemporal dynamics. The D-PAttNet approach significantly improves upon the existing state of the art.
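A minimal sketch of the patch-attentive idea follows; it is not the authors' D-PAttNet, which additionally registers 3D head pose and models temporal dynamics. A shared CNN encodes each facial patch, learned attention weights pool the patch embeddings, and a multi-label head emits per-AU logits; the patch and AU counts are illustrative assumptions.

```python
import torch
import torch.nn as nn

class PatchAttentionAUNet(nn.Module):
    """Static patch-attention sketch: per-patch encoding, attention pooling,
    multi-label AU logits."""
    def __init__(self, n_aus=10, d=64):
        super().__init__()
        self.encoder = nn.Sequential(       # shared encoder, applied per patch
            nn.Conv2d(3, 16, 3, padding=1), nn.ReLU(), nn.MaxPool2d(2),
            nn.Conv2d(16, 32, 3, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(), nn.Linear(32, d),
        )
        self.attn = nn.Linear(d, 1)         # scalar attention score per patch
        self.head = nn.Linear(d, n_aus)     # multi-label AU logits

    def forward(self, patches):             # patches: (B, P, 3, H, W)
        B, P = patches.shape[:2]
        z = self.encoder(patches.flatten(0, 1)).view(B, P, -1)   # (B, P, d)
        a = torch.softmax(self.attn(z).squeeze(-1), dim=1)       # (B, P)
        pooled = (a.unsqueeze(-1) * z).sum(dim=1)                # (B, d)
        return self.head(pooled)

model = PatchAttentionAUNet()
logits = model(torch.randn(2, 9, 3, 32, 32))   # two faces, nine patches each
```

Training would use `nn.BCEWithLogitsLoss` over the AU labels, since AUs co-occur and detection is a multi-label problem.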
Affiliation(s)
- Itir Onal Ertugrul
- Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, United States
- Le Yang
- School of Computer Science, Northwestern Polytechnical University, Xi'an, China
- László A. Jeni
- Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, United States
- Jeffrey F. Cohn
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA, United States
9
Yi J, Chen A, Cai Z, Sima Y, Zhou M, Wu X. Facial expression recognition of intercepted video sequences based on feature point movement trend and feature block texture variation. Applied Soft Computing 2019. DOI: 10.1016/j.asoc.2019.105540.
10
Wang SJ, Lin B, Wang Y, Yi T, Zou B, Lyu XW. Action Units recognition based on Deep Spatial-Convolutional and Multi-label Residual network. Neurocomputing 2019. DOI: 10.1016/j.neucom.2019.05.018.
11
Ma C, Chen L, Yong J. AU R-CNN: Encoding expert prior knowledge into R-CNN for action unit detection. Neurocomputing 2019. DOI: 10.1016/j.neucom.2019.03.082.
12
Ma Z, Lai Y, Kleijn WB, Song YZ, Wang L, Guo J. Variational Bayesian Learning for Dirichlet Process Mixture of Inverted Dirichlet Distributions in Non-Gaussian Image Feature Modeling. IEEE Transactions on Neural Networks and Learning Systems 2019; 30:449-463. PMID: 29994731; DOI: 10.1109/tnnls.2018.2844399.
Abstract
In this paper, we develop a novel variational Bayesian learning method for the Dirichlet process (DP) mixture of inverted Dirichlet distributions, which has been shown to be very flexible for modeling vectors with positive elements. The recently proposed extended variational inference (EVI) framework is adopted to derive an analytically tractable solution. The convergence of the proposed algorithm is theoretically guaranteed by introducing a single lower-bound approximation to the original objective function in the EVI framework. In principle, the proposed model can be viewed as an infinite inverted Dirichlet mixture model that allows the number of mixture components to be determined automatically from the data, which overcomes the problem of predetermining the optimal number of mixture components. Moreover, the Bayesian estimation approach avoids both overfitting and underfitting. Compared with several recently proposed DP-related methods and conventional methods, the good performance and effectiveness of the proposed method are demonstrated in evaluations on both synthesized and real data.
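For orientation, the standard forms behind the model are sketched below in textbook notation (not necessarily the paper's exact parameterization): the inverted Dirichlet density over positive vectors and its Dirichlet-process mixture via stick-breaking.

```latex
% Inverted Dirichlet density for x in R_+^K, parameters alpha_1..alpha_{K+1} > 0:
\[
\mathrm{ID}(\mathbf{x}\mid\boldsymbol{\alpha})
  = \frac{\Gamma\!\left(\sum_{k=1}^{K+1}\alpha_k\right)}
         {\prod_{k=1}^{K+1}\Gamma(\alpha_k)}
    \left(\prod_{k=1}^{K} x_k^{\alpha_k-1}\right)
    \left(1+\sum_{k=1}^{K} x_k\right)^{-\sum_{k=1}^{K+1}\alpha_k}
\]
% Stick-breaking construction of the DP mixture: the weights decay so that
% the effective number of components is inferred from the data.
\[
p(\mathbf{x}) = \sum_{i=1}^{\infty} \pi_i\,\mathrm{ID}(\mathbf{x}\mid\boldsymbol{\alpha}_i),
\qquad
\pi_i = v_i \prod_{j<i}(1-v_j), \quad v_i \sim \mathrm{Beta}(1,\lambda).
\]
```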
13
Perveen N, Roy D, Mohan CK. Spontaneous Expression Recognition using Universal Attribute Model. IEEE Transactions on Image Processing 2018; 27:5575-5584. PMID: 30010578; DOI: 10.1109/tip.2018.2856373.
Abstract
Spontaneous expression recognition refers to recognizing non-posed human expressions. In the literature, most existing approaches to expression recognition rely on manual annotations by experts, which are both time-consuming and difficult to obtain. Hence, we propose an unsupervised framework for spontaneous expression recognition that preserves discriminative information for the videos of each expression without using annotations. Initially, a large Gaussian mixture model, called the universal attribute model (UAM), is trained to implicitly learn the attributes of various expressions. Attributes are the movements of various facial muscles that combine to form a particular facial expression. Then, a concatenated mean vector called the super expression-vector (SEV) is formed by maximum a posteriori adaptation of the UAM means for each expression clip. This SEV contains attributes from all the expressions, resulting in a high-dimensional representation. To retain only the attributes of that particular expression clip, the SEV is decomposed using factor analysis to produce a low-dimensional expression-vector. This procedure requires no class labels and produces expression-vectors that are distinct for each expression, despite the high inter-actor variability present in spontaneous expressions. On spontaneous expression datasets such as BP4D and AFEW, we demonstrate that the expression-vector achieves better performance than state-of-the-art techniques. Further, we show that a UAM trained on a constrained dataset can be used effectively to recognize expressions in unconstrained expression videos.
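Because the UAM-to-expression-vector pipeline resembles the classical GMM-UBM supervector recipe, a minimal scikit-learn sketch conveys the idea. The inputs `pooled_features` (all per-frame features) and `clips` (a list of per-clip feature arrays), the component counts, and the relevance factor are assumptions; the paper's factor-analysis decomposition may differ from scikit-learn's.

```python
import numpy as np
from sklearn.mixture import GaussianMixture
from sklearn.decomposition import FactorAnalysis

# Universal attribute model (UAM): one large GMM over pooled per-frame
# facial features from all expression clips.
uam = GaussianMixture(n_components=64, covariance_type="diag", random_state=0)
uam.fit(pooled_features)                            # (n_frames_total, d)

def super_expression_vector(clip_feats, uam, relevance=16.0):
    """MAP-adapt the UAM means to one clip and concatenate them (the SEV)."""
    resp = uam.predict_proba(clip_feats)            # (n_frames, K) posteriors
    n_k = resp.sum(axis=0)                          # soft counts per component
    f_k = resp.T @ clip_feats                       # (K, d) first-order stats
    alpha = (n_k / (n_k + relevance))[:, None]      # adaptation weights
    adapted = alpha * (f_k / np.maximum(n_k, 1e-8)[:, None]) \
              + (1.0 - alpha) * uam.means_
    return adapted.ravel()                          # (K * d,) supervector

sevs = np.stack([super_expression_vector(c, uam) for c in clips])

# Factor analysis compresses the high-dimensional SEVs into the
# low-dimensional expression-vectors used for recognition.
fa = FactorAnalysis(n_components=100, random_state=0)
expression_vectors = fa.fit_transform(sevs)
```

The relevance factor controls how strongly each clip's statistics pull the universal means toward the clip, mirroring classical MAP adaptation.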
14
Facial Dynamics Interpreter Network: What Are the Important Relations Between Local Dynamics for Facial Trait Estimation? Computer Vision – ECCV 2018. DOI: 10.1007/978-3-030-01258-8_29.
15
Girard JM, Chu WS, Jeni LA, Cohn JF, De la Torre F, Sayette MA. Sayette Group Formation Task (GFT) Spontaneous Facial Expression Database. Proceedings of the ... International Conference on Automatic Face and Gesture Recognition 2017; 2017:581-588. PMID: 29606916; PMCID: PMC5876025; DOI: 10.1109/fg.2017.144.
Abstract
Despite the important role that facial expressions play in interpersonal communication and our knowledge that interpersonal behavior is influenced by social context, no currently available facial expression database includes multiple interacting participants. The Sayette Group Formation Task (GFT) database addresses the need for well-annotated video of multiple participants during unscripted interactions. The database includes 172,800 video frames from 96 participants in 32 three-person groups. To aid in the development of automated facial expression analysis systems, GFT includes expert annotations of FACS occurrence and intensity, facial landmark tracking, and baseline results for linear SVM, deep learning, active patch learning, and personalized classification. Baseline performance is quantified and compared using identical partitioning and a variety of metrics (including means and confidence intervals). The highest performance scores were found for the deep learning and active patch learning methods. Learn more at http://osf.io/7wcyz.
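As a hedged sketch of a frame-level linear-SVM occurrence baseline of the kind reported (the arrays `features`, `au12_labels`, and `participant_ids` and the protocol are assumptions, not the paper's exact partitioning or metrics), grouping folds by participant keeps any one person's frames out of both train and test:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import GroupKFold
from sklearn.metrics import f1_score

# Per-frame features, binary FACS occurrence labels for one AU, and the
# participant each frame came from.
cv = GroupKFold(n_splits=3)
scores = []
for tr, te in cv.split(features, au12_labels, groups=participant_ids):
    clf = LinearSVC(C=1.0).fit(features[tr], au12_labels[tr])
    scores.append(f1_score(au12_labels[te], clf.predict(features[te])))
print(f"AU12 occurrence F1: {np.mean(scores):.3f} ± {np.std(scores):.3f}")
```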
Affiliation(s)
- Jeffrey M Girard
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA 15260
- Wen-Sheng Chu
- Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213
- László A Jeni
- Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213
- Jeffrey F Cohn
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA 15260
- Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213
- Michael A Sayette
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA 15260