1. Ahn YA, Moffitt JM, Tao Y, Custode S, Parlade M, Beaumont A, Cardona S, Hale M, Durocher J, Alessandri M, Shyu ML, Perry LK, Messinger DS. Objective Measurement of Social Gaze and Smile Behaviors in Children with Suspected Autism Spectrum Disorder During Administration of the Autism Diagnostic Observation Schedule, 2nd Edition. J Autism Dev Disord 2024;54:2124-2137. PMID: 37103660. DOI: 10.1007/s10803-023-05990-z.
Abstract
Best practice for the assessment of autism spectrum disorder (ASD) symptom severity relies on clinician ratings of the Autism Diagnostic Observation Schedule, 2nd Edition (ADOS-2), but the association of these ratings with objective measures of children's social gaze and smiling is unknown. Sixty-six preschool-age children (49 boys, M = 39.97 months, SD = 10.58) with suspected ASD (61 confirmed ASD) were administered the ADOS-2, from which social affect calibrated severity scores (SA CSS) were derived. Children's social gaze and smiling during the ADOS-2, captured with a camera contained in eyeglasses worn by the examiner and parent, were obtained via a computer vision processing pipeline. Children who gazed more at their parents (p = .04) and whose gaze at their parents involved more smiling (p = .02) received lower social affect severity scores, indicating fewer social affect symptoms (adjusted R2 = .15, p = .003).
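The reported model fit (adjusted R2 = .15) penalizes the raw R2 for the number of predictors relative to the sample size. A minimal sketch of that adjustment, using the study's n = 66 but an illustrative raw R2 and predictor count that are not taken from the paper:

```python
def adjusted_r2(r2: float, n: int, p: int) -> float:
    """Adjusted R^2 = 1 - (1 - R^2) * (n - 1) / (n - p - 1)."""
    return 1.0 - (1.0 - r2) * (n - 1) / (n - p - 1)

# Hypothetical example: 66 children, 2 predictors
# (gaze at parent, smiling while gazing at parent).
r2 = 0.176  # illustrative raw R^2, not a value from the paper
print(round(adjusted_r2(r2, n=66, p=2), 2))  # → 0.15
```

With small samples and several predictors the adjustment can be substantial, which is why adjusted R2 is the conventional figure to report.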
Affiliation(s)
- Yeojin A Ahn
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Yudong Tao
- Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
- Stephanie Custode
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Meaghan Parlade
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Amy Beaumont
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Sandra Cardona
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Melissa Hale
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Jennifer Durocher
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Mei-Ling Shyu
- Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
- Lynn K Perry
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Daniel S Messinger
- Department of Psychology, University of Miami, 5665 Ponce de Leon Blvd., P.O. Box 248185, Coral Gables, FL, 33124, USA
- Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
- Departments of Pediatrics and Music Engineering, University of Miami, Coral Gables, FL, USA
2. Zaharieva MS, Salvadori EA, Messinger DS, Visser I, Colonnesi C. Automated facial expression measurement in a longitudinal sample of 4- and 8-month-olds: Baby FaceReader 9 and manual coding of affective expressions. Behav Res Methods 2024. PMID: 38273072. DOI: 10.3758/s13428-023-02301-3.
Abstract
Facial expressions are among the earliest behaviors infants use to express emotional states, and are crucial to preverbal social interaction. Manual coding of infant facial expressions, however, is laborious and poses limitations to replicability. Recent developments in computer vision have advanced automated facial expression analyses in adults, providing reproducible results at lower time investment. Baby FaceReader 9 is commercially available software for automated measurement of infant facial expressions, but has received little validation. We compared Baby FaceReader 9 output to manual micro-coding of positive, negative, or neutral facial expressions in a longitudinal dataset of 58 infants at 4 and 8 months of age during naturalistic face-to-face interactions with the mother, father, and an unfamiliar adult. Baby FaceReader 9's global emotional valence formula yielded reasonable classification accuracy (AUC = .81) for discriminating manually coded positive from negative/neutral facial expressions; however, the discrimination of negative from neutral facial expressions was not reliable (AUC = .58). Automatically detected a priori action unit (AU) configurations for distinguishing positive from negative facial expressions based on existing literature were also not reliable. A parsimonious approach using only automatically detected smiling (AU12) yielded good performance for discriminating positive from negative/neutral facial expressions (AUC = .86). Likewise, automatically detected brow lowering (AU3+AU4) reliably distinguished neutral from negative facial expressions (AUC = .79). These results provide initial support for the use of selected automatically detected individual facial actions to index positive and negative affect in young infants, but shed doubt on the accuracy of complex a priori formulas.
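The AUC values above have a direct probabilistic reading: the chance that a randomly chosen positive-expression frame receives a higher score (e.g., automatically detected AU12 smiling intensity) than a randomly chosen negative/neutral frame. A self-contained sketch of that computation via the Mann-Whitney formulation, with made-up intensity values:

```python
def auc(pos_scores, neg_scores):
    """AUC as P(pos > neg) + 0.5 * P(tie) over all positive/negative pairs."""
    wins = ties = 0
    for p in pos_scores:
        for n in neg_scores:
            if p > n:
                wins += 1
            elif p == n:
                ties += 1
    return (wins + 0.5 * ties) / (len(pos_scores) * len(neg_scores))

# Illustrative AU12 (smile) intensities, not real data:
smile_when_positive = [1.8, 2.4, 0.9, 3.1, 2.0]
smile_when_neg_neutral = [0.2, 0.9, 0.4, 1.1, 0.1]
print(auc(smile_when_positive, smile_when_neg_neutral))  # → 0.94
```

An AUC of .5 means chance-level discrimination (compare the AUC = .58 for negative vs. neutral above), while values near 1 indicate near-perfect separation.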
Affiliation(s)
- Martina S Zaharieva
- Department of Developmental Psychology, Faculty of Social and Behavioural Sciences, University of Amsterdam, Nieuwe Achtergracht 129b, 1001 NK, Amsterdam, The Netherlands
- Developmental Psychopathology Unit, Research Institute of Child Development and Education, Faculty of Social and Behavioural Sciences, University of Amsterdam, Nieuwe Achtergracht 129b, 1001 NK, Amsterdam, The Netherlands
- Yield, Research Priority Area, University of Amsterdam, Amsterdam, The Netherlands
- Eliala A Salvadori
- Developmental Psychopathology Unit, Research Institute of Child Development and Education, Faculty of Social and Behavioural Sciences, University of Amsterdam, Nieuwe Achtergracht 129b, 1001 NK, Amsterdam, The Netherlands
- Yield, Research Priority Area, University of Amsterdam, Amsterdam, The Netherlands
- Daniel S Messinger
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Department of Pediatrics, University of Miami, Coral Gables, FL, USA
- Department of Music Engineering, University of Miami, Coral Gables, FL, USA
- Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
- Ingmar Visser
- Department of Developmental Psychology, Faculty of Social and Behavioural Sciences, University of Amsterdam, Nieuwe Achtergracht 129b, 1001 NK, Amsterdam, The Netherlands
- Yield, Research Priority Area, University of Amsterdam, Amsterdam, The Netherlands
- Cristina Colonnesi
- Developmental Psychopathology Unit, Research Institute of Child Development and Education, Faculty of Social and Behavioural Sciences, University of Amsterdam, Nieuwe Achtergracht 129b, 1001 NK, Amsterdam, The Netherlands
- Yield, Research Priority Area, University of Amsterdam, Amsterdam, The Netherlands
3. Namba S, Sato W, Namba S, Nomiya H, Shimokawa K, Osumi M. Development of the RIKEN database for dynamic facial expressions with multiple angles. Sci Rep 2023;13:21785. PMID: 38066065. PMCID: PMC10709572. DOI: 10.1038/s41598-023-49209-8.
Abstract
The development of facial expressions with sensing information is progressing in multidisciplinary fields, such as psychology, affective computing, and cognitive science. Previous facial datasets have not simultaneously dealt with multiple theoretical views of emotion, individualized context, or multi-angle/depth information. We developed a new facial database (RIKEN facial expression database) that includes multiple theoretical views of emotions and expressers' individualized events with multi-angle and depth information. The RIKEN facial expression database contains recordings of 48 Japanese participants captured using ten Kinect cameras at 25 events. This study identified several valence-related facial patterns and found them consistent with previous research investigating the coherence between facial movements and internal states. This database represents an advancement in developing a new sensing system, conducting psychological experiments, and understanding the complexity of emotional events.
Affiliation(s)
- Shushi Namba
- RIKEN, Psychological Process Research Team, Guardian Robot Project, Kyoto, 6190288, Japan
- Department of Psychology, Hiroshima University, Hiroshima, 7398524, Japan
- Wataru Sato
- RIKEN, Psychological Process Research Team, Guardian Robot Project, Kyoto, 6190288, Japan
- Saori Namba
- Department of Psychology, Hiroshima University, Hiroshima, 7398524, Japan
- Hiroki Nomiya
- Faculty of Information and Human Sciences, Kyoto Institute of Technology, Kyoto, 6068585, Japan
- Koh Shimokawa
- KOHINATA Limited Liability Company, Osaka, 5560020, Japan
- Masaki Osumi
- KOHINATA Limited Liability Company, Osaka, 5560020, Japan
4. Ahn YA, Ertuğrul IÖ, Chow SM, Cohn JF, Messinger DS. Automated measurement of infant and mother Duchenne facial expressions in the Face-to-Face/Still-Face. Infancy 2023;28:910-929. PMID: 37466002. PMCID: PMC10426229. DOI: 10.1111/infa.12556.
Abstract
Although still-face effects are well-studied, little is known about the degree to which the Face-to-Face/Still-Face (FFSF) procedure is associated with the production of intense affective displays. Duchenne smiling expresses more intense positive affect than non-Duchenne smiling, while Duchenne cry-faces express more intense negative affect than non-Duchenne cry-faces. Forty 4-month-old infants and their mothers completed the FFSF, and key affect-indexing facial Action Units (AUs) were coded by expert Facial Action Coding System coders for the first 30 s of each FFSF episode (face-to-face, FF; still-face, SF; reunion, RE). Computer vision software, automated facial affect recognition (AFAR), identified AUs for the entire 2-min episodes. Expert coding and AFAR produced similar infant and mother Duchenne and non-Duchenne FFSF effects, highlighting the convergent validity of automated measurement. Substantive AFAR analyses indicated that both infant Duchenne and non-Duchenne smiling declined from the FF to the SF, but only Duchenne smiling increased from the SF to the RE. In similar fashion, the magnitudes of mother Duchenne smiling changes across the FFSF were 2-4 times greater than those of non-Duchenne smiling changes. Duchenne expressions appear to be a sensitive index of intense infant and mother affective valence that is accessible to automated measurement and may be a target for future FFSF research.
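Duchenne smiling is conventionally operationalized in FACS as the co-occurrence of lip-corner raising (AU12) and eye constriction (AU6), so once per-frame AU occurrences are available, whether from expert coders or AFAR, classifying each frame is a simple lookup. A minimal sketch with hypothetical frames:

```python
# Per-frame binary AU occurrences (1 = present), illustrative only.
frames = [
    {"AU6": 1, "AU12": 1},  # smiling with eye constriction
    {"AU6": 0, "AU12": 1},  # smiling without eye constriction
    {"AU6": 1, "AU12": 0},  # eye constriction without smiling
    {"AU6": 0, "AU12": 0},  # neutral
]

def smile_type(frame):
    """Classify a frame by the conventional AU6 + AU12 Duchenne criterion."""
    if frame["AU12"] and frame["AU6"]:
        return "duchenne"
    if frame["AU12"]:
        return "non_duchenne"
    return "no_smile"

print([smile_type(f) for f in frames])
# → ['duchenne', 'non_duchenne', 'no_smile', 'no_smile']
```

The same logic applies to cry-faces by substituting the negative-affect mouth action for AU12.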
Affiliation(s)
- Yeojin Amy Ahn
- Department of Psychology, University of Miami, Coral Gables, Florida, USA
- Itir Önal Ertuğrul
- Department of Information and Computing Sciences, Utrecht University, Utrecht, Netherlands
- Sy-Miin Chow
- Department of Human Development and Family Studies, Pennsylvania State University, State College, Pennsylvania, USA
- Jeffrey F. Cohn
- Department of Psychology, University of Pittsburgh, Pittsburgh, Pennsylvania, USA
- Daniel S. Messinger
- Department of Psychology, University of Miami, Coral Gables, Florida, USA
- Department of Electrical and Computer Engineering, University of Miami, Coral Gables, Florida, USA
- Departments of Pediatrics and Music Engineering, University of Miami, Coral Gables, Florida, USA
5. Harjunen VJ, Krusemark E, Stigzelius S, Halmesvaara OW, Annala M, Henttonen P, Määttänen I, Silfver M, Keltikangas-Järvinen L, Ravaja N. Under the thin skin of narcissus: Facial muscle activity reveals amplified emotional responses to negative social evaluation in individuals with grandiose narcissistic traits. Psychophysiology 2023;60:e14315. PMID: 37186319. DOI: 10.1111/psyp.14315.
Abstract
Individuals with grandiose narcissism exhibit enhanced antagonism and a defensive pattern of discordance between their emotional and physiological reactions to self-threatening evaluations. Although theoretical perspectives link narcissistic defensiveness to negative emotions, empirical evidence linking grandiose narcissism to emotional reactivity remains mixed. The current study used self-reported affect, electrocardiography, and facial electromyography (fEMG) to examine whether people scoring high in grandiose narcissism show amplified physiological and self-reported emotional reactivity to negative social evaluation. Following two challenging cognitive tasks, participants received negative and neutral feedback in a face-to-face evaluation situation. Receiving negative feedback decreased self-reported positive affect and dominance, slowed heart rate, and amplified fEMG activity related to frowning and eye constriction. Although self-reported emotional reactions were unrelated to grandiose narcissism, fEMG activity associated with negative affect was significantly enhanced by grandiose narcissism. In conclusion, individuals with higher levels of grandiose narcissism may not be willing to report overt emotional reactivity to self-threatening feedback, but physiological responses "beneath their thin skin" reveal amplified threat-related facial muscle activity suggestive of a negative emotional state.
Affiliation(s)
- Ville J Harjunen
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Elizabeth Krusemark
- Department of Psychology and Neuroscience, Millsaps College, Jackson, Mississippi, USA
- Saskia Stigzelius
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Finnish Institute for Health and Welfare (THL), Helsinki, Finland
- Otto W Halmesvaara
- Social Psychology, Faculty of Social Sciences, University of Helsinki, Helsinki, Finland
- Mikko Annala
- Social Psychology, Faculty of Social Sciences, University of Helsinki, Helsinki, Finland
- Pentti Henttonen
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Ilmari Määttänen
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Mia Silfver
- Social Psychology, Faculty of Social Sciences, University of Helsinki, Helsinki, Finland
- Niklas Ravaja
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
6. Onal Ertugrul I, Ahn YA, Bilalpur M, Messinger DS, Speltz ML, Cohn JF. Infant AFAR: Automated facial action recognition in infants. Behav Res Methods 2023;55:1024-1035. PMID: 35538295. PMCID: PMC9646921. DOI: 10.3758/s13428-022-01863-y.
Abstract
Automated detection of facial action units in infants is challenging. Infant faces have different proportions, less texture, fewer wrinkles and furrows, and unique facial actions relative to adults. For these and related reasons, action unit (AU) detectors that are trained on adult faces may generalize poorly to infant faces. To train and test AU detectors for infant faces, we trained convolutional neural networks (CNNs) on adult video databases and fine-tuned these networks on two large, manually annotated infant video databases that differ in context, head pose, illumination, video resolution, and infant age. AUs were those central to the expression of positive and negative emotion. AU detectors trained on infants greatly outperformed ones trained previously on adults. Training AU detectors across infant databases afforded greater robustness to between-database differences than did training database-specific AU detectors, and outperformed the previous state of the art in infant AU detection. The resulting AU detection system, which we refer to as Infant AFAR (Automated Facial Action Recognition), is available to the research community for further testing and applications in infant emotion, social interaction, and related topics.
7. Krumhuber EG, Kappas A. More What Duchenne Smiles Do, Less What They Express. Perspect Psychol Sci 2022;17:1566-1575. PMID: 35712993. DOI: 10.1177/17456916211071083.
Abstract
We comment on an article by Sheldon et al. from a previous issue of Perspectives (May 2021). They argued that the presence of positive emotion (Hypothesis 1), the intensity of positive emotion (Hypothesis 2), and chronic positive mood (Hypothesis 3) are reliably signaled by the Duchenne smile (DS). We reexamined the cited literature in support of each hypothesis and show that the study findings were mostly inconclusive, irrelevant, incomplete, and/or misread. In fact, no single (empirical) article unanimously supports the idea that DSs function solely as indicators of felt positive affect. Additional evidence is reviewed suggesting that DSs can be, and often are, displayed deliberately and in the absence of positive feelings. Although DSs may lead to favorable interpersonal perceptions and positive emotional responses in the observer, we propose a functional view that focuses on what facial actions (here, specifically DSs) do rather than what they express.
Affiliation(s)
- Eva G Krumhuber
- Department of Experimental Psychology, University College London
- Arvid Kappas
- Department of Psychology, Jacobs University Bremen
8. Cross MP, Acevedo AM, Leger KA, Pressman SD. How and Why Could Smiling Influence Physical Health? A Conceptual Review. Health Psychol Rev 2022;17:321-343. PMID: 35285408. DOI: 10.1080/17437199.2022.2052740.
Abstract
Smiling has been a topic of interest to psychologists for decades, with a myriad of studies tying this behavior to well-being. Despite this, we know surprisingly little about the nature of the connections between smiling and physical health. We review the literature connecting both naturally occurring smiles and experimentally manipulated smiles to physical health and health-relevant outcomes. This work is discussed in the context of existing affect and health-relevant theoretical models that help explain the connection between smiling and physical health, including the facial feedback hypothesis, the undoing hypothesis, the generalized unsafety theory of stress, and polyvagal theory. We also describe a number of plausible pathways, some new and relatively untested, through which smiling may influence physical health, such as trait or state positive affect, social relationships, stress buffering, and the oculocardiac reflex. Finally, we discuss possible future directions, including the importance of cultural variation and replication. Although this field is still in its infancy, findings from both naturally occurring and experimentally manipulated smile studies consistently suggest that smiling may confer a number of health-relevant benefits, including beneficial effects on physiology during acute stress, improved stress recovery, and reduced illness over time.
Affiliation(s)
- Marie P Cross
- Department of Biobehavioral Health, Pennsylvania State University, University Park, PA, USA
- Amanda M Acevedo
- Department of Psychological Science, University of California, Irvine, Irvine, CA, USA
- Kate A Leger
- Department of Psychology, University of Kentucky, Lexington, KY, USA
- Sarah D Pressman
- Department of Psychological Science, University of California, Irvine, Irvine, CA, USA
|
9
|
Alvari G, Furlanello C, Venuti P. Is Smiling the Key? Machine Learning Analytics Detect Subtle Patterns in Micro-Expressions of Infants with ASD. J Clin Med 2021; 10:1776. [PMID: 33921756 PMCID: PMC8073678 DOI: 10.3390/jcm10081776] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2021] [Revised: 04/15/2021] [Accepted: 04/16/2021] [Indexed: 01/01/2023] Open
Abstract
Time is a key factor in Autism Spectrum Disorder: detecting the condition as early as possible is crucial for treatment success. Despite advances in the literature, it remains difficult to identify early markers that effectively forecast the manifestation of symptoms. Artificial intelligence (AI) provides effective alternatives for behavior screening. To this end, we investigated facial expressions in 18 autistic and 15 typical infants during their first ecological interactions, between 6 and 12 months of age. We employed OpenFace, AI-based software designed to systematically analyze facial micro-movements in images, to extract the subtle dynamics of social smiles in unconstrained home videos. Infants with autism showed reduced frequency and activation intensity of social smiles. Machine learning models enabled us to map facial behavior consistently, exposing early differences hardly detectable by the non-expert naked eye. This outcome contributes to enhancing the potential of AI as a supportive tool for the clinical framework.
Affiliation(s)
- Gianpaolo Alvari
- Department of Psychology and Cognitive Sciences, University of Trento, 38068 Rovereto, Italy
- Data Science for Health (DSH) Research Unit, Bruno Kessler Foundation (FBK), 38123 Trento, Italy
- Paola Venuti
- Department of Psychology and Cognitive Sciences, University of Trento, 38068 Rovereto, Italy
10. Children's Facial Muscular Movements and Risk for Early Psychopathology: Assessing Clinical Utility. Behav Ther 2020;51:253-267. PMID: 32138936. PMCID: PMC7476425. DOI: 10.1016/j.beth.2019.08.004.
Abstract
Standardized developmentally based assessment systems have transformed the capacity to identify transdiagnostic behavioral markers of mental disorder risk in early childhood, notably clinically significant irritability and externalizing behaviors. However, behavior-based instruments that both differentiate risk for persistent psychopathology from normative misbehavior and are feasible for community clinicians to implement are in nascent phases of development. Young children's facial expressions during frustration challenges may form the basis for novel assessment tools that are flexible, quick, and easy to implement as markers of psychopathology to complement validated questionnaires. However, the accuracy of facial expressions in correctly classifying young children falling above and below clinical cut-offs is unknown. Our goal was to test how facial expressions during frustration, defined by different facial muscular movements, related to individual differences in irritability and externalizing behaviors and discriminated children with clinically significant levels from peers. Participants were 79 children (ages 3-7) who completed a short, moderately frustrating computer task while facial expressions were recorded. Only negative facial expressions that included eye constriction were related to irritability and externalizing behaviors and were clinically discriminating. Moreover, these expressions significantly discriminated children with and without clinically significant irritability and externalizing symptoms, with high Area Under the Curve (AUC) values (> .75) indicating good clinical utility. In contrast, expressions without eye constriction showed no clinical utility. The presence of negative expressions with eye constriction in response to a short frustration prompt may serve as an indicator of early psychopathology, raising the potential for novel assessment tools that may enhance the precision of early identification.
11. Haines N, Bell Z, Crowell S, Hahn H, Kamara D, McDonough-Caplan H, Shader T, Beauchaine TP. Using automated computer vision and machine learning to code facial expressions of affect and arousal: Implications for emotion dysregulation research. Dev Psychopathol 2019;31:871-886. PMID: 30919792. PMCID: PMC7319037. DOI: 10.1017/s0954579419000312.
Abstract
As early as infancy, caregivers' facial expressions shape children's behaviors, help them regulate their emotions, and encourage or dissuade their interpersonal agency. In childhood and adolescence, proficiencies in producing and decoding facial expressions promote social competence, whereas deficiencies characterize several forms of psychopathology. To date, however, studying facial expressions has been hampered by the labor-intensive, time-consuming nature of human coding. We describe a partial solution: automated facial expression coding (AFEC), which combines computer vision and machine learning to code facial expressions in real time. Although AFEC cannot capture the full complexity of human emotion, it codes positive affect, negative affect, and arousal (core Research Domain Criteria constructs) as accurately as humans, and it characterizes emotion dysregulation with greater specificity than other objective measures such as autonomic responding. We provide an example in which we use AFEC to evaluate emotion dynamics in mother-daughter dyads engaged in conflict. Among other findings, AFEC (a) shows convergent validity with a validated human coding scheme, (b) distinguishes among risk groups, and (c) detects developmental increases in positive dyadic affect correspondence as teen daughters age. Although more research is needed to realize the full potential of AFEC, these findings demonstrate its current utility in research on emotion dysregulation.
Affiliation(s)
- Nathaniel Haines
- Department of Psychology, Ohio State University, Columbus, OH, USA
- Ziv Bell
- Department of Psychology, Ohio State University, Columbus, OH, USA
- Sheila Crowell
- Department of Psychology, University of Utah, Salt Lake City, UT, USA
- Department of Psychiatry, University of Utah, Salt Lake City, UT, USA
- Hunter Hahn
- Department of Psychology, Ohio State University, Columbus, OH, USA
- Dana Kamara
- Department of Psychology, Ohio State University, Columbus, OH, USA
- Tiffany Shader
- Department of Psychology, Ohio State University, Columbus, OH, USA
12. Grabell AS, Huppert TJ, Fishburn FA, Li Y, Jones HM, Wilett AE, Bemis LM, Perlman SB. Using facial muscular movements to understand young children's emotion regulation and concurrent neural activation. Dev Sci 2018;21:e12628. PMID: 29226482. PMCID: PMC5995650. DOI: 10.1111/desc.12628.
Abstract
Individual differences in young children's frustration responses set the stage for myriad developmental outcomes and represent an area of intense empirical interest. Emotion regulation is hypothesized to comprise the interplay of complex behaviors, such as facial expressions, and activation of concurrent underlying neural systems. At present, however, the literature has mostly examined children's observed emotion regulation behaviors and assumed underlying brain activation through separate investigations, resulting in theoretical gaps in our understanding of how children regulate emotion in vivo. Our goal was to elucidate links between young children's emotion regulation-related neural activation, facial muscular movements, and parent-rated temperamental emotion regulation. Sixty-five children (age 3-7) completed a frustration-inducing computer task while lateral prefrontal cortex (LPFC) activation and concurrent facial expressions were recorded. Negative facial expressions with eye constriction were inversely associated with both parent-rated temperamental emotion regulation and concurrent LPFC activation. Moreover, we found evidence that positive expressions with eye constriction during frustration may be associated with stronger LPFC activation. Results suggest a correspondence between facial expressions and LPFC activation that may explicate how children regulate emotion in real time.
Affiliation(s)
- Adam S Grabell
- Department of Psychological and Brain Sciences, University of Massachusetts, Amherst, MA, USA
- Department of Psychiatry, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Theodore J Huppert
- Department of Radiology, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Frank A Fishburn
- Department of Psychiatry, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Yanwei Li
- College of Preschool Education, Nanjing Xiaozhuang University, Nanjing, Jiangsu, China
- Hannah M Jones
- Department of Psychiatry, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Aimee E Wilett
- Department of Psychiatry, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Lisa M Bemis
- Department of Psychiatry, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
- Susan B Perlman
- Department of Psychiatry, University of Pittsburgh School of Medicine, Pittsburgh, PA, USA
13. Hammal Z, Cohn JF, Wallace ER, Heike CL, Birgfeld CB, Oster H, Speltz ML. Facial Expressiveness in Infants With and Without Craniofacial Microsomia: Preliminary Findings. Cleft Palate Craniofac J 2018;55:711-720. PMID: 29377723. PMCID: PMC5936082. DOI: 10.1177/1055665617753481.
Abstract
OBJECTIVE: To compare facial expressiveness (FE) of infants with and without craniofacial microsomia (cases and controls, respectively) and to compare phenotypic variation among cases in relation to FE.
DESIGN: Positive and negative affect was elicited in response to standardized emotion inductions, video recorded, and manually coded from video using the Facial Action Coding System for Infants and Young Children.
SETTING: Five craniofacial centers: Children's Hospital of Los Angeles, Children's Hospital of Philadelphia, Seattle Children's Hospital, University of Illinois-Chicago, and University of North Carolina-Chapel Hill.
PARTICIPANTS: Eighty ethnically diverse 12- to 14-month-old infants.
MAIN OUTCOME MEASURES: FE was measured on a frame-by-frame basis as the sum of 9 observed facial action units (AUs) representative of positive and negative affect.
RESULTS: FE differed between conditions intended to elicit positive and negative affect (95% confidence interval = 0.09-0.66, P = .01). FE did not differ between cases and controls (ES = -0.16 to -0.02, P = .47 to .92). Among cases, those with and without mandibular hypoplasia showed similar levels of FE (ES = -0.38 to 0.54, P = .10 to .66).
CONCLUSIONS: FE varied between positive and negative affect, and cases and controls responded similarly. Null findings for case/control differences may be attributable to a lower than anticipated prevalence of nerve palsy among cases, the selection of AUs, or the use of manual coding. In future research, we will reexamine group differences using an automated, computer vision approach that can cover a broader range of facial movements and their dynamics.
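The outcome measure here is a per-frame sum of observed AU occurrences. A minimal sketch of that frame-by-frame computation; the AU set and frames below are illustrative, not the study's exact nine units:

```python
# Hypothetical affect-indexing AU set; the study summed nine such units.
AFFECT_AUS = ["AU1", "AU4", "AU6", "AU9", "AU12", "AU15", "AU20", "AU25", "AU26"]

def facial_expressiveness(frame_codes):
    """Frame-by-frame FE: count of affect-indexing AUs present in each frame."""
    return [sum(frame.get(au, 0) for au in AFFECT_AUS) for frame in frame_codes]

frames = [
    {"AU6": 1, "AU12": 1},            # smile with eye constriction
    {"AU4": 1, "AU9": 1, "AU20": 1},  # negative-affect configuration
    {},                               # neutral frame
]
print(facial_expressiveness(frames))  # → [2, 3, 0]
```

Summed per frame and then aggregated per condition, this yields a single expressiveness score that can be compared across groups.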
Collapse
Affiliation(s)
- Zakia Hammal
- Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
| | - Jeffrey F. Cohn
- Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA, USA
| | | | - Carrie L. Heike
- Seattle Children’s Research Institute, Seattle, WA, USA
- Seattle Children’s Hospital, Seattle, WA, USA
- University of Washington School of Medicine, Seattle, WA, USA
| | - Craig B. Birgfeld
- Seattle Children’s Research Institute, Seattle, WA, USA
- Seattle Children’s Hospital, Seattle, WA, USA
- University of Washington School of Medicine, Seattle, WA, USA
| | - Harriet Oster
- NYU School of Professional Studies, New York, NY, USA
| | - Matthew L. Speltz
- Seattle Children’s Research Institute, Seattle, WA, USA
- University of Washington School of Medicine, Seattle, WA, USA
| |
Collapse
|
14
|
Hammal Z, Chu WS, Cohn JF, Heike C, Speltz ML. Automatic Action Unit Detection in Infants Using Convolutional Neural Network. INTERNATIONAL CONFERENCE ON AFFECTIVE COMPUTING AND INTELLIGENT INTERACTION AND WORKSHOPS : [PROCEEDINGS]. ACII (CONFERENCE) 2017; 2017:216-221. [PMID: 29862131 PMCID: PMC5976252 DOI: 10.1109/acii.2017.8273603] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
Action unit detection in infants, relative to adults, presents unique challenges. Jaw contour is less distinct, facial texture is reduced, and rapid and unusual facial movements are common. To detect facial action units in spontaneous behavior of infants, we propose a multi-label Convolutional Neural Network (CNN). Eighty-six infants were recorded during tasks intended to elicit enjoyment and frustration. Using an extension of FACS for infants (Baby FACS), over 230,000 frames were manually coded for ground truth. To control for chance agreement, inter-observer agreement between Baby FACS coders was quantified using free-margin kappa. Kappa coefficients ranged from 0.79 to 0.93, which represents high agreement. The multi-label CNN achieved comparable agreement with manual coding, with kappa ranging from 0.69 to 0.93. Importantly, CNN-based AU detection revealed the same pattern of change in infant expressiveness between tasks as manual coding. While further research is needed, these findings suggest that automatic AU detection in infants is a viable alternative to manual coding of infant facial expression.
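The free-margin (free-marginal) kappa used above corrects observed agreement for chance without tying chance to the coders' marginal rates: kappa = (Po - 1/k) / (1 - 1/k), where Po is the proportion of frames on which the two coders agree and k is the number of categories. A minimal sketch (the example codes are invented):

```python
# Free-marginal (Brennan-Prediger) kappa: chance-corrected inter-observer
# agreement where expected chance agreement is 1/k for k categories, rather
# than being estimated from the coders' observed marginal distributions.

def free_marginal_kappa(codes_a, codes_b, k=2):
    """Per-frame category codes from two coders; k = number of categories."""
    assert len(codes_a) == len(codes_b) and len(codes_a) > 0
    p_obs = sum(a == b for a, b in zip(codes_a, codes_b)) / len(codes_a)
    p_chance = 1.0 / k
    return (p_obs - p_chance) / (1.0 - p_chance)

# Two coders agree on 9 of 10 binary AU frames: Po = 0.9, kappa = 0.8
a = [1, 0, 1, 1, 0, 0, 1, 0, 1, 1]
b = [1, 0, 1, 1, 0, 0, 1, 0, 1, 0]
print(round(free_marginal_kappa(a, b), 2))  # 0.8
```

For binary AU presence/absence coding (k = 2) this reduces to 2·Po − 1.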
Collapse
Affiliation(s)
- Zakia Hammal
- Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
| | - Wen-Sheng Chu
- Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
| | - Jeffrey F Cohn
- Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA, USA
| | | | | |
Collapse
|
15
|
Cooper L, Lui M, Nduka C. Botulinum toxin treatment for facial palsy: A systematic review. J Plast Reconstr Aesthet Surg 2017; 70:833-841. [DOI: 10.1016/j.bjps.2017.01.009] [Citation(s) in RCA: 37] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2016] [Revised: 12/31/2016] [Accepted: 01/31/2017] [Indexed: 10/20/2022]
|
16
|
Namba S, Kabir RS, Miyatani M, Nakao T. Spontaneous Facial Actions Map onto Emotional Experiences in a Non-social Context: Toward a Component-Based Approach. Front Psychol 2017; 8:633. [PMID: 28522979 PMCID: PMC5415601 DOI: 10.3389/fpsyg.2017.00633] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/30/2017] [Accepted: 04/09/2017] [Indexed: 11/20/2022] Open
Abstract
While numerous studies have examined the relationships between facial actions and emotions, they have yet to account for the ways that specific spontaneous facial expressions map onto emotional experiences induced without expressive intent. Moreover, previous studies emphasized that a fine-grained investigation of facial components could establish the coherence of facial actions with actual internal states. Therefore, this study aimed to accumulate evidence for the correspondence between spontaneous facial components and emotional experiences. We reinvestigated data from previous research that covertly recorded spontaneous facial expressions of Japanese participants as they watched film clips designed to evoke four different target emotions: surprise, amusement, disgust, and sadness. The participants rated their emotional experiences via a self-reported questionnaire of 16 emotions. These spontaneous facial expressions were coded using the Facial Action Coding System, the gold standard for classifying visible facial movements. We tested the contribution of each facial action to the emotional experience ratings by applying stepwise regression models. The results showed that spontaneous facial components occurred in ways that cohere with their evolutionary functions based on the rating values of emotional experiences (e.g., the inner brow raiser might be involved in the evaluation of novelty). This study provided new empirical evidence for the correspondence between each spontaneous facial component and first-person internal states of emotion as reported by the expresser.
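A greedy forward-stepwise regression of the kind referenced above can be sketched as follows; the AU names, the synthetic binary codes, and the R²-gain stopping rule are illustrative assumptions, not the study's actual model:

```python
# Greedy forward-stepwise regression: at each step, add the AU predictor that
# most improves R^2 of an OLS fit to an emotion rating; stop when the gain
# falls below a threshold. A simplified stand-in for full stepwise selection.
import numpy as np

def r_squared(X, y):
    """R^2 of an ordinary least-squares fit with an intercept."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - (resid @ resid) / ((y - y.mean()) ** 2).sum()

def forward_stepwise(X, y, names, min_gain=0.01):
    """Return the selected predictor names and the final R^2."""
    selected, current_r2 = [], 0.0
    remaining = list(range(X.shape[1]))
    while remaining:
        best_r2, best_j = max((r_squared(X[:, selected + [j]], y), j)
                              for j in remaining)
        if best_r2 - current_r2 < min_gain:
            break
        selected.append(best_j)
        remaining.remove(best_j)
        current_r2 = best_r2
    return [names[j] for j in selected], current_r2

# Synthetic data: the "amusement" rating tracks AU12 (lip corner puller)
rng = np.random.default_rng(0)
au = rng.integers(0, 2, size=(60, 3)).astype(float)  # columns: AU1, AU12, AU15
rating = 2.0 * au[:, 1] + rng.normal(0.0, 0.3, 60)
names, r2 = forward_stepwise(au, rating, ["AU1", "AU12", "AU15"])
print(names[0])  # AU12
```

The selected-predictor list plays the role of the facial components retained by the stepwise models for each emotion rating.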
Collapse
Affiliation(s)
- Shushi Namba
- Graduate School of Education, Hiroshima University, Hiroshima, Japan
| | - Russell S Kabir
- Graduate School of Education, Hiroshima University, Hiroshima, Japan
| | - Makoto Miyatani
- Department of Psychology, Hiroshima University, Hiroshima, Japan
| | - Takashi Nakao
- Department of Psychology, Hiroshima University, Hiroshima, Japan
| |
Collapse
|
17
|
Hammal Z, Cohn JF, Messinger DS. Head Movement Dynamics During Play and Perturbed Mother-Infant Interaction. IEEE TRANSACTIONS ON AFFECTIVE COMPUTING 2015; 6:361-370. [PMID: 26640622 PMCID: PMC4666546 DOI: 10.1109/taffc.2015.2422702] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
We investigated the dynamics of head movement in mothers and infants during an age-appropriate, well-validated emotion induction, the Still Face paradigm. In this paradigm, mothers and infants play normally for 2 minutes (Play), followed by 2 minutes in which the mothers remain unresponsive (Still Face), and then 2 minutes in which they resume normal behavior (Reunion). Participants were 42 ethnically diverse 4-month-old infants and their mothers. Mother and infant angular displacement and angular velocity were measured using the CSIRO head tracker. In male but not female infants, angular displacement increased from Play to Still Face and decreased from Still Face to Reunion. Infant angular velocity was higher during Still Face than Reunion, with no differences between male and female infants. Windowed cross-correlation revealed dramatic changes over time in the strength and direction of association between infant and mother head movements. Coordination between mother and infant head movement velocity was greater during Play than during Reunion. Together, these findings suggest that angular displacement, angular velocity, and their coordination between mothers and infants are strongly related to age-appropriate emotion challenge. Attention to head movement can deepen our understanding of emotion communication.
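Windowed cross-correlation, as used above, computes a correlation in short sliding windows so the association between two time series can change strength and sign over the interaction. A minimal sketch with synthetic head-velocity series (the window length and data are illustrative assumptions):

```python
# Windowed cross-correlation: Pearson r between two head-velocity series,
# computed in short sliding windows so the association can change over time.
import numpy as np

def windowed_xcorr(x, y, win=50, step=25):
    """Per-window Pearson correlation between two equal-length 1-D series."""
    return np.array([
        np.corrcoef(x[s:s + win], y[s:s + win])[0, 1]
        for s in range(0, len(x) - win + 1, step)
    ])

# Synthetic velocities: the association flips from in-phase to anti-phase
t = np.linspace(0.0, 10.0, 200)
mother = np.sin(t)
infant = np.concatenate([np.sin(t[:100]), -np.sin(t[100:])])
r = windowed_xcorr(mother, infant)
print(r[0] > 0.9, r[-1] < -0.9)  # True True
```

A single whole-session correlation would average these opposing windows toward zero, which is why the windowed form is needed to see changes in direction of association.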
Collapse
Affiliation(s)
- Zakia Hammal
- Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
| | - Jeffrey F Cohn
- Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA, USA
| | - Daniel S Messinger
- Department of Psychology (secondary appointments in Pediatrics, Electrical and Computer Engineering, and Music Engineering), University of Miami, Coral Gables, FL, USA
| |
Collapse
|
18
|
Aviezer H, Messinger DS, Zangvil S, Mattson WI, Gangi DN, Todorov A. Thrill of victory or agony of defeat? Perceivers fail to utilize information in facial movements. Emotion 2015; 15:791-7. [PMID: 26010575 DOI: 10.1037/emo0000073] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Although the distinction between positive and negative facial expressions is assumed to be clear and robust, recent research with intense real-life faces has shown that viewers are unable to reliably differentiate the valence of such expressions (Aviezer, Trope, & Todorov, 2012). Yet the fact that viewers fail to distinguish these expressions does not in itself testify that the faces are physically identical. In Experiment 1, the muscular activity of victorious and defeated faces was analyzed. Individually coded facial actions, particularly smiling and mouth opening, were more frequent among winners than losers, indicating an objective difference in facial activity. In Experiment 2, we asked whether supplying participants with valid or invalid information about objective facial activity and valence would alter their ratings. Notwithstanding these manipulations, valence ratings were virtually identical in all groups, and participants failed to differentiate between positive and negative faces. While objective differences between intense positive and negative faces are detectable, human viewers do not utilize these differences in determining valence. These results suggest a surprising dissociation between information present in expressions and information used by perceivers.
Collapse
Affiliation(s)
- Hillel Aviezer
- Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel
| | | | - Shiri Zangvil
- Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel
| | | | | | | |
Collapse
|