1
Zaharieva MS, Salvadori EA, Messinger DS, Visser I, Colonnesi C. Automated facial expression measurement in a longitudinal sample of 4- and 8-month-olds: Baby FaceReader 9 and manual coding of affective expressions. Behav Res Methods 2024; 56:5709-5731. [PMID: 38273072 PMCID: PMC11335827 DOI: 10.3758/s13428-023-02301-3]
Abstract
Facial expressions are among the earliest behaviors infants use to express emotional states, and are crucial to preverbal social interaction. Manual coding of infant facial expressions, however, is laborious and poses limitations to replicability. Recent developments in computer vision have advanced automated facial expression analyses in adults, providing reproducible results at lower time investment. Baby FaceReader 9 is commercially available software for automated measurement of infant facial expressions, but has received little validation. We compared Baby FaceReader 9 output to manual micro-coding of positive, negative, or neutral facial expressions in a longitudinal dataset of 58 infants at 4 and 8 months of age during naturalistic face-to-face interactions with the mother, father, and an unfamiliar adult. Baby FaceReader 9's global emotional valence formula yielded reasonable classification accuracy (AUC = .81) for discriminating manually coded positive from negative/neutral facial expressions; however, the discrimination of negative from neutral facial expressions was not reliable (AUC = .58). Automatically detected a priori action unit (AU) configurations for distinguishing positive from negative facial expressions based on existing literature were also not reliable. A parsimonious approach using only automatically detected smiling (AU12) yielded good performance for discriminating positive from negative/neutral facial expressions (AUC = .86). Likewise, automatically detected brow lowering (AU3+AU4) reliably distinguished neutral from negative facial expressions (AUC = .79). These results provide initial support for the use of selected automatically detected individual facial actions to index positive and negative affect in young infants, but cast doubt on the accuracy of complex a priori formulas.
Affiliation(s)
- Martina S Zaharieva
- Department of Developmental Psychology, Faculty of Social and Behavioural Sciences, University of Amsterdam, Nieuwe Achtergracht 129b, 1001 NK, Amsterdam, The Netherlands.
- Developmental Psychopathology Unit, Research Institute of Child Development and Education, Faculty of Social and Behavioural Sciences, University of Amsterdam, Nieuwe Achtergracht 129b, 1001 NK, Amsterdam, The Netherlands.
- Yield, Research Priority Area, University of Amsterdam, Amsterdam, The Netherlands.
- Eliala A Salvadori
- Developmental Psychopathology Unit, Research Institute of Child Development and Education, Faculty of Social and Behavioural Sciences, University of Amsterdam, Nieuwe Achtergracht 129b, 1001 NK, Amsterdam, The Netherlands
- Yield, Research Priority Area, University of Amsterdam, Amsterdam, The Netherlands
- Daniel S Messinger
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Department of Pediatrics, University of Miami, Coral Gables, FL, USA
- Department of Music Engineering, University of Miami, Coral Gables, FL, USA
- Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
- Ingmar Visser
- Department of Developmental Psychology, Faculty of Social and Behavioural Sciences, University of Amsterdam, Nieuwe Achtergracht 129b, 1001 NK, Amsterdam, The Netherlands
- Yield, Research Priority Area, University of Amsterdam, Amsterdam, The Netherlands
- Cristina Colonnesi
- Developmental Psychopathology Unit, Research Institute of Child Development and Education, Faculty of Social and Behavioural Sciences, University of Amsterdam, Nieuwe Achtergracht 129b, 1001 NK, Amsterdam, The Netherlands
- Yield, Research Priority Area, University of Amsterdam, Amsterdam, The Netherlands
2
Atzil-Slonim D, Penedo JMG, Lutz W. Leveraging Novel Technologies and Artificial Intelligence to Advance Practice-Oriented Research. Adm Policy Ment Health 2024; 51:306-317. [PMID: 37880473 DOI: 10.1007/s10488-023-01309-3]
Abstract
Mental health services are experiencing notable transformations as innovative technologies and artificial intelligence (AI) are increasingly utilized in a growing number of studies and services. These cutting-edge technologies carry the promise of substantial improvements in the field of mental health. Nevertheless, questions emerge about the alignment of novel technologies and AI systems with human needs, especially in the context of vulnerable populations receiving mental healthcare. The practice-oriented research (POR) model is pivotal in seamlessly integrating these emerging technologies into clinical research and practice. It underscores the importance of close collaboration between clinicians and researchers, all driven by the central goal of ensuring and elevating client well-being. This paper focuses on how novel technologies can enhance the POR model and highlights its pivotal role in integrating these technologies into clinical research and practice. We discuss two key phases: pre-treatment and during treatment. For each phase, we describe the challenges, present the major technological innovations, describe recent studies exemplifying technology use, and suggest future directions. Ethical concerns and the importance of aligning humans and technology are also considered, in addition to implications for practice and training.
Affiliation(s)
- Wolfgang Lutz
- Department of Psychology, University of Trier, Trier, Germany
3
Kim H, Küster D, Girard JM, Krumhuber EG. Human and machine recognition of dynamic and static facial expressions: prototypicality, ambiguity, and complexity. Front Psychol 2023; 14:1221081. [PMID: 37794914 PMCID: PMC10546417 DOI: 10.3389/fpsyg.2023.1221081]
Abstract
A growing body of research suggests that movement aids facial expression recognition. However, less is known about the conditions under which the dynamic advantage occurs. The aim of this research was to test emotion recognition in static and dynamic facial expressions, thereby exploring the role of three featural parameters (prototypicality, ambiguity, and complexity) in human and machine analysis. In two studies, facial expression videos and corresponding images depicting the peak of the target and non-target emotion were presented to human observers and the machine classifier (FACET). Results revealed higher recognition rates for dynamic stimuli compared to non-target images. This benefit disappeared for target-emotion images, which were recognised as well as, or even better than, videos, and which were more prototypical, less ambiguous, and more complex in appearance than non-target images. While prototypicality and ambiguity exerted more predictive power in machine performance, complexity was more indicative of human emotion recognition. Interestingly, recognition performance by the machine was found to be superior to that of humans for both target and non-target images. Together, the findings point towards a compensatory role of dynamic information, particularly when static-based stimuli lack relevant features of the target emotion. Implications for research using automatic facial expression analysis (AFEA) are discussed.
Affiliation(s)
- Hyunwoo Kim
- Department of Experimental Psychology, University College London, London, United Kingdom
- Dennis Küster
- Cognitive Systems Lab, Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany
- Jeffrey M. Girard
- Department of Psychology, University of Kansas, Lawrence, KS, United States
- Eva G. Krumhuber
- Department of Experimental Psychology, University College London, London, United Kingdom
4
Prasad S, Arunachalam S, Boillat T, Ghoneima A, Gandedkar N, Diar-Bakirly S. Wearable Orofacial Technology and Orthodontics. Dent J (Basel) 2023; 11:24. [PMID: 36661561 PMCID: PMC9858298 DOI: 10.3390/dj11010024]
Abstract
Wearable technologies that augment traditional approaches are increasingly being added to the arsenals of treatment providers. Wearable technology generally refers to electronic systems, devices, or sensors that are usually worn on or are in close proximity to the human body. Wearables may be stand-alone or integrated into materials that are worn on the body. What sets medical wearables apart from other systems is their ability to collect, store, and relay information regarding an individual's current body status to other devices operating on compatible networks in naturalistic settings. The last decade has witnessed a steady increase in the use of wearables specific to the orofacial region. Applications range from supplementing diagnosis, tracking treatment progress, and monitoring patient compliance, to better understanding the jaw's functional and parafunctional activities. Orofacial wearable devices may be unimodal or incorporate multiple sensing modalities. The objective data collected continuously, in real time, in naturalistic settings using these orofacial wearables provide opportunities to formulate accurate and personalized treatment strategies. In the not-too-distant future, it is anticipated that information about an individual's current oral health status will enable patient-centric, personalized care to prevent, diagnose, and treat oral diseases, with wearables playing a key role. In this review, we examine the progress achieved, summarize applications of orthodontic relevance, and consider the future potential of orofacial wearables.
Affiliation(s)
- Sabarinath Prasad
- Department of Orthodontics, Hamdan Bin Mohammed College of Dental Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai 50505, United Arab Emirates
- Sivakumar Arunachalam
- Orthodontics and Dentofacial Orthopedics, School of Dentistry, International Medical University, Kuala Lumpur 57000, Malaysia
- Thomas Boillat
- Design Lab, College of Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai 50505, United Arab Emirates
- Ahmed Ghoneima
- Department of Orthodontics, Hamdan Bin Mohammed College of Dental Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai 50505, United Arab Emirates
- Narayan Gandedkar
- Discipline of Orthodontics & Paediatric Dentistry, School of Dentistry, University of Sydney, Sydney, NSW 2006, Australia
- Samira Diar-Bakirly
- Department of Orthodontics, Hamdan Bin Mohammed College of Dental Medicine, Mohammed Bin Rashid University of Medicine and Health Sciences, Dubai 50505, United Arab Emirates
5
Mohammed H, Kumar R, Bennani H, Halberstadt JB, Farella M. Automated detection of smiles as discrete episodes. J Oral Rehabil 2022; 49:1173-1180. [PMID: 36205621 PMCID: PMC9828522 DOI: 10.1111/joor.13378]
Abstract
BACKGROUND Patients seeking restorative and orthodontic treatment expect an improvement in their smiles and oral health-related quality of life. Nonetheless, the qualitative and quantitative characteristics of dynamic smiles are yet to be understood. OBJECTIVE To develop, validate, and introduce open-access software for automated analysis of smiles in terms of their frequency, genuineness, duration, and intensity. MATERIALS AND METHODS A software script was developed using the Facial Action Coding System (FACS) and artificial intelligence to assess activations of (1) cheek raiser, a marker of smile genuineness; (2) lip corner puller, a marker of smile intensity; and (3) perioral lip muscles, a marker of lips apart. Thirty study participants were asked to view a series of amusing videos. A full-face video was recorded using a webcam. The onset and cessation of smile episodes were identified by two examiners trained in FACS coding. A Receiver Operating Characteristic (ROC) curve was then used to assess detection accuracy and optimise thresholding. The videos of participants were then analysed off-line to automatically assess the features of smiles. RESULTS The area under the ROC curve for smile detection was 0.94, with a sensitivity of 82.9% and a specificity of 89.7%. The software correctly identified 90.0% of smile episodes. While watching the amusing videos, study participants smiled 1.6 (±0.8) times per minute. CONCLUSIONS Features of smiles such as frequency, duration, genuineness, and intensity can be assessed automatically with an acceptable level of accuracy. The software can be used to investigate the impact of oral conditions and their rehabilitation on smiles.
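The detection-accuracy figures reported above (AUC, sensitivity, specificity) come from a standard ROC analysis of automated detector output against manual FACS codes. As an illustration only, and not the authors' published software, the sketch below shows how such an ROC curve and its AUC are computed from per-frame scores; all scores and labels are invented toy data.

```python
# Illustrative sketch of an ROC/AUC computation -- NOT the study's code.
# Per-frame detector scores are compared against manual smile codes by
# sweeping a decision threshold and integrating the resulting curve.

def roc_points(labels, scores):
    """Sweep the threshold over every score and return the resulting
    (false-positive rate, true-positive rate) points."""
    pairs = sorted(zip(scores, labels), reverse=True)
    pos = sum(labels)            # manually coded smile frames
    neg = len(labels) - pos      # non-smile frames
    tp = fp = 0
    points = [(0.0, 0.0)]        # threshold above every score
    for _score, label in pairs:
        if label:
            tp += 1
        else:
            fp += 1
        points.append((fp / neg, tp / pos))
    return points

def auc(points):
    """Trapezoidal area under the ROC curve."""
    return sum((x1 - x0) * (y0 + y1) / 2
               for (x0, y0), (x1, y1) in zip(points, points[1:]))

# Hypothetical per-frame smile scores (e.g., lip-corner-puller strength)
# and the corresponding manual codes (1 = smile, 0 = no smile).
scores = [0.9, 0.8, 0.75, 0.6, 0.55, 0.4, 0.3, 0.2, 0.15, 0.1]
labels = [1, 1, 1, 0, 1, 0, 0, 0, 1, 0]

pts = roc_points(labels, scores)
print(f"AUC = {auc(pts):.2f}")  # AUC = 0.80 for this toy data
```

The deployed threshold is then read off the curve, for instance the point that best trades off sensitivity against specificity, which is one common way to "optimise thresholding" as the abstract describes.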
Affiliation(s)
- Hisham Mohammed
- Discipline of Orthodontics, Faculty of Dentistry, University of Otago, Dunedin, New Zealand
- Reginald Kumar
- Discipline of Orthodontics, Faculty of Dentistry, University of Otago, Dunedin, New Zealand
- Hamza Bennani
- School of Information Technology, Otago Polytechnic, Dunedin, New Zealand
- Mauro Farella
- Discipline of Orthodontics, Faculty of Dentistry, University of Otago, Dunedin, New Zealand
- Discipline of Orthodontics, Department of Surgical Sciences, University of Cagliari, Cagliari, Italy
6
Inagaki M, Ito T, Shinozaki T, Fujita I. Convolutional neural networks reveal differences in action units of facial expressions between face image databases developed in different countries. Front Psychol 2022; 13:988302. [DOI: 10.3389/fpsyg.2022.988302]
Abstract
Cultural similarities and differences in facial expressions have been a controversial issue in the field of facial communications. A key step in addressing the debate regarding the cultural dependency of emotional expression (and perception) is to characterize the visual features of specific facial expressions in individual cultures. Here we developed an image analysis framework for this purpose using convolutional neural networks (CNNs) that, through training, learned visual features critical for classification. We analyzed photographs of facial expressions derived from two databases, each developed in a different country (Sweden and Japan), in which corresponding emotion labels were available. While the CNNs achieved correct-classification rates far above chance after training with each database, they showed many misclassifications when they analyzed faces from the database that was not used for training. These results suggest that facial features useful for classifying facial expressions differed between the databases. The selectivity of computational units in the CNNs to action units (AUs) of the face varied across the facial expressions. Importantly, the AU selectivity often differed drastically between the CNNs trained with the different databases. Similarity and dissimilarity of these tuning profiles partly explained the pattern of misclassifications, suggesting that the AUs are important for characterizing the facial features and differ between the two countries. The AU tuning profiles, especially those reduced by principal component analysis, are compact summaries useful for comparisons across different databases, and thus might advance our understanding of universality vs. specificity of facial expressions across cultures.
7
Rinck M, Primbs MA, Verpaalen IAM, Bijlstra G. Face masks impair facial emotion recognition and induce specific emotion confusions. Cogn Res Princ Implic 2022; 7:83. [PMID: 36065042 PMCID: PMC9444085 DOI: 10.1186/s41235-022-00430-5]
Abstract
Face masks are now worn frequently to reduce the spreading of the SARS-CoV-2 virus. Their health benefits are indisputable, but covering the lower half of one's face also makes it harder for others to recognize facial expressions of emotions. Three experiments were conducted to determine how strongly the recognition of different facial expressions is impaired by masks, and which emotions are confused with each other. In each experiment, participants had to recognize facial expressions of happiness, sadness, anger, surprise, fear, and disgust, as well as a neutral expression, displayed by male and female actors of the Radboud Faces Database. On half of the 168 trials, the lower part of the face was covered by a face mask. In all experiments, facial emotion recognition (FER) was about 20% worse for masked faces than for unmasked ones (68% correct vs. 88%). The impairment was largest for disgust, followed by fear, surprise, sadness, and happiness. It was not significant for anger and the neutral expression. As predicted, participants frequently confused emotions that share activation of the visible muscles in the upper half of the face. In addition, they displayed response biases in these confusions: They frequently misinterpreted disgust as anger, fear as surprise, and sadness as neutral, whereas the opposite confusions were less frequent. We conclude that face masks do indeed cause a marked impairment of FER and that a person perceived as angry, surprised, or neutral may actually be disgusted, fearful, or sad, respectively. This may lead to misunderstandings, confusions, and inadequate reactions by the perceivers.
Affiliation(s)
- Mike Rinck
- Behavioural Science Institute, Radboud University Nijmegen, PO Box 9104, 6500 HE, Nijmegen, The Netherlands.
- Maximilian A Primbs
- Behavioural Science Institute, Radboud University Nijmegen, PO Box 9104, 6500 HE, Nijmegen, The Netherlands
- Iris A M Verpaalen
- Behavioural Science Institute, Radboud University Nijmegen, PO Box 9104, 6500 HE, Nijmegen, The Netherlands
- Gijsbert Bijlstra
- Behavioural Science Institute, Radboud University Nijmegen, PO Box 9104, 6500 HE, Nijmegen, The Netherlands
8
Berry M, Brown S. The dynamic mask: Facial correlates of character portrayal in professional actors. Q J Exp Psychol (Hove) 2021; 75:936-953. [PMID: 34499014 PMCID: PMC8958566 DOI: 10.1177/17470218211047935]
Abstract
Actors make modifications to their face, voice, and body to match standard gestural conceptions of the fictional characters they are portraying during stage performances. However, the gestural manifestations of acting have not been quantified experimentally, least of all in group-level analyses. To quantify the facial correlates of character portrayal in professional actors for the first time, we had 24 actors portray a contrastive series of nine stock characters (e.g., king, bully, lover) that were organised according to a predictive scheme based on the two statistically independent personality dimensions of assertiveness (i.e., the tendency to satisfy personal concerns) and cooperativeness (i.e., the tendency to satisfy others’ concerns). We used three-dimensional motion capture to examine changes in facial dimensions, with an emphasis on the relative expansion/contraction of four facial segments related to the brow, eyebrows, lips, and jaw, respectively. The results demonstrated that expansions in both upper- and lower-facial segments were related to increases in the levels of character cooperativeness, but not assertiveness. These findings demonstrate that actors reliably manipulate their facial features in a contrastive manner to differentiate characters based on their underlying personality traits.
Affiliation(s)
- Matthew Berry
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario, Canada
- Steven Brown
- Department of Psychology, Neuroscience & Behaviour, McMaster University, Hamilton, Ontario, Canada
9
Murphy NA, Hall JA. Capturing Behavior in Small Doses: A Review of Comparative Research in Evaluating Thin Slices for Behavioral Measurement. Front Psychol 2021; 12:667326. [PMID: 33995225 PMCID: PMC8116694 DOI: 10.3389/fpsyg.2021.667326]
Abstract
Thin slices are used across a wide array of research domains to observe, measure, and predict human behavior. This article reviews the thin-slice method as a measurement technique and summarizes current comparative thin-slice research regarding the reliability and validity of thin slices to represent behavior or social constructs. We outline decision factors in using thin-slice behavioral coding and detail three avenues of thin-slice comparative research: (1) assessing whether thin slices can adequately approximate the total of the recorded behavior or be interchangeable with each other (representativeness); (2) assessing how well thin slices can predict variables that are different from the behavior measured in the slice (predictive validity); and (3) assessing how interpersonal judgment accuracy can depend on the length of the slice (accuracy-length validity). The aim of the review is to provide information researchers may use when designing and evaluating thin-slice behavioral measurement.
Affiliation(s)
- Nora A Murphy
- Department of Psychology, Loyola Marymount University, Los Angeles, CA, United States
- Judith A Hall
- Department of Psychology, Northeastern University, Boston, MA, United States
10
Zuk P, Sanchez CE, Kostick K, Torgerson L, Muñoz KA, Hsu R, Kalwani L, Sierra-Mercado D, Robinson JO, Outram S, Koenig BA, Pereira S, McGuire AL, Lázaro-Muñoz G. Researcher Perspectives on Data Sharing in Deep Brain Stimulation. Front Hum Neurosci 2021; 14:578687. [PMID: 33424563 PMCID: PMC7793701 DOI: 10.3389/fnhum.2020.578687]
Abstract
The expansion of research on deep brain stimulation (DBS) and adaptive DBS (aDBS) raises important neuroethics and policy questions related to data sharing. However, there has been little empirical research on the perspectives of experts developing these technologies. We conducted semi-structured, open-ended interviews with aDBS researchers regarding their data sharing practices and their perspectives on ethical and policy issues related to sharing. Researchers expressed support for and a commitment to sharing, with most saying that they were either sharing their data or would share in the future and that doing so was important for advancing the field. However, those who are sharing reported a variety of sharing partners, suggesting heterogeneity in sharing practices and lack of the broad sharing that would reflect principles of open science. Researchers described several concerns and barriers related to sharing, including privacy and confidentiality, the usability of shared data by others, ownership and control of data (including potential commercialization), and limited resources for sharing. They also suggested potential solutions to these challenges, including additional safeguards to address privacy issues, standardization and transparency in analysis to address issues of data usability, professional norms and heightened cooperation to address issues of ownership and control, and streamlining of data transmission to address resource limitations. Researchers also offered a range of views on the sensitivity of neural activity data (NAD) and data related to mental health in the context of sharing. These findings are an important input to deliberations by researchers, policymakers, neuroethicists, and other stakeholders as they navigate ethics and policy questions related to aDBS research.
Affiliation(s)
- Peter Zuk
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- Clarissa E Sanchez
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- Kristin Kostick
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- Laura Torgerson
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- Katrina A Muñoz
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- Rebecca Hsu
- Evans School of Public Policy and Governance, University of Washington, Seattle, WA, United States
- Lavina Kalwani
- Department of Biosciences, Rice University, Houston, TX, United States
- Demetrio Sierra-Mercado
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- Department of Anatomy and Neurobiology, School of Medicine, University of Puerto Rico, San Juan, Puerto Rico
- Jill O Robinson
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- Simon Outram
- Program in Bioethics, University of California, San Francisco, San Francisco, CA, United States
- Barbara A Koenig
- Program in Bioethics, University of California, San Francisco, San Francisco, CA, United States
- Stacey Pereira
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- Amy L McGuire
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- Gabriel Lázaro-Muñoz
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
11
Taubert J, Japee S. Using FACS to trace the neural specializations underlying the recognition of facial expressions: A commentary on Waller et al. (2020). Neurosci Biobehav Rev 2020; 120:75-77. [PMID: 33227326 DOI: 10.1016/j.neubiorev.2020.10.016]
Abstract
In the recent review by Waller et al. (2020) the authors discuss how the Facial Action Coding System (FACS) can be used to study the evolution of facial behaviors. This is a timely and thought-provoking review which highlights the numerous ways in which FACS could be used to compare the mechanisms responsible for the production of facial behaviors across species. We propose that FACS could also be used to study the recognition of facial behaviors in nonhuman subjects where one of the key challenges is finding suitable stimuli that convey different emotions. By using FACS-rated images in awake neuroimaging experiments, researchers could accurately identify the brain mechanisms responsible for recognizing expressions across mammalian species. This approach would reveal neural homologs and deepen our understanding of how nonverbal social communication has evolved.
Affiliation(s)
- Jessica Taubert
- The Laboratory of Brain and Cognition, The National Institute of Mental Health, United States.
- Shruti Japee
- The Laboratory of Brain and Cognition, The National Institute of Mental Health, United States
12
Muñoz KA, Kostick K, Sanchez C, Kalwani L, Torgerson L, Hsu R, Sierra-Mercado D, Robinson JO, Outram S, Koenig BA, Pereira S, McGuire A, Zuk P, Lázaro-Muñoz G. Researcher Perspectives on Ethical Considerations in Adaptive Deep Brain Stimulation Trials. Front Hum Neurosci 2020; 14:578695. [PMID: 33281581 PMCID: PMC7689343 DOI: 10.3389/fnhum.2020.578695]
Abstract
Interest and investment in closed-loop or adaptive deep brain stimulation (aDBS) systems have quickly expanded due to this neurotechnology's potential to more safely and effectively treat refractory movement and psychiatric disorders compared to conventional DBS. A large neuroethics literature outlines potential ethical concerns about conventional DBS and aDBS systems. Few studies, however, have examined stakeholder perspectives about ethical issues in aDBS research and other next-generation DBS devices. To help fill this gap, we conducted semi-structured interviews with researchers involved in aDBS trials (n = 23) to gain insight into the most pressing ethical questions in aDBS research and any concerns about specific features of aDBS devices, including devices' ability to measure brain activity, automatically adjust stimulation, and store neural data. Using thematic content analysis, we identified 8 central themes in researcher responses. The need to measure and store neural data for aDBS raised concerns among researchers about data privacy and security issues (noted by 91% of researchers), including the avoidance of unintended or unwanted third-party access to data. Researchers reflected on the risks and safety (83%) of aDBS due to the experimental nature of automatically modulating then observing stimulation effects outside a controlled clinical setting and in relation to need for surgical battery changes. Researchers also stressed the importance of ensuring informed consent and adequate patient understanding (74%). Concerns related to automaticity and device programming (65%) were discussed, including current uncertainties about biomarker validity. Additionally, researchers discussed the potential impacts of automatic stimulation on patients' autonomy and control over stimulation (57%). Lastly, researchers discussed concerns related to patient selection (defining criteria for candidacy) (39%), challenges of ensuring post-trial access to care and device maintenance (39%), and potential effects on personality and identity (30%). To help address researcher concerns, we discuss the need to minimize cybersecurity vulnerabilities, advance biomarker validity, promote the balance of device control between patients and clinicians, and enhance ongoing informed consent. The findings from this study will help inform policies that will maximize the benefits and minimize potential harms of aDBS and other next-generation DBS devices.
Affiliation(s)
- Katrina A. Muñoz
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- Kristin Kostick
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- Clarissa Sanchez
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- Lavina Kalwani
- Department of Neuroscience, Rice University, Houston, TX, United States
- Laura Torgerson
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- Rebecca Hsu
- Evans School of Public Policy & Governance, University of Washington, Seattle, WA, United States
- Demetrio Sierra-Mercado
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- Department of Anatomy & Neurobiology, University of Puerto Rico School of Medicine, San Juan, Puerto Rico
- Jill O. Robinson
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- Simon Outram
- Program in Bioethics, University of California, San Francisco, San Francisco, CA, United States
- Barbara A. Koenig
- Program in Bioethics, University of California, San Francisco, San Francisco, CA, United States
- Stacey Pereira
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- Amy McGuire
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- Peter Zuk
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
- Gabriel Lázaro-Muñoz
- Center for Medical Ethics and Health Policy, Baylor College of Medicine, Houston, TX, United States
13
Richesin MT, Oliver MD, Baldwin DR, Wicks LAM. Game Face expressions and performance on competitive tasks. Stress Health 2020; 36:166-171. [PMID: 31612592 DOI: 10.1002/smi.2899] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/05/2019] [Revised: 08/19/2019] [Accepted: 09/10/2019] [Indexed: 11/05/2022]
Abstract
Facial expressions influence both affective and cardiovascular responses to stress. However, previous research focuses primarily on positive expressions and is limited regarding additional facial expressions utilized on a day-to-day basis. This study examined an expression colloquially called a "Game Face," which refers to a serious, focused, or determined facial expression. The current study examined whether Game Face expressions would influence psychophysiological responses (e.g., heart rate and skin conductance) and performance. In an investigation of physical performance (Study 1), participants (N = 62) were asked to complete the cold-pressor task. Study 2 tested cognitive performance utilizing a puzzle task. Participants (N = 62) were divided into two groups and were asked to complete a puzzle. In both studies, one group was asked to make a Game Face, and the other was given no instruction related to facial expression. Results show no significant differences in performance on the physical task. In terms of cognitive performance, results reveal significantly better performance in the Game Face group. Additionally, assessments of skin conductance show that participants who employed the Game Face during the cognitive task displayed significant decreases from baseline following the puzzle manipulation. These results are promising regarding the effects of making a Game Face on cognitive performance and sympathetic nervous system activation.
Affiliation(s)
- Michael D Oliver
- Department of Psychology, University of Tennessee, Knoxville, Tennessee
- Debora R Baldwin
- Department of Psychology, University of Tennessee, Knoxville, Tennessee
- Lahai A M Wicks
- Department of Psychology, University of Tennessee, Knoxville, Tennessee
14
Benitez-Quiroz CF, Srinivasan R, Martinez AM. Discriminant Functional Learning of Color Features for the Recognition of Facial Action Units and Their Intensities. IEEE Trans Pattern Anal Mach Intell 2019; 41:2835-2845. [PMID: 30188814 PMCID: PMC6880652 DOI: 10.1109/tpami.2018.2868952] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Color is a fundamental image feature of facial expressions. For example, when we furrow our eyebrows in anger, blood rushes in, turning some areas of the face red; and when we go white in fear, blood drains from the face. Surprisingly, these image properties have not been exploited to recognize the facial action units (AUs) associated with these expressions. Herein, we present the first system to recognize AUs and their intensities using these functional color changes. These color features are shown to be robust to changes in identity, gender, race, ethnicity, and skin color. Specifically, we identify the chromaticity changes defining the transition of an AU from inactive to active and use an innovative Gabor transform-based algorithm to gain invariance to the timing of these changes. Because these image changes are given by functions rather than vectors, we use functional classifiers to identify the most discriminant color features of an AU and its intensities. We demonstrate that, using these discriminant color features, one can achieve results superior to those of the state of the art. Finally, we define an algorithm that allows us to use the learned functional color representation in still images. This is done by learning the mapping between images and the identified functional color features in videos. Our algorithm works in real time, i.e., 30 frames/second/CPU thread.
15
Bhatia S, Goecke R, Hammal Z, Cohn JF. Automated Measurement of Head Movement Synchrony during Dyadic Depression Severity Interviews. Proc IEEE Int Conf Automatic Face & Gesture Recognition 2019. [PMID: 31745390 DOI: 10.1109/fg.2019.8756509] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
Abstract
With few exceptions, most research in automated assessment of depression has considered only the patient's behavior to the exclusion of the therapist's behavior. We investigated the interpersonal coordination (synchrony) of head movement during patient-therapist clinical interviews. Our sample consisted of patients diagnosed with major depressive disorder. They were recorded in clinical interviews (Hamilton Rating Scale for Depression, HRSD) at 7-week intervals over a period of 21 weeks. For each session, patient and therapist 3D head movement was tracked from 2D videos. Head angles about the horizontal (pitch) and vertical (yaw) axes were used to measure head movement. Interpersonal coordination of head movement between patients and therapists was measured using windowed cross-correlation. Patterns of coordination in head movement were investigated using a peak-picking algorithm. Changes in head movement coordination over the course of treatment were measured using a hierarchical linear model (HLM). The results indicated a strong effect for patient-therapist head movement synchrony. Within-dyad variability in head movement coordination was higher than between-dyad variability; that is, coordination varied more over time within a dyad than it did between dyads. Head movement synchrony did not change over the course of treatment with change in depression severity. To the best of our knowledge, this study is the first attempt to analyze the mutual influence of patient-therapist head movement in relation to depression severity.
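The windowed cross-correlation and peak-picking procedure named in this abstract can be sketched as follows (an illustrative sketch only, not the authors' implementation; the window length, step, and maximum lag are arbitrary placeholder values):

```python
import numpy as np

def windowed_xcorr(x, y, win=120, step=30, max_lag=15):
    """Windowed cross-correlation of two 1-D time series.

    Slides a window along both series; within each window, computes the
    Pearson correlation of x[s:s+win] against y shifted by each lag in
    [-max_lag, max_lag]. Returns (window_starts, cc), where cc has shape
    (n_windows, 2*max_lag + 1).
    """
    starts = np.arange(max_lag, len(x) - win - max_lag + 1, step)
    cc = np.empty((len(starts), 2 * max_lag + 1))
    for i, s in enumerate(starts):
        a = x[s:s + win]
        for j, lag in enumerate(range(-max_lag, max_lag + 1)):
            b = y[s + lag:s + lag + win]
            cc[i, j] = np.corrcoef(a, b)[0, 1]
    return starts, cc

def peak_lags(cc, max_lag=15):
    """Peak picking: for each window, the lag at which correlation peaks."""
    return cc.argmax(axis=1) - max_lag
```

Under this sign convention, a positive peak lag in a window means y echoes x after a delay, so the per-window peak lags trace which partner is leading over the course of an interview.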
Affiliation(s)
- Shalini Bhatia
- Human-Centred Technology Research Centre, University of Canberra, Canberra, Australia
- Roland Goecke
- Human-Centred Technology Research Centre, University of Canberra, Canberra, Australia
- Zakia Hammal
- Robotics Institute, Carnegie Mellon University, Pittsburgh, USA
- Jeffrey F Cohn
- Department of Psychology, University of Pittsburgh, Pittsburgh, USA
16
17
Garrido S, Stevens CJ, Chang E, Dunne L, Perz J. Music and Dementia: Individual Differences in Response to Personalized Playlists. J Alzheimers Dis 2019; 64:933-941. [PMID: 29966193 DOI: 10.3233/jad-180084] [Citation(s) in RCA: 36] [Impact Index Per Article: 7.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/13/2022]
Abstract
Personalized music playlists are increasingly being used in health-care contexts to address the psychological and behavioral symptoms in people with dementia. However, there is little understanding of how people with different mental health histories and symptoms respond differently to music. A factorial experiment was conducted to investigate the influence of depression, anxiety, apathy, and cognitive decline on affective response to music. Ninety-nine people with dementia listened to three music playlists based on personal preferences. Activation of facial action units was measured, and behavioral responses were continuously observed. Results demonstrated that people with high levels of depression and with symptoms of Alzheimer's-type dementia showed increased levels of sadness when listening to music. People with low depression but high levels of apathy demonstrated the highest behavioral evidence of pleasure during music listening, although behavioral evidence declined with severity of cognitive impairment. It is concluded that, as well as accounting for personal preferences, music interventions for people with dementia need to take mental health history and symptoms into account.
18
Cheong JH, Brooks S, Chang LJ. FaceSync: Open source framework for recording facial expressions with head-mounted cameras. F1000Res 2019; 8:702. [PMID: 32185017 PMCID: PMC7059847 DOI: 10.12688/f1000research.18187.1] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Accepted: 04/25/2019] [Indexed: 12/14/2022] Open
Abstract
Advances in computer vision and machine learning algorithms have enabled researchers to extract facial expression data from face video recordings with greater ease and speed than standard manual coding methods, which has led to a dramatic increase in the pace of facial expression research. However, there are many limitations in recording facial expressions in laboratory settings. Conventional video recording setups using webcams, tripod-mounted cameras, or pan-tilt-zoom cameras require making compromises between cost, reliability, and flexibility. As an alternative, we propose the use of a mobile head-mounted camera that can be easily constructed from our open-source instructions and blueprints at a fraction of the cost of conventional setups. The head-mounted camera framework is supported by the open source Python toolbox FaceSync, which provides an automated method for synchronizing videos. We provide four proof-of-concept studies demonstrating the benefits of this recording system in reliably measuring and analyzing facial expressions in diverse experimental setups, including group interaction experiments.
Affiliation(s)
- Jin Hyun Cheong
- Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
- Sawyer Brooks
- Department of Neuroscience, Oberlin College, Oberlin, Ohio, 44074, USA
- Luke J. Chang
- Psychological and Brain Sciences, Dartmouth College, Hanover, NH, 03755, USA
19
Provenza NR, Matteson ER, Allawala AB, Barrios-Anderson A, Sheth SA, Viswanathan A, McIngvale E, Storch EA, Frank MJ, McLaughlin NCR, Cohn JF, Goodman WK, Borton DA. The Case for Adaptive Neuromodulation to Treat Severe Intractable Mental Disorders. Front Neurosci 2019; 13:152. [PMID: 30890909 PMCID: PMC6412779 DOI: 10.3389/fnins.2019.00152] [Citation(s) in RCA: 29] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2018] [Accepted: 02/11/2019] [Indexed: 12/20/2022] Open
Abstract
Mental disorders are a leading cause of disability worldwide, and available treatments have limited efficacy for severe cases unresponsive to conventional therapies. Neurosurgical interventions, such as lesioning procedures, have shown success in treating refractory cases of mental illness, but may have irreversible side effects. Neuromodulation therapies, specifically Deep Brain Stimulation (DBS), may offer similar therapeutic benefits using a reversible (explantable) and adjustable platform. Early DBS trials have been promising; however, pivotal clinical trials have failed to date. These failures may be attributed to targeting, patient selection, or the “open-loop” nature of DBS, where stimulation parameters are chosen ad hoc during infrequent visits to the clinician’s office that take place weeks to months apart. Further, the tonic continuous stimulation fails to address the dynamic nature of mental illness; symptoms often fluctuate over minutes to days. Additionally, stimulation-based interventions can cause undesirable effects if applied when not needed. A responsive, adaptive DBS (aDBS) system may improve efficacy by titrating stimulation parameters in response to neural signatures (i.e., biomarkers) related to symptoms and side effects. Here, we present rationale for the development of a responsive DBS system for treatment of refractory mental illness, detail a strategic approach for identification of electrophysiological and behavioral biomarkers of mental illness, and discuss opportunities for future technological developments that may harness aDBS to deliver improved therapy.
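The closed-loop principle this abstract describes, stimulation titrated against a symptom-related biomarker, can be caricatured as a proportional controller (a purely illustrative toy; real aDBS control policies, biomarkers, gains, and safety limits are far more involved than this):

```python
def adapt_stimulation(amplitude, biomarker, target, gain=0.2,
                      amp_min=0.0, amp_max=5.0):
    """One closed-loop update: nudge the stimulation amplitude in
    proportion to the biomarker's deviation from its target, then clip
    the result to a safe operating range. All parameter values here are
    arbitrary placeholders, not clinical settings."""
    amplitude += gain * (biomarker - target)
    return min(max(amplitude, amp_min), amp_max)
```

In a toy simulation where stimulation linearly suppresses the biomarker, repeatedly applying this update drives the amplitude to the value at which the biomarker sits on target, while the clipping bound plays the role of a hard safety limit.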
Affiliation(s)
- Nicole R Provenza
- Brown University School of Engineering, Providence, RI, United States; Charles Stark Draper Laboratory, Cambridge, MA, United States
- Evan R Matteson
- Brown University School of Engineering, Providence, RI, United States
- Anusha B Allawala
- Brown University School of Engineering, Providence, RI, United States
- Adriel Barrios-Anderson
- Psychiatric Neurosurgery Program at Butler Hospital, The Warren Alpert Medical School of Brown University, Providence, RI, United States
- Sameer A Sheth
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, United States
- Ashwin Viswanathan
- Department of Neurosurgery, Baylor College of Medicine, Houston, TX, United States
- Elizabeth McIngvale
- Menninger Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine, Houston, TX, United States
- Eric A Storch
- Menninger Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine, Houston, TX, United States
- Michael J Frank
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, RI, United States; Department of Psychology, University of Pittsburgh, Pittsburgh, PA, United States
- Nicole C R McLaughlin
- Psychiatric Neurosurgery Program at Butler Hospital, The Warren Alpert Medical School of Brown University, Providence, RI, United States
- Jeffrey F Cohn
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA, United States
- Wayne K Goodman
- Menninger Department of Psychiatry and Behavioral Sciences, Baylor College of Medicine, Houston, TX, United States
- David A Borton
- Brown University School of Engineering, Providence, RI, United States; Carney Institute for Brain Science, Brown University, Providence, RI, United States; Department of Veterans Affairs, Providence Medical Center, Center for Neurorestoration and Neurotechnology, Providence, RI, United States
20
Calvo MG, Fernández-Martín A, Recio G, Lundqvist D. Human Observers and Automated Assessment of Dynamic Emotional Facial Expressions: KDEF-dyn Database Validation. Front Psychol 2018; 9:2052. [PMID: 30416473 PMCID: PMC6212581 DOI: 10.3389/fpsyg.2018.02052] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/17/2018] [Accepted: 10/05/2018] [Indexed: 12/11/2022] Open
Abstract
Most experimental studies of facial expression processing have used static stimuli (photographs), yet facial expressions in daily life are generally dynamic. In its original photographic format, the Karolinska Directed Emotional Faces (KDEF) has been frequently utilized. In the current study, we validate a dynamic version of this database, the KDEF-dyn. To this end, we applied animation between neutral and emotional expressions (happy, sad, angry, fearful, disgusted, and surprised; 1,033-ms unfolding) to 40 KDEF models, with morphing software. Ninety-six human observers categorized the expressions of the resulting 240 video-clip stimuli, and automated face analysis assessed the evidence for 6 expressions and 20 facial action units (AUs) at 31 intensities. Low-level image properties (luminance, signal-to-noise ratio, etc.) and other purely perceptual factors (e.g., size, unfolding speed) were controlled. Human recognition performance (accuracy, efficiency, and confusions) patterns were consistent with prior research using static and other dynamic expressions. Automated assessment of expressions and AUs was sensitive to intensity manipulations. Significant correlations emerged between human observers' categorization and automated classification. The KDEF-dyn database aims to provide a balance between experimental control and ecological validity for research on emotional facial expression processing. The stimuli and the validation data are available to the scientific community.
Affiliation(s)
- Manuel G. Calvo
- Department of Cognitive Psychology, Universidad de La Laguna, San Cristóbal de La Laguna, Spain
- Instituto Universitario de Neurociencia (IUNE), Universidad de La Laguna, Santa Cruz de Tenerife, Spain
- Guillermo Recio
- Institute of Psychology, Universität Hamburg, Hamburg, Germany
- Daniel Lundqvist
- Department of Clinical Neuroscience, Karolinska Institutet, Stockholm, Sweden
21
Discrimination between smiling faces: Human observers vs. automated face analysis. Acta Psychol (Amst) 2018; 187:19-29. [PMID: 29758397 DOI: 10.1016/j.actpsy.2018.04.019] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/27/2017] [Revised: 04/09/2018] [Accepted: 04/30/2018] [Indexed: 11/23/2022] Open
Abstract
This study investigated (a) how prototypical happy faces (with happy eyes and a smile) can be discriminated from blended expressions with a smile but non-happy eyes, depending on type and intensity of the eye expression; and (b) how smile discrimination differs for human perceivers versus automated face analysis, depending on affective valence and morphological facial features. Human observers categorized faces as happy or non-happy, or rated their valence. Automated analysis (FACET software) computed seven expressions (including joy/happiness) and 20 facial action units (AUs). Physical properties (low-level image statistics and visual saliency) of the face stimuli were controlled. Results revealed, first, that some blended expressions (especially, with angry eyes) had lower discrimination thresholds (i.e., they were identified as "non-happy" at lower non-happy eye intensities) than others (especially, with neutral eyes). Second, discrimination sensitivity was better for human perceivers than for automated FACET analysis. As an additional finding, affective valence predicted human discrimination performance, whereas morphological AUs predicted FACET discrimination. FACET can be a valid tool for categorizing prototypical expressions, but is currently more limited than human observers for discrimination of blended expressions. Configural processing facilitates detection of incongruences across face regions, and thus detection of non-genuine smiles (those with non-happy eyes).
22
Sayette MA, Creswell KG, Fairbairn CE, Dimoff JD, Bentley K, Lazerus T. The effects of alcohol on positive emotion during a comedy routine: A facial coding analysis. Emotion 2018; 19:480-488. [PMID: 29771544 DOI: 10.1037/emo0000451] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
There is considerable interest in understanding the emotional effects of alcohol. While a great deal of experimental research has focused on alcohol's ability to relieve negative emotions, there has been far less focus on the effects of alcohol on positive emotions. Further, the available research on positive emotion tends to test alcohol while participants are alone. Yet alcohol is often consumed in social settings, and enhancing social pleasure is consistently identified as being a primary motive for drinking. We aimed to address this gap in the literature by investigating the impact of alcohol on positive emotional experience in a social setting. We used the Facial Action Coding System (FACS) to examine in a large sample the effects of alcohol on response to comedy in a group setting. Five hundred thirteen social drinkers (51.9% female) were assembled into groups of three unacquainted persons and administered either a moderate dose of alcohol, a placebo, or a nonalcohol control beverage. Following beverage consumption, groups listened to a roughly 5-min comedy clip while their facial expressions were video recorded. More than 5 million frames of video were then FACS-coded. Alcohol consumption enhanced enjoyment (Duchenne) smiles, but not nonenjoyment social smiles, and elevated mood ratings. Results provide multimodal evidence supporting the ability of alcohol to enhance positive emotional experience during a comedy routine delivered in a social context. More broadly, this research illustrates the value of studying emotion in a social context using both self-report and behavioral-expressive approaches.
Affiliation(s)
- Talya Lazerus
- Department of Social and Decision Sciences, Carnegie Mellon University
23
Bina RW, Langevin JP. Closed Loop Deep Brain Stimulation for PTSD, Addiction, and Disorders of Affective Facial Interpretation: Review and Discussion of Potential Biomarkers and Stimulation Paradigms. Front Neurosci 2018; 12:300. [PMID: 29780303 PMCID: PMC5945819 DOI: 10.3389/fnins.2018.00300] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2017] [Accepted: 04/18/2018] [Indexed: 01/06/2023] Open
Abstract
The treatment of psychiatric diseases with Deep Brain Stimulation (DBS) is becoming more of a reality as studies multiply the indications and targets for therapy. Opinions on the initial failures of DBS trials for some psychiatric diseases point to a certain lack of finesse in using an Open Loop DBS (OLDBS) system in these dynamic, cyclical pathologies. OLDBS delivers monomorphic input into dysfunctional brain circuits with modulation of that input via human interface at discrete time points with no interim modulation or adaptation to the changing circuit dynamics. Closed Loop DBS (CLDBS) promises dynamic, intrinsic circuit modulation based on individual physiologic biomarkers of dysfunction. Discussed here are several psychiatric diseases which may be amenable to CLDBS paradigms as the neurophysiologic dysfunction is stochastic and not static. Post-Traumatic Stress Disorder (PTSD) has several peripheral and central physiologic and neurologic changes preceding stereotyped hyper-activation behavioral responses. Biomarkers for CLDBS potentially include skin conductance changes indicating changes in the sympathetic nervous system, changes in serum and central neurotransmitter concentrations, and limbic circuit activation. Chemical dependency and addiction have been demonstrated to improve with both ablation and DBS of the Nucleus Accumbens, and as a serendipitous side effect of movement disorder treatment. Potential peripheral biomarkers are similar to those proposed for PTSD, with possible use of environmental and geolocation-based cues, peripheral signs of physiologic arousal, and individual changes in central circuit patterns. Non-substance addiction disorders have also been serendipitously treated in patients with OLDBS for movement disorders. As more is learned about these behavioral addictions, DBS targets and effectors will be identified.
Finally, discussed is the use of facial recognition software to modulate activation of inappropriate responses for psychiatric diseases in which misinterpretation of social cues features prominently. These include Autism Spectrum Disorder, PTSD, and Schizophrenia, all of which share the common feature of dysfunctional interpretation of facial affective cues. Technological advances and improvements in circuit-based, individual-specific, real-time adaptable modulation forecast functional neurosurgery treatments for heretofore treatment-resistant behavioral diseases.
Affiliation(s)
- Robert W Bina
- Division of Neurosurgery, Banner University Medical Center, Tucson, AZ, United States
- Jean-Phillipe Langevin
- Neurosurgery Service, VA Greater Los Angeles Healthcare System, Los Angeles, CA, United States; Department of Neurosurgery, University of California, Los Angeles, Los Angeles, CA, United States
24
Abstract
Affective computing (AC) adopts a computational approach to study affect. We highlight the AC approach towards automated affect measures that jointly model machine-readable physiological/behavioral signals with affect estimates as reported by humans or experimentally elicited. We describe the conceptual and computational foundations of the approach followed by two case studies: one on discrimination between genuine and faked expressions of pain in the lab, and the second on measuring nonbasic affect in the wild. We discuss applications of the measures, analyze measurement accuracy and generalizability, and highlight advances afforded by computational tipping points, such as big data, wearable sensing, crowdsourcing, and deep learning. We conclude by advocating for increasing synergies between AC and affective science and offer suggestions toward that direction.
Affiliation(s)
- Sidney D’Mello
- Department of Computer Science, University of Notre Dame, USA
- Department of Psychology, University of Notre Dame, USA
- Arvid Kappas
- Department of Psychology, Jacobs University, Germany
- Jonathan Gratch
- Institute of Creative Technologies, University of Southern California, USA
- Computer Science Department, University of Southern California, USA
25
Girard JM, Chu WS, Jeni LA, Cohn JF, De la Torre F, Sayette MA. Sayette Group Formation Task (GFT) Spontaneous Facial Expression Database. Proc IEEE Int Conf Automatic Face & Gesture Recognition 2017; 2017:581-588. [PMID: 29606916 PMCID: PMC5876025 DOI: 10.1109/fg.2017.144] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Abstract
Despite the important role that facial expressions play in interpersonal communication and our knowledge that interpersonal behavior is influenced by social context, no currently available facial expression database includes multiple interacting participants. The Sayette Group Formation Task (GFT) database addresses the need for well-annotated video of multiple participants during unscripted interactions. The database includes 172,800 video frames from 96 participants in 32 three-person groups. To aid in the development of automated facial expression analysis systems, GFT includes expert annotations of FACS occurrence and intensity, facial landmark tracking, and baseline results for linear SVM, deep learning, active patch learning, and personalized classification. Baseline performance is quantified and compared using identical partitioning and a variety of metrics (including means and confidence intervals). The highest performance scores were found for the deep learning and active patch learning methods. Learn more at http://osf.io/7wcyz.
Affiliation(s)
- Jeffrey M Girard
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA 15260
- Wen-Sheng Chu
- Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213
- László A Jeni
- Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213
- Jeffrey F Cohn
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA 15260
- Robotics Institute, Carnegie Mellon University, Pittsburgh, PA 15213
- Michael A Sayette
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA 15260
26
Chu WS, De la Torre F, Cohn JF. Selective Transfer Machine for Personalized Facial Expression Analysis. IEEE Trans Pattern Anal Mach Intell 2017; 39:529-545. [PMID: 28113267 PMCID: PMC5400741 DOI: 10.1109/tpami.2016.2547397] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Automatic facial action unit (AU) and expression detection from videos is a long-standing problem. The problem is challenging in part because classifiers must generalize to previously unknown subjects that differ markedly in behavior and facial morphology (e.g., heavy versus delicate brows, smooth versus deeply etched wrinkles) from those on which the classifiers are trained. While some progress has been achieved through improvements in choices of features and classifiers, the challenge occasioned by individual differences among people remains. Person-specific classifiers would be a possible solution, but sufficient training data for them is typically unavailable. This paper addresses the problem of how to personalize a generic classifier without additional labels from the test subject. We propose a transductive learning method, which we refer to as a Selective Transfer Machine (STM), to personalize a generic classifier by attenuating person-specific mismatches. STM achieves this effect by simultaneously learning a classifier and re-weighting the training samples that are most relevant to the test subject. We compared STM to both generic classifiers and cross-domain learning methods on four benchmarks: CK+ [44], GEMEP-FERA [67], RUFACS [4] and GFT [57]. STM outperformed generic classifiers in all.
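The personalization idea, re-weighting generic training data toward an unlabeled test subject's feature distribution before fitting a classifier, can be caricatured as follows (a toy sketch only: it uses a simple RBF-similarity heuristic and weighted logistic regression, whereas STM itself learns the weights jointly with an SVM via distribution matching):

```python
import numpy as np

def similarity_weights(X_train, X_test, sigma=1.0):
    """Weight each training sample by its RBF similarity to the mean of
    the (unlabeled) test subject's features; a crude stand-in for STM's
    distribution-matching re-weighting. Weights are scaled to mean 1."""
    mu = X_test.mean(axis=0)
    d2 = ((X_train - mu) ** 2).sum(axis=1)
    w = np.exp(-d2 / (2.0 * sigma ** 2))
    return w * len(w) / w.sum()

def fit_weighted_logreg(X, y, w, lr=0.5, iters=1000):
    """Weighted logistic regression by gradient descent (a stand-in for
    the instance-weighted SVM used in STM)."""
    Xb = np.hstack([X, np.ones((len(X), 1))])  # append bias column
    beta = np.zeros(Xb.shape[1])
    for _ in range(iters):
        p = 1.0 / (1.0 + np.exp(-Xb @ beta))
        beta -= lr * (Xb.T @ (w * (p - y))) / w.sum()
    return beta

def predict(beta, X):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return (Xb @ beta > 0).astype(float)
```

The design point the sketch illustrates is that no labels from the test subject are needed: the weights depend only on the test subject's features, so a generic classifier can be nudged toward the subject's region of feature space at test time.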
27
Jeni LA, Cohn JF, Kanade T. Dense 3D Face Alignment from 2D Video for Real-Time Use. Image and Vision Computing 2017; 58:13-24. [PMID: 29731533] [PMCID: PMC5931713] [DOI: 10.1016/j.imavis.2016.05.009]
Abstract
To enable real-time, person-independent 3D registration from 2D video, we developed a 3D cascade regression approach in which facial landmarks remain invariant across pose over a range of approximately 60 degrees. From a single 2D image of a person's face, a dense 3D shape is registered in real time for each frame. The algorithm utilizes a fast cascade regression framework trained on high-resolution 3D face-scans of posed and spontaneous emotion expression. The algorithm first estimates the location of a dense set of landmarks and their visibility, then reconstructs face shapes by fitting a part-based 3D model. Because no assumptions are required about illumination or surface properties, the method can be applied to a wide range of imaging conditions that include 2D video and uncalibrated multi-view video. The method has been validated in a battery of experiments that evaluate its precision of 3D reconstruction, extension to multi-view reconstruction, temporal integration for videos and 3D head-pose estimation. Experimental findings strongly support the validity of real-time, 3D registration and reconstruction from 2D video. The software is available online at http://zface.org.
Collapse
Affiliation(s)
- László A. Jeni
- Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Jeffrey F. Cohn
- Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA, USA
- Takeo Kanade
- Robotics Institute, Carnegie Mellon University, Pittsburgh, PA, USA
28
Sayette MA. The effects of alcohol on emotion in social drinkers. Behav Res Ther 2017; 88:76-89. [PMID: 28110679] [PMCID: PMC5724975] [DOI: 10.1016/j.brat.2016.06.005]
Abstract
Understanding why people drink alcohol and in some cases develop drinking problems has long puzzled researchers, clinicians, and patients alike. In the mid-1940s and early 1950s, experimental research began to systematically investigate alcohol's hedonic properties. Presumably, alcohol consumption would prove reinforcing as a consequence of its capacity either to relieve stress or to brighten positive emotional experiences. This article reviews experimental research through the years examining the impact of alcohol on both the relief of negative affect and the enhancement of positive affect. It covers initial accounts that emphasized direct pharmacological effects of ethanol on the central nervous system. These early studies offered surprisingly tepid support for the premise that alcohol improved emotional states. Next, studies conducted in the 1970s are considered. Informed by social learning theory and employing advances derived from experimental psychology, this research sought to better understand the complex effects of alcohol on emotion. Coverage of this work is followed by discussion of current formulations, which integrate biological and behavioral approaches with the study of cognitive, affective, and social processes. These current perspectives provide insight into the particular conditions under which alcohol can boost emotional experiences. Finally, future research directions and clinical implications are considered.
Affiliation(s)
- Michael A Sayette
- Department of Psychology, University of Pittsburgh, 3137 Sennott Square, 210 S. Bouquet St., Pittsburgh, PA 15260, United States.
29
Calvo MG, Gutiérrez-García A, Del Líbano M. What makes a smiling face look happy? Visual saliency, distinctiveness, and affect. Psychological Research 2016; 82:296-309. [PMID: 27900467] [DOI: 10.1007/s00426-016-0829-3]
Abstract
We investigated the relative contribution of (a) perceptual (eyes and mouth visual saliency), (b) conceptual or categorical (eye expression distinctiveness), and (c) affective (rated valence and arousal) factors, and (d) specific morphological facial features (Action Units; AUs), to the recognition of facial happiness. The face stimuli conveyed truly happy expressions with a smiling mouth and happy eyes, or blended expressions with a smile but non-happy eyes (neutral, sad, fearful, disgusted, surprised, or angry). Saliency, distinctiveness, affect, and AUs served as predictors; the probability of judging a face as happy was the criterion. Both for truly happy and for blended expressions, the probability of perceiving happiness increased mainly as a function of positive valence of the facial configuration. In addition, for blended expressions, the probability of being (wrongly) perceived as happy increased as a function of (a) delayed saliency and (b) reduced distinctiveness of the non-happy eyes, and (c) enhanced AU 6 (cheek raiser) or (d) reduced AUs 4, 5, and 9 (brow lowerer, upper lid raiser, and nose wrinkler, respectively). Importantly, the later the eyes become visually salient relative to the smiling mouth, the more likely it is that faces will look happy.
Affiliation(s)
- Manuel G Calvo
- Department of Cognitive Psychology, Universidad de La Laguna, 38205, Tenerife, Spain.
30
Large-Scale Observational Evidence of Cross-Cultural Differences in Facial Behavior. Journal of Nonverbal Behavior 2016. [DOI: 10.1007/s10919-016-0244-x]
31
Bosch N, D'Mello SK, Ocumpaugh J, Baker RS, Shute V. Using Video to Automatically Detect Learner Affect in Computer-Enabled Classrooms. ACM Transactions on Interactive Intelligent Systems 2016. [DOI: 10.1145/2946837]
Abstract
Affect detection is a key component in intelligent educational interfaces that respond to students’ affective states. We use computer vision and machine-learning techniques to detect students’ affect from facial expressions (primary channel) and gross body movements (secondary channel) during interactions with an educational physics game. We collected data in the real-world environment of a school computer lab with up to 30 students simultaneously playing the game while moving around, gesturing, and talking to each other. The results were cross-validated at the student level to ensure generalization to new students. Classification accuracies, quantified as area under the receiver operating characteristic curve (AUC), were above chance (AUC of 0.5) for all the affective states observed, namely, boredom (AUC = .610), confusion (AUC = .649), delight (AUC = .867), engagement (AUC = .679), frustration (AUC = .631), and for off-task behavior (AUC = .816). Furthermore, the detectors showed temporal generalizability in that there was less than a 2% decrease in accuracy when tested on data collected from different times of the day and from different days. There was also some evidence of generalizability across ethnicity (as perceived by human coders) and gender, although with a higher degree of variability attributable to differences in affect base rates across subpopulations. In summary, our results demonstrate the feasibility of generalizable video-based detectors of naturalistic affect in a real-world setting, suggesting that the time is ripe for affect-sensitive interventions in educational games and other intelligent interfaces.
Collapse
Affiliation(s)
- Nigel Bosch
- University of Notre Dame, Notre Dame, IN, USA
32
Abstract
Observational measurement plays an integral role in a variety of scientific endeavors within biology, psychology, sociology, education, medicine, and marketing. The current article provides an interdisciplinary primer on observational measurement; in particular, it highlights recent advances in observational methodology and the challenges that accompany such growth. First, we detail the various types of instrument that can be used to standardize measurements across observers. Second, we argue for the importance of validity in observational measurement and provide several approaches to validation based on contemporary validity theory. Third, we outline the challenges currently faced by observational researchers pertaining to measurement drift, observer reactivity, reliability analysis, and time/expense. Fourth, we describe recent advances in computer-assisted measurement, fully automated measurement, and statistical data analysis. Finally, we identify several key directions for future observational research to explore.
33
Zhao R, Martinez AM. Labeled Graph Kernel for Behavior Analysis. IEEE Transactions on Pattern Analysis and Machine Intelligence 2016; 38:1640-1650. [PMID: 26415154] [PMCID: PMC4846576] [DOI: 10.1109/tpami.2015.2481404]
Abstract
Automatic behavior analysis from video is a major topic in many areas of research, including computer vision, multimedia, robotics, biology, cognitive science, social psychology, psychiatry, and linguistics. Two major problems are of interest when analyzing behavior. First, we wish to automatically categorize observed behaviors into a discrete set of classes (i.e., classification). For example, to determine word production from video sequences in sign language. Second, we wish to understand the relevance of each behavioral feature in achieving this classification (i.e., decoding). For instance, to know which behavior variables are used to discriminate between the words apple and onion in American Sign Language (ASL). The present paper proposes to model behavior using a labeled graph, where the nodes define behavioral features and the edges are labels specifying their order (e.g., before, overlaps, start). In this approach, classification reduces to a simple labeled graph matching. Unfortunately, the complexity of labeled graph matching grows exponentially with the number of categories we wish to represent. Here, we derive a graph kernel to quickly and accurately compute this graph similarity. This approach is very general and can be plugged into any kernel-based classifier. Specifically, we derive a Labeled Graph Support Vector Machine (LGSVM) and a Labeled Graph Logistic Regressor (LGLR) that can be readily employed to discriminate between many actions (e.g., sign language concepts). The derived approach can be readily used for decoding too, yielding invaluable information for the understanding of a problem (e.g., to know how to teach a sign language). The derived algorithms allow us to achieve higher accuracy results than those of state-of-the-art algorithms in a fraction of the time. We show experimental results on a variety of problems and datasets, including multimodal data.
34
Benitez-Quiroz CF, Wilbur RB, Martinez AM. The not face: A grammaticalization of facial expressions of emotion. Cognition 2016; 150:77-84. [PMID: 26872248] [DOI: 10.1016/j.cognition.2016.02.004]
Abstract
Facial expressions of emotion are thought to have evolved from the development of facial muscles used in sensory regulation and later adapted to express moral judgment. Negative moral judgment includes the expressions of anger, disgust and contempt. Here, we study the hypothesis that these facial expressions of negative moral judgment have further evolved into a facial expression of negation regularly used as a grammatical marker in human language. Specifically, we show that people from different cultures expressing negation use the same facial muscles as those employed to express negative moral judgment. We then show that this nonverbal signal is used as a co-articulator in speech and that, in American Sign Language, it has been grammaticalized as a non-manual marker. Furthermore, this facial expression of negation exhibits the theta oscillation (3-8 Hz) universally seen in syllable and mouthing production in speech and signing. These results provide evidence for the hypothesis that some components of human language have evolved from facial expressions of emotion, and suggest an evolutionary route for the emergence of grammatical markers.
35
Kawulok M, Nalepa J, Nurzynska K, Smolka B. In Search of Truth: Analysis of Smile Intensity Dynamics to Detect Deception. Lecture Notes in Computer Science 2016. [DOI: 10.1007/978-3-319-47955-2_27]
36
Abstract
Analysis of observable behavior in depression primarily relies on subjective measures. New computational approaches make possible automated audiovisual measurement of behaviors that humans struggle to quantify (e.g., movement velocity and voice inflection). These tools have the potential to improve screening and diagnosis, identify new behavioral indicators of depression, measure response to clinical intervention, and test clinical theories about underlying mechanisms. Highlights include a study that measured the temporal coordination of vocal tract and facial movements, a study that predicted which adolescents would go on to develop depression based on their voice qualities, and a study that tested the behavioral predictions of clinical theories using automated measures of facial actions and head motion.
Collapse
Affiliation(s)
- Jeffrey M. Girard
- Department of Psychology, University of Pittsburgh, Sennott Square, 210 S. Bouquet Street, Pittsburgh, PA 15260, USA
- Jeffrey F. Cohn
- Department of Psychology, University of Pittsburgh, Sennott Square, 210 S. Bouquet Street, Pittsburgh, PA 15260, USA