1. Kramer M, Hirsch D, Sacic A, Sader A, Willms J, Juckel G, Mavrogiorgou P. AI-enhanced analysis of naturalistic social interactions characterizes interaffective impairments in schizophrenia. J Psychiatr Res 2024;178:210-218. [PMID: 39153454] [DOI: 10.1016/j.jpsychires.2024.08.013]
Abstract
Social deficits in schizophrenia have been attributed to an impaired attunement to mutual interaction, or "interaffectivity". While impairments in emotion recognition and facial expressivity in schizophrenia have been consistently reported, findings on mimicry and social synchrony are inconsistent, and previous studies have often lacked ecological validity. To investigate interaffective behavior in dyadic interactions in a real-world-like setting, 20 individuals with schizophrenia and 20 without mental disorder played a cooperative board game with a previously unacquainted healthy control participant. Facial expression analysis was conducted using Affectiva Emotion AI in iMotions 9.3, and the contingency and state space distribution of emotional facial expressions were assessed using Mangold INTERACT. Psychotic symptoms, subjective stress, affectivity and game experience were evaluated through questionnaires. Because of a considerable between-group age difference, age-adjusted ANCOVA was performed. Overall, despite an unchanged subjective experience of the social interaction, individuals with schizophrenia exhibited reduced responsiveness to positive affective stimuli; subjective game experience did not differ between groups. Descriptively, facial expressions in schizophrenia were generally more negative, with increased sadness and decreased joy. Facial mimicry of joyful expressions was specifically impaired in schizophrenia, and this impairment correlated with blunted affect as measured by the SANS. Dyadic interactions involving persons with schizophrenia were less attracted toward mutual joyful affective states. Only in analyses unadjusted for age did individuals with schizophrenia show more angry and sad expressions in the absence of emotional stimuli from their interaction partner. These impairments in interaffective processes may contribute to social dysfunction in schizophrenia and suggest new avenues for future research.
Affiliation(s)
- Marco Kramer
- Dept. of Psychiatry, LWL University Hospital, Ruhr University Bochum, Germany
- Dustin Hirsch
- Dept. of Psychiatry, LWL University Hospital, Ruhr University Bochum, Germany
- Anesa Sacic
- Dept. of Psychiatry, LWL University Hospital, Ruhr University Bochum, Germany
- Alice Sader
- Dept. of Psychiatry, LWL University Hospital, Ruhr University Bochum, Germany
- Julien Willms
- Dept. of Psychiatry, LWL University Hospital, Ruhr University Bochum, Germany
- Georg Juckel
- Dept. of Psychiatry, LWL University Hospital, Ruhr University Bochum, Germany
2. Kramer M, Fink F, Campo LA, Akinci E, Wieser MO, Juckel G, Mavrogiorgou P. Video analysis of interaction in schizophrenia reveals functionally relevant abnormalities. Schizophr Res 2024;274:24-32. [PMID: 39250840] [DOI: 10.1016/j.schres.2024.09.003]
Abstract
OBJECTIVE: Deficits in dyadic social interaction appear to diminish social functioning in schizophrenia. However, most previous studies have limited ecological validity because of decontextualized experimental conditions far removed from real-world interaction. In this pilot study, we therefore exposed participants to a more real-world-like situation to generate new hypotheses for research and therapeutic interventions.
METHODS: Dyads of either participants with schizophrenia (n = 21) or control participants without mental disorder (n = 21) were shown a 5-min emotionally engaging movie. The subsequent uninstructed dyadic interaction was videotaped and analyzed by means of a semi-quantitative, software-supported behavioral analysis.
RESULTS: The patients with schizophrenia showed significant abnormalities in their social interaction, such as more negative verbalizations, a more open display of negative affect, and gaze abnormalities. Their interaction behavior was mostly characterized by neutral affect, silence, and avoidance of direct eye contact. Neutral affect was associated with poorer psychosocial performance. Verbal intelligence and empathy were associated with positive interaction variables, which were not diminished by psychotic symptom severity.
CONCLUSION: In this real-world-like dyadic interaction, participants with schizophrenia showed distinct abnormalities that are relevant to psychosocial performance and consistent with a hypothesized lack of attunement to interaffective situations.
Affiliation(s)
- Marco Kramer
- Ruhr University Bochum, LWL-University Hospital Bochum, Department of Psychiatry, Germany
- Fiona Fink
- Ruhr University Bochum, LWL-University Hospital Bochum, Department of Psychiatry, Germany
- Lorenz A Campo
- Ruhr University Bochum, LWL-University Hospital Bochum, Department of Psychiatry, Germany
- Erhan Akinci
- Ruhr University Bochum, LWL-University Hospital Bochum, Department of Psychiatry, Germany
- Max-Oskar Wieser
- Ruhr University Bochum, LWL-University Hospital Bochum, Department of Psychiatry, Germany
- Georg Juckel
- Ruhr University Bochum, LWL-University Hospital Bochum, Department of Psychiatry, Germany
- Paraskevi Mavrogiorgou
- Ruhr University Bochum, LWL-University Hospital Bochum, Department of Psychiatry, Germany
3. Bellal M, Lelandais J, Chabin T, Heudron A, Gourmelon T, Bauduin P, Cuchet P, Daubin C, De Carvalho Ribeiro C, Delcampe A, Goursaud S, Joret A, Mombrun M, Valette X, Cerasuolo D, Morello R, Mordel P, Chaillot F, Dutheil JJ, Vivien D, Du Cheyron D. Calibration trial of an innovative medical device (NEVVA©) for the evaluation of pain in non-communicating patients in the intensive care unit. Front Med (Lausanne) 2024;11:1309720. [PMID: 38994344] [PMCID: PMC11236545] [DOI: 10.3389/fmed.2024.1309720]
Abstract
Background: Pain management is an essential and complex issue for non-communicative patients undergoing sedation in the intensive care unit (ICU). The Behavioral Pain Scale (BPS), although imperfect, is the gold standard for assessing behavioral pain and is based partly on clinical facial expression. NEVVA©, an automatic pain assessment tool based on the facial expressions of critically ill patients, is a much-needed innovative medical device.
Methods: In this prospective pilot study, we recorded the facial expressions of critically ill patients in the medical ICU of Caen University Hospital, using an iPhone and Smart Motion Tracking System (SMTS) software with the Facial Action Coding System (FACS) to measure facial expressions metrically during sedation weaning. Recordings ran continuously, and BPS scores were collected hourly over two 8-h periods per day for 3 consecutive days. In this first stage, the algorithm of the innovative NEVVA© medical device was calibrated against the reference pain scale (BPS).
Results: Thirty participants were enrolled between March and July 2022. To assess acute illness severity, the Sequential Organ Failure Assessment (SOFA) and Simplified Acute Physiology Score (SAPS II) were recorded on ICU admission and were 9 and 47, respectively. All participants were deeply sedated, with a Richmond Agitation and Sedation Scale (RASS) score of -4 or less at inclusion. One thousand and six BPS recordings were obtained, of which 130 were retained for final calibration: 108 corresponding to the absence of pain and 22 to the presence of pain. Given the small dataset, a leave-one-subject-out cross-validation (LOSO-CV) strategy was used; training achieved a receiver operating characteristic (ROC) curve with an area under the curve (AUC) of 0.792, a sensitivity of 81.8%, and a specificity of 72.2%.
Conclusion: This pilot study calibrated the NEVVA© medical device and showed the feasibility of continuous facial expression analysis for pain monitoring in ICU patients. The next step will be to correlate this device with the BPS scale.
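As a check on the reported figures, sensitivity and specificity follow directly from the confusion matrix. With 22 pain and 108 no-pain recordings, 18 true positives and 78 true negatives reproduce the abstract's 81.8% and 72.2%; the sketch below uses that illustrative reconstruction, not the authors' actual counts:

```python
# Sensitivity and specificity from a binary confusion matrix.
# The counts are an illustrative reconstruction consistent with the
# abstract (22 pain, 108 no-pain recordings), not the study's raw data.

def sensitivity(tp: int, fn: int) -> float:
    """True positive rate: share of actual positives correctly detected."""
    return tp / (tp + fn)

def specificity(tn: int, fp: int) -> float:
    """True negative rate: share of actual negatives correctly rejected."""
    return tn / (tn + fp)

# 22 pain recordings: assume 18 detected (TP), 4 missed (FN).
# 108 no-pain recordings: assume 78 rejected (TN), 30 false alarms (FP).
sens = sensitivity(tp=18, fn=4)    # 18/22  = 0.8181...
spec = specificity(tn=78, fp=30)   # 78/108 = 0.7222...

print(f"sensitivity = {sens:.1%}, specificity = {spec:.1%}")
```

Rounded to one decimal, these hypothetical counts land exactly on the reported 81.8% and 72.2%.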
Affiliation(s)
- Mathieu Bellal
- Department of Medical Intensive Care, Caen University Hospital, Caen, France
- Normandie Univ., UNICAEN, INSERM UMRS U1237 PhIND, Caen, France
- Julien Lelandais
- Normandie Univ., UNICAEN, INSERM UMRS U1237 PhIND, Caen, France
- Samdoc Medical Technologies Company, Caen, France
- Pierrick Bauduin
- Department of Medical Intensive Care, Caen University Hospital, Caen, France
- Pierre Cuchet
- Department of Medical Intensive Care, Caen University Hospital, Caen, France
- Cédric Daubin
- Department of Medical Intensive Care, Caen University Hospital, Caen, France
- Augustin Delcampe
- Department of Medical Intensive Care, Caen University Hospital, Caen, France
- Suzanne Goursaud
- Department of Medical Intensive Care, Caen University Hospital, Caen, France
- Normandie Univ., UNICAEN, INSERM UMRS U1237 PhIND, Caen, France
- Aurélie Joret
- Department of Medical Intensive Care, Caen University Hospital, Caen, France
- Martin Mombrun
- Department of Medical Intensive Care, Caen University Hospital, Caen, France
- Xavier Valette
- Department of Medical Intensive Care, Caen University Hospital, Caen, France
- Damiano Cerasuolo
- Department of Methodology and Statistics, Caen University Hospital, Caen, France
- Rémy Morello
- Department of Methodology and Statistics, Caen University Hospital, Caen, France
- Patrick Mordel
- Department of Clinical Research, Caen University Hospital, Caen, France
- Fabien Chaillot
- Department of Clinical Research, Caen University Hospital, Caen, France
- Denis Vivien
- Normandie Univ., UNICAEN, INSERM UMRS U1237 PhIND, Caen, France
- Department of Clinical Research, Caen University Hospital, Caen, France
- Department of Biological Resources Center, Caen University Hospital, Caen, France
- Damien Du Cheyron
- Department of Medical Intensive Care, Caen University Hospital, Caen, France
4. Fujiwara K, Plusquellec P. Association of intensity and dominance of CEOs' smiles with corporate performance. Sci Rep 2024;14:13986. [PMID: 38886404] [PMCID: PMC11183120] [DOI: 10.1038/s41598-024-63956-2]
Abstract
This study investigated whether the facial expressions of chief executive officers (CEOs) are associated with corporate performance. A photograph of the CEO or president of each company on the Fortune Global 500 list for 2018 was taken from the company's official website. Smile intensity and action unit activation in each face were calculated using a pre-trained machine learning algorithm, FACET. The results revealed a positive association between smile intensity and company profit, even when controlling for the company's geographic location (Western culture versus others) and the CEO's gender. Furthermore, when smile type was classified from the activation of individual action units, this significant positive association held for the dominance smile but not for the reward and affiliative smiles. Relationships among a leader's smile intensity, group strategy, and group performance are discussed.
Affiliation(s)
- Ken Fujiwara
- Department of Psychology, National Chung Cheng University, Chiayi 621301, Taiwan, ROC
5. Ahn YA, Moffitt JM, Tao Y, Custode S, Parlade M, Beaumont A, Cardona S, Hale M, Durocher J, Alessandri M, Shyu ML, Perry LK, Messinger DS. Objective Measurement of Social Gaze and Smile Behaviors in Children with Suspected Autism Spectrum Disorder During Administration of the Autism Diagnostic Observation Schedule, 2nd Edition. J Autism Dev Disord 2024;54:2124-2137. [PMID: 37103660] [DOI: 10.1007/s10803-023-05990-z]
Abstract
Best practice for assessing autism spectrum disorder (ASD) symptom severity relies on clinician ratings from the Autism Diagnostic Observation Schedule, 2nd Edition (ADOS-2), but the association of these ratings with objective measures of children's social gaze and smiling is unknown. Sixty-six preschool-age children (49 boys, M = 39.97 months, SD = 10.58) with suspected ASD (61 confirmed ASD) were administered the ADOS-2, yielding social affect calibrated severity scores (SA CSS). Children's social gaze and smiling during the ADOS-2, captured by a camera embedded in eyeglasses worn by the examiner and parent, were quantified via a computer vision processing pipeline. Children who gazed more at their parents (p = .04) and whose gaze at their parents involved more smiling (p = .02) received lower social affect severity scores, indicating fewer social affect symptoms, adjusted R2 = .15, p = .003.
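The adjusted R² reported above penalizes the raw R² for the number of predictors in the model. A minimal sketch of the adjustment formula; n = 66 is the study's sample size, while the two-predictor count and the raw R² value are assumptions chosen only to illustrate the arithmetic:

```python
def adjusted_r2(r2: float, n: int, p: int) -> float:
    """Shrink R^2 toward zero for model size:
    1 - (1 - R^2) * (n - 1) / (n - p - 1)."""
    return 1 - (1 - r2) * (n - 1) / (n - p - 1)

# With n = 66 children and an assumed p = 2 predictors (gaze, smiling),
# a raw R^2 of about .176 would shrink to the reported adjusted .15.
adj = adjusted_r2(r2=0.176, n=66, p=2)
print(round(adj, 2))
```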
Affiliation(s)
- Yeojin A Ahn
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Yudong Tao
- Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
- Stephanie Custode
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Meaghan Parlade
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Amy Beaumont
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Sandra Cardona
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Melissa Hale
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Jennifer Durocher
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Mei-Ling Shyu
- Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
- Lynn K Perry
- Department of Psychology, University of Miami, Coral Gables, FL, USA
- Daniel S Messinger
- Department of Psychology, University of Miami, 5665 Ponce de Leon Blvd., P.O. Box 248185, Coral Gables, FL 33124, USA
- Department of Electrical and Computer Engineering, University of Miami, Coral Gables, FL, USA
- Departments of Pediatrics and Music Engineering, University of Miami, Coral Gables, FL, USA
6. Hall NT, Hallquist MN, Martin EA, Lian W, Jonas KG, Kotov R. Automating the analysis of facial emotion expression dynamics: A computational framework and application in psychotic disorders. Proc Natl Acad Sci U S A 2024;121:e2313665121. [PMID: 38530896] [PMCID: PMC10998559] [DOI: 10.1073/pnas.2313665121]
Abstract
Facial emotion expressions play a central role in interpersonal interactions; these displays are used to predict and influence the behavior of others. Despite their importance, quantifying and analyzing the dynamics of brief facial emotion expressions remains an understudied methodological challenge. Here, we present a method that leverages machine learning and network modeling to assess the dynamics of facial expressions. Using video recordings of clinical interviews, we demonstrate the utility of this approach in a sample of 96 people diagnosed with psychotic disorders and 116 never-psychotic adults. Participants diagnosed with schizophrenia tended to move from neutral expressions to uncommon expressions (e.g., fear, surprise), whereas participants diagnosed with other psychoses (e.g., mood disorders with psychosis) moved toward expressions of sadness. This method has broad applications to the study of normal and altered expressions of emotion and can be integrated with telemedicine to improve psychiatric assessment and treatment.
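Movement between expression states of the kind described above (e.g., neutral toward fear or surprise) can be summarized as a first-order Markov transition matrix estimated from a frame-by-frame label sequence. A minimal stdlib sketch; the toy sequence is invented for illustration and is not the study's data or its network model:

```python
from collections import Counter, defaultdict

def transition_probs(labels):
    """Estimate P(next state | current state) from a label sequence."""
    counts = defaultdict(Counter)
    for cur, nxt in zip(labels, labels[1:]):
        counts[cur][nxt] += 1
    return {cur: {nxt: c / sum(nxts.values()) for nxt, c in nxts.items()}
            for cur, nxts in counts.items()}

# Toy per-frame expression labels (illustrative only).
frames = ["neutral", "neutral", "fear", "neutral", "surprise",
          "surprise", "neutral", "sadness"]
probs = transition_probs(frames)
print(probs["neutral"])  # outgoing transition probabilities from "neutral"
```

Group differences can then be examined by comparing these per-participant transition matrices.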
Affiliation(s)
- Nathan T. Hall
- Department of Psychology and Neuroscience, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599
- Michael N. Hallquist
- Department of Psychology and Neuroscience, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599
- Elizabeth A. Martin
- Department of Psychological Science, University of California, Irvine, CA 92697
- Wenxuan Lian
- Department of Psychiatry, Stony Brook University, Stony Brook, NY 11794
- Roman Kotov
- Department of Psychiatry, Stony Brook University, Stony Brook, NY 11794
7. Law MJJ, Ridzwan MIZ, Ripin ZM, Abd Hamid IJ, Law KS, Karunagaran J, Cajee Y. Evaluation of a motorised patient transfer device based on perceived workload, technology acceptance, and emotional states. Disabil Rehabil Assist Technol 2024;19:938-950. [PMID: 36334271] [DOI: 10.1080/17483107.2022.2134472]
Abstract
PURPOSE: The high prevalence of musculoskeletal disorders (MSDs) among healthcare workers is partly attributed to the low adoption of patient transfer assistive devices. This study aimed to evaluate nurses' perceived workload, technology acceptance, and emotional states during use of a sliding board (SB) and of a mechanical intervention, the Motorised Patient Transfer Device (MPTD).
METHODS: The SB and MPTD transfers were performed by seven nurses on a simulated patient. The nurses' facial expressions were recorded during the trial, and the NASA Task Load Index (NASA-TLX) and a technology acceptance questionnaire were administered.
RESULTS: Compared to the SB (median = 6.85), the MPTD significantly reduced the mean overall NASA-TLX score by 68.7% (p = 0.004) and increased the overall acceptance score (median = 8.30) by 21.2% (p = 0.016). All subjects reported positive feelings towards the MPTD. However, facial expression analysis showed that the nurses had a significantly higher peak density of fear while using the MPTD (p = 0.016), and there was no improvement in negative valence or the contempt emotion compared to the SB.
CONCLUSION: Overall, nurses showed positive perceptions and acceptance of the MPTD even when they experienced negative emotions.
IMPLICATIONS FOR REHABILITATION
- The Motorised Patient Transfer Device (MPTD) reduced the perceived workload of nurses and showed a higher acceptance level compared to the commonly used baseline device (SB).
- Factors that contributed to the nurses' negative emotions can be used to improve the technology and patient transfer processes.
- More training should be given to familiarise health practitioners with the new assistive device to reduce their fear of technology.
Affiliation(s)
- Mitchelle J J Law
- Neurorehabilitation Engineering and Assistance Systems Research, School of Mechanical Engineering, Universiti Sains Malaysia, Penang, Malaysia
- Mohamad Ikhwan Zaini Ridzwan
- Neurorehabilitation Engineering and Assistance Systems Research, School of Mechanical Engineering, Universiti Sains Malaysia, Penang, Malaysia
- Zaidi Mohd Ripin
- Neurorehabilitation Engineering and Assistance Systems Research, School of Mechanical Engineering, Universiti Sains Malaysia, Penang, Malaysia
- Kim Sooi Law
- Advanced Medical and Dental Institute, Universiti Sains Malaysia, Penang, Malaysia
- Jeevinthiran Karunagaran
- Neurorehabilitation Engineering and Assistance Systems Research, School of Mechanical Engineering, Universiti Sains Malaysia, Penang, Malaysia
- Yusuf Cajee
- Freedom Med International Sdn. Bhd, Penang, Malaysia
8. Shepelenko A, Shepelenko P, Obukhova A, Kosonogov V, Shestakova A. The relationship between charitable giving and emotional facial expressions: Results from affective computing. Heliyon 2024;10:e23728. [PMID: 38347906] [PMCID: PMC10859774] [DOI: 10.1016/j.heliyon.2023.e23728]
Abstract
This study investigated the relationship between emotional states (valence, arousal, and six basic emotions) and donation size in pet charities, and compared the effectiveness of affective computing and emotion self-report methods in assessing the attractiveness of charity appeals. Using FaceReader software and self-report, we measured the emotional states of participants (N = 45) during a donation task. The results showed that sadness, happiness, and anger were significantly related to donation size: sadness and anger increased donations, whereas happiness decreased them. Arousal was not significantly correlated with willingness to donate. These results were supported by both methods, whereas the self-reported data on the association of surprise, fear, and disgust with donation size were inconclusive. Thus, unpleasant emotions increase donation size, and combining affective computing with self-reported data improves prediction of the effectiveness of a charity appeal. This study contributes to the understanding of the relationship between emotions and charitable behavior toward pet charities and evaluates the effectiveness of marketing mix elements using affective computing. Limitations include the laboratory setting and the lack of measurement of prolonged and repeated exposure to unpleasant charity appeals.
Affiliation(s)
- Anna Shepelenko
- Institute for Cognitive Neuroscience, HSE University, Moscow, Russia
- Anastasia Obukhova
- Phystech School of Biological and Medical Physics, Moscow Institute of Physics and Technology, Moscow, Russia
- Anna Shestakova
- Institute for Cognitive Neuroscience, HSE University, Moscow, Russia
9. Westermann JF, Schäfer R, Nordmann M, Richter P, Müller T, Franz M. Measuring facial mimicry: Affdex vs. EMG. PLoS One 2024;19:e0290569. [PMID: 38165847] [PMCID: PMC10760767] [DOI: 10.1371/journal.pone.0290569]
Abstract
Facial mimicry is the automatic imitation of the facial affect expressions of others and serves as an important component of interpersonal communication and affective co-experience. Facial mimicry has so far been measured by electromyography (EMG), which requires a complex measuring apparatus. Recently, software for measuring facial expressions has become available, but it remains unclear how well it is suited to measuring facial mimicry. This study investigates the comparability of the automated facial coding software Affdex with EMG for measuring facial mimicry. For this purpose, facial mimicry was induced in 33 subjects by presenting naturalistic affect-expressive video sequences (anger, joy). The subjects' responses were measured simultaneously by facial EMG (corrugator supercilii and zygomaticus major muscles) and by Affdex (action units lip corner puller and brow lowerer, and the affects joy and anger), and correlations between the EMG and Affdex measurements were calculated. After presentation of the joy stimulus, zygomaticus muscle activity (EMG) increased about 400 ms after stimulus onset, while joy and lip corner puller activity (Affdex) increased about 1200 ms after stimulus onset; the joy and lip corner puller activity detected by Affdex correlated significantly with the EMG activity. After presentation of the anger stimulus, corrugator muscle activity (EMG) likewise increased approximately 400 ms after stimulus onset, whereas anger and brow lowerer activity (Affdex) showed no response and, over the entire measurement interval, did not correlate with corrugator muscle activity (EMG). Using Affdex, the facial mimicry response to a joy stimulus can be measured, but it is detected approximately 800 ms later than with EMG. Thus, electromyography remains the tool of choice for studying subtle mimic processes such as facial mimicry.
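A latency offset like the ~800 ms reported above can be quantified with a lagged Pearson correlation: shift one series against the other and take the lag that maximizes the correlation. A stdlib sketch on synthetic signals; the series and the 5 Hz analysis rate are invented for illustration, not the study's data:

```python
from math import sqrt

def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

def best_lag(reference, delayed, max_lag):
    """Lag (in samples) at which `delayed` best matches `reference`."""
    scores = {lag: pearson(reference[:len(reference) - lag], delayed[lag:])
              for lag in range(max_lag + 1)}
    return max(scores, key=scores.get)

# Synthetic example: the "Affdex" series equals the "EMG" series shifted
# right by 4 samples (e.g., 800 ms at an assumed 5 Hz analysis rate).
emg = [0, 0, 1, 3, 5, 4, 2, 1, 0, 0, 0, 0]
affdex = [0, 0, 0, 0, 0, 0, 1, 3, 5, 4, 2, 1]
print(best_lag(emg, affdex, max_lag=6))  # prints 4
```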
Affiliation(s)
- Jan-Frederik Westermann
- Medical Faculty, Clinical Institute for Psychosomatic Medicine and Psychotherapy, University Hospital of the Heinrich-Heine-University, Düsseldorf, Germany
- Ralf Schäfer
- Medical Faculty, Clinical Institute for Psychosomatic Medicine and Psychotherapy, University Hospital of the Heinrich-Heine-University, Düsseldorf, Germany
- Marc Nordmann
- Medical Faculty, Clinical Institute for Psychosomatic Medicine and Psychotherapy, University Hospital of the Heinrich-Heine-University, Düsseldorf, Germany
- Peter Richter
- Medical Faculty, Clinical Institute for Psychosomatic Medicine and Psychotherapy, University Hospital of the Heinrich-Heine-University, Düsseldorf, Germany
- Tobias Müller
- Medical Faculty, Clinical Institute for Psychosomatic Medicine and Psychotherapy, University Hospital of the Heinrich-Heine-University, Düsseldorf, Germany
- Matthias Franz
- Medical Faculty, Clinical Institute for Psychosomatic Medicine and Psychotherapy, University Hospital of the Heinrich-Heine-University, Düsseldorf, Germany
10. Lopez-Aguilar AA, Bustamante-Bello MR, Navarro-Tuch SA, Molina A. Development of a Framework for the Communication System Based on KNX for an Interactive Space for UX Evaluation. Sensors (Basel) 2023;23:9570. [PMID: 38067942] [PMCID: PMC10708817] [DOI: 10.3390/s23239570]
Abstract
Domotics (home automation) aims to improve people's quality of life by integrating intelligent systems within inhabitable spaces. While traditionally associated with smart home systems, these technologies also have potential for User Experience (UX) research: emulating environments in which to test products and services, combined with non-invasive user monitoring tools for emotion recognition, enables objective UX evaluation. To this end, a testing booth was built and instrumented with devices based on KNX, an international standard for home automation, to conduct experiments and ensure replicability. A framework based on Python was designed to synchronize the KNX systems with emotion recognition tools; synchronizing these data streams allows patterns to be found during the interaction process. To evaluate the framework, an experiment was conducted in a simulated laundry room within the testing booth to analyze participants' emotional responses while they interacted with prototypes of new detergent bottles. Emotional responses were contrasted with traditional questionnaires to determine the viability of non-invasive methods. Using emulated environments alongside non-invasive monitoring tools allowed an immersive experience for participants, and the results indicated that the testing booth can support a robust UX evaluation methodology.
Affiliation(s)
- Ariel A. Lopez-Aguilar
- School of Engineering and Sciences, Tecnologico de Monterrey, Mexico City 14380, Mexico
- M. Rogelio Bustamante-Bello
- School of Engineering and Sciences, Tecnologico de Monterrey, Mexico City 14380, Mexico
11. Cheong JH, Jolly E, Xie T, Byrne S, Kenney M, Chang LJ. Py-Feat: Python Facial Expression Analysis Toolbox. Affect Sci 2023;4:781-796. [PMID: 38156250] [PMCID: PMC10751270] [DOI: 10.1007/s42761-023-00191-4]
Abstract
Studying facial expressions is a notoriously difficult endeavor. Recent advances in the field of affective computing have yielded impressive progress in automatically detecting facial expressions from pictures and videos. However, much of this work has yet to be widely disseminated in social science domains such as psychology: current state-of-the-art models require considerable domain expertise that is not traditionally incorporated into social science training programs, and there is a notable absence of user-friendly, open-source software providing a comprehensive set of tools and functions to support facial expression research. In this paper, we introduce Py-Feat, an open-source Python toolbox that supports detecting, preprocessing, analyzing, and visualizing facial expression data. Py-Feat makes it easy for domain experts to disseminate and benchmark computer vision models, and for end users to quickly process, analyze, and visualize facial expression data. We hope this platform will facilitate increased use of facial expression data in human behavior research.
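Toolboxes of this kind typically reduce per-frame action-unit detections to per-video summaries before statistical analysis. A minimal stdlib sketch of that aggregation step only; the frame data are invented for illustration, and this is not the actual Py-Feat API:

```python
def summarize_aus(frames):
    """Mean activation per action unit across detected frames."""
    totals, counts = {}, {}
    for frame in frames:
        for au, value in frame.items():
            totals[au] = totals.get(au, 0.0) + value
            counts[au] = counts.get(au, 0) + 1
    return {au: totals[au] / counts[au] for au in totals}

# Toy per-frame AU intensities (illustrative only):
# AU06 = cheek raiser, AU12 = lip corner puller.
frames = [
    {"AU06": 0.9, "AU12": 0.8},
    {"AU06": 0.7, "AU12": 1.0},
    {"AU06": 0.2, "AU12": 0.3},
]
means = summarize_aus(frames)
print(means)  # mean AU06 ~ 0.6, mean AU12 ~ 0.7
```

In Py-Feat itself, the detection step that produces such per-frame values is handled by the toolbox's detector classes; consult its documentation for the real interface.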
Affiliation(s)
- Jin Hyun Cheong
- Computational Social and Affective Neuroscience Laboratory, Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
- Eshin Jolly
- Computational Social and Affective Neuroscience Laboratory, Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
- Tiankang Xie
- Computational Social and Affective Neuroscience Laboratory, Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
- Department of Quantitative Biomedical Sciences, Geisel School of Medicine, Dartmouth College, Hanover, NH 03755, USA
- Sophie Byrne
- Computational Social and Affective Neuroscience Laboratory, Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
- Matthew Kenney
- Computational Social and Affective Neuroscience Laboratory, Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
- Luke J. Chang
- Computational Social and Affective Neuroscience Laboratory, Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
- Department of Quantitative Biomedical Sciences, Geisel School of Medicine, Dartmouth College, Hanover, NH 03755, USA
12
|
Ernst H, Scherpf M, Pannasch S, Helmert JR, Malberg H, Schmidt M. Assessment of the human response to acute mental stress-An overview and a multimodal study. PLoS One 2023; 18:e0294069. [PMID: 37943894] [PMCID: PMC10635557] [DOI: 10.1371/journal.pone.0294069]
Abstract
Numerous vital signs are reported in association with stress response assessment, but their application varies widely. This work provides an overview of methods for stress induction and strain assessment, and presents a multimodal experimental study to identify the most important vital signs for effective assessment of the response to acute mental stress. We induced acute mental stress in 65 healthy participants with the Mannheim Multicomponent Stress Test and acquired self-assessment measures (Likert scale, Self-Assessment Manikin), salivary α-amylase and cortisol concentrations, as well as 60 vital signs from biosignals, such as heart rate variability parameters, QT variability parameters, skin conductance level, and breath rate. By means of statistical testing and a self-optimizing logistic regression, we identified the most important biosignal vital signs. Fifteen biosignal vital signs related to ventricular repolarization variability, blood pressure, skin conductance, and respiration showed significant results. The logistic regression converged with QT variability index, left ventricular work index, earlobe pulse arrival time, skin conductance level, rise time and number of skin conductance responses, breath rate, and breath rate variability (F1 = 0.82). Self-assessment measures indicated successful stress induction. α-Amylase and cortisol showed effect sizes of -0.78 and 0.55, respectively. In summary, the hypothalamic-pituitary-adrenocortical axis and the sympathetic nervous system were successfully activated. Our findings facilitate a coherent and integrative understanding of the assessment of the stress response and help to align applications and future research concerning acute mental stress.
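The F1 score the study reports for its stress-vs-rest classifier combines precision and recall. A self-contained sketch with invented labels and predictions (not the study's data):

```python
# F1 = 2 * precision * recall / (precision + recall), the metric used to
# evaluate a binary stress-vs-rest classifier. Labels below are made up
# purely for illustration.
def f1_score(y_true, y_pred):
    tp = sum(t == 1 and p == 1 for t, p in zip(y_true, y_pred))  # true positives
    fp = sum(t == 0 and p == 1 for t, p in zip(y_true, y_pred))  # false positives
    fn = sum(t == 1 and p == 0 for t, p in zip(y_true, y_pred))  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    return 2 * precision * recall / (precision + recall) if precision + recall else 0.0

y_true = [1, 1, 1, 0, 0, 0, 1, 0]  # 1 = stress epoch, 0 = rest epoch
y_pred = [1, 1, 0, 0, 0, 1, 1, 0]
print(f1_score(y_true, y_pred))  # 0.75
```

With three true positives, one false positive, and one false negative, precision and recall are both 0.75, so F1 is 0.75.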
Affiliation(s)
- Hannes Ernst, Institute of Biomedical Engineering, TU Dresden, Dresden, Germany
- Matthieu Scherpf, Institute of Biomedical Engineering, TU Dresden, Dresden, Germany
- Sebastian Pannasch, Chair of Engineering Psychology and Applied Cognitive Research, TU Dresden, Dresden, Germany
- Jens R. Helmert, Chair of Engineering Psychology and Applied Cognitive Research, TU Dresden, Dresden, Germany
- Hagen Malberg, Institute of Biomedical Engineering, TU Dresden, Dresden, Germany
- Martin Schmidt, Institute of Biomedical Engineering, TU Dresden, Dresden, Germany

13
Fuzi J, Meller C, Ch'ng S, Dusseldorp J. The Emerging Role of Artificial Intelligence Tools for Outcome Measurement in Facial Reanimation Surgery: A Review. Facial Plast Surg Aesthet Med 2023; 25:556-561. [PMID: 37782135] [DOI: 10.1089/fpsam.2022.0424]
Abstract
Importance: Surgeons treating facial paralysis with reanimation surgery measure the outcomes of surgery and adjust treatment to each patient's needs. Our objective is to review the current subjective facial paralysis assessment tools and the emerging computer-based objective analysis, which may involve artificial intelligence. Observations: In recent years, many new automated approaches to outcome measurement in facial reanimation surgery have been developed. Most of these tools utilize artificial intelligence to analyze emotional expression and the symmetry of facial landmarks. Other tools have provided automated approaches to existing clinician-guided scales. Conclusions: Computer-based tools using artificial intelligence have been developed both to improve existing clinician-graded scales and to provide new approaches to facial symmetry and emotional expressivity analysis.
Affiliation(s)
- Jordan Fuzi, Department of Otolaryngology/Head and Neck Surgery, Prince of Wales Hospital, Randwick, Australia; Faculty of Medicine, University of Sydney, Camperdown, Australia
- Catherine Meller, Department of Otolaryngology/Head and Neck Surgery, Prince of Wales Hospital, Randwick, Australia
- Sydney Ch'ng, Faculty of Medicine, University of Sydney, Camperdown, Australia; Department of Plastic and Reconstructive Surgery, Chris O'Brien Lifehouse, Camperdown, Australia
- Joseph Dusseldorp, Faculty of Medicine, University of Sydney, Camperdown, Australia; Department of Plastic and Reconstructive Surgery, Chris O'Brien Lifehouse, Camperdown, Australia; Department of Plastic and Reconstructive Surgery, Concord Repatriation General Hospital, Concord, Australia

14
Cheong JH, Molani Z, Sadhukha S, Chang LJ. Synchronized affect in shared experiences strengthens social connection. Commun Biol 2023; 6:1099. [PMID: 37898664] [PMCID: PMC10613250] [DOI: 10.1038/s42003-023-05461-2]
Abstract
People structure their days to experience events with others. We gather to eat meals, watch TV, and attend concerts together. What constitutes a shared experience and how does it manifest in dyadic behavior? The present study investigates how shared experiences, measured through emotional, motoric, physiological, and cognitive alignment, promote social bonding. We recorded the facial expressions and electrodermal activity (EDA) of participants as they watched four episodes of a TV show, for a total of 4 h, with another participant. Participants displayed temporally synchronized and spatially aligned emotional facial expressions, and the degree of synchronization predicted the self-reported social connection ratings between viewing partners. We observed a similar pattern of results for dyadic physiological synchrony measured via EDA and for their cognitive impressions of the characters. All four of these factors (temporal synchrony of positive facial expressions, spatial alignment of expressions, EDA synchrony, and character impression similarity) contributed to a latent factor of a shared experience that predicted social connection. Our findings suggest that the development of interpersonal affiliations in shared experiences emerges from shared affective experiences comprising synchronous processes, and demonstrate that these complex interpersonal processes can be studied in a holistic and multi-modal framework leveraging naturalistic experimental designs.
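One simple way to quantify the dyadic temporal synchrony this study measures is the Pearson correlation between two viewers' frame-by-frame positive-expression intensities. A minimal sketch with invented time series (the study's actual pipeline is richer):

```python
# Pearson correlation between two viewers' smile-intensity time series as a
# toy synchrony index: near +1 means the pair smiles and relaxes together.
from math import sqrt

def pearson(x, y):
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    sx = sqrt(sum((a - mx) ** 2 for a in x))
    sy = sqrt(sum((b - my) ** 2 for b in y))
    return cov / (sx * sy)

# Invented per-frame positive-expression intensities for one dyad.
viewer_a = [0.1, 0.4, 0.8, 0.5, 0.2]
viewer_b = [0.0, 0.3, 0.9, 0.4, 0.1]
print(round(pearson(viewer_a, viewer_b), 2))  # 0.99
```

Sliding-window or lagged cross-correlation variants of the same idea capture synchrony that is offset in time.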
Affiliation(s)
- Jin Hyun Cheong, Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Zainab Molani, Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Sushmita Sadhukha, Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Luke J Chang, Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA

15
La Monica L, Cenerini C, Vollero L, Pennazza G, Santonico M, Keller F. Development of a Universal Validation Protocol and an Open-Source Database for Multi-Contextual Facial Expression Recognition. Sensors (Basel) 2023; 23:8376. [PMID: 37896470] [PMCID: PMC10611000] [DOI: 10.3390/s23208376]
Abstract
Facial expression recognition (FER) poses a complex challenge due to diverse factors such as facial morphology variations, lighting conditions, and cultural nuances in emotion representation. To address these hurdles, specific FER algorithms leverage advanced data analysis for inferring emotional states from facial expressions. In this study, we introduce a universal validation methodology that assesses any FER algorithm's performance through a web application in which subjects respond to emotive images. We present FeelPix, a labelled database generated from the facial landmark coordinates collected during FER algorithm validation. FeelPix is available to train and test generic FER algorithms that accurately identify users' facial expressions. A testing algorithm classifies emotions based on FeelPix data, ensuring its reliability. Designed as a computationally lightweight solution, it finds applications in online systems. Our contribution improves facial expression recognition, enabling the identification and interpretation of emotions associated with facial expressions and offering insights into individuals' emotional reactions, with implications for healthcare, security, human-computer interaction, and entertainment.
Affiliation(s)
- Ludovica La Monica, Department of Engineering, Unit of Computational Systems and Bioinformatics, Università Campus Bio-Medico di Roma, 00128 Rome, Italy
- Costanza Cenerini, Department of Engineering, Unit of Electronics for Sensor Systems, Università Campus Bio-Medico di Roma, 00128 Rome, Italy
- Luca Vollero, Department of Engineering, Unit of Computational Systems and Bioinformatics, Università Campus Bio-Medico di Roma, 00128 Rome, Italy
- Giorgio Pennazza, Department of Engineering, Unit of Electronics for Sensor Systems, Università Campus Bio-Medico di Roma, 00128 Rome, Italy
- Marco Santonico, Department of Science and Technology for Sustainable Development and One Health, Unit of Electronics for Sensor Systems, Università Campus Bio-Medico di Roma, 00128 Rome, Italy
- Flavio Keller, Department of Medicine, Unit of Developmental Neuroscience, Università Campus Bio-Medico di Roma, 00128 Rome, Italy

16
Kim H, Küster D, Girard JM, Krumhuber EG. Human and machine recognition of dynamic and static facial expressions: prototypicality, ambiguity, and complexity. Front Psychol 2023; 14:1221081. [PMID: 37794914] [PMCID: PMC10546417] [DOI: 10.3389/fpsyg.2023.1221081]
Abstract
A growing body of research suggests that movement aids facial expression recognition. However, less is known about the conditions under which the dynamic advantage occurs. The aim of this research was to test emotion recognition in static and dynamic facial expressions, thereby exploring the role of three featural parameters (prototypicality, ambiguity, and complexity) in human and machine analysis. In two studies, facial expression videos and corresponding images depicting the peak of the target and non-target emotion were presented to human observers and a machine classifier (FACET). Results revealed higher recognition rates for dynamic stimuli compared to non-target images. This benefit disappeared for target-emotion images, which were recognised as well as (or even better than) videos and were more prototypical, less ambiguous, and more complex in appearance than non-target images. While prototypicality and ambiguity exerted more predictive power in machine performance, complexity was more indicative of human emotion recognition. Interestingly, recognition performance by the machine was found to be superior to humans for both target and non-target images. Together, the findings point towards a compensatory role of dynamic information, particularly when static stimuli lack relevant features of the target emotion. Implications for research using automatic facial expression analysis (AFEA) are discussed.
Affiliation(s)
- Hyunwoo Kim, Department of Experimental Psychology, University College London, London, United Kingdom
- Dennis Küster, Cognitive Systems Lab, Department of Mathematics and Computer Science, University of Bremen, Bremen, Germany
- Jeffrey M. Girard, Department of Psychology, University of Kansas, Lawrence, KS, United States
- Eva G. Krumhuber, Department of Experimental Psychology, University College London, London, United Kingdom

17
Cross MP, Acevedo AM, Hunter JF. A Critique of Automated Approaches to Code Facial Expressions: What Do Researchers Need to Know? Affective Science 2023; 4:500-505. [PMID: 37744972] [PMCID: PMC10514002] [DOI: 10.1007/s42761-023-00195-0]
Abstract
Facial expression recognition software is becoming more commonly used by affective scientists to measure facial expressions. Although the use of this software has exciting implications, there are persistent and concerning issues regarding the validity and reliability of these programs. In this paper, we highlight three of these issues: biases of the programs against certain skin colors and genders; the common inability of these programs to capture facial expressions made in non-idealized conditions (e.g., "in the wild"); and programs being forced to adopt the underlying assumptions of the specific theory of emotion on which each software is based. We then discuss three directions for the future of affective science in the area of automated facial coding. First, researchers need to be cognizant of exactly how and on which data sets the machine learning algorithms underlying these programs are being trained. In addition, there are several ethical considerations, such as privacy and data storage, surrounding the use of facial expression recognition programs. Finally, researchers should consider collecting additional emotion data, such as body language, and combine these data with facial expression data in order to achieve a more comprehensive picture of complex human emotions. Facial expression recognition programs are an excellent method of collecting facial expression data, but affective scientists should ensure that they recognize the limitations and ethical implications of these programs.
Affiliation(s)
- Marie P. Cross, Department of Biobehavioral Health, Pennsylvania State University, University Park, PA, USA
- Amanda M. Acevedo, Basic Biobehavioral and Psychological Sciences Branch, National Cancer Institute, Rockville, MD, USA
- John F. Hunter, Department of Psychology, Chapman University, Orange, CA, USA

18
Ho MH, Kemp BT, Eisenbarth H, Rijnders RJP. Designing a neuroclinical assessment of empathy deficits in psychopathy based on the Zipper Model of Empathy. Neurosci Biobehav Rev 2023; 151:105244. [PMID: 37225061] [DOI: 10.1016/j.neubiorev.2023.105244]
Abstract
The heterogeneity of the literature on empathy highlights its multidimensional and dynamic nature and is reflected in unclear descriptions of empathy in the context of psychopathology. The Zipper Model of Empathy integrates current theories of empathy and proposes that empathy maturity depends on whether contextual and personal factors push affective and cognitive processes together or apart. This concept paper therefore proposes a comprehensive battery of physiological and behavioral measures to empirically assess empathy processing according to this model, with an application to psychopathic personality. We propose using the following measures to assess each component of this model: (1) facial electromyography; (2) the Emotion Recognition Task; (3) the Empathy Accuracy task and physiological measures (e.g., heart rate); (4) a selection of Theory of Mind tasks and an adapted Dot Perspective Task; and (5) an adjusted Charity Task. Ultimately, we hope this paper serves as a starting point for discussion and debate on defining and assessing empathy processing, and encourages research to falsify and update this model to improve our understanding of empathy.
Affiliation(s)
- Man Him Ho, Danish Research Center for Magnetic Resonance, Kettegård Alle 30, 2650 Hvidovre, Capital Region, Denmark; Maastricht University, Psychology Neurosciences Department, Universiteitssingel 40, 6229 ER Maastricht, the Netherlands
- Benjamin Thomas Kemp, Maastricht University, Psychology Neurosciences Department, Universiteitssingel 40, 6229 ER Maastricht, the Netherlands
- Hedwig Eisenbarth, School of Psychology, Victoria University of Wellington, PO Box 600, Wellington 6140, New Zealand
- Ronald J P Rijnders, Netherlands Institute for Forensic Psychiatry and Psychology, Forensic Observation Clinic "Pieter Baan Centrum", Carl Barksweg 3, 1336 ZL Almere, the Netherlands; Utrecht University, Faculty of Social Sciences, Department of Psychology, Heidelberglaan 8, 3584 CS Utrecht, the Netherlands

19
Richlan F, Thürmer JL, Braid J, Kastner P, Leitner MC. Subjective experience, self-efficacy, and motivation of professional football referees during the COVID-19 pandemic. Humanit Soc Sci Commun 2023; 10:215. [PMID: 37192951] [PMCID: PMC10166025] [DOI: 10.1057/s41599-023-01720-z]
Abstract
The present multi-study article investigates the subjective experience of professional football (a.k.a. soccer) referees and players during the COVID-19 pandemic and the so-called ghost games (i.e., games without supporters). Referees from the Austrian Football Association completed questionnaires inquiring about self-efficacy, motivation, and general personal observations and perceptions (e.g., arousal or confidence). In addition, two players and one referee in the Austrian Football Bundesliga were interviewed retrospectively about their subjective experience during ghost games and the effects of emotions on behavior and performance, using semi-structured, video-taped interviews. Results of the referee survey indicate that the most profound differences between regular games and ghost games lie in the domain of intrinsic motivation and multiple aspects of subjective experience. Specifically, referees reported the experience in ghost games as significantly less motivating, less excited/tense, less emotional, less focused, and overall more negative than regular games, despite the games being easier to referee and the players behaving more positively. Qualitative analyses of the video-taped interview footage indicated (i) substantial inter-individual variability in the extent to which empty stadiums affected the subjective experience of emotions, (ii) consequently, different strategies to regulate emotions and arousal from suboptimal to optimal levels, both before and during competition, and (iii) interactions between reported emotions, arousal, motivation, self-confidence, behavior, and performance on the pitch. In addition, non-verbal expressions of emotion were captured using fully automated AI software that coded facial movements during the interviews.
The results of this exploratory facial expression analysis revealed varying degrees of arousal and valence in relation to the content of the statements during the interviews, demonstrating the convergent validity of our findings. Our findings contribute to the growing literature on the effects of football games without fans during the COVID-19 pandemic and provide insights into the subjective experience of professional football referees. Concerning referees and players alike, emotions are investigated as potential processes related to home-field advantage and performance in professional football by means of a multi-method approach. Further, it is discussed how the combination of qualitative and quantitative measures, as well as verbal and non-verbal communication channels, can deepen our understanding of the emotional influence of (missing) spectators on the subjective experience and behavior of sports professionals.
Affiliation(s)
- Fabio Richlan, Centre for Cognitive Neuroscience, Paris-Lodron-University, Salzburg, Austria; Department of Psychology, Paris-Lodron-University, Salzburg, Austria
- J. Lukas Thürmer, Department of Psychology, Paris-Lodron-University, Salzburg, Austria; Department of Psychology, Ludwig-Maximilians-University, Munich, Germany
- Jeremias Braid, Centre for Cognitive Neuroscience, Paris-Lodron-University, Salzburg, Austria; Department of Psychology, Paris-Lodron-University, Salzburg, Austria
- Patrick Kastner, Centre for Cognitive Neuroscience, Paris-Lodron-University, Salzburg, Austria; Department of Psychology, Paris-Lodron-University, Salzburg, Austria
- Michael Christian Leitner, Centre for Cognitive Neuroscience, Paris-Lodron-University, Salzburg, Austria; Department of Psychology, Paris-Lodron-University, Salzburg, Austria; Salzburg University of Applied Sciences, Salzburg, Austria

20
Venkitakrishnan S, Wu YH. Facial Expressions as an Index of Listening Difficulty and Emotional Response. Semin Hear 2023; 44:166-187. [PMID: 37122878] [PMCID: PMC10147507] [DOI: 10.1055/s-0043-1766104]
Abstract
Knowledge about the listening difficulty experienced during a task can be used to better understand speech perception processes, to guide amplification outcomes, and by individuals to decide whether to participate in communication. Another factor affecting these decisions is an individual's emotional response, which has not previously been measured objectively. In this study, we describe a novel method of measuring the listening difficulty and affect of individuals in adverse listening situations using an automatic facial expression algorithm. The purpose of our study was to determine whether facial expressions of confusion and frustration are sensitive to changes in listening difficulty. We recorded speech recognition scores, facial expressions, subjective listening effort scores, and subjective emotional responses in 33 young participants with normal hearing. We used signal-to-noise ratios of -1, +2, and +5 dB SNR, as well as a quiet condition, to vary the difficulty level. We found that facial expressions of confusion and frustration increased as difficulty increased overall, but not between adjacent levels. We also found a relationship between facial expressions and both subjective emotion ratings and subjective listening effort. Emotional responses in the form of facial expressions show promise as a measure of affect and listening difficulty. Further research is needed to determine the specific contribution of affect to communication in challenging listening environments.
Affiliation(s)
- Soumya Venkitakrishnan, Department of Communication Sciences and Disorders, California State University, Sacramento, California
- Yu-Hsiang Wu, Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa

21
Zhang X, Qiao Y, Wang H, Wang J, Chen D. Lighting environmental assessment in enclosed spaces based on emotional model. Sci Total Environ 2023; 870:161933. [PMID: 36736394] [DOI: 10.1016/j.scitotenv.2023.161933]
Abstract
Lighting assessment in special operating environments like enclosed spaces is of great research significance and value. In addition to investigating the visual ergonomics of workers, the emotional monitoring and guidance of workers in enclosed spaces is also a research focus. Based on the circumplex model of emotion, this paper designs an experiment to assess emotions in an enclosed space with 6 different lighting settings (2 correlated color temperatures (CCT) × 3 illuminance levels). For subjective assessment, participants used a rapid sensory analysis method (check-all-that-apply, CATA) and a Subjective Coordinate Scale (SCS) method for rapid ambience perception checking and emotional self-reporting of the lighting settings. For objective evaluation, the participants' facial expressions were recorded during the experiment using a camera, and the recordings were then automatically analyzed and classified using FaceReader (FRE) software. The CATA and SCS showed similar results: the 3100 K × 600 lx, 3100 K × 1000 lx, and 6500 K × 600 lx settings created a relaxed, pleasant emotion in participants; the 6500 K × 1000 lx setting created an excited, tense atmosphere; and the low-illuminance settings of 3100 K × 250 lx and 6500 K × 250 lx made participants feel tired and frustrated. The results of the objective emotion analysis indicate that the FRE was able to effectively identify differences in participants' emotions in response to different lighting settings and was consistent with participants' subjective emotion reports. This laboratory study validates that the three methods can effectively assess enclosed-space lighting settings, and provides a reference for further research on enclosed-space lighting and the emotional monitoring of workers.
Affiliation(s)
- Xian Zhang, Key Laboratory of Industrial Design and Ergonomics, Ministry of Industry and Information Technology, Northwestern Polytechnical University, Xi'an 710072, China
- Yidan Qiao, Key Laboratory of Industrial Design and Ergonomics, Ministry of Industry and Information Technology, Northwestern Polytechnical University, Xi'an 710072, China
- Hanyu Wang, Key Laboratory of Industrial Design and Ergonomics, Ministry of Industry and Information Technology, Northwestern Polytechnical University, Xi'an 710072, China
- Jingluan Wang, Key Laboratory of Industrial Design and Ergonomics, Ministry of Industry and Information Technology, Northwestern Polytechnical University, Xi'an 710072, China
- Dengkai Chen, Key Laboratory of Industrial Design and Ergonomics, Ministry of Industry and Information Technology, Northwestern Polytechnical University, Xi'an 710072, China

22
Ceccacci S, Generosi A, Giraldi L, Mengoni M. Emotional Valence from Facial Expression as an Experience Audit Tool: An Empirical Study in the Context of Opera Performance. Sensors (Basel) 2023; 23:2688. [PMID: 36904892] [PMCID: PMC10007453] [DOI: 10.3390/s23052688]
Abstract
This paper aims to explore the potential offered by emotion recognition systems to provide a feasible response to the growing need for audience understanding and development in the field of arts organizations. Through an empirical study, it was investigated whether the emotional valence measured on the audience through an emotion recognition system based on facial expression analysis can be used with an experience audit to: (1) support the understanding of the emotional responses of customers toward any cue that characterizes a staged performance; and (2) systematically investigate the customers' overall experience in terms of their overall satisfaction. The study was carried out in the context of live opera in the open-air neoclassical theater Arena Sferisterio in Macerata, during 11 opera performances. A total of 132 spectators were involved. Both the emotional valence provided by the emotion recognition system and the quantitative data on customer satisfaction, collected through a survey, were considered. Results suggest that the collected data can help the artistic director estimate the audience's overall level of satisfaction and make choices about the specific characteristics of the performance, and that emotional valence measured on the audience during the show can help predict overall customer satisfaction as measured using traditional self-report methods.
Affiliation(s)
- Silvia Ceccacci, Department of Education, Cultural Heritage and Tourism, Università degli Studi di Macerata, P.le Luigi Bertelli 1, 62100 Macerata, Italy
- Andrea Generosi, Department of Industrial Engineering and Mathematical Sciences, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy
- Luca Giraldi, Emoj srl, Via Ferruccio Fioretti 10/B, 60131 Ancona, Italy
- Maura Mengoni, Department of Industrial Engineering and Mathematical Sciences, Università Politecnica delle Marche, Via Brecce Bianche 12, 60131 Ancona, Italy

23
Mohammed H, Kumar R, Bennani H, Perry J, Halberstadt JB, Farella M. Malocclusion severity and smile features: Is there an association? Am J Orthod Dentofacial Orthop 2023:S0889-5406(23)00031-8. [PMID: 36842950] [DOI: 10.1016/j.ajodo.2022.10.023]
Abstract
INTRODUCTION This observational study investigated the relationship between malocclusion and smiling. METHODS Adolescents and young adults (n = 72; aged 16-25 years) were identified according to their Dental Aesthetic Index (DAI) and allocated to 3 groups: (1) malocclusion group (n = 24; DAI ≥31), (2) retention group (n = 24; pretreatment DAI ≥31) with a prior malocclusion that had been corrected by orthodontic treatment, (3) control group with no-to-minor malocclusion (n = 24; DAI ≤25). Participants were requested to watch an amusing video. Based on the Facial Action Coding System, automated pattern recognition was used to detect smile episodes and assess their frequency, duration, genuineness, intensity, and extent of tooth show. Demographics, Big Five personality dimensions, and self-perceived smile esthetics-related quality of life were collected from all participants via questionnaires. Data were analyzed by mixed-model analysis and adjusted for possible confounders. RESULTS Patients from the malocclusion and retention groups smiled significantly less than participants from the control group, with the duration of smiles and smiling time being around half those of control subjects. Smile genuineness, smile intensity, and teeth shown did not differ across groups. Personality traits did not differ significantly among the 3 groups, whereas the malocclusion group scored around 30% less for dental self-confidence than the other 2 groups. CONCLUSIONS Patients with severe malocclusion tend to smile less, but the features of their smiles are similar to those without malocclusion. A lower propensity to smile in patients with a corrected malocclusion may persist after orthodontic treatment.
Affiliation(s)
- Hisham Mohammed
- Discipline of Orthodontics, Faculty of Dentistry, University of Otago, Dunedin, New Zealand
- Reginald Kumar
- Discipline of Orthodontics, Faculty of Dentistry, University of Otago, Dunedin, New Zealand
- Hamza Bennani
- School of Information Technology, Otago Polytechnic, Dunedin, New Zealand
- John Perry
- Hospital Dental Service, Christchurch Hospital, Christchurch, New Zealand
- Jamin B Halberstadt
- Department head, Faculty of Psychology, University of Otago, Dunedin, New Zealand
- Mauro Farella
- Discipline of Orthodontics, Faculty of Dentistry, University of Otago, Dunedin, New Zealand; Discipline of Orthodontics, Department of Surgical Sciences, University of Cagliari, Cagliari, Italy.
24
Büdenbender B, Höfling TTA, Gerdes ABM, Alpers GW. Training machine learning algorithms for automatic facial coding: The role of emotional facial expressions' prototypicality. PLoS One 2023; 18:e0281309. [PMID: 36763694 PMCID: PMC9916590 DOI: 10.1371/journal.pone.0281309] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/17/2022] [Accepted: 01/20/2023] [Indexed: 02/12/2023] Open
Abstract
Automatic facial coding (AFC) is a promising new research tool to efficiently analyze emotional facial expressions. AFC is based on machine learning procedures that infer emotion categories from facial movements (i.e., Action Units). State-of-the-art AFC accurately classifies intense and prototypical facial expressions, whereas it is less accurate for non-prototypical and less intense ones. A potential reason is that AFC is typically trained on standardized, prototypical facial expression inventories. Because AFC would also be useful for analyzing less prototypical research material, we set out to determine the role of prototypicality in the training material. We trained established machine learning algorithms either with standardized expressions from widely used research inventories or with unstandardized emotional facial expressions obtained in a typical laboratory setting, and tested them on identical or cross-over material. All machine learning models' accuracies were comparable when trained and tested on held-out data from the same dataset (83.4% to 92.5%). Strikingly, we found a substantial drop in accuracy for models trained on the highly prototypical standardized dataset when tested on the unstandardized dataset (52.8% to 69.8%). However, when they were trained with unstandardized expressions and tested on the standardized datasets, accuracies held up (82.7% to 92.5%). These findings demonstrate a strong impact of the training material's prototypicality on AFC's ability to classify emotional faces. Because AFC would be useful for analyzing emotional facial expressions in research and even in naturalistic scenarios, future developments should include more naturalistic facial expressions in training. This approach will improve the generalizability of AFC to more naturalistic facial expressions and increase the robustness of future applications of this promising technology.
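The cross-dataset design at the heart of this study (train on one inventory, test on another) can be sketched in a few lines. This is an illustrative reconstruction with synthetic feature vectors and a deliberately simple nearest-centroid classifier, not the authors' models or data; the "prototypicality" of each simulated dataset is controlled only by its noise spread.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_dataset(centers, spread, n_per_class):
    """Simulate AU-style feature vectors; a larger spread mimics less prototypical data."""
    X, y = [], []
    for label, center in enumerate(centers):
        X.append(center + rng.normal(0.0, spread, size=(n_per_class, center.size)))
        y += [label] * n_per_class
    return np.vstack(X), np.array(y)

def fit_centroids(X, y):
    # Nearest-centroid "training": one mean vector per emotion class.
    return np.stack([X[y == k].mean(axis=0) for k in np.unique(y)])

def predict(centroids, X):
    dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
    return dists.argmin(axis=1)

centers = rng.normal(size=(4, 10))               # 4 emotion classes, 10 features
X_std, y_std = make_dataset(centers, 0.3, 50)    # prototypical ("standardized")
X_nat, y_nat = make_dataset(centers, 1.2, 50)    # less prototypical ("unstandardized")

model = fit_centroids(X_std, y_std)
acc_within = (predict(model, X_std) == y_std).mean()   # same-dataset accuracy
acc_cross = (predict(model, X_nat) == y_nat).mean()    # cross-dataset accuracy
print(f"within-dataset: {acc_within:.2f}, cross-dataset: {acc_cross:.2f}")
```

Even this toy setup reproduces the qualitative pattern reported above: accuracy on material like the training inventory exceeds accuracy on noisier, less prototypical material.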
Affiliation(s)
- Björn Büdenbender
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Tim T. A. Höfling
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Antje B. M. Gerdes
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Georg W. Alpers
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
25
Bailey G, Halamová J, Vráblová V. Clients' Facial Expressions of Self-Compassion, Self-Criticism, and Self-Protection in Emotion-Focused Therapy Videos. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2023; 20:1129. [PMID: 36673885 PMCID: PMC9859613 DOI: 10.3390/ijerph20021129] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/19/2022] [Accepted: 12/31/2022] [Indexed: 06/17/2023]
Abstract
Clients' facial expressions allow psychotherapists to gather more information about clients' emotional processing. This study examines the facial Action Units (AUs) associated with self-compassion, self-criticism, and self-protection within real Emotion-Focused Therapy (EFT) sessions. For this purpose, we used the facial analysis software iMotions. Twelve video sessions were selected for analysis based on specific criteria. For self-compassion, the following AUs were significant: AU4 (brow furrow), AU15 (lip corner depressor), and AU12_smile (lip corner puller). For self-criticism, iMotions identified AU2 (outer brow raise), AU1 (inner brow raise), AU7 (lid tighten), AU12_smirk (unilateral lip corner puller), and AU43 (eye closure). Self-protection was characterized by the combined occurrence of AU1, AU4, and AU12_smirk. Moreover, the findings support the significance of discerning self-compassion and self-protection as two different concepts.
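The AU-combination logic reported for self-protection can be illustrated with a small, hypothetical frame-screening sketch; the intensity values and the 0.5 threshold are invented for illustration and are not iMotions output.

```python
# Hypothetical screening of per-frame AU intensity estimates for the
# co-occurrence pattern described above (AU1 + AU4 + unilateral AU12 "smirk").
SELF_PROTECTION_AUS = {"AU1", "AU4", "AU12_smirk"}

def active_aus(frame, threshold=0.5):
    """Return the set of AUs whose estimated intensity meets the threshold."""
    return {au for au, intensity in frame.items() if intensity >= threshold}

def is_self_protection(frame, threshold=0.5):
    # The pattern fires only when all three AUs co-occur in the frame.
    return SELF_PROTECTION_AUS <= active_aus(frame, threshold)

frames = [
    {"AU1": 0.8, "AU4": 0.7, "AU12_smirk": 0.6, "AU15": 0.1},  # all three active
    {"AU1": 0.9, "AU4": 0.2, "AU12_smirk": 0.7},               # AU4 below threshold
]
flags = [is_self_protection(f) for f in frames]
print(flags)  # → [True, False]
```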
Affiliation(s)
- Júlia Halamová
- Institute of Applied Psychology, Faculty of Social and Economic Sciences, Comenius University in Bratislava, Mlynské luhy 4, 821 05 Bratislava, Slovakia
26
Höfling TTA, Alpers GW. Automatic facial coding predicts self-report of emotion, advertisement and brand effects elicited by video commercials. Front Neurosci 2023; 17:1125983. [PMID: 37205049 PMCID: PMC10185761 DOI: 10.3389/fnins.2023.1125983] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2022] [Accepted: 02/10/2023] [Indexed: 05/21/2023] Open
Abstract
Introduction: Consumers' emotional responses are the prime target of marketing commercials. Facial expressions provide information about a person's emotional state, and technological advances have enabled machines to decode them automatically.
Method: Using automatic facial coding, we investigated the relationships between facial movements (i.e., action unit activity) and self-reported emotion as well as advertisement and brand effects elicited by commercials. To this end, we recorded and analyzed the facial responses of 219 participants while they watched a broad array of video commercials.
Results: Facial expressions significantly predicted self-reported emotion as well as advertisement and brand effects. Interestingly, facial expressions had incremental value beyond self-reported emotion in the prediction of advertisement and brand effects. Hence, automatic facial coding appears useful as a non-verbal quantification of advertisement effects beyond self-report.
Discussion: This is the first study to measure a broad spectrum of automatically scored facial responses to video commercials. Automatic facial coding is a promising non-invasive and non-verbal method to measure emotional responses in marketing.
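The core analysis idea (predicting self-report ratings from action unit activity) can be sketched as an ordinary least-squares regression; all data below are simulated stand-ins, not the study's recordings, and the number of AU features is an arbitrary choice for the sketch.

```python
import numpy as np

rng = np.random.default_rng(1)
n, n_aus = 219, 6                      # 219 viewers, 6 AU features (illustrative)
au_activity = rng.random((n, n_aus))
true_w = np.array([1.5, -0.8, 0.0, 0.4, 0.0, 2.0])   # hidden "effect" of each AU
rating = au_activity @ true_w + rng.normal(0.0, 0.1, n)

X = np.column_stack([np.ones(n), au_activity])       # design matrix with intercept
coef, *_ = np.linalg.lstsq(X, rating, rcond=None)    # ordinary least squares fit
pred = X @ coef
r2 = 1 - ((rating - pred) ** 2).sum() / ((rating - rating.mean()) ** 2).sum()
print(f"R^2 = {r2:.3f}")
```

The incremental-validity question from the abstract would then amount to comparing such a model's fit with and without the AU predictors alongside self-report.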
27
Explainable automated recognition of emotional states from canine facial expressions: the case of positive anticipation and frustration. Sci Rep 2022; 12:22611. [PMID: 36585439 PMCID: PMC9803655 DOI: 10.1038/s41598-022-27079-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/27/2022] [Accepted: 12/26/2022] [Indexed: 12/31/2022] Open
Abstract
In animal research, automation of affective state recognition has so far mainly addressed pain, and only in a few species. Emotional states remain uncharted territory, especially in dogs, whose facial morphology and expressions are complex. This study helps to fill this gap in two respects. First, it is the first to address dog emotional states using a dataset obtained in a controlled experimental setting, including videos from Labrador Retrievers (n = 29) assumed to be in two experimentally induced emotional states: negative (frustration) and positive (anticipation). The dogs' facial expressions were measured using the Dog Facial Action Coding System (DogFACS). Two approaches are compared: (1) a DogFACS-based approach with a two-step pipeline consisting of (i) a DogFACS variable detector and (ii) a positive/negative state decision-tree classifier; (2) an approach using deep learning techniques with no intermediate representation. The approaches reach accuracies above 71% and 89%, respectively, with the deep learning approach performing better. Second, this study is also the first to examine the explainability of AI models in the context of emotion in animals. The DogFACS-based approach provides decision trees, a mathematical representation that reflects previous findings by human experts relating certain facial expressions (DogFACS variables) to specific emotional states. The deep learning approach offers a different, visual form of explainability: heatmaps of the regions on which the network's attention focuses, which in some cases clearly relate to particular DogFACS variables. These heatmaps may hold the key to novel insights into the network's sensitivity to nuanced pixel patterns carrying information invisible to the human eye.
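A minimal sketch of the two-step DogFACS-based pipeline, with made-up detector rules and a hand-written depth-2 decision tree standing in for the learned components; the variable names, thresholds, and splits are hypothetical, not the study's actual findings.

```python
def detect_dogfacs(clip):
    """Step (i): stand-in detector mapping clip measurements to binary variables."""
    return {
        "ears_forward": clip["ear_angle"] > 30,
        "mouth_open": clip["mouth_gap"] > 5,
    }

def classify_state(facs):
    """Step (ii): a depth-2 decision tree over the binary DogFACS-style variables."""
    if facs["ears_forward"]:
        return "positive" if facs["mouth_open"] else "negative"
    return "negative"

clips = [
    {"ear_angle": 45, "mouth_gap": 8},  # ears forward and mouth open
    {"ear_angle": 10, "mouth_gap": 8},  # ears not forward
]
states = [classify_state(detect_dogfacs(c)) for c in clips]
print(states)  # → ['positive', 'negative']
```

The intermediate FACS representation is what makes the first approach inspectable: each split can be read off and checked against expert knowledge, unlike the end-to-end deep model.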
28
Bourret M, Ratelle CF, Plamondon A, Châteauvert GB. Dynamics of parent-adolescent interactions during a discussion on career choice: The role of parental behaviors and emotions. JOURNAL OF VOCATIONAL BEHAVIOR 2022. [DOI: 10.1016/j.jvb.2022.103837] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/25/2022]
29
Fu G, Zhou X, Wu SJ, Nikoo H, Panesar D, Zheng PP, Oatley K, Lee K. Discrete emotions discovered by contactless measurement of facial blood flows. Cogn Emot 2022; 36:1429-1439. [PMID: 36121056 DOI: 10.1080/02699931.2022.2124960] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
Abstract
Experiential and behavioural aspects of emotions can be measured readily, but developing a contactless measure of emotions' physiological aspects has been a major challenge. We hypothesised that different emotion-evoking films can produce distinctive facial blood flow patterns that serve as physiological signatures of discrete emotions. To test this hypothesis, we created a new Transdermal Optical Imaging system that uses a conventional video camera to capture facial blood flows in a contactless manner. Using this system and deep machine learning, we analysed videos of the faces of people as they viewed film clips that elicited joy, sadness, disgust, fear or a neutral state. We found that each of these elicited a distinct blood flow pattern in the facial epidermis, and that Transdermal Optical Imaging is an effective, contactless and inexpensive tool to reveal the physiological correlates of discrete emotions.
Affiliation(s)
- Genyue Fu
- Department of Psychology, Hangzhou Normal University, Hangzhou, China
- Xinyue Zhou
- School of Management, Zhejiang University, Hangzhou, People's Republic of China
- Si Jia Wu
- Department of Psychology, Hangzhou Normal University, Hangzhou, China
- Department of Applied Psychology and Human Development, University of Toronto, Toronto, Canada
- Hassan Nikoo
- Department of Applied Psychology and Human Development, University of Toronto, Toronto, Canada
- Darshan Panesar
- Department of Applied Psychology and Human Development, University of Toronto, Toronto, Canada
- Paul Pu Zheng
- Department of Applied Psychology and Human Development, University of Toronto, Toronto, Canada
- Keith Oatley
- Department of Applied Psychology and Human Development, University of Toronto, Toronto, Canada
- Kang Lee
- Department of Applied Psychology and Human Development, University of Toronto, Toronto, Canada
30
Do Deepfakes Adequately Display Emotions? A Study on Deepfake Facial Emotion Expression. COMPUTATIONAL INTELLIGENCE AND NEUROSCIENCE 2022; 2022:1332122. [PMID: 36304741 PMCID: PMC9596270 DOI: 10.1155/2022/1332122] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 06/10/2022] [Accepted: 10/08/2022] [Indexed: 11/18/2022]
Abstract
Recent technological advances in Artificial Intelligence make it easy to create deepfakes: hyper-realistic videos in which images and video clips are processed to create fake footage that appears authentic. Many are based on swapping faces without the consent of the person whose appearance and voice are used. As emotions are inherent in human communication, it is relevant to study how deepfakes transfer emotional expressions from originals to fakes. In this work, we conduct an in-depth study of facial emotional expression in deepfakes using a well-known face-swap-based deepfake database. First, we extracted the frames from its videos. Then, we analyzed the emotional expression in the original and faked versions of the recordings for all performers in the database. Results show that emotional expressions are not adequately transferred between original recordings and the deepfakes created from them. The high variability in emotions and performers detected between original and fake recordings indicates that performer expressiveness should be considered for better deepfake generation or detection.
31
Quinde-Zlibut J, Munshi A, Biswas G, Cascio CJ. Identifying and describing subtypes of spontaneous empathic facial expression production in autistic adults. J Neurodev Disord 2022; 14:43. [PMID: 35915404 PMCID: PMC9342940 DOI: 10.1186/s11689-022-09451-z] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/01/2021] [Accepted: 07/08/2022] [Indexed: 11/17/2022] Open
Abstract
BACKGROUND: It is unclear whether atypical patterns of facial expression production metrics in autism reflect the dynamic and nuanced nature of facial expressions across people or a true diagnostic difference. Furthermore, the heterogeneity observed across autism symptomatology suggests a need for more adaptive and personalized social skills programs. Toward this goal, it would be useful to have a more concrete and empirical understanding of the different expressiveness profiles within the autistic population and how they differ from neurotypicals.
METHODS: We used automated facial coding and an unsupervised clustering approach to limit the inter-individual variability in facial expression production that may have obscured group differences in previous studies, allowing an "apples-to-apples" comparison between autistic and neurotypical adults. Specifically, we applied k-means clustering to identify subtypes of facial expressiveness in an autism group (N = 27) and a neurotypical control group (N = 57) separately. The two most stable clusters from these analyses were then further characterized and compared based on their expressiveness and emotive congruence to emotionally charged stimuli.
RESULTS: Our main finding was that a subset of autistic adults in our sample showed heightened spontaneous facial expressions irrespective of image valence. We did not find evidence for more incongruous (i.e., inappropriate) facial expressions in autism. Finally, we found a negative trend between expressiveness and emotion recognition within the autism group.
CONCLUSION: Together with our previous findings on self-reported empathy, the current expressivity results point to a higher degree of facial expression recruited for emotional resonance in autism that may not always be adaptive (e.g., experiencing similar emotional resonance regardless of valence). These findings also build on previous work indicating that facial expression intensity is not diminished in autism and suggest that intervention programs should focus on emotion recognition and social skills in the context of both negative and positive emotions.
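The clustering step can be illustrated with a tiny Lloyd's-algorithm k-means on simulated expressiveness features; the data, dimensionality, and deterministic initialization are choices made for this sketch, not the study's.

```python
import numpy as np

def kmeans(X, init_centroids, n_iter=20):
    """Tiny Lloyd's algorithm; assumes no cluster ever becomes empty."""
    centroids = np.asarray(init_centroids, dtype=float).copy()
    for _ in range(n_iter):
        dists = np.linalg.norm(X[:, None, :] - centroids[None, :, :], axis=2)
        labels = dists.argmin(axis=1)
        centroids = np.stack([X[labels == j].mean(axis=0)
                              for j in range(len(centroids))])
    return labels, centroids

rng = np.random.default_rng(42)
low = rng.normal(0.2, 0.05, size=(30, 2))   # simulated low-expressiveness profiles
high = rng.normal(0.8, 0.05, size=(30, 2))  # simulated heightened-expressiveness profiles
X = np.vstack([low, high])

# Deterministic initialization: one seed point drawn from each end of the data.
labels, centroids = kmeans(X, X[[0, -1]])
print("cluster sizes:", np.bincount(labels))  # → cluster sizes: [30 30]
```

In practice one would run k-means over a range of k with multiple initializations and keep the most stable solutions, as the abstract describes.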
Affiliation(s)
- Jennifer Quinde-Zlibut
- Graduate Program in Neuroscience, Vanderbilt University, Nashville, USA.
- Frist Center for Autism and Innovation, Vanderbilt University, Nashville, USA.
- Anabil Munshi
- Institute for Software Integrated Systems, Vanderbilt University, Nashville, USA
- Gautam Biswas
- Institute for Software Integrated Systems, Vanderbilt University, Nashville, USA
- Carissa J Cascio
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, USA
32
Test–Retest Reliability in Automated Emotional Facial Expression Analysis: Exploring FaceReader 8.0 on Data from Typically Developing Children and Children with Autism. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12157759] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
Automated emotional facial expression analysis (AEFEA) is used widely in applied research, including the development of screening/diagnostic systems for atypical human neurodevelopmental conditions. The validity of AEFEA systems has been systematically studied, but their test–retest reliability has not been researched thus far. We explored the test–retest reliability of a specific AEFEA software, Noldus FaceReader 8.0 (FR8; by Noldus Information Technology). We collected repeated intensity estimates for eight emotions through FR8 from facial video recordings of 60 children: 31 typically developing children and 29 children with autism spectrum disorder. Test–retest reliability was imperfect in 20% of cases, affecting a substantial proportion of data points; the test–retest differences, however, were small. This shows that the test–retest reliability of FR8 is high but not perfect. A proportion of the cases which initially failed to show perfect test–retest reliability reached it in a subsequent analysis by FR8, suggesting that repeated analyses by FR8 can, in some cases, lead to the "stabilization" of emotion intensity datasets. In an ANOVA, the test–retest differences did not influence the pattern of cross-emotion and cross-group effects and interactions. Our study does not question the validity of previous results gained with AEFEA technology, but it shows that further exploration of the test–retest reliability of AEFEA systems is desirable.
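A test-retest check of this kind can be sketched by comparing two analysis runs of the same recordings; the two "runs" below are simulated arrays rather than FaceReader exports, with 20% of cases perturbed to mimic small, imperfect reliability.

```python
import numpy as np

rng = np.random.default_rng(7)
run1 = rng.random((60, 8))                         # 60 recordings x 8 emotion intensities
run2 = run1.copy()
unstable = rng.choice(60, size=12, replace=False)  # 20% of cases drift slightly on re-analysis
run2[unstable] += rng.normal(0.0, 0.01, size=(12, 8))

diff = np.abs(run1 - run2).max(axis=1)             # worst per-recording discrepancy
imperfect = diff > 1e-9
print(f"{imperfect.mean():.0%} of cases not perfectly reproduced, "
      f"largest difference {diff.max():.4f}")
```

The same comparison on real exports would show whether discrepancies are frequent but negligibly small, as the study reports, or large enough to change downstream statistics.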
33
Investigating the Relationship between Facial Mimicry and Empathy. Behav Sci (Basel) 2022; 12:bs12080250. [PMID: 35892350 PMCID: PMC9330546 DOI: 10.3390/bs12080250] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2022] [Revised: 07/15/2022] [Accepted: 07/20/2022] [Indexed: 12/30/2022] Open
Abstract
Facial expressions play a key role in interpersonal communication when it comes to negotiating our emotions and intentions, as well as interpreting those of others. Research has shown that we connect to other people better when we exhibit signs of empathy and facial mimicry. However, the relationship between empathy and facial mimicry is still debated. Among the factors contributing to the differing results across existing studies are the use of different instruments for measuring both empathy and facial mimicry, as well as the frequent neglect of differences across demographic groups. This study first examines differences in empathetic ability across demographic groups based on gender, ethnicity, and age. Empathetic ability is measured with the Empathy Quotient, capturing a balanced representation of both emotional and cognitive empathy. Using statistical and machine learning methods, the study then investigates the correlation between empathetic ability and the facial mimicry of subjects in response to images portraying different emotions displayed on a computer screen. Unlike existing studies that measure facial mimicry using electromyography, this study employs a technology that detects facial expressions from video capture using deep learning, a choice made in the context of increased online communication during and after the COVID-19 pandemic. The results confirm the previously reported difference in empathetic ability between females and males. However, no significant difference in empathetic ability was found across age or ethnic groups, and no strong correlation was found between empathy and facial reactions to faces portraying different emotions shown on a computer screen. Overall, the results can inform the design of online communication technologies and of tools for training empathy in team leaders, educators, and social and healthcare providers.
34
Bogdanova OV, Bogdanov VB, Pizano A, Bouvard M, Cazalets JR, Mellen N, Amestoy A. The Current View on the Paradox of Pain in Autism Spectrum Disorders. Front Psychiatry 2022; 13:910824. [PMID: 35935443 PMCID: PMC9352888 DOI: 10.3389/fpsyt.2022.910824] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/01/2022] [Accepted: 06/17/2022] [Indexed: 01/18/2023] Open
Abstract
Autism spectrum disorder (ASD) is a neurodevelopmental disorder that affects 1 in 44 children and may cause severe disabilities. Besides socio-communicational difficulties and repetitive behaviors, ASD also presents with atypical sensorimotor function and pain reactivity. While chronic pain is a frequent co-morbidity in autism, pain management in this population is often insufficient because of difficulties in pain evaluation, worsening prognosis and perhaps driving higher mortality rates. Previous observations have tended to oversimplify the experience of pain in autism as insensitivity to painful stimuli, but various findings over the past 15 years have challenged and complicated this dogma. Still, relatively few studies have investigated the physiological correlates of pain reactivity in ASD. We explore the possibility that atypical pain experience in people with ASD is mediated by alterations in pain perception, transmission, expression, and modulation, and by interactions between these processes. These complex interactions may account for the great variability, and sometimes contradictory findings, across studies. A growing body of evidence challenges the idea that altered pain processing in ASD is due to a single factor and calls for an integrative view. We propose a model of the pain cycle that includes the interplay between the molecular and neurophysiological pathways of pain processing and its conscious appraisal, which may interfere with pain reactivity and coping in autism. The role of social factors in pain-induced responses is also discussed. Pain assessment in clinical care is mostly based on subjective rather than objective measures. This review clarifies the strong need for a consistent methodology and describes innovative tools to cope with the heterogeneity of pain expression in ASD, enabling individualized assessment. Multiple measures, including self-report, informant report, clinician assessment, and purely physiological metrics, may provide more consistent results. An integrative view of the regulation of the pain cycle offers a more robust framework for characterizing the experience of pain in autism.
Affiliation(s)
- Olena V. Bogdanova
- CNRS, Aquitaine Institute for Cognitive and Integrative Neuroscience, INCIA, UMR 5287, Université de Bordeaux, Bordeaux, France
- Volodymyr B. Bogdanov
- Laboratoire EA 4136 – Handicap Activité Cognition Santé HACS, Collège Science de la Sante, Institut Universitaire des Sciences de la Réadaptation, Université de Bordeaux, Bordeaux, France
- Adrien Pizano
- CNRS, Aquitaine Institute for Cognitive and Integrative Neuroscience, INCIA, UMR 5287, Université de Bordeaux, Bordeaux, France
- Centre Hospitalier Charles-Perrens, Pôle Universitaire de Psychiatrie de l’Enfant et de l’Adolescent, Bordeaux, France
- Manuel Bouvard
- CNRS, Aquitaine Institute for Cognitive and Integrative Neuroscience, INCIA, UMR 5287, Université de Bordeaux, Bordeaux, France
- Centre Hospitalier Charles-Perrens, Pôle Universitaire de Psychiatrie de l’Enfant et de l’Adolescent, Bordeaux, France
- Jean-Rene Cazalets
- CNRS, Aquitaine Institute for Cognitive and Integrative Neuroscience, INCIA, UMR 5287, Université de Bordeaux, Bordeaux, France
- Nicholas Mellen
- Department of Neurology, University of Louisville, Louisville, KY, United States
- Anouck Amestoy
- CNRS, Aquitaine Institute for Cognitive and Integrative Neuroscience, INCIA, UMR 5287, Université de Bordeaux, Bordeaux, France
- Centre Hospitalier Charles-Perrens, Pôle Universitaire de Psychiatrie de l’Enfant et de l’Adolescent, Bordeaux, France
35
Galler M, Grendstad ÅR, Ares G, Varela P. Capturing food-elicited emotions: Facial decoding of children’s implicit and explicit responses to tasted samples. Food Qual Prefer 2022. [DOI: 10.1016/j.foodqual.2022.104551] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
36
Guo Y, Huang J, Xiong M, Wang Z, Hu X, Wang J, Hijji M. Facial expressions recognition with multi-region divided attention networks for smart education cloud applications. Neurocomputing 2022. [DOI: 10.1016/j.neucom.2022.04.052] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
37
Feighelstein M, Shimshoni I, Finka LR, Luna SPL, Mills DS, Zamansky A. Automated recognition of pain in cats. Sci Rep 2022; 12:9575. [PMID: 35688852 PMCID: PMC9187730 DOI: 10.1038/s41598-022-13348-1] [Citation(s) in RCA: 11] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2022] [Accepted: 05/23/2022] [Indexed: 11/09/2022] Open
Abstract
Facial expressions in non-human animals are closely linked to their internal affective states, with the majority of empirical work focusing on facial shape changes associated with pain. However, existing tools for facial expression analysis are prone to human subjectivity and bias, and in many cases also require special expertise and training. This paper presents the first comparative study of two different paths toward automating pain recognition in facial images of domestic short-haired cats (n = 29), captured during ovariohysterectomy at time points corresponding to varying intensities of pain. One approach is based on convolutional neural networks (ResNet50); the other on machine learning models over geometric landmark features inspired by species-specific Facial Action Coding Systems (i.e., CatFACS). Both types of approach reach comparable accuracies above 72%, indicating their potential usefulness as a basis for automating cat pain detection from images.
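The geometric-landmark approach can be illustrated by turning landmark coordinates into scale-normalized pairwise distances, a common FACS-inspired feature recipe; the landmark layout below is invented, not the study's cat-specific landmark scheme.

```python
from itertools import combinations
import numpy as np

def landmark_features(landmarks):
    """Pairwise distances between landmarks, normalized by the first pair's distance."""
    pts = np.asarray(landmarks, dtype=float)
    scale = np.linalg.norm(pts[0] - pts[1])   # assume landmarks 0 and 1 are the eyes
    return np.array([np.linalg.norm(pts[i] - pts[j])
                     for i, j in combinations(range(len(pts)), 2)]) / scale

face = [(0, 0), (4, 0), (2, -3), (1, -5), (3, -5)]  # eyes, nose, mouth corners (invented)
feats = landmark_features(face)
print(len(feats), "features")  # 5 landmarks yield 10 pairwise distances
```

Feature vectors of this kind, computed per image, are what a downstream classifier would be trained on in the landmark-based path; the CNN path skips this hand-crafted step entirely.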
Affiliation(s)
- Ilan Shimshoni
- Information Systems Department, University of Haifa, Haifa, Israel
- Lauren R Finka
- School of Veterinary Medicine and Science, The University of Nottingham, Nottingham, UK
- Stelio P L Luna
- Department of Veterinary Surgery and Animal Reproduction, School of Veterinary Medicine and Animal Science, São Paulo State University (Unesp), Botucatu, São Paulo, Brazil
- Daniel S Mills
- School of Life Sciences, Joseph Bank Laboratories, University of Lincoln, Lincoln, UK
- Anna Zamansky
- Information Systems Department, University of Haifa, Haifa, Israel.
38
Rodríguez-Fuertes A, Alard-Josemaría J, Sandubete JE. Measuring the Candidates' Emotions in Political Debates Based on Facial Expression Recognition Techniques. Front Psychol 2022; 13:785453. [PMID: 35615169 PMCID: PMC9126085 DOI: 10.3389/fpsyg.2022.785453] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Accepted: 04/14/2022] [Indexed: 11/13/2022] Open
Abstract
This article presents an analysis of the leading Spanish political candidates in the elections held in April 2019. The analysis focuses on Facial Expression Analysis (FEA), a technique widely used in neuromarketing research. It allows the identification of micro-expressions: very brief, involuntary signals of hidden emotions that cannot be controlled voluntarily. The video with the final interventions of every candidate was post-processed using the classification algorithms provided by iMotions' AFFDEX platform, and we then analyzed these data. First, we identified and compared the basic emotions shown by each politician. Second, we associated the basic emotions with specific moments of each candidate's speech, identifying the topics addressed and relating them directly to the expressed emotion. Third, we analyzed whether the differences shown by each candidate in every emotion are statistically significant, applying the non-parametric chi-squared goodness-of-fit test; we also used ANOVA to test whether, on average, there are differences between the candidates. Finally, we checked whether the evaluations of the debate in surveys from the main Spanish media are consistent with the results of our empirical analysis. A predominance of negative emotions was observed, and some inconsistencies were found between the emotion expressed in the face and the verbal content of the message. The statistical evidence confirms that the differences observed between candidates in the basic emotions are, on average, statistically significant. In this sense, the article provides a methodological contribution to the analysis of public figures' communication, which could help politicians improve the effectiveness of their messages by identifying and evaluating the intensity of the expressed emotions.
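The chi-squared goodness-of-fit step can be sketched directly from its definition; the emotion counts below are invented for illustration.

```python
import numpy as np

observed = np.array([120, 30, 25, 15, 10])  # invented counts per basic emotion for one candidate
expected = np.full(5, observed.sum() / 5)   # uniform null hypothesis: all emotions equally frequent

chi2 = ((observed - expected) ** 2 / expected).sum()
df = len(observed) - 1
print(f"chi2 = {chi2:.2f} on {df} df")      # compare with the critical value 9.49 (alpha = .05, 4 df)
```

A statistic far above the critical value, as here, would lead to rejecting the hypothesis that the candidate displays all basic emotions with equal frequency.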
Affiliation(s)
- Julio E Sandubete
- Complutense University of Madrid, Madrid, Spain
- CEU San Pablo University, Madrid, Spain
39
Schmitz-Hübsch A, Stasch SM, Becker R, Fuchs S, Wirzberger M. Affective Response Categories—Toward Personalized Reactions in Affect-Adaptive Tutoring Systems. Front Artif Intell 2022; 5:873056. [PMID: 35656095 PMCID: PMC9152461 DOI: 10.3389/frai.2022.873056] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2022] [Accepted: 03/31/2022] [Indexed: 11/13/2022] Open
Abstract
Affect-adaptive tutoring systems detect the current emotional state of the learner and are capable of responding adequately by adapting the learning experience. Adaptations could be employed to manipulate the emotional state in a direction favorable to the learning process; for example, contextual help can be offered to mitigate frustration, or lesson plans can be accelerated to avoid boredom. Safety-critical situations, in which wrong decisions and behaviors can have fatal consequences, may particularly benefit from affect-adaptive tutoring systems, because accounting for affective responses during training may help develop coping strategies and improve resilience. Effective adaptation, however, can only be accomplished when it is known which emotions benefit high learning performance in such systems. The results of preliminary studies indicate interindividual differences in the relationship between emotion and performance that require consideration by an affect-adaptive system. To that end, this article introduces the concept of Affective Response Categories (ARCs) that can be used to categorize learners based on their emotion-performance relationship. In an experimental study, N = 50 subjects (33% female, 19–57 years, M = 32.75, SD = 9.8) performed a simulated airspace surveillance task. Emotional valence was detected using facial expression analysis, and pupil diameters were used to indicate emotional arousal. A cluster analysis was performed to group subjects into ARCs based on their individual correlations of valence and performance as well as arousal and performance. Three different clusters were identified, one of which showed no correlations between emotion and performance. The performance of subjects in the other two clusters benefitted from negative arousal; the clusters differed only in the valence-performance correlation, which was positive in one and negative in the other. Based on the identified clusters, the initial ARC model was revised.
We then discuss the resulting model, outline future research, and derive implications for the larger context of the field of adaptive tutoring systems. Furthermore, potential benefits of the proposed concept are discussed and ethical issues are identified and addressed.
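The clustering step described above can be sketched as follows. This is a minimal illustration on synthetic per-subject correlations, not the study's data; the group means, sizes, and the choice of k-means are assumptions for the sketch.

```python
# Sketch of grouping learners into Affective Response Categories: each
# subject is represented by two per-subject correlations
# [valence-performance r, arousal-performance r], then clustered (k = 3).
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(42)
# Three synthetic groups echoing the reported pattern: no correlation,
# positive valence with negative arousal, negative valence with negative arousal.
group_a = rng.normal([0.0, 0.0], 0.08, (17, 2))
group_b = rng.normal([0.5, -0.5], 0.08, (17, 2))
group_c = rng.normal([-0.5, -0.5], 0.08, (16, 2))
X = np.vstack([group_a, group_b, group_c])  # 50 subjects x 2 correlations

km = KMeans(n_clusters=3, n_init=10, random_state=0).fit(X)
print("cluster sizes:", np.bincount(km.labels_))
print("cluster centres:\n", km.cluster_centers_.round(2))
```

The recovered cluster centres approximate the three planted correlation profiles, which is the kind of structure an ARC model would be built on.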
Affiliation(s)
- Alina Schmitz-Hübsch
- Fraunhofer Institute for Communication, Information Processing and Ergonomics FKIE, Wachtberg, Germany
- *Correspondence: Alina Schmitz-Hübsch
- Ron Becker
- Fraunhofer Institute for Communication, Information Processing and Ergonomics FKIE, Wachtberg, Germany
- Sven Fuchs
- Fraunhofer Institute for Communication, Information Processing and Ergonomics FKIE, Wachtberg, Germany

40
Peng Y, Zhang H, Gao L, Wang X, Peng X. Palatability Assessment of Carbocysteine Oral Solution Strawberry Taste Versus Carbocysteine Oral Solution Mint Taste: A Blinded Randomized Study. Front Pharmacol 2022; 13:822086. [PMID: 35295331 PMCID: PMC8919395 DOI: 10.3389/fphar.2022.822086] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2021] [Accepted: 01/24/2022] [Indexed: 11/15/2022] Open
Abstract
Objective: To compare and evaluate the palatability of two carbocysteine oral solutions (strawberry vs. mint taste) among healthy children aged 2–12 years. Methods: A randomized, triple-blind, crossover palatability trial in 42 children aged 2–12 years. All subjects received the two preparations of carbocysteine oral solution (strawberry vs. mint) according to randomized administration sequences, and the administration process was recorded on video. Palatability was assessed through emotional valence computed by the facial action coding system in FaceReader™, which quantifies the degree of emotion: a positive value represents positive emotion, and a negative value represents negative emotion. In addition, a face-to-face interview was conducted with the 5- to 12-year-old participants. The taste preference rates were then compared to assess the palatability of the two carbocysteine oral solutions. Results: Forty-two children were enrolled in this study. Twenty children first tasted the mint-flavored carbocysteine oral solution and then the strawberry preparation (M-S sequence), while 22 children tasted the strawberry preparation first and then the mint one (S-M sequence). In both sequences, the emotional valence of the mint preparation (−0.9 in M-S and −1.2 in S-M) was lower than that of the strawberry preparation (−0.7 in both sequences); for 69.0% (29/42) of participants, emotional valence was higher for the strawberry preparation than for the mint preparation. Among the 27 participants aged ≥5 years, the taste preference rate was 88.5% (23/26) for the strawberry preparation (one missing value for taste preference), and 77.8% (21/27) of participants would choose the strawberry preparation if they had to take the medicine one more time. Conclusion: The carbocysteine oral solution with strawberry taste is the more appealing preparation since it was better received by children.
The facial action coding system could be an effective alternative for palatability assessment of pediatric pharmaceutical products.
Affiliation(s)
- Yaguang Peng
- Center for Clinical Epidemiology and Evidence-based Medicine, National Center for Children's Health, Beijing Children's Hospital, Capital Medical University, Beijing, China
- Huan Zhang
- Department of Pharmacy, National Center for Children's Health, Beijing Children's Hospital, Capital Medical University, Beijing, China
- Liucun Gao
- Department of Pharmacy, National Center for Children's Health, Beijing Children's Hospital, Capital Medical University, Beijing, China
- Xiaoling Wang
- Department of Pharmacy, National Center for Children's Health, Beijing Children's Hospital, Capital Medical University, Beijing, China
- Xiaoxia Peng
- Center for Clinical Epidemiology and Evidence-based Medicine, National Center for Children's Health, Beijing Children's Hospital, Capital Medical University, Beijing, China

41
Höfling TTA, Alpers GW, Büdenbender B, Föhl U, Gerdes ABM. What's in a face: Automatic facial coding of untrained study participants compared to standardized inventories. PLoS One 2022; 17:e0263863. [PMID: 35239654 PMCID: PMC8893617 DOI: 10.1371/journal.pone.0263863] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Accepted: 01/28/2022] [Indexed: 11/19/2022] Open
Abstract
Automatic facial coding (AFC) is a novel research tool to automatically analyze emotional facial expressions. AFC can classify emotional expressions with high accuracy in standardized picture inventories of intensively posed and prototypical expressions. However, classification of facial expressions of untrained study participants is more error-prone. This discrepancy calls for a direct comparison between these two sources of facial expressions. To this end, 70 untrained participants were asked to express joy, anger, surprise, sadness, disgust, and fear in a typical laboratory setting. Recorded videos were scored with a well-established AFC software (FaceReader, Noldus Information Technology) and compared with AFC measures of standardized pictures from 70 trained actors (i.e., standardized inventories). We report the probability estimates of specific emotion categories and, in addition, Action Unit (AU) profiles for each emotion. Based on this, we used a novel machine learning approach to determine the relevant AUs for each emotion, separately for both datasets. First, misclassification was more frequent for some emotions of untrained participants. Second, AU intensities were generally lower in pictures of untrained participants compared to standardized pictures for all emotions. Third, although the profiles of relevant AUs overlapped substantially across the two data sets, there were also notable differences. This research provides evidence that the application of AFC is not limited to standardized facial expression inventories but can also be used to code facial expressions of untrained participants in a typical laboratory setting.
Affiliation(s)
- T. Tim A. Höfling
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Georg W. Alpers
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Björn Büdenbender
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Ulrich Föhl
- Business School, Pforzheim University of Applied Sciences, Pforzheim, Germany
- Antje B. M. Gerdes
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany

42
Lebedeva S, Shved D, Savinkina A. Assessment of the Psychophysiological State of Female Operators Under Simulated Microgravity. Front Physiol 2022; 12:751016. [PMID: 35222056 PMCID: PMC8873526 DOI: 10.3389/fphys.2021.751016] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/31/2021] [Accepted: 12/29/2021] [Indexed: 11/13/2022] Open
Abstract
The article describes the methods of non-verbal speech characteristics analysis used to determine the psychophysiological state of female subjects under simulated microgravity conditions ("dry" immersion, DI), as well as the results of the study. A number of indicators of the acute period of adaptation to microgravity conditions were described. The acute adaptation period in female subjects began earlier (evening of the 1st day of DI) and ended sooner than in male subjects in previous studies (2nd day of DI). This was indicated by a decrease in the level of state anxiety (STAI, p < 0.05) and depression-dejection [Profile of Mood States (POMS), p < 0.05], as well as a decrease in pitch (p < 0.05) and voice intensity (p < 0.05). In addition, the women apparently used the "freeze" coping strategy: the proportion of neutral facial expressions was at its maximum on the most intense days of the experiment. The subjects in this experiment assessed their feelings and emotions better, giving more accurate answers in self-assessment questionnaires, but at the same time tried to look and sound as calm and confident as possible, controlling their expressions. The same trends in the subjects' cognitive performance were identified as under similar experimental conditions earlier: the subjects' psychophysiological excitement corresponded to better performance in sensorimotor tasks. The difference lay in the speed of mathematical computation: women in the present study performed the computations faster on the days when they made fewer pauses in speech, while for men in previous experiments this relationship was inverse.
Affiliation(s)
- Svetlana Lebedeva
- Russian Federation State Scientific Center, Institute of Biomedical Problems of the Russian Academy of Sciences, Moscow, Russia
- Dmitry Shved
- Russian Federation State Scientific Center, Institute of Biomedical Problems of the Russian Academy of Sciences, Moscow, Russia
- Moscow Aviation Institute, National Research University, Moscow, Russia
- Alexandra Savinkina
- Russian Federation State Scientific Center, Institute of Biomedical Problems of the Russian Academy of Sciences, Moscow, Russia

43
Lewandowska A, Rejer I, Bortko K, Jankowski J. Eye-Tracker Study of Influence of Affective Disruptive Content on User's Visual Attention and Emotional State. SENSORS 2022; 22:s22020547. [PMID: 35062508 PMCID: PMC8780667 DOI: 10.3390/s22020547] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 11/12/2021] [Revised: 12/24/2021] [Accepted: 01/08/2022] [Indexed: 12/19/2022]
Abstract
When reading interesting content or searching for information on a website, the appearance of a pop-up advertisement in the middle of the screen is perceived as irritating by the recipient. Interrupted cognitive processes are considered unwanted by the user but desired by advertising providers. Diverting visual attention away from the main content is intended to focus the user on the disruptive content that appears. Is the attempt to reach the user by any means justified? In this study, we examined the impact of pop-up emotional content on user reactions. For this purpose, a cognitive experiment was designed in which a text-reading task was interrupted by two types of affective pictures: positive and negative ones. To measure the changes in user reactions, an eye-tracker (for analysis of eye movements and changes in gaze points) and the iMotions platform (for analysis of facial muscle movements) were used. The results confirm the impact of the type of emotional content on users' reactions during cognitive process interruptions and indicate that the negative effect of such interruptions on the user can be reduced. The negative content evoked lower cognitive load, narrower visual attention, and lower irritation compared to positive content. These results offer insight into how to provide more efficient Internet advertising.
44
Franz M, Müller T, Hahn S, Lundqvist D, Rampoldt D, Westermann JF, Nordmann MA, Schäfer R. Creation and validation of the Picture-Set of Young Children's Affective Facial Expressions (PSYCAFE). PLoS One 2021; 16:e0260871. [PMID: 34874965 PMCID: PMC8651117 DOI: 10.1371/journal.pone.0260871] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2020] [Accepted: 11/18/2021] [Indexed: 11/23/2022] Open
Abstract
The immediate detection and correct processing of affective facial expressions are among the most important competences in social interaction and thus a main subject in emotion and affect research. Generally, studies in these research domains use pictures of adults who display affective facial expressions as experimental stimuli. However, studies investigating developmental psychology and attachment behaviour require age-matched stimuli in which children display the affective expressions. PSYCAFE is a newly developed picture set of children's faces. It includes reference portraits of girls and boys aged 4 to 6 years, averaged digitally from different individual pictures that were assigned by cluster analysis to six basic affects (fear, disgust, happiness, sadness, anger and surprise) plus a neutral facial expression. This procedure led to deindividualized, affect-prototypical portraits. Individual affect-expressive portraits of adults from an already validated picture set (KDEF) were used in a similar way to create affect-prototypical images of adults as well. The stimulus set has been validated on human observers and entails emotion recognition accuracy rates as well as intensity, authenticity and likeability ratings of the specific affect displayed. Moreover, the stimuli have also been characterized by the iMotions Facial Expression Analysis Module, providing additional data in the form of probability values representing the likelihood that the stimuli depict the expected affect. Finally, the validation data from human observers and iMotions are compared to data on facial mimicry of healthy adults in response to these portraits, measured by facial EMG (m. zygomaticus major and m. corrugator supercilii).
Affiliation(s)
- Matthias Franz
- Medical Faculty, Clinical Institute for Psychosomatic Medicine and Psychotherapy (15.16), University Hospital Düsseldorf, Düsseldorf, Germany
- * E-mail:
- Tobias Müller
- Medical Faculty, Clinical Institute for Psychosomatic Medicine and Psychotherapy (15.16), University Hospital Düsseldorf, Düsseldorf, Germany
- Sina Hahn
- Medical Faculty, Clinical Institute for Psychosomatic Medicine and Psychotherapy (15.16), University Hospital Düsseldorf, Düsseldorf, Germany
- Daniel Lundqvist
- Karolinska Institute, Department of Clinical Neuroscience, NatMEG, Solna, Sweden
- Dirk Rampoldt
- Medical Faculty, Clinical Institute for Psychosomatic Medicine and Psychotherapy (15.16), University Hospital Düsseldorf, Düsseldorf, Germany
- Jan-Frederik Westermann
- Medical Faculty, Clinical Institute for Psychosomatic Medicine and Psychotherapy (15.16), University Hospital Düsseldorf, Düsseldorf, Germany
- Marc A. Nordmann
- Medical Faculty, Clinical Institute for Psychosomatic Medicine and Psychotherapy (15.16), University Hospital Düsseldorf, Düsseldorf, Germany
- Ralf Schäfer
- Medical Faculty, Clinical Institute for Psychosomatic Medicine and Psychotherapy (15.16), University Hospital Düsseldorf, Düsseldorf, Germany

45
Supolkina N, Yusupova A, Shved D, Gushin V, Savinkina A, Lebedeva SA, Chekalina A, Kuznetsova P. External Communication of Autonomous Crews Under Simulation of Interplanetary Missions. Front Physiol 2021; 12:751170. [PMID: 34858207 PMCID: PMC8630617 DOI: 10.3389/fphys.2021.751170] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/02/2021] [Accepted: 09/30/2021] [Indexed: 11/13/2022] Open
Abstract
Two experiments, with 17-day and 120-day isolation, were carried out within the frame of the Scientific International Research in Unique Terrestrial Station (SIRIUS) international project at the Institute of Biomedical Problems (Moscow, Russia). Manifestations of the "detachment" phenomenon in crew - mission control center (MCC) communication, previously identified in the Mars-500 project, were confirmed in this study. As in the Mars-500 experiment, in SIRIUS-19 the landing simulation at the halfway point of isolation caused a temporary increase in crew communication with the MCC. We also revealed several differences in the communication styles of male and female crew members. By the end of the experiment, the communication styles of all the SIRIUS crew members converged, and crew cohesion increased.
Affiliation(s)
- Natalia Supolkina
- Russian Federation State Scientific Center, Institute of Biomedical Problems of the Russian Academy of Sciences, Moscow, Russia
- Anna Yusupova
- Russian Federation State Scientific Center, Institute of Biomedical Problems of the Russian Academy of Sciences, Moscow, Russia
- Dmitry Shved
- Russian Federation State Scientific Center, Institute of Biomedical Problems of the Russian Academy of Sciences, Moscow, Russia; Moscow Aviation Institute, National Research University, Moscow, Russia
- Vadim Gushin
- Russian Federation State Scientific Center, Institute of Biomedical Problems of the Russian Academy of Sciences, Moscow, Russia
- Alexandra Savinkina
- Russian Federation State Scientific Center, Institute of Biomedical Problems of the Russian Academy of Sciences, Moscow, Russia
- Svetlana A Lebedeva
- Russian Federation State Scientific Center, Institute of Biomedical Problems of the Russian Academy of Sciences, Moscow, Russia
- Angelina Chekalina
- Russian Federation State Scientific Center, Institute of Biomedical Problems of the Russian Academy of Sciences, Moscow, Russia
- Polina Kuznetsova
- Institute of Biomedical Problems, Russian Academy of Sciences (RAS), Moscow, Russia

46
Mauri M, Rancati G, Gaggioli A, Riva G. Applying Implicit Association Test Techniques and Facial Expression Analyses in the Comparative Evaluation of Website User Experience. Front Psychol 2021; 12:674159. [PMID: 34712164 PMCID: PMC8545899 DOI: 10.3389/fpsyg.2021.674159] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/28/2021] [Accepted: 09/07/2021] [Indexed: 11/25/2022] Open
Abstract
This research project aims to verify whether neuromarketing techniques, such as implicit association test (IAT) techniques and emotional facial expression analysis, may contribute to the assessment of user experience (UX) during and after website navigation. These techniques have been widely and positively applied in assessing customer experience (CX); however, little is known about their simultaneous application in the field of UX. As a specific context, the experience elicited by different websites from two well-known automotive brands was compared. About 160 Italian university students were enrolled in an online experimental study. Participants performed a Brand Association Reaction Time Test (BARTT) version of the IAT in which the two brands were compared according to different semantic dimensions already used in the automotive field. After completing the BARTT test, the participants navigated the target website: 80 participants navigated the first brand website, while the other half navigated the second brand website (between-subjects design). During the first 3 min of website navigation, emotional facial expressions were recorded. The participants were asked to freely navigate the website home page, look for a car model and its characteristics and price, use the customising tool, and, in the end, look for assistance. After the website navigation, all the participants performed the BARTT version of the IAT a second time, comparing the two brands again, this time to assess whether the website navigation had impacted the implicit associations previously detected. A traditional evaluation of the two websites was carried out by means of classic heuristic evaluation.
Findings from this study show, first, that neuromarketing techniques yield significant results in the field of UX: the IAT can be usefully applied to assess the UX elicited by brand websites by comparing changes in reaction times between the tests performed before and after website navigation. Second, emotional facial expression analysis during the navigation of both brand websites showed significant differences between the two brands, allowing the researchers to predict the emotional impact elicited by each website. Finally, the positive correlation with heuristic evaluation shows that neuromarketing can be successfully applied to UX research.
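The pre/post reaction-time comparison at the core of the IAT-based assessment can be sketched as follows. The data are synthetic, the group size and millisecond values are assumptions, and a paired t-test is used here as one plausible way to formalize the comparison.

```python
# Sketch of a pre/post comparison of BARTT reaction times: each of 80
# hypothetical participants contributes a reaction time before and after
# website navigation; a paired t-test checks whether exposure changed them.
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
pre_ms = rng.normal(820, 60, 80)            # pre-navigation reaction times (ms)
post_ms = pre_ms - rng.normal(40, 20, 80)   # hypothetical speed-up after exposure

t_stat, p_val = stats.ttest_rel(pre_ms, post_ms)
mean_change = np.mean(pre_ms - post_ms)
print(f"mean change = {mean_change:.1f} ms, t = {t_stat:.2f}, p = {p_val:.4f}")
```

A significant positive mean change would indicate faster (stronger) brand associations after navigation, which is the kind of shift the study looks for.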
Affiliation(s)
- Maurizio Mauri
- Department of Psychology, Catholic University of Milan, Milan, Italy; Department of User Experience and Marketing Research, SR LABS, Milan, Italy
- Gaia Rancati
- Department of Business and Economics, Allegheny College, Meadville, PA, United States
- Andrea Gaggioli
- Department of Psychology, Catholic University of Milan, Milan, Italy; Applied Technology for Neuro-Psychology Lab, I.R.C.C.S. Istituto Auxologico Italiano, Milan, Italy
- Giuseppe Riva
- Department of Psychology, Catholic University of Milan, Milan, Italy; Applied Technology for Neuro-Psychology Lab, I.R.C.C.S. Istituto Auxologico Italiano, Milan, Italy

47
Pereira M, Meng H, Hone K. Prediction of Communication Effectiveness During Media Skills Training Using Commercial Automatic Non-verbal Recognition Systems. Front Psychol 2021; 12:675721. [PMID: 34659000 PMCID: PMC8511452 DOI: 10.3389/fpsyg.2021.675721] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2021] [Accepted: 07/08/2021] [Indexed: 11/13/2022] Open
Abstract
It is well recognised that social signals play an important role in communication effectiveness. Observation of videos to understand non-verbal behaviour is time-consuming and limits the potential to incorporate detailed and accurate feedback on this behaviour in practical applications such as communication skills training or performance evaluation. The aim of the current research is twofold: (1) to investigate whether off-the-shelf emotion recognition technology can detect social signals in media interviews and (2) to identify which combinations of social signals are most promising for evaluating trainees' performance in a media interview. To investigate this, non-verbal signals were automatically recognised from practice on-camera media interviews conducted within a media training setting with a sample size of 34. Automated non-verbal signal detection covered multimodal features including facial expression, hand gestures, vocal behaviour and 'honest' signals. The on-camera interviews were categorised into effective and poor communication exemplars based on communication skills ratings provided by trainers and neutral observers, which served as the ground truth. A correlation-based feature selection method was used to select signals associated with performance. To assess the accuracy of the selected features, a number of machine learning classification techniques were used. Naive Bayes analysis produced the best results, with an F-measure of 0.76 and a prediction accuracy of 78%. Results revealed that a combination of body movements, hand movements and facial expression is relevant for establishing communication effectiveness in the context of media interviews. The results of the current study have implications for the automatic evaluation of media interviews, with a number of potential application areas including the enhancement of communication skills training, such as media skills training.
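The pipeline described above, correlation-based feature selection followed by Naive Bayes classification, can be sketched as follows. The features, the injected effects, and the 0.3 correlation threshold are synthetic assumptions for illustration; only the sample size of 34 and the Gaussian Naive Bayes classifier are taken from the abstract.

```python
# Sketch of correlation-based feature selection + Naive Bayes on made-up
# data: rank multimodal features by absolute correlation with the
# effective/poor label, keep the strongest, then cross-validate a classifier.
import numpy as np
from sklearn.naive_bayes import GaussianNB
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(7)
n, d = 34, 10                      # 34 interviews, 10 non-verbal features
y = rng.integers(0, 2, n)          # 1 = effective, 0 = poor (ground truth)
X = rng.normal(0, 1, (n, d))
X[:, 0] += 1.5 * y                 # hypothetical hand-movement feature tracks the label
X[:, 1] -= 1.2 * y                 # hypothetical facial-expression feature

# Correlation-based selection: keep features whose |r| with the label exceeds
# an assumed threshold of 0.3.
r = np.array([np.corrcoef(X[:, j], y)[0, 1] for j in range(d)])
selected = np.flatnonzero(np.abs(r) > 0.3)

scores = cross_val_score(GaussianNB(), X[:, selected], y, cv=5)
print("selected features:", selected)
print("cross-validated accuracy: %.2f" % scores.mean())
```

With only 34 samples, cross-validation rather than a single train/test split is the natural choice, which matches the small-sample setting the study works in.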
Affiliation(s)
- Monica Pereira
- Department of Psychology, School of Social Sciences, London Metropolitan University, London, United Kingdom
- Hongying Meng
- Department of Electronic and Computer Engineering, College of Engineering, Design and Physical Sciences, Brunel University London, London, United Kingdom
- Kate Hone
- Department of Computer Science, College of Engineering, Design and Physical Sciences, Brunel University London, London, United Kingdom

48
Tejada J, Freitag RMK, Pinheiro BFM, Cardoso PB, Souza VRA, Silva LS. Building and validation of a set of facial expression images to detect emotions: a transcultural study. PSYCHOLOGICAL RESEARCH 2021; 86:1996-2006. [PMID: 34652530 DOI: 10.1007/s00426-021-01605-3] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2020] [Accepted: 09/28/2021] [Indexed: 11/26/2022]
Abstract
Automatic emotion recognition from facial expressions has become an exceptional tool in research involving human subjects and has made it possible to obtain objective measurements of the emotional state of research subjects. Different software and commercial solutions are offered to perform this task. However, adaptation to the cultural context and recognition of complex expressions and/or emotions are two of the main challenges faced by these solutions. Here, we describe the construction and validation of a set of facial expression images suitable for training a recognition system. Our datasets consist of images of people with no acting experience who were recorded with a webcam as they performed a computer-assisted task in a room with a light background and overhead illumination. The six basic emotions and mockery were included, and a combination of the OpenCV, Dlib and Scikit-learn Python libraries was used to develop a support vector machine classifier. The code is available on GitHub and the images will be provided upon request. Since transcultural facial expressions for evaluating complex emotions and open-source solutions were used in this study, we strongly believe that our dataset will be useful in different research contexts.
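The scikit-learn side of the pipeline named above can be sketched as follows. The feature vectors here are synthetic stand-ins: in the real pipeline they would be landmark-derived measures extracted with OpenCV and Dlib (e.g. distances and angles between the 68 Dlib facial landmarks), and the 20-dimensional, clearly separable classes are assumptions for the sketch.

```python
# Sketch of training a support vector machine on landmark-derived feature
# vectors for the seven expression categories the dataset covers
# (six basic emotions plus mockery), using synthetic separable data.
import numpy as np
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(3)
labels = ["anger", "disgust", "fear", "happiness", "sadness", "surprise", "mockery"]
# 70 synthetic samples per class, 20 features each, with class-specific
# means so the classes are separable.
X = np.vstack([rng.normal(k, 0.8, (70, 20)) for k in range(len(labels))])
y = np.repeat(np.arange(len(labels)), 70)

X_tr, X_te, y_tr, y_te = train_test_split(
    X, y, test_size=0.3, random_state=0, stratify=y)
clf = SVC(kernel="rbf", C=1.0, gamma="scale").fit(X_tr, y_tr)
acc = accuracy_score(y_te, clf.predict(X_te))
print("test accuracy: %.2f" % acc)
```

An RBF-kernel SVC is a common default for this kind of moderate-dimensional classification; the authors' actual kernel and hyperparameters are documented in their GitHub repository.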
Affiliation(s)
- Julian Tejada
- Departamento de Psicologia, Universidade Federal de Sergipe, São Cristóvão, Brazil
- Facultad de Psicología, Fundación Universitaria Konrad Lorenz, Bogotá, Colombia
- Lucas Santos Silva
- Departamento de Letras, Universidade Federal de Sergipe, São Cristóvão, Brazil

49
Filippini C, Perpetuini D, Cardone D, Merla A. Improving Human-Robot Interaction by Enhancing NAO Robot Awareness of Human Facial Expression. SENSORS 2021; 21:s21196438. [PMID: 34640758 PMCID: PMC8512606 DOI: 10.3390/s21196438] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 08/04/2021] [Revised: 09/20/2021] [Accepted: 09/23/2021] [Indexed: 11/16/2022]
Abstract
An intriguing challenge in the human–robot interaction field is the prospect of endowing robots with emotional intelligence to make the interaction more genuine, intuitive, and natural. A crucial aspect in achieving this goal is the robot's capability to infer and interpret human emotions. Thanks to its design and open programming platform, the NAO humanoid robot is one of the most widely used agents for human interaction. As in person-to-person communication, facial expressions are the privileged channel for recognizing the interlocutor's emotional expressions. Although NAO is equipped with a facial expression recognition module, specific use cases may require additional features and affective computing capabilities that are not currently available. This study proposes a highly accurate convolutional-neural-network-based facial expression recognition model that further enhances the NAO robot's awareness of human facial expressions and provides the robot with the capability to detect an interlocutor's arousal level. Indeed, the model tested during human–robot interactions was 91% and 90% accurate in recognizing happy and sad facial expressions, respectively; 75% accurate in recognizing surprised and scared expressions; and less accurate in recognizing neutral and angry expressions. Finally, the model was successfully integrated into the NAO SDK, thus allowing for high-performing facial expression classification with an inference time of 0.34 ± 0.04 s.
50
A Challenge-Based Learning Experience in Industrial Engineering in the Framework of Education 4.0. SUSTAINABILITY 2021. [DOI: 10.3390/su13179867] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
Current tendencies in product, project, and service development place greater emphasis on the user experience (UX). Therefore, traditional training and teaching methodologies need to adapt in order to prepare students to develop problem-solving strategies in their professional education. Such needs have given rise to growing interest in tendencies such as Education 4.0 and Industry 4.0. This paper presents and analyzes the process and results of a teaching implementation methodology based on Challenge-Based Learning (CBL). The paper describes the process followed, explaining the methodological precedents that led to the final implementation case. It also mentions previous experiments on product analysis and home automation developments that are linked to the implementation of the technology. The implementation, analysis, and experimentation in this case integrated Emotional Domotics (ED) tools for UX analysis, in order to provide feedback and compare the students' results with the biometric and emotional computational analysis. The methodology described in this document allowed the students to better understand and experience some of the implications of an interconnected system with instant information feedback. This allowed them to better grasp part of the impact that the tendency towards the Internet of Things (IoT) is currently having, as well as the impact of the improvement proposals from the students.