1. Tessier MH, Mazet JP, Gagner E, Marcoux A, Jackson PL. Facial representations of complex affective states combining pain and a negative emotion. Sci Rep 2024; 14:11686. [PMID: 38777852] [PMCID: PMC11111784] [DOI: 10.1038/s41598-024-62423-2]
Abstract
Pain is rarely communicated alone, as it is often accompanied by emotions such as anger or sadness. Communicating these affective states involves shared representations. However, how an individual conceptually represents these combined states must first be tested. The objective of this study was to measure the interaction between pain and negative emotions on two types of facial representations of these states, namely visual (i.e., interactive virtual agents; VAs) and sensorimotor (i.e., one's production of facial configurations). Twenty-eight participants (15 women) read short written scenarios involving only pain or a combined experience of pain and a negative emotion (anger, disgust, fear, or sadness). They produced facial configurations representing these experiences on the faces of the VAs and on their own faces (own production or imitation of the VAs). The results suggest that affective states related to a direct threat to the body (i.e., anger, disgust, and pain) share a similar facial representation, while those that present no immediate danger (i.e., fear and sadness) differ. Although visual and sensorimotor representations of these states provide congruent affective information, they are differently influenced by factors associated with the communication cycle. These findings contribute to our understanding of pain communication in different affective contexts.
Affiliation(s)
- Marie-Hélène Tessier
- School of Psychology, Université Laval, Québec City, Canada
- Centre for Interdisciplinary Research in Rehabilitation and Social Integration (Cirris), Québec City, Canada
- CERVO Brain Research Centre, Québec City, Canada
- Jean-Philippe Mazet
- Department of Computer Science and Software Engineering, Université Laval, Québec City, Canada
- Elliot Gagner
- School of Psychology, Université Laval, Québec City, Canada
- Centre for Interdisciplinary Research in Rehabilitation and Social Integration (Cirris), Québec City, Canada
- CERVO Brain Research Centre, Québec City, Canada
- Audrey Marcoux
- School of Psychology, Université Laval, Québec City, Canada
- Centre for Interdisciplinary Research in Rehabilitation and Social Integration (Cirris), Québec City, Canada
- CERVO Brain Research Centre, Québec City, Canada
- Philip L Jackson
- School of Psychology, Université Laval, Québec City, Canada
- Centre for Interdisciplinary Research in Rehabilitation and Social Integration (Cirris), Québec City, Canada
- CERVO Brain Research Centre, Québec City, Canada
2. Tal S, Ben-David Sela T, Dolev-Amit T, Hel-Or H, Zilcha-Mano S. Reactivity and stability in facial expressions as an indicator of therapeutic alliance strength. Psychother Res 2024:1-15. [PMID: 38442022] [DOI: 10.1080/10503307.2024.2311777]
Abstract
Objective: Aspects of our emotional state are constantly being broadcast via our facial expressions. Psychotherapeutic theories highlight the importance of emotional dynamics between patients and therapists for an effective therapeutic relationship. Two emotional dynamics suggested by the literature are emotional reactivity (i.e., one person reacting to the other) and emotional stability (i.e., a person's tendency to remain in a given emotional state). Yet, little is known empirically about the association between these dynamics and the therapeutic alliance. This study investigates the association between the therapeutic alliance and the emotional dynamics of reactivity and stability, as manifested in the facial expressions of patients and therapists within the session. Methods: Ninety-four patients with major depressive disorder underwent short-term treatment for depression (N = 1256 sessions). Results: Both therapist reactivity and stability were associated with the alliance, across all time spans. Patient reactivity was associated with the alliance only in a short time span (1 s). Conclusions: These findings may potentially guide therapists in the field to attenuate not only their emotional reactions to their patients, but also their own unique presence in the therapy room.
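To make the two dynamics concrete, here is a minimal sketch (synthetic data; the study's actual estimation used within-session facial-expression time series and more elaborate models) that treats stability as lag-1 autocorrelation and reactivity as a lagged cross-partner correlation:

```python
import numpy as np

def stability(x: np.ndarray, lag: int = 1) -> float:
    """Lag-`lag` autocorrelation: tendency to remain in the current emotional state."""
    return float(np.corrcoef(x[:-lag], x[lag:])[0, 1])

def reactivity(leader: np.ndarray, follower: np.ndarray, lag: int = 1) -> float:
    """Correlation between the leader's state and the follower's state `lag` steps later."""
    return float(np.corrcoef(leader[:-lag], follower[lag:])[0, 1])

# Hypothetical per-second positive-affect intensities from one session's video.
rng = np.random.default_rng(0)
therapist = np.zeros(600)
for t in range(1, 600):                       # AR(1) process: a "stable" therapist
    therapist[t] = 0.6 * therapist[t - 1] + rng.normal()
patient = 0.4 * np.roll(therapist, 1) + rng.normal(scale=0.9, size=600)

print(f"therapist stability: {stability(therapist):.2f}")
print(f"patient reactivity at 1 s lag: {reactivity(therapist, patient):.2f}")
```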
Affiliation(s)
- Shachaf Tal
- Department of Psychology, University of Haifa, Haifa, Israel
- Hagit Hel-Or
- Department of Computer Science, University of Haifa, Haifa, Israel
3. Bian Y, Küster D, Liu H, Krumhuber EG. Understanding Naturalistic Facial Expressions with Deep Learning and Multimodal Large Language Models. Sensors (Basel) 2023; 24:126. [PMID: 38202988] [PMCID: PMC10781259] [DOI: 10.3390/s24010126]
Abstract
This paper provides a comprehensive overview of affective computing systems for facial expression recognition (FER) research in naturalistic contexts. The first section presents an updated account of user-friendly FER toolboxes incorporating state-of-the-art deep learning models and elaborates on their neural architectures, datasets, and performances across domains. These sophisticated FER toolboxes can robustly address a variety of challenges encountered in the wild such as variations in illumination and head pose, which may otherwise impact recognition accuracy. The second section of this paper discusses multimodal large language models (MLLMs) and their potential applications in affective science. MLLMs exhibit human-level capabilities for FER and enable the quantification of various contextual variables to provide context-aware emotion inferences. These advancements have the potential to revolutionize current methodological approaches for studying the contextual influences on emotions, leading to the development of contextualized emotion models.
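As a concrete illustration of the kind of user-friendly FER toolbox the review surveys, the sketch below runs Py-Feat's documented Detector pipeline on a single image; the file name is a placeholder, and model names and output columns may vary across Py-Feat versions.

```python
# A minimal sketch using the open-source Py-Feat toolbox (https://py-feat.org).
# The Detector defaults and detect_image() call follow Py-Feat's documented API.
from feat import Detector

detector = Detector()  # loads default face, landmark, AU, and emotion models

# Full pipeline (face detection -> landmarks -> AUs -> emotions) on one image.
result = detector.detect_image("face.jpg")  # placeholder path

# `result` is a Fex dataframe; slice out action units and emotion probabilities.
print(result.aus)       # per-face AU estimates
print(result.emotions)  # per-face probabilities for basic emotion categories
```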
Affiliation(s)
- Yifan Bian
- Department of Experimental Psychology, University College London, London WC1H 0AP, UK
- Dennis Küster
- Department of Mathematics and Computer Science, University of Bremen, 28359 Bremen, Germany
- Hui Liu
- Department of Mathematics and Computer Science, University of Bremen, 28359 Bremen, Germany
- Eva G. Krumhuber
- Department of Experimental Psychology, University College London, London WC1H 0AP, UK
4. Namba S, Sato W, Namba S, Nomiya H, Shimokawa K, Osumi M. Development of the RIKEN database for dynamic facial expressions with multiple angles. Sci Rep 2023; 13:21785. [PMID: 38066065] [PMCID: PMC10709572] [DOI: 10.1038/s41598-023-49209-8]
Abstract
The development of facial expression databases with sensing information is progressing in multidisciplinary fields, such as psychology, affective computing, and cognitive science. Previous facial datasets have not simultaneously dealt with multiple theoretical views of emotion, individualized context, or multi-angle/depth information. We developed a new facial database (the RIKEN facial expression database) that includes multiple theoretical views of emotion and expressers' individualized events, with multi-angle and depth information. The RIKEN facial expression database contains recordings of 48 Japanese participants captured using ten Kinect cameras at 25 events. This study identified several valence-related facial patterns and found them consistent with previous research investigating the coherence between facial movements and internal states. This database represents an advancement toward developing new sensing systems, conducting psychological experiments, and understanding the complexity of emotional events.
Affiliation(s)
- Shushi Namba
- RIKEN, Psychological Process Research Team, Guardian Robot Project, Kyoto, 6190288, Japan
- Department of Psychology, Hiroshima University, Hiroshima, 7398524, Japan
- Wataru Sato
- RIKEN, Psychological Process Research Team, Guardian Robot Project, Kyoto, 6190288, Japan
- Saori Namba
- Department of Psychology, Hiroshima University, Hiroshima, 7398524, Japan
- Hiroki Nomiya
- Faculty of Information and Human Sciences, Kyoto Institute of Technology, Kyoto, 6068585, Japan
- Koh Shimokawa
- KOHINATA Limited Liability Company, Osaka, 5560020, Japan
- Masaki Osumi
- KOHINATA Limited Liability Company, Osaka, 5560020, Japan
5. Hsu CT, Sato W. Electromyographic Validation of Spontaneous Facial Mimicry Detection Using Automated Facial Action Coding. Sensors (Basel) 2023; 23:9076. [PMID: 38005462] [PMCID: PMC10675524] [DOI: 10.3390/s23229076]
Abstract
Although electromyography (EMG) remains the standard, researchers have begun using automated facial action coding system (FACS) software to evaluate spontaneous facial mimicry despite the lack of evidence of its validity. Using the facial EMG of the zygomaticus major (ZM) as a standard, we confirmed the detection of spontaneous facial mimicry in action unit 12 (AU12, lip corner puller) via an automated FACS. Participants were alternately presented with real-time model performance and prerecorded videos of dynamic facial expressions, while simultaneous ZM signals and frontal facial videos were acquired. The facial videos were processed for AU12 estimates using FaceReader, Py-Feat, and OpenFace. The automated FACS was less sensitive and less accurate than facial EMG, but AU12 mimicry responses were significantly correlated with ZM responses. All three software programs detected enhanced facial mimicry in response to live performances. The AU12 time series showed a latency of roughly 100 to 300 ms relative to the ZM signal. Our results suggest that while the automated FACS cannot replace facial EMG in mimicry detection, it can be useful when effect sizes are large. Researchers should be cautious with automated FACS outputs, especially when studying clinical populations. In addition, developers should consider EMG validation of AU estimation as a benchmark.
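The latency between the two synchronized signals can be sketched as a peak-lag cross-correlation. The code below uses synthetic Gaussian bursts in place of real ZM and AU12 recordings; the sampling rate and signal shapes are assumptions.

```python
import numpy as np
from scipy.signal import correlate, correlation_lags

def peak_lag_ms(emg: np.ndarray, au: np.ndarray, fs: float) -> float:
    """Lag (ms) at which the AU12 series best aligns with the ZM EMG signal.
    Positive values mean the AU estimate trails the EMG."""
    emg = (emg - emg.mean()) / emg.std()
    au = (au - au.mean()) / au.std()
    xcorr = correlate(au, emg, mode="full")
    lags = correlation_lags(len(au), len(emg), mode="full")
    return 1000.0 * lags[np.argmax(xcorr)] / fs

# Hypothetical recordings resampled to a common 30 Hz rate.
fs = 30.0
t = np.arange(0, 10, 1 / fs)
zm = np.exp(-((t - 4.0) ** 2) / 0.1)    # smile-related EMG burst at 4.0 s
au12 = np.exp(-((t - 4.2) ** 2) / 0.1)  # AU12 estimate lagging by ~200 ms

print(f"estimated latency: {peak_lag_ms(zm, au12, fs):.0f} ms")  # ~200
```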
Affiliation(s)
- Chun-Ting Hsu
- Psychological Process Research Team, Guardian Robot Project, RIKEN, Soraku-gun, Kyoto 619-0288, Japan
- Wataru Sato
- Psychological Process Research Team, Guardian Robot Project, RIKEN, Soraku-gun, Kyoto 619-0288, Japan
6. Gülşen M, Aydın B, Gürer G, Yalçın SS. AI-assisted emotion analysis during complementary feeding in infants aged 6-11 months. Comput Biol Med 2023; 166:107482. [PMID: 37742418] [DOI: 10.1016/j.compbiomed.2023.107482]
Abstract
This study aims to explore AI-assisted emotion assessment in infants aged 6-11 months during complementary feeding, using OpenFace to analyze the Action Units (AUs) of the Facial Action Coding System. Infants (n = 98) were exposed to a diverse range of food groups (meat, cow's milk, vegetable, grain, and dessert products, as well as favorite and disliked foods), and video recordings were analyzed for emotional responses to these food groups, including surprise, sadness, happiness, fear, anger, and disgust. Time-averaged filtering was performed on the intensity of AUs. Facial expressions in response to the different food groups were compared with the neutral state using the Wilcoxon signed-rank test. The majority of the food groups did not differ significantly from the neutral emotional state. Infants exhibited strong disgust responses to meat and anger reactions to yogurt compared with neutral. Emotional responses also varied between breastfed and non-breastfed infants: breastfed infants showed heightened negative emotions, including fear, anger, and disgust, when exposed to certain food groups, while non-breastfed infants displayed weaker surprise and sadness reactions to their favorite foods and desserts. Further longitudinal research is needed to gain a comprehensive understanding of infants' emotional experiences and their associations with feeding behaviors and food acceptance.
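The core statistical step, comparing AU intensities in a food condition against the neutral state, can be sketched with SciPy's Wilcoxon signed-rank test; the AU choice and intensity values below are hypothetical.

```python
import numpy as np
from scipy.stats import wilcoxon

# Hypothetical time-averaged AU9 ("nose wrinkler", disgust-related) intensities
# per infant: once during meat exposure, once during the neutral baseline.
rng = np.random.default_rng(42)
neutral = rng.beta(2, 8, size=98)                                # low baseline
meat = np.clip(neutral + rng.normal(0.08, 0.1, size=98), 0, 1)   # slight increase

# Paired, non-parametric comparison of the food condition against neutral,
# mirroring the study's Wilcoxon signed-rank analysis.
stat, p = wilcoxon(meat, neutral)
print(f"Wilcoxon W = {stat:.1f}, p = {p:.4f}")
```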
Affiliation(s)
- Murat Gülşen
- Autism, Special Mental Needs and Rare Diseases Department, Turkish Ministry of Health, Ankara, Turkey
- Beril Aydın
- Department of Pediatrics, Başkent University School of Medicine, Ankara, Turkey
- Güliz Gürer
- Department of Pediatrics, Balıkesir Atatürk City Hospital, Balıkesir, Turkey
7. Turan B, Algedik Demirayak P, Yildirim Demirdogen E, Gulsen M, Cubukcu HC, Guler M, Alarslan H, Yilmaz AE, Dursun OB. Toward the detection of reduced emotion expression intensity: an autism sibling study. J Clin Exp Neuropsychol 2023:1-11. [PMID: 37318219] [DOI: 10.1080/13803395.2023.2225234]
Abstract
Introduction: Expressing emotions through spontaneous facial expressions is an important nonverbal social communication skill. In our study, we aimed to demonstrate that both children with autism spectrum disorder (ASD) and the non-ASD siblings of children with ASD have deficits in this skill. Method: We analyzed the six core facial emotion expressions of three distinct groups of children: those diagnosed with ASD (n = 60), non-ASD siblings (n = 60), and typically developing children (n = 60). To analyze facial expressions, we employed a computer vision program that uses machine learning algorithms to detect facial features, and we conducted an evidence-based task that assessed participants' ability to recognize facial emotion expressions. Results: Deficits in spontaneous emotion expression were found in the children with ASD and in the non-ASD siblings compared with typically developing children. Interestingly, these deficits were not related to the severity of autism symptoms in the ASD group. Conclusions: The results suggest that computer-based automated analysis of facial expressions with a contextual social scenes task holds potential for measuring limitations in the ability to express emotions, supplementing the traditional clinical assessment of deficits in social phenotypical behavior. This applies both to children with ASD and, especially, to the non-ASD siblings of children with ASD. This study adds a novel approach to the literature examining emotion expression skills.
Affiliation(s)
- Bahadir Turan
- Faculty of Medicine, Department of Child and Adolescent Psychiatry, Karadeniz Technical University, Trabzon, Turkey
- Graduate School of Applied Science Interdisciplinary Artificial Intelligence Technology, Ankara University, Ankara, Turkey
- Pinar Algedik Demirayak
- Department of Child and Adolescent Psychiatry, Umraniye Training and Research Hospital, Istanbul, Turkey
- Esen Yildirim Demirdogen
- Faculty of Medicine, Department of Child and Adolescent Psychiatry, Ataturk University, Erzurum, Turkey
- Murat Gulsen
- Graduate School of Applied Science Interdisciplinary Artificial Intelligence Technology, Ankara University, Ankara, Turkey
- General Directorate of Health Services, Autism, Mental Special Needs and Rare Diseases Department, Turkish Ministry of Health, Ankara, Turkey
- Hikmet Can Cubukcu
- General Directorate of Health Services, Autism, Mental Special Needs and Rare Diseases Department, Turkish Ministry of Health, Ankara, Turkey
- Muhammed Guler
- Department of Distance Education and Application Research Center, Ataturk University, Erzurum, Turkey
- Hatice Alarslan
- Faculty of Medicine, Department of Child and Adolescent Psychiatry, Ataturk University, Erzurum, Turkey
- Asım Egemen Yilmaz
- Graduate School of Applied Science Interdisciplinary Artificial Intelligence Technology, Ankara University, Ankara, Turkey
- Onur Burak Dursun
- General Directorate of Health Services, Autism, Mental Special Needs and Rare Diseases Department, Turkish Ministry of Health, Ankara, Turkey
8. Hirsch J, Zhang X, Noah JA, Bhattacharya A. Neural mechanisms for emotional contagion and spontaneous mimicry of live facial expressions. Philos Trans R Soc Lond B Biol Sci 2023; 378:20210472. [PMID: 36871593] [PMCID: PMC9985973] [DOI: 10.1098/rstb.2021.0472]
Abstract
Viewing a live facial expression typically elicits a similar expression by the observer (facial mimicry) that is associated with a concordant emotional experience (emotional contagion). The model of embodied emotion proposes that emotional contagion and facial mimicry are functionally linked although the neural underpinnings are not known. To address this knowledge gap, we employed a live two-person paradigm (n = 20 dyads) using functional near-infrared spectroscopy during live emotive face-processing while also measuring eye-tracking, facial classifications and ratings of emotion. One dyadic partner, 'Movie Watcher', was instructed to emote natural facial expressions while viewing evocative short movie clips. The other dyadic partner, 'Face Watcher', viewed the Movie Watcher's face. Task and rest blocks were implemented by timed epochs of clear and opaque glass that separated partners. Dyadic roles were alternated during the experiment. Mean cross-partner correlations of facial expressions (r = 0.36 ± 0.11 s.e.m.) and mean cross-partner affect ratings (r = 0.67 ± 0.04) were consistent with facial mimicry and emotional contagion, respectively. Neural correlates of emotional contagion based on covariates of partner affect ratings included angular and supramarginal gyri, whereas neural correlates of the live facial action units included motor cortex and ventral face-processing areas. Findings suggest distinct neural components for facial mimicry and emotional contagion. This article is part of a discussion meeting issue 'Face2face: advancing the science of social interaction'.
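A minimal sketch of the cross-partner correlation measure, with synthetic AU traces standing in for the facial classification output (the dyad count matches the study, but noise levels and trace lengths are arbitrary):

```python
import numpy as np

def cross_partner_r(dyads: list[tuple[np.ndarray, np.ndarray]]) -> tuple[float, float]:
    """Mean and s.e.m. of per-dyad Pearson correlations between partners' signals."""
    rs = np.array([np.corrcoef(a, b)[0, 1] for a, b in dyads])
    return rs.mean(), rs.std(ddof=1) / np.sqrt(len(rs))

# Hypothetical AU12 intensity traces for 20 dyads, 900 samples each.
rng = np.random.default_rng(1)
dyads = []
for _ in range(20):
    shared = rng.normal(size=900)                 # mimicked common component
    a = shared + rng.normal(scale=1.5, size=900)  # Movie Watcher
    b = shared + rng.normal(scale=1.5, size=900)  # Face Watcher
    dyads.append((a, b))

mean_r, sem_r = cross_partner_r(dyads)
print(f"cross-partner r = {mean_r:.2f} ± {sem_r:.2f} s.e.m.")
```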
Affiliation(s)
- Joy Hirsch
- Brain Function Laboratory, Department of Psychiatry, Yale School of Medicine, New Haven, CT 06511, USA
- Department of Neuroscience, Yale School of Medicine, New Haven, CT 06511, USA
- Department of Comparative Medicine, Yale School of Medicine, New Haven, CT 06511, USA
- Wu Tsai Institute, Yale University, PO Box 208091, New Haven, CT 06520, USA
- Haskins Laboratories, 300 George Street, New Haven, CT 06511, USA
- Department of Medical Physics and Biomedical Engineering, University College London, London WC1E 6BT, UK
- Xian Zhang
- Brain Function Laboratory, Department of Psychiatry, Yale School of Medicine, New Haven, CT 06511, USA
- J. Adam Noah
- Brain Function Laboratory, Department of Psychiatry, Yale School of Medicine, New Haven, CT 06511, USA
9. The spatio-temporal features of perceived-as-genuine and deliberate expressions. PLoS One 2022; 17:e0271047. [PMID: 35839208] [PMCID: PMC9286247] [DOI: 10.1371/journal.pone.0271047]
Abstract
Reading the genuineness of facial expressions is important for increasing the credibility of information conveyed by faces. However, it remains unclear which spatio-temporal characteristics of facial movements serve as critical cues to the perceived genuineness of facial expressions. This study focused on observable spatio-temporal differences between perceived-as-genuine and deliberate expressions of happiness and anger. In this experiment, 89 Japanese participants were asked to judge the perceived genuineness of faces in videos showing happiness or anger expressions. To identify diagnostic facial cues to the perceived genuineness of the facial expressions, we analyzed a total of 128 face videos using an automated facial action detection system: moment-to-moment activations of facial action units were annotated, and nonnegative matrix factorization extracted sparse and meaningful components from the action unit data. The results showed that genuineness judgments decreased when more spatial patterns were observed in facial expressions. As for the temporal features, the perceived-as-deliberate expressions of happiness generally had faster onsets to the peak than the perceived-as-genuine expressions of happiness. Moreover, opening the mouth negatively contributed to the perceived-as-genuine expressions, irrespective of the type of facial expression. These findings provide the first evidence for dynamic facial cues to the perceived genuineness of happiness and anger expressions.
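The decomposition step can be sketched with scikit-learn's NMF; the matrix layout below (videos x flattened time-by-AU activations) and its dimensions are assumptions, not the authors' exact preprocessing.

```python
import numpy as np
from sklearn.decomposition import NMF

# Hypothetical automated-FACS output: 128 videos x (time bins * AUs), flattened
# so each row is one video's moment-to-moment AU activation profile.
rng = np.random.default_rng(0)
X = np.abs(rng.normal(size=(128, 30 * 17)))  # 30 time bins x 17 AUs, nonnegative

# Sparse, parts-based decomposition: W gives per-video component loadings,
# H gives the spatio-temporal AU patterns themselves.
model = NMF(n_components=6, init="nndsvda", max_iter=500, random_state=0)
W = model.fit_transform(X)   # (128, 6) loadings per video
H = model.components_        # (6, 510) AU activation patterns

print(W.shape, H.shape)
```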
10. Determination of “Neutral”–“Pain”, “Neutral”–“Pleasure”, and “Pleasure”–“Pain” Affective State Distances by Using AI Image Analysis of Facial Expressions. Technologies 2022; 10:75. [DOI: 10.3390/technologies10040075]
Abstract
(1) Background: In addition to verbalizations, facial expressions advertise one’s affective state. There is an ongoing debate concerning the communicative value of the facial expressions of pain and of pleasure, and to what extent humans can distinguish between these. We introduce a novel method of analysis by replacing human ratings with outputs from image analysis software. (2) Methods: We use image analysis software to extract feature vectors of the facial expressions neutral, pain, and pleasure displayed by 20 actresses. We dimension-reduced these feature vectors, used singular value decomposition to eliminate noise, and then used hierarchical agglomerative clustering to detect patterns. (3) Results: The vector norms for pain–pleasure were rarely less than the distances pain–neutral and pleasure–neutral. The pain–pleasure distances were Weibull-distributed and noise contributed 10% to the signal. The noise-free distances clustered in four clusters and two isolates. (4) Conclusions: AI methods of image recognition are superior to human abilities in distinguishing between facial expressions of pain and pleasure. Statistical methods and hierarchical clustering offer possible explanations as to why humans fail. The reliability of commercial software, which attempts to identify facial expressions of affective states, can be improved by using the results of our analyses.
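A minimal sketch of the pipeline's core steps (truncated-SVD denoising, distance computation, Weibull fitting, and agglomerative clustering), using random vectors in place of the extracted feature vectors; the row layout, retained rank, and cluster count are assumptions.

```python
import numpy as np
from scipy.cluster.hierarchy import fcluster, linkage
from scipy.stats import weibull_min

# Hypothetical feature vectors: 20 actresses x 3 expressions, stacked as
# rows 0:20 neutral, 20:40 pain, 40:60 pleasure (an assumed layout).
rng = np.random.default_rng(3)
features = rng.normal(size=(60, 128))

# Denoise via truncated SVD: keep the leading components, discard the rest.
U, s, Vt = np.linalg.svd(features, full_matrices=False)
k = 10
denoised = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Per-actress pain-pleasure distances (vector norms of the differences).
pain, pleasure = denoised[20:40], denoised[40:60]
d = np.linalg.norm(pain - pleasure, axis=1)

# Fit a Weibull distribution to the distances, as the paper reports.
shape, loc, scale = weibull_min.fit(d, floc=0)
print(f"Weibull shape = {shape:.2f}, scale = {scale:.2f}")

# Hierarchical agglomerative clustering of the noise-free distances.
labels = fcluster(linkage(d.reshape(-1, 1), method="ward"), t=4, criterion="maxclust")
print(np.bincount(labels)[1:])  # cluster sizes
```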
11. Kumar M, Roy S, Bhushan B, Sameer A. Creative problem solving and facial expressions: A stage based comparison. PLoS One 2022; 17:e0269504. [PMID: 35731723] [PMCID: PMC9216609] [DOI: 10.1371/journal.pone.0269504]
Abstract
A wealth of research indicates that emotions play an instrumental role in creative problem-solving. However, most of these studies have relied primarily on diary studies and self-report scales when measuring emotions during the creative process. There has been a need to capture in-the-moment emotional experiences of individuals during the creative process using an automated emotion recognition tool. The experiment in this study examined the process-related difference between the creative problem solving (CPS) and simple problem solving (SPS) processes using protocol analysis and Markov chains. Further, this experiment introduced a novel method for measuring in-the-moment emotional experiences of individuals during the CPS and SPS processes using facial expressions and machine learning algorithms. The experiment described in this study employed 64 participants to solve different tasks while wearing camera-mounted headgear. Using retrospective analysis, the participants verbally reported their thoughts using video-stimulated recall. Our results indicate differences in the cognitive efforts spent at different stages during the CPS and SPS processes. We also found that most of the creative stages were associated with ambivalent emotions whereas the stage of block was associated with negative emotions.
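Estimating a first-order Markov chain from coded protocol stages reduces to a row-normalized transition-count matrix; the stage labels below are illustrative, not the authors' coding scheme.

```python
import numpy as np

def transition_matrix(seq: list[str], states: list[str]) -> np.ndarray:
    """Row-normalized first-order Markov transition matrix from a coded sequence."""
    idx = {s: i for i, s in enumerate(states)}
    counts = np.zeros((len(states), len(states)))
    for a, b in zip(seq[:-1], seq[1:]):   # count each observed stage transition
        counts[idx[a], idx[b]] += 1
    rows = counts.sum(axis=1, keepdims=True)
    return np.divide(counts, rows, out=np.zeros_like(counts), where=rows > 0)

# Hypothetical protocol-analysis codes for one participant's CPS session.
stages = ["problem_finding", "ideation", "block", "ideation", "evaluation", "ideation"]
states = ["problem_finding", "ideation", "block", "evaluation"]
print(transition_matrix(stages, states).round(2))
```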
Affiliation(s)
- Mritunjay Kumar
- Department of Design, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, India
- Satyaki Roy
- Department of Design, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, India
- Department of Humanities and Social Sciences, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, India
- Braj Bhushan
- Department of Design, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, India
- Department of Humanities and Social Sciences, Indian Institute of Technology Kanpur, Kanpur, Uttar Pradesh, India
- Ahmed Sameer
- Department of Humanities and Social Sciences, Indian Institute of Technology (ISM) Dhanbad, Dhanbad, Jharkhand, India
12. Namba S, Sato W, Nakamura K, Watanabe K. Computational Process of Sharing Emotion: An Authentic Information Perspective. Front Psychol 2022; 13:849499. [PMID: 35645906] [PMCID: PMC9134197] [DOI: 10.3389/fpsyg.2022.849499]
Abstract
Although many psychological studies have shown that sharing emotion supports dyadic interaction, no study has examined the transmission of authentic information from emotional expressions that can strengthen perceivers. For this study, we used computational modeling, specifically a multinomial processing tree, to formally quantify the process of sharing emotion, emphasizing the perception of authentic information about expressers' feeling states from facial expressions. Results indicated that authentic information about feeling states is perceived with higher probability from happy expressions than from angry expressions. Second, happy facial expressions can activate both emotional elicitation and emotion sharing in perceivers, whereas for angry facial expressions emotional elicitation alone is at work rather than emotion sharing. Third, parameters for detecting anger experiences were positively correlated with those for happiness. No robust correlation was found between the parameters extracted from this experimental task and questionnaire-measured emotional contagion, empathy, and social anxiety. These results reveal the possibility that a new computational approach can contribute to describing the process of sharing emotion.
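To show how a multinomial processing tree is fit in practice, here is a toy two-parameter tree estimated by maximum likelihood; the tree structure, parameter names, and counts are illustrative, not the authors' actual model.

```python
import numpy as np
from scipy.optimize import minimize

# Toy MPT: with probability d the perceiver detects the expresser's emotion, and
# given detection, with probability s they also share it. Observed responses fall
# into three categories: detect+share, detect only, miss.
def neg_log_lik(params: np.ndarray, counts: np.ndarray) -> float:
    d, s = params
    p = np.array([d * s, d * (1 - s), 1 - d])  # category probabilities sum to 1
    return -np.sum(counts * np.log(p))

counts = np.array([42, 31, 27])  # hypothetical counts for happy expressions
fit = minimize(neg_log_lik, x0=[0.5, 0.5], args=(counts,),
               bounds=[(1e-6, 1 - 1e-6)] * 2)
d_hat, s_hat = fit.x
print(f"detection d = {d_hat:.2f}, sharing given detection s = {s_hat:.2f}")
```

With three response categories and two parameters the tree is identifiable; richer trees simply add branches to the category-probability vector.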
Affiliation(s)
- Shushi Namba
- Psychological Process Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan
- Wataru Sato
- Psychological Process Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan
- Koyo Nakamura
- Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Japan Society for the Promotion of Science, Tokyo, Japan
- Faculty of Science and Engineering, Waseda University, Tokyo, Japan
- Katsumi Watanabe
- Faculty of Science and Engineering, Waseda University, Tokyo, Japan
- Faculty of Arts, Design and Architecture, University of New South Wales, Sydney, NSW, Australia
13. Zhang K, Yuan Y, Chen J, Wang G, Chen Q, Luo M. Eye Tracking Research on the Influence of Spatial Frequency and Inversion Effect on Facial Expression Processing in Children with Autism Spectrum Disorder. Brain Sci 2022; 12:283. [PMID: 35204046] [PMCID: PMC8870542] [DOI: 10.3390/brainsci12020283]
Abstract
Facial expression processing mainly depends on whether the facial features related to expressions can be fully acquired, and whether appropriate processing strategies can be adopted according to different conditions. Children with autism spectrum disorder (ASD) have difficulty accurately recognizing facial expressions and responding appropriately, which is regarded as an important cause of their social disorders. This study used eye tracking technology to explore the internal processing mechanism of facial expressions in children with ASD under the influence of spatial frequency and inversion effects, with a view to improving their social disorders. The facial expression recognition rate and eye tracking characteristics of children with ASD and typically developing (TD) children on facial areas of interest were recorded and analyzed. The multi-factor mixed experiment results showed that the facial expression recognition rate of children with ASD under various conditions was significantly lower than that of TD children. TD children paid more visual attention to the eyes area. However, children with ASD preferred the features of the mouth area, and lacked visual attention to and processing of the eyes area. When the face was inverted, TD children showed the inversion effect under all three spatial frequency conditions, manifested as a significant decrease in expression recognition rate. However, children with ASD showed the inversion effect only under the low spatial frequency (LSF) condition, indicating that they mainly used a featural processing method and had the capacity for configural processing under the LSF condition. The eye tracking results showed that when the face was inverted or facial feature information was weakened, both children with ASD and TD children adjusted their facial expression processing strategies accordingly, increasing visual attention to and information processing of their preferred areas. The fixation counts and fixation duration of TD children on the eyes area increased significantly, while the fixation duration of children with ASD on the mouth area increased significantly. The results of this study provide theoretical and practical support for facial expression intervention in children with ASD.
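The two eye-tracking measures, fixation counts and fixation duration per area of interest (AOI), can be sketched as a grouped aggregation; the fixation log below is hypothetical.

```python
import pandas as pd

# Hypothetical eye-tracker fixation log: one row per fixation, with the
# facial AOI it landed in and its duration in milliseconds.
fixations = pd.DataFrame({
    "group": ["ASD", "ASD", "TD", "TD", "TD", "ASD"],
    "aoi": ["mouth", "mouth", "eyes", "eyes", "mouth", "eyes"],
    "duration_ms": [310, 250, 420, 380, 150, 90],
})

# Fixation counts and total fixation duration per group and AOI,
# the two measures compared in the study.
summary = (fixations.groupby(["group", "aoi"])["duration_ms"]
           .agg(fixation_count="count", total_duration_ms="sum")
           .reset_index())
print(summary)
```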
Affiliation(s)
- Kun Zhang
- National Engineering Research Center for E-Learning, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China
- National Engineering Laboratory for Educational Big Data, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China
- Yishuang Yuan
- National Engineering Research Center for E-Learning, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China
- National Engineering Laboratory for Educational Big Data, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China
- Jingying Chen
- National Engineering Research Center for E-Learning, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China
- National Engineering Laboratory for Educational Big Data, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China
- Guangshuai Wang
- School of Computer Science, Wuhan University, Wuhan 430072, China
- Qian Chen
- National Engineering Research Center for E-Learning, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China
- National Engineering Laboratory for Educational Big Data, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China
- Meijuan Luo
- National Engineering Research Center for E-Learning, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China
- National Engineering Laboratory for Educational Big Data, Faculty of Artificial Intelligence in Education, Central China Normal University, Wuhan 430079, China
14. Sato W, Namba S, Yang D, Nishida S, Ishi C, Minato T. An Android for Emotional Interaction: Spatiotemporal Validation of Its Facial Expressions. Front Psychol 2022; 12:800657. [PMID: 35185697] [PMCID: PMC8855677] [DOI: 10.3389/fpsyg.2021.800657]
Abstract
Android robots capable of emotional interactions with humans have considerable potential for application to research. While several studies developed androids that can exhibit human-like emotional facial expressions, few have empirically validated androids' facial expressions. To investigate this issue, we developed an android head called Nikola based on human psychology and conducted three studies to test the validity of its facial expressions. In Study 1, Nikola produced single facial actions, which were evaluated in accordance with the Facial Action Coding System. The results showed that 17 action units were appropriately produced. In Study 2, Nikola produced the prototypical facial expressions for six basic emotions (anger, disgust, fear, happiness, sadness, and surprise), and naïve participants labeled photographs of the expressions. The recognition accuracy of all emotions was higher than chance level. In Study 3, Nikola produced dynamic facial expressions for six basic emotions at four different speeds, and naïve participants evaluated the naturalness of the speed of each expression. The effect of speed differed across emotions, as in previous studies of human expressions. These data validate the spatial and temporal patterns of Nikola's emotional facial expressions, and suggest that it may be useful for future psychological studies and real-life applications.
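Testing recognition accuracy against chance level, as in Study 2, amounts to an exact binomial test; the rater counts below are hypothetical, with chance at 1/6 for six emotion labels.

```python
from scipy.stats import binomtest

# Hypothetical labeling data for one of Nikola's expressions: 50 naive raters
# choosing among 6 basic-emotion labels, so chance level is 1/6.
n_raters, n_correct, chance = 50, 21, 1 / 6

# One-sided exact test: is accuracy above chance?
result = binomtest(n_correct, n_raters, chance, alternative="greater")
print(f"accuracy = {n_correct / n_raters:.2f}, p = {result.pvalue:.4f}")
```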
Affiliation(s)
- Wataru Sato
- Psychological Process Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan
- Field Science Education and Research Center, Kyoto University, Kyoto, Japan
- Shushi Namba
- Psychological Process Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan
- Dongsheng Yang
- Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Shin’ya Nishida
- Graduate School of Informatics, Kyoto University, Kyoto, Japan
- NTT Communication Science Laboratories, Nippon Telegraph and Telephone Corporation, Atsugi, Japan
- Carlos Ishi
- Interactive Robot Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan
- Takashi Minato
- Interactive Robot Research Team, Guardian Robot Project, RIKEN, Kyoto, Japan