1. Funk PF, Levit B, Bar-Haim C, Ben-Dov D, Volk GF, Grassme R, Anders C, Guntinas-Lichius O, Hanein Y. Wireless high-resolution surface facial electromyography mask for discrimination of standardized facial expressions in healthy adults. Sci Rep 2024;14:19317. PMID: 39164429; PMCID: PMC11336214; DOI: 10.1038/s41598-024-70205-z.
Abstract
Wired high-resolution surface electromyography (sEMG) with gelled electrodes is a standard method in psycho-physiological, neurological, and medical research. Despite its widespread use, electrode placement is elaborate and time-consuming, and the overall experimental setup is prone to mechanical artifacts and thus offers little flexibility. Wireless, easy-to-apply technologies would enable examination in more realistic settings. To address this, a novel smart-skin technology consisting of wireless dry 16-electrode arrays was tested. The soft electrode arrays were attached to the right hemiface of 37 healthy adult participants (60% female; 20 to 57 years). The participants performed three runs of a standard set of facial expression exercises. Linear mixed-effects models with the sEMG amplitudes as outcome measure were used to evaluate differences between the facial movement tasks and between runs (separately for every task). The smart electrodes showed specific activation patterns for each of the exercises. Using the average muscle action of all electrodes, 82% of the exercises could be differentiated from each other with very high precision, and the effects were consistent across the three runs. Thus, wireless high-resolution sEMG analysis with smart-skin technology successfully discriminates standard facial expressions in research and clinical settings.
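The outcome measure named in this abstract, sEMG amplitude, is conventionally a root-mean-square (RMS) summary of each recording epoch before it enters a statistical model. As an illustrative sketch only (the paper does not publish its pipeline; the function name and toy data below are invented for this example):

```python
import math

def rms_amplitude(samples):
    """Root-mean-square amplitude of one sEMG epoch -- the scalar
    summary conventionally used as an amplitude outcome measure."""
    return math.sqrt(sum(s * s for s in samples) / len(samples))

# Toy epoch: a signal alternating between +2 and -2 has RMS exactly 2.
epoch = [2.0, -2.0] * 4
print(rms_amplitude(epoch))  # 2.0
```

One such scalar per electrode and task repetition is then a natural fit for the mixed-effects analysis the abstract describes.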
Affiliation(s)
- Paul F Funk
- Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747, Jena, Germany
- School of Electrical Engineering, Tel Aviv University, Tel Aviv, Israel
- Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv University, Tel Aviv, Israel
- Bara Levit
- School of Electrical Engineering, Tel Aviv University, Tel Aviv, Israel
- Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv University, Tel Aviv, Israel
- Chen Bar-Haim
- School of Electrical Engineering, Tel Aviv University, Tel Aviv, Israel
- Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv University, Tel Aviv, Israel
- Dvir Ben-Dov
- School of Electrical Engineering, Tel Aviv University, Tel Aviv, Israel
- Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv University, Tel Aviv, Israel
- Gerd Fabian Volk
- Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747, Jena, Germany
- Facial-Nerve-Center Jena, Jena University Hospital, Jena, Germany
- Center for Rare Diseases, Jena University Hospital, Jena, Germany
- Roland Grassme
- Division Motor Research, Pathophysiology and Biomechanics, Department of Trauma, Hand and Reconstructive Surgery, Jena University Hospital, Friedrich-Schiller-University Jena, Jena, Germany
- Department of Prevention, Biomechanics, German Social Accident Insurance Institution for the Foodstuffs and Catering Industry, Erfurt, Germany
- Christoph Anders
- Division Motor Research, Pathophysiology and Biomechanics, Department of Trauma, Hand and Reconstructive Surgery, Jena University Hospital, Friedrich-Schiller-University Jena, Jena, Germany
- Orlando Guntinas-Lichius
- Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747, Jena, Germany
- Facial-Nerve-Center Jena, Jena University Hospital, Jena, Germany
- Center for Rare Diseases, Jena University Hospital, Jena, Germany
- Yael Hanein
- School of Electrical Engineering, Tel Aviv University, Tel Aviv, Israel
- Tel Aviv University Center for Nanoscience and Nanotechnology, Tel Aviv University, Tel Aviv, Israel
- Sagol School of Neuroscience, Tel Aviv University, Tel Aviv, Israel
- X-Trodes, Herzliya, Israel
2. Slonim DA, Yehezkel I, Paz A, Bar-Kalifa E, Wolff M, Dar A, Gilboa-Schechtman E. Facing change: using automated facial expression analysis to examine emotional flexibility in the treatment of depression. Adm Policy Ment Health 2024;51:501-508. PMID: 37880472; DOI: 10.1007/s10488-023-01310-w.
Abstract
OBJECTIVE Depression involves deficits in emotional flexibility. To date, the varied and dynamic nature of emotional processes during therapy has mostly been measured at discrete time intervals using clients' subjective reports. Because emotions tend to fluctuate and change from moment to moment, the understanding of emotional processes in the treatment of depression depends to a great extent on the existence of sensitive, continuous, and objectively codified measures of emotional expression. In this observational study, we used computerized measures to analyze high-resolution time-series facial expression data, as well as self-reports, to examine the association between emotional flexibility and depressive symptoms at both the client and the session level. METHOD Video recordings from 283 therapy sessions of 58 clients who underwent 16 sessions of manualized psychodynamic psychotherapy for depression were analyzed. Data were collected as part of routine practice in a university clinic that provides treatments to the community. Emotional flexibility was measured in each session using an automated facial expression emotion recognition system. The clients' depression level was assessed at the beginning of each session using the Beck Depression Inventory-II (Beck et al., 1996). RESULTS Higher emotional flexibility was associated with lower depressive symptoms at both the treatment and the session level. CONCLUSION These findings highlight the centrality of emotional flexibility as both a trait-like and a state-like characteristic of depression. The results also demonstrate the usefulness of computerized measures for capturing key emotional processes in the treatment of depression with high scale and specificity.
Affiliation(s)
- Ido Yehezkel
- Department of Psychology, Bar-Ilan University, Ramat-Gan, Israel
- Adar Paz
- Department of Psychology, Bar-Ilan University, Ramat-Gan, Israel
- Eran Bar-Kalifa
- Department of Psychology, Ben-Gurion University of the Negev, Beer-Sheva, Israel
- Maya Wolff
- Department of Psychology, Bar-Ilan University, Ramat-Gan, Israel
- Avinoam Dar
- Department of Psychology, Bar-Ilan University, Ramat-Gan, Israel
3. Bress KS, Cascio CJ. Sensorimotor regulation of facial expression - an untouched frontier. Neurosci Biobehav Rev 2024;162:105684. PMID: 38710425; DOI: 10.1016/j.neubiorev.2024.105684.
Abstract
Facial expression is a critical form of nonverbal social communication which promotes emotional exchange and affiliation among humans. Facial expressions are generated via precise contraction of the facial muscles, guided by sensory feedback. While the neural pathways underlying facial motor control are well characterized in humans and primates, it remains unknown how tactile and proprioceptive information reaches these pathways to guide facial muscle contraction. Thus, despite the importance of facial expressions for social functioning, little is known about how they are generated as a unique sensorimotor behavior. In this review, we highlight current knowledge about sensory feedback from the face and how it is distinct from other body regions. We describe connectivity between the facial sensory and motor brain systems, and call attention to the other brain systems which influence facial expression behavior, including vision, gustation, emotion, and interoception. Finally, we petition for more research on the sensory basis of facial expressions, asserting that incomplete understanding of sensorimotor mechanisms is a barrier to addressing atypical facial expressivity in clinical populations.
Affiliation(s)
- Kimberly S Bress
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Carissa J Cascio
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
4. Zhao Q, Ye Z, Deng Y, Chen J, Chen J, Liu D, Ye X, Huan C. An advance in novel intelligent sensory technologies: from an implicit-tracking perspective of food perception. Compr Rev Food Sci Food Saf 2024;23:e13327. PMID: 38517017; DOI: 10.1111/1541-4337.13327.
Abstract
Food sensory evaluation mainly includes explicit and implicit measurement methods. Implicit measures of consumer perception are gaining significant attention in food sensory and consumer science as they provide effective, subconscious, objective analysis. A wide range of advanced technologies are now available for analyzing physiological and psychological responses, including facial analysis technology, neuroimaging technology, autonomic nervous system technology, and behavioral pattern measurement. However, researchers in the food field often lack systematic knowledge of these multidisciplinary technologies and struggle with interpreting their results. To bridge this gap, this review systematically describes the principles and highlights the applications in food sensory and consumer science of facial analysis technologies such as eye tracking, facial electromyography, and automatic facial expression analysis, as well as neuroimaging technologies like electroencephalography, magnetoencephalography, functional magnetic resonance imaging, and functional near-infrared spectroscopy. Furthermore, we critically compare and discuss these advanced implicit techniques in the context of food sensory research and propose future research directions accordingly. Ultimately, we conclude that implicit measures should be complemented by traditional explicit measures to capture responses beyond preference. Facial analysis technologies offer a more objective reflection of sensory perception and attitudes toward food, whereas neuroimaging techniques provide valuable insight into the implicit physiological responses during food consumption. To enhance the interpretability and generalizability of implicit measurement results, further sensory studies are needed. Looking ahead, the combination of different methodological techniques in real-life situations holds promise for consumer sensory science in the field of food research.
Affiliation(s)
- Qian Zhao
- College of Biosystems Engineering and Food Science, National-Local Joint Engineering Research Center of Intelligent Food Technology and Equipment, Fuli Institute of Food Science, Zhejiang Key Laboratory for Agro-Food Processing, Zhejiang International Scientific and Technological Cooperation Base of Health Food Manufacturing and Quality Control, Zhejiang University, Hangzhou, China
- Innovation Center of Yangtze River Delta, Zhejiang University, Jiaxing, China
- Zhiyue Ye
- College of Biosystems Engineering and Food Science, National-Local Joint Engineering Research Center of Intelligent Food Technology and Equipment, Fuli Institute of Food Science, Zhejiang Key Laboratory for Agro-Food Processing, Zhejiang International Scientific and Technological Cooperation Base of Health Food Manufacturing and Quality Control, Zhejiang University, Hangzhou, China
- Innovation Center of Yangtze River Delta, Zhejiang University, Jiaxing, China
- Yong Deng
- College of Biosystems Engineering and Food Science, National-Local Joint Engineering Research Center of Intelligent Food Technology and Equipment, Fuli Institute of Food Science, Zhejiang Key Laboratory for Agro-Food Processing, Zhejiang International Scientific and Technological Cooperation Base of Health Food Manufacturing and Quality Control, Zhejiang University, Hangzhou, China
- Innovation Center of Yangtze River Delta, Zhejiang University, Jiaxing, China
- Jin Chen
- College of Biosystems Engineering and Food Science, National-Local Joint Engineering Research Center of Intelligent Food Technology and Equipment, Fuli Institute of Food Science, Zhejiang Key Laboratory for Agro-Food Processing, Zhejiang International Scientific and Technological Cooperation Base of Health Food Manufacturing and Quality Control, Zhejiang University, Hangzhou, China
- Jianle Chen
- College of Biosystems Engineering and Food Science, National-Local Joint Engineering Research Center of Intelligent Food Technology and Equipment, Fuli Institute of Food Science, Zhejiang Key Laboratory for Agro-Food Processing, Zhejiang International Scientific and Technological Cooperation Base of Health Food Manufacturing and Quality Control, Zhejiang University, Hangzhou, China
- Zhongyuan Institute, Zhejiang University, Zhengzhou, China
- Ningbo Innovation Center, Zhejiang University, Ningbo, China
- Donghong Liu
- College of Biosystems Engineering and Food Science, National-Local Joint Engineering Research Center of Intelligent Food Technology and Equipment, Fuli Institute of Food Science, Zhejiang Key Laboratory for Agro-Food Processing, Zhejiang International Scientific and Technological Cooperation Base of Health Food Manufacturing and Quality Control, Zhejiang University, Hangzhou, China
- Innovation Center of Yangtze River Delta, Zhejiang University, Jiaxing, China
- Zhongyuan Institute, Zhejiang University, Zhengzhou, China
- Ningbo Innovation Center, Zhejiang University, Ningbo, China
- Xingqian Ye
- College of Biosystems Engineering and Food Science, National-Local Joint Engineering Research Center of Intelligent Food Technology and Equipment, Fuli Institute of Food Science, Zhejiang Key Laboratory for Agro-Food Processing, Zhejiang International Scientific and Technological Cooperation Base of Health Food Manufacturing and Quality Control, Zhejiang University, Hangzhou, China
- Zhongyuan Institute, Zhejiang University, Zhengzhou, China
- Ningbo Innovation Center, Zhejiang University, Ningbo, China
- Cheng Huan
- College of Biosystems Engineering and Food Science, National-Local Joint Engineering Research Center of Intelligent Food Technology and Equipment, Fuli Institute of Food Science, Zhejiang Key Laboratory for Agro-Food Processing, Zhejiang International Scientific and Technological Cooperation Base of Health Food Manufacturing and Quality Control, Zhejiang University, Hangzhou, China
- Innovation Center of Yangtze River Delta, Zhejiang University, Jiaxing, China
- Zhongyuan Institute, Zhejiang University, Zhengzhou, China
- Ningbo Innovation Center, Zhejiang University, Ningbo, China
5. Westermann JF, Schäfer R, Nordmann M, Richter P, Müller T, Franz M. Measuring facial mimicry: Affdex vs. EMG. PLoS One 2024;19:e0290569. PMID: 38165847; PMCID: PMC10760767; DOI: 10.1371/journal.pone.0290569.
Abstract
Facial mimicry is the automatic imitation of the facial affect expressions of others. It serves as an important component of interpersonal communication and affective co-experience. Facial mimicry has so far been measured by electromyography (EMG), which requires a complex measuring apparatus. Recently, software for measuring facial expressions has become available, but it is still unclear how well it is suited for measuring facial mimicry. This study investigates the comparability of the automated facial coding software Affdex with EMG for measuring facial mimicry. For this purpose, facial mimicry was induced in 33 subjects by presenting naturalistic affect-expressive video sequences (anger, joy). The subjects' responses were measured simultaneously by facial EMG (corrugator supercilii and zygomaticus major muscles) and by Affdex (action units lip corner puller and brow lowerer, and the affects joy and anger). Subsequently, the correlations between the EMG and Affdex measurements were calculated. After presentation of the joy stimulus, zygomaticus muscle activity (EMG) increased about 400 ms after stimulus onset, while joy and lip corner puller activity (Affdex) increased about 1200 ms after stimulus onset. The joy and lip corner puller activity detected by Affdex correlated significantly with the EMG activity. After presentation of the anger stimulus, corrugator muscle activity (EMG) likewise increased approximately 400 ms after stimulus onset, whereas anger and brow lowerer activity (Affdex) showed no response. During the entire measurement interval, anger and brow lowerer activity (Affdex) did not correlate with corrugator muscle activity (EMG). Using Affdex, the facial mimicry response to a joy stimulus can be measured, but it is detected approximately 800 ms later than by EMG. Thus, electromyography remains the tool of choice for studying subtle mimic processes such as facial mimicry.
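The roughly 800 ms detection delay reported in this abstract is the kind of offset one can estimate by cross-correlating the two measurement channels and taking the lag that maximizes their agreement. A minimal plain-Python sketch (the synthetic signals and the assumed 100 Hz sampling rate are illustrative, not values from the study):

```python
def best_lag(ref, sig, max_lag):
    """Return the non-negative lag (in samples) at which `sig` best
    matches `ref`, using a dot-product cross-correlation on
    mean-centered series. A positive result means `sig` trails `ref`."""
    def centered(x):
        m = sum(x) / len(x)
        return [v - m for v in x]

    ref_c, sig_c = centered(ref), centered(sig)
    best, best_score = 0, float("-inf")
    for lag in range(max_lag + 1):
        score = sum(ref_c[i] * sig_c[i + lag]
                    for i in range(len(ref_c) - lag))
        if score > best_score:
            best, best_score = lag, score
    return best

# Synthetic traces at an assumed 100 Hz: an "EMG" burst starting at
# 400 ms and a delayed "software" burst starting at 1200 ms.
fs = 100
emg = [0.0] * 300
emg[40:60] = [1.0] * 20
affdex = [0.0] * 300
affdex[120:140] = [1.0] * 20

lag_ms = best_lag(emg, affdex, 200) * 1000 // fs
print(lag_ms)  # 800
```

In practice one would apply this to rectified, smoothed EMG envelopes and the software's AU time series, but the lag-search logic is the same.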
Affiliation(s)
- Jan-Frederik Westermann
- Medical Faculty, Clinical Institute for Psychosomatic Medicine and Psychotherapy, University Hospital of the Heinrich-Heine-University, Düsseldorf, Germany
- Ralf Schäfer
- Medical Faculty, Clinical Institute for Psychosomatic Medicine and Psychotherapy, University Hospital of the Heinrich-Heine-University, Düsseldorf, Germany
- Marc Nordmann
- Medical Faculty, Clinical Institute for Psychosomatic Medicine and Psychotherapy, University Hospital of the Heinrich-Heine-University, Düsseldorf, Germany
- Peter Richter
- Medical Faculty, Clinical Institute for Psychosomatic Medicine and Psychotherapy, University Hospital of the Heinrich-Heine-University, Düsseldorf, Germany
- Tobias Müller
- Medical Faculty, Clinical Institute for Psychosomatic Medicine and Psychotherapy, University Hospital of the Heinrich-Heine-University, Düsseldorf, Germany
- Matthias Franz
- Medical Faculty, Clinical Institute for Psychosomatic Medicine and Psychotherapy, University Hospital of the Heinrich-Heine-University, Düsseldorf, Germany
6. Cheong JH, Jolly E, Xie T, Byrne S, Kenney M, Chang LJ. Py-Feat: Python facial expression analysis toolbox. Affect Sci 2023;4:781-796. PMID: 38156250; PMCID: PMC10751270; DOI: 10.1007/s42761-023-00191-4.
Abstract
Studying facial expressions is a notoriously difficult endeavor. Recent advances in the field of affective computing have yielded impressive progress in automatically detecting facial expressions from pictures and videos. However, much of this work has yet to be widely disseminated in social science domains such as psychology. Current state-of-the-art models require considerable domain expertise that is not traditionally incorporated into social science training programs. Furthermore, there is a notable absence of user-friendly and open-source software that provides a comprehensive set of tools and functions that support facial expression research. In this paper, we introduce Py-Feat, an open-source Python toolbox that provides support for detecting, preprocessing, analyzing, and visualizing facial expression data. Py-Feat makes it easy for domain experts to disseminate and benchmark computer vision models and also for end users to quickly process, analyze, and visualize facial expression data. We hope this platform will facilitate increased use of facial expression data in human behavior research.
Affiliation(s)
- Jin Hyun Cheong
- Computational Social and Affective Neuroscience Laboratory, Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH 03755 USA
- Eshin Jolly
- Computational Social and Affective Neuroscience Laboratory, Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH 03755 USA
- Tiankang Xie
- Computational Social and Affective Neuroscience Laboratory, Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH 03755 USA
- Department of Quantitative Biomedical Sciences, Geisel School of Medicine, Dartmouth College, Hanover, NH 03755 USA
- Sophie Byrne
- Computational Social and Affective Neuroscience Laboratory, Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH 03755 USA
- Matthew Kenney
- Computational Social and Affective Neuroscience Laboratory, Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH 03755 USA
- Luke J. Chang
- Computational Social and Affective Neuroscience Laboratory, Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH 03755 USA
- Department of Quantitative Biomedical Sciences, Geisel School of Medicine, Dartmouth College, Hanover, NH 03755 USA
7. Hsu CT, Sato W. Electromyographic validation of spontaneous facial mimicry detection using automated facial action coding. Sensors (Basel) 2023;23:9076. PMID: 38005462; PMCID: PMC10675524; DOI: 10.3390/s23229076.
Abstract
Although electromyography (EMG) remains the standard, researchers have begun using automated facial action coding system (FACS) software to evaluate spontaneous facial mimicry despite the lack of evidence of its validity. Using the facial EMG of the zygomaticus major (ZM) as a standard, we confirmed the detection of spontaneous facial mimicry in action unit 12 (AU12, lip corner puller) via automated FACS. Participants were alternately presented with real-time model performances and prerecorded videos of dynamic facial expressions, while the ZM signal and frontal facial videos were acquired simultaneously. The facial videos were scored for AU12 using FaceReader, Py-Feat, and OpenFace. The automated FACS was less sensitive and less accurate than facial EMG, but AU12 mimicking responses were significantly correlated with ZM responses. All three software programs detected enhanced facial mimicry during live performances. The AU12 time series showed a roughly 100 to 300 ms latency relative to the ZM. Our results suggest that while automated FACS cannot replace facial EMG in mimicry detection, it can be useful when effect sizes are large. Researchers should be cautious with automated FACS outputs, especially when studying clinical populations. In addition, developers should consider EMG validation of AU estimation as a benchmark.
Affiliation(s)
- Chun-Ting Hsu
- Psychological Process Research Team, Guardian Robot Project, RIKEN, Soraku-gun, Kyoto 619-0288, Japan
- Wataru Sato
- Psychological Process Research Team, Guardian Robot Project, RIKEN, Soraku-gun, Kyoto 619-0288, Japan
8. Guntinas-Lichius O, Trentzsch V, Mueller N, Heinrich M, Kuttenreich AM, Dobel C, Volk GF, Graßme R, Anders C. High-resolution surface electromyographic activities of facial muscles during the six basic emotional expressions in healthy adults: a prospective observational study. Sci Rep 2023;13:19214. PMID: 37932337; PMCID: PMC10628297; DOI: 10.1038/s41598-023-45779-9.
Abstract
High-resolution facial surface electromyography (HR-sEMG) is suited to discriminate between different facial movements. Whether HR-sEMG also allows discrimination among the six basic emotions of facial expression is unclear. Thirty-six healthy participants (53% female, 18-67 years) were included for four sessions. Electromyograms were recorded simultaneously from both sides of the face using a muscle-position-oriented electrode application (Fridlund scheme) and a landmark-oriented, muscle-unrelated symmetrical electrode arrangement (Kuramoto scheme). In each session, participants expressed the six basic emotions in response to standardized facial images expressing the corresponding emotions. This was repeated once on the same day, and both sessions were repeated two weeks later to assess repetition effects. HR-sEMG characteristics showed systematic regional distribution patterns of emotional muscle activation for both schemes, with very low interindividual variability. Statistical discrimination between the different HR-sEMG patterns was good for both schemes for most, but not all, basic emotions (ranging from p > 0.05 to mostly p < 0.001) when HR-sEMG of the entire face was used. When only information from the lower face was used, the Kuramoto scheme allowed a more reliable discrimination of all six emotions (all p < 0.001). A landmark-oriented HR-sEMG recording thus allows specific discrimination of facial muscle activity patterns during basic emotional expressions.
Affiliation(s)
- Orlando Guntinas-Lichius
- Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747, Jena, Germany
- Facial-Nerve-Center Jena, Jena University Hospital, Jena, Germany
- Center for Rare Diseases, Jena University Hospital, Jena, Germany
- Vanessa Trentzsch
- Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747, Jena, Germany
- Division Motor Research, Pathophysiology and Biomechanics, Department of Trauma, Hand and Reconstructive Surgery, Jena University Hospital, Friedrich-Schiller-University Jena, Jena, Germany
- Nadiya Mueller
- Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747, Jena, Germany
- Division Motor Research, Pathophysiology and Biomechanics, Department of Trauma, Hand and Reconstructive Surgery, Jena University Hospital, Friedrich-Schiller-University Jena, Jena, Germany
- Martin Heinrich
- Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747, Jena, Germany
- Facial-Nerve-Center Jena, Jena University Hospital, Jena, Germany
- Center for Rare Diseases, Jena University Hospital, Jena, Germany
- Anna-Maria Kuttenreich
- Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747, Jena, Germany
- Facial-Nerve-Center Jena, Jena University Hospital, Jena, Germany
- Center for Rare Diseases, Jena University Hospital, Jena, Germany
- Christian Dobel
- Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747, Jena, Germany
- Facial-Nerve-Center Jena, Jena University Hospital, Jena, Germany
- Gerd Fabian Volk
- Department of Otorhinolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Am Klinikum 1, 07747, Jena, Germany
- Facial-Nerve-Center Jena, Jena University Hospital, Jena, Germany
- Center for Rare Diseases, Jena University Hospital, Jena, Germany
- Roland Graßme
- Division Motor Research, Pathophysiology and Biomechanics, Department of Trauma, Hand and Reconstructive Surgery, Jena University Hospital, Friedrich-Schiller-University Jena, Jena, Germany
- Department of Prevention, Biomechanics, German Social Accident Insurance Institution for the Foodstuffs and Catering Industry, Erfurt, Germany
- Christoph Anders
- Division Motor Research, Pathophysiology and Biomechanics, Department of Trauma, Hand and Reconstructive Surgery, Jena University Hospital, Friedrich-Schiller-University Jena, Jena, Germany
9. Cheong JH, Molani Z, Sadhukha S, Chang LJ. Synchronized affect in shared experiences strengthens social connection. Commun Biol 2023;6:1099. PMID: 37898664; PMCID: PMC10613250; DOI: 10.1038/s42003-023-05461-2.
Abstract
People structure their days to experience events with others: we gather to eat meals, watch TV, and attend concerts together. What constitutes a shared experience, and how does it manifest in dyadic behavior? The present study investigates how shared experiences (measured through emotional, motoric, physiological, and cognitive alignment) promote social bonding. We recorded the facial expressions and electrodermal activity (EDA) of participants as they watched four episodes of a TV show, for a total of 4 h, with another participant. Participants displayed temporally synchronized and spatially aligned emotional facial expressions, and the degree of synchronization predicted the self-reported social connection ratings between viewing partners. We observed a similar pattern of results for dyadic physiological synchrony measured via EDA and for their cognitive impressions of the characters. All four of these factors (temporal synchrony of positive facial expressions, spatial alignment of expressions, EDA synchrony, and character impression similarity) contributed to a latent factor of shared experience that predicted social connection. Our findings suggest that interpersonal affiliation in shared experiences emerges from shared affective experiences comprising synchronous processes, and demonstrate that these complex interpersonal processes can be studied in a holistic, multi-modal framework leveraging naturalistic experimental designs.
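Temporal synchrony of the sort described in this abstract is commonly summarized as an average windowed Pearson correlation between two participants' expression time series. A minimal stdlib sketch (the window length and the two toy traces are illustrative assumptions, not the study's actual pipeline):

```python
def pearson(x, y):
    """Pearson correlation of two equal-length sequences."""
    n = len(x)
    mx, my = sum(x) / n, sum(y) / n
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
    vx = sum((a - mx) ** 2 for a in x)
    vy = sum((b - my) ** 2 for b in y)
    return cov / (vx * vy) ** 0.5 if vx and vy else 0.0

def windowed_synchrony(a, b, win):
    """Mean Pearson correlation over non-overlapping windows --
    one simple scalar index of dyadic synchrony."""
    rs = [pearson(a[i:i + win], b[i:i + win])
          for i in range(0, len(a) - win + 1, win)]
    return sum(rs) / len(rs)

# Two viewers whose "positive expression" traces rise and fall
# together score near +1; an unrelated pair would score near 0.
a = [0, 1, 2, 3, 2, 1, 0, 1, 2, 3, 2, 1]
b = [0, 2, 4, 6, 4, 2, 0, 2, 4, 6, 4, 2]
print(windowed_synchrony(a, b, 6))  # 1.0
```

Windowing makes the index sensitive to moment-to-moment co-fluctuation rather than to a shared global trend, which is the usual motivation for this design choice in synchrony research.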
Affiliation(s)
- Jin Hyun Cheong
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Zainab Molani
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Sushmita Sadhukha
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Luke J Chang
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
| |
Collapse
|
10
|
Burgess R, Culpin I, Costantini I, Bould H, Nabney I, Pearson RM. Quantifying the efficacy of an automated facial coding software using videos of parents. Front Psychol 2023; 14:1223806. [PMID: 37583610 PMCID: PMC10425266 DOI: 10.3389/fpsyg.2023.1223806]
Abstract
Introduction: This work explores the use of an automated facial coding software, FaceReader, as an alternative and/or complementary method to manual coding.
Methods: We used videos of parents (fathers, n = 36; mothers, n = 29) taken from the Avon Longitudinal Study of Parents and Children. The videos, obtained during real-life parent-infant interactions in the home, were coded both manually (using an existing coding scheme) and by FaceReader. We established a correspondence between the manual and automated coding categories (Positive, Neutral, Negative, and Surprise) before contingency tables were employed to examine the software's detection rate and quantify the agreement between manual and automated coding. Using binary logistic regression, we examined the predictive potential of FaceReader outputs in determining manually classified facial expressions. An interaction term was used to investigate the impact of parent gender on our models and to estimate its influence on predictive accuracy.
Results: We found that the automated facial detection rate was low (25.2% for fathers, 24.6% for mothers) compared to manual coding, and we discuss some potential explanations for this (e.g., poor lighting and facial occlusion). Our logistic regression analyses found that Surprise and Positive expressions had strong predictive capabilities, whereas Negative expressions performed poorly. Mothers' faces were more informative for predicting Positive and Neutral expressions, whereas fathers' faces were more informative for predicting Negative and Surprise expressions.
Discussion: We discuss the implications of our findings for future automated facial coding studies and emphasise the need to consider gender-specific influences in automated facial coding research.
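The agreement step described above boils down to comparing the two coders' label sequences; a standard summary is Cohen's kappa. A minimal sketch with hypothetical labels (not the study's data):

```python
from collections import Counter

def cohens_kappa(a, b):
    """Cohen's kappa: chance-corrected agreement between two raters' labels."""
    n = len(a)
    labels = set(a) | set(b)
    po = sum(x == y for x, y in zip(a, b)) / n          # observed agreement
    ca, cb = Counter(a), Counter(b)
    pe = sum(ca[l] * cb[l] for l in labels) / n ** 2    # chance agreement
    return (po - pe) / (1 - pe)

# Hypothetical manual vs automated codes for six video frames
manual    = ["Positive", "Neutral", "Negative", "Positive", "Neutral", "Surprise"]
automated = ["Positive", "Neutral", "Positive", "Positive", "Neutral", "Surprise"]
```

For these toy labels, observed agreement is 5/6 and chance agreement is 11/36, giving kappa = 0.76.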
Affiliation(s)
- R. Burgess: The Digital Health Engineering Group, Merchant Venturers Building, University of Bristol, Bristol, United Kingdom
- I. Culpin: The Centre for Academic Mental Health, Bristol Medical School, Bristol, United Kingdom; Florence Nightingale Faculty of Nursing, Midwifery and Palliative Care, King’s College London, London, United Kingdom
- I. Costantini: The Centre for Academic Mental Health, Bristol Medical School, Bristol, United Kingdom
- H. Bould: The Centre for Academic Mental Health, Bristol Medical School, Bristol, United Kingdom; The Medical Research Council Integrative Epidemiology Unit, University of Bristol, Bristol, United Kingdom; The Gloucestershire Health and Care NHS Foundation Trust, Gloucester, United Kingdom
- I. Nabney: The Digital Health Engineering Group, Merchant Venturers Building, University of Bristol, Bristol, United Kingdom
- R. M. Pearson: The Centre for Academic Mental Health, Bristol Medical School, Bristol, United Kingdom; The Department of Psychology, Manchester Metropolitan University, Manchester, United Kingdom

11
Ho MH, Kemp BT, Eisenbarth H, Rijnders RJP. Designing a neuroclinical assessment of empathy deficits in psychopathy based on the Zipper Model of Empathy. Neurosci Biobehav Rev 2023; 151:105244. [PMID: 37225061 DOI: 10.1016/j.neubiorev.2023.105244]
Abstract
The heterogeneity of the empathy literature highlights the multidimensional and dynamic nature of empathy and contributes to unclear descriptions of it in the context of psychopathology. The Zipper Model of Empathy integrates current theories of empathy and proposes that empathy maturity depends on whether contextual and personal factors push affective and cognitive processes together or apart. This concept paper therefore proposes a comprehensive battery of physiological and behavioral measures to empirically assess empathy processing according to this model, with an application to psychopathic personality. We propose the following measures to assess each component of the model: (1) facial electromyography; (2) the Emotion Recognition Task; (3) the Empathy Accuracy task together with physiological measures (e.g., heart rate); (4) a selection of Theory of Mind tasks and an adapted Dot Perspective Task; and (5) an adjusted Charity Task. Ultimately, we hope this paper serves as a starting point for discussion and debate on defining and assessing empathy processing, and encourages research to falsify and update this model to improve our understanding of empathy.
Affiliation(s)
- Man Him Ho: Danish Research Center for Magnetic Resonance, Kettegård Alle 30, 2650 Hvidovre, Capital Region, Denmark; Maastricht University, Psychology Neurosciences Department, Universiteitssingel 40, 6229 ER Maastricht, the Netherlands
- Benjamin Thomas Kemp: Maastricht University, Psychology Neurosciences Department, Universiteitssingel 40, 6229 ER Maastricht, the Netherlands
- Hedwig Eisenbarth: School of Psychology, Victoria University of Wellington, PO Box 600, Wellington 6140, New Zealand
- Ronald J P Rijnders: Netherlands Institute for Forensic Psychiatry and Psychology, Forensic Observation Clinic "Pieter Baan Centrum", Carl Barksweg 3, 1336 ZL, Almere, the Netherlands; Utrecht University, Faculty of Social Sciences, Department of Psychology, Heidelberglaan 8, 3584 CS, Utrecht, the Netherlands

12
Mena B, Torrico DD, Hutchings S, Ha M, Ashman H, Warner RD. Understanding consumer liking of beef patties with different firmness among younger and older adults using FaceReader™ and biometrics. Meat Sci 2023; 199:109124. [PMID: 36736127 DOI: 10.1016/j.meatsci.2023.109124]
Abstract
Sensory perceptions change as people age, and biometric analysis can be used to explore unconscious consumer responses. We investigated the effects of consumer age (younger, 22-52 years; older, 60-76 years) on facial expression responses (FERs) during consumption of beef patties with varying firmness (soft, medium, hard) and taste (with or without plum sauce). Video images were collected and FERs analysed using FaceReader™. Younger people exhibited higher intensity for happy/sad/scared and lower intensity for neutral/disgusted relative to older people. Interactions between age and texture/sauce showed little FER variation in older people, whereas younger people showed considerable FER variation. Younger people, but not older people, had the lowest intensity of happy FER and the highest intensity of angry FER for the hard patty. Sauce addition resulted in higher intensity of happy/contempt in younger consumers, but not in older consumers. FERs collected using FaceReader™ successfully differentiated the unconscious responses of younger and older consumers.
Affiliation(s)
- Behannis Mena: Faculty of Veterinary and Agricultural Sciences, School of Agriculture and Food, The University of Melbourne, Parkville, VIC 3010, Australia
- Damir Dennis Torrico: Faculty of Veterinary and Agricultural Sciences, School of Agriculture and Food, The University of Melbourne, Parkville, VIC 3010, Australia; Department of Wine, Food and Molecular Biosciences, Faculty of Agriculture and Life Sciences, Lincoln University, Lincoln 7647, New Zealand
- Scott Hutchings: Faculty of Veterinary and Agricultural Sciences, School of Agriculture and Food, The University of Melbourne, Parkville, VIC 3010, Australia; AgResearch, Food & Bio-based Products Group, Palmerston North 4442, New Zealand
- Minh Ha: Faculty of Veterinary and Agricultural Sciences, School of Agriculture and Food, The University of Melbourne, Parkville, VIC 3010, Australia
- Hollis Ashman: Faculty of Veterinary and Agricultural Sciences, School of Agriculture and Food, The University of Melbourne, Parkville, VIC 3010, Australia
- Robyn D Warner: Faculty of Veterinary and Agricultural Sciences, School of Agriculture and Food, The University of Melbourne, Parkville, VIC 3010, Australia

13
Song Y, Tao D, Luximon Y. In robot we trust? The effect of emotional expressions and contextual cues on anthropomorphic trustworthiness. Appl Ergon 2023; 109:103967. [PMID: 36736181 DOI: 10.1016/j.apergo.2023.103967]
Abstract
Following the evolution of technology and its application in various daily contexts, social robots act as advanced artificial intelligence (AI) systems that interact with humans. However, little research has examined the role of emotional expressions and contextual cues in shaping anthropomorphic trustworthiness, especially from a design perspective. To address this gap, the current study designed a specific robot prototype and conducted two lab experiments exploring the effect of emotional expressions and contextual cues on trustworthiness via a combination of subjective ratings and physiological measures. Results showed that: 1) positive (vs. negative) emotional expressions elicited a higher level of anthropomorphic trustworthiness and visual attention; and 2) regulatory fit was expanded in parasocial interaction and worked as a prime to activate anthropomorphic trustworthiness for social robots. Theoretical contributions and design implications are also discussed.
Affiliation(s)
- Yao Song: College of Literature and Journalism, Sichuan University, Chengdu, China; Convergence Laboratory of Chinese Cultural Inheritance and Global Communication, Sichuan University, Chengdu, China; School of Design, The Hong Kong Polytechnic University, Hung Hom, Hong Kong Special Administrative Region of China
- Da Tao: Institute of Human Factors and Ergonomics, College of Mechatronics and Control Engineering, Shenzhen University, Shenzhen, China
- Yan Luximon: School of Design, The Hong Kong Polytechnic University, Hung Hom, Hong Kong Special Administrative Region of China

14
Trentzsch V, Mueller N, Heinrich M, Kuttenreich AM, Guntinas-Lichius O, Volk GF, Anders C. Test-retest reliability of high-resolution surface electromyographic activities of facial muscles during facial expressions in healthy adults: A prospective observational study. Front Hum Neurosci 2023; 17:1126336. [PMID: 36992792 PMCID: PMC10040741 DOI: 10.3389/fnhum.2023.1126336]
Abstract
Objectives: Surface electromyography (sEMG) is a standard method in psycho-physiological research to evaluate emotional expressions and in clinical settings to analyze facial muscle function. High-resolution sEMG shows the best results for discriminating between different facial expressions. Nevertheless, the test-retest reliability of high-resolution facial sEMG has not yet been analyzed in detail, although good reliability is a necessary prerequisite for repeated clinical application.
Methods: Thirty-six healthy adult participants (53% female, 18–67 years) were included. Electromyograms were recorded from both sides of the face using an arrangement of electrodes oriented by the underlying topography of the facial muscles (Fridlund scheme) and, simultaneously, a geometric and symmetrical arrangement on the face (Kuramoto scheme). In one session, participants performed three trials of a standard set of facial expression tasks. Two sessions were performed on one day, and the two sessions were repeated two weeks later. Intraclass correlation coefficient (ICC) and coefficient of variation statistics were used to analyze intra-session, intra-day, and between-day reliability.
Results: Fridlund scheme, mean ICCs per electrode position: intra-session excellent (0.935–0.994), intra-day moderate to good (0.674–0.881), between-day poor to moderate (0.095–0.730). Mean ICCs per facial expression: intra-session excellent (0.933–0.991), intra-day moderate to good (0.674–0.903), between-day poor to moderate (0.385–0.679). Kuramoto scheme, mean ICCs per electrode position: intra-session excellent (0.957–0.970), intra-day good (0.751–0.908), between-day moderate (0.643–0.742). Mean ICCs per facial expression: intra-session excellent (0.927–0.991), intra-day good to excellent (0.762–0.973), between-day poor to good (0.235–0.868). The intra-session reliability of the two schemes was equal. Compared to the Fridlund scheme, the ICCs for intra-day and between-day reliability were always better for the Kuramoto scheme.
Conclusion: For repeated facial sEMG measurements of facial expressions, we recommend the Kuramoto scheme.
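Test-retest ICCs of this kind come from a two-way ANOVA decomposition of a subjects × sessions matrix. Below is a minimal numpy sketch of ICC(3,1) (two-way mixed effects, consistency, single measurement); whether this exact variant matches the study's computation is an assumption:

```python
import numpy as np

def icc_3_1(Y):
    """ICC(3,1) for an (n_subjects, k_sessions) matrix of scores."""
    Y = np.asarray(Y, dtype=float)
    n, k = Y.shape
    grand = Y.mean()
    ss_total = ((Y - grand) ** 2).sum()
    ss_rows = k * ((Y.mean(axis=1) - grand) ** 2).sum()   # between subjects
    ss_cols = n * ((Y.mean(axis=0) - grand) ** 2).sum()   # between sessions
    ms_rows = ss_rows / (n - 1)
    ms_err = (ss_total - ss_rows - ss_cols) / ((n - 1) * (k - 1))
    # Consistency ICC: session-level shifts do not count against reliability
    return (ms_rows - ms_err) / (ms_rows + (k - 1) * ms_err)
```

With a purely additive session effect (every subject shifts by the same amount between sessions), the consistency ICC is 1, which is exactly why this variant suits repeated-measurement designs.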
Affiliation(s)
- Vanessa Trentzsch: Division Motor Research, Pathophysiology and Biomechanics, Department of Trauma, Hand and Reconstructive Surgery, Jena University Hospital, Friedrich Schiller University Jena, Jena, Germany; Department of Otorhinolaryngology, Jena University Hospital, Friedrich Schiller University Jena, Jena, Germany
- Nadiya Mueller: Division Motor Research, Pathophysiology and Biomechanics, Department of Trauma, Hand and Reconstructive Surgery, Jena University Hospital, Friedrich Schiller University Jena, Jena, Germany; Department of Otorhinolaryngology, Jena University Hospital, Friedrich Schiller University Jena, Jena, Germany
- Martin Heinrich: Department of Otorhinolaryngology, Jena University Hospital, Friedrich Schiller University Jena, Jena, Germany; Facial-Nerve-Center Jena, Jena University Hospital, Jena, Germany; Center for Rare Diseases, Jena University Hospital, Jena, Germany
- Anna-Maria Kuttenreich: Department of Otorhinolaryngology, Jena University Hospital, Friedrich Schiller University Jena, Jena, Germany; Facial-Nerve-Center Jena, Jena University Hospital, Jena, Germany; Center for Rare Diseases, Jena University Hospital, Jena, Germany
- Orlando Guntinas-Lichius (correspondence; orcid.org/0000-0001-9671-0784): Department of Otorhinolaryngology, Jena University Hospital, Friedrich Schiller University Jena, Jena, Germany; Facial-Nerve-Center Jena, Jena University Hospital, Jena, Germany; Center for Rare Diseases, Jena University Hospital, Jena, Germany
- Gerd Fabian Volk: Department of Otorhinolaryngology, Jena University Hospital, Friedrich Schiller University Jena, Jena, Germany; Facial-Nerve-Center Jena, Jena University Hospital, Jena, Germany; Center for Rare Diseases, Jena University Hospital, Jena, Germany
- Christoph Anders: Division Motor Research, Pathophysiology and Biomechanics, Department of Trauma, Hand and Reconstructive Surgery, Jena University Hospital, Friedrich Schiller University Jena, Jena, Germany

15
Snoek L, Jack RE, Schyns PG, Garrod OG, Mittenbühler M, Chen C, Oosterwijk S, Scholte HS. Testing, explaining, and exploring models of facial expressions of emotions. Sci Adv 2023; 9:eabq8421. [PMID: 36763663 PMCID: PMC9916981 DOI: 10.1126/sciadv.abq8421]
Abstract
Models are the hallmark of mature scientific inquiry. In psychology, this maturity has been reached for a pervasive question: what models best represent facial expressions of emotion? Several hypotheses propose different combinations of facial movements (action units, AUs) as best representing the six basic emotions and four conversational signals across cultures. We developed a new framework to formalize such hypotheses as predictive models, compare their ability to predict human emotion categorizations in Western and East Asian cultures, explain the causal role of individual AUs, and explore updated, culture-accented models that improve performance by reducing a prevalent Western bias. Our predictive models also provide a noise ceiling to inform the explanatory power and limitations of different factors (e.g., AUs and individual differences). Thus, our framework provides a new approach to test models of social signals, explain their predictive power, and explore their optimization, with direct implications for theory development.
Affiliation(s)
- Lukas Snoek: Department of Psychology, University of Amsterdam, Amsterdam, Netherlands; School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
- Rachael E. Jack: School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
- Philippe G. Schyns: School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
- Maximilian Mittenbühler: Department of Psychology, University of Amsterdam, Amsterdam, Netherlands; Department of Computer Science, University of Tübingen, Tübingen, Germany
- Chaona Chen: School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
- Suzanne Oosterwijk: Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- H. Steven Scholte: Department of Psychology, University of Amsterdam, Amsterdam, Netherlands

16
Büdenbender B, Höfling TTA, Gerdes ABM, Alpers GW. Training machine learning algorithms for automatic facial coding: The role of emotional facial expressions' prototypicality. PLoS One 2023; 18:e0281309. [PMID: 36763694 PMCID: PMC9916590 DOI: 10.1371/journal.pone.0281309]
Abstract
Automatic facial coding (AFC) is a promising new research tool for efficiently analyzing emotional facial expressions. AFC is based on machine learning procedures that infer emotion categories from facial movements (i.e., action units). State-of-the-art AFC accurately classifies intense and prototypical facial expressions, whereas it is less accurate for non-prototypical and less intense facial expressions. A potential reason is that AFC is typically trained on standardized and prototypical facial expression inventories. Because AFC would also be useful for analyzing less prototypical research material, we set out to determine the role of prototypicality in the training material. We trained established machine learning algorithms either with standardized expressions from widely used research inventories or with unstandardized emotional facial expressions obtained in a typical laboratory setting, and tested them on identical or crossed-over material. All machine learning models' accuracies were comparable when trained and tested on held-out data from the same dataset (83.4% to 92.5%). Strikingly, we found a substantial drop in accuracy for models trained on the highly prototypical standardized dataset when tested on the unstandardized dataset (52.8% to 69.8%). However, when models were trained on unstandardized expressions and tested on the standardized datasets, accuracies held up (82.7% to 92.5%). These findings demonstrate a strong impact of the training material's prototypicality on AFC's ability to classify emotional faces. Because AFC would be useful for analyzing emotional facial expressions in research and even naturalistic scenarios, future developments should include more naturalistic facial expressions for training. This approach will improve the generalizability of AFC to more naturalistic facial expressions and increase robustness for future applications of this promising technology.
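The prototypicality effect reported above, strong in-domain accuracy that collapses on less prototypical material, can be reproduced in miniature with synthetic action-unit features. Everything here (feature count, intensities, nearest-centroid classifier) is an illustrative assumption, not the study's pipeline:

```python
import numpy as np

rng = np.random.default_rng(0)

def make_faces(n, intensity, noise):
    """Toy action-unit data: two 'emotions' with opposite mean AU patterns."""
    labels = rng.integers(0, 2, n)
    means = np.where(labels[:, None] == 1, intensity, -intensity)
    return means + rng.normal(0.0, noise, (n, 4)), labels

def nearest_centroid(train_X, train_y, test_X):
    """Classify each test row by its nearer class centroid."""
    c0 = train_X[train_y == 0].mean(axis=0)
    c1 = train_X[train_y == 1].mean(axis=0)
    d0 = ((test_X - c0) ** 2).sum(axis=1)
    d1 = ((test_X - c1) ** 2).sum(axis=1)
    return (d1 < d0).astype(int)

# "Prototypical" set: intense, clean expressions; "naturalistic": subtle ones
Xp, yp = make_faces(400, intensity=2.0, noise=1.0)
Xn, yn = make_faces(400, intensity=0.5, noise=1.0)

pred_in = nearest_centroid(Xp[:200], yp[:200], Xp[200:])
pred_cross = nearest_centroid(Xp[:200], yp[:200], Xn)
acc_in = (pred_in == yp[200:]).mean()     # held-out data, same dataset
acc_cross = (pred_cross == yn).mean()     # tested on less prototypical data
```

With these settings the in-domain accuracy is near ceiling while the cross-domain accuracy drops noticeably, mirroring the reported pattern.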
Affiliation(s)
- Björn Büdenbender: Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Tim T. A. Höfling: Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Antje B. M. Gerdes: Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Georg W. Alpers: Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany

17
Moulds DJ, Meyer J, McLean JF, Kempe V. Exploring effects of response biases in affect induction procedures. PLoS One 2023; 18:e0285706. [PMID: 37167316 PMCID: PMC10174507 DOI: 10.1371/journal.pone.0285706]
Abstract
This study examined whether self-reports or ratings of experienced affect, often used as manipulation checks on the efficacy of affect induction procedures (AIPs), reflect genuine changes in affective states rather than response biases arising from demand characteristics or social desirability effects. In a between-participants design, participants were exposed to positive, negative, and neutral images paired with valence-congruent music or sound to induce happy, sad, and neutral moods. Half of the participants had to actively appraise each image, whereas the other half viewed the images passively. We hypothesised that if ratings of affective valence are subject to response biases, then they should reflect the target mood in the same way for active appraisal and passive exposure, since participants encountered the same affective stimuli in both conditions. We also tested whether the AIP produced mood-congruent changes in facial expressions analysed by FaceReader, to see whether behavioural indicators corroborate the self-reports. The results showed that while participants' ratings reflected the induced target valence, the difference between the positive and negative AIPs was significantly attenuated in the active appraisal condition, suggesting that self-reports of mood experienced after an AIP are not entirely a reflection of response biases. However, there were no effects of the AIP on FaceReader valence scores, in line with theories questioning the existence of cross-culturally and inter-individually universal behavioural indicators of affective states. The efficacy of AIPs is therefore best checked using self-reports.
Affiliation(s)
- David J Moulds: Division of Psychology, School of Applied Sciences, Abertay University, Dundee, United Kingdom
- Jona Meyer: Division of Psychology, School of Applied Sciences, Abertay University, Dundee, United Kingdom
- Janet F McLean: Division of Psychology, School of Applied Sciences, Abertay University, Dundee, United Kingdom
- Vera Kempe: Division of Psychology, School of Applied Sciences, Abertay University, Dundee, United Kingdom

18
Höfling TTA, Alpers GW. Automatic facial coding predicts self-report of emotion, advertisement and brand effects elicited by video commercials. Front Neurosci 2023; 17:1125983. [PMID: 37205049 PMCID: PMC10185761 DOI: 10.3389/fnins.2023.1125983]
Abstract
Introduction: Consumers' emotional responses are a prime target of marketing commercials. Facial expressions provide information about a person's emotional state, and technological advances have enabled machines to decode them automatically.
Method: Using automatic facial coding, we investigated the relationships between facial movements (i.e., action unit activity) and self-reports of emotion, advertisement effects, and brand effects elicited by commercials. We recorded and analyzed the facial responses of 219 participants while they watched a broad array of video commercials.
Results: Facial expressions significantly predicted self-reports of emotion as well as advertisement and brand effects. Interestingly, facial expressions had incremental value beyond self-reported emotion in predicting advertisement and brand effects. Hence, automatic facial coding appears useful as a non-verbal quantification of advertisement effects beyond self-report.
Discussion: This is the first study to measure a broad spectrum of automatically scored facial responses to video commercials. Automatic facial coding is a promising non-invasive and non-verbal method for measuring emotional responses in marketing.
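The incremental-value claim above is the classic hierarchical-regression pattern: adding facial predictors raises the variance explained beyond a self-report-only model. A minimal numpy sketch with simulated data (the variable names, coefficients, and data-generating assumptions are all hypothetical):

```python
import numpy as np

rng = np.random.default_rng(1)

def r_squared(X, y):
    """In-sample R^2 of an ordinary least-squares fit (with intercept)."""
    X1 = np.column_stack([np.ones(len(y)), X])
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1.0 - resid.var() / y.var()

n = 300
self_report = rng.normal(size=(n, 1))   # rated emotion
au_activity = rng.normal(size=(n, 2))   # facial action-unit scores
# Hypothetical outcome driven by both sources (coefficients are assumptions)
brand_effect = 0.6 * self_report[:, 0] + 0.4 * au_activity[:, 0] + rng.normal(0, 0.5, n)

r2_base = r_squared(self_report, brand_effect)
r2_full = r_squared(np.hstack([self_report, au_activity]), brand_effect)
increment = r2_full - r2_base           # incremental validity of the AUs
```

A positive increment (here driven by the simulated AU contribution) is the signature of incremental validity; in practice one would test it with an F-test on the nested models.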
19
Wang Y, Tian J, Yang Q. Tai Chi exercise improves working memory capacity and emotion regulation ability. Front Psychol 2023; 14:1047544. [PMID: 36874821 PMCID: PMC9983368 DOI: 10.3389/fpsyg.2023.1047544]
Abstract
Purpose: The study aimed to examine the promoting effects of Tai Chi exercise on working memory capacity and emotion regulation ability among college students.
Methods: Fifty-five participants were recruited and randomly divided into a Tai Chi group and a control group. The Tai Chi group underwent 12 weeks of Tai Chi training as the intervention, while the control group performed non-cognitive traditional sports at the same exercise intensity. The visual 2-back test of action pictures and the Geneva emotional picture system test were administered before and after the trial to examine whether the action memory involved in Tai Chi training can improve individuals' working memory capacity and emotion regulation ability.
Results: After 12 weeks, a significant difference was observed in accuracy rate (AR) (F = 54.89, p ≤ 0.001) and response time (RT) (F = 99.45, p ≤ 0.001) of visual memory capacity between the Tai Chi group and the control group. Significant effects of time (F = 98.62, p ≤ 0.001), group (F = 21.43, p ≤ 0.001), and the group × time interaction (F = 50.81, p ≤ 0.001) on AR of visual memory capacity were observed. The same pattern was observed for RT of visual memory capacity: time (F = 67.21, p ≤ 0.001), group (F = 45.68, p ≤ 0.001), group × time interaction (F = 79.52, p ≤ 0.001). Post hoc analysis showed that at the end of 12 weeks, participants in the Tai Chi group had significantly higher visual memory capacity than those in the control group (p < 0.05). After 12 weeks, valence difference (F = 11.49, p ≤ 0.001), arousal difference (F = 10.17, p ≤ 0.01), and dominance difference (F = 13.30, p ≤ 0.001) in the emotion response were significantly different between the control group and the Tai Chi group.
The effects of time (F = 7.28, p < 0.01), group (F = 4.16, p < 0.05), and group × time (F = 10.16, p < 0.01) on valence differences were significant after the 12-week intervention; post hoc analysis showed valence swings in the Tai Chi group were significantly lower than in the control group (p < 0.05). The effects of time (F = 5.18, p < 0.05), group (F = 7.26, p < 0.01), and group × time (F = 4.23, p < 0.05) on arousal differences were significant; post hoc analysis showed arousal fluctuations in the Tai Chi group were also significantly lower than in the control group (p < 0.01). Similarly, the effects of time (F = 7.92, p < 0.01), group (F = 5.82, p < 0.05), and group × time (F = 10.26, p < 0.01) on dominance differences were significant; dominance swings in the Tai Chi group were significantly lower than in the control group (p < 0.001).
Conclusion: The data support our speculation that action memory training in Tai Chi exercise may improve individuals' working memory capacity and, in turn, their emotion regulation ability, providing insightful information for customized exercise programs for emotion regulation in adolescents. We therefore suggest that adolescents experiencing volatile moods and poor emotion regulation attend regular Tai Chi classes, which could contribute to their emotional health.
Affiliation(s)
- Yi Wang: School of Physical Education, Weinan Normal University, Weinan, China
- Jing Tian: School of Foreign Languages, Weinan Normal University, Weinan, China
- Qingxuan Yang: Department of Physical Education, Chang'an University, Xi'an, China

20
Gerostathi M, Doukakis S. Proposal for Monitoring Students' Self-Efficacy Using Neurophysiological Measures and Self-Report Scales. Adv Exp Med Biol 2023; 1425:635-643. [PMID: 37581837 DOI: 10.1007/978-3-031-31986-0_62]
Abstract
The role of STEM (science, technology, engineering, mathematics) education is internationally recognized as critical both to the personal development of students and to their future contribution to a country's economy, as this education equips them with necessary twenty-first-century skills. As a result, there is a need to study how such education affects students; in particular, the study of the self-efficacy factor contributes in this direction. Self-efficacy is a fundamental concept in the learning process, as it contributes to shaping learning outcomes. Self-report scales are commonly used to measure self-efficacy; however, concerns have been raised in research circles regarding their limitations. On the other hand, there is growing research interest in neurophysiological measures in the field of education, which seem to offer promising possibilities for understanding learning. Therefore, to better determine the impact of STEM education on students, a combination of self-report scales and neurophysiological measures is proposed to measure self-efficacy.
21
Leppanen J, Patsalos O, Surguladze S, Kerr-Gaffney J, Williams S, Tchanturia K. Evaluation of film stimuli for the assessment of social-emotional processing: a pilot study. PeerJ 2022; 10:e14160. [PMID: 36444380 PMCID: PMC9700451 DOI: 10.7717/peerj.14160]
Abstract
Background Difficulties in top-down and bottom-up emotion generation have been proposed to play a key role in the progression of psychiatric disorders. The aim of the current study was to develop more ecologically valid measures of top-down interpretation biases and bottom-up evoked emotional responses. Methods A total of 124 healthy female participants aged 18-25 took part in the study. We evaluated two sets of 18 brief film clips. The first set presented ambiguous social situations designed to examine interpretation biases. Participants provided written interpretations of each ambiguous film clip, which were subjected to sentiment analysis. We compared the films in terms of the valence of participants' interpretations. The second set presented neutral and emotionally provoking social scenarios designed to elicit subjective and facial emotional responses. While participants viewed these film clips, their mood ratings and facial affect were recorded and analysed using exploratory factor analyses. Results Most of the 18 ambiguous film clips were interpreted in the expected manner while still retaining some ambiguity. However, participants were more attuned to the negative cues in the ambiguous film clips, and three film clips were identified as unambiguous. These film clips were deemed unsuitable for assessing interpretation bias. The exploratory factor analyses of participants' mood ratings and evoked facial affect showed that the positive and negative emotionally provoking film clips formed their own factors, as expected. However, there was substantial cross-loading of the neutral film clips when participants' facial expression data were analysed. Discussion A subset of the film clips from the two tasks could be used to assess top-down interpretation biases and bottom-up evoked emotional responses. Ambiguous negatively valenced film clips should have more subtle negative cues to avoid ceiling effects and to ensure there is enough room for interpretation.
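The sentiment-analysis step described above can be sketched with a minimal lexicon-based valence scorer. This is a toy illustration only: the word lists and the scoring rule are assumptions, and the study's actual sentiment tool is not specified in the abstract.

```python
# Toy lexicon-based valence scoring of written film-clip interpretations.
# The word lists are illustrative; a real analysis would use a validated
# sentiment lexicon or model.
POSITIVE = {"happy", "friendly", "warm", "joking", "kind"}
NEGATIVE = {"angry", "hostile", "threatening", "sad", "mocking"}

def valence_score(interpretation: str) -> float:
    """Return a score in [-1, 1]: fraction of positive minus negative words."""
    words = interpretation.lower().split()
    if not words:
        return 0.0
    pos = sum(w.strip(".,!?") in POSITIVE for w in words)
    neg = sum(w.strip(".,!?") in NEGATIVE for w in words)
    return (pos - neg) / len(words)

# A clip whose mean score across participants stays near zero would count
# as ambiguous; a strongly negative mean suggests an unambiguous clip.
scores = [valence_score(t) for t in
          ["they seemed friendly and warm", "he looked angry and hostile"]]
```

In practice one would average such scores per film clip across all written interpretations and compare the distributions between clips.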
Affiliation(s)
- Jenni Leppanen
- Department of Neuroimaging, King’s College London, University of London, London, United Kingdom
- Olivia Patsalos
- Department of Psychological Medicine, King’s College London, University of London, London, United Kingdom
- Sophie Surguladze
- Department of Psychological Medicine, King’s College London, University of London, London, United Kingdom
- Jess Kerr-Gaffney
- Department of Psychological Medicine, King’s College London, University of London, London, United Kingdom
- Steven Williams
- Department of Neuroimaging, King’s College London, University of London, London, United Kingdom
- Ketevan Tchanturia
- Department of Psychological Medicine, King’s College London, University of London, London, United Kingdom
- South London and Maudsley NHS Foundation Trust National Eating Disorder Service, London, United Kingdom
- Psychology Department, Illia State University, Tbilisi, Georgia
|
22
|
Lu L, Xie Z, Wang H, Li L, Xu X. Mental stress and safety awareness during human-robot collaboration - Review. APPLIED ERGONOMICS 2022; 105:103832. [PMID: 35772289 DOI: 10.1016/j.apergo.2022.103832] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/06/2021] [Revised: 06/14/2022] [Accepted: 06/15/2022] [Indexed: 06/15/2023]
Abstract
Human-robot collaboration (HRC) is an emerging research area that has gained tremendous attention in both academia and industry. Yet the fact that humans and robots share the workplace has raised safety concerns. In particular, the mental stress and safety awareness of human teammates during HRC remain unclear but are of great importance to workplace safety. In this manuscript, we reviewed twenty-five studies to understand the relationships between HRC and workers' mental stress or safety awareness. Specifically, we aimed to understand: (1) robot-related factors that may affect human workers' mental stress or safety awareness, (2) measurements that could be used to evaluate workers' mental stress in HRC, and (3) methods for measuring safety awareness that have been adopted or could be applied in HRC. According to our literature review, robot-related factors including robot characteristics, social touching, and trajectory are related to workers' mental stress or safety awareness. Each of the mental stress and safety awareness measures discussed has its own validity and rationale. Additionally, we discuss potential co-robot actions to lower mental stress or improve safety awareness, as well as future implications.
Affiliation(s)
- Lu Lu
- Edward P. Fitts Department of Industrial and Systems Engineering, North Carolina State University, Raleigh, NC 27695, USA
- Ziyang Xie
- Edward P. Fitts Department of Industrial and Systems Engineering, North Carolina State University, Raleigh, NC 27695, USA
- Hanwen Wang
- Edward P. Fitts Department of Industrial and Systems Engineering, North Carolina State University, Raleigh, NC 27695, USA
- Li Li
- Edward P. Fitts Department of Industrial and Systems Engineering, North Carolina State University, Raleigh, NC 27695, USA
- Xu Xu
- Edward P. Fitts Department of Industrial and Systems Engineering, North Carolina State University, Raleigh, NC 27695, USA
|
23
|
Korosec-Serfaty M, Riedl R, Sénécal S, Léger PM. Attentional and Behavioral Disengagement as Coping Responses to Technostress and Financial Stress: An Experiment Based on Psychophysiological, Perceptual, and Behavioral Data. Front Neurosci 2022; 16:883431. [PMID: 35903805 PMCID: PMC9314858 DOI: 10.3389/fnins.2022.883431] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2022] [Accepted: 06/10/2022] [Indexed: 11/15/2022] Open
Abstract
Discontinuance of information systems (IS) is a common phenomenon. It is thus critical to understand the decision process and psychophysiological mechanisms that underlie the intention, and the corresponding behaviors, to discontinue IS use, particularly in the digital financial technology context, where continuance rates remain low despite increased adoption. Discontinuance has been identified as one coping behavior used to avoid stressful situations. However, research has not yet explored this phenomenon in the context of digital financial technologies. This manuscript builds upon a pilot study that investigated the combined influence of technostress and financial stress on users’ responses to digital financial decision-making tasks, and aims to disentangle the specific impacts of unexpected technology behaviors and perceived financial loss on attentional and behavioral disengagement as coping responses, which may lead to discontinuance of digital financial technology use. A two-factor within-subject design was developed, in which perceived techno-unreliability (variable system response time delays under time pressure) and perceived financial loss (negative financial outcomes) were manipulated in a 3 × 2 design. Psychophysiological, perceptual, and behavioral data were collected from N = 15 participants while they performed an adapted version of the Iowa Gambling Task.
The results indicate that unexpected technology behaviors have a far greater impact than perceived financial loss on (1) physiological arousal and emotional valence, demonstrated by decreased skin conductance levels and curvilinear emotional valence responses, (2) feedback processing and decision-making, corroborated by curvilinear negative heart rate (BPM) and positive heart rate variability (HRV) responses, decreased skin conductance level (SCL), increased perceptions of system unresponsiveness and techno-unreliability, and mental workload, (3) attentional disengagement supported by curvilinear HRV and decreased SCL, and (4) behavioral disengagement as coping response, represented by curvilinear decision time and increasingly poor financial decision quality. Overall, these results suggest a feedforward and feedback loop of cognitive and affective mechanisms toward attentional and behavioral disengagement, which may lead to a decision of disengagement-discontinuance as a coping outcome in stressful human-computer interaction situations.
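The cardiac measures mentioned above (heart rate in BPM and heart rate variability) are typically derived from inter-beat (RR) intervals. A minimal sketch follows, using RMSSD as one common time-domain HRV index; the study's specific HRV metric and preprocessing are not stated in the abstract, so this is illustrative only.

```python
import math

def heart_rate_bpm(rr_ms):
    """Mean heart rate in beats per minute from RR intervals given in ms."""
    mean_rr = sum(rr_ms) / len(rr_ms)
    return 60000.0 / mean_rr

def rmssd(rr_ms):
    """Root mean square of successive RR differences (time-domain HRV index)."""
    diffs = [b - a for a, b in zip(rr_ms, rr_ms[1:])]
    return math.sqrt(sum(d * d for d in diffs) / len(diffs))

rr = [800, 810, 790, 805, 795]  # hypothetical RR intervals in milliseconds
```

Higher RMSSD generally reflects greater parasympathetic influence; changes in such indices across task blocks are what abstracts like this one summarize as "HRV responses".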
Affiliation(s)
- René Riedl
- Digital Business, School of Business and Management, University of Applied Sciences Upper Austria, Steyr, Austria
- Institute of Business Informatics - Information Engineering, Johannes Kepler University Linz, Linz, Austria
|
24
|
Postural Correlates of Pollution Perception. Brain Sci 2022; 12:brainsci12070869. [PMID: 35884676 PMCID: PMC9313123 DOI: 10.3390/brainsci12070869] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/07/2022] [Revised: 06/08/2022] [Accepted: 06/28/2022] [Indexed: 11/17/2022] Open
Abstract
In our contemporary societies, environmental issues are increasingly important. A growing number of studies explore the biological processes involved in environment perception and, in particular, try to highlight the mechanisms underlying the perception of environmental scenes by our brain. The main objective of the present study was to establish whether the visualization of clean and polluted environmental scenes leads to differential postural reactions. Our hypothesis was based on a differential postural modulation when the subject is confronted with images representing a “polluted” environment, a modulation that has been reported in previous studies in response to the visualization of painful compared with non-painful scenes. Thirty-one subjects participated in this study. Physiological measurements [heart rate variability (HRV) and electrodermal activity] and postural responses [center of pressure (COP) displacements] were recorded in response to the perception of polluted or clean environmental scenes. We show, for the first time, that images representing polluted scenes evoke a weaker approach movement than images representing clean scenes. The displacement of the COP along the anteroposterior axis reflects avoidance when subjects visualize “polluted” scenes. Our results demonstrate a clear distinction between “clean” and “polluted” environments according to the postural changes they induce, correlated with the ratings of pleasure and approach evoked by the images.
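The approach/avoidance index described here can be sketched as the shift in mean anteroposterior COP position between a baseline window and the image-viewing window. This is a simplified illustration; the study's actual windows, sampling, and filtering are not given in the abstract.

```python
def ap_shift(cop_ap_baseline, cop_ap_viewing):
    """Mean anteroposterior COP shift (viewing minus baseline), in the
    recording's units. Positive values indicate a forward lean (approach);
    negative values indicate a backward lean (avoidance)."""
    mean = lambda xs: sum(xs) / len(xs)
    return mean(cop_ap_viewing) - mean(cop_ap_baseline)

# Hypothetical COP samples (mm along the anteroposterior axis).
clean = ap_shift([0.0, 0.1, -0.1], [1.2, 1.0, 1.1])     # stronger approach
polluted = ap_shift([0.0, 0.1, -0.1], [0.2, 0.1, 0.3])  # weaker approach
```

Comparing such shifts between image categories (and correlating them with pleasure/approach ratings) is the kind of analysis the abstract summarizes.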
|
25
|
Bekendam MT, Mommersteeg PMC, Vermeltfoort IAC, Widdershoven JW, Kop WJ. Facial Emotion Expression and the Inducibility of Myocardial Ischemia During Cardiac Stress Testing: The Role of Psychological Background Factors. Psychosom Med 2022; 84:588-596. [PMID: 35420591 DOI: 10.1097/psy.0000000000001085] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVE Negative emotional states, such as anger and anxiety, are associated with the onset of myocardial infarction and other acute clinical manifestations of ischemic heart disease. The likelihood of experiencing these short-term negative emotions has been associated with long-term psychological background factors such as depression, generalized anxiety, and personality factors. We examined the association of acute emotional states preceding cardiac stress testing (CST) with the inducibility of myocardial ischemia, and to what extent psychological background factors account for this association. METHODS Emotional states were assessed in patients undergoing CST (n = 210; mean [standard deviation] age = 66.9 [8.2] years; 91 [43%] women) using self-report measures and video recordings of facial emotion expression. Video recordings were analyzed for expressed anxiety, anger, sadness, and happiness before CST. Psychological background factors were assessed with validated questionnaires. Single-photon emission computed tomography was used to evaluate the inducibility of ischemia. RESULTS Ischemia occurred in 72 patients (34%). Emotional states were not associated with subsequent inducibility of ischemia during CST (odds ratios between 0.93 and 1.04; p values > .50). Psychological background factors were also not associated with ischemia (odds ratios between 0.96 and 1.06 per scale unit; p values > .20) and did not account for the associations of emotional states with ischemia. CONCLUSIONS Emotional states immediately before CST and psychological background factors were not associated with the inducibility of ischemia. These findings indicate that the well-documented association of negative emotions with acute clinical manifestations of ischemic heart disease requires a different explanation than a reduced threshold for inducible ischemia.
Affiliation(s)
- Maria T Bekendam
- From the Center of Research on Psychology in Somatic Diseases (CoRPS) (Bekendam, Mommersteeg, Widdershoven, Kop); Department of Medical and Clinical Psychology (Bekendam, Mommersteeg, Widdershoven, Kop), Tilburg University; Department of Nuclear Medicine (Vermeltfoort), Institute Verbeeten; Department of Cardiology (Widdershoven), Elizabeth-TweeSteden Hospital; Tilburg, the Netherlands
|
26
|
Höfling TTA, Alpers GW, Büdenbender B, Föhl U, Gerdes ABM. What's in a face: Automatic facial coding of untrained study participants compared to standardized inventories. PLoS One 2022; 17:e0263863. [PMID: 35239654 PMCID: PMC8893617 DOI: 10.1371/journal.pone.0263863] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Accepted: 01/28/2022] [Indexed: 11/19/2022] Open
Abstract
Automatic facial coding (AFC) is a novel research tool for automatically analyzing emotional facial expressions. AFC can classify emotional expressions with high accuracy in standardized picture inventories of intensively posed and prototypical expressions. However, classification of facial expressions of untrained study participants is more error-prone. This discrepancy calls for a direct comparison between these two sources of facial expressions. To this end, 70 untrained participants were asked to express joy, anger, surprise, sadness, disgust, and fear in a typical laboratory setting. Recorded videos were scored with a well-established AFC software (FaceReader, Noldus Information Technology) and compared with AFC measures of standardized pictures from 70 trained actors (i.e., standardized inventories). We report the probability estimates of specific emotion categories and, in addition, Action Unit (AU) profiles for each emotion. Based on this, we used a novel machine learning approach to determine the relevant AUs for each emotion, separately for both datasets. First, misclassification was more frequent for some emotions of untrained participants. Second, AU intensities were generally lower in pictures of untrained participants than in standardized pictures for all emotions. Third, although the profiles of relevant AUs overlapped substantially across the two datasets, there were also notable differences. This research provides evidence that the application of AFC is not limited to standardized facial expression inventories but can also be used to code facial expressions of untrained participants in a typical laboratory setting.
Affiliation(s)
- T. Tim A. Höfling
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Georg W. Alpers
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Björn Büdenbender
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Ulrich Föhl
- Business School, Pforzheim University of Applied Sciences, Pforzheim, Germany
- Antje B. M. Gerdes
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
|
27
|
Yan F, Wu N, Iliyasu AM, Kawamoto K, Hirota K. Framework for identifying and visualising emotional atmosphere in online learning environments in the COVID-19 Era. APPL INTELL 2022; 52:9406-9422. [PMID: 35013647 PMCID: PMC8731199 DOI: 10.1007/s10489-021-02916-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 10/12/2021] [Indexed: 01/13/2023]
Abstract
In addition to the almost five million lives lost and many millions more hospitalised, efforts to mitigate the spread of the COVID-19 pandemic, which has disrupted every aspect of human life, deserve the contributions of all and sundry. Education is one of the areas most affected by the COVID-imposed abhorrence of physical (i.e., face-to-face (F2F)) communication. Consequently, schools, colleges, and universities worldwide have been forced to transition to different forms of online and virtual learning. Unlike F2F classes, where instructors can monitor and adjust lessons and content in tandem with learners’ perceived emotions and engagement, in online learning environments (OLE) such tasks are daunting to undertake. In our modest contribution to ameliorating the disruptions to education caused by the pandemic, this study presents an intuitive model to monitor the concentration, understanding, and engagement expected of a productive classroom environment. The proposed apposite OLE (i.e., AOLE) provides an intelligent 3D visualisation of the classroom atmosphere (CA), which could assist instructors in adjusting and tailoring both content and instruction for maximum delivery. Furthermore, each learner’s status can be tracked by visualising his/her emotion curve at any stage of the lesson or learning cycle. Considering the enormous emotional and psychological toll caused by COVID and the attendant shift to OLE, the emotion curves could be progressively compared through the duration of the learning cycle and the semester to track learners’ performance through to the final examinations. In terms of learning within the CA, our proposed AOLE is assessed with a class of 15 students and three instructors. Correlation of the reported outcomes with those from administered questionnaires validates the potential of our proposed model as a support for learning and counselling during these unprecedented times in which we find ourselves.
Affiliation(s)
- Fei Yan
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China
- Nan Wu
- Graduate School of Science and Engineering, Chiba University, Chiba, Japan
- Abdullah M. Iliyasu
- College of Engineering, Prince Sattam Bin Abdulaziz University, Al-Kharj, Saudi Arabia
- School of Computing, Tokyo Institute of Technology, Tokyo, Japan
- Kazuhiko Kawamoto
- Graduate School of Science and Engineering, Chiba University, Chiba, Japan
- Kaoru Hirota
- School of Computing, Tokyo Institute of Technology, Tokyo, Japan
|
28
|
Yan B, Wang FC, Ma TS, Liu YZ, Liu W, Cheng L, Wang ZY, Wang ZK, Liu CY. Efficacy and safety of electroacupuncture treatment in the prevention of negative moods in healthy young men after 30 h of total sleep deprivation: study protocol for a single-center, single-blind, parallel-arm, randomized clinical trial. Trials 2021; 22:761. [PMID: 34724966 PMCID: PMC8559366 DOI: 10.1186/s13063-021-05659-x] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/21/2021] [Accepted: 09/27/2021] [Indexed: 11/22/2022] Open
Abstract
Background Sleep deprivation (SD) among young adults is a major public health concern. In humans, it has adverse effects on mood and results in serious health problems. Faced with SD, people may take precautionary measures to reduce their risk. The aim of this study is to evaluate the efficacy and safety of electroacupuncture (EA) for the prevention of negative moods after SD. In addition, we will compare the effects of EA on mood after SD at different time points. Methods This randomized controlled trial (RCT) will be performed at the First Affiliated Hospital of Changchun University of Chinese Medicine in China. The Standards for Reporting Interventions in Clinical Trials of Acupuncture 2010 will be strictly adhered to. Forty-two healthy male volunteers will be distributed into an acupoints electroacupuncture (AE) group, a non-acupoints electroacupuncture (NAE) control group, or a blank control group. This trial will comprise a 1-week baseline (baseline sleep), a 1-week preventative treatment period, 30 h of total sleep deprivation (TSD), and a 24-h follow-up period after waking. During the preventative treatment period, participants in the AE and NAE control groups will receive EA treatment once daily for 1 week. Participants in the blank control group will not receive any treatment. The primary outcome will be the Profile of Mood States (POMS) Scale. Secondary outcome measures will include changes in the Noldus FaceReader (a tool for automatic analysis of facial expressions) and the Positive and Negative Affect Schedule (PANAS) Scale. Total sleep deprivation will last 30 h, during which participants will be subjected to 11 assessment sessions. Adverse events will be recorded. Discussion This study is designed to evaluate the efficacy and safety of EA for the prevention of negative moods after SD. The results of this trial will allow us to compare the effects of EA on mood after SD at different time points. The findings from this trial will be published in peer-reviewed journals. Trial registration Chinese Clinical Trial Registry Chi2000039713. Registered on 06 November 2020
Affiliation(s)
- Bing Yan
- School of Acupuncture-Moxibustion and Tuina, Changchun University of Chinese Medicine, Changchun, China
- Fu-Chun Wang
- Department of Acupuncture, The Affiliated Hospital of Changchun University of Chinese Medicine, Changchun, China
- Tian-Shu Ma
- Innovative Practice Center, Changchun University of Chinese Medicine, Changchun, China
- Yan-Ze Liu
- School of Acupuncture-Moxibustion and Tuina, Changchun University of Chinese Medicine, Changchun, China
- Wu Liu
- School of Acupuncture-Moxibustion and Tuina, Changchun University of Chinese Medicine, Changchun, China
- Lei Cheng
- School of Acupuncture-Moxibustion and Tuina, Changchun University of Chinese Medicine, Changchun, China
- Zi-Yuan Wang
- School of Acupuncture-Moxibustion and Tuina, Changchun University of Chinese Medicine, Changchun, China
- Zhong-Ke Wang
- School of Acupuncture-Moxibustion and Tuina, Changchun University of Chinese Medicine, Changchun, China
- Cheng-Yu Liu
- School of Rehabilitation Medicine, Changchun University of Chinese Medicine, Changchun, China
|
29
|
Schumann NP, Bongers K, Scholle HC, Guntinas-Lichius O. Atlas of voluntary facial muscle activation: Visualization of surface electromyographic activities of facial muscles during mimic exercises. PLoS One 2021; 16:e0254932. [PMID: 34280246 PMCID: PMC8289121 DOI: 10.1371/journal.pone.0254932] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/10/2021] [Accepted: 07/06/2021] [Indexed: 12/29/2022] Open
Abstract
Complex facial muscle movements are essential for many motoric and emotional functions. Facial muscles are unique in the musculoskeletal system as they are interwoven, so that the contraction of one muscle influences the contractile characteristics of other mimic muscles. The facial muscles act more as a whole than as single muscle movements. The standard method in clinical and psychosocial experiments for detecting these complex interactions is surface electromyography (sEMG). What is missing is an atlas showing which facial muscles are activated during specific tasks. Based on high-resolution sEMG data from 10 facial muscles on both sides of the face, recorded simultaneously during 29 different facial muscle tasks, an atlas visualizing voluntary facial muscle activation was developed. For each task, the mean normalized EMG amplitudes of the examined facial muscles were visualized by colors spread between the lowest and highest EMG activity. Gray shades represent no to very low EMG activity, light and dark brown shades represent low to medium activity, and red shades represent high to very high activity, relative to each task. The present atlas should become a helpful tool for designing sEMG experiments not only for clinical trials and psychological experiments but also for speech therapy and orofacial rehabilitation studies.
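The atlas's color scheme can be sketched as a within-task min-max normalization of mean sEMG amplitudes followed by bucketing into the shade categories described. The 0.2/0.6 cutoffs below are assumptions for illustration, not the authors' actual thresholds.

```python
def shade(norm_amp: float) -> str:
    """Map a 0-1 normalized EMG amplitude to a shade category.
    The cutoffs (0.2, 0.6) are hypothetical, chosen only to illustrate
    the gray/brown/red scheme described in the abstract."""
    if norm_amp < 0.2:
        return "gray"    # no to very low activity
    if norm_amp < 0.6:
        return "brown"   # low to medium activity
    return "red"         # high to very high activity

def task_shades(amplitudes):
    """Min-max normalize mean amplitudes within one task, one shade per muscle."""
    lo, hi = min(amplitudes), max(amplitudes)
    span = (hi - lo) or 1.0  # guard against a flat task
    return [shade((a - lo) / span) for a in amplitudes]

# Hypothetical mean amplitudes (µV) for three muscles during one task.
print(task_shades([5.0, 12.0, 40.0]))
```

Because normalization is per task, the shades encode relative activation within each exercise, which is exactly why the atlas colors are described as relative to each task.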
Affiliation(s)
- Nikolaus P. Schumann
- Division Motor Research, Pathophysiology and Biomechanics, Department of Trauma, Hand and Reconstructive Surgery, Jena University Hospital, Friedrich-Schiller-University Jena, Jena, Germany
- Kevin Bongers
- Division Motor Research, Pathophysiology and Biomechanics, Department of Trauma, Hand and Reconstructive Surgery, Jena University Hospital, Friedrich-Schiller-University Jena, Jena, Germany
- Hans C. Scholle
- Division Motor Research, Pathophysiology and Biomechanics, Department of Trauma, Hand and Reconstructive Surgery, Jena University Hospital, Friedrich-Schiller-University Jena, Jena, Germany
- Orlando Guntinas-Lichius
- Department of Otolaryngology, Jena University Hospital, Friedrich-Schiller-University Jena, Jena, Germany
|
30
|
Küntzler T, Höfling TTA, Alpers GW. Automatic Facial Expression Recognition in Standardized and Non-standardized Emotional Expressions. Front Psychol 2021; 12:627561. [PMID: 34025503 PMCID: PMC8131548 DOI: 10.3389/fpsyg.2021.627561] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2020] [Accepted: 03/11/2021] [Indexed: 12/22/2022] Open
Abstract
Emotional facial expressions can inform researchers about an individual's emotional state. Recent technological advances open up new avenues to automatic Facial Expression Recognition (FER). Based on machine learning, such technology can tremendously increase the amount of processed data. FER is now easily accessible and has been validated for the classification of standardized prototypical facial expressions. However, its applicability to more naturalistic facial expressions remains uncertain. Hence, we test and compare the performance of three FER systems (Azure Face API, Microsoft; Face++, Megvii Technology; FaceReader, Noldus Information Technology) with human emotion recognition (A) for standardized posed facial expressions (from prototypical inventories) and (B) for non-standardized acted facial expressions (extracted from emotional movie scenes). For the standardized images, all three systems classify basic emotions accurately (FaceReader is most accurate) and are mostly on par with human raters. For the non-standardized stimuli, performance drops markedly for all three systems, but Azure still performs similarly to humans. In addition, systems and humans alike tend to misclassify some of the non-standardized emotional facial expressions as neutral. In sum, automated facial expression recognition can be an attractive alternative to human emotion recognition for standardized and non-standardized emotional facial expressions. However, we also found limitations in accuracy for specific facial expressions; clearly, there is a need for thorough empirical evaluation to guide future developments in computer vision of emotional facial expressions.
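Benchmarking such systems against human raters reduces to comparing classification accuracy on the same labeled stimuli, plus counting the neutral-misclassification errors the abstract highlights. A minimal sketch (labels and predictions below are hypothetical, not the study's data):

```python
def accuracy(true_labels, predicted):
    """Fraction of stimuli classified correctly."""
    hits = sum(t == p for t, p in zip(true_labels, predicted))
    return hits / len(true_labels)

def misclassified_as(true_labels, predicted, target="neutral"):
    """Count errors where a non-target expression was labeled as the target,
    e.g. emotional expressions misread as neutral."""
    return sum(t != target and p == target
               for t, p in zip(true_labels, predicted))

# Hypothetical ground truth and one system's predictions.
truth = ["happy", "angry", "sad", "fear"]
system = ["happy", "neutral", "sad", "neutral"]
```

Running both functions per system (and for the human raters) on the same stimulus set yields the kind of comparison the study reports.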
Affiliation(s)
- Theresa Küntzler
- Department of Politics and Public Administration, Center for Image Analysis in the Social Sciences, Graduate School of Decision Science, University of Konstanz, Konstanz, Germany
- T Tim A Höfling
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Georg W Alpers
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
|
31
|
Höfling TTA, Alpers GW, Gerdes ABM, Föhl U. Automatic facial coding versus electromyography of mimicked, passive, and inhibited facial response to emotional faces. Cogn Emot 2021; 35:874-889. [PMID: 33761825 DOI: 10.1080/02699931.2021.1902786] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/07/2023]
Abstract
Decoding someone's facial expressions provides insights into his or her emotional experience. Recently, Automatic Facial Coding (AFC) software has been developed to provide measurements of emotional facial expressions. Previous studies provided first evidence for the sensitivity of such systems to detect facial responses in study participants. In the present experiment, we set out to generalise these results to affective responses as they can occur in variable social interactions. Thus, we presented facial expressions (happy, neutral, angry) and instructed participants (N = 64) either to actively mimic them, to look at them passively (n = 21), or to inhibit their own facial reaction (n = 22). A video stream for AFC and an electromyogram (EMG) of the zygomaticus and corrugator muscles were registered continuously. In the mimicking condition, both AFC and EMG differentiated well between facial responses to the different emotional pictures. In the passive viewing and inhibition conditions, AFC did not detect changes in facial expressions, whereas EMG remained highly sensitive. Although only EMG is sensitive when participants intend to conceal their facial reactions, these data extend previous findings that Automatic Facial Coding is a promising tool for the detection of intense facial reactions.
Affiliation(s)
- T Tim A Höfling
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Business School, Pforzheim University of Applied Sciences, Pforzheim, Germany
- Georg W Alpers
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Antje B M Gerdes
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Ulrich Föhl
- Business School, Pforzheim University of Applied Sciences, Pforzheim, Germany
|