1.
Kim H, Kwak S, Yoo SY, Lee EC, Park S, Ko H, Bae M, Seo M, Nam G, Lee JY. Facial Expressions Track Depressive Symptoms in Old Age. Sensors (Basel) 2023; 23:7080. [PMID: 37631616; PMCID: PMC10459725; DOI: 10.3390/s23167080]
Abstract
Facial expressions play a crucial role in the diagnosis of mental illnesses characterized by mood changes. The Facial Action Coding System (FACS) is a comprehensive framework that systematically categorizes and captures even subtle changes in facial appearance, enabling the examination of emotional expressions. In this study, we investigated the association between facial expressions and depressive symptoms in a sample of 59 older adults without cognitive impairment. Using the FACS and the Korean version of the Beck Depression Inventory-II, we analyzed both "posed" and "spontaneous" facial expressions across six basic emotions: happiness, sadness, fear, anger, surprise, and disgust. Through principal component analysis, we summarized 17 action units across these emotion conditions. Subsequently, multiple regression analyses were performed to identify specific facial expression features that explain depressive symptoms. Our findings revealed several distinct features of posed and spontaneous facial expressions. Specifically, among older adults with higher depressive symptoms, a posed face exhibited a downward and inward pull at the corner of the mouth, indicative of sadness. In contrast, a spontaneous face displayed raised and narrowed inner brows, a pattern associated with more severe depressive symptoms in older adults. These findings suggest that facial expressions can provide valuable insights for assessing depressive symptoms in older adults.
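The pipeline this abstract describes (summarizing 17 action units with principal component analysis, then regressing depression scores on the component scores) can be sketched roughly as follows. All values are synthetic stand-ins; only the dimensions (59 participants, 17 action units) come from the abstract, and the number of retained components is an assumption.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-ins: 59 participants x 17 FACS action-unit features and
# BDI-II scores (values are random; real features come from coded video).
au = rng.random((59, 17))
bdi = rng.integers(0, 40, size=59).astype(float)

# PCA via SVD on the centered feature matrix.
centered = au - au.mean(axis=0)
U, S, Vt = np.linalg.svd(centered, full_matrices=False)
scores = centered @ Vt[:3].T                  # keep the first 3 components

# Multiple regression: BDI-II ~ intercept + principal-component scores.
X = np.column_stack([np.ones(len(bdi)), scores])
beta, *_ = np.linalg.lstsq(X, bdi, rcond=None)
pred = X @ beta                               # fitted depressive-symptom scores
```

The regression coefficients in `beta` would then indicate which summarized expression features track symptom severity.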
Affiliation(s)
- Hairin Kim
- Department of Psychiatry, Seoul National University College of Medicine & SMG-SNU Boramae Medical Center, Seoul 07061, Republic of Korea
- Seyul Kwak
- Department of Psychology, Pusan National University, Busan 46241, Republic of Korea
- So Young Yoo
- Department of Psychiatry, Seoul National University College of Medicine & SMG-SNU Boramae Medical Center, Seoul 07061, Republic of Korea
- Eui Chul Lee
- Department of Human-Centered Artificial Intelligence, Sangmyung University, Hongjimun 2-Gil 20, Jongno-Gu, Seoul 03016, Republic of Korea
- Soowon Park
- Division of Teacher Education, College of General Education for Truth, Sincerity and Love, Kyonggi University, Suwon 16227, Republic of Korea
- Hyunwoong Ko
- Samsung Advanced Institute for Health Sciences and Technology (SAIHST), Sungkyunkwan University, Samsung Medical Center, Seoul 06355, Republic of Korea
- Interdisciplinary Program in Cognitive Science, Seoul National University, Seoul 08826, Republic of Korea
- Minju Bae
- Interdisciplinary Program in Cognitive Science, Seoul National University, Seoul 08826, Republic of Korea
- Myogyeong Seo
- Department of Psychiatry, Seoul National University College of Medicine & SMG-SNU Boramae Medical Center, Seoul 07061, Republic of Korea
- Gieun Nam
- Department of Psychiatry, Seoul National University College of Medicine & SMG-SNU Boramae Medical Center, Seoul 07061, Republic of Korea
- Jun-Young Lee
- Department of Psychiatry, Seoul National University College of Medicine & SMG-SNU Boramae Medical Center, Seoul 07061, Republic of Korea
2.
Mai HN, Win TT, Tong MS, Lee CH, Lee KB, Kim SY, Lee HW, Lee DH. Three-dimensional morphometric analysis of facial units in virtual smiling facial images with different smile expressions. J Adv Prosthodont 2023; 15:1-10. [PMID: 36908751; PMCID: PMC9992697; DOI: 10.4047/jap.2023.15.1.1]
Abstract
PURPOSE The accuracy of image matching between resting and smiling facial models is affected by the stability of the reference surfaces. This study aimed to investigate the morphometric variations in subdivided facial units during resting, posed, and spontaneous smiling. MATERIALS AND METHODS The posed and spontaneous smiling faces of 33 adults were digitized and registered to the resting faces. The morphological changes of subdivided facial units at the forehead (upper and lower central, upper and lower lateral, and temple), nasal (dorsum, tip, lateral wall, and alar lobules), and chin (central and lateral) regions were assessed by measuring the 3D mesh deviations between the smiling and resting facial models. One-way analysis of variance, Duncan post hoc tests, and Student's t-tests were used to determine the differences among the groups (α = .05). RESULTS The smallest morphometric changes were observed at the upper and central forehead and nasal dorsum, whereas the largest deviation was found at the nasal alar lobules in both the posed and spontaneous smiles (P < .001). The spontaneous smile generally resulted in larger facial unit changes than the posed smile, and significant differences were observed at the alar lobule, central chin, and lateral chin units (P < .001). CONCLUSION The upper and central forehead and nasal dorsum are reliable areas for image matching between resting and smiling 3D facial images. The central chin area can be considered an additional reference area for posed smiles; however, special caution should be taken when selecting this area as a reference for spontaneous smiles.
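The deviation measurement described above (per-vertex 3D distances between the registered smiling and resting models, averaged within subdivided facial units) can be sketched as below. The vertex count and the unit label assignment are invented for illustration; real meshes would come from digitized facial scans.

```python
import numpy as np

rng = np.random.default_rng(4)
rest = rng.random((500, 3)) * 100               # vertices of the resting-face model (mm)
smile = rest + rng.normal(0, 0.5, (500, 3))     # same vertices after smile registration

# Per-vertex 3D deviation between the registered smiling and resting models.
dev = np.linalg.norm(smile - rest, axis=1)

# Mean deviation per facial unit, given one unit label per vertex
# (11 units: the forehead, nasal, and chin subdivisions in the paper).
units = np.arange(500) % 11                     # hypothetical label assignment
unit_mean = np.array([dev[units == u].mean() for u in range(11)])
```

Units with the smallest mean deviation (here, whichever index of `unit_mean` is lowest) are the candidates for stable registration references.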
Affiliation(s)
- Hang-Nga Mai
- Institute for Translational Research in Dentistry, School of Dentistry, Kyungpook National University, Daegu, Republic of Korea; Dental School of Hanoi University of Business and Technology, Hanoi, Vietnam
- Thaw Thaw Win
- Department of Prosthodontics, School of Dentistry, Kyungpook National University, Daegu, Republic of Korea
- Minh Son Tong
- School of Dentistry, Hanoi Medical University, Hanoi, Vietnam
- Cheong-Hee Lee
- Department of Prosthodontics, School of Dentistry, Kyungpook National University, Daegu, Republic of Korea
- Kyu-Bok Lee
- Department of Prosthodontics, School of Dentistry, Kyungpook National University, Daegu, Republic of Korea
- So-Yeun Kim
- Department of Prosthodontics, School of Dentistry, Kyungpook National University, Daegu, Republic of Korea
- Hyun-Woo Lee
- Department of Oral and Maxillofacial Surgery, Uijeongbu Eulji Medical Center, Eulji University School of Dentistry, Uijeongbu, Republic of Korea
- Du-Hyeong Lee
- Institute for Translational Research in Dentistry, School of Dentistry, Kyungpook National University, Daegu, Republic of Korea; Department of Prosthodontics, School of Dentistry, Kyungpook National University, Daegu, Republic of Korea
3.
Straulino E, Scarpazza C, Sartori L. What is missing in the study of emotion expression? Front Psychol 2023; 14:1158136. [PMID: 37179857; PMCID: PMC10173880; DOI: 10.3389/fpsyg.2023.1158136]
Abstract
As the 150th anniversary of "The Expression of the Emotions in Man and Animals" approaches, scientists' conclusions on emotion expression are still debated. Emotion expression has traditionally been anchored to prototypical and mutually exclusive facial expressions (e.g., anger, disgust, fear, happiness, sadness, and surprise). However, people express emotions in nuanced patterns and, crucially, not everything is in the face. In recent decades, considerable work has critiqued this classical view, calling for a more fluid and flexible approach that considers how humans dynamically perform genuine expressions with their bodies in context. A growing body of evidence suggests that each emotional display is a complex, multi-component, motoric event. The human face is never static, but continuously acts and reacts to internal and environmental stimuli, with the coordinated action of muscles throughout the body. Moreover, two anatomically and functionally different neural pathways subserve voluntary and involuntary expressions. An interesting implication is that we have distinct and independent pathways for genuine and posed facial expressions, and different combinations may occur across the vertical facial axis. Investigating the time course of these facial blends, which can be controlled consciously only in part, is providing a useful operational test for comparing the predictions of various models on the lateralization of emotions. This concise review identifies shortcomings and new challenges in the study of emotion expression at the face, body, and contextual levels, eventually calling for a theoretical and methodological shift in the study of emotions. We contend that the most feasible way to address the complex world of emotion expression is to define a new and more complete approach to emotional investigation. This approach can potentially lead us to the roots of emotional display, and to the individual mechanisms underlying its expression (i.e., individual emotional signatures).
Affiliation(s)
- Elisa Straulino
- Department of General Psychology, University of Padova, Padova, Italy
- Cristina Scarpazza
- Department of General Psychology, University of Padova, Padova, Italy
- IRCCS San Camillo Hospital, Venice, Italy
- Luisa Sartori
- Department of General Psychology, University of Padova, Padova, Italy
- Padova Neuroscience Center, University of Padova, Padova, Italy
4.
Smile Reanimation with Masseteric-to-Facial Nerve Transfer plus Cross-Face Nerve Grafting in Patients with Segmental Midface Paresis: 3D Retrospective Quantitative Evaluation. Symmetry (Basel) 2022. [DOI: 10.3390/sym14122570]
Abstract
Facial paresis involves functional and aesthetic problems with altered and asymmetric movement patterns. Surgical procedures and physical therapy can effectively reanimate the muscles. From our database, 10 patients (18–50 years) suffering from unilateral segmental midface paresis and rehabilitated by a masseteric-to-facial nerve transfer combined with a cross-face facial nerve graft, followed by physical therapy, were retrospectively analyzed. Standardized labial movements were measured using an optoelectronic motion capture system. Maximum teeth clenching, spontaneous smiles, and lip protrusion (kiss movement) were recorded before and after surgery (21 ± 13 months). Preoperatively, during the maximum smile, the paretic side moved less than the healthy one (23.2 vs. 28.7 mm; activation ratio 69%, asymmetry index 18%). Postoperatively, no differences in total mobility were found. The activation ratio and the asymmetry index differed significantly (without/with teeth clenching: ratio 65% vs. 92%, p = 0.016; asymmetry index 21% vs. 5%, p = 0.016). Postoperatively, the mobility of spontaneous smiles was significantly reduced (healthy side, 25.1 vs. 17.2 mm, p = 0.043; paretic side, 16.8 vs. 12.2 mm, p = 0.043), without modification of the activation ratio or asymmetry index. Postoperatively, the paretic-side kiss movement was significantly reduced (27 vs. 19.9 mm, p = 0.028). Overall, the treatment helped balance the displacements between the two sides of the face, producing more symmetric movements.
5.
Dobreva D, Gkantidis N, Halazonetis D, Verna C, Kanavakis G. Smile Reproducibility and Its Relationship to Self-Perceived Smile Attractiveness. Biology (Basel) 2022; 11:719. [PMID: 35625447; PMCID: PMC9138875; DOI: 10.3390/biology11050719]
Abstract
The reproducibility of facial expressions has been explored previously; however, there is no detailed information regarding the reproducibility of the lip morphology forming a social smile. In this study, we recruited 93 young adults, aged 21–35 years, who agreed to participate in two consecutive study visits four weeks apart. On each visit, they were asked to perform a social smile, which was captured on a 3D facial image acquired using the 3dMD camera system. Self-perceived smile attractiveness was assessed using a visual analogue scale (VAS). Lip morphology, including smile shape, was described using 62 landmarks and semi-landmarks. A Procrustes superimposition of each set of smiling configurations (first and second visit) was performed, and the Euclidean distance between each landmark set was calculated. A linear regression model was used to test the association between smile consistency and self-perceived smile attractiveness. The results show that the average landmark distance between sessions did not exceed 1.5 mm, indicating high repeatability, and that females presented approximately 15% higher smile consistency than males (p < 0.05). There was no statistically significant association between smile consistency and self-perceived smile attractiveness (η2 = 0.015; p = 0.252) when controlling for the effects of sex and age.
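The consistency measure described here (Procrustes superimposition of the two visits' landmark configurations, then the mean per-landmark Euclidean distance) might be implemented along these lines. The landmark data below are random stand-ins, and the reflection case of the rotation fit is ignored for brevity.

```python
import numpy as np

def procrustes_distance(a, b):
    """Superimpose b onto a (translation, scale, rotation) and return the
    mean per-landmark Euclidean distance, as in a Procrustes comparison."""
    a = a - a.mean(axis=0)                    # remove translation
    b = b - b.mean(axis=0)
    a = a / np.linalg.norm(a)                 # remove scale
    b = b / np.linalg.norm(b)
    U, _, Vt = np.linalg.svd(b.T @ a)         # optimal rotation (Kabsch method)
    R = U @ Vt
    return np.linalg.norm(a - b @ R, axis=1).mean()

rng = np.random.default_rng(1)
visit1 = rng.random((62, 3))                        # 62 (semi-)landmarks in 3D
visit2 = visit1 + rng.normal(0, 0.01, (62, 3))      # slight between-visit change
d = procrustes_distance(visit1, visit2)             # smaller = more consistent smile
```

Note that after the scale step the distances are in shape-space units rather than millimeters; the paper's reported 1.5 mm threshold implies a comparison in the original metric space.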
Affiliation(s)
- Denitsa Dobreva
- Department of Pediatric Oral Health and Orthodontics, University Center for Dental Medicine UZB, University of Basel, Mattenstrasse 40, 4058 Basel, Switzerland
- Nikolaos Gkantidis
- Department of Orthodontics and Dentofacial Orthopedics, University of Bern, 3001 Bern, Switzerland
- Demetrios Halazonetis
- Department of Orthodontics, School of Dentistry, National and Kapodistrian University of Athens, GR-11527 Athens, Greece
- Carlalberta Verna
- Department of Pediatric Oral Health and Orthodontics, University Center for Dental Medicine UZB, University of Basel, Mattenstrasse 40, 4058 Basel, Switzerland
- Georgios Kanavakis
- Department of Pediatric Oral Health and Orthodontics, University Center for Dental Medicine UZB, University of Basel, Mattenstrasse 40, 4058 Basel, Switzerland
- Department of Orthodontics, Tufts University School of Dental Medicine, Boston, MA 02111, USA
6.
Interpolated Stand Properties of Urban Forest Parks Account for Posted Facial Expressions of Visitors. Sustainability 2022. [DOI: 10.3390/su14073817]
Abstract
Posted facial expressions on social networks have been used as a gauge to assess the emotional perceptions of urban forest visitors. This approach may be limited by the randomness of visitor numbers and park locations, which may not be accounted for by the range of data in local tree inventories. Spatial interpolation can be used to predict stand characteristics and detect their relationship with posted facial expressions. Shaoguan served as the study area: a tree inventory was used to extract data from 74 forest stands (each sized 30 m × 20 m), and its range was extended by interpolating the stand characteristics of another 12 urban forest parks. Visitors smiled more in parks in regions with a high population or a large built-up area, where trees had strong trunks and dense canopies. People who displayed sad faces were more likely to visit parks located in regions of hilly mountains or farmland, where soils had greater total nitrogen concentrations and organic matter. Our study illustrates a successful case of using data from a local tree inventory to predict the stand characteristics of forest parks that attracted frequent visits.
7.
Reply: Using Artificial Intelligence to Measure Facial Expression following Facial Reanimation Surgery. Plast Reconstr Surg 2022; 149:594e-595e. [PMID: 35089289; DOI: 10.1097/prs.0000000000008867]
8.
Webster PJ, Wang S, Li X. Review: Posed vs. Genuine Facial Emotion Recognition and Expression in Autism and Implications for Intervention. Front Psychol 2021; 12:653112. [PMID: 34305720; PMCID: PMC8300960; DOI: 10.3389/fpsyg.2021.653112]
Abstract
Different styles of social interaction are one of the core characteristics of autism spectrum disorder (ASD). Social differences among individuals with ASD often include difficulty in discerning the emotions of neurotypical people based on their facial expressions. This review first covers the rich body of literature studying differences in facial emotion recognition (FER) in those with ASD, including behavioral studies and neurological findings. In particular, we highlight subtle emotion recognition and various factors related to inconsistent findings in behavioral studies of FER in ASD. Then, we discuss the dual problem of FER, namely facial emotion expression (FEE): the production of facial expressions of emotion. Although less studied, FEE is equally important, because social interaction involves both the ability to recognize emotions and the ability to produce appropriate facial expressions. How others perceive facial expressions of emotion in those with ASD remains an under-researched area. Finally, we propose a method for teaching FER [the FER teaching hierarchy (FERTH)] based on recent research investigating FER in ASD, considering the use of posed vs. genuine emotions and static vs. dynamic stimuli. We also propose two possible teaching approaches: (1) a standard method of teaching progressively from simple drawings and cartoon characters to more complex audio-visual video clips of genuine human expressions of emotion with context clues, or (2) teaching in a field of images that includes posed and genuine emotions to improve generalizability before progressing to more complex audio-visual stimuli. Lastly, we advocate for autism interventionists to use FER stimuli developed primarily for research purposes, to facilitate the incorporation of well-controlled stimuli to teach FER and bridge the gap between intervention and research in this area.
Affiliation(s)
- Paula J Webster
- Department of Chemical and Biomedical Engineering, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV, United States
- Shuo Wang
- Department of Chemical and Biomedical Engineering, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV, United States
- Xin Li
- Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV, United States
9.
Jia S, Wang S, Hu C, Webster PJ, Li X. Detection of Genuine and Posed Facial Expressions of Emotion: Databases and Methods. Front Psychol 2021; 11:580287. [PMID: 33519600; PMCID: PMC7844089; DOI: 10.3389/fpsyg.2020.580287]
Abstract
Facial expressions of emotion play an important role in human social interactions. However, posed expressions of emotion are not always the same as genuine feelings. Recent research has found that facial expressions are increasingly used as a tool for managing social interactions rather than for conveying personal emotions. Therefore, the credibility assessment of facial expressions, namely, discriminating genuine (spontaneous) expressions from posed (deliberate/volitional/deceptive) ones, is a crucial yet challenging task in facial expression understanding. With recent advances in computer vision and machine learning, rapid progress has been made in the automatic detection of genuine and posed facial expressions. This paper presents a general review of the relevant research, including several spontaneous vs. posed (SVP) facial expression databases and various computer-vision-based detection methods. In addition, a variety of factors that influence the performance of SVP detection methods are discussed, along with open issues and technical challenges in this nascent field.
Affiliation(s)
- Shan Jia
- State Key Laboratory of Information Engineering in Surveying, Mapping and Remote Sensing, Wuhan University, Wuhan, China; Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV, United States
- Shuo Wang
- Department of Chemical and Biomedical Engineering, West Virginia University, Morgantown, WV, United States
- Chuanbo Hu
- Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV, United States
- Paula J Webster
- Department of Chemical and Biomedical Engineering, West Virginia University, Morgantown, WV, United States
- Xin Li
- Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV, United States
10.
Lee K, Lee EC. Siamese Architecture-Based 3D DenseNet with Person-Specific Normalization Using Neutral Expression for Spontaneous and Posed Smile Classification. Sensors (Basel) 2020; 20:7184. [PMID: 33333873; PMCID: PMC7765265; DOI: 10.3390/s20247184]
Abstract
Clinical studies have demonstrated that spontaneous and posed smiles differ spatiotemporally in facial muscle movements, such as laterally asymmetric movements that recruit different facial muscles. In this study, we developed a model that classifies videos of the two smile types using a 3D convolutional neural network (CNN) in a Siamese architecture, with the neutral expression as a reference input. The proposed model makes the following contributions. First, it mitigates the problem caused by differences in appearance between individuals, because it learns the spatiotemporal differences between an individual's neutral expression and their spontaneous and posed smiles. Second, using a neutral expression as an anchor improves model accuracy compared with the conventional method of using genuine and imposter pairs. Third, anchoring on a neutral-expression image makes a fully automated classification system for spontaneous and posed smiles possible. In addition, visualizations were designed for the Siamese architecture-based 3D CNN to analyze the accuracy improvement, and to compare the proposed and conventional methods through feature analysis using principal component analysis (PCA).
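The person-specific normalization idea in this abstract can be illustrated in miniature: features of a smile clip are expressed relative to the same person's neutral-expression anchor before any classification. The embeddings below are random vectors standing in for learned 3D-CNN features, and the scoring step is a placeholder, not the paper's classifier.

```python
import numpy as np

rng = np.random.default_rng(3)

# Hypothetical per-clip embeddings (in the paper these come from a 3D DenseNet;
# here they are random vectors, purely to illustrate the pairing logic).
neutral = rng.random(16)            # anchor: the same person's neutral expression
smile = neutral + rng.random(16)    # embedding of a smile clip by that person

# Person-specific normalization: describe the smile relative to the neutral
# anchor, so identity-related appearance differences largely cancel out.
delta = smile - neutral

# A Siamese head would then score this difference (here, simply its L2 norm)
# and a downstream classifier would map it to spontaneous vs. posed.
score = np.linalg.norm(delta)
```

The key design choice is that each comparison is within-person: the network never has to separate "who this is" from "how this smile moves".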
Affiliation(s)
- Kunyoung Lee
- Department of Computer Science, Graduate School, Sangmyung University, Hongjimun 2-Gil 20, Jongno-Gu, Seoul 03016, Korea
- Eui Chul Lee
- Department of Human-Centered Artificial Intelligence, Sangmyung University, Hongjimun 2-Gil 20, Jongno-Gu, Seoul 03016, Korea
- Correspondence: Tel.: +82-2-781-7553