1
Wegrzyn M, Münst L, König J, Dinter M, Kissler J. Observer-generated maps of diagnostic facial features enable categorization and prediction of emotion expressions. Acta Psychol (Amst) 2024; 251:104569. [PMID: 39488877] [DOI: 10.1016/j.actpsy.2024.104569]
Abstract
According to one prominent model, facial expressions of emotion can be categorized as depicting happiness, disgust, anger, sadness, fear or surprise. One open question is which facial features observers use to recognize the different expressions and whether the features indicated by observers can be used to predict which expression they saw. We created fine-grained maps of diagnostic facial features by asking participants to use mouse clicks to highlight those parts of a face that they deem useful for recognizing its expression. We tested how well the resulting maps align with models of emotion expressions (based on Action Units) and how the maps relate to the accuracy with which observers recognize full or partly masked faces. As expected, observers focused on the eyes and mouth regions in all faces. However, each expression deviated from this global pattern in a unique way, allowing us to create maps of diagnostic face regions. Action Units considered most important for expressing an emotion were highlighted most often, indicating their psychological validity. The maps of facial features also allowed us to correctly predict which expression a participant had seen, with above-chance accuracies for all expressions. For happiness, fear and anger, the face half which was highlighted the most was also the half whose visibility led to higher recognition accuracies. The results suggest that diagnostic facial features are distributed in unique patterns for each expression, which observers seem to intuitively extract and use when categorizing facial displays of emotion.
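To make the idea concrete, here is a toy sketch (not the authors' code; all coordinates, grid sizes, and names are made up) of how observer clicks can be aggregated into per-expression maps and a new map classified by template matching:

```python
# Toy illustration: aggregate normalized (x, y) clicks into per-expression
# heatmaps, then classify a new click map by correlation with each template.
import numpy as np

rng = np.random.default_rng(1)
GRID = 32  # faces normalized to a 32 x 32 grid (assumed resolution)

def click_map(clicks, grid=GRID):
    """Convert clicks in [0, 1) x [0, 1) into a normalized 2D histogram."""
    h, _, _ = np.histogram2d(clicks[:, 0], clicks[:, 1],
                             bins=grid, range=[[0, 1], [0, 1]])
    return h / h.sum()

# Hypothetical training data: clicks concentrated at different face regions
templates = {
    "happiness": click_map(rng.normal([0.5, 0.75], 0.05, (200, 2)) % 1),  # mouth
    "anger":     click_map(rng.normal([0.5, 0.30], 0.05, (200, 2)) % 1),  # brows/eyes
}

# Classify one observer's map by its correlation with each expression template
test = click_map(rng.normal([0.5, 0.74], 0.07, (40, 2)) % 1)
scores = {k: np.corrcoef(test.ravel(), t.ravel())[0, 1] for k, t in templates.items()}
print(max(scores, key=scores.get))  # likely "happiness" for this toy data
```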
Affiliation(s)
- Martin Wegrzyn
- Department of Psychology, Bielefeld University, Bielefeld, Germany
- Laura Münst
- Department of Psychology, Bielefeld University, Bielefeld, Germany
- Jessica König
- Department of Psychology, Bielefeld University, Bielefeld, Germany
- Johanna Kissler
- Department of Psychology, Bielefeld University, Bielefeld, Germany; Center for Cognitive Interaction Technology, Bielefeld, Germany
2
Noohi F, Kosik EL, Veziris C, Perry DC, Rosen HJ, Kramer JH, Miller BL, Holley SR, Seeley WW, Sturm VE. Structural neuroanatomy of human facial behaviors. Soc Cogn Affect Neurosci 2024; 19:nsae064. [PMID: 39308147] [PMCID: PMC11492553] [DOI: 10.1093/scan/nsae064]
Abstract
The human face plays a central role in emotions and social communication. The emotional and somatic motor networks generate facial behaviors, but whether facial behaviors have representations in the structural anatomy of the human brain is unknown. We coded 16 facial behaviors in 55 healthy older adults who viewed five videos that elicited emotions and examined whether individual differences in facial behavior were related to regional variation in gray matter volume. Voxel-based morphometry analyses revealed that greater emotional facial behavior during the disgust trial (i.e. greater brow furrowing and eye tightening as well as nose wrinkling and upper lip raising) and the amusement trial (i.e. greater smiling and eye tightening) was associated with larger gray matter volume in midcingulate cortex, supplementary motor area, and precentral gyrus, areas spanning both the emotional and somatic motor networks. When measured across trials, however, these facial behaviors (and others) only related to gray matter volume in the precentral gyrus, a somatic motor network hub. These findings suggest that the emotional and somatic motor networks store structural representations of facial behavior and that the midcingulate cortex is critical for generating the predictable movements in the face that arise during emotions.
Affiliation(s)
- Fate Noohi
- Department of Neurology, University of California, San Francisco, CA 94158, United States
- Eena L Kosik
- Department of Neurology, University of California, San Francisco, CA 94158, United States
- Christina Veziris
- Department of Neurology, University of California, San Francisco, CA 94158, United States
- David C Perry
- Department of Neurology, University of California, San Francisco, CA 94158, United States
- Howard J Rosen
- Department of Neurology, University of California, San Francisco, CA 94158, United States
- Joel H Kramer
- Department of Neurology, University of California, San Francisco, CA 94158, United States
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA 94158, United States
- Bruce L Miller
- Department of Neurology, University of California, San Francisco, CA 94158, United States
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA 94158, United States
- Sarah R Holley
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA 94158, United States
- Department of Psychology, San Francisco State University, San Francisco, CA 94132, United States
- William W Seeley
- Department of Neurology, University of California, San Francisco, CA 94158, United States
- Virginia E Sturm
- Department of Neurology, University of California, San Francisco, CA 94158, United States
- Department of Psychiatry and Behavioral Sciences, University of California, San Francisco, CA 94158, United States
3
Ozturk S, Feltman S, Klein DN, Kotov R, Mohanty A. Digital assessment of nonverbal behaviors forecasts first onset of depression. Psychol Med 2024; 54:1-12. [PMID: 39363541] [PMCID: PMC11496224] [DOI: 10.1017/s0033291724002010]
Abstract
BACKGROUND Adolescence is marked by a sharp increase in the incidence of depression, especially in females. Identification of risk for depressive disorders (DD) in this key developmental stage can help prevention efforts, mitigating the clinical and public burden of DD. While frequently used in diagnosis, nonverbal behaviors are relatively understudied as risk markers for DD. Digital technology, such as facial recognition, may provide an objective, fast, efficient, and cost-effective means of measuring nonverbal behavior. METHOD Here, we analyzed video-recorded clinical interviews of 359 never-depressed adolescent females via commercially available facial emotion recognition software. RESULTS We found that average head and facial movements forecast future first onset of depression (AUC = 0.70) beyond the effects of other established self-report and physiological markers of DD risk. CONCLUSIONS Overall, these findings suggest that digital assessment of nonverbal behaviors may provide a promising risk marker for DD, which could aid in early identification and intervention efforts.
Affiliation(s)
- Sekine Ozturk
- Department of Psychology, Stony Brook University, Stony Brook, NY, USA
- Scott Feltman
- Department of Applied Mathematics and Statistics, Stony Brook University, Stony Brook, NY, USA
- Daniel N. Klein
- Department of Psychology, Stony Brook University, Stony Brook, NY, USA
- Roman Kotov
- Department of Psychiatry and Behavioral Science, Stony Brook University, Stony Brook, NY, USA
- Aprajita Mohanty
- Department of Psychology, Stony Brook University, Stony Brook, NY, USA
4
Kiyokawa H, Hayashi R. Commonalities and variations in emotion representation across modalities and brain regions. Sci Rep 2024; 14:20992. [PMID: 39251743] [PMCID: PMC11385795] [DOI: 10.1038/s41598-024-71690-y]
Abstract
Humans express emotions through various modalities such as facial expressions and natural language. However, the relationships between emotions expressed through different modalities and their correlations with neural activities remain uncertain. Here, we aimed to resolve some of these uncertainties by investigating the similarity of emotion representations across modalities and brain regions. First, we represented various emotion categories as multi-dimensional vectors derived from visual (face), linguistic, and visio-linguistic data, and used representational similarity analysis to compare these modalities. Second, we examined the linear transferability of emotion representation from other modalities to the visual modality. Third, we compared the representational structure derived in the first step with those from brain activities across 360 regions. Our findings revealed that emotion representations share commonalities across modalities, with variations that depend on modality type, and that they can be linearly mapped from other modalities to the visual modality. Additionally, uni-modal emotion representations showed relatively higher similarity with specific brain regions, while the multi-modal emotion representation was most similar to representations across the entire brain. These findings suggest that emotional experiences are represented differently across brain regions, with varying degrees of similarity to different modality types, and that they may be multi-modally conveyable in visual and linguistic domains.
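The cross-modal comparison at the heart of this abstract, representational similarity analysis (RSA), can be sketched in a few lines. The embeddings below are random placeholders, so this is only a generic illustration of the technique, not the study's pipeline:

```python
# Minimal generic RSA sketch: compare emotion representations across two
# modalities by correlating their representational dissimilarity matrices.
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
face_emb = rng.random((27, 64))    # e.g., 27 emotion categories x visual features
text_emb = rng.random((27, 300))   # same categories x linguistic features

rdm_face = pdist(face_emb, metric="correlation")  # condensed RDM (pairwise dissimilarities)
rdm_text = pdist(text_emb, metric="correlation")

rho, p = spearmanr(rdm_face, rdm_text)  # second-order similarity across modalities
print(f"cross-modal RSA: Spearman rho = {rho:.2f} (p = {p:.3f})")
```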
Affiliation(s)
- Hiroaki Kiyokawa
- Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki, Japan
- Graduate School of Science and Engineering, Saitama University, Saitama, Japan
- Ryusuke Hayashi
- Human Informatics and Interaction Research Institute, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Ibaraki, Japan
5
Schutte H, Bielevelt F, Muradin MSM, Bleys RLAW, Rosenberg AJWP. New method for analysing spatial relationships of facial muscles on MRI: a pilot study. Int J Oral Maxillofac Surg 2024; 53:731-738. [PMID: 38565453] [DOI: 10.1016/j.ijom.2024.03.003]
Abstract
Dysfunction of the facial musculature can have significant physical, social, and psychological consequences. In procedures such as cleft surgery or craniofacial bimaxillary osteotomies, the perioral facial muscles may be detached or severed, potentially altering their functional vectors and mimicry capabilities. Ensuring correct reconstruction and maintenance of anatomical sites and muscle vectors is crucial in these procedures. However, a standardized method for perioperative assessment of the facial musculature and function is currently lacking. The aim of this study was to develop a workflow to analyse the three-dimensional vectors of the facial musculature using magnetic resonance imaging (MRI) scans. A protocol for localizing the origins and insertions of these muscles was established. The protocol was implemented using the 3DMedX computer program and tested on 7 Tesla MRI scans obtained from 10 healthy volunteers. Inter- and intra-observer variability were assessed to validate the protocol. The absolute intra-observer variability was 2.6 mm (standard deviation 2.0 mm), and the absolute inter-observer variability was 2.6 mm (standard deviation 1.5 mm). This study presents a reliable and reproducible method for analysing the spatial relationships and functional significance of the facial muscles. The workflow developed facilitates perioperative assessment of the facial musculature, potentially aiding clinicians in surgical planning and enhancing the outcomes of midface surgery.
Affiliation(s)
- H Schutte
- Department of Maxillofacial Surgery, University Medical Center Utrecht, Utrecht, the Netherlands
- F Bielevelt
- Department of Maxillofacial Surgery, University Medical Center Utrecht, Utrecht, the Netherlands; Radboud University Medical Centre, Radboudumc 3D Lab, Nijmegen, the Netherlands
- M S M Muradin
- Department of Maxillofacial Surgery, University Medical Center Utrecht, Utrecht, the Netherlands
- R L A W Bleys
- Department of Functional Anatomy, University Medical Center Utrecht, Utrecht, the Netherlands
- A J W P Rosenberg
- Department of Maxillofacial Surgery, University Medical Center Utrecht, Utrecht, the Netherlands
6
Stamkou E, Keltner D, Corona R, Aksoy E, Cowen AS. Emotional palette: a computational mapping of aesthetic experiences evoked by visual art. Sci Rep 2024; 14:19932. [PMID: 39198545] [PMCID: PMC11358466] [DOI: 10.1038/s41598-024-69686-9]
Abstract
Despite the evolutionary history and cultural significance of visual art, the structure of the aesthetic experiences it evokes has only recently attracted scientific attention. What kinds of experience does visual art evoke? Guided by Semantic Space Theory, we identify the concepts that most precisely describe people's aesthetic experiences using new computational techniques. Participants viewed 1457 artworks sampled from diverse cultural and historical traditions and reported on the emotions they felt and their perceived artwork qualities. Results show that aesthetic experiences are high-dimensional, comprising 25 categories of feeling states. Extending well beyond hedonism and broad evaluative judgments (e.g., pleasant/unpleasant), aesthetic experiences involve emotions of daily social living (e.g., "sad", "joy"), the imagination (e.g., "psychedelic", "mysterious"), profundity (e.g., "disgust", "awe"), and perceptual qualities attributed to the artwork (e.g., "whimsical", "disorienting"). Aesthetic emotions and perceptual qualities jointly predict viewers' liking of the artworks, indicating that we conceptualize aesthetic experiences in terms of the emotions we feel but also the qualities we perceive in the artwork. Aesthetic experiences are often mixed and lie along continuous gradients between categories rather than within discrete clusters. Our collection of artworks is visualized within an interactive map (https://barradeau.com/2021/emotions-map/), revealing the high-dimensional space of aesthetic experiences associated with visual art.
Affiliation(s)
- Eftychia Stamkou
- Department of Psychology, University of Amsterdam, 1001 NK, Amsterdam, The Netherlands
- Dacher Keltner
- Department of Psychology, University of California Berkeley, Berkeley, CA, 94720, USA
- Rebecca Corona
- Department of Psychology, University of California Berkeley, Berkeley, CA, 94720, USA
- Eda Aksoy
- Google Arts and Culture, 75009, Paris, France
- Alan S Cowen
- Department of Psychology, University of California Berkeley, Berkeley, CA, 94720, USA
- Hume AI, New York, NY, 10010, USA
7
Martin EA, Lian W, Oltmanns JR, Jonas KG, Samaras D, Hallquist MN, Ruggero CJ, Clouston SAP, Kotov R. Behavioral measures of psychotic disorders: Using automatic facial coding to detect nonverbal expressions in video. J Psychiatr Res 2024; 176:9-17. [PMID: 38830297] [DOI: 10.1016/j.jpsychires.2024.05.056]
Abstract
Emotional deficits in psychosis are prevalent and difficult to treat. In particular, much remains unknown about facial expression abnormalities, and a key reason is that expressions are very labor-intensive to code. Automatic facial coding (AFC) can remove this barrier. The current study sought both to demonstrate the utility of AFC in psychosis research and to provide evidence that AFC yields valid measures of clinical constructs. Changes in facial expressions and head position were assessed via FaceReader, a commercially available automated facial expression analysis software, using video recorded during clinical interviews with 39 participants with schizophrenia/schizoaffective disorder (SZ), 46 with other psychotic disorders (OP), and 108 never-psychotic individuals (NP). We first examined the behavioral measures of the psychotic disorder groups and tested whether they can discriminate between the groups. Next, we evaluated links between behavioral measures and clinical symptoms, controlling for group membership. We found the SZ group was characterized by significantly less variation in neutral expressions, happy expressions, arousal, and head movements compared to NP. These measures discriminated SZ from NP well (AUC = 0.79, sensitivity = 0.79, specificity = 0.67) but discriminated SZ from OP less well (AUC = 0.66, sensitivity = 0.77, specificity = 0.46). We also found significant correlations between clinician-rated symptoms and most behavioral measures (particularly happy expressions, arousal, and head movements). Taken together, these results suggest that AFC can provide useful behavioral measures of psychosis, which could improve research on non-verbal expressions in psychosis and, ultimately, enhance treatment.
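For readers unfamiliar with the discrimination metrics reported above, the following sketch shows how ROC AUC, sensitivity, and specificity are typically computed; the classifier scores are simulated, and only the group sizes mirror the study:

```python
# Illustrative computation of ROC AUC, sensitivity, and specificity for
# separating two diagnostic groups; simulated scores, not the study's data.
import numpy as np
from sklearn.metrics import roc_auc_score, confusion_matrix

rng = np.random.default_rng(0)
y = np.r_[np.ones(39), np.zeros(108)]                      # 1 = SZ, 0 = NP
scores = np.r_[rng.normal(1.0, 1.0, 39), rng.normal(0.0, 1.0, 108)]

auc = roc_auc_score(y, scores)                             # threshold-free discrimination
pred = (scores > 0.5).astype(int)                          # one example threshold
tn, fp, fn, tp = confusion_matrix(y, pred).ravel()
print(f"AUC = {auc:.2f}, sensitivity = {tp / (tp + fn):.2f}, "
      f"specificity = {tn / (tn + fp):.2f}")
```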
Affiliation(s)
- Elizabeth A Martin
- Department of Psychological Science, University of California, Irvine, CA, USA
- Wenxuan Lian
- Department of Materials Science and Engineering and Department of Applied Math and Statistics, Stony Brook University, Stony Brook, NY, USA
- Joshua R Oltmanns
- Department of Psychiatry, Stony Brook University, Stony Brook, NY, USA
- Katherine G Jonas
- Department of Psychiatry, Stony Brook University, Stony Brook, NY, USA
- Dimitris Samaras
- Department of Computer Science, Stony Brook University, Stony Brook, NY, USA
- Michael N Hallquist
- Department of Psychology and Neuroscience, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Camilo J Ruggero
- Department of Psychology, University of Texas at Dallas, Richardson, TX, USA
- Sean A P Clouston
- Program in Public Health and Department of Family, Population, and Preventive Medicine, Renaissance School of Medicine, Stony Brook University, Stony Brook, NY, USA
- Roman Kotov
- Department of Psychiatry, Stony Brook University, Stony Brook, NY, USA
8
Zhu B, Zhang C, Sui Y, Li L. FaceMotionPreserve: a generative approach for facial de-identification and medical information preservation. Sci Rep 2024; 14:17275. [PMID: 39068186] [PMCID: PMC11283491] [DOI: 10.1038/s41598-024-67989-5]
Abstract
Telemedicine and video-based diagnosis have raised significant concerns regarding the protection of facial privacy. Effective de-identification methods require the preservation of diagnostic information related to normal and pathological facial movements, which play a crucial role in the diagnosis of various movement, neurological, and psychiatric disorders. In this work, we have developed FaceMotionPreserve, a deep generative model-based approach that transforms patients' facial identities while preserving facial dynamics, with a novel face dynamic similarity module to enhance facial landmark consistency. We collected test videos from patients with Parkinson's disease recruited via telemedicine for evaluation of model performance and clinical applicability. The performance of FaceMotionPreserve was quantitatively evaluated based on neurologist diagnostic consistency, critical facial behavior fidelity, and correlation of general facial dynamics. In addition, we further validated the robustness and advancements of our model in preserving medical information with clinical examination videos from a different cohort of patients. FaceMotionPreserve is applicable to real-time integration, safeguarding facial privacy while retaining crucial medical information associated with facial movements to address concerns in telemedicine, and facilitating safer and more collaborative medical data sharing.
Affiliation(s)
- Bingquan Zhu
- National Engineering Research Center of Neuromodulation, Tsinghua University, Beijing, 100084, China
- Chen Zhang
- National Engineering Research Center of Neuromodulation, Tsinghua University, Beijing, 100084, China
- Yanan Sui
- National Engineering Research Center of Neuromodulation, Tsinghua University, Beijing, 100084, China
- Luming Li
- National Engineering Research Center of Neuromodulation, Tsinghua University, Beijing, 100084, China
- IDG/McGovern Institute for Brain Research, Tsinghua University, Beijing, 100084, China
9
Cowen AS, Brooks JA, Prasad G, Tanaka M, Kamitani Y, Kirilyuk V, Somandepalli K, Jou B, Schroff F, Adam H, Sauter D, Fang X, Manokara K, Tzirakis P, Oh M, Keltner D. How emotion is experienced and expressed in multiple cultures: a large-scale experiment across North America, Europe, and Japan. Front Psychol 2024; 15:1350631. [PMID: 38966733] [PMCID: PMC11223574] [DOI: 10.3389/fpsyg.2024.1350631]
Abstract
Core to understanding emotion are subjective experiences and their expression in facial behavior. Past studies have largely focused on six emotions and prototypical facial poses, reflecting limitations in scale and narrow assumptions about the variety of emotions and their patterns of expression. We examine 45,231 facial reactions to 2,185 evocative videos, largely in North America, Europe, and Japan, collecting participants' self-reported experiences in English or Japanese and manual and automated annotations of facial movement. Guided by Semantic Space Theory, we uncover 21 dimensions of emotion in the self-reported experiences of participants in Japan, the United States, and Western Europe, and considerable cross-cultural similarities in experience. Facial expressions predict at least 12 dimensions of experience, despite massive individual differences in experience. We find considerable cross-cultural convergence in the facial actions involved in the expression of emotion, along with culture-specific display tendencies: many facial movements differ in intensity in Japan compared to the U.S./Canada and Europe but represent similar experiences. These results quantitatively detail that people in dramatically different cultures experience and express emotion in a high-dimensional, categorical, and similar but complex fashion.
Affiliation(s)
- Alan S. Cowen
- Hume AI, New York, NY, United States
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
- Jeffrey A. Brooks
- Hume AI, New York, NY, United States
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
- Misato Tanaka
- Advanced Telecommunications Research Institute, Kyoto, Japan
- Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Yukiyasu Kamitani
- Advanced Telecommunications Research Institute, Kyoto, Japan
- Graduate School of Informatics, Kyoto University, Kyoto, Japan
- Krishna Somandepalli
- Google Research, Mountain View, CA, United States
- Department of Electrical Engineering, University of Southern California, Los Angeles, CA, United States
- Brendan Jou
- Google Research, Mountain View, CA, United States
- Hartwig Adam
- Google Research, Mountain View, CA, United States
- Disa Sauter
- Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, Netherlands
- Xia Fang
- Zhejiang University, Zhejiang, China
- Kunalan Manokara
- Faculty of Social and Behavioural Sciences, University of Amsterdam, Amsterdam, Netherlands
- Moses Oh
- Hume AI, New York, NY, United States
- Dacher Keltner
- Hume AI, New York, NY, United States
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
10
Kavanagh E, Whitehouse J, Waller BM. Being facially expressive is socially advantageous. Sci Rep 2024; 14:12798. [PMID: 38871925] [DOI: 10.1038/s41598-024-62902-6]
Abstract
Individuals vary in how they move their faces in everyday social interactions. In a first large-scale study, we measured variation in dynamic facial behaviour during social interaction and examined dyadic outcomes and impression formation. In Study 1, we recorded semi-structured video calls with 52 participants interacting with a confederate across various everyday contexts. Video clips were rated by 176 independent participants. In Study 2, we examined video calls of 1315 participants engaging in unstructured video-call interactions. Facial expressivity indices were extracted using automated Facial Action Coding Scheme analysis and measures of personality and partner impressions were obtained by self-report. Facial expressivity varied considerably across participants, but little across contexts, social partners or time. In Study 1, more facially expressive participants were more well-liked, agreeable, and successful at negotiating (if also more agreeable). Participants who were more facially competent, readable, and perceived as readable were also more well-liked. In Study 2, we replicated the findings that facial expressivity was associated with agreeableness and liking by their social partner, and additionally found it to be associated with extraversion and neuroticism. Findings suggest that facial behaviour is a stable individual difference that proffers social advantages, pointing towards an affiliative, adaptive function.
Affiliation(s)
- Eithne Kavanagh
- Department of Psychology, Nottingham Trent University, Nottingham, UK
- Jamie Whitehouse
- Department of Psychology, Nottingham Trent University, Nottingham, UK
- Bridget M Waller
- Department of Psychology, Nottingham Trent University, Nottingham, UK
11
Chirico A, Borghesi F, Yaden DB, Pizzolante M, Sarcinella ED, Cipresso P, Gaggioli A. Unveiling the underlying structure of awe in virtual reality and in autobiographical recall: an exploratory study. Sci Rep 2024; 14:12474. [PMID: 38816477] [PMCID: PMC11139977] [DOI: 10.1038/s41598-024-62654-3]
Abstract
Over the last two decades, awe has attracted the attention of an increasing number of researchers. The use of virtual reality has been identified as one of the most effective techniques for eliciting awe, in addition to more personalized methods for inducing emotion, such as autobiographical recall. However, previous measures of awe were unable to uncover the hidden structure of this experience. The Awe Experience Scale (AWE-S) has been validated as a comprehensive measure of contingent awe in English, providing new opportunities for analysis. In this two-phase study, we investigated whether the latent structure of the experience of awe evoked by the autobiographical recall technique (Study 1) overlapped with that induced by exposing participants to a validated virtual reality awe-eliciting training (Study 2). The original English AWE-S structure held both in autobiographical recall induction and virtual reality-based elicitation. Despite evidence of overlap between the English and Italian structures, low correlations were found between the Italian trait measures used to test the concurrent validity of the AWE-S in the Italian sample and the AWE-S state dimensions. This study highlights cultural differences in awe experience, trait, and state variations, and provides new insights into the standardized induction of this emotion through simulated environments.
Affiliation(s)
- Alice Chirico
- Department of Psychology, Research Center in Communication Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Francesca Borghesi
- Department of Psychology, University of Turin, Via Verdi 10, 10124, Turin, Italy
- David B Yaden
- Department of Psychiatry and Behavioral Sciences, Johns Hopkins University School of Medicine, Baltimore, MD, USA
- Marta Pizzolante
- Department of Psychology, Research Center in Communication Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Eleonora Diletta Sarcinella
- Department of Psychology, Research Center in Communication Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- Pietro Cipresso
- Department of Psychology, University of Turin, Via Verdi 10, 10124, Turin, Italy
- Andrea Gaggioli
- Department of Psychology, Research Center in Communication Psychology, Università Cattolica del Sacro Cuore, Milan, Italy
- IRCCS Istituto Auxologico Italiano, Milan, Italy
12
Hall NT, Hallquist MN, Martin EA, Lian W, Jonas KG, Kotov R. Automating the analysis of facial emotion expression dynamics: A computational framework and application in psychotic disorders. Proc Natl Acad Sci U S A 2024; 121:e2313665121. [PMID: 38530896] [PMCID: PMC10998559] [DOI: 10.1073/pnas.2313665121]
Abstract
Facial emotion expressions play a central role in interpersonal interactions; these displays are used to predict and influence the behavior of others. Despite their importance, quantifying and analyzing the dynamics of brief facial emotion expressions remains an understudied methodological challenge. Here, we present a method that leverages machine learning and network modeling to assess the dynamics of facial expressions. Using video recordings of clinical interviews, we demonstrate the utility of this approach in a sample of 96 people diagnosed with psychotic disorders and 116 never-psychotic adults. Participants diagnosed with schizophrenia tended to move from neutral expressions to uncommon expressions (e.g., fear, surprise), whereas participants diagnosed with other psychoses (e.g., mood disorders with psychosis) moved toward expressions of sadness. This method has broad applications to the study of normal and altered expressions of emotion and can be integrated with telemedicine to improve psychiatric assessment and treatment.
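One simple way to picture the dynamics this abstract describes is a matrix of transition probabilities between frame-wise expression labels. The sketch below is a generic illustration with a made-up label sequence, not the authors' machine learning and network model:

```python
# Generic sketch: summarize facial expression dynamics as a matrix of
# transition probabilities between frame-wise emotion labels (toy sequence).
import numpy as np

labels = ["neutral", "happy", "sad", "fear", "surprise"]
idx = {lab: i for i, lab in enumerate(labels)}

frames = ["neutral", "neutral", "fear", "neutral", "sad", "sad", "neutral"]  # hypothetical

counts = np.zeros((len(labels), len(labels)))
for a, b in zip(frames[:-1], frames[1:]):
    counts[idx[a], idx[b]] += 1

row_sums = counts.sum(axis=1, keepdims=True)
T = np.divide(counts, row_sums, out=np.zeros_like(counts), where=row_sums > 0)
print(T[idx["neutral"]])  # probabilities of moving from neutral to each label
```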
Affiliation(s)
- Nathan T. Hall
- Department of Psychology and Neuroscience, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599
- Michael N. Hallquist
- Department of Psychology and Neuroscience, University of North Carolina at Chapel Hill, Chapel Hill, NC 27599
- Elizabeth A. Martin
- Department of Psychological Science, University of California, Irvine, CA 92697
- Wenxuan Lian
- Department of Psychiatry, Stony Brook University, Stony Brook, NY 11794
- Roman Kotov
- Department of Psychiatry, Stony Brook University, Stony Brook, NY 11794
13
Brooks JA, Kim L, Opara M, Keltner D, Fang X, Monroy M, Corona R, Tzirakis P, Baird A, Metrick J, Taddesse N, Zegeye K, Cowen AS. Deep learning reveals what facial expressions mean to people in different cultures. iScience 2024; 27:109175. [PMID: 38433918] [PMCID: PMC10906517] [DOI: 10.1016/j.isci.2024.109175]
Abstract
Cross-cultural studies of the meaning of facial expressions have largely focused on judgments of small sets of stereotypical images by small numbers of people. Here, we used large-scale data collection and machine learning to map what facial expressions convey in six countries. Using a mimicry paradigm, 5,833 participants formed facial expressions found in 4,659 naturalistic images, resulting in 423,193 participant-generated facial expressions. In their own language, participants also rated each expression in terms of 48 emotions and mental states. A deep neural network tasked with predicting the culture-specific meanings people attributed to facial movements while ignoring physical appearance and context discovered 28 distinct dimensions of facial expression, with 21 dimensions showing strong evidence of universality and the remainder showing varying degrees of cultural specificity. These results capture the underlying dimensions of the meanings of facial expressions within and across cultures in unprecedented detail.
Affiliation(s)
- Jeffrey A. Brooks
- Research Division, Hume AI, New York, NY 10010, USA
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
- Lauren Kim
- Research Division, Hume AI, New York, NY 10010, USA
- Dacher Keltner
- Research Division, Hume AI, New York, NY 10010, USA
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
- Xia Fang
- Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, Zhejiang, China
- Maria Monroy
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
- Rebecca Corona
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
- Alice Baird
- Research Division, Hume AI, New York, NY 10010, USA
- Alan S. Cowen
- Research Division, Hume AI, New York, NY 10010, USA
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
14
Ulichney V, Schmidt H, Helion C. Perceived Relational Support Is Associated With Everyday Positive, But Not Negative, Affectivity in a U.S. Sample. Pers Soc Psychol Bull 2024:1461672231224991. [PMID: 38323578] [DOI: 10.1177/01461672231224991]
Abstract
Research suggests that perceived social support bolsters emotional well-being. We tested whether perceived support from friends, family, and spouses/partners was associated with reduced negative and greater positive affectivity (i.e., everyday affective baseline), and whether perceived strain in these relationships had opposite effects, accounting for age and relevant covariates. Using data from the third waves of the Midlife in the United States survey and National Study of Daily Experience (n = 1,124), we found that negative affectivity was tied to neither relational support nor strain, but instead was associated positively with neuroticism and negatively with conscientiousness. In contrast, positive affectivity was related positively to support from friends and family, conscientiousness, and extroversion, and negatively to strain among partners and neuroticism. Exploratory analyses within second-wave Midlife in Japan data (n = 657) suggest patterns for future cross-cultural study. Some relationship dynamics may vary, but perceived support might enhance emotional well-being by bolstering positive, rather than mitigating negative, emotionality.
15
Baumard N, Safra L, Martins M, Chevallier C. Cognitive fossils: using cultural artifacts to reconstruct psychological changes throughout history. Trends Cogn Sci 2024; 28:172-186. [PMID: 37949792] [DOI: 10.1016/j.tics.2023.10.001]
Abstract
Psychology is crucial for understanding human history. When aggregated, changes in the psychology of individuals - in the intensity of social trust, parental care, or intellectual curiosity - can lead to important changes in institutions, social norms, and cultures. However, studying the role of psychology in shaping human history has been hindered by the difficulty of documenting the psychological traits of people who are no longer alive. Recent developments in psychology suggest that cultural artifacts reflect in part the psychological traits of the individuals who produced or consumed them. Cultural artifacts can thus serve as 'cognitive fossils' - physical imprints of the psychological traits of long-dead people. We review the range of materials available to cognitive and behavioral scientists, and discuss the methods that can be used to recover and quantify changes in psychological traits throughout history.
Affiliation(s)
- Nicolas Baumard
- Institut Jean Nicod, Département d'études cognitives, École Normale Supérieure, Université PSL, EHESS, CNRS, Paris, France
- Lou Safra
- Institut Jean Nicod, Département d'études cognitives, École Normale Supérieure, Université PSL, EHESS, CNRS, Paris, France; Centre de Recherches Politiques de Sciences Po (CEVIPOF), Institut d'Études Politiques de Paris (Sciences Po), Paris, France
- Mauricio Martins
- Institut Jean Nicod, Département d'études cognitives, École Normale Supérieure, Université PSL, EHESS, CNRS, Paris, France; SCAN-Unit, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Coralie Chevallier
- Institut Jean Nicod, Département d'études cognitives, École Normale Supérieure, Université PSL, EHESS, CNRS, Paris, France
16
Bian Y, Küster D, Liu H, Krumhuber EG. Understanding Naturalistic Facial Expressions with Deep Learning and Multimodal Large Language Models. Sensors (Basel) 2023; 24:126. [PMID: 38202988] [PMCID: PMC10781259] [DOI: 10.3390/s24010126]
Abstract
This paper provides a comprehensive overview of affective computing systems for facial expression recognition (FER) research in naturalistic contexts. The first section presents an updated account of user-friendly FER toolboxes incorporating state-of-the-art deep learning models and elaborates on their neural architectures, datasets, and performances across domains. These sophisticated FER toolboxes can robustly address a variety of challenges encountered in the wild such as variations in illumination and head pose, which may otherwise impact recognition accuracy. The second section of this paper discusses multimodal large language models (MLLMs) and their potential applications in affective science. MLLMs exhibit human-level capabilities for FER and enable the quantification of various contextual variables to provide context-aware emotion inferences. These advancements have the potential to revolutionize current methodological approaches for studying the contextual influences on emotions, leading to the development of contextualized emotion models.
Affiliation(s)
- Yifan Bian
- Department of Experimental Psychology, University College London, London WC1H 0AP, UK
- Dennis Küster
- Department of Mathematics and Computer Science, University of Bremen, 28359 Bremen, Germany
- Hui Liu
- Department of Mathematics and Computer Science, University of Bremen, 28359 Bremen, Germany
- Eva G. Krumhuber
- Department of Experimental Psychology, University College London, London WC1H 0AP, UK
17
Li Z, Lu H, Liu D, Yu ANC, Gendron M. Emotional event perception is related to lexical complexity and emotion knowledge. Commun Psychol 2023; 1:45. [PMID: 39242918] [PMCID: PMC11332234] [DOI: 10.1038/s44271-023-00039-4]
Abstract
Inferring emotion is a critical skill that supports social functioning. Emotion inferences are typically studied in simplistic paradigms by asking people to categorize isolated and static cues like frowning faces. Yet emotions are complex events that unfold over time. Here, across three samples (Study 1 N = 222; Study 2 N = 261; Study 3 N = 101), we present the Emotion Segmentation Paradigm to examine inferences about complex emotional events by extending cognitive paradigms examining event perception. Participants were asked to indicate when there were changes in the emotions of target individuals within continuous streams of activity in narrative film (Study 1) and documentary clips (Study 2, preregistered, and Study 3 test-retest sample). This Emotion Segmentation Paradigm revealed robust and reliable individual differences across multiple metrics. We also tested the constructionist prediction that emotion labels constrain emotion inference, which is traditionally studied by introducing emotion labels. We demonstrate that individual differences in active emotion vocabulary (i.e., readily accessible emotion words) correlate with emotion segmentation performance.
Affiliation(s)
- Zhimeng Li
- Department of Psychology, Yale University, New Haven, Connecticut, USA
- Hanxiao Lu
- Department of Psychology, New York University, New York, NY, USA
- Di Liu
- Department of Psychology, Johns Hopkins University, Baltimore, MD, USA
- Alessandra N C Yu
- Nash Family Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Maria Gendron
- Department of Psychology, Yale University, New Haven, Connecticut, USA
18
Cheong JH, Jolly E, Xie T, Byrne S, Kenney M, Chang LJ. Py-Feat: Python Facial Expression Analysis Toolbox. Affect Sci 2023; 4:781-796. [PMID: 38156250] [PMCID: PMC10751270] [DOI: 10.1007/s42761-023-00191-4]
Abstract
Studying facial expressions is a notoriously difficult endeavor. Recent advances in the field of affective computing have yielded impressive progress in automatically detecting facial expressions from pictures and videos. However, much of this work has yet to be widely disseminated in social science domains such as psychology. Current state-of-the-art models require considerable domain expertise that is not traditionally incorporated into social science training programs. Furthermore, there is a notable absence of user-friendly and open-source software that provides a comprehensive set of tools and functions that support facial expression research. In this paper, we introduce Py-Feat, an open-source Python toolbox that provides support for detecting, preprocessing, analyzing, and visualizing facial expression data. Py-Feat makes it easy for domain experts to disseminate and benchmark computer vision models and also for end users to quickly process, analyze, and visualize face expression data. We hope this platform will facilitate increased use of facial expression data in human behavior research.
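For flavor, a minimal usage sketch of the toolbox follows; the image filename is hypothetical, and the current API should be checked against the Py-Feat documentation:

```python
# Minimal Py-Feat usage sketch: detect faces, Action Units, and emotion
# probabilities in a single image. "frame.jpg" is a hypothetical file.
from feat import Detector

detector = Detector()                      # loads default detection models
fex = detector.detect_image("frame.jpg")   # returns a Fex data frame
print(fex.emotions)                        # per-face emotion probabilities
print(fex.aus)                             # per-face Action Unit activations
```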
Affiliation(s)
- Jin Hyun Cheong
- Computational Social and Affective Neuroscience Laboratory, Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH 03755 USA
- Eshin Jolly
- Computational Social and Affective Neuroscience Laboratory, Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH 03755 USA
- Tiankang Xie
- Computational Social and Affective Neuroscience Laboratory, Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH 03755 USA
- Department of Quantitative Biomedical Sciences, Geisel School of Medicine, Dartmouth College, Hanover, NH 03755 USA
- Sophie Byrne
- Computational Social and Affective Neuroscience Laboratory, Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH 03755 USA
- Matthew Kenney
- Computational Social and Affective Neuroscience Laboratory, Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH 03755 USA
- Luke J. Chang
- Computational Social and Affective Neuroscience Laboratory, Department of Psychological & Brain Sciences, Dartmouth College, Hanover, NH 03755 USA
- Department of Quantitative Biomedical Sciences, Geisel School of Medicine, Dartmouth College, Hanover, NH 03755 USA
19
Haase CM. Emotion Regulation in Couples Across Adulthood. Annu Rev Dev Psychol 2023; 5:399-421. [PMID: 38939362] [PMCID: PMC11210602] [DOI: 10.1146/annurev-devpsych-120621-043836]
Abstract
Intimate relationships are hotbeds of emotion. This article presents key findings and current directions in research on couples' emotion regulation across adulthood as a critical context in which older adults not only maintain functioning but may also outshine younger adults. First, I introduce key concepts, defining qualities (i.e., dynamic, coregulatory, bidirectional, bivalent), and measures (i.e., self-report versus performance-based) of couples' emotion regulation. Second, I highlight a socioemotional turn in our understanding of adult development with the advent of socioemotional selectivity theory. Third, I offer a life-span developmental perspective on emotion regulation in couples (i.e., across infancy, adolescence and young adulthood, midlife, and late life). Finally, I present the idea that emotion regulation may shift from "me to us" across adulthood and discuss how emotion regulation in couples may become more important, better, and increasingly consequential (e.g., for relationship outcomes, well-being, and health) with age. Ideas for future research are then discussed.
Affiliation(s)
- Claudia M Haase
- School of Education and Social Policy and (by courtesy) Department of Psychology, Northwestern University, Evanston, Illinois, USA
20
van Heijst K, Kret ME, Ploeger A. Basic Emotions or Constructed Emotions: Insights From Taking an Evolutionary Perspective. Perspect Psychol Sci 2023:17456916231205186. [PMID: 37916982] [DOI: 10.1177/17456916231205186]
Abstract
The ongoing debate between basic emotion theories (BETs) and the theory of constructed emotion (TCE) hampers progress in the field of emotion research. Providing a new perspective, here we aim to bring the theories closer together by dissecting them according to Tinbergen's four questions, to clarify a focus on their evolutionary basis. On the basis of our review of the literature, we conclude that whereas BETs focus on Tinbergen's question of evolution, the TCE is more concerned with the causation of emotion. Both theories largely agree on the survival value of emotions: providing the best reaction in specific situations. Evidence on the evolutionary history of emotions is converging but still limited for both theories, as research within both frameworks focuses heavily on causation. We conclude that BETs and the TCE explain two different phenomena: emotion and feeling. Therefore, they seem irreconcilable but possibly supplementary for explaining and investigating the evolution of emotion, especially considering their similar answer to the question of survival value. Finally, this article highlights the importance of carefully describing what aspect of emotion is being discussed or studied. Only then can evidence be interpreted to converge toward explaining emotion.
Affiliation(s)
- Mariska E Kret
- Cognitive Psychology Unit, Faculty of Social and Behavioral Sciences, Leiden University
- Comparative Psychology and Affective Neuroscience Lab, Cognitive Psychology Department, Leiden University
- Leiden Institute for Brain and Cognition (LIBC), Leiden University
21
Maxwell JW, Sanchez DN, Ruthruff E. Infrequent facial expressions of emotion do not bias attention. Psychol Res 2023; 87:2449-2459. [PMID: 37258662] [DOI: 10.1007/s00426-023-01844-6]
Abstract
Despite the obvious importance of facial expressions of emotion, most studies have found that they do not bias attention. A critical limitation, however, is that these studies generally present face distractors on all trials of the experiment. For other kinds of emotional stimuli, such as emotional scenes, infrequently presented stimuli elicit greater attentional bias than frequently presented stimuli, perhaps due to suppression or habituation. The goal of the current study was therefore to test whether such modulation of attentional bias by distractor frequency generalizes to facial expressions of emotion. In Experiment 1, both angry and happy faces failed to bias attention, despite being infrequently presented. Even when the location of the face cues was made less predictable (they could appear in one of two possible locations), no attentional bias was observed (Experiment 2). Moreover, there was no bottom-up influence for angry and happy faces shown under high or low perceptual load (Experiment 3). We conclude that task-irrelevant posed facial expressions of emotion cannot bias attention even when presented infrequently.
Affiliation(s)
- Joshua W Maxwell
- Department of Psychology, 1 University of New Mexico, Albuquerque, NM, 87131, USA
- Danielle N Sanchez
- Department of Psychology, 1 University of New Mexico, Albuquerque, NM, 87131, USA
- Eric Ruthruff
- Department of Psychology, 1 University of New Mexico, Albuquerque, NM, 87131, USA
22
Celidwen Y, Keltner D. Kin relationality and ecological belonging: a cultural psychology of Indigenous transcendence. Front Psychol 2023; 14:994508. [PMID: 37928574] [PMCID: PMC10622976] [DOI: 10.3389/fpsyg.2023.994508]
Abstract
In this article, we consider prosociality through the lens of an Indigenous "ethics of belonging" and its two constitutive concepts: kin relationality and ecological belonging. Kin relationality predicates that all living beings and phenomena share a familial identity of interdependence, mutuality, and organization. Within the value system of ecological belonging, an individual's identity is constituted in relation to the natural environment, centered on the sentiments of responsibility and reverence for Nature. We detail how Indigenous perspectives upon prosociality differ from Western scientific accounts in terms of the motives, scope, and rewards of altruistic action. Grounded in this understanding, we then profile three self-transcendent states, compassion, gratitude, and awe, and their similarities across Indigenous and Western approaches, and how kin relationality and ecological belonging give rise to cultural variations. We consider convergent insights across Indigenous and Western science concerning the role of ritual and narrative and the cultural cultivation of kin relationality and ecological belonging. We conclude by highlighting how these two core concepts might guide future inquiry in cultural psychology.
Affiliation(s)
- Yuria Celidwen
- Department of Psychology and Othering and Belonging Institute, University of California at Berkeley, Berkeley, CA, United States
- Dacher Keltner
- Department of Psychology, University of California at Berkeley, Berkeley, CA, United States
23
Lin C, Bulls LS, Tepfer LJ, Vyas AD, Thornton MA. Advancing Naturalistic Affective Science with Deep Learning. Affect Sci 2023; 4:550-562. [PMID: 37744976] [PMCID: PMC10514024] [DOI: 10.1007/s42761-023-00215-z]
Abstract
People express their own emotions and perceive others' emotions via a variety of channels, including facial movements, body gestures, vocal prosody, and language. Studying these channels of affective behavior offers insight into both the experience and perception of emotion. Prior research has predominantly focused on studying individual channels of affective behavior in isolation using tightly controlled, non-naturalistic experiments. This approach limits our understanding of emotion in more naturalistic contexts where different channels of information tend to interact. Traditional methods struggle to address this limitation: manually annotating behavior is time-consuming, making it infeasible to do at large scale; manually selecting and manipulating stimuli based on hypotheses may neglect unanticipated features, potentially generating biased conclusions; and common linear modeling approaches cannot fully capture the complex, nonlinear, and interactive nature of real-life affective processes. In this methodology review, we describe how deep learning can be applied to address these challenges to advance a more naturalistic affective science. First, we describe current practices in affective research and explain why existing methods face challenges in revealing a more naturalistic understanding of emotion. Second, we introduce deep learning approaches and explain how they can be applied to tackle three main challenges: quantifying naturalistic behaviors, selecting and manipulating naturalistic stimuli, and modeling naturalistic affective processes. Finally, we describe the limitations of these deep learning methods, and how these limitations might be avoided or mitigated. By detailing the promise and the peril of deep learning, this review aims to pave the way for a more naturalistic affective science.
Collapse
Affiliation(s)
- Chujun Lin
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH USA
| | - Landry S. Bulls
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH USA
| | - Lindsey J. Tepfer
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH USA
| | - Amisha D. Vyas
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH USA
| | - Mark A. Thornton
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH USA
| |
Collapse
|
24
|
Pei G, Xiao Q, Pan Y, Li T, Jin J. Neural evidence of face processing in social anxiety disorder: A systematic review with meta-analysis. Neurosci Biobehav Rev 2023; 152:105283. [PMID: 37315657 DOI: 10.1016/j.neubiorev.2023.105283] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/19/2023] [Revised: 06/09/2023] [Accepted: 06/10/2023] [Indexed: 06/16/2023]
Abstract
Numerous previous studies have used event-related potentials (ERPs) to examine facial processing deficits in individuals with social anxiety disorder (SAD). However, it remains to be determined whether these deficits are general or specific, and which factors dominate at different cognitive stages. A meta-analysis was performed to quantitatively identify face processing deficits in individuals with SAD. Ninety-seven effect sizes from 27 publications, involving 1032 subjects, were calculated as Hedges' g. The results suggest that the face itself elicits enlarged P1 amplitudes, threat-related facial expressions induce larger P2 amplitudes, and negative facial expressions lead to enhanced P3/LPP amplitudes in SAD individuals compared with controls. That is, there is a face perception attentional bias in the early phase (P1), a threat attentional bias in the mid-term phase (P2), and a negative emotion attentional bias in the late phase (P3/LPP), which can be summarized as a three-phase SAD face processing deficit model. These findings provide an essential theoretical basis for cognitive behavioral therapy and have significant application value for the initial screening, intervention, and treatment of social anxiety.
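For readers unfamiliar with the metric, a minimal worked example of Hedges' g, the bias-corrected standardized mean difference used in this meta-analysis, follows; the amplitude numbers are invented purely for illustration.

```python
# Hedges' g: Cohen's d computed with a pooled SD, then corrected for
# small-sample bias with the factor J = 1 - 3 / (4*df - 1).
import math

def hedges_g(m1, m2, sd1, sd2, n1, n2):
    """Bias-corrected standardized mean difference between two groups."""
    s_pooled = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2)
                         / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled
    df = n1 + n2 - 2
    return (1 - 3 / (4 * df - 1)) * d

# Hypothetical P1 amplitudes (in microvolts): SAD group vs. controls.
print(round(hedges_g(m1=4.2, m2=3.1, sd1=1.5, sd2=1.4, n1=25, n2=27), 3))
```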
Collapse
Affiliation(s)
- Guanxiong Pei
- Research Center for Multi-Modal Intelligence, Research Institute of Artificial Intelligence, Zhejiang Lab, 1818# Wenyixi Road, Hangzhou 311121, China
| | - Qin Xiao
- Key Laboratory of Brain-Machine Intelligence for Information Behavior (Ministry of Education and Shanghai), 550# Dalian West Road, Shanghai 200083, China; School of Business and Management, Shanghai International Studies University, 550# Dalian West Road, Shanghai 200083, China
| | - Yu Pan
- Key Laboratory of Brain-Machine Intelligence for Information Behavior (Ministry of Education and Shanghai), 550# Dalian West Road, Shanghai 200083, China; School of Business and Management, Shanghai International Studies University, 550# Dalian West Road, Shanghai 200083, China
| | - Taihao Li
- Research Center for Multi-Modal Intelligence, Research Institute of Artificial Intelligence, Zhejiang Lab, 1818# Wenyixi Road, Hangzhou 311121, China.
| | - Jia Jin
- Key Laboratory of Brain-Machine Intelligence for Information Behavior (Ministry of Education and Shanghai), 550# Dalian West Road, Shanghai 200083, China; School of Business and Management, Shanghai International Studies University, 550# Dalian West Road, Shanghai 200083, China; Guangdong Institute of Intelligence Science and Technology, Joint Lab of Finance and Business Intelligence, 2515# Huandao North Road, Zhuhai 519031, China.
| |
Collapse
|
25
|
Grogans SE, Bliss-Moreau E, Buss KA, Clark LA, Fox AS, Keltner D, Cowen AS, Kim JJ, Kragel PA, MacLeod C, Mobbs D, Naragon-Gainey K, Fullana MA, Shackman AJ. The nature and neurobiology of fear and anxiety: State of the science and opportunities for accelerating discovery. Neurosci Biobehav Rev 2023; 151:105237. [PMID: 37209932 PMCID: PMC10330657 DOI: 10.1016/j.neubiorev.2023.105237] [Citation(s) in RCA: 17] [Impact Index Per Article: 17.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2023] [Revised: 05/11/2023] [Accepted: 05/13/2023] [Indexed: 05/22/2023]
Abstract
Fear and anxiety play a central role in mammalian life, and there is considerable interest in clarifying their nature, identifying their biological underpinnings, and determining their consequences for health and disease. Here we provide a roundtable discussion on the nature and biological bases of fear- and anxiety-related states, traits, and disorders. The discussants include scientists familiar with a wide variety of populations and a broad spectrum of techniques. The goal of the roundtable was to take stock of the state of the science and provide a roadmap to the next generation of fear and anxiety research. Much of the discussion centered on the key challenges facing the field, the most fruitful avenues for future research, and emerging opportunities for accelerating discovery, with implications for scientists, funders, and other stakeholders. Understanding fear and anxiety is a matter of practical importance. Anxiety disorders are a leading burden on public health and existing treatments are far from curative, underscoring the urgency of developing a deeper understanding of the factors governing threat-related emotions.
Collapse
Affiliation(s)
- Shannon E Grogans
- Department of Psychology, University of Maryland, College Park, MD 20742, USA
| | - Eliza Bliss-Moreau
- Department of Psychology, University of California, Davis, CA 95616, USA; California National Primate Research Center, University of California, Davis, CA 95616, USA
| | - Kristin A Buss
- Department of Psychology, The Pennsylvania State University, University Park, PA 16802 USA
| | - Lee Anna Clark
- Department of Psychology, University of Notre Dame, Notre Dame, IN 46556, USA
| | - Andrew S Fox
- Department of Psychology, University of California, Davis, CA 95616, USA; California National Primate Research Center, University of California, Davis, CA 95616, USA
| | - Dacher Keltner
- Department of Psychology, University of California, Berkeley, Berkeley, CA 94720, USA
| | | | - Jeansok J Kim
- Department of Psychology, University of Washington, Seattle, WA 98195, USA
| | - Philip A Kragel
- Department of Psychology, Emory University, Atlanta, GA 30322, USA
| | - Colin MacLeod
- Centre for the Advancement of Research on Emotion, School of Psychological Science, The University of Western Australia, Perth, WA 6009, Australia
| | - Dean Mobbs
- Department of Humanities and Social Sciences, California Institute of Technology, Pasadena, California 91125, USA; Computation and Neural Systems Program, California Institute of Technology, Pasadena, CA 91125, USA
| | - Kristin Naragon-Gainey
- School of Psychological Science, University of Western Australia, Perth, WA 6009, Australia
| | - Miquel A Fullana
- Adult Psychiatry and Psychology Department, Institute of Neurosciences, Hospital Clinic, Barcelona, Spain; Imaging of Mood, and Anxiety-Related Disorders Group, Institut d'Investigacions Biomèdiques August Pi i Sunyer, CIBERSAM, University of Barcelona, Barcelona, Spain
| | - Alexander J Shackman
- Department of Psychology, University of Maryland, College Park, MD 20742, USA; Neuroscience and Cognitive Science Program, University of Maryland, College Park, MD 20742, USA; Maryland Neuroimaging Center, University of Maryland, College Park, MD 20742, USA.
| |
Collapse
|
26
|
LaPalme ML, Barsade SG, Brackett MA, Floman JL. The Meso-Expression Test (MET): A Novel Assessment of Emotion Perception. J Intell 2023; 11:145. [PMID: 37504788 PMCID: PMC10381771 DOI: 10.3390/jintelligence11070145] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/17/2023] [Revised: 07/13/2023] [Accepted: 07/16/2023] [Indexed: 07/29/2023] Open
Abstract
Emotion perception is a primary facet of Emotional Intelligence (EI) and the underpinning of interpersonal communication. In this study, we examined meso-expressions: the everyday, moderate-intensity emotions communicated through the face, voice, and body. We theoretically distinguished meso-expressions from other well-known emotion research paradigms (i.e., macro-expressions and micro-expressions). In Study 1, we demonstrated that people can reliably discriminate between meso-expressions, and we created a corpus of 914 unique video displays of meso-expressions across a race- and gender-diverse set of expressors. In Study 2, we developed a novel video-based assessment of emotion perception ability: the Meso-Expression Test (MET). In this study, we found that the MET is psychometrically valid and demonstrated measurement equivalence across Asian, Black, Hispanic, and White perceiver groups and across men and women. In Study 3, we examined the construct validity of the MET and showed that it converged with other well-known measures of emotion perception and diverged from cognitive ability. Finally, in Study 4, we showed that the MET is positively related to important psychosocial outcomes, including social well-being, social connectedness, and empathic concern, and is negatively related to alexithymia, stress, depression, anxiety, and adverse social interactions. We conclude with a discussion focused on the implications of our findings for EI ability research and the practical applications of the MET.
Collapse
Affiliation(s)
- Matthew L LaPalme
- Yale Center for Emotional Intelligence, Yale University, New Haven, CT 06511, USA
| | - Sigal G Barsade
- Wharton, University of Pennsylvania, Philadelphia, PA 19104, USA
| | - Marc A Brackett
- Yale Center for Emotional Intelligence, Yale University, New Haven, CT 06511, USA
| | - James L Floman
- Yale Center for Emotional Intelligence, Yale University, New Haven, CT 06511, USA
| |
Collapse
|
27
|
Vaill M, Kawanishi K, Varki N, Gagneux P, Varki A. Comparative physiological anthropogeny: exploring molecular underpinnings of distinctly human phenotypes. Physiol Rev 2023; 103:2171-2229. [PMID: 36603157 PMCID: PMC10151058 DOI: 10.1152/physrev.00040.2021] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/05/2021] [Revised: 12/26/2022] [Accepted: 12/28/2022] [Indexed: 01/06/2023] Open
Abstract
Anthropogeny is a classic term encompassing transdisciplinary investigations of the origins of the human species. Comparative anthropogeny is a systematic comparison of humans and other living nonhuman hominids (so-called "great apes"), aiming to identify distinctly human features in health and disease, with the overall goal of explaining human origins. We begin with a historical perspective, briefly describing how the field progressed from the earliest evolutionary insights to the current emphasis on in-depth molecular and genomic investigations of "human-specific" biology and an increased appreciation for cultural impacts on human biology. While many such genetic differences between humans and other hominids have been revealed over the last two decades, this information remains insufficient to explain the most distinctive phenotypic traits distinguishing humans from other living hominids. Here we undertake a complementary approach of "comparative physiological anthropogeny," along the lines of the preclinical medical curriculum, i.e., beginning with anatomy and considering each physiological system and in each case considering genetic and molecular components that are relevant. What is ultimately needed is a systematic comparative approach at all levels from molecular to physiological to sociocultural, building networks of related information, drawing inferences, and generating testable hypotheses. The concluding section will touch on distinctive considerations in the study of human evolution, including the importance of gene-culture interactions.
Collapse
Affiliation(s)
- Michael Vaill
- Center for Academic Research and Training in Anthropogeny, University of California, San Diego, La Jolla, California
- Department of Cellular and Molecular Medicine, University of California, San Diego, La Jolla, California
- Glycobiology Research and Training Center, University of California, San Diego, La Jolla, California
| | - Kunio Kawanishi
- Center for Academic Research and Training in Anthropogeny, University of California, San Diego, La Jolla, California
- Department of Cellular and Molecular Medicine, University of California, San Diego, La Jolla, California
- Department of Experimental Pathology, Faculty of Medicine, University of Tsukuba, Tsukuba, Japan
| | - Nissi Varki
- Center for Academic Research and Training in Anthropogeny, University of California, San Diego, La Jolla, California
- Glycobiology Research and Training Center, University of California, San Diego, La Jolla, California
- Department of Pathology, University of California, San Diego, La Jolla, California
| | - Pascal Gagneux
- Center for Academic Research and Training in Anthropogeny, University of California, San Diego, La Jolla, California
- Glycobiology Research and Training Center, University of California, San Diego, La Jolla, California
- Department of Pathology, University of California, San Diego, La Jolla, California
| | - Ajit Varki
- Center for Academic Research and Training in Anthropogeny, University of California, San Diego, La Jolla, California
- Department of Cellular and Molecular Medicine, University of California, San Diego, La Jolla, California
- Glycobiology Research and Training Center, University of California, San Diego, La Jolla, California
| |
Collapse
|
28
|
Saumure C, Plouffe-Demers MP, Fiset D, Cormier S, Zhang Y, Sun D, Feng M, Luo F, Kunz M, Blais C. Differences Between East Asians and Westerners in the Mental Representations and Visual Information Extraction Involved in the Decoding of Pain Facial Expression Intensity. AFFECTIVE SCIENCE 2023; 4:332-349. [PMID: 37293682 PMCID: PMC10153781 DOI: 10.1007/s42761-023-00186-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 08/30/2022] [Accepted: 03/14/2023] [Indexed: 06/10/2023]
Abstract
Effectively communicating pain is crucial for human beings. Facial expressions are one of the most specific forms of behavior associated with pain, but the way culture shapes expectations about the intensity with which pain is typically facially conveyed, and the visual strategies deployed to decode pain intensity in facial expressions, is poorly understood. The present study used a data-driven approach to compare two cultures, namely East Asians and Westerners, with respect to their mental representations of pain facial expressions (experiment 1, N=60; experiment 2, N=74) and their visual information utilization during the discrimination of facial expressions of pain of different intensities (experiment 3, N=60). Results reveal that, compared to Westerners, East Asians expect more intense pain expressions (experiments 1 and 2), need more signal, and do not rely as much as Westerners on core facial features of pain expressions to discriminate between pain intensities (experiment 3). Together, these findings suggest that cultural norms regarding socially accepted pain behaviors shape expectations about pain facial expressions and decoding visual strategies. Furthermore, they highlight the complexity of emotional facial expressions and the importance of studying pain communication in multicultural settings.
Collapse
Affiliation(s)
- Camille Saumure
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, CP 1250 succ. Hull, Gatineau, J8X 3X7 Canada
| | - Marie-Pier Plouffe-Demers
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, CP 1250 succ. Hull, Gatineau, J8X 3X7 Canada
- Département de Psychologie, Université du Québec à Montréal, CP 8888 succ. Centre-ville, Montréal (Québec) H3C 3P8 Canada
| | - Daniel Fiset
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, CP 1250 succ. Hull, Gatineau, J8X 3X7 Canada
| | - Stéphanie Cormier
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, CP 1250 succ. Hull, Gatineau, J8X 3X7 Canada
| | - Ye Zhang
- Institute of Psychological Science, Hangzhou Normal University, Hangzhou, Zhejiang China
| | - Dan Sun
- Institute of Psychological Science, Hangzhou Normal University, Hangzhou, Zhejiang China
- Department of Psychology, Utrecht University, Utrecht, The Netherlands
| | - Manni Feng
- Institute of Psychological Science, Hangzhou Normal University, Hangzhou, Zhejiang China
| | - Feifan Luo
- Institute of Psychological Science, Hangzhou Normal University, Hangzhou, Zhejiang China
| | - Miriam Kunz
- Department of Medical Psychology & Sociology, University of Augsburg, Augsburg, Germany
| | - Caroline Blais
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, CP 1250 succ. Hull, Gatineau, J8X 3X7 Canada
| |
Collapse
|
29
|
Correia-Caeiro C, Guo K, Mills DS. Visual perception of emotion cues in dogs: a critical review of methodologies. Anim Cogn 2023; 26:727-754. [PMID: 36870003 PMCID: PMC10066124 DOI: 10.1007/s10071-023-01762-5] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2022] [Revised: 02/07/2023] [Accepted: 02/20/2023] [Indexed: 03/05/2023]
Abstract
Comparative studies of human-dog cognition have grown exponentially since the 2000s, but the focus on how dogs look at us (as well as other dogs) as social partners is a more recent phenomenon, despite its importance to human-dog interactions. Here, we briefly summarise the current state of research in visual perception of emotion cues in dogs and why this area is important; we then critically review its most commonly used methods, discussing conceptual and methodological challenges and associated limitations in depth; finally, we suggest some possible solutions and recommend best practice for future research. Typically, most studies in this field have concentrated on facial emotional cues, with full body information rarely considered. There are many challenges in the way studies are conceptually designed (e.g., use of non-naturalistic stimuli) and the way researchers incorporate biases (e.g., anthropomorphism) into experimental designs, which may lead to problematic conclusions. However, technological and scientific advances offer the opportunity to gather much more valid, objective, and systematic data in this rapidly expanding field of study. Solving conceptual and methodological challenges in the field of emotion perception research in dogs will not only be beneficial in improving research in dog-human interactions, but also within the comparative psychology area, in which dogs are an important model species to study evolutionary processes.
Collapse
Affiliation(s)
- Catia Correia-Caeiro
- School of Psychology, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK.
- Department of Life Sciences, University of Lincoln, Lincoln, LN6 7DL, UK.
- Primate Research Institute, Kyoto University, Inuyama, 484-8506, Japan.
- Center for the Evolutionary Origins of Human Behavior, Kyoto University, Inuyama, 484-8506, Japan.
| | - Kun Guo
- School of Psychology, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK
| | - Daniel S Mills
- Department of Life Sciences, University of Lincoln, Lincoln, LN6 7DL, UK
| |
Collapse
|
30
|
Demchenko I, Desai N, Iwasa SN, Gholamali Nezhad F, Zariffa J, Kennedy SH, Rule NO, Cohn JF, Popovic MR, Mulsant BH, Bhat V. Manipulating facial musculature with functional electrical stimulation as an intervention for major depressive disorder: a focused search of literature for a proposal. J Neuroeng Rehabil 2023; 20:64. [PMID: 37193985 DOI: 10.1186/s12984-023-01187-8] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2023] [Accepted: 05/02/2023] [Indexed: 05/18/2023] Open
Abstract
BACKGROUND: Major Depressive Disorder (MDD) is associated with interoceptive deficits expressed throughout the body, particularly the facial musculature. According to the facial feedback hypothesis, afferent feedback from the facial muscles suffices to alter the emotional experience. Thus, manipulating the facial muscles could provide a new "mind-body" intervention for MDD. This article provides a conceptual overview of functional electrical stimulation (FES), a novel neuromodulation-based treatment modality that could potentially be used in the treatment of disorders of disrupted brain connectivity, such as MDD.
METHODS: A focused literature search was performed for clinical studies of FES as a modulatory treatment for mood symptoms. The literature is reviewed in a narrative format, integrating theories of emotion, facial expression, and MDD.
RESULTS: A rich body of literature on FES supports the notion that peripheral muscle manipulation in patients with stroke or spinal cord injury may enhance central neuroplasticity, restoring lost sensorimotor function. These neuroplastic effects suggest that FES may be a promising innovative intervention for psychiatric disorders of disrupted brain connectivity, such as MDD. Recent pilot data on repetitive FES applied to the facial muscles in healthy participants and patients with MDD show early promise, suggesting that FES may attenuate the negative interoceptive bias associated with MDD by enhancing positive facial feedback. Neurobiologically, the amygdala and nodes of the emotion-to-motor transformation loop may serve as potential neural targets for facial FES in MDD, as they integrate proprioceptive and interoceptive inputs from muscles of facial expression and fine-tune their motor output in line with socio-emotional context.
CONCLUSIONS: Manipulating facial muscles may represent a mechanistically novel treatment strategy for MDD and other disorders of disrupted brain connectivity that is worthy of investigation in phase II/III trials.
Collapse
Affiliation(s)
- Ilya Demchenko
- Interventional Psychiatry Program, Mental Health and Addictions Service, St. Michael's Hospital - Unity Health Toronto, Toronto, ON, M5B 1M4, Canada
- Institute of Medical Science, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, M5S 1A8, Canada
| | - Naaz Desai
- Krembil Research Institute - University Health Network, Toronto, ON, M5T 0S8, Canada
- KITE, Toronto Rehabilitation Institute - University Health Network, Toronto, ON, M5G 2A2, Canada
| | - Stephanie N Iwasa
- KITE, Toronto Rehabilitation Institute - University Health Network, Toronto, ON, M5G 2A2, Canada
- CRANIA, University Health Network, Toronto, ON, M5G 2C4, Canada
| | - Fatemeh Gholamali Nezhad
- Interventional Psychiatry Program, Mental Health and Addictions Service, St. Michael's Hospital - Unity Health Toronto, Toronto, ON, M5B 1M4, Canada
| | - José Zariffa
- KITE, Toronto Rehabilitation Institute - University Health Network, Toronto, ON, M5G 2A2, Canada
- CRANIA, University Health Network, Toronto, ON, M5G 2C4, Canada
- Rehabilitation Sciences Institute, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, M5G 1V7, Canada
- Institute of Biomedical Engineering, Faculty of Applied Science & Engineering, University of Toronto, Toronto, ON, M5S 3E2, Canada
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering, Faculty of Applied Science & Engineering, University of Toronto, Toronto, ON, M5S 3G8, Canada
| | - Sidney H Kennedy
- Interventional Psychiatry Program, Mental Health and Addictions Service, St. Michael's Hospital - Unity Health Toronto, Toronto, ON, M5B 1M4, Canada
- Institute of Medical Science, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, M5S 1A8, Canada
- Department of Psychiatry, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, M5T 1R8, Canada
| | - Nicholas O Rule
- Department of Psychology, Faculty of Arts & Science , University of Toronto, Toronto, ON, M5S 3G3, Canada
| | - Jeffrey F Cohn
- Department of Psychology, Kenneth P. Dietrich School of Arts & Sciences, University of Pittsburgh, Pittsburgh, PA, 15260, USA
| | - Milos R Popovic
- Institute of Medical Science, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, M5S 1A8, Canada
- KITE, Toronto Rehabilitation Institute - University Health Network, Toronto, ON, M5G 2A2, Canada
- CRANIA, University Health Network, Toronto, ON, M5G 2C4, Canada
- Institute of Biomedical Engineering, Faculty of Applied Science & Engineering, University of Toronto, Toronto, ON, M5S 3E2, Canada
- The Edward S. Rogers Sr. Department of Electrical & Computer Engineering, Faculty of Applied Science & Engineering, University of Toronto, Toronto, ON, M5S 3G8, Canada
| | - Benoit H Mulsant
- Department of Psychiatry, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, M5T 1R8, Canada
- Campbell Family Mental Health Research Institute, Centre for Addiction and Mental Health, Toronto, ON, M6J 1H4, Canada
| | - Venkat Bhat
- Interventional Psychiatry Program, Mental Health and Addictions Service, St. Michael's Hospital - Unity Health Toronto, Toronto, ON, M5B 1M4, Canada.
- Institute of Medical Science, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, M5S 1A8, Canada.
- Krembil Research Institute - University Health Network, Toronto, ON, M5T 0S8, Canada.
- KITE, Toronto Rehabilitation Institute - University Health Network, Toronto, ON, M5G 2A2, Canada.
- CRANIA, University Health Network, Toronto, ON, M5G 2C4, Canada.
- Department of Psychiatry, Temerty Faculty of Medicine, University of Toronto, Toronto, ON, M5T 1R8, Canada.
| |
Collapse
|
31
|
Onal Ertugrul I, Ahn YA, Bilalpur M, Messinger DS, Speltz ML, Cohn JF. Infant AFAR: Automated facial action recognition in infants. Behav Res Methods 2023; 55:1024-1035. [PMID: 35538295 PMCID: PMC9646921 DOI: 10.3758/s13428-022-01863-y] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 04/13/2022] [Indexed: 11/08/2022]
Abstract
Automated detection of facial action units in infants is challenging. Infant faces have different proportions, less texture, fewer wrinkles and furrows, and unique facial actions relative to adults. For these and related reasons, action unit (AU) detectors that are trained on adult faces may generalize poorly to infant faces. To train and test AU detectors for infant faces, we trained convolutional neural networks (CNN) in adult video databases and fine-tuned these networks in two large, manually annotated, infant video databases that differ in context, head pose, illumination, video resolution, and infant age. AUs were those central to expression of positive and negative emotion. AU detectors trained in infants greatly outperformed ones trained previously in adults. Training AU detectors across infant databases afforded greater robustness to between-database differences than did training database specific AU detectors and outperformed previous state-of-the-art in infant AU detection. The resulting AU detection system, which we refer to as Infant AFAR (Automated Facial Action Recognition), is available to the research community for further testing and applications in infant emotion, social interaction, and related topics.
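The transfer-learning recipe the abstract describes, pretrain on adult faces and then fine-tune for AU detection on infant faces, can be sketched as below. This is our own minimal illustration, not the released Infant AFAR code: ImageNet weights stand in for the adult-face pretraining stage, NUM_AUS is a placeholder, and data loading is stubbed out with a random batch.

```python
# Fine-tuning a pretrained CNN for multi-label AU detection on infant faces.
import torch
import torch.nn as nn
from torchvision import models

NUM_AUS = 9  # hypothetical number of AUs scored in the infant databases

model = models.resnet18(weights=models.ResNet18_Weights.DEFAULT)
model.fc = nn.Linear(model.fc.in_features, NUM_AUS)  # one logit per AU

# Freeze the earliest layers so fine-tuning mostly adapts later ones.
for name, param in model.named_parameters():
    if name.startswith(("conv1", "bn1", "layer1")):
        param.requires_grad = False

criterion = nn.BCEWithLogitsLoss()  # AUs can co-occur: multi-label loss
optimizer = torch.optim.Adam(
    (p for p in model.parameters() if p.requires_grad), lr=1e-4)

def train_step(images, au_labels):
    """images: (B, 3, 224, 224) floats; au_labels: (B, NUM_AUS) in {0, 1}."""
    model.train()
    optimizer.zero_grad()
    loss = criterion(model(images), au_labels.float())
    loss.backward()
    optimizer.step()
    return loss.item()

# Toy batch standing in for manually annotated infant video frames.
loss = train_step(torch.randn(4, 3, 224, 224),
                  torch.randint(0, 2, (4, NUM_AUS)))
print(f"batch loss: {loss:.3f}")
```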
Collapse
|
32
|
Abu Salih M, Abargil M, Badarneh S, Klein Selle N, Irani M, Atzil S. Evidence for cultural differences in affect during mother-infant interactions. Sci Rep 2023; 13:4831. [PMID: 36964204 PMCID: PMC10039016 DOI: 10.1038/s41598-023-31907-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2022] [Accepted: 03/16/2023] [Indexed: 03/26/2023] Open
Abstract
Maternal care is considered a universal and even cross-species set of typical behaviors, which shape the social development of children. In humans, most research on mother-infant bonding is based on Western cultures and conducted in European and American countries. Thus, it is still unknown which aspects of mother-infant behaviors are universal and which vary with culture. Here we test whether typical mother-infant behaviors of affect-communication and affect-regulation are equally represented during spontaneous interaction in Palestinian-Arab and Jewish cultures. 30 Palestinian-Arab and 43 Jewish mother-infant dyads were recruited and videotaped. Using the Affect Regulation Coding System (ARCS), we behaviorally analyzed the second-by-second display of valence and arousal in each participant and calculated the dynamic patterns of affect co-regulation. The results show that Palestinian-Arab infants express more positive valence than Jewish infants and that Palestinian-Arab mothers express higher arousal compared to Jewish mothers. Moreover, we found culturally distinct strategies for regulating the infant: increased arousal in Palestinian-Arab dyads and increased mutual affective match in Jewish dyads. Such cross-cultural differences in affect indicate that basic features of emotion that are often considered universal are differentially represented in different cultures. Affect communication and regulation patterns can be transmitted across generations in early-life socialization with caregivers.
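Two of the quantities the abstract mentions, momentary affective match and lagged co-regulation, can be illustrated on simulated second-by-second valence codes. The sketch below is our own toy reconstruction, not the ARCS pipeline; the ternary coding scheme and the lag structure are assumptions.

```python
# Simulated dyadic valence series: the infant trails the mother by ~2 s.
import numpy as np

rng = np.random.default_rng(0)
seconds = 300
mother = rng.choice([-1, 0, 1], size=seconds)        # valence codes per second
infant = np.roll(mother, 2)                          # infant follows mother
noise = rng.random(seconds) < 0.3                    # 30% unrelated seconds
infant[noise] = rng.choice([-1, 0, 1], size=noise.sum())

match = np.mean(mother == infant)                    # mutual affective match

def lagged_corr(x, y, lag):
    """Correlate x[t] with y[t + lag] (lag >= 0)."""
    return np.corrcoef(x[:-lag] if lag else x, y[lag:])[0, 1]

corrs = [lagged_corr(mother, infant, k) for k in range(6)]
peak = int(np.argmax(corrs))
print(f"match={match:.2f}, peak lag={peak}s, r={corrs[peak]:.2f}")
```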
Collapse
Affiliation(s)
- Miada Abu Salih
- Department of Psychology, The Hebrew University of Jerusalem, Mount Scopus, Jerusalem, Israel
| | - Maayan Abargil
- Department of Psychology, The Hebrew University of Jerusalem, Mount Scopus, Jerusalem, Israel
| | - Saja Badarneh
- Department of Psychology, The Hebrew University of Jerusalem, Mount Scopus, Jerusalem, Israel
| | | | - Merav Irani
- Department of Psychology, The Hebrew University of Jerusalem, Mount Scopus, Jerusalem, Israel
| | - Shir Atzil
- Department of Psychology, The Hebrew University of Jerusalem, Mount Scopus, Jerusalem, Israel.
| |
Collapse
|
33
|
Abstract
How do experiences in nature, in spiritual contemplation, in being moved by music, or with psychedelics promote mental and physical health? Our proposal in this article is awe. To make this argument, we first review recent advances in the scientific study of awe, an emotion often considered ineffable and beyond measurement. Awe engages five processes that benefit well-being: shifts in neurophysiology, a diminished focus on the self, increased prosocial relationality, greater social integration, and a heightened sense of meaning. We then apply this model to illuminate how experiences of awe that arise in nature, spirituality, music, collective movement, and psychedelics strengthen the mind and body.
Collapse
Affiliation(s)
- Maria Monroy
- Department of Psychology, University of California, Berkeley
| | - Dacher Keltner
- Department of Psychology, University of California, Berkeley
| |
Collapse
|
34
|
Snoek L, Jack RE, Schyns PG, Garrod OG, Mittenbühler M, Chen C, Oosterwijk S, Scholte HS. Testing, explaining, and exploring models of facial expressions of emotions. SCIENCE ADVANCES 2023; 9:eabq8421. [PMID: 36763663 PMCID: PMC9916981 DOI: 10.1126/sciadv.abq8421] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 05/06/2022] [Accepted: 01/09/2023] [Indexed: 06/18/2023]
Abstract
Models are the hallmark of mature scientific inquiry. In psychology, this maturity has been reached in a pervasive question: what models best represent facial expressions of emotion? Several hypotheses propose different combinations of facial movements [action units (AUs)] as best representing the six basic emotions and four conversational signals across cultures. We developed a new framework to formalize such hypotheses as predictive models, compare their ability to predict human emotion categorizations in Western and East Asian cultures, explain the causal role of individual AUs, and explore updated, culture-accented models that improve performance by reducing a prevalent Western bias. Our predictive models also provide a noise ceiling to inform the explanatory power and limitations of different factors (e.g., AUs and individual differences). Thus, our framework provides a new approach to test models of social signals, explain their predictive power, and explore their optimization, with direct implications for theory development.
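The core move, formalizing a hypothesis about AU combinations as a predictive model and scoring it against human categorizations, can be sketched in a few lines. The toy below is ours, not the authors' framework; the AU-to-emotion mapping and the simulated observers are illustrative assumptions.

```python
# A 'hypothesis' here is a mapping from AU sets to emotion categories;
# we score how well it predicts simulated human categorizations.
import numpy as np

AUS = ["AU4", "AU5", "AU6", "AU9", "AU12", "AU15", "AU26"]
HYPOTHESIS = {                       # hypothetical AU model per category
    "happy":    {"AU6", "AU12"},
    "sad":      {"AU4", "AU15"},
    "disgust":  {"AU9"},
    "surprise": {"AU5", "AU26"},
}

def predict(active_aus):
    """Predict the category whose AU set overlaps most with the stimulus."""
    scores = {emo: len(active_aus & aus) / len(aus)
              for emo, aus in HYPOTHESIS.items()}
    return max(scores, key=scores.get)

rng = np.random.default_rng(1)
hits = []
for emo, aus in HYPOTHESIS.items():
    for _ in range(50):                       # 50 simulated trials/category
        stim = set(aus)
        if rng.random() < 0.2:                # occasional extra AU (noise)
            stim.add(rng.choice(AUS))
        hits.append(predict(stim) == emo)
print(f"hypothesis accuracy: {np.mean(hits):.2f}")
```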
Collapse
Affiliation(s)
- Lukas Snoek
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
| | - Rachael E. Jack
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
| | - Philippe G. Schyns
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
| | | | - Maximilian Mittenbühler
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- Department of Computer Science, University of Tübingen, Tübingen, Germany
| | - Chaona Chen
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
| | - Suzanne Oosterwijk
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
| | - H. Steven Scholte
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
| |
Collapse
|
35
|
Brooks JA, Tzirakis P, Baird A, Kim L, Opara M, Fang X, Keltner D, Monroy M, Corona R, Metrick J, Cowen AS. Deep learning reveals what vocal bursts express in different cultures. Nat Hum Behav 2023; 7:240-250. [PMID: 36577898 DOI: 10.1038/s41562-022-01489-2] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/28/2022] [Accepted: 10/26/2022] [Indexed: 12/29/2022]
Abstract
Human social life is rich with sighs, chuckles, shrieks and other emotional vocalizations, called 'vocal bursts'. Nevertheless, the meaning of vocal bursts across cultures is only beginning to be understood. Here, we combined large-scale experimental data collection with deep learning to reveal the shared and culture-specific meanings of vocal bursts. A total of n = 4,031 participants in China, India, South Africa, the USA and Venezuela mimicked vocal bursts drawn from 2,756 seed recordings. Participants also judged the emotional meaning of each vocal burst. A deep neural network tasked with predicting the culture-specific meanings people attributed to vocal bursts while disregarding context and speaker identity discovered 24 acoustic dimensions, or kinds, of vocal expression with distinct emotion-related meanings. The meanings attributed to these complex vocal modulations were 79% preserved across the five countries and three languages. These results reveal the underlying dimensions of human emotional vocalization in remarkable detail.
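The headline preservation figure has a simple intuition: the mean emotion-judgment profiles two cultures assign to the same vocal bursts can be correlated, with shared variance (r squared) as the preservation estimate. The sketch below simulates this logic with made-up data; it is not the paper's analysis.

```python
# Simulate two cultures rating the same bursts on 24 emotion dimensions:
# a shared meaning component plus culture-specific noise.
import numpy as np

rng = np.random.default_rng(2)
n_bursts, n_dims = 200, 24
shared = rng.normal(size=(n_bursts, n_dims))          # common meaning
culture_a = shared + 0.4 * rng.normal(size=shared.shape)
culture_b = shared + 0.4 * rng.normal(size=shared.shape)

r = np.corrcoef(culture_a.ravel(), culture_b.ravel())[0, 1]
print(f"cross-culture r={r:.2f}, shared variance={r**2:.0%}")
```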
Collapse
Affiliation(s)
- Jeffrey A Brooks
- Research Division, Hume AI, New York, NY, USA; University of California, Berkeley, Berkeley, CA, USA
| | | | - Alice Baird
- Research Division, Hume AI, New York, NY, USA
| | - Lauren Kim
- Research Division, Hume AI, New York, NY, USA
| | | | - Xia Fang
- Zhejiang University, Hangzhou, China
| | - Dacher Keltner
- Research Division, Hume AI, New York, NY, USA; University of California, Berkeley, Berkeley, CA, USA
| | - Maria Monroy
- University of California, Berkeley, Berkeley, CA, USA
| | | | | | - Alan S Cowen
- Research Division, Hume AI, New York, NY, USA; University of California, Berkeley, Berkeley, CA, USA
| |
Collapse
|
36
|
The Average Facial Expressions: A Range of Motion Analysis for Different Sex and Age Groups. Plast Reconstr Surg Glob Open 2023; 11:e4762. [PMID: 36776597 PMCID: PMC9911205 DOI: 10.1097/gox.0000000000004762] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2022] [Accepted: 11/15/2022] [Indexed: 02/04/2023]
Abstract
Facial expressions are ubiquitous in communication. Assessment of mimic function is therefore essential in facial surgery, but no reference standards are currently available. This prospective study aims to create reference values of three-dimensional landmark displacement for different sex and age groups.
Methods: Three-dimensional photographs were taken of healthy subjects at rest, at maximum closed smile, and while pouting. Displacement of perioral landmarks for both exercises was analyzed with MATLAB, both as absolute displacement and as a ratio of mouth width. Additionally, displacement in three planes was analyzed for each landmark. Averages were calculated for both sexes in four age groups: 4-8, 8-12, 12-16, and >16 years.
Results: In total, 328 subjects were included. Oral landmarks predominantly moved forward and backward for both exercises. Nasal landmarks predominantly moved vertically. With increasing age, oral landmark displacement decreased for smiling, whereas nasal landmark displacement increased. For pouting, oral landmark displacement increased with age, whereas nasal landmark displacement decreased.
Conclusions: The present study creates reference values for movement of perioral structures for two facial expressions, across different sex and age groups. These data are of great value for the assessment of mimic function and give insight into the development of facial animation over time.
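The two displacement measures described under Methods, absolute 3D displacement and displacement as a ratio of mouth width, reduce to a few vector operations. The sketch below is our reconstruction in Python rather than the study's MATLAB code, with invented landmark coordinates.

```python
# Absolute and mouth-width-normalized landmark displacement, rest -> smile.
import numpy as np

# Hypothetical 3D coordinates (mm) for a few perioral landmarks.
rest = {
    "mouth_left":  np.array([-25.0, -40.0, 80.0]),
    "mouth_right": np.array([ 25.0, -40.0, 80.0]),
    "upper_lip":   np.array([  0.0, -35.0, 85.0]),
}
smile = {
    "mouth_left":  np.array([-31.0, -36.0, 78.0]),
    "mouth_right": np.array([ 31.0, -36.0, 78.0]),
    "upper_lip":   np.array([  0.0, -33.0, 84.0]),
}

mouth_width = np.linalg.norm(rest["mouth_right"] - rest["mouth_left"])

for name in rest:
    disp = np.linalg.norm(smile[name] - rest[name])    # absolute (mm)
    print(f"{name}: {disp:.1f} mm, {disp / mouth_width:.2f} x mouth width")
```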
Collapse
|
37
|
Ballotta D, Maramotti R, Borelli E, Lui F, Pagnoni G. Neural correlates of emotional valence for faces and words. Front Psychol 2023; 14:1055054. [PMID: 36910761 PMCID: PMC9996044 DOI: 10.3389/fpsyg.2023.1055054] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/27/2022] [Accepted: 02/06/2023] [Indexed: 02/25/2023] Open
Abstract
Stimuli with negative emotional valence are especially apt to influence perception and action because of their crucial role in survival, a property that may not be precisely mirrored by positive emotional stimuli of equal intensity. The aim of this study was to identify the neural circuits differentially coding for positive and negative valence in the implicit processing of facial expressions and words, which are among the main ways humans express emotions. Thirty-six healthy subjects took part in an event-related fMRI experiment. We used an implicit emotional processing task with the visual presentation of negative, positive, and neutral faces and words as primary stimuli. Dynamic Causal Modeling (DCM) of the fMRI data was used to test effective brain connectivity within two different anatomo-functional models, for the processing of words and faces, respectively. In our models, the only areas showing a significant differential response to negative and positive valence across both face and word stimuli were early visual cortices, with faces eliciting stronger activations. For faces, DCM revealed that this effect was mediated by a facilitation of activity in the amygdala by positive faces and in the fusiform face area by negative faces; for words, the effect was mainly imputable to a facilitation of activity in the primary visual cortex by positive words. These findings support a role of early sensory cortices in discriminating the emotional valence of both faces and words, where the effect may be mediated chiefly by the subcortical/limbic visual route for faces, and rely more on the direct thalamic pathway to primary visual cortex for words.
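For orientation, the general bilinear state equation underlying DCM (the standard textbook form, not the study's specific matrices) is:

```latex
\dot{x}(t) = \Bigl( A + \sum_{j} u_j(t)\, B^{(j)} \Bigr)\, x(t) + C\, u(t)
% x(t): neural states of the modeled regions; u(t): experimental inputs
% A: fixed (endogenous) connectivity between regions
% B^(j): modulation of inter-regional connections by input j (e.g., valence)
% C: direct driving influence of inputs on regions
```

Testing effective connectivity then amounts to comparing the evidence for models that differ in which connections the A, B, and C matrices allow.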
Collapse
Affiliation(s)
- Daniela Ballotta
- Department of Biomedical, Metabolic and Neural Sciences, University of Modena and Reggio Emilia, Modena, Italy
| | - Riccardo Maramotti
- Department of Biomedical, Metabolic and Neural Sciences, University of Modena and Reggio Emilia, Modena, Italy
| | - Eleonora Borelli
- Department of Medical and Surgical, Maternal-Infantile and Adult Sciences, University of Modena and Reggio Emilia, Modena, Italy
| | - Fausta Lui
- Department of Biomedical, Metabolic and Neural Sciences, University of Modena and Reggio Emilia, Modena, Italy
| | - Giuseppe Pagnoni
- Department of Biomedical, Metabolic and Neural Sciences, University of Modena and Reggio Emilia, Modena, Italy
| |
Collapse
|
38
|
Höfling TTA, Alpers GW. Automatic facial coding predicts self-report of emotion, advertisement and brand effects elicited by video commercials. Front Neurosci 2023; 17:1125983. [PMID: 37205049 PMCID: PMC10185761 DOI: 10.3389/fnins.2023.1125983] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2022] [Accepted: 02/10/2023] [Indexed: 05/21/2023] Open
Abstract
Introduction: Consumers' emotional responses are the prime target for marketing commercials. Facial expressions provide information about a person's emotional state, and technological advances have enabled machines to automatically decode them.
Method: With automatic facial coding, we investigated the relationships between facial movements (i.e., action unit activity) and self-reported emotion, advertisement effects, and brand effects elicited by video commercials. We recorded and analyzed the facial responses of 219 participants while they watched a broad array of video commercials.
Results: Facial expressions significantly predicted self-reported emotion as well as advertisement and brand effects. Interestingly, facial expressions had incremental value beyond self-reported emotion in the prediction of advertisement and brand effects. Hence, automatic facial coding appears to be useful as a non-verbal quantification of advertisement effects beyond self-report.
Discussion: This is the first study to measure a broad spectrum of automatically scored facial responses to video commercials. Automatic facial coding is a promising non-invasive and non-verbal method to measure emotional responses in marketing.
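The incremental-value claim corresponds to a nested-model comparison: does adding AU activity to a regression that already contains self-reported emotion raise R squared? A minimal simulation of that logic (our own, with fabricated numbers, not the study's data) follows.

```python
# Incremental validity via nested least-squares models.
import numpy as np

rng = np.random.default_rng(3)
n = 219
self_report = rng.normal(size=(n, 1))            # self-reported emotion
aus = rng.normal(size=(n, 3))                    # facial action unit activity
ad_effect = (0.5 * self_report[:, 0] + 0.3 * aus[:, 0]
             + rng.normal(scale=0.8, size=n))    # simulated outcome

def r_squared(X, y):
    X1 = np.column_stack([np.ones(len(y)), X])   # add intercept
    beta, *_ = np.linalg.lstsq(X1, y, rcond=None)
    resid = y - X1 @ beta
    return 1 - resid.var() / y.var()

r2_base = r_squared(self_report, ad_effect)
r2_full = r_squared(np.column_stack([self_report, aus]), ad_effect)
print(f"self-report only R^2={r2_base:.3f}; + AUs R^2={r2_full:.3f}; "
      f"increment={r2_full - r2_base:.3f}")
```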
Collapse
|
39
|
Straulino E, Scarpazza C, Sartori L. What is missing in the study of emotion expression? Front Psychol 2023; 14:1158136. [PMID: 37179857 PMCID: PMC10173880 DOI: 10.3389/fpsyg.2023.1158136] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/03/2023] [Accepted: 04/06/2023] [Indexed: 05/15/2023] Open
Abstract
As we approach celebrations of the 150th anniversary of "The Expression of the Emotions in Man and Animals", scientists' conclusions on emotion expression are still debated. Emotion expression has been traditionally anchored to prototypical and mutually exclusive facial expressions (e.g., anger, disgust, fear, happiness, sadness, and surprise). However, people express emotions in nuanced patterns and, crucially, not everything is in the face. In recent decades, considerable work has critiqued this classical view, calling for a more fluid and flexible approach that considers how humans dynamically perform genuine expressions with their bodies in context. A growing body of evidence suggests that each emotional display is a complex, multi-component, motoric event. The human face is never static, but continuously acts and reacts to internal and environmental stimuli, with the coordinated action of muscles throughout the body. Moreover, two anatomically and functionally different neural pathways subserve voluntary and involuntary expressions. An interesting implication is that we have distinct and independent pathways for genuine and posed facial expressions, and different combinations may occur across the vertical facial axis. Investigating the time course of these facial blends, which can be controlled consciously only in part, has recently provided a useful operational test for comparing the different predictions of various models on the lateralization of emotions. This concise review identifies shortcomings and new challenges regarding the study of emotion expression at the face, body, and contextual levels, eventually resulting in a theoretical and methodological shift in the study of emotions. We contend that the most feasible solution to address the complex world of emotion expression is to define a completely new and more complete approach to emotional investigation. This approach can potentially lead us to the roots of emotional display, and to the individual mechanisms underlying its expression (i.e., individual emotional signatures).
Collapse
Affiliation(s)
- Elisa Straulino
- Department of General Psychology, University of Padova, Padova, Italy
| | - Cristina Scarpazza
- Department of General Psychology, University of Padova, Padova, Italy
- IRCCS San Camillo Hospital, Venice, Italy
| | - Luisa Sartori
- Department of General Psychology, University of Padova, Padova, Italy
- Padova Neuroscience Center, University of Padova, Padova, Italy
| |
Collapse
|
40
|
Zhang M, Siegle GJ. Linking Affective and Hearing Sciences-Affective Audiology. Trends Hear 2023; 27:23312165231208377. [PMID: 37904515 PMCID: PMC10619363 DOI: 10.1177/23312165231208377] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/23/2023] [Revised: 09/22/2023] [Accepted: 10/01/2023] [Indexed: 11/01/2023] Open
Abstract
A growing number of health-related sciences, including audiology, have increasingly recognized the importance of affective phenomena. However, in audiology, affective phenomena are mostly studied as a consequence of hearing status. This review first addresses anatomical and functional bidirectional connections between auditory and affective systems that support a reciprocal affect-hearing relationship. We then postulate, by focusing on four practical examples (hearing public campaigns, hearing intervention uptake, thorough hearing evaluation, and tinnitus), that some important challenges in audiology are likely affect-related and that potential solutions could be developed by inspiration from affective science advances. We continue by introducing useful resources from affective science that could help audiology professionals learn about the wide range of affective constructs and integrate them into hearing research and clinical practice in structured and applicable ways. Six important considerations for good quality affective audiology research are summarized. We conclude that it is worthwhile and feasible to explore the explanatory power of emotions, feelings, motivations, attitudes, moods, and other affective processes in depth when trying to understand and predict how people with hearing difficulties perceive, react, and adapt to their environment.
Collapse
Affiliation(s)
- Min Zhang
- Shanghai Key Laboratory of Clinical Geriatric Medicine, Huadong Hospital, Fudan University, Shanghai, China
| | - Greg J. Siegle
- Department of Psychiatry, University of Pittsburgh Medical Center, Pittsburgh, PA, USA
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA, USA
| |
Collapse
|
41
|
Gündem D, Potočnik J, De Winter FL, El Kaddouri A, Stam D, Peeters R, Emsell L, Sunaert S, Van Oudenhove L, Vandenbulcke M, Feldman Barrett L, Van den Stock J. The neurobiological basis of affect is consistent with psychological construction theory and shares a common neural basis across emotional categories. Commun Biol 2022; 5:1354. [PMID: 36494449 PMCID: PMC9734184 DOI: 10.1038/s42003-022-04324-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2021] [Accepted: 11/30/2022] [Indexed: 12/13/2022] Open
Abstract
Affective experience colours everyday perception and cognition, yet its fundamental and neurobiological basis is poorly understood. The current debate essentially centers on the commonalities and specificities across individuals, events, and emotional categories like anger, sadness, and happiness. Using fMRI during the experience of these emotions, we critically compare the two dominant conflicting theories on human affect. Basic emotion theory posits emotions as discrete universal entities generated by dedicated emotion category-specific neural circuits, while psychological construction theory claims emotional events are unique, idiosyncratic, and constructed by psychological primitives like core affect and conceptualization, which underlie each emotional event and operate in a predictive framework. Based on the findings of eight a priori defined, model-specific prediction tests on neural response amplitudes and patterns, we conclude that the neurobiological basis of affect is primarily characterized by idiosyncratic mechanisms and a common neural basis shared across emotion categories, consistent with psychological construction theory. The findings provide further insight into the organizational principles of the neural basis of affect and brain function in general. Future studies in clinical populations with affective symptoms may reveal the corresponding underlying neural changes from a psychological construction perspective.
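One family of prediction tests separating the two theories can be sketched numerically: basic emotion theory predicts reliably higher within-category than between-category neural pattern similarity, whereas psychological construction predicts little such category advantage. The simulation below is our own illustration, not one of the paper's eight tests.

```python
# Within- vs. between-category pattern similarity under the two theories.
import numpy as np

rng = np.random.default_rng(4)

def simulate(category_specific, n_trials=15, n_vox=120):
    """Simulate trial-wise activation patterns for three emotion categories."""
    patterns = {}
    for emo in ("anger", "sadness", "happiness"):
        proto = rng.normal(size=n_vox) if category_specific else np.zeros(n_vox)
        patterns[emo] = [proto + rng.normal(size=n_vox) for _ in range(n_trials)]
    return patterns

def mean_sim(A, B, same_set=False):
    """Mean pairwise pattern correlation, excluding self-pairs within a set."""
    vals = [np.corrcoef(a, b)[0, 1]
            for i, a in enumerate(A) for j, b in enumerate(B)
            if not (same_set and i == j)]
    return float(np.mean(vals))

for label, flag in (("category-specific", True), ("constructionist", False)):
    p = simulate(flag)
    within = np.mean([mean_sim(v, v, same_set=True) for v in p.values()])
    between = mean_sim(p["anger"], p["sadness"])
    print(f"{label}: within={within:.2f}, between={between:.2f}")
```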
Collapse
Affiliation(s)
- Doğa Gündem
- Neuropsychiatry, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
| | - Jure Potočnik
- Neuropsychiatry, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
| | - François-Laurent De Winter
- Neuropsychiatry, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Geriatric Psychiatry, University Psychiatric Center KU Leuven, Leuven, Belgium
| | - Amal El Kaddouri
- Neuropsychiatry, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
| | - Daphne Stam
- Neuropsychiatry, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
| | - Ronald Peeters
- Department of Radiology, University Hospitals Leuven, Leuven, Belgium
| | - Louise Emsell
- Neuropsychiatry, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Department of Radiology, University Hospitals Leuven, Leuven, Belgium
- Department of Imaging and Pathology, KU Leuven, Leuven, Belgium
| | - Stefan Sunaert
- Department of Radiology, University Hospitals Leuven, Leuven, Belgium
- Department of Imaging and Pathology, KU Leuven, Leuven, Belgium
| | - Lukas Van Oudenhove
- Laboratory for Brain-Gut Axis Studies (LaBGAS), Translational Research in Gastrointestinal Disorders (TARGID), Department of Chronic Diseases and Metabolism, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Cognitive and Affective Neuroscience Lab, Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
| | - Mathieu Vandenbulcke
- Neuropsychiatry, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Geriatric Psychiatry, University Psychiatric Center KU Leuven, Leuven, Belgium
| | - Lisa Feldman Barrett
- Department of Psychology, Northeastern University, Boston, MA, USA
- Department of Psychiatry, Massachusetts General Hospital, Harvard Medical School, Boston, MA, USA
- Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA
| | - Jan Van den Stock
- Neuropsychiatry, Department of Neurosciences, Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Geriatric Psychiatry, University Psychiatric Center KU Leuven, Leuven, Belgium
| |
Collapse
|
42
|
Barrett LF. Context reconsidered: Complex signal ensembles, relational meaning, and population thinking in psychological science. AMERICAN PSYCHOLOGIST 2022; 77:894-920. [PMID: 36409120 PMCID: PMC9683522 DOI: 10.1037/amp0001054] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 09/26/2023]
Abstract
This article considers the status and study of "context" in psychological science through the lens of research on emotional expressions. The article begins by updating three well-trod methodological debates on the role of context in emotional expressions to reconsider several fundamental assumptions lurking within the field's dominant methodological tradition: namely, that certain expressive movements have biologically prepared, inherent emotional meanings that issue from singular, universal processes which are independent of but interact with contextual influences. The second part of this article considers the scientific opportunities that await if we set aside this traditional understanding of "context" as a moderator of signals with inherent psychological meaning and instead consider the possibility that psychological events emerge in ecosystems of signal ensembles, such that the psychological meaning of any individual signal is entirely relational. Such a fundamental shift has radical implications not only for the science of emotion but for psychological science more generally. It offers opportunities to improve the validity and trustworthiness of psychological science beyond what can be achieved with improvements to methodological rigor alone.
Collapse
|
43
|
Díaz-Agea JL, Pujalte-Jesús MJ, Arizo-Luque V, García-Méndez JA, López-Chicheri-García I, Rojo-Rojo A. How Are You Feeling? Interpretation of Emotions through Facial Expressions of People Wearing Different Personal Protective Equipment: An Observational Study. NURSING REPORTS 2022; 12:758-774. [PMID: 36278768 PMCID: PMC9590080 DOI: 10.3390/nursrep12040075] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2022] [Revised: 10/10/2022] [Accepted: 10/13/2022] [Indexed: 11/05/2022] Open
Abstract
(1) Background: The perception of others' emotions based on non-verbal cues, such as facial expressions, is fundamental for interpersonal communication and mutual support. Using personal protective equipment (PPE) in a work environment during the SARS-CoV-2 pandemic challenged health professionals' ability to recognise emotions and expressions while wearing PPE. The working hypothesis of this study was that the increased limitation of facial visibility, due to the use of a personal protective device, would interfere with the participants' perception of basic emotions.
(2) Methods: Through a cross-sectional descriptive study, the present research aimed to analyse the identification of four basic emotions (happiness; sadness; fear/surprise; and disgust/anger) through three types of PPE (FFP2 respirator, protective overall, and powered air-purifying respirator (PAPR)), using 32 photographs. The study was conducted with volunteer participants who met the inclusion criteria (individuals older than 13 without cognitive limitations). Participants had to recognise the emotions of actors in photographs that were randomly displayed in an online form.
(3) Results: In general, the 690 participants better recognised happiness and fear, independently of the PPE utilised. Women could better identify the different emotions, along with university graduates and young and middle-aged adults. Emotional identification was at its worst when the participants wore protective overalls (5.42 ± 1.22), followed by the PAPR (5.83 ± 1.38); the best scores were obtained using the FFP2 masks (6.57 ± 1.20). Sadness was the least recognised emotion, regardless of age.
(4) Conclusions: The personal protective devices interfere in the recognition of emotions, with the protective overalls having the greatest impact, and the FFP2 mask the least. The emotions that were best recognised were happiness and fear/surprise, while the least recognised emotion was sadness. Women were better at identifying emotions, as were participants with higher education and young and middle-aged adults.
44
Non-verbal communication of emotions: sociodemographic variables and in-group advantage [Comunicación no verbal de emociones: variables sociodemográficas y ventaja endogrupal]. REVISTA IBEROAMERICANA DE PSICOLOGÍA 2022. [DOI: 10.33881/2027-1786.rip.15209] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/22/2023] Open
Abstract
In the field of non-verbal communication of emotions, there is still a debate about the universality of emotion expressions and the effect culture has on them. Two theories currently attempt to explain this phenomenon: neurocultural theory and dialect theory. Both aim to explain the non-verbal communication of emotions, but the former focuses on its universal aspects, whereas the latter focuses on culture. The aim of the present study was to investigate the in-group advantage within a single culture. A quasi-experiment was designed in which 107 participants were asked to indicate the emotion expressed in 42 stimuli shown in three different presentation formats. The results indicate that this advantage exists among women and young people. These results illustrate the effects of culture on this phenomenon.
45
Viola M. Seeing through the shades of situated affectivity. Sunglasses as a socio-affective artifact. PHILOSOPHICAL PSYCHOLOGY 2022. [DOI: 10.1080/09515089.2022.2118574] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/14/2022]
Affiliation(s)
- Marco Viola
- Department of Philosophy, Communication, and Performing Arts, Roma Tre University, Rome, Italy
46
Erle TM, Funk F. Visuospatial and Affective Perspective-Taking. SOCIAL PSYCHOLOGY 2022. [DOI: 10.1027/1864-9335/a000504] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/24/2022]
Abstract
Perspective-taking is the ability to intuit another person’s mental state. Historically, cognitive and affective perspective-taking have been distinguished from visuospatial perspective-taking on the grounds that the content these processes operate on is too dissimilar. However, all three share functional similarities. Following recent research showing relations between cognitive and visuospatial perspective-taking, this article explores links between visuospatial and affective perspective-taking. Data from three preregistered experiments suggest that visuospatial perspective-taking does not improve emotion recognition speed and only slightly increases emotion recognition accuracy (Experiment 1), yet it increases the perceived intensity of emotional expressions (Experiment 2) as well as the emotional contagiousness of negative emotions (Experiment 3). The implications of these findings for content-based, cognitive, and functional taxonomies of perspective-taking and related processes are discussed.
Affiliation(s)
- Thorsten M. Erle
- Department of Social Psychology, Tilburg University, Tilburg, The Netherlands
- Friederike Funk
- Faculty of Arts and Sciences, NYU Shanghai, Shanghai, PR China
47
Ruba AL, Pollak SD, Saffran JR. Acquiring Complex Communicative Systems: Statistical Learning of Language and Emotion. Top Cogn Sci 2022; 14:432-450. [PMID: 35398974 PMCID: PMC9465951 DOI: 10.1111/tops.12612] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2011] [Revised: 03/16/2022] [Accepted: 03/17/2022] [Indexed: 11/30/2022]
Abstract
During the early postnatal years, most infants rapidly learn to understand two naturally evolved communication systems: language and emotion. While these two domains include different types of content knowledge, it is possible that similar learning processes subserve their acquisition. In this review, we compare the learnable statistical regularities in language and emotion input. We then consider how domain-general learning abilities may underlie the acquisition of language and emotion, and how this process may be constrained in each domain. This comparative developmental approach can advance our understanding of how humans learn to communicate with others.
Affiliation(s)
- Ashley L. Ruba
- Department of Psychology, University of Wisconsin – Madison
- Seth D. Pollak
- Department of Psychology, University of Wisconsin – Madison
48
Determination of “Neutral”–“Pain”, “Neutral”–“Pleasure”, and “Pleasure”–“Pain” Affective State Distances by Using AI Image Analysis of Facial Expressions. TECHNOLOGIES 2022. [DOI: 10.3390/technologies10040075] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 01/27/2023]
Abstract
(1) Background: In addition to verbalizations, facial expressions advertise one’s affective state. There is an ongoing debate concerning the communicative value of the facial expressions of pain and of pleasure, and to what extent humans can distinguish between them. We introduce a novel method of analysis by replacing human ratings with outputs from image-analysis software. (2) Methods: We used image-analysis software to extract feature vectors of the facial expressions neutral, pain, and pleasure displayed by 20 actresses. We dimension-reduced these feature vectors, used singular value decomposition to eliminate noise, and then used hierarchical agglomerative clustering to detect patterns. (3) Results: The pain–pleasure distances were rarely smaller than the pain–neutral and pleasure–neutral distances. The pain–pleasure distances were Weibull-distributed, and noise contributed 10% to the signal. The noise-free distances clustered into four clusters and two isolates. (4) Conclusions: AI methods of image recognition are superior to human abilities in distinguishing between facial expressions of pain and pleasure. Statistical methods and hierarchical clustering offer possible explanations as to why humans fail. The reliability of commercial software that attempts to identify facial expressions of affective states can be improved by using the results of our analyses.
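The distance-based pipeline described in this abstract (feature extraction, SVD denoising, pairwise distances, hierarchical clustering) can be sketched in a few lines. The following is a minimal illustration, not the authors' implementation: the array shapes, the stand-in random data, the 90% variance threshold, and the four-cluster cut are all assumptions made for demonstration.

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.cluster.hierarchy import linkage, fcluster

# Stand-in data: one feature vector per (actress, expression) image.
# In the study these would come from the image-analysis software.
rng = np.random.default_rng(0)
features = rng.normal(size=(60, 128))  # 20 actresses x 3 expressions (assumed shape)

# Denoise with a truncated SVD: keep the leading components that explain
# most of the variance and reconstruct, discarding the tail as noise.
centered = features - features.mean(axis=0)
U, s, Vt = np.linalg.svd(centered, full_matrices=False)
k = int(np.searchsorted(np.cumsum(s**2) / np.sum(s**2), 0.90)) + 1  # assumed 90% threshold
denoised = U[:, :k] @ np.diag(s[:k]) @ Vt[:k, :]

# Pairwise Euclidean distances between images; the pain-pleasure,
# pain-neutral, and pleasure-neutral distances compared in the abstract
# are entries of this condensed distance matrix.
distances = pdist(denoised, metric="euclidean")

# Hierarchical agglomerative clustering on the denoised vectors,
# cut into four clusters as in the reported result.
tree = linkage(denoised, method="average")
labels = fcluster(tree, t=4, criterion="maxclust")
print(labels)
```

Average linkage is used here only as a plausible default; the abstract does not state which linkage criterion the authors chose.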
49
The cultural learning account of first impressions. Trends Cogn Sci 2022; 26:656-668. [PMID: 35697651 DOI: 10.1016/j.tics.2022.05.007] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2022] [Revised: 05/16/2022] [Accepted: 05/17/2022] [Indexed: 11/20/2022]
Abstract
Humans spontaneously attribute character traits to strangers based on their facial appearance. Although these 'first impressions' typically have no basis in reality, some authors have assumed that they have an innate origin. By contrast, the Trait Inference Mapping (TIM) account proposes that first impressions are products of culturally acquired associative mappings that allow activation to spread from representations of facial appearance to representations of trait profiles. According to TIM, cultural instruments, including propaganda, illustrated storybooks, art and iconography, ritual, film, and TV, expose many individuals within a community to common sources of correlated face-trait experience, yielding first impressions that are shared by many, but typically inaccurate. Here, we review emerging empirical findings, many of which accord with TIM, and argue that future work must distinguish first impressions based on invariant facial features (e.g., shape) from those based on facial behaviours (e.g., expressions).
50
Posterior-prefrontal and medial orbitofrontal regions play crucial roles in happiness and sadness recognition. Neuroimage Clin 2022; 35:103072. [PMID: 35689975 PMCID: PMC9192961 DOI: 10.1016/j.nicl.2022.103072] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/29/2021] [Revised: 05/31/2022] [Accepted: 05/31/2022] [Indexed: 11/23/2022]
Abstract
Highlights:
- Brain areas underlying trade-off relations between emotions were identified.
- Damage to the posterior-prefrontal (PPF) area reduces the accuracy of happiness recognition.
- Damage to the PPF area increases the accuracy of sadness recognition.
- A similar tendency was observed in medial orbitofrontal regions for sadness recognition.
- Only the deficit in sadness recognition, not happiness recognition, persisted into the chronic phase.
The core brain regions responsible for basic human emotions are not yet fully understood. We investigated the key areas responsible for emotion recognition of facial expressions of happiness and sadness using data obtained from patients who underwent local brain resection. A total of 44 patients with right cerebral hemispheric brain tumors and 33 healthy volunteers were enrolled and subjected to a facial expression recognition test. Voxel-based lesion-symptom mapping was performed to investigate the relationship between the accuracy of emotion recognition and the resected regions. Consequently, trade-off relationships were discovered: the posterior-prefrontal region was related to a low score of happiness recognition and a high score of sadness recognition (disorder-of-happiness group), whereas the medial orbitofrontal region was related to a low score of sadness recognition and a high score of happiness recognition (disorder-of-sadness group). The emotion recognition score in both the happiness and sadness disorder groups was significantly lower than that in the control group (p = 0.0009 and p = 0.021, respectively). Interestingly, the deficit in happiness recognition was temporary, whereas the deficit in sadness recognition persisted during the chronic phase. Using graph theoretical analysis, we identified structural connectivity between the posterior-prefrontal and medial orbitofrontal regions. When either of these regions was damaged, the tract volume connecting them was significantly reduced (p = 0.013). These results indicate that the posterior-prefrontal and medial orbitofrontal regions may be crucial for maintaining a balance between happiness and sadness recognition in humans. Investigating the clinical impact of certain area resections using lesion studies combined with connectivity analysis is a useful neuroimaging method for understanding neural networks.
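For readers unfamiliar with the method, voxel-based lesion-symptom mapping reduces, at its core, to a per-voxel comparison of behavioural scores between patients whose resection includes that voxel and patients whose resection spares it. The sketch below illustrates only that core computation; the binary lesion maps, score vector, minimum-overlap threshold, and choice of Welch's t-test are illustrative assumptions, and a real analysis would add spatial preprocessing and multiple-comparison correction.

```python
import numpy as np
from scipy.stats import ttest_ind

# Stand-in inputs: binary lesion maps (subjects x voxels) and one
# emotion-recognition accuracy score per subject.
rng = np.random.default_rng(1)
lesions = rng.integers(0, 2, size=(44, 1000)).astype(bool)  # assumed shapes
scores = rng.uniform(0.4, 1.0, size=44)

t_map = np.full(lesions.shape[1], np.nan)
for v in range(lesions.shape[1]):
    hit, spared = scores[lesions[:, v]], scores[~lesions[:, v]]
    # Test only voxels lesioned in enough patients (assumed minimum of 5).
    if hit.size >= 5 and spared.size >= 5:
        t_map[v] = ttest_ind(hit, spared, equal_var=False).statistic

# Strongly negative t-values flag voxels where damage lowers recognition
# accuracy; a real pipeline would threshold these after correcting for
# multiple comparisons (e.g. by permutation testing).
print(np.nanmin(t_map), np.nanmax(t_map))
```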