1. Mattera A, Alfieri V, Granato G, Baldassarre G. Chaotic recurrent neural networks for brain modelling: A review. Neural Netw 2025; 184:107079. [PMID: 39756119 DOI: 10.1016/j.neunet.2024.107079] [Received: 07/06/2024] [Revised: 11/25/2024] [Accepted: 12/19/2024] [Indexed: 01/07/2025]
Abstract
Even in the absence of external stimuli, the brain is spontaneously active. Indeed, most cortical activity is internally generated by recurrence. Both theoretical and experimental studies suggest that chaotic dynamics characterize this spontaneous activity. While the precise function of brain chaotic activity is still puzzling, we know that chaos confers many advantages. From a computational perspective, chaos enhances the complexity of network dynamics. From a behavioural point of view, chaotic activity could generate the variability required for exploration. Furthermore, information storage and transfer are maximized at the critical border between order and chaos. Despite these benefits, many computational brain models avoid incorporating spontaneous chaotic activity due to the challenges it poses for learning algorithms. In recent years, however, multiple approaches have been proposed to overcome this limitation. As a result, many different algorithms have been developed, initially within the reservoir computing paradigm. Over time, the field has evolved to increase the biological plausibility and performance of the algorithms, sometimes going beyond the reservoir computing framework. In this review article, we examine the computational benefits of chaos and the unique properties of chaotic recurrent neural networks, with a particular focus on those typically utilized in reservoir computing. We also provide a detailed analysis of the algorithms designed to train chaotic RNNs, tracing their historical evolution and highlighting key milestones in their development. Finally, we explore the applications and limitations of chaotic RNNs for brain modelling, consider their potential broader impacts beyond neuroscience, and outline promising directions for future research.
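The reservoir computing paradigm this review surveys can be illustrated with a minimal echo state network: a fixed random recurrent network is driven near the edge of chaos, and only a linear readout is trained. The sketch below is illustrative only, not taken from the review; the network size, spectral radius, task, and names such as `run_reservoir` are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)

# Fixed random reservoir, scaled to spectral radius ~0.95 -- near the
# critical order/chaos border the review identifies as computationally rich.
N = 200
W = rng.normal(0.0, 1.0, (N, N))
W *= 0.95 / np.max(np.abs(np.linalg.eigvals(W)))
w_in = rng.uniform(-0.5, 0.5, N)

def run_reservoir(u):
    """Drive the reservoir with a scalar input sequence u; return all states."""
    x = np.zeros(N)
    states = np.empty((len(u), N))
    for t, u_t in enumerate(u):
        x = np.tanh(W @ x + w_in * u_t)   # recurrent update, fixed weights
        states[t] = x
    return states

# One-step-ahead prediction of a sine wave: only the linear readout is
# trained (ridge regression); the recurrent core is never modified.
u = np.sin(np.linspace(0.0, 8.0 * np.pi, 400))
X = run_reservoir(u[:-1])          # states driven by u_0 .. u_{T-2}
y = u[1:]                          # targets  u_1 .. u_{T-1}
w_out = np.linalg.solve(X.T @ X + 1e-6 * np.eye(N), X.T @ y)

mse = float(np.mean((X @ w_out - y) ** 2))
print(f"one-step MSE: {mse:.2e}")
```

Training only the readout sidesteps the difficulty of backpropagating through chaotic recurrent dynamics, which is the limitation the algorithms reviewed in the article were developed to overcome.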
Affiliation(s)
- Andrea Mattera
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy.
- Valerio Alfieri
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy; International School of Advanced Studies, Center for Neuroscience, University of Camerino, Via Gentile III Da Varano, 62032, Camerino, Italy
- Giovanni Granato
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
- Gianluca Baldassarre
- Institute of Cognitive Sciences and Technology, National Research Council, Via Romagnosi 18a, I-00196, Rome, Italy
2. Cai Y, An X, Dai S, Ma H, Wang Y. Long-term high altitude exposure reduces positive bias of facial recognition: Evidence from event-related potential. Neuroscience 2025; 570:1-8. [PMID: 39956353 DOI: 10.1016/j.neuroscience.2025.02.024] [Received: 11/01/2024] [Revised: 02/10/2025] [Accepted: 02/12/2025] [Indexed: 02/18/2025]
Abstract
High-altitude environments influence emotional biases. Nonetheless, the neural mechanisms underlying emotional facial processing, which could help explain depression at high altitudes, remain unexplored. An emotional face recognition task was used to explore the impact of high-altitude hypoxia on emotional face recognition, and event-related potentials were recorded from a high-altitude group (n = 22) and a low-altitude group (n = 24). The results showed that the high-altitude group had longer reaction times, lower accuracy rates, and more negative P1 and N170 amplitudes. Moreover, compared with the low-altitude group, the positive bias of the N170 component in the high-altitude group decreased, and the right-hemispheric lateralization of the P1 component disappeared. These results suggest that both early and late stages of facial processing are influenced by high-altitude hypoxia. The decrease in positive bias in late processing may help explain depression at high altitudes.
Affiliation(s)
- Yudian Cai
- Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Xin An
- College of Politics, National Defence University, Xi'an, China
- Shan Dai
- Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Hailin Ma
- Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; Institute of Education and Psychology, Tibet University, Tibet, China
- Yan Wang
- Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China.
3. Kroczek LOH, Mühlberger A. Neural mechanisms underlying the interactive exchange of facial emotional expressions. Soc Cogn Affect Neurosci 2025; 20:nsaf001. [PMID: 39821290 PMCID: PMC11781275 DOI: 10.1093/scan/nsaf001] [Received: 08/06/2024] [Revised: 11/14/2024] [Accepted: 01/09/2025] [Indexed: 01/19/2025]
Abstract
Facial emotional expressions are crucial in face-to-face social interactions, and recent findings have highlighted their interactive nature. However, the underlying neural mechanisms remain unclear. This electroencephalography study investigated whether the interactive exchange of facial expressions modulates socio-emotional processing. Participants (N = 41) displayed a facial emotional expression (angry, neutral, or happy) toward a virtual agent, and the agent then responded with a further emotional expression (angry or happy) or remained neutral (control condition). We assessed subjective experience (valence, arousal), facial EMG (Zygomaticus, Corrugator), and event-related potentials (EPN, LPP) elicited by the agent's response. Replicating previous findings, we found that an agent's happy facial expression was experienced as more pleasant and elicited increased Zygomaticus activity when participants had initiated the interaction with a happy compared to an angry expression. At the neural level, angry expressions resulted in a greater LPP than happy expressions, but only when participants directed an angry or happy, but not a neutral, expression at the agent. These findings suggest that sending an emotional expression increases salience and enhances the processing of received emotional expressions, indicating that an interactive setting alters brain responses to social stimuli.
Affiliation(s)
- Leon O H Kroczek
- Department of Psychology, Clinical Psychology and Psychotherapy, Regensburg University, Universitätsstraße 31, Regensburg 93053, Germany
- Andreas Mühlberger
- Department of Psychology, Clinical Psychology and Psychotherapy, Regensburg University, Universitätsstraße 31, Regensburg 93053, Germany
4. Balolia KL, Baughan K, Massey JS. Relative facial width, and its association with canine size and body mass among chimpanzees and bonobos: Implications for understanding facial width-to-height ratio expression among human populations. Am J Biol Anthropol 2025; 186:e25040. [PMID: 39529448 PMCID: PMC11775434 DOI: 10.1002/ajpa.25040] [Received: 03/19/2024] [Revised: 08/28/2024] [Accepted: 10/13/2024] [Indexed: 11/16/2024]
Abstract
OBJECTIVES Facial width-to-height ratio (fWHR) has been widely investigated in the context of its role in visual communication, though there is a lack of consensus about how fWHR serves as a social signal. To better understand fWHR variation in a comparative context, we investigate the associations of fWHR with canine crown height (CCH) and with body mass among two chimpanzee subspecies (Pan troglodytes schweinfurthii, Pan troglodytes troglodytes) and bonobos (Pan paniscus). MATERIALS AND METHODS We collected landmark data from 3D surface models of 86 Pan cranial specimens to quantify fWHR and upper CCH, and to estimate body mass. We used Spearman's r and Kruskal-Wallis tests to test for significant relationships among variables and to assess sexual dimorphism. RESULTS There is an inverse relationship between fWHR and CCH in both sexes of Pan; however, there are interpopulation differences in the relationship between fWHR and CCH among Pan taxa. Pan paniscus have relatively wide faces and small canine crowns, and wide faces in Pan t. schweinfurthii males may be driven by body size constraints. Pan troglodytes and Pan paniscus show fWHR dimorphism, and Pan paniscus have significantly higher fWHRs than either Pan troglodytes subspecies. DISCUSSION Our findings indicate that CCH and facial breadth may serve subtly different signaling functions among Pan taxa. Further research into the circumstances in which wide faces evolved among chimpanzees and bonobos will likely afford deeper insights into the function of relatively wide faces in the context of visual signaling among humans and our extinct hominin relatives.
Affiliation(s)
- Katharine L. Balolia
- School of Archaeology and Anthropology, Australian National University, Canberra, Australian Capital Territory, Australia
- Department of Anthropology, Durham University, Durham, UK
- Kieran Baughan
- School of Archaeology and Anthropology, Australian National University, Canberra, Australian Capital Territory, Australia
- Jason S. Massey
- Department of Anatomy and Developmental Biology, Monash University, Melbourne, Victoria, Australia
5. Weiß M, Paelecke M, Mussel P, Hein G. Neural dynamics of personality trait perception and interaction preferences. Sci Rep 2024; 14:30455. [PMID: 39668166 PMCID: PMC11638252 DOI: 10.1038/s41598-024-76423-9] [Received: 10/18/2022] [Accepted: 10/11/2024] [Indexed: 12/14/2024]
Abstract
According to recent research, self-reported Big Five personality traits are associated with preferences for faces that are representative of certain Big Five traits. Previous research has primarily focused on either preference for distinct prototypical personality faces or the accuracy of trait ratings for these faces. However, the underlying neural correlates involved in the processing of prototypical personality faces are unknown. In the present study, we aim to bridge this gap by investigating whether participants' Big Five personality traits predict preferences to interact with individuals represented by prototypical personality faces, as well as the neural processing of these facial features. Based on theoretical considerations and previous research, we focus on trait extraversion, agreeableness, and neuroticism, and corresponding prototypical faces. Participants were asked to classify prototypical faces as above or below average representative of a certain trait and then provide an interaction preference rating while face-sensitive event-related potentials (N170 and late positive potential) were measured. In line with our hypotheses, the results showed an interaction preference for faces that were perceived as high (vs. low) extraverted and agreeable and low (vs. high) neurotic. In addition, the preference for agreeable faces interacted with personality characteristics of the perceiver: the higher a person's score on trait agreeableness, the higher the face preference ratings for both prototypical and perceived high agreeable faces. Analyses of ERP data showed that an increase in preference ratings for prototypical agreeable faces was paralleled by an increase in the late positive potential. Notably, the N170 did not show any neural signature of the hypothesized effects of personality faces. Together, these results highlight the importance of considering both perceiver characteristics and perceived features of an interaction partner when it comes to preference for social interaction.
Protocol registration: The stage 1 protocol for this Registered Report was accepted in principle on 8 May 2023. The protocol, as accepted by the journal, can be found at: https://doi.org/10.17605/OSF.IO/G8SCY
Affiliation(s)
- Martin Weiß
- Center of Mental Health, Department of Psychiatry, Psychosomatic and Psychotherapy, Translational Social Neuroscience Unit, University Hospital Würzburg, Margarete-Höppel-Platz 1, 97080, Würzburg, Germany.
- Department of Psychology I: Clinical Psychology and Psychotherapy, Institute of Psychology, University of Würzburg, Würzburg, Germany.
- Marko Paelecke
- Department of Psychology V: Differential Psychology, Personality Psychology and Psychological Diagnostics, Institute of Psychology, University of Würzburg, Würzburg, Germany
- Patrick Mussel
- Division for Psychological Diagnostics and Differential Psychology, Psychologische Hochschule Berlin, Berlin, Germany
- Grit Hein
- Center of Mental Health, Department of Psychiatry, Psychosomatic and Psychotherapy, Translational Social Neuroscience Unit, University Hospital Würzburg, Margarete-Höppel-Platz 1, 97080, Würzburg, Germany
6. Wegrzyn M, Münst L, König J, Dinter M, Kissler J. Observer-generated maps of diagnostic facial features enable categorization and prediction of emotion expressions. Acta Psychol (Amst) 2024; 251:104569. [PMID: 39488877 DOI: 10.1016/j.actpsy.2024.104569] [Received: 05/31/2024] [Revised: 10/23/2024] [Accepted: 10/25/2024] [Indexed: 11/05/2024]
Abstract
According to one prominent model, facial expressions of emotion can be categorized as depicting happiness, disgust, anger, sadness, fear, and surprise. One open question is which facial features observers use to recognize the different expressions and whether the features indicated by observers can be used to predict which expression they saw. We created fine-grained maps of diagnostic facial features by asking participants to use mouse clicks to highlight those parts of a face that they deem useful for recognizing its expression. We tested how well the resulting maps align with models of emotion expressions (based on Action Units) and how the maps relate to the accuracy with which observers recognize full or partly masked faces. As expected, observers focused on the eye and mouth regions in all faces. However, each expression deviated from this global pattern in a unique way, allowing us to create maps of diagnostic face regions. Action Units considered most important for expressing an emotion were highlighted most often, indicating their psychological validity. The maps of facial features also allowed us to correctly predict which expression a participant had seen, with above-chance accuracies for all expressions. For happiness, fear, and anger, the face half that was highlighted the most was also the half whose visibility led to higher recognition accuracies. The results suggest that diagnostic facial features are distributed in unique patterns for each expression, which observers seem to intuitively extract and use when categorizing facial displays of emotion.
Affiliation(s)
- Martin Wegrzyn
- Department of Psychology, Bielefeld University, Bielefeld, Germany.
- Laura Münst
- Department of Psychology, Bielefeld University, Bielefeld, Germany
- Jessica König
- Department of Psychology, Bielefeld University, Bielefeld, Germany
- Johanna Kissler
- Department of Psychology, Bielefeld University, Bielefeld, Germany; Center for Cognitive Interaction Technology, Bielefeld, Germany
7. Härgestam M, Morian H, Lindgren L. Interprofessional team training via telemedicine in medical and nursing education. BMC Med Educ 2024; 24:1110. [PMID: 39379934 PMCID: PMC11463107 DOI: 10.1186/s12909-024-06104-8] [Received: 06/24/2024] [Accepted: 09/30/2024] [Indexed: 10/10/2024]
Abstract
BACKGROUND The use of information and communication technologies such as telemedicine has increased over the years, offering access to specialized healthcare even in remote locations. However, telemedicine in interprofessional team training is seldom included in medical or nursing programs, and little is known about how to practise these scenarios. This study aimed to explore how medical and nursing students experience teamwork when one team member participates remotely and digitally. METHODS Following interprofessional team training in which one team member participated remotely, focus group interviews were conducted with three teams, each comprising one medical student and two nursing students (n = 9 students in total). The focus group interviews were analysed with thematic content analysis. The Systems Engineering Initiative for Patient Safety model was applied as a theoretical framework and served as a lens in the analysis. RESULTS Three themes were identified in the analysis: challenging the dynamic of leadership, becoming familiar with a new setting, and finding new strategies to communicate. CONCLUSIONS The results of this study suggest that, as the use of telemedicine continues to grow, future physicians and nurses need to practise teamwork through telemedicine during their education.
Affiliation(s)
- Hanna Morian
- Department of Nursing, Umeå University, Umeå, Sweden
8. Becker C, Conduit R, Chouinard PA, Laycock R. Can deepfakes be used to study emotion perception? A comparison of dynamic face stimuli. Behav Res Methods 2024; 56:7674-7690. [PMID: 38834812 PMCID: PMC11362322 DOI: 10.3758/s13428-024-02443-y] [Accepted: 05/11/2024] [Indexed: 06/06/2024]
Abstract
Video recordings accurately capture facial expression movements; however, they are difficult for face perception researchers to standardise and manipulate. For this reason, dynamic morphs of photographs are often used, despite their lack of naturalistic facial motion. This study aimed to investigate how humans perceive emotions from faces using real videos and two different approaches to artificially generating dynamic expressions: dynamic morphs and AI-synthesised deepfakes. Our participants perceived dynamically morphed expressions as less intense than videos (all emotions) and deepfakes (fearful, happy, sad). Videos and deepfakes were perceived similarly. Additionally, participants perceived morphed happiness and sadness, but not morphed anger or fear, as less genuine than the other formats. Our findings support previous research indicating that social responses to morphed emotions are not representative of those to video recordings. The findings also suggest that deepfakes may offer a more suitable standardised stimulus type than morphs. Additionally, qualitative data were collected from participants and analysed using ChatGPT, a large language model. ChatGPT successfully identified themes in the data consistent with those identified by an independent human researcher. According to this analysis, our participants perceived dynamic morphs as less natural than videos and deepfakes. That participants perceived deepfakes and videos similarly suggests that deepfakes effectively replicate natural facial movements, making them a promising alternative for face perception research. The study contributes to the growing body of research exploring the usefulness of generative artificial intelligence for advancing the study of human perception.
9. Zhao Z, Yaoma K, Wu Y, Burns E, Sun M, Ying H. Other ethnicity effects in ensemble coding of facial expressions. Atten Percept Psychophys 2024; 86:2412-2423. [PMID: 38992322 DOI: 10.3758/s13414-024-02920-8] [Accepted: 05/22/2024] [Indexed: 07/13/2024]
Abstract
Cultural differences in ensemble emotion perception are an important research question, providing insights into the complexity of human cognition and social interaction. Here, we conducted two experiments to investigate how emotion perception is affected by other ethnicity effects and ensemble coding. In Experiment 1, groups of Asian and Caucasian participants were tasked with assessing the average emotion of faces from their own ethnic group, the other ethnic group, and mixed-ethnicity groups. Results revealed that participants exhibited relatively accurate yet amplified emotion perception of own-group faces, with a tendency to overestimate the weight of faces from the other ethnic group. In Experiment 2, Asian participants were instructed to discern the emotion of a target face surrounded by Caucasian and Asian faces. Results corroborated the earlier findings, indicating that while participants accurately perceived emotions in faces of their own ethnicity, their perception of Caucasian faces was noticeably influenced by the presence of surrounding Asian faces. These findings collectively support the notion that the other ethnicity effect stems from differential emotional amplification inherent in ensemble coding of emotion perception.
Affiliation(s)
- Zhenhua Zhao
- Department of Psychology, Soochow University, Suzhou, China
- Kelun Yaoma
- Department of Psychology, Soochow University, Suzhou, China
- Yujie Wu
- Department of Psychology, Soochow University, Suzhou, China
- Edwin Burns
- Department of Psychology, Swansea University, Swansea, United Kingdom
- Mengdan Sun
- Department of Psychology, Soochow University, Suzhou, China.
- Haojiang Ying
- Department of Psychology, Soochow University, Suzhou, China.
10. Yang M, Zhang L, Wei Z, Zhang P, Xu L, Huang L, Kendrick KM, Lei Y, Kou J. Neural and gaze pattern responses to happy faces in autism: Predictors of adaptive difficulties and re-evaluation of the social motivation hypothesis. Int J Clin Health Psychol 2024; 24:100527. [PMID: 39659956 PMCID: PMC11629545 DOI: 10.1016/j.ijchp.2024.100527] [Received: 06/12/2024] [Accepted: 11/19/2024] [Indexed: 12/12/2024]
Abstract
Background The social motivation hypothesis posits that social deficits in autism spectrum disorder (ASD) arise from altered reward perception. However, few studies have examined neural and behavioral responses to social reward-related cues in low-functioning children with ASD who have limited cognitive or language abilities. Objective This study investigated whether young children with ASD show atypical gaze towards happy faces and whether this is associated with altered brain reward responses. Methods Eye-tracking was performed in 36 ASD and 36 typically developing (TD) children (2.5-6 years) viewing happy faces of children or emoticons. Functional near-infrared spectroscopy was used simultaneously to record group differences in orbitofrontal cortex (OFC) activation. Results Children with ASD showed increased pupil diameter and OFC activation compared to TD children when viewing all happy faces, and gazed less at the eyes of actual faces and the mouths of emoticons. These atypical responses were associated with lower adaptive behavior scores and greater symptom severity. Conclusion Our research reveals distinct neural hyperactivity and viewing patterns in young children with ASD when presented with reward-related facial stimuli. These results contradict the social motivation hypothesis. Children with ASD exhibit heightened levels of arousal and employ less efficient facial processing strategies. This heightened demand for cognitive resources could have long-term effects on children's well-being and may hinder their ability to develop adaptive skills effectively.
Affiliation(s)
- Mengyuan Yang
- Institute of Brain and Psychological Sciences, Sichuan Normal University, 610066, China
- Lan Zhang
- Chengdu Women's and Children's Central Hospital, University of Electronic Science and Technology of China, Chengdu, 611731, China
- Zijie Wei
- Institute of Brain and Psychological Sciences, Sichuan Normal University, 610066, China
- Pingping Zhang
- School of Management and Economics, University of Electronic Science and Technology of China, Chengdu 611731, China
- Lei Xu
- Institute of Brain and Psychological Sciences, Sichuan Normal University, 610066, China
- Lihui Huang
- Institute of Brain and Psychological Sciences, Sichuan Normal University, 610066, China
- Keith M. Kendrick
- The Clinical Hospital of Chengdu Brain Science Institute, MOE Key Laboratory for Neuroinformation, University of Electronic Science and Technology of China, Chengdu 611731, China
- Yi Lei
- Institute of Brain and Psychological Sciences, Sichuan Normal University, 610066, China
- Juan Kou
- Institute of Brain and Psychological Sciences, Sichuan Normal University, 610066, China
11. Virgili G, Neill E, Enticott P, Castle D, Rossell SL. A systematic review of visual processing in body dysmorphic disorder (BDD). Psychiatry Res 2024; 339:116013. [PMID: 38924902 DOI: 10.1016/j.psychres.2024.116013] [Received: 02/26/2024] [Revised: 05/06/2024] [Accepted: 06/03/2024] [Indexed: 06/28/2024]
Abstract
Research on visual processing in body dysmorphic disorder (BDD) has been growing in an effort to understand the visual preponderance of perceived flaws in appearance that characterizes the disorder. Studies have focused on facial and other basic visual stimuli. The current literature does not provide evidence of consistent behavioural patterns, and it lacks an overarching body of work describing visual processing in BDD. This systematic review aims to characterise behavioural outcomes of visual processing anomalies and/or deficits in BDD. Articles were collected through the online databases MEDLINE and PubMed and were included if they comprised a clinical BDD group and were published after 1990. Results indicate that individuals with BDD demonstrate deficits in emotional face processing, a possible overreliance on detail processing, aberrant eye-scanning behaviours, and a tendency to overvalue attractiveness. While findings consistently signal visual deficits in BDD, there is a lack of clarity as to their type. This inconsistency may be attributed to heterogeneity within BDD samples and differences in experimental design (i.e., stimuli, tasks, conditions). There are also difficulties distinguishing between BDD-associated deficits and those associated with OCD or eating disorders. A coherent framework, including sample characterisation and task design, would help generate clear and consistent behavioural patterns to guide future treatments.
Affiliation(s)
- Gemma Virgili
- Centre for Mental Health, Faculty of Health, Arts & Design, Swinburne University of Technology, Hawthorn, VIC, Australia.
- Erica Neill
- Orygen, Centre for Youth Mental Health, University of Melbourne, VIC, Australia
- Peter Enticott
- Cognitive Neuroscience Unit, Faculty of Health, Deakin University, Burwood, VIC, Australia
- Susan Lee Rossell
- Centre for Mental Health, Faculty of Health, Arts & Design, Swinburne University of Technology, Hawthorn, VIC, Australia
12. Kroczek LOH, Lingnau A, Schwind V, Wolff C, Mühlberger A. Observers predict actions from facial emotional expressions during real-time social interactions. Behav Brain Res 2024; 471:115126. [PMID: 38950784 DOI: 10.1016/j.bbr.2024.115126] [Received: 10/30/2023] [Revised: 06/07/2024] [Accepted: 06/19/2024] [Indexed: 07/03/2024]
Abstract
In face-to-face social interactions, emotional expressions provide insights into the mental state of an interactive partner. This information can be crucial to infer action intentions and react to another person's actions. Here we investigate how facial emotional expressions impact subjective experience and physiological and behavioral responses to social actions during real-time interactions. Thirty-two participants interacted with virtual agents while fully immersed in Virtual Reality. Agents displayed an angry or happy facial expression before they directed an appetitive (fist bump) or aversive (punch) social action towards the participant. Participants responded to these actions, either by reciprocating the fist bump or by defending against the punch. For all interactions, subjective experience was measured using ratings. In addition, physiological responses (electrodermal activity, electrocardiogram) and participants' response times were recorded. Aversive actions were judged to be more arousing and less pleasant than appetitive actions. In addition, angry expressions increased heart rate relative to happy expressions. Crucially, interaction effects between facial emotional expression and action were observed. Angry expressions reduced pleasantness more strongly for appetitive than for aversive actions. Furthermore, skin conductance responses to aversive actions were increased for happy compared to angry expressions, and reaction times were faster for aversive than for appetitive actions when agents showed an angry expression. These results indicate that observers used facial emotional expressions to generate expectations about particular actions. Consequently, the present study demonstrates that observers integrate information from facial emotional expressions with actions during social interactions.
Affiliation(s)
- Leon O H Kroczek
- Department of Psychology, Clinical Psychology and Psychotherapy, University of Regensburg, Regensburg, Germany.
- Angelika Lingnau
- Department of Psychology, Cognitive Neuroscience, University of Regensburg, Regensburg, Germany
- Valentin Schwind
- Human Computer Interaction, University of Applied Sciences in Frankfurt a. M., Frankfurt a. M., Germany; Department of Media Informatics, University of Regensburg, Regensburg, Germany
- Christian Wolff
- Department of Media Informatics, University of Regensburg, Regensburg, Germany
| | - Andreas Mühlberger
- Department of Psychology, Clinical Psychology and Psychotherapy, University of Regensburg, Regensburg, Germany
| |
Collapse
|
13
Reinke P, Deneke L, Ocklenburg S. Asymmetries in event-related potentials part 1: A systematic review of face processing studies. Int J Psychophysiol 2024; 202:112386. [PMID: 38914138 DOI: 10.1016/j.ijpsycho.2024.112386] [Received: 03/08/2024] [Revised: 06/06/2024] [Accepted: 06/18/2024] [Indexed: 06/26/2024]
Abstract
The human brain shows distinct lateralized activation patterns for a range of cognitive processes. One such function, thought to be lateralized to the right hemisphere (RH), is human face processing. Its importance for social communication and interaction has led to a plethora of studies investigating face processing in health and disease. Temporally highly resolved methods, like event-related potentials (ERPs), allow for a detailed characterization of different processing stages and their specific lateralization patterns. This systematic review aimed at disentangling some of the contradictory findings regarding RH specialization in face processing, focusing on ERP research in healthy participants. Two databases were searched for studies that investigated electrodes over the left and right hemispheres while participants viewed (mostly neutral) facial stimuli. The included studies used a variety of tasks, ranging from passive viewing to memorizing faces. The final data selection highlights that the strongest lateralization to the RH was found for the N170, especially for right-handed young male participants. Left-handed, female, and older participants showed less consistent lateralization patterns. Other ERP components like the P1, P2, N2, P3, and N400 were overall less clearly lateralized. The current review highlights that many of the assumed lateralization patterns are less clear than previously thought and that the variety of stimuli, tasks, and EEG setups used might contribute to the ambiguous findings.
Affiliation(s)
- Petunia Reinke: Department of Psychology, MSH Medical School Hamburg, Hamburg, Germany; ICAN Institute for Cognitive and Affective Neuroscience, MSH Medical School Hamburg, Hamburg, Germany
- Lisa Deneke: Department of Psychology, MSH Medical School Hamburg, Hamburg, Germany
- Sebastian Ocklenburg: Department of Psychology, MSH Medical School Hamburg, Hamburg, Germany; ICAN Institute for Cognitive and Affective Neuroscience, MSH Medical School Hamburg, Hamburg, Germany; Institute of Cognitive Neuroscience, Biopsychology, Faculty of Psychology, Ruhr University Bochum, Bochum, Germany
14
Ventura M, Caffò AO, Manippa V, Rivolta D. Normative data of the Italian Famous Face Test. Sci Rep 2024; 14:15276. [PMID: 38961204 PMCID: PMC11222389 DOI: 10.1038/s41598-024-66252-1] [Received: 12/29/2023] [Accepted: 06/30/2024] [Indexed: 07/05/2024] Open
Abstract
The faces we see in daily life exist on a continuum of familiarity, ranging from personally familiar to famous to unfamiliar faces. Thus, when assessing face recognition abilities, adequate evaluation measures should be employed to discriminate between each of these processes and their relative impairments. Here we developed the Italian Famous Face Test (IT-FFT), a novel assessment tool for famous face recognition in typical and clinical populations. Normative data on a large sample (N = 436) of Italian individuals were collected, assessing both familiarity (d') and recognition accuracy. Furthermore, this study explored whether individuals possess insight into their overall face recognition skills by correlating the Prosopagnosia Index-20 (PI-20) with the IT-FFT; a negative correlation between these measures suggests that people have moderate insight into their face recognition skills. Overall, our study provides the first online-based Italian test for famous faces (IT-FFT), a test that could be used alongside standard tests of face recognition, complementing them by evaluating real-world face familiarity and thereby providing a more comprehensive assessment of face recognition abilities. Testing different aspects of face recognition is crucial for understanding both typical and atypical face recognition.
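The abstract reports familiarity as d', the signal-detection sensitivity index. As a minimal illustration of how such a score is computed (a sketch following the standard signal-detection definition, not the authors' scoring code; the cell counts in the usage note are invented):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index: d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell) keeps the
    z-transform finite when a raw rate would be 0 or 1.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)
```

For example, 34 hits and 6 misses on famous faces, with 8 false alarms and 32 correct rejections on unfamiliar foils, gives a d' of roughly 1.8.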
Affiliation(s)
- Martina Ventura: The MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Sydney, Australia; Department of Education, Psychology and Communication, University of Bari Aldo Moro, Bari, Italy
- Alessandro Oronzo Caffò: Department of Education, Psychology and Communication, University of Bari Aldo Moro, Bari, Italy
- Valerio Manippa: Department of Education, Psychology and Communication, University of Bari Aldo Moro, Bari, Italy
- Davide Rivolta: Department of Education, Psychology and Communication, University of Bari Aldo Moro, Bari, Italy
15
Holmer E, Rönnberg J, Asutay E, Tirado C, Ekberg M. Facial mimicry interference reduces working memory accuracy for facial emotion expressions. PLoS One 2024; 19:e0306113. [PMID: 38924006 PMCID: PMC11207140 DOI: 10.1371/journal.pone.0306113] [Received: 03/08/2023] [Accepted: 06/11/2024] [Indexed: 06/28/2024] Open
Abstract
Facial mimicry, the tendency to imitate the facial expressions of other individuals, has been shown to play a critical role in the processing of emotion expressions. At the same time, there is evidence suggesting that its role might change when the cognitive demands of the situation increase. In such situations, understanding another person depends on working memory. However, whether facial mimicry influences working memory representations of facial emotion expressions is not fully understood. In the present study, we experimentally interfered with facial mimicry using established behavioral procedures and investigated how this interference influenced working memory recall for facial emotion expressions. Healthy young adults (N = 36) performed an emotion expression n-back paradigm with two levels of working memory load, low (1-back) and high (2-back), and three levels of mimicry interference: high, low, and no interference. Results showed that, after controlling for block order and individual differences in the perceived valence and arousal of the stimuli, the high level of mimicry interference impaired accuracy when working memory load was low (1-back) but, unexpectedly, not when load was high (2-back). Working memory load had a detrimental effect on performance in all three mimicry conditions. We conclude that facial mimicry might support working memory for emotion expressions when task load is low, but that this supporting effect is possibly reduced when the task becomes more cognitively challenging.
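The n-back manipulation described above can be made concrete with a small sketch (illustrative only, not the authors' task code; the function name and example sequence are invented): a trial counts as a target when the expression shown matches the one presented n trials earlier, and each response is classified accordingly.

```python
def score_nback(sequence, responses, n):
    """Classify n-back trials: a trial is a target when the stimulus
    matches the stimulus presented n trials earlier."""
    counts = {"hits": 0, "misses": 0, "false_alarms": 0, "correct_rejections": 0}
    for i, stim in enumerate(sequence):
        if i < n:
            continue  # the first n trials have no n-back reference
        is_target = stim == sequence[i - n]
        responded = responses[i]
        if is_target and responded:
            counts["hits"] += 1
        elif is_target:
            counts["misses"] += 1
        elif responded:
            counts["false_alarms"] += 1
        else:
            counts["correct_rejections"] += 1
    return counts
```

With `n=2`, the sequence `["happy", "sad", "happy", "sad", "sad"]` contains two targets (trials 3 and 4), so a participant who responds only on trials 3 and 5 scores one hit, one miss, and one false alarm.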
Affiliation(s)
- Emil Holmer: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Linköping University, Linköping, Sweden
- Jerker Rönnberg: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Linköping University, Linköping, Sweden
- Erkin Asutay: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; JEDI Lab, Linköping University, Linköping, Sweden
- Carlos Tirado: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Mattias Ekberg: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
16
Kasahara S, Kumasaki N, Shimizu K. Investigating the impact of motion visual synchrony on self face recognition using real time morphing. Sci Rep 2024; 14:13090. [PMID: 38849381 PMCID: PMC11161490 DOI: 10.1038/s41598-024-63233-2] [Received: 10/27/2023] [Accepted: 05/27/2024] [Indexed: 06/09/2024] Open
Abstract
Face recognition is a crucial aspect of self-image and social interactions. Previous studies have focused on static images to explore the boundary of self-face recognition. Our research, however, investigates the dynamics of face recognition in contexts involving motor-visual synchrony. We first validated our morphing face metrics for self-face recognition. We then conducted an experiment using state-of-the-art video processing techniques for real-time face identity morphing during facial movement. We examined self-face recognition boundaries under three conditions: synchronous, asynchronous, and static facial movements. Our findings revealed that participants recognized a narrower self-face boundary with moving facial images compared to static ones, with no significant differences between synchronous and asynchronous movements. The direction of morphing consistently biased the recognized self-face boundary. These results suggest that while motor information of the face is vital for self-face recognition, it does not rely on movement synchronization, and the sense of agency over facial movements does not affect facial identity judgment. Our methodology offers a new approach to exploring the 'self-face boundary in action', allowing for an independent examination of motion and identity.
Affiliation(s)
- Shunichi Kasahara: Sony Computer Science Laboratories, Inc., Tokyo, 141-0022, Japan; Okinawa Institute of Science and Technology Graduate University, Okinawa, 904-0412, Japan
- Nanako Kumasaki: Sony Computer Science Laboratories, Inc., Tokyo, 141-0022, Japan
- Kye Shimizu: Sony Computer Science Laboratories, Inc., Tokyo, 141-0022, Japan
17
Hsu FY, Hsiao YC, Su YJ, Chang CS, Yen CI. A prospective study of psychological adjustment during and after forehead flap nasal reconstruction. J Craniomaxillofac Surg 2024; 52:692-696. [PMID: 38729846 DOI: 10.1016/j.jcms.2024.03.010] [Received: 11/14/2023] [Revised: 12/14/2023] [Accepted: 03/12/2024] [Indexed: 05/12/2024] Open
Abstract
The psychological effects of staged nasal reconstruction with a forehead flap were prospectively investigated. Thirty-three patients underwent nasal reconstruction with forehead flaps between March 2017 and July 2020. Three questionnaires were used to assess psychosocial functioning before surgery (time 1), 1 week after forehead flap transfer (time 2), 1 week after forehead flap division (time 3), and after refinement procedures (time 4). The patients were categorized into three groups according to the severity of their nasal defects. Between- and within-group comparisons were conducted. All patients reported increased satisfaction with their appearance during nasal reconstruction. For most patients, levels of distress and social avoidance were highest before reconstruction (time 1). Both levels decreased as reconstruction advanced and were significantly improved by times 3 and 4. The stage of reconstruction had a greater effect on these levels than did the severity of the nasal defect. Nasal reconstruction with a forehead flap is beneficial both physically and psychologically. Psychological evaluation before and after surgery facilitates patient-surgeon interactions and further enhances outcomes.
Affiliation(s)
- Fang-Yu Hsu: Department of Plastic and Reconstructive Surgery, Chang Gung University, Chiayi, Taiwan
- Yi-Jen Su: Graduate Institute of Behavioral Sciences, Chang Gung University and Department of Psychiatry, Linkou Chang Gung Memorial Hospital, Taoyuan, Taiwan
- Cheng-I Yen: Department of Plastic and Reconstructive Surgery, Aesthetic Medical Center of Chang Gung Memorial Hospital, College of Medicine, Chang Gung University, Taipei, Taiwan
18
Drummond J, Makdani A, Pawling R, Walker SC. Congenital Anosmia and Facial Emotion Recognition. Physiol Behav 2024; 278:114519. [PMID: 38490365 DOI: 10.1016/j.physbeh.2024.114519] [Received: 02/20/2024] [Revised: 03/11/2024] [Accepted: 03/13/2024] [Indexed: 03/17/2024]
Abstract
Major functions of the olfactory system include guiding ingestion and avoidance of environmental hazards. People with anosmia report reliance on others, for example to check the edibility of food, as their primary coping strategy. Facial expressions are a major source of non-verbal social information that can be used to guide approach and avoidance behaviour. Thus, it is of interest to explore whether a life-long absence of the sense of smell heightens sensitivity to others' facial emotions, particularly those depicting threat. In the present online study, 28 people with congenital anosmia (mean age 43.46) and 24 people reporting no olfactory dysfunction (mean age 42.75) completed a facial emotion recognition task whereby emotionally neutral faces (6 different identities) morphed, over 40 stages, to express one of 5 basic emotions: anger, disgust, fear, happiness, or sadness. Results showed that, while the groups did not differ in their ability to identify the final, full-strength emotional expressions, nor in the accuracy of their first response, the congenital anosmia group successfully identified the emotions at significantly lower intensity (i.e. an earlier stage of the morph) than the control group. Exploratory analysis showed this main effect was primarily driven by an advantage in detecting anger and disgust. These findings indicate that the absence of a functioning sense of smell during development leads to compensatory changes in visual social cognition. Future work should explore the neural and behavioural basis for this advantage.
Affiliation(s)
- James Drummond: Research Centre for Brain & Behaviour, School of Psychology, Faculty of Health, Liverpool John Moores University, Liverpool, UK
- Adarsh Makdani: Research Centre for Brain & Behaviour, School of Psychology, Faculty of Health, Liverpool John Moores University, Liverpool, UK
- Ralph Pawling: Research Centre for Brain & Behaviour, School of Psychology, Faculty of Health, Liverpool John Moores University, Liverpool, UK
- Susannah C Walker: Research Centre for Brain & Behaviour, School of Psychology, Faculty of Health, Liverpool John Moores University, Liverpool, UK
19
Wang Y, Luo Q, Zhang Y, Zhao K. Synchrony or asynchrony: development of facial expression recognition from childhood to adolescence based on large-scale evidence. Front Psychol 2024; 15:1379652. [PMID: 38725946 PMCID: PMC11079229 DOI: 10.3389/fpsyg.2024.1379652] [Received: 01/31/2024] [Accepted: 04/09/2024] [Indexed: 05/12/2024] Open
Abstract
The development of facial expression recognition ability in children is crucial for their emotional cognition and social interactions. In this study, 510 children aged between 6 and 15 completed a two-alternative forced-choice facial expression recognition task. The findings support that recognition of the six basic facial expressions reaches a relatively stable, mature level around 8-9 years of age. Additionally, model fitting results indicated that children showed the most pronounced improvement in recognizing expressions of disgust, closely followed by fear. Conversely, recognition of expressions of happiness and sadness improved more slowly across age groups. Regarding gender differences, girls exhibited a more pronounced advantage. Further model fitting revealed that boys showed more pronounced improvements in recognizing expressions of disgust, fear, and anger, while girls showed more pronounced improvements in recognizing expressions of surprise, sadness, and happiness. These findings suggest a synchronous developmental trajectory of facial expression recognition from childhood to adolescence, likely influenced by socialization processes and interactions related to brain maturation.
Affiliation(s)
- Yihan Wang: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Qian Luo: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yuanmeng Zhang: College of Letters and Science, University of California, Berkeley, Berkeley, CA, United States
- Ke Zhao: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
20
Sexton CL, Diogo R, Subiaul F, Bradley BJ. Raising an Eye at Facial Muscle Morphology in Canids. Biology 2024; 13:290. [PMID: 38785773 PMCID: PMC11118188 DOI: 10.3390/biology13050290] [Received: 03/08/2024] [Revised: 04/13/2024] [Accepted: 04/24/2024] [Indexed: 05/25/2024]
Abstract
The evolution of facial muscles in dogs has been linked to human preferential selection of dogs whose faces appear to communicate information and emotion. Dogs who convey, especially with their eyes, a sense of perceived helplessness can elicit a caregiving response from humans. However, the facial muscles used to generate such expressions may not be uniquely present in all dogs, but rather specifically cultivated among various taxa and individuals. In a preliminary, qualitative gross anatomical evaluation of 10 canid specimens of various species, we find that the presence of two facial muscles previously implicated in human-directed canine communication, the levator anguli oculi medialis (LAOM) and the retractor anguli oculi lateralis (RAOL), was not unique to domesticated dogs (Canis familiaris). Our results suggest that these aspects of facial musculature do not necessarily reflect selection via human domestication and breeding. In addition to quantitatively evaluating more and other members of the Canidae family, future directions should include analyses of the impact of superficial facial features on canine communication and on interspecies communication between dogs and humans.
Affiliation(s)
- Courtney L. Sexton: Department of Population Health Sciences, Virginia-Maryland College of Veterinary Medicine, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061, USA; Center for the Advanced Study of Human Paleobiology, Department of Anthropology, The George Washington University, Washington, DC 20052, USA
- Rui Diogo: Department of Anatomy, Howard University School of Medicine, Washington, DC 20059, USA
- Francys Subiaul: Center for the Advanced Study of Human Paleobiology, Department of Anthropology, The George Washington University, Washington, DC 20052, USA; Department of Speech, Language and Hearing Sciences, The George Washington University, Washington, DC 20052, USA
- Brenda J. Bradley: Center for the Advanced Study of Human Paleobiology, Department of Anthropology, The George Washington University, Washington, DC 20052, USA
21
Wang G, Ma L, Wang L, Pang W. Independence Threat or Interdependence Threat? The Focusing Effect on Social or Physical Threat Modulates Brain Activity. Brain Sci 2024; 14:368. [PMID: 38672018 PMCID: PMC11047893 DOI: 10.3390/brainsci14040368] [Received: 03/10/2024] [Revised: 04/04/2024] [Accepted: 04/04/2024] [Indexed: 04/28/2024] Open
Abstract
OBJECTIVE The neural basis of threat perception has mostly been examined separately for social and physical threats. However, most threats encountered in everyday life are complex, and the characteristics of interactions between social and physical threats under different attentional conditions are unclear. METHOD The present study explores this issue using an attention-guided paradigm based on ERP techniques. Social threats (face threats) and physical threats (action threats) were displayed on screen, and participants were instructed to attend to only one type of threat, allowing the corresponding brain activation characteristics to be explored. RESULTS Action threats did not affect the processing of face threats in the face-attention condition: the electrophysiological evidence was comparable to that observed when face threats are processed alone, with higher amplitudes of the N170 and EPN (Early Posterior Negativity) components for angry than for neutral expressions. In the action-attention condition, however, processing was affected by face threats, as evidenced by a greater N190 elicited by stimuli containing threatening facial emotions, regardless of whether the action itself was threatening. This trend was also reflected in the EPN. CONCLUSIONS The current study reveals important similarities and differences between physical and social threats, suggesting that the brain has a greater processing advantage for social threats.
Affiliation(s)
- Guan Wang: The School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China; School of Education Science, Huaiyin Normal University, Huaian 223300, China
- Lian Ma: School of Computer Science and Technology, Huaiyin Normal University, Huaian 223300, China
- Lili Wang: School of Education Science, Huaiyin Normal University, Huaian 223300, China
- Weiguo Pang: The School of Psychology and Cognitive Science, East China Normal University, Shanghai 200062, China
22
Jeong T, Alessandri-Bonetti M, Liu H, Pandya S, Stofman GM, Egro FM. Fourteen-Year Experience in Burn Eyelid Reconstruction and Complications Recurrence: A Retrospective Cohort Study. Ann Plast Surg 2024; 92:S146-S149. [PMID: 38556664 DOI: 10.1097/sap.0000000000003848] [Indexed: 04/02/2024]
Abstract
BACKGROUND Loss of vision and other ocular defects are a concern with eyelid burn sequelae, which most commonly progress from eyelid contracture to cicatricial ectropion and lagophthalmos. When left untreated, these may lead to exposure keratitis, ulceration, infection, perforation, and loss of vision. In the case of full-thickness eyelid burns, release and grafting are required. However, despite the concern for permanent ocular damage or loss of vision, there is a paucity of studies on outcomes of eyelid burn surgery. The aim of this study is to describe the complication rates of burn eyelid reconstruction at a single center over a 14-year period. METHODS A retrospective cohort study was performed of all patients who sustained eyelid burns and required reconstruction between April 2009 and February 2023. Medical records were obtained from patients' charts. Collected data included demographics, medical history, type of injury, indication for surgery, procedures performed, and complications. RESULTS Of the 901 patients with burn-related injuries requiring plastic surgery reconstruction, 14 patients (25 eyelids) underwent eyelid reconstruction. These patients underwent 54 eyelid surgeries with a mean follow-up time of 13.1 ± 17.1 months. Patients were 71% men and 29% women, with a mean age of 45.1 ± 15.6 years. In 53.7% (n = 29) of the cases, simultaneous reconstruction of both the upper and lower eyelids was necessary; reconstruction of the upper or lower eyelid alone represented a smaller percentage (25.9% and 20.4%, respectively). On average, patients received 3.9 ± 3.5 eyelid surgeries. The overall complication rate was 53.7% (n = 29). The most common complication was ectropion (42.6%, n = 23). Other complications included eye injury (25.9%, n = 14), lagophthalmos (24.1%, n = 13), local infection (7.4%, n = 4), and graft loss (5.6%, n = 3).
CONCLUSION Periorbital burns represent a major challenge that may require complex surgical intervention. Full-thickness skin graft remains the standard of care for patients with eyelid burns. However, there is a high incidence of ectropion that may require reoperation. Further studies examining the conditions of successful eyelid burn procedures may provide guidance on when patients may benefit from eyelid reconstruction during their burn treatment.
23
Chang CH, Drobotenko N, Ruocco AC, Lee ACH, Nestor A. Perception and memory-based representations of facial emotions: Associations with personality functioning, affective states and recognition abilities. Cognition 2024; 245:105724. [PMID: 38266352 DOI: 10.1016/j.cognition.2024.105724] [Received: 06/03/2023] [Revised: 11/09/2023] [Accepted: 01/15/2024] [Indexed: 01/26/2024]
Abstract
Personality traits and affective states are associated with biases in facial emotion perception. However, the precise personality impairments and affective states that underlie these biases remain largely unknown. To investigate how relevant factors influence facial emotion perception and recollection, Experiment 1 employed an image reconstruction approach in which community-dwelling adults (N = 89) rated the similarity of pairs of facial expressions, including those recalled from memory. Subsequently, perception- and memory-based expression representations derived from such ratings were assessed across participants and related to measures of personality impairment, state affect, and visual recognition abilities. Impairment in self-direction and level of positive affect accounted for the largest components of individual variability in perception and memory representations, respectively. Additionally, individual differences in these representations were impacted by face recognition ability. In Experiment 2, adult participants (N = 81) rated facial image reconstructions derived in Experiment 1, revealing that individual variability was associated with specific visual face properties, such as expressiveness, representation accuracy, and positivity/negativity. These findings highlight and clarify the influence of personality, affective state, and recognition abilities on individual differences in the perception and recollection of facial expressions.
Affiliation(s)
- Chi-Hsun Chang: Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada
- Natalia Drobotenko: Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada
- Anthony C Ruocco: Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada; Department of Psychological Clinical Science at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada
- Andy C H Lee: Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada; Rotman Research Institute, Baycrest Centre, 3560 Bathurst St, North York, Ontario M6A 2E1, Canada
- Adrian Nestor: Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada
24
Yu L, Wang W, Li Z, Ren Y, Liu J, Jiao L, Xu Q. Alexithymia modulates emotion concept activation during facial expression processing. Cereb Cortex 2024; 34:bhae071. [PMID: 38466112 DOI: 10.1093/cercor/bhae071] [Received: 11/11/2023] [Revised: 01/23/2024] [Accepted: 02/06/2024] [Indexed: 03/12/2024] Open
Abstract
Alexithymia is characterized by difficulties in emotional information processing. However, the underlying reasons for emotional processing deficits in alexithymia are not fully understood. The present study aimed to investigate the mechanism underlying emotional deficits in alexithymia. Using the Toronto Alexithymia Scale-20, we recruited college students with high alexithymia (n = 24) or low alexithymia (n = 24). Participants judged the emotional consistency of facial expressions and contextual sentences while their event-related potentials were recorded. Behaviorally, the high alexithymia group showed longer response times than the low alexithymia group in processing facial expressions. The event-related potential results showed that the high alexithymia group had more negative-going N400 amplitudes than the low alexithymia group in the incongruent condition. More negative N400 amplitudes were also associated with slower responses to facial expressions. Furthermore, machine learning analyses based on N400 amplitudes could distinguish the high alexithymia group from the low alexithymia group in the incongruent condition. Overall, these findings suggest worse facial emotion perception in the high alexithymia group, potentially due to difficulty in spontaneously activating emotion concepts. Our findings have important implications for the affective science and clinical intervention of alexithymia-related affective disorders.
Affiliation(s)
- Linwei Yu: Department of Psychology, Ningbo University, Ningbo 315211, China
- Weihan Wang: Department of Psychology, Ningbo University, Ningbo 315211, China
- Zhiwei Li: Department of Psychology, Ningbo University, Ningbo 315211, China
- Yi Ren: Department of Psychology, Ningbo University, Ningbo 315211, China
- Jiabin Liu: Beijing Key Laboratory of Applied Experimental Psychology, National Demonstration Center for Experimental Psychology Education (Beijing Normal University), Faculty of Psychology, Beijing Normal University, Beijing 100875, China
- Lan Jiao: Department of Psychology, Ningbo University, Ningbo 315211, China
- Qiang Xu: Department of Psychology, Ningbo University, Ningbo 315211, China
25
Bagnis A, Colonnello V, Russo PM, Mattarozzi K. Facial trustworthiness dampens own-gender bias in emotion recognition. Psychol Res 2024; 88:458-465. [PMID: 37558932 PMCID: PMC10858080 DOI: 10.1007/s00426-023-01864-2] [Received: 10/13/2022] [Accepted: 07/30/2023] [Indexed: 08/11/2023]
Abstract
Previous research suggests that emotion recognition is influenced by social categories derived from invariant facial features, such as gender, and by inferences of trustworthiness from facial appearance. The current study sought to replicate and extend these findings by examining how these social categories intersect in the recognition of emotional facial expressions. We used a dynamic emotion recognition task to assess accuracy and response times in categorizing happiness and anger displayed by female and male faces that differed in facial trustworthiness (i.e., trustworthy- vs. untrustworthy-looking faces). We found that facial trustworthiness modulated the own-gender bias in emotion recognition: responses to untrustworthy-looking faces revealed a bias towards ingroup members, whereas for trustworthy-looking faces no differences in emotion recognition between female and male faces emerged. In addition, positive inferences of trustworthiness led to faster recognition of happiness in female faces and anger in male faces, showing that facial appearance also influenced the intersection between social categories and specific emotional expressions. Together, these results suggest that facial appearance, probably through the activation of approach or avoidance motivational systems, can modulate the own-gender bias in emotion recognition.
Affiliation(s)
- Arianna Bagnis
- Department of Medical and Surgical Sciences, University of Bologna, Sant'Orsola Hospital, Pad. 21, Bologna, Italy.
- Valentina Colonnello
- Department of Medical and Surgical Sciences, University of Bologna, Sant'Orsola Hospital, Pad. 21, Bologna, Italy
- Paolo Maria Russo
- Department of Medical and Surgical Sciences, University of Bologna, Sant'Orsola Hospital, Pad. 21, Bologna, Italy
- Katia Mattarozzi
- Department of Medical and Surgical Sciences, University of Bologna, Sant'Orsola Hospital, Pad. 21, Bologna, Italy
26
Duque A, Picado G, Salgado G, Salgado A, Palacios B, Chaves C. Validation of the Edited Tromsø Infant Faces Database (E-TIF): A study on differences in the processing of children's emotional expressions. Behav Res Methods 2024; 56:2507-2518. [PMID: 37369938 PMCID: PMC10991014 DOI: 10.3758/s13428-023-02163-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 06/05/2023] [Indexed: 06/29/2023]
Abstract
Images of emotional facial expressions are often used in emotion research, which has promoted the development of different databases. However, most of these standardized sets of images do not include images from infants under 2 years of age, which is relevant for psychology research, especially for perinatal psychology. The present study aims to validate the edited version of the Tromsø Infant Faces Database (E-TIF) in a large sample of participants. The original set of 119 pictures was edited. The pictures were cropped to remove nonrelevant information, fitted in an oval window, and converted to grayscale. Four hundred and eighty participants (72.9% women) took part in the study, rating the images on five dimensions: depicted emotion, clarity, intensity, valence, and genuineness. Valence scores were useful for discriminating between positive, negative, and neutral facial expressions. Results revealed that women were more accurate at recognizing emotions in children. Regarding parental status, parents, in comparison with nonparents, rated neutral expressions as more intense and genuine. They also rated sad, angry, disgusted, and fearful faces as less negative, and happy expressions as less positive. The editing and validation of the E-TIF database offers a useful tool for basic and experimental research in psychology.
Affiliation(s)
- Almudena Duque
- Facultad de Psicología, Universidad Pontificia de Salamanca, C/ Compañía 5, 37002, Salamanca, Spain
- Gonzalo Picado
- Facultad de Psicología, Universidad Pontificia de Salamanca, C/ Compañía 5, 37002, Salamanca, Spain
- Gloria Salgado
- Facultad de Psicología, Universidad Complutense de Madrid, Campus de Somosaguas s/n, 28223, Pozuelo de Alarcón, Spain
- Alfonso Salgado
- Facultad de Psicología, Universidad Pontificia de Salamanca, C/ Compañía 5, 37002, Salamanca, Spain
- Beatriz Palacios
- Facultad de Psicología, Universidad Pontificia de Salamanca, C/ Compañía 5, 37002, Salamanca, Spain
- Covadonga Chaves
- Facultad de Psicología, Universidad Complutense de Madrid, Campus de Somosaguas s/n, 28223, Pozuelo de Alarcón, Spain.
27
Chen C, Messinger DS, Chen C, Yan H, Duan Y, Ince RAA, Garrod OGB, Schyns PG, Jack RE. Cultural facial expressions dynamically convey emotion category and intensity information. Curr Biol 2024; 34:213-223.e5. [PMID: 38141619 PMCID: PMC10831323 DOI: 10.1016/j.cub.2023.12.001] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2023] [Revised: 10/27/2023] [Accepted: 12/01/2023] [Indexed: 12/25/2023]
Abstract
Communicating emotional intensity plays a vital ecological role because it provides valuable information about the nature and likelihood of the sender's behavior.1,2,3 For example, attack often follows signals of intense aggression if receivers fail to retreat.4,5 Humans regularly use facial expressions to communicate such information.6,7,8,9,10,11 Yet how this complex signaling task is achieved remains unknown. We addressed this question using a perception-based, data-driven method to mathematically model the specific facial movements that receivers use to classify the six basic emotions ("happy," "surprise," "fear," "disgust," "anger," and "sad") and to judge their intensity in two distinct cultures (East Asian, Western European; total n = 120). In both cultures, receivers expected facial expressions to dynamically represent emotion category and intensity information over time, using a multi-component compositional signaling structure. Specifically, emotion intensifiers peaked earlier or later than emotion classifiers and represented intensity using amplitude variations. Emotion intensifiers were also more similar across emotions than classifiers were, suggesting a latent broad-plus-specific signaling structure. Cross-cultural analysis further revealed similarities and differences in expectations that could impact cross-cultural communication. Specifically, East Asian and Western European receivers had similar expectations about which facial movements represent high intensity for threat-related emotions, such as "anger," "disgust," and "fear," but differed on those that represent low-threat emotions, such as happiness and sadness. Together, our results provide new insights into the intricate processes by which facial expressions achieve complex dynamic signaling tasks by revealing the rich information embedded in them.
Affiliation(s)
- Chaona Chen
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK.
- Daniel S Messinger
- Departments of Psychology, Pediatrics, and Electrical & Computer Engineering, University of Miami, 5665 Ponce De Leon Blvd, Coral Gables, FL 33146, USA
- Cheng Chen
- Foreign Language Department, Teaching Centre for General Courses, Chengdu Medical College, 601 Tianhui Street, Chengdu 610083, China
- Hongmei Yan
- The MOE Key Lab for Neuroinformation, University of Electronic Science and Technology of China, North Jianshe Road, Chengdu 611731, China
- Yaocong Duan
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Robin A A Ince
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Oliver G B Garrod
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Philippe G Schyns
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
- Rachael E Jack
- School of Psychology and Neuroscience, University of Glasgow, 62 Hillhead Street, Glasgow G12 8QB, Scotland, UK
28
Lévesque-Lacasse A, Desjardins MC, Fiset D, Charbonneau C, Cormier S, Blais C. The Relationship Between the Ability to Infer Another's Pain and the Expectations Regarding the Appearance of Pain Facial Expressions: Investigation of the Role of Visual Perception. THE JOURNAL OF PAIN 2024; 25:250-264. [PMID: 37604362 DOI: 10.1016/j.jpain.2023.08.007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/23/2022] [Revised: 08/01/2023] [Accepted: 08/12/2023] [Indexed: 08/23/2023]
Abstract
Although pain is a commonly experienced and observed affective state, it is frequently misinterpreted, which leads to inadequate caregiving. Studies show that the ability to estimate pain in others (estimation bias) and the ability to detect its subtle variations (sensitivity) could emerge from independent mechanisms. While estimation bias is modulated by variables such as empathy level, pain catastrophizing tendency, and overexposure to pain, sensitivity remains unaffected. The present study verified whether these two types of inaccuracies are partly explained by perceptual factors. Using reverse correlation, we measured their association with participants' mental representation of pain or, more simply put, with their expectations of what the face of a person in pain should look like. Experiment 1 shows that both parameters are associated with variations in expectations of this expression: the estimation bias is linked with expectations characterized by salient changes in the middle face region, whereas sensitivity is associated with salient changes in the eyebrow region. Experiment 2 reveals that bias and sensitivity yield differences in emotional representations. Expectations of individuals with a lower underestimation tendency were qualitatively rated as expressing more pain and sadness, and those of individuals with higher sensitivity as expressing more pain, anger, and disgust. Together, these results provide evidence for a perceptual contribution to pain inferencing that is independent of other psychosocial variables, and for its link to observers' expectations. PERSPECTIVE: This article reinforces the contribution of perceptual mechanisms to pain assessment. Strategies aimed at improving the reliability of individuals' expectations regarding the appearance of facial expressions of pain could potentially be developed, helping to decrease inaccuracies in pain assessment and the confusion between pain and other affective states.
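The study above uses reverse correlation to estimate observers' mental representations of pain expressions. As a rough illustration of that technique (not the authors' code), the sketch below computes a toy "classification image" by contrasting the average noise fields of trials that a simulated observer labels "pain" versus "no pain". The image size, the simulated observer, and all names are illustrative assumptions.

```python
# Illustrative sketch (not the authors' code): the core of a reverse-correlation
# analysis. Random noise fields are added to a base image; the "classification
# image" is the mean noise on trials judged "pain" minus the mean noise on
# trials judged "no pain", approximating the observer's internal template.
import random

random.seed(0)
SIZE = 8                      # toy 8x8 "image"; real stimuli are far larger
TEMPLATE_PIXEL = (2, 3)       # hypothetical pixel the simulated observer relies on

def noise_field():
    return [[random.gauss(0.0, 1.0) for _ in range(SIZE)] for _ in range(SIZE)]

def simulated_response(noise):
    # The simulated observer says "pain" when the template pixel is brightened.
    return noise[TEMPLATE_PIXEL[0]][TEMPLATE_PIXEL[1]] > 0

def classification_image(n_trials=4000):
    pain, no_pain = [], []
    for _ in range(n_trials):
        noise = noise_field()
        (pain if simulated_response(noise) else no_pain).append(noise)
    def mean(stack):
        return [[sum(m[r][c] for m in stack) / len(stack) for c in range(SIZE)]
                for r in range(SIZE)]
    mp, mn = mean(pain), mean(no_pain)
    return [[mp[r][c] - mn[r][c] for c in range(SIZE)] for r in range(SIZE)]

ci = classification_image()
# The diagnostic pixel should dominate the classification image.
peak = max((abs(ci[r][c]), (r, c)) for r in range(SIZE) for c in range(SIZE))[1]
print(peak)
```

The recovered peak coincides with the pixel the simulated observer actually used, which is the logic by which classification images reveal which facial regions drive observers' judgments.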
Affiliation(s)
- Alexandra Lévesque-Lacasse
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, Québec, Canada
- Marie-Claude Desjardins
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, Québec, Canada
- Daniel Fiset
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, Québec, Canada
- Carine Charbonneau
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, Québec, Canada
- Stéphanie Cormier
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, Québec, Canada
- Caroline Blais
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, Gatineau, Québec, Canada
29
Leung TS, Zeng G, Maylott SE, Martinez SN, Jakobsen KV, Simpson EA. Infection detection in faces: Children's development of pathogen avoidance. Child Dev 2024; 95:e35-e46. [PMID: 37589080 DOI: 10.1111/cdev.13983] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2022] [Revised: 06/20/2023] [Accepted: 07/05/2023] [Indexed: 08/18/2023]
Abstract
This study examined the development of children's avoidance and recognition of sickness using face photos from people with natural, acute, contagious illness. In a U.S. sample of fifty-seven 4- to 5-year-olds (46% male, 70% White), fifty-two 8- to 9-year-olds (26% male, 62% White), and 51 adults (59% male, 61% White), children and adults avoided and recognized sick faces (ds ranged from 0.38 to 2.26). Both avoidance and recognition improved with age. Interestingly, 4- to 5-year-olds' avoidance of sick faces positively correlated with their recognition, suggesting stable individual differences in these emerging skills. Together, these findings are consistent with a hypothesized immature but functioning and flexible behavioral immune system emerging early in development. Characterizing children's sickness perception may help design interventions to improve health.
Affiliation(s)
- Tiffany S Leung
- Department of Psychology, University of Miami, Coral Gables, Florida, USA
- Guangyu Zeng
- Department of Psychology, University of Miami, Coral Gables, Florida, USA
- Division of Applied Psychology, The Chinese University of Hong Kong, Shenzhen, China
- Sarah E Maylott
- Department of Psychiatry & Behavioral Sciences, Duke University, Durham, North Carolina, USA
30
Li Z, Lu H, Liu D, Yu ANC, Gendron M. Emotional event perception is related to lexical complexity and emotion knowledge. COMMUNICATIONS PSYCHOLOGY 2023; 1:45. [PMID: 39242918 PMCID: PMC11332234 DOI: 10.1038/s44271-023-00039-4] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/18/2023] [Accepted: 11/23/2023] [Indexed: 09/09/2024]
Abstract
Inferring emotion is a critical skill that supports social functioning. Emotion inferences are typically studied in simplified paradigms that ask people to categorize isolated and static cues, such as frowning faces, yet emotions are complex events that unfold over time. Here, across three samples (Study 1 N = 222; Study 2 N = 261; Study 3 N = 101), we present the Emotion Segmentation Paradigm, which examines inferences about complex emotional events by extending cognitive paradigms of event perception. Participants indicated when the emotions of target individuals changed within continuous streams of activity in narrative film (Study 1) and documentary clips (Study 2, preregistered, and Study 3 test-retest sample). The Emotion Segmentation Paradigm revealed robust and reliable individual differences across multiple metrics. We also tested the constructionist prediction that emotion labels constrain emotion inference, traditionally studied by introducing emotion labels into the task, and demonstrate that individual differences in active emotion vocabulary (i.e., readily accessible emotion words) correlate with emotion segmentation performance.
Affiliation(s)
- Zhimeng Li
- Department of Psychology, Yale University, New Haven, Connecticut, USA.
- Hanxiao Lu
- Department of Psychology, New York University, New York, NY, USA
- Di Liu
- Department of Psychology, Johns Hopkins University, Baltimore, MD, USA
- Alessandra N C Yu
- Nash Family Department of Neuroscience, Icahn School of Medicine at Mount Sinai, New York, NY, USA
- Maria Gendron
- Department of Psychology, Yale University, New Haven, Connecticut, USA.
31
Pitcher D. Visual neuroscience: A specialised neural pathway for social perception. Curr Biol 2023; 33:R1222-R1224. [PMID: 38052168 DOI: 10.1016/j.cub.2023.10.020] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/07/2023]
Abstract
Humans are an intensely social species. Our daily lives depend on understanding the behaviour and intentions of the people around us. A new study identifies a neural pathway specialised for interpreting the physical actions that we use to understand others.
Affiliation(s)
- David Pitcher
- Department of Psychology, University of York, Heslington, York YO10 5DD, UK.
32
Kho SK, Keeble D, Wong HK, Estudillo AJ. Null effect of anodal and cathodal transcranial direct current stimulation (tDCS) on own- and other-race face recognition. Soc Neurosci 2023; 18:393-406. [PMID: 37840302 DOI: 10.1080/17470919.2023.2263924] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/08/2023] [Indexed: 10/17/2023]
Abstract
Successful face recognition is important for social interactions and public security. Although some preliminary evidence suggests that anodal and cathodal transcranial direct current stimulation (tDCS) might modulate own- and other-race face identification, respectively, the findings are largely inconsistent. Hence, we examined the effect of both anodal and cathodal tDCS on the recognition of own- and other-race faces. Ninety participants first completed own- and other-race Cambridge Face Memory Test (CFMT) as baseline measurements. Next, they received either anodal tDCS, cathodal tDCS or sham stimulation and finally they completed alternative versions of the own- and other-race CFMT. No difference in performance, in terms of accuracy and reaction time, for own- and other-race face recognition between anodal tDCS, cathodal tDCS and sham stimulation was found. Our findings cast doubt upon the efficacy of tDCS to modulate performance in face identification tasks.
Affiliation(s)
- Siew Kei Kho
- Department of Psychology, Bournemouth University, Poole, United Kingdom
- School of Psychology, University of Nottingham Malaysia, Semenyih, Malaysia
- David Keeble
- Department of Psychology, Bournemouth University, Poole, United Kingdom
- Hoo Keat Wong
- Department of Psychology, Bournemouth University, Poole, United Kingdom
33
Maxwell JW, Sanchez DN, Ruthruff E. Infrequent facial expressions of emotion do not bias attention. PSYCHOLOGICAL RESEARCH 2023; 87:2449-2459. [PMID: 37258662 DOI: 10.1007/s00426-023-01844-6] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2022] [Accepted: 05/22/2023] [Indexed: 06/02/2023]
Abstract
Despite the obvious importance of facial expressions of emotion, most studies have found that they do not bias attention. A critical limitation, however, is that these studies generally present face distractors on all trials of the experiment. For other kinds of emotional stimuli, such as emotional scenes, infrequently presented stimuli elicit greater attentional bias than frequently presented stimuli, perhaps due to suppression or habituation. The goal of the current study was therefore to test whether this modulation of attentional bias by distractor frequency generalizes to facial expressions of emotion. In Experiment 1, neither angry nor happy faces biased attention, despite being presented infrequently. Even when the location of these face cues was less predictable (presented in one of two possible locations), no attentional bias was observed (Experiment 2). Moreover, there was no bottom-up influence of angry and happy faces shown under high or low perceptual load (Experiment 3). We conclude that task-irrelevant posed facial expressions of emotion cannot bias attention even when presented infrequently.
Affiliation(s)
- Joshua W Maxwell
- Department of Psychology, 1 University of New Mexico, Albuquerque, NM, 87131, USA.
- Danielle N Sanchez
- Department of Psychology, 1 University of New Mexico, Albuquerque, NM, 87131, USA
- Eric Ruthruff
- Department of Psychology, 1 University of New Mexico, Albuquerque, NM, 87131, USA
34
Greene L, Reidy J, Morton N, Atherton A, Barker LA. Dynamic Emotion Recognition and Social Inference Ability in Traumatic Brain Injury: An Eye-Tracking Comparison Study. Behav Sci (Basel) 2023; 13:816. [PMID: 37887466 PMCID: PMC10604615 DOI: 10.3390/bs13100816] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2023] [Revised: 09/22/2023] [Accepted: 09/27/2023] [Indexed: 10/28/2023] Open
Abstract
Emotion recognition and social inference impairments are well-documented features of post-traumatic brain injury (TBI), yet the mechanisms underpinning these are not fully understood. We examined dynamic emotion recognition, social inference abilities, and eye fixation patterns between adults with and without TBI. Eighteen individuals with TBI and 18 matched non-TBI participants were recruited and underwent all three components of The Assessment of Social Inference Test (TASIT). The TBI group were less accurate in identifying emotions compared to the non-TBI group. Individuals with TBI also scored lower when distinguishing sincere and sarcastic conversations, but scored similarly to those without TBI during lie vignettes. Finally, those with TBI also had difficulty understanding the actor's intentions, feelings, and beliefs compared to participants without TBI. No group differences were found for eye fixation patterns, and there were no associations between fixations and behavioural accuracy scores. This conflicts with previous studies, and might be related to an important distinction between static and dynamic stimuli. Visual strategies appeared goal- and stimulus-driven, with attention being distributed to the most diagnostic area of the face for each emotion. These findings suggest that low-level visual deficits may not be modulating emotion recognition and social inference disturbances post-TBI.
Affiliation(s)
- Leanne Greene
- Centre for Behavioural Science and Applied Psychology, Department of Psychology, Sociology and Politics, Sheffield Hallam University, Sheffield S10 2BP, UK
- John Reidy
- Centre for Behavioural Science and Applied Psychology, Department of Psychology, Sociology and Politics, Sheffield Hallam University, Sheffield S10 2BP, UK
- Nick Morton
- Neuro Rehabilitation Outreach Team, Rotherham, Doncaster and South Humber NHS Trust, Doncaster DN4 8QN, UK
- Alistair Atherton
- Consultant Clinical Neuropsychologist, Atherton Neuropsychology Consultancy Ltd. Parkhead Consultancy, 356 Ecclesall Road, Sheffield S11 9PU, UK
- Lynne A. Barker
- Centre for Behavioural Science and Applied Psychology, Department of Psychology, Sociology and Politics, Sheffield Hallam University, Sheffield S10 2BP, UK
35
Schindler S, Bruchmann M, Straube T. Beyond facial expressions: A systematic review on effects of emotional relevance of faces on the N170. Neurosci Biobehav Rev 2023; 153:105399. [PMID: 37734698 DOI: 10.1016/j.neubiorev.2023.105399] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2023] [Revised: 09/15/2023] [Accepted: 09/17/2023] [Indexed: 09/23/2023]
Abstract
The N170 is the most prominent electrophysiological signature of face processing. While facial expressions reliably modulate the N170, there is considerable variance in N170 modulations by other sources of emotional relevance. Therefore, we systematically review and discuss this research area using different methods to manipulate the emotional relevance of inherently neutral faces. These methods were categorized into (1) existing pre-experimental affective person knowledge (e.g., negative attitudes towards outgroup faces), (2) experimentally instructed affective person knowledge (e.g., negative person information), (3) contingency-based affective learning (e.g., fear-conditioning), or (4) the immediate affective context (e.g., emotional information directly preceding the face presentation). For all categories except the immediate affective context category, the majority of studies reported significantly increased N170 amplitudes depending on the emotional relevance of faces. Furthermore, the potentiated N170 was observed across different attention conditions, supporting the role of the emotional relevance of faces on the early prioritized processing of configural facial information, regardless of low-level differences. However, we identified several open research questions and suggest venues for further research.
Affiliation(s)
- Sebastian Schindler
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Germany.
- Maximilian Bruchmann
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Germany
- Thomas Straube
- Institute of Medical Psychology and Systems Neuroscience, University of Muenster, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Muenster, Germany
36
Ebner NC, Pehlivanoglu D, Shoenfelt A. Financial Fraud and Deception in Aging. ADVANCES IN GERIATRIC MEDICINE AND RESEARCH 2023; 5:e230007. [PMID: 37990708 PMCID: PMC10662792 DOI: 10.20900/agmr20230007] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/23/2023]
Abstract
Financial exploitation among older adults is a significant concern with often devastating consequences for individuals and society. Deception plays a critical role in financial exploitation, and detecting deception is challenging, especially for older adults. Susceptibility to deception in older adults is heightened by age-related changes in cognition, such as declines in processing speed and working memory, as well as socioemotional factors, including positive affect and social isolation. Additionally, neurobiological changes with age, such as reduced cortical volume and altered functional connectivity, are associated with declining deception detection and increased risk for financial exploitation among older adults. Furthermore, characteristics of deceptive messages, such as personal relevance and framing, as well as visual cues such as faces, can influence deception detection. Understanding the multifaceted factors that contribute to deception risk in aging is crucial for developing interventions and strategies to protect older adults from financial exploitation. Tailored approaches, including age-specific warnings and harmonizing artificial intelligence as well as human-centered approaches, can help mitigate the risks and protect older adults from fraud.
Affiliation(s)
- Natalie C. Ebner
- Department of Psychology, University of Florida, Gainesville, FL 32611, USA
- Florida Institute for Cybersecurity Research, University of Florida, Gainesville, FL 32611, USA
- Florida Institute for National Security, University of Florida, Gainesville, FL 32611, USA
- Institute on Aging, University of Florida, Gainesville, FL 32611, USA
- Center for Cognitive Aging and Memory, University of Florida, Gainesville, FL 32610, USA
- Didem Pehlivanoglu
- Department of Psychology, University of Florida, Gainesville, FL 32611, USA
- Florida Institute for Cybersecurity Research, University of Florida, Gainesville, FL 32611, USA
- Florida Institute for National Security, University of Florida, Gainesville, FL 32611, USA
- Alayna Shoenfelt
- Department of Psychology, University of Florida, Gainesville, FL 32611, USA
37
Şentürk YD, Tavacioglu EE, Duymaz İ, Sayim B, Alp N. The Sabancı University Dynamic Face Database (SUDFace): Development and validation of an audiovisual stimulus set of recited and free speeches with neutral facial expressions. Behav Res Methods 2023; 55:3078-3099. [PMID: 36018484 DOI: 10.3758/s13428-022-01951-z] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 08/06/2022] [Indexed: 11/08/2022]
Abstract
Faces convey a wide range of information, including one's identity, and emotional and mental states. Face perception is a major research topic in many research fields, such as cognitive science, social psychology, and neuroscience. Frequently, stimuli are selected from a range of available face databases. However, even though faces are highly dynamic, most databases consist of static face stimuli. Here, we introduce the Sabancı University Dynamic Face (SUDFace) database. The SUDFace database consists of 150 high-resolution audiovisual videos acquired in a controlled lab environment and stored with a resolution of 1920 × 1080 pixels at a frame rate of 60 Hz. The multimodal database consists of three videos of each human model in frontal view in three different conditions: vocalizing two scripted texts (conditions 1 and 2) and one Free Speech (condition 3). The main focus of the SUDFace database is to provide a large set of dynamic faces with neutral facial expressions and natural speech articulation. Variables such as face orientation, illumination, and accessories (piercings, earrings, facial hair, etc.) were kept constant across all stimuli. We provide detailed stimulus information, including facial features (pixel-wise calculations of face length, eye width, etc.) and speeches (e.g., duration of speech and repetitions). In two validation experiments, a total number of 227 participants rated each video on several psychological dimensions (e.g., neutralness and naturalness of expressions, valence, and the perceived mental states of the models) using Likert scales. The database is freely accessible for research purposes.
Affiliation(s)
- İlker Duymaz
- Psychology, Sabancı University, Orta Mahalle, Tuzla, İstanbul, 34956, Turkey
- Bilge Sayim
- SCALab - Sciences Cognitives et Sciences Affectives, Université de Lille, CNRS, Lille, France
- Institute of Psychology, University of Bern, Fabrikstrasse 8, 3012, Bern, Switzerland
- Nihan Alp
- Psychology, Sabancı University, Orta Mahalle, Tuzla, İstanbul, 34956, Turkey.
38
Hayajneh A, Shaqfeh M, Serpedin E, Stotland MA. Unsupervised anomaly appraisal of cleft faces using a StyleGAN2-based model adaptation technique. PLoS One 2023; 18:e0288228. [PMID: 37535557 PMCID: PMC10399833 DOI: 10.1371/journal.pone.0288228] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2022] [Accepted: 06/22/2023] [Indexed: 08/05/2023] Open
Abstract
A novel machine learning framework that is able to consistently detect, localize, and measure the severity of human congenital cleft lip anomalies is introduced. The ultimate goal is to fill an important clinical void: to provide an objective and clinically feasible method of gauging baseline facial deformity and the change obtained through reconstructive surgical intervention. The proposed method first employs the StyleGAN2 generative adversarial network with model adaptation to produce a normalized transformation of 125 faces, and then uses a pixel-wise subtraction approach to assess the difference between all baseline images and their normalized counterparts (a proxy for severity of deformity). The pipeline of the proposed framework consists of the following steps: image preprocessing, face normalization, color transformation, heat-map generation, morphological erosion, and abnormality scoring. Heatmaps that finely discern anatomic anomalies visually corroborate the generated scores. The proposed framework is validated through computer simulations as well as by comparison of machine-generated versus human ratings of facial images. The anomaly scores yielded by the proposed computer model correlate closely with human ratings, with a calculated Pearson's r score of 0.89. The proposed pixel-wise measurement technique is shown to more closely mirror human ratings of cleft faces than two other existing, state-of-the-art image quality metrics (Learned Perceptual Image Patch Similarity and Structural Similarity Index). The proposed model may represent a new standard for objective, automated, and real-time clinical measurement of faces affected by congenital cleft deformity.
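The framework above scores deformity via pixel-wise subtraction from a StyleGAN2-normalized face, followed by heat-map thresholding, morphological erosion, and scoring. The paper's actual pipeline is far more involved; the sketch below only illustrates the final scoring steps on toy grayscale arrays, with the threshold, array sizes, and names all chosen for illustration rather than taken from the paper.

```python
# Illustrative sketch (not the authors' implementation): scoring the difference
# between a face and its "normalized" counterpart. We take the absolute
# pixel-wise difference, binarize it, apply one pass of 3x3 morphological
# erosion to suppress isolated speckle, and sum the surviving pixels as an
# abnormality score. The two 6x6 arrays below stand in for grayscale faces.

def erode(mask):
    """3x3 erosion: a pixel survives only if its full neighbourhood is set."""
    h, w = len(mask), len(mask[0])
    out = [[0] * w for _ in range(h)]
    for r in range(1, h - 1):
        for c in range(1, w - 1):
            out[r][c] = int(all(mask[r + dr][c + dc]
                                for dr in (-1, 0, 1) for dc in (-1, 0, 1)))
    return out

def abnormality_score(image, normalized, threshold=0.2):
    diff = [[abs(a - b) for a, b in zip(ra, rb)]
            for ra, rb in zip(image, normalized)]
    mask = [[int(d > threshold) for d in row] for row in diff]
    return sum(map(sum, erode(mask)))

# Toy example: a coherent 3x3 patch differs strongly, plus one noisy pixel.
norm_face = [[0.0] * 6 for _ in range(6)]
patient_face = [[0.0] * 6 for _ in range(6)]
for r in range(1, 4):
    for c in range(1, 4):
        patient_face[r][c] = 0.9   # coherent "deformity" region
patient_face[5][5] = 0.9           # isolated speckle, removed by erosion
print(abnormality_score(patient_face, norm_face))
```

Erosion is what makes the score reflect coherent anatomic regions rather than scattered pixel noise: the isolated speckle contributes nothing, while the contiguous patch survives.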
Affiliation(s)
- Abdullah Hayajneh
- Electrical and Computer Engineering Department, Texas A&M University, College Station, TX, United States of America
- Mohammad Shaqfeh
- Electrical and Computer Engineering Program, Texas A&M University, Doha, Qatar
- Erchin Serpedin
- Electrical and Computer Engineering Department, Texas A&M University, College Station, TX, United States of America
- Mitchell A Stotland
- Division of Plastic, Craniofacial and Hand Surgery, Sidra Medicine, and Weill Cornell Medical College, Doha, Qatar

39
Miolla A, Cardaioli M, Scarpazza C. Padova Emotional Dataset of Facial Expressions (PEDFE): A unique dataset of genuine and posed emotional facial expressions. Behav Res Methods 2023; 55:2559-2574. [PMID: 36002622] [PMCID: PMC10439033] [DOI: 10.3758/s13428-022-01914-4]
Abstract
Facial expressions are among the most powerful signals human beings use to convey their emotional states. Indeed, emotional facial datasets represent the most effective and controlled method of examining how humans interpret and react to various emotions. However, scientific research on emotion has mainly relied on static pictures of facial expressions posed (i.e., simulated) by actors, creating a significant bias in the emotion literature. This dataset aims to fill that gap, providing a considerable number (N = 1458) of dynamic genuine (N = 707) and posed (N = 751) clips of the six universal emotions from 56 participants. The dataset is available in two versions: original clips, including participants' body and background, and modified clips, where only the face of participants is visible. Notably, the original dataset has been validated by 122 human raters, while the modified dataset has been validated by 280 human raters. Hit rates for emotion and genuineness, as well as the mean and standard deviation of genuineness and intensity perception, are provided for each clip so that future users can select the clips most appropriate to their scientific questions.
Affiliation(s)
- A. Miolla
- Department of General Psychology, University of Padua, Padua, Italy
- M. Cardaioli
- Department of Mathematics, University of Padua, Padua, Italy
- GFT Italy, Milan, Italy
- C. Scarpazza
- Department of General Psychology, University of Padua, Padua, Italy

40
Lampi AJ, Brewer R, Bird G, Jaswal VK. Non-autistic adults can recognize posed autistic facial expressions: Implications for internal representations of emotion. Autism Res 2023; 16:1321-1334. [PMID: 37172211] [DOI: 10.1002/aur.2938]
Abstract
Autistic people report that their emotional expressions are sometimes misunderstood by non-autistic people. One explanation for these misunderstandings could be that the two neurotypes have different internal representations of emotion: Perhaps they have different expectations about what a facial expression showing a particular emotion looks like. In three well-powered studies with non-autistic college students in the United States (total N = 632), we investigated this possibility. In Study 1, participants recognized most facial expressions posed by autistic individuals more accurately than those posed by non-autistic individuals. Study 2 showed that one reason the autistic expressions were recognized more accurately was because they were better and more intense examples of the intended expressions than the non-autistic expressions. In Study 3, we used a set of expressions created by autistic and non-autistic individuals who could see their faces as they made the expressions, which could allow them to explicitly match the expression they produced with their internal representation of that emotional expression. Here, neither autistic expressions nor non-autistic expressions were consistently recognized more accurately. In short, these findings suggest that differences in internal representations of what emotional expressions look like are unlikely to play a major role in explaining why non-autistic people sometimes misunderstand the emotions autistic people are experiencing.
Affiliation(s)
- Andrew J Lampi
- Department of Psychology, University of Virginia, Charlottesville, Virginia, USA
- Rebecca Brewer
- Department of Psychology, Royal Holloway University of London, Egham, UK
- Geoffrey Bird
- Department of Experimental Psychology, Brasenose College, University of Oxford, Oxford, UK
- Vikram K Jaswal
- Department of Psychology, University of Virginia, Charlottesville, Virginia, USA

41
Saumure C, Plouffe-Demers MP, Fiset D, Cormier S, Zhang Y, Sun D, Feng M, Luo F, Kunz M, Blais C. Differences between East Asians and Westerners in the mental representations and visual information extraction involved in the decoding of pain facial expression intensity. Affect Sci 2023; 4:332-349. [PMID: 37293682] [PMCID: PMC10153781] [DOI: 10.1007/s42761-023-00186-1]
Abstract
Effectively communicating pain is crucial for human beings. Facial expressions are one of the most specific forms of behavior associated with pain, but the way culture shapes expectations about the intensity with which pain is typically facially conveyed, and the visual strategies deployed to decode pain intensity in facial expressions, is poorly understood. The present study used a data-driven approach to compare two cultures, namely East Asians and Westerners, with respect to their mental representations of pain facial expressions (experiment 1, N = 60; experiment 2, N = 74) and their visual information utilization during the discrimination of facial expressions of pain of different intensities (experiment 3, N = 60). Results reveal that, compared to Westerners, East Asians expect more intense pain expressions (experiments 1 and 2), need more signal, and rely less on the core facial features of pain expressions to discriminate between pain intensities (experiment 3). Together, these findings suggest that cultural norms regarding socially accepted pain behaviors shape both the expectations about pain facial expressions and the visual strategies used to decode them. Furthermore, they highlight the complexity of emotional facial expressions and the importance of studying pain communication in multicultural settings. Supplementary information: the online version contains supplementary material available at 10.1007/s42761-023-00186-1.
Affiliation(s)
- Camille Saumure
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, CP 1250 succ. Hull, Gatineau, J8X 3X7, Canada
- Marie-Pier Plouffe-Demers
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, CP 1250 succ. Hull, Gatineau, J8X 3X7, Canada
- Département de Psychologie, Université du Québec à Montréal, CP 8888 succ. Centre-ville, Montréal, Québec, H3C 3P8, Canada
- Daniel Fiset
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, CP 1250 succ. Hull, Gatineau, J8X 3X7, Canada
- Stéphanie Cormier
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, CP 1250 succ. Hull, Gatineau, J8X 3X7, Canada
- Ye Zhang
- Institute of Psychological Science, Hangzhou Normal University, Hangzhou, Zhejiang, China
- Dan Sun
- Institute of Psychological Science, Hangzhou Normal University, Hangzhou, Zhejiang, China
- Department of Psychology, Utrecht University, Utrecht, The Netherlands
- Manni Feng
- Institute of Psychological Science, Hangzhou Normal University, Hangzhou, Zhejiang, China
- Feifan Luo
- Institute of Psychological Science, Hangzhou Normal University, Hangzhou, Zhejiang, China
- Miriam Kunz
- Department of Medical Psychology & Sociology, University of Augsburg, Augsburg, Germany
- Caroline Blais
- Département de Psychoéducation et de Psychologie, Université du Québec en Outaouais, CP 1250 succ. Hull, Gatineau, J8X 3X7, Canada

42
Correia-Caeiro C, Guo K, Mills DS. Visual perception of emotion cues in dogs: a critical review of methodologies. Anim Cogn 2023; 26:727-754. [PMID: 36870003] [PMCID: PMC10066124] [DOI: 10.1007/s10071-023-01762-5]
Abstract
Comparative studies of human-dog cognition have grown exponentially since the 2000s, but the focus on how dogs look at us (as well as other dogs) as social partners is a more recent phenomenon, despite its importance to human-dog interactions. Here, we briefly summarise the current state of research in visual perception of emotion cues in dogs and why this area is important; we then critically review its most commonly used methods, discussing conceptual and methodological challenges and associated limitations in depth; finally, we suggest some possible solutions and recommend best practice for future research. Typically, most studies in this field have concentrated on facial emotional cues, with full body information rarely considered. There are many challenges in the way studies are conceptually designed (e.g., use of non-naturalistic stimuli) and in the way researchers incorporate biases (e.g., anthropomorphism) into experimental designs, which may lead to problematic conclusions. However, technological and scientific advances offer the opportunity to gather much more valid, objective, and systematic data in this rapidly expanding field of study. Solving conceptual and methodological challenges in the field of emotion perception research in dogs will not only be beneficial in improving research in dog-human interactions, but also within the comparative psychology area, in which dogs are an important model species to study evolutionary processes.
Affiliation(s)
- Catia Correia-Caeiro
- School of Psychology, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK
- Department of Life Sciences, University of Lincoln, Lincoln, LN6 7DL, UK
- Primate Research Institute, Kyoto University, Inuyama, 484-8506, Japan
- Center for the Evolutionary Origins of Human Behavior, Kyoto University, Inuyama, 484-8506, Japan
- Kun Guo
- School of Psychology, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, UK
- Daniel S Mills
- Department of Life Sciences, University of Lincoln, Lincoln, LN6 7DL, UK

43
Familiarity facilitates detection of angry expressions. Brain Sci 2023; 13:509. [PMID: 36979319] [PMCID: PMC10046299] [DOI: 10.3390/brainsci13030509]
Abstract
Personal familiarity facilitates rapid and optimized detection of faces. In this study, we investigated whether familiarity associated with faces can also facilitate the detection of facial expressions. Models of face processing propose that face identity and face expression detection are mediated by distinct pathways. We used a visual search paradigm to assess whether facial expressions of emotion (anger and happiness) were detected more rapidly when produced by familiar as compared to unfamiliar faces. We found that participants detected an angry expression 11% more accurately and 135 ms faster when produced by familiar as compared to unfamiliar faces, while happy expressions were detected with equivalent accuracy and speed for familiar and unfamiliar faces. These results suggest that detectors in the visual system dedicated to processing features of angry expressions are optimized for familiar faces.
44
Behavioral and physiological sensitivity to natural sick faces. Brain Behav Immun 2023; 110:195-211. [PMID: 36893923] [DOI: 10.1016/j.bbi.2023.03.007]
Abstract
The capacity to rapidly detect and avoid sick people may be adaptive. Given that faces are reliably available, as well as rapidly detected and processed, they may provide health information that influences social interaction. Prior studies used faces that were manipulated to appear sick (e.g., editing photos, inducing inflammatory response); however, responses to naturally sick faces remain largely unexplored. We tested whether adults detected subtle cues of genuine, acute, potentially contagious illness in face photos compared to the same individuals when healthy. We tracked illness symptoms and severity with the Sickness Questionnaire and Common Cold Questionnaire. We also checked that sick and healthy photos were matched on low-level features. We found that participants (N = 109) rated sick faces, compared to healthy faces, as sicker, more dangerous, and eliciting more unpleasant feelings. Participants (N = 90) rated sick faces as more likely to be avoided, more tired, and more negative in expression than healthy faces. In a passive-viewing eye-tracking task, participants (N = 50) looked longer at healthy than sick faces, especially the eye region, suggesting people may be more drawn to healthy conspecifics. When making approach-avoidance decisions, participants (N = 112) had greater pupil dilation to sick than healthy faces, and more pupil dilation was associated with greater avoidance, suggesting elevated arousal to threat. Across all experiments, participants' behaviors correlated with the degree of sickness, as reported by the face donors, suggesting a nuanced, fine-tuned sensitivity. Together, these findings suggest that humans may detect subtle threats of contagion from sick faces, which may facilitate illness avoidance. By better understanding how humans naturally avoid illness in conspecifics, we may identify what information is used and ultimately improve public health.
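The reported link between pupil dilation and avoidance is an ordinary correlation computed across participants. A minimal sketch of that kind of analysis follows; the variable names and numbers are hypothetical, not the study's data:

```python
import numpy as np

def pearson_r(x, y):
    """Pearson correlation coefficient between two 1-D samples."""
    x = np.asarray(x, dtype=float)
    y = np.asarray(y, dtype=float)
    xc, yc = x - x.mean(), y - y.mean()
    return float((xc * yc).sum() / np.sqrt((xc ** 2).sum() * (yc ** 2).sum()))

# Hypothetical per-participant measures: mean pupil dilation to sick faces
# (arbitrary units) and proportion of avoidance choices.
dilation = np.array([0.12, 0.35, 0.20, 0.41, 0.05, 0.30])
avoidance = np.array([0.40, 0.75, 0.55, 0.80, 0.30, 0.60])
r = pearson_r(dilation, avoidance)  # positive r: more dilation, more avoidance
```

The same statistic underlies the "behaviors correlated with the degree of sickness" claim: each behavioral measure is correlated against the severity reported by the face donors.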
45
Thomas LE, Emich A, Weiss E, Zisman C, Foray K, Roberts DM, Page E, Ernst M. Examination of the COVID-19 pandemic's impact on mental health from three perspectives: Global, social, and individual. Perspect Psychol Sci 2023; 18:513-526. [PMID: 36173751] [PMCID: PMC10018233] [DOI: 10.1177/17456916221078310]
Abstract
The deleterious effects of the COVID-19 pandemic on mental health are recognized ubiquitously. However, these effects are subject to many modulatory factors across numerous domains of examination. It is important to understand how societal and individual levels intersect, and how responses to a global stressor differ from those to local phenomena and physical-health outcomes. Here, we consider three perspectives: international/cultural, social, and individual. Both the enduring threat of COVID-19 infection and the protective measures to contain contagion have important consequences for individual mental health. These consequences, together with possible remedial interventions, are the focus of this article. We hope this work will stimulate more research and suggest factors that need to be considered in the coordination of responses to a global threat, allowing for better preparation in the future.
Affiliation(s)
- Lauren E. Thomas
- Section on the Neurobiology of Fear and Anxiety, National Institute of Mental Health, Bethesda, Maryland
- Abigail Emich
- Section on the Neurobiology of Fear and Anxiety, National Institute of Mental Health, Bethesda, Maryland
- Emily Weiss
- Section on the Neurobiology of Fear and Anxiety, National Institute of Mental Health, Bethesda, Maryland
- Corina Zisman
- Section on the Neurobiology of Fear and Anxiety, National Institute of Mental Health, Bethesda, Maryland
- Katherine Foray
- Section on the Neurobiology of Fear and Anxiety, National Institute of Mental Health, Bethesda, Maryland
- Deborah M. Roberts
- Section on the Neurobiology of Fear and Anxiety, National Institute of Mental Health, Bethesda, Maryland
- Emily Page
- Section on the Neurobiology of Fear and Anxiety, National Institute of Mental Health, Bethesda, Maryland
- Monique Ernst
- Section on the Neurobiology of Fear and Anxiety, National Institute of Mental Health, Bethesda, Maryland

46
Snoek L, Jack RE, Schyns PG, Garrod OG, Mittenbühler M, Chen C, Oosterwijk S, Scholte HS. Testing, explaining, and exploring models of facial expressions of emotions. Sci Adv 2023; 9:eabq8421. [PMID: 36763663] [PMCID: PMC9916981] [DOI: 10.1126/sciadv.abq8421]
Abstract
Models are the hallmark of mature scientific inquiry. In psychology, this maturity has been reached in a pervasive question-what models best represent facial expressions of emotion? Several hypotheses propose different combinations of facial movements [action units (AUs)] as best representing the six basic emotions and four conversational signals across cultures. We developed a new framework to formalize such hypotheses as predictive models, compare their ability to predict human emotion categorizations in Western and East Asian cultures, explain the causal role of individual AUs, and explore updated, culture-accented models that improve performance by reducing a prevalent Western bias. Our predictive models also provide a noise ceiling to inform the explanatory power and limitations of different factors (e.g., AUs and individual differences). Thus, our framework provides a new approach to test models of social signals, explain their predictive power, and explore their optimization, with direct implications for theory development.
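A hypothesis of the kind this paper formalizes, a mapping from action-unit (AU) combinations to emotion categories, can be sketched as a nearest-prototype classifier. The AU sets below are commonly cited prototype combinations used here purely for illustration; they are not the paper's fitted or culture-accented models:

```python
def predict_emotion(active_aus, hypothesis):
    """Predict the emotion whose hypothesized AU set best matches the
    observed active AUs (Jaccard similarity); ties broken alphabetically."""
    def jaccard(a, b):
        return len(a & b) / len(a | b) if a | b else 0.0
    return max(sorted(hypothesis), key=lambda emo: jaccard(active_aus, hypothesis[emo]))

# Illustrative prototype AU sets (assumptions, not the paper's models).
PROTOTYPES = {
    "happy":    {6, 12},          # cheek raiser + lip corner puller
    "sad":      {1, 4, 15},       # inner brow raiser + brow lowerer + lip corner depressor
    "surprise": {1, 2, 5, 26},    # brow raisers + upper lid raiser + jaw drop
    "anger":    {4, 5, 7, 23},    # brow lowerer + lid raiser/tightener + lip tightener
    "disgust":  {9, 15},          # nose wrinkler + lip corner depressor
}

label = predict_emotion({6, 12}, PROTOTYPES)  # → "happy"
```

Comparing such a model's predictions against human categorizations in each culture, and measuring how much accuracy each AU contributes when removed, is the general spirit of the testing and explaining steps described in the abstract.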
Affiliation(s)
- Lukas Snoek
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
- Rachael E. Jack
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
- Philippe G. Schyns
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
- Maximilian Mittenbühler
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- Department of Computer Science, University of Tübingen, Tübingen, Germany
- Chaona Chen
- School of Psychology and Neuroscience, University of Glasgow, Glasgow, UK
- Suzanne Oosterwijk
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands
- H. Steven Scholte
- Department of Psychology, University of Amsterdam, Amsterdam, Netherlands

47
Lanfranco RC, Rabagliati H, Carmel D. The importance of awareness in face processing: A critical review of interocular suppression studies. Behav Brain Res 2023; 437:114116. [PMID: 36113728] [DOI: 10.1016/j.bbr.2022.114116]
Abstract
Human faces convey essential information for understanding others' mental states and intentions. The importance of faces in social interaction has prompted suggestions that some relevant facial features such as configural information, emotional expression, and gaze direction may promote preferential access to awareness. This evidence has predominantly come from interocular suppression studies, with the most common method being the Breaking Continuous Flash Suppression (bCFS) procedure, which measures the time it takes different stimuli to overcome interocular suppression. However, the procedures employed in such studies suffer from multiple methodological limitations. For example, they are unable to disentangle detection from identification processes, their results may be confounded by participants' response bias and decision criteria, they typically use small stimulus sets, and some of their results attributed to detecting high-level facial features (e.g., emotional expression) may be confounded by differences in low-level visual features (e.g., contrast, spatial frequency). In this article, we review the evidence from the bCFS procedure on whether relevant facial features promote access to awareness, discuss the main limitations of this very popular method, and propose strategies to address these issues.
Affiliation(s)
- Renzo C Lanfranco
- Department of Psychology, University of Edinburgh, Edinburgh, United Kingdom; Department of Neuroscience, Karolinska Institutet, Stockholm, Sweden
- Hugh Rabagliati
- Department of Psychology, University of Edinburgh, Edinburgh, United Kingdom
- David Carmel
- Department of Psychology, University of Edinburgh, Edinburgh, United Kingdom; School of Psychology, Victoria University of Wellington, Wellington, New Zealand

48
Richard MA, Saint Aroman M, Baissac C, Merhand S, Aubert R, Audouze A, Legrand C, Beausillon C, Carre M, Raynal H, Bergqvist C, Taieb C, Cribier B. Burden of visible [face and hands] skin diseases: Results from a large international survey. Ann Dermatol Venereol 2023:S0151-9638(22)00122-3. [PMID: 36653227] [DOI: 10.1016/j.annder.2022.11.008]
Abstract
BACKGROUND: While numerous surveys over the last decade have evaluated the burden of skin diseases, none have focused on the specific impact of disease location on the hands and face.
AIM: To evaluate the burden of 8 skin diseases on the multidimensional aspects of subjects' daily lives with respect to their location on visible body areas (face or hands) versus non-visible areas.
METHODS: This was a population-based study in a representative sample of the Canadian, Chinese, Italian, Spanish, German and French populations aged over 18 years, using the proportional quota sampling method. All participants were asked (i) to complete a specific questionnaire including socio-demographic characteristics and (ii) to declare whether they had a skin disease. All respondents with a skin disease were asked (iii) to specify the disease locations (hands, face, body) and (iv) to complete the DLQI questionnaire. Respondents with one of the 8 selected skin diseases were asked (v) to complete a questionnaire evaluating the impact of the skin disease on their daily life, including their professional activity, social relations, emotional and intimate life, leisure, sports activities and perceived stigma.
RESULTS: A total of 13,138 adult participants responded to the questionnaire, of whom 26.2% (n = 3,450) had skin diseases and 23.4% (n = 3,072) reported having one of the 8 selected skin diseases. Fifty-three percent were women and the mean age was 39.6 ± 15.5 years. Quality of life was most impaired when the visible localization was solely on the hands rather than the face (38% versus 22% with a DLQI > 10). More subjects with a visible localization on the hands reported felt stigma, difficulty falling asleep, and an affected sex life.
CONCLUSION: Special attention should be given to patients with skin disease on the hands and face, as they are at higher risk of social exclusion and lower quality of life.
Affiliation(s)
- M-A Richard
- Department of Dermatology, Aix-Marseille University, La Timone University Hospital, Marseille, France; CEReSS-EA 3279, Health Services and Quality of Life Research Centre, Aix Marseille University, Dermatology Department, La Timone University Hospital APHM, 13385 Marseille, France
- M Saint Aroman
- Head of Corporate Medical Direction Pharma, Dermocosmetics Care & Personal Care, Pierre Fabre, France
- C Baissac
- Head of Patient Centricity, Dermocosmetics Care & Personal Care, Pierre Fabre, France
- S Merhand
- Association Française de l'Eczéma, Redon, France
- A Audouze
- Association Ichtyose France, Bellerive-Sur-Allier, France
- C Legrand
- France Acné Adolescents Adultes, Vincennes, France
- C Beausillon
- France Acné Adolescents Adultes, Vincennes, France
- M Carre
- Association Française du Vitiligo, Paris, France
- H Raynal
- Solidarité Verneuil, Villeurbanne, France
- C Bergqvist
- Department of Dermatology, CHU Henri Mondor, Créteil, France
- C Taieb
- European Market Maintenance Assessment, Patients Priority Dpt, Fontenay sous Bois, France
- B Cribier
- Clinique Dermatologique, University Hospital, Strasbourg, France

49
Abstract
Virtual reality (VR) allows us to create visual stimuli that are both immersive and reactive. VR provides many new opportunities in vision science. In particular, it allows us to present wide field-of-view, immersive visual stimuli; for observers to actively explore the environments that we create; and for us to understand how visual information is used in the control of behaviour. In contrast with traditional psychophysical experiments, VR provides much greater flexibility in creating environments and tasks that are more closely aligned with our everyday experience. These benefits of VR are of particular value in developing our theories of the behavioural goals of the visual system and explaining how visual information is processed to achieve these goals. The use of VR in vision science presents a number of technical challenges, relating to how the available software and hardware limit our ability to accurately specify the visual information that defines our virtual environments and the interpretation of data gathered in experiments with a freely moving observer in a responsive environment.
Affiliation(s)
- Paul B Hibbard
- Department of Psychology, University of Essex, Colchester, UK

50
Canas-Bajo T, Whitney D. Individual differences in classification images of Mooney faces. J Vis 2022; 22:3. [DOI: 10.1167/jov.22.13.3]
Affiliation(s)
- Teresa Canas-Bajo
- Vision Science Graduate Group, University of California, Berkeley, Berkeley, CA, USA
- David Whitney
- Vision Science Graduate Group, University of California, Berkeley, Berkeley, CA, USA
- Department of Psychology, University of California, Berkeley, Berkeley, CA, USA