1. Moosavi J, Resch A, Lecchi A, Sokolov AN, Fallgatter AJ, Pavlova MA. Reading language of the eyes in female depression. Cereb Cortex 2024; 34:bhae253. PMID: 38990517. DOI: 10.1093/cercor/bhae253.

Abstract
Aberrations in non-verbal social cognition have been reported to coincide with major depressive disorder, yet little is known about the role of the eyes. To fill this gap, the present study explores whether, and if so how, reading the language of the eyes is altered in depression. For this purpose, patients and person-by-person matched typically developing individuals were administered the Emotions in Masked Faces task and a modified Reading the Mind in the Eyes Test, both of which make a comparable amount of visual information available. To achieve group homogeneity, we focused on females, as major depressive disorder displays a gender-specific profile. The findings show that facial masks selectively affect the inference of emotions: recognition of sadness and anger is more heavily compromised in major depressive disorder than in typically developing controls, whereas recognition of fear, happiness, and neutral expressions remains unhindered. Disgust, the forgotten emotion of psychiatry, is the least recognizable emotion in both groups. On the Reading the Mind in the Eyes Test, patients exhibit lower accuracy on positive expressions than their typically developing peers but do not differ on negative items. In both depressive and typically developing individuals, the ability to recognize emotions behind a mask and performance on the Reading the Mind in the Eyes Test are linked in processing speed, but not in recognition accuracy. The outcome provides a blueprint for understanding the complexities of reading the language of the eyes within and beyond the COVID-19 pandemic.

Affiliation(s)
- Jonas Moosavi, Annika Resch, Alessandro Lecchi, Alexander N. Sokolov, Andreas J. Fallgatter, Marina A. Pavlova: Social Neuroscience Unit, Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health (TüCMH), Medical School and University Hospital, Eberhard Karls University of Tübingen, Calwerstr. 14, 72076 Tübingen, Germany
- Andreas J. Fallgatter (additional affiliation): German Center for Mental Health (DZPG), Partner Site Tübingen, Tübingen, Germany

2. Roberti E, Turati C, Actis-Grosso R. Single point motion kinematics convey emotional signals in children and adults. PLoS One 2024; 19:e0301896. PMID: 38598520. PMCID: PMC11006184. DOI: 10.1371/journal.pone.0301896.

Abstract
This study investigates whether humans recognize different emotions conveyed solely by the kinematics of a single moving geometrical shape, and how this competence unfolds during development from childhood to adulthood. To this aim, animations in which a shape moved according to happy, fearful, or neutral cartoons were shown, in a forced-choice paradigm, to 7- and 10-year-old children and to adults. Accuracy and response times were recorded, and the movement of the mouse while participants selected a response was tracked. Results showed that 10-year-old children and adults recognize happiness and fear when conveyed solely by different kinematics, with an advantage for fearful stimuli. Fearful stimuli were also accurately identified by 7-year-olds, together with neutral stimuli, whereas at this age accuracy for happiness did not differ significantly from chance. Overall, the results demonstrate that emotions can be identified from single-point motion alone during both childhood and adulthood. Moreover, motion contributes to the comprehension of emotions to varying degrees, with fear recognized earlier in development and more readily even later on, when all emotions are accurately labeled.

Affiliation(s)
- Elisa Roberti, Chiara Turati, Rossana Actis-Grosso: Psychology Department, University of Milano–Bicocca, Milan, Italy; NeuroMI, Milan Center for Neuroscience, Milan, Italy

3. Keating CT, Cook JL. The inside out model of emotion recognition: how the shape of one's internal emotional landscape influences the recognition of others' emotions. Sci Rep 2023; 13:21490. PMID: 38057460. PMCID: PMC10700588. DOI: 10.1038/s41598-023-48469-8.

Abstract
Some people are exceptional at reading emotional expressions, while others struggle. Here we ask whether the way we experience emotion "on the inside" influences the way we expect emotions to be expressed in the "outside world", and subsequently our ability to read others' emotional expressions. Across multiple experiments, incorporating discovery and replication samples, we develop EmoMap (N = 20; N = 271) and ExpressionMap (N = 98; replication N = 193) to map adults' experiences of emotions and visual representations of others' emotions. Some individuals have modular maps, wherein emotional experiences and visual representations are consistent and distinct: anger looks and feels different from happiness, which looks and feels different from sadness. In contrast, others have experiences and representations that are variable and overlapping: anger, happiness, and sadness look and feel similar and are easily confused for one another. Here we illustrate an association between these maps: those with consistent and distinct experiences of emotion also have consistent and distinct visual representations of emotion. Finally (N = 193), we construct the Inside Out Model of Emotion Recognition, which explains 60.8% of the variance in emotion recognition and illuminates multiple pathways to emotion recognition difficulties. These findings have important implications for understanding the emotion recognition difficulties documented in numerous clinical populations.

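For orientation on the variance figure reported above: "explains 60.8% of the variance" corresponds to the coefficient of determination (R²) of the fitted model. A minimal sketch of how such a figure is computed, with placeholder predictors standing in for the study's actual map measures:

```python
# Minimal sketch: variance explained (R^2) by a regression model.
# The predictors below are placeholders, not the study's variables.
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
# Two invented predictors, e.g., consistency and distinctness of
# emotional experience, for 193 simulated participants.
X = rng.normal(size=(193, 2))
y = X @ np.array([0.7, 0.5]) + rng.normal(scale=0.6, size=193)

model = LinearRegression().fit(X, y)
# model.score returns R^2, the proportion of variance explained.
print(f"R^2 = {model.score(X, y):.3f}")
```
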
4. Bidet-Ildei C, Francisco V, Decatoire A, Pylouster J, Blandin Y. PLAViMoP database: A new continuously assessed and collaborative 3D point-light display dataset. Behav Res Methods 2023; 55:694-715. PMID: 35441360. DOI: 10.3758/s13428-022-01850-3.

Abstract
More than 45 years ago, Gunnar Johansson invented the point-light display technique, showing for the first time that kinematics is crucial for action recognition and that humans are very sensitive to their conspecifics' movements. As a result, many of today's researchers use point-light displays to better understand the mechanisms behind this recognition ability. In this paper, we propose PLAViMoP, a new database of 3D point-light displays representing everyday human actions (global and fine motor control movements), sports movements, facial expressions, interactions, and robotic movements. Access to the database is free at https://plavimop.prd.fr/en/motions, and it incorporates a search engine to facilitate action retrieval. We describe the construction, functioning, and assessment of the PLAViMoP database. Each sequence was analyzed according to four parameters: type of movement, movement label, sex of the actor, and age of the actor. We provide both the mean scores for each assessment of each point-light display and comparisons between the different categories of sequences. Our results are discussed in light of the literature and the suitability of our stimuli for research and applications.

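To illustrate how a catalogue tagged with these four parameters could be queried programmatically, here is a minimal sketch; the column names and values are assumptions for illustration, not the actual PLAViMoP schema or search engine:

```python
# Minimal sketch: filtering a point-light display catalogue by the
# assessment parameters described above. All fields are invented.
import pandas as pd

catalogue = pd.DataFrame([
    {"movement_type": "facial expression", "label": "happiness",
     "actor_sex": "F", "actor_age": 27, "mean_recognition": 0.92},
    {"movement_type": "sport", "label": "tennis serve",
     "actor_sex": "M", "actor_age": 31, "mean_recognition": 0.88},
    {"movement_type": "facial expression", "label": "disgust",
     "actor_sex": "F", "actor_age": 27, "mean_recognition": 0.61},
])

# Retrieve all facial-expression sequences recognized above 80% on average.
hits = catalogue[(catalogue["movement_type"] == "facial expression")
                 & (catalogue["mean_recognition"] > 0.80)]
print(hits[["label", "mean_recognition"]])
```
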
Affiliation(s)
- Christel Bidet-Ildei, Victor Francisco, Jean Pylouster, Yannick Blandin: Centre de Recherches sur la Cognition et l'Apprentissage (UMR CNRS 7295), Université de Poitiers, Université de Tours, Centre National de la Recherche Scientifique, Poitiers, France
- Christel Bidet-Ildei (additional address): MSHS, Bâtiment A5, 5 rue Théodore Lefebvre, TSA 21103, 86073 Poitiers Cedex 9, France
- Arnaud Decatoire: Institut PPRIME (UPR CNRS 3346), Université de Poitiers, Centre National de la Recherche Scientifique, Poitiers, France

5. Pavlova MA, Romagnano V, Kubon J, Isernia S, Fallgatter AJ, Sokolov AN. Ties between reading faces, bodies, eyes, and autistic traits. Front Neurosci 2022; 16:997263. PMID: 36248653. PMCID: PMC9554539. DOI: 10.3389/fnins.2022.997263.

Abstract
When reading faces covered by masks during the COVID-19 pandemic, efficient social interaction requires combining information from different sources, such as the eyes (when the rest of the face is hidden by a mask) and bodies. This may be challenging for individuals with neuropsychiatric conditions, in particular autism spectrum disorders. Here we examined whether reading of dynamic faces, bodies, and eyes is tied together in a gender-specific way, and how these capabilities relate to the expression of autistic traits. Females and males completed a task with point-light faces along with a task with point-light body locomotion, each portraying different emotional expressions whose emotional content they had to infer. In addition, participants were administered the modified Reading the Mind in the Eyes Test and the Autism Spectrum Quotient questionnaire. The findings show that only in females is inferring emotions from dynamic bodies and faces firmly linked, whereas in males, reading in the eyes is knotted with face reading. Strikingly, in neurotypical males only, accuracy of face, body, and eyes reading was negatively tied to autistic traits. The outcome points to gender-specific modes of social cognition: females rely on dynamic cues alone when reading faces and bodies, whereas males most likely trust configural information. The findings are of value for the examination of face and body language reading in neuropsychiatric conditions, in particular autism, most of which are gender/sex-specific. This work suggests that if male individuals with autistic traits experience difficulties in reading masked faces, these deficits are unlikely to be compensated by reading (even dynamic) bodies and faces. By contrast, in females, reading covered faces as well as the language of dynamic bodies and faces is not necessarily connected to autistic traits, preventing them from paying high costs for maladaptive social interaction.

Affiliation(s)
- Marina A. Pavlova (corresponding author), Valentina Romagnano, Julian Kubon, Andreas J. Fallgatter, Alexander N. Sokolov: Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health (TüCMH), Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany
- Sara Isernia: IRCCS Fondazione Don Carlo Gnocchi ONLUS, Milan, Italy

6. Pavlova MA, Sokolov AA. Reading language of the eyes. Neurosci Biobehav Rev 2022; 140:104755. PMID: 35760388. DOI: 10.1016/j.neubiorev.2022.104755.

Abstract
The need for assessment of social skills in clinical and neurotypical populations has led to the widespread and still increasing use of the 'Reading the Mind in the Eyes Test' (RMET), developed more than two decades ago by Simon Baron-Cohen and colleagues for the evaluation of social cognition in autism. By analyzing the most recent clinical and brain imaging data, we illuminate a set of factors decisive for using the RMET. Converging evidence indicates: (i) in neurotypical individuals, RMET scores are tightly correlated with other social skills (empathy, emotional intelligence, and body language reading); (ii) the RMET assesses recognition of facial affect but also relies heavily on receptive language skills, semantic knowledge, and memory; (iii) RMET performance is underwritten by large-scale ensembles of neural networks well outside the social brain; (iv) the RMET is limited in its capacity to differentiate between neuropsychiatric conditions, as well as between stages and severity of a single disorder, though it reliably distinguishes individuals with altered social cognition or elevated pathological traits from neurotypical persons; (v) gender (as a social construct), rather than neurobiological sex, influences performance on the RMET; (vi) RMET scores do not substantially decline in healthy aging, and they are higher with higher education level, cognitive abilities, literacy, and mental well-being; (vii) accuracy on the RMET, and engagement of the social brain, are greater when emotions are expressed and recognized by individuals with a similar cultural/ethnic background. Further research is required to better inform usage of the RMET as a tool for swift and reliable examination of social cognition. In light of the comparable visual input from the RMET images and from faces covered by masks due to COVID-19 regulations, this analysis is of value for maintaining efficient social interaction during the current pandemic, in particular in professional settings related to social communication.

Affiliation(s)
- Marina A. Pavlova: Department of Psychiatry and Psychotherapy, Tübingen Center for Mental Health (TüCMH), Medical School and University Hospital, Eberhard Karls University of Tübingen, Tübingen, Germany
- Arseny A. Sokolov: Service de neuropsychologie et de neuroréhabilitation, Département des neurosciences cliniques, Centre Hospitalier Universitaire Vaudois (CHUV), Lausanne, Switzerland

7. Takarae Y, McBeath MK, Krynen RC. Perception of dynamic point light facial expression. Am J Psychol 2021. DOI: 10.5406/amerjpsyc.134.4.0373.

Abstract
This study uses point-light displays both to investigate the roles of global and local motion analyses in the perception of dynamic facial expressions and to measure the information threshold for reliable recognition of emotions. We videotaped the faces of actors wearing black makeup with white dots while they dynamically produced each of six basic Darwin/Ekman emotional expressions. The number of point lights was varied to systematically manipulate the amount of information available. For all but one of the expressions, discriminability (d′) increased approximately linearly with the number of point lights, with most expressions remaining largely discriminable with as few as 6 point lights. This finding supports reliance on global motion patterns produced by facial muscles. However, discriminability for the happy expression was notably higher and largely unaffected by the number of point lights, and thus appears to rely on characteristic local motion, probably the unique upward curvature of the mouth. The findings indicate that recognition of facial expression is not a unitary process and that different expressions may be conveyed by different perceptual information; in general, however, basic facial emotional expressions remain largely discriminable with as few as 6 dynamic point lights.

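For readers unfamiliar with d′, here is a minimal sketch of how discriminability is conventionally computed from hits and false alarms in signal detection theory; the counts and the log-linear correction are illustrative assumptions, not data from this study:

```python
# Minimal sketch: d' = z(hit rate) - z(false-alarm rate).
from scipy.stats import norm

def d_prime(hits, misses, false_alarms, correct_rejections):
    # Log-linear correction avoids infinite z-scores at rates of 0 or 1.
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Invented counts: 42 hits, 6 misses; 9 false alarms, 39 correct rejections.
print(round(d_prime(42, 6, 9, 39), 2))
```
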
Affiliation(s)
- Michael K. McBeath: Arizona State University and Max Planck Institute for Empirical Aesthetics

8. Pazhoohi F, Forby L, Kingstone A. Facial masks affect emotion recognition in the general population and individuals with autistic traits. PLoS One 2021; 16:e0257740. PMID: 34591895. PMCID: PMC8483373. DOI: 10.1371/journal.pone.0257740.

Abstract
Facial expressions, and the ability to recognize these expressions, evolved in humans to communicate information to one another. Face masks are used by health professionals to prevent the transmission of airborne infections. As part of the social distancing efforts related to COVID-19, wearing facial masks has been practiced globally. Such practice might influence the communication of affective information among humans. Previous research suggests that masks disrupt the recognition of some emotional expressions (e.g., fear, sadness, or neutrality) and lower confidence in their identification. To extend this research, in the current study we tested a larger and more diverse sample of individuals and also investigated the effect of masks on the perceived intensity of expressions. Moreover, for the first time in the literature, we examined these questions in individuals with autistic traits. Specifically, across three experiments using different populations (college students and the general population) and the 10-item Autism Spectrum Quotient (AQ-10; lower and higher scorers), we tested the effect of facial masks on the recognition of anger, disgust, fear, happiness, sadness, and neutrality. Results showed that the ability to identify all facial expressions decreased when faces were masked, a finding observed across all three studies and contradicting previous research on fearful, sad, and neutral expressions. Participants were also less confident in their judgements for all emotions, supporting previous research, and perceived emotions as less expressive in the masked condition than in the unmasked condition, a finding novel to the literature. An additional novel finding was that participants with higher scores on the AQ-10 were less accurate and less confident overall in facial expression recognition, and also perceived expressions as less intense. Our findings reveal that wearing face masks decreases facial expression recognition, confidence in expression identification, and the perceived intensity of all expressions, affecting high-scoring AQ-10 individuals more than low-scoring ones.

Affiliation(s)
- Farid Pazhoohi, Leilani Forby, Alan Kingstone: Department of Psychology, University of British Columbia, Vancouver, British Columbia, Canada

9. Yagi S, Nakata Y, Nakamura Y, Ishiguro H. Can an android's posture and movement discriminate against the ambiguous emotion perceived from its facial expressions? PLoS One 2021; 16:e0254905. PMID: 34375327. PMCID: PMC8354482. DOI: 10.1371/journal.pone.0254905.

Abstract
Expressing emotions through various modalities is a crucial function not only for humans but also for robots. The mapping from facial expressions to basic emotions is widely used in research on robot emotional expression. This method claims that there are specific facial muscle activation patterns for each emotional expression and that people can perceive these emotions by reading these patterns. However, recent research on human behavior reveals that some emotional expressions, such as the emotion "intense", are difficult to judge as positive or negative from the facial expression alone. Nevertheless, it had not been investigated whether robots can also express ambiguous facial expressions with no clear valence, and whether the addition of body expressions can make the facial valence clearer to humans. This paper shows that an ambiguous facial expression of an android can be perceived more clearly by viewers when body postures and movements are added. We conducted three experiments with online surveys among North American residents, with 94, 114, and 114 participants, respectively. In Experiment 1, by calculating response entropy, we found that the facial expression "intense" was difficult to judge as positive or negative when participants were shown the facial expression alone. In Experiments 2 and 3, using ANOVA, we confirmed that participants were better at judging facial valence when shown the whole body of the android, even though the facial expression was the same as in Experiment 1. These results suggest that the facial and body expressions of robots should be designed jointly to achieve better communication with humans. To achieve smoother cooperative human-robot interaction, such as education by robots, emotional expressions conveyed through a combination of the robot's face and body are necessary to convey the robot's intentions or desires to humans.

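As a note on the entropy measure used in Experiment 1: Shannon entropy over the distribution of valence judgments quantifies how ambiguous an expression is. A minimal sketch, with invented response proportions rather than the study's data:

```python
# Minimal sketch: Shannon entropy H = -sum(p * log2 p) of a
# valence-judgment distribution; higher H = more ambiguous.
import math

def shannon_entropy(proportions):
    return -sum(p * math.log2(p) for p in proportions if p > 0)

# A 50/50 split between "positive" and "negative" judgments (ambiguous)
# vs. a lopsided 90/10 split (clear valence). Proportions are invented.
print(shannon_entropy([0.5, 0.5]))  # 1.0 bit, maximally ambiguous
print(shannon_entropy([0.9, 0.1]))  # ~0.47 bits, clearer valence
```
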
Affiliation(s)
- Satoshi Yagi, Yoshihiro Nakata, Yutaka Nakamura, Hiroshi Ishiguro: Department of Engineering Science, Osaka University, Toyonaka, Osaka, Japan; JST ERATO, Chiyoda-ku, Tokyo, Japan

10. Validation of dynamic virtual faces for facial affect recognition. PLoS One 2021; 16:e0246001. PMID: 33493234. PMCID: PMC7833130. DOI: 10.1371/journal.pone.0246001.

Abstract
The ability to recognise facial emotions is essential for successful social interaction. The most common stimuli used when evaluating this ability are photographs. Although these stimuli have proved to be valid, they do not offer the level of realism that virtual humans have achieved. The objective of the present paper is the validation of a new set of dynamic virtual faces (DVFs) that mimic the six basic emotions plus the neutral expression. The faces are prepared to be observed with low and high dynamism, and from front and side views. For this purpose, 204 healthy participants, stratified by gender, age, and education level, were recruited to assess their facial affect recognition with the set of DVFs. Accuracy of responses was compared with the already validated Penn Emotion Recognition Test (ER-40). The results showed that DVFs were as valid as standardised natural faces for accurately recreating human-like facial expressions. Overall accuracy in the identification of emotions was higher for the DVFs (88.25%) than for the ER-40 faces (82.60%). The percentage of hits for each DVF emotion was high, especially for the neutral expression and happiness. No statistically significant differences were found regarding gender, nor between younger adults and adults over 60 years. Moreover, hits increased for avatar faces showing greater dynamism, as well as for front views of the DVFs compared with their profile presentations. DVFs are thus as valid as standardised natural faces for accurately recreating human-like facial expressions of emotions.