1. Olszanowski M, Tołopiło A. "Anger? No, thank you. I don't mimic it": how contextual modulation of facial display meaning impacts emotional mimicry. Cogn Emot 2024; 38:530-548. PMID: 38303660. DOI: 10.1080/02699931.2024.2310759.
Abstract
Research indicates that emotional mimicry predominantly occurs in response to affiliative displays, such as happiness, while the mimicry of antagonistic displays, like anger, is seldom observed in social contexts. However, contextual factors, including the identity of the displayer (e.g. social similarity with the observer) and whose action triggered the emotional reaction (i.e. to whom the display is directed), can modulate the meaning of the display. In two experiments, participants observed happiness, sadness, and anger expressed by individuals with similar or different social attitudes in response to actions from either the participant or another person. Results demonstrated that the three manipulated factors (displayer social similarity, whose action caused the emotional display, and the type of emotional display) affected participants' perception of the display. In turn, mimicry was predominantly observed in response to happiness (Experiments 1 and 2), to a lesser extent to sadness (Experiment 1), and not at all to anger. Furthermore, participants mimicked individuals who were more socially similar (Experiment 1), whereas whose action caused the emotional reaction did not influence mimicry. The findings suggest that even when the context mitigates the meaning of negative or antagonistic facial displays, it does not necessarily increase the inclination to mimic them.
Affiliation(s)
- Michal Olszanowski
- Center for Research on Biological Basis of Social Behavior, SWPS University in Warsaw, Warsaw, Poland
- Aleksandra Tołopiło
- Center for Research on Biological Basis of Social Behavior, SWPS University in Warsaw, Warsaw, Poland
2. Zetsche U, Neumann P, Bürkner PC, Renneberg B, Koster EHW, Hoorelbeke K. Computerized cognitive control training to reduce rumination in major depression: A randomized controlled trial. Behav Res Ther 2024; 177:104521. PMID: 38615373. DOI: 10.1016/j.brat.2024.104521.
Abstract
OBJECTIVE: Rumination is a major risk factor for the onset and recurrence of depressive episodes and has been associated with deficits in updating working memory content. This randomized controlled trial examines whether training updating-specific cognitive control processes reduces daily ruminative thoughts in clinically depressed individuals. METHODS: Sixty-five individuals with a current major depressive episode were randomized to 10 sessions of either cognitive control training (N = 31) or placebo training (N = 34). The frequency and negativity of individuals' daily ruminative thoughts were assessed for seven days before training, after training, and at a 3-month follow-up using experience sampling methodology. Secondary outcomes were depressive symptoms, depressed mood, and level of disability. RESULTS: Cognitive control training led to stronger improvements in the trained task than placebo training. However, cognitive control training did not lead to greater reductions in the frequency or negativity of daily ruminative thoughts than placebo training. There were no training-specific effects on participants' depressive symptoms or level of disability. CONCLUSIONS: The robustness of the present null findings, combined with the methodological strengths of the study, suggests that training currently depressed individuals to update emotional content in working memory does not affect the frequency or negativity of their daily ruminative thoughts.
Affiliation(s)
- Ulrike Zetsche
- Clinical Psychology and Psychotherapy, Freie Universität Berlin, Berlin, Germany
- Pauline Neumann
- Clinical Psychology and Psychotherapy, Freie Universität Berlin, Berlin, Germany
- Babette Renneberg
- Clinical Psychology and Psychotherapy, Freie Universität Berlin, Berlin, Germany
- Ernst H W Koster
- Department of Experimental Clinical and Health Psychology, University Ghent, Belgium
- Kristof Hoorelbeke
- Department of Experimental Clinical and Health Psychology, University Ghent, Belgium
3. Santiago AF, Kosilo M, Cogoni C, Diogo V, Jerónimo R, Prata D. Oxytocin modulates neural activity during early perceptual salience attribution. Psychoneuroendocrinology 2024; 161:106950. PMID: 38194846. DOI: 10.1016/j.psyneuen.2023.106950.
Abstract
Leading hypotheses of oxytocin's (OT) role in human cognition posit that it enhances salience attribution. However, whether OT exerts its effects predominantly in social (vs non-social) contexts remains debatable, and the time-course of intranasal OT's effects on salience attribution processing is still unknown. We used the modified social Salience Attribution Task (sSAT) in a double-blind, placebo-controlled, between-subjects intranasal OT (inOT) administration design with 54 male participants to test existing theories of OT's role in cognition. Specifically, we aimed to test whether inOT would differentially affect salience attribution processing of social stimuli (faces expressing fearfulness) and non-social stimuli (fruits) made relevant via monetary reinforcement, and its neural processing time-course. During electroencephalography (EEG) recording, participants made speeded responses to emotional social (fearful faces) and non-emotional non-social (fruits) stimuli, which were matched for task-relevant motivational salience through their (color-dependent) probability of monetary reinforcement. InOT affected early (rather than late, P3b and LPP) EEG components, increasing N170 amplitude (p = .041) and P2b latency (p < .001; albeit not P1), regardless of the stimuli's (emotional) socialness or reinforcement probability. Fear-related socialness affected salience attribution processing in the EEG (p < .05) across time (N170, P2b and P3b), and was later modulated by reinforcement probability (LPP). Our data suggest that OT's effects on neural activity during early perception may exist irrespective of fear-related social or reward contexts. This partially supports the tri-phasic model of OT (which posits that OT enhances salience attribution in an early perception stage regardless of socialness), rather than the social salience or general approach-withdrawal hypotheses of OT, for early salience processing event-related potentials.
Affiliation(s)
- Andreia F Santiago
- Instituto de Biofísica e Engenharia Biomédica, Faculdade de Ciências da Universidade de Lisboa, Lisbon, Portugal; William James Center for Research, ISPA - Instituto Universitário, Lisbon, Portugal
- Maciej Kosilo
- Instituto de Biofísica e Engenharia Biomédica, Faculdade de Ciências da Universidade de Lisboa, Lisbon, Portugal
- Carlotta Cogoni
- Instituto de Biofísica e Engenharia Biomédica, Faculdade de Ciências da Universidade de Lisboa, Lisbon, Portugal
- Vasco Diogo
- Instituto de Biofísica e Engenharia Biomédica, Faculdade de Ciências da Universidade de Lisboa, Lisbon, Portugal; Instituto Universitário de Lisboa (Iscte-IUL), CIS_Iscte, Lisbon, Portugal
- Rita Jerónimo
- Instituto Universitário de Lisboa (Iscte-IUL), CIS_Iscte, Lisbon, Portugal
- Diana Prata
- Instituto de Biofísica e Engenharia Biomédica, Faculdade de Ciências da Universidade de Lisboa, Lisbon, Portugal; Department of Old Age Psychiatry, Institute of Psychiatry, Psychology and Neuroscience, King's College London, UK
4. Orłowski P, Hobot J, Ruban A, Szczypiński J, Bola M. The relation between naturalistic use of psychedelics and perception of emotional stimuli: An event-related potential study comparing non-users and experienced users of classic psychedelics. J Psychopharmacol 2024; 38:68-79. PMID: 38069478. DOI: 10.1177/02698811231216322.
Abstract
BACKGROUND: Previous research has suggested that controlled administration of psychedelic substances can modulate emotional reactivity, enhancing positive and diminishing negative emotions. However, it is unclear whether similar effects are associated with using psychedelics in less-controlled naturalistic environments. AIMS: This cross-sectional study investigated the neural markers associated with the perception of emotional stimuli in individuals with extensive experience of naturalistic psychedelic use (15 or more lifetime experiences), comparing them to non-users. METHODS: Electroencephalography (EEG) signals were recorded from two groups: experienced psychedelic users (N = 56) and non-users (N = 55). Participants were presented with facial images depicting neutral or emotional expressions (anger, sadness, and happiness). Event-related potential (ERP) components were analyzed as indices of emotional reactivity. RESULTS: Psychedelic users were characterized by significantly lower amplitudes of the N200 component in response to fearful faces, in comparison to non-users. In addition, interaction effects between Group and Emotional expression were observed on N170 and N200 amplitudes, indicating group differences in the processing of fearful faces. However, no significant between-group differences emerged in the analysis of later ERP components associated with attention and cognitive processes (P200 and P300). CONCLUSIONS: The results suggest that naturalistic use of psychedelics may be linked to reduced reactivity to emotionally negative stimuli at early and automatic processing stages. Our study contributes to a better understanding of the effects related to using psychedelics in naturalistic contexts.
Affiliation(s)
- Paweł Orłowski
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of Polish Academy of Sciences, Poland
- Justyna Hobot
- Consciousness Lab, Psychology Institute, Jagiellonian University, Kraków, Poland
- Anastasia Ruban
- Department of Psychology, SWPS University of Social Sciences and Humanities, Warsaw, Poland
- Jan Szczypiński
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of Polish Academy of Sciences, Poland
- Department of Psychiatry, Medical University of Warsaw, Warsaw, Poland
- Michał Bola
- Laboratory of Brain Imaging, Nencki Institute of Experimental Biology of Polish Academy of Sciences, Poland
5. Hsu CT, Sato W. Electromyographic validation of spontaneous facial mimicry detection using automated facial action coding. Sensors (Basel) 2023; 23:9076. PMID: 38005462. PMCID: PMC10675524. DOI: 10.3390/s23229076.
Abstract
Although electromyography (EMG) remains the standard, researchers have begun using automated facial action coding system (FACS) software to evaluate spontaneous facial mimicry despite the lack of evidence of its validity. Using the facial EMG of the zygomaticus major (ZM) as a standard, we confirmed the detection of spontaneous facial mimicry in action unit 12 (AU12, lip corner puller) via automated FACS. Participants were alternately presented with real-time model performances and prerecorded videos of dynamic facial expressions, while simultaneous ZM signals and frontal facial videos were acquired. The facial videos were scored for AU12 using FaceReader, Py-Feat, and OpenFace. Automated FACS was less sensitive and less accurate than facial EMG, but AU12 mimicking responses were significantly correlated with ZM responses. All three software programs detected enhanced facial mimicry during live performances. The AU12 time series showed a roughly 100 to 300 ms latency relative to the ZM signal. Our results suggest that while automated FACS cannot replace facial EMG in mimicry detection, it can be useful when effect sizes are large. Researchers should be cautious with automated FACS outputs, especially when studying clinical populations, and developers should consider EMG validation of AU estimation as a benchmark.
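A latency like the 100 to 300 ms figure above is the kind of quantity typically recovered by cross-correlating the automated AU12 time series against the EMG signal. A minimal pure-Python sketch of lag estimation; the signal values, the 30 Hz frame rate, and the 6-sample shift are illustrative assumptions, not the study's data or method:

```python
def best_lag(reference, follower, max_lag):
    """Return the lag (in samples) at which `follower` best matches
    `reference`, scored by the dot product at each shift.
    A positive lag means `follower` trails `reference`."""
    def score(lag):
        a = reference[:len(reference) - lag]
        b = follower[lag:]
        return sum(x * y for x, y in zip(a, b))
    return max(range(max_lag + 1), key=score)

# Illustrative signals: a smoothed "smile" bump in the EMG channel, and
# the same bump delayed by 6 samples (~200 ms at 30 Hz) in the
# automated-FACS AU12 channel.
bump = [0, 1, 3, 6, 8, 6, 3, 1, 0]
emg_zm = bump + [0] * 20
au12 = [0] * 6 + bump + [0] * 14

lag = best_lag(emg_zm, au12, max_lag=15)
print(lag * 1000 / 30)  # estimated latency in ms → 200.0
```

In practice the two channels would first be resampled to a common rate and baseline-corrected; the argmax-of-cross-correlation idea stays the same.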
Affiliation(s)
- Chun-Ting Hsu
- Psychological Process Research Team, Guardian Robot Project, RIKEN, Soraku-gun, Kyoto 619-0288, Japan
- Wataru Sato
- Psychological Process Research Team, Guardian Robot Project, RIKEN, Soraku-gun, Kyoto 619-0288, Japan
6. Meiering MS, Weigner D, Enge S, Grimm S. Transdiagnostic phenomena of psychopathology in the context of the RDoC: protocol of a multimodal cross-sectional study. BMC Psychol 2023; 11:297. PMID: 37770998. PMCID: PMC10540421. DOI: 10.1186/s40359-023-01335-8. Open access.
Abstract
In the past, affective and cognitive processes related to psychopathology have been examined within the boundaries of phenotype-based diagnostic labels, which has led to inconsistent findings regarding their underlying operating principles. Investigating these processes dimensionally in healthy individuals and by means of multiple modalities may provide additional insights into the psychological and neuronal mechanisms at their core. The transdiagnostic phenomena Neuroticism and Rumination are known to be closely linked. However, the exact nature of their relationship remains to be elucidated. The same applies to the associations between Hedonic Capacity, Negativity Bias and different Emotion Regulation strategies. This multimodal cross-sectional study examines the relationship of the transdiagnostic phenomena Neuroticism and Rumination as well as Hedonic Capacity, the Negativity Bias and Emotion Regulation from an RDoC (Research Domain Criteria) perspective. A total of 120 currently healthy subjects (past 12 months) will complete several questionnaires regarding personality, emotion regulation, hedonic capacity, and psychopathologies as well as functional magnetic resonance imaging (fMRI) during cognitive and emotional processing, to obtain data at the circuit, behavioral and self-report levels. This study aims to contribute to the understanding of the relationship between cognitive and affective processes associated with psychopathologies as well as their neuronal correlates. Ultimately, a grounded understanding of these processes could guide improvement of diagnostic labels and treatments. Limitations include the cross-sectional design and the limited variability in psychopathology scores due to the restriction of the sample to currently healthy subjects.
Affiliation(s)
- Marvin S Meiering
- Department of Natural Sciences, MSB Medical School Berlin, Rüdesheimer Straße 50, 14197 Berlin, Germany
- Department of Education and Psychology, Freie Universität Berlin, Habelschwerdter Allee 45, 14195 Berlin, Germany
- David Weigner
- Department of Natural Sciences, MSB Medical School Berlin, Rüdesheimer Straße 50, 14197 Berlin, Germany
- Department of Education and Psychology, Freie Universität Berlin, Habelschwerdter Allee 45, 14195 Berlin, Germany
- Sören Enge
- Department of Natural Sciences, MSB Medical School Berlin, Rüdesheimer Straße 50, 14197 Berlin, Germany
- Simone Grimm
- Department of Natural Sciences, MSB Medical School Berlin, Rüdesheimer Straße 50, 14197 Berlin, Germany
- Department of Psychiatry and Psychotherapy, Charité - Universitätsmedizin Berlin, corporate member of Freie Universität Berlin and Humboldt-Universität zu Berlin, Hindenburgdamm 30, 12203 Berlin, Germany
- Department of Psychiatry, Psychotherapy and Psychosomatics, Psychiatric University Hospital Zurich, University of Zurich, Lenggstrasse 31, CH-8032 Zurich, Switzerland
7. Cross MP, Acevedo AM, Hunter JF. A critique of automated approaches to code facial expressions: What do researchers need to know? Affect Sci 2023; 4:500-505. PMID: 37744972. PMCID: PMC10514002. DOI: 10.1007/s42761-023-00195-0.
Abstract
Facial expression recognition software is becoming more commonly used by affective scientists to measure facial expressions. Although the use of this software has exciting implications, there are persistent and concerning issues regarding the validity and reliability of these programs. In this paper, we highlight three of these issues: biases of the programs against certain skin colors and genders; the common inability of these programs to capture facial expressions made in non-idealized conditions (e.g., "in the wild"); and programs being forced to adopt the underlying assumptions of the specific theory of emotion on which each software is based. We then discuss three directions for the future of affective science in the area of automated facial coding. First, researchers need to be cognizant of exactly how and on which data sets the machine learning algorithms underlying these programs are being trained. In addition, there are several ethical considerations, such as privacy and data storage, surrounding the use of facial expression recognition programs. Finally, researchers should consider collecting additional emotion data, such as body language, and combine these data with facial expression data in order to achieve a more comprehensive picture of complex human emotions. Facial expression recognition programs are an excellent method of collecting facial expression data, but affective scientists should ensure that they recognize the limitations and ethical implications of these programs.
Affiliation(s)
- Marie P. Cross
- Department of Biobehavioral Health, Pennsylvania State University, University Park, PA, USA
- Amanda M. Acevedo
- Basic Biobehavioral and Psychological Sciences Branch, National Cancer Institute, Rockville, MD, USA
- John F. Hunter
- Department of Psychology, Chapman University, Orange, CA, USA
8. Cîrneanu AL, Popescu D, Iordache D. New trends in emotion recognition using image analysis by neural networks: A systematic review. Sensors (Basel) 2023; 23:7092. PMID: 37631629. PMCID: PMC10458371. DOI: 10.3390/s23167092.
Abstract
Facial emotion recognition (FER) is a computer vision process aimed at detecting and classifying human emotional expressions. FER systems are currently used in a vast range of applications in areas such as education, healthcare, or public safety; therefore, detection and recognition accuracies are very important. Like any computer vision task based on image analysis, FER solutions are also suitable for integration with artificial intelligence solutions represented by different neural network varieties, especially deep neural networks, which have shown great potential in recent years due to their feature extraction capabilities and computational efficiency over large datasets. In this context, this paper reviews the latest developments in the FER area, with a focus on recent neural network models that implement specific facial image analysis algorithms to detect and recognize facial emotions. This paper's scope is to present, from historical and conceptual perspectives, the evolution of the neural network architectures that have produced significant results in the FER area. This paper endorses convolutional neural network (CNN)-based architectures over other neural network architectures, such as recurrent neural networks or generative adversarial networks, highlighting the key elements and performance of each architecture, and the advantages and limitations of the proposed models in the analyzed papers. Additionally, this paper presents the available datasets that are currently used for emotion recognition from facial expressions and micro-expressions. The usage of FER systems is also highlighted in various domains such as healthcare, education, security, or social IoT. Finally, open issues and possible future developments in the FER area are identified.
Affiliation(s)
- Andrada-Livia Cîrneanu
- Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania
- Dan Popescu
- Faculty of Automatic Control and Computers, University Politehnica of Bucharest, 060042 Bucharest, Romania
- Dragoș Iordache
- The National Institute for Research & Development in Informatics-ICI Bucharest, 011455 Bucharest, Romania
9. Coppini S, Lucifora C, Vicario CM, Gangemi A. Experiments on real-life emotions challenge Ekman's model. Sci Rep 2023; 13:9511. PMID: 37308555. DOI: 10.1038/s41598-023-36201-5. Open access.
Abstract
Ekman's emotions (1992) are defined as universal basic emotions. Over the years, alternative models have emerged (e.g. Greene and Haidt 2002; Barrett 2017) describing emotions as social and linguistic constructions. The variety of models existing today raises the question of whether the abstraction provided by such models is sufficient as a descriptive/predictive tool for representing real-life emotional situations. Our study presents a social inquiry to test whether traditional models are sufficient to capture the complexity of daily-life emotions reported in a textual context. The intent of the study is to establish the human-subject agreement rate on a corpus annotated according to Ekman's theory (Entity-Level Tweets Emotional Analysis) and the human-subject agreement rate when using Ekman's emotions to annotate sentences that do not fit Ekman's model (The Dictionary of Obscure Sorrows). Furthermore, we investigated how much alexithymia can influence the human ability to detect and categorise emotions. In a total sample of 114 subjects, our results show low within-subject agreement rates for both datasets, particularly for subjects with low levels of alexithymia; low levels of agreement with the original annotations; and frequent use of Ekman-model emotions, particularly negative ones, in people with high levels of alexithymia.
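Annotator agreement rates of the kind this study reports are conventionally computed as raw percent agreement or as chance-corrected Cohen's kappa. A minimal sketch of the kappa computation; the six sentences and their labels below are made up for illustration and are not the study's data:

```python
from collections import Counter

def cohens_kappa(rater_a, rater_b):
    """Chance-corrected agreement between two annotators over the
    same items: (observed - expected) / (1 - expected)."""
    n = len(rater_a)
    observed = sum(a == b for a, b in zip(rater_a, rater_b)) / n
    freq_a = Counter(rater_a)
    freq_b = Counter(rater_b)
    # Expected chance agreement from each rater's marginal label frequencies.
    expected = sum(freq_a[c] * freq_b[c] for c in freq_a) / (n * n)
    return (observed - expected) / (1 - expected)

# Hypothetical Ekman-category annotations of six sentences by two raters.
a = ["joy", "anger", "sadness", "joy", "fear", "anger"]
b = ["joy", "anger", "joy",     "joy", "fear", "sadness"]
print(round(cohens_kappa(a, b), 3))  # → 0.538
```

Percent agreement here would be 4/6, but kappa discounts the matches expected by chance from the raters' label frequencies, which is why it is the more common figure in annotation studies.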
Affiliation(s)
- Sara Coppini
- Department of Philosophy and Communication, University of Bologna, Bologna, Italy
- Chiara Lucifora
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Department of Philosophy and Communication, University of Bologna, Bologna, Italy
- Carmelo M Vicario
- Department of Cognitive Science, University of Messina, Messina, Italy
- Aldo Gangemi
- Institute of Cognitive Sciences and Technologies, National Research Council, Rome, Italy
- Department of Philosophy and Communication, University of Bologna, Bologna, Italy
10. Long H, Peluso N, Baker CI, Japee S, Taubert J. A database of heterogeneous faces for studying naturalistic expressions. Sci Rep 2023; 13:5383. PMID: 37012369. PMCID: PMC10070342. DOI: 10.1038/s41598-023-32659-5. Open access.
Abstract
Facial expressions are thought to be complex visual signals, critical for communication between social agents. Most prior work aimed at understanding how facial expressions are recognized has relied on stimulus databases featuring posed facial expressions, designed to represent putative emotional categories (such as 'happy' and 'angry'). Here we use an alternative selection strategy to develop the Wild Faces Database (WFD): a set of one thousand images capturing a diverse range of ambient facial behaviors from outside of the laboratory. We characterized the perceived emotional content in these images using a standard categorization task in which participants were asked to classify the apparent facial expression in each image. In addition, participants were asked to indicate the intensity and genuineness of each expression. While modal scores indicate that the WFD captures a range of different emotional expressions, in comparing the WFD to images taken from other, more conventional databases, we found that participants responded more variably and less specifically to the wild-type faces, perhaps indicating that natural expressions are more multiplexed than a categorical model would predict. We argue that this variability can be employed to explore latent dimensions in our mental representation of facial expressions. Further, images in the WFD were rated as less intense and more genuine than images taken from other databases, suggesting a greater degree of authenticity among WFD images. The strong positive correlation between intensity and genuineness scores demonstrates that even the high-arousal states captured in the WFD were perceived as authentic. Collectively, these findings highlight the potential utility of the WFD as a new resource for bridging the gap between the laboratory and the real world in studies of expression recognition.
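Per-image response variability of the kind described, with more variable and less specific categorization for wild images than for posed ones, is often quantified as the modal response share and the Shannon entropy of each image's response distribution. A sketch over made-up responses (not WFD data):

```python
import math
from collections import Counter

def response_stats(labels):
    """Modal agreement (share of the most common label) and Shannon
    entropy (in bits) of one image's categorization responses."""
    counts = Counter(labels)
    n = len(labels)
    modal = max(counts.values()) / n
    entropy = -sum((c / n) * math.log2(c / n) for c in counts.values())
    return modal, entropy

# Hypothetical responses from ten raters: a posed-database image versus
# a "wild" image whose expression is more ambiguous.
posed = ["happy"] * 9 + ["surprise"]
wild = ["happy"] * 4 + ["surprise"] * 3 + ["fear"] * 2 + ["anger"]

print(response_stats(posed))  # high modal agreement, low entropy
print(response_stats(wild))   # lower agreement, higher entropy
```

High entropy with a weak mode is exactly the "variable and unspecific" response profile the abstract attributes to wild faces, and the entropy values themselves can feed the latent-dimension analyses the authors propose.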
Affiliation(s)
- Houqiu Long
- The School of Psychology, The University of Queensland, St Lucia, QLD, Australia
- Natalie Peluso
- The School of Psychology, The University of Queensland, St Lucia, QLD, Australia
- Chris I Baker
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, USA
- Shruti Japee
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, USA
- Jessica Taubert
- The School of Psychology, The University of Queensland, St Lucia, QLD, Australia
- Laboratory of Brain and Cognition, National Institute of Mental Health, Bethesda, MD, USA
11. Büdenbender B, Höfling TTA, Gerdes ABM, Alpers GW. Training machine learning algorithms for automatic facial coding: The role of emotional facial expressions' prototypicality. PLoS One 2023; 18:e0281309. PMID: 36763694. PMCID: PMC9916590. DOI: 10.1371/journal.pone.0281309. Open access.
Abstract
Automatic facial coding (AFC) is a promising new research tool to efficiently analyze emotional facial expressions. AFC is based on machine learning procedures to infer emotion categorization from facial movements (i.e., Action Units). State-of-the-art AFC accurately classifies intense and prototypical facial expressions, whereas it is less accurate for non-prototypical and less intense facial expressions. A potential reason might be that AFC is typically trained with standardized and prototypical facial expression inventories. Because AFC would be useful for analyzing less prototypical research material as well, we set out to determine the role of prototypicality in the training material. We trained established machine learning algorithms either with standardized expressions from widely used research inventories or with unstandardized emotional facial expressions obtained in a typical laboratory setting, and tested them on identical or cross-over material. All machine learning models' accuracies were comparable when trained and tested on held-out data from the same dataset (acc. = 83.4% to 92.5%). Strikingly, we found a substantial drop in accuracy for models trained on the highly prototypical standardized dataset when tested on the unstandardized dataset (acc. = 52.8% to 69.8%). However, when models were trained on unstandardized expressions and tested on the standardized datasets, accuracies held up (acc. = 82.7% to 92.5%). These findings demonstrate a strong impact of the training material's prototypicality on AFC's ability to classify emotional faces. Because AFC would be useful for analyzing emotional facial expressions in research or even naturalistic scenarios, future developments should include more naturalistic facial expressions for training. This approach will improve the generalizability of AFC to encode more naturalistic facial expressions and increase robustness for future applications of this promising technology.
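The cross-over pattern above, high accuracy within a dataset but a sharp drop when a prototypical-trained model meets unstandardized faces, can be illustrated with a toy nearest-centroid classifier on made-up two-dimensional "AU intensity" features; none of this is the study's data or its actual algorithms:

```python
def centroid(vectors):
    """Mean feature vector of a list of equal-length vectors."""
    return [sum(col) / len(vectors) for col in zip(*vectors)]

def fit(data):
    """data: {label: [feature vectors]} -> {label: centroid}."""
    return {label: centroid(vs) for label, vs in data.items()}

def predict(model, x):
    """Nearest centroid by squared Euclidean distance."""
    return min(model, key=lambda lab: sum((a - b) ** 2
                                          for a, b in zip(x, model[lab])))

def accuracy(model, data):
    pairs = [(lab, x) for lab, xs in data.items() for x in xs]
    return sum(predict(model, x) == lab for lab, x in pairs) / len(pairs)

# Made-up 2-D "AU intensity" features: posed expressions are extreme and
# well separated; unposed ones are weaker and sit near the boundary.
posed = {"happy": [[0.9, 0.1], [0.8, 0.2]], "sad": [[0.1, 0.9], [0.2, 0.8]]}
unposed = {"happy": [[0.5, 0.45], [0.45, 0.5]], "sad": [[0.5, 0.55], [0.55, 0.5]]}

model = fit(posed)               # train on prototypical material only
print(accuracy(model, posed))    # within-dataset accuracy → 1.0
print(accuracy(model, unposed))  # cross-dataset drop → 0.5
```

The toy model draws its decision boundary from extreme prototypes, so subtle expressions near the boundary are misclassified, which mirrors the study's argument that training material should include less prototypical faces.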
Affiliation(s)
- Björn Büdenbender
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Tim T. A. Höfling
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Antje B. M. Gerdes
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
- Georg W. Alpers
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
12. Moulds DJ, Meyer J, McLean JF, Kempe V. Exploring effects of response biases in affect induction procedures. PLoS One 2023; 18:e0285706. PMID: 37167316. PMCID: PMC10174507. DOI: 10.1371/journal.pone.0285706. Open access.
Abstract
This study examined whether self-reports or ratings of experienced affect, often used as manipulation checks on the efficacy of affect induction procedures (AIPs), reflect genuine changes in affective states rather than response biases arising from demand characteristics or social desirability effects. In a between-participants design, participants were exposed to positive, negative and neutral images with valence-congruent music or sound to induce happy, sad and neutral mood. Half of the participants had to actively appraise each image whereas the other half viewed images passively. We hypothesised that if ratings of affective valence are subject to response biases then they should reflect the target mood in the same way for active appraisal and passive exposure as participants encountered the same affective stimuli in both conditions. We also tested whether the AIP resulted in mood-congruent changes in facial expressions analysed by FaceReader to see whether behavioural indicators corroborate the self-reports. The results showed that while participants' ratings reflected the induced target valence, the difference between positive and negative AIP was significantly attenuated in the active appraisal condition, suggesting that self-reports of mood experienced after the AIP are not entirely a reflection of response biases. However, there were no effects of the AIP on FaceReader valence scores, in line with theories questioning the existence of cross-culturally and inter-individually universal behavioural indicators of affective states. Efficacy of AIPs is therefore best checked using self-reports.
Affiliation(s)
- David J Moulds
- Division of Psychology, School of Applied Sciences, Abertay University, Dundee, United Kingdom
- Jona Meyer
- Division of Psychology, School of Applied Sciences, Abertay University, Dundee, United Kingdom
- Janet F McLean
- Division of Psychology, School of Applied Sciences, Abertay University, Dundee, United Kingdom
- Vera Kempe
- Division of Psychology, School of Applied Sciences, Abertay University, Dundee, United Kingdom
13
Davis JD, Coulson S, Blaison C, Hess U, Winkielman P. Mimicry of partially occluded emotional faces: do we mimic what we see or what we know? Cogn Emot 2022; 36:1555-1575. [PMID: 36300446 DOI: 10.1080/02699931.2022.2135490] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/03/2022]
Abstract
Facial electromyography (EMG) was used to investigate patterns of facial mimicry in response to partial facial expressions in two contexts that differ in how naturalistic and socially significant the faces are. Experiment 1 presented participants with either the upper or lower half of facial expressions and used a forced-choice emotion categorisation task; this task emphasises cognition at the expense of ecological and social validity. Experiment 2 presented whole heads whose expressions were partially occluded by clothing, and used a more open-ended emotion recognition task; this context has greater social validity. We found mimicry in both experiments; however, mimicry differed in terms of which emotions were mimicked and the extent to which it involved muscle sites that were not observed. In the more cognitive context, there was relatively more motor matching (i.e. mimicking only what was seen). In the more socially valid context, participants were less likely to mimic only what they saw, and instead mimicked what they knew. Additionally, participants mimicked anger in the cognitive context but not in the social context. These findings suggest that mimicry involves multiple mechanisms and that the more social the context, the more likely mimicry is to reflect a mechanism of social regulation.
Affiliation(s)
- Joshua D Davis
- Cognitive Science Department, University of California, San Diego, San Diego, USA
- Social and Behavioral Sciences Department, Southwestern College, Chula Vista, CA, USA
- Seana Coulson
- Cognitive Science Department, University of California, San Diego, San Diego, USA
- Ursula Hess
- Psychology Department, Humboldt University, Berlin, Germany
- Piotr Winkielman
- Psychology Department, University of California, San Diego, San Diego, USA
- Psychology Department, SWPS University of Social Sciences and Humanities, Warsaw, Poland
14
Fabrício DDM, Ferreira BLC, Maximiano-Barreto MA, Muniz M, Chagas MHN. Construction of face databases for tasks to recognize facial expressions of basic emotions: a systematic review. Dement Neuropsychol 2022; 16:388-410. [DOI: 10.1590/1980-5764-dn-2022-0039] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/27/2022] [Revised: 08/01/2022] [Accepted: 08/23/2022] [Indexed: 12/12/2022] Open
Abstract
Recognizing others' emotions is an important skill for social contexts that can be modulated by variables such as gender, age, and race. A number of studies have sought to develop specific face databases to assess the recognition of basic emotions in different contexts. Objectives: This systematic review gathered these studies, describing and comparing the methodologies used in their elaboration. Methods: The databases used to select the articles were PubMed, Web of Science, PsycInfo, and Scopus. The following search string was used: “Facial expression database OR Stimulus set AND development OR Validation.” Results: A total of 36 articles were included. Most of the studies used actors who expressed emotions elicited by specific situations, in order to generate the most spontaneous expressions possible. The databases were mainly composed of color, static stimuli. In addition, most of the studies sought to establish and describe standards for recording the stimuli, such as the color of the garments worn and the background. The psychometric properties of the databases are also described. Conclusions: The data presented in this review point to methodological heterogeneity among the studies. Nevertheless, we describe their common patterns, contributing to the planning of new research studies that seek to create databases for new contexts.
Affiliation(s)
- Monalisa Muniz
- Universidade Federal de São Carlos, Brazil
- Marcos Hortes Nisihara Chagas
- Universidade Federal de São Carlos, Brazil; Universidade de São Paulo, Brazil; Instituto Bairral de Psiquiatria, Brazil
15
Rajchert J, Zajenkowska A, Nowakowska I, Bodecka-Zych M, Abramiuk A. Hostility bias or sadness bias in excluded individuals: does anodal transcranial direct current stimulation of right VLPFC vs. left DLPFC have a mitigating effect? COGNITIVE, AFFECTIVE & BEHAVIORAL NEUROSCIENCE 2022; 22:1063-1077. [PMID: 35474567 DOI: 10.3758/s13415-022-01008-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Accepted: 04/12/2022] [Indexed: 06/14/2023]
Abstract
Exclusion has multiple adverse effects on individuals' well-being. It induces anger and hostile cognitions, leading to aggressive behavior. The purpose of this study was to test whether exclusion would affect the recognition of anger on ambivalent faces of the excluders. We hypothesized that exclusion would elicit more anger encoding (hostility bias) than inclusion, but that this effect would be mitigated by anodal tDCS of the right VLPFC or left DLPFC, regions engaged in negative affect regulation. Participants (N = 96) recognized emotions (anger, sadness, happiness) on ambiguous faces of individuals who, as they were told, either liked or disliked them. Results showed that exclusion induced more sadness bias. tDCS of the VLPFC decreased anger recognition and increased sadness recognition on excluders' faces expressing a mixture of these two emotions, compared with includers' faces. Additionally, stimulation of the VLPFC and DLPFC decreased response latencies for faces expressing sadness (sad-angry and happy-sad) but increased them for happy-angry faces. Stimulation of the VLPFC also increased reaction times to excluders' faces, while stimulation of the DLPFC decreased reaction latencies to includers' faces. Results are discussed with reference to the form of exclusion and the motivational mechanism affected by disliking, as well as to lateralization (valence vs. arousal theories) and the cortical regions engaged in encoding sadness after a threat to belonging.
16
Test–Retest Reliability in Automated Emotional Facial Expression Analysis: Exploring FaceReader 8.0 on Data from Typically Developing Children and Children with Autism. APPLIED SCIENCES-BASEL 2022. [DOI: 10.3390/app12157759] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/01/2023]
Abstract
Automated emotional facial expression analysis (AEFEA) is used widely in applied research, including the development of screening/diagnostic systems for atypical human neurodevelopmental conditions. The validity of AEFEA systems has been systematically studied, but their test–retest reliability has not been researched thus far. We explored the test–retest reliability of a specific AEFEA software, Noldus FaceReader 8.0 (FR8; by Noldus Information Technology). We collected intensity estimates for 8 repeated emotions through FR8 from facial video recordings of 60 children: 31 typically developing children and 29 children with autism spectrum disorder. Test–retest reliability was imperfect in 20% of cases, affecting a substantial proportion of data points; however, the test–retest differences were small. This shows that the test–retest reliability of FR8 is high but not perfect. A proportion of cases which initially failed to show perfect test–retest reliability reached it in a subsequent analysis by FR8. This suggests that repeated analyses by FR8 can, in some cases, lead to the “stabilization” of emotion intensity datasets. Under ANOVA, the test–retest differences did not influence the pattern of cross-emotion and cross-group effects and interactions. Our study does not question the validity of previous results gained by AEFEA technology, but it shows that further exploration of the test–retest reliability of AEFEA systems is desirable.
17
Cui X, Jiang X, Ding H. Affective prosody guides facial emotion processing. CURRENT PSYCHOLOGY 2022. [DOI: 10.1007/s12144-022-03528-7] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/28/2022]
18
Pandeirada JNS, Fernandes NL, Madeira M, Marinho PI, Vasconcelos M. Can I Trust This Person? Evaluations of Trustworthiness From Faces and Relevant Individual Variables. Front Psychol 2022; 13:857511. [PMID: 35619794 PMCID: PMC9127658 DOI: 10.3389/fpsyg.2022.857511] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/18/2022] [Accepted: 03/28/2022] [Indexed: 11/13/2022] Open
Affiliation(s)
- Josefa N S Pandeirada
- William James Center for Research, University of Aveiro, Aveiro, Portugal; Department of Education and Psychology, University of Aveiro, Aveiro, Portugal
- Natália Lisandra Fernandes
- William James Center for Research, University of Aveiro, Aveiro, Portugal; Department of Education and Psychology, University of Aveiro, Aveiro, Portugal
- Mariana Madeira
- William James Center for Research, University of Aveiro, Aveiro, Portugal; Department of Education and Psychology, University of Aveiro, Aveiro, Portugal
- Marco Vasconcelos
- William James Center for Research, University of Aveiro, Aveiro, Portugal; Department of Education and Psychology, University of Aveiro, Aveiro, Portugal
Collapse
19
Pandeirada JNS, Madeira M, Fernandes NL, Marinho P, Vasconcelos M. Judgements of Social Dominance From Faces and Related Variables. Front Psychol 2022; 13:873147. [PMID: 35578657 PMCID: PMC9106557 DOI: 10.3389/fpsyg.2022.873147] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2022] [Accepted: 03/16/2022] [Indexed: 11/23/2022] Open
Affiliation(s)
- Josefa N S Pandeirada
- William James Center for Research, Department of Education and Psychology, University of Aveiro, Aveiro, Portugal
- Mariana Madeira
- William James Center for Research, Department of Education and Psychology, University of Aveiro, Aveiro, Portugal
- Natália Lisandra Fernandes
- William James Center for Research, Department of Education and Psychology, University of Aveiro, Aveiro, Portugal
- Patrícia Marinho
- Social Services of University of Aveiro, University of Aveiro, Aveiro, Portugal
- Marco Vasconcelos
- William James Center for Research, Department of Education and Psychology, University of Aveiro, Aveiro, Portugal
20
McFadyen J, Tsuchiya N, Mattingley JB, Garrido MI. Surprising Threats Accelerate Conscious Perception. Front Behav Neurosci 2022; 16:797119. [PMID: 35645748 PMCID: PMC9137416 DOI: 10.3389/fnbeh.2022.797119] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2021] [Accepted: 04/05/2022] [Indexed: 11/15/2022] Open
Abstract
The folk psychological notion that "we see what we expect to see" is supported by evidence that we become consciously aware of visual stimuli that match our prior expectations more quickly than stimuli that violate our expectations. Similarly, "we see what we want to see," such that more biologically-relevant stimuli are also prioritised for conscious perception. How, then, is perception shaped by biologically-relevant stimuli that we did not expect? Here, we conducted two experiments using breaking continuous flash suppression (bCFS) to investigate how prior expectations modulated response times to neutral and fearful faces. In both experiments, we found that prior expectations for neutral faces hastened responses, whereas the opposite was true for fearful faces. This interaction between emotional expression and prior expectations was driven predominantly by participants with higher trait anxiety. Electroencephalography (EEG) data collected in Experiment 2 revealed an interaction evident in the earliest stages of sensory encoding, suggesting prediction errors expedite sensory encoding of fearful faces. These findings support a survival hypothesis, where biologically-relevant fearful stimuli are prioritised for conscious access even more so when unexpected, especially for people with high trait anxiety.
Affiliation(s)
- Jessica McFadyen
- Queensland Brain Institute, University of Queensland, Brisbane, QLD, Australia
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, United Kingdom
- Australian Research Council Centre of Excellence for Integrative Brain Function, Clayton, VIC, Australia
- Naotsugu Tsuchiya
- School of Psychological Sciences and Turner Institute for Brain and Mental Health, Monash University, Clayton, VIC, Australia
- Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT), Osaka, Japan
- Advanced Telecommunications Research Computational Neuroscience Laboratories, Kyoto, Japan
- Jason B. Mattingley
- Queensland Brain Institute, University of Queensland, Brisbane, QLD, Australia
- Australian Research Council Centre of Excellence for Integrative Brain Function, Clayton, VIC, Australia
- School of Psychology, University of Queensland, Brisbane, QLD, Australia
- Canadian Institute for Advanced Research, Toronto, ON, Canada
- Marta I. Garrido
- Queensland Brain Institute, University of Queensland, Brisbane, QLD, Australia
- Australian Research Council Centre of Excellence for Integrative Brain Function, Clayton, VIC, Australia
- Melbourne School of Psychological Sciences, The University of Melbourne, Melbourne, VIC, Australia
21
Kalhan S, McFadyen J, Tsuchiya N, Garrido MI. Neural and computational processes of accelerated perceptual awareness and decisions: A 7T fMRI study. Hum Brain Mapp 2022; 43:3873-3886. [PMID: 35470490 PMCID: PMC9294306 DOI: 10.1002/hbm.25889] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2021] [Revised: 04/06/2022] [Accepted: 04/08/2022] [Indexed: 11/05/2022] Open
Abstract
Rapidly detecting salient information in our environments is critical for survival. Visual processing in subcortical areas like the pulvinar and amygdala has been shown to facilitate unconscious processing of salient stimuli. It is unknown, however, if and how these areas might interact with cortical regions to facilitate faster conscious perception of salient stimuli. Here we investigated these neural processes using 7T functional magnetic resonance imaging (fMRI) in concert with computational modelling while participants (n = 33) engaged in a breaking continuous flash suppression paradigm (bCFS) in which fearful and neutral faces are initially suppressed from conscious perception but then eventually ‘breakthrough’ into awareness. Participants reported faster breakthrough times for fearful faces compared with neutral faces. Drift‐diffusion modelling suggested that perceptual evidence was accumulated at a faster rate for fearful faces compared with neutral faces. For both neutral and fearful faces, faster response times were associated with greater activity in the amygdala (specifically within its subregions, including superficial, basolateral and amygdalo‐striatal transition area) and the insula. Faster rates of evidence accumulation coincided with greater activity in frontoparietal regions and occipital lobe, as well as the amygdala. A lower decision‐boundary correlated with activity in the insula and the posterior cingulate cortex (PCC), but not with the amygdala. Overall, our findings suggest that hastened perceptual awareness of salient stimuli recruits the amygdala and, more specifically, is driven by accelerated evidence accumulation in fronto‐parietal and visual areas. In sum, we have mapped distinct neural computations that accelerate perceptual awareness of visually suppressed faces.
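The drift-diffusion account above can be illustrated with a minimal simulation (a sketch only; the drift, boundary, and noise values below are hypothetical and not taken from the study): a higher rate of evidence accumulation for fearful faces produces faster simulated breakthrough times.

```python
import random

def simulate_ddm_rt(drift, boundary=1.0, noise=0.1, dt=0.001, max_t=10.0, rng=None):
    """One diffusion trial: accumulate noisy evidence toward a single
    decision boundary; return the crossing time in seconds (capped at max_t)."""
    rng = rng or random.Random()
    evidence, t = 0.0, 0.0
    while evidence < boundary and t < max_t:
        # Euler-Maruyama step: deterministic drift plus Gaussian noise.
        evidence += drift * dt + rng.gauss(0.0, noise) * dt ** 0.5
        t += dt
    return t

rng = random.Random(42)
# Hypothetical drift rates: faster evidence accumulation for fearful faces.
neutral_rts = [simulate_ddm_rt(drift=0.5, rng=rng) for _ in range(200)]
fearful_rts = [simulate_ddm_rt(drift=0.8, rng=rng) for _ in range(200)]
print(sum(fearful_rts) / 200 < sum(neutral_rts) / 200)  # prints True
```

A lower decision boundary would likewise shorten the simulated crossing times, which is the parameter the study links to insula and PCC activity rather than the amygdala.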
Affiliation(s)
- Shivam Kalhan
- Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, Victoria, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Australia; Queensland Brain Institute, University of Queensland, Brisbane, Queensland, Australia
- Jessica McFadyen
- Max Planck UCL Centre for Computational Psychiatry and Ageing Research, University College London, London, UK
- Naotsugu Tsuchiya
- School of Psychological Sciences, Faculty of Biomedical and Psychological Sciences, Monash University, Clayton, Victoria, Australia; Monash Institute of Cognitive and Clinical Neuroscience, Monash University, Clayton, Victoria, Australia; Center for Information and Neural Networks (CiNet), National Institute of Information and Communications Technology (NICT), Suita, Osaka, Japan; Advanced Telecommunications Research Computational Neuroscience Laboratories, Seika-cho, Soraku-gun, Kyoto, Japan
- Marta I Garrido
- Melbourne School of Psychological Sciences, University of Melbourne, Melbourne, Victoria, Australia; Australian Research Council Centre of Excellence for Integrative Brain Function, Australia; Queensland Brain Institute, University of Queensland, Brisbane, Queensland, Australia
Collapse
22
What’s in a face: Automatic facial coding of untrained study participants compared to standardized inventories. PLoS One 2022; 17:e0263863. [PMID: 35239654 PMCID: PMC8893617 DOI: 10.1371/journal.pone.0263863] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2020] [Accepted: 01/28/2022] [Indexed: 11/19/2022] Open
Abstract
Automatic facial coding (AFC) is a novel research tool to automatically analyze emotional facial expressions. AFC can classify emotional expressions with high accuracy in standardized picture inventories of intensively posed and prototypical expressions. However, classification of facial expressions of untrained study participants is more error prone. This discrepancy requires a direct comparison between these two sources of facial expressions. To this end, 70 untrained participants were asked to express joy, anger, surprise, sadness, disgust, and fear in a typical laboratory setting. Recorded videos were scored with a well-established AFC software (FaceReader, Noldus Information Technology). These were compared with AFC measures of standardized pictures from 70 trained actors (i.e., standardized inventories). We report the probability estimates of specific emotion categories and, in addition, Action Unit (AU) profiles for each emotion. Based on this, we used a novel machine learning approach to determine the relevant AUs for each emotion, separately for both datasets. First, misclassification was more frequent for some emotions of untrained participants. Second, AU intensities were generally lower in pictures of untrained participants compared to standardized pictures for all emotions. Third, although profiles of relevant AU overlapped substantially across the two data sets, there were also substantial differences in their AU profiles. This research provides evidence that the application of AFC is not limited to standardized facial expression inventories but can also be used to code facial expressions of untrained participants in a typical laboratory setting.
23
Macinska S, Jellema T. Memory for facial expressions on the autism spectrum: The influence of gaze direction and type of expression. Autism Res 2022; 15:870-880. [PMID: 35150078 DOI: 10.1002/aur.2682] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/31/2021] [Revised: 01/19/2022] [Accepted: 01/24/2022] [Indexed: 11/10/2022]
Abstract
Face memory research in autism has largely neglected memory for facial expressions, in favor of memory for identity. This study in three experiments examined the role of gaze direction and type of expression on memory for facial expressions in relation to the autism spectrum. In the learning phase, four combinations of facial expressions (joy/anger) and gaze direction (toward/away), displayed by 16 different identities, were presented. In a subsequent surprise test the same identities were presented displaying neutral expressions, and the expression of each identity had to be recalled. In Experiment 1, typically-developed (TD) individuals with low and high Autism Quotient (AQ) scores were tested with three repetitions of each emotion/gaze combination, which did not produce any modulations. In Experiment 2, another group of TD individuals with low and high AQ scores were tested with eight repetitions, resulting in a "happy advantage" and a "direct gaze advantage", but no interactions. In Experiment 3, individuals with high-functioning autism (HFA) and a matched TD group were tested using eight repetitions. The HFA group revealed no emotion or gaze effects, while the matched TD group showed both a happy and a direct gaze advantage, and again no interaction. The results suggest that in autistic individuals the memory for facial expressions is intact, but is not modulated by the person's expression type and gaze direction. We discuss whether anomalous implicit learning of facial cues could have contributed to these findings, its relevance for social intuition, and its possible contribution to social deficits in autism. LAY SUMMARY: It has often been found that memory for someone's face (facial identity) is less good in autism. However, it is not yet known whether memory for someone's facial expression is also less good in autism. 
In this study, the memory for expressions of joy and anger was investigated in typically-developed (TD) individuals who possessed either few or many autistic-like traits (Experiments 1 and 2), and in individuals with high-functioning autism (Experiment 3). The gaze direction was also varied (directed either toward, or away from, the observer). We found that TD individuals best remembered expressions of joy, and remembered expressions of both joy and anger better when the gaze was directed at them. These effects did not depend on the extent to which they possessed autistic-like traits. Autistic participants remembered the facial expression of a previously encountered person as well as TD participants did. However, in contrast to the TD participants, the memory of autistic participants was not influenced by the expression type and gaze direction of the previously encountered persons. We discuss whether this may lead to difficulties in the development of social intuition, which in turn could give rise to difficulties in social interaction that are characteristic of autism.
24
Abstract
Social resemblance, like group membership or similar attitudes, increases the mimicry of an observed emotional facial display. In this study, we investigate whether facial self-resemblance (manipulated by computer morphing) modulates emotional mimicry in a similar manner. Participants watched dynamic expressions of faces that either did or did not resemble their own, while their facial muscle activity was measured using EMG. Additionally, after each presentation, respondents completed social evaluations of the faces they saw. The results show that self-resemblance evokes convergent facial reactions. More specifically, participants mimicked the happiness and, to a lesser extent, the anger of self-resembling faces. In turn, the happiness of non-resembling faces was less likely to be mimicked than that of self-resembling faces, while their anger evoked a more divergent, smile-like response. Finally, we found that social evaluations were in general increased by happiness displays, but not influenced by resemblance. Overall, the study demonstrates an interesting and novel phenomenon, particularly that mimicry can be modified by relatively subtle cues of physical resemblance.
25
Kovsh E, Yavna D, Babenko V, Ermakov P, Vorobyeva E, Denisova E, Alekseeva D. The Success of Facial Expression Recognition by Carriers of Various Genotypes of the COMT, DRD4, 5HT2A, MAOA GENES. EXPERIMENTAL PSYCHOLOGY (RUSSIA) 2022. [DOI: 10.17759/exppsy.2022150309] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
The work aims to describe the relationship between the COMT, DRD4, 5HT2A, and MAOA genes and the success of facial expression recognition. These genes play an important role in various emotional and cognitive processes. At the same time, hereditary aspects of the recognition of facial expressions, in contrast to sociocultural ones, have not been studied sufficiently to date. The study involved 87 healthy students of Russian universities (20.4 ± 2.6 years). DNA analysis was carried out with determination of genotypes at the polymorphic loci rs4680 COMT, rs6313 5HT2A (HTR2A), rs1800955 DRD4, and VNTR MAOA (RSMU, Rostov-on-Don). The participants were asked to distinguish emotional facial expressions in photographs taken from the MMI, KDEF, RaFD, and WSEFEP image databases. The obtained results indicate the following differences in the success of facial expression recognition: carriers of the Val/Val genotype of the COMT gene recognized the emotions of surprise (H=7.7, df=2, p=0.02), fear (H=10.5, df=2, p=0.005), and sadness (H=11.2, df=2, p=0.004) significantly better; carriers of the heterozygous C/T genotype of the DRD4 gene recognized facial expressions of disgust significantly better (H=9.1, df=2, p=0.01). No relationship was found between MAOA genotypes and the success of emotion recognition.
Affiliation(s)
- D.S. Alekseeva
- Regional Research Center of the Russian Academy of Education in the Southern Federal District
26
Kartheek MN, Prasad MVNK, Bhukya R. Modified chess patterns: handcrafted feature descriptors for facial expression recognition. COMPLEX INTELL SYST 2021. [DOI: 10.1007/s40747-021-00526-3] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/01/2022]
Abstract
Facial expressions are predominantly important in social interaction as they convey the personal emotions of an individual. The main task in Facial Expression Recognition (FER) systems is to develop feature descriptors that can effectively classify facial expressions into various categories. In this work, towards extracting distinctive features, Radial Cross Pattern (RCP), Chess Symmetric Pattern (CSP), and Radial Cross Symmetric Pattern (RCSP) feature descriptors are proposed and implemented in a 5×5 overlapping neighborhood to overcome some of the limitations of existing methods such as Chess Pattern (CP), Local Gradient Coding (LGC), and its variants. In a 5×5 neighborhood, the 24 pixels surrounding the center pixel are arranged into two groups: Radial Cross Pattern (RCP) extracts two feature values by comparing 16 pixels with the center pixel, and Chess Symmetric Pattern (CSP) extracts one feature value from the remaining 8 pixels. Experiments are conducted using RCP and CSP independently, and also with their fusion RCSP using different weights, on a variety of facial expression datasets; the results demonstrate the efficiency of the proposed methods.
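As an illustration of this kind of handcrafted descriptor, the pixel grouping described above can be sketched as follows (a minimal sketch; the exact pixel-to-group assignment, bit ordering, and tie-handling used by the authors are not given in the abstract, so the choices below are assumptions):

```python
def rcp_csp_codes(window):
    """Given a 5x5 list-of-lists of pixel intensities, return
    (rcp_inner, rcp_outer, csp) as three 8-bit integer codes.

    Assumed grouping (for illustration only):
    - RCP: the 8-pixel inner ring plus 8 outer-ring pixels on the
      horizontal/vertical/diagonal "cross" through the center (16 pixels
      total), each thresholded against the center pixel -> two 8-bit codes.
    - CSP: the remaining 8 outer-ring pixels, compared pairwise with their
      point reflections through the center (chess-like symmetry) -> one code.
    """
    c = window[2][2]
    # Inner ring: the 8 immediate neighbours of the center.
    inner = [(1, 1), (1, 2), (1, 3), (2, 3), (3, 3), (3, 2), (3, 1), (2, 1)]
    # Outer "cross": 8 pixels on the rows/columns/diagonals through the center.
    cross = [(0, 0), (0, 2), (0, 4), (2, 4), (4, 4), (4, 2), (4, 0), (2, 0)]
    # Remaining 8 outer-ring pixels, used for the chess-symmetric code.
    rest = [(0, 1), (0, 3), (1, 4), (3, 4), (4, 3), (4, 1), (3, 0), (1, 0)]

    def code(positions, ref):
        bits = 0
        for i, (r, col) in enumerate(positions):
            if window[r][col] >= ref:  # ties counted as 1 (an assumption)
                bits |= 1 << i
        return bits

    rcp_inner = code(inner, c)
    rcp_outer = code(cross, c)
    csp = 0
    for i, (r, col) in enumerate(rest):
        if window[r][col] >= window[4 - r][4 - col]:
            csp |= 1 << i
    return rcp_inner, rcp_outer, csp
```

For a full image, such codes would be computed over every overlapping 5×5 window and histogrammed into a feature vector for a classifier.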
27
The relationship between early and recent life stress and emotional expression processing: A functional connectivity study. COGNITIVE AFFECTIVE & BEHAVIORAL NEUROSCIENCE 2021; 20:588-603. [PMID: 32342272 PMCID: PMC7266792 DOI: 10.3758/s13415-020-00789-2] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 01/29/2023]
Abstract
The aim of this study was to characterize neural activation during the processing of negative facial expressions in a non-clinical group of individuals characterized by two factors: the levels of stress experienced in early life and in adulthood. Two models of stress consequences were investigated: the match/mismatch and cumulative stress models. The match/mismatch model assumes that early adversities may promote optimal coping with similar events in the future through fostering the development of coping strategies. The cumulative stress model assumes that effects of stress are additive, regardless of the timing of the stressors. Previous studies suggested that stress can have both cumulative and match/mismatch effects on brain structure and functioning and, consequently, we hypothesized that effects on brain circuitry would be found for both models. We anticipated effects on the neural circuitry of structures engaged in face perception and emotional processing. Hence, the amygdala, fusiform face area, occipital face area, and posterior superior temporal sulcus were selected as seeds for seed-based functional connectivity analyses. The interaction between early and recent stress was related to alterations during the processing of emotional expressions, mainly in functional connectivity to the cerebellum, middle temporal gyrus, and supramarginal gyrus. For cumulative stress levels, such alterations were observed in functional connectivity to the middle temporal gyrus, lateral occipital cortex, precuneus, precentral and postcentral gyri, anterior and posterior cingulate gyri, and Heschl's gyrus. This study adds to the growing body of literature suggesting that both the cumulative and the match/mismatch hypotheses are useful in explaining the effects of stress.
28
Caine JA, Klein B, Edwards SL. The Impact of a Novel Mimicry Task for Increasing Emotion Recognition in Adults with Autism Spectrum Disorder and Alexithymia: Protocol for a Randomized Controlled Trial. JMIR Res Protoc 2021; 10:e24543. [PMID: 34170257 PMCID: PMC8386358 DOI: 10.2196/24543] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2020] [Revised: 12/15/2020] [Accepted: 02/24/2021] [Indexed: 11/17/2022] Open
Abstract
Background Impaired facial emotion expression recognition (FEER) has typically been considered a correlate of autism spectrum disorder (ASD). The alexithymia hypothesis, however, suggests that this emotion processing problem is instead related to alexithymia, which frequently co-occurs with ASD. Combining predictive coding theories of ASD with simulation theories of emotion recognition suggests that facial mimicry may improve the training of FEER in ASD and alexithymia. Objective This study aims to evaluate a novel mimicry task to improve FEER in adults with and without ASD and alexithymia. Additionally, it aims to determine the contributions of alexithymia and ASD to FEER ability and to assess which of these two populations benefit from the training task. Methods Recruitment will take place primarily through an ASD community group, with emphasis on snowball recruiting. The sample will comprise 64 consenting adults, equally divided between participants with and without ASD. Participants will be screened online using the Kessler Psychological Distress Scale (K-10; cut-off score of 22), the Autism Spectrum Quotient (AQ-10), and the Toronto Alexithymia Scale (TAS-20), followed by a clinical interview with a provisional psychologist at the Federation University psychology clinic. The clinical interview will include assessment of ability, anxiety, and depression, as well as discussion of any past ASD diagnosis and confirmatory administration of the Autism Mental Status Exam (AMSE). Following the clinical interview, participants will complete the Bermond-Vorst Alexithymia Questionnaire (BVAQ) and then undertake a baseline assessment of FEER. Consenting participants will then be assigned, using a permuted blocked randomization method, to either the control task condition or the mimicry task condition. A brief measure of satisfaction with the task and a debriefing session will conclude the study.
Results The study has Federation University Human Research Ethics Committee approval and is registered with the Australian New Zealand Clinical Trials Registry. Participant recruitment is predicted to begin in the third quarter of 2021. Conclusions This study will be the first to evaluate the use of a novel facial mimicry task condition to increase FEER in adults with ASD and alexithymia. If efficacious, this task could prove useful as a cost-effective adjunct intervention that could be used at home, thus removing barriers to entry. This study will also explore the unique effectiveness of this task in people without an ASD, with an ASD, and with alexithymia. Trial Registration Australian New Zealand Clinical Trial Registry ACTRN12619000705189p; https://www.anzctr.org.au/Trial/Registration/TrialReview.aspx?id=377455 International Registered Report Identifier (IRRID) PRR1-10.2196/24543
Affiliation(s)
- Joshua A Caine
- School of Science Psychology and Sport, Federation University Australia, Ballarat, Australia
- Deputy Vice Chancellor of Research & Innovation Portfolio, Federation University Australia, Ballarat, Australia
- Health Innovation and Transformation Centre, Federation University Australia, Ballarat, Australia
- Biopsychosocial and eHealth Research & Innovation, Federation University Australia, Ballarat, Australia
| | - Britt Klein
- Deputy Vice Chancellor of Research & Innovation Portfolio, Federation University Australia, Ballarat, Australia
- Health Innovation and Transformation Centre, Federation University Australia, Ballarat, Australia
- Biopsychosocial and eHealth Research & Innovation, Federation University Australia, Ballarat, Australia
| | - Stephen L Edwards
- School of Science Psychology and Sport, Federation University Australia, Ballarat, Australia
- Health Innovation and Transformation Centre, Federation University Australia, Ballarat, Australia
- Biopsychosocial and eHealth Research & Innovation, Federation University Australia, Ballarat, Australia
| |
|
29
|
Wróbel M, Piórkowska M, Rzeczkowska M, Troszczyńska A, Tołopiło A, Olszanowski M. The “Big Two” and socially induced emotions: Agency and communion jointly influence emotional contagion and emotional mimicry. MOTIVATION AND EMOTION 2021. [DOI: 10.1007/s11031-021-09897-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
Abstract
Three studies investigated the effects of two fundamental dimensions of social perception on emotional contagion (i.e., the transfer of emotions between people). Rooting our hypotheses in the Dual Perspective Model of Agency and Communion (Abele and Wojciszke in Adv Exp Soc Psychol 50:198–255, 10.1016/B978-0-12-800284-1.00004-7, 2014), we predicted that agency would strengthen the effects of communion on emotional contagion and emotional mimicry (a process often considered a key mechanism behind emotional contagion). To test this hypothesis, we exposed participants to happy, sad, and angry senders characterized by low vs. high communion and agency. Our results demonstrated that, as expected, the effects of the two dimensions on socially induced emotions were interactive. The strength and direction of these effects, however, were consistent with our predictions only when the senders expressed happiness. When the senders expressed sadness, we found no effects of agency or communion on participants’ emotional responses, whereas for anger a mixed pattern emerged. Overall, our results align with the notion that emotional contagion and mimicry are modulated not only by the senders’ traits but also by the social meaning of the expressed emotion.
|
30
|
Guan H, Wei H, Hauer RJ, Liu P. Facial expressions of Asian people exposed to constructed urban forests: Accuracy validation and variation assessment. PLoS One 2021; 16:e0253141. [PMID: 34138924 PMCID: PMC8211262 DOI: 10.1371/journal.pone.0253141] [Citation(s) in RCA: 12] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2020] [Accepted: 05/30/2021] [Indexed: 01/27/2023] Open
Abstract
An outcome of building sustainable urban forests is that people's well-being improves when they are exposed to trees. Facial expressions directly represent one's inner emotions and can be used to assess real-time perception. The emergence of and change in forest visitors' facial expressions are an implicit process. As such, the reserved character of Asian visitors calls for an instrument rating to recognize expressions accurately. In this study, a dataset was established with 2,886 randomly photographed faces from visitors at a constructed urban forest park and at a promenade during summertime in Shenyang City, Northeast China. Six experts were invited to choose 160 photos in total, with 20 images representing each of eight typical expressions: angry, contempt, disgusted, happy, neutral, sad, scared, and surprised. The FireFACE ver. 3.0 software was used to test hit-ratio validation as an accuracy measurement (ac.), matching machine-recognized photos with those identified by experts. According to a Kruskal-Wallis test on the difference from averaged scores in 20 recently published papers, the contempt (ac. = 0.40%, P = 0.0038) and scared (ac. = 25.23%, P = 0.0018) expressions did not pass the validation test. Both happy and sad expression scores were higher in forests than in promenades, but there was no difference in net positive response (happy minus sad) between locations. Men had a higher happy score but a lower disgusted score in forests than in promenades; men also had a higher angry score in forests. We conclude that FireFACE can be used for analyzing facial expressions of Asian people within urban forests. Women are encouraged to visit urban forests rather than promenades to elicit more positive emotions.
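The validation step above compares per-expression hit-ratios against averaged scores from prior papers using a Kruskal-Wallis test. A minimal illustrative sketch in Python with SciPy, using made-up binary hit scores rather than the study's data:

```python
from scipy.stats import kruskal

# Made-up per-photo hit scores (1 = machine label matched the experts'),
# compared against stand-in reference scores from prior papers.
contempt_hits = [0, 0, 0, 0, 1, 0, 0, 0, 0, 0]
reference     = [1, 1, 0, 1, 1, 0, 1, 1, 1, 0]

stat, p = kruskal(contempt_hits, reference)
# A significant difference from the reference means the expression fails
# the validation test (as "contempt" and "scared" did in this study).
print(f"H = {stat:.2f}, p = {p:.4f}, pass = {p >= 0.05}")
```

With these made-up values the hit-ratio differs significantly from the reference, so the expression would fail validation.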
Affiliation(s)
- Haoming Guan
- School of Geographical Sciences, Northeast Normal University, Changchun, China
| | - Hongxu Wei
- Key Laboratory of Wetland Ecology and Environment, Northeast Institute of Geography and Agroecology, Chinese Academy of Sciences, Changchun, China
- University of Chinese Academy of Sciences, Beijing, China
| | - Richard J. Hauer
- College of Natural Resources, University of Wisconsin-Stevens Point, Stevens Point, Wisconsin, United States of America
| | - Ping Liu
- College of Forestry, Shenyang Agricultural University, Shenyang, China
| |
|
31
|
Küntzler T, Höfling TTA, Alpers GW. Automatic Facial Expression Recognition in Standardized and Non-standardized Emotional Expressions. Front Psychol 2021; 12:627561. [PMID: 34025503 PMCID: PMC8131548 DOI: 10.3389/fpsyg.2021.627561] [Citation(s) in RCA: 16] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/11/2020] [Accepted: 03/11/2021] [Indexed: 12/22/2022] Open
Abstract
Emotional facial expressions can inform researchers about an individual's emotional state. Recent technological advances open up new avenues for automatic Facial Expression Recognition (FER). Based on machine learning, such technology can tremendously increase the amount of processed data. FER is now easily accessible and has been validated for the classification of standardized prototypical facial expressions. However, applicability to more naturalistic facial expressions remains uncertain. Hence, we test and compare the performance of three different FER systems (Azure Face API, Microsoft; Face++, Megvii Technology; FaceReader, Noldus Information Technology) with human emotion recognition (A) for standardized posed facial expressions (from prototypical inventories) and (B) for non-standardized acted facial expressions (extracted from emotional movie scenes). For the standardized images, all three systems classify basic emotions accurately (FaceReader is the most accurate) and are mostly on par with human raters. For the non-standardized stimuli, performance drops remarkably for all three systems, but Azure still performs similarly to humans. In addition, all systems and humans alike tend to misclassify some of the non-standardized emotional facial expressions as neutral. In sum, automated facial expression recognition can be an attractive alternative to human emotion recognition for standardized and non-standardized emotional facial expressions. However, we also found limitations in accuracy for specific facial expressions; there is a clear need for thorough empirical evaluation to guide future developments in computer vision of emotional facial expressions.
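At its core, the comparison above reduces to per-rater classification accuracy against a ground-truth label set. A minimal sketch with hypothetical labels (not the study's stimuli, systems' outputs, or results):

```python
# Hypothetical ground-truth emotions and predictions from two FER systems
# and a human rater (illustrative labels only, not the study's data).
truth = ["happy", "angry", "sad", "neutral", "happy", "fear"]
predictions = {
    "system_a": ["happy", "angry", "sad", "neutral", "happy", "neutral"],
    "system_b": ["happy", "angry", "neutral", "neutral", "sad", "neutral"],
    "human":    ["happy", "angry", "sad", "neutral", "happy", "fear"],
}

def accuracy(pred, gold):
    """Fraction of items where the predicted emotion matches the gold label."""
    return sum(p == g for p, g in zip(pred, gold)) / len(gold)

for name, pred in predictions.items():
    print(f"{name}: {accuracy(pred, truth):.2f}")
# -> system_a: 0.83, system_b: 0.50, human: 1.00
```

The same per-rater loop extends naturally to confusion matrices, e.g. to count how often each rater falls back on "neutral".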
Affiliation(s)
- Theresa Küntzler
- Department of Politics and Public Administration, Center for Image Analysis in the Social Sciences, Graduate School of Decision Science, University of Konstanz, Konstanz, Germany
| | - T Tim A Höfling
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
| | - Georg W Alpers
- Department of Psychology, School of Social Sciences, University of Mannheim, Mannheim, Germany
| |
|
32
|
Hartling C, Metz S, Pehrs C, Scheidegger M, Gruzman R, Keicher C, Wunder A, Weigand A, Grimm S. Comparison of Four fMRI Paradigms Probing Emotion Processing. Brain Sci 2021; 11:525. [PMID: 33919024 PMCID: PMC8142995 DOI: 10.3390/brainsci11050525] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Revised: 04/17/2021] [Accepted: 04/17/2021] [Indexed: 11/27/2022] Open
Abstract
Previous fMRI research has applied a variety of tasks to examine brain activity underlying emotion processing. While task characteristics are known to have a substantial influence on the elicited activations, direct comparisons of tasks that could guide study planning are scarce. We aimed to provide a comparison of four common emotion processing tasks based on the same analysis pipeline to suggest tasks best suited for the study of certain target brain regions. We studied an n-back task using emotional words (EMOBACK) as well as passive viewing tasks of emotional faces (FACES) and emotional scenes (OASIS and IAPS). We compared the activation patterns elicited by these tasks in four regions of interest (the amygdala, anterior insula, dorsolateral prefrontal cortex (dlPFC) and pregenual anterior cingulate cortex (pgACC)) in three samples of healthy adults (N = 45). The EMOBACK task elicited activation in the right dlPFC and bilateral anterior insula and deactivation in the pgACC while the FACES task recruited the bilateral amygdala. The IAPS and OASIS tasks showed similar activation patterns recruiting the bilateral amygdala and anterior insula. We conclude that these tasks can be used to study different regions involved in emotion processing and that the information provided is valuable for future research and the development of fMRI biomarkers.
Affiliation(s)
- Corinna Hartling
- Department of Psychiatry and Psychotherapy, CBF, Charité Universitätsmedizin Berlin, 12203 Berlin, Germany; (S.M.); (R.G.); (S.G.)
| | - Sophie Metz
- Department of Psychiatry and Psychotherapy, CBF, Charité Universitätsmedizin Berlin, 12203 Berlin, Germany; (S.M.); (R.G.); (S.G.)
| | - Corinna Pehrs
- Bernstein Center for Computational Neuroscience, Humboldt-University Berlin, 10115 Berlin, Germany;
| | - Milan Scheidegger
- Department of Psychiatry, Psychotherapy and Psychosomatics, University of Zurich, 8032 Zurich, Switzerland;
| | - Rebecca Gruzman
- Department of Psychiatry and Psychotherapy, CBF, Charité Universitätsmedizin Berlin, 12203 Berlin, Germany; (S.M.); (R.G.); (S.G.)
| | | | - Andreas Wunder
- Translational Medicine and Clinical Pharmacology, Boehringer Ingelheim Pharma GmbH and Co. KG, 52216 Ingelheim am Rhein, Germany;
| | - Anne Weigand
- Department of Psychology, Medical School Berlin, 14197 Berlin, Germany;
| | - Simone Grimm
- Department of Psychiatry and Psychotherapy, CBF, Charité Universitätsmedizin Berlin, 12203 Berlin, Germany; (S.M.); (R.G.); (S.G.)
- Department of Psychology, Medical School Berlin, 14197 Berlin, Germany;
| |
|
33
|
Early institutionalized care disrupts the development of emotion processing in prosody. Dev Psychopathol 2021; 33:421-430. [PMID: 33583457 DOI: 10.1017/s0954579420002023] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
Millions of children worldwide are raised in institutionalized settings. Unfortunately, institutionalized rearing is often characterized by psychosocial deprivation, leading to difficulties in numerous social, emotional, physical, and cognitive skills. One such skill is the ability to recognize emotional facial expressions. Children with a history of institutional rearing tend to be worse at recognizing emotions in facial expressions than their peers, and this deficit likely affects social interactions. However, emotional information is also conveyed vocally, and neither prosodic information processing nor the cross-modal integration of facial and prosodic emotional expressions have been investigated in these children to date. We recorded electroencephalograms (EEG) while 47 children under institutionalized care (IC) (n = 24) or biological family care (BFC) (n = 23) viewed angry, happy, or neutral facial expressions while listening to pseudowords with angry, happy, or neutral prosody. The results indicate that 20- to 40-month-olds living in IC have event-related potentials (ERPs) over midfrontal brain regions that are less sensitive to incongruent facial and prosodic emotions relative to children under BFC, and that their brain responses to prosody are less lateralized. Children under IC also showed midfrontal ERP differences in processing of angry prosody, indicating that institutionalized rearing may specifically affect the processing of anger.
|
34
|
3D-Printed Capacitive Sensor Objects for Object Recognition Assays. eNeuro 2021; 8:ENEURO.0310-20.2020. [PMID: 33446515 PMCID: PMC7877456 DOI: 10.1523/eneuro.0310-20.2020] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2020] [Revised: 12/07/2020] [Accepted: 12/17/2020] [Indexed: 01/08/2023] Open
Abstract
Object recognition tasks are widely used assays for studying learning and memory in rodents. Object recognition typically involves familiarizing mice with a set of objects and then presenting a novel object or displacing an object to a novel location or context. Learning and memory are inferred by a relative increase in time investigating the novel/displaced object. These tasks are in widespread use, but there are many inconsistencies in the way they are conducted across labs. Two major contributors to this are the lack of consistency in the method of measuring object investigation and the lack of standardization of the objects that are used. Current video-based automated algorithms can often be unreliable, whereas manual scoring of object investigation is time-consuming, tedious, and more subjective. To resolve these issues, we sought to design and implement 3D-printed objects that can be standardized across labs and use capacitive sensing to measure object investigation. Using a 3D printer, conductive filament, and low-cost off-the-shelf components, we demonstrate that employing 3D-printed capacitive touch objects is a reliable and precise way to perform object recognition tasks. Ultimately, this approach will lead to increased standardization and consistency across labs, which will greatly improve basic and translational research into learning and memory mechanisms.
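The relative investigation times such assays measure are commonly summarized with a discrimination index, DI = (novel - familiar) / (novel + familiar); a capacitive-sensing object simply supplies those durations automatically. A minimal sketch with hypothetical touch durations in seconds (not the paper's data):

```python
def discrimination_index(novel_s, familiar_s):
    """DI in [-1, 1]; positive values indicate a novel-object preference."""
    total = novel_s + familiar_s
    if total == 0:
        raise ValueError("no investigation recorded")
    return (novel_s - familiar_s) / total

# Hypothetical per-mouse touch durations (seconds) on novel vs. familiar
# objects, as a capacitive-sensor object might log them.
sessions = [(32.0, 18.0), (25.5, 24.5), (40.0, 10.0)]
dis = [discrimination_index(n, f) for n, f in sessions]
print([round(d, 2) for d in dis])  # -> [0.28, 0.02, 0.6]
```

A DI near zero (the second mouse) indicates no detectable memory of the familiar object.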
|
35
|
Is it all about the feeling? Affective and (meta-)cognitive mechanisms underlying the truth effect. PSYCHOLOGICAL RESEARCH 2021; 86:12-36. [PMID: 33484352 PMCID: PMC8821071 DOI: 10.1007/s00426-020-01459-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2020] [Accepted: 12/07/2020] [Indexed: 11/29/2022]
Abstract
People are more likely to judge repeatedly perceived statements as true. A decisive explanation for this so-called truth effect is that the repeated information can be processed more fluently than new information and that this fluency experience renders the information more familiar and trustworthy. Little is known, however, regarding whether and how affective states and dispositional cognitive preferences influence the truth effect. To address this, we conducted two experiments in which we manipulated (a) processing fluency via repetition, (b) the time interval (10 min vs. 1 week) between repetitions, and (c) short-term affective states using the presentation of emotional faces (Experiment 1) or the presence of an irrelevant source for changes in affective states (Experiment 2). Additionally, we assessed the dispositional variables need for cognitive closure (NCC), preference for deliberation (PD) and preference for intuition (PI). Results of Experiment 1 showed that the truth effect was significantly reduced for statements that were followed by a negative prime, although this was the case only for the longer repetition lag. Furthermore, higher NCC and lower PD scores were associated with an increased truth effect. Results of Experiment 2 replicated the moderating role of NCC and further showed that participants, who were provided with an alternative source for changes in their affective states, showed a reduced truth effect. Together, the findings suggest that (a) fluency-related changes in affective states may be (co-)responsible for the truth effect, (b) the truth effect is decreased when the repetition interval is long rather than short, and (c) the truth effect is increased for individuals with a higher need for cognitive closure. Theoretical implications of these findings are discussed.
|
36
|
Weil R, Palma TA, Gawronski B. Contextual positivity-familiarity effects are unaffected by known moderators of misattribution. Cogn Emot 2020; 35:636-648. [PMID: 33300422 DOI: 10.1080/02699931.2020.1858029] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Abstract
The positivity-familiarity effect refers to the phenomenon that positive affect increases the likelihood that people judge a stimulus as familiar. Drawing on the assumption that positivity-familiarity effects result from a common misattribution mechanism that is shared with conceptually similar effects (e.g. fluency-familiarity effects), we investigated whether positivity-familiarity effects are qualified by three known moderators of other misattribution phenomena: (a) conceptual similarity between affect-eliciting prime stimuli and focal target stimuli, (b) relative salience of affect-eliciting prime stimuli, and (c) explicit warnings about the effects of affect-eliciting prime stimuli on familiarity judgments of the targets. Counter to predictions, three experiments obtained robust positivity-familiarity effects that were unaffected by the hypothesised moderators. The findings pose a challenge for misattribution accounts of positivity-familiarity effects, but they are consistent with alternative accounts in terms of affective monitoring.
Affiliation(s)
- Rebecca Weil
- Department of Psychology, University of Hull, Hull, UK
| | - Tomás A Palma
- CICPSI, Faculdade de Psicologia, Universidade de Lisboa, Lisbon, Portugal
| | - Bertram Gawronski
- Department of Psychology, The University of Texas at Austin, Austin, TX, USA
| |
|
37
|
Ramis S, Buades JM, Perales FJ. Using a Social Robot to Evaluate Facial Expressions in the Wild. SENSORS 2020; 20:s20236716. [PMID: 33255347 PMCID: PMC7727691 DOI: 10.3390/s20236716] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/30/2020] [Revised: 11/20/2020] [Accepted: 11/20/2020] [Indexed: 11/22/2022]
Abstract
In this work, an affective computing approach is used to study human-robot interaction, using a social robot to validate facial expressions in the wild. Our global goal is to evaluate whether a social robot can interact in a convincing manner with human users and recognize their potential emotions through facial expressions, contextual cues, and bio-signals. In particular, this work focuses on analyzing facial expressions. A social robot is used to validate a pre-trained convolutional neural network (CNN) that recognizes facial expressions. Facial expression recognition plays an important role in robots' recognition and understanding of human emotion. Robots equipped with expression recognition capabilities can also be a useful tool for getting feedback from users. The designed experiment allows a trained facial expression neural network to be evaluated using a social robot in a real environment. In this paper, the CNN's accuracy is compared with that of human experts, and the interaction, attention, and difficulty of performing particular expressions are analyzed for 29 non-expert users. In the experiment, the robot leads the users to perform different facial expressions in a motivating and entertaining way. At the end of the experiment, the users are quizzed about their experience with the robot. Finally, a set of experts and the CNN classify the expressions. The results indicate that a social robot is an adequate interaction paradigm for evaluating facial expressions.
|
38
|
Meßmer JA, Weigl M, Li J, Mecklinger A. May the source be with you! Electrophysiological correlates of retrieval orientation are associated with source memory performance. Brain Cogn 2020; 146:105635. [PMID: 33190029 DOI: 10.1016/j.bandc.2020.105635] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/28/2020] [Revised: 10/09/2020] [Accepted: 10/19/2020] [Indexed: 11/30/2022]
Abstract
Successful source memory retrieval is assumed to rely on intact preretrieval processes, such as retrieval orientation (RO). RO is the specialized processing of retrieval cues depending on the type of information memory is searched for. In a previous study, a positive frontal slow-wave RO ERP effect was interpreted as reflecting memory search for self-relevant information. However, such a functional interpretation is hampered by the use of retrieval strategies, as a consequence of which target source information can be indirectly inferred from the correct classification of non-target source information. To overcome this limitation, the present study compared two types of source information (i.e., color or character information) by asking participants to remember details within each source type, thus enforcing the selective retrieval of target information. Consistent with previous research, a positive frontal ERP component (600-800 ms post-stimulus) differentiated between correct rejections in the two tasks, probably reflecting memory search for self-relevant information. Moreover, the RO ERP effect was associated with better source memory performance, providing evidence for the beneficial effect of ROs on memory retrieval. This relationship might be obscured in memory exclusion tasks due to non-target retrieval.
Affiliation(s)
- Julia A Meßmer
- Experimental Neuropsychology Unit, Saarland University, Campus A2 4, D-66123 Saarbrücken, Germany.
| | - Michael Weigl
- Experimental Neuropsychology Unit, Saarland University, Campus A2 4, D-66123 Saarbrücken, Germany
| | - Juan Li
- Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
| | - Axel Mecklinger
- Experimental Neuropsychology Unit, Saarland University, Campus A2 4, D-66123 Saarbrücken, Germany
| |
|
39
|
Hareli S, David O, Hess U. What Emotion Facial Expressions Tell Us About the Health of Others. Front Psychol 2020; 11:585242. [PMID: 33281681 PMCID: PMC7691273 DOI: 10.3389/fpsyg.2020.585242] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/20/2020] [Accepted: 10/05/2020] [Indexed: 11/30/2022] Open
Abstract
To avoid contagion, we need information about the health status of those whom we engage with. This is especially important when we have cause for concern that the other is indeed sick, as during the world-wide outbreak of the coronavirus in 2020. In three studies, one conducted several years before the pandemic and two during the pandemic, we showed that facial expressions of emotions are used as signals of health status. Specifically, happy expressers are perceived as healthier than expressers showing negative emotions or neutrality (Studies 1-3), whereas anger was interpreted as a signal of ill health (Study 3). Importantly, however, facial expressions affected health perception only when there was a prior reason to suspect ill health. This was the case for older expressers before and after the pandemic, for whom age-related stereotypes set expectations of ill health, and for all ages during a wide-spread pandemic, which extends this suspicion to everyone. In Study 3, we showed that the effect of emotion expressions also generalized to the physical distance that the observer wishes to keep from the expresser. Overall, this research is the first to show a role of emotion expressions in informing health perception.
Affiliation(s)
- Shlomo Hareli
- The Interdisciplinary Center for Research on Emotions, School of Business Administration, University of Haifa, Haifa, Israel
| | - Or David
- The Interdisciplinary Center for Research on Emotions, School of Business Administration, University of Haifa, Haifa, Israel
| | - Ursula Hess
- Department of Psychology, Humboldt University of Berlin, Berlin, Germany
| |
|
40
|
Bello H, Zhou B, Lukowicz P. Facial Muscle Activity Recognition with Reconfigurable Differential Stethoscope-Microphones. SENSORS 2020; 20:s20174904. [PMID: 32872633 PMCID: PMC7506891 DOI: 10.3390/s20174904] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/28/2020] [Revised: 08/21/2020] [Accepted: 08/27/2020] [Indexed: 12/02/2022]
Abstract
Many human activities and states are related to the actions of the facial muscles: from the expression of emotions, stress, and non-verbal communication, through health-related actions such as coughing and sneezing, to nutrition and drinking. In this work, we describe in detail the design and evaluation of a wearable system for facial muscle activity monitoring based on a re-configurable differential array of stethoscope-microphones. In our system, six stethoscopes are placed at locations that could easily be integrated into the frame of smart glasses. The paper describes the detailed hardware design and the selection and adaptation of appropriate signal processing and machine learning methods. For the evaluation, we asked eight participants to imitate a set of facial actions, such as expressions of happiness, anger, surprise, sadness, upset, and disgust, and gestures like kissing, winking, sticking the tongue out, and taking a pill. An evaluation of a complete dataset of 2,640 events was performed with a 66% training and 33% testing split. Although we encountered high variability in the volunteers’ expressions, our approach achieves a recall of 55%, precision of 56%, and F1-score of 54% for the user-independent scenario (9% chance level). On a user-dependent basis, our worst result has an F1-score of 60% and our best an F1-score of 89%, with a recall ≥ 60% for expressions like happiness, anger, kissing, sticking the tongue out, and neutral (null class).
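The recall, precision, and F1 figures above are standard multi-class classification metrics. A small self-contained sketch computing a macro-averaged F1-score over hypothetical facial-action labels (not the paper's data):

```python
# Hypothetical ground truth and predictions over a few facial actions
# (illustrative labels only).
y_true = ["happy", "angry", "kiss", "happy", "neutral", "angry"]
y_pred = ["happy", "angry", "kiss", "neutral", "neutral", "happy"]

def macro_f1(truth, pred):
    """Unweighted mean of per-class F1-scores (macro averaging)."""
    f1s = []
    for lab in sorted(set(truth) | set(pred)):
        tp = sum(t == lab and p == lab for t, p in zip(truth, pred))
        fp = sum(t != lab and p == lab for t, p in zip(truth, pred))
        fn = sum(t == lab and p != lab for t, p in zip(truth, pred))
        prec = tp / (tp + fp) if tp + fp else 0.0
        rec = tp / (tp + fn) if tp + fn else 0.0
        f1s.append(2 * prec * rec / (prec + rec) if prec + rec else 0.0)
    return sum(f1s) / len(f1s)

print(f"macro-F1 = {macro_f1(y_true, y_pred):.2f}")  # -> macro-F1 = 0.71
```

Macro averaging weights every class equally, which matters here because rare gestures (e.g. taking a pill) would otherwise be swamped by frequent ones.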
Affiliation(s)
- Hymalai Bello
- German Research Center for Artificial Intelligence(DFKI), 67663 Kaiserslautern, Germany; (B.Z.); (P.L.)
| | - Bo Zhou
- German Research Center for Artificial Intelligence(DFKI), 67663 Kaiserslautern, Germany; (B.Z.); (P.L.)
| | - Paul Lukowicz
- German Research Center for Artificial Intelligence(DFKI), 67663 Kaiserslautern, Germany; (B.Z.); (P.L.)
- Department of Computer Science, University of Kaiserslautern, 67663 Kaiserslautern, Germany
| |
|
41
|
Ellmerer P, Heim B, Stefani A, Peball M, Werkmann M, Holzknecht E, Bergmann M, Brandauer E, Sojer M, Zamarian L, Delazer M, Seppi K, Högl B, Poewe W, Djamshidian A. Augmentation in restless legs syndrome: an eye tracking study on emotion processing. Ann Clin Transl Neurol 2020; 7:1620-1627. [PMID: 32786065 PMCID: PMC7480921 DOI: 10.1002/acn3.51144] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2020] [Revised: 06/04/2020] [Accepted: 07/06/2020] [Indexed: 11/09/2022] Open
Abstract
Objective To assess emotional processing and alexithymia in patients with restless legs syndrome (RLS) with augmentation versus those who never had augmentation. Methods We recruited 26 patients who had a history of augmentation (AUG), either current or past, 27 RLS patients treated with dopamine agonists who never had augmentation (RLS controls), and 21 healthy controls (HC). All participants were screened for impulse control disorders (ICDs). Alexithymia was assessed by means of the Toronto Alexithymia Scale – 20 (TAS‐20). Facial emotion recognition was tested through an eye‐tracking task. Furthermore, all participants performed neuropsychological tests assessing global cognitive status, impulsivity, anxiety, and depression. Results ICD symptoms occurred more frequently in AUG patients than in RLS controls (P = 0.047). Patients with AUG scored higher on the TAS‐20 (P = 0.007) and the attentional subdomain of an impulsivity scale (BIS‐11; P = 0.015) compared to HC. Patients with AUG also performed worse on the facial emotion recognition task relative to RLS controls (P = 0.009) and HC (P = 0.003). We found a group difference for the time to first fixation and the fixation count in the mouth region (P = 0.019 and P = 0.021, respectively). There were no other differences in the eye tracking examination. Interpretation This study showed evidence of poorer emotional processing in patients who had augmentation compared to RLS patients without augmentation and healthy controls. The altered exploration pattern of faces and the higher alexithymia scores suggest abnormalities in emotion processing in patients with augmentation.
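The two mouth-region measures on which the groups differed — time to first fixation and fixation count — are standard area-of-interest (AOI) statistics. A minimal sketch of how they are derived from a fixation stream (coordinates and AOI bounds below are invented for illustration):

```python
def aoi_metrics(fixations, aoi):
    """Time to first fixation and fixation count for one area of
    interest (AOI), e.g. the mouth region of a face stimulus.

    `fixations` is a list of (onset_ms, x, y) tuples in stimulus
    coordinates; `aoi` is (x_min, y_min, x_max, y_max). Returns
    (time_to_first_fixation_ms or None, fixation_count).
    """
    x0, y0, x1, y1 = aoi
    hits = [t for (t, x, y) in fixations if x0 <= x <= x1 and y0 <= y <= y1]
    return (min(hits) if hits else None, len(hits))

# Hypothetical trial: three fixations, two landing inside a mouth AOI.
fix = [(120, 40, 30), (480, 210, 350), (900, 220, 360)]
mouth = (180, 320, 260, 400)
print(aoi_metrics(fix, mouth))  # (480, 2)
```

Group comparisons of these per-trial values (averaged per participant) are what yield the reported P = 0.019 and P = 0.021 tests.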
Affiliation(s)
- Philipp Ellmerer
- Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Beatrice Heim
- Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Ambra Stefani
- Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Marina Peball
- Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Mario Werkmann
- Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Evi Holzknecht
- Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Melanie Bergmann
- Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Elisabeth Brandauer
- Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Martin Sojer
- Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Laura Zamarian
- Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Margarete Delazer
- Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Klaus Seppi
- Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Birgit Högl
- Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Werner Poewe
- Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
- Atbin Djamshidian
- Department of Neurology, Medical University of Innsbruck, Innsbruck, Austria
42
Condliffe O, Maratos FA. Can compassion, happiness and sympathetic concern be differentiated on the basis of facial expression? Cogn Emot 2020; 34:1395-1407. [PMID: 32281475 DOI: 10.1080/02699931.2020.1747989] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/21/2023]
Abstract
Recent research has demonstrated the importance of positive emotions, and especially compassion, for well-being. Via two investigations, we set out to determine if facial expressions of happiness, "kind" compassion and sympathetic concern can be distinguished, given limitations of previous research. In investigation one, prototypes of the three expressions were analysed for similarities and differences using the facial action coding system (FACS) by two certified independent coders. Results established that each expression comprised distinct FACS units. Thus, in investigation two, a new photographic stimulus set was developed using a gender/racially balanced group of actors to pose these expressions of "kind" compassion, happiness, sympathetic concern, and the face in a relaxed/neutral pose. Seventy-five participants were then asked to name the FACS-generated expressions using not only forced categorical quantitative ratings but, importantly, free response. Results revealed that kind compassionate facial expressions: (i) engendered words associated with contented and affiliative emotions (although, interestingly, not the word "kind"); (ii) were labelled as compassionate significantly more often than any of the other emotional expressions; but (iii) in common with happiness expressions, engendered happiness word groupings and ratings. Findings have implications for our understanding of positive emotions, including the specificity of expressions and their veridicality.
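The FACS comparison in investigation one amounts to set operations over the action units (AUs) coded for each expression prototype. A sketch with illustrative AU codings (the specific codings below are examples, not the paper's results):

```python
def au_overlap(expr_a, expr_b):
    """Compare two expressions coded as sets of FACS action units,
    returning (shared AUs, AUs only in a, AUs only in b)."""
    return expr_a & expr_b, expr_a - expr_b, expr_b - expr_a

# Illustrative codings: AU6 = cheek raiser, AU12 = lip corner puller,
# AU1 = inner brow raiser, AU4 = brow lowerer.
happiness = {6, 12}
sympathetic_concern = {1, 4}
shared, only_h, only_s = au_overlap(happiness, sympathetic_concern)
print(shared, only_h, only_s)
```

Expressions are "distinct" in the paper's sense when each has AUs the others lack, even if some units are shared.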
Affiliation(s)
- Otto Condliffe
- School of Language, Education and Culture, The Sino-British College, University of Shanghai for Science and Technology, Shanghai, People's Republic of China
- Frances A Maratos
- College of Life & Natural Sciences, Human Sciences Research Centre, University of Derby, Derby, UK
43
Pandeirada JNS, Fernandes NL, Vasconcelos M. Attractiveness of Human Faces: Norms by Sex, Sexual Orientation, Age, Relationship Stability, and Own Attractiveness Judgements. Front Psychol 2020; 11:419. [PMID: 32231625 PMCID: PMC7083125 DOI: 10.3389/fpsyg.2020.00419] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2019] [Accepted: 02/24/2020] [Indexed: 11/13/2022] Open
Affiliation(s)
- Josefa N S Pandeirada
- Centro de Investigação em Tecnologias e Serviços de Saúde, University of Aveiro, Aveiro, Portugal
- William James Center for Research, University of Aveiro, Aveiro, Portugal
- Natália Lisandra Fernandes
- Centro de Investigação em Tecnologias e Serviços de Saúde, University of Aveiro, Aveiro, Portugal
- William James Center for Research, University of Aveiro, Aveiro, Portugal
- Marco Vasconcelos
- William James Center for Research, University of Aveiro, Aveiro, Portugal
44
Kaminska OK, Magnuski M, Olszanowski M, Gola M, Brzezicka A, Winkielman P. Ambiguous at the second sight: Mixed facial expressions trigger late electrophysiological responses linked to lower social impressions. Cogn Affect Behav Neurosci 2020; 20:441-454. [PMID: 32166625 PMCID: PMC7105445 DOI: 10.3758/s13415-020-00778-5] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Social interactions require quick perception, interpretation, and categorization of faces, with facial features offering cues to emotions, intentions, and traits. Importantly, reactions to faces depend not only on their features but also on their processing fluency, with disfluent faces suffering social devaluation. The current research used electrophysiological (EEG) and behavioral measures to explore at what processing stage and under what conditions emotional ambiguity is detected in the brain and how it influences trustworthiness judgments. Participants viewed male and female faces ranging from pure anger, through mixed expressions, to pure happiness. They categorized each face along the experimental dimension (happy vs. angry) or a control dimension (gender). In the emotion-categorization condition, mixed (ambiguous) expressions were classified relatively slower, and their trustworthiness was rated relatively lower. EEG analyses revealed that early brain responses are independent of the categorization condition, with pure faces evoking larger P1/N1 responses than mixed expressions. Some late (728-880 ms) brain responses from central-parietal sites also were independent of the categorization condition and presumably reflect familiarity of the emotion categories, with pure expressions evoking larger central-parietal LPP amplitude than mixed expressions. Interestingly, other late responses were sensitive to both expressive features and categorization task, with ambiguous faces evoking a larger LPP amplitude in frontal-medial sites around 560-660 ms but only in the emotion categorization task. Critically, these late responses from the frontal-medial cluster correlated with the reduction in trustworthiness judgments. Overall, the results suggest that ambiguity detection involves late, top-down processes and that it influences important social impressions.
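The critical brain-behavior link reported here is a correlation between frontal-medial LPP amplitude and the drop in trustworthiness ratings. The underlying statistic is an ordinary Pearson correlation across participants; a self-contained sketch with invented per-participant values:

```python
import math

def pearson_r(xs, ys):
    """Pearson correlation coefficient between two equal-length samples."""
    n = len(xs)
    mx = sum(xs) / n
    my = sum(ys) / n
    cov = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sx = math.sqrt(sum((x - mx) ** 2 for x in xs))
    sy = math.sqrt(sum((y - my) ** 2 for y in ys))
    return cov / (sx * sy)

# Hypothetical per-participant values: frontal-medial LPP amplitude to
# ambiguous faces vs. reduction in trustworthiness ratings.
lpp = [1.2, 2.5, 0.8, 3.1, 1.9]
trust_drop = [0.3, 0.9, 0.2, 1.1, 0.6]
print(pearson_r(lpp, trust_drop))  # close to 1 for this toy data
```

A positive r here would mean participants with the strongest late frontal-medial response also devalued ambiguous faces the most, which is the pattern the abstract describes.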
Affiliation(s)
- Mateusz Gola
- Institute for Neural Computation, Swartz Center for Computational Neuroscience, University of California, San Diego, La Jolla, CA, USA
- Institute of Psychology, Polish Academy of Sciences, Warsaw, Poland
- Aneta Brzezicka
- University of Social Sciences and Humanities, Warsaw, Poland
- Department of Neurosurgery, Cedars-Sinai Medical Center, Los Angeles, CA, USA
- Piotr Winkielman
- University of Social Sciences and Humanities, Warsaw, Poland
- Psychology Department, University of California, San Diego, La Jolla, CA, USA
45
Facial Expression Recognition by Regional Weighting with Approximated Q-Learning. Symmetry (Basel) 2020. [DOI: 10.3390/sym12020319] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Several facial expression recognition methods cluster facial elements according to similarity and weight them considering the importance of each element in classification. However, these methods are limited by the pre-definitions of units restricting modification of the structure during optimization. This study proposes a modified support vector machine classifier called Grid Map, which is combined with reinforcement learning to improve the classification accuracy. To optimize training, the input image size is normalized according to the cascade rules of a pre-processing detector, and the regional weights are assigned by an adaptive cell size that divides each region of the image using bounding grids. Reducing the size of the bounding grid reduces the area used for feature extraction, allowing more detailed weighted features to be extracted. Error-correcting output codes with a histogram of gradient is selected as the classification method via an experiment to determine the optimal feature and classifier selection. The proposed method is formulated into a decision process and solved via Q-learning. To classify seven emotions, the proposed method exhibits accuracies of 96.36% and 98.47% for four databases and the Extended Cohn–Kanade Dataset (CK+), respectively. Compared to the basic method exhibiting a similar accuracy, the proposed method requires 68.81% fewer features and only 66.33% of the processing time.
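The abstract says regional weighting is cast as a decision process solved via Q-learning (the paper uses an approximated variant). As a generic illustration only, not the authors' implementation, the core epsilon-greedy tabular update Q(s,a) ← Q(s,a) + α·(r + γ·max_a' Q(s',a') − Q(s,a)) looks like this:

```python
import random

def q_learning_step(Q, state, actions, reward_fn, next_state_fn,
                    alpha=0.1, gamma=0.9, epsilon=0.1):
    """One epsilon-greedy tabular Q-learning update.

    Q is a dict mapping (state, action) -> value; reward_fn and
    next_state_fn define the environment (here: hypothetical moves
    such as shrinking a bounding grid, rewarded by accuracy gains).
    """
    if random.random() < epsilon:
        action = random.choice(actions)          # explore
    else:
        action = max(actions, key=lambda a: Q.get((state, a), 0.0))  # exploit
    reward = reward_fn(state, action)
    nxt = next_state_fn(state, action)
    best_next = max(Q.get((nxt, a), 0.0) for a in actions)
    old = Q.get((state, action), 0.0)
    Q[(state, action)] = old + alpha * (reward + gamma * best_next - old)
    return nxt

# Toy usage: one state, action "a" always rewarded (hypothetical).
Q = {}
q_learning_step(Q, "s0", ["a", "b"], lambda s, a: 1.0,
                lambda s, a: "s0", epsilon=0.0)
print(round(Q[("s0", "a")], 3))  # 0.1
```

With epsilon = 0 the first update is deterministic: 0 + 0.1 × (1 + 0.9 × 0 − 0) = 0.1.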
46
Expressure: Detect Expressions Related to Emotional and Cognitive Activities Using Forehead Textile Pressure Mechanomyography. Sensors (Basel) 2020; 20:s20030730. [PMID: 32013009 PMCID: PMC7038450 DOI: 10.3390/s20030730] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 01/01/2020] [Revised: 01/22/2020] [Accepted: 01/24/2020] [Indexed: 11/16/2022]
Abstract
We investigate how pressure-sensitive smart textiles, in the form of a headband, can detect changes in facial expressions that are indicative of emotions and cognitive activities. Specifically, we present the Expressure system that performs surface pressure mechanomyography on the forehead using an array of textile pressure sensors that is not dependent on specific placement or attachment to the skin. Our approach is evaluated in systematic psychological experiments. First, through a mimicking expression experiment with 20 participants, we demonstrate the system’s ability to detect well-defined facial expressions. We achieved an accuracy of 0.824 in classifying among three eyebrow movements (0.333 chance level) and 0.381 among seven full-face expressions (0.143 chance level). A second experiment was conducted with 20 participants to induce cognitive loads with N-back tasks. Statistical analysis showed significant correlations between the Expressure features on a fine time granularity and the cognitive activity, as well as between the Expressure features and the N-back score. From the 10 most facially expressive participants, our approach can predict whether the N-back score is above or below the average with 0.767 accuracy.
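Whether accuracies such as 0.381 genuinely beat the 0.143 chance level can be checked with an exact one-sided binomial test. The sketch below uses a hypothetical trial count, since the per-condition trial numbers are not restated here:

```python
from math import comb

def binom_p_above_chance(correct, trials, chance):
    """Exact one-sided binomial p-value: probability of observing at
    least `correct` successes in `trials` if true accuracy were `chance`."""
    return sum(comb(trials, k) * chance ** k * (1 - chance) ** (trials - k)
               for k in range(correct, trials + 1))

# E.g., 0.381 accuracy over a hypothetical 200 trials of the 7-class task:
p = binom_p_above_chance(int(0.381 * 200), 200, 1 / 7)
print(p < 0.001)  # well above the 0.143 chance level
```

Even with modest trial counts, 0.381 against a 1/7 baseline is many standard deviations above expectation, so the p-value is vanishingly small.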
47
Krejtz I, Krejtz K, Wisiecka K, Abramczyk M, Olszanowski M, Duchowski AT. Attention Dynamics During Emotion Recognition by Deaf and Hearing Individuals. J Deaf Stud Deaf Educ 2020; 25:10-21. [PMID: 31665493 DOI: 10.1093/deafed/enz036] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/10/2018] [Revised: 07/11/2019] [Accepted: 08/01/2019] [Indexed: 06/10/2023]
Abstract
The enhancement hypothesis suggests that deaf individuals are more vigilant to visual emotional cues than hearing individuals. The present eye-tracking study examined ambient-focal visual attention when encoding affect from dynamically changing emotional facial expressions. Deaf (n = 17) and hearing (n = 17) individuals watched emotional facial expressions that in 10-s animations morphed from a neutral expression to one of happiness, sadness, or anger. The task was to recognize emotion as quickly as possible. Deaf participants tended to be faster than hearing participants in affect recognition, but the groups did not differ in accuracy. In general, happy faces were more accurately and more quickly recognized than faces expressing anger or sadness. Both groups demonstrated longer average fixation duration when recognizing happiness in comparison to anger and sadness. Deaf individuals directed their first fixations less often to the mouth region than the hearing group. During the last stages of emotion recognition, deaf participants exhibited more focal viewing of happy faces than negative faces. This pattern was not observed among hearing individuals. The analysis of visual gaze dynamics, switching between ambient and focal attention, was useful in studying the depth of cognitive processing of emotional information among deaf and hearing individuals.
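The ambient-focal dynamics described above are commonly quantified with coefficient K: z-scored fixation duration minus z-scored amplitude of the following saccade, averaged within a time window (K > 0 suggests focal viewing, K < 0 ambient scanning). A minimal sketch with invented numbers, not the study's data:

```python
import math

def zscores(xs):
    """Z-score a sample against its own mean and (population) SD."""
    m = sum(xs) / len(xs)
    sd = math.sqrt(sum((x - m) ** 2 for x in xs) / len(xs))
    return [(x - m) / sd for x in xs]

def coefficient_k(z_dur, z_amp, window):
    """Coefficient K over a slice of the recording: mean of z-scored
    fixation duration minus z-scored subsequent saccade amplitude.
    Over the whole recording K averages to ~0 by construction, so it
    is interpreted within time windows (e.g., early vs. late viewing)."""
    pairs = list(zip(z_dur, z_amp))[window]
    return sum(d - a for d, a in pairs) / len(pairs)

# Hypothetical trial: early ambient scanning (short fixations, long
# saccades), then late focal viewing as the emotion is recognized.
durs = [120, 130, 110, 320, 340, 330]   # fixation durations, ms
amps = [6.0, 7.0, 6.5, 1.0, 0.8, 0.9]   # following saccade amplitudes, deg
zd, za = zscores(durs), zscores(amps)
print(coefficient_k(zd, za, slice(3, 6)) > 0)  # True: focal late in trial
```

Comparing windowed K between groups and stimulus categories is how patterns like "deaf participants exhibited more focal viewing of happy faces during the last stages" are operationalized.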
Affiliation(s)
- Izabela Krejtz
- SWPS University of Social Sciences and Humanities, Chodakowska 19/31, Warsaw, Poland
- Krzysztof Krejtz
- SWPS University of Social Sciences and Humanities, Chodakowska 19/31, Warsaw, Poland
- Michał Olszanowski
- SWPS University of Social Sciences and Humanities, Chodakowska 19/31, Warsaw, Poland
48
Chung KM, Kim S, Jung WH, Kim Y. Development and Validation of the Yonsei Face Database (YFace DB). Front Psychol 2019; 10:2626. [PMID: 31849755 PMCID: PMC6901828 DOI: 10.3389/fpsyg.2019.02626] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2019] [Accepted: 11/07/2019] [Indexed: 12/13/2022] Open
Abstract
The purposes of this study were to develop the Yonsei Face Database (YFace DB), consisting of both static and dynamic face stimuli for six basic emotions (happiness, sadness, anger, surprise, fear, and disgust), and to test its validity. The database includes selected pictures (static stimuli) and film clips (dynamic stimuli) of 74 models (50% female) aged between 19 and 40. A total of 1,480 selected pictures and film clips were assessed for accuracy, intensity, and naturalness during the validation procedure by 221 undergraduate students. The overall accuracy of the pictures was 76%; film clips had a higher accuracy of 83%. The highest accuracy was observed for happiness and the lowest for fear across all conditions (static with mouth open or closed, or dynamic). The accuracy was higher in film clips across all emotions but happiness and disgust, while the naturalness was higher in the pictures than in film clips except for sadness and anger. The intensity varied the most across conditions and emotions. Significant gender effects were found in perception accuracy for both the gender of the models and that of the raters. Male raters perceived surprise more accurately in static stimuli with mouth open and in dynamic stimuli, while female raters perceived fear more accurately in all conditions. Moreover, sadness and anger expressed in static stimuli with mouth open and fear expressed in dynamic stimuli were perceived more accurately when models were male. Disgust expressed in static stimuli with mouth open and dynamic stimuli, and fear expressed in static stimuli with mouth closed, were perceived more accurately when models were female. The YFace DB is the largest Asian face database to date and the first to include both static and dynamic facial expression stimuli; through the validation procedure, the current study provides researchers with detailed information about the validity of each stimulus.
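The accuracy figures quoted above are per-stimulus hit rates: the percentage of raters whose chosen emotion label matches the intended one. A trivial sketch (the rater labels below are hypothetical):

```python
def stimulus_accuracy(ratings, intended):
    """Percent of raters whose emotion label matches the intended one,
    the standard validation statistic for face databases."""
    return 100.0 * sum(r == intended for r in ratings) / len(ratings)

# Hypothetical ratings of one 'happiness' clip by five raters:
print(stimulus_accuracy(
    ["happiness", "happiness", "surprise", "happiness", "happiness"],
    "happiness"))  # 80.0
```

Averaging these per-stimulus values within a condition (static mouth-open, static mouth-closed, dynamic) yields summary figures like the 76% and 83% reported.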
Affiliation(s)
- Kyong-Mee Chung
- Department of Psychology, Yonsei University, Seoul, South Korea
- Soojin Kim
- Department of Psychology, Yonsei University, Seoul, South Korea
- Woo Hyun Jung
- Department of Psychology, Chungbuk National University, Cheongju, South Korea
- Yeunjoo Kim
- Department of Psychology, University of California, Berkeley, Berkeley, CA, United States
49
Buimer H, Schellens R, Kostelijk T, Nemri A, Zhao Y, Van der Geest T, Van Wezel R. Opportunities and Pitfalls in Applying Emotion Recognition Software for Persons With a Visual Impairment: Simulated Real Life Conversations. JMIR Mhealth Uhealth 2019; 7:e13722. [PMID: 31750838 PMCID: PMC6895890 DOI: 10.2196/13722] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2019] [Revised: 07/12/2019] [Accepted: 07/19/2019] [Indexed: 11/17/2022] Open
Abstract
Background A large part of the communication cues exchanged between persons is nonverbal. Persons with a visual impairment are often unable to perceive these cues, such as gestures or facial expressions of emotions. In a previous study, we determined that visually impaired persons can increase their ability to recognize facial expressions of emotions from validated pictures and videos by using an emotion recognition system that signals vibrotactile cues associated with one of the six basic emotions. Objective The aim of this study was to determine whether the previously tested emotion recognition system worked equally well in realistic situations as under controlled laboratory conditions. Methods The emotion recognition system consists of a camera mounted on spectacles, a tablet running facial emotion recognition software, and a waist belt with vibrotactile stimulators to provide haptic feedback representing Ekman’s six universal emotions. A total of 8 visually impaired persons (4 females and 4 males; mean age 46.75 years, age range 28-66 years) participated in two training sessions followed by one experimental session. During the experiment, participants engaged in two 15-minute conversations, in one of which they wore the emotion recognition system. To conclude the study, exit interviews were conducted to assess the experiences of the participants. Due to technical issues with the registration of the emotion recognition software, only 6 participants were included in the video analysis. Results We found that participants were quickly able to learn, distinguish, and remember the vibrotactile signals associated with the six emotions. A total of 4 participants felt that they were able to use the vibrotactile signals in the conversation. Moreover, 5 out of the 6 participants had no difficulties in keeping the camera focused on the conversation partner. The emotion recognition software was very accurate in detecting happiness but performed unsatisfactorily in recognizing the other five universal emotions. Conclusions The system requires some essential improvements in performance and wearability before it is ready to support visually impaired persons in their daily life interactions. Nevertheless, the participants saw potential in the system as an assistive technology, assuming their user requirements can be met.
Affiliation(s)
- Hendrik Buimer
- Department of Biophysics, Radboud University, Nijmegen, Netherlands
- Department of Biomedical Signals and Systems, University of Twente, Enschede, Netherlands
- Renske Schellens
- Department of Biophysics, Radboud University, Nijmegen, Netherlands
- Abdellatif Nemri
- Department of Biomedical Signals and Systems, University of Twente, Enschede, Netherlands
- Yan Zhao
- Department of Biomedical Signals and Systems, University of Twente, Enschede, Netherlands
- Thea Van der Geest
- Center IT + Media, Hogeschool van Arnhem en Nijmegen University of Applied Sciences, Arnhem, Netherlands
- Richard Van Wezel
- Department of Biophysics, Radboud University, Nijmegen, Netherlands
- Department of Biomedical Signals and Systems, University of Twente, Enschede, Netherlands
50