1
Cheng X, Wang S, Wei H, Sun X, Xin L, Li L, Li C, Wang Z. Application of Stereo Digital Image Correlation on Facial Expressions Sensing. Sensors (Basel) 2024; 24:2450. [PMID: 38676067] [PMCID: PMC11054127] [DOI: 10.3390/s24082450]
Abstract
Facial expression is an important way of reflecting human emotion, and it represents a dynamic deformation process. Analyzing facial movements is an effective means of understanding expressions. However, there is currently a lack of methods capable of analyzing the dynamic details of full-field deformation in expressions. In this paper, to enable effective dynamic analysis of expressions, a classic optical measurement method called stereo digital image correlation (stereo-DIC or 3D-DIC) is employed to analyze the deformation fields of facial expressions. The forming processes of the six basic facial expressions of experimental subjects are analyzed through the displacement and strain fields calculated by 3D-DIC. The displacement fields of each expression exhibit strong consistency with the action units (AUs) defined by the classical Facial Action Coding System (FACS). Moreover, it is shown that the gradient of the displacement, i.e., the strain field, offers particular advantages in characterizing facial expressions due to its localized nature, effectively sensing the nuanced dynamics of facial movements. By processing extensive data, this study identifies two characteristic regions in the six basic expressions: one where deformation begins and one where deformation is most severe. Based on these two regions, the temporal evolutions of the six basic expressions are discussed. The presented investigations demonstrate the superior performance of 3D-DIC in the quantitative analysis of facial expressions. The proposed analytical strategy may have potential value in objectively characterizing human expressions based on quantitative measurement.
Affiliation(s)
- Xuanshi Cheng
- School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Shibin Wang
- School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Huixin Wei
- School of Civil Engineering and Architecture, Nanchang University, Nanchang 330000, China
- Xin Sun
- School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Lipan Xin
- School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Linan Li
- School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Chuanwei Li
- School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
- Zhiyong Wang
- School of Mechanical Engineering, Tianjin University, Tianjin 300350, China
2
González-Gualda LM, Vicente-Querol MA, García AS, Molina JP, Latorre JM, Fernández-Sotos P, Fernández-Caballero A. An exploratory study of the effect of age and gender on face scanning during affect recognition in immersive virtual reality. Sci Rep 2024; 14:5553. [PMID: 38448515] [PMCID: PMC10918108] [DOI: 10.1038/s41598-024-55774-3]
Abstract
A person with impaired emotion recognition is not able to correctly identify the facial expressions of other individuals. The aim of the present study was to assess eye gaze and facial emotion recognition in a healthy population using dynamic avatars in immersive virtual reality (IVR). For the first time, the viewing of each area of interest of the face in IVR was studied by gender and age. This work in healthy people was conducted to assess the future usefulness of IVR in patients with deficits in the recognition of facial expressions. Seventy-four healthy volunteers participated in the study. The materials used were a laptop computer, a game controller, and a head-mounted display. Dynamic virtual faces randomly representing the six basic emotions plus a neutral expression were used as stimuli. After the virtual human represented an emotion, a response panel was displayed with the seven possible options. Besides storing hits and misses, the software internally divided the faces into different areas of interest (AOIs) and recorded how long participants looked at each AOI. Regarding the overall accuracy of the participants' responses, hits decreased from the youngest to the middle-aged and older adults. All three groups spent the highest percentage of time looking at the eyes, with younger adults showing the highest percentage, and attention to the face compared with the background decreased with age. Moreover, hits for women and men were remarkably similar, with no statistically significant differences between them. In general, men paid more attention to the eyes than women, whereas women paid more attention to the forehead and mouth. In contrast to previous work, our study indicates that there are no differences between men and women in facial emotion recognition. In line with previous work, the percentage of face-viewing time was higher for younger than for older adults; however, contrary to earlier studies, older adults looked more at the eyes than at the mouth. Consistent with other studies, the eyes were the AOI with the highest percentage of viewing time. For men, the most viewed AOI was the eyes for all emotions, in both hits and misses. Women looked more at the eyes for all emotions except joy, fear, and anger on hits; on misses, they looked more at the eyes for almost all emotions except surprise and fear.
Affiliation(s)
- Luz M González-Gualda
- Servicio de Salud de Castilla-La Mancha, Complejo Hospitalario Universitario de Albacete, Servicio de Salud Mental, 02004, Albacete, Spain
- Miguel A Vicente-Querol
- Neurocognition and Emotion Unit, Instituto de Investigación en Informática de Albacete, 02071, Albacete, Spain
- Arturo S García
- Neurocognition and Emotion Unit, Instituto de Investigación en Informática de Albacete, 02071, Albacete, Spain
- Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, 02071, Albacete, Spain
- José P Molina
- Neurocognition and Emotion Unit, Instituto de Investigación en Informática de Albacete, 02071, Albacete, Spain
- Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, 02071, Albacete, Spain
- José M Latorre
- Departamento de Psicología, Universidad de Castilla-La Mancha, 02071, Albacete, Spain
- Patricia Fernández-Sotos
- Servicio de Salud de Castilla-La Mancha, Complejo Hospitalario Universitario de Albacete, Servicio de Salud Mental, 02004, Albacete, Spain
- CIBERSAM-ISCIII (Biomedical Research Networking Centre in Mental Health, Instituto de Salud Carlos III), 28016, Madrid, Spain
- Antonio Fernández-Caballero
- Neurocognition and Emotion Unit, Instituto de Investigación en Informática de Albacete, 02071, Albacete, Spain
- Departamento de Sistemas Informáticos, Universidad de Castilla-La Mancha, 02071, Albacete, Spain
- CIBERSAM-ISCIII (Biomedical Research Networking Centre in Mental Health, Instituto de Salud Carlos III), 28016, Madrid, Spain
3
Higashi K, Isoyama N, Sakata N, Kiyokawa K. Manipulating Sense of Participation in Multipartite Conversations by Manipulating Head Attitude and Gaze Direction. Journal of Robotics and Mechatronics 2021. [DOI: 10.20965/jrm.2021.p1013]
Abstract
Interpersonal communication is so important in everyday life that it is desirable that everyone who participates in a conversation is satisfied. However, not every participant can be satisfied, for example when one person cannot keep up with the conversation and feels alienated, or when someone cannot adequately exchange non-verbal expressions with a conversation partner. In this study, we focused on facial direction and gaze among the various factors that are said to affect conversational satisfaction. We attempted to lessen the sense of non-participation in a tripartite conversation, and to increase the conversational satisfaction of the non-participating party, by modulating the visual information so that the remaining two parties appear to turn toward that party. In experiments conducted in VR environments, we reproduced a conversation between two male adults, recorded in an actual environment, using two avatars; the experimental subjects watched this through their HMDs. The experiments found that visually modulating the avatars' faces and gazes so that they appear to turn toward the subjects increased the subjects' sense of participation in the conversation. Nevertheless, it did not increase the subjects' conversational enjoyment, one component of conversational satisfaction.
4
Type of Task Instruction Enhances the Role of Face and Context in Emotion Perception. Journal of Nonverbal Behavior 2021. [DOI: 10.1007/s10919-021-00383-1]
5
Virtual reality facial emotion recognition in social environments: An eye-tracking study. Internet Interv 2021; 25:100432. [PMID: 34401391] [PMCID: PMC8350588] [DOI: 10.1016/j.invent.2021.100432]
Abstract
BACKGROUND Virtual reality (VR) enables the administration of realistic and dynamic stimuli within a social context for the assessment and training of emotion recognition. We tested a novel VR emotion recognition task by comparing emotion recognition across VR, video, and photo tasks, investigating covariates of recognition and exploring visual attention in VR. METHODS Healthy individuals (n = 100) completed three emotion recognition tasks: a photo, a video, and a VR task. During the VR task, emotions of virtual characters (avatars) in a VR street environment were rated, and eye tracking was recorded in VR. RESULTS Recognition accuracy in VR (overall 75%) was comparable to that in the photo and video tasks. However, there were some differences: disgust and happiness had lower accuracy rates in VR, and better accuracy was achieved for surprise and anger in VR compared with the video task. Participants spent more time identifying disgust, fear and sadness than surprise and happiness. In general, attention was directed longer to the eye and nose areas than to the mouth. DISCUSSION Immersive VR tasks can be used for the training and assessment of emotion recognition. VR enables easily controllable avatars within environments relevant to daily life. Validated emotional expressions and tasks will be of relevance for clinical applications.
6
Negative and Positive Bias for Emotional Faces: Evidence from the Attention and Working Memory Paradigms. Neural Plast 2021; 2021:8851066. [PMID: 34135956] [PMCID: PMC8178010] [DOI: 10.1155/2021/8851066]
Abstract
Visual attention and visual working memory (VWM) are two major cognitive functions in humans, and they have much in common. A growing body of research has investigated the effect of emotional information on visual attention and VWM. Interestingly, contradictory findings have supported both a negative bias and a positive bias toward emotional faces (e.g., angry faces or happy faces) in the attention and VWM fields. We found that the classical paradigms, namely the visual search paradigm in attention and the change detection paradigm in VWM, are considerably similar. The settings of these paradigms could therefore be responsible for the contradictory results. In this paper, we compare previous controversial results from behavioral and neuroscience studies using these two paradigms. We suggest three possible contributing factors that have significant impacts on the contradictory conclusions regarding different emotional bias effects: stimulus choice, experimental setting, and cognitive process. We also propose new research directions and guidelines for future studies.
7
Validation of dynamic virtual faces for facial affect recognition. PLoS One 2021; 16:e0246001. [PMID: 33493234] [PMCID: PMC7833130] [DOI: 10.1371/journal.pone.0246001]
Abstract
The ability to recognise facial emotions is essential for successful social interaction. The most common stimuli used when evaluating this ability are photographs. Although these stimuli have proved valid, they do not offer the level of realism that virtual humans have achieved. The objective of the present paper is the validation of a new set of dynamic virtual faces (DVFs) that mimic the six basic emotions plus the neutral expression. The faces are prepared to be observed with low and high dynamism, and from front and side views. For this purpose, 204 healthy participants, stratified by gender, age and education level, were recruited to assess their facial affect recognition with the set of DVFs. The accuracy of responses was compared with the already validated Penn Emotion Recognition Test (ER-40). The results showed that DVFs were as valid as standardised natural faces for accurately recreating human-like facial expressions. The overall accuracy in the identification of emotions was higher for the DVFs (88.25%) than for the ER-40 faces (82.60%). The percentage of hits for each DVF emotion was high, especially for the neutral expression and happiness. No statistically significant differences were discovered regarding gender, nor were significant differences found between younger adults and adults over 60 years. Moreover, hits increased for avatar faces showing greater dynamism, as well as for front views of the DVFs compared with their profile presentations. DVFs are as valid as standardised natural faces for accurately recreating human-like facial expressions of emotions.
8
Hortensius R, Hekele F, Cross ES. The Perception of Emotion in Artificial Agents. IEEE Trans Cogn Dev Syst 2018. [DOI: 10.1109/tcds.2018.2826921]
9
Seibt B, Mühlberger A, Likowski KU, Weyers P. Facial mimicry in its social setting. Front Psychol 2015; 6:1122. [PMID: 26321970] [PMCID: PMC4531238] [DOI: 10.3389/fpsyg.2015.01122]
Abstract
In interpersonal encounters, individuals often exhibit changes in their own facial expressions in response to emotional expressions of another person. Such changes are often called facial mimicry. While this tendency first appeared to be an automatic tendency of the perceiver to show the same emotional expression as the sender, evidence is now accumulating that situation, person, and relationship jointly determine whether and for which emotions such congruent facial behavior is shown. We review the evidence regarding the moderating influence of such factors on facial mimicry with a focus on understanding the meaning of facial responses to emotional expressions in a particular constellation. From this, we derive recommendations for a research agenda with a stronger focus on the most common forms of encounters, actual interactions with known others, and on assessing potential mediators of facial mimicry. We conclude that facial mimicry is modulated by many factors: attention deployment and sensitivity, detection of valence, emotional feelings, and social motivations. We posit that these are the more proximal causes of changes in facial mimicry due to changes in its social setting.
Affiliation(s)
- Beate Seibt
- Department of Psychology, University of Oslo, Oslo, Norway
- Centro de Investigação e Intervenção Social, ISCTE - Instituto Universitário de Lisboa, Lisboa, Portugal
- Andreas Mühlberger
- Department of Psychology, University of Würzburg, Würzburg, Germany
- Department of Psychology, University of Regensburg, Regensburg, Germany
- Peter Weyers
- Department of Psychology, University of Würzburg, Würzburg, Germany
10
Marschner L, Pannasch S, Schulz J, Graupner ST. Social communication with virtual agents: The effects of body and gaze direction on attention and emotional responding in human observers. Int J Psychophysiol 2015; 97:85-92. [DOI: 10.1016/j.ijpsycho.2015.05.007]
11
12
Seibt B, Weyers P, Likowski KU, Pauli P, Mühlberger A, Hess U. Subliminal Interdependence Priming Modulates Congruent and Incongruent Facial Reactions to Emotional Displays. Social Cognition 2013. [DOI: 10.1521/soco.2013.31.5.613]
13
BESST (Bochum Emotional Stimulus Set): a pilot validation study of a stimulus set containing emotional bodies and faces from frontal and averted views. Psychiatry Res 2013; 209:98-109. [PMID: 23219103] [DOI: 10.1016/j.psychres.2012.11.012]
Abstract
This article introduces the freely available Bochum Emotional Stimulus Set (BESST), which contains pictures of bodies and faces depicting either a neutral expression or one of the six basic emotions (happiness, sadness, fear, anger, disgust, and surprise), presented from two different perspectives (0° frontal view vs. camera averted by 45° to the left). The set comprises 565 frontal view and 564 averted view pictures of real-life bodies with masked facial expressions, and 560 frontal and 560 averted view faces which were synthetically created using the FaceGen 3.5 Modeller. All stimuli were validated in terms of categorization accuracy and the perceived naturalness of the expression. Additionally, each facial stimulus was morphed into three age versions (20/40/60 years). The results show high recognition of the intended facial expressions, even under speeded forced-choice conditions corresponding to common experimental settings. The average naturalness ratings for the stimuli range between medium and high.
14
Likowski KU, Mühlberger A, Gerdes ABM, Wieser MJ, Pauli P, Weyers P. Facial mimicry and the mirror neuron system: simultaneous acquisition of facial electromyography and functional magnetic resonance imaging. Front Hum Neurosci 2012; 6:214. [PMID: 22855675] [PMCID: PMC3405279] [DOI: 10.3389/fnhum.2012.00214]
Abstract
Numerous studies have shown that humans automatically react with congruent facial reactions, i.e., facial mimicry, when seeing the facial expressions of a vis-à-vis. The current experiment is the first to investigate the neuronal structures responsible for differences in the occurrence of such facial mimicry reactions by simultaneously measuring BOLD and facial EMG in an MRI scanner. Twenty female students viewed emotional facial expressions (happy, sad, and angry) of male and female avatar characters. During picture presentation, the BOLD signal as well as M. zygomaticus major and M. corrugator supercilii activity were recorded simultaneously. Results show prototypical patterns of facial mimicry after correction for MR-related artifacts: enhanced M. zygomaticus major activity in response to happy expressions and enhanced M. corrugator supercilii activity in response to sad and angry expressions. Regression analyses show that these congruent facial reactions correlate significantly with activations in the IFG, SMA, and cerebellum. Stronger zygomaticus reactions to happy faces were further associated with increased activity in the caudate, MTG, and PCC. Corrugator reactions to angry expressions were further correlated with activity in the hippocampus, insula, and STS. Results are discussed in relation to core and extended models of the mirror neuron system (MNS).
15
16
Stout JC, Paulsen JS, Queller S, Solomon AC, Whitlock KB, Campbell JC, Carlozzi N, Duff K, Beglinger LJ, Langbehn DR, Johnson SA, Biglan KM, Aylward EH. Neurocognitive signs in prodromal Huntington disease. Neuropsychology 2011; 25:1-14. [PMID: 20919768] [PMCID: PMC3017660] [DOI: 10.1037/a0020937]
Abstract
OBJECTIVE PREDICT-HD is a large-scale international study of people with the Huntington disease (HD) CAG-repeat expansion who are not yet diagnosed with HD. The objective of this study was to determine the stage in the HD prodrome at which cognitive differences from CAG-normal controls can be reliably detected. METHOD For each of 738 HD CAG-expanded participants, we computed estimated years to clinical diagnosis and probability of diagnosis in 5 years based on age and CAG-repeat expansion number (Langbehn, Brinkman, Falush, Paulsen, & Hayden, 2004). We then stratified the sample into groups: NEAR, estimated to be ≤9 years; MID, between 9 and 15 years; and FAR, ≥15 years. The control sample included 168 CAG-normal participants. Nineteen cognitive tasks were used to assess attention, working memory, psychomotor functions, episodic memory, language, recognition of facial emotion, sensory-perceptual functions, and executive functions. RESULTS Compared with the controls, the NEAR group showed significantly poorer performance on nearly all of the cognitive tests and the MID group on about half of the cognitive tests (p = .05, Cohen's d NEAR as large as -1.17, MID as large as -0.61). One test even revealed significantly poorer performance in the FAR group (Cohen's d = -0.26). Individual tasks accounted for 0.2% to 9.7% of the variance in estimated proximity to diagnosis. Overall, the cognitive battery accounted for 34% of the variance; in comparison, the Unified Huntington's Disease Rating Scale motor score accounted for 11.7%. CONCLUSIONS Neurocognitive tests are robust clinical indicators of the disease process prior to reaching criteria for motor diagnosis of HD.
Affiliation(s)
- Julie C Stout
- School of Psychology, Psychiatry, and Psychological Medicine, Monash University, Australia
17
Dyck M, Winbeck M, Leiberg S, Chen Y, Mathiak K. Virtual faces as a tool to study emotion recognition deficits in schizophrenia. Psychiatry Res 2010; 179:247-52. [PMID: 20483465] [DOI: 10.1016/j.psychres.2009.11.004]
Abstract
Studies investigating emotion recognition in patients with schizophrenia have predominantly presented photographs of facial expressions. Better control and higher flexibility of emotion displays could be afforded by virtual reality (VR). VR allows the manipulation of facial expression and can simulate social interactions in a controlled and yet more naturalistic environment. However, to our knowledge, there is no study that systematically investigated whether patients with schizophrenia show the same emotion recognition deficits when emotions are expressed by virtual as compared to natural faces. Twenty schizophrenia patients and 20 controls rated pictures of natural and virtual faces with respect to the basic emotion expressed (happiness, sadness, anger, fear, disgust, and neutrality). Consistent with our hypothesis, the results revealed that emotion recognition impairments also emerged for emotions expressed by virtual characters. As virtual in contrast to natural expressions only contain major emotional features, schizophrenia patients already seem to be impaired in the recognition of basic emotional features. This finding has practical implications, as it supports the use of virtual emotional expressions for psychiatric research: the ease of changing facial features, animating avatar faces, and creating therapeutic simulations makes validated artificial expressions perfectly suited to study and treat emotion recognition deficits in schizophrenia.
Affiliation(s)
- Miriam Dyck
- Department of Psychiatry and Psychotherapy, JARA, Translational Brain Medicine, RWTH Aachen University, Aachen, Germany.
18
Probing the attentional control theory in social anxiety: an emotional saccade task. Cogn Affect Behav Neurosci 2009; 9:314-22. [PMID: 19679766] [DOI: 10.3758/cabn.9.3.314]
Abstract
Volitional attentional control has been found to rely on prefrontal neuronal circuits. According to the attentional control theory of anxiety, impairment in the volitional control of attention is a prominent feature of anxiety disorders. The present study investigated this assumption in socially anxious individuals using an emotional saccade task with facial expressions (happy, angry, fearful, sad, neutral). The gaze behavior of participants was recorded during the emotional saccade task, in which participants performed either pro- or antisaccades in response to peripherally presented facial expressions. The results show that socially anxious persons have difficulty inhibiting reflexive attention to facial expressions: they made more erroneous prosaccades to all facial expressions when an antisaccade was required. Thus, these findings indicate impaired attentional control in social anxiety. Overall, the present study shows a deficit of socially anxious individuals in attentional control, for example in inhibiting reflexive orienting to neutral as well as to emotional facial expressions. This result may be due to a dysfunction in the prefrontal areas involved in attentional control.
19
Schrammel F, Pannasch S, Graupner ST, Mojzisch A, Velichkovsky BM. Virtual friend or threat? The effects of facial expression and gaze interaction on psychophysiological responses and emotional experience. Psychophysiology 2009; 46:922-31. [DOI: 10.1111/j.1469-8986.2009.00831.x]
20
Weyers P, Mühlberger A, Kund A, Hess U, Pauli P. Modulation of facial reactions to avatar emotional faces by nonconscious competition priming. Psychophysiology 2009; 46:328-35. [DOI: 10.1111/j.1469-8986.2008.00771.x]
21
Dyck M, Winbeck M, Leiberg S, Chen Y, Gur RC, Mathiak K. Recognition profile of emotions in natural and virtual faces. PLoS One 2008; 3:e3628. [PMID: 18985152] [PMCID: PMC2574410] [DOI: 10.1371/journal.pone.0003628]
Abstract
Background Computer-generated virtual faces are becoming increasingly realistic, including the simulation of emotional expressions. These faces can be used as well-controlled, realistic and dynamic stimuli in emotion research. However, the validity of virtual facial expressions in comparison to natural emotion displays still needs to be shown for the different emotions and different age groups. Methodology/Principal Findings Thirty-two healthy volunteers between the ages of 20 and 60 rated pictures of natural human faces and faces of virtual characters (avatars) with respect to the expressed emotions: happiness, sadness, anger, fear, disgust, and neutral. Results indicate that virtual emotions were recognized comparably to natural ones. Recognition differences between virtual and natural faces depended on the specific emotion: whereas disgust was difficult to convey with the current avatar technology, virtual sadness and fear achieved better recognition results than natural faces. Furthermore, emotion recognition rates decreased for virtual but not natural faces in participants over the age of 40. This specific age effect suggests that media exposure has an influence on emotion recognition. Conclusions/Significance Virtual and natural facial displays of emotion may be equally effective. Improved technology (e.g., better modelling of the naso-labial area) may lead to even better results compared with trained actors. Due to the ease with which virtual human faces can be animated and manipulated, validated artificial emotional expressions will be of major relevance in future research and therapeutic applications.
Affiliation(s)
- Miriam Dyck
- Department of Psychiatry and Psychotherapy, RWTH Aachen University, Aachen, Germany.
22
Inverted-U effects generalize to the judgment of subjective properties of faces. Percept Psychophys 2008; 70:1274-88. [PMID: 18927009] [DOI: 10.3758/pp.70.7.1274]
Abstract
Researchers studying absolute identification have long known that it takes more time to identify a stimulus in the middle of a range than one at the extremes. That is, there is an inverted-U relation between mean response time and response position. In this task, an inverted-U relation also exists between response uncertainty and response position. Similarly, an inverted-U relation between mean response time and response position has been found for psychometric measures involving questions about the self. However, psychophysicists explain these inverted-U effects differently than do self-schema researchers. We propose an integrative framework in which task constraints explain these effects. To verify the generality of these inverted-U effects, we hypothesized that they would exist in three tasks having similar constraints--in this case, tasks involving the judgment of subjective properties of faces on a Likert-type scale. Our results are consistent with this hypothesis. We discuss the relevance of the results for other applications of Likert-type scales.
|
23
|
Early cortical processing of natural and artificial emotional faces differs between lower and higher socially anxious persons. J Neural Transm (Vienna) 2008; 116:735-46. [DOI: 10.1007/s00702-008-0108-6] [Citation(s) in RCA: 173] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2008] [Accepted: 08/06/2008] [Indexed: 11/30/2022]
|
24
|
Wieser MJ, Pauli P, Weyers P, Alpers GW, Mühlberger A. Fear of negative evaluation and the hypervigilance-avoidance hypothesis: an eye-tracking study. J Neural Transm (Vienna) 2008; 116:717-23. [PMID: 18690409 DOI: 10.1007/s00702-008-0101-0] [Citation(s) in RCA: 113] [Impact Index Per Article: 7.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2008] [Accepted: 07/20/2008] [Indexed: 12/15/2022]
Abstract
The hypervigilance-avoidance hypothesis assumes that anxious individuals initially attend to and subsequently avoid threatening stimuli. In this study, pairs of emotional (angry or happy) and neutral facial expressions were presented to students with high or low fear of negative evaluation (FNE) while their eye movements were recorded. High-FNE participants initially looked more often at emotional than at neutral faces, indicating an attentional bias for emotional facial expressions. This effect was further modulated by the sex of the face, as high-FNE participants clearly showed a preference for happy female faces. Analysis of the time course of attention revealed that high-FNE participants looked at the emotional faces longer during the first second of stimulus exposure, whereas they avoided these faces in the consecutive time interval from 1 to 1.5 s. These results partially support the hypervigilance-avoidance hypothesis and additionally indicate the relevance of happy faces for high-FNE individuals. Further research should clarify the meaning of happy facial expressions as well as the influence of the sex of the observed face in social anxiety.
Affiliation(s)
- Matthias J Wieser
- Department of Biological Psychology, Clinical Psychology and Psychotherapy, University of Würzburg, Marcusstr. 9-11, 97070 Würzburg, Germany
|
25
|
|
26
|
Lee HS, Park JW, Chung MJ. A Linear Affect–Expression Space Model and Control Points for Mascot-Type Facial Robots. IEEE Trans Robot 2007. [DOI: 10.1109/tro.2007.907477] [Citation(s) in RCA: 31] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
|
27
|
|
28
|
Weyers P, Mühlberger A, Hefele C, Pauli P. Electromyographic responses to static and dynamic avatar emotional facial expressions. Psychophysiology 2006; 43:450-3. [PMID: 16965606 DOI: 10.1111/j.1469-8986.2006.00451.x] [Citation(s) in RCA: 149] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
Abstract
Facial muscular reactions to avatars' static (neutral, happy, angry) and dynamic (morphs developing from neutral to happy or angry) facial expressions, presented for 1 s each, were investigated in 48 participants. Dynamic expressions led to better recognition rates and higher intensity and realism ratings. Angry expressions were rated as more intense than happy expressions. EMG recordings indicated emotion-specific reactions to happy avatars as reflected in increased M. zygomaticus major and decreased M. corrugator supercilii tension, with stronger reactions to dynamic as compared to static expressions. Although rated as more intense, angry expressions elicited no significant M. corrugator supercilii activation. We conclude that facial reactions to angry and to happy facial expressions hold different functions in social interactions. Further research should vary dynamics in different ways and also include additional emotional expressions.
Affiliation(s)
- Peter Weyers
- Department of Psychology, Universität Würzburg, Würzburg, Germany.
|
29
|
Ku J, Jang HJ, Kim KU, Kim JH, Park SH, Lee JH, Kim JJ, Kim IY, Kim SI. Experimental Results of Affective Valence and Arousal to Avatar's Facial Expressions. Cyberpsychol Behav 2005; 8:493-503. [PMID: 16232042 DOI: 10.1089/cpb.2005.8.493] [Citation(s) in RCA: 34] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
The objectives of this study were to propose a method of presenting dynamic facial expressions to experimental subjects, in order to investigate human perception of an avatar's facial expressions at different levels of emotional intensity. The investigation concerned how perception varies according to the strength of the facial expression, as well as according to the avatar's gender. To accomplish these goals, we generated a male and a female virtual avatar with five levels of intensity of happiness and anger using a morphing technique. We then recruited 16 normal healthy subjects and measured each subject's emotional reaction by scoring affective arousal and valence after showing them the avatar's face. Through this study, we were able to investigate the human perceptual characteristics evoked by male and female avatars' graduated facial expressions of happiness and anger. In addition, we found that a virtual avatar's facial expression could affect human emotion in different ways according to the avatar's gender and the intensity of its facial expressions. However, virtual faces also have some limitations because they are not real: subjects recognized the expressions well but were not influenced to the same extent. Although a virtual avatar has some limitations in conveying emotion through facial expressions, this study is significant in that it shows a new potential to use or manipulate emotional intensity by controlling a virtual avatar's facial expression linearly using a morphing technique. It is therefore predicted that this technique may be used for assessing the emotional characteristics of humans, and may be of particular benefit for work with people with emotional disorders through the presentation of dynamic expressions of various emotional intensities.
Affiliation(s)
- Jeonghun Ku
- Department of Biomedical Engineering, Hanyang University, Seoul, Korea
|
30
|
Moving Smiles: The Role of Dynamic Components for the Perception of the Genuineness of Smiles. JOURNAL OF NONVERBAL BEHAVIOR 2005. [DOI: 10.1007/s10919-004-0887-x] [Citation(s) in RCA: 51] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/25/2022]
|