1
Mulder MJ, Prummer F, Terburg D, Kenemans JL. Drift-diffusion modeling reveals that masked faces are preconceived as unfriendly. Sci Rep 2023; 13:16982. [PMID: 37813970; PMCID: PMC10562405; DOI: 10.1038/s41598-023-44162-y]
Abstract
During the COVID-19 pandemic, the use of face masks has become a daily routine. Studies have shown that face masks increase the ambiguity of facial expressions, which not only affects (the development of) emotion recognition but also interferes with social interaction and judgement. To disambiguate facial expressions, we rely on perceptual (stimulus-driven) as well as preconceptual (top-down) processes. However, it is unknown which of these two mechanisms accounts for the misinterpretation of masked expressions. To investigate this, we asked participants (N = 136) to decide whether ambiguous (morphed) facial expressions, with or without a mask, were perceived as friendly or unfriendly. To test for the independent effects of perceptual and preconceptual biases, we fitted a drift-diffusion model (DDM) to the behavioral data of each participant. Results show that face masks induce a clear loss of information leading to a slight perceptual bias towards friendly choices, but also a clear preconceptual bias towards unfriendly choices for masked faces. These results suggest that, although face masks can increase the perceptual friendliness of faces, people have a prior preconception to interpret masked faces as unfriendly.
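In DDM terms, the perceptual and preconceptual biases the abstract separates correspond to two distinct parameters: the drift rate (stimulus-driven evidence) and the starting point of accumulation (a prior, set before any stimulus information arrives). A minimal simulation sketch of this decomposition (illustrative only, not the authors' model or code; parameter names are assumptions):

```python
import numpy as np

def simulate_ddm(drift, z=0.5, a=1.0, sigma=1.0, dt=0.001, max_t=5.0, rng=None):
    """Simulate one drift-diffusion trial with absorbing boundaries at 0 and a.

    drift : evidence accumulation rate (the perceptual, stimulus-driven term)
    z     : relative starting point in (0, 1); z != 0.5 encodes a
            preconceptual bias present before any stimulus evidence
    Returns (choice, rt): choice is 1 if the upper boundary is hit
    (e.g. 'friendly'), 0 for the lower boundary (e.g. 'unfriendly').
    """
    rng = rng or np.random.default_rng()
    x, t = z * a, 0.0
    while 0.0 < x < a and t < max_t:
        # Euler step of the diffusion: deterministic drift plus Gaussian noise
        x += drift * dt + sigma * np.sqrt(dt) * rng.standard_normal()
        t += dt
    return (1 if x >= a else 0), t

# A starting point below 0.5 shifts choices toward 'unfriendly' even with
# zero drift, i.e. with no perceptual evidence favouring either response.
rng = np.random.default_rng(0)
p_friendly = np.mean([simulate_ddm(0.0, z=0.4, rng=rng)[0] for _ in range(500)])
```

With zero drift, the probability of hitting the upper boundary equals the relative starting point, so `p_friendly` lands near 0.4: the prior alone biases the choice proportions.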
Affiliation(s)
- Martijn J Mulder: Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Franziska Prummer: School of Computing and Communications, Lancaster University, Lancaster, UK
- David Terburg: Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- J Leon Kenemans: Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
2
Todd E, Subendran S, Wright G, Guo K. Emotion category-modulated interpretation bias in perceiving ambiguous facial expressions. Perception 2023; 52:695-711. [PMID: 37427421; PMCID: PMC10510303; DOI: 10.1177/03010066231186936]
Abstract
In contrast to prototypical facial expressions, we show less perceptual tolerance in perceiving vague expressions, demonstrating an interpretation bias such as more frequent perception of anger or happiness when categorizing ambiguous expressions of angry and happy faces that are morphed in different proportions and displayed under high- or low-quality conditions. However, it remains unclear whether this interpretation bias is specific to emotion categories or reflects a general negativity versus positivity bias, and whether the degree of this bias is affected by the valence or category of the two morphed expressions. These questions were examined in two eye-tracking experiments by systematically manipulating expression ambiguity and image quality in fear- and sad-happiness faces (Experiment 1) and by directly comparing anger-, fear-, sadness-, and disgust-happiness expressions (Experiment 2). We found that increasing expression ambiguity and degrading image quality induced a general negativity versus positivity bias in expression categorization. The degree of negativity bias, the associated reaction time, and face-viewing gaze allocation were further modulated by different expression combinations. Thus, although we show a viewing condition-dependent bias in interpreting vague facial expressions that display valence-contradicting expressive cues, the perception of these ambiguous expressions appears to be guided by a categorical process similar to that involved in perceiving prototypical expressions.
3
Chen Y, Zhao M, Xu Z, Li K, Ji J. Wafer defect recognition method based on multi-scale feature fusion. Front Neurosci 2023; 17:1202985. [PMID: 37332866; PMCID: PMC10272367; DOI: 10.3389/fnins.2023.1202985]
Abstract
Wafer defect recognition is an important process in chip manufacturing. As different process flows can lead to different defect types, correct identification of defect patterns is important for recognizing manufacturing problems and fixing them in good time. To achieve high-precision identification of wafer defects and improve the quality and production yield of wafers, this paper proposes a Multi-Feature Fusion Perceptual Network (MFFP-Net) inspired by human visual perception mechanisms. The MFFP-Net can process information at various scales and then aggregate it, so that the next stage can abstract features from the different scales simultaneously. The proposed feature fusion module obtains finer-grained and richer features to capture key texture details and avoid the loss of important information. Experiments show that MFFP-Net achieves good generalization ability and state-of-the-art results on the real-world dataset WM-811K, with an accuracy of 96.71%; this provides an effective way for the chip manufacturing industry to improve the yield rate.
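The core idea of processing at several scales and then aggregating can be sketched outside any deep-learning framework. Below is a toy numpy version that fuses average-pooled descriptors by concatenation; it illustrates the general multi-scale fusion pattern only and is not the MFFP-Net architecture (function names and scales are assumptions):

```python
import numpy as np

def pool(img, k):
    """Average-pool a 2-D array with a k x k window and stride k."""
    h, w = img.shape
    h2, w2 = h // k, w // k
    # Crop to a multiple of k, then average each k x k block
    return img[:h2 * k, :w2 * k].reshape(h2, k, w2, k).mean(axis=(1, 3))

def multi_scale_features(img, scales=(1, 2, 4)):
    """Describe an image at several resolutions and fuse by concatenation,
    so a downstream classifier sees fine texture and coarse layout at once."""
    return np.concatenate([pool(img, k).ravel() for k in scales])

img = np.arange(64, dtype=float).reshape(8, 8)   # toy 8x8 'wafer map'
feats = multi_scale_features(img)                 # 64 + 16 + 4 = 84 values
```

A learned network replaces the fixed pooling with trained convolutions, but the fusion step, concatenating responses computed at different receptive-field sizes, follows the same shape.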
Affiliation(s)
- Yu Chen: Research Center for Applied Mechanics, School of Electro-Mechanical Engineering, Xidian University, Xi'an, China
- Meng Zhao: Research Center for Applied Mechanics, School of Electro-Mechanical Engineering, Xidian University, Xi'an, China; Shaanxi Key Laboratory of Space Extreme Detection, Xi'an, China
- Zhenyu Xu: Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, Guangdong, China
- Kaiyue Li: Research Center for Applied Mechanics, School of Electro-Mechanical Engineering, Xidian University, Xi'an, China
- Jing Ji: Research Center for Applied Mechanics, School of Electro-Mechanical Engineering, Xidian University, Xi'an, China; Shaanxi Key Laboratory of Space Extreme Detection, Xi'an, China
4
FEDA: Fine-grained emotion difference analysis for facial expression recognition. Biomed Signal Process Control 2023. [DOI: 10.1016/j.bspc.2022.104209]
5
Rutt JL, Isaacowitz DM, Freund AM. Age and information preference: Neutral information sources in decision contexts. PLoS One 2022; 17:e0268713. [PMID: 35849571; PMCID: PMC9292105; DOI: 10.1371/journal.pone.0268713]
Abstract
Do adults of different ages differ in their focus on positive, negative, or neutral information when making decisions? Some research suggests an increasing preference for attending to and remembering positive over negative information with advancing age (i.e., an age-related positivity effect). However, these prior studies have largely neglected the potential role of neutral information. The current set of three studies used a multimethod approach, including self-reports (Study 1), eye tracking and choice among faces reflecting negative, neutral, or positive health-related (Study 2) and leisure-related information (Study 3). Gaze results from Studies 2 and 3 as well as self-reports from Study 1 showed a stronger preference for sources of neutral than for positive or negative information regardless of age. Findings also suggest a general preference for decision-relevant information from neutral compared to positive or negative sources. Focusing exclusively on the difference between positive (happy) and negative (angry) faces, results are in line with the age-related positivity effect (i.e., the difference in gaze duration between happy and angry faces was significantly larger for older than for younger adults). These findings underscore the importance of neutral information across age groups. Thus, most research on the positivity effect may be biased in that it does not consider the strong preference for neutral over positive information.
Affiliation(s)
- Joshua L. Rutt: Department of Psychology, University of Zurich, Zurich, Switzerland
- Derek M. Isaacowitz: Department of Psychology, Northeastern University, Boston, Massachusetts, United States of America
- Alexandra M. Freund: Department of Psychology, University of Zurich, Zurich, Switzerland; University Research Priority Program Dynamics of Healthy Aging, University of Zurich, Zurich, Switzerland
6
Faustmann LL, Eckhardt L, Hamann PS, Altgassen M. The Effects of Separate Facial Areas on Emotion Recognition in Different Adult Age Groups: A Laboratory and a Naturalistic Study. Front Psychol 2022; 13:859464. [PMID: 35846682; PMCID: PMC9281501; DOI: 10.3389/fpsyg.2022.859464]
Abstract
The identification of facial expressions is critical for social interaction. The ability to recognize facial emotional expressions declines with age, and these age effects have been associated with differential age-related looking patterns. The present research project set out to systematically test the role of specific facial areas in emotion recognition across the adult lifespan. Study 1 investigated the impact of displaying only separate facial areas versus the full face on emotion recognition in 62 younger (20-24 years) and 65 middle-aged adults (40-65 years). Study 2 examined whether wearing face masks differentially compromises younger (18-33 years, N = 71) versus middle-aged to older adults' (51-83 years, N = 73) ability to identify different emotional expressions. Results of Study 1 suggested no general decrease in emotion recognition across the lifespan; instead, age-related performance seems to depend on the specific emotion and the presented face area. Similarly, Study 2 observed deficits only in the identification of angry, fearful, and neutral expressions in older adults, but no age-related differences with regard to happy, sad, and disgusted expressions. Overall, face masks reduced participants' emotion recognition; however, there were no differential age effects. Results are discussed in light of current models of age-related changes in emotion recognition.
Affiliation(s)
- Mareike Altgassen: Department of Psychology, Johannes Gutenberg University Mainz, Mainz, Germany
7
Myronuk L. Effect of telemedicine via videoconference on provider fatigue and empathy: Implications for the Quadruple Aim. Healthc Manage Forum 2022; 35:174-178. [PMID: 35289218; DOI: 10.1177/08404704211059944]
Abstract
Telemedicine via videoconferencing, rapidly deployed during the COVID-19 pandemic, reduces contact and the opportunity for virus transmission, with Quadruple Aim benefits of improved population health and the associated cost avoidance of COVID-related illness. Patient experience of telemedicine has generally been positive, but widespread use of videoconferencing outside of healthcare has brought growing recognition of the associated mental fatigue. Experience in telepsychiatry shows that attending to non-verbal communication and maintaining empathic rapport require increased mental effort, making provider experience more sensitive to cumulative fatigue effects. Since empathy and therapeutic alliance are foundational to all physician-patient relationships, these telepsychiatry findings have implications for telehealth generally. Health leaders and providers planning for sustainable incorporation of videoconferencing into ongoing healthcare delivery should consider the potential for unintended negative effects on provider experience and burnout.
Affiliation(s)
- Lonn Myronuk: Vancouver Island Health Authority, Nanaimo, British Columbia, Canada
8
Bogdanova OV, Bogdanov VB, Miller LE, Hadj-Bouziane F. Simulated proximity enhances perceptual and physiological responses to emotional facial expressions. Sci Rep 2022; 12:109. [PMID: 34996925; PMCID: PMC8741866; DOI: 10.1038/s41598-021-03587-z]
Abstract
Physical proximity is important in social interactions. Here, we assessed whether simulated physical proximity modulates the perceived intensity of facial emotional expressions and their associated physiological signatures during observation or imitation of these expressions. Forty-four healthy volunteers rated the intensity of dynamic angry or happy facial expressions presented at two simulated locations, proximal (0.5 m) and distant (3 m) from the participants. We tested whether simulated physical proximity affected the spontaneous (in the observation task) and voluntary (in the imitation task) physiological responses (activity of the corrugator supercilii face muscle and pupil diameter) as well as subsequent ratings of emotional intensity. Angry expressions provoked relative activation of the corrugator supercilii muscle and pupil dilation, whereas happy expressions induced a decrease in corrugator supercilii muscle activity. In the proximal condition, these responses were enhanced during both observation and imitation of the facial expressions and were accompanied by an increase in subsequent affective ratings. In addition, individual variation in condition-related EMG activation during imitation of angry expressions predicted increases in subsequent emotional ratings. In sum, our results reveal novel insights into the impact of physical proximity on the perception of emotional expressions, with early proximity-induced enhancements of physiological responses followed by increased intensity ratings of facial emotional expressions.
Affiliation(s)
- Olena V Bogdanova: IMPACT Team, Lyon Neuroscience Research Center, INSERM U1028, CNRS UMR5292, University of Lyon, Bron Cedex, France; INCIA, CNRS UMR 5287, Université de Bordeaux, Bordeaux, France
- Volodymyr B Bogdanov: IMPACT Team, Lyon Neuroscience Research Center, INSERM U1028, CNRS UMR5292, University of Lyon, Bron Cedex, France; Université de Bordeaux, Collège Science de la Santé, Institut Universitaire des Sciences de la Réadaptation, Handicap Activité Cognition Santé EA 4136, Bordeaux, France
- Luke E Miller: Donders Centre for Cognition, Radboud University, Nijmegen, The Netherlands
- Fadila Hadj-Bouziane: IMPACT Team, Lyon Neuroscience Research Center, INSERM U1028, CNRS UMR5292, University of Lyon, Bron Cedex, France
9
Duran N, Atkinson AP. Foveal processing of emotion-informative facial features. PLoS One 2021; 16:e0260814. [PMID: 34855898; PMCID: PMC8638924; DOI: 10.1371/journal.pone.0260814]
Abstract
Certain facial features provide useful information for recognition of facial expressions. In two experiments, we investigated whether foveating informative features of briefly presented expressions improves recognition accuracy and whether these features are targeted reflexively when not foveated. Angry, fearful, surprised, and sad or disgusted expressions were presented briefly at locations which would ensure foveation of specific features. Foveating the mouth of fearful, surprised and disgusted expressions improved emotion recognition compared to foveating an eye, a cheek or the central brow. Foveating the brow led to equivocal results in anger recognition across the two experiments, which might be due to the different combination of emotions used. There was no consistent evidence suggesting that reflexive first saccades targeted emotion-relevant features; instead, they targeted the feature closest to initial fixation. In a third experiment, angry, fearful, surprised and disgusted expressions were presented for 5 seconds. Duration of task-related fixations in the eyes, brow, nose and mouth regions was modulated by the presented expression. Moreover, longer fixation at the mouth positively correlated with anger and disgust accuracy both when these expressions were freely viewed (Experiment 2b) and when briefly presented at the mouth (Experiment 2a). Finally, an overall preference to fixate the mouth across all expressions correlated positively with anger and disgust accuracy. These findings suggest that foveal processing of informative features contributes to emotion recognition, but that such features are not automatically sought out when not foveated, and that facial emotion recognition performance is related to idiosyncratic gaze behaviour.
Affiliation(s)
- Nazire Duran: Department of Psychology, Durham University, Durham, United Kingdom
- Anthony P. Atkinson: Department of Psychology, Durham University, Durham, United Kingdom
10
van den Berg NS, de Haan EHF, Huitema RB, Spikman JM. The neural underpinnings of facial emotion recognition in ischemic stroke patients. J Neuropsychol 2021; 15:516-532. [PMID: 33554463; PMCID: PMC8518120; DOI: 10.1111/jnp.12240]
Abstract
Deficits in facial emotion recognition occur frequently after stroke, with adverse social and behavioural consequences. The aim of this study was to investigate the neural underpinnings of the recognition of emotional expressions, in particular of the distinct basic emotions (anger, disgust, fear, happiness, sadness and surprise). A group of 110 ischaemic stroke patients with lesions in (sub)cortical areas of the cerebrum was included. Emotion recognition was assessed with the Ekman 60 Faces Test of the FEEST. Patient data were compared to data of 162 matched healthy controls (HCs). For the patients, whole-brain voxel-based lesion-symptom mapping (VLSM) on 3-Tesla MRI images was performed. Results showed that patients performed significantly worse than HCs on overall recognition of emotions, and specifically on disgust, fear, sadness and surprise. VLSM showed significant lesion-symptom associations for FEEST total in the right fronto-temporal region. Additionally, VLSM for the distinct emotions showed, apart from overlapping brain regions (insula, putamen and Rolandic operculum), regions related to specific emotions: middle and superior temporal gyrus (anger); caudate nucleus (disgust); superior corona radiata white matter tract, superior longitudinal fasciculus and middle frontal gyrus (happiness); and inferior frontal gyrus (sadness). Our findings help in understanding how lesions in specific brain regions can selectively affect the recognition of the basic emotions.
Affiliation(s)
- Nils S. van den Berg: Department of Psychology, University of Amsterdam, The Netherlands; Department of Neurology, University Medical Center Groningen, University of Groningen, The Netherlands
- Rients B. Huitema: Department of Neurology, University Medical Center Groningen, University of Groningen, The Netherlands
- Jacoba M. Spikman: Department of Neurology, University Medical Center Groningen, University of Groningen, The Netherlands
11
Hareli S, Elkabetz S, Hanoch Y, Hess U. Social Perception of Risk-Taking Willingness as a Function of Expressions of Emotions. Front Psychol 2021; 12:655314. [PMID: 34140916; PMCID: PMC8204010; DOI: 10.3389/fpsyg.2021.655314]
Abstract
Two studies showed that emotion expressions serve as cues to the expresser's willingness to take risks in general, as well as in five risk domains (ethical, financial, health and safety, recreational, and social). Emotion expressions did not have a uniform effect on risk estimates across risk domains. Rather, these effects fit behavioral intentions associated with each emotion. Thus, anger expressions were related to ethical and social risks. Sadness reduced perceived willingness to take financial (Study 1 only), recreational, and social risks. Happiness reduced perceived willingness to take ethical and health/safety risks relative to neutrality. Disgust expressions increased the perceived likelihood of taking a social risk. Finally, neutrality increased the perceived willingness to engage in risky behavior in general. Overall, these results suggest that observers use their naïve understanding of the meaning of emotions to infer how likely an expresser is to engage in risky behavior.
Affiliation(s)
- Shlomo Hareli: The Laboratory for the Study of Social Perception of Emotions, Department of Business Administration, University of Haifa, Haifa, Israel
- Shimon Elkabetz: The Laboratory for the Study of Social Perception of Emotions, Department of Business Administration, University of Haifa, Haifa, Israel
- Yaniv Hanoch: Southampton Business School, University of Southampton, Highfield, Southampton, United Kingdom
- Ursula Hess: Department of Psychology, Humboldt-University of Berlin, Berlin, Germany
12
Blom SSAH, Aarts H, Kunst HPM, Wever CC, Semin GR. Facial emotion detection in Vestibular Schwannoma patients with and without facial paresis. Soc Neurosci 2021; 16:317-326. [PMID: 33781177; DOI: 10.1080/17470919.2021.1909127]
Abstract
This study investigates whether patients suffering from Vestibular Schwannoma (VS) differ in facial emotion detection accuracy as a consequence of their facial paresis. Forty-four VS patients, half of them with and half of them without a facial paresis, had to classify pictures of facial expressions as emotional or non-emotional. The visual information in the images was systematically manipulated by adding different levels of visual noise. The study had a mixed design with emotional expression (happy vs. angry) and visual noise level (10% to 80%) as repeated measures, and facial paresis (present vs. absent) and degree of facial dysfunction as between-subjects factors. Emotion detection accuracy declined when visual information declined, an effect that was stronger for angry than for happy expressions. Overall, emotion detection accuracy for happy and angry faces did not differ between VS patients with or without a facial paresis, although exploratory analyses suggest that the ability to recognize emotions in angry facial expressions was slightly more impaired in patients with facial paresis. The findings are discussed in the context of the effects of facial paresis on emotion detection and the role of facial mimicry, in particular, as an important mechanism for facial emotion processing and understanding.
Affiliation(s)
- Stephanie S A H Blom: Department of Psychology, Utrecht University, Utrecht, The Netherlands; William James Center for Research, ISPA - Instituto Universitário, Lisbon, Portugal
- H P M Kunst: Department of Otolaryngology, Radboud Institute for Health Sciences, Radboud University Medical Center, Nijmegen, The Netherlands; Department of Otolaryngology, Maastricht UMC+, Maastricht, The Netherlands
- Capi C Wever: Department of Otolaryngology - Head & Neck Surgery, Leiden University Medical Center, Leiden, The Netherlands
- Gün R Semin: Department of Psychology, Utrecht University, Utrecht, The Netherlands; William James Center for Research, ISPA - Instituto Universitário, Lisbon, Portugal
13
Facial expressions can be categorized along the upper-lower facial axis, from a perceptual perspective. Atten Percept Psychophys 2021; 83:2159-2173. [PMID: 33759116; DOI: 10.3758/s13414-021-02281-6]
Abstract
A critical question, fundamental for building models of emotion, is how to categorize emotions. Previous studies have typically taken one of two approaches: (a) they focused on pre-perceptual visual cues, i.e., how salient facial features or configurations are displayed; or (b) they focused on post-perceptual affective experiences, i.e., how emotions affect behavior. In this study, we attempted to group emotions at a peri-perceptual processing level: it is well known that humans perceive different facial expressions differently; therefore, can we classify facial expressions into distinct categories in terms of their perceptual similarities? Here, using a novel non-lexical paradigm, we assessed the perceptual dissimilarities between 20 facial expressions using reaction times. Multidimensional-scaling analysis revealed that facial expressions were organized predominantly along the upper-lower face axis. Cluster analysis of behavioral data delineated three superordinate categories, and eye-tracking measurements validated these clustering results. Interestingly, these superordinate categories can be conceptualized according to how facial displays interact with acoustic communication: one group comprises expressions with salient mouth features, which likely link to species-specific vocalization, for example, crying and laughing. The second group comprises visual displays with diagnostic features in both the mouth and the eye regions; they are not directly articulable but can be expressed prosodically, for example, sad and angry. Expressions in the third group are also whole-face expressions but are completely independent of vocalization, and are likely blends of two or more elementary expressions. We propose a theoretical framework to interpret this tripartite division, in which distinct expression subsets are interpreted as successive phases in an evolutionary chain.
14
Kinchella J, Guo K. Facial Expression Ambiguity and Face Image Quality Affect Differently on Expression Interpretation Bias. Perception 2021; 50:328-342. [PMID: 33709837; DOI: 10.1177/03010066211000270]
Abstract
We often show an invariant or comparable recognition performance for perceiving prototypical facial expressions, such as happiness and anger, under different viewing settings. However, it is unclear to what extent the categorisation of ambiguous expressions and associated interpretation bias are invariant in degraded viewing conditions. In this exploratory eye-tracking study, we systematically manipulated both facial expression ambiguity (via morphing happy and angry expressions in different proportions) and face image clarity/quality (via manipulating image resolution) to measure participants' expression categorisation performance, perceived expression intensity, and associated face-viewing gaze distribution. Our analysis revealed that increasing facial expression ambiguity and decreasing face image quality induced the opposite direction of expression interpretation bias (negativity vs. positivity bias, or increased anger vs. increased happiness categorisation), the same direction of deterioration impact on rating expression intensity, and qualitatively different influence on face-viewing gaze allocation (decreased gaze at eyes but increased gaze at mouth vs. stronger central fixation bias). These novel findings suggest that in comparison with prototypical facial expressions, our visual system has less perceptual tolerance in processing ambiguous expressions which are subject to viewing condition-dependent interpretation bias.
15
Stevenson N, Guo K. Image Valence Modulates the Processing of Low-Resolution Affective Natural Scenes. Perception 2020; 49:1057-1068. [PMID: 32924858; DOI: 10.1177/0301006620957213]
Abstract
In natural vision, noisy and distorted visual inputs often change our perceptual strategy in scene perception. However, it is unclear to what extent the affective meaning embedded in degraded natural scenes modulates our scene understanding and associated eye movements. In this eye-tracking experiment, we presented natural scene images of different categories and levels of emotional valence (high-positive, medium-positive, neutral/low-positive, medium-negative, and high-negative) and systematically investigated participants' perceptual sensitivity (image valence categorization and arousal rating) and image-viewing gaze behaviour in response to changes in image resolution. Our analysis revealed that reducing image resolution led to decreased valence recognition and arousal rating, a decreased number of fixations in image viewing but increased individual fixation durations, and a stronger central fixation bias. Furthermore, these distortion effects were modulated by scene valence, with less deterioration impact on the valence categorization of negatively valenced scenes and on gaze behaviour when viewing highly emotionally charged (high-positive and high-negative) scenes. It seems that our visual system shows a valence-modulated susceptibility to image distortions in scene perception.
16
Øvervoll M, Schettino I, Suzuki H, Okubo M, Laeng B. Filtered beauty in Oslo and Tokyo: A spatial frequency analysis of facial attractiveness. PLoS One 2020; 15:e0227513. [PMID: 31935264; PMCID: PMC6959585; DOI: 10.1371/journal.pone.0227513]
Abstract
Images of European female and male faces were digitally processed to generate spatial frequency (SF) filtered images containing only a narrow band of visual information within the Fourier spectrum. The original unfiltered images and four SF-filtered images (low, medium-low, medium-high, and high) were then paired in trials that kept SF band and face gender constant, and participants made a forced-choice decision about which of the two faces was more attractive. In this way, we aimed to identify the specific SF bands in which forced-choice preferences corresponded best to those made when viewing the natural, broadband facial images. We found that aesthetic preferences dissociated across SFs and face gender, but similarly for participants from Asia (Japan) and Europe (Norway). Specifically, for female faces, preferences for SF-filtered images corresponded best to preferences for the broadband images in the highest SF band (about 48-77 cycles per face). In contrast, for male faces, the medium-low SF band (about 11-19 cpf) corresponded best to choices made with the natural facial images. Eye tracking provided converging evidence for these gender-related SF dissociations. We suggest that the mobile, communicative parts of the face carry greater aesthetic relevance for female faces and, conversely, that the rigid, structural parts do so for male faces.
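Band-pass SF filtering of this kind can be sketched as masking an annulus of frequencies in the Fourier domain. The hard-edged mask below is an assumption for illustration; published stimuli typically use smooth (e.g. Gaussian or Butterworth) band edges, and the band limits here are arbitrary.

```python
import numpy as np

def bandpass_filter(img: np.ndarray, low_cpi: float, high_cpi: float) -> np.ndarray:
    """Keep only Fourier components between low_cpi and high_cpi
    cycles per image, using a hard annular mask."""
    h, w = img.shape
    fy = np.fft.fftfreq(h) * h   # vertical frequencies in cycles per image
    fx = np.fft.fftfreq(w) * w   # horizontal frequencies in cycles per image
    radius = np.sqrt(fy[:, None] ** 2 + fx[None, :] ** 2)
    mask = (radius >= low_cpi) & (radius <= high_cpi)
    return np.real(np.fft.ifft2(np.fft.fft2(img) * mask))

# A pure 4-cycle grating is removed by an 11-19 cpi band-pass,
# while a 12-cycle grating passes through unchanged.
x = np.arange(128)
grating4 = np.sin(2 * np.pi * 4 * x / 128)[None, :] * np.ones((128, 1))
filtered = bandpass_filter(grating4, 11, 19)
```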
Affiliation(s)
- Morten Øvervoll
- Department of Psychology, University of Tromsø (The Arctic University of Norway), Tromsø, Norway
- Hikaru Suzuki
- Department of Psychology, Senshu University, Tokyo, Japan
- Matia Okubo
- Department of Psychology, Senshu University, Tokyo, Japan
- Bruno Laeng
- Department of Psychology, University of Oslo, Oslo, Norway
- RITMO Centre for Interdisciplinary Studies of Rhythm, Time and Motion, University of Oslo, Oslo, Norway
17
Derya D, Kang J, Kwon DY, Wallraven C. Facial Expression Processing Is Not Affected by Parkinson's Disease, but by Age-Related Factors. Front Psychol 2019; 10:2458. [PMID: 31798486] [PMCID: PMC6868040] [DOI: 10.3389/fpsyg.2019.02458]
Abstract
The question of whether facial expression processing is impaired in Parkinson's disease (PD) patients has so far yielded equivocal results; existing studies, however, have focused on recognition tasks with static images of six standard emotional facial expressions. Given that non-verbal communication contains both emotional and non-emotional, conversational expressions, and that input to the brain is usually dynamic, here we address the question of potential facial expression processing differences in a novel format: we test a range of conversational and emotional, dynamic facial expressions in three groups, namely PD patients (n = 20), age- and education-matched older healthy controls (n = 20), and younger adult healthy controls (n = 20). This setup allows us to address both effects of PD and age-related differences. All groups performed a rating task in which 12 rating dimensions were used to assess evaluative processing of 27 expression videos from six different actors. We found that ratings were overall consistent across groups, with several rating dimensions (such as arousal or outgoingness) correlating strongly with the expressions' motion energy content as measured by optic flow analysis. Most importantly, the PD group did not differ on any rating dimension from the older healthy control group (HCG), indicating highly similar evaluative processing. Both older groups, however, did differ significantly from the younger adult HCG on several rating scales. Looking more closely, older participants rated negative expressions as more positive than younger participants did, but also as less natural, persuasive, empathic, and sincere. We interpret these findings in the context of the positivity effect and in-group processing advantages. Overall, our findings do not support strong processing deficits due to PD, but rather point to age-related differences in facial expression processing.
Affiliation(s)
- Dilara Derya
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- June Kang
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Do-Young Kwon
- Department of Neurology, Korea University Ansan Hospital, Korea University College of Medicine, Ansan-si, South Korea
- Christian Wallraven
- Department of Brain and Cognitive Engineering, Korea University, Seoul, South Korea
- Department of Artificial Intelligence, Korea University, Seoul, South Korea
18
Gunes H, Celiktutan O, Sariyanidi E. Live human-robot interactive public demonstrations with automatic emotion and personality prediction. Philos Trans R Soc Lond B Biol Sci 2019; 374:20180026. [PMID: 30853000] [PMCID: PMC6452249] [DOI: 10.1098/rstb.2018.0026]
Abstract
Communication with humans is a multi-faceted phenomenon in which emotions, personality, and non-verbal as well as verbal behaviours play a significant role, and human-robot interaction (HRI) technologies should respect this complexity to achieve efficient and seamless communication. In this paper, we describe the design and execution of five public demonstrations made with two HRI systems that aimed at automatically sensing and analysing human participants' non-verbal behaviour and predicting their facial action units, facial expressions and personality in real time while they interacted with a small humanoid robot. We provide an overview of the challenges faced, together with the lessons learned from those demonstrations, in order to better inform the science and engineering fields on how to design and build robots with more purposeful interaction capabilities. This article is part of the theme issue 'From social brains to social robots: applying neurocognitive insights to human-robot interaction'.
Affiliation(s)
- Hatice Gunes
- Department of Computer Science and Technology, University of Cambridge, Cambridge CB3 0FD, UK
- Oya Celiktutan
- Centre for Robotics Research, Department of Informatics, King's College London, London WC2R 2LS, UK
19
Macoir J, Hudon C, Tremblay MP, Laforce RJ, Wilson MA. The contribution of semantic memory to the recognition of basic emotions and emotional valence: Evidence from the semantic variant of primary progressive aphasia. Soc Neurosci 2019; 14:705-716. [PMID: 30714843] [DOI: 10.1080/17470919.2019.1577295]
Abstract
There is compelling evidence that semantic memory is involved in emotion recognition. However, its contribution to the recognition of emotional valence and basic emotions remains unclear. We compared the performance of 10 participants with the semantic variant of primary progressive aphasia (svPPA), a clinical model of semantic memory impairment, to that of 33 healthy participants using three experimental tasks assessing the recognition of: 1) emotional valence conveyed by photographic scenes, 2) basic emotions conveyed by facial expressions, and 3) basic emotions conveyed by prosody sounds. Individuals with svPPA showed significant deficits in the recognition of emotional valence and basic emotions (except happiness and surprise conveyed by facial expressions). However, the performance of the two groups was comparable when the performance on tests assessing semantic memory was added as a covariate in the analyses. Altogether, these results suggest that semantic memory contributes to the recognition of emotional valence and basic emotions. By examining the recognition of emotional valence and basic emotions in individuals with selective semantic memory loss, our results contribute to the refinement of current theories on the role of semantic memory in emotion recognition.
Affiliation(s)
- Joël Macoir
- Faculté de médecine, Département de réadaptation, Université Laval, Québec, QC, Canada
- Centre de recherche CERVO - Brain Research Centre, Québec, QC, Canada
- Carol Hudon
- Centre de recherche CERVO - Brain Research Centre, Québec, QC, Canada
- École de psychologie, Université Laval, Québec, QC, Canada
- Marie-Pier Tremblay
- Centre de recherche CERVO - Brain Research Centre, Québec, QC, Canada
- École de psychologie, Université Laval, Québec, QC, Canada
- Robert Jr Laforce
- Département des sciences neurologiques, Clinique interdisciplinaire de mémoire du CHU de Québec, Québec, QC, Canada
- Faculté de médecine, Département de médecine, Université Laval, Québec, QC, Canada
- Maximiliano A Wilson
- Faculté de médecine, Département de réadaptation, Université Laval, Québec, QC, Canada
- Centre de recherche CERVO - Brain Research Centre, Québec, QC, Canada
20
Smith FW, Rossit S. Identifying and detecting facial expressions of emotion in peripheral vision. PLoS One 2018; 13:e0197160. [PMID: 29847562] [PMCID: PMC5976168] [DOI: 10.1371/journal.pone.0197160]
Abstract
Facial expressions of emotion are signals of high biological value. Whilst recognition of facial expressions has been much studied in central vision, the ability to perceive these signals in peripheral vision has seen only limited research to date, despite the potential adaptive advantages of such perception. In the present experiment, we investigated facial expression recognition and detection performance for each of the basic emotions (plus neutral) at up to 30 degrees of eccentricity. We demonstrate, as expected, a decrease in recognition and detection performance with increasing eccentricity, with happiness and surprise being the best recognized expressions in peripheral vision. In detection, however, while happiness and surprise are still well detected, fear is also well detected, and indeed is better detected than recognized. Our results demonstrate that task constraints shape the perception of expression in peripheral vision and provide novel evidence that detection and recognition rely on partially separate underlying mechanisms, with the latter more dependent on the higher spatial frequency content of the face stimulus.
Affiliation(s)
- Fraser W. Smith
- School of Psychology, University of East Anglia, Norwich, United Kingdom
- Stephanie Rossit
- School of Psychology, University of East Anglia, Norwich, United Kingdom
21
Gur M. Very small faces are easily discriminated under long and short exposure times. J Neurophysiol 2018; 119:1599-1607. [DOI: 10.1152/jn.00622.2017]
Abstract
Acuity measures related to overall face size that can be perceived have not been studied quantitatively. Consequently, experimenters use a wide range of sizes (usually large) without always providing a rationale for their choices. I studied thresholds for face discrimination by presenting both long (500 ms)- and short (17, 33, 50 ms)-duration stimuli. Face width threshold for the long presentation was ~0.2°, and thresholds for the flashed stimuli ranged from ~0.3° for the 17-ms flash to ~0.23° for the 33- and 50-ms flashes. Such thresholds indicate that face stimuli used in physiological or psychophysical experiments are often too large to tap human fine spatial capabilities, and thus interpretations of such experiments should take into account face discrimination acuity. The 0.2° threshold found in this study is incompatible with the prevalent view that faces are represented by a population of specialized “face cells” because those cells do not respond to <1° stimuli and are optimally tuned to >4° faces. Also, the ability to discriminate small, high-spatial frequency flashed face stimuli is inconsistent with models suggesting that fixational drift transforms retinal spatial patterns into a temporal code. It seems therefore that the small image motions occurring during fixation do not disrupt our perception, because all relevant processing is over with before those motions can have significant effects. NEW & NOTEWORTHY Although face perception is central to human behavior, the minimally perceived face size is not known. This study shows that humans can discriminate very small (~0.2°) faces. Furthermore, even when flashed for tens of milliseconds, ~0.25° faces can be discriminated. Such fine acuity should impact modeling of physiological mechanisms of face perception. The ability to discriminate flashed faces where there is almost no eye movement indicates that eye drift is not essential for visibility.
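The thresholds above are expressed in degrees of visual angle. The standard conversion between physical stimulus size, viewing distance, and visual angle is angle = 2·atan(size / (2·distance)); the 57 cm viewing distance below is a hypothetical example (chosen because at 57 cm, 1 cm subtends roughly 1 degree).

```python
import math

def visual_angle_deg(size: float, distance: float) -> float:
    """Visual angle (degrees) subtended by a stimulus of a given
    physical size at a given viewing distance (same units for both)."""
    return math.degrees(2 * math.atan(size / (2 * distance)))

def size_for_angle(angle_deg: float, distance: float) -> float:
    """Physical size needed to subtend a given visual angle."""
    return 2 * distance * math.tan(math.radians(angle_deg) / 2)

# At a hypothetical 57 cm viewing distance, a 0.2-degree face is only
# about 0.2 cm (2 mm) wide on the screen.
width_cm = size_for_angle(0.2, 57.0)
```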
Affiliation(s)
- Moshe Gur
- Department of Biomedical Engineering, Technion, Haifa, Israel
22
How socioemotional setting modulates late-stage conflict resolution processes in the lateral prefrontal cortex. Cogn Affect Behav Neurosci 2018; 18:521-535. [DOI: 10.3758/s13415-018-0585-5]
23
Abstract
Emotions correspond to the execution of a number of computations by the central nervous system. Previous research has studied the hypothesis that some of these computations yield visually identifiable facial muscle movements. Here, we study the supplemental hypothesis that some of these computations yield facial blood flow changes unique to the category and valence of each emotion. These blood flow changes are visible as specific facial color patterns to observers, who can then successfully decode the emotion. We present converging computational and behavioral evidence in favor of this hypothesis. Our studies demonstrate that people identify the correct emotion category and valence from these facial colors, even in the absence of any facial muscle movement. Facial expressions of emotion in humans are believed to be produced by contracting one's facial muscles, generally called action units. However, the surface of the face is also innervated with a large network of blood vessels. Blood flow variations in these vessels yield visible color changes on the face. Here, we study the hypothesis that these visible facial colors allow observers to successfully transmit and visually interpret emotion even in the absence of facial muscle activation. To study this hypothesis, we address the following two questions. Are observable facial colors consistent within and differential between emotion categories and positive vs. negative valence? And does the human visual system use these facial colors to decode emotion from faces? These questions suggest the existence of an important, unexplored mechanism of the production of facial expressions of emotion by a sender and their visual interpretation by an observer. The results of our studies provide evidence in favor of our hypothesis. We show that people successfully decode emotion using these color features, even in the absence of any facial muscle activation. We also demonstrate that this color signal is independent from that provided by facial muscle movements. These results support a revised model of the production and perception of facial expressions of emotion where facial color is an effective mechanism to visually transmit and decode emotion.
24
Guo K, Soornack Y, Settle R. Expression-dependent susceptibility to face distortions in processing of facial expressions of emotion. Vision Res 2018; 157:112-122. [PMID: 29496513] [DOI: 10.1016/j.visres.2018.02.001]
Abstract
Our capability of recognizing facial expressions of emotion under different viewing conditions implies the existence of an invariant expression representation. As natural visual signals are often distorted and our perceptual strategy changes with external noise level, it is essential to understand how expression perception is susceptible to face distortion and whether the same facial cues are used to process high- and low-quality face images. We systematically manipulated face image resolution (experiment 1) and blur (experiment 2), and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. Our analysis revealed a reasonable tolerance to face distortion in expression perception. Reducing image resolution up to 48 × 64 pixels or increasing image blur up to 15 cycles/image had little impact on expression assessment and associated gaze behaviour. Further distortion led to decreased expression categorization accuracy and intensity rating, increased reaction time and fixation duration, and stronger central fixation bias which was not driven by distortion-induced changes in local image saliency. Interestingly, the observed distortion effects were expression-dependent with less deterioration impact on happy and surprise expressions, suggesting this distortion-invariant facial expression perception might be achieved through the categorical model involving a non-linear configural combination of local facial features.
Affiliation(s)
- Kun Guo
- School of Psychology, University of Lincoln, UK
25
Liedtke C, Kohl W, Kret ME, Koelkebeck K. Emotion recognition from faces with in- and out-group features in patients with depression. J Affect Disord 2018; 227:817-823. [PMID: 29689696] [DOI: 10.1016/j.jad.2017.11.085]
Abstract
BACKGROUND Previous research has shown that context (e.g. culture) can affect speed and accuracy when identifying facial expressions of emotion. Patients with major depressive disorder (MDD) are known to have deficits in the identification of facial expressions, tending to give rather stereotypical judgements. Since healthy individuals perceive situations that conflict with their own cultural values more negatively, this pattern might be even stronger in MDD patients, whose altered mood could result in stronger biases. In this study we investigated the effect of cultural contextual cues on emotion identification in depression. METHODS Emotional faces were presented for 100 ms to 34 patients with MDD and matched controls. Stimulus faces were either covered by a cap and scarf (in-group condition) or by an Islamic headdress (niqab; out-group condition). Speed and accuracy were evaluated. RESULTS Across groups, fearful faces were identified faster and more accurately in the out-group than in the in-group condition. Sadness was also identified more accurately in the out-group condition. In comparison, happy faces were identified more accurately (and tended to be identified faster) in the in-group condition. Furthermore, MDD patients were slower than controls, yet not more accurate, in identifying expressions of emotion. LIMITATIONS All patients were on pharmacological treatment. Participants' political orientation was not assessed. The experiment differs from real-life situations. CONCLUSION While our results underline findings that cultural context has a general impact on emotion identification, this effect was not more prominent in patients with MDD.
Affiliation(s)
- Carla Liedtke
- Department of Psychiatry and Psychotherapy, School of Medicine, University of Muenster, Albert-Schweitzer-Campus 1, Building A 9, 48149 Muenster, Germany
- Waldemar Kohl
- Department of Psychiatry and Psychotherapy, School of Medicine, University of Muenster, Albert-Schweitzer-Campus 1, Building A 9, 48149 Muenster, Germany
- Mariska Esther Kret
- Cognitive Psychology Unit and Leiden Institute of Brain and Cognition, Leiden University, Postzone C2-S, P.O. Box 9600, 2300 RC Leiden, The Netherlands
- Katja Koelkebeck
- Department of Psychiatry and Psychotherapy, School of Medicine, University of Muenster, Albert-Schweitzer-Campus 1, Building A 9, 48149 Muenster, Germany
26
Sex differences in facial emotion recognition across varying expression intensity levels from videos. PLoS One 2018; 13:e0190634. [PMID: 29293674] [PMCID: PMC5749848] [DOI: 10.1371/journal.pone.0190634]
Abstract
There has been much research on sex differences in the ability to recognise facial expressions of emotions, with results generally showing a female advantage in reading emotional expressions from the face. However, most research to date has used static images and/or ‘extreme’ examples of facial expressions. Therefore, little is known about how expression intensity and dynamic stimuli might affect the commonly reported female advantage in facial emotion recognition. The current study investigated sex differences in accuracy of response (Hu; unbiased hit rates) and response latencies for emotion recognition using short (1 s) video stimuli of 10 facial emotion expressions (anger, disgust, fear, sadness, surprise, happiness, contempt, pride, embarrassment, neutral) at three levels of expression intensity (low, intermediate, high) in an adolescent and adult sample (N = 111; 51 male, 60 female) aged between 16 and 45 (M = 22.2, SD = 5.7). Overall, females recognised facial emotions more accurately than males and were faster in correctly recognising them. The female advantage in reading expressions from the faces of others was unaffected by expression intensity level and emotion category. The effects were specific to recognition of emotions, as males and females did not differ in the recognition of neutral faces. Together, the results show a robust sex difference favouring females in facial emotion recognition using video stimuli across a wide range of emotions and expression intensities.
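The unbiased hit rate Hu mentioned above (Wagner, 1993) corrects raw accuracy for response bias: for each category it is the squared number of hits divided by the product of how often that category was presented and how often that response was used. A minimal sketch with a hypothetical two-category confusion matrix:

```python
import numpy as np

def unbiased_hit_rates(confusion: np.ndarray) -> np.ndarray:
    """Wagner's (1993) unbiased hit rate Hu per category.
    Rows = stimulus category, columns = response given."""
    hits = np.diag(confusion).astype(float)
    stim_totals = confusion.sum(axis=1)   # how often each category was shown
    resp_totals = confusion.sum(axis=0)   # how often each response was used
    return hits ** 2 / (stim_totals * resp_totals)

# Hypothetical confusion matrix (e.g. anger vs. fear): 8/10 anger trials
# correct, but the "anger" response was used 12 times in total, so
# Hu(anger) = 8^2 / (10 * 12) is lower than the raw hit rate 0.8.
cm = np.array([[8, 2],
               [4, 6]])
hu = unbiased_hit_rates(cm)
```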
27
Franklin RG, Zebrowitz LA. Age differences in emotion recognition: Task demands or perceptual dedifferentiation? Exp Aging Res 2017; 43:453-466. [DOI: 10.1080/0361073x.2017.1369628]
Affiliation(s)
- Robert G. Franklin
- Department of Behavioral Sciences, Anderson University, Anderson, South Carolina, USA
28
Gülkesen KH, Isleyen F, Cinemre B, Samur MK, Sen Kaya S, Zayim N. A Web-based Game for Teaching Facial Expressions to Schizophrenic Patients. Appl Clin Inform 2017; 8:719-730. [PMID: 28696479] [PMCID: PMC6220685] [DOI: 10.4338/aci-2016-10-ra-0172]
Abstract
BACKGROUND Recognizing facial expressions is an important social skill. In some psychological disorders, such as schizophrenia, loss of this skill may complicate the patient's daily life. Prior research has shown that information technology may help to develop facial expression recognition skills through educational software and games. OBJECTIVES To examine whether a computer game designed for teaching facial expressions would improve the facial expression recognition skills of patients with schizophrenia. METHODS We developed a website composed of eight serious games. Thirty-two patients were given a pre-test composed of 21 facial expression photographs; 18 patients were in the study group and 14 in the control group. Patients in the study group were asked to play the games on the website. After a period of one month, all patients completed a post-test. RESULTS In the pre-test, the median number of correct answers (out of 21) was 17.5 in the control group and 16.5 in the study group. The median post-test score was 18 in the control group (p=0.052) and 20 in the study group (p<0.001). CONCLUSIONS Computer games may be used to educate people who have difficulty recognizing facial expressions.
Affiliation(s)
- Kemal Hakan Gülkesen
- Department of Biostatistics and Medical Informatics, Faculty of Medicine, Akdeniz University, Antalya, Turkey
29
Chamberland J, Roy-Charland A, Perron M, Dickinson J. Distinction between fear and surprise: an interpretation-independent test of the perceptual-attentional limitation hypothesis. Soc Neurosci 2016; 12:751-768. [PMID: 27767385] [DOI: 10.1080/17470919.2016.1251964]
Abstract
The perceptual-attentional limitation hypothesis posits that the confusion between emotional facial expressions of fear and surprise may be due to their visual similarity, with shared muscle movements. In Experiment 1 full face images of fear and surprise varying as a function of distinctiveness (mouth index, brow index, or both indices) were displayed in a gender oddball task. Experiment 2, in a similar task, directed attention toward the eye or mouth region with a blurring technique. The current two studies used response time and event-related potentials (ERP) to test the perceptual-attentional limitation hypothesis. While ERP results for Experiment 1 suggested that individuals may not have perceived a difference between the emotional expressions in any of the conditions, response time results suggested that individuals processed a difference between fear and surprise when a distinctive cue was in the mouth. With directed attention in Experiment 2, ERP results indicated that individuals were capable of detecting a difference in all the conditions. In effect, the current two experiments suggest that participants display difficulty in distinguishing the prototypes of fear and surprise with the eye region, which may be due to a lack of attention to that region, providing support for the attentional limitation hypothesis.
Affiliation(s)
- Melanie Perron
- Department of Psychology, Laurentian University, Sudbury, Canada
- Joël Dickinson
- Department of Psychology, Laurentian University, Sudbury, Canada
30
Delis I, Chen C, Jack RE, Garrod OGB, Panzeri S, Schyns PG. Space-by-time manifold representation of dynamic facial expressions for emotion categorization. J Vis 2016; 16:14. [PMID: 27305521] [PMCID: PMC4927208] [DOI: 10.1167/16.8.14]
Abstract
Visual categorization is the brain computation that reduces high-dimensional information in the visual environment into a smaller set of meaningful categories. An important problem in visual neuroscience is to identify the visual information that the brain must represent and then use to categorize visual inputs. Here we introduce a new mathematical formalism-termed space-by-time manifold decomposition-that describes this information as a low-dimensional manifold separable in space and time. We use this decomposition to characterize the representations used by observers to categorize the six classic facial expressions of emotion (happy, surprise, fear, disgust, anger, and sad). By means of a Generative Face Grammar, we presented random dynamic facial movements on each experimental trial and used subjective human perception to identify the facial movements that correlate with each emotion category. When the random movements projected onto the categorization manifold region corresponding to one of the emotion categories, observers categorized the stimulus accordingly; otherwise they selected "other." Using this information, we determined both the Action Unit and temporal components whose linear combinations lead to reliable categorization of each emotion. In a validation experiment, we confirmed the psychological validity of the resulting space-by-time manifold representation. Finally, we demonstrated the importance of temporal sequencing for accurate emotion categorization and identified the temporal dynamics of Action Unit components that cause typical confusions between specific emotions (e.g., fear and surprise) as well as those resolving these confusions.
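The idea of a decomposition separable in space and time can be sketched with a plain rank-k non-negative matrix factorization: approximate an (Action Unit x time) matrix as a product of non-negative spatial and temporal components. This is a simplified stand-in for the paper's sample-based tri-factorization, and the data below are synthetic.

```python
import numpy as np

def space_by_time_nmf(data, k, iters=500, seed=0):
    """Approximate a non-negative (space x time) matrix as W @ H with
    spatial components W (space x k) and temporal components H (k x time),
    using Lee-Seung multiplicative updates."""
    rng = np.random.default_rng(seed)
    m, n = data.shape
    W = rng.random((m, k)) + 1e-6
    H = rng.random((k, n)) + 1e-6
    for _ in range(iters):
        # Multiplicative updates preserve non-negativity of W and H.
        H *= (W.T @ data) / (W.T @ W @ H + 1e-12)
        W *= (data @ H.T) / (W @ H @ H.T + 1e-12)
    return W, H

# Synthetic data: 6 "action units" x 20 time points built from 2 components.
rng = np.random.default_rng(1)
X = rng.random((6, 2)) @ rng.random((2, 20))
W, H = space_by_time_nmf(X, k=2)
rel_err = np.linalg.norm(X - W @ H) / np.linalg.norm(X)
```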
31
Wingenbach TSH, Ashwin C, Brosnan M. Validation of the Amsterdam Dynamic Facial Expression Set--Bath Intensity Variations (ADFES-BIV): A Set of Videos Expressing Low, Intermediate, and High Intensity Emotions. PLoS One 2016; 11:e0147112. [PMID: 26784347] [PMCID: PMC4718603] [DOI: 10.1371/journal.pone.0147112]
Abstract
Most of the existing sets of facial expressions of emotion contain static photographs. While increasing demand for stimuli with enhanced ecological validity in facial emotion recognition research has led to the development of video stimuli, these typically involve full-blown (apex) expressions. However, variations of intensity in emotional facial expressions occur in real life social interactions, with low intensity expressions of emotions frequently occurring. The current study therefore developed and validated a set of video stimuli portraying three levels of intensity of emotional expressions, from low to high intensity. The videos were adapted from the Amsterdam Dynamic Facial Expression Set (ADFES) and termed the Bath Intensity Variations (ADFES-BIV). A healthy sample of 92 people recruited from the University of Bath community (41 male, 51 female) completed a facial emotion recognition task including expressions of 6 basic emotions (anger, happiness, disgust, fear, surprise, sadness) and 3 complex emotions (contempt, embarrassment, pride) that were expressed at three different intensities of expression and neutral. Accuracy scores (raw and unbiased (Hu) hit rates) were calculated, as well as response times. Accuracy rates above chance level of responding were found for all emotion categories, producing an overall raw hit rate of 69% for the ADFES-BIV. The three intensity levels were validated as distinct categories, with higher accuracies and faster responses to high intensity expressions than intermediate intensity expressions, which had higher accuracies and faster responses than low intensity expressions. To further validate the intensities, a second study with standardised display times was conducted replicating this pattern. The ADFES-BIV has greater ecological validity than many other emotion stimulus sets and allows for versatile applications in emotion research. It can be retrieved free of charge for research purposes from the corresponding author.
Affiliation(s)
- Chris Ashwin
- Department of Psychology, University of Bath, Bath, United Kingdom
- Mark Brosnan
- Department of Psychology, University of Bath, Bath, United Kingdom
32
Gerhardsson A, Högman L, Fischer H. Viewing distance matter to perceived intensity of facial expressions. Front Psychol 2015; 6:944. [PMID: 26191035 PMCID: PMC4488603 DOI: 10.3389/fpsyg.2015.00944] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/11/2015] [Accepted: 06/22/2015] [Indexed: 11/17/2022] Open
Abstract
In our daily perception of facial expressions, we depend on an ability to generalize across the varied distances at which they may appear. This is important to how we interpret both the quality and the intensity of an expression. Previous research has not investigated whether this so-called perceptual constancy also applies to the experienced intensity of facial expressions. Using a psychophysical measure (the Borg CR100 scale), the present study aimed to further investigate perceptual constancy of happy and angry facial expressions at varied sizes, a proxy for varying viewing distances. Seventy-one participants (42 female) rated the intensity and valence of facial expressions varying in distance and intensity. The results demonstrated that the perceived intensity (PI) of an emotional facial expression depends on the distance between the face and the person perceiving it. An interaction effect indicated that close-up faces are perceived as more intense than faces at a distance, and that this effect is stronger the more intense the facial expression truly is. The present study raises considerations regarding constancy of the PI of happy and angry facial expressions at varied distances.
Affiliation(s)
- Andreas Gerhardsson
- Stress Research Institute, Stockholm University, Stockholm, Sweden; Department of Psychology, Stockholm University, Stockholm, Sweden
- Lennart Högman
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Håkan Fischer
- Department of Psychology, Stockholm University, Stockholm, Sweden
33
Takahashi N, Liu CH, Yamada H. Adaptation aftereffects may decipher Ophelia's facial expression. Perception 2015; 43:1393-9. [PMID: 25669055 DOI: 10.1068/p7838] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
Ophelia is a 19th-century painting by John Everett Millais. It shows Ophelia with a blank look, intended to encourage the viewer's own imagination (Rosenfeld & Smith, 2007, Millais. London: Tate Publishing). Using the face adaptation paradigm, we attempted to identify the subtle emotion a viewer might perceive in Ophelia's expression. Since adapting to an expression is known to lower the viewer's subsequent sensitivity to that expression, we hypothesized that adaptation to Ophelia would impair identification of a similar expression. Participants adapted to Ophelia's face before identifying the expression of a schematic face that was variably morphed between a neutral expression and each of the six basic expressions. Results showed a selective impairment of identification for sadness, suggesting that sadness was what participants perceived. The study demonstrates that high-level adaptation can reveal aesthetic experience and its neural mechanisms.
34
Guo K, Shaw H. Face in profile view reduces perceived facial expression intensity: an eye-tracking study. Acta Psychol (Amst) 2015; 155:19-28. [PMID: 25531122 DOI: 10.1016/j.actpsy.2014.12.001] [Citation(s) in RCA: 32] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2014] [Revised: 11/28/2014] [Accepted: 12/03/2014] [Indexed: 10/24/2022] Open
Abstract
Recent studies measuring the facial expressions of emotion have focused primarily on the perception of frontal face images. As we frequently encounter expressive faces from different viewing angles, a mechanism allowing invariant expression perception would be advantageous to our social interactions. Although a couple of studies have indicated comparable expression categorization accuracy across viewpoints, it is unknown how perceived expression intensity and associated gaze behaviour change across viewing angles. Differences could arise because the diagnostic cues from local facial features for decoding expressions could vary with viewpoint. Here we manipulated the orientation of faces (frontal, mid-profile, and profile view) displaying six common facial expressions of emotion, and measured participants' expression categorization accuracy, perceived expression intensity, and associated gaze patterns. In comparison with frontal faces, profile faces slightly reduced identification rates for disgust and sad expressions, but significantly decreased perceived intensity for all tested expressions. Although viewpoint had a quantitative, expression-specific influence on the proportion of fixations directed at local facial features, the qualitative gaze distribution within facial features (e.g., the eyes tended to attract the highest proportion of fixations, followed by the nose and then the mouth region) was independent of viewpoint and expression type. Our results suggest that viewpoint-invariant facial expression processing is categorical, which could be linked to a viewpoint-invariant holistic gaze strategy for extracting expressive facial cues.
35
Abstract
Understanding the different categories of facial expressions of emotion that we use regularly is essential for gaining insights into human cognition and affect, as well as for the design of computational models and perceptual interfaces. Past research on facial expressions of emotion has focused on the study of six basic categories: happiness, surprise, anger, sadness, fear, and disgust. However, many more facial expressions of emotion exist and are used regularly by humans. This paper describes an important group of expressions, which we call compound emotion categories. Compound emotions are those that can be constructed by combining basic component categories to create new ones. For instance, happily surprised and angrily surprised are two distinct compound emotion categories. The present work defines 21 distinct emotion categories. Sample images of their facial expressions were collected from 230 human subjects. A Facial Action Coding System analysis shows that the production of these 21 categories is different but consistent with the subordinate categories they represent (e.g., a happily surprised expression combines muscle movements observed in happiness and surprise). We show that these differences are sufficient to distinguish between the 21 defined categories. We then use a computational model of face perception to demonstrate that most of these categories are also visually discriminable from one another.
36
Guo K. Size-invariant facial expression categorization and associated gaze allocation within social interaction space. Perception 2014; 42:1027-42. [PMID: 24494434 DOI: 10.1068/p7552] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
As faces often appear under very different viewing conditions (eg brightness, viewing angle, or viewing distance), invariant recognition of facial information is a key to our social interactions. Although we would clearly benefit from differentiating facial expressions (eg angry vs happy) at a distance, there is surprisingly little research examining how expression categorization and associated gaze allocation are affected by viewing distance within the range of typical social space. In this study I systematically varied the size of faces displaying six basic facial expressions of emotion at varying intensities, to mimic viewing distances ranging from arm's length to 5 m, and employed a self-paced expression categorization task to measure participants' categorization performance and associated gaze patterns. Irrespective of the displayed expression and its intensity, participants showed indistinguishable categorization accuracy and reaction time across the tested face sizes. Reducing face size decreased the number of fixations directed at the faces but increased individual fixation durations, and shifted gaze distribution from scanning all key internal facial features to fixating mainly at the central face region. The results suggest size-invariant facial expression categorization within social interaction distance, which could be linked to a holistic gaze strategy for extracting expressive facial cues.
Affiliation(s)
- Kun Guo
- School of Psychology, University of Lincoln, Lincoln LN6 7TS, UK.
37
Mitchell AE, Dickens GL, Picchioni MM. Facial Emotion Processing in Borderline Personality Disorder: A Systematic Review and Meta-Analysis. Neuropsychol Rev 2014; 24:166-84. [DOI: 10.1007/s11065-014-9254-9] [Citation(s) in RCA: 75] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2013] [Accepted: 02/11/2014] [Indexed: 01/16/2023]
38
Lopatovska I. Toward a model of emotions and mood in the online information search process. J Assoc Inf Sci Technol 2014. [DOI: 10.1002/asi.23078] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022]
Affiliation(s)
- Irene Lopatovska
- School of Library and Information Science, Pratt Institute, 144 W. 14th Street, New York, NY 10011-7301
39
Pushing the boundaries of human expertise in face perception: Emotion expression identification and error as a function of presentation angle, presentation time, and emotion. J Exp Soc Psychol 2014. [DOI: 10.1016/j.jesp.2013.10.001] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
40
Hühnel I, Fölster M, Werheid K, Hess U. Empathic reactions of younger and older adults: No age related decline in affective responding. J Exp Soc Psychol 2014. [DOI: 10.1016/j.jesp.2013.09.011] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
41
BESST (Bochum Emotional Stimulus Set)--a pilot validation study of a stimulus set containing emotional bodies and faces from frontal and averted views. Psychiatry Res 2013; 209:98-109. [PMID: 23219103 DOI: 10.1016/j.psychres.2012.11.012] [Citation(s) in RCA: 50] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 07/25/2012] [Revised: 10/16/2012] [Accepted: 11/10/2012] [Indexed: 11/20/2022]
Abstract
This article introduces the freely available Bochum Emotional Stimulus Set (BESST), which contains pictures of bodies and faces depicting either a neutral expression or one of the six basic emotions (happiness, sadness, fear, anger, disgust, and surprise), presented from two different perspectives (0° frontal view vs. camera averted by 45° to the left). The set comprises 565 frontal-view and 564 averted-view pictures of real-life bodies with masked facial expressions, and 560 frontal-view and 560 averted-view faces which were synthetically created using the FaceGen 3.5 Modeller. All stimuli were validated in terms of categorization accuracy and the perceived naturalness of the expression. Additionally, each facial stimulus was morphed into three age versions (20/40/60 years). The results show high recognition of the intended facial expressions, even under the speeded forced-choice conditions that correspond to common experimental settings. The average naturalness ratings for the stimuli range between medium and high.
42
Zhao K, Yan WJ, Chen YH, Zuo XN, Fu X. Amygdala volume predicts inter-individual differences in fearful face recognition. PLoS One 2013; 8:e74096. [PMID: 24009767 PMCID: PMC3756978 DOI: 10.1371/journal.pone.0074096] [Citation(s) in RCA: 27] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2013] [Accepted: 07/26/2013] [Indexed: 12/16/2022] Open
Abstract
The present study investigates the relationship between inter-individual differences in fearful face recognition and amygdala volume. Thirty normal adults were recruited and each completed two identical facial expression recognition tests offline and two magnetic resonance imaging (MRI) scans. Linear regression indicated that the left amygdala volume negatively correlated with the accuracy of recognizing fearful facial expressions and positively correlated with the probability of misrecognizing fear as surprise. Further exploratory analyses revealed that this relationship did not exist for any other subcortical or cortical regions. Nor did such a relationship exist between the left amygdala volume and performance recognizing the other five facial expressions. These mind-brain associations highlight the importance of the amygdala in recognizing fearful faces and provide insights regarding inter-individual differences in sensitivity toward fear-relevant stimuli.
Affiliation(s)
- Ke Zhao
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Wen-Jing Yan
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Yu-Hsin Chen
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- University of Chinese Academy of Sciences, Beijing, China
- Xi-Nian Zuo
- Key Laboratory of Behavior Science, Magnetic Resonance Imaging Research Center, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Xiaolan Fu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
43
Tran US, Lamplmayr E, Pintzinger NM, Pfabigan DM. Happy and angry faces: Subclinical levels of anxiety are differentially related to attentional biases in men and women. J Res Pers 2013. [DOI: 10.1016/j.jrp.2013.03.007] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/27/2022]
44
Du S, Martinez AM. Wait, are you sad or angry? Large exposure time differences required for the categorization of facial expressions of emotion. J Vis 2013; 13:13. [PMID: 23509409 DOI: 10.1167/13.4.13] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
Facial expressions of emotion are essential components of human behavior, yet little is known about the hierarchical organization of their cognitive analysis. We study the minimum exposure time needed to successfully classify the six classical facial expressions of emotion (joy, surprise, sadness, anger, disgust, fear) plus neutral as seen at different image resolutions (240 × 160 to 15 × 10 pixels). Our results suggest a consistent hierarchical analysis of these facial expressions regardless of the resolution of the stimuli. Happiness and surprise can be recognized after very short exposure times (10-20 ms), even at low resolutions. Fear and anger are recognized the slowest (100-250 ms), even in high-resolution images, suggesting a later computation. Sadness and disgust are recognized in between (70-200 ms). The minimum exposure time required for successful classification of each facial expression correlates with the ability of a human subject to identify it correctly at low resolutions. These results suggest a fast, early computation of expressions represented mostly by low spatial frequencies or global configural cues and a later, slower process for those categories requiring a more fine-grained analysis of the image. We also demonstrate that those expressions that are mostly visible in higher-resolution images are not recognized as accurately. We summarize implications for current computational models.
Affiliation(s)
- Shichuan Du
- The Ohio State University, Columbus, OH, USA
45
Soria Bauser D, Thoma P, Suchan B. Turn to me: electrophysiological correlates of frontal vs. averted view face and body processing are associated with trait empathy. Front Integr Neurosci 2012; 6:106. [PMID: 23226118 PMCID: PMC3510484 DOI: 10.3389/fnint.2012.00106] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/23/2012] [Accepted: 10/29/2012] [Indexed: 12/30/2022] Open
Abstract
The processing of emotional faces and bodies has been associated with brain regions related to empathic responding in interpersonal contexts. The aim of the present electroencephalography (EEG) study was to investigate differences in the time course underlying the processing of bodies and faces showing neutral, happy, or angry expressions. The P100 and N170 were analyzed in response to the presentation of bodies and faces. Stimuli were presented either from a perspective facing the observer directly or averted by 45°, to manipulate the degree to which participants had the impression of being involved in a dyadic interpersonal interaction. Participants were instructed to identify the emotional expression (neutral, happy, or angry) by pressing the corresponding button. Behavioral performance was poorer for averted relative to frontal stimuli, and the ERP results mirrored this pattern. P100 amplitudes were enhanced and latencies shorter for averted relative to frontal bodies, while for faces the P100 and N170 components were additionally affected by electrode position and hemisphere. Affective trait empathy correlated with faster recognition of facial emotions and, most consistently, with higher recognition accuracy and larger N170 amplitudes for angry expressions, while cognitive trait empathy was mostly linked to shorter P100 latencies for averted expressions. The results highlight the contribution of trait empathy to fast and accurate identification of emotional faces and emotional actions conveyed by bodies.
Affiliation(s)
- Denise Soria Bauser
- Department of Neuropsychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Germany
46
Guo K. Holistic gaze strategy to categorize facial expression of varying intensities. PLoS One 2012; 7:e42585. [PMID: 22880043 PMCID: PMC3411802 DOI: 10.1371/journal.pone.0042585] [Citation(s) in RCA: 61] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2012] [Accepted: 07/10/2012] [Indexed: 11/19/2022] Open
Abstract
Using faces representing exaggerated emotional expressions, recent behavioural and eye-tracking studies have suggested a dominant role of individual facial features in transmitting diagnostic cues for decoding facial expressions. Considering that in everyday life we frequently view low-intensity expressive faces in which local facial cues are more ambiguous, we probably need to combine expressive cues from more than one facial feature to reliably decode naturalistic facial affects. In this study we applied a morphing technique to systematically vary the intensities of six basic facial expressions of emotion, and employed a self-paced expression categorization task to measure participants' categorization performance and associated gaze patterns. Analysis of pooled data from all expressions showed that increasing expression intensity improved categorization accuracy, shortened reaction time, and reduced the number of fixations directed at faces. The proportion of fixations and viewing time directed at internal facial features (eyes, nose, and mouth region), however, was not affected by varying levels of intensity. Further comparison between individual facial expressions revealed that although proportional gaze allocation at individual facial features was quantitatively modulated by the viewed expression, the overall gaze distribution in face viewing was qualitatively similar across different facial expressions and intensities. It seems that we adopt a holistic viewing strategy to extract expressive cues from all internal facial features when processing naturalistic facial expressions.
Affiliation(s)
- Kun Guo
- School of Psychology, University of Lincoln, Lincoln, United Kingdom.
47
Martinez A, Du S. A Model of the Perception of Facial Expressions of Emotion by Humans: Research Overview and Perspectives. J Mach Learn Res 2012; 13:1589-1608. [PMID: 23950695 PMCID: PMC3742375] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
In cognitive science and neuroscience, there have been two leading models describing how humans perceive and classify facial expressions of emotion: the continuous and the categorical model. The continuous model defines each facial expression of emotion as a feature vector in a face space. This model explains, for example, how expressions of emotion can be seen at different intensities. In contrast, the categorical model consists of C classifiers, each tuned to a specific emotion category. This model explains, among other findings, why the images in a morphing sequence between a happy and a surprise face are perceived as either happy or surprise but not something in between. While the continuous model has a more difficult time justifying this latter finding, the categorical model is not as good at explaining how expressions are recognized at different intensities or modes. Most importantly, both models have problems explaining how one can recognize combinations of emotion categories such as happily surprised versus angrily surprised versus surprise. To resolve these issues, in the past several years, we have worked on a revised model that justifies the results reported in the cognitive science and neuroscience literature. This model consists of C distinct continuous spaces. Multiple (compound) emotion categories can be recognized by linearly combining these C face spaces. The dimensions of these spaces are shown to be mostly configural. According to this model, the major task for the classification of facial expressions of emotion is precise, detailed detection of facial landmarks rather than recognition. We provide an overview of the literature justifying the model, show how the resulting model can be employed to build algorithms for the recognition of facial expressions of emotion, and propose research directions for machine learning and computer vision researchers to keep pushing the state of the art in these areas. We also discuss how the model can aid in studies of human perception, social interactions, and disorders.