1. Rodger H, Sokhn N, Lao J, Liu Y, Caldara R. Developmental eye movement strategies for decoding facial expressions of emotion. J Exp Child Psychol 2023;229:105622. [PMID: 36641829] [DOI: 10.1016/j.jecp.2022.105622]
Abstract
In our daily lives, we routinely look at the faces of others to try to understand how they are feeling. Few studies have examined the perceptual strategies used to recognize facial expressions of emotion, and none have attempted to isolate visual information use with eye movements throughout development. Therefore, we recorded the eye movements of children from 5 years of age up to adulthood during recognition of the six "basic emotions" to investigate when perceptual strategies for emotion recognition become mature (i.e., most adult-like). Using iMap4, we identified the eye movement fixation patterns for recognition of the six emotions across age groups in natural viewing and gaze-contingent (i.e., expanding spotlight) conditions. While univariate analyses failed to reveal significant differences in fixation patterns, more sensitive multivariate distance analyses revealed a U-shaped developmental trajectory, with the eye movement strategies of the 17- to 18-year-old group most similar to those of adults for all expressions. A developmental dip in strategy similarity was found for each emotional expression, revealing which age group had the eye movement strategy most distinct from the adult group: the 13- to 14-year-olds for sadness recognition; the 11- to 12-year-olds for fear, anger, surprise, and disgust; and the 7- to 8-year-olds for happiness. Recognition performance for happy, angry, and sad expressions did not differ significantly across age groups, but the eye movement strategies for these expressions diverged for each group. Therefore, a unique strategy was not a prerequisite for optimal recognition performance for these expressions. Our data provide novel insights into the developmental trajectories underlying facial expression recognition, a critical ability for adaptive social relations.
Affiliation(s)
- Helen Rodger: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland
- Nayla Sokhn: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland
- Junpeng Lao: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland
- Yingdi Liu: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland
- Roberto Caldara: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, 1700 Fribourg, Switzerland
2. Ikeda S. Development of Emotion Recognition from Facial Expressions with Different Eye and Mouth Cues in Japanese People. J Genet Psychol 2023;184:187-197. [PMID: 36661090] [DOI: 10.1080/00221325.2023.2168174]
Abstract
Research has reported that Japanese people are more likely to focus on and look longer at eyes when reading emotions from facial expressions than their western counterparts. However, how these tendencies develop and whether there is a relationship between the two tendencies (to focus on the eyes and to look longer at the eyes) is unclear. The present study examined emotion recognition and gaze patterns in Japanese preschool children (n = 51) and university students (n = 57), using facial expressions with different eye and mouth cues. The results showed developmental changes in emotion recognition, with adults being more sensitive to negative emotions, whereas gaze patterns showed no developmental changes. Furthermore, there was no relationship between emotion recognition and gaze patterns. This suggests that the implicit and explicit processing of emotion recognition develops at different times, and that there is no direct relationship between the two processes.
Affiliation(s)
- Shinnosuke Ikeda: Faculty of Humanities, Kyoto University of Advanced Science, Kyoto, Japan
3. Fonteyn-Vinke A, Huurneman B, Boonstra FN. Viewing Strategies in Children With Visual Impairment and Children With Normal Vision: A Systematic Scoping Review. Front Psychol 2022;13:898719. [PMID: 35783772] [PMCID: PMC9248372] [DOI: 10.3389/fpsyg.2022.898719]
Abstract
Viewing strategies are strategies used to support visual information processing. These strategies may differ between children with cerebral visual impairment (CVI), children with ocular visual impairment, and children with normal vision, since visual impairment might have an impact on viewing behavior. In current visual rehabilitation practice, a variety of strategies is used without consideration of differences in the etiology of the visual impairment or in the spontaneous viewing strategies used. This systematic scoping review focuses on viewing strategies used during near school-based tasks such as reading, and on possible interventions aimed at viewing strategies. The goal is threefold: (1) creating a clear concept of viewing strategies, (2) mapping differences in viewing strategies between children with ocular visual impairment, children with CVI, and children with normal vision, and (3) identifying interventions that can improve visual processing by targeting viewing strategies. Four databases were used to conduct the literature search: PubMed, Embase, PsycINFO, and Cochrane. Seven hundred and ninety-nine articles were screened by two independent reviewers using PRISMA reporting guidelines, of which 30 were included for qualitative analysis. Only five studies explicitly mentioned strategies used during visual processing, namely gaze strategies, reading strategies, and search strategies. We define a viewing strategy as a conscious and systematic way of viewing during task performance. The results of this review are integrated with different attention network systems, which provide direction on how to design future interventions targeting the use of viewing strategies to improve different aspects of visual processing.
Affiliation(s)
- Anke Fonteyn-Vinke: Royal Dutch Visio, Nijmegen, Netherlands; Behavioral Science Institute, Radboud University, Nijmegen, Netherlands
- Bianca Huurneman (corresponding author): Royal Dutch Visio, Nijmegen, Netherlands; Department of Cognitive Neuroscience, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Centre, Nijmegen, Netherlands
- Frouke N. Boonstra: Royal Dutch Visio, Nijmegen, Netherlands; Behavioral Science Institute, Radboud University, Nijmegen, Netherlands; Department of Cognitive Neuroscience, Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Centre, Nijmegen, Netherlands
4. Döllinger L, Laukka P, Högman LB, Bänziger T, Makower I, Fischer H, Hau S. Training Emotion Recognition Accuracy: Results for Multimodal Expressions and Facial Micro Expressions. Front Psychol 2021;12:708867. [PMID: 34475841] [PMCID: PMC8406528] [DOI: 10.3389/fpsyg.2021.708867]
Abstract
Nonverbal emotion recognition accuracy (ERA) is a central feature of successful communication and interaction, and is of importance for many professions. We developed and evaluated two ERA training programs: one focusing on dynamic multimodal expressions (audio, video, audio-video) and one focusing on facial micro expressions. Sixty-seven subjects were randomized to one of two experimental groups (multimodal, micro expression) or an active control group (emotional working memory task). Participants trained once weekly with a brief computerized training program for three consecutive weeks. Pre-post outcome measures consisted of a multimodal ERA task, a micro expression recognition task, and a task about patients' emotional cues. Post measurement took place approximately a week after the last training session. Non-parametric mixed analyses of variance using the Aligned Rank Transform were used to evaluate the effectiveness of the training programs. Results showed that multimodal training was significantly more effective in improving multimodal ERA than micro expression training or the control training, and that micro expression training was significantly more effective in improving micro expression ERA than the other two training conditions. Both pre-post effects can be interpreted as large. No group differences were found for the outcome measure on recognizing patients' emotion cues. There were no transfer effects of the training programs, meaning that participants improved significantly only on the specific facet of ERA that they had trained. Further, low baseline ERA was associated with larger ERA improvements. Results are discussed with regard to methodological and conceptual aspects, and practical implications and future directions are explored.
Affiliation(s)
- Lillian Döllinger: Department of Psychology, Faculty of Social Sciences, Stockholm University, Stockholm, Sweden
- Petri Laukka: Department of Psychology, Faculty of Social Sciences, Stockholm University, Stockholm, Sweden
- Lennart Björn Högman: Department of Psychology, Faculty of Social Sciences, Stockholm University, Stockholm, Sweden
- Tanja Bänziger: Department of Psychology and Social Work, Mid Sweden University, Sundsvall, Sweden
- Håkan Fischer: Department of Psychology, Faculty of Social Sciences, Stockholm University, Stockholm, Sweden
- Stephan Hau: Department of Psychology, Faculty of Social Sciences, Stockholm University, Stockholm, Sweden
5. Gehrer NA, Zajenkowska A, Bodecka M, Schönenberg M. Attention orienting to the eyes in violent female and male offenders: An eye-tracking study. Biol Psychol 2021;163:108136. [PMID: 34129874] [DOI: 10.1016/j.biopsycho.2021.108136]
Abstract
Attention to the eyes and eye contact form an important basis for the development of empathy and social competences, including prosocial behavior. Thus, impairments in attention to the eyes of an interaction partner might play a role in the etiology of antisocial behavior and violence. For the first time, the present study extends investigations of eye gaze to a large sample (N = 173) including not only male but also female violent offenders, as well as a control group. We assessed viewing patterns during the categorization of emotional faces via eye tracking. Our results indicate a reduced frequency of initial attention shifts to the eyes in female and male offenders compared to controls, while there were no general group differences in overall attention to the eye region (i.e., relative dwell time). Thus, we conclude that violent offenders might be able to compensate for deficits in spontaneous attention orienting during later stages of information processing.
Affiliation(s)
- Nina A Gehrer: Department of Clinical Psychology and Psychotherapy, University of Tübingen, Tübingen, Germany
- Anna Zajenkowska: Department of Psychology, Maria Grzegorzewska University, Warsaw, Poland
- Marta Bodecka: Department of Psychology, Maria Grzegorzewska University, Warsaw, Poland
- Michael Schönenberg: Department of Clinical Psychology and Psychotherapy, University of Tübingen, Tübingen, Germany; Department of Psychiatry and Psychotherapy, University Hospital Tübingen, Tübingen, Germany
6. Ikeda S. Social anxiety enhances sensitivity to negative transition and eye region of facial expression. Personality and Individual Differences 2020. [DOI: 10.1016/j.paid.2020.110096]
7. Roitblat Y, Cohensedgh S, Frig-Levinson E, Cohen M, Dadbin K, Shohed C, Shvartsman D, Shterenshis M. Emotional expressions with minimal facial muscle actions. Report 2: Recognition of emotions. Current Psychology 2020. [DOI: 10.1007/s12144-020-00691-7]
8. Stoll C, Rodger H, Lao J, Richoz AR, Pascalis O, Dye M, Caldara R. Quantifying Facial Expression Intensity and Signal Use in Deaf Signers. J Deaf Stud Deaf Educ 2019;24:346-355. [PMID: 31271428] [DOI: 10.1093/deafed/enz023]
Abstract
We live in a world of rich dynamic multisensory signals. Hearing individuals rapidly and effectively integrate multimodal signals to decode biologically relevant facial expressions of emotion. Yet, it remains unclear how facial expressions are decoded by deaf adults in the absence of an auditory sensory channel. We thus compared early and profoundly deaf signers (n = 46) with hearing nonsigners (n = 48) on a psychophysical task designed to quantify their recognition performance for the six basic facial expressions of emotion. Using neutral-to-expression image morphs and noise-to-full signal images, we quantified the intensity and signal levels required by observers to achieve expression recognition. Using Bayesian modeling, we found that deaf observers require more signal and intensity to recognize disgust, while reaching comparable performance for the remaining expressions. Our results provide a robust benchmark for the intensity and signal use in deafness and novel insights into the differential coding of facial expressions of emotion between hearing and deaf individuals.
Affiliation(s)
- Chloé Stoll: Laboratoire de Psychologie et de Neurocognition (CNRS-UMR5105), Université Grenoble-Alpes; Laboratory for Investigative Neurophysiology, Centre Hospitalier Universitaire Vaudois and University of Lausanne
- Helen Rodger: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg
- Junpeng Lao: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg
- Anne-Raphaëlle Richoz: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg
- Olivier Pascalis: Laboratoire de Psychologie et de Neurocognition (CNRS-UMR5105), Université Grenoble-Alpes
- Matthew Dye: National Technical Institute for Deaf/Rochester Institute of Technology
- Roberto Caldara: Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg
9. Pollux PM, Craddock M, Guo K. Gaze patterns in viewing static and dynamic body expressions. Acta Psychol (Amst) 2019;198:102862. [PMID: 31226535] [DOI: 10.1016/j.actpsy.2019.05.014]
Abstract
Evidence for the importance of bodily cues for emotion recognition has grown over the last two decades. Despite this growing literature, it is underspecified how observers view whole bodies for body expression recognition. Here we investigate to what extent body-viewing is face- and context-specific when participants are categorizing whole body expressions in static (Experiment 1) and dynamic displays (Experiment 2). Eye-movement recordings showed that observers viewed the face exclusively when it was visible in dynamic displays, whereas viewing was distributed over head, torso and arms in static displays and in dynamic displays with faces not visible. The strong face bias in dynamic face-visible expressions suggests that viewing of the body responds flexibly to the informativeness of facial cues for emotion categorisation. However, when facial expressions are static or not visible, observers adopt a viewing strategy that includes all upper body regions. This viewing strategy is further influenced by subtle viewing biases directed towards emotion-specific body postures and movements to optimise recruitment of diagnostic information for emotion categorisation.
10. Hermens F, Golubickis M, Macrae CN. Eye movements while judging faces for trustworthiness and dominance. PeerJ 2018;6:e5702. [PMID: 30324015] [PMCID: PMC6186410] [DOI: 10.7717/peerj.5702]
Abstract
Past studies examining how people judge faces for trustworthiness and dominance have suggested that they use particular facial features (e.g. mouth features for trustworthiness, eyebrow and cheek features for dominance ratings) to complete the task. Here, we examine whether eye movements during the task reflect the importance of these features. We compared eye movements for trustworthiness and dominance ratings of face images under three stimulus configurations: small images (mimicking large viewing distances), large images (mimicking face-to-face viewing), and a moving window condition (removing extrafoveal information). Whereas the first area fixated, dwell times, and number of fixations depended on the size of the stimuli and the availability of extrafoveal vision, and varied substantially across participants, no clear task differences were found. These results indicate that gaze patterns for face stimuli are highly individual, do not vary between trustworthiness and dominance ratings, but are influenced by the size of the stimuli and the availability of extrafoveal vision.
Affiliation(s)
- Frouke Hermens: School of Psychology, University of Lincoln, Lincoln, Lincolnshire, UK
- C. Neil Macrae: School of Psychology, University of Aberdeen, Aberdeen, UK
11. Guo K, Soornack Y, Settle R. Expression-dependent susceptibility to face distortions in processing of facial expressions of emotion. Vision Res 2018;157:112-122. [PMID: 29496513] [DOI: 10.1016/j.visres.2018.02.001]
Abstract
Our capability of recognizing facial expressions of emotion under different viewing conditions implies the existence of an invariant expression representation. As natural visual signals are often distorted and our perceptual strategy changes with external noise level, it is essential to understand how expression perception is susceptible to face distortion and whether the same facial cues are used to process high- and low-quality face images. We systematically manipulated face image resolution (experiment 1) and blur (experiment 2), and measured participants' expression categorization accuracy, perceived expression intensity and associated gaze patterns. Our analysis revealed a reasonable tolerance to face distortion in expression perception. Reducing image resolution up to 48 × 64 pixels or increasing image blur up to 15 cycles/image had little impact on expression assessment and associated gaze behaviour. Further distortion led to decreased expression categorization accuracy and intensity rating, increased reaction time and fixation duration, and stronger central fixation bias which was not driven by distortion-induced changes in local image saliency. Interestingly, the observed distortion effects were expression-dependent with less deterioration impact on happy and surprise expressions, suggesting this distortion-invariant facial expression perception might be achieved through the categorical model involving a non-linear configural combination of local facial features.
Affiliation(s)
- Kun Guo: School of Psychology, University of Lincoln, UK
12. Ewing L, Karmiloff-Smith A, Farran EK, Smith ML. Developmental changes in the critical information used for facial expression processing. Cognition 2017;166:56-66. [DOI: 10.1016/j.cognition.2017.05.017]
13. Xu J, Yue S, Menchinelli F, Guo K. What has been missed for predicting human attention in viewing driving clips? PeerJ 2017;5:e2946. [PMID: 28168112] [PMCID: PMC5291110] [DOI: 10.7717/peerj.2946]
Abstract
Recent research progress on the topic of human visual attention allocation in scene perception and its simulation is based mainly on studies with static images. However, natural vision requires us to extract visual information that constantly changes due to egocentric movements or dynamics of the world. It is unclear to what extent spatio-temporal regularity, an inherent regularity in dynamic vision, affects human gaze distribution and saliency computation in visual attention models. In this free-viewing eye-tracking study, we manipulated the spatio-temporal regularity of traffic videos by presenting them in normal video sequence, reversed video sequence, normal frame sequence, and randomised frame sequence. The recorded human gaze allocation was then used as the 'ground truth' to examine the predictive ability of a number of state-of-the-art visual attention models. The analysis revealed high inter-observer agreement across individual human observers, but all the tested attention models performed significantly worse than humans. The inferior predictability of the models was evident from gaze predictions that were indistinguishable irrespective of stimulus presentation sequence, and from a weak central fixation bias. Our findings suggest that a realistic visual attention model for the processing of dynamic scenes should incorporate human visual sensitivity to spatio-temporal regularity and central fixation bias.
Affiliation(s)
- Jiawei Xu: School of Computer Science, University of Lincoln, Lincoln, United Kingdom
- Shigang Yue: School of Computer Science, University of Lincoln, Lincoln, United Kingdom
- Kun Guo: School of Psychology, University of Lincoln, Lincoln, United Kingdom
14. Luke CJ, Pollux PMJ. Lateral presentation of faces alters overall viewing strategy. PeerJ 2016;4:e2241. [PMID: 27547549] [PMCID: PMC4958001] [DOI: 10.7717/peerj.2241]
Abstract
Eye tracking has been used during face categorisation and identification tasks to identify perceptually salient facial features and to infer underlying cognitive processes. However, viewing patterns are influenced by a variety of gaze biases, drawing fixations to the centre of a screen and horizontally to the left side of face images (left-gaze bias). In order to investigate potential interactions between gaze biases uniquely associated with facial expression processing and those associated with screen location, face stimuli were presented in three possible screen positions: left, right, and centre. Comparisons of fixations between screen locations highlight a significant impact of the screen centre bias, pulling fixations towards the centre of the screen and modifying gaze biases generally observed during facial categorisation tasks. A left horizontal bias for fixations was found to be independent of screen position but to interact with the screen centre bias, drawing fixations to the left hemi-face rather than just to the left of the screen. Implications for eye-tracking studies utilising centrally presented faces are discussed.
Affiliation(s)
- Petra M J Pollux: School of Psychology, University of Lincoln, Lincoln, United Kingdom
15. Pollux P. Improved categorization of subtle facial expressions modulates Late Positive Potential. Neuroscience 2016;322:152-163. [DOI: 10.1016/j.neuroscience.2016.02.027]
16. Wegrzyn M, Bruckhaus I, Kissler J. Categorical Perception of Fear and Anger Expressions in Whole, Masked and Composite Faces. PLoS One 2015;10:e0134790. [PMID: 26263000] [PMCID: PMC4532458] [DOI: 10.1371/journal.pone.0134790]
Abstract
Human observers are remarkably proficient at recognizing expressions of emotions and at readily grouping them into distinct categories. When morphing one facial expression into another, the linear changes in low-level features are insufficient to describe the changes in perception, which instead follow an s-shaped function. Important questions are whether there are single diagnostic regions in the face that drive categorical perception for certain pairings of emotion expressions, and how information in those regions interacts when presented together. We report results from two experiments with morphed fear-anger expressions, where (a) half of the face was masked or (b) composite faces made up of different expressions were presented. When isolated upper and lower halves of faces were shown, the eyes were found to be almost as diagnostic as the whole face, with the response function showing a steep category boundary. In contrast, the mouth allowed for a substantially lesser amount of accuracy and responses followed a much flatter psychometric function. When a composite face consisting of mismatched upper and lower halves was used and observers were instructed to exclusively judge either the expression of the mouth or the eyes, the to-be-ignored part always influenced perception of the target region. In line with experiment 1, the eye region exerted a much stronger influence on mouth judgements than vice versa. Again, categorical perception was significantly more pronounced for upper halves of faces. The present study shows that identification of fear and anger in morphed faces relies heavily on information from the upper half of the face, most likely the eye region. Categorical perception is possible when only the upper face half is present, but compromised when only the lower part is shown. Moreover, observers tend to integrate all available features of a face, even when trying to focus on only one part.
Affiliation(s)
- Martin Wegrzyn: Department of Psychology, Bielefeld University, Bielefeld, Germany; Center of Excellence Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
- Johanna Kissler: Department of Psychology, Bielefeld University, Bielefeld, Germany; Center of Excellence Cognitive Interaction Technology (CITEC), Bielefeld University, Bielefeld, Germany
17. How does image noise affect actual and predicted human gaze allocation in assessing image quality? Vision Res 2015;112:11-25. [DOI: 10.1016/j.visres.2015.03.029]