1
Babenko VV, Yavna DV, Ermakov PN, Anokhina PV. Nonlocal contrast calculated by the second order visual mechanisms and its significance in identifying facial emotions. F1000Res 2023; 10:274. [PMID: 37767361] [PMCID: PMC10521119] [DOI: 10.12688/f1000research.28396.2]
Abstract
Background: Previous results indicate that faces are preattentively detected in the visual scene very quickly, and that information on facial expression is rapidly extracted at the lower levels of the visual system. At the same time, different facial attributes contribute differently to facial expression recognition. However, no known preattentive mechanism is selective for particular facial features, such as the eyes or mouth. The aim of our study was to identify a candidate for the role of such a mechanism. Our assumption was that the most informative areas of an image are those characterized by spatial heterogeneity, particularly by nonlocal contrast changes. These areas may be identified in the human visual system by the second-order visual mechanisms, filters selective to contrast modulations of brightness gradients. Methods: We developed a software program that imitates the operation of these filters and finds areas of contrast heterogeneity in an image. Using this program, we extracted areas with maximum, minimum, and medium contrast modulation amplitudes from the initial face images, and used these to make three variants of one and the same face. The faces were shown to observers along with other objects synthesized in the same way. The participants had to identify faces and define facial emotional expressions. Results: The greater the contrast modulation amplitude of the areas shaping the face, the more precisely the emotion was identified. Conclusions: The results suggest that areas with a greater increase in nonlocal contrast are more informative in facial images, and that the second-order visual filters can claim the role of elements that detect areas of interest, attract visual attention, and serve as windows through which subsequent levels of visual processing receive valuable information.
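The second-order computation this abstract describes follows the general filter-rectify-filter scheme: band-pass filtering extracts local brightness gradients, rectification converts them to contrast energy, and a coarser second-stage filter responds to modulations of that contrast. The sketch below is illustrative only, not the authors' program; the difference-of-Gaussians carrier stage, the filter scales, and the top-10% threshold are all assumed parameters.

```python
import numpy as np

def _gaussian_kernel(sigma):
    # 1-D Gaussian kernel, truncated at 3 sigma and normalized
    radius = max(1, int(3 * sigma))
    x = np.arange(-radius, radius + 1)
    k = np.exp(-x**2 / (2.0 * sigma**2))
    return k / k.sum()

def _blur(img, sigma):
    # Separable Gaussian blur: convolve columns, then rows
    k = _gaussian_kernel(sigma)
    out = np.apply_along_axis(np.convolve, 0, img, k, mode="same")
    return np.apply_along_axis(np.convolve, 1, out, k, mode="same")

def second_order_energy(img, sigma_carrier=1.0, sigma_envelope=8.0):
    # 1) First-stage band-pass (difference of Gaussians): local brightness gradients
    band = _blur(img, sigma_carrier) - _blur(img, 2.0 * sigma_carrier)
    # 2) Rectification: squaring turns gradients into local contrast energy
    energy = band ** 2
    # 3) Second-stage pooling at a coarser scale: responds to contrast
    #    *modulations*, i.e. nonlocal contrast changes
    return _blur(energy, sigma_envelope)

# "Areas of interest" = regions with the largest contrast-modulation response
rng = np.random.default_rng(1)
img = rng.random((64, 64))
response = second_order_energy(img)
interest_mask = response >= np.quantile(response, 0.9)
```

Thresholding the second-stage response at a high quantile mimics selecting the areas with maximum contrast modulation amplitude; lower quantile bands would give the medium- and minimum-amplitude variants.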
Affiliation(s)
- Vitaly V. Babenko: Department of Psychophysiology and Clinical Psychology, Academy of Psychology and Education Sciences, Southern Federal University, Rostov-on-Don, Russian Federation
- Denis V. Yavna: Department of Psychophysiology and Clinical Psychology, Academy of Psychology and Education Sciences, Southern Federal University, Rostov-on-Don, Russian Federation
- Pavel N. Ermakov: Department of Psychophysiology and Clinical Psychology, Academy of Psychology and Education Sciences, Southern Federal University, Rostov-on-Don, Russian Federation
- Polina V. Anokhina: Department of Psychophysiology and Clinical Psychology, Academy of Psychology and Education Sciences, Southern Federal University, Rostov-on-Don, Russian Federation
2
The Effect of Surgical Masks on the Featural and Configural Processing of Emotions. International Journal of Environmental Research and Public Health 2022; 19:ijerph19042420. [PMID: 35206620] [PMCID: PMC8872142] [DOI: 10.3390/ijerph19042420]
Abstract
From the start of the COVID-19 pandemic, the use of surgical masks became widespread. However, they occlude an important part of the face and make it difficult to decode and interpret other people's emotions. To clarify the effect of surgical masks on configural and featural processing, participants completed a facial emotion recognition task discriminating between happy, sad, angry, and neutral faces. Stimuli included fully visible faces, masked faces, and cropped photos of the eyes or mouth region. Occlusion by the surgical mask impaired emotion recognition for sad, angry, and neutral faces, whereas no significant differences were found for happiness recognition. Our findings suggest that happiness is recognized predominantly via featural processing.
3
How induced self-focus versus other-focus affect emotional recognition and verbalization. Journal of Cultural Cognitive Science 2021. [DOI: 10.1007/s41809-021-00091-8]
4
Chen Z, McCrackin SD, Morgan A, Itier RJ. The Gaze Cueing Effect and Its Enhancement by Facial Expressions Are Impacted by Task Demands: Direct Comparison of Target Localization and Discrimination Tasks. Front Psychol 2021; 12:618606. [PMID: 33790836] [PMCID: PMC8006310] [DOI: 10.3389/fpsyg.2021.618606]
Abstract
The gaze cueing effect is characterized by faster attentional orienting to a gazed-at than a non-gazed-at target. This effect is often enhanced when the gazing face bears an emotional expression, though this finding is modulated by a number of factors. Here, we tested whether the type of task performed might be one such modulating factor. Target localization and target discrimination tasks are the two most commonly used gaze cueing tasks, and they arguably differ in cognitive resources, which could impact how emotional expression and gaze cues are integrated to orient attention. In a within-subjects design, participants performed both target localization and discrimination gaze cueing tasks with neutral, happy, and fearful faces. The gaze cueing effect for neutral faces was greatly reduced in the discrimination task relative to the localization task, and the emotional enhancement of the gaze cueing effect was only present in the localization task and only when this task was performed first. These results suggest that cognitive resources are needed for gaze cueing and for the integration of emotional expressions and gaze cues. We propose that a shift toward local processing may be the mechanism by which the discrimination task interferes with the emotional modulation of gaze cueing. The results support the idea that gaze cueing can be greatly modulated by top-down influences and cognitive resources and thus taps into endogenous attention. Results are discussed within the context of the recently proposed EyeTune model of social attention.
Affiliation(s)
- Zelin Chen: Department of Psychology, University of Waterloo, Waterloo, ON, Canada
- Sarah D McCrackin: Department of Psychology, University of Waterloo, Waterloo, ON, Canada
- Alicia Morgan: Department of Psychology, University of Waterloo, Waterloo, ON, Canada
- Roxane J Itier: Department of Psychology, University of Waterloo, Waterloo, ON, Canada
5
Kilpeläinen M, Salmela V. Perceived emotional expressions of composite faces. PLoS One 2020; 15:e0230039. [PMID: 32155204] [PMCID: PMC7064203] [DOI: 10.1371/journal.pone.0230039]
Abstract
The eye and mouth regions serve as the primary sources of facial information regarding an individual's emotional state. The aim of this study was to provide a comprehensive assessment of the relative importance of those two information sources in the identification of different emotions. The stimuli were composite facial images, in which different expressions (Neutral, Anger, Disgust, Fear, Happiness, Contempt, and Surprise) were presented in the eyes and the mouth. Participants (21 women, 11 men, mean age 25 years) rated the expressions of 7 congruent and 42 incongruent composite faces by clicking on a point within the valence-arousal emotion space. Eye movements were also monitored. With most incongruent composite images, the perceived emotion corresponded to the expression of either the eye region or the mouth region or an average of those. The happy expression was an exception: happy eyes often shifted the perceived emotion towards a slightly negative point in the valence-arousal space, not towards the location associated with a congruent happy expression. The eye-tracking data revealed significant effects of congruency, expression, and their interaction on total dwell time. Our data indicate that whether a face combining features from two emotional expressions leads to a percept based on only one of the expressions (categorical perception), to an integration of the two expressions (dimensional perception), or to something altogether different strongly depends upon the expressions involved.
Affiliation(s)
- Markku Kilpeläinen: Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Viljami Salmela: Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
6
Sun Y, Ayaz H, Akansu AN. Multimodal Affective State Assessment Using fNIRS + EEG and Spontaneous Facial Expression. Brain Sci 2020; 10:E85. [PMID: 32041316] [PMCID: PMC7071625] [DOI: 10.3390/brainsci10020085]
Abstract
Human facial expressions are regarded as a vital indicator of one's emotion and intention, and even reveal the state of health and wellbeing. Emotional states have been associated with information processing within and between subcortical and cortical areas of the brain, including the amygdala and prefrontal cortex. In this study, we evaluated the relationship between spontaneous human facial affective expressions and multi-modal brain activity measured via non-invasive and wearable sensors: functional near-infrared spectroscopy (fNIRS) and electroencephalography (EEG). The affective states of twelve male participants, detected via fNIRS, EEG, and spontaneous facial expressions, were investigated in response to both image-content and video-content stimuli. We propose a method to jointly evaluate fNIRS and EEG signals for affective state detection (emotional valence as positive or negative). Experimental results reveal a strong correlation between spontaneous facial affective expressions and the perceived emotional valence. Moreover, the affective states were estimated from the fNIRS, EEG, and fNIRS + EEG brain activity measurements, and the proposed EEG + fNIRS hybrid method outperforms the fNIRS-only and EEG-only approaches. Our findings indicate that dynamic (video-content) stimuli trigger a larger affective response than static (image-content) stimuli, and suggest the joint use of facial expressions and wearable neuroimaging (fNIRS and EEG) for improved emotional analysis and affective brain-computer interface applications.
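A toy illustration of the kind of feature-level hybrid fusion this abstract describes: concatenating per-trial EEG and fNIRS feature vectors before fitting a linear valence classifier. This is not the authors' pipeline; the synthetic features, labels, and least-squares classifier here are all assumptions for the sketch.

```python
import numpy as np

rng = np.random.default_rng(0)
n_trials = 200

# Synthetic per-trial features (purely illustrative): EEG band powers and
# fNIRS oxy-hemoglobin changes, with valence driven by one channel of each
eeg = rng.normal(size=(n_trials, 8))
fnirs = rng.normal(size=(n_trials, 4))
valence = (eeg[:, 0] + fnirs[:, 0] + 0.3 * rng.normal(size=n_trials)) > 0

def fit_linear(X, y):
    # Least-squares classifier: regress +/-1 labels on features plus a bias term
    Xb = np.hstack([X, np.ones((len(X), 1))])
    w, *_ = np.linalg.lstsq(Xb, np.where(y, 1.0, -1.0), rcond=None)
    return w

def accuracy(X, y, w):
    Xb = np.hstack([X, np.ones((len(X), 1))])
    return float(np.mean((Xb @ w > 0) == y))

# Hybrid (EEG + fNIRS) fusion: concatenate the modalities before fitting
acc_eeg = accuracy(eeg, valence, fit_linear(eeg, valence))
hybrid = np.hstack([eeg, fnirs])
acc_hybrid = accuracy(hybrid, valence, fit_linear(hybrid, valence))
```

When the label genuinely depends on both modalities, as constructed here, the concatenated model can exploit information that either single-modality model misses, which is the intuition behind the hybrid approach.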
Affiliation(s)
- Yanjia Sun: Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ 07102, USA
- Hasan Ayaz: School of Biomedical Engineering, Science and Health Systems, Drexel University, Philadelphia, PA 19104, USA; Department of Psychology, College of Arts and Sciences, Drexel University, Philadelphia, PA 19104, USA; Department of Family and Community Health, University of Pennsylvania, Philadelphia, PA 19104, USA; Center for Injury Research and Prevention, Children's Hospital of Philadelphia, Philadelphia, PA 19104, USA
- Ali N. Akansu: Department of Electrical and Computer Engineering, New Jersey Institute of Technology, Newark, NJ 07102, USA
7
Yan X, Young AW, Andrews TJ. Differences in holistic processing do not explain cultural differences in the recognition of facial expression. Q J Exp Psychol (Hove) 2017; 70:2445-2459. [DOI: 10.1080/17470218.2016.1240816]
Abstract
The aim of this study was to investigate the causes of the own-race advantage in facial expression perception. In Experiment 1, we investigated Western Caucasian and Chinese participants’ perception and categorization of facial expressions of six basic emotions that included two pairs of confusable expressions (fear and surprise; anger and disgust). People were slightly better at identifying facial expressions posed by own-race members (mainly in anger and disgust). In Experiment 2, we asked whether the own-race advantage was due to differences in the holistic processing of facial expressions. Participants viewed composite faces in which the upper part of one expression was combined with the lower part of a different expression. The upper and lower parts of the composite faces were either aligned or misaligned. Both Chinese and Caucasian participants were better at identifying the facial expressions from the misaligned images, showing interference on recognizing the parts of the expressions created by holistic perception of the aligned composite images. However, this interference from holistic processing was equivalent across expressions of own-race and other-race faces in both groups of participants. Whilst the own-race advantage in recognizing facial expressions does seem to reflect the confusability of certain emotions, it cannot be explained by differences in holistic processing.
Affiliation(s)
- Xiaoqian Yan: Department of Psychology, University of York, York, England, UK
- Andrew W. Young: Department of Psychology, University of York, York, England, UK
8
Meaux E, Vuilleumier P. Facing mixed emotions: Analytic and holistic perception of facial emotion expressions engages separate brain networks. Neuroimage 2016; 141:154-173. [DOI: 10.1016/j.neuroimage.2016.07.004]
9
de Gelder B, Huis in ‘t Veld EMJ, Van den Stock J. The Facial Expressive Action Stimulus Test. A test battery for the assessment of face memory, face and object perception, configuration processing, and facial expression recognition. Front Psychol 2015; 6:1609. [PMID: 26579004] [PMCID: PMC4624856] [DOI: 10.3389/fpsyg.2015.01609]
Abstract
There are many ways to assess face perception skills. In this study, we describe a novel task battery, the Facial Expressive Action Stimulus Test (FEAST), developed to test recognition of the identity and expressions of human faces as well as stimulus control categories. The FEAST consists of a neutral and emotional face memory task, a face and shoe identity matching task, a face and house part-to-whole matching task, and a human and animal facial expression matching task. The identity and part-to-whole matching tasks contain both upright and inverted conditions. The results provide reference data from a healthy sample of controls in two age groups for future users of the FEAST.
Affiliation(s)
- Beatrice de Gelder: Department of Cognitive Neuroscience, Maastricht University, Maastricht, Netherlands; Department of Psychiatry and Mental Health, University of Cape Town, Cape Town, South Africa
- Elisabeth M. J. Huis in ‘t Veld: Department of Cognitive Neuroscience, Maastricht University, Maastricht, Netherlands; Department of Medical and Clinical Psychology, Tilburg University, Tilburg, Netherlands
- Jan Van den Stock: Laboratory for Translational Neuropsychiatry, Department of Neurosciences, KU Leuven, Leuven, Belgium; Old Age Psychiatry, University Hospitals Leuven, Leuven, Belgium
10
Bate S, Bennetts R. The independence of expression and identity in face-processing: evidence from neuropsychological case studies. Front Psychol 2015; 6:770. [PMID: 26106348] [PMCID: PMC4460300] [DOI: 10.3389/fpsyg.2015.00770]
Abstract
The processing of facial identity and facial expression have traditionally been seen as independent—a hypothesis that has largely been informed by a key double dissociation between neurological patients with a deficit in facial identity recognition but not facial expression recognition, and those with the reverse pattern of impairment. The independence hypothesis is also reflected in more recent anatomical models of face-processing, although these theories permit some interaction between the two processes. Given that much of the traditional patient-based evidence has been criticized, a review of more recent case reports that are accompanied by neuroimaging data is timely. Further, the performance of individuals with developmental face-processing deficits has recently been considered with regard to the independence debate. This paper reviews evidence from both acquired and developmental disorders, identifying methodological and theoretical strengths and caveats in these reports, and highlighting pertinent avenues for future research.
Affiliation(s)
- Sarah Bate: Department of Psychology, Faculty of Science and Technology, Bournemouth University, Poole, UK
- Rachel Bennetts: Department of Psychology, Faculty of Science and Technology, Bournemouth University, Poole, UK
11
Sariyanidi E, Gunes H, Cavallaro A. Automatic Analysis of Facial Affect: A Survey of Registration, Representation, and Recognition. IEEE Transactions on Pattern Analysis and Machine Intelligence 2015; 37:1113-1133. [PMID: 26357337] [DOI: 10.1109/tpami.2014.2366127]
Abstract
Automatic affect analysis has attracted great interest in various contexts including the recognition of action units and basic or non-basic emotions. In spite of major efforts, there are several open questions on what the important cues to interpret facial expressions are and how to encode them. In this paper, we review the progress across a range of affect recognition applications to shed light on these fundamental questions. We analyse the state-of-the-art solutions by decomposing their pipelines into fundamental components, namely face registration, representation, dimensionality reduction and recognition. We discuss the role of these components and highlight the models and new trends that are followed in their design. Moreover, we provide a comprehensive analysis of facial representations by uncovering their advantages and limitations; we elaborate on the type of information they encode and discuss how they deal with the key challenges of illumination variations, registration errors, head-pose variations, occlusions, and identity bias. This survey allows us to identify open issues and to define future directions for designing real-world affect recognition systems.
12
Prazak ER, Burgund ED. Keeping it real: Recognizing expressions in real compared to schematic faces. Visual Cognition 2014. [DOI: 10.1080/13506285.2014.914991]
13
Bombari D, Schmid PC, Schmid Mast M, Birri S, Mast FW, Lobmaier JS. Emotion Recognition: The Role of Featural and Configural Face Information. Q J Exp Psychol (Hove) 2013; 66:2426-42. [DOI: 10.1080/17470218.2013.789065]
Abstract
Several studies have investigated the role of featural and configural information in processing facial identity. Much less is known about their contribution to emotion recognition. In this study, we addressed this issue by inducing either a featural or a configural processing strategy (Experiment 1) and by investigating the attentional strategies in response to emotional expressions (Experiment 2). In Experiment 1, participants identified emotional expressions in faces that were presented in three different versions (intact, blurred, and scrambled) and in two orientations (upright and inverted). Blurred faces contain mainly configural information, scrambled faces contain mainly featural information, and inversion is known to selectively hinder configural processing. Analyses of the discriminability measure (A′) and response times (RTs) revealed that configural processing plays a more prominent role in expression recognition than featural processing, but their relative contribution varies depending on the emotion. In Experiment 2, we qualified these differences between emotions by investigating the relative importance of specific features by means of eye movements. Participants had to match intact expressions with the emotional cues that preceded the stimulus. The analysis of eye movements confirmed that the recognition of different emotions relies on different types of information: while the mouth is important for the detection of happiness and fear, the eyes are more relevant for anger, fear, and sadness.
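For reference, the nonparametric discriminability measure A′ mentioned in Experiment 1 is commonly computed from the hit rate H and false-alarm rate F with the Pollack and Norman formula. The sketch below shows the standard formula for the usual H ≥ F case; it is general background, not code from the study.

```python
def a_prime(hit_rate, false_alarm_rate):
    """Nonparametric discriminability A' for the usual case H >= F.

    A' = 0.5 + ((H - F) * (1 + H - F)) / (4 * H * (1 - F))
    0.5 corresponds to chance discrimination, 1.0 to perfect discrimination.
    """
    h, f = hit_rate, false_alarm_rate
    if h < f:
        raise ValueError("this sketch assumes H >= F")
    if h == f:
        return 0.5  # hits no better than false alarms: chance
    return 0.5 + ((h - f) * (1 + h - f)) / (4 * h * (1 - f))

# e.g. a_prime(0.9, 0.1) -> ~0.944; a_prime(0.5, 0.5) -> 0.5
```

Unlike d′, A′ requires no normality assumption about the underlying signal and noise distributions, which is why it is often preferred for expression-recognition accuracy data.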
Affiliation(s)
- Dario Bombari: Department of Psychology, University of Bern, Bern, Switzerland
- Petra C. Schmid: Institute of Work and Organizational Psychology, University of Neuchatel, Neuchatel, Switzerland
- Marianne Schmid Mast: Institute of Work and Organizational Psychology, University of Neuchatel, Neuchatel, Switzerland
- Sandra Birri: Department of Psychology, University of Bern, Bern, Switzerland
- Fred W. Mast: Department of Psychology, University of Bern, Bern, Switzerland
14
Tanaka JW, Kaiser MD, Butler S, Le Grand R. Mixed emotions: Holistic and analytic perception of facial expressions. Cogn Emot 2012; 26:961-77. [DOI: 10.1080/02699931.2011.630933]
15
Chiller-Glaus SD, Schwaninger A, Hofer F, Kleiner M, Knappmeyer B. Recognition of Emotion in Moving and Static Composite Faces. Swiss Journal of Psychology 2011. [DOI: 10.1024/1421-0185/a000061]
Abstract
This paper investigates whether the greater accuracy of emotion identification for dynamic versus static expressions, as noted in previous research, can be explained through heightened levels of either component or configural processing. Using a paradigm by Young, Hellawell, and Hay (1987), we tested recognition performance of aligned and misaligned composite faces with six basic emotions (happiness, fear, disgust, surprise, anger, sadness). Stimuli were created using 3D computer graphics and were shown as static peak expressions (static condition) and as 7 s video sequences (dynamic condition). The results revealed that, overall, moving stimuli were better recognized than static faces, although no interaction between motion and other factors was found. For happiness, sadness, and surprise, misaligned composites were better recognized than aligned composites, suggesting that aligned composites fuse to form a single expression, while the two halves of misaligned composites are perceived as two separate emotions. For anger, disgust, and fear, this was not the case. These results indicate that emotions are perceived on the basis of both configural and component-based information, with specific activation patterns for separate emotions, and that motion has a quality of its own and does not increase configural or component-based recognition separately.
Affiliation(s)
- Sarah Dagmar Chiller-Glaus: Department of Psychology, University of Zurich, Switzerland; Swiss University of Distance Education, Brig, Switzerland
- Adrian Schwaninger: School of Applied Psychology, Institute Humans in Complex Systems, University of Applied Sciences Northwestern Switzerland, Olten, Switzerland; Department of Informatics, University of Zurich, Switzerland
- Mario Kleiner: Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Barbara Knappmeyer: Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Center for Neural Science, New York University, USA
16
Mak-Fan KM, Thompson WF, Green REA. Visual search for schematic emotional faces risks perceptual confound. Cogn Emot 2011; 25:573-84. [PMID: 21547761] [DOI: 10.1080/02699931.2010.500159]
Abstract
Several studies have used a visual search task to demonstrate that schematic negative-face targets are found faster and/or more efficiently than positive ones, with these findings taken as evidence that negative emotional expression is capable of guiding attentional allocation in visual search. A common hypothesis is that these effects should be disrupted by face inversion; however, this has not been consistently demonstrated, raising the possibility of a perceptual confound. One candidate confound is the feature of "closure" (see Wolfe & Horowitz, 2004) caused by the down-turned mouth adjacent to the edge of the face. This was investigated in the present series of experiments. In Experiment 1, the speed advantage for upright negative faces was replicated. In Experiment 2, the effect was not disrupted by inversion, and an efficiency advantage emerged, suggesting that perceptual features could be causing the advantage. In Experiment 3, speed and efficiency effects were seen when this perceptual characteristic remained but face features were scrambled. Taken together, these findings suggest that visual search using schematic faces containing a curved-line mouth feature cannot provide a valid test of guided search by negative facial emotion unless this confound is controlled.
17
Affiliation(s)
- Graham Hole: School of Psychology, University of Sussex, Brighton, UK
- Patricia George: School of Psychology, University of Sussex, Brighton, UK
18
Palermo R, Willis ML, Rivolta D, McKone E, Wilson CE, Calder AJ. Impaired holistic coding of facial expression and facial identity in congenital prosopagnosia. Neuropsychologia 2011; 49:1226-1235. [PMID: 21333662] [PMCID: PMC3083514] [DOI: 10.1016/j.neuropsychologia.2011.02.021]
Abstract
We test 12 individuals with congenital prosopagnosia (CP), who replicate a common pattern of showing severe difficulty in recognising facial identity in conjunction with normal recognition of facial expressions (both basic and ‘social’). Strength of holistic processing was examined using standard expression composite and identity composite tasks. Compared to age- and sex-matched controls, group analyses demonstrated that CPs showed weaker holistic processing, for both expression and identity information. Implications are (a) normal expression recognition in CP can derive from compensatory strategies (e.g., over-reliance on non-holistic cues to expression); (b) the split between processing of expression and identity information may take place after a common stage of holistic processing; and (c) contrary to a recent claim, holistic processing of identity is functionally involved in face identification ability.
Affiliation(s)
- Romina Palermo: ARC Centre of Excellence in Cognition and its Disorders (CCD); Department of Psychology, Australian National University, Canberra, ACT 0200, Australia; Macquarie Centre for Cognitive Science (MACCS), Macquarie University, Sydney, NSW 2109, Australia
- Megan L Willis: Macquarie Centre for Cognitive Science (MACCS), Macquarie University, Sydney, NSW 2109, Australia
- Davide Rivolta: Macquarie Centre for Cognitive Science (MACCS), Macquarie University, Sydney, NSW 2109, Australia
- Elinor McKone: ARC Centre of Excellence in Cognition and its Disorders (CCD); Department of Psychology, Australian National University, Canberra, ACT 0200, Australia
- C Ellie Wilson: Macquarie Centre for Cognitive Science (MACCS), Macquarie University, Sydney, NSW 2109, Australia
- Andrew J Calder: ARC Centre of Excellence in Cognition and its Disorders (CCD); MRC Cognition and Brain Sciences Unit, Cambridge CB2 7EF, England, United Kingdom
21
Baudouin JY, Chambon V, Tiberghien G. Expert en visages ? Pourquoi sommes-nous tous… des experts en reconnaissance des visages [Face experts? Why we are all… experts at face recognition]. Evolution Psychiatrique 2009. [DOI: 10.1016/j.evopsy.2008.12.011]
22
Holistic processing for faces operates over a wide range of sizes but is strongest at identification rather than conversational distances. Vision Res 2009; 49:268-83. [DOI: 10.1016/j.visres.2008.10.020]
24
Leppänen JM, Nelson CA. The development and neural bases of facial emotion recognition. Advances in Child Development and Behavior 2006; 34:207-46. [PMID: 17120806] [DOI: 10.1016/s0065-2407(06)80008-x]
Affiliation(s)
- Jukka M Leppänen: Human Information Processing Laboratory, Department of Psychology, University of Tampere, Finland
|
25
Stephan BCM, Breen N, Caine D. The recognition of emotional expression in prosopagnosia: decoding whole and part faces. J Int Neuropsychol Soc 2006; 12:884-95. [PMID: 17064450 DOI: 10.1017/s1355617706061066]
Abstract
Prosopagnosia is currently viewed within the constraints of two competing theories of face recognition, one highlighting the analysis of features, the other focusing on configural processing of the whole face. This study investigated the role of feature analysis versus whole-face configural processing in the recognition of facial expression. A prosopagnosic patient, SC, made expression decisions from whole and incomplete (eyes-only and mouth-only) faces in which features had been obscured. SC was impaired at recognizing some (e.g., anger, sadness, and fear), but not all (e.g., happiness), emotional expressions from the whole face. Analyses of his performance on incomplete faces indicated that his recognition of some expressions actually improved relative to his performance in the whole-face condition. We argue that in SC interference from damaged configural processes seems to override an intact ability to utilize part-based or local feature cues.
26
Humphreys K, Minshew N, Leonard GL, Behrmann M. A fine-grained analysis of facial expression processing in high-functioning adults with autism. Neuropsychologia 2006; 45:685-95. [PMID: 17010395 DOI: 10.1016/j.neuropsychologia.2006.08.003]
Abstract
It is unclear whether individuals with autism are impaired at recognizing basic facial expressions and whether any impairment, if it exists, applies to expression processing in general or to certain expressions in particular. To evaluate these alternatives, we adopted a fine-grained analysis of facial expression processing in autism. Specifically, we used the 'facial expression megamix' paradigm [Young, A. W., Rowland, D., Calder, A. J., Etcoff, N. L., Seth, A., & Perrett, D. I. (1997). Facial expression megamix: Tests of dimensional and category accounts of emotion recognition. Cognition and Emotion, 14, 39-60], in which adults with autism and a typically developing comparison group performed a six-alternative forced-choice response to morphs of all possible combinations of the six basic expressions identified by Ekman [Ekman, P. (1972). Universals and cultural differences in facial expressions of emotion. In J. K. Cole (Ed.), Nebraska symposium on motivation: vol. 1971 (pp. 207-283). Lincoln, Nebraska: University of Nebraska Press] (happiness, sadness, disgust, anger, fear and surprise). Clear differences were evident between the two groups, most obviously in the recognition of fear, but also in the recognition of disgust and happiness. A second experiment demonstrated that individuals with autism are able to discriminate between different emotional images, suggesting that low-level perceptual difficulties do not underlie the difficulties with emotion recognition.
Affiliation(s)
- Kate Humphreys
- Department of Psychology, Carnegie Mellon University, Baker Hall, Pittsburgh, PA 15213-3890, USA
27
Gross TF. Global-local precedence in the perception of facial age and emotional expression by children with autism and other developmental disabilities. J Autism Dev Disord 2006; 35:773-85. [PMID: 16283086 DOI: 10.1007/s10803-005-0023-8]
Abstract
Global information processing and the perception of facial age and emotional expression were studied in children with autism, language disorders, mental retardation, and a clinical control group. Children were given a global-local task and asked to recognize age and emotion in human and canine faces. Children with autism made fewer global responses and more errors when recognizing human and canine emotions and canine age than children without autism. Significant relationships were found between global information processing and the recognition of human and canine emotions and canine age. Results are discussed with respect to the relationship between global information processing and face perception, and the neural structures underlying these abilities.
Affiliation(s)
- Thomas F Gross
- Department of Psychology, University of Redlands, CA 92373-0999, USA
28
Abstract
Faces with expressions (happy, surprise, anger, fear) were presented at study. Memory for facial expressions was tested by presenting the same faces with neutral expressions and asking participants to determine the expression that had been displayed at study. In three experiments, happy expressions were remembered better than other expressions. The advantage of a happy face was observed even when faces were inverted (upside down) and even when the salient perceptual feature (broad grin) was controlled across conditions. These findings are couched in terms of source monitoring, in which memory for facial expressions reflects encoding of the dispositional context of a prior event.
29
Schwaninger A, Wallraven C, Cunningham DW, Chiller-Glaus SD. Processing of facial identity and expression: a psychophysical, physiological, and computational perspective. Prog Brain Res 2006; 156:321-43. [PMID: 17015089 DOI: 10.1016/s0079-6123(06)56018-2]
Abstract
A deeper understanding of how the brain processes visual information can be obtained by comparing results from complementary fields such as psychophysics, physiology, and computer science. In this chapter, empirical findings are reviewed with regard to the proposed mechanisms and representations for processing identity and emotion in faces. Results from psychophysics clearly show that faces are processed by analyzing component information (eyes, nose, mouth, etc.) and their spatial relationship (configural information). Results from neuroscience indicate separate neural systems for recognition of identity and facial expression. Computer science offers a deeper understanding of the required algorithms and representations, and provides computational modeling of psychological and physiological accounts. An interdisciplinary approach taking these different perspectives into account provides a promising basis for better understanding and modeling of how the human brain processes visual information for recognition of identity and emotion in faces.
Affiliation(s)
- Adrian Schwaninger
- Department of Bülthoff, Max Planck Institute for Biological Cybernetics, Spemannstr. 38, 72076 Tübingen, Germany
30
Caharel S, Courtay N, Bernard C, Lalonde R, Rebaï M. Familiarity and emotional expression influence an early stage of face processing: An electrophysiological study. Brain Cogn 2005; 59:96-100. [PMID: 16019117 DOI: 10.1016/j.bandc.2005.05.005]
Abstract
Recent data indicate that the processing of facial familiarity and emotional expression occurs at an early stage of information processing. The goal of the present study was to determine whether these two aspects interact at the structural encoding stage, as reflected by the N170 component of event-related potentials, in tasks requiring subjects to identify either whether faces were familiar or the nature of their emotional expression. The results indicate that neural responses to level of familiarity and to emotional expression were observable at this early processing stage, but without interacting. In particular, faces of personal importance to the subjects differed from those of less personal importance. Because familiarity did not interact with emotional expression at the behavioral or electrophysiological level, our results support the contention of parallel and independent processing of faces.
Affiliation(s)
- Stéphanie Caharel
- Université de Rouen, Faculté des Sciences, Laboratoire Psychologie et Neurosciences de la Cognition (PSY.CO EA-1780), 76821 Mont-Saint-Aignan Cedex, France
31
Ambadar Z, Schooler JW, Cohn JF. Deciphering the enigmatic face: the importance of facial dynamics in interpreting subtle facial expressions. Psychol Sci 2005; 16:403-10. [PMID: 15869701 DOI: 10.1111/j.0956-7976.2005.01548.x]
Abstract
Most studies investigating the recognition of facial expressions have focused on static displays of intense expressions. Consequently, researchers may have underestimated the importance of motion in deciphering the subtle expressions that permeate real-life situations. In two experiments, we examined the effect of motion on perception of subtle facial expressions and tested the hypotheses that motion improves affect judgment by (a) providing denser sampling of expressions, (b) providing dynamic information, (c) facilitating configural processing, and (d) enhancing the perception of change. Participants viewed faces depicting subtle facial expressions in four modes (single-static, multi-static, dynamic, and first-last). Experiment 1 demonstrated a robust effect of motion and suggested that this effect was due to the dynamic property of the expression. Experiment 2 showed that the beneficial effect of motion may be due more specifically to its role in perception of change. Together, these experiments demonstrated the importance of motion in identifying subtle facial expressions.
Affiliation(s)
- Zara Ambadar
- Department of Psychology, University of Pittsburgh, Pittsburgh, PA 15260, USA
32
Calder AJ, Jansen J. Configural coding of facial expressions: The impact of inversion and photographic negative. Visual Cognition 2005. [DOI: 10.1080/13506280444000418]
33
Abstract
The present study investigated whether facial expressions of emotion are recognized holistically (i.e., all at once, as an entire unit), as faces are, or featurally, as other nonface stimuli are. Evidence for holistic processing of faces comes from a reliable decrement in recognition performance when faces are presented inverted rather than upright. If emotion is recognized holistically, then recognition of facial expressions of emotion should be impaired by inversion. To test this, participants were shown schematic drawings of faces showing one of six emotions (surprise, sadness, anger, happiness, disgust, and fear) in either an upright or inverted orientation and were asked to indicate the emotion depicted. Participants were more accurate in the upright than in the inverted orientation, providing evidence in support of holistic recognition of facial emotion. Because recognition of facial expressions of emotion is important in social relationships, this research may have implications for the treatment of some social disorders.
Affiliation(s)
- Marte Fallshore
- Department of Psychology, Central Washington University, Ellensburg 98926-7575, USA
34
White M. Different spatial-relational information is used to recognise faces and emotional expressions. Perception 2002; 31:675-82. [PMID: 12092794 DOI: 10.1068/p3329]
Abstract
In a face photo in which the two eyes have been moved up into the forehead region, configural spatial relations are altered more than categorical relations; in a photo in which only one eye is moved up, categorical relations are altered more. Matching the identities of two faces was slower when an unaltered photo was paired with a two-eyes-moved photo than when paired with a one-eye-moved photo, implicating configural relations in face identification. But matching the emotional expressions of the same faces was slower when an unaltered photo was paired with a one-eye-moved photo than when paired with a two-eyes-moved photo, showing that expression recognition uses categorically coded relations. The findings also indicate that changing spatial-relational information affects the perceptual encoding of identities and expressions rather than their memory representations.
Affiliation(s)
- Murray White
- School of Psychology, Victoria University of Wellington, New Zealand
35
White M. Effect of photographic negation on matching the expressions and identities of faces. Perception 2001; 30:969-81. [PMID: 11578082 DOI: 10.1068/p3225]
Abstract
In four experiments, participants made speeded same-different responses to pairs of face photographs showing the same woman or different women with the same expression or different expressions. Compared with responses to positive pairs, negative pairs were matched more slowly on identity than on expression. A secondary finding showed that face expressions (same, different) influenced identity responses, and identities influenced expression responses, equally for positive and negative pairs. The independence of this irrelevant-dimension effect from the contrast effect supports the conclusion, required by the main finding, that negation slows perceptual encoding of the surface-based information used for identification more than it slows encoding of the edge-based information used for expression recognition.
Affiliation(s)
- M White
- School of Psychology, Victoria University of Wellington, New Zealand
36
Calder AJ, Burton AM, Miller P, Young AW, Akamatsu S. A principal component analysis of facial expressions. Vision Res 2001; 41:1179-208. [PMID: 11292507 DOI: 10.1016/s0042-6989(01)00002-5]
Abstract
Pictures of facial expressions from the Ekman and Friesen set (Ekman, P., & Friesen, W. V. (1976). Pictures of facial affect. Palo Alto, California: Consulting Psychologists Press) were submitted to a principal component analysis (PCA) of their pixel intensities. The output of the PCA was submitted to a series of linear discriminant analyses, which revealed three principal findings: (1) a PCA-based system can support facial expression recognition; (2) continuous two-dimensional models of emotion (e.g., Russell, J. A. (1980). A circumplex model of affect. Journal of Personality and Social Psychology, 39, 1161-1178) are reflected in the statistical structure of the Ekman and Friesen facial expressions; and (3) components coding facial expression information are largely different from components coding facial identity information. The implications for models of face processing are discussed.
Affiliation(s)
- A J Calder
- MRC Cognition and Brain Sciences Unit, 15 Chaucer Road, CB2 2EF, Cambridge, UK