1. Plisiecki H, Sobieszek A. Emotion topology: extracting fundamental components of emotions from text using word embeddings. Front Psychol 2024; 15:1401084. PMID: 39439759; PMCID: PMC11494860; DOI: 10.3389/fpsyg.2024.1401084.
Abstract
This exploratory study examined the potential of word embeddings, an automated numerical representation of written text, as a novel method for emotion decomposition analysis. Drawing from a substantial dataset scraped from a social media site, we constructed emotion vectors to extract the dimensions of emotions, as annotated by the readers of the texts, directly from human language. Our findings demonstrated that word embeddings yield emotional components akin to those found in previous literature, offering an alternative perspective not bounded by theoretical presuppositions, and showing that the dimensional structure of emotions is reflected in the semantic structure of their text-based expressions. Our study highlights word embeddings as a promising tool for uncovering the nuances of human emotions, comments on the potential of this approach for other psychological domains, and provides a basis for future studies. The exploratory nature of this research paves the way for further development and refinement of the method, promising to enrich our understanding of emotional constructs and psychological phenomena in a more ecologically valid and data-driven manner.
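As a rough illustration of this kind of pipeline (everything below, including the thresholds, dimensionality, and averaging scheme, is an assumption rather than the authors' code), one emotion vector can be built per annotated emotion and the set decomposed with PCA:

```python
# Illustrative sketch (not the authors' pipeline): build emotion vectors
# from word embeddings and decompose them into components with PCA.
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical inputs: a 300-d embedding per text (e.g., the mean of its
# word vectors) and reader annotations for several emotions on the texts.
n_texts, dim = 1000, 300
text_vecs = rng.normal(size=(n_texts, dim))
emotions = ["joy", "anger", "fear", "sadness", "disgust"]
ratings = rng.random(size=(n_texts, len(emotions)))  # 0..1 per emotion

# One "emotion vector" per emotion: mean embedding of strongly annotated
# texts minus mean embedding of weakly annotated ones.
emo_vecs = np.stack([
    text_vecs[ratings[:, j] > 0.8].mean(axis=0)
    - text_vecs[ratings[:, j] < 0.2].mean(axis=0)
    for j in range(len(emotions))
])

# PCA over the emotion vectors: the leading components play the role of
# candidate fundamental dimensions of emotion.
centered = emo_vecs - emo_vecs.mean(axis=0)
_, s, vt = np.linalg.svd(centered, full_matrices=False)
explained = s**2 / np.sum(s**2)
print("variance explained by components:", np.round(explained, 3))
```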
Affiliation(s)
- Hubert Plisiecki, Research Lab for the Digital Social Sciences, IFIS PAN, Warsaw, Poland
- Adam Sobieszek, Department of Psychology, University of Warsaw, Warsaw, Poland
2. Zhao C, Zeng Q. Mechanism behind overestimating the duration of fearful expressions: The role of arousal and memory. Acta Psychol (Amst) 2024; 250:104516. PMID: 39418764; DOI: 10.1016/j.actpsy.2024.104516.
Abstract
BACKGROUND Previous studies have demonstrated that individuals overestimate the duration of fear-related stimuli compared with relatively neutral stimuli. However, the physiological and psychological mechanisms behind this effect remain unclear. This study investigates the overestimation of the perceived duration of fearful faces and its relationship with general cognitive abilities (short-term memory, working memory, and attentional inhibition). METHOD Emotional pictures were selected from the Chinese Facial Affective Picture System. A total of 85 university students (43 females and 42 males, aged 20-24 years) participated in the experiments at a university. In Experiment 1, a temporal bisection task (300 ms: 1200 ms) was used to explore the perceptual overestimation of the duration of fearful faces and its relationship with these general cognitive abilities. In Experiment 2, the short and long standard time intervals were set to 1200 ms and 4800 ms, respectively, with the other conditions remaining the same as in Experiment 1. RESULTS Both experiments revealed that participants overestimated the duration of fearful faces compared with that of neutral faces. Experiment 1 indicated no significant correlation between short-term memory, working memory, attention inhibition tests, and the overestimation effect. Experiment 2 revealed a positive correlation between working memory test scores, short-term memory test scores, and the overestimation effect, as well as temporal sensitivity. CONCLUSION Individuals tend to overestimate the duration of fearful faces, and the influence of arousal and memory is modulated by the length of the target time intervals.
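For readers unfamiliar with the temporal bisection measure, the sketch below shows one conventional way to quantify overestimation as a shift in the bisection point; the response proportions and the interpolation rule are illustrative assumptions, not the authors' analysis.

```python
# Minimal sketch: in a temporal bisection task, overestimation appears as a
# leftward shift of the psychometric function, i.e., a lower bisection
# point (BP50) for fearful than for neutral faces.
import numpy as np

probes = np.array([300, 450, 600, 750, 900, 1050, 1200])  # ms, Exp. 1 range

def bisection_point(p_long):
    """Duration at which p('long') crosses 0.5, by linear interpolation."""
    i = np.searchsorted(p_long, 0.5)
    x0, x1, y0, y1 = probes[i - 1], probes[i], p_long[i - 1], p_long[i]
    return x0 + (0.5 - y0) * (x1 - x0) / (y1 - y0)

# Hypothetical proportions of "long" responses per probe duration.
p_neutral = np.array([0.05, 0.15, 0.35, 0.55, 0.75, 0.90, 0.97])
p_fearful = np.array([0.08, 0.22, 0.48, 0.68, 0.84, 0.94, 0.98])

bp_n, bp_f = bisection_point(p_neutral), bisection_point(p_fearful)
print(f"BP50 neutral {bp_n:.0f} ms, fearful {bp_f:.0f} ms, "
      f"overestimation shift {bp_n - bp_f:.0f} ms")
```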
Affiliation(s)
- Chunni Zhao, School of Marxism, Foshan University, Foshan 528011, China
- Qing Zeng, School of Marxism, Jinan University, Guangzhou 510632, China
3. Bress KS, Cascio CJ. Sensorimotor regulation of facial expression - An untouched frontier. Neurosci Biobehav Rev 2024; 162:105684. PMID: 38710425; DOI: 10.1016/j.neubiorev.2024.105684.
Abstract
Facial expression is a critical form of nonverbal social communication which promotes emotional exchange and affiliation among humans. Facial expressions are generated via precise contraction of the facial muscles, guided by sensory feedback. While the neural pathways underlying facial motor control are well characterized in humans and primates, it remains unknown how tactile and proprioceptive information reaches these pathways to guide facial muscle contraction. Thus, despite the importance of facial expressions for social functioning, little is known about how they are generated as a unique sensorimotor behavior. In this review, we highlight current knowledge about sensory feedback from the face and how it is distinct from other body regions. We describe connectivity between the facial sensory and motor brain systems, and call attention to the other brain systems which influence facial expression behavior, including vision, gustation, emotion, and interoception. Finally, we petition for more research on the sensory basis of facial expressions, asserting that incomplete understanding of sensorimotor mechanisms is a barrier to addressing atypical facial expressivity in clinical populations.
Affiliation(s)
- Kimberly S Bress, Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Carissa J Cascio, Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
4. Faghel-Soubeyrand S, Richoz AR, Waeber D, Woodhams J, Caldara R, Gosselin F, Charest I. Neural computations in prosopagnosia. Cereb Cortex 2024; 34:bhae211. PMID: 38795358; PMCID: PMC11127037; DOI: 10.1093/cercor/bhae211.
Abstract
We report an investigation of the neural processes involved in the processing of faces and objects in brain-lesioned patient PS, a well-documented case of pure acquired prosopagnosia. We gathered a substantial dataset of high-density electrophysiological recordings from both PS and neurotypicals. Using representational similarity analysis, we produced time-resolved brain representations in a format that facilitates direct comparisons across time points, different individuals, and computational models. To understand how the lesions in PS's ventral stream affect the temporal evolution of her brain representations, we computed the temporal generalization of her brain representations. We uncovered that PS's early brain representations exhibit an unusual similarity to later representations, implying an excessive generalization of early visual patterns. To reveal the underlying computational deficits, we correlated PS's brain representations with those of deep neural networks (DNNs). We found that the computations underlying PS's brain activity bore a closer resemblance to the early layers of a visual DNN than those of controls, whereas the brain representations of neurotypicals resembled the model's later layers more closely than PS's did. We confirmed PS's deficits in high-level brain representations by demonstrating that her brain representations exhibited less similarity with those of a DNN of semantics.
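A hedged sketch of the temporal generalization idea as applied to RSA output: correlate the representational dissimilarity matrix (RDM) at each time point with the RDM at every other time point, so that abnormal persistence of early representations shows up as elevated early-to-late similarity. The data, distance measure, and correlation choice below are assumptions, not the study's exact pipeline.

```python
import numpy as np
from scipy.stats import spearmanr

rng = np.random.default_rng(1)
# Hypothetical time-resolved sensor patterns: time x conditions x sensors.
n_times, n_conds, n_sensors = 50, 20, 64
data = rng.normal(size=(n_times, n_conds, n_sensors))

def rdm(patterns):
    """Upper triangle of a condition-by-condition correlation-distance RDM."""
    r = np.corrcoef(patterns)
    iu = np.triu_indices(len(r), k=1)
    return 1.0 - r[iu]

rdms = np.stack([rdm(data[t]) for t in range(n_times)])

# Time x time generalization matrix: rank-correlate RDMs across time points.
gen = np.array([[spearmanr(rdms[i], rdms[j])[0] for j in range(n_times)]
                for i in range(n_times)])
print(gen.shape)  # (50, 50); high off-diagonal values = persisting geometry
```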
Affiliation(s)
- Simon Faghel-Soubeyrand, Département de psychologie, Université de Montréal, 90 av. Vincent D'indy, Montreal, H2V 2S9, Canada; Department of Experimental Psychology, University of Oxford, Anna Watts Building, Woodstock Rd, Oxford OX2 6GG, UK
- Anne-Raphaelle Richoz, Département de psychologie, Université de Fribourg, Rue P.A. de Faucigny 2, 1700 Fribourg, Switzerland
- Delphine Waeber, Département de psychologie, Université de Fribourg, Rue P.A. de Faucigny 2, 1700 Fribourg, Switzerland
- Jessica Woodhams, School of Psychology, University of Birmingham, Hills Building, Edgbaston Park Rd, Birmingham B15 2TT, UK
- Roberto Caldara, Département de psychologie, Université de Fribourg, Rue P.A. de Faucigny 2, 1700 Fribourg, Switzerland
- Frédéric Gosselin, Département de psychologie, Université de Montréal, 90 av. Vincent D'indy, Montreal, H2V 2S9, Canada
- Ian Charest, Département de psychologie, Université de Montréal, 90 av. Vincent D'indy, Montreal, H2V 2S9, Canada
5. Tal S, Ben-David Sela T, Dolev-Amit T, Hel-Or H, Zilcha-Mano S. Reactivity and stability in facial expressions as an indicator of therapeutic alliance strength. Psychother Res 2024:1-15. PMID: 38442022; DOI: 10.1080/10503307.2024.2311777.
Abstract
Objective: Aspects of our emotional state are constantly being broadcast via our facial expressions. Psychotherapeutic theories highlight the importance of emotional dynamics between patients and therapists for an effective therapeutic relationship. Two emotional dynamics suggested by the literature are emotional reactivity (i.e., when one person is reacting to the other) and emotional stability (i.e., when a person has a tendency to remain in a given emotional state). Yet, little is known empirically about the association between these dynamics and the therapeutic alliance. This study investigates the association between the therapeutic alliance and the emotional dynamics of reactivity and stability, as manifested in the facial expressions of patients and therapists within the session. Methods: Ninety-four patients with major depressive disorder underwent short-term treatment for depression (N = 1256 sessions). Results: Both therapist reactivity and stability were associated with the alliance, across all time spans. Patient reactivity was associated with the alliance only in a short time span (1 s). Conclusions: These findings may guide therapists in the field to attend not only to their emotional reaction to their patients, but also to their own unique presence in the therapy room.
Affiliation(s)
- Shachaf Tal, Department of Psychology, University of Haifa, Haifa, Israel
- Hagit Hel-Or, Department of Computer Science, University of Haifa, Haifa, Israel
6. Zhu A, Boonipat T, Cherukuri S, Lin J, Bite U. How Brow Rotation Affects Emotional Expression Utilizing Artificial Intelligence. Aesthetic Plast Surg 2023; 47:2552-2560. PMID: 37626138; DOI: 10.1007/s00266-023-03615-5.
Abstract
BACKGROUND It is well known that brow position affects emotional expression. However, there is little literature on how, and to what degree, this change in emotional expression happens. Previous studies on this topic have utilized manual rating; this method of study remains small-scale and labor intensive. Our objective is to correlate manual brow rotations with emotional outcomes using artificial intelligence, to objectively determine how specific brow manipulations affect human expression. METHODS We included 53 brow-lift patients in this study. Pre-operative patients' brows were rotated to -20, -10, +10, and +20 degrees with respect to the central axis of their existing brow using PIXLR, a cloud-based set of image editing tools and utilities. These images were analyzed using FaceReader, a validated software package that uses computer vision technology for facial expression recognition. The primary facial emotion and the intensity of facial action units (0 = no action unit detected to 4 = most intense action unit detected) generated by the software were recorded. RESULTS 265 total images [5 images (pre-operative, -20 degree brow rotation, -10, +10, and +20) per patient] were analyzed using FaceReader. The primary emotion detected in the majority of images was neutral. The percentage of disgust in patients' expressions, as detected by FaceReader, increased with increased positive brow rotation (1.76% disgust detected at -20 degrees, 2.09% at -10 degrees, 2.65% at neutral, 2.61% at +10 degrees, and 2.95% at +20 degrees). In contrast, the percentage of sadness in patients' expressions decreased with increased positive brow rotation (29.92% sadness detected at -20 degrees, 21.5% at -10 degrees, 11.42% at neutral, 15.75% at +10 degrees, and 12.86% at +20 degrees). Our facial action unit analysis corresponded with the primary emotion analysis. The intensity of the inner brow raiser decreased with increased positive brow rotation (8.54% at -20 degrees, 4.21% at -10 degrees, 1.48% at neutral, 0.84% at +10 degrees, and 0.76% at +20 degrees). The intensity of the outer brow raiser increased with increased positive brow rotation (0.97% at -20 degrees, 0.45% at -10 degrees, 1.12% at neutral, 5.45% at +10 degrees, and 11.19% at +20 degrees). CONCLUSION We demonstrated that increasing the degree of brow rotation correlated positively with the percentage of disgust and inversely with the percentage of sadness detected by FaceReader. This study demonstrated how different manipulated brow positions affect emotional outcomes, using artificial intelligence. Physicians can use these findings to better understand how brow-lifts can affect the perceived emotion of their patients. LEVEL OF EVIDENCE III This journal requires that authors assign a level of evidence to each article. For a full description of these Evidence-Based Medicine ratings, please refer to the Table of Contents or the online Instructions to Authors at www.springer.com/00266.
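A quick numerical check of the reported trend, using only the percentages quoted in the abstract above:

```python
# Worked check on the dose-response trend reported in the abstract:
# sadness should correlate negatively, and disgust positively, with the
# brow rotation angle.
import numpy as np

angles = np.array([-20, -10, 0, 10, 20])                  # degrees of rotation
sadness = np.array([29.92, 21.5, 11.42, 15.75, 12.86])    # % detected
disgust = np.array([1.76, 2.09, 2.65, 2.61, 2.95])        # % detected

print("r(angle, sadness) =", np.corrcoef(angles, sadness)[0, 1].round(2))
print("r(angle, disgust) =", np.corrcoef(angles, disgust)[0, 1].round(2))
```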
Affiliation(s)
- Agnes Zhu, Mayo Clinic Alix School of Medicine, 200 First St. SW, Rochester, MN 55905, USA
- Sai Cherukuri, Department of Plastic Surgery, Mayo Clinic, Rochester, MN, USA
- Jason Lin, Division of Plastic and Reconstructive Surgery, Saint Louis University, St. Louis, MO, USA
- Uldis Bite, Department of Plastic Surgery, Mayo Clinic, Rochester, MN, USA
7. Andrews TJ, Rogers D, Mileva M, Watson DM, Wang A, Burton AM. A narrow band of image dimensions is critical for face recognition. Vision Res 2023; 212:108297. PMID: 37527594; DOI: 10.1016/j.visres.2023.108297.
Abstract
A key challenge in human and computer face recognition is to differentiate information that is diagnostic for identity from other sources of image variation. Here, we used a combined computational and behavioural approach to reveal critical image dimensions for face recognition. Behavioural data were collected using a sorting and matching task with unfamiliar faces and a recognition task with familiar faces. Principal components analysis was used to reveal the dimensions across which the shape and texture of the faces in these tasks varied. We then asked which image dimensions were able to predict behavioural performance across these tasks. We found that the ability to predict behavioural responses in the unfamiliar face tasks increased when the early PCA dimensions (i.e., those accounting for most variance) of shape and texture were removed from the analysis. Image similarity also predicted the output of a computer model of face recognition, but again only when the early image dimensions were removed from the analysis. Finally, we found that recognition of familiar faces increased when the early image dimensions were removed, decreased when intermediate dimensions were removed, but returned to baseline when only later dimensions were removed. Together, these findings suggest that the early image dimensions reflect ambient changes, such as changes in viewpoint or lighting, that do not contribute to face recognition, whereas a narrow band of image dimensions for shape and texture is critical for the recognition of identity in humans and in computer models of face recognition.
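A minimal sketch of the dimension-removal logic (the data and component counts are stand-ins, not the authors' pipeline): project faces onto PCA dimensions, drop the earliest high-variance dimensions, and compute image similarities from what remains.

```python
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(2)
images = rng.normal(size=(200, 50 * 50))  # stand-in for aligned face images

pca = PCA(n_components=100).fit(images)
scores = pca.transform(images)

def similarity_without_early_dims(scores, n_dropped):
    """Pairwise image correlation using only dimensions >= n_dropped."""
    kept = scores[:, n_dropped:]
    return np.corrcoef(kept)

sim_full = similarity_without_early_dims(scores, 0)
sim_late = similarity_without_early_dims(scores, 10)  # drop first 10 dims
# Each similarity matrix could then be used to predict behavioural
# performance on the sorting/matching tasks.
print(sim_full.shape, sim_late.shape)
```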
Affiliation(s)
- Daniel Rogers, Department of Psychology, University of York, York YO10 5DD, UK
- Mila Mileva, Department of Psychology, University of York, York YO10 5DD, UK
- David M Watson, Department of Psychology, University of York, York YO10 5DD, UK
- Ao Wang, Department of Psychology, University of York, York YO10 5DD, UK
- A Mike Burton, Department of Psychology, University of York, York YO10 5DD, UK
8. Bell L, Duchaine B, Susilo T. Dissociations between face identity and face expression processing in developmental prosopagnosia. Cognition 2023; 238:105469. PMID: 37216847; DOI: 10.1016/j.cognition.2023.105469.
Abstract
Individuals with developmental prosopagnosia (DPs) experience severe and lifelong deficits in recognising faces, but whether their deficits are selective to the processing of face identity or extend to the processing of face expression remains unclear. Clarifying this issue is important for understanding DP impairments and advancing theories of face processing. We compared identity and expression processing in a large sample of DPs (N = 124) using three different matching tasks that each assessed identity and expression processing with identical experimental formats. We ran each task in upright and inverted orientations and measured inversion effects to assess the integrity of upright-specific face processes. We report three main results. First, DPs showed large deficits at discriminating identity but only subtle deficits at discriminating expression. Second, DPs showed a reduced inversion effect for identity but a normal inversion effect for expression. Third, DPs' performance on the expression tasks was linked to autism traits, but their performance on the identity tasks was not. These results constitute several dissociations between identity and expression processing in DP, and they are consistent with the view that the core impairment in DP is highly selective to identity.
Affiliation(s)
- Lauren Bell, Victoria University of Wellington, New Zealand
9. Zhou L, Yang A, Meng M, Zhou K. Emerged human-like facial expression representation in a deep convolutional neural network. Sci Adv 2022; 8:eabj4383. PMID: 35319988; PMCID: PMC8942361; DOI: 10.1126/sciadv.abj4383.
Abstract
Recent studies found that deep convolutional neural networks (DCNNs) trained to recognize facial identities spontaneously learned features that support facial expression recognition, and vice versa. Here, we showed that the self-emerged expression-selective units in a VGG-Face trained for facial identification were tuned to distinct basic expressions and, importantly, exhibited hallmarks of human expression recognition (i.e., facial expression confusion and categorical perception). We then investigated whether the emergence of expression-selective units is attributable to face-specific experience or to domain-general processing, by conducting the same analysis on a VGG-16 trained for object classification and on an untrained VGG-Face without any visual experience, both with an architecture identical to that of the pretrained VGG-Face. Although similar expression-selective units were found in both DCNNs, they did not exhibit reliable human-like characteristics of facial expression perception. Together, these findings reveal the necessity of domain-specific visual experience with face identity for the development of facial expression perception, highlighting the contribution of nurture to the formation of human-like facial expression perception.
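One plausible way to identify expression-selective units, sketched under stated assumptions (the paper's exact selection criterion may differ): test each unit's activation for a reliable effect of expression category.

```python
import numpy as np
from scipy.stats import f_oneway

rng = np.random.default_rng(3)
n_images, n_units = 600, 4096            # e.g., a fully connected layer
labels = np.repeat(np.arange(6), 100)    # 6 basic expressions x 100 images
acts = rng.normal(size=(n_images, n_units))  # stand-in unit activations

# A unit is a candidate "expression-selective" unit if its activation
# differs reliably across the six expression categories (one-way ANOVA).
selective = []
for u in range(n_units):
    groups = [acts[labels == k, u] for k in range(6)]
    f, p = f_oneway(*groups)
    if p < 0.01:
        selective.append(u)
print(f"{len(selective)} candidate expression-selective units")
```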
Affiliation(s)
- Liqin Zhou, Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing 100875, China
- Anmin Yang, Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing 100875, China
- Ming Meng, Philosophy and Social Science Laboratory of Reading and Development in Children and Adolescents (South China Normal University), Ministry of Education, Guangzhou 510631, China; Guangdong Key Laboratory of Mental Health and Cognitive Science, School of Psychology, South China Normal University, Guangzhou 510631, China
- Ke Zhou, Beijing Key Laboratory of Applied Experimental Psychology, Faculty of Psychology, Beijing Normal University, Beijing 100875, China
10. Automatic Recognition of Macaque Facial Expressions for Detection of Affective States. eNeuro 2021; 8:ENEURO.0117-21.2021. PMID: 34799408; PMCID: PMC8664380; DOI: 10.1523/eneuro.0117-21.2021.
Abstract
Internal affective states produce external manifestations such as facial expressions. In humans, the Facial Action Coding System (FACS) is widely used to objectively quantify the elemental facial action units (AUs) that build complex facial expressions. A similar system has been developed for macaque monkeys, the Macaque FACS (MaqFACS); yet, unlike the human counterpart, which is already partially replaced by automatic algorithms, this system still requires labor-intensive coding. Here, we developed and implemented the first prototype for automatic MaqFACS coding. We applied the approach to the analysis of behavioral and neural data recorded from freely interacting macaque monkeys. The method achieved high performance in the recognition of six dominant AUs, generalizing between conspecific individuals (Macaca mulatta) and even between species (Macaca fascicularis). The study lays the foundation for fully automated detection of facial expressions in animals, which is crucial for investigating the neural substrates of social and affective states.
11. Fitousi D. Stereotypical Processing of Emotional Faces: Perceptual and Decisional Components. Front Psychol 2021; 12:733432. PMID: 34777118; PMCID: PMC8578932; DOI: 10.3389/fpsyg.2021.733432.
Abstract
People tend to associate anger with male faces and happiness or surprise with female faces. This angry-men-happy-women bias has been ascribed to either top-down (e.g., well-learned stereotypes) or bottom-up (e.g., shared morphological cues) processes. Dissociating these two theoretical alternatives has proved challenging. The current effort addresses this challenge by harnessing two complementary metatheoretical approaches to dimensional interaction: Garner's logic of inferring informational structure, and General Recognition Theory, a multidimensional extension of signal detection theory. Conjoint application of these two rigorous methodologies allowed us to: (a) uncover the internal representations that generate the angry-men-happy-women phenomenon, (b) disentangle varieties of perceptual (bottom-up) and decisional (top-down) sources of interaction, and (c) relate operational and theoretical meanings of dimensional independence. The results show that the dimensional interaction between emotion and gender is generated by varieties of perceptual and decisional biases. These outcomes document the involvement of both bottom-up (e.g., shared morphological structures) and top-down (e.g., stereotypes) factors in social perception.
Affiliation(s)
- Daniel Fitousi, Department of Psychology, Ariel University, Ariel, Israel
12. Murray T, O'Brien J, Sagiv N, Garrido L. The role of stimulus-based cues and conceptual information in processing facial expressions of emotion. Cortex 2021; 144:109-132. PMID: 34666297; DOI: 10.1016/j.cortex.2021.08.007.
Abstract
Face shape and surface textures are two important cues that aid in the perception of facial expressions of emotion. Additionally, this perception is also influenced by high-level emotion concepts. Across two studies, we use representational similarity analysis to investigate the relative roles of shape, surface, and conceptual information in the perception, categorisation, and neural representation of facial expressions. In Study 1, 50 participants completed a perceptual task designed to measure the perceptual similarity of expression pairs, and a categorical task designed to measure the confusability between expression pairs when assigning emotion labels to a face. We used representational similarity analysis and constructed three models of the similarities between emotions, each based on distinct information. Two models were based on stimulus-based cues (face shapes and surface textures) and one model was based on emotion concepts. Using multiple linear regression, we found that behaviour during both tasks was related to the similarity of emotion concepts. The model based on face shapes was more strongly related to behaviour in the perceptual task than in the categorical task, and the model based on surface textures was more strongly related to behaviour in the categorical task than in the perceptual task. In Study 2, 30 participants viewed facial expressions while undergoing fMRI, allowing for the measurement of brain representational geometries of facial expressions of emotion in three core face-responsive regions (the Fusiform Face Area, Occipital Face Area, and Superior Temporal Sulcus) and in a region involved in theory of mind (Medial Prefrontal Cortex). Across all four regions, the representational distances between facial expression pairs were related to the similarities of emotion concepts, but not to either of the stimulus-based cues. Together, these results highlight the important top-down influence of high-level emotion concepts both in behavioural tasks and in the neural representation of facial expressions.
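A compact sketch of the model-comparison logic (pair counts, values, and weights are illustrative, not the study's data): regress behavioural dissimilarities between expression pairs onto the shape, texture, and concept model RDMs.

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(4)
n_pairs = 15                    # e.g., 6 expressions -> 15 unordered pairs
shape_rdm = rng.random(n_pairs)
texture_rdm = rng.random(n_pairs)
concept_rdm = rng.random(n_pairs)
# Hypothetical behavioural dissimilarities, built here so concepts dominate.
behaviour = (0.2 * shape_rdm + 0.1 * texture_rdm
             + 0.6 * concept_rdm + 0.1 * rng.random(n_pairs))

# Multiple linear regression of the behavioural RDM on the three model RDMs.
X = np.column_stack([shape_rdm, texture_rdm, concept_rdm])
fit = LinearRegression().fit(X, behaviour)
for name, b in zip(["shape", "texture", "concepts"], fit.coef_):
    print(f"{name}: beta = {b:.2f}")
```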
Affiliation(s)
- Thomas Murray, Psychology Department, School of Biological and Behavioural Sciences, Queen Mary University London, United Kingdom
- Justin O'Brien, Centre for Cognitive Neuroscience, Department of Life Sciences, Brunel University London, United Kingdom
- Noam Sagiv, Centre for Cognitive Neuroscience, Department of Life Sciences, Brunel University London, United Kingdom
- Lúcia Garrido, Department of Psychology, City, University of London, United Kingdom
13. Yitzhak N, Pertzov Y, Aviezer H. The elusive link between eye-movement patterns and facial expression recognition. Soc Personal Psychol Compass 2021. DOI: 10.1111/spc3.12621.
Affiliation(s)
- Neta Yitzhak, Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel
- Yoni Pertzov, Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel
- Hillel Aviezer, Department of Psychology, Hebrew University of Jerusalem, Jerusalem, Israel
14. Does interpersonal emotion regulation ability change with age? Hum Resour Manag Rev 2021. DOI: 10.1016/j.hrmr.2021.100847.
15. Ghazouani H. A genetic programming-based feature selection and fusion for facial expression recognition. Appl Soft Comput 2021. DOI: 10.1016/j.asoc.2021.107173.
16. Barman A, Dutta P. Facial expression recognition using distance and shape signature features. Pattern Recognit Lett 2021. DOI: 10.1016/j.patrec.2017.06.018.
17. Colón YI, Castillo CD, O'Toole AJ. Facial expression is retained in deep networks trained for face identification. J Vis 2021; 21:4. PMID: 33821927; PMCID: PMC8039571; DOI: 10.1167/jov.21.4.4.
Abstract
Facial expressions distort visual cues for identification in two-dimensional images. Face processing systems in the brain must decouple image-based information from multiple sources to operate in the social world. Deep convolutional neural networks (DCNN) trained for face identification retain identity-irrelevant, image-based information (e.g., viewpoint). We asked whether a DCNN trained for identity also retains expression information that generalizes over viewpoint change. DCNN representations were generated for a controlled dataset containing images of 70 actors posing 7 facial expressions (happy, sad, angry, surprised, fearful, disgusted, neutral), from 5 viewpoints (frontal, 90° and 45° left and right profiles). Two-dimensional visualizations of the DCNN representations revealed hierarchical groupings by identity, followed by viewpoint, and then by facial expression. Linear discriminant analysis of full-dimensional representations predicted expressions accurately, mean 76.8% correct for happiness, followed by surprise, disgust, anger, neutral, sad, and fearful at 42.0%; chance ≈14.3%. Expression classification was stable across viewpoints. Representational similarity heatmaps indicated that image similarities within identities varied more by viewpoint than by expression. We conclude that an identity-trained, deep network retains shape-deformable information about expression and viewpoint, along with identity, in a unified form—consistent with a recent hypothesis for ventral visual stream processing.
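A minimal sketch of the classification step under stated assumptions (the descriptors, labels, and cross-validation scheme here are stand-ins): linear discriminant analysis over identity-trained DCNN descriptors, with expression as the label and viewpoint left free to vary.

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_imgs, dim = 70 * 7 * 5, 512    # 70 actors x 7 expressions x 5 viewpoints
feats = rng.normal(size=(n_imgs, dim))          # stand-in DCNN descriptors
expr = np.tile(np.repeat(np.arange(7), 5), 70)  # expression label per image

# Cross-validated expression decoding; chance is ~1/7 for 7 classes.
acc = cross_val_score(LinearDiscriminantAnalysis(), feats, expr, cv=5)
print(f"mean accuracy {acc.mean():.3f} (chance ~ {1/7:.3f})")
```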
Affiliation(s)
- Y Ivette Colón, Behavioral and Brain Sciences, The University of Texas at Dallas, TX, USA
- Carlos D Castillo, University of Maryland Institute for Advanced Computer Studies, MD, USA
- Alice J O'Toole, Behavioral and Brain Sciences, The University of Texas at Dallas, TX, USA
18. Facial expressions can be categorized along the upper-lower facial axis, from a perceptual perspective. Atten Percept Psychophys 2021; 83:2159-2173. PMID: 33759116; DOI: 10.3758/s13414-021-02281-6.
Abstract
A critical question, fundamental for building models of emotion, is how to categorize emotions. Previous studies have typically taken one of two approaches: (a) they focused on pre-perceptual visual cues, that is, how salient facial features or configurations are displayed; or (b) they focused on post-perceptual affective experiences, that is, how emotions affect behavior. In this study, we attempted to group emotions at a peri-perceptual processing level: it is well known that humans perceive different facial expressions differently; therefore, can we classify facial expressions into distinct categories in terms of their perceptual similarities? Here, using a novel non-lexical paradigm, we assessed the perceptual dissimilarities between 20 facial expressions using reaction times. Multidimensional-scaling analysis revealed that facial expressions were organized predominantly along the upper-lower face axis. Cluster analysis of the behavioral data delineated three superordinate categories, and eye-tracking measurements validated these clustering results. Interestingly, these superordinate categories can be conceptualized according to how facial displays interact with acoustic communication: one group comprises expressions that have salient mouth features and likely link to species-specific vocalization, for example, crying or laughing. The second group comprises visual displays with diagnostic features in both the mouth and the eye regions; these are not directly articulable but can be expressed prosodically, for example, sad or angry. Expressions in the third group are also whole-face expressions but are completely independent of vocalization, and are likely blends of two or more elementary expressions. We propose a theoretical framework to interpret this tripartite division, in which the distinct expression subsets are interpreted as successive phases in an evolutionary chain.
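A sketch of the analysis logic under stated assumptions (the RT-to-dissimilarity mapping and the data are illustrative): convert pairwise discrimination reaction times into dissimilarities, embed the 20 expressions with MDS, and cluster them.

```python
import numpy as np
from sklearn.manifold import MDS
from scipy.cluster.hierarchy import linkage, fcluster
from scipy.spatial.distance import squareform

rng = np.random.default_rng(6)
n_expr = 20
# Hypothetical mean RTs for discriminating each pair of expressions;
# slower discrimination is taken to mean the pair is more similar.
rt = rng.uniform(0.4, 0.9, size=(n_expr, n_expr))
rt = (rt + rt.T) / 2
np.fill_diagonal(rt, rt.max())
dissim = rt.max() - rt            # faster discrimination -> larger distance
np.fill_diagonal(dissim, 0.0)

# 2-D embedding of the perceptual space, then a 3-cluster solution.
coords = MDS(n_components=2, dissimilarity="precomputed",
             random_state=0).fit_transform(dissim)
clusters = fcluster(linkage(squareform(dissim), "average"), t=3,
                    criterion="maxclust")
print(coords.shape, np.bincount(clusters)[1:])
```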
19. Oliva A, Torre S, Taranto P, Delvecchio G, Brambilla P. Neural correlates of emotional processing in panic disorder: A mini review of functional magnetic resonance imaging studies. J Affect Disord 2021; 282:906-914. PMID: 33601734; DOI: 10.1016/j.jad.2020.12.085.
Abstract
BACKGROUND Panic Disorder (PD) is mainly characterized by recurrent unexpected panic attacks. Although the presence of emotional functioning deficits in PD is well established, their neural bases remain poorly understood. Therefore, in this review, we aim to summarize the available functional Magnetic Resonance Imaging (fMRI) studies investigating the neural correlates associated with the processing of facial emotional expressions in patients with PD. METHODS A comprehensive search on PubMed was performed and 10 fMRI studies meeting our inclusion criteria were included in this review. RESULTS The majority of the studies reported selective deficits in key brain regions within the prefronto-limbic network in PD patients. Specifically, a mixed picture of hyperactivation and hypoactivation patterns was observed in limbic regions, including the amygdala and the anterior cingulate cortex (ACC), as well as in areas within the prefrontal cortex (PFC), during either negatively or positively valenced stimuli, as compared to healthy controls (HC) or other anxiety disorders. LIMITATIONS The limited number of studies and their clinical and methodological heterogeneity make it difficult to draw definite conclusions on the neural mechanisms of emotional processing associated with PD. CONCLUSION Although the available evidence suggests the presence of selective dysfunctions in regions within the cortico-limbic network in PD patients during the processing of emotional stimuli, the direction of these abnormalities is still unclear. Therefore, future larger and more homogeneous studies are needed to elucidate the neural mechanisms underpinning the emotional processing dysfunctions often observed in PD patients.
Affiliation(s)
- Anna Oliva, Department of Neurosciences and Mental Health, Fondazione IRCCS Ca' Granda, Ospedale Maggiore Policlinico, Milan, Italy
- Silvia Torre, Department of Neurology and Laboratory of Neuroscience, IRCCS Istituto Auxologico Italiano, Milan, Italy
- Paola Taranto, Clinical and Health Psychology Unit, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Giuseppe Delvecchio, Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy
- Paolo Brambilla, Department of Neurosciences and Mental Health, Fondazione IRCCS Ca' Granda, Ospedale Maggiore Policlinico, Milan, Italy; Department of Pathophysiology and Transplantation, University of Milan, Milan, Italy
20. FFA and OFA Encode Distinct Types of Face Identity Information. J Neurosci 2021; 41:1952-1969. PMID: 33452225; DOI: 10.1523/jneurosci.1449-20.2020.
Abstract
Faces of different people elicit distinct fMRI patterns in several face-selective regions of the human brain. Here we used representational similarity analysis to investigate what type of identity-distinguishing information is encoded in three face-selective regions: fusiform face area (FFA), occipital face area (OFA), and posterior superior temporal sulcus (pSTS). In a sample of 30 human participants (22 females, 8 males), we used fMRI to measure brain activity patterns elicited by naturalistic videos of famous face identities, and compared their representational distances in each region with models of the differences between identities. We built diverse candidate models, ranging from low-level image-computable properties (pixel-wise, GIST, and Gabor-Jet dissimilarities), through higher-level image-computable descriptions (OpenFace deep neural network, trained to cluster faces by identity), to complex human-rated properties (perceived similarity, social traits, and gender). We found marked differences in the information represented by the FFA and OFA. Dissimilarities between face identities in FFA were accounted for by differences in perceived similarity, Social Traits, Gender, and by the OpenFace network. In contrast, representational distances in OFA were mainly driven by differences in low-level image-based properties (pixel-wise and Gabor-Jet dissimilarities). Our results suggest that, although FFA and OFA can both discriminate between identities, the FFA representation is further removed from the image, encoding higher-level perceptual and social face information.SIGNIFICANCE STATEMENT Recent studies using fMRI have shown that several face-responsive brain regions can distinguish between different face identities. It is however unclear whether these different face-responsive regions distinguish between identities in similar or different ways. We used representational similarity analysis to investigate the computations within three brain regions in response to naturalistically varying videos of face identities. Our results revealed that two regions, the fusiform face area and the occipital face area, encode distinct identity information about faces. Although identity can be decoded from both regions, identity representations in fusiform face area primarily contained information about social traits, gender, and high-level visual features, whereas occipital face area primarily represented lower-level image features.
21. Sandford A, Pec D, Hatfield AN. Contrast Negation Impairs Sorting Unfamiliar Faces by Identity: A Comparison With Original (Contrast-Positive) and Stretched Images. Perception 2020; 50:3-26. PMID: 33349150; DOI: 10.1177/0301006620982205.
Abstract
Recognition of unfamiliar faces is difficult in part due to variations in expressions, angles, and image quality. Studies suggest that shape and surface properties play varied roles in face learning, and that identification of unfamiliar faces relies on diagnostic pigmentation/surface reflectance relative to shape information. Here, participants sorted photo-cards of unfamiliar faces by identity; the faces were shown in their original, stretched, and contrast-negated forms, to examine the utility of diagnostic shape and surface properties in sorting unfamiliar faces by identity. In four experiments, we varied the presentation order of conditions (contrast-negated first, or original first with stretched second, across experiments) and whether the same or different photo-cards were seen across conditions. Stretching the images did not impair performance on any measure relative to the other conditions. Contrast negation generally exacerbated poor sorting by identity compared with the other conditions. However, seeing the contrast-negated photo-cards last mitigated some of the effects of contrast negation. Together, the results suggest an important role for surface properties such as pigmentation and reflectance in sorting faces by identity, and add to the literature on informational content and appearance variability in the discrimination of facial identity.
22. Watson DM, Brown BB, Johnston A. A data-driven characterisation of natural facial expressions when giving good and bad news. PLoS Comput Biol 2020; 16:e1008335. PMID: 33112846; PMCID: PMC7652307; DOI: 10.1371/journal.pcbi.1008335.
Abstract
Facial expressions carry key information about an individual's emotional state. Research into the perception of facial emotions typically employs static images of a small number of artificially posed expressions taken under tightly controlled experimental conditions. However, such approaches risk missing potentially important facial signals and within-person variability in expressions. The extent to which patterns of emotional variance in such images resemble more natural ambient facial expressions remains unclear. Here we advance a novel protocol for eliciting natural expressions from dynamic faces, using a dimension of emotional valence as a test case. Subjects were video recorded while delivering either positive or negative news to camera, but were not instructed to deliberately or artificially pose any specific expressions or actions. A PCA-based active appearance model was used to capture the key dimensions of facial variance across frames. Linear discriminant analysis distinguished facial change determined by the emotional valence of the message, and this also generalised across subjects. By sampling along the discriminant dimension, and back-projecting into the image space, we extracted a behaviourally interpretable dimension of emotional valence. This dimension highlighted changes commonly represented in traditional face stimuli such as variation in the internal features of the face, but also key postural changes that would typically be controlled away such as a dipping versus raising of the head posture from negative to positive valences. These results highlight the importance of natural patterns of facial behaviour in emotional expressions, and demonstrate the efficacy of using data-driven approaches to study the representation of these cues by the perceptual system. The protocol and model described here could be readily extended to other emotional and non-emotional dimensions of facial variance.
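A hedged sketch of the pipeline's core idea (the paper uses a PCA-based active appearance model; the features, labels, and sampling scheme below are assumptions): reduce the frames with PCA, fit a linear discriminant separating positive from negative valence, and back-project points sampled along that dimension into image space.

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

rng = np.random.default_rng(7)
frames = rng.normal(size=(2000, 60 * 60))   # stand-in face frames (pixels)
valence = rng.integers(0, 2, size=2000)     # 0 = negative, 1 = positive news

pca = PCA(n_components=30).fit(frames)
scores = pca.transform(frames)
lda = LinearDiscriminantAnalysis().fit(scores, valence)

# Sample along the discriminant axis and back-project into pixel space to
# visualize the extracted valence dimension.
axis = lda.coef_[0] / np.linalg.norm(lda.coef_[0])
for step in (-2.0, 0.0, 2.0):
    img = pca.inverse_transform(scores.mean(axis=0) + step * axis)
    print(step, img.shape)  # each is a synthetic face along the valence axis
```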
Affiliation(s)
- David M. Watson, School of Psychology, University of Nottingham, Nottingham, United Kingdom
- Ben B. Brown, School of Psychology, University of Nottingham, Nottingham, United Kingdom
- Alan Johnston, School of Psychology, University of Nottingham, Nottingham, United Kingdom
23. FaReT: A free and open-source toolkit of three-dimensional models and software to study face perception. Behav Res Methods 2020; 52:2604-2622. PMID: 32519291; DOI: 10.3758/s13428-020-01421-4.
Abstract
A problem in the study of face perception is that results can be confounded by poor stimulus control. Ideally, experiments should precisely manipulate facial features under study and tightly control irrelevant features. Software for 3D face modeling provides such control, but there is a lack of free and open source alternatives specifically created for face perception research. Here, we provide such tools by expanding the open-source software MakeHuman. We present a database of 27 identity models and six expression pose models (sadness, anger, happiness, disgust, fear, and surprise), together with software to manipulate the models in ways that are common in the face perception literature, allowing researchers to: (1) create a sequence of renders from interpolations between two or more 3D models (differing in identity, expression, and/or pose), resulting in a "morphing" sequence; (2) create renders by extrapolation in a direction of face space, obtaining 3D "anti-faces" and caricatures; (3) obtain videos of dynamic faces from rendered images; (4) obtain average face models; (5) standardize a set of models so that they differ only in selected facial shape features, and (6) communicate with experiment software (e.g., PsychoPy) to render faces dynamically online. These tools vastly improve both the speed at which face stimuli can be produced and the level of control that researchers have over face stimuli. We validate the face model database and software tools through a small study on human perceptual judgments of stimuli produced with the toolkit.
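A minimal sketch of the morphing and anti-face operations the toolkit implements; the function names and parameter vectors here are illustrative, not the FaReT API.

```python
import numpy as np

def morph_sequence(params_a, params_b, n_steps=10):
    """Linearly interpolate between two model parameter vectors."""
    ts = np.linspace(0.0, 1.0, n_steps)
    return [(1 - t) * params_a + t * params_b for t in ts]

def anti_face(params, mean_params, strength=1.0):
    """Extrapolate through the average face to obtain an 'anti-face'."""
    return mean_params - strength * (params - mean_params)

# Hypothetical 50-d shape parameter vectors for two identities.
identity_a = np.random.default_rng(8).normal(size=50)
identity_b = np.random.default_rng(9).normal(size=50)
mean_face = (identity_a + identity_b) / 2

seq = morph_sequence(identity_a, identity_b)        # a 10-step morph
anti_a = anti_face(identity_a, mean_face)           # anti-face of identity A
print(len(seq), anti_a.shape)
```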
24. Zhu D, Chen T, Zhen N, Niu R. Monitoring the effects of open-pit mining on the eco-environment using a moving window-based remote sensing ecological index. Environ Sci Pollut Res Int 2020; 27:15716-15728. PMID: 32086733; DOI: 10.1007/s11356-020-08054-2.
Abstract
Environmental problems caused by mines have been increasing. As one of the most serious types of mining damage to the eco-environment, open pits have been a focus of monitoring and management. Previous studies have obtained effective results when evaluating the ecological quality of a mining area using the remote sensing ecological index (RSEI). However, the calculation of RSEI does not consider that, under natural conditions, ecological influence is spatially limited. To overcome this shortcoming, this paper proposes an improved RSEI based on a moving window model, namely the moving window-based remote sensing ecological index (MW-RSEI). This improved index agrees better with the First Law of Geography than RSEI. This study uses Landsat ETM/OLI/TIRS images to extract MW-RSEI information for a case area in Zhengzhou City, Henan Province, central China, in 2009 and 2018. The results revealed that the average value of MW-RSEI declined from 0.668 to 0.611 between 2009 and 2018, and that the main drivers of the deterioration of the eco-environment were land use/cover change (LUCC), mostly derived from urban expansion and mining. The serious impact of open pits on the eco-environment in mining areas is mainly due to their low vegetation cover; therefore, effectively managed open pits can have a positive impact on the mining environment. MW-RSEI provides valuable information on the eco-environment surrounding an open pit, which can be used for rapid and effective monitoring of the eco-environment in mining areas.
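A heavily hedged sketch of the moving-window idea (the exact MW-RSEI formulation is in the paper; the indicator layers and the local-anomaly step below are assumptions): RSEI is typically the first principal component of greenness, wetness, dryness, and heat indicators, and a moving window localizes each pixel's score to its neighbourhood.

```python
import numpy as np
from scipy.ndimage import uniform_filter

rng = np.random.default_rng(10)
h, w = 200, 200
# Stand-ins for the four standardized indicator layers (NDVI/WET/NDBSI/LST).
indicators = rng.random(size=(4, h, w))

# Global RSEI via PCA: project pixels onto the leading eigenvector of the
# indicator covariance matrix.
flat = indicators.reshape(4, -1).T
flat = (flat - flat.mean(axis=0)) / flat.std(axis=0)
cov = np.cov(flat, rowvar=False)
w1 = np.linalg.eigh(cov)[1][:, -1]          # leading eigenvector
rsei = (flat @ w1).reshape(h, w)

# Moving-window step: evaluate each pixel relative to its local context,
# so the score reflects only the surrounding neighbourhood.
window = 15
local_mean = uniform_filter(rsei, size=window)
mw_rsei = rsei - local_mean                 # local anomaly, illustrative only
print(mw_rsei.shape)
```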
Affiliation(s)
- Dongyu Zhu, Institute of Geophysics and Geomatics, China University of Geosciences, No 388, Lumo Road, Wuhan 430074, China
- Tao Chen, Institute of Geophysics and Geomatics, China University of Geosciences, No 388, Lumo Road, Wuhan 430074, China
- Na Zhen, Geological Environment Monitoring Institute of Henan Province, Zhengzhou 450006, China
- Ruiqing Niu, Institute of Geophysics and Geomatics, China University of Geosciences, No 388, Lumo Road, Wuhan 430074, China
25. Liu CH, Young AW, Basra G, Ren N, Chen W. Perceptual integration and the composite face effect. Q J Exp Psychol (Hove) 2020; 73:1101-1114. PMID: 31910718; DOI: 10.1177/1747021819899531.
Abstract
The composite face paradigm is widely used to investigate holistic perception of faces. In the paradigm, parts from different faces (usually the top and bottom halves) are recombined. The principal criterion for holistic perception is that responses involving the component parts of composites in which the parts are aligned into a face-like configuration are disrupted compared with the same parts in a misaligned (not face-like) format. This is often taken as evidence that seeing a whole face in the aligned condition interferes with perceiving its separate parts, but the extent to which the effect is perceptually driven remains unclear. We used salient perceptual categories of gender (male or female) and race (Asian or Caucasian appearance) to create composite stimuli from parts of faces that varied orthogonally on these characteristics. In Experiment 1, participants categorised the gender of the parts of aligned composite and misaligned images created from parts with the same (congruent) or different (incongruent) gender and the same (congruent) or different (incongruent) race. In Experiment 2, the same stimuli were used but the task changed to categorising race. In both experiments, there was a strong influence of the task-relevant manipulation on the composite effect, with slower responses to aligned stimuli with incongruent gender in Experiment 1 and incongruent race in Experiment 2. In contrast, the task-irrelevant variable (race in Experiment 1, gender in Experiment 2) did not exert much influence on the composite effect in either experiment. These findings show that although holistic integration of salient visual properties makes a strong contribution to the composite face effect, it clearly also involves targeted processing of an attended visual characteristic.
Affiliation(s)
- Chang Hong Liu, Department of Psychology, Bournemouth University, Poole, UK
- Govina Basra, Department of Psychology, Bournemouth University, Poole, UK
- Naixin Ren, State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, P.R. China
- Wenfeng Chen, Department of Psychology, Renmin University of China, Beijing, P.R. China
26. The Priming Effect of a Facial Expression of Surprise on the Discrimination of a Facial Expression of Fear. Curr Psychol 2019. DOI: 10.1007/s12144-017-9719-0.
27. Mileva M, Young AW, Kramer RS, Burton AM. Understanding facial impressions between and within identities. Cognition 2019; 190:184-198. DOI: 10.1016/j.cognition.2019.04.027.
28. Tüttenberg SC, Wiese H. Intentionally remembering or forgetting own- and other-race faces: Evidence from directed forgetting. Br J Psychol 2019; 111:570-597. PMID: 31264716; DOI: 10.1111/bjop.12413.
Abstract
People are better at remembering faces of their own ethnic group than faces of another ethnic group. This so-called own-race bias (ORB) has been explained in terms of differential perceptual expertise for own- and other-race faces or, alternatively, as resulting from socio-cognitive factors. To test predictions derived from the latter account, we examined item-method directed forgetting (DF), a paradigm sensitive to an intentional modulation of memory, for faces belonging to different ethnic and social groups. In a series of five experiments, participants during learning received cues following each face to either remember or forget the item, but at test were required to recognize all items irrespective of instruction. In Experiments 1 and 5, Caucasian participants showed DF for own-race faces only, while in Experiment 2, East Asian participants with considerable expertise for Caucasian faces demonstrated DF for both own- and other-race faces. Experiments 3 and 4 found clear DF for social in- and outgroup faces. These results suggest that a modulation of face memory by motivational processes is limited to faces with which we have acquired perceptual expertise. Thus, motivation alone is not sufficient to modulate memory for other-race faces and cannot fully explain the ORB.
29. Tummon HM, Allen J, Bindemann M. Facial Identification at a Virtual Reality Airport. Iperception 2019; 10:2041669519863077. PMID: 31321020; PMCID: PMC6628534; DOI: 10.1177/2041669519863077.
Abstract
Person identification at airports requires the comparison of a passport photograph with its bearer. In psychology, this process is typically studied with static pairs of face photographs that require identity-match (same person shown) versus mismatch (two different people) decisions, but this approach provides a limited proxy for studying how environment and social interaction factors affect this task. In this study, we explore the feasibility of virtual reality (VR) as a solution to this problem, by examining the identity matching of avatars in a VR airport. We show that facial photographs of real people can be rendered into VR avatars in a manner that preserves image and identity information (Experiments 1 to 3). We then show that identity matching of avatar pairs reflects similar cognitive processes to the matching of face photographs (Experiments 4 and 5). This pattern holds when avatar matching is assessed in a VR airport (Experiments 6 and 7). These findings demonstrate the feasibility of VR as a new method for investigating face matching in complex environments.
Collapse
Affiliation(s)
| | - John Allen
- School of Psychology, University of Kent, Canterbury, UK
| | | |
Collapse
|
30
|
Saffarian A, Shavaki YA, Shahidi GA, Jafari Z. Effect of Parkinson Disease on Emotion Perception Using the Persian Affective Voices Test. J Voice 2019; 33:580.e1-580.e9. [DOI: 10.1016/j.jvoice.2018.01.013] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2017] [Accepted: 01/16/2018] [Indexed: 12/01/2022]
|
31
|
Nemrodov D, Behrmann M, Niemeier M, Drobotenko N, Nestor A. Multimodal evidence on shape and surface information in individual face processing. Neuroimage 2019; 184:813-825. [DOI: 10.1016/j.neuroimage.2018.09.083] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2018] [Revised: 09/22/2018] [Accepted: 09/30/2018] [Indexed: 11/27/2022] Open
|
32
|
Connolly HL, Young AW, Lewis GJ. Recognition of facial expression and identity in part reflects a common ability, independent of general intelligence and visual short-term memory. Cogn Emot 2018; 33:1119-1128. [DOI: 10.1080/02699931.2018.1535425] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022]
Affiliation(s)
- Hannah L. Connolly
- Department of Psychology, Royal Holloway, University of London, Egham, UK
| | - Andrew W. Young
- Department of Psychology, University of York, Heslington, York, UK
| | - Gary J. Lewis
- Department of Psychology, Royal Holloway, University of London, Egham, UK
| |
Collapse
|
33
|
Abstract
Early theories on face perception posit that invariant (i.e., identity) and changeable (i.e., expression) facial aspects are processed separately. However, many researchers have countered the hypothesis of parallel processes with findings of interactions between identity and emotion perception. The majority of tasks measuring interactions between identity and emotion employ a selective attention design, in which participants are instructed to attend to one dimension (e.g., identity) while the other dimension varies orthogonally (e.g., emotion), but is task irrelevant. Recently, a divided attention design (i.e., the redundancy gain paradigm) in which both identity and emotion are task relevant was employed to assess the interaction between identity and emotion. A redundancy gain is calculated as the drop in reaction time on trials in which a target from both dimensions is present in the stimulus face (e.g., “sad Person A”), compared with trials with only a single target present (e.g., “sad” or “Person A”). Redundancy gains are hypothesized to point to an interactive activation of both dimensions, and as such, could complement designs adopting a selective attention task. The initial aim of the current study was to reproduce the earlier findings with this paradigm on identity and emotion perception (Yankouskaya, Booth, & Humphreys, Attention, Perception, & Psychophysics, 74(8), 1692–1711, 2012), but our study failed to replicate the results. In a series of subtasks, multiple aspects of the design were manipulated separately with the goal of shedding light on the factors that influence the redundancy gain effect in faces. A redundancy gain was eventually obtained after controlling for contingencies and stimulus presentation time.
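A minimal sketch of the redundancy-gain computation described above, using hypothetical reaction times; comparing against the faster of the two single-target conditions is one common convention, and nothing here is taken from the study itself:

```python
import numpy as np

def redundancy_gain(rt_redundant, rt_single_identity, rt_single_emotion):
    """Redundancy gain: how much faster redundant-target trials
    (e.g., 'sad Person A') are than the faster of the two
    single-target conditions ('sad' only, or 'Person A' only)."""
    fastest_single = min(np.mean(rt_single_identity), np.mean(rt_single_emotion))
    return fastest_single - np.mean(rt_redundant)  # positive = gain

# Hypothetical per-trial RTs in ms
gain = redundancy_gain([510, 495, 520], [560, 575, 550], [580, 565, 590])
print(f"redundancy gain: {gain:.1f} ms")  # ~53 ms
```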
Collapse
|
34
|
Gwinn OS, Matera CN, O'Neil SF, Webster MA. Asymmetric neural responses for facial expressions and anti-expressions. Neuropsychologia 2018; 119:405-416. [PMID: 30193846 PMCID: PMC6191349 DOI: 10.1016/j.neuropsychologia.2018.09.001] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/09/2018] [Revised: 08/27/2018] [Accepted: 09/02/2018] [Indexed: 01/23/2023]
Abstract
Face recognition requires identifying both the invariant characteristics that distinguish one individual from another and the variations within the individual that correspond to emotional expressions. Both have been postulated to be represented via a norm-based code, in which identity or expression are represented as deviations from an average or neutral prototype. We used Fast Periodic Visual Stimulation (FPVS) with electroencephalography (EEG) to compare neural responses for neutral faces, expressions and anti-expressions. Anti-expressions are created by projecting an expression (e.g. a happy face) through the neutral face to form the opposite facial shape (anti-happy). Expressions and anti-expressions thus differ from the norm by the same "configural" amount and have equivalent but opposite status with regard to their shape, but differ in their ecological validity. We examined whether neural responses to these complementary stimulus pairs were equivalent or asymmetric, and also tested for norm-based coding by testing whether stronger responses are elicited by expressions and anti-expressions than by neutral faces. Observers viewed 20 s sequences of 6 Hz alternations of neutral faces and expressions, neutral faces and anti-expressions, and expressions and anti-expressions. Responses were analyzed in the frequency domain. Significant responses at half the frequency of the presentation rate (3 Hz), indicating asymmetries in responses, were observed for all conditions. Inversion of the images reduced the size of this signal, indicating these asymmetries are not solely due to differences in the low-level properties of the images. While our results do not preclude a norm-based code for expressions, similar to identity, this representation (as measured by the FPVS EEG responses) may also include components sensitive to which configural distortions form meaningful expressions.
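The anti-expression construction lends itself to a worked example: each facial landmark is reflected about its neutral position, so the anti-expression differs from the norm by the same configural amount in the opposite direction. The sketch below is illustrative only; the landmark coordinates are hypothetical:

```python
import numpy as np

def make_anti_expression(neutral_landmarks, expression_landmarks):
    """Project an expression 'through' the neutral face:
    anti = neutral - (expression - neutral) = 2*neutral - expression."""
    neutral = np.asarray(neutral_landmarks, dtype=float)
    expression = np.asarray(expression_landmarks, dtype=float)
    return 2.0 * neutral - expression

# Toy example: a mouth-corner landmark raised 10 px by a smile
# is lowered by the same amount in the anti-expression.
neutral = np.array([[100.0, 200.0]])
happy = np.array([[100.0, 190.0]])
print(make_anti_expression(neutral, happy))  # [[100. 210.]]
```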
Collapse
Affiliation(s)
- O Scott Gwinn
- Department of Psychology, University of Nevada, Reno, 1664 N Virginia St, Reno, NV 89557, USA; School of Psychology, Flinders University, Sturt Rd, Bedford Park, Adelaide, South Australia 5042, Australia.
| | - Courtney N Matera
- Department of Psychology, University of Nevada, Reno, 1664 N Virginia St, Reno, NV 89557, USA
| | - Sean F O'Neil
- Department of Psychology, University of Nevada, Reno, 1664 N Virginia St, Reno, NV 89557, USA
| | - Michael A Webster
- Department of Psychology, University of Nevada, Reno, 1664 N Virginia St, Reno, NV 89557, USA
| |
Collapse
|
35
|
Weibert K, Flack TR, Young AW, Andrews TJ. Patterns of neural response in face regions are predicted by low-level image properties. Cortex 2018; 103:199-210. [PMID: 29655043 DOI: 10.1016/j.cortex.2018.03.009] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2017] [Revised: 01/26/2018] [Accepted: 03/13/2018] [Indexed: 11/30/2022]
Abstract
Models of face processing suggest that the neural response in different face regions is selective for higher-level attributes of the face, such as identity and expression. However, it remains unclear to what extent the response in these regions can also be explained by more basic organizing principles. Here, we used functional magnetic resonance imaging multivariate pattern analysis (fMRI-MVPA) to ask whether spatial patterns of response in the core face regions (occipital face area - OFA, fusiform face area - FFA, superior temporal sulcus - STS) can be predicted across different participants by lower level properties of the stimulus. First, we compared the neural response to face identity and viewpoint, by showing images of different identities from different viewpoints. The patterns of neural response in the core face regions were predicted by the viewpoint, but not the identity of the face. Next, we compared the neural response to viewpoint and expression, by showing images with different expressions from different viewpoints. Again, viewpoint, but not expression, predicted patterns of response in face regions. Finally, we show that the effect of viewpoint in both experiments could be explained by changes in low-level image properties. Our results suggest that a key determinant of the neural representation in these core face regions involves lower-level image properties rather than an explicit representation of higher-level attributes in the face. The advantage of a relatively image-based representation is that it can be used flexibly in the perception of faces.
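As a simplified illustration of predicting response patterns across participants, here is a correlation-based nearest-prototype classifier on simulated data; this is a generic sketch, not the authors' exact fMRI-MVPA pipeline, and all shapes and values are assumptions:

```python
import numpy as np

def predict_condition_across_participants(patterns):
    """patterns: array (n_subjects, n_conditions, n_voxels). For each
    left-out subject, correlate each of their condition patterns with the
    group-average pattern of every condition (from the remaining subjects)
    and predict the best-matching condition."""
    n_subj, n_cond, _ = patterns.shape
    correct = 0
    for s in range(n_subj):
        group = patterns[np.arange(n_subj) != s].mean(axis=0)  # (n_cond, n_voxels)
        for c in range(n_cond):
            r = [np.corrcoef(patterns[s, c], group[k])[0, 1] for k in range(n_cond)]
            correct += int(np.argmax(r) == c)
    return correct / (n_subj * n_cond)

rng = np.random.default_rng(0)
base = rng.normal(size=(4, 50))                       # shared condition structure
data = base[None] + 0.5 * rng.normal(size=(10, 4, 50))  # plus subject noise
print(predict_condition_across_participants(data))   # well above chance (0.25)
```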
Collapse
Affiliation(s)
- Katja Weibert
- Department of Psychology and York Neuroimaging Centre, University of York, York, United Kingdom
| | - Tessa R Flack
- Department of Psychology and York Neuroimaging Centre, University of York, York, United Kingdom
| | - Andrew W Young
- Department of Psychology and York Neuroimaging Centre, University of York, York, United Kingdom
| | - Timothy J Andrews
- Department of Psychology and York Neuroimaging Centre, University of York, York, United Kingdom.
| |
Collapse
|
36
|
Abstract
The fact that the face is a source of diverse social signals allows us to use face and person perception as a model system for asking important psychological questions about how our brains are organised. A key issue concerns whether we rely primarily on some form of generic representation of the common physical source of these social signals (the face) to interpret them, or instead create multiple representations by assigning different aspects of the task to different specialist components. Variants of the specialist components hypothesis have formed the dominant theoretical perspective on face perception for more than three decades, but despite this dominance of formally and informally expressed theories, the underlying principles and extent of any division of labour remain uncertain. Here, I discuss three important sources of constraint: first, the evolved structure of the brain; second, the need to optimise responses to different everyday tasks; and third, the statistical structure of faces in the perceiver's environment. I show how these constraints interact to determine the underlying functional organisation of face and person perception.
Collapse
|
37
|
Over H, Cook R. Where do spontaneous first impressions of faces come from? Cognition 2018; 170:190-200. [DOI: 10.1016/j.cognition.2017.10.002] [Citation(s) in RCA: 35] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/18/2017] [Revised: 10/03/2017] [Accepted: 10/04/2017] [Indexed: 10/18/2022]
|
38
|
Turano MT, Lao J, Richoz AR, de Lissa P, Degosciu SBA, Viggiano MP, Caldara R. Fear boosts the early neural coding of faces. Soc Cogn Affect Neurosci 2017; 12:1959-1971. [PMID: 29040780 PMCID: PMC5716185 DOI: 10.1093/scan/nsx110] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/15/2016] [Revised: 09/18/2017] [Accepted: 10/02/2017] [Indexed: 11/14/2022] Open
Abstract
The rapid extraction of facial identity and emotional expressions is critical for adaptive social interactions. These biologically relevant abilities have been associated with early neural responses on the face-sensitive N170 component. However, whether all facial expressions uniformly modulate the N170, and whether this effect occurs only when emotion categorization is task-relevant, is still unclear. To clarify this issue, we recorded high-resolution electrophysiological signals while 22 observers perceived the six basic expressions plus neutral. We used a repetition suppression paradigm, with an adaptor followed by a target face displaying the same identity and expression (trials of interest). We also included catch trials to which participants had to react, by varying identity (identity-task), expression (expression-task) or both (dual-task) on the target face. We extracted single-trial Repetition Suppression (stRS) responses using a data-driven spatiotemporal approach with a robust hierarchical linear model to isolate adaptation effects on the trials of interest. Regardless of the task, fear was the only expression modulating the N170, eliciting the strongest stRS responses. This observation was corroborated by distinct behavioral performance during the catch trials for this facial expression. Altogether, our data reinforce the view that fear elicits distinct neural processes in the brain, enhancing attention and facilitating the early coding of faces.
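The published analysis uses a data-driven spatiotemporal approach with a robust hierarchical linear model; the sketch below shows only a crude single-trial repetition-suppression index on simulated epochs, purely to fix ideas (all array shapes and values are hypothetical):

```python
import numpy as np

def single_trial_rs(adaptor_epochs, target_epochs, window):
    """Crude per-trial repetition-suppression index: the drop in mean
    amplitude from adaptor to target within a spatiotemporal window
    (a channel slice and a sample slice)."""
    chans, samples = window
    adapt = adaptor_epochs[:, chans, samples].mean(axis=(1, 2))
    target = target_epochs[:, chans, samples].mean(axis=(1, 2))
    return adapt - target  # one suppression value per trial

rng = np.random.default_rng(1)
adaptor = rng.normal(5.0, 1.0, size=(100, 64, 200))  # trials x channels x samples
target = rng.normal(4.2, 1.0, size=(100, 64, 200))   # suppressed response
rs = single_trial_rs(adaptor, target, (slice(20, 30), slice(80, 120)))
print(rs.mean())  # ~0.8 in this simulation
```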
Collapse
Affiliation(s)
- Maria Teresa Turano
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
- Department of Neuroscience, Psychology, Drug Research & Child's Health, University of Florence, Florence, Italy
| | - Junpeng Lao
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
| | - Anne-Raphaëlle Richoz
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
| | - Peter de Lissa
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
| | - Sarah B A Degosciu
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
| | - Maria Pia Viggiano
- Department of Neuroscience, Psychology, Drug Research & Child's Health, University of Florence, Florence, Italy
| | - Roberto Caldara
- Eye and Brain Mapping Laboratory (iBMLab), Department of Psychology, University of Fribourg, Fribourg, Switzerland
| |
Collapse
|
39
|
Maselli A, Dhawan A, Cesqui B, Russo M, Lacquaniti F, d’Avella A. Where Are You Throwing the Ball? I Better Watch Your Body, Not Just Your Arm! Front Hum Neurosci 2017; 11:505. [PMID: 29163094 PMCID: PMC5674933 DOI: 10.3389/fnhum.2017.00505] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2017] [Accepted: 10/06/2017] [Indexed: 11/13/2022] Open
Abstract
The ability to intercept or avoid a moving object, whether to catch a ball, snatch one's prey, or avoid the path of a predator, is a skill that has been acquired throughout evolution by many species in the animal kingdom. This requires processing early visual cues in order to program anticipatory motor responses tuned to the forthcoming event. Here, we explore the nature of the early kinematic cues that could inform an observer about the future direction of a ball projected with an unconstrained overarm throw. Our goal was to pinpoint the body segments that, throughout the temporal course of the throwing action, could provide key cues for accurately predicting the side of the outgoing ball. We recorded whole-body kinematics from twenty non-expert participants performing unconstrained overarm throws at four different targets placed on a vertical plane at 6 m distance. In order to characterize the spatiotemporal structure of the information embedded in the kinematics of the throwing action about the outgoing ball direction, we introduced a novel combination of dimensionality reduction and machine learning techniques. The recorded kinematics clearly show that throwing styles differed considerably across individuals, with corresponding inter-individual differences in the spatiotemporal structure of the thrower's predictability. We found that for most participants it is possible to predict the region where the ball hit the target plane, with an accuracy above 80%, as early as 400-500 ms before ball release. Interestingly, the body parts that provided the most informative cues about the action outcome varied with the throwing style and during the time course of the throwing action. Not surprisingly, at the very end of the action, the throwing arm is the most informative body segment. However, cues allowing for predictions to be made earlier than 200 ms before release are typically associated with other body parts, such as the lower limbs and the contralateral arm. These findings are discussed in the context of the sport-science literature on throwing and catching interactive tasks, as well as from the wider perspective of the role of sensorimotor coupling in interpersonal social interactions.
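As a rough sketch of the decoding approach (dimensionality reduction followed by a linear classifier), here is a PCA-plus-LDA pipeline on simulated kinematic features; the data, feature counts, and parameters are assumptions, not the study's:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Hypothetical data: whole-body kinematics unrolled per throw at one
# pre-release time point (n_throws x features); labels = target region.
rng = np.random.default_rng(2)
X = rng.normal(size=(200, 300))
y = rng.integers(0, 4, size=200)
X[np.arange(200), y] += 3.0  # inject label-dependent signal for the demo

clf = make_pipeline(PCA(n_components=20), LinearDiscriminantAnalysis())
scores = cross_val_score(clf, X, y, cv=5)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.25)")
```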
Collapse
Affiliation(s)
- Antonella Maselli
- Laboratory of Neuromotor Physiology, Santa Lucia Foundation, Rome, Italy
| | - Aishwar Dhawan
- Laboratory of Neuromotor Physiology, Santa Lucia Foundation, Rome, Italy
- Department of Biomechanics, Institute of Sukan Negara, Kuala Lumpur, Malaysia
| | - Benedetta Cesqui
- Department of Systems Medicine and Center of Space Biomedicine, University of Rome Tor Vergata, Rome, Italy
| | - Marta Russo
- Department of Systems Medicine and Center of Space Biomedicine, University of Rome Tor Vergata, Rome, Italy
| | - Francesco Lacquaniti
- Laboratory of Neuromotor Physiology, Santa Lucia Foundation, Rome, Italy
- Department of Systems Medicine and Center of Space Biomedicine, University of Rome Tor Vergata, Rome, Italy
| | - Andrea d’Avella
- Laboratory of Neuromotor Physiology, Santa Lucia Foundation, Rome, Italy
- Department of Biomedical and Dental Sciences and Morphofunctional Imaging, University of Messina, Messina, Italy
| |
Collapse
|
40
|
Olszanowski M, Kaminska OK, Winkielman P. Mixed matters: fluency impacts trust ratings when faces range on valence but not on motivational implications. Cogn Emot 2017; 32:1032-1051. [DOI: 10.1080/02699931.2017.1386622] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
| | | | - Piotr Winkielman
- SWPS University of Social Sciences and Humanities, Warsaw, Poland
- Department of Psychology, University of California, San Diego, USA
- Behavioural Science Group, Warwick Business School, University of Warwick, Coventry, UK
| |
Collapse
|
41
|
Abstract
We used highly variable, so-called 'ambient' images to test whether expressions affect the identity recognition of real-world facial images. Using movie segments of two actors unknown to our participants, we created image pairs - each image within a pair being captured from the same film segment. This ensured that, within pairs, variables such as lighting were constant whilst expressiveness differed. We created two packs of cards, one containing neutral face images, the other, their expressive counterparts. Participants sorted the card packs into piles, one for each perceived identity. As with previous studies, the perceived number of identities was higher than the veridical number of two. Interestingly, when looking within piles, we found a strong difference between the expressive and neutral sorting tasks. With expressive faces, identity piles were significantly more likely to contain cards of both identities. This finding demonstrates that, over and above other image variables, expressiveness variability can cause identity confusion; evidently, expression is not disregarded or factored out when we classify facial identity in real-world images. Our results provide clear support for a face processing architecture in which both invariant and changeable facial information may be drawn upon to drive our decisions of identity.
Collapse
|
42
|
Liu C, Liu Y, Iqbal Z, Li W, Lv B, Jiang Z. Symmetrical and Asymmetrical Interactions between Facial Expressions and Gender Information in Face Perception. Front Psychol 2017; 8:1383. [PMID: 28855882 PMCID: PMC5557826 DOI: 10.3389/fpsyg.2017.01383] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2017] [Accepted: 07/31/2017] [Indexed: 11/30/2022] Open
Abstract
To investigate the interaction between facial expressions and facial gender information during face perception, the present study matched the intensities of the two types of information in face images and then adopted the orthogonal condition of the Garner paradigm, in which gender and expression varied orthogonally, asking participants to judge the gender and expression of the faces. Gender and expression processing displayed a mutual interaction. On the one hand, the judgment of angry expressions occurred faster when presented with male facial images; on the other hand, the classification of the female gender occurred faster when presented with a happy facial expression than when presented with an angry facial expression. According to the event-related potential results, the expression classification was influenced by gender during the face structural processing stage (as indexed by N170), which indicates that facial gender promotes or interferes with the coding of facial expression features. However, gender processing was affected by facial expressions in more stages, including the early (P1) and late (LPC) stages of perceptual processing, suggesting that emotional expression influences gender processing mainly by directing attention.
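A minimal sketch of how an orthogonal Garner block can be constructed, with the two dimensions fully crossed so that neither predicts the other; the condition labels and trial counts are illustrative:

```python
import itertools
import random

def garner_orthogonal_block(n_repeats=10, seed=0):
    """Trial list for the orthogonal Garner condition: gender and
    expression vary independently, every combination appears equally
    often, and trial order is randomized."""
    levels = list(itertools.product(["male", "female"], ["angry", "happy"]))
    trials = levels * n_repeats
    random.Random(seed).shuffle(trials)
    return trials

block = garner_orthogonal_block()
print(len(block), block[:4])
# 40 trials; judging one dimension (e.g., gender) while the other varies
# orthogonally exposes interference from the task-irrelevant dimension.
```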
Collapse
Affiliation(s)
- Chengwei Liu
- School of Education, Hunan University of Science and Technology, Xiangtan, China; School of Psychology, Liaoning Normal University, Dalian, China
| | - Ying Liu
- School of Psychology, Liaoning Normal University, Dalian, China
| | - Zahida Iqbal
- School of Psychology, Liaoning Normal University, Dalian, China
| | - Wenhui Li
- College of Preschool and Primary Education, Shenyang Normal University, Shenyang, China
| | - Bo Lv
- Collaborative Innovation Center of Assessment toward Basic Education Quality, Beijing Normal University, Beijing, China
| | - Zhongqing Jiang
- School of Psychology, Liaoning Normal University, Dalian, China
| |
Collapse
|
43
|
Wang H, Ip C, Fu S, Sun P. Different underlying mechanisms for face emotion and gender processing during feature-selective attention: Evidence from event-related potential studies. Neuropsychologia 2017; 99:306-313. [DOI: 10.1016/j.neuropsychologia.2017.03.017] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2016] [Revised: 03/08/2017] [Accepted: 03/13/2017] [Indexed: 11/28/2022]
|
44
|
Gray KLH, Murphy J, Marsh JE, Cook R. Modulation of the composite face effect by unintended emotion cues. R Soc Open Sci 2017; 4:160867. [PMID: 28484607 PMCID: PMC5414244 DOI: 10.1098/rsos.160867] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 11/03/2016] [Accepted: 03/23/2017] [Indexed: 06/07/2023]
Abstract
When upper and lower regions from different emotionless faces are aligned to form a facial composite, observers 'fuse' the two halves together, perceptually. The illusory distortion induced by task-irrelevant ('distractor') halves hinders participants' judgements about task-relevant ('target') halves. This composite-face effect reveals a tendency to integrate feature information from disparate regions of intact upright faces, consistent with theories of holistic face processing. However, observers frequently perceive emotion in ostensibly neutral faces, contrary to the intentions of experimenters. This study sought to determine whether this 'perceived emotion' influences the composite-face effect. In our first experiment, we confirmed that the composite effect grew stronger as the strength of distractor emotion increased. Critically, effects of distractor emotion were induced by weak emotion intensities, and were incidental insofar as emotion cues hindered image matching, not emotion labelling per se. In Experiment 2, we found a correlation between the presence of perceived emotion in a set of ostensibly neutral distractor regions sourced from commonly used face databases, and the strength of illusory distortion they induced. In Experiment 3, participants completed a sequential matching composite task in which half of the distractor regions had been rated high, and half low, for perceived emotion. Significantly stronger composite effects were induced by the high-emotion distractor halves. These convergent results suggest that perceived emotion increases the strength of the composite-face effect induced by supposedly emotionless faces. These findings have important implications for the study of holistic face processing in typical and atypical populations.
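The logic of Experiment 2, correlating perceived-emotion ratings of ostensibly neutral distractor halves with the illusory distortion they induce, can be illustrated in a few lines; all numbers below are hypothetical:

```python
import numpy as np
from scipy.stats import pearsonr

# Hypothetical per-distractor data: mean perceived-emotion rating of each
# 'neutral' distractor half, and the matching-accuracy cost it induced
# on judgements about target halves.
emotion_ratings = np.array([1.2, 1.8, 2.5, 3.1, 2.0, 3.6, 1.5, 2.9])
accuracy_cost = np.array([0.04, 0.07, 0.11, 0.13, 0.08, 0.16, 0.05, 0.12])

r, p = pearsonr(emotion_ratings, accuracy_cost)
print(f"r = {r:.2f}, p = {p:.3f}")  # stronger perceived emotion, larger effect
```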
Collapse
Affiliation(s)
- Katie L. H. Gray
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
| | - Jennifer Murphy
- Department of Psychology, City, University of London, London, UK
- MRC Social, Genetic and Developmental Psychiatry Centre, Institute of Psychiatry, Psychology, and Neuroscience, King's College London, London, UK
| | - Jade E. Marsh
- School of Psychology and Clinical Language Sciences, University of Reading, Reading, UK
| | - Richard Cook
- Department of Psychology, City, University of London, London, UK
| |
Collapse
|
45
|
Abstract
Traditional accounts of gaze perception emphasise the geometric or configural cues present in the eye; the position of the iris in relation to the corner of the eye, for example. This kind of geometric account has been supported, in part, by findings that gaze judgments are impaired in faces rotated through 180 degrees, a manipulation known to disrupt the processing of relations between facial elements. However, studies involving this manipulation have confounded inversion of the face context with inversion of the eye region. The effects of inversion might therefore have been caused by a disruption of the computation of gaze direction from the eye region itself and/or a disruption of the influence that face context might exert on gaze processing. In the experiment reported here we independently manipulated eye orientation and the orientation of the face context, and measured participants' sensitivity to gaze direction. Performance was severely affected by inversion of the eyes, regardless of the orientation of the face, whereas face inversion had no significant effect on gaze sensitivity. Previous reports of a face-inversion effect on gaze perception can therefore be attributed to inversion of the eye region itself which, we suggest, disrupts some form of configural or relational processing that is normally involved in the computation of eye-gaze direction.
Collapse
Affiliation(s)
- Jenny Jenkins
- Department of Psychology, University of Stirling, Stirling FK9 4LA, Scotland, UK
| | | |
Collapse
|
46
|
Sormaz M, Young AW, Andrews TJ. Contributions of feature shapes and surface cues to the recognition of facial expressions. Vision Res 2016; 127:1-10. [PMID: 27425385 DOI: 10.1016/j.visres.2016.07.002] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/11/2015] [Revised: 06/07/2016] [Accepted: 07/07/2016] [Indexed: 11/20/2022]
Affiliation(s)
- Mladen Sormaz
- Department of Psychology, University of York, York YO10 5DD, UK
| | - Andrew W Young
- Department of Psychology, University of York, York YO10 5DD, UK
| | | |
Collapse
|
47
|
Ansuini C, Cavallo A, Campus C, Quarona D, Koul A, Becchio C. Are We Real When We Fake? Attunement to Object Weight in Natural and Pantomimed Grasping Movements. Front Hum Neurosci 2016; 10:471. [PMID: 27713695 PMCID: PMC5031600 DOI: 10.3389/fnhum.2016.00471] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2016] [Accepted: 09/06/2016] [Indexed: 11/13/2022] Open
Abstract
Behavioral and neuropsychological studies suggest that real actions and pantomimed actions tap, at least in part, different neural systems. Inspired by studies showing weight-attunement in real grasps, here we asked whether (and to what extent) the kinematics of pantomimed reach-to-grasp movements can reveal the weight of the pretended target. To address this question, we instructed participants (n = 15) either to grasp or to pretend to grasp two differently weighted objects, i.e., a light object and a heavy object. Using linear discriminant analysis, we then proceeded to classify the weight of the target - either real or pretended - on the basis of the recorded movement patterns. Classification analysis revealed that pantomimed reach-to-grasp movements retained information about object weight, although to a lesser extent than real grasp movements. These results are discussed in relation to the mechanisms underlying the control of real and pantomimed grasping movements.
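A sketch of weight classification from grasp kinematics using linear discriminant analysis with leave-one-out cross-validation; the three features and all values are simulated assumptions, not the recorded data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.model_selection import LeaveOneOut, cross_val_score

# Hypothetical features per trial: peak wrist velocity (m/s), maximum grip
# aperture (mm), relative time of peak aperture. Labels: 0 = light, 1 = heavy.
rng = np.random.default_rng(3)
light = rng.normal([1.10, 95.0, 0.62], [0.08, 5.0, 0.05], size=(40, 3))
heavy = rng.normal([0.95, 98.0, 0.70], [0.08, 5.0, 0.05], size=(40, 3))
X = np.vstack([light, heavy])
y = np.repeat([0, 1], 40)

acc = cross_val_score(LinearDiscriminantAnalysis(), X, y, cv=LeaveOneOut())
print(f"weight decoding: {acc.mean():.2f}")
# Per the study, real grasps should decode better than pantomimes;
# this sketch runs on simulated kinematics only.
```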
Collapse
Affiliation(s)
- Caterina Ansuini
- C'MON Unit, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
| | - Andrea Cavallo
- Department of Psychology, University of Turin, Torino, Italy
| | - Claudio Campus
- U-VIP Unit, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
| | - Davide Quarona
- C'MON Unit, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
| | - Atesh Koul
- C'MON Unit, Fondazione Istituto Italiano di Tecnologia, Genova, Italy
| | - Cristina Becchio
- C'MON Unit, Fondazione Istituto Italiano di Tecnologia, Genova, Italy; Department of Psychology, University of Turin, Torino, Italy
| |
Collapse
|
48
|
|
49
|
Bruyer R, Leclere S, Quinet P. Ethnic Categorisation of Faces is Not Independent of Face Identity. Perception 2016; 33:169-79. [PMID: 15109160 DOI: 10.1068/p5094] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
Is the extraction of a visually derived semantic code from faces (ethnicity) affected by face identity (familiarity) or not? The traditional view holds that this operation is performed independently of face identity, in parallel with the recognition of identity. However, some recent studies cast doubt on this parallel thesis regarding other visually derived semantic codes, namely facial expression, facial speech, apparent age, and gender. Twenty-eight Caucasian participants were enrolled in an ‘ethnic-decision’ task on morphed faces made of an Asiatic source face and a Caucasian source face, in the proportion of 70%–30%. Half of the original faces were previously made familiar by a learning procedure (associating the face, surname, occupation, and city of residence of the person displayed), while the remaining half were unfamiliar. The results showed clearly that ethnic decision was affected by face familiarity. This supports the thesis that identity recognition and the extraction of visually derived semantic codes are not performed independently of each other, leaving the ‘parallel-route’ hypothesis only weakly supported.
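A 70%–30% morph can be illustrated as a weighted blend of two aligned images. This is a texture-only sketch (published face morphing typically also warps facial shape), and the toy images are hypothetical:

```python
import numpy as np

def morph(face_a, face_b, weight_a=0.7):
    """Pixelwise morph of two aligned face images: weight_a of face_a
    blended with (1 - weight_a) of face_b, as in a 70%-30% morph."""
    a = np.asarray(face_a, dtype=float)
    b = np.asarray(face_b, dtype=float)
    return (weight_a * a + (1.0 - weight_a) * b).astype(np.uint8)

# Toy 2x2 grayscale 'images'
source_a = np.full((2, 2), 100, dtype=np.uint8)
source_b = np.full((2, 2), 200, dtype=np.uint8)
print(morph(source_a, source_b, 0.7))  # all pixels = 130
```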
Collapse
Affiliation(s)
- Raymond Bruyer
- Cognitive Neuroscience Research Unit (NESC), Department of Psychology, University of Louvain-la-Neuve, Place du Cardinal Mercier 10, B-1348 Louvain-la-Neuve, Belgium.
| | | | | |
Collapse
|
50
|
Sormaz M, Watson DM, Smith WA, Young AW, Andrews TJ. Modelling the perceptual similarity of facial expressions from image statistics and neural responses. Neuroimage 2016; 129:64-71. [DOI: 10.1016/j.neuroimage.2016.01.041] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/14/2015] [Revised: 12/17/2015] [Accepted: 01/18/2016] [Indexed: 10/22/2022] Open
|