1. Becker C, Conduit R, Chouinard PA, Laycock R. Can deepfakes be used to study emotion perception? A comparison of dynamic face stimuli. Behav Res Methods 2024; 56:7674-7690. PMID: 38834812; PMCID: PMC11362322; DOI: 10.3758/s13428-024-02443-y
Abstract
Video recordings accurately capture facial expression movements; however, they are difficult for face perception researchers to standardise and manipulate. For this reason, dynamic morphs of photographs are often used, despite their lack of naturalistic facial motion. This study aimed to investigate how humans perceive emotions from faces using real videos and two approaches to artificially generating dynamic expressions: dynamic morphs and AI-synthesised deepfakes. Our participants perceived dynamic morphed expressions as less intense than videos (all emotions) and deepfakes (fearful, happy, sad). Videos and deepfakes were perceived similarly. Additionally, participants perceived morphed happiness and sadness, but not morphed anger or fear, as less genuine than the same emotions in other formats. Our findings support previous research indicating that social responses to morphed emotions are not representative of those to video recordings. They also suggest that deepfakes may offer a more suitable standardised stimulus type than morphs. Additionally, qualitative data were collected from participants and analysed using ChatGPT, a large language model. ChatGPT successfully identified themes in the data consistent with those identified by an independent human researcher. According to this analysis, our participants perceived dynamic morphs as less natural than videos and deepfakes. That participants perceived deepfakes and videos similarly suggests that deepfakes effectively replicate natural facial movements, making them a promising alternative for face perception research. The study contributes to the growing body of research exploring the usefulness of generative artificial intelligence for advancing the study of human perception.
2. Bennetts RJ, Gregory NJ, Bate S. Both identity and non-identity face perception tasks predict developmental prosopagnosia and face recognition ability. Sci Rep 2024; 14:6626. PMID: 38503841; PMCID: PMC10951298; DOI: 10.1038/s41598-024-57176-x
Abstract
Developmental prosopagnosia (DP) is characterised by deficits in face identification. However, there is debate about whether these deficits are primarily perceptual, and whether they extend to other face processing tasks (e.g., identifying emotion, age, and gender; detecting faces in scenes). In this study, 30 participants with DP and 75 controls completed a battery of eight tasks assessing four domains of face perception (identity; emotion; age and gender; face detection). The DP group performed worse than the control group on both identity perception tasks and on one task from each of the other domains. Both identity perception tests uniquely predicted DP/control group membership, as well as performance on two measures of face memory. These findings suggest that deficits in DP may arise from issues with face perception. Some non-identity tasks also predicted DP/control group membership and face memory, even when face identity perception was accounted for. Gender perception and speed of face detection consistently predicted unique variance in group membership and face memory; several other tasks were associated with only some measures of face recognition ability. These findings indicate that face perception deficits in DP may extend beyond identity perception. However, the associations between tasks may also reflect subtle aspects of task demands or stimuli.
Affiliation(s)
- Rachel J Bennetts
- Division of Psychology, College of Health, Medicine and Life Sciences, Brunel University London, Kingston Lane, Uxbridge, UB8 3PH, UK.
- Sarah Bate
- Department of Psychology, Bournemouth University, Poole, UK
3. Momen A, Hugenberg K, Wiese E. Social perception of robots is shaped by beliefs about their minds. Sci Rep 2024; 14:5459. PMID: 38443378; PMCID: PMC10914716; DOI: 10.1038/s41598-024-53187-w
Abstract
Roboticists often imbue robots with human-like physical features to increase the likelihood that they are afforded benefits known to be associated with anthropomorphism. Similarly, deepfakes often employ computer-generated human faces to attempt to create convincing simulacra of actual humans. In the present work, we investigate whether perceivers' higher-order beliefs about faces (i.e., whether they represent actual people or android robots) modulate the extent to which perceivers deploy face-typical processing for social stimuli. Past work has shown that perceivers' recognition performance is more impacted by the inversion of faces than of objects, thus highlighting that faces are processed holistically (i.e., as a Gestalt), whereas objects engage feature-based processing. Here, we use an inversion task to examine whether face-typical processing is attenuated when actual human faces are labeled as non-human (i.e., android robot). This allows us to employ a task shown to be differentially sensitive to social (i.e., faces) and non-social (i.e., objects) stimuli while also randomly assigning face stimuli to seem real or fake. The results show smaller inversion effects when face stimuli were believed to represent android robots than when they were believed to represent humans. This suggests that robots strongly resembling humans may still fail to be perceived as "social" due to pre-existing beliefs about their mechanistic nature. Theoretical and practical implications of this research are discussed.
Affiliation(s)
- Ali Momen
- United States Air Force Academy, Colorado Springs, CO, USA.
- George Mason University, Fairfax, VA, USA.
- Eva Wiese
- George Mason University, Fairfax, VA, USA.
- Berlin Institute of Technology, Berlin, Germany.
4. González-Rodríguez A, García-Pérez Á, Godoy-Giménez M, Sayans-Jiménez P, Cañadas F, Estévez AF. The role of the differential outcomes procedure and schizotypy in the recognition of dynamic facial expressions of emotions. Sci Rep 2024; 14:2322. PMID: 38282111; PMCID: PMC10822869; DOI: 10.1038/s41598-024-52893-9
Abstract
Emotional facial expression recognition is a key ability for adequate social functioning. The current study aims to test whether the differential outcomes procedure (DOP) may improve the recognition of dynamic facial expressions of emotions, and to further explore whether schizotypal personality traits have any effect on performance. A total of 183 undergraduate students completed a task in which a face morphed from a neutral expression to one of the six basic emotions at full intensity over 10 s. Participants had to press the spacebar as soon as they identified the emotion and then choose which emotion had appeared. In the first block, participants received no outcomes. In the second block, one group received specific outcomes associated with each emotion (DOP group), while another group received non-differential outcomes after correct responses (NOP group). Employing generalized linear models (GLMs) and Bayesian inference, we estimated different parameters to answer our research goals. Schizotypal personality traits did not seem to affect dynamic emotional facial expression recognition. Participants in the DOP group were less likely to respond incorrectly to faces showing fear and surprise at lower intensity levels. This may suggest that the DOP could lead to better identification of the main features that differentiate each facial expression of emotion.
Affiliation(s)
- Antonio González-Rodríguez
- Department of Psychology, University of Almería, Ctra Sacramento S/N, La Cañada de San Urbano, CP: 04120, Almería, Spain
- CEINSA Health Research Centre, University of Almería, Almería, Spain
- Ángel García-Pérez
- Department of Psychology, University of Almería, Ctra Sacramento S/N, La Cañada de San Urbano, CP: 04120, Almería, Spain
- CIBIS Research Centre, University of Almería, Almería, Spain
- Marta Godoy-Giménez
- Department of Psychology, University of Almería, Ctra Sacramento S/N, La Cañada de San Urbano, CP: 04120, Almería, Spain
- CEINSA Health Research Centre, University of Almería, Almería, Spain
- Pablo Sayans-Jiménez
- Department of Psychology, University of Almería, Ctra Sacramento S/N, La Cañada de San Urbano, CP: 04120, Almería, Spain
- CEINSA Health Research Centre, University of Almería, Almería, Spain
- Fernando Cañadas
- Department of Psychology, University of Almería, Ctra Sacramento S/N, La Cañada de San Urbano, CP: 04120, Almería, Spain
- CIBIS Research Centre, University of Almería, Almería, Spain
- Angeles F Estévez
- Department of Psychology, University of Almería, Ctra Sacramento S/N, La Cañada de San Urbano, CP: 04120, Almería, Spain.
- CIBIS Research Centre, University of Almería, Almería, Spain.
5. Diel A, Sato W, Hsu CT, Minato T. Asynchrony enhances uncanniness in human, android, and virtual dynamic facial expressions. BMC Res Notes 2023; 16:368. PMID: 38082445; PMCID: PMC10714471; DOI: 10.1186/s13104-023-06648-w
Abstract
OBJECTIVE Uncanniness plays a vital role in interactions with humans and artificial agents. Previous studies have shown that uncanniness is caused by a heightened sensitivity to deviation or atypicality in specialized categories, such as faces or facial expressions, marked by configural processing. We hypothesized that asynchrony, understood as a temporal deviation in facial expression, could make facial expressions appear uncanny. We also hypothesized that this effect could be disrupted through inversion. RESULTS Sixty-four participants rated the uncanniness of synchronous or asynchronous dynamic facial emotion expressions of human, android, or computer-generated (CG) actors, presented either upright or inverted. Asynchronous expressions were rated as more uncanny than synchronous ones for all upright expressions except CG angry expressions. Inverted presentations produced less evident asynchrony effects than upright ones for human angry and android happy expressions. These results suggest that asynchrony can cause dynamic expressions to appear uncanny, an effect that is related to configural processing but differs across agents.
Affiliation(s)
- Alexander Diel
- Cardiff University School of Psychology, Cardiff, UK.
- RIKEN Institute, Kyoto, Japan.
- Clinic for Psychosomatic Medicine and Psychotherapy, LVR University Hospital Essen, University of Duisburg-Essen, 45147, Essen, Germany.
- Center for Translational Neuro- and Behavioral Sciences (C-TNBS), University of Duisburg-Essen, 45147, Essen, Germany.
6. Sano T, Kawabata H. A computational approach to investigating facial attractiveness factors using geometric morphometric analysis and deep learning. Sci Rep 2023; 13:19797. PMID: 37957245; PMCID: PMC10643417; DOI: 10.1038/s41598-023-47084-x
Abstract
Numerous studies discuss the features that constitute facial attractiveness. In recent years, computational research has received attention because it can examine facial features without relying on prior research hypotheses. This approach uses many face stimuli and models the relationship between physical facial features and attractiveness using methods such as geometric morphometrics and deep learning. However, studies using each method have been conducted independently and have technical and data-related limitations. It is also difficult to identify the factors of actual attractiveness perception using computational methods alone. In this study, we examined morphometric features important for attractiveness perception through geometric morphometrics and impression evaluation. Furthermore, we used deep learning to analyze important facial features comprehensively. The results showed that eye-related areas are essential in determining attractiveness and that the impact of shape and skin information on attractiveness differs across racial groups. The approach used in this study will contribute toward understanding the universal and diverse features of facial attractiveness, extending psychological findings and engineering applications.
Affiliation(s)
- Takanori Sano
- Graduate School of Sociology, Keio University, 2-15-45 Mita, Minato-ku, Tokyo, 108-8345, Japan.
- Hideaki Kawabata
- Graduate School of Sociology, Keio University, 2-15-45 Mita, Minato-ku, Tokyo, 108-8345, Japan
- Faculty of Literature, Keio University, 2-15-45 Mita, Minato-ku, Tokyo, 108-8345, Japan
7. Diel A, Sato W, Hsu CT, Minato T. Differences in configural processing for human versus android dynamic facial expressions. Sci Rep 2023; 13:16952. PMID: 37805572; PMCID: PMC10560218; DOI: 10.1038/s41598-023-44140-4
Abstract
Humanlike androids can function as social agents in social situations and in experimental research. While some androids can imitate facial emotion expressions, it is unclear whether their expressions tap the same processing mechanisms used for human expressions, such as configural processing. In this study, the effects of two configuration manipulations, global inversion and asynchrony between facial features, were compared in android and human dynamic emotion expressions. Seventy-five participants provided (1) emotion recognition ratings for angry and happy expressions and (2) arousal and valence ratings for upright or inverted, synchronous or asynchronous, android or human dynamic emotion expressions. Asynchrony in dynamic expressions significantly decreased all ratings (except valence in angry expressions) for all human expressions, but did not affect android expressions. Inversion did not affect any measures regardless of agent type. These results suggest that dynamic facial expressions are processed in a synchrony-based configural manner for humans, but not for androids.
Affiliation(s)
- Alexander Diel
- RIKEN Information R&D and Strategy Headquarters, Guardian Robot Project, Kyoto, Japan.
- School of Psychology, Cardiff University, Cardiff, UK.
- Wataru Sato
- RIKEN Information R&D and Strategy Headquarters, Guardian Robot Project, Kyoto, Japan
- Chun-Ting Hsu
- RIKEN Information R&D and Strategy Headquarters, Guardian Robot Project, Kyoto, Japan
- Takashi Minato
- RIKEN Information R&D and Strategy Headquarters, Guardian Robot Project, Kyoto, Japan
8. Diel A, Sato W, Hsu CT, Minato T. The inversion effect on the cubic humanness-uncanniness relation in humanlike agents. Front Psychol 2023; 14:1222279. PMID: 37705949; PMCID: PMC10497116; DOI: 10.3389/fpsyg.2023.1222279
Abstract
The uncanny valley describes the typically nonlinear relation between the esthetic appeal of artificial entities and their human likeness. The effect has been attributed to specialized (configural) processing that increases sensitivity to deviations from human norms. We investigate this effect in computer-generated, humanlike android and human faces using dynamic facial expressions. Angry and happy expressions with varying degrees of synchrony were presented upright and inverted and rated on their eeriness, strangeness, and human likeness. A sigmoidal function of human likeness and uncanniness ("uncanny slope") was found for upright expressions and a linear relation for inverted faces. While the function is not indicative of an uncanny valley, the results support the view that configural processing moderates the effect of human likeness on uncanniness and extend its role to dynamic facial expressions.
Affiliation(s)
- Alexander Diel
- Guardian Robot Project, RIKEN, Kyoto, Japan
- Cardiff University School of Psychology, Cardiff University, Cardiff, United Kingdom
9. Tarchi P, Lanini MC, Frassineti L, Lanatà A. Real and Deepfake Face Recognition: An EEG Study on Cognitive and Emotive Implications. Brain Sci 2023; 13:1233. PMID: 37759834; PMCID: PMC10526392; DOI: 10.3390/brainsci13091233
Abstract
Accurate face recognition underpins the human brain's role in face processing (FP) and in decision making for social interactions. However, the prevalence of deepfakes (AI-generated images) poses challenges in discerning real from synthetic identities. This study investigated healthy individuals' cognitive and emotional engagement in a visual discrimination task involving real and deepfake human faces expressing positive, negative, or neutral emotions. Electroencephalographic (EEG) data were collected from 23 healthy participants using a 21-channel dry-EEG headset; power spectrum and event-related potential (ERP) analyses were performed. Results revealed statistically significant activations in specific brain areas depending on the authenticity and emotional content of the stimuli. Power spectrum analysis highlighted a right-hemisphere predominance in the theta, alpha, high-beta, and gamma bands for real faces, while deepfakes mainly affected the frontal and occipital areas in the delta band. ERP analysis hinted at the possibility of discriminating between real and synthetic faces, as N250 (200-300 ms after stimulus onset) peak latency decreased when observing real faces in the right frontal (LF) and left temporo-occipital (LTO) areas, and also between emotions, as P100 (90-140 ms) peak amplitude was higher in the right temporo-occipital (RTO) area for happy faces than for neutral and sad ones.
Affiliation(s)
- Pietro Tarchi
- Department of Information Engineering, University of Florence, 50139 Florence, Italy
- Maria Chiara Lanini
- Department of Information Engineering, University of Florence, 50139 Florence, Italy
- Lorenzo Frassineti
- Department of Information Engineering, University of Florence, 50139 Florence, Italy
- Department of Information Engineering, University of Pisa, 56122 Pisa, Italy
- Antonio Lanatà
- Department of Information Engineering, University of Florence, 50139 Florence, Italy
10. Treal T, Jackson PL, Meugnot A. Biological postural oscillations during facial expression of pain in virtual characters modulate early and late ERP components associated with empathy: A pilot study. Heliyon 2023; 9:e18161. PMID: 37560681; PMCID: PMC10407205; DOI: 10.1016/j.heliyon.2023.e18161
Abstract
There is a surge in the use of virtual characters in the cognitive sciences. However, their behavioural realism remains to be perfected in order to trigger more spontaneous and socially expected reactions in users. It was recently shown that biological postural oscillations (idle motion) are a key ingredient in enhancing the empathic response to a virtual character's facial expression of pain. The objective of this study was to examine, using electroencephalography, whether idle motion would modulate the neural response associated with empathy when viewing a pain-expressing virtual character. Twenty healthy young adults were shown video clips of a virtual character displaying a facial expression of pain while its body was either static (Still condition) or animated with pre-recorded human postural oscillations (Idle condition). Participants rated the virtual character's facial expression of pain as significantly more intense in the Idle condition than in the Still condition. Both the early (N2-N3) and the late (rLPP) event-related potentials (ERPs) associated with distinct dimensions of empathy, affective resonance and perspective-taking respectively, were greater in the Idle condition than in the Still condition. These findings confirm the potential of idle motion to increase empathy for pain expressed by virtual characters. They are discussed in light of contemporary empathy models in relation to human-machine interactions.
Affiliation(s)
- Thomas Treal
- Université Paris-Saclay CIAMS, 91405, Orsay, France
- CIAMS, Université d'Orléans, 45067, Orléans, France
- Philip L. Jackson
- École de Psychologie, Université Laval, Québec, Canada
- Centre Interdisciplinaire de Recherche en Réadaptation et Intégration Sociale (CIRRIS), Québec, Canada
- CERVO Research Center, Québec, Canada
- Aurore Meugnot
- Université Paris-Saclay CIAMS, 91405, Orsay, France
- CIAMS, Université d'Orléans, 45067, Orléans, France
11. Tarchi P, Cala F, Frassineti L, Lanata A. Electroencephalographic Correlates in Synthetic and Real Emotional Face Stimulation. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38082982; DOI: 10.1109/embc40787.2023.10340895
Abstract
This work reports on electroencephalographic (EEG) correlates of the cognitive and emotional processes involved in discriminating synthetic from real face stimuli. Human perception of manipulated data has been addressed in the literature from several perspectives. Researchers have investigated how the use of deepfakes alters people's ability in face-processing tasks, such as face recognition. Although recent studies showed that humans, on average, are still able to correctly recognize synthetic faces, this study investigates whether those findings still hold given the latest advancements in AI-based synthetic image creation. Specifically, 18-channel EEG signals from 21 healthy subjects were analyzed during a visual experiment in which synthetic and real emotional stimuli were administered. Consistent with recent literature, participants were able to discriminate real faces from synthetic ones, correctly classifying about 77% of all images. Preliminary, encouraging results showed statistically significant differences in brain activation for both stimulus classification (synthetic vs. real) and emotional response.
12. Nussbaum C, Pöhlmann M, Kreysa H, Schweinberger SR. Perceived naturalness of emotional voice morphs. Cogn Emot 2023; 37:731-747. PMID: 37104118; DOI: 10.1080/02699931.2023.2200920
Abstract
Research into voice perception benefits from manipulation software that provides experimental control over the acoustic expression of social signals such as vocal emotions. Today, parameter-specific voice morphing allows precise control of the emotional quality expressed by single vocal parameters, such as fundamental frequency (F0) and timbre. However, potential side effects, in particular reduced naturalness, could limit the ecological validity of speech stimuli. To address this for the domain of emotion perception, we collected ratings of perceived naturalness and emotionality for voice morphs expressing different emotions through F0 or timbre only. In two experiments, we compared two morphing approaches, using either neutral voices or emotional averages as emotionally non-informative reference stimuli. As expected, parameter-specific voice morphing reduced perceived naturalness. However, the perceived naturalness of F0 and timbre morphs was comparable when averaged emotions served as the reference, potentially making this approach more suitable for future research. Crucially, there was no relationship between ratings of emotionality and naturalness, suggesting that the perception of emotion was not substantially affected by reduced voice naturalness. We hold that while these findings advocate parameter-specific voice morphing as a suitable tool for research on vocal emotion perception, great care should be taken in producing ecologically valid stimuli.
Affiliation(s)
- Christine Nussbaum
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany
- Voice Research Unit, Friedrich Schiller University, Jena, Germany
- Manuel Pöhlmann
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany
- Helene Kreysa
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany
- Voice Research Unit, Friedrich Schiller University, Jena, Germany
- Stefan R Schweinberger
- Department for General Psychology and Cognitive Neuroscience, Friedrich Schiller University Jena, Germany
- Voice Research Unit, Friedrich Schiller University, Jena, Germany
- Swiss Center for Affective Sciences, University of Geneva, Switzerland
13. Effects of social context on facial trustworthiness judgments. Curr Psychol 2022. DOI: 10.1007/s12144-022-04143-2
14. Vaitonytė J, Alimardani M, Louwerse MM. Scoping review of the neural evidence on the uncanny valley. Comput Hum Behav Rep 2022. DOI: 10.1016/j.chbr.2022.100263
15. Vaitonytė J, Alimardani M, Louwerse MM. Corneal reflections and skin contrast yield better memory of human and virtual faces. Cogn Res Princ Implic 2022; 7:94. PMID: 36258062; PMCID: PMC9579222; DOI: 10.1186/s41235-022-00445-y
Abstract
Virtual faces have been found to be rated less human-like and remembered worse than photographic images of humans. What it is about virtual faces that yields reduced memory has so far remained unclear. The current study investigated face memory for virtual agent faces and human faces, real and manipulated, considering two factors of predicted influence: corneal reflections and skin contrast. Corneal reflections are the bright points in each eye that occur when ambient light reflects from the surface of the cornea. Skin contrast refers to the degree to which the skin surface is rough versus smooth. We conducted two memory experiments, one with high-quality virtual agent faces (Experiment 1) and the other with photographs of human faces that were manipulated (Experiment 2). Experiment 1 showed better memory for virtual faces with increased corneal reflections and skin contrast (rougher rather than smoother skin). Experiment 2 replicated these findings, showing that removing the corneal reflections and smoothing the skin reduced recognition memory for the manipulated faces, with a stronger effect exerted by the eyes than the skin. This study highlights specific features of the eyes and skin that can help explain memory discrepancies between real and virtual faces and, in turn, elucidates the factors that play a role in the cognitive processing of faces.
Affiliation(s)
- Julija Vaitonytė
- Department of Cognitive Science and Artificial Intelligence, Tilburg University, Dante Building D 134, Warandelaan 2, 5037 AB Tilburg, The Netherlands
- Maryam Alimardani
- Department of Cognitive Science and Artificial Intelligence, Tilburg University, Dante Building D 134, Warandelaan 2, 5037 AB Tilburg, The Netherlands
- Max M. Louwerse
- Department of Cognitive Science and Artificial Intelligence, Tilburg University, Dante Building D 134, Warandelaan 2, 5037 AB Tilburg, The Netherlands
16. Lee J, Penrod SD. Three-level meta-analysis of the other-race bias in facial identification. Appl Cogn Psychol 2022. DOI: 10.1002/acp.3997
Affiliation(s)
- Jungwon Lee
- Department of Psychology, Hallym University, Chuncheon, South Korea
- Steven D. Penrod
- Department of Psychology, John Jay College of Criminal Justice, New York, USA
17. Silvestri V, Arioli M, Baccolo E, Macchi Cassia V. Sensitivity to trustworthiness cues in own- and other-race faces: The role of spatial frequency information. PLoS One 2022; 17:e0272256. PMID: 36067183; PMCID: PMC9447876; DOI: 10.1371/journal.pone.0272256
Abstract
Research has shown that adults are better at processing faces of the most represented ethnic group in their social environment compared to faces from other ethnicities, and that they rely more on holistic/configural information for identity discrimination in own-race than other-race faces. Here, we applied a spatial filtering approach to the investigation of trustworthiness perception to explore whether the information on which trustworthiness judgments are based differs according to face race. European participants (N = 165) performed an online-delivered pairwise preference task in which they were asked to select the face they would trust more within pairs randomly selected from validated White and Asian broad-spectrum, low-pass filter and high-pass filter trustworthiness continua. Results confirmed earlier demonstrations that trustworthiness perception generalizes across face ethnicity, but discrimination of trustworthiness intensity relied more heavily on the low spatial frequency (LSF) content of the images for own-race faces compared to other-race faces. Results are discussed in light of previous work on emotion discrimination and the hypothesis of overlapping perceptual mechanisms subtending social perception of faces.
Affiliation(s)
- Valentina Silvestri
- Department of Psychology, University of Milan-Bicocca, Milan, Italy
- Martina Arioli
- Department of Psychology, University of Milan-Bicocca, Milan, Italy
- Elisa Baccolo
- Department of Psychology, University of Milan-Bicocca, Milan, Italy
18
Dawel A, Miller EJ, Horsburgh A, Ford P. A systematic survey of face stimuli used in psychological research 2000-2020. Behav Res Methods 2022; 54:1889-1901. [PMID: 34731426 DOI: 10.3758/s13428-021-01705-3] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 09/07/2021] [Indexed: 12/16/2022]
Abstract
For decades, psychology has relied on highly standardized images to understand how people respond to faces. Many of these stimuli are rigorously generated and supported by excellent normative data; as such, they have played an important role in the development of face science. However, there is now clear evidence that testing with ambient images (i.e., naturalistic images "in the wild") and including expressions that are spontaneous can lead to new and important insights. To precisely quantify the extent to which our current knowledge base has relied on standardized and posed stimuli, we systematically surveyed the face stimuli used in 12 key journals in this field across 2000-2020 (N = 3374 articles). Although a small number of posed expression databases continue to dominate the literature, the use of spontaneous expressions seems to be increasing. However, there has been no increase in the use of ambient or dynamic stimuli over time. The vast majority of articles have used highly standardized and nonmoving pictures of faces. An emerging trend is that virtual faces are being used as stand-ins for human faces in research. Overall, the results of the present survey highlight that there has been a significant imbalance in favor of standardized face stimuli. We argue that psychology would benefit from a more balanced approach because ambient and spontaneous stimuli have much to offer. We advocate a cognitive ethological approach that involves studying face processing in natural settings as well as the lab, incorporating more stimuli from "the wild".
Affiliation(s)
- Amy Dawel
- Research School of Psychology (Building 39), The Australian National University, Canberra, ACT 2600, Australia
- Elizabeth J Miller
- Research School of Psychology (Building 39), The Australian National University, Canberra, ACT 2600, Australia
- Annabel Horsburgh
- Research School of Psychology (Building 39), The Australian National University, Canberra, ACT 2600, Australia
- Patrice Ford
- Research School of Psychology (Building 39), The Australian National University, Canberra, ACT 2600, Australia
19
Moshel ML, Robinson AK, Carlson TA, Grootswagers T. Are you for real? Decoding realistic AI-generated faces from neural activity. Vision Res 2022; 199:108079. [PMID: 35749833 DOI: 10.1016/j.visres.2022.108079] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2021] [Revised: 05/30/2022] [Accepted: 06/06/2022] [Indexed: 11/17/2022]
Abstract
Can we trust our eyes? Until recently, we rarely had to question whether what we see is indeed what exists, but this is changing. Artificial neural networks can now generate realistic images that challenge our perception of what is real. This new reality can have significant implications for cybersecurity, counterfeiting, fake news, and border security. We investigated how the human brain encodes and interprets realistic artificially generated images using behaviour and brain imaging. We found that we could reliably decode AI generated faces using people's neural activity. However, while at a group level people performed near chance classifying real and realistic fakes, participants tended to interchange the labels, classifying real faces as realistic fakes and vice versa. Understanding this difference between brain and behavioural responses may be key in determining the 'real' in our new reality. Stimuli, code, and data for this study can be found at https://osf.io/n2z73/.
Affiliation(s)
- Michoel L Moshel
- School of Psychology, University of Sydney, NSW, Australia; School of Psychology, Macquarie University, NSW, Australia
- Amanda K Robinson
- School of Psychology, University of Sydney, NSW, Australia; Queensland Brain Institute, The University of Queensland, QLD, Australia
- Tijl Grootswagers
- School of Psychology, University of Sydney, NSW, Australia; The MARCS Institute for Brain, Behaviour and Development, Western Sydney University, NSW, Australia
20
Shimane D, Matsui H, Itoh Y. False memory for faces is produced by the Deese-Roediger-McDermott paradigm based on the morphological characteristics of a list. CURRENT PSYCHOLOGY 2022. [DOI: 10.1007/s12144-020-00830-0] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
21
Diel A, Lewis M. Familiarity, orientation, and realism increase face uncanniness by sensitizing to facial distortions. J Vis 2022; 22:14. [PMID: 35344022 PMCID: PMC8982630 DOI: 10.1167/jov.22.4.14] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Abstract
The uncanny valley predicts aversive reactions toward near-humanlike entities. Greater uncanniness is elicited by distortions in realistic than unrealistic faces, possibly due to familiarity. Experiment 1 investigated how familiarity and inversion affect uncanniness of facial distortions and the ability to detect differences between the distorted variants of the same face (distortion sensitivity). Familiar or unfamiliar celebrity faces were incrementally distorted and presented either upright or inverted. Uncanniness ratings increased across the distortion levels, and were stronger for familiar and upright faces. Distortion sensitivity increased with increasing distortion difference levels, again stronger for familiar and upright faces. Experiment 2 investigated how face realism, familiarity, and face orientation interacted for the increase of uncanniness across distortions. Realism increased the increase of uncanniness across the distortion levels, further enhanced by upright orientation and familiarity. The findings show that familiarity, upright orientation, and high face realism increase the sensitivity of uncanniness, likely by increasing distortion sensitivity. Finally, a moderated linear function of face realism and deviation level could explain the uncanniness of stimuli better than a quadratic function. A re-interpretation of the uncanny valley as sensitivity toward deviations from familiarized patterns is discussed.
Affiliation(s)
- Michael Lewis
- School of Psychology, Cardiff University, Cardiff, UK
22
A multilevel Bayesian meta-analysis of the body inversion effect: Evaluating controversies over headless and sexualized bodies. Psychon Bull Rev 2022; 29:1558-1593. [PMID: 35230674 DOI: 10.3758/s13423-022-02067-3] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 01/27/2022] [Indexed: 11/08/2022]
Abstract
Face and body perception rely on specialized processing mechanisms to interpret social information efficiently. The body inversion effect (BIE) refers to an inversion effect for bodies, such that recognition of bodies is impaired by inversion. The BIE, like the face inversion effect (FIE), is particularly important because a disproportionate BIE relative to inversion effects for objects could be interpreted in much the same way as the disproportionate FIE has often been characterized; that is, as evidence of specialized, configural processing. However, research supporting the BIE is marked by methodological heterogeneity and mixed findings. Our multilevel Bayesian meta-analysis addresses inconsistencies in the literature by pooling data from numerous studies to estimate the magnitude of the BIE across various methodological and stimulus properties. We included 180 effect sizes from 41 empirical articles representing data from 2,274 participants. Overall, we found that the BIE was moderate-large in magnitude (Hedges' g = 0.75). Importantly, the inversion effect was larger for bodies than objects (b = 0.42); however, the inversion effect for faces was larger than for bodies (b = 0.34). We tested the role of discrimination dimension, stimulus type, face/head inclusion, stimulus sexualization, and sexualized stimulus sex as moderators of the BIE. We found that the BIE was moderated by discrimination dimension, stimulus type, stimulus sexualization, and sexualized stimulus sex. By synthesizing the existing literature, we provide a better theoretical understanding of how underlying visual processing mechanisms may differ for different types of social information (i.e., bodies vs. faces).
23
Hacker CM, Biederman I, Zhu T, Nelken M, Meschke EX. The sizable difficulty in matching unfamiliar faces differing only moderately in orientation in depth is a function of image dissimilarity. Vision Res 2022; 194:107959. [PMID: 35182894 DOI: 10.1016/j.visres.2021.09.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/07/2020] [Revised: 09/20/2021] [Accepted: 09/24/2021] [Indexed: 10/19/2022]
Abstract
Attempting to match unfamiliar, highly similar faces at moderate differences in orientation in depth is surprisingly difficult. No neurocomputational account of these costs that addressed the representation of faces by which a face-similarity metric can be derived has been offered. A metric specifying the similarity of the to-be-distinguished faces is required as the rotation costs will be a function of the difficulty in distinguishing the faces. Consequently, rotation costs have typically been described in terms of angle of disparity, rather than the dissimilarity of the faces produced by the rotation. We assessed the effects of orientation disparity in a match-to-sample paradigm of a simultaneous presentation of a triangular display of three faces. Two lower test faces, a matching face and a foil, were always at the same orientation and differed by 0° to 20° from the sample on top. The similarity of the images was scaled by a model based on simple cell tuning, modeled as Gabor wavelets, that correlates almost perfectly with psychophysical similarity. Two measures of face similarity, with approximately additive effects on reaction times, accounted for matching performance: a) the decrease in similarity between the images of the matching and sample faces produced by increases in their orientation disparity, and b) the similarity between the matching face and the selection of a particular foil. The 20° orientation disparity was sufficient to yield a sizeable 301 msec increase in reaction time. An implication of the results is that the activity in V1 produced by viewing a face is fed forward to areas responsible for the individuation of that face.
Affiliation(s)
- Irving Biederman
- Program in Neuroscience, University of Southern California, USA; Department of Psychology, University of Southern California, USA
- Tianyi Zhu
- Department of Psychology, University of Southern California, USA
- Miles Nelken
- Program in Neuroscience, University of Southern California, USA
- Emily X Meschke
- Program in Neuroscience, University of Southern California, USA
24
Identifying criminals: No biasing effect of criminal context on recalled threat. Mem Cognit 2022; 50:1735-1755. [PMID: 35025077 PMCID: PMC9768013 DOI: 10.3758/s13421-021-01268-w] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 12/17/2021] [Indexed: 12/30/2022]
Abstract
To date, it is still unclear whether there is a systematic pattern in the errors made in eyewitness recall and whether certain features of a person are more likely to lead to false identification. Moreover, we also do not know the extent of systematic errors impacting identification of a person from their body rather than solely their face. To address this, based on the contextual model of eyewitness identification (CMEI; Osborne & Davies, 2014, Applied Cognitive Psychology, 28[3], 392-402), we hypothesized that having framed a target as a perpetrator of a violent crime, participants would recall that target person as appearing more like a stereotypical criminal (i.e., more threatening). In three separate experiments, participants were first presented with either no frame, a neutral frame, or a criminal frame (perpetrators of a violent crime) accompanying a target (either a face or body). Participants were then asked to identify the original target from a selection of people that varied in facial threat or body musculature. Contrary to our hypotheses, we found no evidence of bias. However, identification accuracy was highest for the most threatening target bodies high in musculature, as well as bodies paired with detailed neutral contextual information. Overall, these findings suggest that while no systematic bias exists in the recall of criminal bodies, the nature of the body itself and the context in which it is presented can significantly impact identification accuracy.
25
Witham C, Foo YZ, Jeffery L, Burton NS, Rhodes G. Anger and fearful expressions influence perceptions of physical strength: Testing the signalling functions of emotional facial expressions with a visual aftereffects paradigm. EVOL HUM BEHAV 2021. [DOI: 10.1016/j.evolhumbehav.2021.05.005] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
26
Baccolo E, Quadrelli E, Macchi Cassia V. Neural sensitivity to trustworthiness cues from realistic face images is associated with temperament: An electrophysiological study with 6-month-old infants. Soc Neurosci 2021; 16:668-683. [PMID: 34469270 DOI: 10.1080/17470919.2021.1976271] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Discriminating facial cues to trustworthiness is a fundamental social skill whose developmental origins are still debated. Prior investigations used computer-generated faces, which might fail to reflect infants' face processing expertise. Here, Event-Related Potentials (ERPs) were recorded in Caucasian adults (N = 20, 7 males, M age = 25.25 years) and 6-month-old infants (N = 21, 10 males) in response to variations in trustworthiness intensity expressed by morphed images of realistic female faces associated with explicit trustworthiness judgments (Study 1). Preferential looking behavior in response to the same faces was also investigated in infants (N = 27, 11 males) (Study 2). ERP results showed that both age groups distinguished subtle stimulus differences, and that interindividual variability in neural sensitivity to these differences were associated with infants' temperament. No signs of stimulus differentiation emerged from infants' looking behavior. These findings contribute to the understanding of the developmental origins of human sensitivity to social cues from faces by extending prior evidence to more ecological stimuli and by unraveling the mediating role of temperament.
Affiliation(s)
- Elisa Baccolo
- Department of Psychology, Università degli Studi di Milano-Bicocca, Milano, Italy
- Ermanno Quadrelli
- Department of Psychology, Università degli Studi di Milano-Bicocca, Milano, Italy; NeuroMI, Milan Center for Neuroscience, Milano, Italy
- Viola Macchi Cassia
- Department of Psychology, Università degli Studi di Milano-Bicocca, Milano, Italy; NeuroMI, Milan Center for Neuroscience, Milano, Italy
27
Atwood S, Axt JR. Assessing implicit attitudes about androgyny. JOURNAL OF EXPERIMENTAL SOCIAL PSYCHOLOGY 2021. [DOI: 10.1016/j.jesp.2021.104162] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022]
28
Kegel LC, Brugger P, Frühholz S, Grunwald T, Hilfiker P, Kohnen O, Loertscher ML, Mersch D, Rey A, Sollfrank T, Steiger BK, Sternagel J, Weber M, Jokeit H. Dynamic human and avatar facial expressions elicit differential brain responses. Soc Cogn Affect Neurosci 2021; 15:303-317. [PMID: 32232359 PMCID: PMC7235958 DOI: 10.1093/scan/nsaa039] [Citation(s) in RCA: 10] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/13/2019] [Revised: 03/02/2020] [Accepted: 03/24/2020] [Indexed: 01/25/2023] Open
Abstract
Computer-generated characters, so-called avatars, are widely used in advertising, entertainment, human–computer interaction or as research tools to investigate human emotion perception. However, brain responses to avatar and human faces have scarcely been studied to date. As such, it remains unclear whether dynamic facial expressions of avatars evoke different brain responses than dynamic facial expressions of humans. In this study, we designed anthropomorphic avatars animated with motion tracking and tested whether the human brain processes fearful and neutral expressions in human and avatar faces differently. Our fMRI results showed that fearful human expressions evoked stronger responses than fearful avatar expressions in the ventral anterior and posterior cingulate gyrus, the anterior insula, the anterior and posterior superior temporal sulcus, and the inferior frontal gyrus. Fearful expressions in human and avatar faces evoked similar responses in the amygdala. We did not find different responses to neutral human and avatar expressions. Our results highlight differences, but also similarities in the processing of fearful human expressions and fearful avatar expressions even if they are designed to be highly anthropomorphic and animated with motion tracking. This has important consequences for research using dynamic avatars, especially when processes are investigated that involve cortical and subcortical regions.
Affiliation(s)
- Lorena C Kegel
- Swiss Epilepsy Center, CH-8008 Zurich, Switzerland; Department of Psychology, University of Zurich, Zurich, Switzerland
- Peter Brugger
- Neuropsychology Unit, Valens Rehabilitation Centre, Valens, Switzerland; Department of Psychiatry, Psychotherapy, and Psychosomatics, University Hospital of Psychiatry Zurich, Zurich, Switzerland
- Sascha Frühholz
- Department of Psychology, University of Zurich, Zurich, Switzerland
- Oona Kohnen
- Swiss Epilepsy Center, CH-8008 Zurich, Switzerland
- Miriam L Loertscher
- Institute for the Performing Arts and Film, Zurich University of the Arts, Zurich, Switzerland; Department of Psychology, University of Bern, Bern, Switzerland
- Dieter Mersch
- Institute for Critical Theory, Zurich University of the Arts, Zurich, Switzerland
- Anton Rey
- Institute for the Performing Arts and Film, Zurich University of the Arts, Zurich, Switzerland
- Joerg Sternagel
- Institute for Critical Theory, Zurich University of the Arts, Zurich, Switzerland
- Michel Weber
- Institute for the Performing Arts and Film, Zurich University of the Arts, Zurich, Switzerland
- Hennric Jokeit
- Swiss Epilepsy Center, CH-8008 Zurich, Switzerland; Department of Psychology, University of Zurich, Zurich, Switzerland
29
Bayet L, Saville A, Balas B. Sensitivity to face animacy and inversion in childhood: Evidence from EEG data. Neuropsychologia 2021; 156:107838. [PMID: 33775702 DOI: 10.1016/j.neuropsychologia.2021.107838] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2019] [Revised: 12/28/2020] [Accepted: 03/22/2021] [Indexed: 11/25/2022]
Abstract
Adults exhibit relative behavioral difficulties in processing inanimate, artificial faces compared to real human faces, with implications for using artificial faces in research and designing artificial social agents. However, the developmental trajectory of inanimate face perception is unknown. To address this gap, we used electroencephalography to investigate inanimate faces processing in cross-sectional groups of 5-10-year-old children and adults. A face inversion manipulation was used to test whether face animacy processing relies on expert face processing strategies. Groups of 5-7-year-olds (N = 18), 8-10-year-olds (N = 18), and adults (N = 16) watched pictures of real or doll faces presented in an upright or inverted orientation. Analyses of event-related potentials revealed larger N170 amplitudes in response to doll faces, irrespective of age group or face orientation. Thus, the N170 is sensitive to face animacy by 5-7 years of age, but such sensitivity may not reflect high-level, expert face processing. Multivariate pattern analyses of the EEG signal additionally assessed whether animacy information could be reliably extracted during face processing. Face orientation, but not face animacy, could be reliably decoded from occipitotemporal channels in children and adults. Face animacy could be decoded from whole scalp channels in adults, but not children. Together, these results suggest that 5-10-year-old children exhibit some sensitivity to face animacy over occipitotemporal regions that is comparable to adults.
Affiliation(s)
- Laurie Bayet
- Department of Psychology and Center for Neuroscience and Behavior, American University, Washington, DC, USA
- Alyson Saville
- Department of Psychology, North Dakota State University, Fargo, ND, USA
- Benjamin Balas
- Department of Psychology, North Dakota State University, Fargo, ND, USA
30
Malloy TE, DiPietro C, DeSimone B, Curley C, Chau S, Silva C. Facial attractiveness, social status, and face recognition. VISUAL COGNITION 2021. [DOI: 10.1080/13506285.2021.1884630] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
31
Vaitonytė J, Blomsma PA, Alimardani M, Louwerse MM. Realism of the face lies in skin and eyes: Evidence from virtual and human agents. COMPUTERS IN HUMAN BEHAVIOR REPORTS 2021. [DOI: 10.1016/j.chbr.2021.100065] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/28/2022] Open
32
Olivares EI, Urraca AS, Lage-Castellanos A, Iglesias J. Different and common brain signals of altered neurocognitive mechanisms for unfamiliar face processing in acquired and developmental prosopagnosia. Cortex 2020; 134:92-113. [PMID: 33271437 DOI: 10.1016/j.cortex.2020.10.017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/15/2020] [Revised: 09/21/2020] [Accepted: 10/14/2020] [Indexed: 11/25/2022]
Abstract
Neuropsychological studies have shown that prosopagnosic individuals perceive face structure in an atypical way. This might preclude the formation of appropriate face representations and, consequently, hamper effective recognition. The present ERP study, in combination with Bayesian source reconstruction, investigates how information related to both external (E) and internal (I) features was processed by E.C. and I.P., suffering from acquired and developmental prosopagnosia, respectively. They carried out a face-feature matching task with new faces. E.C. showed poor performance and remarkable lack of early face-sensitive P1, N170 and P2 responses on right (damaged) posterior cortex. Although she presented the expected mismatch effect to target faces in the E-I sequence, it was of shorter duration than in Controls, and involved left parietal, right frontocentral and dorsofrontal regions, suggestive of reduced neural circuitry to process face configurations. In turn, I.P. performed efficiently but with a remarkable bias to give "match" responses. His face-sensitive potentials P1-N170 were comparable to those from Controls, however, he showed no subsequent P2 response and a mismatch effect only in the I-E sequence, reflecting activation confined to those regions that sustain typically the initial stages of face processing. Relevantly, neither of the prosopagnosics exhibited conspicuous P3 responses to features acting as primes, indicating that diagnostic information for constructing face representations could not be sufficiently attended nor deeply encoded. Our findings suggest a different locus for altered neurocognitive mechanisms in the face network in participants with different types of prosopagnosia, but common indicators of a deficient allocation of attentional resources for further recognition.
Affiliation(s)
- Ela I Olivares
- Department of Biological and Health Psychology, Faculty of Psychology, Universidad Autónoma de Madrid, Spain
- Ana S Urraca
- Centro Universitario Cardenal Cisneros, Alcalá de Henares, Madrid, Spain
- Agustín Lage-Castellanos
- Department of Neuroinformatics, Cuban Center for Neuroscience, Havana, Cuba; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Jaime Iglesias
- Department of Biological and Health Psychology, Faculty of Psychology, Universidad Autónoma de Madrid, Spain
33
A new data-driven mathematical model dissociates attractiveness from sexual dimorphism of human faces. Sci Rep 2020; 10:16588. [PMID: 33024137 PMCID: PMC7538911 DOI: 10.1038/s41598-020-73472-8] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/09/2020] [Accepted: 09/14/2020] [Indexed: 11/09/2022] Open
Abstract
Human facial attractiveness is evaluated by using multiple cues. Among others, sexual dimorphism (i.e. masculinity for male faces/femininity for female faces) is an influential factor of perceived attractiveness. Since facial attractiveness is judged by incorporating sexually dimorphic traits as well as other cues, it is theoretically possible to dissociate sexual dimorphism from facial attractiveness. This study tested this by using a data-driven mathematical modelling approach. We first analysed the correlation between perceived masculinity/femininity and attractiveness ratings for 400 computer-generated male and female faces (Experiment 1) and found positive correlations between perceived femininity and attractiveness for both male and female faces. Using these results, we manipulated a set of faces along the attractiveness dimension while controlling for sexual dimorphism by orthogonalisation with data-driven mathematical models (Experiment 2). Our results revealed that perceived attractiveness and sexual dimorphism are dissociable, suggesting that there are as yet unidentified facial cues other than sexual dimorphism that contribute to facial attractiveness. Future studies can investigate the true preference of sexual dimorphism or the genuine effects of attractiveness by using well-controlled facial stimuli like those that this study generated. The findings will be of benefit to the further understanding of what makes a face attractive.
34
Ho PK, Newell FN. Turning Heads: The Effects of Face View and Eye Gaze Direction on the Perceived Attractiveness of Expressive Faces. Perception 2020; 49:330-356. [PMID: 32063133 DOI: 10.1177/0301006620905216] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
We investigated whether the perceived attractiveness of expressive faces was influenced by head turn and eye gaze towards or away from the observer. In all experiments, happy faces were consistently rated as more attractive than angry faces. A head turn towards the observer, whereby a full-face view was shown, was associated with relatively higher attractiveness ratings when gaze direction was aligned with face view (Experiment 1). However, preference for full-face views of happy faces was not affected by gaze shifts towards or away from the observer (Experiment 2a). In Experiment 3, the relative duration of each face view (front-facing or averted at 15°) during a head turn away or towards the observer was manipulated. There was benefit on attractiveness ratings for happy faces shown for a longer duration from the front view, regardless of the direction of head turn. Our findings support previous studies indicating a preference for positive expressions on attractiveness judgements, which is further enhanced by the front views of faces, whether presented during a head turn or shown statically. In sum, our findings imply a complex interaction between cues of social attention, indicated by the view of the face shown, and reward on attractiveness judgements of unfamiliar faces.
Affiliation(s)
- Pik Ki Ho
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Ireland; Institute of Anatomy I, University Hospital Jena, Germany
- Fiona N Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Ireland
35
Baccolo E, Macchi Cassia V. Age-Related Differences in Sensitivity to Facial Trustworthiness: Perceptual Representation and the Role of Emotional Development. Child Dev 2019; 91:1529-1547. [PMID: 31769004 DOI: 10.1111/cdev.13340] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/01/2019] [Revised: 07/29/2019] [Accepted: 08/29/2019] [Indexed: 11/30/2022]
Abstract
The ability to discriminate social signals from faces is a fundamental component of human social interactions whose developmental origins are still debated. In this study, 5-year-old (N = 29) and 7-year-old children (N = 31) and adults (N = 34) made perceptual similarity and trustworthiness judgments on a set of female faces varying in level of expressed trustworthiness. All groups represented the perceived similarity of the faces as a function of trustworthiness intensity, but this representation became more fine-grained with development. Moreover, 5-year-olds' accuracy in choosing the more trustworthy face in a pair varied as a function of their scores on the Test of Emotion Comprehension, suggesting that the ability to perform face-to-trait inferences is related to the development of emotional understanding.
36
Alexi J, Dommisse K, Cleary D, Palermo R, Kloth N, Bell J. An Assessment of Computer-Generated Stimuli for Use in Studies of Body Size Estimation and Bias. Front Psychol 2019; 10:2390. [PMID: 31695661] [PMCID: PMC6817789] [DOI: 10.3389/fpsyg.2019.02390]
Abstract
Inaccurate body size judgments are associated with body image disturbances, a clinical feature of many eating disorders. Accordingly, body-related stimuli have become increasingly important in the study of estimation inaccuracies and body image disturbances. Technological advancements in the last decade have led to an increased use of computer-generated (CG) body stimuli in body image research. However, recent face perception research has suggested that CG face stimuli are not recognized as readily as real ones and may not fully tap facial processing mechanisms. The current study assessed the effectiveness of using CG stimuli in an established body size estimation task (the “bodyline” task). Specifically, we examined whether employing CG body stimuli alters body size judgments and associated estimation biases. One hundred and six 17- to 25-year-old females completed the CG bodyline task, which involved estimating the size of full-length CG body stimuli along a visual analogue scale. Our results show that perception of body size for CG stimuli was non-linear: participants struggled to discriminate between extreme body sizes and overestimated size changes between bodies close to the average size. Furthermore, one of our measured size estimation biases was larger for CG stimuli. Our collective findings suggest caution when employing CG stimuli in experimental research on body perception.
Affiliation(s)
- Joanna Alexi
- School of Psychological Science, University of Western Australia, Perth, WA, Australia
- Kendra Dommisse
- School of Psychological Science, University of Western Australia, Perth, WA, Australia; Telethon Kids Institute, University of Western Australia, Perth, WA, Australia
- Dominique Cleary
- School of Psychological Science, University of Western Australia, Perth, WA, Australia; Telethon Kids Institute, University of Western Australia, Perth, WA, Australia
- Romina Palermo
- School of Psychological Science, University of Western Australia, Perth, WA, Australia
- Nadine Kloth
- School of Psychological Science, University of Western Australia, Perth, WA, Australia
- Jason Bell
- School of Psychological Science, University of Western Australia, Perth, WA, Australia
37
Kihara K, Takeda Y. The Role of Low-Spatial Frequency Components in the Processing of Deceptive Faces: A Study Using Artificial Face Models. Front Psychol 2019; 10:1468. [PMID: 31297078] [PMCID: PMC6607955] [DOI: 10.3389/fpsyg.2019.01468]
Abstract
Interpreting another's true emotion is important for social communication, even in the face of deceptive facial cues. Because spatial frequency components provide important clues for recognizing facial expressions, we investigated how we use spatial frequency information from deceptive faces to interpret true emotion. We conducted two different tasks: a face-generating experiment in which participants were asked to generate deceptive and genuine faces by tuning the intensity of happy and angry expressions (Experiment 1) and a face-classification task in which participants had to classify presented faces as either deceptive or genuine (Experiment 2). Low- and high-spatial frequency (LSF and HSF) components were varied independently. The results showed that deceptive happiness (i.e., anger is the hidden expression) involved different intensities for LSF and HSF. These results suggest that we can identify hidden anger by perceiving unbalanced intensities of emotional expression between LSF and HSF information contained in deceptive faces.
Affiliation(s)
- Ken Kihara
- Automotive Human Factors Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan
- Yuji Takeda
- Automotive Human Factors Research Center, National Institute of Advanced Industrial Science and Technology (AIST), Tsukuba, Japan
38
Affiliation(s)
- Julia Spielmann
- Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA
- Chadly Stern
- Department of Psychology, University of Illinois at Urbana-Champaign, Champaign, IL, USA
39
Nakamura K, Watanabe K. Data-driven mathematical model of East-Asian facial attractiveness: the relative contributions of shape and reflectance to attractiveness judgements. R Soc Open Sci 2019; 6:182189. [PMID: 31218042] [PMCID: PMC6549996] [DOI: 10.1098/rsos.182189]
Abstract
Facial attractiveness is judged through a combination of multiple cues, including morphology (facial shape) and skin properties (facial reflectance). While several studies have examined how people in Western cultures judge facial attractiveness, there have been fewer investigations into non-Western attitudes, largely because stimuli that quantitatively vary the attractiveness of non-Western faces are rare. In the present study, we built a model of the attractiveness of East-Asian faces as judged by East-Asian observers. To this end, 400 computer-generated East-Asian faces were created and attractiveness ratings were collected from Japanese observers. Data-driven mathematical modelling was used to identify quantitative links between facial attractiveness and shape and reflectance properties, with no prior hypothesis. Results indicate that faces with larger eyes, smaller noses and brighter skin are judged as more attractive regardless of the sex of the face, possibly reflecting a general preference for femininity. Shape is a strong determinant of attractiveness for both male and female faces, while reflectance properties are less important in judging male facial attractiveness. Our model provides a tool for producing East-Asian face stimuli that vary quantitatively in attractiveness and can be used to elucidate visual processes related to attractiveness judgements.
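A data-driven model of this kind, regressing facial shape and reflectance features onto attractiveness ratings with no prior hypothesis, can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the feature dimensions, weights, noise level, and simulated ratings are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
n_faces, n_shape, n_reflect = 400, 10, 10

# Hypothetical face-model coefficients (assumed dimensions, not the study's).
shape = rng.normal(size=(n_faces, n_shape))
reflect = rng.normal(size=(n_faces, n_reflect))
X = np.hstack([shape, reflect])

# Simulated mean ratings: shape cues weighted more heavily than reflectance,
# mirroring the reported pattern; weights and noise level are invented.
true_w = np.concatenate([rng.normal(1.0, 0.2, n_shape),
                         rng.normal(0.3, 0.1, n_reflect)])
ratings = X @ true_w + rng.normal(0.0, 0.5, n_faces)

# Ordinary least squares fit with no prior hypothesis about which cues matter.
w, *_ = np.linalg.lstsq(X, ratings, rcond=None)

# Total absolute weight carried by shape vs. reflectance features.
shape_contrib = np.abs(w[:n_shape]).sum()
reflect_contrib = np.abs(w[n_shape:]).sum()
print(f"shape: {shape_contrib:.2f}, reflectance: {reflect_contrib:.2f}")
```

With the fitted weights in hand, new faces of any target attractiveness can be synthesized by moving along the weight vector in feature space, which is the practical use of such a model.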
Affiliation(s)
- Koyo Nakamura
- Faculty of Science and Engineering, Waseda University, Tokyo, Japan
- Japan Society for the Promotion of Science, Tokyo, Japan
- Keio Advanced Research Centers, Tokyo, Japan
- Katsumi Watanabe
- Faculty of Science and Engineering, Waseda University, Tokyo, Japan
- Art and Design, University of New South Wales, Sydney, Australia
40
Ho PK, Woods A, Newell FN. Temporal shifts in eye gaze and facial expressions independently contribute to the perceived attractiveness of unfamiliar faces. Vis Cogn 2019. [DOI: 10.1080/13506285.2018.1564807]
Affiliation(s)
- Pik Ki Ho
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- Fiona N. Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
41
Balas B, Auen A. Perceiving Animacy in Own- and Other-Species Faces. Front Psychol 2019; 10:29. [PMID: 30728795] [PMCID: PMC6351462] [DOI: 10.3389/fpsyg.2019.00029]
Abstract
Though artificial faces of various kinds are rapidly becoming more and more life-like due to advances in graphics technology (Suwajanakorn et al., 2015; Booth et al., 2017), observers can typically distinguish real faces from artificial faces. In general, face recognition is tuned to experience such that expert-level processing is most evident for faces that we encounter frequently in our visual world, but the extent to which face animacy perception is also tuned to in-group vs. out-group categories remains an open question. In the current study, we chose to examine how the perception of animacy in human faces and dog faces was affected by face inversion and the duration of face images presented to adult observers. We hypothesized that the impact of these manipulations may differ as a function of species category, indicating that face animacy perception is tuned for in-group faces. Briefly, we found evidence of such a differential impact, suggesting either that distinct mechanisms are used to evaluate the "life" in a face for in-group and out-group faces, or that the efficiency of a common mechanism varies substantially as a function of visual expertise.
Affiliation(s)
- Benjamin Balas
- Department of Psychology, North Dakota State University, Fargo, ND, United States
- Center for Visual and Cognitive Neuroscience, North Dakota State University, Fargo, ND, United States
- Amanda Auen
- Department of Psychology, North Dakota State University, Fargo, ND, United States
42
Zhao J, Meng Q, An L, Wang Y. An event-related potential comparison of facial expression processing between cartoon and real faces. PLoS One 2019; 14:e0198868. [PMID: 30629582] [PMCID: PMC6328201] [DOI: 10.1371/journal.pone.0198868]
Abstract
Faces play important roles in the social lives of humans. Besides real faces, people also encounter numerous cartoon faces in daily life, which convey basic emotional states through facial expressions. Using event-related potentials (ERPs), we conducted a facial expression recognition experiment with 17 university students to compare the processing of cartoon faces with that of real faces. This study used face type (real vs. cartoon), emotion valence (happy vs. angry) and participant gender (male vs. female) as independent variables. Reaction time, recognition accuracy, and the amplitudes and latencies of emotion processing-related ERP components such as the N170, VPP (vertex positive potential), and LPP (late positive potential) were used as dependent variables. The ERP results revealed that cartoon faces elicited larger N170 and VPP amplitudes and a shorter N170 latency than real faces, whereas real faces induced larger LPP amplitudes than cartoon faces. In addition, the results showed a significant difference across brain regions, reflected in a right-hemisphere advantage. The behavioural results showed that reaction times for happy faces were shorter than those for angry faces, that females were more accurate than males, and that males recognized angry faces more accurately than happy faces. Given the sample size, these results are suggestive rather than conclusive regarding differences in facial expression recognition and neural processing between cartoon and real faces. Cartoon faces showed a higher processing intensity and speed than real faces during the early processing stage, whereas more attentional resources were allocated to real faces during the late processing stage.
Affiliation(s)
- Jiayin Zhao
- Beijing Key Laboratory of Learning and Cognition, Department of Psychology, Capital Normal University, Beijing, China
- Qi Meng
- Beijing Key Laboratory of Learning and Cognition, Department of Psychology, Capital Normal University, Beijing, China
- Licong An
- Beijing Key Laboratory of Learning and Cognition, Department of Psychology, Capital Normal University, Beijing, China
- Yifang Wang
- Beijing Key Laboratory of Learning and Cognition, Department of Psychology, Capital Normal University, Beijing, China
43
Nemrodov D, Behrmann M, Niemeier M, Drobotenko N, Nestor A. Multimodal evidence on shape and surface information in individual face processing. Neuroimage 2019; 184:813-825. [DOI: 10.1016/j.neuroimage.2018.09.083]
44
Třebický V, Fialová J, Stella D, Štěrbová Z, Kleisner K, Havlíček J. 360 Degrees of Facial Perception: Congruence in Perception of Frontal Portrait, Profile, and Rotation Photographs. Front Psychol 2018; 9:2405. [PMID: 30581400] [PMCID: PMC6293201] [DOI: 10.3389/fpsyg.2018.02405]
Abstract
Studies in social perception have traditionally used frontal portrait photographs as stimuli. It turns out, however, that 2D frontal depiction may not fully capture the morphological diversity of facial features. Recently, 3D images have become increasingly popular, but whether their perception differs from that of 2D images has not yet been systematically studied. Here we investigated congruence in the perception of portrait, left-profile, and 360° rotation photographs. The photographs were obtained from 45 male athletes under standardized conditions. In two separate studies, each set of images was rated for formidability (portraits by 62, profiles by 60, and 360° rotations by 94 raters) and attractiveness (portraits by 195, profiles by 176, and 360° rotations by 150 raters) on a 7-point scale. The ratings of the stimulus types were highly intercorrelated (for formidability all rs > 0.8, for attractiveness all rs > 0.7). Moreover, we found no differences in mean ratings between the three types of stimuli for either formidability or attractiveness. Overall, our results suggest that different facial views convey highly overlapping information about an individual's structural facial elements. They lead to congruent assessments of formidability and attractiveness, and a single viewing angle seems sufficient for face perception research.
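The congruence analysis above, correlating per-face mean ratings across stimulus types, can be sketched like this. The data are simulated under the assumption that every view taps the same latent trait per face; nothing here reproduces the study's actual ratings.

```python
import numpy as np

rng = np.random.default_rng(1)
n_faces = 45  # matches the number of stimulus persons in the study

# Assumed generative model: each view's mean rating reflects one latent
# trait per face plus independent measurement noise (values invented).
trait = rng.normal(size=n_faces)
portrait = trait + rng.normal(0.0, 0.3, n_faces)
profile = trait + rng.normal(0.0, 0.3, n_faces)
rotation = trait + rng.normal(0.0, 0.3, n_faces)

# Congruence between stimulus types as Pearson correlations of mean ratings.
r_pp = np.corrcoef(portrait, profile)[0, 1]
r_pr = np.corrcoef(portrait, rotation)[0, 1]
print(f"portrait vs profile:  r = {r_pp:.2f}")
print(f"portrait vs rotation: r = {r_pr:.2f}")
```

High correlations under this model are what "highly overlapping information" means operationally: the view-specific noise, not the view itself, is the main source of disagreement between ratings.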
Affiliation(s)
- Vít Třebický
- National Institute of Mental Health, Klecany, Czechia
- Faculty of Science, Charles University, Prague, Czechia
- Jitka Fialová
- National Institute of Mental Health, Klecany, Czechia
- Faculty of Science, Charles University, Prague, Czechia
- David Stella
- National Institute of Mental Health, Klecany, Czechia
- Faculty of Science, Charles University, Prague, Czechia
- Zuzana Štěrbová
- National Institute of Mental Health, Klecany, Czechia
- Faculty of Science, Charles University, Prague, Czechia
- Karel Kleisner
- National Institute of Mental Health, Klecany, Czechia
- Faculty of Science, Charles University, Prague, Czechia
- Jan Havlíček
- National Institute of Mental Health, Klecany, Czechia
- Faculty of Science, Charles University, Prague, Czechia
45
Balas B, Tupa L, Pacella J. Measuring social variables in real and artificial faces. Comput Human Behav 2018. [DOI: 10.1016/j.chb.2018.07.013]
46
Del Zotto M, Framorando D, Pegna AJ. Waist-to-hip ratio affects female body attractiveness and modulates early brain responses. Eur J Neurosci 2018; 52:4490-4498. [PMID: 30347463] [DOI: 10.1111/ejn.14209]
Abstract
This investigation examined the electrophysiological response underlying the visual processing of waist-to-hip ratio (WHR) in female bodies, a characteristic known to affect perceived attractiveness. WHRs of female bodies were artificially adjusted to values of 0.6, 0.7, 0.8 or 0.9. Behavioural ratings of attractiveness revealed a preference for WHRs of 0.7 in the overall group of participants, which included both male and female heterosexual individuals. Event-related potentials (ERPs) were then recorded while participants performed a selective attention task involving photographs of female models and scrambled images. Results showed that the P1 (80-120 ms) and N1 (130-170 ms) components over posterior brain regions were the earliest components to be modulated by attention and bodies. Interestingly, the vertex-positive potential, occurring between 120 and 180 ms, showed a greater positivity for WHRs of 0.7 compared with the other ratios. However, this increase was only observed when the body stimuli were attended; no effect was observed for unattended bodies. These findings provide evidence of an early brain sensitivity to visual attributes that constitute secondary sexual characteristics. Although relatively subtle in their physical characteristics, these signs carry strong behavioural significance, increasing reported attractiveness, likely because they convey biological signals of good health and greater reproductive success. Our results therefore reveal that attributes associated with sexual attractiveness in female bodies are processed rapidly in the stream of visual processing.
Affiliation(s)
- Marzia Del Zotto
- Division of Medical Information Sciences, Department of Radiology and Medical Informatics, Faculty of Medicine, University of Geneva, Geneva, CH-1211, Switzerland; Faculty of Psychology and Educational Sciences, University of Geneva, CH-1211, Geneva, Switzerland
- David Framorando
- Faculty of Psychology and Educational Sciences, University of Geneva, CH-1211, Geneva, Switzerland
- Alan J Pegna
- Faculty of Psychology and Educational Sciences, University of Geneva, CH-1211, Geneva, Switzerland; School of Psychology, The University of Queensland, Brisbane, Qld, 4072, Australia
47
Lick DJ, Johnson KL. Facial Cues to Race and Gender Interactively Guide Age Judgments. Soc Cogn 2018. [DOI: 10.1521/soco.2018.36.5.497]
48
Kätsyri J. Those Virtual People all Look the Same to me: Computer-Rendered Faces Elicit a Higher False Alarm Rate Than Real Human Faces in a Recognition Memory Task. Front Psychol 2018; 9:1362. [PMID: 30123166] [PMCID: PMC6086000] [DOI: 10.3389/fpsyg.2018.01362]
Abstract
Virtual as compared with real human characters can elicit a sense of uneasiness in human observers, characterized by lack of familiarity and even feelings of eeriness (the “uncanny valley” hypothesis). Here we test the possibility that this alleged lack of familiarity is literal in the sense that people have lesser perceptual expertise in processing virtual as compared with real human faces. Sixty-four participants took part in a recognition memory study in which they first learned a set of faces and were then asked to recognize them in a testing session. We used real and virtual (computer-rendered) versions of the same faces, presented in either upright or inverted orientation. Real and virtual faces were matched for low-level visual features such as global luminosity and spatial frequency contents. Our results demonstrated a higher response bias toward responding “seen before” for virtual as compared with real faces, which was further explained by a higher false alarm rate for the former. This finding resembles a similar effect for recognizing human faces from other than one's own ethnic groups (the “other race effect”). Virtual faces received clearly higher subjective eeriness ratings than real faces. Our results did not provide evidence of poorer overall recognition memory or lesser inversion effect for virtual faces, however. The higher false alarm rate finding supports the notion that lesser perceptual expertise may contribute to the lack of subjective familiarity with virtual faces. We discuss alternative interpretations and provide suggestions for future research.
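The response bias and false alarm findings map naturally onto standard signal detection measures. A minimal sketch with invented hit and false-alarm rates, not the study's data:

```python
from statistics import NormalDist

z = NormalDist().inv_cdf  # probit transform of a rate

def sdt_measures(hit_rate, fa_rate):
    """Return (d-prime, criterion c) from hit and false-alarm rates."""
    d_prime = z(hit_rate) - z(fa_rate)
    criterion = -0.5 * (z(hit_rate) + z(fa_rate))
    return d_prime, criterion

# Invented rates for illustration: same hit rate, but more false alarms
# for virtual faces, as the abstract reports.
d_real, c_real = sdt_measures(0.80, 0.20)
d_virt, c_virt = sdt_measures(0.80, 0.35)

# A lower (more liberal) criterion for virtual faces captures the bias
# toward responding "seen before"; sensitivity (d') also drops.
print(f"real:    d'={d_real:.2f}, c={c_real:.2f}")
print(f"virtual: d'={d_virt:.2f}, c={c_virt:.2f}")
```

Separating sensitivity from criterion in this way is what lets the abstract distinguish "a higher response bias" from "poorer overall recognition memory".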
Affiliation(s)
- Jari Kätsyri
- Brain and Emotion Laboratory, Department of Cognitive Neuroscience, Maastricht University, Maastricht, Netherlands
49
de la Rosa S, Breidt M. Virtual reality: A new track in psychological research. Br J Psychol 2018; 109:427-430. [PMID: 29748966] [PMCID: PMC6055789] [DOI: 10.1111/bjop.12302]
Abstract
One major challenge of social interaction research is to achieve high experimental control over social interactions to allow for rigorous scientific reasoning. Virtual reality (VR) promises this level of control. Pan and Hamilton guide us with a detailed review on existing and future possibilities and challenges of using VR for social interaction research. Here, we extend the discussion to methodological and practical implications when using VR.
Affiliation(s)
- Stephan de la Rosa
- Department for Human Perception Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
- Martin Breidt
- Department for Human Perception Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
50
Walker M, Schönborn S, Greifeneder R, Vetter T. The Basel Face Database: A validated set of photographs reflecting systematic differences in Big Two and Big Five personality dimensions. PLoS One 2018; 13:e0193190. [PMID: 29590124] [PMCID: PMC5873939] [DOI: 10.1371/journal.pone.0193190]
Abstract
Upon a first encounter, individuals spontaneously associate faces with certain personality dimensions. Such first impressions can strongly impact judgments and decisions and may prove highly consequential. Researchers investigating the impact of facial information often rely on (a) real photographs that have been selected to vary on the dimension of interest, (b) morphed photographs, or (c) computer-generated faces (avatars). All three approaches have distinct advantages. Here we present the Basel Face Database, which combines these advantages. In particular, the Basel Face Database consists of real photographs that are subtly, but systematically manipulated to show variations in the perception of the Big Two and the Big Five personality dimensions. To this end, the information specific to each psychological dimension is isolated and modeled in new photographs. Two studies serve as systematic validation of the Basel Face Database. The Basel Face Database opens a new pathway for researchers across psychological disciplines to investigate effects of perceived personality.
Affiliation(s)
- Mirella Walker
- Department of Psychology, University of Basel, Basel, Switzerland
- Sandro Schönborn
- Department for Mathematics and Computer Science, University of Basel, Basel, Switzerland
- Thomas Vetter
- Department for Mathematics and Computer Science, University of Basel, Basel, Switzerland