1
Urbanova P, Goldmann T, Cerny D, Drahansky M. Head poses and grimaces: Challenges for automated face identification algorithms? Sci Justice 2024; 64:421-442. [PMID: 39025567 DOI: 10.1016/j.scijus.2024.06.002]
Abstract
In today's biometric and commercial settings, state-of-the-art image processing relies solely on artificial intelligence and machine learning, which provide a high level of accuracy. However, these principles are deeply rooted in abstract, complex "black-box systems". When applied to forensic image identification, concerns about transparency and accountability emerge. This study explores the impact of two challenging factors in automated facial identification: facial expressions and head poses. The sample comprised 3D faces with nine prototype expressions, collected from 41 participants (13 males, 28 females) of European descent aged 19.96 to 50.89 years. Pre-processing involved converting 3D models to 2D color images (256 × 256 px). Probes included a set of 9 images per individual with head poses varying by 5° in both left-to-right (yaw) and up-and-down (pitch) directions for neutral expressions. A second set of 3,610 images per individual covered viewpoints in 5° increments from -45° to 45° for head movements and different facial expressions, forming the targets. Pair-wise comparisons using ArcFace, a state-of-the-art face identification algorithm, yielded 54,615,690 dissimilarity scores. Results indicate that minor head deviations in probes have minimal impact. However, performance diminished as targets deviated from the frontal position. Right-to-left movements were less influential than up-and-down movements, with downward pitch showing less impact than upward movements. The lowest accuracy was for upward pitch at 45°. Dissimilarity scores were consistently higher for males than for females across all studied factors. Performance diverged particularly in upward movements, starting at 15°. Among the tested facial expressions, happiness and contempt performed best, while disgust exhibited the lowest AUC values.
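The ArcFace pipeline itself is not reproduced here, but the pair-wise comparison step the abstract describes can be illustrated with a minimal sketch: a dissimilarity score is commonly computed as one minus the cosine similarity of two face embedding vectors. The toy 4-D vectors below (standing in for ArcFace's typically 512-D embeddings) and all function names are illustrative assumptions, not the authors' code.

```python
import math

def cosine_dissimilarity(a, b):
    """1 - cosine similarity between two face embeddings; 0 means identical direction."""
    dot = sum(x * y for x, y in zip(a, b))
    norm = math.sqrt(sum(x * x for x in a)) * math.sqrt(sum(y * y for y in b))
    return 1.0 - dot / norm

def pairwise_scores(probes, targets):
    """Compare every probe embedding against every target embedding."""
    return [[cosine_dissimilarity(p, t) for t in targets] for p in probes]

# Toy 4-D embeddings standing in for ArcFace feature vectors.
probes = [[1.0, 0.0, 0.0, 0.0], [0.9, 0.1, 0.0, 0.0]]
targets = [[1.0, 0.0, 0.0, 0.0], [0.0, 1.0, 0.0, 0.0]]
scores = pairwise_scores(probes, targets)
```

With one embedding per probe image and per target image, a full cross-comparison of this kind yields the very large score matrices the study reports.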
Affiliation(s)
- Petra Urbanova
- Department of Anthropology, Faculty of Science, Masaryk University, Czech Republic.
- Tomas Goldmann
- Department of Intelligent Systems, Faculty of Information Technology, Brno University of Technology, Brno, Czech Republic
- Dominik Cerny
- Department of Anthropology, Faculty of Science, Masaryk University, Czech Republic
- Martin Drahansky
- Department of Anthropology, Faculty of Science, Masaryk University, Czech Republic
2
Local features drive identity responses in macaque anterior face patches. Nat Commun 2022; 13:5592. [PMID: 36151142 PMCID: PMC9508131 DOI: 10.1038/s41467-022-33240-w]
Abstract
Humans and other primates recognize one another in part based on unique structural details of the face, including both local features and their spatial configuration within the head and body. Visual analysis of the face is supported by specialized regions of the primate cerebral cortex, which in macaques are commonly known as face patches. Here we ask whether the responses of neurons in anterior face patches, thought to encode face identity, are more strongly driven by local or holistic facial structure. We created stimuli consisting of recombinant photorealistic images of macaques, where we interchanged the eyes, mouth, head, and body between individuals. Unexpectedly, neurons in the anterior medial (AM) and anterior fundus (AF) face patches were predominantly tuned to local facial features, with minimal neural selectivity for feature combinations. These findings indicate that the high-level structural encoding of face identity rests upon populations of neurons specialized for local features. Anterior face patches in the macaque have been assumed to represent face identity in a holistic manner. Here the authors show that the neural encoding of face identity in the anterior medial and anterior fundus face patches is instead driven principally by local features.
3
Gogan T, Beaudry J, Oldmeadow J. Image variability and face matching. Perception 2022; 51:804-819. [PMID: 35989636 DOI: 10.1177/03010066221119088]
Abstract
This study investigates whether variability in perceived trait judgements disrupts our ability to match unfamiliar faces. In this preregistered study, 174 participants completed a face matching task where they were asked to indicate whether two ambient face images belonged to the same person or different people (17,748 total data points). Participants completed 51 match trials consisting of images of the same person that differed substantially on one trait (either trustworthiness, dominance or attractiveness) with minimal differences in the alternate traits. Participants also completed 51 mismatch trials which contained two photos of similar-looking individuals. We hypothesised that participants would make more errors on match trials when images differed in terms of attractiveness ratings than when they differed on trustworthiness or dominance. Contrary to expectations, images that differed in terms of attractiveness were matched most accurately, and there was no relationship between the extent of differences in attractiveness ratings and accuracy. There was some evidence that differences in perceived dominance and, to a lesser extent, trustworthiness were associated with lower face matching performance. However, these relationships were not significant when alternate traits were accounted for. The findings of our study suggest that face matching performance is largely robust against variation in trait judgements.
Affiliation(s)
- Taylor Gogan
- Swinburne University of Technology, Australia
- Jennifer Beaudry
- Swinburne University of Technology, Australia; Flinders University, Australia
4
Mishra MV, Fry RM, Saad E, Arizpe JM, Ohashi YGB, DeGutis JM. Comparing the sensitivity of face matching assessments to detect face perception impairments. Neuropsychologia 2021; 163:108067. [PMID: 34673046 DOI: 10.1016/j.neuropsychologia.2021.108067]
Abstract
Numerous neurological, developmental, and psychiatric conditions demonstrate impaired face recognition, which can be socially debilitating. These impairments can be caused by either deficient face perception or face memory mechanisms. Though there are well-validated, sensitive measures of face memory impairments, it currently remains unclear which assessments best measure face perception impairments. A sensitive, validated face perception measure could help with diagnosing causes of face recognition deficits and be useful in characterizing individual differences in unimpaired populations. Here, we compared the computerized Benton Face Recognition Test (BFRT-c) and Cambridge Face Perception Test (CFPT) in their ability to differentiate developmental prosopagnosics (DPs, N = 30) and age-matched controls (N = 30). Participants completed the BFRT-c, CFPT, and two additional face perception assessments: the University of Southern California Face Perception Test (USCFPT) and a novel same/different face matching test (SDFMT). Participants were also evaluated on objective and subjective face recognition tasks including the Cambridge Face Memory Test, famous faces test, and Prosopagnosia Index-20. We performed a logistic regression with the perception tests predicting DP vs. control group membership and used multiple linear regressions to predict continuous objective and subjective face recognition memory. Our results show that the BFRT-c performed as well as, if not better than, the CFPT, and that both tests clearly outperformed the USCFPT and SDFMT. Further, exploratory analyses revealed that face lighting-change conditions better predicted DP group membership and face recognition abilities than viewpoint-change conditions. Together, these results support the combined use of the BFRT-c and CFPT to best assess face perception impairments.
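The group-classification analysis described above (perception test scores predicting DP vs. control membership) can be sketched with a minimal logistic regression fit by gradient descent. The z-scored test values, sample sizes, and function names below are hypothetical illustrations, not the study's data or analysis code.

```python
import math

def _sigmoid(z):
    z = max(-30.0, min(30.0, z))  # clamp for numerical safety on separable data
    return 1.0 / (1.0 + math.exp(-z))

def train_logistic(xs, ys, lr=0.1, epochs=500):
    """Fit p(y=1 | x) = sigmoid(w.x + b) by stochastic gradient descent."""
    w = [0.0] * len(xs[0])
    b = 0.0
    for _ in range(epochs):
        for x, y in zip(xs, ys):
            g = _sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b) - y
            w = [wi - lr * g * xi for wi, xi in zip(w, x)]
            b -= lr * g
    return w, b

def predict(w, b, x):
    """Predicted probability of impaired-group membership."""
    return _sigmoid(sum(wi * xi for wi, xi in zip(w, x)) + b)

# Hypothetical z-scored (BFRT-c, CFPT) accuracies; 1 = prosopagnosic, 0 = control.
xs = [(-1.5, -1.2), (-1.0, -0.8), (-1.2, -1.5), (1.0, 0.9), (0.8, 1.3), (1.4, 0.7)]
ys = [1, 1, 1, 0, 0, 0]
w, b = train_logistic(xs, ys)
```

Comparing fitted coefficients (or, better, model deviance) across predictors is one way a test's incremental diagnostic value can be assessed, as the authors do with regression models.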
Affiliation(s)
- Maruti V Mishra
- Department of Psychiatry, Harvard Medical School, Boston, MA, USA; Boston Attention and Learning Laboratory, VA Boston Healthcare, Jamaica Plain Division, 150 S Huntington Ave., Boston, MA, USA
- Regan M Fry
- Department of Psychiatry, Harvard Medical School, Boston, MA, USA; Boston Attention and Learning Laboratory, VA Boston Healthcare, Jamaica Plain Division, 150 S Huntington Ave., Boston, MA, USA
- Elyana Saad
- Department of Psychiatry, Harvard Medical School, Boston, MA, USA
- Joseph M Arizpe
- Science Applications International Corporation (SAIC), Fort Sam Houston, TX, USA
- Yuri-Grace B Ohashi
- Department of Psychology, Harvard University, Cambridge, MA, USA; Harvard Decision Science Laboratory, Harvard Kennedy School, Cambridge, MA, USA
- Joseph M DeGutis
- Department of Psychiatry, Harvard Medical School, Boston, MA, USA; Boston Attention and Learning Laboratory, VA Boston Healthcare, Jamaica Plain Division, 150 S Huntington Ave., Boston, MA, USA
5
Hunnisett N, Favelle S. Within-person variability can improve the identification of unfamiliar faces across changes in viewpoint. Q J Exp Psychol (Hove) 2021; 74:1873-1887. [PMID: 33783277 DOI: 10.1177/17470218211009771]
Abstract
Unfamiliar face identification is concerningly error prone, especially across changes in viewing conditions. Within-person variability has been shown to improve matching performance for unfamiliar faces, but this has only been demonstrated using images of a front view. In this study, we test whether the advantage of within-person variability from front views extends to matching to target images of a face rotated in view. Participants completed either a simultaneous matching task (Experiment 1) or a sequential matching task (Experiment 2) in which they were tested on their ability to match the identity of a face shown in an array of either one or three ambient front-view images, with a target image shown in front, three-quarter, or profile view. While the effect was stronger in Experiment 2, we found a consistent pattern in match trials across both experiments in that there was a multiple image matching benefit for front, three-quarter, and profile-view targets. We found multiple image effects for match trials only, indicating that providing observers with multiple ambient images confers an advantage for recognising different images of the same identity but not for discriminating between images of different identities. Signal detection measures also indicate a multiple image advantage despite a more liberal response bias for multiple image trials. Our results show that within-person variability information for unfamiliar faces can be generalised across views and can provide insights into the initial processes involved in the representation of familiar faces.
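The signal detection measures mentioned above can be illustrated with a short sketch computing sensitivity (d′) and response criterion (c) from hit and false-alarm counts. The log-linear correction and the example counts are illustrative assumptions, not the authors' analysis.

```python
from statistics import NormalDist

def _rates(hits, misses, false_alarms, correct_rejections):
    """Hit and false-alarm rates with a log-linear (+0.5) correction so the
    z-transform stays finite when a raw rate would be 0 or 1."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
    return hit_rate, fa_rate

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate)."""
    z = NormalDist().inv_cdf
    hr, far = _rates(hits, misses, false_alarms, correct_rejections)
    return z(hr) - z(far)

def criterion(hits, misses, false_alarms, correct_rejections):
    """Response bias c; negative values indicate a liberal ('same') bias."""
    z = NormalDist().inv_cdf
    hr, far = _rates(hits, misses, false_alarms, correct_rejections)
    return -0.5 * (z(hr) + z(far))
```

In a matching task, a "hit" would be a correct same-identity response on a match trial and a "false alarm" a same-identity response on a mismatch trial, so a multiple-image advantage in d′ can hold even alongside a more liberal criterion.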
Affiliation(s)
- Niamh Hunnisett
- School of Psychology, University of Wollongong, Wollongong, NSW, Australia
- Simone Favelle
- School of Psychology, University of Wollongong, Wollongong, NSW, Australia
6
Gogan T, Beaudry J, Oldmeadow J. Within-Person Variability in First Impressions From Faces. Perception 2021; 50:595-614. [PMID: 34053353 DOI: 10.1177/03010066211019727]
Abstract
Perceptions of an individual can change dramatically across different images of their face. Questions remain as to whether some traits are more sensitive to image variability than others. To investigate this issue, we constructed a database of 340 naturalistic images consisting of 20 photos of 17 individuals. In this preregistered study, 95 participants rated all 340 images on one of three traits: trustworthiness, dominance, or attractiveness. Across images, participants' trustworthiness ratings tended to vary more than dominance, which in turn varied more than attractiveness; however, the relative differences between traits depended on the identity in question. Importantly, despite the variability in ratings within identities, there were substantial differences between individuals, suggesting that these trait judgements are based to some degree on relatively invariant facial characteristics. We found greater between-identity variability for attractiveness judgements compared to trustworthiness and dominance. Future research should further investigate the extent to which each trait dimension is tied to the identity of the faces.
7
Diego-Mas JA, Fuentes-Hurtado F, Naranjo V, Alcañiz M. The Influence of Each Facial Feature on How We Perceive and Interpret Human Faces. Iperception 2020; 11:2041669520961123. [PMID: 33062242 PMCID: PMC7533946 DOI: 10.1177/2041669520961123]
Abstract
Facial information is processed by our brain in such a way that we immediately make judgments about, for example, attractiveness or masculinity or interpret personality traits or moods of other people. The appearance of each facial feature has an effect on our perception of facial traits. This research addresses the problem of measuring the size of these effects for five facial features (eyes, eyebrows, nose, mouth, and jaw). Our proposal is a mixed feature-based and image-based approach that allows judgments to be made on complete real faces in the categorization tasks, rather than on synthetic, noisy, or partial faces that can influence the assessment. Each facial feature of the faces is automatically classified considering their global appearance using principal component analysis. Using this procedure, we establish a reduced set of relevant specific attributes (each one describing a complete facial feature) to characterize faces. In this way, a more direct link can be established between perceived facial traits and what people intuitively consider an eye, an eyebrow, a nose, a mouth, or a jaw. A set of 92 male faces were classified using this procedure, and the results were related to their scores in 15 perceived facial traits. We show that the relevant features greatly depend on what we are trying to judge. Globally, the eyes have the greatest effect. However, other facial features are more relevant for some judgments like the mouth for happiness and femininity or the nose for dominance.
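The paper's PCA-based feature classification is not reproduced here, but the core idea, projecting a vectorised facial-feature image onto the first principal axis of a set of such images, can be sketched as follows. The power-iteration routine, the toy three-pixel "eye crops", and all names are hypothetical illustrations under that assumption.

```python
import math

def principal_component(data, iters=100):
    """First PCA axis of mean-centred rows, via power iteration on X^T X."""
    n, d = len(data), len(data[0])
    means = [sum(row[j] for row in data) / n for j in range(d)]
    centred = [[row[j] - means[j] for j in range(d)] for row in data]
    v = [1.0] * d
    for _ in range(iters):
        # One multiplication of v by X^T X (proportional to the covariance matrix).
        proj = [sum(r[j] * v[j] for j in range(d)) for r in centred]
        v = [sum(proj[i] * centred[i][j] for i in range(n)) for j in range(d)]
        norm = math.sqrt(sum(x * x for x in v))
        v = [x / norm for x in v]
    return means, v

def score(row, means, v):
    """Coordinate of one feature image along the first principal axis."""
    return sum((row[j] - means[j]) * v[j] for j in range(len(v)))

# Toy 3-pixel 'eye crops'; real use would vectorise cropped feature images.
data = [[2.0, 0.1, 0.0], [4.0, 0.2, 0.1], [6.0, 0.1, 0.0], [8.0, 0.2, 0.1]]
means, v = principal_component(data)
scores = [score(row, means, v) for row in data]
```

Binning such projections gives a small set of appearance categories per feature, which is the kind of reduced attribute set the abstract describes relating to trait ratings.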
Affiliation(s)
- Jose A. Diego-Mas
- i3B—Institute for Research and Innovation in Bioengineering, Universitat Politecnica de Valencia, Valencia, Spain
- Felix Fuentes-Hurtado
- i3B—Institute for Research and Innovation in Bioengineering, Universitat Politecnica de Valencia, Valencia, Spain
- Valery Naranjo
- i3B—Institute for Research and Innovation in Bioengineering, Universitat Politecnica de Valencia, Valencia, Spain
- Mariano Alcañiz
- i3B—Institute for Research and Innovation in Bioengineering, Universitat Politecnica de Valencia, Valencia, Spain
8
Burt AL, Crewther DP. The 4D Space-Time Dimensions of Facial Perception. Front Psychol 2020; 11:1842. [PMID: 32849084 PMCID: PMC7399249 DOI: 10.3389/fpsyg.2020.01842]
Abstract
Facial information is a powerful channel for human-to-human communication. Characteristically, faces can be defined as biological objects that are four-dimensional (4D) patterns, whereby they have concurrently a spatial structure and surface as well as temporal dynamics. The spatial characteristics of facial objects contain a volume and surface in three dimensions (3D), namely breadth, height and importantly, depth. The temporal properties of facial objects are defined by how a 3D facial structure and surface evolves dynamically over time; where time is referred to as the fourth dimension (4D). Our entire perception of another’s face, whether it be social, affective or cognitive perceptions, is therefore built on a combination of 3D and 4D visual cues. Counterintuitively, over the past few decades of experimental research in psychology, facial stimuli have largely been captured, reproduced and presented to participants with two dimensions (2D), while remaining largely static. The following review aims to advance and update facial researchers, on the recent revolution in computer-generated, realistic 4D facial models produced from real-life human subjects. We delve in-depth to summarize recent studies which have utilized facial stimuli that possess 3D structural and surface cues (geometry, surface and depth) and 4D temporal cues (3D structure + dynamic viewpoint and movement). In sum, we have found that higher-order perceptions such as identity, gender, ethnicity, emotion and personality, are critically influenced by 4D characteristics. In future, it is recommended that facial stimuli incorporate the 4D space-time perspective with the proposed time-resolved methods.
Affiliation(s)
- Adelaide L Burt
- Centre for Human Psychopharmacology, Swinburne University of Technology, Melbourne, VIC, Australia
- David P Crewther
- Centre for Human Psychopharmacology, Swinburne University of Technology, Melbourne, VIC, Australia
9
Cass J, Giltrap G, Talbot D. Female Body Dissatisfaction and Attentional Bias to Body Images Evaluated Using Visual Search. Front Psychol 2020; 10:2821. [PMID: 32038346 PMCID: PMC6987376 DOI: 10.3389/fpsyg.2019.02821]
Abstract
One factor believed to predict body dissatisfaction is an individual’s propensity to attend to certain classes of human body image stimuli relative to other classes. These attentional biases have been evaluated using a range of paradigms, including dot-probe, eye-tracking and free view visual search, which have yielded a range of – often contradictory – findings. This study is the first to employ a classic compound visual search task to investigate the relationship between body dissatisfaction and attentional biases to images of underweight and with-overweight female bodies. Seventy-one undergraduate females, varying in their degree of body dissatisfaction and Body Mass Index (BMI), searched for a horizontal or vertical target line among tilted lines. A separate female body image was presented within close proximity to each line. On average, faster search times were obtained when the target line was paired with a uniquely underweight or with-overweight body relative to neutral (average weight only) trials, indicating that body weight-related images can effectively guide search. This congruent search effect was stronger for individuals with high eating restraint (a behavioral manifestation of body image disturbance) when search involved a uniquely underweight body. By contrast, individuals with high BMIs searched for lines more rapidly when paired with with-overweight rather than underweight bodies, than did individuals with lower BMIs. For incongruent trials – in which a unique body was paired with a distractor rather than the target – search times were indistinguishable from neutral trials, indicating that the deviant bodies neither compulsorily “captured” attention nor reduced participants’ ability to disengage their attention from either underweight or with-overweight bodies. These results imply the existence of attentional strategies which reflect one’s current body and goal-directed eating behaviors.
Affiliation(s)
- John Cass
- School of Psychology, Western Sydney University, Sydney, NSW, Australia; School of Social Sciences and Psychology, Western Sydney University, Sydney, NSW, Australia
- Georgina Giltrap
- School of Social Sciences and Psychology, Western Sydney University, Sydney, NSW, Australia
- Daniel Talbot
- School of Social Sciences and Psychology, Western Sydney University, Sydney, NSW, Australia
10
Quantifying the effect of viewpoint changes on sensitivity to face identity. Vision Res 2019; 165:1-12. [DOI: 10.1016/j.visres.2019.09.006]
11
View specific generalisation effects in face recognition: Front and yaw comparison views are better than pitch. PLoS One 2018; 13:e0209927. [PMID: 30592761 PMCID: PMC6310264 DOI: 10.1371/journal.pone.0209927]
Abstract
It can be difficult to recognise new instances of an unfamiliar face. Recognition errors in this particular situation appear to be viewpoint dependent with error rates increasing with the angular distance between the face views. Studies using front views for comparison have shown that recognising faces rotated in yaw can be difficult and that recognition of faces rotated in pitch is more challenging still. Here we investigate the extent to which viewpoint dependent face recognition depends on the comparison view. Participants were assigned to one of four different comparison view groups: front, ¾ yaw (right), ¾ pitch-up (above) or ¾ pitch-down (below). On each trial, participants matched their particular comparison view to a range of yaw or pitch rotated test views. Results showed that groups with a front or ¾ yaw comparison view had superior overall performance and more successful generalisation to a broader range of both pitch and yaw test views compared to groups with pitch-up or pitch-down comparison views, both of which had a very restricted generalisation range. Regression analyses revealed the importance of image similarity between views for generalisation, with a lesser role for 3D face depth. These findings are consistent with a view interpolation solution to view generalisation of face recognition, with front and ¾ yaw views being most informative.
12
Bülthoff I, Mohler BJ, Thornton IM. Face recognition of full-bodied avatars by active observers in a virtual environment. Vision Res 2018; 157:242-251. [PMID: 29274811 DOI: 10.1016/j.visres.2017.12.001]
Abstract
Viewing faces in motion or attached to a body instead of isolated static faces improves their subsequent recognition. Here we enhanced the ecological validity of face encoding by having observers physically moving in a virtual room populated by life-size avatars. We compared the recognition performance of this active group to two control groups. The first control group watched a passive reenactment of the visual experience of the active group. The second control group saw static screenshots of the avatars. All groups performed the same old/new recognition task after learning. Half of the learned faces were shown at test in an orientation close to that experienced during learning while the others were viewed from a new viewing angle. All observers found novel views more difficult to recognize than familiar ones. Overall, the active group performed better than both other groups. Furthermore, the group learning faces from static images was the only one to be at chance level in the novel-view condition. These findings suggest that active exploration combined with a dynamic experience of the faces to learn allow for more robust face recognition and point out the value of such techniques for integrating facial visual information and enhancing recognition from novel viewpoints.
Affiliation(s)
- Isabelle Bülthoff
- Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Germany.
- Betty J Mohler
- Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Germany
- Ian M Thornton
- Department of Cognitive Science, University of Malta, Malta
13
Favelle S, Hill H, Claes P. About Face: Matching Unfamiliar Faces Across Rotations of View and Lighting. Iperception 2017; 8:2041669517744221. [PMID: 29225768 PMCID: PMC5714100 DOI: 10.1177/2041669517744221]
Abstract
Matching the identities of unfamiliar faces is heavily influenced by variations in their images. Changes to viewpoint and lighting direction during face perception are commonplace across yaw and pitch axes and can result in dramatic image differences. We report two experiments that, for the first time, factorially investigate the combined effects of lighting and view angle on matching performance for unfamiliar faces. The use of three-dimensional head models allowed control of both lighting and viewpoint. We found viewpoint effects in the yaw axis with little to no effect of lighting. However, for rotations about the pitch axis, there were both viewpoint and lighting effects and these interacted where lighting effects were found only for front views and views from below. The pattern of effects was similar regardless of whether view variation occurred as a result of head rotation (Experiment 1) or camera rotation (Experiment 2), suggesting that face matching is not purely image based. Along with face inversion effects in Experiment 1, the results of this study suggest that face perception is based on shape and surface information and draws on implicit knowledge of upright faces and ecological (top) lighting conditions.
Affiliation(s)
- Simone Favelle
- School of Psychology, University of Wollongong, Wollongong, New South Wales, Australia
- Harold Hill
- School of Psychology, University of Wollongong, Wollongong, New South Wales, Australia
- Peter Claes
- ESAT/PSI, Department of Electrical Engineering, KU Leuven, Belgium; Medical Imaging Research Center, UZ Leuven, Belgium
14
Schneider TM, Carbon CC. Taking the Perfect Selfie: Investigating the Impact of Perspective on the Perception of Higher Cognitive Variables. Front Psychol 2017. [PMID: 28649219 PMCID: PMC5465279 DOI: 10.3389/fpsyg.2017.00971]
Abstract
Taking selfies is now becoming a standard human habit. However, as a social phenomenon, research is still in the fledgling stage and the scientific framework is sparse. Selfies allow us to share social information with others in a compact format. Furthermore, we are able to control important photographic and compositional aspects, such as perspective, which have a strong impact on the assessment of a face (e.g., demonstrated by the height-weight illusion, effects of gaze direction, faceism-index). In Study 1, we focused on the impact of perspective (left/right hemiface, above/below vs. frontal presentation) on higher cognitive variables and let 172 participants rate the perceived attractiveness, helpfulness, sympathy, dominance, distinctiveness, and intelligence, plus important information on health issues (e.g., body weight), on the basis of 14 3D faces. We could show that lateral snapshots yielded higher ratings for attractiveness compared to the classical frontal view. However, this effect was more pronounced for left hemifaces and especially female faces. Compared to the frontal condition, 30° right hemifaces were rated as more helpful, but only for female faces while faces viewed from above were perceived as significant less helpful. Direct comparison between left vs. right hemifaces revealed no effect. Relating to sympathy, we only found a significant effect for 30° right male hemifaces, but only in comparison to the frontal condition. Furthermore, female 30° right hemifaces were perceived as more intelligent. Relating to body weight, we replicated the so-called “height-weight illusion.” Other variables remained unaffected. In Study 2, we investigated the impact of a typical selfie-style condition by presenting the respective faces from a lateral (left/right) and tilted (lower/higher) vantage point. 
Most importantly, depending on what persons wish to express with a selfie, a systematic change of perspective can strongly optimize their message; e.g., increasing their attractiveness by shooting from above left, and in contrast, decreasing their expressed helpfulness by shooting from below. We could further extend past findings relating to the height-weight illusion and showed that an additional rotation of the camera positively affected the perception of body weight (lower body weight). We discuss potential explanations for perspective-related effects, especially gender-related ones.
Affiliation(s)
- Tobias M Schneider
- Department of General Psychology and Methodology, University of Bamberg, Bamberg, Germany; Bamberg Graduate School of Affective and Cognitive Sciences, University of Bamberg, Bamberg, Germany; Research Group EPÆG (Ergonomics, Psychological Æsthetics, Gestalt), Bamberg, Germany
- Claus-Christian Carbon
- Department of General Psychology and Methodology, University of Bamberg, Bamberg, Germany; Bamberg Graduate School of Affective and Cognitive Sciences, University of Bamberg, Bamberg, Germany; Research Group EPÆG (Ergonomics, Psychological Æsthetics, Gestalt), Bamberg, Germany
15
Andrews S, Jenkins R, Cursiter H, Burton AM. Telling faces together: Learning new faces through exposure to multiple instances. Q J Exp Psychol (Hove) 2015; 68:2041-50. [PMID: 25607814 DOI: 10.1080/17470218.2014.1003949]
Abstract
We are usually able to recognize novel instances of familiar faces with little difficulty, yet recognition of unfamiliar faces can be dramatically impaired by natural within-person variability in appearance. In a card-sorting task for facial identity, different photos of the same unfamiliar face are often seen as different people. Here we report two card-sorting experiments in which we manipulate whether participants know the number of identities present. Without constraints, participants sort faces into many identities. However, when told the number of identities present, they are highly accurate. This minimal contextual information appears to support viewers in "telling faces together". In Experiment 2 we show that exposure to within-person variability in the sorting task improves performance in a subsequent face-matching task. This appears to offer a fast route to learning generalizable representations of new faces.
Affiliation(s)
- Sally Andrews
- School of Psychology, University of Aberdeen, Aberdeen, UK
16
Favelle SK, Palmisano S. The face inversion effect following pitch and yaw rotations: investigating the boundaries of holistic processing. Front Psychol 2012; 3:563. [PMID: 23267337 PMCID: PMC3525703 DOI: 10.3389/fpsyg.2012.00563] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Received: 09/07/2012] [Accepted: 11/28/2012] [Indexed: 11/13/2022]
Abstract
Upright faces are thought to be processed holistically. However, the range of views within which holistic processing occurs is unknown. Recent research by McKone (2008) suggests that holistic processing occurs for all yaw-rotated face views (i.e., full-face through to profile). Here we examined whether holistic processing occurs for pitch, as well as yaw, rotated face views. In this face recognition experiment: (i) participants made same/different judgments about two sequentially presented faces (either both upright or both inverted); (ii) the test face was pitch/yaw rotated by between 0° and 75° from the encoding face (always a full-face view). Our logic was as follows: if a particular pitch/yaw-rotated face view is being processed holistically when upright, then this processing should be disrupted by inversion. Consistent with previous research, significant face inversion effects (FIEs) were found for all yaw-rotated views. However, while FIEs were found for pitch rotations up to 45°, none were observed for 75° pitch rotations (rotated either above or below the full face). We conclude that holistic processing does not occur for all views of upright faces (e.g., not for uncommon pitch rotated views), only those that can be matched to a generic global representation of a face.
Affiliation(s)
- Simone K Favelle
- School of Psychology, University of Wollongong, Wollongong, NSW, Australia
17
Balas B, Valente N. View-adaptation reveals coding of face pose along image, not object, axes. Vision Res 2012; 67:22-7. [PMID: 22796427 PMCID: PMC3444152 DOI: 10.1016/j.visres.2012.07.002] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Received: 03/19/2012] [Revised: 06/21/2012] [Accepted: 07/03/2012] [Indexed: 11/19/2022]
Abstract
High-level adaptation effects reveal important features of the neural coding of objects and faces. View-adaptation in particular is a highly useful means of characterizing how depth rotation of the face is represented and therefore, how view-invariant recognition of the face may be achieved. In the present study, we used view adaptation to determine the extent to which depth rotations of a face are represented in an image-based or object-based manner. Specifically, we dissociated object-based axes from image-based axes via a 90° planar rotation of the adapting face and observed that participants' responses pre- and post-adaptation are most consistent with an image-based representation of depth rotations of the face. We discuss our data in the context of previous results describing the impact of planar rotation on related aspects of face perception.
Affiliation(s)
- Benjamin Balas
- Department of Psychology, North Dakota State University, Fargo, ND 58102, United States.