1
Muto H, Ide M, Tomita A, Morikawa K. Viewpoint Invariance of Eye Size Illusion Caused by Eyeshadow. Front Psychol 2019; 10:1510. [PMID: 31333542] [PMCID: PMC6624442] [DOI: 10.3389/fpsyg.2019.01510]
Abstract
Previous research found that application of eyeshadow on the upper eyelids induces overestimation of eye size. The present study examined whether or not this eyeshadow illusion is dependent on viewpoint. We created a three-dimensional model of a female face and manipulated the presence/absence of eyeshadow and face orientation around the axis of yaw (Experiment 1) or pitch (Experiment 2) rotation. Using the staircase method, we measured perceived eye size for each face stimulus. Results showed that the eyeshadow illusion occurred regardless of face orientation around axes of both yaw and pitch rotations. Crucially, the illusion’s magnitude did not vary across face orientations; lack of interaction between the illusion’s magnitude and face orientation was confirmed by small values of Bayes factors. These findings ruled out the hypothesis that eyeshadow serves as a depth cue and leads to overestimation of eye size due to size-distance scaling. Alternatively, the present findings suggest that the eyeshadow illusion can be well explained by the assimilation between the eyes and eyeshadow, which also facilitates assimilation between the eyes and eyebrows. Practical implications and the present findings’ generalizability are also discussed.
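The staircase method mentioned in this abstract is a standard adaptive psychophysical procedure for estimating a point of subjective equality (PSE). The sketch below is purely illustrative: the 1-up/1-down rule, step size, reversal count, and simulated observer are assumptions for demonstration, not the authors' actual parameters.

```python
# Minimal 1-up/1-down staircase converging on the point of subjective
# equality (PSE) for a perceived-size judgment. All parameters illustrative.

def simulated_observer(test_size, perceived_size=1.1):
    """Reports True ('comparison looks larger') when the comparison
    stimulus exceeds the observer's perceived size of the reference."""
    return test_size > perceived_size

def run_staircase(start=1.5, step=0.05, max_reversals=8):
    size = start
    direction = -1          # begin by shrinking the comparison stimulus
    reversals = []
    while len(reversals) < max_reversals:
        # Shrink after a "larger" response, grow after a "smaller" response.
        new_direction = -1 if simulated_observer(size) else +1
        if new_direction != direction:
            reversals.append(size)    # response flipped: record a reversal
        direction = new_direction
        size += direction * step
    # Estimate the PSE as the mean of the last few reversal points.
    tail = reversals[-6:]
    return sum(tail) / len(tail)

pse = run_staircase()
```

With the simulated observer above, the staircase oscillates around the assumed perceived size (1.1), so the reversal average lands close to it; in the actual study the observer's keypresses replace `simulated_observer`.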
Affiliation(s)
- Hiroyuki Muto
- School of Human Sciences, Osaka University, Suita, Japan
- Mayu Ide
- School of Human Sciences, Osaka University, Suita, Japan
2
View specific generalisation effects in face recognition: Front and yaw comparison views are better than pitch. PLoS One 2018; 13:e0209927. [PMID: 30592761] [PMCID: PMC6310264] [DOI: 10.1371/journal.pone.0209927]
Abstract
It can be difficult to recognise new instances of an unfamiliar face. Recognition errors in this particular situation appear to be viewpoint dependent with error rates increasing with the angular distance between the face views. Studies using front views for comparison have shown that recognising faces rotated in yaw can be difficult and that recognition of faces rotated in pitch is more challenging still. Here we investigate the extent to which viewpoint dependent face recognition depends on the comparison view. Participants were assigned to one of four different comparison view groups: front, ¾ yaw (right), ¾ pitch-up (above) or ¾ pitch-down (below). On each trial, participants matched their particular comparison view to a range of yaw or pitch rotated test views. Results showed that groups with a front or ¾ yaw comparison view had superior overall performance and more successful generalisation to a broader range of both pitch and yaw test views compared to groups with pitch-up or pitch-down comparison views, both of which had a very restricted generalisation range. Regression analyses revealed the importance of image similarity between views for generalisation, with a lesser role for 3D face depth. These findings are consistent with a view interpolation solution to view generalisation of face recognition, with front and ¾ yaw views being most informative.
3
Gilad-Gutnick S, Harmatz ES, Tsourides K, Yovel G, Sinha P. Recognizing Facial Slivers. J Cogn Neurosci 2018; 30:951-962. [PMID: 29668392] [DOI: 10.1162/jocn_a_01265]
Abstract
We report here an unexpectedly robust ability of healthy human individuals ( n = 40) to recognize extremely distorted needle-like facial images, challenging the well-entrenched notion that veridical spatial configuration is necessary for extracting facial identity. In face identification tasks of parametrically compressed internal and external features, we found that the sum of performances on each cue falls significantly short of performance on full faces, despite the equal visual information available from both measures (with full faces essentially being a superposition of internal and external features). We hypothesize that this large deficit stems from the use of positional information about how the internal features are positioned relative to the external features. To test this, we systematically changed the relations between internal and external features and found preferential encoding of vertical but not horizontal spatial relationships in facial representations ( n = 20). Finally, we employ magnetoencephalography imaging ( n = 20) to demonstrate a close mapping between the behavioral psychometric curve and the amplitude of the M250 face familiarity, but not M170 face-sensitive evoked response field component, providing evidence that the M250 can be modulated by faces that are perceptually identifiable, irrespective of extreme distortions to the face's veridical configuration. We theorize that the tolerance to compressive distortions has evolved from the need to recognize faces across varying viewpoints. Our findings help clarify the important, but poorly defined, concept of facial configuration and also enable an association between behavioral performance and previously reported neural correlates of face perception.
4
Bülthoff I, Mohler BJ, Thornton IM. Face recognition of full-bodied avatars by active observers in a virtual environment. Vision Res 2018; 157:242-251. [PMID: 29274811] [DOI: 10.1016/j.visres.2017.12.001]
Abstract
Viewing faces in motion or attached to a body, instead of as isolated static faces, improves their subsequent recognition. Here we enhanced the ecological validity of face encoding by having observers physically move in a virtual room populated by life-size avatars. We compared the recognition performance of this active group to two control groups. The first control group watched a passive reenactment of the visual experience of the active group. The second control group saw static screenshots of the avatars. All groups performed the same old/new recognition task after learning. Half of the learned faces were shown at test in an orientation close to that experienced during learning, while the others were viewed from a new viewing angle. All observers found novel views more difficult to recognize than familiar ones. Overall, the active group performed better than both other groups. Furthermore, the group learning faces from static images was the only one to be at chance level in the novel-view condition. These findings suggest that active exploration combined with a dynamic experience of the to-be-learned faces allows for more robust face recognition, and they point to the value of such techniques for integrating facial visual information and enhancing recognition from novel viewpoints.
Affiliation(s)
- Isabelle Bülthoff
- Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Germany.
- Betty J Mohler
- Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Germany
- Ian M Thornton
- Department of Cognitive Science, University of Malta, Malta
5
Favelle S, Hill H, Claes P. About Face: Matching Unfamiliar Faces Across Rotations of View and Lighting. Iperception 2017; 8:2041669517744221. [PMID: 29225768] [PMCID: PMC5714100] [DOI: 10.1177/2041669517744221]
Abstract
Matching the identities of unfamiliar faces is heavily influenced by variations in their images. Changes to viewpoint and lighting direction during face perception are commonplace across yaw and pitch axes and can result in dramatic image differences. We report two experiments that, for the first time, factorially investigate the combined effects of lighting and view angle on matching performance for unfamiliar faces. The use of three-dimensional head models allowed control of both lighting and viewpoint. We found viewpoint effects in the yaw axis with little to no effect of lighting. However, for rotations about the pitch axis, there were both viewpoint and lighting effects, and these interacted such that lighting effects were found only for front views and views from below. The pattern of effects was similar regardless of whether view variation occurred as a result of head rotation (Experiment 1) or camera rotation (Experiment 2), suggesting that face matching is not purely image based. Along with face inversion effects in Experiment 1, the results of this study suggest that face perception is based on shape and surface information and draws on implicit knowledge of upright faces and ecological (top) lighting conditions.
Affiliation(s)
- Simone Favelle
- School of Psychology, University of Wollongong, Wollongong, New South Wales, Australia
- Harold Hill
- School of Psychology, University of Wollongong, Wollongong, New South Wales, Australia
- Peter Claes
- ESAT/PSI, Department of Electrical Engineering, KU Leuven, Belgium; Medical Imaging Research Center, UZ Leuven, Belgium
6
Vakli P, Németh K, Zimmer M, Kovács G. The face evoked steady-state visual potentials are sensitive to the orientation, viewpoint, expression and configuration of the stimuli. Int J Psychophysiol 2014; 94:336-50. [DOI: 10.1016/j.ijpsycho.2014.10.008]
7
Favelle SK, Palmisano S. The face inversion effect following pitch and yaw rotations: investigating the boundaries of holistic processing. Front Psychol 2012; 3:563. [PMID: 23267337] [PMCID: PMC3525703] [DOI: 10.3389/fpsyg.2012.00563]
Abstract
Upright faces are thought to be processed holistically. However, the range of views within which holistic processing occurs is unknown. Recent research by McKone (2008) suggests that holistic processing occurs for all yaw-rotated face views (i.e., full-face through to profile). Here we examined whether holistic processing occurs for pitch, as well as yaw, rotated face views. In this face recognition experiment: (i) participants made same/different judgments about two sequentially presented faces (either both upright or both inverted); (ii) the test face was pitch/yaw rotated by between 0° and 75° from the encoding face (always a full-face view). Our logic was as follows: if a particular pitch/yaw-rotated face view is being processed holistically when upright, then this processing should be disrupted by inversion. Consistent with previous research, significant face inversion effects (FIEs) were found for all yaw-rotated views. However, while FIEs were found for pitch rotations up to 45°, none were observed for 75° pitch rotations (rotated either above or below the full face). We conclude that holistic processing does not occur for all views of upright faces (e.g., not for uncommon pitch rotated views), only those that can be matched to a generic global representation of a face.
Affiliation(s)
- Simone K Favelle
- School of Psychology, University of Wollongong, Wollongong, NSW, Australia
8
Parr LA, Siebert E, Taubert J. Effect of familiarity and viewpoint on face recognition in chimpanzees. Perception 2012; 40:863-72. [PMID: 22128558] [DOI: 10.1068/p6971]
Abstract
Numerous studies have shown that familiarity strongly influences how well humans recognize faces. This is particularly true when faces are encountered across a change in viewpoint. In this situation, recognition may be accomplished by matching partial or incomplete information about a face to a stored representation of the known individual, whereas such representations are not available for unknown faces. Chimpanzees, our closest living relatives, share many of the same behavioral specializations for face processing as humans, but the influences of familiarity and viewpoint have never been compared in the same study. Here, we examined the ability of chimpanzees to match the faces of familiar and unfamiliar conspecifics in their frontal and 3/4 views using a computerized task. Results showed that, while chimpanzees were able to accurately match both familiar and unfamiliar faces in their frontal orientations, performance was significantly impaired only when unfamiliar faces were presented across a change in viewpoint. Therefore, as in humans, face processing in chimpanzees appears to be sensitive to individual familiarity. We propose that familiarization is a robust mechanism for strengthening the representation of faces and has been conserved in primates to achieve efficient individual recognition over a range of natural viewing conditions.
Affiliation(s)
- Lisa A Parr
- Department of Psychiatry and Behavioral Sciences, Center for Translational Social Neuroscience, Emory University, Atlanta, GA 30322, USA.
9
Stollhoff R, Kennerknecht I, Elze T, Jost J. A computational model of dysfunctional facial encoding in congenital prosopagnosia. Neural Netw 2011; 24:652-64. [DOI: 10.1016/j.neunet.2011.03.006]
10
Hill H, Claes P, Corcoran M, Walters M, Johnston A, Clement JG. How Different is Different? Criterion and Sensitivity in Face-Space. Front Psychol 2011; 2:41. [PMID: 21738516] [PMCID: PMC3125532] [DOI: 10.3389/fpsyg.2011.00041]
Abstract
Not all detectable differences between face images correspond to a change in identity. Here we measure both sensitivity to change and the criterion difference that is perceived as a change in identity. Both measures are used to test between possible similarity metrics. Using a same/different task and the method of constant stimuli, criterion is specified as the 50% "different" point (P50) and sensitivity as the difference limen (DL). Stimuli and differences are defined within a "face-space" based on principal components analysis of measured differences in three-dimensional shape. In Experiment 1 we varied the views available. Criterion (P50) was lowest for identical full-face view comparisons that can be based on image differences. When comparing across views, P50 was the same for a static 45° change as for multiple animated views, although sensitivity (DL) was higher for the animated case, where it was as high as for identical views. Experiments 2 and 3 tested possible similarity metrics. Experiment 2 contrasted Euclidean and Mahalanobis distance by setting PC1 or PC2 to zero. DL did not differ between conditions, consistent with Mahalanobis distance. P50 was lower when PC2 changed, emphasizing that perceived changes in identity are not determined by the magnitude of Euclidean physical differences. Experiment 3 contrasted a distance-based with an angle-based similarity measure. We varied the distinctiveness of the faces being compared by varying distance from the origin, a manipulation that affects distances but not angles between faces. Angular P50 and DL were both constant for faces from 1 to 2 SD from the mean, consistent with an angular measure. We conclude that both criterion and sensitivity need to be considered, and that an angular similarity metric based on standardized PC values provides the best metric for specifying which physical differences will be perceived as a change in identity.
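The three similarity metrics contrasted in this abstract (Euclidean distance, Mahalanobis distance, and an angular measure on standardized PC values) can be sketched as follows. The PC coordinates and per-component standard deviations below are invented purely for illustration; they are not the study's face-space data.

```python
import numpy as np

# Two faces as coordinates in a PCA "face-space"; sd holds the standard
# deviation of each principal component across the face population.
# All values here are invented for illustration.
face_a = np.array([1.2, -0.4, 0.7])
face_b = np.array([0.8,  0.3, 0.5])
sd     = np.array([2.0,  1.0, 0.25])   # PC1 varies most, PC3 least

# Euclidean distance: weights all PCs equally, regardless of their variance.
euclidean = np.linalg.norm(face_a - face_b)

# Mahalanobis distance (diagonal covariance): standardizes each PC first,
# so the same raw difference counts for more on a low-variance PC.
mahalanobis = np.linalg.norm((face_a - face_b) / sd)

# Angular measure: the angle between standardized face vectors. Scaling a
# face away from the mean face (the origin) changes its distances to other
# faces but leaves this angle unchanged, i.e. it ignores distinctiveness.
za, zb = face_a / sd, face_b / sd
cos_sim = za @ zb / (np.linalg.norm(za) * np.linalg.norm(zb))
angle = np.degrees(np.arccos(np.clip(cos_sim, -1.0, 1.0)))
```

The abstract's Experiment 3 manipulation corresponds to multiplying a face vector by a scalar: that changes `euclidean` and `mahalanobis` but not `angle`, which is why constant angular P50 and DL across distinctiveness levels favor the angular metric.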
Affiliation(s)
- Harold Hill
- School of Psychology, University of Wollongong, Wollongong, NSW, Australia
11
Favelle SK, Palmisano S, Avery G. Face Viewpoint Effects about Three Axes: The Role of Configural and Featural Processing. Perception 2011; 40:761-84. [DOI: 10.1068/p6878]
Abstract
We directly compared recognition for faces following 0° – 75° viewpoint rotation about the yaw, pitch, and roll axes. The aim was to determine the extent to which configural and featural information supported face recognition following rotations about each of these axes. Experiment 1 showed that performance on a sequential-matching task was viewpoint-dependent for all three types of rotation. The best face-recognition accuracy and shortest reaction times were found for roll rotations, then for yaw rotations, while the worst accuracy and slowest reaction times were found for pitch rotations. Directional differences in recognition were found for pitch rotations, but not for roll or yaw. Experiment 2 provided evidence that, in all three cases, viewpoint-dependent declines in recognition were primarily driven by the loss of configural information. However, it also appeared that significant featural information was lost following yaw and pitch (but not roll) rotations. Together, these findings show that unfamiliar-face recognition is viewpoint-dependent following rotation about each axis (and in each direction), and that performance is based on the availability of configural and, to a lesser extent, featural information.
Affiliation(s)
- Simone K Favelle
- School of Psychology, University of Wollongong, Northfields Avenue, Wollongong 2522, NSW, Australia
- Stephen Palmisano
- School of Psychology, University of Wollongong, Northfields Avenue, Wollongong 2522, NSW, Australia
- Georgina Avery
- School of Psychology, University of Wollongong, Northfields Avenue, Wollongong 2522, NSW, Australia
12
A combinatorial study of pose effects in unfamiliar face recognition. Vision Res 2010; 50:522-33. [DOI: 10.1016/j.visres.2009.12.012]