1
Ventura P, Pascual M, Cruz F, Araújo S. From Perugino to Picasso revisited: Electrophysiological responses to faces in paintings from different art styles. Neuropsychologia 2024;193:108742. PMID: 38056623. DOI: 10.1016/j.neuropsychologia.2023.108742.
Abstract
Behavioral research (Ventura et al., 2023) suggested that pictorial representations of faces varying along a realism-distortion spectrum elicit holistic processing, as natural faces do. Whether holistic neural responses to faces are engaged similarly, however, remains underexplored. In the present study, we evaluated the neural correlates of naturalistic and artistic face processing by exploring electrophysiological responses to faces in photographs versus faces in four major painting styles. The N170 response to faces in photographs was indistinguishable from that elicited by faces in the Renaissance style (the most realistic faces), while both categories elicited a larger N170 than faces in the other art styles (post-impressionism, expressionism, and cubism), with a graded pattern of brain activity. The present evidence suggests that visual processing may become finer grained the more realistic the face. Despite behavioral equivalence, the neural mechanisms for holistic processing of natural faces and of faces in diverse art styles are not equivalent.
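The N170 comparisons above rest on quantifying component amplitude; a common approach is the mean amplitude within a latency window. A minimal sketch in Python (the window bounds and sample values are hypothetical; the paper's exact analysis parameters are not given here):

```python
def mean_amplitude(erp, times, t_start, t_end):
    """Mean ERP amplitude (e.g. in microvolts) within a latency window,
    a common way to quantify components such as the N170 (often taken
    roughly 130-200 ms over occipitotemporal sites; the window here is
    illustrative). `erp` and `times` are parallel lists of amplitude
    and time (in seconds)."""
    window = [a for a, t in zip(erp, times) if t_start <= t <= t_end]
    if not window:
        raise ValueError("no samples fall inside the window")
    return sum(window) / len(window)

# Toy epoch sampled at 4 points; the chosen window covers the last two
# samples, so the mean amplitude is (-6.0 + -4.0) / 2 = -5.0.
amp = mean_amplitude([1.0, -2.0, -6.0, -4.0],
                     [0.10, 0.12, 0.15, 0.18], 0.13, 0.20)
```
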
Affiliation(s)
- Paulo Ventura
- Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal.
- Mariona Pascual
- Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
- Francisco Cruz
- Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
- Susana Araújo
- Faculdade de Psicologia, Universidade de Lisboa, Alameda da Universidade, 1649-013, Lisboa, Portugal
2
González-Alvarez J, Sos-Peña R. The role of facial skin tone and texture in the perception of age. Vision Res 2023;213:108319. PMID: 37782999. DOI: 10.1016/j.visres.2023.108319.
Abstract
Age and gender perception from faces, without any cultural or conventional cues, is primarily based on two independent components: (a) shape or facial structure, and (b) surface reflectance (skin tone and texture, STT). This study examined the relative contribution of facial STT to the perception of age. A total of 204 subjects participated in four experiments presenting artificial, realistic 3D faces of different age versions under two key conditions: with and without STT. Two experiments involved an age-discrimination task, and the other two a direct age-estimation task. The faces for the last experiment were generated from photographs of real people. The results were quite consistent across experiments. The data suggest that STT information contributes roughly 25-33% of the accuracy of age perception. Interestingly, a differential pattern emerges in relation to facial age: the relative contribution of skin information increases sharply with advancing age, to the point that age judgments of the oldest faces (60 years old) without STT information fall to chance level. This pattern suggests that facial skin tone and texture are the main sources of information for estimating the age of people past maturity, as they are the principal visual signs of aging beyond the anatomical changes of facial structure.
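The 25-33% figure describes how much accuracy is lost when STT is removed, relative to accuracy with the full face. A toy calculation of that relative contribution (the accuracy values below are hypothetical, chosen only to land inside the reported range):

```python
def stt_contribution(acc_full, acc_shape_only):
    """Fraction of age-perception accuracy attributable to skin tone
    and texture (STT), estimated as the accuracy lost when STT is
    removed, relative to accuracy with the full face. Accuracies are
    proportions in (0, 1]."""
    if not (0 < acc_full <= 1 and 0 <= acc_shape_only <= acc_full):
        raise ValueError("expected 0 <= shape-only <= full <= 1")
    return (acc_full - acc_shape_only) / acc_full

# Hypothetical values: 90% accuracy with STT, 63% with shape alone,
# giving an STT contribution of 0.30 -- inside the reported 25-33%.
print(round(stt_contribution(0.90, 0.63), 2))
```
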
Affiliation(s)
- Julio González-Alvarez
- Department of Basic and Clinical Psychology and Psychobiology, University Jaume I, Castellón, Spain.
- Rosa Sos-Peña
- Department of Basic and Clinical Psychology and Psychobiology, University Jaume I, Castellón, Spain
3
Huang Y, Fang L, Hu S. TED-Face: Texture-Enhanced Deep Face Reconstruction in the Wild. Sensors (Basel) 2023;23:6525. PMID: 37514819. PMCID: PMC10385218. DOI: 10.3390/s23146525.
Abstract
We present TED-Face, a new method for recovering high-fidelity 3D facial geometry and appearance with enhanced textures from single-view images. While vision-based face reconstruction has received intensive research in the past decades due to its broad applications, it remains a challenging problem because human eyes are particularly sensitive to numerically minute yet perceptually significant details. Previous methods that seek to minimize reconstruction errors within a low-dimensional face space can suffer from this issue and generate close yet low-fidelity approximations. The loss of high-frequency texture details is a key factor in their process, which we propose to address by learning to recover both dense radiance residuals and sparse facial texture features from a single image, in addition to the variables solved for by previous work: shape, appearance, illumination, and camera parameters. We integrate the estimation of all these factors in a single unified deep neural network and train it on several popular face reconstruction datasets. We also introduce two new metrics, visual fidelity (VIF) and structural similarity (SSIM), to compensate for the fact that reconstruction error is not a consistent perceptual metric of quality. On the popular FaceWarehouse facial reconstruction benchmark, our proposed system achieves a VIF score of 0.4802 and an SSIM score of 0.9622, improving over the state-of-the-art Deep3D method by 6.69% and 0.86%, respectively. On the widely used LS3D-300W dataset, we obtain a VIF score of 0.3922 and an SSIM score of 0.9079 for indoor images, and scores of 0.4100 and 0.9160 for outdoor images, which also represent an improvement over those of Deep3D. These results show that our method is able to recover visually more realistic facial appearance details than previous methods.
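SSIM, one of the two reported metrics, has a closed-form definition. A minimal single-window sketch of the standard formula (real evaluations, presumably including the paper's, use a sliding window and library implementations; the constants K1 = 0.01 and K2 = 0.03 follow the common convention):

```python
def ssim_global(x, y, dynamic_range=255.0):
    """Single-window (global) SSIM between two equal-length grayscale
    image vectors, following the standard formula with K1 = 0.01 and
    K2 = 0.03. Production code applies this over a sliding window and
    averages; this global variant only illustrates the formula."""
    n = len(x)
    assert n == len(y) and n > 1
    mx = sum(x) / n
    my = sum(y) / n
    vx = sum((a - mx) ** 2 for a in x) / (n - 1)
    vy = sum((b - my) ** 2 for b in y) / (n - 1)
    cov = sum((a - mx) * (b - my) for a, b in zip(x, y)) / (n - 1)
    c1 = (0.01 * dynamic_range) ** 2  # stabilizes the luminance term
    c2 = (0.03 * dynamic_range) ** 2  # stabilizes the contrast term
    return ((2 * mx * my + c1) * (2 * cov + c2)) / \
           ((mx * mx + my * my + c1) * (vx + vy + c2))

# Identical images score exactly 1.0; dissimilar images score lower.
print(ssim_global([10, 50, 90], [10, 50, 90]))  # 1.0
```
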
Affiliation(s)
- Ying Huang
- Institute of Virtual Reality and Intelligent Systems, Hangzhou Normal University, Hangzhou 311121, China
- Lin Fang
- Institute of Virtual Reality and Intelligent Systems, Hangzhou Normal University, Hangzhou 311121, China
- Shanfeng Hu
- Department of Computer and Information Sciences, Northumbria University, Newcastle-upon-Tyne NE1 8ST, UK
4
González-Álvarez J, Sos-Peña R. Sex perception from facial structure: Categorization with and without skin texture and color. Vision Res 2022;201:108127. PMID: 36194981. DOI: 10.1016/j.visres.2022.108127.
Abstract
Sex identification of faces without any cultural or conventional sex cues is primarily based on two independent components: (a) shape or facial structure, and (b) surface reflectance (skin texture and color). The present work studied the relative contribution of each component by means of two experiments based on 3D face models created with different degrees of masculinity-femininity along a sex continuum. The first experiment used totally artificial faces created ex novo by computer; the second employed face models created from photographs of real people. The results of both experiments were consistent. As expected, when both components were present in a face, sex was correctly classified in almost all cases. More interestingly, the contribution of the "pure" facial structure (with no surface reflectance) to sex perception was about 80%, whereas 20% of the total information was provided by surface reflectance. Furthermore, examination of the psychometric curves suggests that the information provided by surface reflectance contributes to a categorical perception of facial sex: when it is removed, sex is perceived in a more continuous, less categorical way. Our stimuli also showed a certain "male" bias, repeatedly found in the literature on facial sex perception.
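The steeper-versus-shallower psychometric curves described above can be captured by a logistic function, where a larger slope means a more categorical transition at the category boundary. A sketch (the slope values are hypothetical, chosen only to contrast categorical and continuous perception; the paper's fitted parameters are not reproduced here):

```python
import math

def psychometric(x, x0, slope):
    """Logistic psychometric function: probability of a 'male' response
    as a function of position x on a masculinity-femininity continuum.
    x0 is the category boundary; a larger slope means a steeper
    transition, i.e. more categorical perception."""
    return 1.0 / (1.0 + math.exp(-slope * (x - x0)))

# Hypothetical contrast: with surface reflectance (steep curve) vs.
# shape-only (shallow curve), evaluated at three continuum positions.
with_stt = [psychometric(x, 0.5, 20.0) for x in (0.3, 0.5, 0.7)]
shape_only = [psychometric(x, 0.5, 5.0) for x in (0.3, 0.5, 0.7)]
```

At the boundary both curves give 0.5; away from it, the steep curve saturates toward 0 or 1 much faster, which is the signature of categorical perception.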
Affiliation(s)
- Julio González-Álvarez
- Department of Basic and Clinical Psychology and Psychobiology, University Jaume I, Castellón, Spain.
- Rosa Sos-Peña
- Department of Basic and Clinical Psychology and Psychobiology, University Jaume I, Castellón, Spain
5
Hine K, Okubo H. Overestimation of eye size: People see themselves with bigger eyes in a holistic approach. Acta Psychol (Amst) 2021;220:103419. PMID: 34543806. DOI: 10.1016/j.actpsy.2021.103419.
Abstract
A face contains crucial information for identification, and face recognition is superior to other types of recognition. Notably, one's own face is recognized better than other familiar faces. However, it is unclear whether one's own face, especially one's own internal facial features, is represented more accurately than other faces. Here, we investigated how one's own internal facial features are represented. We conducted a psychological experiment in which participants were required to adjust eye size to its real size in photos of their own face or of well-known celebrities' faces. To investigate why representations of one's own and celebrities' faces differ, two types of photos were prepared: with and without external features. The accuracy of eye-size adjustment for one's own face was better than that for celebrities' faces in the condition without external features, in which holistic processing is less involved than in the condition with external features. This implies that the eye size of one's own face is represented more accurately than that of other familiar faces when external features are removed. Moreover, accuracy for one's own face in the condition with external features was worse than in the condition without them: the adjusted eye size was larger when external features were present. In contrast, for celebrities' faces, there was no significant difference between the two conditions. Adjusted eye sizes in all conditions were overestimated relative to real eye sizes. Previous research indicated that eyes are adjusted to a larger size when a face is evaluated as more attractive, an evaluation related to holistic processing. From this perspective, one's own face may have been represented as more attractive in the condition with external features in the current study. Taken together, the results indicate that the representation of one's own eye size, an internal facial feature, is affected by the visibility of external features.
Affiliation(s)
- Kyoko Hine
- Toyohashi University of Technology, Toyohashi, Aichi, Japan.
- Hikaru Okubo
- Department of Information Environment, Tokyo Denki University, Adachi-ku, Tokyo, Japan
6
Abstract
The composite face effect—the failure of selective attention toward a target face half—is frequently used to study mechanisms of feature integration in faces. Here we studied how this effect depends on the perceptual fit between attended and unattended halves. We used composite faces that were rated by trained observers as either a seamless fit (i.e., close to a natural, homogeneous face) or a deliberately bad fit (i.e., unnatural, strongly segregated face halves). In addition, composites created by combining face halves randomly were tested. The composite face effect was measured as the alignment × congruency interaction (Gauthier & Bukach, Cognition, 103, 322–330, 2007), but also with alternative data-analysis procedures (Rossion & Boremanse, Journal of Vision, 8, 1–13, 2008). We found strong but identical composite effects in all fit conditions: seamless fit did not increase the composite face effect, nor did bad or random fit attenuate it. The implications for a Gestalt account of holistic face processing are discussed.
7
Bayet L, Saville A, Balas B. Sensitivity to face animacy and inversion in childhood: Evidence from EEG data. Neuropsychologia 2021;156:107838. PMID: 33775702. DOI: 10.1016/j.neuropsychologia.2021.107838.
Abstract
Adults exhibit relative behavioral difficulties in processing inanimate, artificial faces compared with real human faces, with implications for using artificial faces in research and for designing artificial social agents. However, the developmental trajectory of inanimate face perception is unknown. To address this gap, we used electroencephalography to investigate inanimate face processing in cross-sectional groups of 5-10-year-old children and adults. A face-inversion manipulation was used to test whether face animacy processing relies on expert face processing strategies. Groups of 5-7-year-olds (N = 18), 8-10-year-olds (N = 18), and adults (N = 16) watched pictures of real or doll faces presented in an upright or inverted orientation. Analyses of event-related potentials revealed larger N170 amplitudes in response to doll faces, irrespective of age group or face orientation. Thus, the N170 is sensitive to face animacy by 5-7 years of age, but such sensitivity may not reflect high-level, expert face processing. Multivariate pattern analyses of the EEG signal additionally assessed whether animacy information could be reliably extracted during face processing. Face orientation, but not face animacy, could be reliably decoded from occipitotemporal channels in children and adults. Face animacy could be decoded from whole-scalp channels in adults, but not children. Together, these results suggest that 5-10-year-old children exhibit some sensitivity to face animacy over occipitotemporal regions that is comparable to adults.
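The decoding analyses above ask whether condition labels can be predicted from multichannel EEG patterns. A minimal stand-in using leave-one-out nearest-centroid classification (the study's actual MVPA pipeline is not specified here and is surely richer):

```python
def nearest_centroid_loo(trials, labels):
    """Leave-one-out decoding accuracy with a nearest-centroid
    classifier: each held-out trial is assigned the label of the
    closest class mean computed from the remaining trials.
    `trials` is a list of feature vectors (e.g. channel amplitudes),
    `labels` the condition of each trial."""
    def sqdist(a, b):
        return sum((x - y) ** 2 for x, y in zip(a, b))

    correct = 0
    for i, (trial, label) in enumerate(zip(trials, labels)):
        centroids = {}
        for lab in set(labels):
            rest = [t for j, (t, l) in enumerate(zip(trials, labels))
                    if j != i and l == lab]
            centroids[lab] = [sum(col) / len(rest) for col in zip(*rest)]
        pred = min(centroids, key=lambda lab: sqdist(trial, centroids[lab]))
        correct += pred == label
    return correct / len(trials)

# Toy example: two well-separated conditions decode perfectly.
acc = nearest_centroid_loo(
    [[1.0, 0.1], [1.1, 0.0], [0.0, 1.0], [0.1, 1.1]],
    ["upright", "upright", "inverted", "inverted"])
```
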
Affiliation(s)
- Laurie Bayet
- Department of Psychology and Center for Neuroscience and Behavior, American University, Washington, DC, USA.
- Alyson Saville
- Department of Psychology, North Dakota State University, Fargo, ND, USA
- Benjamin Balas
- Department of Psychology, North Dakota State University, Fargo, ND, USA
8
Yang YF, Brunet-Gouet E, Burca M, Kalunga EK, Amorim MA. Brain Processes While Struggling With Evidence Accumulation During Facial Emotion Recognition: An ERP Study. Front Hum Neurosci 2020;14:340. PMID: 33100986. PMCID: PMC7497730. DOI: 10.3389/fnhum.2020.00340.
Abstract
The human brain is tuned to recognize emotional facial expressions on faces in their natural upright orientation, yet the relative contributions of featural, configural, and holistic processing to decision-making are as yet poorly understood. This study used a diffusion decision model (DDM) of decision-making to investigate the contribution of early face-sensitive processes to emotion recognition from physiognomic features (the eyes, nose, and mouth), by determining how experimental conditions tapping those processes affect early face-sensitive neuroelectric components (P100, N170, and P250) and the parameters governing evidence accumulation at the behavioral level. We first examined the effects of stimulus orientation (upright vs. inverted) and stimulus type (photographs vs. sketches) on behavior and on the neuroelectric components (amplitude and latency). We then explored the sources of variance common to the experimental effects on event-related potentials (ERPs) and the DDM parameters. Several results suggest that the N170 indexes core visual processing for emotion recognition decision-making: (a) the additive effect of stimulus inversion and impoverishment on N170 latency; and (b) multivariate analysis suggesting that N170 neuroelectric activity must increase to counteract the detrimental effects of face inversion on drift rate and of stimulus impoverishment on the stimulus-encoding component of non-decision time. Overall, our results show that emotion recognition is still possible even with degraded stimulation, but at a neurocognitive cost, reflecting the extent to which the brain struggles to accumulate sensory evidence of a given emotion.
Accordingly, we theorize that: (a) the P100 neural generator would provide a holistic frame of reference for the face percept through categorical encoding; (b) the N170 neural generator would maintain the structural cohesiveness of the subtle configural variations in facial expressions across our experimental manipulations through coordinate encoding of the facial features; and (c) building on this configural processing, the neurons generating the P250 would be responsible for a normalization process that adapts to the facial features, matching the stimulus to internal representations of emotional expressions.
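The DDM described above treats a decision as noisy evidence accumulation toward a threshold, with drift rate indexing evidence quality and non-decision time covering encoding and motor stages. A toy simulation (all parameter values are illustrative, not fitted to the study's data):

```python
import random

def ddm_trial(drift, threshold=1.0, dt=0.001, noise=1.0, t_nd=0.3,
              rng=random.Random(1)):
    """Simulate one diffusion-decision-model trial: evidence drifts
    toward the +/- threshold with Gaussian noise; t_nd is the
    non-decision time (encoding + motor). Returns (choice, rt) where
    choice is +1 for the upper boundary and -1 for the lower one."""
    x, t = 0.0, 0.0
    while abs(x) < threshold:
        x += drift * dt + noise * (dt ** 0.5) * rng.gauss(0, 1)
        t += dt
    return (1 if x > 0 else -1, t + t_nd)

# With a positive drift rate, most trials terminate at the upper
# (correct) boundary; lower drift (e.g. inverted or impoverished
# faces) yields slower, more error-prone decisions on average.
choices = [ddm_trial(drift=2.0)[0] for _ in range(50)]
```
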
Affiliation(s)
- Yu-Fang Yang
- CIAMS, Université Paris-Saclay, Orsay, France; CIAMS, Université d'Orléans, Orléans, France
- Eric Brunet-Gouet
- Centre Hospitalier de Versailles, Hôpital Mignot, Le Chesnay, France; CESP, DevPsy, Université Paris-Saclay, UVSQ, Inserm, Villejuif, France
- Mariana Burca
- Centre Hospitalier de Versailles, Hôpital Mignot, Le Chesnay, France; CESP, DevPsy, Université Paris-Saclay, UVSQ, Inserm, Villejuif, France
- Michel-Ange Amorim
- CIAMS, Université Paris-Saclay, Orsay, France; CIAMS, Université d'Orléans, Orléans, France
9
Van Meel C, Op de Beeck HP. Temporal Contiguity Training Influences Behavioral and Neural Measures of Viewpoint Tolerance. Front Hum Neurosci 2018;12:13. PMID: 29441006. PMCID: PMC5797614. DOI: 10.3389/fnhum.2018.00013.
Abstract
Humans can often recognize faces across viewpoints despite the large changes in low-level image properties a shift in viewpoint introduces. We present a behavioral and an fMRI adaptation experiment to investigate whether this viewpoint tolerance is reflected in the neural visual system and whether it can be manipulated through training. Participants saw training sequences of face images creating the appearance of a rotating head. Half of the sequences showed faces undergoing veridical changes in appearance across the rotation (non-morph condition). The other half were non-veridical: during rotation, the face simultaneously morphed into another face. This procedure should successfully associate frontal face views with side views of the same or a different identity, and, according to the temporal contiguity hypothesis, thus enhance viewpoint tolerance in the non-morph condition and/or break tolerance in the morph condition. Performance on the same/different task in the behavioral experiment (N = 20) was affected by training. There was a significant interaction between training (associated/not associated) and identity (same/different), mostly reflecting a higher confusion of different identities when they were associated during training. In the fMRI study (N = 20), fMRI adaptation effects were found for same-viewpoint images of untrained faces, but no adaptation for untrained faces was present across viewpoints. Only trained faces which were not morphed during training elicited a slight adaptation across viewpoints in face-selective regions. However, both in the behavioral and in the neural data the effects were small and weak from a statistical point of view. Overall, we conclude that the findings are not inconsistent with the proposal that temporal contiguity can influence viewpoint tolerance, with more evidence for tolerance when faces are not morphed during training.
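fMRI adaptation, the dependent measure above, is typically quantified as the response reduction for repeated relative to novel stimuli. A sketch of one common index (the formula is a widespread convention, not necessarily the one used in this study):

```python
def adaptation_index(beta_novel, beta_repeated):
    """fMRI adaptation (repetition suppression) index: proportional
    response reduction for repeated vs. novel stimuli, computed from
    condition-wise response estimates (e.g. GLM betas). Positive
    values indicate adaptation; values near zero indicate none."""
    return (beta_novel - beta_repeated) / beta_novel

# Hypothetical betas: a 25% response reduction for repeated stimuli.
print(adaptation_index(2.0, 1.5))  # 0.25
```

Applied per condition, the index would be larger for same-viewpoint repetitions than for cross-viewpoint ones when viewpoint tolerance is weak, which is the pattern the study reports for untrained faces.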
Affiliation(s)
- Chayenne Van Meel
- Laboratory of Biological Psychology, Brain and Cognition, KU Leuven, Leuven, Belgium
10
Bowling NC, Banissy MJ. Emotion expression modulates perception of animacy from faces. Journal of Experimental Social Psychology 2017. DOI: 10.1016/j.jesp.2017.02.004.
11
Meinhardt G, Meinhardt-Injac B, Persike M. The complete design in the composite face paradigm: role of response bias, target certainty, and feedback. Front Hum Neurosci 2014;8:885. PMID: 25400573. PMCID: PMC4215786. DOI: 10.3389/fnhum.2014.00885.
Abstract
Some years ago an improved design (the “complete design”) was proposed to assess the composite face effect in terms of a congruency effect, defined as the performance difference between congruent and incongruent target-to-no-target relationships (Cheung et al., 2008). In a recent paper, Rossion (2013) questioned whether the congruency effect is a valid hallmark of perceptual integration, because it may be confounded with face-unspecific interference effects. Here we argue that the complete design is well balanced and allows one to separate face-specific from face-unspecific effects. We used the complete design for a same/different composite-stimulus matching task with face and non-face objects (watches). Subjects performed the task with and without trial-by-trial feedback, and with low and high certainty about the target half. Results showed large congruency effects for faces, particularly when subjects were informed late in the trial about which face halves had to be matched. Analysis of response bias revealed that subjects preferred the “different” response in incongruent trials, which is expected when upper and lower face halves are integrated perceptually at the encoding stage. This pattern was observed in the absence of feedback, while providing feedback generally attenuated the congruency effect and led to an avoidance of response bias. For watches, no or only marginal congruency effects and a moderate global “same” bias were observed. We conclude that the congruency effect, when complemented by an evaluation of response bias, is a valid hallmark of feature integration that allows one to separate faces from non-face objects.
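The congruency effect and the alignment × congruency interaction of the complete design reduce to simple accuracy differences. A sketch (the accuracy values below are hypothetical):

```python
def congruency_effect(acc):
    """Congruency effects in the complete composite-face design:
    the congruent-minus-incongruent performance difference, computed
    separately for aligned and misaligned composites. The alignment x
    congruency interaction (aligned effect minus misaligned effect)
    is the hallmark of holistic processing. `acc` maps
    (alignment, congruency) tuples to accuracy proportions."""
    aligned = (acc[("aligned", "congruent")]
               - acc[("aligned", "incongruent")])
    misaligned = (acc[("misaligned", "congruent")]
                  - acc[("misaligned", "incongruent")])
    return {"aligned": aligned, "misaligned": misaligned,
            "interaction": aligned - misaligned}

# Hypothetical cell accuracies: a large congruency effect for aligned
# composites, a small one for misaligned composites.
effect = congruency_effect({
    ("aligned", "congruent"): 0.92, ("aligned", "incongruent"): 0.74,
    ("misaligned", "congruent"): 0.90, ("misaligned", "incongruent"): 0.87,
})
```
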
Affiliation(s)
- Günter Meinhardt
- Department of Psychology, Johannes Gutenberg University Mainz, Mainz, Germany
- Malte Persike
- Department of Psychology, Johannes Gutenberg University Mainz, Mainz, Germany