1. Butcher N, Bennetts RJ, Sexton L, Barbanta A, Lander K. Eye movement differences when recognising and learning moving and static faces. Q J Exp Psychol (Hove) 2024:17470218241252145. [PMID: 38644390] [DOI: 10.1177/17470218241252145]
Abstract
Seeing a face in motion can help subsequent face recognition. Several explanations have been proposed for this "motion advantage," but other factors that might play a role have received less attention. For example, facial movement might enhance recognition by attracting attention to the internal facial features, thereby facilitating identification. However, there is no direct evidence that motion increases attention to the face regions that facilitate identification (i.e., the internal features) relative to static faces. We tested this hypothesis by recording participants' eye movements while they completed famous-face recognition (Experiment 1, N = 32) and face-learning (Experiment 2, N = 60; Experiment 3, N = 68) tasks, with presentation style manipulated (moving or static). Across all three experiments, a motion advantage was found, and participants directed a higher proportion of fixations to the internal features (i.e., eyes, nose, and mouth) of moving faces than of static faces. Conversely, the proportion of fixations to the internal non-feature area (i.e., cheeks, forehead, chin) and the external area (Experiment 3) was significantly reduced for moving compared with static faces (all ps < .05). The results suggest that during both familiar and unfamiliar face recognition, facial motion is associated with increased attention to the internal facial features, but only during familiar face recognition is the magnitude of the motion advantage significantly functionally related to the proportion of fixations directed to the internal features.
Affiliation(s)
- Natalie Butcher
- Department of Psychology, Teesside University, Middlesbrough, UK
- Laura Sexton
- Department of Psychology, Teesside University, Middlesbrough, UK
- School of Psychology, Faculty of Health Sciences and Wellbeing, University of Sunderland, Sunderland, UK
- Karen Lander
- Division of Psychology, Communication and Human Neuroscience, University of Manchester, Manchester, UK
2. Kim DH, Yang SC, Kim H, Lee SS, Kim YS, Lozanoff S, Kwak DS, Lee UY. Regression analysis of nasal shape from juvenile to adult ages for forensic facial reconstruction. Leg Med (Tokyo) 2024; 66:102363. [PMID: 38065055] [DOI: 10.1016/j.legalmed.2023.102363]
Abstract
The nose is a prominent feature for facial recognition and reconstruction. To investigate the relationship between nasal shape and the piriform aperture in Korean adults and juveniles, we performed regression analyses, obtaining prediction equations for nasal shape in relation to the shape of the piriform aperture while accounting for sex and age group. Three-dimensional skull and face models, rendered from computed tomography images, were assessed (331 males and 334 females). Juveniles (<20 years) were divided into three age groups according to the development of the dentition, and adults into three age groups of two decades each. To measure the nasal area, nine landmarks and nine measurements were chosen, while seven landmarks and five measurements were selected for the piriform aperture area. Four further measurements were defined to capture the direct relationship between the piriform aperture and nasal shape. First, descriptive statistical analyses were performed by sex and age group. Subsequently, the correlations of the nasal soft-tissue measurements with the piriform measurements were analyzed. Last, we performed linear regression analyses on the measurements with higher correlations, with sex and age group as variables. The prediction equations were used to estimate nasal bridge length, height, protrusion, and width. Equations that accounted for sex and age group showed better explanatory ability; measurements related to the height of the nasal bridge showed particular improvement. This study may assist in more accurate approximation of nasal shape in facial reconstruction.
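The prediction equations described above are per-stratum linear regressions. As a minimal illustrative sketch (not the authors' actual equations), the following fits an ordinary-least-squares line predicting a nasal measurement from a piriform-aperture measurement for one hypothetical sex/age stratum; all variable names and data are invented:

```python
# Minimal sketch: derive a prediction equation of the form
#   nasal_width = a * aperture_width + b
# by ordinary least squares, fitted separately per (sex, age-group)
# stratum as the abstract describes. Data below are synthetic.

def fit_ols(xs, ys):
    """Return slope a and intercept b minimising squared error."""
    n = len(xs)
    mean_x = sum(xs) / n
    mean_y = sum(ys) / n
    sxx = sum((x - mean_x) ** 2 for x in xs)
    sxy = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    a = sxy / sxx
    b = mean_y - a * mean_x
    return a, b

# Hypothetical piriform-aperture widths (mm) and nasal widths (mm)
# for one stratum, e.g. adult males.
aperture = [22.0, 23.5, 24.1, 25.0, 26.2, 27.3]
nasal    = [33.1, 34.9, 35.4, 36.8, 38.0, 39.6]

a, b = fit_ols(aperture, nasal)

def predict(x):
    """Apply the fitted prediction equation to a new aperture width."""
    return a * x + b

print(f"nasal_width = {a:.2f} * aperture_width + {b:.2f}")
```

In the study itself, separate equations were fitted per sex and age group because the slope and intercept differ between strata; in this sketch that simply means calling `fit_ols` once per stratum's data.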
Affiliation(s)
- Dong-Ho Kim
- Catholic Institute for Applied Anatomy / Department of Anatomy, College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
- Seong-Cheol Yang
- Department of Orthopaedics, College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
- Hankyu Kim
- Department of Anatomy, College of Medicine, Soonchunhyang University, 31 Soonchunhyang 6-gil, Dongnam-gu, Cheonan-si, Chungcheongnam-do 31151, Korea
- Sang-Seob Lee
- Catholic Institute for Applied Anatomy / Department of Anatomy, College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
- Yi-Suk Kim
- Catholic Institute for Applied Anatomy / Department of Anatomy, College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
- Scott Lozanoff
- Department of Anatomy, Biochemistry & Physiology, John A. Burns School of Medicine, University of Hawai'i at Mānoa, Honolulu 96813, USA
- Dai-Soon Kwak
- Catholic Institute for Applied Anatomy / Department of Anatomy, College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
- U-Young Lee
- Catholic Institute for Applied Anatomy / Department of Anatomy, College of Medicine, The Catholic University of Korea, 222 Banpo-daero, Seocho-gu, Seoul 06591, Republic of Korea
3. Kawakami K, Vingilis-Jaremko L, Friesen JP, Meyers C, Fang X. Impact of similarity on recognition of faces of Black and White targets. Br J Psychol 2022; 113:1079-1099. [PMID: 35957498] [DOI: 10.1111/bjop.12589]
Abstract
One reason for the persistence of racial inequality may be anticipated dissimilarity with racial outgroups. In the present research, we explored the impact of perceived similarity with White and Black targets on facial identity recognition accuracy. In two studies, participants first completed an ostensible personality survey. Next, in a learning phase, Black and White faces were presented on one of three background colours. Participants were led to believe that these colours indicated similarities between them and the target person in the image: specifically, they were informed that the background colours reflected the extent to which the target person's responses on the personality survey overlapped with their own. In fact, faces were randomly assigned to colours. In both studies, non-Black participants (Experiment 1) and White participants (Experiment 2) showed better recognition of White than Black faces. More importantly in the present context, a positive linear effect of similarity was found in both studies, with better recognition of increasingly similar Black and White targets. The independent effects of target race and similarity, with no interaction, indicated that participants responded to Black and White faces according to category membership as well as on an interpersonal level related to similarity with specific targets. Together, these findings suggest that while perceived similarity may enhance identity recognition accuracy for Black and White faces, it may not reduce differences in facial memory between these racial categories.
Affiliation(s)
- Xia Fang
- Zhejiang University, Zhejiang, China
4. Logan AJ, Gordon GE, Loffler G. The Effect of Age-Related Macular Degeneration on Components of Face Perception. Invest Ophthalmol Vis Sci 2021; 61:38. [PMID: 32543666] [PMCID: PMC7415315] [DOI: 10.1167/iovs.61.6.38]
Abstract
Purpose: Patients with age-related macular degeneration (AMD) experience difficulty with discriminating between faces. We aimed to use a new clinical test to quantify the impact of AMD on face perception and to determine the specific aspects that are affected. Methods: The Caledonian face test uses an adaptive procedure to measure face discrimination thresholds: the minimum difference required between faces for reliable discrimination. Discrimination thresholds were measured for full faces, external features (head-shape and hairline), internal features (nose, mouth, eyes, and eyebrows), and shapes (a non-face task). Participants were 20 patients with dry AMD (logMAR VA = 0.14 to 0.62), 20 patients with wet AMD (0.10 to 0.60), and 20 age-matched control subjects (−0.18 to +0.06). Results: Relative to controls, full-face discrimination thresholds were, on average, 1.76 and 1.73 times poorer in participants with dry and wet AMD, respectively. AMD also reduced sensitivity to face features, but discrimination of the internal, relative to external, features was disproportionately impaired. Both distance VA and contrast sensitivity were significant independent predictors of full-face discrimination thresholds (R2 = 0.66). Sensitivity to full faces declined by a factor of approximately 1.19 per 0.1 logMAR reduction in VA. Conclusions: Both dry and wet AMD significantly reduce sensitivity to full faces and their component parts to similar extents. Distance VA and contrast sensitivity are closely associated with face discrimination sensitivity. These results quantify the extent of sensitivity impairment in patients with AMD and predict particular difficulty in everyday tasks that rely on internal feature information, including recognition of familiar faces and facial expressions.
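The abstract does not specify which adaptive procedure the Caledonian face test uses, so the sketch below illustrates the general idea with a standard 2-down/1-up staircase and a simulated observer; the step size, starting level, trial count, and observer model are all assumptions, not details of the actual test:

```python
import random

def staircase(true_threshold, start=50.0, step=4.0, trials=200, seed=1):
    """Generic 2-down/1-up staircase: the stimulus difference is
    reduced after two consecutive correct responses and increased
    after each error, converging near the 70.7%-correct level.
    The simulated observer responds correctly whenever the current
    difference exceeds its noisy internal threshold."""
    rng = random.Random(seed)
    level, correct_run, reversals = start, 0, []
    last_direction = 0
    for _ in range(trials):
        noisy = true_threshold + rng.gauss(0, 1.0)
        if level > noisy:                # correct response
            correct_run += 1
            if correct_run < 2:
                continue                 # no change until two in a row
            correct_run = 0
            direction = -1
            level = max(level - step, 0.5)
        else:                            # error
            correct_run = 0
            direction = +1
            level += step
        if last_direction and direction != last_direction:
            reversals.append(level)      # record staircase reversals
        last_direction = direction
    # Threshold estimate: mean of the last few reversal levels.
    return sum(reversals[-8:]) / len(reversals[-8:])

est = staircase(true_threshold=12.0)
print(f"estimated threshold: {est:.1f}")
```

With a simulated threshold of 12 (arbitrary units), the estimate settles in that neighbourhood; real adaptive procedures vary in their up/down rule, step schedule, and stopping criterion.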
5. Olivares EI, Urraca AS, Lage-Castellanos A, Iglesias J. Different and common brain signals of altered neurocognitive mechanisms for unfamiliar face processing in acquired and developmental prosopagnosia. Cortex 2020; 134:92-113. [PMID: 33271437] [DOI: 10.1016/j.cortex.2020.10.017]
Abstract
Neuropsychological studies have shown that prosopagnosic individuals perceive face structure in an atypical way. This might preclude the formation of appropriate face representations and, consequently, hamper effective recognition. The present ERP study, in combination with Bayesian source reconstruction, investigates how information related to both external (E) and internal (I) features was processed by E.C. and I.P., who have acquired and developmental prosopagnosia, respectively. They carried out a face-feature matching task with new faces. E.C. showed poor performance and a remarkable lack of early face-sensitive P1, N170, and P2 responses over the right (damaged) posterior cortex. Although she presented the expected mismatch effect to target faces in the E-I sequence, it was of shorter duration than in controls and involved left parietal, right frontocentral, and dorsofrontal regions, suggestive of reduced neural circuitry for processing face configurations. In turn, I.P. performed efficiently but with a marked bias toward "match" responses. His face-sensitive potentials P1-N170 were comparable to those of controls; however, he showed no subsequent P2 response and a mismatch effect only in the I-E sequence, reflecting activation confined to those regions that typically sustain the initial stages of face processing. Notably, neither prosopagnosic participant exhibited conspicuous P3 responses to features acting as primes, indicating that diagnostic information for constructing face representations was neither sufficiently attended to nor deeply encoded. Our findings suggest a different locus of altered neurocognitive mechanisms in the face network for the two types of prosopagnosia, but common indicators of deficient allocation of attentional resources for further recognition.
Affiliation(s)
- Ela I Olivares
- Department of Biological and Health Psychology, Faculty of Psychology, Universidad Autónoma de Madrid, Spain
- Ana S Urraca
- Centro Universitario Cardenal Cisneros, Alcalá de Henares, Madrid, Spain
- Agustín Lage-Castellanos
- Department of Neuroinformatics, Cuban Center for Neuroscience, Havana, Cuba; Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, the Netherlands
- Jaime Iglesias
- Department of Biological and Health Psychology, Faculty of Psychology, Universidad Autónoma de Madrid, Spain
6. Zhao MF, Zimmer HD, Fu X, Zheng Z. Unitization of internal and external features contributes to associative recognition for faces: Evidence from modulations of the FN400. Brain Res 2020; 1748:147077. [PMID: 32861676] [DOI: 10.1016/j.brainres.2020.147077]
Abstract
Associative recognition requires discriminating between old items and conjunction lures constructed by recombining elements from two different study items. This task can be solved not only by recollection but also by familiarity if the to-be-remembered stimuli are perceived as a unitized representation. In two event-related potential (ERP) studies, we provide evidence for the integration of internal and external facial features by showing that the early frontal old-new effect (considered a correlate of familiarity) is modulated by the specific combination of facial features. Participants studied faces consisting of internal features (eyes, eyebrows, nose, and mouth) paired with external features (hair, head shape, and ears). During the testing phase, intact, recombined, and new faces were presented. Recombined faces consisted of internal and external features taken from two different studied faces. The results showed that at the frontal sites, during the time window from 300 to 500 ms, ERPs to intact faces were more positive than those to new and recombined faces; the latter two did not differ from one another. The late parietal effect was observed only after a more extended study phase in Experiment 2. We take the modulation of the early frontal old-new effect as evidence for the contribution of familiarity to associative recognition for combinations of internal and external facial features.
Affiliation(s)
- Min-Fang Zhao
- School of Education Science, Huizhou University, Huizhou 516007, China
- Hubert D Zimmer
- Brain and Cognition Unit, Department of Psychology, Saarland University, Saarbruecken 66123, Germany
- Xiaolan Fu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Zhiwei Zheng
- Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China; Center on Aging Psychology, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China
7. Pressler MP, Geisler EL, Hallac RR, Seaward JR, Kane AA. The Use of Eye Tracking to Discern the Threshold at Which Metopic Orbitofrontal Deformity Attracts Attention. Cleft Palate Craniofac J 2020; 57:1392-1401. [PMID: 32489115] [DOI: 10.1177/1055665620926014]
Abstract
INTRODUCTION AND OBJECTIVES: Surgical treatment for trigonocephaly aims to eliminate a stigmatizing deformity, yet the severity that captures unwanted attention is unknown. Surgeons intervene at different points of severity, eliciting controversy. This study used eye tracking to investigate when the deformity is perceived. MATERIALS AND METHODS: Three-dimensional photogrammetric images of a normal child and a child with trigonocephaly were mathematically deformed, in 10% increments, to create a spectrum of 11 images. These images were shown to participants using an eye tracker. Participants' gaze patterns were analyzed, and participants were asked whether each image looked "normal" or "abnormal." RESULTS: Sixty-six graduate students were recruited. Average dwell time toward pathologic areas of interest (AOIs) increased proportionally, from 0.77 ± 0.33 seconds at 0% deformity to 1.08 ± 0.75 seconds at 100% deformity (P < .0001). A majority of participants did not agree that an image looked "abnormal" until 90% deformity, from any angle. CONCLUSION: Eye tracking can be used as a proxy for the attention threshold toward orbitofrontal deformity. The amount of attention toward orbitofrontal AOIs increased proportionally with severity, yet participants did not generally agree there was "abnormality" until the deformity was severe. This study supports the assertion that surgical intervention may be best reserved for more severe deformity.
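Dwell time toward an AOI, as used in the study above, is simply the summed duration of fixations landing inside that region. A minimal sketch, with a hypothetical rectangular AOI and invented fixation records (real eye-tracking software typically exports richer data and supports polygonal AOIs):

```python
# Minimal sketch: total dwell time toward an area of interest (AOI),
# computed from fixation records. The AOI here is a hypothetical
# axis-aligned rectangle (x0, y0, x1, y1) in screen pixels;
# fixations are (x, y, duration_seconds) tuples.

def dwell_time(fixations, aoi):
    x0, y0, x1, y1 = aoi
    return sum(d for x, y, d in fixations
               if x0 <= x <= x1 and y0 <= y <= y1)

forehead_aoi = (300, 50, 700, 250)   # hypothetical orbitofrontal region
fixations = [
    (320, 120, 0.25),   # inside the AOI
    (500, 200, 0.40),   # inside the AOI
    (480, 600, 0.30),   # lower face, outside the AOI
    (650, 240, 0.12),   # inside the AOI
]

print(f"dwell time in AOI: {dwell_time(fixations, forehead_aoi):.2f} s")
# 0.25 + 0.40 + 0.12 = 0.77 s
```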
Affiliation(s)
- Mark P Pressler
- Department of Plastic Surgery, UT Southwestern, Dallas, TX, USA; Analytical Imaging and Modeling Center, Children's Medical Center, Dallas, TX, USA
- Emily L Geisler
- Department of Plastic Surgery, UT Southwestern, Dallas, TX, USA; Analytical Imaging and Modeling Center, Children's Medical Center, Dallas, TX, USA
- Rami R Hallac
- Department of Plastic Surgery, UT Southwestern, Dallas, TX, USA; Analytical Imaging and Modeling Center, Children's Medical Center, Dallas, TX, USA
- James R Seaward
- Department of Plastic Surgery, UT Southwestern, Dallas, TX, USA; Analytical Imaging and Modeling Center, Children's Medical Center, Dallas, TX, USA
- Alex A Kane
- Department of Plastic Surgery, UT Southwestern, Dallas, TX, USA; Analytical Imaging and Modeling Center, Children's Medical Center, Dallas, TX, USA
8. Koudelová J, Hoffmannová E, Dupej J, Velemínská J. Simulation of facial growth based on longitudinal data: Age progression and age regression between 7 and 17 years of age using 3D surface data. PLoS One 2019; 14:e0212618. [PMID: 30794623] [PMCID: PMC6386244] [DOI: 10.1371/journal.pone.0212618]
Abstract
Modelling the development of facial morphology during childhood and adolescence is highly useful in forensic and biomedical practice. However, most studies in this area fail to capture the essence of the face as a three-dimensional structure. The main aims of our present study were (1) to construct ageing trajectories for the female and male face between 7 and 17 years of age and (2) to propose a three-dimensional age-progression (age-regression) system focused on real growth-related facial changes. Our approach was based on an assessment of a total of 522 three-dimensional (3D) facial scans of Czech children (39 boys, 48 girls) who were studied longitudinally between the ages of 7 to 12 and 12 to 17 years. Facial surface scans were obtained using a Vectra-3D scanner and evaluated using geometric morphometric methods (CPD-DCA, PCA, Hotelling's T2 tests). We observed very similar growth rates between 7 and 10 years in both sexes, followed by an increase in growth velocity with maxima between 11 and 12 years in girls and 11 and 13 years in boys, connected with the different timing of the onset of puberty. Based on these partly different ageing trajectories for girls and boys, we simulated the effects of age progression (age regression) on facial scans. In girls, the mean error was 1.81 mm at 12 years and 1.7 mm at 17 years; in boys, the prediction system was slightly less successful (2.0 mm at 12 years and 1.94 mm at 17 years). The areas with the greatest deviations between predicted and real facial morphology were not important for facial recognition. Changes in body mass index percentiles over the observation period had no significant influence on the accuracy of the age-progression models for either sex.
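Age progression of the kind described above can be illustrated, in grossly simplified form, by adding a cohort-average growth displacement to each 3D landmark and scoring the result by mean Euclidean error in millimetres. The study itself operates on dense surface models via CPD-DCA rather than the toy landmarks used here; every number below is hypothetical:

```python
import math

def age_progress(landmarks, mean_displacement):
    """Apply the cohort-average growth displacement (one 3D vector
    per landmark, e.g. from age 7 to age 12) to a face's landmarks."""
    return [(x + dx, y + dy, z + dz)
            for (x, y, z), (dx, dy, dz) in zip(landmarks, mean_displacement)]

def mean_error(predicted, actual):
    """Mean Euclidean distance (mm) between predicted and real landmarks."""
    dists = [math.dist(p, a) for p, a in zip(predicted, actual)]
    return sum(dists) / len(dists)

# Hypothetical landmarks (mm) on a 7-year-old face, and the cohort's
# mean displacement to age 12.
face_7y = [(0.0, 0.0, 0.0), (30.0, 5.0, 2.0), (15.0, -20.0, 8.0)]
growth  = [(0.5, 2.0, 0.3), (1.0, 2.5, 0.4), (0.8, 3.0, 1.0)]
predicted_12y = age_progress(face_7y, growth)

# The same child's real (hypothetical) scan at age 12.
actual_12y = [(0.6, 2.3, 0.2), (31.2, 7.8, 2.5), (16.1, -16.8, 9.2)]
print(f"mean error: {mean_error(predicted_12y, actual_12y):.2f} mm")
```

Age regression is the same operation with the displacement subtracted instead of added.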
Affiliation(s)
- Jana Koudelová
- Department of Anthropology and Human Genetics, Faculty of Science, Charles University, Prague, Czech Republic
- Eva Hoffmannová
- Department of Anthropology and Human Genetics, Faculty of Science, Charles University, Prague, Czech Republic
- Ján Dupej
- Department of Anthropology and Human Genetics, Faculty of Science, Charles University, Prague, Czech Republic; Department of Software and Computer Science, Faculty of Mathematics and Physics, Charles University, Prague, Czech Republic
- Jana Velemínská
- Department of Anthropology and Human Genetics, Faculty of Science, Charles University, Prague, Czech Republic
9. Ramírez FM. Orientation Encoding and Viewpoint Invariance in Face Recognition: Inferring Neural Properties from Large-Scale Signals. Neuroscientist 2018; 24:582-608. [PMID: 29855217] [DOI: 10.1177/1073858418769554]
Abstract
Viewpoint-invariant face recognition is thought to be subserved by a distributed network of occipitotemporal face-selective areas that, except for the human anterior temporal lobe, have been shown to also contain face-orientation information. This review begins by highlighting the importance of bilateral symmetry for viewpoint-invariant recognition and face-orientation perception. Then, monkey electrophysiological evidence is surveyed describing key tuning properties of face-selective neurons, including neurons bimodally tuned to mirror-symmetric face views, followed by studies combining functional magnetic resonance imaging (fMRI) and multivariate pattern analyses to probe the representation of face-orientation and identity information in humans. Altogether, neuroimaging studies suggest that face identity is gradually disentangled from face-orientation information along the ventral visual processing stream. The evidence seems to diverge, however, regarding the prevalent form of tuning of neural populations in human face-selective areas. In this context, caveats possibly leading to erroneous inferences regarding mirror-symmetric coding are exposed, including the need to distinguish angular from Euclidean distances when interpreting multivariate pattern analyses. On this basis, this review argues that evidence from the fusiform face area is best explained by a view-sensitive code reflecting head angular disparity, consistent with a role of this area in face-orientation perception. Finally, the importance of explicit models relating neural properties to large-scale signals is stressed.
Affiliation(s)
- Fernando M Ramírez
- Bernstein Center for Computational Neuroscience, Charité Universitätsmedizin Berlin, Berlin, Germany
10. Gilad-Gutnick S, Harmatz ES, Tsourides K, Yovel G, Sinha P. Recognizing Facial Slivers. J Cogn Neurosci 2018; 30:951-962. [PMID: 29668392] [DOI: 10.1162/jocn_a_01265]
Abstract
We report here an unexpectedly robust ability of healthy human individuals (n = 40) to recognize extremely distorted, needle-like facial images, challenging the well-entrenched notion that veridical spatial configuration is necessary for extracting facial identity. In face identification tasks with parametrically compressed internal and external features, we found that the sum of performances on each cue falls significantly short of performance on full faces, despite the equal visual information available from both measures (with full faces essentially being a superposition of internal and external features). We hypothesize that this large deficit stems from the use of positional information about how the internal features are positioned relative to the external features. To test this, we systematically changed the relations between internal and external features and found preferential encoding of vertical, but not horizontal, spatial relationships in facial representations (n = 20). Finally, we employed magnetoencephalography (n = 20) to demonstrate a close mapping between the behavioral psychometric curve and the amplitude of the M250 face-familiarity evoked response field component, but not the M170 face-sensitive component, providing evidence that the M250 can be modulated by faces that are perceptually identifiable, irrespective of extreme distortions to the face's veridical configuration. We theorize that this tolerance to compressive distortions has evolved from the need to recognize faces across varying viewpoints. Our findings help clarify the important, but poorly defined, concept of facial configuration and enable an association between behavioral performance and previously reported neural correlates of face perception.
11. Kramer RS, Young AW, Burton AM. Understanding face familiarity. Cognition 2018; 172:46-58. [DOI: 10.1016/j.cognition.2017.12.005]
12. Thorup B, Crookes K, Chang PPW, Burton N, Pond S, Li TK, Hsiao J, Rhodes G. Perceptual experience shapes our ability to categorize faces by national origin: A new other-race effect. Br J Psychol 2018; 109:583-603. [PMID: 29473146] [DOI: 10.1111/bjop.12289]
Abstract
People are better at recognizing own-race than other-race faces. This other-race effect has been argued to be the result of perceptual expertise, whereby face-specific perceptual mechanisms are tuned through experience. We designed new tasks to determine whether other-race effects extend to categorizing faces by national origin. We began by selecting sets of face stimuli for these tasks that are typical in appearance for each of six nations (three Caucasian, three Asian) according to people from those nations (Study 1). Caucasian and Asian participants then categorized these faces by national origin (Study 2). Own-race faces were categorized more accurately than other-race faces. In contrast, Asian American participants, who had more extensive other-race experience than the first Asian group, categorized other-race faces better than own-race faces, demonstrating a reversal of the other-race effect. Therefore, other-race effects extend to the ability to categorize faces by national origin, but only if participants have greater perceptual experience with own-race than other-race faces. Study 3 ruled out non-perceptual accounts by showing that Caucasian and Asian faces were sorted more accurately by own-race than other-race participants, even in a sorting task without any explicit labelling required. Together, our results demonstrate a new other-race effect in sensitivity to the national origin of faces that is linked to perceptual expertise.
Affiliation(s)
- Bianca Thorup
- ARC Centre of Excellence in Cognition and Its Disorders, School of Psychological Science, University of Western Australia, Crawley, Western Australia, Australia
- Kate Crookes
- ARC Centre of Excellence in Cognition and Its Disorders, School of Psychological Science, University of Western Australia, Crawley, Western Australia, Australia
- Paul P W Chang
- School of Arts and Humanities, Edith Cowan University, Joondalup, Western Australia, Australia
- Nichola Burton
- ARC Centre of Excellence in Cognition and Its Disorders, School of Psychological Science, University of Western Australia, Crawley, Western Australia, Australia
- Stephen Pond
- ARC Centre of Excellence in Cognition and Its Disorders, School of Psychological Science, University of Western Australia, Crawley, Western Australia, Australia
- Tze Kwan Li
- Department of Psychology, The University of Hong Kong, Hong Kong
- Janet Hsiao
- ARC Centre of Excellence in Cognition and Its Disorders, Department of Psychology, The University of Hong Kong, Hong Kong
- Gillian Rhodes
- ARC Centre of Excellence in Cognition and Its Disorders, School of Psychological Science, University of Western Australia, Crawley, Western Australia, Australia
13. Do intoxicated witnesses produce poor facial composite images? Psychopharmacology (Berl) 2018; 235:2991-3003. [PMID: 30120491] [PMCID: PMC6182606] [DOI: 10.1007/s00213-018-4989-2]
Abstract
RATIONALE: The effect of alcohol intoxication on witness memory and performance has been the subject of research for some time; however, whether intoxication affects facial composite construction has not been investigated. OBJECTIVES: Intoxication was predicted to adversely affect facial composite construction. METHODS: Thirty-two participants were allocated to one of four beverage conditions consisting of factorial combinations of alcohol or placebo at face encoding and later construction. Participants viewed a video of a target person and constructed a composite of this target the following day. The resulting images were presented as a full-face composite, or as a part face consisting of either internal or external facial features, to a second sample of participants who provided likeness ratings as a measure of facial composite quality. RESULTS: Intoxication at face encoding had a detrimental impact on the quality of facial composites produced the following day, suggesting that alcohol impaired the encoding of the target faces. The common finding that external features are represented more accurately than internal features was demonstrated, even following alcohol at encoding. This finding was moderated by alcohol and target face gender, such that alcohol at face encoding reduced the likeness of external features for male composite faces only. CONCLUSIONS: Moderate alcohol intoxication impairs the quality of facial composites, adding to existing literature demonstrating little effect of alcohol in line-up studies. The impact of intoxication on face perception mechanisms, and the apparent narrowing of processing to external face areas such as hair, is discussed in the context of alcohol myopia theory.
|
14
|
Contributions of individual face features to face discrimination. Vision Res 2017; 137:29-39. [PMID: 28688904 DOI: 10.1016/j.visres.2017.05.011] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/09/2017] [Revised: 05/02/2017] [Accepted: 05/06/2017] [Indexed: 11/21/2022]
Abstract
Faces are highly complex stimuli that contain a host of information. Such complexity poses the following questions: (a) do observers exhibit preferences for specific information? (b) how does sensitivity to individual face parts compare? These questions were addressed by quantifying sensitivity to different face features. Discrimination thresholds were determined for synthetic faces under the following conditions: (i) 'full face': all face features visible; (ii) 'isolated feature': a single feature presented in isolation; (iii) 'embedded feature': all features visible, but only one feature modified. Mean threshold elevations for isolated features, relative to full faces, were 0.84×, 1.08×, 2.12×, 3.34×, 4.07× and 4.47× for head shape, hairline, nose, mouth, eyes and eyebrows, respectively. Hence, when two full faces can be discriminated at threshold, the difference between the eyes is about four times smaller than that required to discriminate between isolated eyes. In all cases, sensitivity was higher when features were presented in isolation than when they were embedded within a face context (threshold elevations of 0.94×, 1.74×, 2.67×, 2.90×, 5.94× and 9.94×). This reveals a specific pattern of sensitivity to face information. Observers are between two and four times more sensitive to external than to internal features. The pattern for internal features (higher sensitivity for the nose compared with the mouth, eyes and eyebrows) is consistent with lower sensitivity for those parts affected by facial dynamics (e.g., facial expressions). That isolated features are easier to discriminate than embedded features supports a holistic face processing mechanism which impedes extraction of information about individual features from full faces.
|
15
|
Etchells DB, Brooks JL, Johnston RA. Evidence for view-invariant face recognition units in unfamiliar face learning. Q J Exp Psychol (Hove) 2016; 70:874-889. [PMID: 27809666 DOI: 10.1080/17470218.2016.1248453] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
Many models of face recognition incorporate the idea of a face recognition unit (FRU), an abstracted representation formed from each experience of a face which aids recognition under novel viewing conditions. Some previous studies have failed to find evidence of this FRU representation. Here, we report three experiments which investigated this theoretical construct by modifying the face-learning procedure used in previous work. During learning, one or two views of previously unfamiliar faces were shown to participants in a serial matching task. Later, participants attempted to recognize both seen and novel views of the learned faces (recognition phase). Experiment 1 tested participants' recognition of a novel view a day after learning. Experiment 2 was identical, but tested participants on the same day as learning. Experiment 3 repeated Experiment 1, but tested participants on a novel view that fell outside the rotation spanned by the learned views. Results revealed a significant advantage, across all experiments, for recognizing a novel view when two views had been learned compared with single-view learning. The observed view invariance supports the notion that an FRU representation is established during multi-view face learning under particular learning conditions.
Affiliation(s)
- David B Etchells, Centre for Cognitive Neuroscience and Cognitive Systems, School of Psychology, University of Kent, Canterbury, UK
- Joseph L Brooks, Centre for Cognitive Neuroscience and Cognitive Systems, School of Psychology, University of Kent, Canterbury, UK
- Robert A Johnston, Centre for Cognitive Neuroscience and Cognitive Systems, School of Psychology, University of Kent, Canterbury, UK
|
16
|
Meaux E, Vuilleumier P. Facing mixed emotions: Analytic and holistic perception of facial emotion expressions engages separate brain networks. Neuroimage 2016; 141:154-173. [DOI: 10.1016/j.neuroimage.2016.07.004] [Citation(s) in RCA: 34] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/17/2015] [Revised: 06/26/2016] [Accepted: 07/02/2016] [Indexed: 11/27/2022] Open
|