1. Andrews TJ, Rogers D, Mileva M, Watson DM, Wang A, Burton AM. A narrow band of image dimensions is critical for face recognition. Vision Res 2023; 212:108297. [PMID: 37527594] [DOI: 10.1016/j.visres.2023.108297]
Abstract
A key challenge in human and computer face recognition is to differentiate information that is diagnostic for identity from other sources of image variation. Here, we used a combined computational and behavioural approach to reveal critical image dimensions for face recognition. Behavioural data were collected using a sorting and matching task with unfamiliar faces and a recognition task with familiar faces. Principal components analysis was used to reveal the dimensions across which the shape and texture of faces in these tasks varied. We then asked which image dimensions were able to predict behavioural performance across these tasks. We found that the ability to predict behavioural responses in the unfamiliar face tasks increased when the early PCA dimensions (i.e. those accounting for most variance) of shape and texture were removed from the analysis. Image similarity also predicted the output of a computer model of face recognition, but again only when the early image dimensions were removed from the analysis. Finally, we found that recognition of familiar faces increased when the early image dimensions were removed, decreased when intermediate dimensions were removed, but then returned to baseline recognition when only later dimensions were removed. Together, these findings suggest that early image dimensions reflect ambient changes, such as changes in viewpoint or lighting, that do not contribute to face recognition. However, there is a narrow band of image dimensions for shape and texture that are critical for the recognition of identity in humans and computer models of face recognition.
Affiliation(s)
- Daniel Rogers
- Department of Psychology, University of York, York YO10 5DD, UK
- Mila Mileva
- Department of Psychology, University of York, York YO10 5DD, UK
- David M Watson
- Department of Psychology, University of York, York YO10 5DD, UK
- Ao Wang
- Department of Psychology, University of York, York YO10 5DD, UK
- A Mike Burton
- Department of Psychology, University of York, York YO10 5DD, UK
2. Humble D, Schweinberger SR, Mayer A, Jesgarzewsky TL, Dobel C, Zäske R. The Jena Voice Learning and Memory Test (JVLMT): A standardized tool for assessing the ability to learn and recognize voices. Behav Res Methods 2023; 55:1352-1371. [PMID: 35648317] [PMCID: PMC10126074] [DOI: 10.3758/s13428-022-01818-3]
Abstract
The ability to recognize someone's voice spans a broad spectrum, with phonagnosia at the low end and super-recognition at the high end. Yet there is no standardized test to measure an individual's ability to learn and recognize newly learned voices from samples with speech-like phonetic variability. We have developed the Jena Voice Learning and Memory Test (JVLMT), a 22-min test based on item response theory and applicable across languages. The JVLMT consists of three phases in which participants (1) become familiarized with eight speakers, (2) revise the learned voices, and (3) perform a 3AFC recognition task, using pseudo-sentences devoid of semantic content. Acoustic (dis)similarity analyses were used to create items with various levels of difficulty. Test scores are based on 22 items which had been selected and validated in two online studies with 232 and 454 participants, respectively. Mean accuracy in the JVLMT is 0.51 (SD = 0.18) with an empirical (marginal) reliability of 0.66. Correlational analyses showed high and moderate convergent validity with the Bangor Voice Matching Test (BVMT) and Glasgow Voice Memory Test (GVMT), respectively, and high discriminant validity with a digit span test. Four participants with potential super-recognition abilities and seven with potential phonagnosia were identified, performing at least 2 SDs above or below the mean, respectively. The JVLMT is a promising research and diagnostic screening tool for detecting both impairments in voice recognition and super-recognition abilities.
Affiliation(s)
- Denise Humble
- Department of Experimental Otorhinolaryngology, Jena University Hospital, Stoystrasse 3, 07743, Jena, Germany
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, Am Steiger 3/1, 07743, Jena, Germany
- Stefan R Schweinberger
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, Am Steiger 3/1, 07743, Jena, Germany
- Axel Mayer
- Department of Psychological Methods and Evaluation, Institute of Psychology and Sports Science, University of Bielefeld, Universitätsstr. 25, 33615, Bielefeld, Germany
- Tim L Jesgarzewsky
- Department of Experimental Otorhinolaryngology, Jena University Hospital, Stoystrasse 3, 07743, Jena, Germany
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, Am Steiger 3/1, 07743, Jena, Germany
- Christian Dobel
- Department of Experimental Otorhinolaryngology, Jena University Hospital, Stoystrasse 3, 07743, Jena, Germany
- Romi Zäske
- Department of Experimental Otorhinolaryngology, Jena University Hospital, Stoystrasse 3, 07743, Jena, Germany
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, Am Steiger 3/1, 07743, Jena, Germany
3. Liu CH, Young AW, Li J, Tian X, Chen W. Predicting attractiveness from face parts reveals multiple covarying cues. Br J Psychol 2021; 113:264-286. [PMID: 34541676] [DOI: 10.1111/bjop.12532]
Abstract
In most studies of facial attractiveness perception, judgments are based on whole-face images. Here we investigated how attractiveness judgments from parts of faces compare to the perceived attractiveness of the whole face, and to each other. We manipulated the extent and regions of occlusion, so that either the left/right or the top/bottom half of the face was occluded. We also further segmented the face into relatively small horizontal regions containing the forehead, eyes, nose, or mouth. The results demonstrated the correlated nature of face regions: an attractiveness judgment for one face part can be highly predictive of the attractiveness of the whole face or of the other parts. The left/right halves of the face yielded more accurate predictions than the top/bottom halves. Judgments involving a larger area of the face (i.e., left/right or top/bottom halves) produced more accurate predictions than those derived from smaller regions, such as the eyes or the mouth alone, but even the smallest and most featureless region investigated (the forehead) provided useful information. The correlated nature of the attractiveness of face parts shows that perceived attractiveness is determined by multiple covarying cues that the visual system can exploit to determine attractiveness from a single glance.
Affiliation(s)
- Chang Hong Liu
- Department of Psychology, Bournemouth University, Poole, Dorset, UK
- Jiaxin Li
- Department of Psychology, Renmin University of China, Beijing, China
- Xinran Tian
- Department of Psychology, Renmin University of China, Beijing, China
- Wenfeng Chen
- Department of Psychology, Renmin University of China, Beijing, China
4. Connolly HL, Young AW, Lewis GJ. Face perception across the adult lifespan: evidence for age-related changes independent of general intelligence. Cogn Emot 2021; 35:890-901. [PMID: 33734017] [PMCID: PMC8372290] [DOI: 10.1080/02699931.2021.1901657]
Abstract
It is well-documented that face perception - including facial expression and identity recognition ability - declines with age. To date, however, it is not yet well understood whether this age-related decline reflects face-specific effects, or instead can be accounted for by well-known declines in general intelligence. We examined this issue using a relatively large, healthy, age-diverse (18-88 years) sample (N = 595) who were assessed on well-established measures of face perception and general intelligence. Replicating previous work, we observed that facial expression recognition, facial identity recognition, and general intelligence all showed declines with age. Of importance, the age-related decline of expression and identity recognition was present even when the effects of general intelligence were statistically controlled. Moreover, facial expression and identity ability each showed significant unique associations with age. These results indicate that face perception ability becomes poorer as we age, and that this decline is to some extent relatively focal in nature. Results are in line with a hierarchical structure of face perception ability, and suggest that age appears to have independent effects on the general and specific face processing levels within this structure.
Affiliation(s)
- Hannah L Connolly
- Department of Psychology, Royal Holloway, University of London, Egham, England
- Andrew W Young
- Department of Psychology, University of York, Heslington, England
- Gary J Lewis
- Department of Psychology, Royal Holloway, University of London, Egham, England
5. Young AW, Frühholz S, Schweinberger SR. Face and Voice Perception: Understanding Commonalities and Differences. Trends Cogn Sci 2020; 24:398-410. [DOI: 10.1016/j.tics.2020.02.001]
6. Connolly HL, Lefevre CE, Young AW, Lewis GJ. Emotion recognition ability: Evidence for a supramodal factor and its links to social cognition. Cognition 2020; 197:104166. [DOI: 10.1016/j.cognition.2019.104166]
7. Robinson JE, Woods W, Leung S, Kaufman J, Breakspear M, Young AW, Johnston PJ. Prediction-error signals to violated expectations about person identity and head orientation are doubly-dissociated across dorsal and ventral visual stream regions. Neuroimage 2020; 206:116325. [DOI: 10.1016/j.neuroimage.2019.116325]
8. Liu CH, Young AW, Basra G, Ren N, Chen W. Perceptual integration and the composite face effect. Q J Exp Psychol (Hove) 2020; 73:1101-1114. [PMID: 31910718] [DOI: 10.1177/1747021819899531]
Abstract
The composite face paradigm is widely used to investigate holistic perception of faces. In the paradigm, parts from different faces (usually the top and bottom halves) are recombined. The principal criterion for holistic perception is that responses involving the component parts of composites in which the parts are aligned into a face-like configuration are disrupted compared with the same parts in a misaligned (not face-like) format. This is often taken as evidence that seeing a whole face in the aligned condition interferes with perceiving its separate parts, but the extent to which the effect is perceptually driven remains unclear. We used salient perceptual categories of gender (male or female) and race (Asian or Caucasian appearance) to create composite stimuli from parts of faces that varied orthogonally on these characteristics. In Experiment 1, participants categorised the gender of the parts of aligned composite and misaligned images created from parts with the same (congruent) or different (incongruent) gender and the same (congruent) or different (incongruent) race. In Experiment 2, the same stimuli were used but the task changed to categorising race. In both experiments, there was a strong influence of the task-relevant manipulation on the composite effect, with slower responses to aligned stimuli with incongruent gender in Experiment 1 and incongruent race in Experiment 2. In contrast, the task-irrelevant variable (race in Experiment 1, gender in Experiment 2) did not exert much influence on the composite effect in either experiment. These findings show that although holistic integration of salient visual properties makes a strong contribution to the composite face effect, it clearly also involves targeted processing of an attended visual characteristic.
Affiliation(s)
- Chang Hong Liu
- Department of Psychology, Bournemouth University, Poole, UK
- Govina Basra
- Department of Psychology, Bournemouth University, Poole, UK
- Naixin Ren
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, P.R. China
- Wenfeng Chen
- Department of Psychology, Renmin University of China, Beijing, P.R. China
9. Mileva M, Young AW, Jenkins R, Burton AM. Facial identity across the lifespan. Cogn Psychol 2019; 116:101260. [PMID: 31865002] [DOI: 10.1016/j.cogpsych.2019.101260]
Abstract
We can recognise people that we know across their lifespan. We see family members age, and we can recognise celebrities across long careers. How is this possible, despite the very large facial changes that occur as people get older? Here we analyse the statistical properties of faces as they age, sampling photos of the same people from their 20s to their 70s. Across a number of simulations, we observe that individuals' faces retain some idiosyncratic physical properties across the adult lifespan that can be used to support moderate levels of age-independent recognition. However, we found that models based exclusively on image-similarity only achieved limited success in recognising faces across age. In contrast, more robust recognition was achieved with the introduction of a minimal top-down familiarisation procedure. Such models can incorporate the within-person variability associated with a particular individual to show a surprisingly high level of generalisation, even across the lifespan. The analysis of this variability reveals a powerful statistical tool for understanding recognition, and demonstrates how visual representations may support operations typically thought to require conceptual properties.
Affiliation(s)
- Mila Mileva
- Department of Psychology, University of York, UK
- Rob Jenkins
- Department of Psychology, University of York, UK
10. Neural dynamics of racial categorization predicts racial bias in face recognition and altruism. Nat Hum Behav 2019; 4:69-87. [DOI: 10.1038/s41562-019-0743-y]
11. Mileva M, Young AW, Kramer RS, Burton AM. Understanding facial impressions between and within identities. Cognition 2019; 190:184-198. [DOI: 10.1016/j.cognition.2019.04.027]
12. Young AW, Noyes E. We need to talk about super-recognizers. Invited commentary on: Ramon M, Bobak AK, White D. Super-recognizers: From the lab to the world and back again, British Journal of Psychology. Br J Psychol 2019; 110:492-494. [DOI: 10.1111/bjop.12395]
Affiliation(s)
- Eilidh Noyes
- Department of Psychology, University of Huddersfield, UK
13. Symmetrical Viewpoint Representations in Face-Selective Regions Convey an Advantage in the Perception and Recognition of Faces. J Neurosci 2019; 39:3741-3751. [PMID: 30842248] [DOI: 10.1523/jneurosci.1977-18.2019]
Abstract
Learning new identities is crucial for effective social interaction. A critical aspect of this process is the integration of different images from the same face into a view-invariant representation that can be used for recognition. The representation of symmetrical viewpoints has been proposed to be a key computational step in achieving view-invariance. The aim of this study was to determine whether the representation of symmetrical viewpoints in face-selective regions is directly linked to the perception and recognition of face identity. In Experiment 1, we measured fMRI responses while male and female human participants viewed images of real faces from different viewpoints (-90, -45, 0, 45, and 90° from full-face view). Within the face regions, patterns of neural response to symmetrical views (-45 and 45° or -90 and 90°) were more similar than responses to nonsymmetrical views in the fusiform face area and superior temporal sulcus, but not in the occipital face area. In Experiment 2, participants made perceptual similarity judgements to pairs of face images. Images with symmetrical viewpoints were reported as being more similar than nonsymmetric views. In Experiment 3, we asked whether symmetrical views also convey an advantage when learning new faces. We found that recognition was best when participants were tested with novel face images that were symmetrical to the learning viewpoint. Critically, the pattern of perceptual similarity and recognition across different viewpoints predicted the pattern of neural response in face-selective regions. Together, our results provide support for the functional value of symmetry as an intermediate step in generating view-invariant representations.
SIGNIFICANCE STATEMENT: The recognition of identity from faces is crucial for successful social interactions. A critical step in this process is the integration of different views into a unified, view-invariant representation. The representation of symmetrical views (e.g., left profile and right profile) has been proposed as an important intermediate step in computing view-invariant representations. We found that view-symmetric representations were specific to some face-selective regions, but not others. We also show that these neural representations influence the perception of faces. Symmetric views were perceived to be more similar and were recognized more accurately than nonsymmetric views. Moreover, the perception and recognition of faces at different viewpoints predicted patterns of response in those face regions with view-symmetric representations.
14. Connolly HL, Young AW, Lewis GJ. Recognition of facial expression and identity in part reflects a common ability, independent of general intelligence and visual short-term memory. Cogn Emot 2018; 33:1119-1128. [DOI: 10.1080/02699931.2018.1535425]
Affiliation(s)
- Hannah L. Connolly
- Department of Psychology, Royal Holloway, University of London, Egham, UK
- Andrew W. Young
- Department of Psychology, University of York, Heslington, York, UK
- Gary J. Lewis
- Department of Psychology, Royal Holloway, University of London, Egham, UK
15. South Palomares JK, Young AW. Facial and self-report questionnaire measures capture different aspects of romantic partner preferences. Br J Psychol 2018; 110:549-575. [PMID: 30270430] [DOI: 10.1111/bjop.12347]
Abstract
Romantic relationship researchers often use self-report measures of partner preferences based on verbal questionnaires. These questionnaires show that partner preferences involve an evaluation in terms of underlying factors of vitality-attractiveness, status-resources, and warmth-trustworthiness. However, when people first encounter a potential partner, they can usually derive a wealth of impressions from their face, and little is known about the relationship between verbal self-reports and impressions derived from faces. We conducted four studies investigating potential parallels and differences between facial impressions and verbal self-reports. Study 1 showed that when evaluating highly variable everyday face images in a context that does not require considering them as potential partners, participants can reliably perceive the traits and factors that are related to partner preferences. However, despite being capable of these nuanced evaluations, Study 2 found that when asked to evaluate images of faces as potential romantic partners, participants' preferences become dominated by attractiveness-related concerns. Study 3 confirmed this dominance of facial attractiveness using morphed face-like images. Study 4 showed that attractiveness dominates partner preferences for faces even when task instructions imply that warmth-trustworthiness or status-resources should be of primary importance. In contrast to verbal questionnaire measures of partner preferences, evaluations of faces focus heavily on attractiveness, whereas questionnaire self-reports tend on average to prioritize warmth-trustworthiness over attractiveness. Evaluations of faces and verbal self-report measures therefore capture different aspects of partner preferences.
16. Young AW, Burton AM. What We See in Unfamiliar Faces: A Response to Rossion. Trends Cogn Sci 2018; 22:472-473. [DOI: 10.1016/j.tics.2018.03.008]
17. Weibert K, Flack TR, Young AW, Andrews TJ. Patterns of neural response in face regions are predicted by low-level image properties. Cortex 2018; 103:199-210. [PMID: 29655043] [DOI: 10.1016/j.cortex.2018.03.009]
Abstract
Models of face processing suggest that the neural response in different face regions is selective for higher-level attributes of the face, such as identity and expression. However, it remains unclear to what extent the response in these regions can also be explained by more basic organizing principles. Here, we used functional magnetic resonance imaging multivariate pattern analysis (fMRI-MVPA) to ask whether spatial patterns of response in the core face regions (occipital face area - OFA, fusiform face area - FFA, superior temporal sulcus - STS) can be predicted across different participants by lower level properties of the stimulus. First, we compared the neural response to face identity and viewpoint, by showing images of different identities from different viewpoints. The patterns of neural response in the core face regions were predicted by the viewpoint, but not the identity of the face. Next, we compared the neural response to viewpoint and expression, by showing images with different expressions from different viewpoints. Again, viewpoint, but not expression, predicted patterns of response in face regions. Finally, we show that the effect of viewpoint in both experiments could be explained by changes in low-level image properties. Our results suggest that a key determinant of the neural representation in these core face regions involves lower-level image properties rather than an explicit representation of higher-level attributes in the face. The advantage of a relatively image-based representation is that it can be used flexibly in the perception of faces.
Affiliation(s)
- Katja Weibert
- Department of Psychology and York Neuroimaging Centre, University of York, York, United Kingdom
- Tessa R Flack
- Department of Psychology and York Neuroimaging Centre, University of York, York, United Kingdom
- Andrew W Young
- Department of Psychology and York Neuroimaging Centre, University of York, York, United Kingdom
- Timothy J Andrews
- Department of Psychology and York Neuroimaging Centre, University of York, York, United Kingdom