1. Kasahara S, Kumasaki N, Shimizu K. Investigating the impact of motion visual synchrony on self face recognition using real time morphing. Sci Rep 2024; 14:13090. PMID: 38849381; PMCID: PMC11161490; DOI: 10.1038/s41598-024-63233-2.
Abstract
Face recognition is a crucial aspect of self-image and social interactions. Previous studies have focused on static images to explore the boundary of self-face recognition. Our research, however, investigates the dynamics of face recognition in contexts involving motor-visual synchrony. We first validated our morphing face metrics for self-face recognition. We then conducted an experiment using state-of-the-art video processing techniques for real-time face identity morphing during facial movement. We examined self-face recognition boundaries under three conditions: synchronous, asynchronous, and static facial movements. Our findings revealed that participants recognized a narrower self-face boundary with moving facial images compared to static ones, with no significant differences between synchronous and asynchronous movements. The direction of morphing consistently biased the recognized self-face boundary. These results suggest that while motor information of the face is vital for self-face recognition, it does not rely on movement synchronization, and the sense of agency over facial movements does not affect facial identity judgment. Our methodology offers a new approach to exploring the 'self-face boundary in action', allowing for an independent examination of motion and identity.
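The boundary measure described here lends itself to a simple psychometric analysis. Below is a minimal, hypothetical sketch (toy data; the variable names and logistic form are illustrative assumptions, not the authors' code) of how a self-face boundary can be estimated as the morph level at which "this is me" responses cross 50%, which could then be compared across static, synchronous, and asynchronous conditions:

```python
# Hypothetical sketch: estimating a self-face boundary from morph judgments.
# Morph level 0.0 = 100% self, 1.0 = 100% other; data below are toy values.
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, boundary, slope):
    # P("self") as a function of morph level; `boundary` is the 50% point.
    return 1.0 / (1.0 + np.exp(slope * (x - boundary)))

morph_levels = np.linspace(0.0, 1.0, 11)
p_self = np.array([0.98, 0.97, 0.95, 0.90, 0.80, 0.55,
                   0.30, 0.15, 0.08, 0.04, 0.02])   # toy response proportions

(boundary, slope), _ = curve_fit(logistic, morph_levels, p_self, p0=[0.5, 10.0])
print(f"estimated self-face boundary at morph level {boundary:.2f}")
```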
Affiliation(s)
- Shunichi Kasahara: Sony Computer Science Laboratories, Inc., Tokyo, 141-0022, Japan; Okinawa Institute of Science and Technology Graduate University, Okinawa, 904-0412, Japan.
- Nanako Kumasaki: Sony Computer Science Laboratories, Inc., Tokyo, 141-0022, Japan.
- Kye Shimizu: Sony Computer Science Laboratories, Inc., Tokyo, 141-0022, Japan.
2. Gobbo S, Lega C, De Sandi A, Daini R. The role of preSMA and STS in face recognition: A transcranial magnetic stimulation (TMS) study. Neuropsychologia 2024; 198:108877. PMID: 38555065; DOI: 10.1016/j.neuropsychologia.2024.108877.
Abstract
Current models propose that face recognition is mediated by two independent yet interacting anatomo-functional systems: one processing facial features, mainly mediated by the Fusiform Face Area (FFA), and the other involved in the extraction of dynamic information from faces, subserved by the Superior Temporal Sulcus (STS). The pre-Supplementary Motor Area (pre-SMA) is also implicated in facial expression processing, as it is involved in motor mimicry of expressions. However, the literature provides evidence for the involvement of the STS and pre-SMA only in facial expression recognition, without relating them to face recognition. In addition, the literature shows a facilitatory role of facial motion in the recognition of unfamiliar faces, particularly for poor recognizers. The present study aimed to examine the role of the STS and pre-SMA in unfamiliar face recognition in people with different face recognition skills. Thirty-four healthy participants received repetitive transcranial magnetic stimulation over the right posterior STS, over the pre-SMA, or as sham while matching faces encoded through facial expression, rigid head movement, or static presentation (i.e., the absence of any facial or head motion). All faces were presented without emotional content. Results indicate that the STS has a direct role in recognizing identities through rigid head movement and an indirect role in facial expression processing. This dissociation represents a step forward with respect to current face processing models, suggesting that different types of motion involve separate brain and cognitive processes. The pre-SMA interacts with face recognition skills, increasing the performance of poor recognizers and decreasing that of good recognizers in all presentation conditions. Together, the results suggest at least partially different mechanisms for face recognition in poor and good recognizers, and different roles for the STS and pre-SMA in face recognition.
Affiliation(s)
- Silvia Gobbo: Department of Psychology, University of Milano-Bicocca, Milan, Italy.
- Carlotta Lega: Department of Brain and Behavioral Sciences, University of Pavia, Pavia, Italy.
- Roberta Daini: Department of Psychology, University of Milano-Bicocca, Milan, Italy.
3. Bennetts RJ, Gregory NJ, Bate S. Both identity and non-identity face perception tasks predict developmental prosopagnosia and face recognition ability. Sci Rep 2024; 14:6626. PMID: 38503841; PMCID: PMC10951298; DOI: 10.1038/s41598-024-57176-x.
Abstract
Developmental prosopagnosia (DP) is characterised by deficits in face identification. However, there is debate about whether these deficits are primarily perceptual, and whether they extend to other face processing tasks (e.g., identifying emotion, age, and gender; detecting faces in scenes). In this study, 30 participants with DP and 75 controls completed a battery of eight tasks assessing four domains of face perception (identity; emotion; age and gender; face detection). The DP group performed worse than the control group on both identity perception tasks and on one task from each of the other domains. Both identity perception tests uniquely predicted DP/control group membership and performance on two measures of face memory. These findings suggest that deficits in DP may arise from issues with face perception. Some non-identity tasks also predicted DP/control group membership and face memory, even when face identity perception was accounted for. Gender perception and speed of face detection consistently predicted unique variance in group membership and face memory; several other tasks were associated with only some measures of face recognition ability. These findings indicate that face perception deficits in DP may extend beyond identity perception. However, the associations between tasks may also reflect subtle aspects of task demands or stimuli.
Affiliation(s)
- Rachel J Bennetts
- Division of Psychology, College of Health, Medicine and Life Sciences, Brunel University London, Kingston Lane, Uxbridge, UB8 3PH, UK.
| | | | - Sarah Bate
- Department of Psychology, Bournemouth University, Poole, UK
| |
4. Sexton CL, Buckley C, Lieberfarb J, Subiaul F, Hecht EE, Bradley BJ. What Is Written on a Dog's Face? Evaluating the Impact of Facial Phenotypes on Communication between Humans and Canines. Animals (Basel) 2023; 13:2385. PMID: 37508162; PMCID: PMC10376741; DOI: 10.3390/ani13142385.
Abstract
Facial phenotypes are significant in communication with conspecifics among social primates. Less is understood about the impact of such markers in heterospecific encounters. Through behavioral and physical phenotype analyses of domesticated dogs living in human households, this study aims to evaluate the potential impact of superficial facial markings on dogs' production of human-directed facial expressions. That is, this study explores how facial markings, such as eyebrows, patches, and widow's peaks, are related to expressivity toward humans. We used the Dog Facial Action Coding System (DogFACS) as an objective measure of expressivity, and we developed an original schematic for a standardized coding of facial patterns and coloration on a sample of more than 100 male and female dogs (N = 103), aged from 6 months to 12 years, representing eight breed groups. The present study found a statistically significant, though weak, correlation between expression rate and facial complexity, with dogs with plainer faces tending to be more expressive (r = -0.326, p ≤ 0.001). Interestingly, for adult dogs, human companions characterized dogs' rates of facial expressivity with more accuracy for dogs with plainer faces. Especially relevant to interspecies communication and cooperation, within-subject analyses revealed that dogs' muscle movements were distributed more evenly across their facial regions in a highly social test condition compared to conditions in which they received ambiguous cues from their owners. On the whole, this study provides an original evaluation of how facial features may impact communication in human-dog interactions.
Affiliation(s)
- Courtney L Sexton
- Department of Population Health Sciences, Virginia-Maryland College of Veterinary Medicine, Virginia Polytechnic Institute and State University, Blacksburg, VA 24061, USA
- Center for the Advanced Study of Human Paleobiology, Department of Anthropology, The George Washington University, Washington, DC 20052, USA
| | - Colleen Buckley
- Center for the Advanced Study of Human Paleobiology, Department of Anthropology, The George Washington University, Washington, DC 20052, USA
| | | | - Francys Subiaul
- Center for the Advanced Study of Human Paleobiology, Department of Anthropology, The George Washington University, Washington, DC 20052, USA
- Department of Speech, Language and Hearing Sciences, The George Washington University, Washington, DC 20052, USA
| | - Erin E Hecht
- Department of Human Evolutionary Biology, Harvard University, Cambridge, MA 02138, USA
| | - Brenda J Bradley
- Center for the Advanced Study of Human Paleobiology, Department of Anthropology, The George Washington University, Washington, DC 20052, USA
| |
5. Watson DM, Johnston A. A PCA-Based Active Appearance Model for Characterising Modes of Spatiotemporal Variation in Dynamic Facial Behaviours. Front Psychol 2022; 13:880548. PMID: 35719501; PMCID: PMC9204357; DOI: 10.3389/fpsyg.2022.880548.
Abstract
Faces carry key personal information about individuals, including cues to their identity, social traits, and emotional state. Much research to date has employed static images of faces taken under tightly controlled conditions, yet faces in the real world are dynamic and experienced under ambient conditions. A common approach to studying key dimensions of facial variation is the use of facial caricatures. However, such techniques have typically relied on static images, and the few examples of dynamic caricatures have relied on animating graphical head models. Here, we present a principal component analysis (PCA)-based active appearance model for capturing patterns of spatiotemporal variation in videos of natural dynamic facial behaviours. We demonstrate how this technique can be applied to generate dynamic anti-caricatures of biological motion patterns in facial behaviours. This technique could be extended to caricaturing other facial dimensions, or to more general analyses of spatiotemporal variations in dynamic faces.
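As a rough illustration of the PCA logic underlying such a model (a sketch under assumed data shapes, not the authors' implementation): each frame of a facial behaviour is flattened into a vector of landmark and appearance values, the principal components then describe modes of spatiotemporal variation, and scaling component scores toward zero (the mean face) produces an anti-caricature.

```python
# Sketch of PCA-based (anti-)caricaturing of dynamic face data.
# `frames` stands in for per-frame shape + appearance vectors (assumed layout).
import numpy as np
from sklearn.decomposition import PCA

rng = np.random.default_rng(0)
frames = rng.normal(size=(200, 136 + 512))  # 200 frames: 68 (x, y) landmarks
                                            # plus 512 appearance values (toy)

pca = PCA(n_components=20)
scores = pca.fit_transform(frames)          # each frame as component scores

# Shrinking scores toward the mean de-emphasises idiosyncratic variation
# (anti-caricature); amplifying them exaggerates it (caricature).
anti_caricature = pca.inverse_transform(scores * 0.5)
caricature = pca.inverse_transform(scores * 1.5)
```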
Affiliation(s)
- David M Watson: School of Psychology, University of Nottingham, Nottingham, United Kingdom; Department of Psychology, University of York, York, United Kingdom.
- Alan Johnston: School of Psychology, University of Nottingham, Nottingham, United Kingdom.
6. The relationship between early and recent life stress and emotional expression processing: A functional connectivity study. Cogn Affect Behav Neurosci 2021; 20:588-603. PMID: 32342272; PMCID: PMC7266792; DOI: 10.3758/s13415-020-00789-2.
Abstract
The aim of this study was to characterize neural activation during the processing of negative facial expressions in a non-clinical group of individuals characterized by two factors: the levels of stress experienced in early life and in adulthood. Two models of stress consequences were investigated: the match/mismatch and cumulative stress models. The match/mismatch model assumes that early adversities may promote optimal coping with similar events in the future by fostering the development of coping strategies. The cumulative stress model assumes that the effects of stress are additive, regardless of the timing of the stressors. Previous studies suggested that stress can have both cumulative and match/mismatch effects on brain structure and functioning and, consequently, we hypothesized that effects on brain circuitry would be found for both models. We anticipated effects on the neural circuitry of structures engaged in face perception and emotional processing. Hence, the amygdala, fusiform face area, occipital face area, and posterior superior temporal sulcus were selected as seeds for seed-based functional connectivity analyses. The interaction between early and recent stress was related to alterations during the processing of emotional expressions mainly in functional connectivity to the cerebellum, middle temporal gyrus, and supramarginal gyrus. For cumulative stress levels, such alterations were observed in functional connectivity to the middle temporal gyrus, lateral occipital cortex, precuneus, precentral and postcentral gyri, anterior and posterior cingulate gyri, and Heschl's gyrus. This study adds to the growing body of literature suggesting that both the cumulative and the match/mismatch hypotheses are useful in explaining the effects of stress.
7. Starks MD, Shafer-Skelton A, Paradiso M, Martinez AM, Golomb JD. The influence of spatial location on same-different judgments of facial identity and expression. J Exp Psychol Hum Percept Perform 2020; 46. PMID: 33090835; PMCID: PMC8641643; DOI: 10.1037/xhp0000872.
Abstract
The "spatial congruency bias" is a behavioral phenomenon where 2 objects presented sequentially are more likely to be judged as being the same object if they are presented in the same location (Golomb, Kupitz, & Thiemann, 2014), suggesting that irrelevant spatial location information may be bound to object representations. Here, we examine whether the spatial congruency bias extends to higher-level object judgments of facial identity and expression. On each trial, 2 real-world faces were sequentially presented in variable screen locations, and subjects were asked to make same-different judgments on the facial expression (Experiments 1-2) or facial identity (Experiment 3) of the stimuli. We observed a robust spatial congruency bias for judgments of facial identity, yet a more fragile one for judgments of facial expression. Subjects were more likely to judge 2 faces as displaying the same expression if they were presented in the same location (compared to in different locations), but only when the faces shared the same identity. On the other hand, a spatial congruency bias was found when subjects made judgments on facial identity, even across faces displaying different facial expressions. These findings suggest a possible difference between the binding of facial identity and facial expression to spatial location. (PsycInfo Database Record (c) 2020 APA, all rights reserved).
Affiliation(s)
- Anna Shafer-Skelton: Department of Psychology, The Ohio State University; Department of Psychology, University of California, San Diego.
- Aleix M. Martinez: Department of Electrical and Computer Engineering, The Ohio State University.
- Julie D. Golomb: Department of Psychology, The Ohio State University; Department of Neuroscience, The Ohio State University.
8. Sliwinska MW, Bearpark C, Corkhill J, McPhillips A, Pitcher D. Dissociable pathways for moving and static face perception begin in early visual cortex: Evidence from an acquired prosopagnosic. Cortex 2020; 130:327-339. DOI: 10.1016/j.cortex.2020.03.033.
9. FaReT: A free and open-source toolkit of three-dimensional models and software to study face perception. Behav Res Methods 2020; 52:2604-2622. PMID: 32519291; DOI: 10.3758/s13428-020-01421-4.
Abstract
A problem in the study of face perception is that results can be confounded by poor stimulus control. Ideally, experiments should precisely manipulate facial features under study and tightly control irrelevant features. Software for 3D face modeling provides such control, but there is a lack of free and open source alternatives specifically created for face perception research. Here, we provide such tools by expanding the open-source software MakeHuman. We present a database of 27 identity models and six expression pose models (sadness, anger, happiness, disgust, fear, and surprise), together with software to manipulate the models in ways that are common in the face perception literature, allowing researchers to: (1) create a sequence of renders from interpolations between two or more 3D models (differing in identity, expression, and/or pose), resulting in a "morphing" sequence; (2) create renders by extrapolation in a direction of face space, obtaining 3D "anti-faces" and caricatures; (3) obtain videos of dynamic faces from rendered images; (4) obtain average face models; (5) standardize a set of models so that they differ only in selected facial shape features, and (6) communicate with experiment software (e.g., PsychoPy) to render faces dynamically online. These tools vastly improve both the speed at which face stimuli can be produced and the level of control that researchers have over face stimuli. We validate the face model database and software tools through a small study on human perceptual judgments of stimuli produced with the toolkit.
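The interpolation and extrapolation operations listed above reduce to simple arithmetic in the model's parameter space. A minimal sketch (toy vectors; the function and variable names are illustrative, not FaReT's actual API):

```python
# Face-space arithmetic behind morphing and anti-faces (illustrative sketch).
import numpy as np

def morph_sequence(face_a, face_b, n_steps=10):
    # Linear interpolation between two face parameter vectors.
    return [face_a + t * (face_b - face_a)
            for t in np.linspace(0.0, 1.0, n_steps)]

def anti_face(face, average_face, strength=1.0):
    # Extrapolation through the average face, away from `face`.
    return average_face - strength * (face - average_face)

rng = np.random.default_rng(1)
face_a, face_b = rng.random(50), rng.random(50)  # toy 3D-model parameters
average = 0.5 * (face_a + face_b)                # stand-in for an average face
morphs = morph_sequence(face_a, face_b)          # a "morphing" sequence
opposite = anti_face(face_a, average)            # an "anti-face"
```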
10. Quantifying the effect of viewpoint changes on sensitivity to face identity. Vision Res 2019; 165:1-12. DOI: 10.1016/j.visres.2019.09.006.
11. Arbitrary signals of trustworthiness - social judgments may rely on facial expressions even with experimentally manipulated valence. Heliyon 2019; 5:e01736. PMID: 31193439; PMCID: PMC6529738; DOI: 10.1016/j.heliyon.2019.e01736.
Abstract
Generalization has been suggested as a basic mechanism in forming impressions about unfamiliar people. In this study, we investigated how social evaluations are transferred to individual faces across contexts and to expressions across individuals. A total of 93 people (33 men; age: M = 29.95, SD = 13.74) were exposed to facial images which they had to evaluate. In the Association phase, we presented one individual with (1) a trustworthy, (2) an untrustworthy, or (3) an ambiguous expression, paired with either positive or negative descriptive sentence pairs. In the Evaluation phase, participants were shown (1) a new individual with the same emotional facial expression as seen before, and (2) a neutral image of the previously presented individual. They were asked to judge the trustworthiness of each person. We found that the valence of the social description is transferred to both individuals and expressions. That is, social evaluations (positive or negative) transferred between the images of two different individuals if they both displayed the same facial expression. The consistency between the facial expression and the description, however, had no effect on the evaluation of the same expression appearing on an unfamiliar face. The results suggest that, in the social evaluation of unfamiliar people, invariant and dynamically changing facial traits are used to a similar extent and influence these judgements through the same associative process.
12. Soto FA, Vucovich LE, Ashby FG. Linking signal detection theory and encoding models to reveal independent neural representations from neuroimaging data. PLoS Comput Biol 2018; 14:e1006470. PMID: 30273337; PMCID: PMC6181430; DOI: 10.1371/journal.pcbi.1006470.
Abstract
Many research questions in visual perception involve determining whether stimulus properties are represented and processed independently. In visual neuroscience, there is great interest in determining whether important object dimensions are represented independently in the brain. For example, theories of face recognition have proposed either completely or partially independent processing of identity and emotional expression. Unfortunately, most previous research has only vaguely defined what is meant by “independence,” which hinders its precise quantification and testing. This article develops a new quantitative framework that links signal detection theory from psychophysics and encoding models from computational neuroscience, focusing on a special form of independence defined in the psychophysics literature: perceptual separability. The new theory allowed us, for the first time, to precisely define separability of neural representations and to theoretically link behavioral and brain measures of separability. The framework formally specifies the relation between these different levels of perceptual and brain representation, providing the tools for a truly integrative research approach. In particular, the theory identifies exactly what valid inferences can be made about independent encoding of stimulus dimensions from the results of multivariate analyses of neuroimaging data and psychophysical studies. In addition, commonly used operational tests of independence are re-interpreted within this new theoretical framework, providing insights on their correct use and interpretation. Finally, we apply this new framework to the study of separability of brain representations of face identity and emotional expression (neutral/sad) in a human fMRI study with male and female participants.

A common question in vision research is whether certain stimulus properties, like face identity and expression, are represented and processed independently. We develop a theoretical framework that allowed us, for the first time, to link behavioral and brain measures of independence. Unlike previous approaches, our framework formally specifies the relation between these different levels of perceptual and brain representation, providing the tools for a truly integrative research approach in the study of independence. This allows researchers to identify what kinds of inferences can be made about brain representations from multivariate analyses of neuroimaging data or psychophysical studies. We apply this framework to the study of independent processing of face identity and expression.
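For reference, the notion of perceptual separability invoked here comes from general recognition theory (Ashby & Townsend, 1986): identity is perceptually separable from expression when the marginal perceptual distribution of identity does not depend on the expression level. In the usual notation (a standard formulation, not a quotation from this paper):

```latex
% Perceptual separability of dimension A (e.g., identity) from B (expression):
% the marginal perceptual distribution of A is invariant across levels of B.
g_A(x \mid A_i B_1) = g_A(x \mid A_i B_2) \qquad \text{for all levels } A_i .
```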
Affiliation(s)
- Fabian A. Soto: Department of Psychology, Florida International University, Miami, Florida, United States of America.
- Lauren E. Vucovich: Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, California, United States of America.
- F. Gregory Ashby: Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, California, United States of America.
13. An Integrated Neural Framework for Dynamic and Static Face Processing. Sci Rep 2018; 8:7036. PMID: 29728577; PMCID: PMC5935689; DOI: 10.1038/s41598-018-25405-9.
Abstract
Faces convey rich information including identity, gender and expression. Current neural models of face processing suggest a dissociation between the processing of invariant facial aspects, such as identity and gender, which engages the fusiform face area (FFA), and the processing of changeable aspects, such as expression and eye gaze, which engages the posterior superior temporal sulcus face area (pSTS-FA). Recent studies report a second dissociation within this network, such that the pSTS-FA, but not the FFA, shows a much stronger response to dynamic than to static faces. The aim of the current study was to test a unified model that accounts for these two functional characteristics of the neural face network. In an fMRI experiment, we presented static and dynamic faces while subjects judged an invariant (gender) or a changeable facial aspect (expression). We found that the pSTS-FA was more engaged in processing dynamic than static faces and changeable than invariant aspects, whereas the occipital face area (OFA) and FFA showed similar responses across all four conditions. These findings support an integrated neural model of face processing in which the ventral areas extract form information from both invariant and changeable facial aspects, whereas the dorsal face areas are sensitive to dynamic and changeable facial aspects.
14. Dobs K, Schultz J, Bülthoff I, Gardner JL. Task-dependent enhancement of facial expression and identity representations in human cortex. Neuroimage 2018; 172:689-702. PMID: 29432802; DOI: 10.1016/j.neuroimage.2018.02.013.
Abstract
What cortical mechanisms allow humans to easily discern the expression or identity of a face? Subjects detected changes in expression or identity of a stream of dynamic faces while we measured BOLD responses from topographically and functionally defined areas throughout the visual hierarchy. Responses in dorsal areas increased during the expression task, whereas responses in ventral areas increased during the identity task, consistent with previous studies. Similar to ventral areas, early visual areas showed increased activity during the identity task. If visual responses are weighted by perceptual mechanisms according to their magnitude, these increased responses would lead to improved attentional selection of the task-appropriate facial aspect. Alternatively, increased responses could be a signature of a sensitivity enhancement mechanism that improves representations of the attended facial aspect. Consistent with the latter sensitivity enhancement mechanism, attending to expression led to enhanced decoding of exemplars of expression both in early visual and dorsal areas relative to attending identity. Similarly, decoding identity exemplars when attending to identity was improved in dorsal and ventral areas. We conclude that attending to expression or identity of dynamic faces is associated with increased selectivity in representations consistent with sensitivity enhancement.
Affiliation(s)
- Katharina Dobs: Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076 Tübingen, Germany; Laboratory for Human Systems Neuroscience, RIKEN Brain Science Institute, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 43 Vassar Street, Cambridge, MA 02139, USA.
- Johannes Schultz: Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076 Tübingen, Germany; Division of Medical Psychology and Department of Psychiatry, University of Bonn, Sigmund Freud Str. 25, 53105 Bonn, Germany.
- Isabelle Bülthoff: Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076 Tübingen, Germany.
- Justin L Gardner: Laboratory for Human Systems Neuroscience, RIKEN Brain Science Institute, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan; Department of Psychology, Stanford University, 450 Serra Mall, Stanford, CA 94305, USA.
15. Are event-related potentials to dynamic facial expressions of emotion related to individual differences in the accuracy of processing facial expressions and identity? Cogn Affect Behav Neurosci 2017; 17:364-380. PMID: 28097516; DOI: 10.3758/s13415-016-0484-6.
Abstract
Despite a wealth of knowledge about the neural mechanisms behind emotional facial expression processing, little is known about how they relate to individual differences in social cognition abilities. We studied individual differences in the event-related potentials (ERPs) elicited by dynamic facial expressions. First, we assessed the latent structure of the ERPs, reflecting structural face processing in the N170, and the allocation of processing resources and reflexive attention to emotionally salient stimuli, in the early posterior negativity (EPN) and the late positive complex (LPC). Then we estimated brain-behavior relationships between the ERP factors and behavioral indicators of facial identity and emotion-processing abilities. Structural models revealed that the participants who formed faster structural representations of neutral faces (i.e., shorter N170 latencies) performed better at face perception (r = -.51) and memory (r = -.42). The N170 amplitude was not related to individual differences in face cognition or emotion processing. The latent EPN factor correlated with emotion perception (r = .47) and memory (r = .32), and also with face perception abilities (r = .41). Interestingly, the latent factor representing the difference in EPN amplitudes between the two neutral control conditions (chewing and blinking movements) also correlated with emotion perception (r = .51), highlighting the importance of tracking facial changes in the perception of emotional facial expressions. The LPC factor for negative expressions correlated with the memory for emotional facial expressions. The links revealed between the latency and strength of activations of brain systems and individual differences in processing socio-emotional information provide new insights into the brain mechanisms involved in social communication.
16. Dobs K, Ma WJ, Reddy L. Near-optimal integration of facial form and motion. Sci Rep 2017; 7:11002. PMID: 28887554; PMCID: PMC5591281; DOI: 10.1038/s41598-017-10885-y.
Abstract
Human perception consists of the continuous integration of sensory cues pertaining to the same object. While it has been well established that humans integrate low-level cues optimally, weighting each cue in proportion to its relative reliability, the integration processes underlying high-level perception are much less understood. Here we investigate cue integration in a complex high-level perceptual system, the human face processing system. We tested cue integration of facial form and motion in an identity categorization task and found that an optimal model could successfully predict subjects' identity choices. Our results suggest that optimal cue integration may be implemented across different levels of the visual processing hierarchy.
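The optimal strategy referred to here is the standard reliability-weighted cue combination rule (e.g., Ernst & Banks, 2002), stated generically below for form (f) and motion (m) cues rather than as this paper's exact model:

```latex
% Reliability-weighted combination of form (f) and motion (m) estimates:
\hat{s} = w_f\,\hat{s}_f + w_m\,\hat{s}_m,
\qquad
w_f = \frac{1/\sigma_f^{2}}{1/\sigma_f^{2} + 1/\sigma_m^{2}},
\qquad
w_m = \frac{1/\sigma_m^{2}}{1/\sigma_f^{2} + 1/\sigma_m^{2}},
\qquad
\sigma_{fm}^{2} = \frac{\sigma_f^{2}\,\sigma_m^{2}}{\sigma_f^{2} + \sigma_m^{2}}
```

Because the combined variance is lower than that of either cue alone, this rule is the benchmark against which "near-optimal" performance is assessed.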
Affiliation(s)
- Katharina Dobs: Université de Toulouse, Centre de Recherche Cerveau et Cognition, Université Paul Sabatier, Toulouse, France; CNRS, UMR 5549, Faculté de Médecine de Purpan, Toulouse, France.
- Wei Ji Ma: Center for Neural Science and Department of Psychology, New York University, New York, New York, USA.
- Leila Reddy: Université de Toulouse, Centre de Recherche Cerveau et Cognition, Université Paul Sabatier, Toulouse, France; CNRS, UMR 5549, Faculté de Médecine de Purpan, Toulouse, France.
17. Johnstone LT, Downing PE. Dissecting the visual perception of body shape with the Garner selective attention paradigm. Vis Cogn 2017. DOI: 10.1080/13506285.2017.1334733.
Affiliation(s)
- Leah T. Johnstone: School of Psychology, Bangor University, Bangor, UK; School of Psychology, University of East Anglia, Norwich, UK.
18. Dobs K, Bülthoff I, Schultz J. Identity information content depends on the type of facial movement. Sci Rep 2016; 6:34301. PMID: 27683087; PMCID: PMC5041143; DOI: 10.1038/srep34301.
Abstract
Facial movements convey information about many social cues, including identity. However, how much information about a person's identity is conveyed by different kinds of facial movements is unknown. We addressed this question using a recent motion capture and animation system, with which we animated one avatar head with facial movements of three types: (1) emotional, (2) emotional in social interaction and (3) conversational, all recorded from several actors. In a delayed match-to-sample task, observers were best at matching actor identity across conversational movements, worse with emotional movements in social interactions, and at chance level with emotional facial expressions. Model observers performing this task showed similar performance profiles, indicating that performance variation was due to differences in information content, rather than processing. Our results suggest that conversational facial movements transmit more dynamic identity information than emotional facial expressions, thus suggesting different functional roles and processing mechanisms for different types of facial motion.
Affiliation(s)
- Katharina Dobs
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany.,Université de Toulouse, Centre de Recherche Cerveau et Cognition, Université Paul Sabatier, Toulouse, France.,CNRS, Faculté de Médecine de Purpan, UMR 5549, Toulouse, France
| | - Isabelle Bülthoff
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany
| | - Johannes Schultz
- Department of Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Tübingen, Germany.,Division of Medical Psychology and Department of Psychiatry, University of Bonn, Bonn, Germany
| |
19.
Affiliation(s)
- Karin S. Pilz: School of Psychology, University of Aberdeen, Aberdeen, Scotland, UK.
- Ian M. Thornton: Department of Cognitive Science, Faculty of Media & Knowledge Science, University of Malta, Msida, Malta.
20. Liu CH, Chen W, Ward J, Takahashi N. Dynamic Emotional Faces Generalise Better to a New Expression but not to a New View. Sci Rep 2016; 6:31001. PMID: 27499252; PMCID: PMC4976339; DOI: 10.1038/srep31001.
Abstract
Prior research based on static images has found limited improvement for recognising previously learnt faces in a new expression after several different facial expressions of these faces had been shown during the learning session. We investigated whether non-rigid motion of facial expression facilitates the learning process. In Experiment 1, participants remembered faces that were either presented in short video clips or still images. To assess the effect of exposure to expression variation, each face was either learnt through a single expression or three different expressions. Experiment 2 examined whether learning faces in video clips could generalise more effectively to a new view. The results show that faces learnt from video clips generalised effectively to a new expression with exposure to a single expression, whereas faces learnt from stills showed poorer generalisation with exposure to either single or three expressions. However, although superior recognition performance was demonstrated for faces learnt through video clips, dynamic facial expression did not create better transfer of learning to faces tested in a new view. The data thus fail to support the hypothesis that non-rigid motion enhances viewpoint invariance. These findings reveal both benefits and limitations of exposures to moving expressions for expression-invariant face recognition.
Affiliation(s)
- Chang Hong Liu: Department of Psychology, Faculty of Science and Technology, Bournemouth University, Talbot Campus, Fern Barrow, Poole, Dorset, BH12 5BB, United Kingdom.
- Wenfeng Chen: State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Chaoyang District, Beijing 100101, China.
- James Ward: Department of Computer Science, University of Hull, Cottingham Road, Hull, HU6 7RX, United Kingdom.
- Nozomi Takahashi: Department of Psychology, Graduate School of Literature and Social Science, Nihon University, 3-25-40 Sakurajosui, Setagaya-ku, Tokyo 156-8550, Japan.
21. Yovel G, O'Toole AJ. Recognizing People in Motion. Trends Cogn Sci 2016; 20:383-395. DOI: 10.1016/j.tics.2016.02.005.
22. Wingenbach TSH, Ashwin C, Brosnan M. Validation of the Amsterdam Dynamic Facial Expression Set - Bath Intensity Variations (ADFES-BIV): A Set of Videos Expressing Low, Intermediate, and High Intensity Emotions. PLoS One 2016; 11:e0147112. PMID: 26784347; PMCID: PMC4718603; DOI: 10.1371/journal.pone.0147112.
Abstract
Most of the existing sets of facial expressions of emotion contain static photographs. While increasing demand for stimuli with enhanced ecological validity in facial emotion recognition research has led to the development of video stimuli, these typically involve full-blown (apex) expressions. However, variations of intensity in emotional facial expressions occur in real life social interactions, with low intensity expressions of emotions frequently occurring. The current study therefore developed and validated a set of video stimuli portraying three levels of intensity of emotional expressions, from low to high intensity. The videos were adapted from the Amsterdam Dynamic Facial Expression Set (ADFES) and termed the Bath Intensity Variations (ADFES-BIV). A healthy sample of 92 people recruited from the University of Bath community (41 male, 51 female) completed a facial emotion recognition task including expressions of 6 basic emotions (anger, happiness, disgust, fear, surprise, sadness) and 3 complex emotions (contempt, embarrassment, pride) that were expressed at three different intensities of expression and neutral. Accuracy scores (raw and unbiased (Hu) hit rates) were calculated, as well as response times. Accuracy rates above chance level of responding were found for all emotion categories, producing an overall raw hit rate of 69% for the ADFES-BIV. The three intensity levels were validated as distinct categories, with higher accuracies and faster responses to high intensity expressions than intermediate intensity expressions, which had higher accuracies and faster responses than low intensity expressions. To further validate the intensities, a second study with standardised display times was conducted replicating this pattern. The ADFES-BIV has greater ecological validity than many other emotion stimulus sets and allows for versatile applications in emotion research. It can be retrieved free of charge for research purposes from the corresponding author.
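The unbiased hit rate (Hu) reported here is standardly computed following Wagner (1993); stated generically rather than in this paper's exact notation, for emotion category i:

```latex
% Unbiased hit rate for category i: the squared count of correct responses,
% divided by the number of stimuli of category i times the number of times
% response i was given (penalising indiscriminate overuse of a response).
H_{u,i} = \frac{a_{ii}^{2}}{n_{i\cdot}\; n_{\cdot i}}
```

where a_ii is the count of correct responses to category i, n_i. is the number of stimuli presented from category i, and n_.i is the total number of times response i was used.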
Affiliation(s)
- Chris Ashwin: Department of Psychology, University of Bath, Bath, United Kingdom.
- Mark Brosnan: Department of Psychology, University of Bath, Bath, United Kingdom.
23.
Abstract
The high performance levels found in laboratory face recognition studies do not seem to be replicable in real-life situations, possibly because of the artificial nature of laboratory studies. Recognizing faces in natural social situations may be a more challenging task, as it involves constant examination of dynamic facial motions that may alter the facial structure vital to the recognition of unfamiliar faces. Because of these inconsistencies in recognition performance, the current study developed stimuli that closely represent natural social situations, to yield results that more accurately reflect observers' performance in real-life settings. Naturalistic stimuli of African, East Asian, and Western Caucasian actors introducing themselves were presented to investigate Malaysian Chinese participants' recognition sensitivity and looking strategies when performing a face recognition task. When perceiving dynamic facial stimuli, participants fixated most on the nose, followed by the mouth and then the eyes. Focusing on the nose may have enabled participants to gain a more holistic view of actors' facial and head movements, which proved to be beneficial in recognizing identities. Participants recognized all three races of faces equally well. The current results, which differ from those of a previous static face recognition study, may be a more accurate reflection of observers' recognition abilities and looking strategies.
Affiliation(s)
- Chrystalle B Y Tan: School of Psychology, Faculty of Science, University of Nottingham Malaysia Campus, Selangor, Malaysia.
24. Two neural pathways of face processing: A critical evaluation of current models. Neurosci Biobehav Rev 2015; 55:536-546. DOI: 10.1016/j.neubiorev.2015.06.010.