1
Minemoto K, Ueda Y. Face identity and facial expression representations with adaptation paradigms: New directions for potential applications. Front Psychol 2022;13:988497. PMID: 36600709; PMCID: PMC9806277; DOI: 10.3389/fpsyg.2022.988497.
Abstract
Adaptation and its aftereffect are well-known procedures for probing the neural representation of visual stimuli. Aftereffects have been reported for face identity, facial expressions, and low-level visual features. The method has two primary advantages. One is to reveal common or shared processing of faces, that is, whether representations of face identities or facial expressions overlap or are discrete. The other is to test the coding system, or theory, of face processing that underlies the ability to recognize faces. This study aims to organize recent research so as to guide the reader through the field of face adaptation and its aftereffects and to suggest possible future expansions of the paradigm. To this end, we reviewed behavioral studies of short-term aftereffects for face identity (i.e., who it is) and facial expression (i.e., which expression, such as happiness or anger, is displayed) and summarized their findings about the neural representation of faces. First, we summarize the basic characteristics of face aftereffects relative to those for simple visual features, clarifying that face aftereffects arise at a different processing stage and are not inherited from, or mere combinations of, low-level visual aftereffects. Next, we introduce the norm-based coding hypothesis, one of the leading accounts of how face identity and facial expressions are represented, for which adaptation is a commonly used test. Subsequently, we review studies that applied this paradigm to immature or impaired face recognition (i.e., children and individuals with autism spectrum disorder or prosopagnosia) and examined the relationship between poor recognition performance and the underlying representations. Moreover, we review studies on the representation of non-presented faces and of social signals conveyed by faces, and argue that the face adaptation paradigm is also well suited to these questions. Finally, we summarize the research conducted to date and propose new directions for the face adaptation paradigm.
2
Lin H, Liang J. Behavioral and ERP effects of encoded facial expressions on facial identity recognition depend on recognized facial expressions. Psychol Res 2022;87:1590-1606. DOI: 10.1007/s00426-022-01756-x.
3
Jiang N, Li H, Chen C, Fu R, Zhang Y, Mei L. The emotional adaptation aftereffect discriminates between individuals with high and low levels of depressive symptoms. Cogn Emot 2021;36:240-253. PMID: 34775905; DOI: 10.1080/02699931.2021.2002822.
Abstract
The adaptation aftereffect plays a critical role in human development and survival. Existing studies have found that, compared with typical individuals, individuals with learning disabilities, autism, or dyslexia show a smaller non-affective (cognitive) adaptation aftereffect. It is unclear, however, whether individuals with depression or depressive tendencies show a similar reduction, and whether any such reduction appears in the cognitive or the emotional adaptation aftereffect. To address this question, the present study conducted two experiments. Experiments 1A and 1B used the emotional facial expression adaptation paradigm to examine whether Chinese participants showed the emotional adaptation aftereffect and whether it was influenced by the physical features of faces, respectively. Experiment 2 recruited two groups of participants, with high and low levels of depressive symptoms, to examine whether they differed in the emotional or cognitive adaptation aftereffect. Chinese participants showed the typical emotional adaptation aftereffect, which was not influenced by the physical features of faces. More importantly, the high-depression group showed a smaller emotional adaptation aftereffect than the low-depression group, whereas the two groups showed similar cognitive adaptation aftereffects. These results suggest that the level of depressive symptoms is associated with the emotional adaptation aftereffect.
Affiliation(s)
- Nan Jiang, Huiling Li, Ruilin Fu, Yuzhou Zhang, Leilei Mei: Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education; School of Psychology; Center for Studies of Psychological Application; and Guangdong Key Laboratory of Mental Health and Cognitive Science, South China Normal University, Guangzhou, People's Republic of China
- Chuansheng Chen: Department of Psychological Science, University of California, Irvine, CA, USA
4
Soto FA, Vucovich LE, Ashby FG. Linking signal detection theory and encoding models to reveal independent neural representations from neuroimaging data. PLoS Comput Biol 2018;14:e1006470. PMID: 30273337; PMCID: PMC6181430; DOI: 10.1371/journal.pcbi.1006470.
Abstract
Many research questions in visual perception involve determining whether stimulus properties are represented and processed independently. In visual neuroscience, there is great interest in determining whether important object dimensions are represented independently in the brain. For example, theories of face recognition have proposed either completely or partially independent processing of identity and emotional expression. Unfortunately, most previous research has only vaguely defined what is meant by "independence," which hinders its precise quantification and testing. This article develops a new quantitative framework that links signal detection theory from psychophysics and encoding models from computational neuroscience, focusing on a special form of independence defined in the psychophysics literature: perceptual separability. The new theory allows separability of neural representations to be defined precisely and theoretically links behavioral and brain measures of separability. The framework formally specifies the relation between these different levels of perceptual and brain representation, providing the tools for a truly integrative research approach. In particular, the theory identifies exactly what valid inferences about independent encoding of stimulus dimensions can be drawn from multivariate analyses of neuroimaging data and from psychophysical studies. In addition, commonly used operational tests of independence are reinterpreted within the new framework, providing insights into their correct use and interpretation. Finally, the framework is applied to the study of the separability of brain representations of face identity and emotional expression (neutral/sad) in a human fMRI study with male and female participants.
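The kind of inference the framework licenses can be illustrated with a toy simulation (entirely hypothetical data and parameters, not the paper's actual model): if identity and expression are encoded separably, an expression decoder trained on responses to one identity should transfer to the other identity; an identity-specific (non-separable) expression code breaks that transfer.

```python
import numpy as np

rng = np.random.default_rng(0)

def simulate(n_voxels=50, n_trials=80, interaction=0.0):
    """Simulated responses to 2 identities x 2 expressions.

    Separable code: response = identity pattern + expression pattern.
    `interaction` mixes in an identity-specific expression pattern,
    which violates perceptual separability.
    """
    ident = rng.normal(size=(2, n_voxels))
    expr = rng.normal(size=(2, n_voxels))
    mix = rng.normal(size=(2, 2, n_voxels))  # identity-specific expression code
    X, y_expr, y_id = [], [], []
    for i in range(2):
        for e in range(2):
            pattern = ident[i] + expr[e] + interaction * mix[i, e]
            X.append(pattern + rng.normal(size=(n_trials, n_voxels)))
            y_expr += [e] * n_trials
            y_id += [i] * n_trials
    return np.vstack(X), np.array(y_expr), np.array(y_id)

def cross_identity_decoding(X, y_expr, y_id):
    """Nearest-mean expression decoder trained on identity 0, tested on identity 1."""
    train, test = y_id == 0, y_id == 1
    means = [X[train & (y_expr == e)].mean(axis=0) for e in (0, 1)]
    dists = np.array([np.linalg.norm(X[test] - m, axis=1) for m in means])
    return (dists.argmin(axis=0) == y_expr[test]).mean()

acc_sep = cross_identity_decoding(*simulate(interaction=0.0))
acc_mix = cross_identity_decoding(*simulate(interaction=3.0))
# Cross-identity transfer of expression decoding is typically near ceiling
# under a separable code and degrades once identity-specific coding is added.
print(acc_sep, acc_mix)
```

The point of the sketch is the direction of inference: successful cross-dimension decoding transfer is evidence consistent with separable encoding, which is the sort of claim the framework makes precise.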
Affiliation(s)
- Fabian A. Soto: Department of Psychology, Florida International University, Miami, Florida, USA
- Lauren E. Vucovich: Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, California, USA
- F. Gregory Ashby: Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, California, USA
5
Abstract
The fact that the face is a source of diverse social signals allows us to use face and person perception as a model system for asking important psychological questions about how our brains are organised. A key issue concerns whether we rely primarily on some form of generic representation of the common physical source of these social signals (the face) to interpret them, or instead create multiple representations by assigning different aspects of the task to different specialist components. Variants of the specialist components hypothesis have formed the dominant theoretical perspective on face perception for more than three decades, but despite this dominance of formally and informally expressed theories, the underlying principles and extent of any division of labour remain uncertain. Here, I discuss three important sources of constraint: first, the evolved structure of the brain; second, the need to optimise responses to different everyday tasks; and third, the statistical structure of faces in the perceiver's environment. I show how these constraints interact to determine the underlying functional organisation of face and person perception.
6
Ichikawa H, Kanazawa S, Yamaguchi MK. Infants recognize identity in a dynamic facial animation that simultaneously changes its identity and expression. Vis Cogn 2017. DOI: 10.1080/13506285.2017.1399949.
Affiliation(s)
- Hiroko Ichikawa: Department of Psychology, Chuo University, Tokyo, Japan; Japan Society for the Promotion of Science, Tokyo, Japan
- So Kanazawa: Department of Psychology, Japan Women's University, Kanagawa, Japan
7
Abstract
Perception of a facial expression can be altered or biased by a prolonged viewing of other facial expressions, known as the facial expression adaptation aftereffect (FEAA). Recent studies using antiexpressions have demonstrated a monotonic relation between the magnitude of the FEAA and adaptor extremity, suggesting that facial expressions are opponent coded and represented continuously from one expression to its antiexpression. However, it is unclear whether the opponent-coding scheme can account for the FEAA between two facial expressions. In the current study, we demonstrated that the magnitude of the FEAA between two facial expressions increased monotonically as a function of the intensity of adapting facial expressions, consistent with the predictions based on the opponent-coding model. Further, the monotonic increase in the FEAA occurred even when the intensity of an adapting face was too weak for its expression to be recognized. These results together suggest that multiple facial expressions are encoded and represented by balanced activity of neural populations tuned to different facial expressions.
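The opponent-coding account and its monotonic prediction can be sketched in a few lines (a toy model with invented gains, not the study's fitted model):

```python
def perceived(test, adaptor_intensity, k=0.4):
    """Two opponent pools code an expression axis from -1 (antiexpression)
    to +1 (expression); `test` is the test face's position on that axis.

    Adapting to the '+' expression fatigues the '+' pool in proportion to
    the adaptor's intensity (gain loss k per unit intensity); perception
    is read out as the difference between the two pool responses.
    """
    g_pos = 1.0 - k * adaptor_intensity   # adapted pool loses gain
    r_pos = g_pos * (1.0 + test) / 2.0    # '+' pool response
    r_neg = 1.0 * (1.0 - test) / 2.0      # unadapted '-' pool response
    return r_pos - r_neg

# Aftereffect = how far a neutral test face (0) is pushed toward the
# antiexpression. In this model it grows monotonically with adaptor
# intensity, even for weak adaptors, matching the FEAA pattern above.
aftereffects = [abs(perceived(0.0, a)) for a in (0.1, 0.25, 0.5, 1.0)]
print(aftereffects)
```

Note that the shift arises from the imbalance between pools, not from the test face driving the adapted pool, which is why even a sub-recognition-threshold adaptor can still produce a graded aftereffect in such a scheme.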
8
Hartigan A, Richards A. Disgust exposure and explicit emotional appraisal enhance the LPP in response to disgusted facial expressions. Soc Neurosci 2016;12:458-467. PMID: 27121369; DOI: 10.1080/17470919.2016.1182067.
Abstract
The influence of prior exposure to disgusting imagery and the conscious appraisal of facial expressions were examined in an event-related potential (ERP) experiment. Participants were exposed to either a disgust or a control manipulation and then presented with emotional and neutral expressions. An assessment of the gender of the face was required during half the blocks and an affective assessment of the emotion in the other half. The emotion-related early posterior negativity (EPN) and late positive potential (LPP) ERP components were examined for disgust and neutral stimuli. Results indicated that the EPN was enhanced for disgusted over neutral expressions. Prior disgust exposure modulated the middle phase of the LPP in response to disgusted but not neutral expressions, but only when the emotion of the face was explicitly evaluated. The late LPP was enhanced irrespective of stimulus type when an emotional decision was made. Results demonstrated that exposure to disgusting imagery can affect the subsequent processing of disgusted facial expressions when the emotion is under conscious appraisal.
Affiliation(s)
- Alex Hartigan: Affective and Cognitive Neuroscience Laboratory, Department of Psychological Sciences, Birkbeck College, University of London, London, UK
- Anne Richards: Affective and Cognitive Neuroscience Laboratory, Department of Psychological Sciences, Birkbeck College, University of London, London, UK
9
Dahl CD, Rasch MJ, Bülthoff I, Chen CC. Integration or separation in the processing of facial properties - a computational view. Sci Rep 2016;6:20247. PMID: 26829891; PMCID: PMC4735755; DOI: 10.1038/srep20247.
Abstract
A face recognition system ought to read out information about the identity, facial expression and invariant properties of faces, such as sex and race. A current debate is whether separate neural units in the brain deal with these face properties individually or whether a single neural unit processes in parallel all aspects of faces. While the focus of studies has been directed toward the processing of identity and facial expression, little research exists on the processing of invariant aspects of faces. In a theoretical framework we tested whether a system can deal with identity in combination with sex, race or facial expression using the same underlying mechanism. We used dimension reduction to describe how the representational face space organizes face properties when trained on different aspects of faces. When trained to learn identities, the system not only successfully recognized identities, but also was immediately able to classify sex and race, suggesting that no additional system for the processing of invariant properties is needed. However, training on identity was insufficient for the recognition of facial expressions and vice versa. We provide a theoretical approach on the interconnection of invariant facial properties and the separation of variant and invariant facial properties.
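The core result, that a representation learned from identity alone also supports classification of an invariant property such as sex, can be reproduced in miniature on synthetic data (a hypothetical sketch using PCA as the dimension-reduction step; all parameters and labels are invented, not the authors' actual pipeline):

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic "faces": each face vector is an identity component plus an
# invariant sex component plus per-image noise.
n_id, n_per_id, dim = 8, 20, 60
id_codes = rng.normal(size=(n_id, dim))      # one code per identity
sex_codes = 1.5 * rng.normal(size=(2, dim))  # shared component per sex
sex_of_id = np.arange(n_id) % 2              # alternate sexes across identities

faces, ids = [], []
for i in range(n_id):
    base = id_codes[i] + sex_codes[sex_of_id[i]]
    faces.append(base + rng.normal(scale=0.5, size=(n_per_id, dim)))
    ids += [i] * n_per_id
X, ids = np.vstack(faces), np.array(ids)

# "Train on identity": build a low-dimensional face space from the
# identity means alone (PCA via SVD); sex labels play no role here.
means = np.stack([X[ids == i].mean(axis=0) for i in range(n_id)])
centered = means - means.mean(axis=0)
_, _, vt = np.linalg.svd(centered, full_matrices=False)
space = vt[:4]                               # top principal axes

proj = (X - means.mean(axis=0)) @ space.T    # faces in the learned space

# Sex classification falls out of the identity-trained space: a simple
# nearest-centroid read-out on the projected faces recovers sex.
sex = sex_of_id[ids]
c0 = proj[sex == 0].mean(axis=0)
c1 = proj[sex == 1].mean(axis=0)
pred = (np.linalg.norm(proj - c1, axis=1)
        < np.linalg.norm(proj - c0, axis=1)).astype(int)
print("sex accuracy in identity-trained space:", (pred == sex).mean())
```

Because the sex component is shared across identities of the same sex, it dominates the between-identity variance and is captured by the top principal axes, which is the sense in which no additional system is needed for the invariant property.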
Affiliation(s)
- Christoph D. Dahl: Department of Psychology, National Taiwan University, Roosevelt Road, Taipei 106, Taiwan; Department of Comparative Cognition, Institute of Biology, University of Neuchâtel, Rue Emile-Argand 11, 2000 Neuchâtel, Switzerland
- Malte J. Rasch: State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, Xinjiekouwai Street 19, 100875 Beijing, China
- Isabelle Bülthoff: Max Planck Institute for Biological Cybernetics, Human Perception, Cognition and Action, Spemannstrasse 38, 72074 Tübingen, Germany
- Chien-Chung Chen: Department of Psychology, National Taiwan University, Roosevelt Road, Taipei 106, Taiwan
10
Song M, Shinomori K, Qian Q, Yin J, Zeng W. The Change of Expression Configuration Affects Identity-Dependent Expression Aftereffect but Not Identity-Independent Expression Aftereffect. Front Psychol 2015;6:1937. PMID: 26733922; PMCID: PMC4686644; DOI: 10.3389/fpsyg.2015.01937.
Abstract
The present study examined the influence of expression configuration on the cross-identity expression aftereffect. Expression configuration refers to the spatial arrangement of facial features conveying an emotion, e.g., an open-mouth versus a closed-mouth smile. In the first of two experiments, the expression aftereffect was measured using a cross-identity/cross-expression-configuration factorial design: the identities of the test faces were the same as or different from the adaptor and, orthogonally, the expression configurations were also the same or different. The results showed that changing the expression configuration impaired the expression aftereffect when the adaptor and test identities were the same; however, the impairment disappeared when the identities differed, indicating that the identity-independent expression representation is more robust to changes in expression configuration than the identity-dependent representation. In the second experiment, we used schematic line faces as adaptors and real faces as tests to minimize adaptor-test similarity, which was expected to exclude the contribution of the identity-dependent expression representation to the aftereffect. The second experiment yielded a result similar to the identity-independent expression aftereffect observed in Experiment 1. The findings indicate different neural sensitivities to expression configuration for identity-dependent and identity-independent expression systems.
Affiliation(s)
- Miao Song: College of Information Engineering, Shanghai Maritime University, Shanghai, China; School of Information, Kochi University of Technology, Kochi, Japan
- Keizo Shinomori: School of Information, Kochi University of Technology, Kochi, Japan
- Qian Qian: Yunnan Key Laboratory of Computer Technology Applications, Kunming University of Science and Technology, Kunming, China
- Jun Yin: College of Information Engineering, Shanghai Maritime University, Shanghai, China
- Weiming Zeng: College of Information Engineering, Shanghai Maritime University, Shanghai, China
11
Lander K, Butcher N. Independence of face identity and expression processing: exploring the role of motion. Front Psychol 2015;6:255. PMID: 25821441; PMCID: PMC4358059; DOI: 10.3389/fpsyg.2015.00255.
Abstract
According to the classic Bruce and Young (1986) model of face recognition, identity and emotional expression information from the face are processed in parallel and independently. Since this functional model was published, a growing body of research has challenged this viewpoint and instead supports an interdependence view. In addition, neural models of face processing emphasize differences in the processing of changeable versus invariant aspects of faces. This article provides a critical appraisal of this literature and discusses the role of motion in both expression and identity recognition and the intertwined nature of identity, expression, and motion processing. We conclude by discussing recent advances in this area and research questions that still need to be addressed.
Affiliation(s)
- Karen Lander: School of Psychological Sciences, University of Manchester, Manchester, UK
- Natalie Butcher: School of Social Sciences, Business and Law, Teesside University, Middlesbrough, UK
12
Soto FA, Vucovich L, Musgrave R, Ashby FG. General recognition theory with individual differences: a new method for examining perceptual and decisional interactions with an application to face perception. Psychon Bull Rev 2015;22:88-111. PMID: 24841236; PMCID: PMC4239198; DOI: 10.3758/s13423-014-0661-y.
Abstract
A common question in perceptual science is to what extent different stimulus dimensions are processed independently. General recognition theory (GRT) offers a formal framework via which different notions of independence can be defined and tested rigorously, while also dissociating perceptual from decisional factors. This article presents a new GRT model that overcomes several shortcomings with previous approaches, including a clearer separation between perceptual and decisional processes and a more complete description of such processes. The model assumes that different individuals share similar perceptual representations, but vary in their attention to dimensions and in the decisional strategies they use. We apply the model to the analysis of interactions between identity and emotional expression during face recognition. The results of previous research aimed at this problem have been disparate. Participants identified four faces, which resulted from the combination of two identities and two expressions. An analysis using the new GRT model showed a complex pattern of dimensional interactions. The perception of emotional expression was not affected by changes in identity, but the perception of identity was affected by changes in emotional expression. There were violations of decisional separability of expression from identity and of identity from expression, with the former being more consistent across participants than the latter. One explanation for the disparate results in the literature is that decisional strategies may have varied across studies and influenced the results of tests of perceptual interactions, as previous studies lacked the ability to dissociate between perceptual and decisional interactions.
Affiliation(s)
- Fabian A Soto: Sage Center for the Study of the Mind, University of California at Santa Barbara, Santa Barbara, CA, USA
13
Vakli P, Németh K, Zimmer M, Kovács G. The face evoked steady-state visual potentials are sensitive to the orientation, viewpoint, expression and configuration of the stimuli. Int J Psychophysiol 2014;94:336-50. DOI: 10.1016/j.ijpsycho.2014.10.008.
14
Richards A, Holmes A, Pell PJ, Bethell EJ. Adapting effects of emotional expression in anxiety: Evidence for an enhanced Late Positive Potential. Soc Neurosci 2013;8:650-64. DOI: 10.1080/17470919.2013.854273.