1
Chang CH, Drobotenko N, Ruocco AC, Lee ACH, Nestor A. Perception and memory-based representations of facial emotions: Associations with personality functioning, affective states and recognition abilities. Cognition 2024; 245:105724. PMID: 38266352. DOI: 10.1016/j.cognition.2024.105724.
Abstract
Personality traits and affective states are associated with biases in facial emotion perception. However, the precise personality impairments and affective states that underlie these biases remain largely unknown. To investigate how relevant factors influence facial emotion perception and recollection, Experiment 1 employed an image reconstruction approach in which community-dwelling adults (N = 89) rated the similarity of pairs of facial expressions, including those recalled from memory. Subsequently, perception- and memory-based expression representations derived from such ratings were assessed across participants and related to measures of personality impairment, state affect, and visual recognition abilities. Impairment in self-direction and level of positive affect accounted for the largest components of individual variability in perception and memory representations, respectively. Additionally, individual differences in these representations were impacted by face recognition ability. In Experiment 2, adult participants (N = 81) rated facial image reconstructions derived in Experiment 1, revealing that individual variability was associated with specific visual face properties, such as expressiveness, representation accuracy, and positivity/negativity. These findings highlight and clarify the influence of personality, affective state, and recognition abilities on individual differences in the perception and recollection of facial expressions.
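Image-reconstruction approaches of this kind typically begin by converting pairwise similarity ratings into a low-dimensional representational space, classically via multidimensional scaling. A minimal numpy sketch of that first step (illustrative only, not the authors' pipeline):

```python
import numpy as np

def classical_mds(dissim, n_dims=2):
    """Embed items in n_dims dimensions from a symmetric dissimilarity matrix."""
    n = dissim.shape[0]
    J = np.eye(n) - np.ones((n, n)) / n          # centering matrix
    B = -0.5 * J @ (dissim ** 2) @ J             # double-centered Gram matrix
    vals, vecs = np.linalg.eigh(B)               # eigenvalues in ascending order
    order = np.argsort(vals)[::-1][:n_dims]
    vals, vecs = vals[order], vecs[:, order]
    return vecs * np.sqrt(np.maximum(vals, 0))   # item coordinates

# Toy example: four "expressions" with known 2-D structure
rng = np.random.default_rng(0)
pts = rng.normal(size=(4, 2))
dissim = np.linalg.norm(pts[:, None] - pts[None, :], axis=-1)
coords = classical_mds(dissim, n_dims=2)
# Distances in the recovered space match the input dissimilarities
recovered = np.linalg.norm(coords[:, None] - coords[None, :], axis=-1)
```

For exact Euclidean dissimilarities the embedding reproduces the input distances up to rotation; with noisy 7-point ratings it yields the kind of perception- and memory-based representational spaces compared across participants above.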
Affiliation(s)
- Chi-Hsun Chang
- Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada
- Natalia Drobotenko
- Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada
- Anthony C Ruocco
- Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada; Department of Psychological Clinical Science at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada
- Andy C H Lee
- Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada; Rotman Research Institute, Baycrest Centre, 3560 Bathurst St, North York, Ontario M6A 2E1, Canada
- Adrian Nestor
- Department of Psychology at Scarborough, University of Toronto, 1265 Military Trail, Scarborough, Ontario M1C 1A4, Canada
2
Zhang Z, Chen T, Liu Y, Wang C, Zhao K, Liu CH, Fu X. Decoding the temporal representation of facial expression in face-selective regions. Neuroimage 2023; 283:120442. PMID: 37926217. DOI: 10.1016/j.neuroimage.2023.120442.
Abstract
The ability of humans to discern facial expressions in a timely manner typically relies on distributed face-selective regions for rapid neural computations. To study the time course of this process in regions of interest, we used magnetoencephalography (MEG) to measure neural responses while participants viewed facial expressions depicting seven types of emotions (happiness, sadness, anger, disgust, fear, surprise, and neutral). Time-resolved decoding of neural responses in face-selective sources within the inferior parietal cortex (IP-faces), lateral occipital cortex (LO-faces), fusiform gyrus (FG-faces), and posterior superior temporal sulcus (pSTS-faces) revealed that facial expressions could be classified starting from ∼100 to 150 ms after stimulus onset. Interestingly, the LO-faces and IP-faces showed greater decoding accuracy than the FG-faces and pSTS-faces. To examine the nature of the information processed in these face-selective regions, we entered the facial expression stimuli into a convolutional neural network (CNN) and performed similarity analyses against the human neural responses. The results showed that neural responses in the LO-faces and IP-faces, starting ∼100 ms after stimulus onset, were more strongly correlated with deep representations of emotional categories than with image-level information from the input images. Additionally, we observed a relationship between behavioral performance and neural responses in the LO-faces and IP-faces, but not in the FG-faces and pSTS-faces. Together, these results provide a comprehensive picture of the time course and nature of the information involved in facial expression discrimination across multiple face-selective regions, advancing our understanding of how the human brain processes facial expressions.
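Time-resolved decoding of this kind trains and tests a classifier separately at each time point. A simple leave-one-out nearest-centroid sketch on simulated sensor data (the classifier choice and variable names are assumptions, not the study's exact method):

```python
import numpy as np

def timepoint_decoding(X, y):
    """Leave-one-out nearest-centroid decoding at each time point.
    X: (trials, sensors, times); y: (trials,) integer class labels.
    Returns decoding accuracy per time point."""
    n_trials, _, n_times = X.shape
    acc = np.zeros(n_times)
    for t in range(n_times):
        correct = 0
        for i in range(n_trials):
            train = np.arange(n_trials) != i
            classes = np.unique(y[train])
            # class centroids from the training trials at this time point
            cents = np.stack([X[train & (y == c), :, t].mean(0) for c in classes])
            pred = classes[np.argmin(np.linalg.norm(cents - X[i, :, t], axis=1))]
            correct += pred == y[i]
        acc[t] = correct / n_trials
    return acc

# Toy data: classes become separable only after "stimulus onset" (t >= 5)
rng = np.random.default_rng(1)
y = np.repeat([0, 1], 20)
X = rng.normal(size=(40, 8, 10))
X[y == 1, :, 5:] += 2.0
acc = timepoint_decoding(X, y)
```

The accuracy time course rises above chance only once class information enters the signal, which is the logic behind the ∼100-150 ms onset estimates reported above.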
Affiliation(s)
- Zhihao Zhang
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Tong Chen
- Chongqing Key Laboratory of Non-Linear Circuit and Intelligent Information Processing, Southwest University, Chongqing 400715, China; Chongqing Key Laboratory of Artificial Intelligence and Service Robot Control Technology, Chongqing 400715, China
- Ye Liu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Chongyang Wang
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
- Ke Zhao
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Chang Hong Liu
- Department of Psychology, Bournemouth University, Dorset, United Kingdom
- Xiaolan Fu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
3
Hou X, Zhao J, Zhang H. Reconstruction of perceived face images from brain activities based on multi-attribute constraints. Front Neurosci 2022; 16:1015752. PMID: 36389231. PMCID: PMC9643433. DOI: 10.3389/fnins.2022.1015752.
Abstract
Reconstruction of perceived faces from brain signals is an active topic in brain decoding and an important application in the field of brain-computer interfaces. Existing methods do not fully consider the multiple facial attributes represented in face images and often ignore their distinct activity patterns across multiple brain regions, which results in poor reconstruction performance. In the current study, we propose an algorithmic framework that efficiently combines multiple face-selective brain regions for precise multi-attribute perceived face reconstruction. Our framework consists of three modules: (1) a multi-task deep learning network (MTDLN), developed to simultaneously extract the multi-dimensional face features attributed to facial expression, identity and gender from a single face image; (2) a set of linear regressions (LR), built to map the relationship between the multi-dimensional face features and the brain signals from multiple brain regions; and (3) a multi-conditional generative adversarial network (mcGAN), used to generate the perceived face images constrained by the predicted multi-dimensional face features. We conducted extensive fMRI experiments to evaluate the reconstruction performance of our framework both subjectively and objectively. The results show that, compared with traditional methods, our proposed framework better characterizes the multi-attribute face features in a face image, better predicts the face features from brain signals, and achieves better reconstruction of both seen and unseen face images in both visual quality and quantitative assessment. Moreover, beyond state-of-the-art intra-subject reconstruction performance, our framework can also realize inter-subject face reconstruction to a certain extent.
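The LR module amounts to fitting, for each face-feature dimension, a linear map from voxel responses to features. A ridge-regularized least-squares sketch on simulated data (all names and sizes here are illustrative assumptions):

```python
import numpy as np

rng = np.random.default_rng(2)
n_trials, n_voxels, n_feats = 200, 50, 10

# Simulated ground truth: face features are a linear function of voxel patterns
W_true = rng.normal(size=(n_voxels, n_feats))
brain = rng.normal(size=(n_trials, n_voxels))               # fMRI pattern per face
feats = brain @ W_true + 0.01 * rng.normal(size=(n_trials, n_feats))

# Ridge regression: predict multi-dimensional face features from brain signals
lam = 1e-3
W_hat = np.linalg.solve(brain.T @ brain + lam * np.eye(n_voxels),
                        brain.T @ feats)

pred = brain @ W_hat
r = np.corrcoef(pred.ravel(), feats.ravel())[0, 1]
```

In the full framework the predicted feature vector `pred` for a new trial would then condition the generative model that renders the reconstructed face.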
Affiliation(s)
- Xiaoyuan Hou
- School of Engineering Medicine, Beihang University, Beijing, China
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China
- Jing Zhao
- School of Engineering Medicine, Beihang University, Beijing, China
- School of Biological Science and Medical Engineering, Beihang University, Beijing, China
- Hui Zhang
- School of Engineering Medicine, Beihang University, Beijing, China
- Key Laboratory of Biomechanics and Mechanobiology, Ministry of Education, Beihang University, Beijing, China
- Key Laboratory of Big Data-Based Precision Medicine, Ministry of Industry and Information Technology of the People’s Republic of China, Beihang University, Beijing, China
4
Abstract
Is Mr. Hyde more similar to his alter ego Dr. Jekyll, because of their physical identity, or to Jack the Ripper, because both evoke fear and loathing? The relative weight of emotional and visual dimensions in similarity judgements is still unclear. We expected an asymmetric effect of these dimensions on similarity perception, such that faces expressing the same or similar feelings would be judged as more similar than different emotional expressions of the same person. We selected 10 male faces with different expressions. Each face posed one neutral expression and one emotional expression (five disgust, five fear). We paired these expressions, resulting in 190 pairs varying in emotional expression, physical identity, or both. Twenty healthy participants rated the similarity of the paired faces on a 7-point scale. We report a symmetric effect of emotional expression and identity on similarity judgements, suggesting that people may perceive Mr. Hyde to be just as similar to Dr. Jekyll (identity) as to Jack the Ripper (emotion). We also observed that emotional mismatch decreased perceived similarity, suggesting that emotions play a prominent role in similarity judgements. From an evolutionary perspective, poor discrimination between emotional stimuli might endanger the individual.
5
Cox CR, Rogers TT. Finding Distributed Needles in Neural Haystacks. J Neurosci 2021; 41:1019-1032. PMID: 33334868. PMCID: PMC7880292. DOI: 10.1523/jneurosci.0904-20.2020.
Abstract
The human cortex encodes information in complex networks that can be anatomically dispersed and variable in their microstructure across individuals. Using simulations with neural network models, we show that contemporary statistical methods for functional brain imaging (including univariate contrast, searchlight multivariate pattern classification, and whole-brain decoding with L1 or L2 regularization) each have critical and complementary blind spots under these conditions. We then introduce the sparse-overlapping-sets (SOS) LASSO, a whole-brain multivariate approach that exploits structured sparsity to find network-distributed information, and show in simulation that it captures the advantages of other approaches while avoiding their limitations. When applied to fMRI data to find neural responses that discriminate visually presented faces from other visual stimuli, each method yields a different result, but existing approaches all support the canonical view that face perception engages localized areas in posterior occipital and temporal regions. In contrast, SOS LASSO uncovers a network spanning all four lobes of the brain. The result cannot reflect spurious selection of out-of-system areas because decoding accuracy remains exceedingly high even when canonical face and place systems are removed from the dataset. When used to discriminate visual scenes from other stimuli, the same approach reveals a localized signal consistent with other methods, illustrating that SOS LASSO can detect both widely distributed and localized representational structure. Thus, structured sparsity can provide an unbiased method for testing claims of functional localization. For faces and possibly other domains, such decoding may reveal representations more widely distributed than previously suspected.
SIGNIFICANCE STATEMENT: Brain systems represent information as patterns of activation over neural populations connected in networks that can be widely distributed anatomically, variable across individuals, and intermingled with other networks. We show that four widespread statistical approaches to functional brain imaging have critical blind spots in this scenario and use simulations with neural network models to illustrate why. We then introduce a new approach designed specifically to find radically distributed representations in neural networks. In simulation and in fMRI data collected in the well-studied domain of face perception, the new approach discovers extensive signal missed by the other methods, suggesting that prior functional imaging work may have significantly underestimated the degree to which neurocognitive representations are distributed and variable across individuals.
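The structured-sparsity idea can be illustrated with the group-lasso proximal operator, which shrinks whole groups of coefficients (e.g. anatomical parcels) to zero together. Note this is the simpler non-overlapping-group building block, not the authors' full SOS LASSO, which allows overlapping sets:

```python
import numpy as np

def group_soft_threshold(w, groups, lam):
    """Proximal operator of the (non-overlapping) group-lasso penalty:
    each group's coefficient vector is shrunk toward zero jointly, so
    entire groups either survive (shrunk) or drop out together."""
    out = np.zeros_like(w)
    for g in groups:
        norm = np.linalg.norm(w[g])
        if norm > lam:
            out[g] = (1 - lam / norm) * w[g]   # shrink the whole group
        # else: the whole group is set to zero
    return out

# Three "parcels": one strong, one weak, one singleton
w = np.array([3.0, 4.0, 0.1, -0.1, 0.05, 2.0])
groups = [np.array([0, 1]), np.array([2, 3, 4]), np.array([5])]
shrunk = group_soft_threshold(w, groups, lam=1.0)
```

Iterating this operator inside a gradient descent on the decoding loss yields solutions in which information can be distributed across many surviving parcels while irrelevant parcels vanish entirely.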
Affiliation(s)
- Christopher R Cox
- Department of Psychology, Louisiana State University, Baton Rouge, Louisiana 70803
- Timothy T Rogers
- Department of Psychology, University of Wisconsin, Madison, Wisconsin 53706
6
Wang X, Li X, Wu S, Shi K, He Y. DNA methylation and transcriptome comparative analysis for Lvliang Black goats in distinct feeding pattern reveals epigenetic basis for environment adaptation. Biotechnol Biotechnol Equip 2021. DOI: 10.1080/13102818.2021.1914164.
Affiliation(s)
- Xi Wang
- Department of Animal Breeding and Genetics, College of Animal Science, Shanxi Agricultural University, Taigu, Shanxi, P.R. China
- Xi Li
- Department of Animal Breeding and Genetics, College of Animal Science, Shanxi Agricultural University, Taigu, Shanxi, P.R. China
- Sujun Wu
- Department of Animal Breeding and Genetics, College of Animal Science, Shanxi Agricultural University, Taigu, Shanxi, P.R. China
- Kerong Shi
- Department of Animal Breeding and Genetics, College of Animal Science and Technology, Shandong Agricultural University, Taian, Shandong, P.R. China
- Yanghua He
- Department of Human Nutrition, Food and Animal Sciences, College of Tropical Agriculture and Human Resources, University of Hawaii at Manoa, Honolulu, HI, USA
7
Hendriks MHA, Dillen C, Vettori S, Vercammen L, Daniels N, Steyaert J, Op de Beeck H, Boets B. Neural processing of facial identity and expression in adults with and without autism: A multi-method approach. Neuroimage Clin 2020; 29:102520. PMID: 33338966. PMCID: PMC7750419. DOI: 10.1016/j.nicl.2020.102520.
Abstract
The ability to recognize faces and facial expressions is a common human talent. It has, however, been suggested to be impaired in individuals with autism spectrum disorder (ASD). The goal of this study was to compare the processing of facial identity and emotion between individuals with ASD and neurotypicals (NTs). Behavioural and functional magnetic resonance imaging (fMRI) data from 46 young adults (aged 17-23 years; ASD: n = 22, NT: n = 24) were analysed. During fMRI data acquisition, participants discriminated between short clips of a face transitioning from a neutral to an emotional expression. Stimuli included four identities and six emotions. We performed behavioural, univariate, multi-voxel, adaptation and functional connectivity analyses to investigate potential group differences. The ASD group did not differ from the NT group on behavioural identity and expression processing tasks. At the neural level, we found no differences in average neural activation, neural activation patterns or neural adaptation to faces in face-related brain regions. In terms of functional connectivity, we found that the amygdala was more strongly connected to the inferior occipital cortex and V1 in individuals with ASD. Overall, the findings indicate that neural representations of facial identity and expression are of similar quality in individuals with and without ASD, but some regions containing these representations are connected differently in the extended face processing network.
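Seed-based functional connectivity of the kind reported here is, at its core, a Pearson correlation between a seed region's time series and each target region's time series. A small numpy sketch on simulated time courses (the ROI names are illustrative):

```python
import numpy as np

def seed_connectivity(seed_ts, roi_ts):
    """Pearson correlation between a seed time series and each ROI.
    seed_ts: (timepoints,); roi_ts: (timepoints, n_rois)."""
    s = (seed_ts - seed_ts.mean()) / seed_ts.std()
    r = (roi_ts - roi_ts.mean(0)) / roi_ts.std(0)
    return (s @ r) / len(s)

# Toy data: an "amygdala" seed coupled to the first of three target ROIs
rng = np.random.default_rng(3)
seed = rng.normal(size=300)
rois = rng.normal(size=(300, 3))
rois[:, 0] += 0.8 * seed          # only ROI 0 shares signal with the seed
conn = seed_connectivity(seed, rois)
```

Group differences in connectivity are then tested by comparing such correlation values (usually Fisher z-transformed) between ASD and NT participants.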
Affiliation(s)
- Michelle H A Hendriks
- Department of Brain and Cognition, KU Leuven, Tiensestraat 102 - bus 3714, Leuven, Belgium; Leuven Autism Research Consortium, KU Leuven, Leuven, Belgium
- Claudia Dillen
- Department of Brain and Cognition, KU Leuven, Tiensestraat 102 - bus 3714, Leuven, Belgium; Leuven Autism Research Consortium, KU Leuven, Leuven, Belgium
- Sofie Vettori
- Centre for Developmental Psychiatry, KU Leuven, Kapucijnenvoer 7 blok h - bus 7001, Leuven, Belgium; Leuven Autism Research Consortium, KU Leuven, Leuven, Belgium
- Laura Vercammen
- Department of Brain and Cognition, KU Leuven, Tiensestraat 102 - bus 3714, Leuven, Belgium
- Nicky Daniels
- Department of Brain and Cognition, KU Leuven, Tiensestraat 102 - bus 3714, Leuven, Belgium; Centre for Developmental Psychiatry, KU Leuven, Kapucijnenvoer 7 blok h - bus 7001, Leuven, Belgium; Leuven Autism Research Consortium, KU Leuven, Leuven, Belgium
- Jean Steyaert
- Centre for Developmental Psychiatry, KU Leuven, Kapucijnenvoer 7 blok h - bus 7001, Leuven, Belgium; Leuven Autism Research Consortium, KU Leuven, Leuven, Belgium
- Hans Op de Beeck
- Department of Brain and Cognition, KU Leuven, Tiensestraat 102 - bus 3714, Leuven, Belgium; Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Bart Boets
- Centre for Developmental Psychiatry, KU Leuven, Kapucijnenvoer 7 blok h - bus 7001, Leuven, Belgium; Leuven Brain Institute, KU Leuven, Leuven, Belgium; Leuven Autism Research Consortium, KU Leuven, Leuven, Belgium
8
Jacques C, Rossion B, Volfart A, Brissart H, Colnat-Coulbois S, Maillard L, Jonas J. The neural basis of rapid unfamiliar face individuation with human intracerebral recordings. Neuroimage 2020; 221:117174. DOI: 10.1016/j.neuroimage.2020.117174.
9
Rossion B, Retter TL, Liu-Shuang J. Understanding human individuation of unfamiliar faces with oddball fast periodic visual stimulation and electroencephalography. Eur J Neurosci 2020; 52:4283-4344. DOI: 10.1111/ejn.14865.
Affiliation(s)
- Bruno Rossion
- CNRS, CRAN UMR 7039, Université de Lorraine, F-54000 Nancy, France
- Service de Neurologie, CHRU-Nancy, Université de Lorraine, F-54000 Nancy, France
- Talia L. Retter
- Department of Behavioural and Cognitive Sciences, Faculty of Language and Literature, Humanities, Arts and Education, University of Luxembourg, Luxembourg, Luxembourg
- Joan Liu-Shuang
- Institute of Research in Psychological Science, Institute of Neuroscience, Université de Louvain, Louvain-la-Neuve, Belgium
10
Nestor A, Lee ACH, Plaut DC, Behrmann M. The Face of Image Reconstruction: Progress, Pitfalls, Prospects. Trends Cogn Sci 2020; 24:747-759. PMID: 32674958. PMCID: PMC7429291. DOI: 10.1016/j.tics.2020.06.006.
Abstract
Recent research has demonstrated that neural and behavioral data acquired in response to viewing face images can be used to reconstruct the images themselves. However, the theoretical implications, promises, and challenges of this direction of research remain unclear. We evaluate the potential of this research for elucidating the visual representations underlying face recognition. Specifically, we outline complementary and converging accounts of the visual content, the representational structure, and the neural dynamics of face processing. We illustrate how this research addresses fundamental questions in the study of normal and impaired face recognition, and how image reconstruction provides a powerful framework for uncovering face representations, for unifying multiple types of empirical data, and for facilitating both theoretical and methodological progress.
Affiliation(s)
- Adrian Nestor
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
- Andy C H Lee
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada; Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada
- David C Plaut
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA; Carnegie Mellon Neuroscience Institute, Pittsburgh, PA, USA
- Marlene Behrmann
- Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA; Carnegie Mellon Neuroscience Institute, Pittsburgh, PA, USA
11
Cao R, Li X, Todorov A, Wang S. A Flexible Neural Representation of Faces in the Human Brain. Cereb Cortex Commun 2020; 1:tgaa055. PMID: 34296119. PMCID: PMC8152845. DOI: 10.1093/texcom/tgaa055.
Abstract
An important question in human face perception research is whether the neural representation of faces is dynamically modulated by context. In particular, although a plethora of neuroimaging studies have probed the neural representation of faces, few have investigated which low-level structural and textural facial features parametrically drive neural responses to faces and whether the representation of these features is modulated by the task. To answer these questions, we employed two task instructions while participants viewed the same faces. We first identified brain regions that parametrically encoded high-level social traits such as perceived facial trustworthiness and dominance, and we showed that these brain regions were modulated by task instructions. We then employed a data-driven computational face model with parametrically generated faces and identified brain regions in which neural responses were driven by low-level variation in the faces (shape and skin texture). We further analyzed the evolution of the neural feature vectors along the visual processing stream and visualized and explained these feature vectors. Together, our results show a flexible neural representation of faces, for both low-level features and high-level social traits, in the human brain.
Affiliation(s)
- Runnan Cao
- Department of Chemical and Biomedical Engineering, Rockefeller Neurosciences Institute, West Virginia University, Morgantown, WV 26506, USA
- Xin Li
- Lane Department of Computer Science and Electrical Engineering, West Virginia University, Morgantown, WV 26506, USA
- Alexander Todorov
- Booth School of Business, University of Chicago, Chicago, IL 60637, USA
- Shuo Wang
- Department of Chemical and Biomedical Engineering, Rockefeller Neurosciences Institute, West Virginia University, Morgantown, WV 26506, USA
12
Exemplar learning reveals the representational origins of expert category perception. Proc Natl Acad Sci U S A 2020; 117:11167-11177. PMID: 32366664. DOI: 10.1073/pnas.1912734117.
Abstract
Irrespective of whether one has substantial perceptual expertise for a class of stimuli, an observer invariably encounters novel exemplars from this class. To understand how novel exemplars are represented, we examined the extent to which previous experience with a category constrains the acquisition and nature of representation of subsequent exemplars from that category. Participants completed a perceptual training paradigm with either novel other-race faces (category of experience) or novel computer-generated objects (YUFOs) that included pairwise similarity ratings at the beginning, middle, and end of training, and a 20-d visual search training task on a subset of category exemplars. Analyses of pairwise similarity ratings revealed multiple dissociations between the representational spaces for those learning faces and those learning YUFOs. First, representational distance changes were more selective for faces than YUFOs; trained faces exhibited greater magnitude in representational distance change relative to untrained faces, whereas this trained-untrained distance change was much smaller for YUFOs. Second, there was a difference in where the representational distance changes were observed; for faces, representations that were closer together before training exhibited a greater distance change relative to those that were farther apart before training. For YUFOs, however, the distance changes occurred more uniformly across representational space. Last, there was a decrease in dimensionality of the representational space after training on YUFOs, but not after training on faces. Together, these findings demonstrate how previous category experience governs representational patterns of exemplar learning as well as the underlying dimensionality of the representational space.
13
Ryali CK, Wang X, Yu AJ. Leveraging Computer Vision Face Representation to Understand Human Face Representation. Proceedings of the Annual Conference of the Cognitive Science Society 2020; 42:1080-1086. PMID: 34355219. PMCID: PMC8336428.
Abstract
Face processing plays a critical role in human social life, from differentiating friends from enemies to choosing a life mate. In this work, we leverage various computer vision techniques, combined with human assessments of similarity between pairs of faces, to investigate human face representation. We find that combining a shape- and texture-feature based model (Active Appearance Model) with a particular form of metric learning not only achieves the best performance in predicting human similarity judgments on held-out data (compared both to other algorithms and to humans), but also performs better than or comparably to alternative approaches in modeling human social trait judgments (e.g. trustworthiness, attractiveness) and affective assessments (e.g. happy, angry, sad). This analysis yields several scientific findings: (1) facial similarity judgments rely on a relatively small number of facial features (8-12); (2) race- and gender-informative features play a prominent role in similarity perception; and (3) similarity-relevant features alone are insufficient to capture human face representation; in particular, some affective features absent from similarity judgments are also necessary for constructing the complete psychological face representation.
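A simple form of the metric learning used here is to fit non-negative per-feature weights so that a weighted distance in feature space matches human dissimilarity judgments; features with near-zero weight are the ones similarity judgments ignore. A projected-gradient sketch on toy data (the diagonal-metric formulation and all names are assumptions, not the paper's exact method):

```python
import numpy as np

def fit_feature_weights(X, sim, n_iter=2000, lr=0.01):
    """Learn non-negative weights w so that the weighted squared distance
    sum_k w_k * (x_ik - x_jk)^2 matches squared dissimilarity (1 - sim)^2."""
    n, d = X.shape
    iu = np.triu_indices(n, 1)
    sq = ((X[:, None, :] - X[None, :, :]) ** 2)[iu]   # (n_pairs, d)
    t2 = (1.0 - sim)[iu] ** 2                          # target squared distances
    w = np.ones(d)
    for _ in range(n_iter):
        grad = sq.T @ (sq @ w - t2) / len(t2)          # least-squares gradient
        w = np.maximum(w - lr * grad, 0.0)             # project onto w >= 0
    return w

# Toy data: only the first two of five features drive "similarity"
rng = np.random.default_rng(4)
X = rng.normal(size=(20, 5))
true_d = np.linalg.norm(X[:, None, :2] - X[None, :, :2], axis=-1)
sim = 1.0 - true_d
w = fit_feature_weights(X, sim)
```

The fitted weights concentrate on the similarity-relevant features, mirroring the paper's finding that judgments rely on a small subset of the feature space.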
Affiliation(s)
- Chaitanya K. Ryali
- Department of Computer Science and Engineering, University of California, San Diego, La Jolla, CA 92093, USA
- Xiaotian Wang
- Department of Electrical and Computer Engineering, University of California, San Diego, La Jolla, CA 92093, USA
- Angela J. Yu
- Department of Cognitive Science, University of California, San Diego, La Jolla, CA 92093, USA
14
Lehky SR, Phan AH, Cichocki A, Tanaka K. Face Representations via Tensorfaces of Various Complexities. Neural Comput 2019; 32:281-329. PMID: 31835006. DOI: 10.1162/neco_a_01258.
Abstract
Neurons selective for faces exist in humans and monkeys. However, characteristics of face cell receptive fields are poorly understood. In this theoretical study, we explore the effects of complexity, defined as algorithmic information (Kolmogorov complexity) and logical depth, on possible ways that face cells may be organized. We use tensor decompositions to decompose faces into a set of components, called tensorfaces, and their associated weights, which can be interpreted as model face cells and their firing rates. These tensorfaces form a high-dimensional representation space in which each tensorface forms an axis of the space. A distinctive feature of the decomposition algorithm is the ability to specify tensorface complexity. We found that low-complexity tensorfaces have blob-like appearances crudely approximating faces, while high-complexity tensorfaces appear clearly face-like. Low-complexity tensorfaces require a larger population to reach a criterion face reconstruction error than medium- or high-complexity tensorfaces, and thus are inefficient by that criterion. Low-complexity tensorfaces, however, generalize better when representing statistically novel faces, which are faces falling beyond the distribution of face description parameters found in the tensorface training set. The degree to which face representations are parts based or global forms a continuum as a function of tensorface complexity, with low and medium tensorfaces being more parts based. Given the computational load imposed in creating high-complexity face cells (in the form of algorithmic information and logical depth) and in the absence of a compelling advantage to using high-complexity cells, we suggest face representations consist of a mixture of low- and medium-complexity face cells.
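The decomposition idea can be illustrated with a plain CP (canonical polyadic) decomposition fitted by alternating least squares; the authors' tensorface method additionally controls component complexity, which this numpy sketch omits:

```python
import numpy as np

def unfold(T, mode):
    """Mode-n unfolding (C order: later axes vary fastest)."""
    return np.moveaxis(T, mode, 0).reshape(T.shape[mode], -1)

def khatri_rao(A, B):
    """Column-wise Kronecker product; rows indexed by (i_A, i_B), i_B fastest."""
    return (A[:, None, :] * B[None, :, :]).reshape(-1, A.shape[1])

def cp_als(T, rank, n_iter=200, seed=0):
    """Rank-`rank` CP decomposition of a 3-way tensor via alternating least
    squares: T[i,j,k] ≈ sum_r A[i,r] * B[j,r] * C[k,r]."""
    rng = np.random.default_rng(seed)
    A, B, C = (rng.normal(size=(s, rank)) for s in T.shape)
    for _ in range(n_iter):
        A = unfold(T, 0) @ khatri_rao(B, C) @ np.linalg.pinv((B.T @ B) * (C.T @ C))
        B = unfold(T, 1) @ khatri_rao(A, C) @ np.linalg.pinv((A.T @ A) * (C.T @ C))
        C = unfold(T, 2) @ khatri_rao(A, B) @ np.linalg.pinv((A.T @ A) * (B.T @ B))
    return A, B, C

# Toy "tensorfaces": recover a random rank-3 tensor from its factors
rng = np.random.default_rng(5)
A0, B0, C0 = (rng.normal(size=(s, 3)) for s in (6, 5, 4))
T = np.einsum('ir,jr,kr->ijk', A0, B0, C0)
A, B, C = cp_als(T, rank=3)
err = np.linalg.norm(np.einsum('ir,jr,kr->ijk', A, B, C) - T) / np.linalg.norm(T)
```

In the tensorface setting, one tensor mode indexes face images and the recovered components play the role of model face cells, with the per-image weights interpretable as firing rates.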
Affiliation(s)
- Sidney R Lehky
- Cognitive Brain Mapping Laboratory, RIKEN Center for Brain Science, Wako-shi, Saitama 351-0198, Japan, and Computational Neurobiology Laboratory, Salk Institute, La Jolla, CA 92037, U.S.A.
- Anh Huy Phan
- Center for Computational and Data-Intensive Science and Engineering, Skolkovo Institute of Science and Technology, 143026 Moscow, Russia; and Institute of Global Innovation Research, Tokyo University of Agriculture and Technology, Tokyo 183-8538, Japan
- Andrzej Cichocki
- Center for Computational and Data-Intensive Science and Engineering, Skolkovo Institute of Science and Technology, 143026 Moscow, Russia; Systems Research Institute, Polish Academy of Sciences, 01447 Warsaw, Poland; College of Computer Science, Hangzhou Dianzi University, Hangzhou 310018, China; and Institute of Global Innovation Research, Tokyo University of Agriculture and Technology, Tokyo 183-8538, Japan
- Keiji Tanaka
- Cognitive Brain Mapping Laboratory, RIKEN Center for Brain Science, Wako-shi, Saitama 351-0198, Japan
15
Liu TT, Nestor A, Vida MD, Pyles JA, Patterson C, Yang Y, Yang FN, Freud E, Behrmann M. Successful Reorganization of Category-Selective Visual Cortex following Occipito-temporal Lobectomy in Childhood. Cell Rep 2019; 24:1113-1122.e6. [PMID: 30067969 DOI: 10.1016/j.celrep.2018.06.099] [Citation(s) in RCA: 23] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/04/2016] [Revised: 05/22/2018] [Accepted: 06/22/2018] [Indexed: 12/18/2022] Open
Abstract
Investigations of functional (re)organization in children who have undergone large cortical resections offer a unique opportunity to elucidate the nature and extent of cortical plasticity. We report findings from a 3-year investigation of a child, U.D., who underwent surgical removal of the right occipital and posterior temporal lobes at age 6 years 9 months. Relative to controls, post-surgically, U.D. showed age-appropriate intellectual performance and visuoperceptual face and object recognition skills. Using fMRI at five different time points, we observed a persistent hemianopia and no visual field remapping. In category-selective visual cortices, however, object- and scene-selective regions in the intact left hemisphere were stable early on, but regions subserving face and word recognition emerged later and evinced competition for cortical representation. These findings reveal alterations in the selectivity and topography of category-selective regions when confined to a single hemisphere and provide insights into dynamic functional changes in extrastriate cortical architecture.
Affiliation(s)
- Tina T Liu
- Center for the Neural Basis of Cognition, Carnegie Mellon University and the University of Pittsburgh, Pittsburgh, PA, USA; Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA
- Adrian Nestor
- Department of Psychology, University of Toronto, Scarborough, Toronto, ON, Canada
- Mark D Vida
- Center for the Neural Basis of Cognition, Carnegie Mellon University and the University of Pittsburgh, Pittsburgh, PA, USA; Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA
- John A Pyles
- Center for the Neural Basis of Cognition, Carnegie Mellon University and the University of Pittsburgh, Pittsburgh, PA, USA; Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA
- Ying Yang
- Center for the Neural Basis of Cognition, Carnegie Mellon University and the University of Pittsburgh, Pittsburgh, PA, USA; Machine Learning Department, School of Computer Science, Carnegie Mellon University, Pittsburgh, PA, USA
- Fan Nils Yang
- Department of Psychology, Sun Yat-Sen University, Guangzhou, China; Center for Functional Neuroimaging & Department of Neurology, University of Pennsylvania, Philadelphia, PA, USA
- Erez Freud
- Center for the Neural Basis of Cognition, Carnegie Mellon University and the University of Pittsburgh, Pittsburgh, PA, USA; Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA
- Marlene Behrmann
- Center for the Neural Basis of Cognition, Carnegie Mellon University and the University of Pittsburgh, Pittsburgh, PA, USA; Department of Psychology, Carnegie Mellon University, Pittsburgh, PA, USA.
16
Hill MQ, Parde CJ, Castillo CD, Colón YI, Ranjan R, Chen JC, Blanz V, O’Toole AJ. Deep convolutional neural networks in the face of caricature. NAT MACH INTELL 2019. [DOI: 10.1038/s42256-019-0111-7] [Citation(s) in RCA: 24] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022]
17
Heinrichs RW. The duality of human cognition: operations and intentionality in mental life and illness. Neurosci Biobehav Rev 2019; 108:139-148. [PMID: 31703967 DOI: 10.1016/j.neubiorev.2019.11.002] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/22/2019] [Accepted: 11/04/2019] [Indexed: 01/20/2023]
Abstract
What people think about, the intentional aspect of cognition, is distinguished from its operational aspect, or how proficiently they think. Many psychiatric disorders, as well as social problems like racism, are defined largely by specified thought contents, whereas neurological disorders including dementia are defined by low proficiency. Intentionality contrasts with operational cognition in resisting objectification and in being expressed primarily in verbal narratives and subjective self-disclosure. This yields insecure data that have slowed progress in fields where intentional cognition plays a key role. The question is how to produce more secure knowledge and open the intentional domain itself to objective investigation. The use of operational methods to infer intentionality has provided only partial answers. However, the science of reconstructing mental events with neural data is providing a new horizon for the study of intentional cognition. Reconstruction science must address major challenges related to fidelity and validity. Nevertheless, this approach is showing the first steps on the road to accessing and revealing objectively the contents of thought.
Affiliation(s)
- R Walter Heinrichs
- Department of Psychology, York University, Toronto, Ontario, M3J 1P3, Canada.
18
Ling S, Lee ACH, Armstrong BC, Nestor A. How are visual words represented? Insights from EEG-based visual word decoding, feature derivation and image reconstruction. Hum Brain Mapp 2019; 40:5056-5068. [PMID: 31403749 PMCID: PMC6865374 DOI: 10.1002/hbm.24757] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/02/2019] [Revised: 05/30/2019] [Accepted: 07/23/2019] [Indexed: 11/10/2022] Open
Abstract
Investigations into the neural basis of reading have shed light on the cortical locus and the functional role of visual‐orthographic processing. Yet, the fine‐grained structure of neural representations subserving reading remains to be clarified. Here, we capitalize on the spatiotemporal structure of electroencephalography (EEG) data to examine if and how EEG patterns can serve to decode and reconstruct the internal representation of visually presented words in healthy adults. Our results show that word classification and image reconstruction were accurate well above chance, that their temporal profile exhibited an early onset, soon after 100 ms, and peaked around 170 ms. Further, reconstruction results were well explained by a combination of visual‐orthographic word properties. Last, systematic individual differences were detected in orthographic representations across participants. Collectively, our results establish the feasibility of EEG‐based word decoding and image reconstruction. More generally, they help to elucidate the specific features, dynamics, and neurocomputational principles underlying word recognition.
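The time-resolved decoding described in this abstract can be sketched in a hedged, self-contained way. The code below is not the authors' pipeline: it uses synthetic trials-by-channels-by-time "EEG" data and a simple cross-validated nearest-centroid classifier applied separately at each timepoint, which is enough to show how decoding accuracy can sit at chance before an effect onset and rise above chance afterwards.

```python
import numpy as np

rng = np.random.default_rng(1)

# Synthetic EEG: trials x channels x timepoints, two word classes whose
# signals separate only after a simulated "onset" at timepoint 20.
n_trials, n_chan, n_time = 80, 16, 60
X = rng.normal(0, 1, (n_trials, n_chan, n_time))
y = np.repeat([0, 1], n_trials // 2)
X[y == 1, :, 20:] += 1.0                 # class signal appears late

def decode_timepoint(X, y, t, n_splits=4):
    """Cross-validated nearest-centroid decoding at a single timepoint."""
    folds = np.arange(len(y)) % n_splits
    acc = []
    for k in range(n_splits):
        tr, te = folds != k, folds == k
        c0 = X[tr & (y == 0), :, t].mean(axis=0)
        c1 = X[tr & (y == 1), :, t].mean(axis=0)
        d0 = np.linalg.norm(X[te, :, t] - c0, axis=1)
        d1 = np.linalg.norm(X[te, :, t] - c1, axis=1)
        acc.append(np.mean((d1 < d0) == (y[te] == 1)))
    return float(np.mean(acc))

early = decode_timepoint(X, y, 5)    # before onset: near chance
late = decode_timepoint(X, y, 40)    # after onset: well above chance
```

Sweeping `decode_timepoint` over all timepoints yields the kind of temporal decoding profile the abstract reports, with an onset and a peak.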
Affiliation(s)
- Shouyu Ling
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
- Andy C H Lee
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada; Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada
- Blair C Armstrong
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada; BCBL, Basque Center on Cognition, Brain, and Language, Donostia, San Sebastián, Spain
- Adrian Nestor
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
19
Modelling face memory reveals task-generalizable representations. Nat Hum Behav 2019; 3:817-826. [PMID: 31209368 DOI: 10.1038/s41562-019-0625-3] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/20/2018] [Accepted: 05/02/2019] [Indexed: 11/08/2022]
Abstract
Current cognitive theories are cast in terms of information-processing mechanisms that use mental representations [1-4]. For example, people use their mental representations to identify familiar faces under various conditions of pose, illumination and ageing, or to draw resemblance between family members. Yet, the actual information contents of these representations are rarely characterized, which hinders knowledge of the mechanisms that use them. Here, we modelled the three-dimensional representational contents of 4 faces that were familiar to 14 participants as work colleagues. The representational contents were created by reverse-correlating identity information generated on each trial with judgements of the face's similarity to the individual participant's memory of this face. In a second study, testing new participants, we demonstrated the validity of the modelled contents using everyday face tasks that generalize identity judgements to new viewpoints, age and sex. Our work highlights that such models of mental representations are critical to understanding generalization behaviour and its underlying information-processing mechanisms.
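The reverse-correlation logic in this abstract can be sketched with a toy simulation. This is an illustration under loud assumptions, not the authors' 3-D face model: the "participant" is simulated as a noisy linear observer whose similarity ratings reflect the match between each random stimulus and a hidden internal template, and averaging the stimuli weighted by the ratings recovers that template.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical internal "memory template" over 100 identity features.
template = rng.normal(0, 1, 100)

# Each trial shows randomly generated identity information; the simulated
# participant rates its similarity to the remembered face (with noise).
n_trials = 5000
stimuli = rng.normal(0, 1, (n_trials, 100))
ratings = stimuli @ template + rng.normal(0, 5, n_trials)

# Reverse correlation: average the trial stimuli weighted by the
# (mean-centered) ratings to estimate the content driving the judgments.
estimate = (ratings - ratings.mean()) @ stimuli / n_trials

similarity = float(np.corrcoef(estimate, template)[0, 1])
```

With enough trials the estimate converges on the template, which is the sense in which reverse correlation reveals the information content of a mental representation.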
20
Perceptual Function and Category-Selective Neural Organization in Children with Resections of Visual Cortex. J Neurosci 2019; 39:6299-6314. [PMID: 31167940 DOI: 10.1523/jneurosci.3160-18.2019] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2018] [Revised: 05/13/2019] [Accepted: 05/13/2019] [Indexed: 12/20/2022] Open
Abstract
The consequences of cortical resection, a treatment for humans with pharmaco-resistant epilepsy, provide a unique opportunity to advance our understanding of the nature and extent of cortical (re)organization. Despite the importance of visual processing in daily life, the neural and perceptual sequelae of occipitotemporal resections remain largely unexplored. Using psychophysical and fMRI investigations, we compared the neural and visuoperceptual profiles of 10 children or adolescents following unilateral cortical resections and their age- and gender-matched controls. Dramatically, with the exception of two individuals, both of whom had relatively greater cortical alterations, all patients showed normal perceptual performance on tasks of intermediate- and high-level vision, including face and object recognition. Consistently, again with the exception of the same two individuals, both univariate and multivariate fMRI analyses revealed normal selectivity and representational structure of category-selective regions. Furthermore, the spatial organization of category-selective regions obeyed the typical medial-to-lateral topographic organization albeit unilaterally in the structurally preserved hemisphere rather than bilaterally. These findings offer novel insights into the malleability of cortex in the pediatric population and suggest that, although experience may be necessary for the emergence of neural category-selectivity, this emergence is not necessarily contingent on the integrity of particular cortical structures.
SIGNIFICANCE STATEMENT One approach to reduce seizure activity in patients with pharmaco-resistant epilepsy involves the resection of the epileptogenic focus. The impact of these resections on the perceptual behaviors and organization of visual cortex remains largely unexplored. Here, we characterized the visuoperceptual and neural profiles of ventral visual cortex in a relatively large sample of post-resection pediatric patients. Two major findings emerged. First, most patients exhibited preserved visuoperceptual performance across a wide range of visual behaviors. Second, normal topography, magnitude, and representational structure of category-selective organization were uncovered in the spared hemisphere. These comprehensive imaging and behavioral investigations uncovered novel evidence concerning the neural representations and visual functions in children who have undergone cortical resection, and have implications for cortical plasticity more generally.
21
Sama MA, Nestor A, Cant JS. Independence of viewpoint and identity in face ensemble processing. J Vis 2019; 19:2. [DOI: 10.1167/19.5.2] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/24/2022] Open
Affiliation(s)
- Marco A. Sama
- Department of Psychology, University of Toronto Scarborough, Toronto, Canada
- Adrian Nestor
- Department of Psychology, University of Toronto Scarborough, Toronto, Canada
- Jonathan S Cant
- Department of Psychology, University of Toronto Scarborough, Toronto, Canada
22
Kamps FS, Morris EJ, Dilks DD. A face is more than just the eyes, nose, and mouth: fMRI evidence that face-selective cortex represents external features. Neuroimage 2019; 184:90-100. [PMID: 30217542 PMCID: PMC6230492 DOI: 10.1016/j.neuroimage.2018.09.027] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/30/2018] [Accepted: 09/10/2018] [Indexed: 11/30/2022] Open
Abstract
What is a face? Intuition, along with abundant behavioral and neural evidence, indicates that internal features (e.g., eyes, nose, mouth) are critical for face recognition, yet some behavioral and neural findings suggest that external features (e.g., hair, head outline, neck and shoulders) may likewise be processed as a face. Here we directly test this hypothesis by investigating how external (and internal) features are represented in the brain. Using fMRI, we found highly selective responses to external features (relative to objects and scenes) within the face processing system in particular, rivaling that observed for internal features. We then further asked how external and internal features are represented in regions of the cortical face processing system, and found a similar division of labor for both kinds of features, with the occipital face area and posterior superior temporal sulcus representing the parts of both internal and external features, and the fusiform face area representing the coherent arrangement of both internal and external features. Taken together, these results provide strong neural evidence that a "face" is composed of both internal and external features.
Affiliation(s)
- Frederik S Kamps
- Department of Psychology, Emory University, Atlanta, GA 30322, USA
- Ethan J Morris
- Department of Psychology, Emory University, Atlanta, GA 30322, USA
- Daniel D Dilks
- Department of Psychology, Emory University, Atlanta, GA 30322, USA.
23
Nemrodov D, Behrmann M, Niemeier M, Drobotenko N, Nestor A. Multimodal evidence on shape and surface information in individual face processing. Neuroimage 2019; 184:813-825. [DOI: 10.1016/j.neuroimage.2018.09.083] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2018] [Revised: 09/22/2018] [Accepted: 09/30/2018] [Indexed: 11/27/2022] Open
24
Dima DC, Perry G, Messaritaki E, Zhang J, Singh KD. Spatiotemporal dynamics in human visual cortex rapidly encode the emotional content of faces. Hum Brain Mapp 2018; 39:3993-4006. [PMID: 29885055 PMCID: PMC6175429 DOI: 10.1002/hbm.24226] [Citation(s) in RCA: 33] [Impact Index Per Article: 5.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2018] [Revised: 04/13/2018] [Accepted: 05/14/2018] [Indexed: 12/05/2022] Open
Abstract
Recognizing emotion in faces is important in human interaction and survival, yet existing studies do not paint a consistent picture of the neural representation supporting this task. To address this, we collected magnetoencephalography (MEG) data while participants passively viewed happy, angry and neutral faces. Using time‐resolved decoding of sensor‐level data, we show that responses to angry faces can be discriminated from happy and neutral faces as early as 90 ms after stimulus onset and only 10 ms later than faces can be discriminated from scrambled stimuli, even in the absence of differences in evoked responses. Time‐resolved relevance patterns in source space track expression‐related information from the visual cortex (100 ms) to higher‐level temporal and frontal areas (200–500 ms). Together, our results point to a system optimised for rapid processing of emotional faces and preferentially tuned to threat, consistent with the important evolutionary role that such a system must have played in the development of human social interactions.
Affiliation(s)
- Diana C Dima
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff, CF24 4HQ, United Kingdom
- Gavin Perry
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff, CF24 4HQ, United Kingdom
- Eirini Messaritaki
- BRAIN Unit, School of Medicine, Cardiff University, Cardiff, CF24 4HQ, United Kingdom; Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff, CF24 4HQ, United Kingdom
- Jiaxiang Zhang
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff, CF24 4HQ, United Kingdom
- Krish D Singh
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff, CF24 4HQ, United Kingdom
25
Zachariou V, Nikas CV, Safiullah ZN, Gotts SJ, Ungerleider LG. Spatial Mechanisms within the Dorsal Visual Pathway Contribute to the Configural Processing of Faces. Cereb Cortex 2018; 27:4124-4138. [PMID: 27522076 DOI: 10.1093/cercor/bhw224] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/02/2016] [Accepted: 06/28/2016] [Indexed: 11/12/2022] Open
Abstract
Human face recognition is often attributed to configural processing; namely, processing the spatial relationships among the features of a face. If configural processing depends on fine-grained spatial information, do visuospatial mechanisms within the dorsal visual pathway contribute to this process? We explored this question in human adults using functional magnetic resonance imaging and transcranial magnetic stimulation (TMS) in a same-different face detection task. Within localized, spatial-processing regions of the posterior parietal cortex, configural face differences led to significantly stronger activation compared to featural face differences, and the magnitude of this activation correlated with behavioral performance. In addition, detection of configural relative to featural face differences led to significantly stronger functional connectivity between the right FFA and the spatial processing regions of the dorsal stream, whereas detection of featural relative to configural face differences led to stronger functional connectivity between the right FFA and left FFA. Critically, TMS centered on these parietal regions impaired performance on configural but not featural face difference detections. We conclude that spatial mechanisms within the dorsal visual pathway contribute to the configural processing of facial features and, more broadly, that the dorsal stream may contribute to the veridical perception of faces.
Affiliation(s)
- Christine V Nikas
- Laboratory of Brain and Cognition, NIMH/NIH, Bethesda, MD 20892-1366, USA
- Zaid N Safiullah
- Laboratory of Brain and Cognition, NIMH/NIH, Bethesda, MD 20892-1366, USA
- Stephen J Gotts
- Laboratory of Brain and Cognition, NIMH/NIH, Bethesda, MD 20892-1366, USA
26
The Neural Dynamics of Facial Identity Processing: Insights from EEG-Based Pattern Analysis and Image Reconstruction. eNeuro 2018; 5:eN-NWR-0358-17. [PMID: 29492452 PMCID: PMC5829556 DOI: 10.1523/eneuro.0358-17.2018] [Citation(s) in RCA: 40] [Impact Index Per Article: 6.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/12/2017] [Revised: 01/11/2018] [Accepted: 01/12/2018] [Indexed: 11/21/2022] Open
Abstract
Uncovering the neural dynamics of facial identity processing along with its representational basis outlines a major endeavor in the study of visual processing. To this end, here, we record human electroencephalography (EEG) data associated with viewing face stimuli; then, we exploit spatiotemporal EEG information to determine the neural correlates of facial identity representations and to reconstruct the appearance of the corresponding stimuli. Our findings indicate that multiple temporal intervals support: facial identity classification, face space estimation, visual feature extraction and image reconstruction. In particular, we note that both classification and reconstruction accuracy peak in the proximity of the N170 component. Further, aggregate data from a larger interval (50–650 ms after stimulus onset) support robust reconstruction results, consistent with the availability of distinct visual information over time. Thus, theoretically, our findings shed light on the time course of face processing while, methodologically, they demonstrate the feasibility of EEG-based image reconstruction.
27
Chang CH, Nemrodov D, Lee ACH, Nestor A. Memory and Perception-based Facial Image Reconstruction. Sci Rep 2017; 7:6499. [PMID: 28747686 PMCID: PMC5529548 DOI: 10.1038/s41598-017-06585-2] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2017] [Accepted: 06/14/2017] [Indexed: 01/25/2023] Open
Abstract
Visual memory for faces has been extensively researched, especially regarding the main factors that influence face memorability. However, what we remember exactly about a face, namely, the pictorial content of visual memory, remains largely unclear. The current work aims to elucidate this issue by reconstructing face images from both perceptual and memory-based behavioural data. Specifically, our work builds upon and further validates the hypothesis that visual memory and perception share a common representational basis underlying facial identity recognition. To this end, we derived facial features directly from perceptual data and then used such features for image reconstruction separately from perception and memory data. Successful levels of reconstruction were achieved in both cases for newly-learned faces as well as for familiar faces retrieved from long-term memory. Theoretically, this work provides insights into the content of memory-based representations while, practically, it may open the path to novel applications, such as computer-based 'sketch artists'.
Affiliation(s)
- Chi-Hsun Chang
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada.
- Dan Nemrodov
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
- Andy C H Lee
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
- Rotman Research Institute, Baycrest Centre, Toronto, Ontario, Canada
- Adrian Nestor
- Department of Psychology at Scarborough, University of Toronto, Toronto, Ontario, Canada
28
Chang L, Tsao DY. The Code for Facial Identity in the Primate Brain. Cell 2017; 169:1013-1028.e14. [PMID: 28575666 DOI: 10.1016/j.cell.2017.05.011] [Citation(s) in RCA: 294] [Impact Index Per Article: 42.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2017] [Revised: 03/29/2017] [Accepted: 05/03/2017] [Indexed: 11/16/2022]
Abstract
Primates recognize complex objects such as faces with remarkable speed and reliability. Here, we reveal the brain's code for facial identity. Experiments in macaques demonstrate an extraordinarily simple transformation between faces and responses of cells in face patches. By formatting faces as points in a high-dimensional linear space, we discovered that each face cell's firing rate is proportional to the projection of an incoming face stimulus onto a single axis in this space, allowing a face cell ensemble to encode the location of any face in the space. Using this code, we could precisely decode faces from neural population responses and predict neural firing rates to faces. Furthermore, this code disavows the long-standing assumption that face cells encode specific facial identities, confirmed by engineering faces with drastically different appearance that elicited identical responses in single face cells. Our work suggests that other objects could be encoded by analogous metric coordinate systems.
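The axis code described here has a simple linear-algebra core. Below is a minimal numpy sketch with synthetic axes and faces (not the recorded data): each model "face cell" fires in proportion to the projection of a face onto its own axis, and the face's coordinates can then be decoded from the population response by least squares.

```python
import numpy as np

rng = np.random.default_rng(3)

# Face space: each face is a point in a 50-d space; each "face cell"
# fires in proportion to the projection of the face onto its own axis.
n_dims, n_cells, n_faces = 50, 200, 30
axes = rng.normal(0, 1, (n_cells, n_dims))   # one preferred axis per cell
faces = rng.normal(0, 1, (n_faces, n_dims))  # faces as points in the space
responses = faces @ axes.T                   # population "firing rates"

# Linear decoding: recover each face's coordinates from the population
# response alone, by least squares against the known axes.
decoded, *_ = np.linalg.lstsq(axes, responses.T, rcond=None)
decoded = decoded.T

err = float(np.max(np.abs(decoded - faces)))
```

Because each cell reads out a single axis, two faces that differ only within that axis's null space elicit identical responses in that cell, which is the property the study exploited to engineer very different-looking faces with matched single-cell responses.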
Affiliation(s)
- Le Chang
- Division of Biology and Biological Engineering, Computation and Neural Systems, Caltech, Pasadena, CA 91125, USA.
- Doris Y Tsao
- Division of Biology and Biological Engineering, Computation and Neural Systems, Caltech, Pasadena, CA 91125, USA; Howard Hughes Medical Institute, Pasadena, CA 91125, USA.
29
Goddard E, Klein C, Solomon SG, Hogendoorn H, Carlson TA. Interpreting the dimensions of neural feature representations revealed by dimensionality reduction. Neuroimage 2017; 180:41-67. [PMID: 28663068 DOI: 10.1016/j.neuroimage.2017.06.068] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/23/2017] [Accepted: 06/23/2017] [Indexed: 10/19/2022] Open
Abstract
Recent progress in understanding the structure of neural representations in the cerebral cortex has centred around the application of multivariate classification analyses to measurements of brain activity. These analyses have proved a sensitive test of whether given brain regions provide information about specific perceptual or cognitive processes. An exciting extension of this approach is to infer the structure of this information, thereby drawing conclusions about the underlying neural representational space. These approaches rely on exploratory data-driven dimensionality reduction to extract the natural dimensions of neural spaces, including natural visual object and scene representations, semantic and conceptual knowledge, and working memory. However, the efficacy of these exploratory methods is unknown, because they have only been applied to representations in brain areas for which we have little or no secondary knowledge. One of the best-understood areas of the cerebral cortex is area MT of primate visual cortex, which is known to be important in motion analysis. To assess the effectiveness of dimensionality reduction for recovering neural representational space we applied several dimensionality reduction methods to multielectrode measurements of spiking activity obtained from area MT of marmoset monkeys, made while systematically varying the motion direction and speed of moving stimuli. Despite robust tuning at individual electrodes, and high classifier performance, dimensionality reduction rarely revealed dimensions for direction and speed. We use this example to illustrate important limitations of these analyses, and suggest a framework for how to best apply such methods to data where the structure of the neural representation is unknown.
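As a concrete instance of the exploratory step this abstract evaluates, the sketch below applies PCA (via SVD) to idealized cosine direction tuning in numpy. The data are synthetic stand-ins for the MT recordings: with perfectly clean cosine tuning, two components capture essentially all of the variance and recover the direction circle, which is exactly the tidy outcome the authors caution real data rarely delivers.

```python
import numpy as np

rng = np.random.default_rng(4)

# Idealized MT-like data: 40 units with cosine direction tuning,
# responses to 16 equally spaced motion directions.
directions = np.linspace(0, 2 * np.pi, 16, endpoint=False)
pref = rng.uniform(0, 2 * np.pi, 40)     # preferred direction per unit
gains = rng.uniform(5, 15, 40)
responses = gains[:, None] * (1 + np.cos(directions[None, :] - pref[:, None]))

# PCA via SVD on the (stimulus x unit) matrix, the usual exploratory
# dimensionality-reduction step.
X = responses.T - responses.T.mean(axis=0)
U, s, Vt = np.linalg.svd(X, full_matrices=False)
explained = s**2 / np.sum(s**2)
```

Adding realistic noise and non-cosine tuning to `responses` quickly blurs this two-dimensional structure, illustrating why robust single-electrode tuning need not translate into interpretable recovered dimensions.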
Affiliation(s)
- Erin Goddard
- McGill Vision Research, Dept of Ophthalmology, McGill University, Montreal, QC, H3G 1A4, Canada; School of Psychology, University of Sydney, Sydney, NSW, 2006, Australia; ARC Centre of Excellence in Cognition and Its Disorders (CCD), Macquarie University, Sydney, NSW, 2109, Australia.
- Colin Klein
- ARC Centre of Excellence in Cognition and Its Disorders (CCD), Macquarie University, Sydney, NSW, 2109, Australia; Department of Philosophy, Macquarie University, Sydney, NSW, 2109, Australia
- Samuel G Solomon
- Department of Experimental Psychology, University College London, Gower Street, London, WC1E 6BT, United Kingdom
- Hinze Hogendoorn
- School of Psychology, University of Sydney, Sydney, NSW, 2006, Australia; Helmholtz Institute, Neuroscience & Cognition Utrecht, Experimental Psychology Division, Utrecht University, Utrecht, The Netherlands
- Thomas A Carlson
- School of Psychology, University of Sydney, Sydney, NSW, 2006, Australia; ARC Centre of Excellence in Cognition and Its Disorders (CCD), Macquarie University, Sydney, NSW, 2109, Australia
30
Abstract
Progress in understanding the relation between brain profiles and emotions is being slowed by the belief in a collection of basic emotional states (fear, anger, joy, disgust, and sadness) that do not specify the species or age of the experiencing agent, the origin of the state, or the evidence used to infer it. This article evaluates critically the premise that decontextualized emotional words refer to natural kinds. It also suggests that investigators set aside the currently popular words and search for relations, in humans and animals, between patterns of measures to varied incentives presented in distinctive contexts.
Affiliation(s)
- Jerome Kagan
- Department of Psychology, Harvard University, USA
31
Spatiotemporal dynamics of similarity-based neural representations of facial identity. Proc Natl Acad Sci U S A 2016; 114:388-393. [PMID: 28028220 DOI: 10.1073/pnas.1614763114] [Citation(s) in RCA: 34] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
Humans' remarkable ability to quickly and accurately discriminate among thousands of highly similar complex objects demands rapid and precise neural computations. To elucidate the process by which this is achieved, we used magnetoencephalography to measure spatiotemporal patterns of neural activity with high temporal resolution during visual discrimination among a large and carefully controlled set of faces. We also compared these neural data to lower level "image-based" and higher level "identity-based" model-based representations of our stimuli and to behavioral similarity judgments of our stimuli. Between ∼50 and 400 ms after stimulus onset, face-selective sources in right lateral occipital cortex and right fusiform gyrus and sources in a control region (left V1) yielded successful classification of facial identity. In all regions, early responses were more similar to the image-based representation than to the identity-based representation. In the face-selective regions only, responses were more similar to the identity-based representation at several time points after 200 ms. Behavioral responses were more similar to the identity-based representation than to the image-based representation, and their structure was predicted by responses in the face-selective regions. These results provide a temporally precise description of the transformation from low- to high-level representations of facial identity in human face-selective cortex and demonstrate that face-selective cortical regions represent multiple distinct types of information about face identity at different times over the first 500 ms after stimulus onset. These results have important implications for understanding the rapid emergence of fine-grained, high-level representations of object identity, a computation essential to human visual expertise.
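The comparison of neural data to "image-based" and "identity-based" model representations in this abstract follows the representational similarity logic, which can be sketched in numpy. Everything below is synthetic and hypothetical: random patterns stand in for the two models, and the "neural" patterns are built as noisy copies of the identity patterns, so the neural dissimilarity structure should correlate more with the identity model than with the unrelated image model.

```python
import numpy as np

rng = np.random.default_rng(5)

def rdm(patterns):
    """Representational dissimilarity matrix: 1 - Pearson r between items."""
    return 1.0 - np.corrcoef(patterns)

def rdm_similarity(a, b):
    """Correlate the vectorized upper triangles of two RDMs."""
    iu = np.triu_indices_from(a, k=1)
    return float(np.corrcoef(a[iu], b[iu])[0, 1])

# Synthetic stand-ins: an "identity" model, an unrelated "image" model,
# and neural patterns generated as noisy copies of the identity patterns.
n_items = 20
identity_patterns = rng.normal(0, 1, (n_items, 50))
image_patterns = rng.normal(0, 1, (n_items, 50))
neural_patterns = identity_patterns + 0.5 * rng.normal(0, 1, (n_items, 50))

identity_model = rdm(identity_patterns)
image_model = rdm(image_patterns)
neural = rdm(neural_patterns)

sim_identity = rdm_similarity(neural, identity_model)
sim_image = rdm_similarity(neural, image_model)
```

Computing such model correlations at each timepoint of the MEG response is what lets a study like this one trace the shift from image-based to identity-based representation over the first few hundred milliseconds.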