1. Schwartz E, Alreja A, Richardson RM, Ghuman A, Anzellotti S. Intracranial Electroencephalography and Deep Neural Networks Reveal Shared Substrates for Representations of Face Identity and Expressions. J Neurosci 2023; 43:4291-4303. [PMID: 37142430] [PMCID: PMC10255163] [DOI: 10.1523/jneurosci.1277-22.2023]
Abstract
According to a classical view of face perception (Bruce and Young, 1986; Haxby et al., 2000), face identity and facial expression recognition are performed by separate neural substrates (ventral and lateral temporal face-selective regions, respectively). However, recent studies challenge this view, showing that expression valence can also be decoded from ventral regions (Skerry and Saxe, 2014; Li et al., 2019), and identity from lateral regions (Anzellotti and Caramazza, 2017). These findings could be reconciled with the classical view if regions specialized for one task (either identity or expression) contain a small amount of information for the other task (that enables above-chance decoding). In this case, we would expect representations in lateral regions to be more similar to representations in deep convolutional neural networks (DCNNs) trained to recognize facial expression than to representations in DCNNs trained to recognize face identity (the converse should hold for ventral regions). We tested this hypothesis by analyzing neural responses to faces varying in identity and expression. Representational dissimilarity matrices (RDMs) computed from human intracranial recordings (n = 11 adults; 7 females) were compared with RDMs from DCNNs trained to label either identity or expression. We found that RDMs from DCNNs trained to recognize identity correlated with intracranial recordings more strongly in all regions tested, even in regions classically hypothesized to be specialized for expression. These results deviate from the classical view, suggesting that face-selective ventral and lateral regions contribute to the representation of both identity and expression.

SIGNIFICANCE STATEMENT Previous work proposed that separate brain regions are specialized for the recognition of face identity and facial expression. However, identity and expression recognition mechanisms might share common brain regions instead. We tested these alternatives using deep neural networks and intracranial recordings from face-selective brain regions. Deep neural networks trained to recognize identity and networks trained to recognize expression learned representations that correlate with neural recordings. Identity-trained representations correlated with intracranial recordings more strongly in all regions tested, including regions hypothesized to be expression specialized in the classical hypothesis. These findings support the view that identity and expression recognition rely on common brain regions. This discovery may require reevaluation of the roles that the ventral and lateral neural pathways play in processing socially relevant stimuli.
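The RDM comparison this abstract describes (rank-correlating a neural RDM with a DCNN-layer RDM) can be sketched in a few lines. This is an illustrative reconstruction with synthetic data, not the authors' code: the array shapes, the correlation-distance metric, and the Spearman comparison are assumptions about a typical representational similarity analysis.

```python
import numpy as np

def rdm(responses):
    """Condensed RDM: correlation distance (1 - Pearson r) for each
    pair of stimulus response patterns (rows = stimuli, cols = features)."""
    r = np.corrcoef(responses)            # stimulus-by-stimulus correlations
    iu = np.triu_indices_from(r, k=1)     # upper triangle = unique pairs
    return 1.0 - r[iu]

def rank(x):
    """1-based ranks (no tie handling; fine for continuous data)."""
    order = np.argsort(x)
    ranks = np.empty_like(order, dtype=float)
    ranks[order] = np.arange(1, len(x) + 1)
    return ranks

def spearman(a, b):
    """Spearman correlation = Pearson correlation of the ranks."""
    return float(np.corrcoef(rank(a), rank(b))[0, 1])

rng = np.random.default_rng(0)
neural = rng.normal(size=(20, 50))    # stand-in for iEEG features per stimulus
dcnn = rng.normal(size=(20, 300))     # stand-in for DCNN layer activations

# Second-order comparison: how similar are the two dissimilarity structures?
rho = spearman(rdm(neural), rdm(dcnn))
print(f"neural-vs-DCNN RDM correlation: rho = {rho:.3f}")
```

With unrelated random data the correlation hovers near zero; the study's claim is that identity-trained DCNN RDMs yield the higher rho in every region tested.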
Affiliation(s)
- Emily Schwartz
- Department of Psychology and Neuroscience, Boston College, Chestnut Hill, Massachusetts 02467
- Arish Alreja
- Center for the Neural Basis of Cognition, Carnegie Mellon University/University of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
- Machine Learning Department, Carnegie Mellon University, Pittsburgh, Pennsylvania 15213
- Department of Neurological Surgery, University of Pittsburgh Medical Center Presbyterian, Pittsburgh, Pennsylvania 15213
- R Mark Richardson
- Department of Neurosurgery, Massachusetts General Hospital, Boston, Massachusetts 02114
- Harvard Medical School, Boston, Massachusetts 02115
- Avniel Ghuman
- Center for the Neural Basis of Cognition, Carnegie Mellon University/University of Pittsburgh, Pittsburgh, Pennsylvania 15213
- Department of Neurological Surgery, University of Pittsburgh Medical Center Presbyterian, Pittsburgh, Pennsylvania 15213
- Center for Neuroscience, University of Pittsburgh, Pittsburgh, Pennsylvania 15260
- Stefano Anzellotti
- Department of Psychology and Neuroscience, Boston College, Chestnut Hill, Massachusetts 02467
2. Kanwisher N, Khosla M, Dobs K. Using artificial neural networks to ask 'why' questions of minds and brains. Trends Neurosci 2023; 46:240-254. [PMID: 36658072] [DOI: 10.1016/j.tins.2022.12.008]
Abstract
Neuroscientists have long characterized the properties and functions of the nervous system, and are increasingly succeeding in answering how brains perform the tasks they do. But the question of why brains work the way they do is asked less often. The new ability to optimize artificial neural networks (ANNs) for performance on human-like tasks now enables us to approach these 'why' questions by asking when the properties of networks optimized for a given task mirror the behavioral and neural characteristics of humans performing the same task. Here we highlight the recent success of this strategy in explaining why the visual and auditory systems work the way they do, at both behavioral and neural levels.
Affiliation(s)
- Nancy Kanwisher
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Meenakshi Khosla
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, USA; McGovern Institute for Brain Research, Massachusetts Institute of Technology, Cambridge, MA, USA
- Katharina Dobs
- Department of Psychology, Justus Liebig University Giessen, Giessen, Germany; Center for Mind, Brain and Behavior (CMBB), University of Marburg and Justus Liebig University, Giessen, Germany
3. Schwartz E, O’Nell K, Saxe R, Anzellotti S. Challenging the Classical View: Recognition of Identity and Expression as Integrated Processes. Brain Sci 2023; 13:296. [PMID: 36831839] [PMCID: PMC9954353] [DOI: 10.3390/brainsci13020296]
Abstract
Recent neuroimaging evidence challenges the classical view that face identity and facial expression are processed by segregated neural pathways, showing that information about identity and expression is encoded within common brain regions. This article tests the hypothesis that integrated representations of identity and expression arise spontaneously within deep neural networks. A subset of the CelebA dataset is used to train a deep convolutional neural network (DCNN) to label face identity (chance = 0.06%, accuracy = 26.5%), and the FER2013 dataset is used to train a DCNN to label facial expression (chance = 14.2%, accuracy = 63.5%). The identity-trained and expression-trained networks each successfully transfer to labeling both face identity and facial expression on the Karolinska Directed Emotional Faces dataset. This study demonstrates that DCNNs trained to recognize face identity and DCNNs trained to recognize facial expression spontaneously develop representations of facial expression and face identity, respectively. Furthermore, a congruence coefficient analysis reveals that features distinguishing between identities and features distinguishing between expressions become increasingly orthogonal from layer to layer, suggesting that deep neural networks disentangle representational subspaces corresponding to different sources.
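The congruence coefficient mentioned in the final sentence has a simple closed form (Tucker's phi, an uncentered cosine similarity). A minimal sketch with toy feature directions, not the paper's code; the variable names are purely illustrative:

```python
import numpy as np

def congruence(x, y):
    """Tucker's congruence coefficient: an uncentered cosine.
    1 = identical direction, 0 = orthogonal feature subspaces."""
    return float(x @ y / np.sqrt((x @ x) * (y @ y)))

# Toy check: an identity-distinguishing direction and an orthogonal
# expression-distinguishing direction give a coefficient of 0.
identity_dir = np.array([1.0, 1.0, 0.0, 0.0])
expression_dir = np.array([0.0, 0.0, 1.0, -1.0])
print(congruence(identity_dir, identity_dir))    # 1.0
print(congruence(identity_dir, expression_dir))  # 0.0
```

Under this measure, "increasingly orthogonal from layer to layer" means the coefficient between identity-discriminating and expression-discriminating feature directions trends toward zero in deeper layers.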
Affiliation(s)
- Emily Schwartz
- Department of Psychology and Neuroscience, Boston College, Boston, MA 02467, USA
- Kathryn O’Nell
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH 03755, USA
- Rebecca Saxe
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA 02139, USA
- Stefano Anzellotti
- Department of Psychology and Neuroscience, Boston College, Boston, MA 02467, USA
4. Representational structure of fMRI/EEG responses to dynamic facial expressions. Neuroimage 2022; 263:119631. [PMID: 36113736] [DOI: 10.1016/j.neuroimage.2022.119631]
Abstract
Face perception provides an excellent example of how the brain processes nuanced visual differences and transforms them into behaviourally useful representations of identities and emotional expressions. While a body of literature has looked into the spatial and temporal neural processing of facial expressions, few studies have used a dimensionally varying set of stimuli containing subtle perceptual changes. In the current study, we used 48 short videos varying dimensionally in expression intensity and category (happy, angry, surprised). We measured both fMRI and EEG responses to these video clips and compared the neural response patterns to the predictions of models based on image features and models derived from behavioural ratings of the stimuli. In fMRI, the inferior frontal gyrus face area (IFG-FA) carried information related only to the intensity of the expression, independent of image-based models. The superior temporal sulcus (STS), inferior temporal (IT) and lateral occipital (LO) areas contained information about both expression category and intensity. In the EEG, the coding of expression category and low-level image features was most pronounced at around 400 ms. The expression intensity model did not, however, correlate significantly at any EEG timepoint. Our results show a specific role for IFG-FA in the coding of expressions and suggest that it contains image- and category-invariant representations of expression intensity.
5. Ramezanpour H, Fallah M. The role of temporal cortex in the control of attention. Curr Res Neurobiol 2022; 3:100038. [PMID: 36685758] [PMCID: PMC9846471] [DOI: 10.1016/j.crneur.2022.100038]
Abstract
Attention is an indispensable component of active vision. Contrary to the widely accepted notion that temporal cortex processing primarily focusses on passive object recognition, a series of very recent studies emphasize the role of temporal cortex structures, specifically the superior temporal sulcus (STS) and inferotemporal (IT) cortex, in guiding attention and implementing cognitive programs relevant for behavioral tasks. The goal of this theoretical paper is to advance the hypothesis that the temporal cortex attention network (TAN) entails necessary components to actively participate in attentional control in a flexible task-dependent manner. First, we will briefly discuss the general architecture of the temporal cortex with a focus on the STS and IT cortex of monkeys and their modulation with attention. Then we will review evidence from behavioral and neurophysiological studies that support their guidance of attention in the presence of cognitive control signals. Next, we propose a mechanistic framework for executive control of attention in the temporal cortex. Finally, we summarize the role of temporal cortex in implementing cognitive programs and discuss how they contribute to the dynamic nature of visual attention to ensure flexible behavior.
Affiliation(s)
- Hamidreza Ramezanpour
- Centre for Vision Research, York University, Toronto, Ontario, Canada; School of Kinesiology and Health Science, Faculty of Health, York University, Toronto, Ontario, Canada; VISTA: Vision Science to Application, York University, Toronto, Ontario, Canada
- Mazyar Fallah
- Centre for Vision Research, York University, Toronto, Ontario, Canada; School of Kinesiology and Health Science, Faculty of Health, York University, Toronto, Ontario, Canada; VISTA: Vision Science to Application, York University, Toronto, Ontario, Canada; Department of Psychology, Faculty of Health, York University, Toronto, Ontario, Canada; Department of Human Health and Nutritional Sciences, College of Biological Science, University of Guelph, Guelph, Ontario, Canada
6. The neural coding of face and body orientation in occipitotemporal cortex. Neuroimage 2021; 246:118783. [PMID: 34879251] [DOI: 10.1016/j.neuroimage.2021.118783]
Abstract
Face and body orientation convey important information for us to understand other people's actions, intentions and social interactions. It has been shown that several occipitotemporal areas respond differently to faces or bodies of different orientations. However, whether face and body orientation are processed by partially overlapping or completely separate brain networks remains unclear, as the neural coding of face and body orientation is often investigated separately. Here, we recorded participants' brain activity using fMRI while they viewed faces and bodies shown from three different orientations, while attending to either orientation or identity information. Using multivoxel pattern analysis we investigated which brain regions process face and body orientation respectively, and which regions encode both face and body orientation in a stimulus-independent manner. We found that patterns of neural responses evoked by different stimulus orientations in the occipital face area, extrastriate body area, lateral occipital complex and right early visual cortex could generalise across faces and bodies, suggesting a stimulus-independent encoding of person orientation in occipitotemporal cortex. This finding was consistent across functionally defined regions of interest and a whole-brain searchlight approach. The fusiform face area responded to face but not body orientation, suggesting that orientation responses in this area are face-specific. Moreover, neural responses to orientation were remarkably consistent regardless of whether participants attended to the orientation of faces and bodies or not. Together, these results demonstrate that face and body orientation are processed in a partially overlapping brain network, with a stimulus-independent neural code for face and body orientation in occipitotemporal cortex.
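The cross-generalisation logic in this abstract (a decoder trained on face orientations that succeeds on body orientations implies a stimulus-independent orientation code) can be illustrated with a toy nearest-centroid decoder on synthetic voxel patterns. The shapes, noise level, and classifier choice are assumptions for illustration, not the study's actual MVPA pipeline:

```python
import numpy as np

def crossdecode(train_X, train_y, test_X, test_y):
    """Nearest-centroid decoder: fit orientation centroids on one stimulus
    class (e.g., faces), test on the other (e.g., bodies). Above-chance
    accuracy implies a shared orientation code."""
    classes = np.unique(train_y)
    centroids = np.stack([train_X[train_y == c].mean(axis=0) for c in classes])
    dists = np.linalg.norm(test_X[:, None, :] - centroids[None, :, :], axis=2)
    return float(np.mean(classes[np.argmin(dists, axis=1)] == test_y))

rng = np.random.default_rng(1)
orientations = rng.normal(size=(3, 40))   # shared orientation signal, 40 "voxels"

def patterns():
    """Synthetic trial patterns: 20 noisy trials per orientation."""
    return np.repeat(orientations, 20, axis=0) + rng.normal(scale=0.5, size=(60, 40))

labels = np.repeat(np.arange(3), 20)
face_X, body_X = patterns(), patterns()   # stand-ins for face and body runs
acc = crossdecode(face_X, labels, body_X, labels)
print(f"cross-stimulus decoding accuracy: {acc:.2f} (chance = 0.33)")
```

Because the two stimulus sets here share the same underlying orientation signal, accuracy lands well above the 1/3 chance level; removing the shared signal would drive it back to chance.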
7. Qiu S, Mei G. Spontaneous recovery of adaptation aftereffects of natural facial categories. Vision Res 2021; 188:202-210. [PMID: 34365177] [DOI: 10.1016/j.visres.2021.07.015]
Abstract
Adaptation to a natural face attribute such as a happy face can bias the perception of a subsequent face in this dimension, such as a neutral face. Such face adaptation aftereffects have been widely found in many natural facial categories. However, how temporally tuned mechanisms could control the temporal dynamics of natural face adaptation aftereffects remains unknown. To address this question, we used a deadaptation paradigm to examine whether the spontaneous recovery of natural facial aftereffects would emerge in four natural facial categories, including variable categories (emotional expressions in Experiment 1 and eye gaze in Experiment 2) and invariable categories (facial gender in Experiment 3 and facial identity in Experiment 4). In the deadaptation paradigm, participants adapted to a face with an extreme attribute (such as a 100% angry face in Experiment 1) for a relatively long duration, and then deadapted to a face with an opposite extreme attribute (such as a 100% happy face in Experiment 1) for a relatively short duration. The time courses of face adaptation aftereffects were measured in a top-up manner. Deadaptation only masked the effects of initial longer-lasting adaptation, and the spontaneous recovery of adaptation aftereffects was observed at the post-test stage for all four natural facial categories. These results likely indicate that the temporal dynamics of adaptation aftereffects of natural facial categories may be controlled by multiple temporally tuned mechanisms.
Affiliation(s)
- Shiming Qiu
- School of Psychology, Guizhou Normal University, Guiyang, PR China
- Gaoxing Mei
- School of Psychology, Guizhou Normal University, Guiyang, PR China
8. Richardson H, Taylor J, Kane-Grade F, Powell L, Bosquet Enlow M, Nelson C. Preferential responses to faces in superior temporal and medial prefrontal cortex in three-year-old children. Dev Cogn Neurosci 2021; 50:100984. [PMID: 34246062] [PMCID: PMC8274289] [DOI: 10.1016/j.dcn.2021.100984]
Abstract
Perceiving faces and understanding emotions are key components of human social cognition. Prior research with adults and infants suggests that these social cognitive functions are supported by superior temporal cortex (STC) and medial prefrontal cortex (MPFC). We used functional near-infrared spectroscopy (fNIRS) to characterize functional responses in these cortical regions to faces in early childhood. Three-year-old children (n = 88, M(SD) = 3.15(.16) years) passively viewed faces that varied in emotional content and valence (happy, angry, fearful, neutral) and, for fearful and angry faces, intensity (100%, 40%), while undergoing fNIRS. Bilateral STC and MPFC showed greater oxygenated hemoglobin concentration values to all faces relative to objects. MPFC additionally responded preferentially to happy faces relative to neutral faces. We did not detect preferential responses to angry or fearful faces, or overall differences in response magnitude by emotional valence (100% happy vs. fearful and angry) or intensity (100% vs. 40% fearful and angry). In exploratory analyses, preferential responses to faces in MPFC were not robustly correlated with performance on tasks of early social cognition. These results link and extend adult and infant research on functional responses to faces in STC and MPFC and contribute to the characterization of the neural correlates of early social cognition.
Affiliation(s)
- H. Richardson
- Department of Pediatrics, Boston Children’s Hospital, United States
- Department of Pediatrics, Harvard Medical School, United States
- School of Philosophy, Psychology and Language Sciences, University of Edinburgh, United Kingdom
- J. Taylor
- Department of Pediatrics, Boston Children’s Hospital, United States
- Department of Pediatrics, Harvard Medical School, United States
- F. Kane-Grade
- Department of Pediatrics, Boston Children’s Hospital, United States
- Department of Pediatrics, Harvard Medical School, United States
- Institute of Child Development, University of Minnesota, United States
- L. Powell
- Department of Psychology, University of California San Diego, United States
- M. Bosquet Enlow
- Department of Psychiatry, Boston Children’s Hospital, United States
- Department of Psychiatry, Harvard Medical School, United States
- C.A. Nelson
- Department of Pediatrics, Boston Children’s Hospital, United States
- Department of Pediatrics, Harvard Medical School, United States
- Graduate School of Education, Harvard University, United States
9. Foster C, Zhao M, Bolkart T, Black MJ, Bartels A, Bülthoff I. Separated and overlapping neural coding of face and body identity. Hum Brain Mapp 2021; 42:4242-4260. [PMID: 34032361] [PMCID: PMC8356992] [DOI: 10.1002/hbm.25544]
Abstract
Recognising a person's identity often relies on face and body information, and is tolerant to changes in low-level visual input (e.g., viewpoint changes). Previous studies have suggested that face identity is disentangled from low-level visual input in the anterior face-responsive regions. It remains unclear which regions disentangle body identity from variations in viewpoint, and whether face and body identity are encoded separately or combined into a coherent person identity representation. We trained participants to recognise three identities, and then recorded their brain activity using fMRI while they viewed face and body images of these three identities from different viewpoints. Participants' task was to respond to either the stimulus identity or viewpoint. We found consistent decoding of body identity across viewpoint in the fusiform body area, right anterior temporal cortex, middle frontal gyrus and right insula. This finding demonstrates a similar function of fusiform and anterior temporal cortex for bodies as has previously been shown for faces, suggesting these regions may play a general role in extracting high-level identity information. Moreover, we could decode identity across fMRI activity evoked by faces and bodies in the early visual cortex, right inferior occipital cortex, right parahippocampal cortex and right superior parietal cortex, revealing a distributed network that encodes person identity abstractly. Lastly, identity decoding was consistently better when participants attended to identity, indicating that attention to identity enhances its neural representation. These results offer new insights into how the brain develops an abstract neural coding of person identity, shared by faces and bodies.
Affiliation(s)
- Celia Foster
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Max Planck Institute for Intelligent Systems, Tübingen, Germany; International Max Planck Research School for Cognitive and Systems Neuroscience, University of Tübingen, Tübingen, Germany
- Mintao Zhao
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany; School of Psychology, University of East Anglia, UK
- Timo Bolkart
- Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Michael J Black
- Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Andreas Bartels
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Centre for Integrative Neuroscience, Tübingen, Germany; Department of Psychology, University of Tübingen, Germany; Bernstein Center for Computational Neuroscience, Tübingen, Germany
10. Dowdle LT, Ghose G, Ugurbil K, Yacoub E, Vizioli L. Clarifying the role of higher-level cortices in resolving perceptual ambiguity using ultra high field fMRI. Neuroimage 2021; 227:117654. [PMID: 33333319] [PMCID: PMC10614695] [DOI: 10.1016/j.neuroimage.2020.117654]
Abstract
The brain is organized into distinct, flexible networks. Within these networks, cognitive variables such as attention can modulate sensory representations in accordance with moment-to-moment behavioral requirements. These modulations can be studied by varying task demands; however, the tasks employed are often incongruent with the postulated functions of a sensory system, limiting the characterization of the system in relation to natural behaviors. Here we combine domain-specific task manipulations and ultra-high field fMRI to study the nature of top-down modulations. We exploited faces, a visual category underpinned by a complex cortical network, and instructed participants to perform either a stimulus-relevant/domain-specific or a stimulus-irrelevant task in the scanner. We found that (1) perceptual ambiguity (i.e., difficulty of achieving a stable percept) is encoded in top-down modulations from higher-level cortices, and (2) the right inferior temporal lobe is active under challenging conditions and uniquely encodes trial-by-trial variability in face perception.
Affiliation(s)
- Logan T Dowdle
- Center for Magnetic Resonance Research, University of Minnesota, 2021 6th St SE, Minneapolis, MN 55455, United States; Department of Neuroscience, University of Minnesota, 321 Church St SE, Minneapolis, MN 55455
- Geoffrey Ghose
- Center for Magnetic Resonance Research, University of Minnesota, 2021 6th St SE, Minneapolis, MN 55455, United States; Department of Neuroscience, University of Minnesota, 321 Church St SE, Minneapolis, MN 55455
- Kamil Ugurbil
- Center for Magnetic Resonance Research, University of Minnesota, 2021 6th St SE, Minneapolis, MN 55455, United States
- Essa Yacoub
- Center for Magnetic Resonance Research, University of Minnesota, 2021 6th St SE, Minneapolis, MN 55455, United States
- Luca Vizioli
- Center for Magnetic Resonance Research, University of Minnesota, 2021 6th St SE, Minneapolis, MN 55455, United States; Department of Neurosurgery, University of Minnesota, 500 SE Harvard St, Minneapolis, MN 55455
11. Hendriks MHA, Dillen C, Vettori S, Vercammen L, Daniels N, Steyaert J, Op de Beeck H, Boets B. Neural processing of facial identity and expression in adults with and without autism: A multi-method approach. Neuroimage Clin 2020; 29:102520. [PMID: 33338966] [PMCID: PMC7750419] [DOI: 10.1016/j.nicl.2020.102520]
Abstract
The ability to recognize faces and facial expressions is a common human talent. It has, however, been suggested to be impaired in individuals with autism spectrum disorder (ASD). The goal of this study was to compare the processing of facial identity and emotion between individuals with ASD and neurotypicals (NTs). Behavioural and functional magnetic resonance imaging (fMRI) data from 46 young adults (aged 17-23 years; N(ASD) = 22, N(NT) = 24) were analysed. During fMRI data acquisition, participants discriminated between short clips of a face transitioning from a neutral to an emotional expression. Stimuli included four identities and six emotions. We performed behavioural, univariate, multi-voxel, adaptation and functional connectivity analyses to investigate potential group differences. The ASD group did not differ from the NT group on behavioural identity and expression processing tasks. At the neural level, we found no differences in average neural activation, neural activation patterns and neural adaptation to faces in face-related brain regions. In terms of functional connectivity, we found that the amygdala seems to be more strongly connected to inferior occipital cortex and V1 in individuals with ASD. Overall, the findings indicate that neural representations of facial identity and expression have a similar quality in individuals with and without ASD, but some regions containing these representations are connected differently in the extended face processing network.
Affiliation(s)
- Michelle H A Hendriks
- Department of Brain and Cognition, KU Leuven, Tiensestraat 102 - bus 3714, Leuven, Belgium; Leuven Autism Research Consortium, KU Leuven, Leuven, Belgium
- Claudia Dillen
- Department of Brain and Cognition, KU Leuven, Tiensestraat 102 - bus 3714, Leuven, Belgium; Leuven Autism Research Consortium, KU Leuven, Leuven, Belgium
- Sofie Vettori
- Centre for Developmental Psychiatry, KU Leuven, Kapucijnenvoer 7 blok h - bus 7001, Leuven, Belgium; Leuven Autism Research Consortium, KU Leuven, Leuven, Belgium
- Laura Vercammen
- Department of Brain and Cognition, KU Leuven, Tiensestraat 102 - bus 3714, Leuven, Belgium
- Nicky Daniels
- Department of Brain and Cognition, KU Leuven, Tiensestraat 102 - bus 3714, Leuven, Belgium; Centre for Developmental Psychiatry, KU Leuven, Kapucijnenvoer 7 blok h - bus 7001, Leuven, Belgium; Leuven Autism Research Consortium, KU Leuven, Leuven, Belgium
- Jean Steyaert
- Centre for Developmental Psychiatry, KU Leuven, Kapucijnenvoer 7 blok h - bus 7001, Leuven, Belgium; Leuven Autism Research Consortium, KU Leuven, Leuven, Belgium
- Hans Op de Beeck
- Department of Brain and Cognition, KU Leuven, Tiensestraat 102 - bus 3714, Leuven, Belgium; Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Bart Boets
- Centre for Developmental Psychiatry, KU Leuven, Kapucijnenvoer 7 blok h - bus 7001, Leuven, Belgium; Leuven Brain Institute, KU Leuven, Leuven, Belgium; Leuven Autism Research Consortium, KU Leuven, Leuven, Belgium
12. Kovács G. Getting to Know Someone: Familiarity, Person Recognition, and Identification in the Human Brain. J Cogn Neurosci 2020; 32:2205-2225. [DOI: 10.1162/jocn_a_01627]
Abstract
In our everyday life, we continuously get to know people, dominantly through their faces. Several neuroscientific experiments showed that familiarization changes the behavioral processing and underlying neural representation of faces of others. Here, we propose a model of the process of how we actually get to know someone. First, the purely visual familiarization of unfamiliar faces occurs. Second, the accumulation of associated, nonsensory information refines person representation, and finally, one reaches a stage where the effortless identification of very well-known persons occurs. We offer here an overview of neuroimaging studies, first evaluating how and in what ways the processing of unfamiliar and familiar faces differs and, second, by analyzing the fMRI adaptation and multivariate pattern analysis results we estimate where identity-specific representation is found in the brain. The available neuroimaging data suggest that different aspects of the information emerge gradually as one gets more and more familiar with a person within the same network. We propose a novel model of familiarity and identity processing, where the differential activation of long-term memory and emotion processing areas is essential for correct identification.
13. Jiahui G, Yang H, Duchaine B. Attentional modulation differentially affects ventral and dorsal face areas in both normal participants and developmental prosopagnosics. Cogn Neuropsychol 2020; 37:482-493. [PMID: 32490718 DOI: 10.1080/02643294.2020.1765753]
Abstract
Face-selective cortical areas can be divided into a ventral stream and a dorsal stream. Previous findings indicate that selective attention to particular aspects of faces has different effects on the two streams. To better understand the organization of the face network and whether deficits in attentional modulation contribute to developmental prosopagnosia (DP), we assessed the effect of selective attention to different face aspects across eight face-selective areas. In normal participants, ROIs in the ventral pathway (OFA, FFA) responded strongly when attention was directed to either identity or expression, whereas ROIs in the dorsal pathway (pSTS-FA, IFG-FA) responded most when attention was directed to facial expression. Response profiles generated by attention to different face aspects were comparable in DPs and normal participants. Our results demonstrate that attentional modulation affects the ventral and dorsal stream face areas differently and indicate that deficits in attentional modulation do not contribute to DP.
Affiliation(s)
- Guo Jiahui
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Hua Yang
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
- Bradley Duchaine
- Department of Psychological and Brain Sciences, Dartmouth College, Hanover, NH, USA
14. Guo K, Calver L, Soornack Y, Bourke P. Valence-dependent Disruption in Processing of Facial Expressions of Emotion in Early Visual Cortex—A Transcranial Magnetic Stimulation Study. J Cogn Neurosci 2020; 32:906-916. [DOI: 10.1162/jocn_a_01520]
Abstract
Our visual inputs are often entangled with affective meanings in natural vision, implying the existence of extensive interaction between visual and emotional processing. However, little is known about the neural mechanism underlying such interaction. This exploratory transcranial magnetic stimulation (TMS) study examined the possible involvement of the early visual cortex (EVC, Area V1/V2/V3) in perceiving facial expressions of different emotional valences. Across three experiments, single-pulse TMS was delivered at different time windows (50–150 msec) after a brief 10-msec onset of face images, and participants reported the visibility and perceived emotional valence of the faces. Interestingly, earlier TMS at ∼90 msec only reduced face visibility irrespective of the displayed expression, but later TMS at ∼120 msec selectively disrupted the recognition of negative facial expressions, indicating the involvement of EVC in the processing of negative expressions at a later time window, possibly beyond the initial processing of feed-forward facial structure information. The observed TMS effect was further modulated by individuals' anxiety level: TMS at ∼110–120 msec disrupted the recognition of anger significantly more for those scoring relatively low in trait anxiety than for high scorers, suggesting that cognitive bias influences the processing of facial expressions in EVC. Taken together, it seems that EVC is involved in the structural encoding of (at least) negative facial emotional valence, such as fear and anger, possibly under modulation from higher cortical areas.
15. Spatio-temporal dynamics of face perception. Neuroimage 2020; 209:116531. [DOI: 10.1016/j.neuroimage.2020.116531]
16. Borowiak K, Maguinness C, von Kriegstein K. Dorsal-movement and ventral-form regions are functionally connected during visual-speech recognition. Hum Brain Mapp 2020; 41:952-972. [PMID: 31749219 PMCID: PMC7267922 DOI: 10.1002/hbm.24852]
Abstract
Faces convey social information such as emotion and speech. Facial emotion processing is supported via interactions between dorsal-movement and ventral-form visual cortex regions. Here, we explored, for the first time, whether similar dorsal-ventral interactions (assessed via functional connectivity), might also exist for visual-speech processing. We then examined whether altered dorsal-ventral connectivity is observed in adults with high-functioning autism spectrum disorder (ASD), a disorder associated with impaired visual-speech recognition. We acquired functional magnetic resonance imaging (fMRI) data with concurrent eye tracking in pairwise matched control and ASD participants. In both groups, dorsal-movement regions in the visual motion area 5 (V5/MT) and the temporal visual speech area (TVSA) were functionally connected to ventral-form regions (i.e., the occipital face area [OFA] and the fusiform face area [FFA]) during the recognition of visual speech, in contrast to the recognition of face identity. Notably, parts of this functional connectivity were decreased in the ASD group compared to the controls (i.e., right V5/MT-right OFA, left TVSA-left FFA). The results confirmed our hypothesis that functional connectivity between dorsal-movement and ventral-form regions exists during visual-speech processing. Its partial dysfunction in ASD might contribute to difficulties in the recognition of dynamic face information relevant for successful face-to-face communication.
Affiliation(s)
- Kamila Borowiak
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Berlin School of Mind and Brain, Humboldt University of Berlin, Berlin, Germany
- Corrina Maguinness
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Katharina von Kriegstein
- Chair of Cognitive and Clinical Neuroscience, Faculty of Psychology, Technische Universität Dresden, Dresden, Germany
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
17. Foster C, Zhao M, Romero J, Black MJ, Mohler BJ, Bartels A, Bülthoff I. Decoding subcategories of human bodies from both body- and face-responsive cortical regions. Neuroimage 2019; 202:116085. [PMID: 31401238 DOI: 10.1016/j.neuroimage.2019.116085]
Abstract
Our visual system can easily categorize objects (e.g. faces vs. bodies) and further differentiate them into subcategories (e.g. male vs. female). This ability is particularly important for objects of social significance, such as human faces and bodies. While many studies have demonstrated category selectivity to faces and bodies in the brain, how subcategories of faces and bodies are represented remains unclear. Here, we investigated how the brain encodes two prominent subcategories shared by both faces and bodies, sex and weight, and whether neural responses to these subcategories rely on low-level visual, high-level visual or semantic similarity. We recorded brain activity with fMRI while participants viewed faces and bodies that varied in sex, weight, and image size. The results showed that the sex of bodies can be decoded from both body- and face-responsive brain areas, with the former exhibiting more consistent size-invariant decoding than the latter. Body weight could also be decoded in face-responsive areas and in distributed body-responsive areas, and this decoding was also invariant to image size. The weight of faces could be decoded from the fusiform body area (FBA), and weight could be decoded across face and body stimuli in the extrastriate body area (EBA) and a distributed body-responsive area. The sex of well-controlled faces (e.g. excluding hairstyles) could not be decoded from face- or body-responsive regions. These results demonstrate that both face- and body-responsive brain regions encode information that can distinguish the sex and weight of bodies. Moreover, the neural patterns corresponding to sex and weight were invariant to image size and could sometimes generalize across face and body stimuli, suggesting that such subcategorical information is encoded with a high-level visual or semantic code.
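The size-invariance test described in this abstract — train a classifier on response patterns evoked at one image size and test it on patterns from the other — can be sketched in a few lines. Everything below is simulated: the dimensions, the shared weight vector `w`, and the helper `simulate` are illustrative assumptions, not the authors' pipeline.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression

rng = np.random.default_rng(0)
n_trials, n_vox = 80, 200

# Hypothetical size-invariant "sex" code shared by both image-size conditions
w = rng.standard_normal(n_vox)

def simulate(n):
    """Simulate labelled voxel patterns for one image-size condition."""
    y = rng.integers(0, 2, n)
    X = rng.standard_normal((n, n_vox)) + np.outer(y - 0.5, w)
    return X, y

X_small, y_small = simulate(n_trials)   # e.g., small-image trials
X_large, y_large = simulate(n_trials)   # large-image trials

# Cross-decoding: fit on one size, evaluate on the other
clf = LogisticRegression(max_iter=1000).fit(X_small, y_small)
acc = clf.score(X_large, y_large)       # above chance => size-invariant information
```

If the two conditions did not share an underlying code, cross-decoding would fall to chance even when within-condition decoding succeeds — which is why the generalization test, not raw decoding accuracy, licenses the "high-level or semantic code" conclusion.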
Affiliation(s)
- Celia Foster
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Max Planck Institute for Intelligent Systems, Tübingen, Germany; Centre for Integrative Neuroscience, Tübingen, Germany.
- Mintao Zhao
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany; School of Psychology, University of East Anglia, UK
- Javier Romero
- Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Michael J Black
- Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Betty J Mohler
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Max Planck Institute for Intelligent Systems, Tübingen, Germany
- Andreas Bartels
- Max Planck Institute for Biological Cybernetics, Tübingen, Germany; Centre for Integrative Neuroscience, Tübingen, Germany; Department of Psychology, University of Tübingen, Germany; Bernstein Center for Computational Neuroscience, Tübingen, Germany
18. Kim H, Kim G, Lee SH. Effects of individuation and categorization on face representations in the visual cortex. Neurosci Lett 2019; 708:134344. [PMID: 31228596 DOI: 10.1016/j.neulet.2019.134344]
Abstract
The human faculty of distinguishing thousands of faces contributes critically to face identification and to our social interactions. While prior studies have revealed the involvement of the fusiform face area (FFA) in the individuation of faces, there are also reports suggesting that the responses of the FFA are flexible depending on the task. Here, we used functional magnetic resonance imaging (fMRI) to investigate whether the specificity of neural responses in the FFA for individual faces depends on the need for individuation. We found that individual face images could be decoded from response patterns of the FFA when individuation was required for the task, but not when only categorization according to a common feature such as race or gender was necessary. These results suggest that the specificity of neural responses for individual faces in the FFA is flexible, depending on the behavioral goal of face individuation.
Affiliation(s)
- Hyehyeon Kim
- Department of Bio and Brain Engineering, College of Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- Gayoung Kim
- Department of Bio and Brain Engineering, College of Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea
- Sue-Hyun Lee
- Department of Bio and Brain Engineering, College of Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea; Program of Brain and Cognitive Engineering, College of Engineering, Korea Advanced Institute of Science and Technology (KAIST), Daejeon, Republic of Korea.
19. Smith FW, Smith ML. Decoding the dynamic representation of facial expressions of emotion in explicit and incidental tasks. Neuroimage 2019; 195:261-271. [PMID: 30940611 DOI: 10.1016/j.neuroimage.2019.03.065]
Abstract
Faces transmit a wealth of important social signals. While previous studies have elucidated the network of cortical regions important for the perception of facial expression, along with associated temporal components such as the P100, N170 and EPN, it is still unclear how task constraints may shape the representation of facial expression (or other face categories) in these networks. In the present experiment, we used multivariate pattern analysis (MVPA) with EEG to investigate the neural information available across time about two important face categories (expression and identity) when those categories are perceived under either explicit (e.g., decoding facial expression category from the EEG when the task is on expression) or incidental task contexts (e.g., decoding facial expression category from the EEG when the task is on identity). Decoding of both face categories, across both task contexts, peaked in time windows spanning 91-170 ms (across posterior electrodes). Peak decoding of expression, however, was not affected by task context, whereas peak decoding of identity was significantly reduced under incidental processing conditions. In addition, errors in EEG decoding correlated with errors in behavioral categorization under explicit processing for both expression and identity; under incidental conditions, only errors in EEG decoding of expression correlated with behavior. Furthermore, decoding time courses and the spatial pattern of informative electrodes showed consistently better decoding of identity under explicit conditions at later time periods, with weak evidence for similar effects for decoding of expression at isolated time windows.
Taken together, these results reveal differences and commonalities in the processing of face categories under explicit vs. incidental task contexts and suggest that facial expressions are processed to a richer degree under incidental processing conditions, consistent with prior work indicating the relative automaticity with which emotion is processed. Our work further demonstrates the utility of applying multivariate decoding analyses to EEG for revealing the dynamics of face perception.
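The time-resolved MVPA logic described in this abstract — fit and cross-validate a classifier independently at every time point — can be sketched as follows. The data are simulated (the sensor count, trial count, and injected signal window are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_trials, n_channels, n_times = 120, 64, 50
X = rng.standard_normal((n_trials, n_channels, n_times))  # trials x sensors x time
y = rng.integers(0, 2, n_trials)                          # two face categories
X[y == 1, :10, 20:30] += 0.8                              # inject signal in one time window

# Decode the category separately at each time point (5-fold cross-validation)
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(n_times)
])
```

Plotting `accuracy` against time yields the familiar decoding time course; here it rises above chance only in the window where signal was injected, which is how peak-decoding latencies like the 91-170 ms windows above are read off.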
Affiliation(s)
- Fraser W Smith
- School of Psychology, University of East Anglia, Norwich, UK.
- Marie L Smith
- School of Psychological Sciences, Birkbeck College, University of London, London, UK
20. Dobs K, Isik L, Pantazis D, Kanwisher N. How face perception unfolds over time. Nat Commun 2019; 10:1258. [PMID: 30890707 PMCID: PMC6425020 DOI: 10.1038/s41467-019-09239-1]
Abstract
Within a fraction of a second of viewing a face, we have already determined its gender, age and identity. A full understanding of this remarkable feat will require a characterization of the computational steps it entails, along with the representations extracted at each. Here, we used magnetoencephalography (MEG) to measure the time course of neural responses to faces, thereby addressing two fundamental questions about how face processing unfolds over time. First, using representational similarity analysis, we found that facial gender and age information emerged before identity information, suggesting a coarse-to-fine processing of face dimensions. Second, identity and gender representations of familiar faces were enhanced very early on, suggesting that the behavioral benefit for familiar faces results from tuning of early feed-forward processing mechanisms. These findings start to reveal the time course of face processing in humans, and provide powerful new constraints on computational theories of face perception.
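The representational similarity analysis mentioned here compares condition-by-condition dissimilarity matrices (RDMs) between neural data and candidate models. A minimal NumPy-only sketch, with made-up patterns standing in for MEG sensor data and model features (the condition and feature counts are arbitrary assumptions):

```python
import numpy as np

rng = np.random.default_rng(1)
n_cond = 16  # face conditions (e.g., identities crossed with gender and age)

def rdm(patterns):
    """Correlation-distance RDM, upper triangle flattened to a vector."""
    r = np.corrcoef(patterns)                    # condition x condition correlations
    iu = np.triu_indices(len(patterns), k=1)
    return 1.0 - r[iu]

neural = rng.standard_normal((n_cond, 306))      # stand-in sensor patterns, one time point
model = neural @ rng.standard_normal((306, 10))  # stand-in model features (derived, so related)

rdm_neural, rdm_model = rdm(neural), rdm(model)

def spearman(a, b):
    """Spearman correlation via rank transform (continuous data, so no ties)."""
    return np.corrcoef(a.argsort().argsort(), b.argsort().argsort())[0, 1]

rho = spearman(rdm_neural, rdm_model)            # model-neural RDM similarity
```

Computed at every time point, such model-RDM correlations yield the time courses used to compare when different face dimensions (gender, age, identity) emerge in the neural signal.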
Affiliation(s)
- Katharina Dobs
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA.
- McGovern Institute of Brain Research, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA.
- The Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA.
- Leyla Isik
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- McGovern Institute of Brain Research, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- The Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Dimitrios Pantazis
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- McGovern Institute of Brain Research, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- Nancy Kanwisher
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- McGovern Institute of Brain Research, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
- The Center for Brains, Minds, and Machines, Massachusetts Institute of Technology, Cambridge, MA, 02139, USA
21. Gardner JL, Liu T. Inverted Encoding Models Reconstruct an Arbitrary Model Response, Not the Stimulus. eNeuro 2019; 6:ENEURO.0363-18.2019. [PMID: 30923743 PMCID: PMC6437661 DOI: 10.1523/eneuro.0363-18.2019]
Abstract
Probing how large populations of neurons represent stimuli is key to understanding sensory representations as many stimulus characteristics can only be discerned from population activity and not from individual single-units. Recently, inverted encoding models have been used to produce channel response functions from large spatial-scale measurements of human brain activity that are reminiscent of single-unit tuning functions and have been proposed to assay "population-level stimulus representations" (Sprague et al., 2018a). However, these channel response functions do not assay population tuning. We show by derivation that the channel response function is only determined up to an invertible linear transform. Thus, these channel response functions are arbitrary, one of an infinite family and therefore not a unique description of population representation. Indeed, simulations demonstrate that bimodal, even random, channel basis functions can account perfectly well for population responses without any underlying neural response units that are so tuned. However, the approach can be salvaged by extending it to reconstruct the stimulus, not the assumed model. We show that when this is done, even using bimodal and random channel basis functions, a unimodal function peaking at the appropriate value of the stimulus is recovered which can be interpreted as a measure of population selectivity. More precisely, the recovered function signifies how likely any value of the stimulus is, given the observed population response. Whether an analysis is recovering the hypothetical responses of an arbitrary model rather than assessing the selectivity of population representations is not an issue unique to the inverted encoding model and human neuroscience, but a general problem that must be confronted as more complex analyses intervene between measurement of population activity and presentation of data.
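The core claim — that the recovered channel response function is fixed only up to an invertible linear transform — can be checked numerically. In this sketch (the population size, tuning width, and Gaussian channel basis are illustrative assumptions), noiseless "voxel" data generated from unimodal channels are fit exactly as well by a randomly remixed, non-unimodal basis:

```python
import numpy as np

rng = np.random.default_rng(2)
n_vox, n_chan, n_stim = 30, 8, 40

stim = rng.uniform(0, 180, n_stim)                 # stimulus values (e.g., orientation)
centers = np.linspace(0, 180, n_chan, endpoint=False)
# Unimodal (Gaussian) channel basis: channels x stimuli
C = np.exp(-0.5 * ((stim[None, :] - centers[:, None]) / 20.0) ** 2)

W = rng.standard_normal((n_vox, n_chan))           # true encoding weights
B = W @ C                                          # noiseless voxel responses

A = rng.standard_normal((n_chan, n_chan))          # arbitrary invertible remixing
resids = []
for Cb in (C, A @ C):                              # original vs. transformed channel basis
    W_hat = np.linalg.lstsq(Cb.T, B.T, rcond=None)[0].T  # least-squares encoding fit
    resids.append(np.linalg.norm(B - W_hat @ Cb))  # both residuals are ~0
```

Since B = (W A⁻¹)(A C), any invertible A yields an equally perfect encoding fit, so the "channel responses" recovered at test time reflect the analyst's basis choice rather than population tuning — the ambiguity the authors describe.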
Affiliation(s)
- Taosheng Liu
- Department of Psychology, Michigan State University, East Lansing, MI 48824