1
Wang D, Xia L, Zhang Z, Guo J, Tian Y, Zhou H, Xiu M, Chen D, Zhang XY. Association of P50 with social function, but not with cognition in patients with first-episode schizophrenia. Eur Arch Psychiatry Clin Neurosci 2024; 274:1375-1384. [PMID: 37966511 DOI: 10.1007/s00406-023-01711-w]
Abstract
Functional deficits including cognitive impairment and social dysfunction are core symptoms of schizophrenia (SCZ), and sensory gating (SG) deficits may be involved in the pathological mechanism of functional deficits in SCZ. This study aimed to investigate the relationship between defective P50 inhibition and functional deficits in first-episode drug-naïve (FEDN) SCZ patients. A total of 95 FEDN SCZ patients and 53 healthy controls (HC) were recruited. The Chinese version of the UCSD Performance-Based Skills Assessment (UPSA), the MATRICS Consensus Cognitive Battery (MCCB), and an EEG system were used to assess social function, cognitive performance, and P50 inhibition, respectively. The MCCB total score and eight domain scores were significantly lower in patients with FEDN SCZ than in HC (all p < 0.05). The UPSA total score and financial skills scores were also significantly lower in SCZ patients than in HC (all p < 0.05). Compared with HC, patients with FEDN SCZ had a higher P50 ratio (all p < 0.05). There was no correlation between P50 components and MCCB scores in patients with FEDN SCZ. However, the P50 ratio was correlated with the UPSA financial skills, communication skills, and total scores in patients (all p < 0.05). Defective P50 inhibition in FEDN SCZ patients may be associated with social dysfunction but not cognitive impairment, suggesting that the social dysfunction and cognitive impairment of patients with FEDN SCZ may have different pathogenic mechanisms.
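The P50 suppression ratio discussed above is typically computed from paired-click ERPs as the ratio of the S2 peak amplitude to the S1 peak amplitude, and can then be correlated with social-function scores. A minimal numpy/scipy sketch on hypothetical data (the window, waveforms, scores, and function name are illustrative assumptions, not the authors' pipeline):

```python
import numpy as np
from scipy import stats

def p50_ratio(s1_erp, s2_erp, times, window=(0.04, 0.08)):
    """P50 suppression ratio: S2/S1 peak amplitude in a 40-80 ms window.

    Higher ratios indicate weaker sensory gating."""
    mask = (times >= window[0]) & (times <= window[1])
    return s2_erp[mask].max() / s1_erp[mask].max()

# Hypothetical group data: P50 ratios and UPSA total scores per patient
ratios = np.array([0.9, 0.55, 0.7, 0.4, 0.85, 0.6])
upsa_total = np.array([62, 88, 75, 95, 65, 80])

rho, p = stats.spearmanr(ratios, upsa_total)
print(round(rho, 2))  # -1.0 in this toy data: worse gating tracks worse social function
```

In real data the correlation would of course be far from perfect; the toy values are chosen only to make the direction of the reported association visible.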
Affiliation(s)
- Dongmei Wang
- CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Chaoyang District, Beijing, 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Luyao Xia
- CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Chaoyang District, Beijing, 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Zhiqi Zhang
- Department of Psychology, Barnard College of Columbia University, New York, NY, USA
- Junru Guo
- CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Chaoyang District, Beijing, 100101, China
- Department of Psychology, Guizhou Minzu University, Guiyang, China
- Yang Tian
- CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Chaoyang District, Beijing, 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Huixia Zhou
- CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Chaoyang District, Beijing, 100101, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Meihong Xiu
- Beijing HuiLongGuan Hospital, Peking University, Beijing, China
- Dachun Chen
- Beijing HuiLongGuan Hospital, Peking University, Beijing, China
- Xiang-Yang Zhang
- CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, 16 Lincui Road, Chaoyang District, Beijing, 100101, China.
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China.
2
Li Y, Li S, Hu W, Yang L, Luo W. Spatial representation of multidimensional information in emotional faces revealed by fMRI. Neuroimage 2024; 290:120578. [PMID: 38499051 DOI: 10.1016/j.neuroimage.2024.120578]
Abstract
Face perception is a complex process that involves highly specialized procedures and mechanisms. Investigating face perception can help us better understand how the brain processes fine-grained, multidimensional information. Using an implicit face recognition task, this study examined how different dimensions of facial information are represented in specific brain regions or through inter-regional connections. To capture the representation of various facial information in the brain, we applied support vector machine decoding, functional connectivity, and model-based representational similarity analysis to fMRI data, yielding three key findings. Firstly, despite the implicit nature of the task, emotions were still represented in the brain, contrasting with all other facial information. Secondly, the connection between the medial amygdala and the parahippocampal gyrus was found to be essential for the representation of facial emotion in implicit tasks. Thirdly, in implicit tasks, arousal representation occurred in the parahippocampal gyrus, while valence depended on the connection between the primary visual cortex and the parahippocampal gyrus. In conclusion, these findings dissociate the neural mechanisms of emotional valence and arousal, revealing the precise spatial patterns of multidimensional information processing in faces.
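Of the three methods named here, model-based representational similarity analysis is the most compact to illustrate: a neural representational dissimilarity matrix (RDM) over conditions is compared against a model RDM built from the stimulus dimension of interest (e.g. emotion category). A toy sketch with simulated voxel patterns (all data, seeds, and names are hypothetical, not the study's analysis):

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(0)
labels = np.array([0, 0, 1, 1, 2, 2])             # emotion category per face condition
protos = rng.normal(size=(3, 50))                  # one prototype voxel pattern per category
patterns = protos[labels] + 0.5 * rng.normal(size=(6, 50))  # condition patterns + noise

# Neural RDM: pairwise dissimilarity between condition patterns
neural_rdm = pdist(patterns, metric="correlation")
# Model RDM: 0 for same-category pairs, 1 for different-category pairs
model_rdm = (pdist(labels[:, None], metric="cityblock") > 0).astype(float)

rho, p = spearmanr(neural_rdm, model_rdm)
print(rho > 0)  # category structure is recoverable from the simulated patterns
```

A positive rank correlation between the two RDMs is the usual evidence that the region represents the modeled dimension.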
Affiliation(s)
- Yiwen Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, PR China; Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, PR China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, PR China
- Shuaixia Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, PR China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, PR China
- Weiyu Hu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, PR China
- Lan Yang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, PR China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, PR China
- Wenbo Luo
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, PR China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, PR China.
3
Soulos P, Isik L. Disentangled deep generative models reveal coding principles of the human face processing network. PLoS Comput Biol 2024; 20:e1011887. [PMID: 38408105 PMCID: PMC10919870 DOI: 10.1371/journal.pcbi.1011887]
Abstract
Despite decades of research, much is still unknown about the computations carried out in the human face processing network. Recently, deep networks have been proposed as a computational account of human visual processing, but while they provide a good match to neural data throughout visual cortex, they lack interpretability. We introduce a method for interpreting brain activity using a new class of deep generative models, disentangled representation learning models, which learn a low-dimensional latent space that "disentangles" different semantically meaningful dimensions of faces, such as rotation, lighting, or hairstyle, in an unsupervised manner by enforcing statistical independence between dimensions. We find that the majority of our model's learned latent dimensions are interpretable by human raters. Further, these latent dimensions serve as a good encoding model for human fMRI data. We next investigate the representation of different latent dimensions across face-selective voxels. We find that low- and high-level face features are represented in posterior and anterior face-selective regions, respectively, corroborating prior models of human face recognition. Interestingly, though, we find identity-relevant and irrelevant face features across the face processing network. Finally, we provide new insight into the few "entangled" (uninterpretable) dimensions in our model by showing that they match responses in the ventral stream and carry information about facial identity. Disentangled face encoding models provide an exciting alternative to standard "black box" deep learning approaches for modeling and interpreting human brain data.
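Encoding models of the kind described, which map a model's latent face dimensions to voxel responses, are commonly fit with ridge regression. A self-contained numpy sketch on synthetic data (the latent dimensionality, voxel count, and function names are illustrative assumptions, not the paper's actual model):

```python
import numpy as np

def ridge_fit(X, Y, alpha=1.0):
    """Closed-form ridge regression: weights mapping latent dims X to voxel responses Y."""
    n_feat = X.shape[1]
    return np.linalg.solve(X.T @ X + alpha * np.eye(n_feat), X.T @ Y)

rng = np.random.default_rng(1)
latents = rng.normal(size=(100, 8))                  # 8 latent face dims, 100 stimuli
true_w = rng.normal(size=(8, 20))                    # ground-truth mapping to 20 voxels
voxels = latents @ true_w + 0.1 * rng.normal(size=(100, 20))

W = ridge_fit(latents, voxels, alpha=0.1)
pred = latents @ W
r = np.corrcoef(pred.ravel(), voxels.ravel())[0, 1]  # in-sample prediction accuracy
print(r > 0.9)
```

In practice the fit is evaluated on held-out stimuli; the in-sample correlation here only shows the mechanics of the fit.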
Affiliation(s)
- Paul Soulos
- Department of Cognitive Science, Johns Hopkins University, Baltimore, Maryland, United States of America
- Leyla Isik
- Department of Cognitive Science, Johns Hopkins University, Baltimore, Maryland, United States of America
4
Zhang Z, Chen T, Liu Y, Wang C, Zhao K, Liu CH, Fu X. Decoding the temporal representation of facial expression in face-selective regions. Neuroimage 2023; 283:120442. [PMID: 37926217 DOI: 10.1016/j.neuroimage.2023.120442]
Abstract
The ability of humans to discern facial expressions in a timely manner typically relies on distributed face-selective regions for rapid neural computations. To study the time course of this process in regions of interest, we used magnetoencephalography (MEG) to measure neural responses while participants viewed facial expressions depicting seven types of emotions (happiness, sadness, anger, disgust, fear, surprise, and neutral). Analysis of the time-resolved decoding of neural responses in face-selective sources within the inferior parietal cortex (IP-faces), lateral occipital cortex (LO-faces), fusiform gyrus (FG-faces), and posterior superior temporal sulcus (pSTS-faces) revealed that facial expressions were successfully classified starting from ∼100 to 150 ms after stimulus onset. Interestingly, the LO-faces and IP-faces showed greater accuracy than FG-faces and pSTS-faces. To examine the nature of the information processed in these face-selective regions, we entered the facial expression stimuli into a convolutional neural network (CNN) to perform similarity analyses against human neural responses. The results showed that neural responses in the LO-faces and IP-faces, starting ∼100 ms after the stimuli, were more strongly correlated with deep representations of emotional categories than with image-level information from the input images. Additionally, we observed a relationship between behavioral performance and neural responses in the LO-faces and IP-faces, but not in the FG-faces and pSTS-faces. Together, these results provide a comprehensive picture of the time course and nature of information involved in facial expression discrimination across multiple face-selective regions, advancing our understanding of how the human brain processes facial expressions.
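Time-resolved decoding of the sort reported here trains and tests a classifier independently at each time point, tracing when a stimulus dimension becomes linearly readable. A minimal sketch using a leave-one-out nearest-centroid decoder on simulated sensor data (the decoder, trial counts, and signal onset are assumptions for illustration; the authors' actual pipeline may differ):

```python
import numpy as np

def timepoint_accuracy(X, y):
    """Leave-one-out nearest-centroid decoding at one time point.
    X: (n_trials, n_sensors) data, y: integer class labels."""
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i                  # hold out trial i
        cents = np.stack([X[mask & (y == c)].mean(0) for c in np.unique(y)])
        pred = np.argmin(np.linalg.norm(cents - X[i], axis=1))
        correct += pred == y[i]
    return correct / len(y)

rng = np.random.default_rng(2)
n_trials, n_sensors, n_times = 40, 30, 5
y = np.repeat([0, 1], n_trials // 2)
X = rng.normal(size=(n_trials, n_sensors, n_times))
X[y == 1, :, 3:] += 1.0        # class signal appears from "time point 3" onward

acc = [timepoint_accuracy(X[:, :, t], y) for t in range(n_times)]
print(acc[0], acc[-1])         # near chance before signal onset, near ceiling after
```

Plotting `acc` against time yields the familiar decoding time course from which onset latencies such as the ∼100-150 ms figure are read off.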
Affiliation(s)
- Zhihao Zhang
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Tong Chen
- Chongqing Key Laboratory of Non-Linear Circuit and Intelligent Information Processing, Southwest University, Chongqing 400715, China; Chongqing Key Laboratory of Artificial Intelligence and Service Robot Control Technology, Chongqing 400715, China
- Ye Liu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Chongyang Wang
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
- Ke Zhao
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China.
- Chang Hong Liu
- Department of Psychology, Bournemouth University, Dorset, United Kingdom
- Xiaolan Fu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China.
5
Levine SM, Merz K, Keeser D, Kunz JI, Barton BB, Reinhard MA, Jobst A, Padberg F, Neukel C, Herpertz SC, Bertsch K, Musil R. Altered amygdalar emotion space in borderline personality disorder normalizes following dialectical behaviour therapy. J Psychiatry Neurosci 2023; 48:E431-E438. [PMID: 37935476 PMCID: PMC10635707 DOI: 10.1503/jpn.230085]
Abstract
BACKGROUND Borderline personality disorder (BPD) is a mental health condition characterized by an inability to regulate emotions or accurately process the emotional states of others. Previous neuroimaging studies using classical univariate analyses have tied such emotion dysregulation to aberrant activity levels in the amygdala of patients with BPD. However, multivariate analyses have not yet been used to investigate how representational spaces of emotion information may be systematically altered in patients with BPD. METHODS Patients with BPD performed an emotional face matching task while undergoing MRI before and after a 10-week inpatient program of dialectical behavioural therapy. Representational similarity analysis (RSA) was applied to activity patterns (evoked by angry, fearful, neutral and surprised faces) in the amygdala and temporo-occipital fusiform gyrus of patients with BPD and in the amygdala of healthy controls. RESULTS We recruited 15 patients with BPD (8 females, 6 males, 1 transgender male) to participate in the study, and we obtained a neuroimaging data set for 25 healthy controls for a comparative analysis. The RSA of the amygdala revealed a negative bias in the underlying affective space (in that activity patterns evoked by angry, fearful and neutral faces were more similar to each other than to patterns evoked by surprised faces), which normalized after therapy. This bias-to-normalization effect was present neither in activity patterns of the temporo-occipital fusiform gyrus of patients nor in amygdalar activity patterns of healthy controls. LIMITATIONS Larger samples and additional questionnaires would help to better characterize the association between specific aspects of therapy and changes in the neural representational space. 
CONCLUSION Our findings suggest a more refined role for the amygdala in the pathological processing of perceived emotions and may provide new diagnostic and prognostic imaging-based markers of emotion dysregulation and personality disorders. Clinical trial registration: DRKS00019821, German Clinical Trials Register (Deutsches Register Klinischer Studien).
Affiliation(s)
- Seth M Levine
- From the Department of Psychology, LMU Munich, Munich, Germany (Levine, Bertsch); the NeuroImaging Core Unit Munich (NICUM), University Hospital, LMU Munich, Munich, Germany (Levine, Merz, Keeser, Bertsch); the Department of Psychiatry and Psychotherapy, University Hospital, LMU Munich, Munich, Germany (Merz, Keeser, Kunz, Barton, Reinhard, Jobst, Padberg, Musil); the Department of General Psychiatry, Center for Psychosocial Medicine, Heidelberg University, Heidelberg, Germany (Neukel, Herpertz, Bertsch); and the German Center for Mental Health (DZPG), Munich, Germany (Padberg, Bertsch)
- Katharina Merz
- Daniel Keeser
- Julia I Kunz
- Barbara B Barton
- Matthias A Reinhard
- Andrea Jobst
- Frank Padberg
- Corinne Neukel
- Sabine C Herpertz
- Katja Bertsch
- Richard Musil
6
Zhang H, Ding X, Liu N, Nolan R, Ungerleider LG, Japee S. Equivalent processing of facial expression and identity by macaque visual system and task-optimized neural network. Neuroimage 2023; 273:120067. [PMID: 36997134 PMCID: PMC10165955 DOI: 10.1016/j.neuroimage.2023.120067]
Abstract
Both the primate visual system and artificial deep neural network (DNN) models show an extraordinary ability to simultaneously classify facial expression and identity. However, the neural computations underlying the two systems are unclear. Here, we developed a multi-task DNN model that optimally classified both monkey facial expressions and identities. By comparing the fMRI neural representations of the macaque visual cortex with the best-performing DNN model, we found that both systems: (1) share initial stages for processing low-level face features which segregate into separate branches at later stages for processing facial expression and identity respectively, and (2) gain more specificity for the processing of either facial expression or identity as one progresses along each branch towards higher stages. Correspondence analysis between the DNN and monkey visual areas revealed that the amygdala and anterior fundus face patch (AF) matched well with later layers of the DNN's facial expression branch, while the anterior medial face patch (AM) matched well with later layers of the DNN's facial identity branch. Our results highlight the anatomical and functional similarities between macaque visual system and DNN model, suggesting a common mechanism between the two systems.
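The architecture described, shared early stages that segregate into task-specific branches, can be sketched as a shared trunk feeding two heads. A toy numpy forward pass with untrained random weights (layer sizes and class counts are assumptions for illustration, not the paper's architecture):

```python
import numpy as np

rng = np.random.default_rng(3)

def relu(x):
    return np.maximum(x, 0.0)

# Shared trunk: early layers process low-level face features for both tasks
W_shared = 0.1 * rng.normal(size=(64, 32))
# Separate branches: one head per task
W_expr = 0.1 * rng.normal(size=(32, 7))    # e.g. 7 expression classes
W_id = 0.1 * rng.normal(size=(32, 10))     # e.g. 10 identities

def forward(face_vec):
    h = relu(face_vec @ W_shared)          # shared representation
    return h @ W_expr, h @ W_id            # branch-specific logits

expr_logits, id_logits = forward(rng.normal(size=64))
print(expr_logits.shape, id_logits.shape)
```

Training such a model jointly on both losses is what drives the later layers of each branch toward expression- or identity-specific representations.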
Affiliation(s)
- Hui Zhang
- School of Engineering Medicine, Beihang University; Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Key Laboratory of Big Data-Based Precision Medicine, Ministry of Industry and Information Technology of the People's Republic of China, Beijing 100191, China; Laboratory of Brain and Cognition, NIMH, NIH, Bethesda, Maryland 20892, USA.
- Xuetong Ding
- School of Engineering Medicine, Beihang University; Key Laboratory of Biomechanics and Mechanobiology (Beihang University), Ministry of Education, Key Laboratory of Big Data-Based Precision Medicine, Ministry of Industry and Information Technology of the People's Republic of China, Beijing 100191, China
- Ning Liu
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing 100101, China; School of Artificial Intelligence, University of Chinese Academy of Sciences, Beijing, 100049, China; Institute of Artificial Intelligence, Hefei Comprehensive National Science Center, Hefei 230088, China; Laboratory of Brain and Cognition, NIMH, NIH, Bethesda, Maryland 20892, USA
- Rachel Nolan
- Laboratory of Brain and Cognition, NIMH, NIH, Bethesda, Maryland 20892, USA
- Shruti Japee
- Laboratory of Brain and Cognition, NIMH, NIH, Bethesda, Maryland 20892, USA
7
Biró B, Cserjési R, Kocsel N, Galambos A, Gecse K, Kovács LN, Baksa D, Juhász G, Kökönyei G. The neural correlates of context driven changes in the emotional response: An fMRI study. PLoS One 2022; 17:e0279823. [PMID: 36584048 PMCID: PMC9803168 DOI: 10.1371/journal.pone.0279823]
Abstract
Emotional flexibility reflects the ability to adjust the emotional response to the changing environmental context. To understand how context can trigger a change in emotional response, i.e., how it can upregulate the initial emotional response or trigger a shift in the valence of emotional response, we used a task consisting of picture pairs during functional magnetic resonance imaging sessions. In each pair, the first picture was a smaller detail (a decontextualized photograph depicting emotions using primarily facial and postural expressions) from the second (contextualized) picture, and the neural response to a decontextualized picture was compared with the same picture in a context. Thirty-one healthy participants (18 females; mean age: 24.44 ± 3.4) were involved in the study. In general, context (vs. pictures without context) increased activation in areas involved in facial emotional processing (e.g., middle temporal gyrus, fusiform gyrus, and temporal pole) and affective mentalizing (e.g., precuneus, temporoparietal junction). After excluding the general effect of context by using an exclusive mask with activation to context vs. no-context, the automatic shift from positive to negative valence induced by the context was associated with increased activation in the thalamus, caudate, medial frontal gyrus and lateral orbitofrontal cortex. When the meaning changed from negative to positive, it resulted in a less widespread activation pattern, mainly in the precuneus, middle temporal gyrus, and occipital lobe. Providing context cues to facial information recruited brain areas that induced changes in the emotional responses and interpretation of the emotional situations automatically to support emotional flexibility.
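The exclusive-masking step described here removes voxels that already show the general context effect before testing the valence-shift contrast. In voxel space this is a simple boolean operation; a toy sketch with hypothetical z-maps and a hypothetical threshold (the values are illustrative, not the study's statistics):

```python
import numpy as np

# Hypothetical flattened voxelwise z-maps for two contrasts
z_context = np.array([3.5, 1.0, 4.2, 0.5, 2.0])  # context vs. no-context
z_shift = np.array([2.9, 3.3, 1.1, 3.8, 0.2])    # positive-to-negative shift
z_thresh = 3.1                                    # e.g. a voxelwise significance cutoff

# Exclusive mask: keep only voxels NOT significant for the general context effect
mask = z_context < z_thresh
shift_specific = (z_shift >= z_thresh) & mask
print(shift_specific)  # [False  True False  True False]: voxels 1 and 3 survive
```

The surviving voxels are those whose response is specific to the valence shift rather than to context per se.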
Affiliation(s)
- Brigitte Biró
- NAP3.0-SE Neuropsychopharmacology Research Group, Hungarian Brain Research Program, Semmelweis University, Budapest, Hungary
- Doctoral School of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Renáta Cserjési
- Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Natália Kocsel
- Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Attila Galambos
- Doctoral School of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Kinga Gecse
- NAP3.0-SE Neuropsychopharmacology Research Group, Hungarian Brain Research Program, Semmelweis University, Budapest, Hungary
- Faculty of Pharmacy, Department of Pharmacodynamics, Semmelweis University, Budapest, Hungary
- Lilla Nóra Kovács
- Doctoral School of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Dániel Baksa
- NAP3.0-SE Neuropsychopharmacology Research Group, Hungarian Brain Research Program, Semmelweis University, Budapest, Hungary
- Faculty of Pharmacy, Department of Pharmacodynamics, Semmelweis University, Budapest, Hungary
- Gabriella Juhász
- NAP3.0-SE Neuropsychopharmacology Research Group, Hungarian Brain Research Program, Semmelweis University, Budapest, Hungary
- Faculty of Pharmacy, Department of Pharmacodynamics, Semmelweis University, Budapest, Hungary
- Gyöngyi Kökönyei
- NAP3.0-SE Neuropsychopharmacology Research Group, Hungarian Brain Research Program, Semmelweis University, Budapest, Hungary
- Institute of Psychology, ELTE Eötvös Loránd University, Budapest, Hungary
- Faculty of Pharmacy, Department of Pharmacodynamics, Semmelweis University, Budapest, Hungary
8
Dissociation and hierarchy of human visual pathways for simultaneously coding facial identity and expression. Neuroimage 2022; 264:119769. [PMID: 36435341 DOI: 10.1016/j.neuroimage.2022.119769]
Abstract
Humans have an extraordinary ability to recognize facial expression and identity from a single face simultaneously and effortlessly; however, the underlying neural computation is not well understood. Here, we optimized a multi-task deep neural network to classify facial expression and identity simultaneously. Under various optimization training strategies, the best-performing model consistently showed a 'share-separate' organization. The two separate branches of the best-performing model also exhibited distinct abilities to categorize facial expression and identity, and these abilities increased along the facial expression or identity branches toward high layers. By comparing the representational similarities between the best-performing model and functional magnetic resonance imaging (fMRI) responses in the human visual cortex to the same face stimuli, we found that the face-selective posterior superior temporal sulcus (pSTS) in the dorsal visual cortex was significantly correlated with layers in the expression branch of the model, while the anterior inferotemporal cortex (aIT) and anterior fusiform face area (aFFA) in the ventral visual cortex were significantly correlated with layers in the identity branch of the model. Moreover, the aFFA and aIT better matched the high layers of the model, while the posterior FFA (pFFA) and occipital face area (OFA) better matched the middle and early layers of the model, respectively. Overall, our study provides a task-optimized computational model to better understand the neural mechanism underlying face recognition, which suggests that, like the best-performing model, the human visual system exhibits both dissociated and hierarchical neuroanatomical organization when simultaneously coding facial identity and expression.
9
Decoding six basic emotions from brain functional connectivity patterns. Sci China Life Sci 2022; 66:835-847. [PMID: 36378473] [DOI: 10.1007/s11427-022-2206-3]
Abstract
Although distinctive neural and physiological states are suggested to underlie the six basic emotions, basic emotions are often indistinguishable from functional magnetic resonance imaging (fMRI) voxelwise activation (VA) patterns. Here, we hypothesize that functional connectivity (FC) patterns across brain regions may contain emotion-representation information beyond VA patterns. We collected whole-brain fMRI data while human participants viewed pictures of faces expressing one of the six basic emotions (i.e., anger, disgust, fear, happiness, sadness, and surprise) or showing neutral expressions. We obtained FC patterns for each emotion across brain regions over the whole brain and applied multivariate pattern decoding to decode emotions in the FC pattern representation space. Our results showed that the whole-brain FC patterns successfully classified not only the six basic emotions from neutral expressions but also each basic emotion from other emotions. An emotion-representation network for each basic emotion that spanned beyond the classical brain regions for emotion processing was identified. Finally, we demonstrated that within the same brain regions, FC-based decoding consistently performed better than VA-based decoding. Taken together, our findings revealed that FC patterns contained emotional information and advocated for paying further attention to the contribution of FCs to emotion processing.
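The FC-based decoding idea in this abstract (vectorize each trial's region-by-region correlation matrix, then classify in that FC-pattern space) can be sketched on synthetic data. The nearest-centroid classifier here is a simple stand-in for the multivariate decoders such studies typically use, and every size and effect strength below is invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(1)

def fc_pattern(ts):
    """Vectorize the upper triangle of a region-by-region correlation matrix."""
    fc = np.corrcoef(ts)                  # (n_regions, n_regions) FC matrix
    iu = np.triu_indices_from(fc, k=1)
    return fc[iu]

# Synthetic data: 2 emotion conditions x 20 trials, 10 regions x 50 timepoints.
# Condition 1 couples regions 0 and 1 to create a distinguishable FC pattern.
def make_trial(cond):
    ts = rng.standard_normal((10, 50))
    if cond == 1:
        ts[1] = 0.7 * ts[0] + 0.3 * ts[1]
    return ts

X = np.array([fc_pattern(make_trial(c)) for c in (0, 1) for _ in range(20)])
y = np.repeat([0, 1], 20)

# Leave-one-out nearest-centroid decoding in FC-pattern space.
correct = 0
for i in range(len(y)):
    mask = np.arange(len(y)) != i
    c0 = X[mask & (y == 0)].mean(axis=0)
    c1 = X[mask & (y == 1)].mean(axis=0)
    pred = 0 if np.linalg.norm(X[i] - c0) < np.linalg.norm(X[i] - c1) else 1
    correct += pred == y[i]
acc = correct / len(y)
print(acc)  # well above the 0.5 chance level for this synthetic effect
```

The same loop applied to voxelwise activation vectors instead of FC vectors would give the VA-based decoding baseline the abstract compares against.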
10
Hou X, Zhao J, Zhang H. Reconstruction of perceived face images from brain activities based on multi-attribute constraints. Front Neurosci 2022; 16:1015752. [PMID: 36389231] [PMCID: PMC9643433] [DOI: 10.3389/fnins.2022.1015752]
Abstract
Reconstruction of perceived faces from brain signals is a hot topic in brain decoding and an important application in the field of brain-computer interfaces. Existing methods do not fully consider the multiple facial attributes represented in face images, and the distinct activity patterns these attributes evoke across multiple brain regions are often ignored, which results in poor reconstruction performance. In the current study, we propose an algorithmic framework that efficiently combines multiple face-selective brain regions for precise multi-attribute perceived face reconstruction. Our framework consists of three modules: a multi-task deep learning network (MTDLN), developed to simultaneously extract the multi-dimensional face features attributed to facial expression, identity, and gender from a single face image; a set of linear regressions (LR), built to map the relationship between the multi-dimensional face features and the brain signals from multiple brain regions; and a multi-conditional generative adversarial network (mcGAN), used to generate the perceived face images constrained by the predicted multi-dimensional face features. We conducted extensive fMRI experiments to evaluate the reconstruction performance of our framework both subjectively and objectively. The results show that, compared with traditional methods, our proposed framework better characterizes the multi-attribute face features in a face image, better predicts the face features from brain signals, and achieves better reconstruction of both seen and unseen face images, in both visual quality and quantitative assessment. Moreover, beyond state-of-the-art intra-subject reconstruction performance, our framework can also realize inter-subject face reconstruction to a certain extent.
Affiliation(s)
- Xiaoyuan Hou: School of Engineering Medicine, Beihang University, Beijing, China; School of Biological Science and Medical Engineering, Beihang University, Beijing, China
- Jing Zhao: School of Engineering Medicine, Beihang University, Beijing, China; School of Biological Science and Medical Engineering, Beihang University, Beijing, China
- Hui Zhang: School of Engineering Medicine, Beihang University, Beijing, China; Key Laboratory of Biomechanics and Mechanobiology, Ministry of Education, Beihang University, Beijing, China; Key Laboratory of Big Data-Based Precision Medicine, Ministry of Industry and Information Technology of the People's Republic of China, Beihang University, Beijing, China
11
Representational structure of fMRI/EEG responses to dynamic facial expressions. Neuroimage 2022; 263:119631. [PMID: 36113736] [DOI: 10.1016/j.neuroimage.2022.119631]
Abstract
Face perception provides an excellent example of how the brain processes nuanced visual differences and transforms them into behaviourally useful representations of identities and emotional expressions. While a body of literature has examined the spatial and temporal neural processing of facial expressions, few studies have used a dimensionally varying stimulus set containing subtle perceptual changes. In the current study, we used 48 short videos varying dimensionally in expression intensity and category (happy, angry, surprised). We measured both fMRI and EEG responses to these video clips and compared the neural response patterns to the predictions of models based on image features and of models derived from behavioural ratings of the stimuli. In fMRI, the inferior frontal gyrus face area (IFG-FA) carried information related only to the intensity of the expression, independent of image-based models. The superior temporal sulcus (STS), inferior temporal (IT), and lateral occipital (LO) areas contained information about both expression category and intensity. In the EEG, the coding of expression category and low-level image features was most pronounced at around 400 ms. The expression intensity model did not, however, correlate significantly at any EEG timepoint. Our results show a specific role for IFG-FA in the coding of expressions and suggest that it contains image- and category-invariant representations of expression intensity.
12
Li Y, Zhang M, Liu S, Luo W. EEG decoding of multidimensional information from emotional faces. Neuroimage 2022; 258:119374. [PMID: 35700944] [DOI: 10.1016/j.neuroimage.2022.119374]
Abstract
Humans can detect and recognize faces quickly, but there has been little research on the temporal dynamics with which the different dimensions of face information are extracted. The present study aimed to investigate the time course of neural responses representing different dimensions of face information, such as age, gender, emotion, and identity. We used support vector machine decoding to obtain representational dissimilarity matrices of event-related potential responses to different faces for each subject over time. In addition, we performed representational similarity analysis with model representational dissimilarity matrices that encoded the different dimensions of face information. Three significant findings were observed. First, the extraction of facial emotion began before that of facial identity and was sustained over time, an effect specific to the right frontal region. Second, arousal was preferentially extracted before valence during the processing of facial emotional information. Third, the different dimensions of face information exhibited representational stability during different periods. In conclusion, these findings reveal the precise temporal dynamics of multidimensional information processing in faces and provide strong support for computational models of emotional face perception.
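A time-resolved decoding pipeline of the kind this abstract describes (build a condition-by-condition representational dissimilarity matrix at each timepoint from pairwise decoding accuracies) can be sketched on synthetic ERP-like data. The nearest-centroid decoder below stands in for the support vector machine used in the study, and all sizes and effect strengths are invented for the illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy ERP data: 4 face conditions x 30 trials, 16 channels x 40 timepoints.
# Each condition gets a small condition-specific topography after timepoint 15.
topo = rng.standard_normal((4, 16))

def make_trial(k):
    x = rng.standard_normal((16, 40))
    x[:, 15:] += 0.8 * topo[k][:, None]
    return x

data = np.array([[make_trial(k) for _ in range(30)] for k in range(4)])

def pairwise_acc(a, b):
    """Leave-one-out nearest-centroid decoding of two conditions' channel patterns."""
    X = np.vstack([a, b])
    y = np.repeat([0, 1], len(a))
    hits = 0
    for i in range(len(y)):
        m = np.arange(len(y)) != i
        c0 = X[m & (y == 0)].mean(axis=0)
        c1 = X[m & (y == 1)].mean(axis=0)
        pred = int(np.linalg.norm(X[i] - c0) > np.linalg.norm(X[i] - c1))
        hits += pred == y[i]
    return hits / len(y)

# Time-resolved 4x4 RDM: entry (j, k) = decoding accuracy for conditions j vs k.
def rdm_at(t):
    rdm = np.zeros((4, 4))
    for j in range(4):
        for k in range(j + 1, 4):
            rdm[j, k] = rdm[k, j] = pairwise_acc(data[j, :, :, t], data[k, :, :, t])
    return rdm

iu = np.triu_indices(4, k=1)
early_mean = rdm_at(5)[iu].mean()   # before the effect: near chance
late_mean = rdm_at(30)[iu].mean()   # after the effect: above chance
print(early_mean, late_mean)
```

Correlating each timepoint's RDM with model RDMs for age, gender, emotion, or identity would then give the representational-similarity time courses the study reports.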
Affiliation(s)
- Yiwen Li: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Mingming Zhang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Shuaicheng Liu: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Wenbo Luo: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
13
Izumika R, Cabeza R, Tsukiura T. Neural Mechanisms of Perceiving and Subsequently Recollecting Emotional Facial Expressions in Young and Older Adults. J Cogn Neurosci 2022; 34:1183-1204. [DOI: 10.1162/jocn_a_01851]
Abstract
It is known that emotional facial expressions modulate the perception and subsequent recollection of faces and that aging alters these modulatory effects. Yet, the underlying neural mechanisms are not well understood, and they were the focus of the current fMRI study. We scanned healthy young and older adults while perceiving happy, neutral, or angry faces paired with names. Participants were then provided with the names of the faces and asked to recall the facial expression of each face. fMRI analyses focused on the fusiform face area (FFA), the posterior superior temporal sulcus (pSTS), the OFC, the amygdala, and the hippocampus (HC). Univariate activity, multivariate pattern (MVPA), and functional connectivity analyses were performed. The study yielded two main sets of findings. First, in pSTS and the amygdala, univariate activity and MVPA discrimination during the processing of facial expressions were similar in young and older adults, whereas in FFA and OFC, MVPA discriminated facial expressions less accurately in older than young adults. These findings suggest that facial expression representations in FFA and OFC reflect age-related dedifferentiation and positivity effect. Second, HC-OFC connectivity showed subsequent memory effects (SMEs) for happy expressions in both age groups, HC-FFA connectivity exhibited SMEs for happy and neutral expressions in young adults, and HC-pSTS interactions displayed SMEs for happy expressions in older adults. These results could be related to compensatory mechanisms and positivity effects in older adults. Taken together, the results clarify the effects of aging on the neural mechanisms in perceiving and encoding facial expressions.
14
Wurm MF, Caramazza A. Two 'what' pathways for action and object recognition. Trends Cogn Sci 2021; 26:103-116. [PMID: 34702661] [DOI: 10.1016/j.tics.2021.10.003]
Abstract
The ventral visual stream is conceived as a pathway for object recognition. However, we also recognize the actions an object can be involved in. Here, we show that action recognition critically depends on a pathway in lateral occipitotemporal cortex, partially overlapping and topographically aligned with object representations that are precursors for action recognition. By contrast, object features that are more relevant for object recognition, such as color and texture, are typically found in ventral occipitotemporal cortex. We argue that occipitotemporal cortex contains similarly organized lateral and ventral 'what' pathways for action and object recognition, respectively. This account explains a number of observed phenomena, such as the duplication of object domains and the specific representational profiles in lateral and ventral cortex.
Affiliation(s)
- Moritz F Wurm: Center for Mind/Brain Sciences - CIMeC, University of Trento, Corso Bettini 31, 38068 Rovereto, Italy
- Alfonso Caramazza: Center for Mind/Brain Sciences - CIMeC, University of Trento, Corso Bettini 31, 38068 Rovereto, Italy; Department of Psychology, Harvard University, 33 Kirkland St, Cambridge, MA 02138, USA
15
Murray T, O'Brien J, Sagiv N, Garrido L. The role of stimulus-based cues and conceptual information in processing facial expressions of emotion. Cortex 2021; 144:109-132. [PMID: 34666297] [DOI: 10.1016/j.cortex.2021.08.007]
Abstract
Face shape and surface texture are two important cues that aid the perception of facial expressions of emotion. This perception is also influenced by high-level emotion concepts. Across two studies, we used representational similarity analysis to investigate the relative roles of shape, surface, and conceptual information in the perception, categorisation, and neural representation of facial expressions. In Study 1, 50 participants completed a perceptual task designed to measure the perceptual similarity of expression pairs, and a categorical task designed to measure the confusability between expression pairs when assigning emotion labels to a face. We constructed three models of the similarities between emotions, each based on distinct information: two models were based on stimulus-based cues (face shapes and surface textures) and one on emotion concepts. Using multiple linear regression, we found that behaviour during both tasks was related to the similarity of emotion concepts. The model based on face shapes was more strongly related to behaviour in the perceptual task than in the categorical task, and the model based on surface textures was more strongly related to behaviour in the categorical task than in the perceptual task. In Study 2, 30 participants viewed facial expressions while undergoing fMRI, allowing measurement of the representational geometries of facial expressions of emotion in three core face-responsive regions (the fusiform face area, occipital face area, and superior temporal sulcus) and a region involved in theory of mind (medial prefrontal cortex). Across all four regions, the representational distances between facial expression pairs were related to the similarities of emotion concepts, but not to either of the stimulus-based cues. Together, these results highlight the important top-down influence of high-level emotion concepts both in behavioural tasks and in the neural representation of facial expressions.
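The Study 1 analysis described above (multiple linear regression of an observed dissimilarity vector onto model RDMs) can be sketched as follows. The shape, surface, and concept RDMs here are random stand-ins, and the mixing weights are chosen purely to illustrate recovering the dominant model; nothing below is taken from the paper itself.

```python
import numpy as np

rng = np.random.default_rng(3)

n = 6                          # six emotion categories -> 15 unique pairs
iu = np.triu_indices(n, k=1)

def model_rdm():
    """A random symmetric model RDM, vectorized to its 15 unique pairs."""
    d = rng.random((n, n))
    d = (d + d.T) / 2
    np.fill_diagonal(d, 0)
    return d[iu]

# Hypothetical predictors: shape-, surface-, and concept-based dissimilarities.
shape, surface, concept = model_rdm(), model_rdm(), model_rdm()

# Simulated behavioural dissimilarities, dominated by the concept model.
behaviour = (0.2 * shape + 0.1 * surface + 0.9 * concept
             + 0.05 * rng.standard_normal(len(iu[0])))

# Multiple linear regression of the observed RDM onto the three model RDMs.
X = np.column_stack([np.ones_like(shape), shape, surface, concept])
coef, *_ = np.linalg.lstsq(X, behaviour, rcond=None)
print(coef[1:])  # the concept weight should come out largest
```

The same regression applied to a brain region's representational distances (Study 2) would quantify each model's unique contribution to the neural RDM.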
Affiliation(s)
- Thomas Murray: Psychology Department, School of Biological and Behavioural Sciences, Queen Mary University London, United Kingdom
- Justin O'Brien: Centre for Cognitive Neuroscience, Department of Life Sciences, Brunel University London, United Kingdom
- Noam Sagiv: Centre for Cognitive Neuroscience, Department of Life Sciences, Brunel University London, United Kingdom
- Lúcia Garrido: Department of Psychology, City, University of London, United Kingdom
16
Dalski A, Kovács G, Ambrus GG. Evidence for a General Neural Signature of Face Familiarity. Cereb Cortex 2021; 32:2590-2601. [PMID: 34628490] [DOI: 10.1093/cercor/bhab366]
Abstract
We explored the neural signatures of face familiarity using cross-participant and cross-experiment decoding of event-related potentials evoked by unknown and experimentally familiarized faces, drawn from a set of experiments with different participants, stimuli, and familiarization types. Human participants of both sexes were familiarized either perceptually, via media exposure, or by personal interaction. We observed significant cross-experiment familiarity decoding involving all three experiments, predominantly over posterior and central regions of the right hemisphere in the 270-630 ms time window. This shared face familiarity effect was most prominent between the Media and Personal experiments, as well as between the Perceptual and Personal experiments. Cross-experiment decodability makes this signal a strong candidate for a general neural indicator of face familiarity, independent of familiarization method, participants, and stimuli. Furthermore, the sustained pattern of temporal generalization suggests that it reflects a single automatic processing cascade that is maintained over time.
Affiliation(s)
- Alexia Dalski: Institute of Psychology, Friedrich Schiller University Jena, D-07743 Jena, Germany; Department of Psychology, Philipps-Universität Marburg, D-35039 Marburg, Germany; Center for Mind, Brain and Behavior - CMBB, Philipps-Universität Marburg and Justus-Liebig-Universität Giessen, D-35039 Marburg, Germany
- Gyula Kovács: Institute of Psychology, Friedrich Schiller University Jena, D-07743 Jena, Germany
- Géza Gergely Ambrus: Institute of Psychology, Friedrich Schiller University Jena, D-07743 Jena, Germany
17
Dowdle LT, Ghose G, Ugurbil K, Yacoub E, Vizioli L. Clarifying the role of higher-level cortices in resolving perceptual ambiguity using ultra high field fMRI. Neuroimage 2021; 227:117654. [PMID: 33333319] [PMCID: PMC10614695] [DOI: 10.1016/j.neuroimage.2020.117654]
Abstract
The brain is organized into distinct, flexible networks. Within these networks, cognitive variables such as attention can modulate sensory representations in accordance with moment-to-moment behavioral requirements. These modulations can be studied by varying task demands; however, the tasks employed are often incongruent with the postulated functions of a sensory system, limiting the characterization of the system in relation to natural behaviors. Here we combine domain-specific task manipulations and ultra-high field fMRI to study the nature of top-down modulations. We exploited faces, a visual category underpinned by a complex cortical network, and instructed participants to perform either a stimulus-relevant/domain-specific or a stimulus-irrelevant task in the scanner. We found that (1) perceptual ambiguity (i.e., the difficulty of achieving a stable percept) is encoded in top-down modulations from higher-level cortices; and (2) the right inferior temporal lobe is active under challenging conditions and uniquely encodes trial-by-trial variability in face perception.
Affiliation(s)
- Logan T Dowdle: Center for Magnetic Resonance Research, University of Minnesota, 2021 6th St SE, Minneapolis, MN 55455, United States; Department of Neuroscience, University of Minnesota, 321 Church St SE, Minneapolis, MN 55455, United States
- Geoffrey Ghose: Center for Magnetic Resonance Research, University of Minnesota, 2021 6th St SE, Minneapolis, MN 55455, United States; Department of Neuroscience, University of Minnesota, 321 Church St SE, Minneapolis, MN 55455, United States
- Kamil Ugurbil: Center for Magnetic Resonance Research, University of Minnesota, 2021 6th St SE, Minneapolis, MN 55455, United States
- Essa Yacoub: Center for Magnetic Resonance Research, University of Minnesota, 2021 6th St SE, Minneapolis, MN 55455, United States
- Luca Vizioli: Center for Magnetic Resonance Research, University of Minnesota, 2021 6th St SE, Minneapolis, MN 55455, United States; Department of Neurosurgery, University of Minnesota, 500 SE Harvard St, Minneapolis, MN 55455, United States
18
FFA and OFA Encode Distinct Types of Face Identity Information. J Neurosci 2021; 41:1952-1969. [PMID: 33452225] [DOI: 10.1523/jneurosci.1449-20.2020]
Abstract
Faces of different people elicit distinct fMRI patterns in several face-selective regions of the human brain. Here we used representational similarity analysis to investigate what type of identity-distinguishing information is encoded in three face-selective regions: fusiform face area (FFA), occipital face area (OFA), and posterior superior temporal sulcus (pSTS). In a sample of 30 human participants (22 females, 8 males), we used fMRI to measure brain activity patterns elicited by naturalistic videos of famous face identities, and compared their representational distances in each region with models of the differences between identities. We built diverse candidate models, ranging from low-level image-computable properties (pixel-wise, GIST, and Gabor-Jet dissimilarities), through higher-level image-computable descriptions (OpenFace deep neural network, trained to cluster faces by identity), to complex human-rated properties (perceived similarity, social traits, and gender). We found marked differences in the information represented by the FFA and OFA. Dissimilarities between face identities in FFA were accounted for by differences in perceived similarity, social traits, gender, and by the OpenFace network. In contrast, representational distances in OFA were mainly driven by differences in low-level image-based properties (pixel-wise and Gabor-Jet dissimilarities). Our results suggest that, although FFA and OFA can both discriminate between identities, the FFA representation is further removed from the image, encoding higher-level perceptual and social face information.
SIGNIFICANCE STATEMENT: Recent studies using fMRI have shown that several face-responsive brain regions can distinguish between different face identities. It is, however, unclear whether these different face-responsive regions distinguish between identities in similar or different ways. We used representational similarity analysis to investigate the computations within three brain regions in response to naturalistically varying videos of face identities. Our results revealed that two regions, the fusiform face area and the occipital face area, encode distinct identity information about faces. Although identity can be decoded from both regions, identity representations in the fusiform face area primarily contained information about social traits, gender, and high-level visual features, whereas the occipital face area primarily represented lower-level image features.
19
Kovács G. Getting to Know Someone: Familiarity, Person Recognition, and Identification in the Human Brain. J Cogn Neurosci 2020; 32:2205-2225. [DOI: 10.1162/jocn_a_01627]
Abstract
In our everyday life, we continuously get to know people, predominantly through their faces. Several neuroscientific experiments have shown that familiarization changes the behavioral processing and the underlying neural representation of the faces of others. Here, we propose a model of how we actually get to know someone. First, purely visual familiarization with unfamiliar faces occurs. Second, the accumulation of associated, nonsensory information refines the person representation, and finally, one reaches a stage of effortless identification of very well-known persons. We offer an overview of neuroimaging studies, first evaluating how and in what ways the processing of unfamiliar and familiar faces differs and, second, estimating where identity-specific representations are found in the brain by analyzing fMRI adaptation and multivariate pattern analysis results. The available neuroimaging data suggest that different aspects of this information emerge gradually, within the same network, as one becomes more and more familiar with a person. We propose a novel model of familiarity and identity processing in which the differential activation of long-term memory and emotion processing areas is essential for correct identification.
20
Bayet L, Perdue KL, Behrendt HF, Richards JE, Westerlund A, Cataldo JK, Nelson CA. Neural responses to happy, fearful and angry faces of varying identities in 5- and 7-month-old infants. Dev Cogn Neurosci 2020; 47:100882. [PMID: 33246304] [PMCID: PMC7695867] [DOI: 10.1016/j.dcn.2020.100882]
Abstract
Highlights
- fNIRS and looking responses to emotional faces were measured in 5- and 7-month-olds.
- Emotional faces had varying identities within happy, angry, and fearful blocks.
- Temporo-parietal and frontal activations were observed, particularly to happy faces.
- Infants looked longer at the mouth region of angry faces.
- No difference in behavior or neural activity was observed between 5- and 7-month-olds.
The processing of facial emotion is an important social skill that develops throughout infancy and early childhood. Here we investigate the neural underpinnings of the ability to process facial emotion across changes in facial identity in cross-sectional groups of 5- and 7-month-old infants. We simultaneously measured neural metabolic, behavioral, and autonomic responses to happy, fearful, and angry faces of different female models using functional near-infrared spectroscopy (fNIRS), eye-tracking, and heart rate measures. We observed significant neural activation to these facial emotions in a distributed set of frontal and temporal brain regions, and longer looking at the mouth region of angry faces compared with happy and fearful faces. No differences in looking behavior or neural activation were observed between 5- and 7-month-olds, although several exploratory, age-independent associations between neural activations and looking behavior were noted. Overall, these findings suggest more developmental stability than previously thought in responses to emotional facial expressions of varying identities between 5 and 7 months of age.
Affiliation(s)
- Laurie Bayet: Boston Children's Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Katherine L Perdue: Boston Children's Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA
- Hannah F Behrendt: Boston Children's Hospital, Boston, MA, USA; Child Neuropsychology Section, Department of Child and Adolescent Psychiatry, Psychosomatics and Psychotherapy, University Hospital RWTH Aachen, Aachen, Germany
- Charles A Nelson: Boston Children's Hospital, Boston, MA, USA; Harvard Medical School, Boston, MA, USA; Harvard Graduate School of Education, Cambridge, MA, USA
21
Blunted neural response to emotional faces in the fusiform and superior temporal gyrus may be marker of emotion recognition deficits in pediatric epilepsy. Epilepsy Behav 2020; 112:107432. [PMID: 32919203] [PMCID: PMC7895303] [DOI: 10.1016/j.yebeh.2020.107432]
Abstract
Individuals with epilepsy are at risk for social cognition deficits, including impairments in the ability to recognize nonverbal cues of emotion (i.e., emotion recognition [ER] skills). Such deficits are particularly pronounced in adult patients with childhood-onset seizures and are already evident in children and adolescents with epilepsy. Though these impairments have been linked to blunted neural response to emotional information in faces in adult patients, little is known about the neural correlates of ER deficits in youth with epilepsy. The current study compared ER accuracy and neural response to emotional faces during functional magnetic resonance imaging (fMRI) in youth with intractable focal epilepsy and typically developing youth. Relative to typically developing participants, individuals with epilepsy showed a) reduced accuracy in the ER task and b) blunted response to emotional faces (vs. neutral faces) in the bilateral fusiform gyri and right superior temporal gyrus (STG). Activation in these regions was correlated with performance, suggesting that aberrant response within these face-responsive regions may play a functional role in ER impairments. Reduced engagement of neural circuits relevant to processing socioemotional cues may be markers of risk for social cognitive deficits in youth with focal epilepsy.
22
Li Y, Richardson RM, Ghuman AS. Posterior Fusiform and Midfusiform Contribute to Distinct Stages of Facial Expression Processing. Cereb Cortex 2020; 29:3209-3219. [PMID: 30124788] [DOI: 10.1093/cercor/bhy186]
Abstract
Though the fusiform is well-established as a key node in the face perception network, its role in facial expression processing remains unclear, due to competing models and discrepant findings. To help resolve this debate, we recorded from 17 subjects with intracranial electrodes implanted in face sensitive patches of the fusiform. Multivariate classification analysis showed that facial expression information is represented in fusiform activity and in the same regions that represent identity, though with a smaller effect size. Examination of the spatiotemporal dynamics revealed a functional distinction between posterior fusiform and midfusiform expression coding, with posterior fusiform showing an early peak of facial expression sensitivity at around 180 ms after subjects viewed a face and midfusiform showing a later and extended peak between 230 and 460 ms. These results support the hypothesis that the fusiform plays a role in facial expression perception and highlight a qualitative functional distinction between processing in posterior fusiform and midfusiform, with each contributing to temporally segregated stages of expression perception.
Affiliation(s)
- Yuanning Li: Center for the Neural Basis of Cognition, Carnegie Mellon University and University of Pittsburgh, Pittsburgh, PA, USA; Program in Neural Computation and Machine Learning, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA, USA
- R Mark Richardson: Center for the Neural Basis of Cognition, Carnegie Mellon University and University of Pittsburgh, Pittsburgh, PA, USA; Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA, USA
- Avniel Singh Ghuman: Center for the Neural Basis of Cognition, Carnegie Mellon University and University of Pittsburgh, Pittsburgh, PA, USA; Program in Neural Computation and Machine Learning, Carnegie Mellon University, Pittsburgh, PA, USA; Department of Neurological Surgery, University of Pittsburgh, Pittsburgh, PA, USA
23. Yue X, Robert S, Ungerleider LG. Curvature processing in human visual cortical areas. Neuroimage 2020;222:117295. [PMID: 32835823] [DOI: 10.1016/j.neuroimage.2020.117295]
Abstract
Curvature is one of many visual features shown to be important for visual perception. We recently showed that curvilinear features provide sufficient information for categorizing animate vs. inanimate objects, while rectilinear features do not (Zachariou et al., 2018). Results from our fMRI study in rhesus monkeys (Yue et al., 2014) have shed light on some of the neural substrates underlying curvature processing by revealing a network of visual cortical patches with a curvature response preference. However, it is unknown whether a similar network exists in human visual cortex. Thus, the current study was designed to investigate cortical areas with a preference for curvature in the human brain using fMRI at 7T. Consistent with our monkey fMRI results, we found a network of curvature-preferring cortical patches, some of which overlapped well-known face-selective areas. Moreover, principal component analysis (PCA) using all visually responsive voxels indicated that curvilinear features of visual stimuli were associated with specific retinotopic regions in visual cortex. Regions associated with positive curvilinear PC values encompassed the central visual field representation of early visual areas and the lateral surface of temporal cortex, while those associated with negative curvilinear PC values encompassed the peripheral visual field representation of early visual areas and the medial surface of temporal cortex. Thus, we found that broad areas of curvature preference, which encompassed face-selective areas, were bound by central visual field representations. Our results support the hypothesis that curvilinearity preference interacts with central-peripheral processing biases as primary features underlying the organization of temporal cortex topography in the adult human brain.
Affiliation(s)
- Xiaomin Yue: Laboratory of Brain and Cognition, NIMH/NIH, Building 49, Room 6A68, 49 Convent Drive, Bethesda, MD 20892, USA
- Sophia Robert: Laboratory of Brain and Cognition, NIMH/NIH, Building 49, Room 6A68, 49 Convent Drive, Bethesda, MD 20892, USA
- Leslie G Ungerleider: Laboratory of Brain and Cognition, NIMH/NIH, Building 49, Room 6A68, 49 Convent Drive, Bethesda, MD 20892, USA
24. Poyo Solanas M, Vaessen M, de Gelder B. Computation-Based Feature Representation of Body Expressions in the Human Brain. Cereb Cortex 2020;30:6376-6390. [DOI: 10.1093/cercor/bhaa196]
Abstract
Humans and other primate species are experts at recognizing body expressions. To understand the underlying perceptual mechanisms, we computed postural and kinematic features from affective whole-body movement videos and related them to brain processes. Using representational similarity and multivoxel pattern analyses, we showed systematic relations between computation-based body features and brain activity. Our results revealed that postural rather than kinematic features reflect the affective category of the body movements. The feature limb contraction showed a central contribution in fearful body expression perception, differentially represented in action observation, motor preparation, and affect coding regions, including the amygdala. The posterior superior temporal sulcus differentiated fearful from other affective categories using limb contraction rather than kinematics. The extrastriate body area and fusiform body area also showed greater tuning to postural features. The discovery of midlevel body feature encoding in the brain moves affective neuroscience beyond research on high-level emotion representations and provides insight into the perceptual features that possibly drive automatic emotion perception.
Affiliation(s)
- Marta Poyo Solanas: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Limburg 6200 MD, The Netherlands
- Maarten Vaessen: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Limburg 6200 MD, The Netherlands
- Beatrice de Gelder: Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Limburg 6200 MD, The Netherlands; Department of Computer Science, University College London, London WC1E 6BT, UK
25.
Abstract
Support vector machines (SVMs) are being used increasingly in affective science as a data-driven classification method and feature reduction technique. Whereas traditional statistical methods typically compare group averages on selected variables, SVMs use a predictive algorithm to learn multivariate patterns that optimally discriminate between groups. In this review, we provide a framework for understanding the methods of SVM-based analyses and summarize the findings of seminal studies that use SVMs for classification or data reduction in the behavioral and neural study of emotion and affective disorders. We conclude by discussing promising directions and potential applications of SVMs in future research in affective science.
Affiliation(s)
- Matthew D. Sacchet: Center for Depression, Anxiety, and Stress Research, McLean Hospital, Harvard Medical School, USA
- Ian H. Gotlib: Department of Psychology, Stanford Neurosciences Institute, Stanford University, USA
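The review above describes the core of SVM-based analysis: learning a multivariate pattern (a weight vector) that optimally discriminates between two groups, rather than comparing group averages one variable at a time. As a rough illustration only (not the review's own method or data), the following pure-Python sketch trains a linear SVM by Pegasos-style subgradient descent on the hinge loss; the toy two-group data, function names, and hyperparameters are all illustrative assumptions.

```python
import random

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Pegasos-style stochastic subgradient descent on the hinge loss.
    X: list of feature vectors; y: labels in {-1, +1}.
    Minimizes lam/2 * ||w||^2 + mean hinge loss (illustrative sketch)."""
    rng = random.Random(seed)
    d = len(X[0])
    w = [0.0] * d
    b = 0.0
    t = 0
    idx = list(range(len(X)))
    for _ in range(epochs):
        rng.shuffle(idx)
        for i in idx:
            t += 1
            eta = 1.0 / (lam * t)  # decaying step size
            margin = y[i] * (sum(wj * xj for wj, xj in zip(w, X[i])) + b)
            # shrink w (subgradient of the regularizer) ...
            w = [wj * (1.0 - eta * lam) for wj in w]
            if margin < 1.0:
                # ... and, for margin violations, push w toward the example
                w = [wj + eta * y[i] * xj for wj, xj in zip(w, X[i])]
                b += eta * y[i]
    return w, b

def predict(w, b, x):
    return 1 if sum(wj * xj for wj, xj in zip(w, x)) + b >= 0 else -1

# Two toy "groups" separated along the first feature dimension
group_a = [[1.0 + 0.1 * i, 0.5] for i in range(10)]   # label +1
group_b = [[-1.0 - 0.1 * i, 0.5] for i in range(10)]  # label -1
X = group_a + group_b
y = [1] * 10 + [-1] * 10

w, b = train_linear_svm(X, y)
acc = sum(predict(w, b, x) == t for x, t in zip(X, y)) / len(X)
```

The learned weight vector is itself the "multivariate pattern": its largest-magnitude entries indicate which features carry the discriminative signal, which is what makes SVMs usable for the feature-reduction role the review mentions.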
26. Spatio-temporal dynamics of face perception. Neuroimage 2020;209:116531. [DOI: 10.1016/j.neuroimage.2020.116531]
27. Grisendi T, Reynaud O, Clarke S, Da Costa S. Processing pathways for emotional vocalizations. Brain Struct Funct 2019;224:2487-2504. [DOI: 10.1007/s00429-019-01912-x]
28. Smith FW, Smith ML. Decoding the dynamic representation of facial expressions of emotion in explicit and incidental tasks. Neuroimage 2019;195:261-271. [PMID: 30940611] [DOI: 10.1016/j.neuroimage.2019.03.065]
Abstract
Faces transmit a wealth of important social signals. While previous studies have elucidated the network of cortical regions important for perception of facial expression, and the associated temporal components such as the P100, N170 and EPN, it is still unclear how task constraints may shape the representation of facial expression (or other face categories) in these networks. In the present experiment, we used Multivariate Pattern Analysis (MVPA) with EEG to investigate the neural information available across time about two important face categories (expression and identity) when those categories are either perceived under explicit (e.g. decoding facial expression category from the EEG when the task is on expression) or incidental task contexts (e.g. decoding facial expression category from the EEG when the task is on identity). Decoding of both face categories, across both task contexts, peaked in time-windows spanning 91-170 ms (across posterior electrodes). Peak decoding of expression, however, was not affected by task context, whereas peak decoding of identity was significantly reduced under incidental processing conditions. In addition, errors in EEG decoding correlated with errors in behavioral categorization under explicit processing for both expression and identity; under incidental conditions, only errors in EEG decoding of expression correlated with behavior. Furthermore, decoding time-courses and the spatial pattern of informative electrodes showed consistently better decoding of identity under explicit conditions at later time periods, with weak evidence for similar effects for decoding of expression at isolated time-windows. Taken together, these results reveal differences and commonalities in the processing of face categories under explicit vs. incidental task contexts and suggest that facial expressions are processed to a richer degree under incidental processing conditions, consistent with prior work indicating the relative automaticity by which emotion is processed. Our work further demonstrates the utility of applying multivariate decoding analyses to EEG for revealing the dynamics of face perception.
Affiliation(s)
- Fraser W Smith: School of Psychology, University of East Anglia, Norwich, UK
- Marie L Smith: School of Psychological Sciences, Birkbeck College, University of London, London, UK
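The time-resolved decoding described in this entry (training a classifier on the multichannel signal at each time point and tracking when category information becomes available) can be sketched generically. This is not the authors' MVPA pipeline: the synthetic "EEG" data, the leave-one-out nearest-centroid classifier, and every name and parameter below are illustrative assumptions.

```python
import random

def loo_nearest_centroid(trials_a, trials_b):
    """Leave-one-out nearest-centroid accuracy for one time point.
    Each trial is a list of channel values."""
    def mean(trials):
        n = len(trials)
        return [sum(tr[c] for tr in trials) / n for c in range(len(trials[0]))]
    def dist2(u, v):
        return sum((a - b) ** 2 for a, b in zip(u, v))
    correct = 0
    for i, tr in enumerate(trials_a):  # held-out trial from condition A
        ca = mean(trials_a[:i] + trials_a[i + 1:])
        cb = mean(trials_b)
        correct += dist2(tr, ca) < dist2(tr, cb)
    for i, tr in enumerate(trials_b):  # held-out trial from condition B
        ca = mean(trials_a)
        cb = mean(trials_b[:i] + trials_b[i + 1:])
        correct += dist2(tr, cb) < dist2(tr, ca)
    return correct / (len(trials_a) + len(trials_b))

# Synthetic data: 2 conditions x 20 trials x 100 time points x 4 channels;
# the conditions differ only from time point 50 onward.
rng = random.Random(1)
n_trials, n_chan, n_times, onset = 20, 4, 100, 50

def make_trial(has_signal):
    return [[rng.gauss(1.0 if (has_signal and t >= onset) else 0.0, 0.5)
             for _ in range(n_chan)] for t in range(n_times)]

cond_a = [make_trial(False) for _ in range(n_trials)]
cond_b = [make_trial(True) for _ in range(n_trials)]

# Decode at each time point, using the channel vector as features
accuracy = [loo_nearest_centroid([trial[t] for trial in cond_a],
                                 [trial[t] for trial in cond_b])
            for t in range(n_times)]

pre = sum(accuracy[:onset]) / onset            # should hover near chance (0.5)
post = sum(accuracy[onset:]) / (n_times - onset)  # should rise well above chance
```

The resulting accuracy time-course is the basic object of analyses like the one in this entry: decoding rises above chance only once the signal distinguishes the conditions.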
29. Buchweitz A, de Azeredo LA, Sanvicente-Vieira B, Metsavaht Cará V, Bianchini Esper N, Soder RB, da Costa JC, Portuguez MW, Franco AR, Grassi-Oliveira R. Violence and Latin-American preadolescents: A study of social brain function and cortisol levels. Dev Sci 2019;22:e12799. [PMID: 30648778] [DOI: 10.1111/desc.12799]
Abstract
The present study investigated exposure to violence and its association with brain function and hair cortisol concentrations in Latin-American preadolescents. Self-reported victimization scores (JVQ-R2), brain imaging (fMRI) indices for a social cognition task (the 'eyes test'), and hair cortisol concentrations were investigated, for the first time, in this population. The eyes test is based on two conditions: attributing mental state or sex to pictures of pairs of eyes (Baron-Cohen, Wheelwright, Hill, Raste, & Plumb, 2001). The results showed an association between higher victimization scores and (a) less activation of posterior temporoparietal right-hemisphere areas, in the mental state condition only (including right temporal sulcus and fusiform gyrus); (b) higher functional connectivity indices for the amygdala and right fusiform gyrus (RFFG) pair of brain regions, also in the mental state condition only; and (c) higher hair cortisol concentrations. The results suggest that more exposure to violence is associated with significant differences in brain function and connectivity. A putative mechanism of less activation in posterior right-hemisphere regions and of synchronized amygdala-RFFG time series was identified in the mental state condition only. The results also suggest measurable effects of exposure to violence in hair cortisol concentrations, which contribute to the reliability of self-reported scores by young adolescents. The findings are discussed in light of the effects of exposure to violence on brain function and on social-cognitive development in the adolescent brain. A video abstract of this article can be viewed at https://www.youtube.com/watch?v=qHcXq7Y9PBk.
Affiliation(s)
- Augusto Buchweitz: PUCRS, Brain Institute of Rio Grande do Sul (BraIns), Porto Alegre, Brazil; PUCRS, School of Medicine, Graduate Program of Medicine, Neurosciences, Porto Alegre, Brazil; PUCRS, School of Health Sciences, Graduate Program of Psychology, Porto Alegre, Brazil
- Lucas Araújo de Azeredo: PUCRS, Brain Institute of Rio Grande do Sul (BraIns), Porto Alegre, Brazil; PUCRS, School of Medicine, Graduate Program of Medicine, Neurosciences, Porto Alegre, Brazil
- Breno Sanvicente-Vieira: PUCRS, Brain Institute of Rio Grande do Sul (BraIns), Porto Alegre, Brazil; PUCRS, School of Health Sciences, Graduate Program of Psychology, Porto Alegre, Brazil
- Valentina Metsavaht Cará: PUCRS, Brain Institute of Rio Grande do Sul (BraIns), Porto Alegre, Brazil; PUCRS, School of Medicine, Graduate Program of Medicine, Neurosciences, Porto Alegre, Brazil
- Nathália Bianchini Esper: PUCRS, Brain Institute of Rio Grande do Sul (BraIns), Porto Alegre, Brazil; PUCRS, School of Medicine, Graduate Program of Medicine, Neurosciences, Porto Alegre, Brazil
- Jaderson Costa da Costa: PUCRS, Brain Institute of Rio Grande do Sul (BraIns), Porto Alegre, Brazil; PUCRS, School of Medicine, Graduate Program of Medicine, Neurosciences, Porto Alegre, Brazil
- Alexandre Rosa Franco: PUCRS, Brain Institute of Rio Grande do Sul (BraIns), Porto Alegre, Brazil; Center for Biomedical Imaging and Neuromodulation, Nathan S. Kline Institute for Psychiatric Research, Orangeburg, NY; Center for the Developing Brain, Child Mind Institute, New York, NY
- Rodrigo Grassi-Oliveira: PUCRS, Brain Institute of Rio Grande do Sul (BraIns), Porto Alegre, Brazil; PUCRS, School of Medicine, Graduate Program of Medicine, Neurosciences, Porto Alegre, Brazil; PUCRS, School of Health Sciences, Graduate Program of Psychology, Porto Alegre, Brazil
30. Soto FA, Vucovich LE, Ashby FG. Linking signal detection theory and encoding models to reveal independent neural representations from neuroimaging data. PLoS Comput Biol 2018;14:e1006470. [PMID: 30273337] [PMCID: PMC6181430] [DOI: 10.1371/journal.pcbi.1006470]
Abstract
Many research questions in visual perception involve determining whether stimulus properties are represented and processed independently. In visual neuroscience, there is great interest in determining whether important object dimensions are represented independently in the brain. For example, theories of face recognition have proposed either completely or partially independent processing of identity and emotional expression. Unfortunately, most previous research has only vaguely defined what is meant by "independence," which hinders its precise quantification and testing. This article develops a new quantitative framework that links signal detection theory from psychophysics and encoding models from computational neuroscience, focusing on a special form of independence defined in the psychophysics literature: perceptual separability. The new theory allowed us, for the first time, to precisely define separability of neural representations and to theoretically link behavioral and brain measures of separability. The framework formally specifies the relation between these different levels of perceptual and brain representation, providing the tools for a truly integrative research approach. In particular, the theory identifies exactly what valid inferences can be made about independent encoding of stimulus dimensions from the results of multivariate analyses of neuroimaging data and psychophysical studies. In addition, commonly used operational tests of independence are re-interpreted within this new theoretical framework, providing insights on their correct use and interpretation. Finally, we apply this new framework to the study of separability of brain representations of face identity and emotional expression (neutral/sad) in a human fMRI study with male and female participants.
A common question in vision research is whether certain stimulus properties, like face identity and expression, are represented and processed independently. We develop a theoretical framework that allowed us, for the first time, to link behavioral and brain measures of independence. Unlike previous approaches, our framework formally specifies the relation between these different levels of perceptual and brain representation, providing the tools for a truly integrative research approach in the study of independence. This allows us to identify what kind of inferences can be made about brain representations from multivariate analyses of neuroimaging data or psychophysical studies. We apply this framework to the study of independent processing of face identity and expression.
Affiliation(s)
- Fabian A. Soto: Department of Psychology, Florida International University, Miami, Florida, United States of America
- Lauren E. Vucovich: Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, California, United States of America
- F. Gregory Ashby: Department of Psychological and Brain Sciences, University of California, Santa Barbara, Santa Barbara, California, United States of America
31. Dima DC, Perry G, Messaritaki E, Zhang J, Singh KD. Spatiotemporal dynamics in human visual cortex rapidly encode the emotional content of faces. Hum Brain Mapp 2018;39:3993-4006. [PMID: 29885055] [PMCID: PMC6175429] [DOI: 10.1002/hbm.24226]
Abstract
Recognizing emotion in faces is important in human interaction and survival, yet existing studies do not paint a consistent picture of the neural representation supporting this task. To address this, we collected magnetoencephalography (MEG) data while participants passively viewed happy, angry and neutral faces. Using time-resolved decoding of sensor-level data, we show that responses to angry faces can be discriminated from happy and neutral faces as early as 90 ms after stimulus onset and only 10 ms later than faces can be discriminated from scrambled stimuli, even in the absence of differences in evoked responses. Time-resolved relevance patterns in source space track expression-related information from the visual cortex (100 ms) to higher-level temporal and frontal areas (200-500 ms). Together, our results point to a system optimised for rapid processing of emotional faces and preferentially tuned to threat, consistent with the important evolutionary role that such a system must have played in the development of human social interactions.
Affiliation(s)
- Diana C. Dima: Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff, CF24 4HQ, United Kingdom
- Gavin Perry: Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff, CF24 4HQ, United Kingdom
- Eirini Messaritaki: BRAIN Unit, School of Medicine, Cardiff University, Cardiff, CF24 4HQ, United Kingdom; Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff, CF24 4HQ, United Kingdom
- Jiaxiang Zhang: Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff, CF24 4HQ, United Kingdom
- Krish D. Singh: Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff, CF24 4HQ, United Kingdom
32. Greening SG, Mitchell DG, Smith FW. Spatially generalizable representations of facial expressions: Decoding across partial face samples. Cortex 2018;101:31-43. [DOI: 10.1016/j.cortex.2017.11.016]
33. Weibert K, Flack TR, Young AW, Andrews TJ. Patterns of neural response in face regions are predicted by low-level image properties. Cortex 2018;103:199-210. [PMID: 29655043] [DOI: 10.1016/j.cortex.2018.03.009]
Abstract
Models of face processing suggest that the neural response in different face regions is selective for higher-level attributes of the face, such as identity and expression. However, it remains unclear to what extent the response in these regions can also be explained by more basic organizing principles. Here, we used functional magnetic resonance imaging multivariate pattern analysis (fMRI-MVPA) to ask whether spatial patterns of response in the core face regions (occipital face area - OFA, fusiform face area - FFA, superior temporal sulcus - STS) can be predicted across different participants by lower level properties of the stimulus. First, we compared the neural response to face identity and viewpoint, by showing images of different identities from different viewpoints. The patterns of neural response in the core face regions were predicted by the viewpoint, but not the identity of the face. Next, we compared the neural response to viewpoint and expression, by showing images with different expressions from different viewpoints. Again, viewpoint, but not expression, predicted patterns of response in face regions. Finally, we show that the effect of viewpoint in both experiments could be explained by changes in low-level image properties. Our results suggest that a key determinant of the neural representation in these core face regions involves lower-level image properties rather than an explicit representation of higher-level attributes in the face. The advantage of a relatively image-based representation is that it can be used flexibly in the perception of faces.
Affiliation(s)
- Katja Weibert: Department of Psychology and York Neuroimaging Centre, University of York, York, United Kingdom
- Tessa R Flack: Department of Psychology and York Neuroimaging Centre, University of York, York, United Kingdom
- Andrew W Young: Department of Psychology and York Neuroimaging Centre, University of York, York, United Kingdom
- Timothy J Andrews: Department of Psychology and York Neuroimaging Centre, University of York, York, United Kingdom
34. Dobs K, Schultz J, Bülthoff I, Gardner JL. Task-dependent enhancement of facial expression and identity representations in human cortex. Neuroimage 2018;172:689-702. [PMID: 29432802] [DOI: 10.1016/j.neuroimage.2018.02.013]
Abstract
What cortical mechanisms allow humans to easily discern the expression or identity of a face? Subjects detected changes in expression or identity of a stream of dynamic faces while we measured BOLD responses from topographically and functionally defined areas throughout the visual hierarchy. Responses in dorsal areas increased during the expression task, whereas responses in ventral areas increased during the identity task, consistent with previous studies. Similar to ventral areas, early visual areas showed increased activity during the identity task. If visual responses are weighted by perceptual mechanisms according to their magnitude, these increased responses would lead to improved attentional selection of the task-appropriate facial aspect. Alternatively, increased responses could be a signature of a sensitivity enhancement mechanism that improves representations of the attended facial aspect. Consistent with the latter sensitivity enhancement mechanism, attending to expression led to enhanced decoding of exemplars of expression both in early visual and dorsal areas relative to attending identity. Similarly, decoding identity exemplars when attending to identity was improved in dorsal and ventral areas. We conclude that attending to expression or identity of dynamic faces is associated with increased selectivity in representations consistent with sensitivity enhancement.
Affiliation(s)
- Katharina Dobs: Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076 Tübingen, Germany; Laboratory for Human Systems Neuroscience, RIKEN Brain Science Institute, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan; Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 43 Vassar Street, Cambridge, MA 02139, USA
- Johannes Schultz: Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076 Tübingen, Germany; Division of Medical Psychology and Department of Psychiatry, University of Bonn, Sigmund Freud Str. 25, 53105 Bonn, Germany
- Isabelle Bülthoff: Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076 Tübingen, Germany
- Justin L Gardner: Laboratory for Human Systems Neuroscience, RIKEN Brain Science Institute, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan; Department of Psychology, Stanford University, 450 Serra Mall, Stanford, CA 94305, USA
35. Yang X, Xu J, Cao L, Li X, Wang P, Wang B, Liu B. Linear Representation of Emotions in Whole Persons by Combining Facial and Bodily Expressions in the Extrastriate Body Area. Front Hum Neurosci 2018;11:653. [PMID: 29375348] [PMCID: PMC5767685] [DOI: 10.3389/fnhum.2017.00653]
Abstract
Our human brain can rapidly and effortlessly perceive a person’s emotional state by integrating the isolated emotional faces and bodies into a whole. Behavioral studies have suggested that the human brain encodes whole persons in a holistic rather than part-based manner. Neuroimaging studies have also shown that body-selective areas prefer whole persons to the sum of their parts. The body-selective areas played a crucial role in representing the relationships between emotions expressed by different parts. However, it remains unclear in which regions the perception of whole persons is represented by a combination of faces and bodies, and to what extent the combination can be influenced by the whole person’s emotions. In the present study, functional magnetic resonance imaging data were collected when participants performed an emotion distinction task. Multi-voxel pattern analysis was conducted to examine how the whole person-evoked responses were associated with the face- and body-evoked responses in several specific brain areas. We found that in the extrastriate body area (EBA), the whole person patterns were most closely correlated with weighted sums of face and body patterns, using different weights for happy expressions but equal weights for angry and fearful ones. These results were unique for the EBA. Our findings tentatively support the idea that the whole person patterns are represented in a part-based manner in the EBA, and modulated by emotions. These data will further our understanding of the neural mechanism underlying perceiving emotional persons.
Affiliation(s)
- Xiaoli Yang: School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Applications, Tianjin University, Tianjin, China
- Junhai Xu: School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Applications, Tianjin University, Tianjin, China
- Linjing Cao: School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Applications, Tianjin University, Tianjin, China
- Xianglin Li: Medical Imaging Research Institute, Binzhou Medical University, Yantai, China
- Peiyuan Wang: Department of Radiology, Yantai Affiliated Hospital of Binzhou Medical University, Yantai, China
- Bin Wang: Medical Imaging Research Institute, Binzhou Medical University, Yantai, China
- Baolin Liu: School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Applications, Tianjin University, Tianjin, China; Research State Key Laboratory of Intelligent Technology and Systems, National Laboratory for Information Science and Technology, Tsinghua University, Beijing, China
36. Manfredi M, Proverbio AM, Gonçalves Donate AP, Macarini Gonçalves Vieira S, Comfort WE, De Araújo Andreoli M, Boggio PS. tDCS application over the STG improves the ability to recognize and appreciate elements involved in humor processing. Exp Brain Res 2017;235:1843-1852. [PMID: 28299412] [DOI: 10.1007/s00221-017-4932-5]
Abstract
The superior temporal gyrus (STG) has been found to play a crucial role in the recognition of actions and facial expressions and may, therefore, be critical for the processing of humorous information. Here we investigated whether tDCS application to the STG would modulate the ability to recognize and appreciate the comic element in serious and comedic situations of misfortune. To this aim, the effects of different types of tDCS stimulation on the STG were analyzed during a task in which the participants were instructed to categorize various misfortunate situations as "comic" or "not comic". Participants underwent three different tDCS conditions: Anodal-right/Cathodal-left; Cathodal-right/Anodal-left; Sham. Images depicting people involved in accidents were grouped into three categories based on the facial expression of the victim: angry or painful (Affective); bewildered and funny (Comic); and images that did not contain the victim's face (No Face). An improvement in mean reaction times in response to both the Comic and No Face stimuli was observed following Anodal-left/Cathodal-right stimulation when compared to sham stimulation. This suggests that this stimulation type reduced the reaction times to socio-emotional complex scenes, regardless of facial expression. The Anodal-right/Cathodal-left stimulation reduced the mean reaction times for Comic stimuli only, suggesting that specifically the right STG may be involved in facial expression recognition and in the appreciation of the comic element in misfortunate situations. These results suggest a functional hemispheric asymmetry in STG response to social stimuli: the left STG might have a role in a general comprehension of social complex situations, while the right STG may be involved in the ability to recognize and integrate specific emotional aspects in a complex scene.
Affiliation(s)
- Mirella Manfredi
- Social and Cognitive Neuroscience Laboratory and Developmental Disorders Program, Center for Health and Biological Sciences, Mackenzie Presbyterian University, Rua Piaui, 181, São Paulo, 01241-001, Brazil.
- Ana Paula Gonçalves Donate
- Social and Cognitive Neuroscience Laboratory and Developmental Disorders Program, Center for Health and Biological Sciences, Mackenzie Presbyterian University, Rua Piaui, 181, São Paulo, 01241-001, Brazil
- Sofia Macarini Gonçalves Vieira
- Social and Cognitive Neuroscience Laboratory and Developmental Disorders Program, Center for Health and Biological Sciences, Mackenzie Presbyterian University, Rua Piaui, 181, São Paulo, 01241-001, Brazil
- William Edgar Comfort
- Social and Cognitive Neuroscience Laboratory and Developmental Disorders Program, Center for Health and Biological Sciences, Mackenzie Presbyterian University, Rua Piaui, 181, São Paulo, 01241-001, Brazil
- Mariana De Araújo Andreoli
- Social and Cognitive Neuroscience Laboratory and Developmental Disorders Program, Center for Health and Biological Sciences, Mackenzie Presbyterian University, Rua Piaui, 181, São Paulo, 01241-001, Brazil
- Paulo Sérgio Boggio
- Social and Cognitive Neuroscience Laboratory and Developmental Disorders Program, Center for Health and Biological Sciences, Mackenzie Presbyterian University, Rua Piaui, 181, São Paulo, 01241-001, Brazil
|
37
|
Abstract
Progress in understanding the relation between brain profiles and emotions is being slowed by the belief in a collection of basic emotional states, with the names: fear, anger, joy, disgust, and sadness, that do not specify the species or age of the experiencing agent, the origin of the state, or the evidence used to infer it. This article evaluates critically the premise that decontextualized emotional words refer to natural kinds. It also suggests that investigators set aside the currently popular words and search for relations, in humans and animals, between patterns of measures to varied incentives presented in distinctive contexts.
Affiliation(s)
- Jerome Kagan
- Department of Psychology, Harvard University, USA
|
38
|
Objects Categorization on fMRI Data: Evidences for Feature-Map Representation of Objects in Human Brain. Brain Inform 2017. [DOI: 10.1007/978-3-319-70772-3_10] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022] Open
|
39
|
Müller-Bardorff M, Schulz C, Peterburs J, Bruchmann M, Mothes-Lasch M, Miltner W, Straube T. Effects of emotional intensity under perceptual load: An event-related potentials (ERPs) study. Biol Psychol 2016; 117:141-149. [DOI: 10.1016/j.biopsycho.2016.03.006] [Citation(s) in RCA: 23] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/18/2015] [Revised: 03/13/2016] [Accepted: 03/14/2016] [Indexed: 10/22/2022]
|