1. Nicholson AA, Lieberman JM, Hosseini-Kamkar N, Eckstrand K, Rabellino D, Kearney B, Steyrl D, Narikuzhy S, Densmore M, Théberge J, Hosseiny F, Lanius RA. Exploring the impact of biological sex on intrinsic connectivity networks in PTSD: A data-driven approach. Prog Neuropsychopharmacol Biol Psychiatry 2024; 136:111180. PMID: 39447688. DOI: 10.1016/j.pnpbp.2024.111180.
Abstract
INTRODUCTION Sex as a biological variable (SABV) may help to account for the differential development and expression of post-traumatic stress disorder (PTSD) symptoms among trauma-exposed males and females. Here, we investigate the impact of SABV on PTSD-related neural alterations in resting-state functional connectivity (rsFC) within three core intrinsic connectivity networks (ICNs): the salience network (SN), central executive network (CEN), and default mode network (DMN). METHODS Using an independent component analysis (ICA), we compared rsFC of the SN, CEN, and DMN between males and females, with and without PTSD (n = 47 females with PTSD, n = 34 males with PTSD, n = 36 healthy control females, n = 20 healthy control males) via full factorial ANCOVAs. Additionally, linear regression analyses were conducted with clinical variables (i.e., PTSD and depression symptoms, childhood trauma scores) to determine intrinsic network connectivity characteristics specific to SABV. Furthermore, we utilized machine learning classification models to predict the biological sex and PTSD diagnosis of individual participants based on intrinsic network activity patterns. RESULTS Our findings revealed differential network connectivity patterns based on SABV and PTSD diagnosis. Males with PTSD exhibited increased intra-SN (i.e., SN-anterior insula) rsFC and increased DMN-right superior parietal lobule/precuneus/superior occipital gyrus rsFC as compared to females with PTSD. There were also differential network connectivity patterns for comparisons between the PTSD and healthy control groups for males and females, separately. We did not observe significant correlations between clinical measures of interest and brain region clusters that displayed significant between-group differences as a function of biological sex, further reinforcing that the SABV analyses are likely not confounded by these variables. Furthermore, machine learning classification models accurately predicted biological sex and PTSD diagnosis among novel/unseen participants based on ICN activation patterns. CONCLUSION This study offers new insight into the impact of SABV on PTSD-related ICN alterations using data-driven methods. Our findings contribute to defining neurobiological markers of PTSD among females and males and may offer guidance for differential sex-related treatment needs.
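The cross-validated classification step described in this abstract follows a standard pattern: fit on some participants, evaluate on held-out ones. Below is a minimal sketch of such a pipeline; all data shapes, features, and labels are hypothetical placeholders, not the authors' implementation (which the abstract does not specify):

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# Hypothetical stand-in data: 137 participants x 50 network-connectivity features
X = rng.normal(size=(137, 50))        # e.g., SN/CEN/DMN rsFC values per subject
y = rng.integers(0, 2, size=137)      # e.g., 0 = male, 1 = female (or PTSD vs. control)

# Cross-validation keeps test participants "unseen" during training
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
scores = cross_val_score(clf, X, y, cv=StratifiedKFold(5, shuffle=True, random_state=0))
print(f"Mean held-out accuracy: {scores.mean():.2f}")
```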
Affiliation(s)
- Andrew A Nicholson
- The Institute of Mental Health Research, University of Ottawa, Royal Ottawa Hospital, Ontario, Canada; School of Psychology, University of Ottawa, Ottawa, Ontario, Canada; Atlas Institute for Veterans and Families, Ottawa, Ontario, Canada; Department of Cognition, Emotion, and Methods in Psychology, University of Vienna, Vienna, Austria; Department of Medical Biophysics, Western University, London, Ontario, Canada; Department of Psychiatry and Behavioural Neurosciences, McMaster University, Hamilton, Ontario, Canada.
- Jonathan M Lieberman
- Atlas Institute for Veterans and Families, Ottawa, Ontario, Canada; Department of Psychiatry and Behavioural Neurosciences, McMaster University, Hamilton, Ontario, Canada; Imaging, Lawson Health Research Institute, London, Ontario, Canada
- Niki Hosseini-Kamkar
- The Institute of Mental Health Research, University of Ottawa, Royal Ottawa Hospital, Ontario, Canada; Atlas Institute for Veterans and Families, Ottawa, Ontario, Canada
- Kristen Eckstrand
- Department of Psychiatry, University of Pittsburgh, Pittsburgh, PA, USA
- Daniela Rabellino
- Imaging, Lawson Health Research Institute, London, Ontario, Canada; Department of Neuroscience, Western University, London, Ontario, Canada
- Breanne Kearney
- Department of Neuroscience, Western University, London, Ontario, Canada
- David Steyrl
- Department of Cognition, Emotion, and Methods in Psychology, University of Vienna, Vienna, Austria
- Sandhya Narikuzhy
- Atlas Institute for Veterans and Families, Ottawa, Ontario, Canada; Department of Psychiatry and Behavioural Neurosciences, McMaster University, Hamilton, Ontario, Canada
- Maria Densmore
- Imaging, Lawson Health Research Institute, London, Ontario, Canada; Department of Psychiatry, Western University, London, Ontario, Canada
- Jean Théberge
- Department of Medical Biophysics, Western University, London, Ontario, Canada; Imaging, Lawson Health Research Institute, London, Ontario, Canada; Department of Psychiatry, Western University, London, Ontario, Canada; Department of Diagnostic Imaging, St. Joseph's Healthcare, London, Ontario, Canada
- Fardous Hosseiny
- Atlas Institute for Veterans and Families, Ottawa, Ontario, Canada
- Ruth A Lanius
- Atlas Institute for Veterans and Families, Ottawa, Ontario, Canada; Imaging, Lawson Health Research Institute, London, Ontario, Canada; Department of Neuroscience, Western University, London, Ontario, Canada; Department of Psychiatry, Western University, London, Ontario, Canada
2. Zhang Z, Chen T, Liu Y, Wang C, Zhao K, Liu CH, Fu X. Decoding the temporal representation of facial expression in face-selective regions. Neuroimage 2023; 283:120442. PMID: 37926217. DOI: 10.1016/j.neuroimage.2023.120442.
Abstract
The ability of humans to discern facial expressions in a timely manner typically relies on distributed face-selective regions for rapid neural computations. To study the time course of this process in regions of interest, we used magnetoencephalography (MEG) to measure neural responses while participants viewed facial expressions depicting seven types of emotions (happiness, sadness, anger, disgust, fear, surprise, and neutral). Analysis of the time-resolved decoding of neural responses in face-selective sources within the inferior parietal cortex (IP-faces), lateral occipital cortex (LO-faces), fusiform gyrus (FG-faces), and posterior superior temporal sulcus (pSTS-faces) revealed that facial expressions were successfully classified starting from ∼100 to 150 ms after stimulus onset. Interestingly, the LO-faces and IP-faces showed greater accuracy than FG-faces and pSTS-faces. To examine the nature of the information processed in these face-selective regions, we entered the facial expression stimuli into a convolutional neural network (CNN) to perform similarity analyses against human neural responses. The results showed that neural responses in the LO-faces and IP-faces, starting ∼100 ms after the stimuli, were more strongly correlated with deep representations of emotional categories than with image-level information from the input images. Additionally, we observed a relationship between the behavioral performance and the neural responses in the LO-faces and IP-faces, but not in the FG-faces and pSTS-faces. Together, these results provide a comprehensive picture of the time course and nature of information involved in facial expression discrimination across multiple face-selective regions, which advances our understanding of how the human brain processes facial expressions.
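Time-resolved decoding of the kind described here typically trains and tests a separate classifier at every time point, tracing when category information becomes available. A minimal sketch under assumed data dimensions (trial counts, source counts, and classifier choice below are illustrative, not the authors'):

```python
import numpy as np
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_trials, n_sources, n_times = 280, 40, 120          # hypothetical dimensions
X = rng.normal(size=(n_trials, n_sources, n_times))  # source-space MEG data
y = rng.integers(0, 7, size=n_trials)                # 7 expression categories

# Train and test a separate classifier at every time point (chance = 1/7)
clf = make_pipeline(StandardScaler(), LinearSVC())
accuracy = np.array([cross_val_score(clf, X[:, :, t], y, cv=5).mean()
                     for t in range(n_times)])
# The first time point where accuracy reliably exceeds chance estimates decoding onset
```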
Affiliation(s)
- Zhihao Zhang
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Tong Chen
- Chongqing Key Laboratory of Non-Linear Circuit and Intelligent Information Processing, Southwest University, Chongqing 400715, China; Chongqing Key Laboratory of Artificial Intelligence and Service Robot Control Technology, Chongqing 400715, China
- Ye Liu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China
- Chongyang Wang
- Department of Computer Science and Technology, Tsinghua University, Beijing 100084, China
- Ke Zhao
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China.
- Chang Hong Liu
- Department of Psychology, Bournemouth University, Dorset, United Kingdom
- Xiaolan Fu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing 100049, China.
3. Pitcher D, Sliwinska MW, Kaiser D. TMS disruption of the lateral prefrontal cortex increases neural activity in the default mode network when naming facial expressions. Soc Cogn Affect Neurosci 2023; 18:nsad072. PMID: 38048419. PMCID: PMC10695328. DOI: 10.1093/scan/nsad072.
Abstract
Recognizing facial expressions depends on multiple brain networks specialized for different cognitive functions. In the current study, participants (N = 20) were scanned using functional magnetic resonance imaging (fMRI) while they performed a covert facial expression naming task. Immediately prior to scanning, theta-burst transcranial magnetic stimulation (TMS) was delivered over the right lateral prefrontal cortex (PFC) or over the vertex control site. A group whole-brain analysis revealed that TMS induced opposite effects in the neural responses across different brain networks. Stimulation of the right PFC (compared to stimulation of the vertex) decreased neural activity in the left lateral PFC but increased neural activity in three nodes of the default mode network (DMN): the right superior frontal gyrus, the right angular gyrus, and the bilateral middle cingulate gyrus. A region of interest analysis showed that TMS delivered over the right PFC reduced neural activity across all functionally localised face areas (including in the PFC) compared to TMS delivered over the vertex. These results suggest that visually recognizing facial expressions depends on the dynamic interaction of the face-processing network and the DMN. Our study also demonstrates the utility of combined TMS/fMRI studies for revealing the dynamic interactions between different functional brain networks.
Affiliation(s)
- David Pitcher
- Department of Psychology, University of York, Heslington, York YO105DD, UK
- Daniel Kaiser
- Mathematical Institute, Department of Mathematics and Computer Science, Physics, Geography, Justus-Liebig-Universität Gießen, Gießen 35392, Germany
- Center for Mind, Brain and Behaviour, Philipps-Universität Marburg, and Justus-Liebig-Universität Gießen, Marburg 35032, Germany
4. Karl V, Rohe T. Structural brain changes in emotion recognition across the adult lifespan. Soc Cogn Affect Neurosci 2023; 18:nsad052. PMID: 37769357. PMCID: PMC10627307. DOI: 10.1093/scan/nsad052.
Abstract
Emotion recognition (ER) declines with increasing age, yet little is known about whether this decline reflects structural brain changes conveyed by differential atrophy. To investigate whether age-related ER decline correlates with reduced grey matter (GM) volume in emotion-related brain regions, we conducted a voxel-based morphometry analysis using data from the Human Connectome Project-Aging (N = 238, aged 36-87) in which facial ER was tested. We expected to find brain regions that show an additive or super-additive age-related change in GM volume, indicating atrophic processes that reduce ER in older adults. The data did not support our hypotheses after correction for multiple comparisons. Exploratory analyses with a threshold of P < 0.001 (uncorrected), however, suggested that relationships between GM volume and age-related general ER may be widely distributed across the cortex. Yet, small effect sizes imply that only a small fraction of the decline of ER in older adults can be attributed to local GM volume changes in single voxels or their multivariate patterns.
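The voxel-based morphometry test described here amounts to a voxel-wise regression of grey-matter volume on age, ER performance, and their interaction. A toy sketch with synthetic data standing in for the HCP-Aging sample (dimensions and design are assumptions for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub, n_vox = 238, 5000                 # hypothetical voxel count
age = rng.uniform(36, 87, size=n_sub)
er = rng.normal(size=n_sub)              # emotion-recognition score per subject
gm = rng.normal(size=(n_sub, n_vox))     # grey-matter volume per voxel (toy data)

# Voxel-wise GLM: GM ~ intercept + age + ER + age x ER
age_c = age - age.mean()                 # centre age before forming the interaction
X = np.column_stack([np.ones(n_sub), age_c, er, age_c * er])
beta, *_ = np.linalg.lstsq(X, gm, rcond=None)
# beta[3] holds the age x ER interaction at every voxel; a real VBM analysis
# would threshold these maps with multiple-comparison correction
```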
Affiliation(s)
- Valerie Karl
- Institute of Psychology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen 91054, Germany
- NORMENT, Division of Mental Health and Addiction, Oslo University Hospital & Institute of Clinical Medicine, University of Oslo, Oslo 0424, Norway
- PROMENTA Research Center, Department of Psychology, University of Oslo, Oslo 0373, Norway
- Tim Rohe
- Institute of Psychology, Friedrich-Alexander-Universität Erlangen-Nürnberg, Erlangen 91054, Germany
5. Qian H, Shen Z, Zhou D, Huang Y. Intratumoral and peritumoral radiomics model based on abdominal ultrasound for predicting Ki-67 expression in patients with hepatocellular cancer. Front Oncol 2023; 13:1209111. PMID: 37711208. PMCID: PMC10498123. DOI: 10.3389/fonc.2023.1209111.
Abstract
Background Hepatocellular cancer (HCC) is one of the most common tumors worldwide, and Ki-67 is highly important in the assessment of HCC. Our study aimed to evaluate the value of ultrasound radiomics based on intratumoral and peritumoral tissues in predicting Ki-67 expression levels in patients with HCC. Methods We conducted a retrospective analysis of ultrasonic and clinical data from 118 patients diagnosed with HCC through histopathological examination of surgical specimens in our hospital between September 2019 and January 2023. Radiomics features were extracted from ultrasound images of both intratumoral and peritumoral regions. To select the optimal features, we utilized the t-test and the least absolute shrinkage and selection operator (LASSO). We compared the area under the curve (AUC) values to determine the most effective modeling method. Subsequently, we developed four models: the intratumoral model, the peritumoral model, combined model #1, and combined model #2. Results Of the 118 patients, 64 were confirmed to have high Ki-67 expression while 54 were confirmed to have low Ki-67 expression. The AUC of the intratumoral model was 0.796 (0.649-0.942), and the AUC of the peritumoral model was 0.772 (0.619-0.926). Furthermore, combined model #1 yielded an AUC of 0.870 (0.751-0.989), and the AUC of combined model #2 was 0.762 (0.605-0.918). Among these models, combined model #1 showed the best performance in terms of AUC, accuracy, F1-score, and decision curve analysis (DCA). Conclusion We present an ultrasound radiomics model that utilizes both intratumoral and peritumoral tissue information to accurately predict Ki-67 expression in HCC patients. Incorporating both regions in a principled way can enhance the diagnostic performance of the prediction model; simply adding both regions to the region of interest (ROI) without careful consideration, however, is not sufficient.
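The LASSO-then-AUC workflow sketched in the Methods can be illustrated as follows; the feature matrix, label construction, and model choices here are hypothetical stand-ins, not the authors' pipeline:

```python
import numpy as np
from sklearn.linear_model import LassoCV, LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)
# Hypothetical stand-in: 118 patients, 200 intratumoral + peritumoral features
X = rng.normal(size=(118, 200))
y = (X[:, :5].sum(axis=1) + rng.normal(size=118) > 0).astype(int)  # toy Ki-67 label

X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)
keep = np.flatnonzero(LassoCV(cv=5).fit(X_tr, y_tr).coef_)  # LASSO feature selection
clf = LogisticRegression(max_iter=1000).fit(X_tr[:, keep], y_tr)
auc = roc_auc_score(y_te, clf.predict_proba(X_te[:, keep])[:, 1])
print(f"Held-out AUC: {auc:.3f}")
```

Comparing such AUCs across the intratumoral-only, peritumoral-only, and combined feature sets is what distinguishes the four models described above.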
Affiliation(s)
- Hongwei Qian
- Department of Hepatobiliary and Pancreatic Surgery, Shaoxing People’s Hospital, Shaoxing, China
- Shaoxing Key Laboratory of Minimally Invasive Abdominal Surgery and Precise Treatment of Tumor, Shaoxing, China
- Zhihong Shen
- Department of Hepatobiliary and Pancreatic Surgery, Shaoxing People’s Hospital, Shaoxing, China
- Shaoxing Key Laboratory of Minimally Invasive Abdominal Surgery and Precise Treatment of Tumor, Shaoxing, China
- Difan Zhou
- Department of Hepatobiliary and Pancreatic Surgery, Shaoxing People’s Hospital, Shaoxing, China
- Shaoxing Key Laboratory of Minimally Invasive Abdominal Surgery and Precise Treatment of Tumor, Shaoxing, China
- Yanhua Huang
- Department of Ultrasound, Shaoxing People’s Hospital, Shaoxing, China
6. Törnqvist H, Höller H, Vsetecka K, Hoehl S, Kujala MV. Matters of development and experience: Evaluation of dog and human emotional expressions by children and adults. PLoS One 2023; 18:e0288137. PMID: 37494304. PMCID: PMC10370749. DOI: 10.1371/journal.pone.0288137.
Abstract
Emotional facial expressions are an important part of cross-species social communication, yet the factors affecting human recognition of dog emotions have received limited attention. Here, we characterize the recognition and evaluation of dog and human emotional facial expressions by 4- and 6-year-old children and adult participants, as well as the effect of dog experience on emotion recognition. Participants rated the happiness, anger, valence, and arousal from happy, aggressive, and neutral facial images of dogs and humans. Both respondent age and experience influenced dog emotion recognition and ratings. Aggressive dog faces were rated correctly more often by adults than by 4-year-olds regardless of dog experience, whereas the 6-year-olds' and adults' performances did not differ. Happy human and dog expressions were recognized equally well by all groups. Children rated aggressive dogs as more positive and lower in arousal than adults did, and participants without dog experience rated aggressive dogs as more positive than those with dog experience. Children also rated aggressive dogs as more positive and lower in arousal than aggressive humans. The results confirm that recognition of dog emotions, especially aggression, increases with age, which can be related to general dog experience and to the maturation of brain structures involved in facial emotion recognition.
Affiliation(s)
- Heini Törnqvist
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
- Hanna Höller
- Department of Developmental and Educational Psychology, University of Vienna, Vienna, Austria
- Kerstin Vsetecka
- Department of Developmental and Educational Psychology, University of Vienna, Vienna, Austria
- Stefanie Hoehl
- Department of Developmental and Educational Psychology, University of Vienna, Vienna, Austria
- Miiamaaria V Kujala
- Department of Psychology, University of Jyväskylä, Jyväskylä, Finland
- Faculty of Veterinary Medicine, University of Helsinki, Helsinki, Finland
- Department of Neuroscience and Biomedical Engineering, Aalto University, Espoo, Finland
7. Santavirta S, Karjalainen T, Nazari-Farsani S, Hudson M, Putkinen V, Seppälä K, Sun L, Glerean E, Hirvonen J, Karlsson HK, Nummenmaa L. Functional organization of social perception in the human brain. Neuroimage 2023; 272:120025. PMID: 36958619. PMCID: PMC10112277. DOI: 10.1016/j.neuroimage.2023.120025.
Abstract
Humans rapidly extract diverse and complex information from ongoing social interactions, but the perceptual and neural organization of the different aspects of social perception remains unresolved. We showed short movie clips with rich social content to 97 healthy participants while their haemodynamic brain activity was measured with fMRI. The clips were annotated moment-to-moment for a large set of social features, 45 of which were evaluated reliably between annotators. Cluster analysis of the social features revealed that 13 dimensions were sufficient for describing the social perceptual space. Three different analysis methods were used to map the social perceptual processes in the human brain. Regression analysis mapped regional neural response profiles for different social dimensions. Multivariate pattern analysis then established the spatial specificity of the responses, and intersubject correlation analysis connected social perceptual processing with neural synchronization. The results revealed a gradient in the processing of social information in the brain. Posterior temporal and occipital regions were broadly tuned to most social dimensions, and the classifier revealed that these responses showed spatial specificity for social dimensions; in contrast, Heschl's gyri and parietal areas were also broadly associated with different social signals, yet the spatial patterns of responses did not differentiate between social dimensions. Frontal and subcortical regions responded only to a limited number of social dimensions, and their spatial response patterns likewise did not differentiate between social dimensions. Altogether, these results highlight the distributed nature of social processing in the brain.
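Of the three methods named here, intersubject correlation is the simplest to sketch: each subject's regional time course is correlated with the mean time course of all other subjects. A toy example with synthetic data (dimensions assumed for illustration):

```python
import numpy as np

rng = np.random.default_rng(0)
n_sub, n_time = 97, 400                 # hypothetical scan length in volumes
ts = rng.normal(size=(n_sub, n_time))   # one region's BOLD time course per subject

# Leave-one-out intersubject correlation: each subject vs. the mean of the others
isc = np.empty(n_sub)
for s in range(n_sub):
    others = ts[np.arange(n_sub) != s].mean(axis=0)
    isc[s] = np.corrcoef(ts[s], others)[0, 1]
print(f"Median ISC: {np.median(isc):.3f}")
```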
Affiliation(s)
- Severi Santavirta
- Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland.
- Tomi Karjalainen
- Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland
- Sanaz Nazari-Farsani
- Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland
- Matthew Hudson
- Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland; School of Psychology, University of Plymouth, Plymouth, United Kingdom
- Vesa Putkinen
- Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland
- Kerttu Seppälä
- Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland; Department of Medical Physics, Turku University Hospital, Turku, Finland
- Lihua Sun
- Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland; Department of Nuclear Medicine, Pudong Hospital, Fudan University, Shanghai, China
- Enrico Glerean
- Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, Finland
- Jussi Hirvonen
- Department of Radiology, University of Turku and Turku University Hospital, Turku, Finland; Medical Imaging Center, Department of Radiology, Tampere University and Tampere University Hospital, Tampere, Finland
- Henry K Karlsson
- Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland
- Lauri Nummenmaa
- Turku PET Centre, University of Turku, Turku, Finland; Turku PET Centre, Turku University Hospital, Turku, Finland; Department of Psychology, University of Turku, Turku, Finland
8. Tanaka T, Okamoto N, Kida I, Haruno M. The initial decrease in 7T-BOLD signals detected by hyperalignment contains information to decode facial expressions. Neuroimage 2022; 262:119537. DOI: 10.1016/j.neuroimage.2022.119537.
9. Li Y, Zhang M, Liu S, Luo W. EEG decoding of multidimensional information from emotional faces. Neuroimage 2022; 258:119374. PMID: 35700944. DOI: 10.1016/j.neuroimage.2022.119374.
Abstract
Humans can detect and recognize faces quickly, but there has been little research on the temporal dynamics with which the different dimensions of face information are extracted. The present study aimed to investigate the time course of neural responses to the representation of different dimensions of face information, such as age, gender, emotion, and identity. We used support vector machine decoding to obtain representational dissimilarity matrices of event-related potential responses to different faces for each subject over time. In addition, we performed representational similarity analysis with model representational dissimilarity matrices that contained different dimensions of face information. Three significant findings were observed. First, the extraction of facial emotion occurred before that of facial identity and lasted for a long time, an effect specific to the right frontal region. Second, arousal was preferentially extracted before valence during the processing of facial emotional information. Third, different dimensions of face information exhibited representational stability during different periods. In conclusion, these findings reveal the precise temporal dynamics of multidimensional information processing in faces and provide powerful support for computational models of emotional face perception.
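The decoding-then-RSA workflow described here can be sketched as follows: pairwise classifier accuracies form a neural representational dissimilarity matrix (RDM), which is then rank-correlated with a model RDM. All dimensions and data below are hypothetical placeholders; repeating the procedure at each time point yields the temporal profile:

```python
import numpy as np
from scipy.spatial.distance import squareform
from scipy.stats import spearmanr
from sklearn.model_selection import cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_cond, n_trials, n_chan = 8, 30, 64             # hypothetical conditions/trials/channels
X = rng.normal(size=(n_cond, n_trials, n_chan))  # ERP patterns at one time point

# Pairwise decoding accuracy serves as the neural dissimilarity between conditions
rdm = np.zeros((n_cond, n_cond))
for i in range(n_cond):
    for j in range(i + 1, n_cond):
        data = np.vstack([X[i], X[j]])
        labels = np.r_[np.zeros(n_trials), np.ones(n_trials)]
        rdm[i, j] = rdm[j, i] = cross_val_score(LinearSVC(), data, labels, cv=5).mean()

# RSA: rank-correlate the neural RDM with a model RDM (e.g., emotion vs. identity)
model_rdm = rng.random((n_cond, n_cond))
model_rdm = (model_rdm + model_rdm.T) / 2
rho, _ = spearmanr(squareform(rdm), squareform(model_rdm, checks=False))
```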
Affiliation(s)
- Yiwen Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Mingming Zhang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Shuaicheng Liu
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China
- Wenbo Luo
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, China.
10. Izumika R, Cabeza R, Tsukiura T. Neural Mechanisms of Perceiving and Subsequently Recollecting Emotional Facial Expressions in Young and Older Adults. J Cogn Neurosci 2022; 34:1183-1204. PMID: 35468212. DOI: 10.1162/jocn_a_01851.
Abstract
It is known that emotional facial expressions modulate the perception and subsequent recollection of faces and that aging alters these modulatory effects. Yet, the underlying neural mechanisms are not well understood, and they were the focus of the current fMRI study. We scanned healthy young and older adults while perceiving happy, neutral, or angry faces paired with names. Participants were then provided with the names of the faces and asked to recall the facial expression of each face. fMRI analyses focused on the fusiform face area (FFA), the posterior superior temporal sulcus (pSTS), the OFC, the amygdala, and the hippocampus (HC). Univariate activity, multivariate pattern (MVPA), and functional connectivity analyses were performed. The study yielded two main sets of findings. First, in pSTS and the amygdala, univariate activity and MVPA discrimination during the processing of facial expressions were similar in young and older adults, whereas in FFA and OFC, MVPA discriminated facial expressions less accurately in older than young adults. These findings suggest that facial expression representations in FFA and OFC reflect age-related dedifferentiation and positivity effect. Second, HC-OFC connectivity showed subsequent memory effects (SMEs) for happy expressions in both age groups, HC-FFA connectivity exhibited SMEs for happy and neutral expressions in young adults, and HC-pSTS interactions displayed SMEs for happy expressions in older adults. These results could be related to compensatory mechanisms and positivity effects in older adults. Taken together, the results clarify the effects of aging on the neural mechanisms in perceiving and encoding facial expressions.
11. Research on Voice-Driven Facial Expression Film and Television Animation Based on Compromised Node Detection in Wireless Sensor Networks. Comput Intell Neurosci 2022; 2022:8563818. PMID: 35111214. PMCID: PMC8803464. DOI: 10.1155/2022/8563818.
Abstract
With the continuing development of the social economy, film and television animation has become increasingly popular in meeting people's cultural needs. Emerging technologies make it possible to drive AI facial expressions from the corresponding voice, but ensuring synchronization between the speech signal and the facial expression remains one of the core difficulties in animation transformation. Relying on compromised-node detection in wireless sensor networks, this paper organizes the synchronous traffic flow between speech signals and facial expressions, finds the pattern distribution of facial motion based on unsupervised classification, realizes training and learning through neural networks, and uses the prosodic distribution of speech features to achieve a one-to-one mapping to facial expressions. The approach avoids robustness defects in speech recognition, improves the learning ability of the speech recognizer, and realizes voice-driven analysis of facial expression animation for film and television. Simulation results show that compromised-node detection in wireless sensor networks is effective and can support the analysis of speech-driven facial expression animation.
12.
Abstract
Face perception is a socially important but complex process with many stages and many facets. There is substantial evidence from many sources that it involves a large extent of the temporal lobe, from the ventral occipitotemporal cortex and superior temporal sulci to anterior temporal regions. While early human neuroimaging work suggested a core face network consisting of the occipital face area, fusiform face area, and posterior superior temporal sulcus, studies in both humans and monkeys show a system of face patches stretching from posterior to anterior in both the superior temporal sulcus and inferotemporal cortex. Sophisticated techniques such as fMRI adaptation have shown that these face-activated regions show responses that have many of the attributes of human face processing. Lesions of some of these regions in humans lead to variants of prosopagnosia, the inability to recognize the identity of a face. Lesion, imaging, and electrophysiologic data all suggest that there is a segregation between identity and expression processing, though some suggest this may be better characterized as a distinction between static and dynamic facial information.
Affiliation(s)
- Jason J S Barton
- Division of Neuro-ophthalmology, Departments of Medicine (Neurology), Ophthalmology and Visual Sciences, Psychology, University of British Columbia, Vancouver, BC, Canada.
13. Murray T, O'Brien J, Sagiv N, Garrido L. The role of stimulus-based cues and conceptual information in processing facial expressions of emotion. Cortex 2021; 144:109-132. PMID: 34666297. DOI: 10.1016/j.cortex.2021.08.007.
Abstract
Face shape and surface textures are two important cues that aid in the perception of facial expressions of emotion. Additionally, this perception is also influenced by high-level emotion concepts. Across two studies, we use representational similarity analysis to investigate the relative roles of shape, surface, and conceptual information in the perception, categorisation, and neural representation of facial expressions. In Study 1, 50 participants completed a perceptual task designed to measure the perceptual similarity of expression pairs, and a categorical task designed to measure the confusability between expression pairs when assigning emotion labels to a face. We used representational similarity analysis and constructed three models of the similarities between emotions using distinct information. Two models were based on stimulus-based cues (face shapes and surface textures) and one model was based on emotion concepts. Using multiple linear regression, we found that behaviour during both tasks was related to the similarity of emotion concepts. The model based on face shapes was more strongly related to behaviour in the perceptual task than in the categorical task, and the model based on surface textures was more strongly related to behaviour in the categorical task than in the perceptual task. In Study 2, 30 participants viewed facial expressions while undergoing fMRI, allowing for the measurement of brain representational geometries of facial expressions of emotion in three core face-responsive regions (the Fusiform Face Area, Occipital Face Area, and Superior Temporal Sulcus) and a region involved in theory of mind (Medial Prefrontal Cortex). Across all four regions, the representational distances between facial expression pairs were related to the similarities of emotion concepts, but not to either of the stimulus-based cues. Together, these results highlight the important top-down influence of high-level emotion concepts both in behavioural tasks and in the neural representation of facial expressions.
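The multiple-regression RSA described in Study 1 regresses a vectorised behavioural or neural RDM on the three model RDMs simultaneously, so each model's coefficient reflects its unique contribution. A toy sketch with synthetic dissimilarities (all values illustrative only):

```python
import numpy as np
from sklearn.linear_model import LinearRegression

rng = np.random.default_rng(0)
n_pairs = 15                             # e.g., 6 expressions -> 15 pairs
shape = rng.random(n_pairs)              # hypothetical model dissimilarities
surface = rng.random(n_pairs)
concept = rng.random(n_pairs)
# Toy target: a vectorised behavioural or neural RDM driven mostly by concepts
target = 0.7 * concept + 0.2 * shape + 0.1 * rng.random(n_pairs)

X = np.column_stack([shape, surface, concept])
fit = LinearRegression().fit(X, target)
# fit.coef_ estimates each model's unique contribution to the target RDM
print(dict(zip(["shape", "surface", "concept"], fit.coef_.round(2))))
```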
Affiliation(s)
- Thomas Murray
- Psychology Department, School of Biological and Behavioural Sciences, Queen Mary University London, United Kingdom.
- Justin O'Brien
- Centre for Cognitive Neuroscience, Department of Life Sciences, Brunel University London, United Kingdom
- Noam Sagiv
- Centre for Cognitive Neuroscience, Department of Life Sciences, Brunel University London, United Kingdom
- Lúcia Garrido
- Department of Psychology, City, University of London, United Kingdom
14. Matt S, Dzhelyova M, Maillard L, Lighezzolo-Alnot J, Rossion B, Caharel S. The rapid and automatic categorization of facial expression changes in highly variable natural images. Cortex 2021; 144:168-184. PMID: 34666300. DOI: 10.1016/j.cortex.2021.08.005.
Abstract
Emotional expressions are quickly and automatically read from human faces under natural viewing conditions. Yet, categorization of facial expressions is typically measured in experimental contexts with homogenous sets of face stimuli. Here we evaluated how the 6 basic facial emotions (Fear, Disgust, Happiness, Anger, Surprise, or Sadness) can be rapidly and automatically categorized with faces varying in head orientation, lighting condition, identity, gender, age, ethnic origin, and background context. High-density electroencephalography was recorded in 17 participants viewing 50 s sequences with natural variable images of neutral-expression faces alternating at a 6 Hz rate. Every five stimuli (1.2 Hz), variable natural images of one of the six basic expressions were presented. Despite the wide physical variability across images, a significant F/5 = 1.2 Hz response and its harmonics (e.g., 2F/5 = 2.4 Hz, etc.) were observed for all expression changes at the group level and in every individual participant. Facial categorization responses were found mainly over occipito-temporal sites, with distinct hemispheric lateralization and cortical topographies according to the different expressions. Specifically, a stronger response was found for Sadness categorization, especially over the left hemisphere, as compared to Fear and Happiness, together with a right-hemispheric dominance for categorization of Fearful faces. Importantly, these differences were specific to upright faces, ruling out the contribution of low-level visual cues. Overall, these observations point to robust, rapid, and automatic facial expression categorization processes in the human brain.
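The frequency-tagging logic here is that an expression-change response concentrates spectral power at exactly 1.2 Hz and its harmonics, which can be quantified against neighbouring frequency bins. A minimal sketch of that computation, with a synthetic signal standing in for real EEG (sampling rate and noise level are assumptions):

```python
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 512.0, 50.0                    # hypothetical sampling rate; 50 s sequence
t = np.arange(0, dur, 1 / fs)
# Toy signal: a 1.2 Hz expression-change response buried in noise
eeg = np.sin(2 * np.pi * 1.2 * t) + rng.normal(scale=5.0, size=t.size)

spec = np.abs(np.fft.rfft(eeg))
freqs = np.fft.rfftfreq(t.size, 1 / fs)

def snr_at(f, n_nb=10):
    """Amplitude at f divided by the mean of neighbouring bins (adjacent bins skipped)."""
    i = int(np.argmin(np.abs(freqs - f)))
    neighbours = np.r_[spec[i - n_nb:i - 1], spec[i + 2:i + n_nb + 1]]
    return spec[i] / neighbours.mean()

for h in (1.2, 2.4, 3.6):                # F/5 and its harmonics
    print(f"{h:.1f} Hz SNR: {snr_at(h):.1f}")
```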
Affiliation(s)
- Stéphanie Matt
- Université de Lorraine, 2LPN, Nancy, France; Université de Lorraine, Laboratoire INTERPSY, Nancy, France.
- Milena Dzhelyova
- Université Catholique de Louvain, Institute of Research in Psychological Science, Louvain-la-Neuve, Belgium.
- Louis Maillard
- Université de Lorraine, CNRS, CRAN, Nancy, France; Université de Lorraine, CHRU-Nancy, Service de Neurologie, Nancy, France.
- Bruno Rossion
- Université Catholique de Louvain, Institute of Research in Psychological Science, Louvain-la-Neuve, Belgium; Université de Lorraine, CNRS, CRAN, Nancy, France; Université de Lorraine, CHRU-Nancy, Service de Neurologie, Nancy, France.
- Stéphanie Caharel
- Université de Lorraine, 2LPN, Nancy, France; Institut Universitaire de France, Paris, France.
15. Liu M, Liu CH, Zheng S, Zhao K, Fu X. Reexamining the neural network involved in perception of facial expression: A meta-analysis. Neurosci Biobehav Rev 2021; 131:179-191. PMID: 34536463. DOI: 10.1016/j.neubiorev.2021.09.024.
Abstract
Perception of facial expression is essential for social interactions. Although a few competing models have enjoyed some success in mapping brain regions, they also face difficult challenges. The current study used an updated activation likelihood estimation (ALE) method of meta-analysis to explore the involvement of brain regions in facial expression processing. The sample contained 96 functional magnetic resonance imaging (fMRI) and positron emission tomography (PET) studies of healthy adults with the results of whole-brain analyses. The key findings revealed that the ventral pathway, especially the left fusiform face area (FFA) region, was more responsive to facial expression. The left posterior FFA showed strong involvement when participants passively viewed emotional faces without being asked to judge the type of expression or other attributes of the stimuli. Through meta-analytic connectivity modeling (MACM) of the main brain regions in the ventral pathway, we constructed a co-activating neural network as a revised model of facial expression processing that assigns prominent roles to the amygdala, the FFA, the occipital gyrus, and the inferior frontal gyrus.
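At its core, ALE models each study's activation foci as 3D Gaussians, forms a modelled-activation (MA) map per study, and combines the maps as a probabilistic union. A toy sketch on a small grid (grid size, focus counts, and kernel width are assumptions; real ALE additionally estimates significance via a permutation null):

```python
import numpy as np

rng = np.random.default_rng(0)
shape = (20, 24, 20)                     # toy grid, far smaller than a real MNI volume
grid = np.stack(np.meshgrid(*[np.arange(s) for s in shape], indexing="ij"), axis=-1)

def ma_map(foci, fwhm=4.0):
    """Modelled-activation map: voxel-wise max over Gaussians centred on a study's foci."""
    sigma = fwhm / 2.355
    blobs = [np.exp(-((grid - f) ** 2).sum(axis=-1) / (2 * sigma ** 2)) for f in foci]
    return np.max(blobs, axis=0)

# Five toy 'studies', three activation foci each
studies = [rng.integers(2, 18, size=(3, 3)) for _ in range(5)]
ma = np.stack([ma_map(f) for f in studies])
ale = 1 - np.prod(1 - ma, axis=0)        # ALE: union of modelled activation across studies
```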
Affiliation(s)
- Mingtong Liu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Chang Hong Liu
- Department of Psychology, Bournemouth University, Dorset, United Kingdom
- Shuang Zheng
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China
- Ke Zhao
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China.
- Xiaolan Fu
- State Key Laboratory of Brain and Cognitive Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, 100101, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, 100049, China.
16. Brooks JA, Stolier RM, Freeman JB. Computational approaches to the neuroscience of social perception. Soc Cogn Affect Neurosci 2021; 16:827-837. PMID: 32986115. PMCID: PMC8343569. DOI: 10.1093/scan/nsaa127.
Abstract
Across multiple domains of social perception-including social categorization, emotion perception, impression formation and mentalizing-multivariate pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data has permitted a more detailed understanding of how social information is processed and represented in the brain. As in other neuroimaging fields, the neuroscientific study of social perception initially relied on broad structure-function associations derived from univariate fMRI analysis to map neural regions involved in these processes. In this review, we trace the ways that social neuroscience studies using MVPA have built on these neuroanatomical associations to better characterize the computational relevance of different brain regions, and discuss how MVPA allows explicit tests of the correspondence between psychological models and the neural representation of social information. We also describe current and future advances in methodological approaches to multivariate fMRI data and their theoretical value for the neuroscience of social perception.
Affiliation(s)
- Jeffrey A Brooks
- Department of Psychology, New York University, New York, NY, USA
- Ryan M Stolier
- Columbia University, 1190 Amsterdam Ave., New York, NY 10027, USA
17. Takahashi Y, Murata S, Idei H, Tomita H, Yamashita Y. Neural network modeling of altered facial expression recognition in autism spectrum disorders based on predictive processing framework. Sci Rep 2021; 11:14684. PMID: 34312400. PMCID: PMC8313712. DOI: 10.1038/s41598-021-94067-x.
Abstract
The mechanism underlying the emergence of emotional categories from visual facial expression information during the developmental process is largely unknown. Therefore, this study proposes a system-level explanation for understanding the facial emotion recognition process and its alteration in autism spectrum disorder (ASD) from the perspective of predictive processing theory. Predictive processing for facial emotion recognition was implemented as a hierarchical recurrent neural network (RNN). The RNNs were trained to predict the dynamic changes of facial expression movies for six basic emotions without explicit emotion labels as a developmental learning process, and were evaluated by the performance of recognizing unseen facial expressions for the test phase. In addition, the causal relationship between the network characteristics assumed in ASD and ASD-like cognition was investigated. After the developmental learning process, emotional clusters emerged in the natural course of self-organization in higher-level neurons, even though emotional labels were not explicitly instructed. In addition, the network successfully recognized unseen test facial sequences by adjusting higher-level activity through the process of minimizing precision-weighted prediction error. In contrast, the network simulating altered intrinsic neural excitability demonstrated reduced generalization capability and impaired emotional clustering in higher-level neurons. Consistent with previous findings from human behavioral studies, an excessive precision estimation of noisy details underlies this ASD-like cognition. These results support the idea that impaired facial emotion recognition in ASD can be explained by altered predictive processing, and provide possible insight for investigating the neurophysiological basis of affective contact.
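The precision-weighting idea at the heart of this account can be illustrated with a far simpler linear model than the study's hierarchical RNN: a latent estimate is updated by prediction errors scaled by their estimated precision. A toy sketch in which all quantities are hypothetical:

```python
import numpy as np

rng = np.random.default_rng(0)
# Toy linear generative model standing in for the study's hierarchical RNN:
# observation = W @ latent + noise
W = rng.normal(size=(16, 4))
latent_true = rng.normal(size=4)
obs = W @ latent_true + rng.normal(scale=0.1, size=16)

precision = 1.0 / 0.1 ** 2               # inverse observation-noise variance
z = np.zeros(4)                          # higher-level estimate of the latent cause
lr = 2e-4
for _ in range(2000):
    err = obs - W @ z                    # prediction error
    z += lr * precision * (W.T @ err)    # precision-weighted error minimisation
# Inflating `precision` beyond its true value makes the update chase noisy details,
# the mechanism the study links to ASD-like recognition deficits
```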
Affiliation(s)
- Yuta Takahashi
- Department of Psychiatry, Tohoku University Hospital, Sendai, Japan
- Department of Information Medicine, National Center of Neurology and Psychiatry, 4-1-1 Ogawa-Higashi, Kodaira, Tokyo, 187-8502, Japan
- Shingo Murata
- Department of Electronics and Electrical Engineering, Faculty of Science and Technology, Keio University, Tokyo, Japan
- Hayato Idei
- Department of Intermedia Studies, Waseda University, Tokyo, Japan
- Hiroaki Tomita
- Department of Psychiatry, Tohoku University Hospital, Sendai, Japan
- Yuichi Yamashita
- Department of Information Medicine, National Center of Neurology and Psychiatry, 4-1-1 Ogawa-Higashi, Kodaira, Tokyo, 187-8502, Japan.
18. Volynets S, Smirnov D, Saarimäki H, Nummenmaa L. Statistical pattern recognition reveals shared neural signatures for displaying and recognizing specific facial expressions. Soc Cogn Affect Neurosci 2021; 15:803-813. PMID: 33007782. PMCID: PMC7543934. DOI: 10.1093/scan/nsaa110.
Abstract
Human neuroimaging and behavioural studies suggest that somatomotor ‘mirroring’ of seen facial expressions may support their recognition. Here we show that viewing specific facial expressions triggers the representation corresponding to that expression in the observer’s brain. Twelve healthy female volunteers underwent two separate fMRI sessions: one where they observed and another where they displayed three types of facial expressions (joy, anger and disgust). A pattern classifier based on Bayesian logistic regression was trained to classify facial expressions (i) within modality (trained and tested with data recorded while observing or displaying expressions) and (ii) between modalities (trained with data recorded while displaying expressions and tested with data recorded while observing the expressions). Cross-modal classification was performed in two ways: with and without functional realignment of the data across observing/displaying conditions. All expressions could be accurately classified within and also across modalities. Brain regions contributing most to cross-modal classification accuracy included the primary motor and somatosensory cortices. Functional realignment led to only minor increases in cross-modal classification accuracy for most of the examined ROIs. Substantial improvement was observed in the occipito-ventral components of the core system for facial expression recognition. Altogether, these results support the embodied emotion recognition model and show that expression-specific somatomotor neural signatures could support facial expression recognition.
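Cross-modal classification of the kind described here trains on data from one condition and tests on the other. A minimal sketch with synthetic patterns standing in for fMRI data; the classifier below is ordinary logistic regression, not the authors' Bayesian variant, and all dimensions are assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_vox = 90, 300                # hypothetical trials and voxels per modality
y = rng.integers(0, 3, size=n_trials)    # joy / anger / disgust
patterns = rng.normal(size=(3, n_vox))   # toy expression-specific activity patterns
X_display = patterns[y] + rng.normal(size=(n_trials, n_vox))
X_observe = patterns[y] + rng.normal(size=(n_trials, n_vox))

# Cross-modal decoding: fit while "displaying", test while "observing"
clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
acc = clf.fit(X_display, y).score(X_observe, y)
print(f"Cross-modal accuracy: {acc:.2f} (chance = 0.33)")
```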
Affiliation(s)
- Sofia Volynets
- Correspondence should be addressed to Lauri Nummenmaa, Turku PET Centre c/o Turku University Hospital, Kiinamyllynkatu 4-6, 20520 Turku, Finland. E-mail:
- Dmitry Smirnov
- Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, FI-0076 Aalto, Finland
- Heini Saarimäki
- Department of Neuroscience and Biomedical Engineering, School of Science, Aalto University, FI-0076 Aalto, Finland
- Faculty of Social Sciences, Tampere University, FI-33014 Tampere, Finland
- Lauri Nummenmaa
- Turku PET Centre and Department of Psychology, University of Turku, FI-20520 Turku, Finland
- Turku University Hospital, University of Turku, FI-20520 Turku, Finland
19. Barnett BO, Brooks JA, Freeman JB. Stereotypes bias face perception via orbitofrontal-fusiform cortical interaction. Soc Cogn Affect Neurosci 2021; 16:302-314. PMID: 33270131. PMCID: PMC7943359. DOI: 10.1093/scan/nsaa165.
Abstract
Previous research has shown that social-conceptual associations, such as stereotypes, can influence the visual representation of faces and neural pattern responses in ventral temporal cortex (VTC) regions, such as the fusiform gyrus (FG). Current models suggest that this social-conceptual impact requires medial orbitofrontal cortex (mOFC) feedback signals during perception. Backward masking can disrupt such signals, as it is a technique known to reduce functional connectivity between VTC regions and regions outside VTC. During functional magnetic resonance imaging (fMRI), subjects passively viewed masked and unmasked faces, and following the scan, perceptual biases and stereotypical associations were assessed. Multi-voxel representations of faces across the VTC, and in the FG and mOFC, reflected stereotypically biased perceptions when faces were unmasked, but this effect was abolished when faces were masked. However, the VTC still retained the ability to process masked faces and was sensitive to their categorical distinctions. Functional connectivity analyses confirmed that masking disrupted mOFC-FG connectivity, which predicted a reduced impact of stereotypical associations in the FG. Taken together, our findings suggest that the biasing of face representations in line with stereotypical associations does not arise from intrinsic processing within the VTC and FG alone, but instead it depends in part on top-down feedback from the mOFC during perception.
Affiliation(s)
- Benjamin O Barnett
- Division of Psychology and Language Sciences, University College London, London WC1E 6BT, UK
- Jeffrey A Brooks
- Department of Psychology, New York University, New York, NY 10003, USA
- Jonathan B Freeman
- Department of Psychology, New York University, New York, NY 10003, USA
- Center for Neural Science, New York University, New York, NY 10003, USA
20. Hendriks MHA, Dillen C, Vettori S, Vercammen L, Daniels N, Steyaert J, Op de Beeck H, Boets B. Neural processing of facial identity and expression in adults with and without autism: A multi-method approach. Neuroimage Clin 2020; 29:102520. PMID: 33338966. PMCID: PMC7750419. DOI: 10.1016/j.nicl.2020.102520.
Abstract
The ability to recognize faces and facial expressions is a common human talent. It has, however, been suggested to be impaired in individuals with autism spectrum disorder (ASD). The goal of this study was to compare the processing of facial identity and emotion between individuals with ASD and neurotypicals (NTs). Behavioural and functional magnetic resonance imaging (fMRI) data from 46 young adults (aged 17-23 years; ASD: n = 22, NT: n = 24) were analysed. During fMRI data acquisition, participants discriminated between short clips of a face transitioning from a neutral to an emotional expression. Stimuli included four identities and six emotions. We performed behavioural, univariate, multi-voxel, adaptation, and functional connectivity analyses to investigate potential group differences. The ASD group did not differ from the NT group on behavioural identity and expression processing tasks. At the neural level, we found no differences in average neural activation, neural activation patterns, or neural adaptation to faces in face-related brain regions. In terms of functional connectivity, we found that the amygdala appears to be more strongly connected to the inferior occipital cortex and V1 in individuals with ASD. Overall, the findings indicate that neural representations of facial identity and expression have a similar quality in individuals with and without ASD, but some regions containing these representations are connected differently within the extended face-processing network.
Affiliation(s)
- Michelle H A Hendriks
- Department of Brain and Cognition, KU Leuven, Tiensestraat 102 - bus 3714, Leuven, Belgium; Leuven Autism Research Consortium, KU Leuven, Leuven, Belgium
- Claudia Dillen
- Department of Brain and Cognition, KU Leuven, Tiensestraat 102 - bus 3714, Leuven, Belgium; Leuven Autism Research Consortium, KU Leuven, Leuven, Belgium
- Sofie Vettori
- Centre for Developmental Psychiatry, KU Leuven, Kapucijnenvoer 7 blok h - bus 7001, Leuven, Belgium; Leuven Autism Research Consortium, KU Leuven, Leuven, Belgium
- Laura Vercammen
- Department of Brain and Cognition, KU Leuven, Tiensestraat 102 - bus 3714, Leuven, Belgium
- Nicky Daniels
- Department of Brain and Cognition, KU Leuven, Tiensestraat 102 - bus 3714, Leuven, Belgium; Centre for Developmental Psychiatry, KU Leuven, Kapucijnenvoer 7 blok h - bus 7001, Leuven, Belgium; Leuven Autism Research Consortium, KU Leuven, Leuven, Belgium
- Jean Steyaert
- Centre for Developmental Psychiatry, KU Leuven, Kapucijnenvoer 7 blok h - bus 7001, Leuven, Belgium; Leuven Autism Research Consortium, KU Leuven, Leuven, Belgium
- Hans Op de Beeck
- Department of Brain and Cognition, KU Leuven, Tiensestraat 102 - bus 3714, Leuven, Belgium; Leuven Brain Institute, KU Leuven, Leuven, Belgium
- Bart Boets
- Centre for Developmental Psychiatry, KU Leuven, Kapucijnenvoer 7 blok h - bus 7001, Leuven, Belgium; Leuven Brain Institute, KU Leuven, Leuven, Belgium; Leuven Autism Research Consortium, KU Leuven, Leuven, Belgium.
21. Liang Y, Liu B. Cross-Subject Commonality of Emotion Representations in Dorsal Motion-Sensitive Areas. Front Neurosci 2020; 14:567797. PMID: 33177977. PMCID: PMC7591793. DOI: 10.3389/fnins.2020.567797.
Abstract
Emotion perception is a crucial question in cognitive neuroscience, and the underlying neural substrates have been the subject of intense study. One of our previous studies demonstrated that motion-sensitive areas are involved in the perception of facial expressions. However, it remains unclear whether emotions perceived from whole-person stimuli can be decoded from the motion-sensitive areas. In addition, if emotions are represented in the motion-sensitive areas, we may further ask whether these representations are shared across individual subjects. To address these questions, this study collected functional images while participants viewed emotions (joy, anger, and fear) in videos of whole-person expressions (containing both face and body parts) in a block-design functional magnetic resonance imaging (fMRI) experiment. Multivariate pattern analysis (MVPA) was conducted to explore the emotion decoding performance in individually defined dorsal motion-sensitive regions of interest (ROIs). Results revealed that emotions could be successfully decoded from motion-sensitive ROIs, with statistically significant classification accuracies for the three emotions as well as for positive versus negative emotions. Moreover, results from the cross-subject classification analysis showed that a person's emotion representation could be robustly predicted by others' emotion representations in motion-sensitive areas. Together, these results reveal that emotions are represented in dorsal motion-sensitive areas and that the representation of emotions is consistent across subjects. Our findings provide new evidence for the involvement of motion-sensitive areas in emotion decoding, and further suggest that there exists a common emotion code in the motion-sensitive areas across individual subjects.
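The cross-subject analysis described here corresponds to leave-one-subject-out cross-validation: a classifier trained on all other subjects' patterns predicts the held-out subject's emotions. A minimal sketch with synthetic data (subject, trial, and voxel counts are assumptions):

```python
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_sub, trials_per_sub, n_vox = 20, 30, 150           # hypothetical dimensions
X = rng.normal(size=(n_sub * trials_per_sub, n_vox))
y = rng.integers(0, 3, size=n_sub * trials_per_sub)  # joy / anger / fear
groups = np.repeat(np.arange(n_sub), trials_per_sub)

# Train on all-but-one subject, test on the held-out subject's patterns
scores = cross_val_score(LinearSVC(), X, y, groups=groups, cv=LeaveOneGroupOut())
print(f"Cross-subject accuracy: {scores.mean():.2f} (chance = 0.33)")
```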
Affiliation(s)
- Yin Liang
- Faculty of Information Technology, College of Computer Science and Technology, Beijing Artificial Intelligence Institute, Beijing University of Technology, Beijing, China
- Baolin Liu
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China
22. Samaey C, Van der Donck S, van Winkel R, Boets B. Facial Expression Processing Across the Autism-Psychosis Spectra: A Review of Neural Findings and Associations With Adverse Childhood Events. Front Psychiatry 2020; 11:592937. [PMID: 33281648] [PMCID: PMC7691238] [DOI: 10.3389/fpsyt.2020.592937]
Abstract
Autism spectrum disorder (ASD) and primary psychosis are classified as distinct neurodevelopmental disorders, yet they display overlapping epidemiological, environmental, and genetic components as well as endophenotypic similarities. For instance, both disorders are characterized by impairments in facial expression processing, a crucial skill for effective social communication, and both disorders display an increased prevalence of adverse childhood events (ACE). This narrative review provides a brief summary of findings from neuroimaging studies investigating facial expression processing in ASD and primary psychosis, with a focus on the commonalities and differences between these disorders. Individuals with ASD and primary psychosis activate the same brain regions as healthy controls during facial expression processing, albeit to a different extent. Overall, both groups display altered activation in the fusiform gyrus and amygdala as well as altered connectivity among the broader face-processing network, probably indicating reduced facial expression processing abilities. Furthermore, delayed or reduced N170 responses have been reported in ASD and primary psychosis, but the significance of these findings has been questioned, and alternative frequency-tagging electroencephalography (EEG) measures are currently being explored to capture facial expression processing impairments more selectively. Face perception is an innate process, but it is also guided by visual learning and social experiences. Extreme environmental factors, such as adverse childhood events, can disrupt normative development and alter facial expression processing. ACE are hypothesized to induce altered neural facial expression processing, in particular a hyperactive amygdala response toward negative expressions. Future studies should account for the comorbidity among ASD, primary psychosis, and ACE when assessing facial expression processing in these clinical groups, as it may explain some of the inconsistencies and confounds reported in the field.
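The frequency-tagging EEG approach mentioned above can be sketched as follows, assuming an oddball expression appears periodically within a fast base stimulation stream; the sampling rate, frequencies, and SNR definition below are illustrative assumptions, not parameters from any cited study:

```python
import numpy as np

fs, duration = 250.0, 60.0                 # sampling rate (Hz), seconds
t = np.arange(0, duration, 1 / fs)
oddball_f = 1.2                            # oddball (expression-change) rate
eeg = np.sin(2 * np.pi * oddball_f * t) + np.random.randn(t.size) * 5

spectrum = np.abs(np.fft.rfft(eeg)) / t.size
freqs = np.fft.rfftfreq(t.size, 1 / fs)

# Signal-to-noise ratio: amplitude at the oddball bin divided by the mean
# amplitude of neighbouring bins (excluding the immediately adjacent bins).
bin_idx = np.argmin(np.abs(freqs - oddball_f))
neighbours = np.r_[spectrum[bin_idx - 12:bin_idx - 2],
                   spectrum[bin_idx + 3:bin_idx + 13]]
snr = spectrum[bin_idx] / neighbours.mean()
print(f"SNR at {oddball_f} Hz: {snr:.2f}")
```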
Affiliation(s)
- Celine Samaey
- Department of Neurosciences, Center for Clinical Psychiatry, KU Leuven, Leuven, Belgium
- Stephanie Van der Donck
- Department of Neurosciences, Center for Developmental Psychiatry, KU Leuven, Leuven, Belgium
- Leuven Autism Research (LAuRes), KU Leuven, Leuven, Belgium
- Ruud van Winkel
- Department of Neurosciences, Center for Clinical Psychiatry, KU Leuven, Leuven, Belgium
- University Psychiatric Center (UPC), KU Leuven, Leuven, Belgium
- Bart Boets
- Department of Neurosciences, Center for Developmental Psychiatry, KU Leuven, Leuven, Belgium
- Leuven Autism Research (LAuRes), KU Leuven, Leuven, Belgium
23. Stanković M. A conceptual critique of brain lateralization models in emotional face perception: Toward a hemispheric functional-equivalence (HFE) model. Int J Psychophysiol 2020; 160:57-70. [PMID: 33186657] [DOI: 10.1016/j.ijpsycho.2020.11.001]
Abstract
The present review proposes a novel dynamic model of brain lateralization of emotional (happy, surprised, fearful, sad, angry, and disgusted) and neutral face perception. Evidence to date suggests that emotional face perception is lateralized in the brain. At least five prominent hypotheses of the lateralization of emotional face perception have been previously proposed: the right-hemisphere hypothesis; the valence-specific hypothesis; the modified valence-specific hypothesis; the motivational hypothesis; and the behavioral activation/inhibition system hypothesis. However, a growing number of recent replication studies exploring those hypotheses frequently provide inconsistent or even contradictory results. The latest neuroimaging and behavioral studies strongly demonstrate the functional capacity of both hemispheres to process emotions relatively successfully. Moreover, emotional brain networks in both hemispheres are functionally flexible, even to the extent that the left and right hemispheres may show reversed asymmetry in performance under altered neurophysiological and psychological conditions. The present review aims to (a) provide a critical conceptual analysis of prior and current hypotheses of brain lateralization of emotional and neutral face perception, and (b) introduce a novel hemispheric functional-equivalence (HFE) model of emotional and neutral face perception based on theoretical considerations and behavioral and neuroimaging evidence: the brain is initially right-biased in emotional and neutral face perception by default; however, altered psychophysiological conditions (e.g., acute stress or a demanding emotional task) activate a distributed brain network across both hemispheres toward functional equivalence, resulting in relatively equalized behavioral performance in emotional and neutral face perception. The proposed model may provide a practical tool for further experimental investigation of the brain lateralization of emotional face perception.
Affiliation(s)
- Miloš Stanković
- General and Experimental Psychology, Department of Psychology, Ludwig-Maximilians-Universität München, Munich, Germany.
24. Zhao Y, Rütgen M, Zhang L, Lamm C. Pharmacological fMRI provides evidence for opioidergic modulation of discrimination of facial pain expressions. Psychophysiology 2020; 58:e13717. [PMID: 33140886] [PMCID: PMC7816233] [DOI: 10.1111/psyp.13717]
Abstract
The endogenous opioid system is strongly involved in the modulation of pain. However, the potential role of this system in perceiving painful facial expressions from others has not yet been sufficiently explored. To elucidate the contribution of the opioid system to the perception of painful facial expressions, we conducted a double-blind, within-subjects pharmacological functional magnetic resonance imaging (fMRI) study, in which 42 participants engaged in an emotion discrimination task (pain vs. disgust expressions) in two experimental sessions, receiving either the opioid receptor antagonist naltrexone or an inert substance (placebo). On the behavioral level, participants judged an expression as pain less frequently under naltrexone than under placebo. On the neural level, parametric modulation of activation in the (putative) right fusiform face area (FFA), which was correlated with increased pain intensity, was higher under naltrexone than under placebo. Regression analyses revealed that brain activity in the right FFA significantly predicted behavioral performance in disambiguating pain from disgust, both under naltrexone and placebo. These findings suggest that reducing opioid system activity decreased participants' sensitivity to facial expressions of pain, and that this was linked to possibly compensatory engagement of visual-perceptual processes rather than higher-level affective processes and pain regulation. The behavioral and neural findings of this psychopharmacological fMRI study shed light on a causal role of the opioid system in the discrimination of painful facial expressions, paving the way for further exploration of clinical implications in the domains of pain diagnosis and treatment, as well as future research on the relationship between basic socio-perceptual processing and empathy.
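The parametric-modulation logic behind the FFA analysis can be sketched as follows; the HRF shape, onsets, and variable names are generic illustrations, not values from the study:

```python
import numpy as np
from scipy.stats import gamma

tr, n_scans = 2.0, 200
hrf_t = np.arange(0, 32, tr)
hrf = gamma.pdf(hrf_t, 6) - 0.35 * gamma.pdf(hrf_t, 16)  # canonical double gamma

onsets = np.arange(10, 380, 20)                    # stimulus onsets (s)
intensity = np.random.uniform(-1, 1, onsets.size)  # mean-centred pain ratings

stick_main = np.zeros(n_scans)
stick_mod = np.zeros(n_scans)
idx = (onsets / tr).astype(int)
stick_main[idx] = 1.0          # main effect of stimulus
stick_mod[idx] = intensity     # parametric modulator (pain intensity)

X_main = np.convolve(stick_main, hrf)[:n_scans]
X_mod = np.convolve(stick_mod, hrf)[:n_scans]
# The GLM beta for X_mod indexes how strongly FFA activity scales with
# perceived intensity: the quantity compared between naltrexone and placebo.
```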
Affiliation(s)
- Yili Zhao
- Social, Cognitive and Affective Neuroscience Unit, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Markus Rütgen
- Social, Cognitive and Affective Neuroscience Unit, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria; Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria
- Lei Zhang
- Social, Cognitive and Affective Neuroscience Unit, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria; Neuropsychopharmacology and Biopsychology Unit, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Claus Lamm
- Social, Cognitive and Affective Neuroscience Unit, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria; Vienna Cognitive Science Hub, University of Vienna, Vienna, Austria; Neuropsychopharmacology and Biopsychology Unit, Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
25. Brain connectivity analysis in fathers of children with autism. Cogn Neurodyn 2020; 14:781-793. [PMID: 33101531] [DOI: 10.1007/s11571-020-09625-2]
Abstract
Autism Spectrum Disorder (ASD) is a neurodevelopmental disorder in which changes in brain connectivity are associated with autistic-like traits in some individuals. First-degree relatives of children with autism may show mild deficits in social interaction. The present study investigates electroencephalography (EEG) brain connectivity patterns of fathers of children with autism while they performed a facial emotion labeling task. Fifteen biological fathers of children with a diagnosis of autism (Test Group) and fifteen fathers of neurotypical children with no personal or family history of autism (Control Group) participated in this study. The facial emotion labeling task used a set of photos spanning six categories (mild and extreme: anger, happiness, and sadness). A Group Independent Component Analysis method was applied to the EEG data to extract neural sources. Dynamic causal connectivity of the neural source signals was estimated using a multivariate autoregressive model and quantified with Granger causality-based methods. Statistical analysis showed significant group differences (p < 0.01) in the connectivity of neural sources during the recognition of some emotions, with the largest differences observed for mild anger and mild sadness. Short-range connectivity predominated in the Test Group whereas long-range and interhemispheric connections were observed in the Control Group. The Test Group thus showed atypical activity and connectivity in the brain network for processing emotional faces compared to the Control Group. We conclude that neural source connectivity analysis in fathers may be a potential and promising biomarker of ASD.
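A minimal sketch of the Granger-causality step, assuming the group-ICA source time courses are already in hand; the simulated signals and the use of statsmodels are our assumptions, not the study's exact MVAR pipeline:

```python
import numpy as np
from statsmodels.tsa.stattools import grangercausalitytests

rng = np.random.default_rng(1)
n = 1000
source_a = rng.standard_normal(n)
source_b = np.roll(source_a, 3) + 0.5 * rng.standard_normal(n)  # b lags a

# Does source_a Granger-cause source_b? Columns are [effect, cause].
data = np.column_stack([source_b, source_a])
res = grangercausalitytests(data, maxlag=5, verbose=False)
p_values = {lag: out[0]["ssr_ftest"][1] for lag, out in res.items()}
print(p_values)  # small p at lags >= 3 indicates a directed influence
```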
26. Poyo Solanas M, Vaessen M, de Gelder B. Computation-Based Feature Representation of Body Expressions in the Human Brain. Cereb Cortex 2020; 30:6376-6390. [DOI: 10.1093/cercor/bhaa196]
Abstract
Humans and other primate species are experts at recognizing body expressions. To understand the underlying perceptual mechanisms, we computed postural and kinematic features from affective whole-body movement videos and related them to brain processes. Using representational similarity and multivoxel pattern analyses, we showed systematic relations between computation-based body features and brain activity. Our results revealed that postural rather than kinematic features reflect the affective category of the body movements. The feature limb contraction made a central contribution to the perception of fearful body expressions and was differentially represented in action observation, motor preparation, and affect coding regions, including the amygdala. The posterior superior temporal sulcus differentiated fearful from other affective categories using limb contraction rather than kinematics. The extrastriate body area and fusiform body area also showed greater tuning to postural features. The discovery of midlevel body feature encoding in the brain moves affective neuroscience beyond research on high-level emotion representations and provides insight into the perceptual features that may drive automatic emotion perception.
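The representational similarity logic can be sketched as follows, assuming one feature vector and one voxel pattern per video; shapes, distance metrics, and names are illustrative:

```python
import numpy as np
from scipy.spatial.distance import pdist
from scipy.stats import spearmanr

rng = np.random.default_rng(2)
n_videos, n_features, n_voxels = 40, 12, 300
features = rng.standard_normal((n_videos, n_features))  # postural/kinematic
patterns = rng.standard_normal((n_videos, n_voxels))    # e.g., EBA responses

model_rdm = pdist(features, metric="euclidean")     # feature dissimilarities
neural_rdm = pdist(patterns, metric="correlation")  # neural dissimilarities

# Rank-correlate the vectorized upper triangles of the two RDMs.
rho, p = spearmanr(model_rdm, neural_rdm)
print(f"model-neural similarity: rho={rho:.3f}, p={p:.3f}")
```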
Affiliation(s)
- Marta Poyo Solanas
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Limburg 6200 MD, The Netherlands
- Maarten Vaessen
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Limburg 6200 MD, The Netherlands
- Beatrice de Gelder
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Maastricht, Limburg 6200 MD, The Netherlands
- Department of Computer Science, University College London, London WC1E 6BT, UK
27. Yau Y, Dadar M, Taylor M, Zeighami Y, Fellows LK, Cisek P, Dagher A. Neural Correlates of Evidence and Urgency During Human Perceptual Decision-Making in Dynamically Changing Conditions. Cereb Cortex 2020; 30:5471-5483. [PMID: 32500144] [DOI: 10.1093/cercor/bhaa129]
Abstract
Current models of decision-making assume that the brain gradually accumulates evidence and drifts toward a threshold that, once crossed, results in a choice selection. These models have been especially successful in primate research; however, transposing them to human fMRI paradigms has proved challenging. Here, we exploit the face-selective visual system and test whether emotional facial features decoded from multivariate fMRI signals during a dynamic perceptual decision-making task are related to the parameters of computational models of decision-making. We show that trial-by-trial variations in the pattern of neural activity in the fusiform gyrus reflect facial emotional information and modulate drift rates during deliberation. We also observed an inverse-urgency signal based in the caudate nucleus that was independent of sensory information but appeared to slow decisions, particularly when information in the task was ambiguous. Taken together, our results characterize how decision parameters from a computational model (i.e., drift rate and urgency signal) are involved in perceptual decision-making and are reflected in the activity of the human brain.
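The urgency-gating idea, in which accumulated evidence is multiplied by a growing urgency signal so that even weak evidence eventually triggers a choice, can be sketched with a toy simulation; all parameters below are illustrative rather than fitted values from the study:

```python
import numpy as np

def simulate_trial(drift, threshold=1.0, dt=0.01, tau=1.5, max_t=5.0,
                   noise_sd=0.5, rng=np.random.default_rng()):
    """Urgency-gated accumulation: decision variable = evidence * urgency."""
    t, evidence = 0.0, 0.0
    while t < max_t:
        evidence += drift * dt + noise_sd * np.sqrt(dt) * rng.standard_normal()
        urgency = t / tau                  # linearly growing urgency
        if abs(evidence * urgency) >= threshold:
            return t, np.sign(evidence)    # reaction time, choice
        t += dt
    return max_t, np.sign(evidence)

rts = [simulate_trial(drift=0.3)[0] for _ in range(100)]
print(np.mean(rts))  # lower urgency gain (larger tau) slows decisions
```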
Affiliation(s)
- Y Yau
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montréal, Quebec H3A 2B4, Canada
- M Dadar
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montréal, Quebec H3A 2B4, Canada
- M Taylor
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montréal, Quebec H3A 2B4, Canada; Schulich School of Medicine and Dentistry, University of Western Ontario, London, Ontario, N6A 5C1, Canada
- Y Zeighami
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montréal, Quebec H3A 2B4, Canada
- L K Fellows
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montréal, Quebec H3A 2B4, Canada
- P Cisek
- Department of Neuroscience, Université de Montréal, Montréal, Quebec H3C 3J7, Canada
- A Dagher
- McConnell Brain Imaging Centre, Montreal Neurological Institute, McGill University, Montréal, Quebec H3A 2B4, Canada
28. Nicholson AA, Harricharan S, Densmore M, Neufeld RWJ, Ros T, McKinnon MC, Frewen PA, Théberge J, Jetly R, Pedlar D, Lanius RA. Classifying heterogeneous presentations of PTSD via the default mode, central executive, and salience networks with machine learning. Neuroimage Clin 2020; 27:102262. [PMID: 32446241] [PMCID: PMC7240193] [DOI: 10.1016/j.nicl.2020.102262]
Abstract
Intrinsic connectivity networks (ICNs), including the default mode network (DMN), the central executive network (CEN), and the salience network (SN), have been shown to be aberrant in patients with posttraumatic stress disorder (PTSD). The purpose of the current study was to (a) compare ICN functional connectivity between PTSD, dissociative subtype PTSD (PTSD+DS), and healthy individuals; and (b) examine the use of multivariate machine learning algorithms in classifying PTSD, PTSD+DS, and healthy individuals based on ICN functional activation. Our neuroimaging dataset consisted of resting-state fMRI scans from 186 participants [PTSD (n = 81); PTSD+DS (n = 49); and healthy controls (n = 56)]. We performed group-level independent component analyses to evaluate functional connectivity differences within each ICN. Multiclass Gaussian Process Classification algorithms within PRoNTo software were then used to predict the diagnosis of PTSD, PTSD+DS, and healthy individuals based on ICN functional activation. When comparing the functional connectivity of ICNs between PTSD, PTSD+DS, and healthy controls, we found differential patterns of connectivity to brain regions involved in emotion regulation, in addition to limbic structures and areas involved in self-referential processing, interoception, bodily self-consciousness, and depersonalization/derealization. Machine learning algorithms were able to predict with high accuracy the classification of PTSD, PTSD+DS, and healthy individuals based on ICN functional activation. Our results suggest that alterations within intrinsic connectivity networks may underlie unique psychopathology and symptom presentation among PTSD subtypes. Furthermore, the current findings substantiate the use of machine learning algorithms for classifying subtypes of PTSD illness based on ICNs.
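The study ran Multiclass Gaussian Process Classification in the MATLAB toolbox PRoNTo; an analogous sketch in scikit-learn, with simulated ICN activation features standing in for the real maps, might look like this:

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessClassifier
from sklearn.gaussian_process.kernels import RBF
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)
n_per_group, n_features = 50, 100            # e.g., ICN activation features
X = np.vstack([rng.normal(mu, 1.0, (n_per_group, n_features))
               for mu in (0.0, 0.3, 0.6)])   # PTSD, PTSD+DS, controls
y = np.repeat([0, 1, 2], n_per_group)

# Multiclass GPC (one-vs-rest under the hood) with an RBF kernel.
gpc = GaussianProcessClassifier(kernel=1.0 * RBF(length_scale=10.0))
acc = cross_val_score(gpc, X, y, cv=5)
print(acc.mean())
```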
Affiliation(s)
- Andrew A Nicholson
- Department of Cognition, Emotion and Methods in Psychology, University of Vienna, Austria; Department of Psychiatry and Behavioural Neurosciences, McMaster University, Hamilton, ON, Canada.
- Sherain Harricharan
- Department of Psychiatry and Behavioural Neurosciences, McMaster University, Hamilton, ON, Canada
- Maria Densmore
- Department of Psychiatry, Western University, London, ON, Canada; Imaging Division, Lawson Health Research Institute, London, ON, Canada
- Richard W J Neufeld
- Department of Psychiatry, Western University, London, ON, Canada; Department of Psychology, Western University, London, ON, Canada; Department of Medical Imaging, Western University, London, ON, Canada
- Tomas Ros
- Department of Neuroscience, University of Geneva, Switzerland
- Margaret C McKinnon
- Department of Psychiatry and Behavioural Neurosciences, McMaster University, Hamilton, ON, Canada; Mood Disorders Program, St. Joseph's Healthcare, Hamilton, ON, Canada; Homewood Research Institute, Guelph, ON, Canada
- Paul A Frewen
- Department of Psychiatry, Western University, London, ON, Canada; Department of Neuroscience, Western University, London, ON, Canada
- Jean Théberge
- Department of Psychiatry, Western University, London, ON, Canada; Department of Medical Imaging, Western University, London, ON, Canada; Imaging Division, Lawson Health Research Institute, London, ON, Canada; Department of Diagnostic Imaging, St. Joseph's Health Care, London, ON, Canada
- Rakesh Jetly
- Canadian Forces, Health Services, Ottawa, Ontario, Canada
- David Pedlar
- Canadian Institute for Military and Veteran Health Research (CIMVHR), Canada
- Ruth A Lanius
- Department of Psychiatry, Western University, London, ON, Canada; Department of Neuroscience, Western University, London, ON, Canada; Imaging Division, Lawson Health Research Institute, London, ON, Canada
29. Suzuki S, O'Doherty JP. Breaking human social decision making into multiple components and then putting them together again. Cortex 2020; 127:221-230. [PMID: 32224320] [DOI: 10.1016/j.cortex.2020.02.014]
Abstract
Most of our waking time as human beings is spent interacting with other individuals. In order to make good decisions in this social milieu, it is often necessary to make inferences about the internal states, traits, and intentions of others. Recently, some progress has been made toward uncovering the neural computations underlying human social decision-making by combining functional magnetic resonance neuroimaging (fMRI) with computational modeling of behavior. Modeling of behavioral data allows us to identify the key computations necessary for social decision-making and to determine how these computations are integrated. Furthermore, by correlating these variables against neuroimaging data, it has become possible to elucidate where in the brain various computations are implemented. Here we review the current state of knowledge in the domain of social computational neuroscience. Findings to date have emphasized that social decisions are driven by multiple computations conducted in parallel and implemented in distinct brain regions. We suggest that further progress will depend on identifying how and where such variables are integrated in order to yield a coherent behavioral output.
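The model-based approach the review describes, in which trial-wise variables from a fitted behavioral model become fMRI regressors, can be sketched as follows; the Rescorla-Wagner learner and all values are illustrative assumptions:

```python
import numpy as np

rng = np.random.default_rng(4)
n_trials, alpha = 100, 0.2
outcomes = rng.binomial(1, 0.7, n_trials).astype(float)  # partner cooperates?

value, prediction_errors = 0.5, []
for r in outcomes:
    pe = r - value                 # Rescorla-Wagner prediction error
    prediction_errors.append(pe)
    value += alpha * pe            # update expectation about the partner

# The mean-centred PE series would be convolved with an HRF and entered
# into the GLM to localize where the brain encodes this computation.
pe_regressor = np.asarray(prediction_errors) - np.mean(prediction_errors)
```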
Affiliation(s)
- Shinsuke Suzuki
- Brain, Mind and Markets Laboratory, Department of Finance, Faculty of Business and Economics, The University of Melbourne, Parkville, Australia; Frontier Research Institute for Interdisciplinary Sciences, Tohoku University, Sendai, Japan.
- John P O'Doherty
- Division of the Humanities and Social Sciences, California Institute of Technology, Pasadena, USA; Computation and Neural Systems, California Institute of Technology, Pasadena, USA
30. Liang Y, Liu B, Ji J, Li X. Network Representations of Facial and Bodily Expressions: Evidence From Multivariate Connectivity Pattern Classification. Front Neurosci 2019; 13:1111. [PMID: 31736683] [PMCID: PMC6828617] [DOI: 10.3389/fnins.2019.01111]
Abstract
Emotions can be perceived from both facial and bodily expressions. Our previous study demonstrated successful decoding of facial expressions based on functional connectivity (FC) patterns. However, the role of FC patterns in the recognition of bodily expressions remained unclear, and no neuroimaging studies had adequately addressed whether emotions perceived from facial and bodily expressions are processed by common or distinct neural networks. To address this, the present study collected functional magnetic resonance imaging (fMRI) data from a block-design experiment with facial and bodily expression videos as stimuli (three emotions: anger, fear, and joy) and conducted multivariate pattern classification analysis based on the estimated FC patterns. We found that, in addition to facial expressions, bodily expressions could also be successfully decoded based on large-scale FC patterns. Emotion classification accuracies were higher for facial than for bodily expressions. Further analysis of contributive FCs showed that emotion-discriminative networks were widely distributed across both hemispheres, containing regions ranging from primary visual areas to higher-level cognitive areas. Moreover, for a given emotion, the discriminative FCs for facial and bodily expressions were distinct. Together, our findings highlight the key role of FC patterns in emotion processing, indicate how large-scale FC patterns reconfigure during the processing of facial and bodily expressions, and suggest a distributed neural representation for emotion recognition. Furthermore, our results suggest that the human brain employs separate network representations for facial and bodily expressions of the same emotions. This study provides new evidence for network representations of emotion perception and may further our understanding of the potential mechanisms underlying body-language emotion recognition.
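A minimal sketch of classification on FC patterns, assuming one ROI time-series matrix per block; ROI count, block count, and the linear SVM are illustrative choices, not the study's exact pipeline:

```python
import numpy as np
from sklearn.svm import LinearSVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(5)
n_blocks, n_rois, n_timepoints = 90, 30, 40

def fc_features(ts):
    """Vectorize the upper triangle of an ROI x ROI correlation matrix."""
    fc = np.corrcoef(ts.T)
    iu = np.triu_indices_from(fc, k=1)
    return np.arctanh(fc[iu])          # Fisher z-transform

X = np.array([fc_features(rng.standard_normal((n_timepoints, n_rois)))
              for _ in range(n_blocks)])
y = np.repeat([0, 1, 2], n_blocks // 3)   # anger / fear / joy

print(cross_val_score(LinearSVC(), X, y, cv=5).mean())
```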
Affiliation(s)
- Yin Liang
- Faculty of Information Technology, Beijing Artificial Intelligence Institute, Beijing University of Technology, Beijing, China
- Baolin Liu
- Tianjin Key Laboratory of Cognitive Computing and Application, School of Computer Science and Technology, Tianjin University, Tianjin, China; School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China; State Key Laboratory of Intelligent Technology and Systems, National Laboratory for Information Science and Technology, Tsinghua University, Beijing, China
- Junzhong Ji
- Faculty of Information Technology, Beijing Artificial Intelligence Institute, Beijing University of Technology, Beijing, China
- Xianglin Li
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, China
31. Neural time course and brain sources of facial attractiveness vs. trustworthiness judgment. Cogn Affect Behav Neurosci 2019; 18:1233-1247. [PMID: 30187360] [DOI: 10.3758/s13415-018-0634-0]
Abstract
Prior research has shown that the more (or less) attractive a face is judged, the more (or less) trustworthy the person is deemed, and that some common neural networks are recruited during facial attractiveness and trustworthiness evaluation. To interpret the relationship between attractiveness and trustworthiness (e.g., whether perception of personal trustworthiness may depend on perception of facial attractiveness), we investigated their relative neural processing time courses. An event-related potential (ERP) paradigm was used, with localization of the brain sources of scalp neural activity. Face stimuli with a neutral, angry, happy, or surprised expression were presented in an attractiveness judgment, a trustworthiness judgment, or a control (no explicit social judgment) task. Emotional facial expression processing occurred earlier (N170 and EPN, 150-290 ms post-stimulus onset) than attractiveness and trustworthiness processing (P3b, 400-700 ms). Importantly, right-central ERP (C2, C4, C6) differences reflecting discrimination between "yes" (attractive or trustworthy) and "no" (unattractive or untrustworthy) decisions occurred at least 400 ms earlier for attractiveness than for trustworthiness, in the absence of LRP motor preparation differences. Neural source analysis indicated that face-processing brain networks (e.g., LG, FG, and IPL, extending to pSTS), also right-lateralized, were involved in the discrimination time course differences. This suggests that attractiveness impressions precede and might prime trustworthiness inferences, and that the neural time course differences genuinely reflect facial encoding processes.
32. Nicholson AA, Densmore M, McKinnon MC, Neufeld RWJ, Frewen PA, Théberge J, Jetly R, Richardson JD, Lanius RA. Machine learning multivariate pattern analysis predicts classification of posttraumatic stress disorder and its dissociative subtype: a multimodal neuroimaging approach. Psychol Med 2019; 49:2049-2059. [PMID: 30306886] [DOI: 10.1017/s0033291718002866]
Abstract
BACKGROUND The field of psychiatry would benefit significantly from developing objective biomarkers that could facilitate the early identification of heterogeneous subtypes of illness. Critically, although machine learning pattern recognition methods have been applied recently to predict many psychiatric disorders, these techniques have not been utilized to predict subtypes of posttraumatic stress disorder (PTSD), including the dissociative subtype of PTSD (PTSD + DS). METHODS Using Multiclass Gaussian Process Classification within PRoNTo, we examined the classification accuracy of: (i) the mean amplitude of low-frequency fluctuations (mALFF; reflecting spontaneous neural activity during rest); and (ii) seed-based amygdala complex functional connectivity within 181 participants [PTSD (n = 81); PTSD + DS (n = 49); and age-matched healthy trauma-unexposed controls (n = 51)]. We also computed mass-univariate analyses in order to observe regional group differences [false-discovery-rate (FDR)-cluster corrected p < 0.05, k = 20]. RESULTS We found that extracted features could predict accurately the classification of PTSD, PTSD + DS, and healthy controls, using both resting-state mALFF (91.63% balanced accuracy, p < 0.001) and amygdala complex connectivity maps (85.00% balanced accuracy, p < 0.001). These results were replicated using independent machine learning algorithms/cross-validation procedures. Moreover, areas weighted as being most important for group classification also displayed significant group differences at the univariate level. Here, whereas the PTSD + DS group displayed increased activation within emotion regulation regions, the PTSD group showed increased activation within the amygdala, globus pallidus, and motor/somatosensory regions. CONCLUSION The current study has significant implications for advancing machine learning applications within the field of psychiatry, as well as for developing objective biomarkers indicative of diagnostic heterogeneity.
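The mALFF feature used here can be sketched in a few lines, assuming a preprocessed voxel-by-time matrix; the TR, band limits, and normalization by the global mean follow common practice rather than the paper's exact code:

```python
import numpy as np

tr = 2.0                                   # seconds per volume
ts = np.random.randn(240, 5000)            # time x voxels (simulated)

freqs = np.fft.rfftfreq(ts.shape[0], d=tr)
amp = np.abs(np.fft.rfft(ts, axis=0))
low = (freqs >= 0.01) & (freqs <= 0.08)    # low-frequency band

alff = amp[low].mean(axis=0)               # ALFF per voxel
malff = alff / alff.mean()                 # mean-normalized ALFF (mALFF)
```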
Affiliation(s)
- Andrew A Nicholson
- Department of Neuroscience, Western University, London, ON, Canada
- Department of Psychiatry, Western University, London, ON, Canada
- Department of Psychiatry and Behavioural Neuroscience, McMaster University, Hamilton, ON, Canada
- Homewood Research Institute, Guelph, ON, Canada
- Imaging, Lawson Health Research Institute, London, ON, Canada
- Maria Densmore
- Department of Psychiatry, Western University, London, ON, Canada
- Imaging, Lawson Health Research Institute, London, ON, Canada
- Margaret C McKinnon
- Department of Psychiatry and Behavioural Neuroscience, McMaster University, Hamilton, ON, Canada
- Homewood Research Institute, Guelph, ON, Canada
- Mood Disorders Program, St. Joseph's Healthcare, Hamilton, ON, Canada
- Richard W J Neufeld
- Department of Neuroscience, Western University, London, ON, Canada
- Department of Psychiatry, Western University, London, ON, Canada
- Department of Psychology, Western University, London, ON, Canada
- Paul A Frewen
- Department of Neuroscience, Western University, London, ON, Canada
- Department of Psychology, Western University, London, ON, Canada
- Jean Théberge
- Department of Psychiatry, Western University, London, ON, Canada
- Imaging, Lawson Health Research Institute, London, ON, Canada
- Department of Medical Imaging, Western University, London, ON, Canada
- Department of Medical Biophysics, Western University, London, ON, Canada
- Department of Diagnostic Imaging, St. Joseph's Healthcare, London, ON, Canada
- Rakesh Jetly
- Canadian Forces, Health Services, Ottawa, Ontario, Canada
- J Donald Richardson
- Department of Psychiatry and Behavioural Neuroscience, McMaster University, Hamilton, ON, Canada
- Homewood Research Institute, Guelph, ON, Canada
- Mood Disorders Program, St. Joseph's Healthcare, Hamilton, ON, Canada
- Ruth A Lanius
- Department of Neuroscience, Western University, London, ON, Canada
- Department of Psychiatry, Western University, London, ON, Canada
- Imaging, Lawson Health Research Institute, London, ON, Canada
33. Fernandes O, Portugal LCL, Alves RDCS, Arruda-Sanchez T, Volchan E, Pereira MG, Mourão-Miranda J, Oliveira L. How do you perceive threat? It's all in your pattern of brain activity. Brain Imaging Behav 2019; 14:2251-2266. [PMID: 31446554] [PMCID: PMC7648008] [DOI: 10.1007/s11682-019-00177-6]
Abstract
Whether subtle differences in the emotional context during threat perception can be detected by multi-voxel pattern analysis (MVPA) remains a topic of debate. To investigate this question, we compared the ability of pattern recognition analysis to discriminate between patterns of brain activity to a threatening versus a physically paired neutral stimulus in two different emotional contexts (the stimulus being directed towards or away from the viewer). The directionality of the stimuli is known to be an important factor in activating different defensive responses. Using multiple kernel learning (MKL) classification models, we accurately discriminated patterns of brain activation to threat versus neutral stimuli in the directed towards context but not during the directed away context. Furthermore, we investigated whether it was possible to decode an individual’s subjective threat perception from patterns of whole-brain activity to threatening stimuli in the different emotional contexts using MKL regression models. Interestingly, we were able to accurately predict the subjective threat perception index from the pattern of brain activation to threat only during the directed away context. These results show that subtle differences in the emotional context during threat perception can be detected by MVPA. In the directed towards context, the threat perception was more intense, potentially producing more homogeneous patterns of brain activation across individuals. In the directed away context, the threat perception was relatively less intense and more variable across individuals, enabling the regression model to successfully capture the individual differences and predict the subjective threat perception.
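Multiple kernel learning combines region-wise kernels into a single classifier; the sketch below fixes the kernel weights for brevity (true MKL, as in the study, also learns them) and uses simulated data:

```python
import numpy as np
from sklearn.svm import SVC
from sklearn.metrics.pairwise import linear_kernel

rng = np.random.default_rng(6)
n, d = 60, 400
X = rng.standard_normal((n, d))
y = rng.integers(0, 2, n)                   # threat vs. neutral (simulated)

regions = [slice(0, 150), slice(150, 300), slice(300, 400)]  # "brain regions"
weights = np.array([0.5, 0.3, 0.2])         # fixed convex kernel weights

# Weighted sum of per-region linear kernels fed to a precomputed-kernel SVM.
K = sum(w * linear_kernel(X[:, r], X[:, r]) for w, r in zip(weights, regions))
clf = SVC(kernel="precomputed").fit(K, y)
print(clf.score(K, y))                      # training fit; use CV in practice
```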
Affiliation(s)
- Orlando Fernandes
- Laboratory of Neuroimaging and Psychophysiology, Department of Radiology, Faculty of Medicine, Federal University of Rio de Janeiro, 255 Rodolpho Paulo Rocco st., Ilha do Fundão, Rio de Janeiro, RJ, 21941-590, Brazil.
- Liana Catrina Lima Portugal
- Laboratory of Behavioral Neurophysiology, Department of Physiology and Pharmacology, Biomedical Institute, Federal Fluminense University, Niterói, RJ, Brazil
- Rita de Cássia S Alves
- Laboratory of Behavioral Neurophysiology, Department of Physiology and Pharmacology, Biomedical Institute, Federal Fluminense University, Niterói, RJ, Brazil; IBMR University Center, Rio de Janeiro, RJ, Brazil
- Tiago Arruda-Sanchez
- Laboratory of Neuroimaging and Psychophysiology, Department of Radiology, Faculty of Medicine, Federal University of Rio de Janeiro, 255 Rodolpho Paulo Rocco st., Ilha do Fundão, Rio de Janeiro, RJ, 21941-590, Brazil
- Eliane Volchan
- Laboratory of Neurobiology II, Institute of Biophysics Carlos Chagas Filho, Federal University of Rio de Janeiro, Rio de Janeiro, RJ, Brazil
- Mirtes Garcia Pereira
- Laboratory of Behavioral Neurophysiology, Department of Physiology and Pharmacology, Biomedical Institute, Federal Fluminense University, Niterói, RJ, Brazil
- Janaina Mourão-Miranda
- Centre for Medical Image Computing, Department of Computer Science, University College London, London, UK; Max Planck University College London Centre for Computational Psychiatry and Ageing Research, University College London, London, UK
- Letícia Oliveira
- Laboratory of Behavioral Neurophysiology, Department of Physiology and Pharmacology, Biomedical Institute, Federal Fluminense University, Niterói, RJ, Brazil
34. The neural representation of facial-emotion categories reflects conceptual structure. Proc Natl Acad Sci U S A 2019; 116:15861-15870. [PMID: 31332015] [DOI: 10.1073/pnas.1816408116]
Abstract
Humans reliably categorize configurations of facial actions into specific emotion categories, leading some to argue that this process is invariant between individuals and cultures. However, growing behavioral evidence suggests that factors such as emotion-concept knowledge may shape the way emotions are visually perceived, leading to variability, rather than universality, in facial-emotion perception. The understanding of variability in emotion perception is only now emerging, and the neural basis of any impact from the structure of emotion-concept knowledge remains unknown. In a neuroimaging study, we used a representational similarity analysis (RSA) approach to measure the correspondence between the conceptual, perceptual, and neural representational structures of the six emotion categories Anger, Disgust, Fear, Happiness, Sadness, and Surprise. We found that subjects exhibited individual differences in their conceptual structure of emotions, which predicted their own unique perceptual structure. When viewing faces, the representational structure of multivoxel patterns in the right fusiform gyrus was significantly predicted by a subject's unique conceptual structure, even when controlling for potential physical similarity in the faces themselves. Finally, cross-cultural differences in emotion perception were also observed, which could be explained by individual differences in conceptual structure. Our results suggest that the representational structure of emotion expressions in visual face-processing regions may be shaped by idiosyncratic conceptual understanding of emotion categories.
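The control analysis implied above, correlating conceptual and neural RDMs while partialling out physical similarity, can be sketched with rank-based residualization; all matrices below are simulated:

```python
import numpy as np
from scipy.stats import rankdata, pearsonr

rng = np.random.default_rng(7)
n_pairs = 15                      # 6 emotion categories -> 15 unique pairs
conceptual = rng.random(n_pairs)  # conceptual RDM (vectorized upper triangle)
physical = rng.random(n_pairs)    # pixel/feature similarity RDM (nuisance)
neural = 0.5 * conceptual + 0.5 * rng.random(n_pairs)  # neural RDM

def residualize(a, covariate):
    """Rank-transform, then remove the linear effect of the covariate."""
    a, covariate = rankdata(a), rankdata(covariate)
    beta = np.polyfit(covariate, a, 1)
    return a - np.polyval(beta, covariate)

r, p = pearsonr(residualize(conceptual, physical),
                residualize(neural, physical))
print(f"partial Spearman rho = {r:.3f} (p = {p:.3f})")
```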
35. Lane J, Robbins RA, Rohan EMF, Crookes K, Essex RW, Maddess T, Sabeti F, Mazlin JL, Irons J, Gradden T, Dawel A, Barnes N, He X, Smithson M, McKone E. Caricaturing can improve facial expression recognition in low-resolution images and age-related macular degeneration. J Vis 2019; 19:18. [DOI: 10.1167/19.6.18]
Affiliation(s)
- Jo Lane
- Research School of Psychology and ARC Centre of Excellence in Cognition and its Disorders, The Australian National University, Canberra, ACT, Australia
- Rachel A. Robbins
- Research School of Psychology, The Australian National University, Canberra, ACT, Australia
- Emilie M. F. Rohan
- John Curtin School of Medical Research (JCSMR), The Australian National University, Canberra, ACT, Australia
- Kate Crookes
- Research School of Psychology and ARC Centre of Excellence in Cognition and its Disorders, The Australian National University, Canberra, ACT, Australia
- School of Psychological Science, University of Western Australia, Perth, WA, Australia
- Rohan W. Essex
- Academic Unit of Ophthalmology, Medical School, The Australian National University, Canberra, ACT, Australia
- Ted Maddess
- John Curtin School of Medical Research (JCSMR), The Australian National University, Canberra, ACT, Australia
- Faran Sabeti
- John Curtin School of Medical Research (JCSMR), The Australian National University, Canberra, ACT, Australia
- Discipline of Optometry and Vision Science, The University of Canberra, Bruce, ACT, Australia
- Collaborative Research in Bioactives and Biomarkers (CRIBB) Group, Canberra, ACT, Australia
- Jamie-Lee Mazlin
- Research School of Psychology, The Australian National University, Canberra, ACT, Australia
- Jessica Irons
- Research School of Psychology, The Australian National University, Canberra, ACT, Australia
- Tamara Gradden
- Research School of Psychology, The Australian National University, Canberra, ACT, Australia
- Amy Dawel
- Research School of Psychology and ARC Centre of Excellence in Cognition and its Disorders, The Australian National University, Canberra, ACT, Australia
- Nick Barnes
- Research School of Engineering, The Australian National University and Data61, Commonwealth Scientific and Industrial Research Organisation, Canberra, ACT, Australia
- Xuming He
- School of Information Science and Technology, Shanghai Tech University, Shanghai, China
- Michael Smithson
- Research School of Psychology, The Australian National University, Canberra, ACT, Australia
- Elinor McKone
- Research School of Psychology and ARC Centre of Excellence in Cognition and its Disorders, The Australian National University, Canberra, ACT, Australia
36. Smith FW, Smith ML. Decoding the dynamic representation of facial expressions of emotion in explicit and incidental tasks. Neuroimage 2019; 195:261-271. [PMID: 30940611] [DOI: 10.1016/j.neuroimage.2019.03.065]
Abstract
Faces transmit a wealth of important social signals. While previous studies have elucidated the network of cortical regions important for perception of facial expression, and the associated temporal components such as the P100, N170 and EPN, it is still unclear how task constraints may shape the representation of facial expression (or other face categories) in these networks. In the present experiment, we used Multivariate Pattern Analysis (MVPA) with EEG to investigate the neural information available across time about two important face categories (expression and identity) when those categories are perceived under either explicit (e.g., decoding facial expression category from the EEG when the task is on expression) or incidental task contexts (e.g., decoding facial expression category from the EEG when the task is on identity). Decoding of both face categories, across both task contexts, peaked in time windows spanning 91-170 ms (across posterior electrodes). Peak decoding of expression, however, was not affected by task context, whereas peak decoding of identity was significantly reduced under incidental processing conditions. In addition, errors in EEG decoding correlated with errors in behavioral categorization under explicit processing for both expression and identity; under incidental conditions, only errors in EEG decoding of expression correlated with behavior. Furthermore, decoding time courses and the spatial pattern of informative electrodes showed consistently better decoding of identity under explicit conditions at later time periods, with weak evidence for similar effects for decoding of expression at isolated time windows. Taken together, these results reveal differences and commonalities in the processing of face categories under explicit vs. incidental task contexts and suggest that facial expressions are processed to a rich degree even under incidental processing conditions, consistent with prior work indicating the relative automaticity with which emotion is processed. Our work further demonstrates the utility of applying multivariate decoding analyses to EEG for revealing the dynamics of face perception.
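Time-resolved decoding, training and testing a classifier independently at each time point to obtain a decoding time course, can be sketched as follows; data shapes and the logistic-regression classifier are illustrative assumptions:

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(8)
n_trials, n_channels, n_times = 120, 64, 100   # e.g., epoched EEG data
epochs = rng.standard_normal((n_trials, n_channels, n_times))
labels = rng.integers(0, 2, n_trials)          # expression A vs. B

# One cross-validated classifier per time point across all channels.
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000),
                    epochs[:, :, t], labels, cv=5).mean()
    for t in range(n_times)
])
peak_t = accuracy.argmax()                     # index of peak decoding
```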
Affiliation(s)
- Fraser W Smith
- School of Psychology, University of East Anglia, Norwich, UK.
- Marie L Smith
- School of Psychological Sciences, Birkbeck College, University of London, London, UK
37. Rymarczyk K, Żurawski Ł, Jankowiak-Siuda K, Szatkowska I. Empathy in Facial Mimicry of Fear and Disgust: Simultaneous EMG-fMRI Recordings During Observation of Static and Dynamic Facial Expressions. Front Psychol 2019; 10:701. [PMID: 30971997] [PMCID: PMC6445885] [DOI: 10.3389/fpsyg.2019.00701]
Abstract
Real-life faces are dynamic by nature, particularly when expressing emotion. Increasing evidence suggests that the perception of dynamic displays enhances facial mimicry and induces activation in widespread brain structures considered to be part of the mirror neuron system, a neuronal network linked to empathy. The present study is the first to investigate the relations among facial muscle responses, brain activity, and empathy traits while participants observed static and dynamic (video) facial expressions of fear and disgust. During display presentation, the blood-oxygen-level-dependent (BOLD) signal as well as muscle reactions of the corrugator supercilii and levator labii were recorded simultaneously from 46 healthy individuals (21 females). Both fear and disgust faces elicited activity in the corrugator supercilii muscle, while perception of disgust additionally produced activity in the levator labii muscle, supporting a specific pattern of facial mimicry for these emotions. Moreover, individuals with higher empathy traits showed greater activity in the corrugator supercilii and levator labii muscles than individuals with lower empathy traits; however, these responses did not differ between static and dynamic modes. Conversely, neuroimaging data revealed motion- and emotion-related brain structures in response to dynamic rather than static stimuli among high-empathy individuals. In line with this, there was a correlation between electromyography (EMG) responses and brain activity, suggesting that the mirror neuron system, the anterior insula, and the amygdala might constitute the neural correlates of automatic facial mimicry for fear and disgust. These results reveal that the dynamic quality of (emotional) stimuli facilitates emotion-related processing of facial expressions, especially among those with high trait empathy.
Affiliation(s)
- Krystyna Rymarczyk
- Department of Experimental Psychology, Institute of Cognitive and Behavioural Neuroscience, SWPS University of Social Sciences and Humanities, Warsaw, Poland
- Łukasz Żurawski
- Laboratory of Psychophysiology, Department of Neurophysiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences (PAS), Warsaw, Poland
- Kamila Jankowiak-Siuda
- Department of Experimental Psychology, Institute of Cognitive and Behavioural Neuroscience, SWPS University of Social Sciences and Humanities, Warsaw, Poland
- Iwona Szatkowska
- Laboratory of Psychophysiology, Department of Neurophysiology, Nencki Institute of Experimental Biology, Polish Academy of Sciences (PAS), Warsaw, Poland
38. Freeman JB, Stolier RM, Brooks JA, Stillerman BS. The neural representational geometry of social perception. Curr Opin Psychol 2018; 24:83-91. [PMID: 30388494] [PMCID: PMC6377247] [DOI: 10.1016/j.copsyc.2018.10.003]
Abstract
An emerging focus on the geometry of representational structures is advancing a variety of areas in social perception, including social categorization, emotion perception, and trait impressions. Here, we review recent studies adopting a representational geometry approach, and argue that important advances in social perception can be gained by triangulating on the structure of representations via three levels of analysis: neuroimaging, behavioral measures, and computational modeling. Among other uses, this approach permits broad and comprehensive tests of how bottom-up facial features and visual processes as well as top-down social cognitive factors and conceptual processes shape perceptions of social categories, emotion, and personality traits. Although such work is only in its infancy, a focus on corroborating representational geometry across modalities is allowing researchers to use multiple levels of analysis to constrain theoretical models in social perception. This approach holds promise to further our understanding of the multiply determined nature of social perception and its neural basis.
39. Levine SM, Wackerle A, Rupprecht R, Schwarzbach JV. The neural representation of an individualized relational affective space. Neuropsychologia 2018; 120:35-42. [DOI: 10.1016/j.neuropsychologia.2018.10.008]
40. Bush KA, Privratsky A, Gardner J, Zielinski MJ, Kilts CD. Common Functional Brain States Encode both Perceived Emotion and the Psychophysiological Response to Affective Stimuli. Sci Rep 2018; 8:15444. [PMID: 30337576] [PMCID: PMC6194055] [DOI: 10.1038/s41598-018-33621-6]
Abstract
Multivariate pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data has critically advanced the neuroanatomical understanding of affect processing in the human brain. Central to these advancements is the brain state, a temporally-succinct fMRI-derived pattern of neural activation, which serves as a processing unit. Establishing the brain state's central role in affect processing, however, requires that it predicts multiple independent measures of affect. We employed MVPA-based regression to predict the valence and arousal properties of visual stimuli sampled from the International Affective Picture System (IAPS) along with the corollary skin conductance response (SCR) for demographically diverse healthy human participants (n = 19). We found that brain states significantly predicted the normative valence and arousal scores of the stimuli as well as the attendant individual SCRs. In contrast, SCRs significantly predicted arousal only. The prediction effect size of the brain state was more than three times greater than that of SCR. Moreover, neuroanatomical analysis of the regression parameters found remarkable agreement with regions long-established by fMRI univariate analyses in the emotion processing literature. Finally, geometric analysis of these parameters also found that the neuroanatomical encodings of valence and arousal are orthogonal as originally posited by the circumplex model of dimensional emotion.
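A sketch of the two analyses above: predict affect dimensions from brain states with a regression model, then test orthogonality of their neural encodings via the angle between weight vectors. The ridge regression and all shapes are our assumptions, not the paper's exact pipeline:

```python
import numpy as np
from sklearn.linear_model import Ridge

rng = np.random.default_rng(9)
n_states, n_voxels = 200, 500
X = rng.standard_normal((n_states, n_voxels))          # brain states
valence = X @ rng.standard_normal(n_voxels) * 0.01 + rng.standard_normal(n_states)
arousal = X @ rng.standard_normal(n_voxels) * 0.01 + rng.standard_normal(n_states)

w_val = Ridge(alpha=10.0).fit(X, valence).coef_        # valence encoding map
w_aro = Ridge(alpha=10.0).fit(X, arousal).coef_        # arousal encoding map

# Geometric check: an angle near 90 degrees indicates orthogonal encodings.
cos = w_val @ w_aro / (np.linalg.norm(w_val) * np.linalg.norm(w_aro))
angle = np.degrees(np.arccos(np.clip(cos, -1, 1)))
print(f"angle between valence and arousal encodings: {angle:.1f} deg")
```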
Affiliation(s)
- Keith A Bush
- Brain Imaging Research Center, University of Arkansas for Medical Sciences, 4301 W. Markham St., Little Rock, AR, 72205-7199, USA.
- Anthony Privratsky
- Brain Imaging Research Center, University of Arkansas for Medical Sciences, 4301 W. Markham St., Little Rock, AR, 72205-7199, USA
- College of Medicine, University of Arkansas for Medical Sciences, 4301 W. Markham St., Little Rock, AR, 72205-7199, USA
- Jonathan Gardner
- College of Medicine, University of Arkansas for Medical Sciences, 4301 W. Markham St., Little Rock, AR, 72205-7199, USA
- Melissa J Zielinski
- Brain Imaging Research Center, University of Arkansas for Medical Sciences, 4301 W. Markham St., Little Rock, AR, 72205-7199, USA
- Clinton D Kilts
- Brain Imaging Research Center, University of Arkansas for Medical Sciences, 4301 W. Markham St., Little Rock, AR, 72205-7199, USA
41. Dima DC, Perry G, Messaritaki E, Zhang J, Singh KD. Spatiotemporal dynamics in human visual cortex rapidly encode the emotional content of faces. Hum Brain Mapp 2018; 39:3993-4006. [PMID: 29885055] [PMCID: PMC6175429] [DOI: 10.1002/hbm.24226]
Abstract
Recognizing emotion in faces is important in human interaction and survival, yet existing studies do not paint a consistent picture of the neural representation supporting this task. To address this, we collected magnetoencephalography (MEG) data while participants passively viewed happy, angry and neutral faces. Using time-resolved decoding of sensor-level data, we show that responses to angry faces can be discriminated from happy and neutral faces as early as 90 ms after stimulus onset and only 10 ms later than faces can be discriminated from scrambled stimuli, even in the absence of differences in evoked responses. Time-resolved relevance patterns in source space track expression-related information from the visual cortex (100 ms) to higher-level temporal and frontal areas (200-500 ms). Together, our results point to a system optimised for rapid processing of emotional faces and preferentially tuned to threat, consistent with the important evolutionary role that such a system must have played in the development of human social interactions.
Affiliation(s)
- Diana C. Dima
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff CF24 4HQ, United Kingdom
- Gavin Perry
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff CF24 4HQ, United Kingdom
- Eirini Messaritaki
- BRAIN Unit, School of Medicine, Cardiff University, Cardiff CF24 4HQ, United Kingdom
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff CF24 4HQ, United Kingdom
- Jiaxiang Zhang
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff CF24 4HQ, United Kingdom
- Krish D. Singh
- Cardiff University Brain Research Imaging Centre (CUBRIC), School of Psychology, Cardiff University, Cardiff CF24 4HQ, United Kingdom
| |
Collapse
|
42
|
Sachs ME, Habibi A, Damasio A, Kaplan JT. Decoding the neural signatures of emotions expressed through sound. Neuroimage 2018; 174:1-10. [DOI: 10.1016/j.neuroimage.2018.02.058] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/11/2017] [Revised: 02/23/2018] [Accepted: 02/27/2018] [Indexed: 12/15/2022] Open
|
43
|
Zinchenko O, Yaple ZA, Arsalidou M. Brain Responses to Dynamic Facial Expressions: A Normative Meta-Analysis. Front Hum Neurosci 2018; 12:227. [PMID: 29922137 PMCID: PMC5996092 DOI: 10.3389/fnhum.2018.00227] [Citation(s) in RCA: 29] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2018] [Accepted: 05/16/2018] [Indexed: 01/08/2023] Open
Abstract
Identifying facial expressions is crucial for social interactions. Functional neuroimaging studies show that a set of brain areas, such as the fusiform gyrus and amygdala, become active when viewing emotional facial expressions. The majority of functional magnetic resonance imaging (fMRI) studies investigating face perception typically employ static images of faces. However, studies that use dynamic facial expressions (e.g., videos) are accumulating and suggest that a dynamic presentation may be more sensitive and ecologically valid for investigating face perception. Using quantitative fMRI meta-analysis, the present study examined the concordance of brain regions associated with viewing dynamic facial expressions. We analyzed data from 216 participants who took part in 14 studies, which reported coordinates for 28 experiments. Our analysis revealed concordant activation in the bilateral fusiform and middle temporal gyri, the left amygdala, the left declive of the cerebellum, and the right inferior frontal gyrus. These regions are discussed in terms of their relation to models of face processing.
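The union logic behind this kind of coordinate-based (ALE-style) meta-analysis can be illustrated with a toy sketch: each experiment's reported foci are smoothed into a modeled activation map, and maps are combined across experiments as a probabilistic union. The grid size, kernel width, and simulated foci below are assumptions; validated tools such as GingerALE or NiMARE handle the real kernels and permutation thresholding.

```python
# Toy sketch of ALE-style convergence across experiments' foci.
import numpy as np

GRID = (20, 20, 20)     # coarse stand-in for a brain voxel grid
SIGMA = 1.5             # Gaussian kernel width in voxels (hypothetical)

def ma_map(foci):
    """Modeled activation map: max of Gaussian blobs at an experiment's foci."""
    idx = np.indices(GRID)
    ma = np.zeros(GRID)
    for f in foci:
        d2 = sum((idx[i] - f[i]) ** 2 for i in range(3))
        ma = np.maximum(ma, np.exp(-d2 / (2 * SIGMA**2)))
    return ma

rng = np.random.default_rng(0)
experiments = [rng.integers(5, 15, size=(rng.integers(3, 8), 3))
               for _ in range(28)]        # 28 simulated experiments' foci
ale = 1.0 - np.prod([1.0 - ma_map(f) for f in experiments], axis=0)
print(f"peak convergence: {ale.max():.3f}")
```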
Affiliation(s)
- Oksana Zinchenko
- Centre for Cognition and Decision Making, Institute for Cognitive Neuroscience, National Research University Higher School of Economics, Moscow, Russia
- Zachary A Yaple
- Centre for Cognition and Decision Making, Institute for Cognitive Neuroscience, National Research University Higher School of Economics, Moscow, Russia
- Department of Psychology, National University of Singapore, Singapore, Singapore
- Marie Arsalidou
- Department of Psychology, National Research University Higher School of Economics, Moscow, Russia
- Department of Psychology, York University, Toronto, ON, Canada

44. Greening SG, Mitchell DG, Smith FW. Spatially generalizable representations of facial expressions: Decoding across partial face samples. Cortex 2018; 101:31-43. [DOI: 10.1016/j.cortex.2017.11.016]

45. Weibert K, Flack TR, Young AW, Andrews TJ. Patterns of neural response in face regions are predicted by low-level image properties. Cortex 2018; 103:199-210. [PMID: 29655043; DOI: 10.1016/j.cortex.2018.03.009]
Abstract
Models of face processing suggest that the neural response in different face regions is selective for higher-level attributes of the face, such as identity and expression. However, it remains unclear to what extent the response in these regions can also be explained by more basic organizing principles. Here, we used functional magnetic resonance imaging multivariate pattern analysis (fMRI-MVPA) to ask whether spatial patterns of response in the core face regions (occipital face area - OFA, fusiform face area - FFA, superior temporal sulcus - STS) can be predicted across different participants by lower level properties of the stimulus. First, we compared the neural response to face identity and viewpoint, by showing images of different identities from different viewpoints. The patterns of neural response in the core face regions were predicted by the viewpoint, but not the identity of the face. Next, we compared the neural response to viewpoint and expression, by showing images with different expressions from different viewpoints. Again, viewpoint, but not expression, predicted patterns of response in face regions. Finally, we show that the effect of viewpoint in both experiments could be explained by changes in low-level image properties. Our results suggest that a key determinant of the neural representation in these core face regions involves lower-level image properties rather than an explicit representation of higher-level attributes in the face. The advantage of a relatively image-based representation is that it can be used flexibly in the perception of faces.
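One common way to operationalize "predicted across different participants" is a leave-one-subject-out pattern correlation: a held-out participant's condition pattern counts as predicted when it correlates best with the remaining participants' average pattern for the same condition. The sketch below assumes hypothetical ROI patterns; it illustrates this analysis family, not the authors' exact procedure.

```python
# Hedged sketch of cross-participant pattern prediction in an ROI.
import numpy as np

rng = np.random.default_rng(0)
n_subj, n_cond, n_vox = 20, 3, 500
patterns = rng.standard_normal((n_subj, n_cond, n_vox))  # e.g., viewpoints in FFA

hits = 0
for s in range(n_subj):
    group = patterns[np.arange(n_subj) != s].mean(axis=0)  # leave-one-out average
    for c in range(n_cond):
        r = [np.corrcoef(patterns[s, c], group[k])[0, 1] for k in range(n_cond)]
        hits += int(np.argmax(r) == c)   # hit if the same condition correlates best

print(f"cross-participant identification accuracy: {hits / (n_subj * n_cond):.2f}")
```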
Affiliation(s)
- Katja Weibert
- Department of Psychology and York Neuroimaging Centre, University of York, York, United Kingdom
- Tessa R Flack
- Department of Psychology and York Neuroimaging Centre, University of York, York, United Kingdom
- Andrew W Young
- Department of Psychology and York Neuroimaging Centre, University of York, York, United Kingdom
- Timothy J Andrews
- Department of Psychology and York Neuroimaging Centre, University of York, York, United Kingdom

46. Liang Y, Liu B, Li X, Wang P. Multivariate Pattern Classification of Facial Expressions Based on Large-Scale Functional Connectivity. Front Hum Neurosci 2018; 12:94. [PMID: 29615882; PMCID: PMC5868121; DOI: 10.3389/fnhum.2018.00094]
Abstract
How humans achieve efficient recognition of others' facial expressions is an important question in cognitive neuroscience, and previous studies have identified specific cortical regions that show preferential activation to facial expressions. However, the potential contribution of connectivity patterns to the processing of facial expressions has remained unclear. The present functional magnetic resonance imaging (fMRI) study explored whether facial expressions could be decoded from functional connectivity (FC) patterns using multivariate pattern analysis combined with machine learning algorithms (fcMVPA). We employed a block design experiment and collected neural activity while participants viewed facial expressions of six basic emotions (anger, disgust, fear, joy, sadness, and surprise); both static and dynamic expression stimuli were included. A behavioral experiment after scanning confirmed the validity of the facial stimuli presented during the fMRI experiment in terms of classification accuracy and emotional intensity. We obtained whole-brain FC patterns for each facial expression and found that both static and dynamic facial expressions could be successfully decoded from these patterns. Moreover, we identified expression-discriminative networks for static and dynamic facial expressions that span beyond the conventional face-selective areas. Overall, these results reveal that large-scale FC patterns contain rich expression information, suggesting a novel mechanism for human facial expression recognition that involves general interactions between distributed brain regions.
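The core of an fcMVPA pipeline of this kind can be sketched briefly: estimate a functional connectivity pattern per block (correlations among ROI time series), vectorize the unique ROI pairs, and feed those patterns to a linear classifier. The ROI count, block structure, and labels below are hypothetical stand-ins, not the study's design.

```python
# Hedged sketch of fcMVPA: decode expression labels from FC patterns.
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_blocks, n_trs, n_rois = 72, 20, 90
ts = rng.standard_normal((n_blocks, n_trs, n_rois))  # ROI time series per block
y = np.repeat(np.arange(6), 12)                      # six expressions x 12 blocks

iu = np.triu_indices(n_rois, k=1)                    # unique ROI pairs
X = np.array([np.corrcoef(block.T)[iu] for block in ts])  # FC pattern per block

cv = StratifiedKFold(n_splits=6, shuffle=True, random_state=0)
acc = cross_val_score(LinearSVC(max_iter=5000), X, y, cv=cv).mean()
print(f"mean decoding accuracy (chance = 1/6): {acc:.2f}")
```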
Affiliation(s)
- Yin Liang
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Baolin Liu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
- State Key Laboratory of Intelligent Technology and Systems, National Laboratory for Information Science and Technology, Tsinghua University, Beijing, China
- Xianglin Li
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, China
- Peiyuan Wang
- Department of Radiology, Yantai Affiliated Hospital of Binzhou Medical University, Yantai, China

47. Dobs K, Schultz J, Bülthoff I, Gardner JL. Task-dependent enhancement of facial expression and identity representations in human cortex. Neuroimage 2018; 172:689-702. [PMID: 29432802; DOI: 10.1016/j.neuroimage.2018.02.013]
Abstract
What cortical mechanisms allow humans to easily discern the expression or identity of a face? Subjects detected changes in expression or identity of a stream of dynamic faces while we measured BOLD responses from topographically and functionally defined areas throughout the visual hierarchy. Responses in dorsal areas increased during the expression task, whereas responses in ventral areas increased during the identity task, consistent with previous studies. Similar to ventral areas, early visual areas showed increased activity during the identity task. If visual responses are weighted by perceptual mechanisms according to their magnitude, these increased responses would lead to improved attentional selection of the task-appropriate facial aspect. Alternatively, increased responses could be a signature of a sensitivity enhancement mechanism that improves representations of the attended facial aspect. Consistent with the latter sensitivity enhancement mechanism, attending to expression led to enhanced decoding of expression exemplars in both early visual and dorsal areas relative to attending to identity. Similarly, decoding of identity exemplars when attending to identity was improved in dorsal and ventral areas. We conclude that attending to the expression or identity of dynamic faces is associated with increased selectivity in representations, consistent with sensitivity enhancement.
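The sensitivity-enhancement comparison reduces to decoding the same exemplars separately under each attention condition and contrasting the cross-validated accuracies, as in the sketch below. The two condition datasets and exemplar labels are simulated placeholders, not the study's data.

```python
# Hedged sketch: does attending to expression improve expression decoding?
import numpy as np
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)

def decode(X, y):
    cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
    return cross_val_score(LinearSVC(max_iter=5000), X, y, cv=cv).mean()

X_attend_expr = rng.standard_normal((100, 400))  # ROI patterns, attend-expression runs
X_attend_id = rng.standard_normal((100, 400))    # ROI patterns, attend-identity runs
y_expression = rng.integers(0, 5, 100)           # expression exemplar labels

gain = decode(X_attend_expr, y_expression) - decode(X_attend_id, y_expression)
print(f"decoding gain when expression is attended: {gain:+.3f}")
```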
Affiliation(s)
- Katharina Dobs
- Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076 Tübingen, Germany
- Laboratory for Human Systems Neuroscience, RIKEN Brain Science Institute, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 43 Vassar Street, Cambridge, MA 02139, USA
- Johannes Schultz
- Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076 Tübingen, Germany
- Division of Medical Psychology and Department of Psychiatry, University of Bonn, Sigmund Freud Str. 25, 53105 Bonn, Germany
- Isabelle Bülthoff
- Department Human Perception, Cognition and Action, Max Planck Institute for Biological Cybernetics, Max-Planck-Ring 8, 72076 Tübingen, Germany
- Justin L Gardner
- Laboratory for Human Systems Neuroscience, RIKEN Brain Science Institute, 2-1 Hirosawa, Wako, Saitama 351-0198, Japan
- Department of Psychology, Stanford University, 450 Serra Mall, Stanford, CA 94305, USA

48. Brooks JA, Freeman JB. Neuroimaging of person perception: A social-visual interface. Neurosci Lett 2017; 693:40-43. [PMID: 29275186; DOI: 10.1016/j.neulet.2017.12.046]
Abstract
The visual system is able to extract an enormous amount of socially relevant information from the face, including social categories, personality traits, and emotion. While facial features may be directly tied to certain perceptions, emerging research suggests that top-down social cognitive factors (e.g., stereotypes, social-conceptual knowledge, prejudice) considerably influence and shape the perceptual process. The rapid integration of higher-order social cognitive processes into visual perception can give rise to systematic biases in face perception and may potentially act as a mediating factor for intergroup behavioral and evaluative biases. Drawing on neuroimaging evidence, we review the ways that top-down social cognitive factors shape visual perception of facial features. This emerging work in social and affective neuroscience builds upon work on predictive coding and perceptual priors in cognitive neuroscience and visual cognition, suggesting domain-general mechanisms that underlie a social-visual interface through which social cognition affects visual perception.
Affiliation(s)
- Jeffrey A Brooks
- Department of Psychology, New York University, 6 Washington Place, New York, NY 10003, United States
- Jonathan B Freeman
- Department of Psychology, New York University, 6 Washington Place, New York, NY 10003, United States

49. Ito A, Niwano K, Tanabe M, Sato Y, Fujii T. Activity changes in the left superior temporal sulcus reflect the effects of childcare training on young female students' perceptions of infants' negative facial expressions. Neurosci Res 2017; 131:36-44. [PMID: 28916469; DOI: 10.1016/j.neures.2017.09.003]
Abstract
In many developed countries, the number of infants who experience non-parent childcare is increasing, and the role of preschool teachers is becoming more important. However, little attention has been paid to the effects of childcare training on students who are studying to become preschool teachers. We used functional magnetic resonance imaging (fMRI) to investigate whether and how childcare training affects brain responses to infants' facial expressions among young females studying to become preschool teachers. Twenty-seven subjects who attended a childcare training session (i.e., the experimental group) and 28 subjects who did not attend the training (i.e., the control group) participated in this study. Participants underwent fMRI scanning twice, before and after the childcare training session, and were presented with happy, neutral, and sad infant faces during each scan. The neuroimaging results revealed that activity patterns in the left superior temporal sulcus (STS) for sad faces were modulated by the interaction between the time point of data collection (pre- vs. post-training) and group. These results are the first to highlight the effects of childcare training on the human brain.
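The group-by-time interaction reported here can be tested in a simple, equivalent way: compute each participant's post-minus-pre change in left-STS response to sad faces and compare the changes between groups (for a 2 x 2 mixed design, a two-sample t-test on change scores is equivalent to the interaction F-test). The values below are simulated placeholders.

```python
# Hedged sketch of the group x time interaction via change scores.
import numpy as np
from scipy.stats import ttest_ind

rng = np.random.default_rng(0)
pre_train = rng.standard_normal(27)          # trained group, pre (simulated)
post_train = rng.standard_normal(27) + 0.4   # trained group, post (simulated shift)
pre_ctrl = rng.standard_normal(28)           # control group, pre
post_ctrl = rng.standard_normal(28)          # control group, post

change_train = post_train - pre_train        # within-subject change, trained
change_ctrl = post_ctrl - pre_ctrl           # within-subject change, controls

t, p = ttest_ind(change_train, change_ctrl)  # equals the interaction test
print(f"group x time interaction: t = {t:.2f}, p = {p:.3g}")
```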
Affiliation(s)
- Ayahito Ito
- Kansei Fukushi Research Institute, Tohoku Fukushi University, 6-149-1 Kunimigaoka, Aoba-ku, Sendai 989-3201, Japan
- Katsuko Niwano
- Faculty of Education, Tohoku Fukushi University, 1-8-1 Kunimi, Aoba-ku, Sendai 981-8522, Japan
- Motoko Tanabe
- Faculty of Health Science, Tohoku Fukushi University, 6-149-1 Kunimigaoka, Aoba-ku, Sendai 989-3201, Japan
- Yosuke Sato
- Faculty of Health Science, Tohoku Fukushi University, 6-149-1 Kunimigaoka, Aoba-ku, Sendai 989-3201, Japan
- Toshikatsu Fujii
- Kansei Fukushi Research Institute, Tohoku Fukushi University, 6-149-1 Kunimigaoka, Aoba-ku, Sendai 989-3201, Japan
50. Spunt RP, Adolphs R. The neuroscience of understanding the emotions of others. Neurosci Lett 2019; 693:44-48.
Abstract
We cannot help but impute emotions to the behaviors of others, and constantly infer not only what others are feeling, but also why they feel that way. The comprehension of other people's emotional states is computationally complex and difficult, requiring the flexible, context-sensitive deployment of cognitive operations that encompass rapid orienting to, and recognition of, emotionally salient cues; classification of emotions into culturally-learned categories; and using an abstract theory of mind to reason about what caused the emotion, what future actions the person might be planning, and what we should do next in response. This review summarizes what neuroscience data - primarily functional neuroimaging data - has so far taught us about the cognitive architecture enabling emotion understanding in its various forms.
Affiliation(s)
- Robert P Spunt
- California Institute of Technology, 1200 E California Blvd, HSS 228-77, Pasadena, CA 91125, United States
- Ralph Adolphs
- California Institute of Technology, 1200 E California Blvd, HSS 228-77, Pasadena, CA 91125, United States