1.
Li Y, Li S, Hu W, Yang L, Luo W. Spatial representation of multidimensional information in emotional faces revealed by fMRI. Neuroimage 2024; 290:120578. PMID: 38499051. DOI: 10.1016/j.neuroimage.2024.120578.
Abstract
Face perception is a complex process that involves highly specialized procedures and mechanisms. Investigating face perception can help us better understand how the brain processes fine-grained, multidimensional information. This research aimed to examine how different dimensions of facial information are represented in specific brain regions or through inter-regional connections, using an implicit face recognition task. To capture the representation of various facial information in the brain, we applied support vector machine decoding, functional connectivity, and model-based representational similarity analysis to fMRI data, yielding three key findings. First, despite the implicit nature of the task, emotion was still represented in the brain, in contrast to all other facial information. Second, the connection between the medial amygdala and the parahippocampal gyrus was essential for the representation of facial emotion in implicit tasks. Third, in implicit tasks, arousal was represented in the parahippocampal gyrus, whereas valence depended on the connection between the primary visual cortex and the parahippocampal gyrus. In conclusion, these findings dissociate the neural mechanisms of emotional valence and arousal, revealing the precise spatial patterns of multidimensional information processing in faces.
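The model-based representational similarity analysis mentioned above can be sketched in a few lines. This is a toy illustration, not the authors' pipeline: the condition labels, voxel counts, and simulated data are all hypothetical, and a simple categorical model RDM stands in for their facial-information models.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical data: 6 conditions (3 "neutral", 3 "emotional") x 50 voxels
labels = np.array([0, 0, 0, 1, 1, 1])
patterns = rng.normal(size=(6, 50))
patterns[labels == 0] += 2 * rng.normal(size=50)  # shared pattern within category 0
patterns[labels == 1] += 2 * rng.normal(size=50)  # shared pattern within category 1

def rdm(x):
    """Neural representational dissimilarity matrix: 1 - Pearson r between conditions."""
    return 1.0 - np.corrcoef(x)

def upper(m):
    """Vectorize the upper triangle, excluding the diagonal."""
    return m[np.triu_indices_from(m, k=1)]

def spearman(a, b):
    """Spearman correlation, computed as the Pearson correlation of ranks."""
    ranks = lambda v: v.argsort().argsort()
    return np.corrcoef(ranks(a), ranks(b))[0, 1]

neural_rdm = rdm(patterns)
# Categorical model RDM: dissimilar (1) between categories, similar (0) within
model_rdm = (labels[:, None] != labels[None, :]).astype(float)

r = spearman(upper(neural_rdm), upper(model_rdm))
print(f"model-neural RDM correlation: {r:.2f}")
```

A voxel (or region) whose neural RDM correlates positively with the model RDM is a candidate site representing that dimension; in practice this comparison is repeated over searchlights or ROIs with permutation-based inference.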
Affiliation(s)
- Yiwen Li
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, PR China; Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, PR China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, PR China
- Shuaixia Li
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, PR China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, PR China
- Weiyu Hu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, PR China
- Lan Yang
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, PR China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, PR China
- Wenbo Luo
- Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian 116029, PR China; Key Laboratory of Brain and Cognitive Neuroscience, Liaoning Province, Dalian 116029, PR China.
2.
Decoding six basic emotions from brain functional connectivity patterns. Sci China Life Sci 2022; 66:835-847. PMID: 36378473. DOI: 10.1007/s11427-022-2206-3.
Abstract
Although distinctive neural and physiological states are thought to underlie the six basic emotions, these emotions are often indistinguishable in functional magnetic resonance imaging (fMRI) voxelwise activation (VA) patterns. Here, we hypothesized that functional connectivity (FC) patterns across brain regions may contain emotion-representation information beyond VA patterns. We collected whole-brain fMRI data while human participants viewed pictures of faces expressing one of the six basic emotions (anger, disgust, fear, happiness, sadness, and surprise) or showing neutral expressions. We obtained FC patterns for each emotion across brain regions over the whole brain and applied multivariate pattern decoding in the FC-pattern representation space. The whole-brain FC patterns successfully classified not only the six basic emotions against neutral expressions but also each basic emotion against the others. For each basic emotion, we identified an emotion-representation network that extended beyond the classical emotion-processing regions. Finally, we demonstrated that, within the same brain regions, FC-based decoding consistently outperformed VA-based decoding. Taken together, our findings reveal that FC patterns contain emotional information and argue for further attention to the contribution of FC to emotion processing.
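The core idea of FC-pattern decoding, building a feature vector from pairwise inter-regional correlations and classifying in that space, can be sketched on toy data. This is a minimal illustration under invented assumptions (8 ROIs, simulated time series, and a nearest-centroid rule standing in for the study's multivariate decoder):

```python
import numpy as np

rng = np.random.default_rng(1)
n_rois, n_time = 8, 120

def simulate_run(coupled):
    """Simulate ROI time series; in the 'coupled' condition ROIs 0 and 1 share a signal."""
    ts = rng.normal(size=(n_rois, n_time))
    if coupled:
        shared = rng.normal(size=n_time)
        ts[0] += shared
        ts[1] += shared
    return ts

def fc_pattern(ts):
    """FC pattern = upper triangle of the ROI-by-ROI correlation matrix."""
    c = np.corrcoef(ts)
    return c[np.triu_indices(n_rois, k=1)]

# 20 runs per condition (e.g. one emotion vs. another, in the paper's terms)
X = np.array([fc_pattern(simulate_run(coupled=c)) for c in [True] * 20 + [False] * 20])
y = np.array([1] * 20 + [0] * 20)

# Leave-one-out nearest-centroid decoding in FC-pattern space
correct = 0
for i in range(len(y)):
    train = np.ones(len(y), bool)
    train[i] = False
    c1 = X[train & (y == 1)].mean(axis=0)
    c0 = X[train & (y == 0)].mean(axis=0)
    pred = 1 if np.linalg.norm(X[i] - c1) < np.linalg.norm(X[i] - c0) else 0
    correct += pred == y[i]

acc = correct / len(y)
print(f"leave-one-out decoding accuracy: {acc:.2f}")
```

The point of the sketch is that the discriminative information lives in a connectivity value (the ROI 0-1 correlation), which a VA-style mean-activation feature would miss entirely.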
3.
Klumpp H, Jimmy J, Burkhouse KL, Bhaumik R, Francis J, Craske MG, Phan KL, Ajilore O. Brain response to emotional faces in anxiety and depression: neural predictors of cognitive behavioral therapy outcome and predictor-based subgroups following therapy. Psychol Med 2022; 52:2095-2105. PMID: 33168110. DOI: 10.1017/s0033291720003979.
Abstract
BACKGROUND: Neuroimaging studies have shown that variance in brain response to emotional faces predicts cognitive behavioral therapy (CBT) outcome. An important next step is to determine whether individual differences in neural predictors of CBT response represent distinct patient groups.
METHODS: In total, 90 patients with internalizing disorders completed a face-matching task during functional magnetic resonance imaging before and after 12 weeks of CBT, and 45 healthy controls completed the task before and after 12 weeks. Patients exhibiting a >50% pre-to-post-CBT reduction in symptom severity on two measures were considered treatment responders. Regions of interest (ROIs) for angry, fearful, and happy faces were submitted to receiver operating characteristic (ROC) curve analysis. Significant ROIs were then submitted to decision tree analysis to classify responder/non-responder subgroups. Psychophysiological interaction (PPI) analyses were used to explore functional connectivity in the region(s) that delineated subgroups.
RESULTS: A total of 51 patients were treatment responders, and ROC curve results were significant for all face types, though the specific regions varied. Decision tree results revealed that superior occipital response to angry faces identified patient subgroups, such that the subgroup with 'high' occipital activity had more responders than the 'low' occipital subgroup. Following CBT, the high, relative to low, occipital subgroup was less symptomatic. Controls exhibited stable superior occipital activation over time. Whole-brain PPI showed reduced baseline superior occipital-postcentral gyrus functional connectivity in responders compared to non-responders.
CONCLUSIONS: Preliminary findings indicate that patients characterized by relatively greater pre-treatment superior occipital gyrus engagement to angry faces and reduced superior occipital-postcentral gyrus connectivity, relative to non-responders, may represent a phenotype likely to benefit from CBT.
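The ROC-curve and subgrouping steps can be illustrated with a toy computation. The group sizes follow the abstract (51 responders of 90 patients), but the activation scores, effect size, and threshold below are hypothetical, not the study's data; the single-threshold split is only a stand-in for a full decision tree.

```python
import numpy as np

rng = np.random.default_rng(2)

# Hypothetical pre-treatment superior occipital responses to angry faces
responders = rng.normal(loc=1.0, scale=1.0, size=51)      # higher on average
non_responders = rng.normal(loc=0.0, scale=1.0, size=39)

def auc(pos, neg):
    """ROC AUC via its rank interpretation: P(random responder > random non-responder)."""
    pos, neg = np.asarray(pos), np.asarray(neg)
    greater = (pos[:, None] > neg[None, :]).sum()
    ties = (pos[:, None] == neg[None, :]).sum()
    return (greater + 0.5 * ties) / (len(pos) * len(neg))

a = auc(responders, non_responders)
print(f"ROC AUC: {a:.2f}")

# A one-variable 'decision tree' reduces to a threshold split on the same score,
# yielding 'high' vs. 'low' occipital subgroups (threshold is hypothetical)
threshold = 0.5
high = np.concatenate([responders, non_responders]) > threshold
print(f"patients in the 'high occipital' subgroup: {high.sum()}")
```

An AUC reliably above 0.5 is what licenses feeding the ROI into the subsequent subgrouping analysis.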
Affiliation(s)
- Heide Klumpp
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
- Jagan Jimmy
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
- Katie L Burkhouse
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
- Runa Bhaumik
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
- Jennifer Francis
- Department of Psychiatry & Behavioral Sciences, Rush University Medical Center, Chicago, IL, USA
- Michelle G Craske
- Department of Psychology and Department of Psychiatry and Biobehavioral Sciences, University of California-Los Angeles, Los Angeles, CA, USA
- K Luan Phan
- Department of Psychiatry and Behavioral Health, The Ohio State University, Columbus, OH, USA
- Olusola Ajilore
- Department of Psychiatry, University of Illinois at Chicago, Chicago, IL, USA
4.
Brooks JA, Stolier RM, Freeman JB. Computational approaches to the neuroscience of social perception. Soc Cogn Affect Neurosci 2021; 16:827-837. PMID: 32986115. PMCID: PMC8343569. DOI: 10.1093/scan/nsaa127.
Abstract
Across multiple domains of social perception, including social categorization, emotion perception, impression formation, and mentalizing, multivariate pattern analysis (MVPA) of functional magnetic resonance imaging (fMRI) data has permitted a more detailed understanding of how social information is processed and represented in the brain. As in other neuroimaging fields, the neuroscientific study of social perception initially relied on broad structure-function associations derived from univariate fMRI analysis to map neural regions involved in these processes. In this review, we trace the ways that social neuroscience studies using MVPA have built on these neuroanatomical associations to better characterize the computational relevance of different brain regions, and discuss how MVPA allows explicit tests of the correspondence between psychological models and the neural representation of social information. We also describe current and future advances in methodological approaches to multivariate fMRI data and their theoretical value for the neuroscience of social perception.
Affiliation(s)
- Jeffrey A Brooks
- Department of Psychology, New York University, New York, NY, USA
- Ryan M Stolier
- Columbia University, 1190 Amsterdam Ave., New York, NY 10027, USA
5.
Quettier T, Gambarota F, Tsuchiya N, Sessa P. Blocking facial mimicry during binocular rivalry modulates visual awareness of faces with a neutral expression. Sci Rep 2021; 11:9972. PMID: 33976281. PMCID: PMC8113223. DOI: 10.1038/s41598-021-89355-5.
Abstract
Several previous studies have interfered with the observer's facial mimicry during a variety of facial expression recognition tasks, providing evidence for the role of facial mimicry and sensorimotor activity in emotion processing. In this theoretical context, a particularly intriguing facet has been neglected, namely whether blocking facial mimicry modulates conscious perception of facial expressions of emotions. To address this issue, we used a binocular rivalry paradigm, in which two dissimilar stimuli presented to the two eyes alternately dominate conscious perception. On each trial, female participants (N = 32) were exposed to a rivalrous pair of a neutral and a happy expression of the same individual through anaglyph glasses in two conditions: in one, they could freely use their facial mimicry; in the other, they held a chopstick between their lips, constraining the mobility of the zygomatic muscle and producing 'noise' for sensorimotor simulation. We found that blocking facial mimicry affected perceptual dominance in terms of cumulative time, favoring neutral faces, but it did not change the time before the first dominance was established. Taken together, our results open the door to future investigation of the intersection between sensorimotor simulation models and conscious perception of emotional facial expressions.
Affiliation(s)
- Thomas Quettier
- Department of Developmental and Social Psychology, University of Padua, Via Venezia 8, 35121, Padua, Italy
- Filippo Gambarota
- Department of Developmental and Social Psychology, University of Padua, Via Venezia 8, 35121, Padua, Italy
- Naotsugu Tsuchiya
- School of Psychological Sciences, Monash University, Clayton, Australia
- Paola Sessa
- Department of Developmental and Social Psychology, University of Padua, Via Venezia 8, 35121, Padua, Italy; Padova Neuroscience Center (PNC), University of Padua, Padua, Italy
6.
Sliwinska MW, Elson R, Pitcher D. Dual-site TMS demonstrates causal functional connectivity between the left and right posterior temporal sulci during facial expression recognition. Brain Stimul 2020; 13:1008-1013. PMID: 32335230. PMCID: PMC7301156. DOI: 10.1016/j.brs.2020.04.011.
Abstract
BACKGROUND: Neuroimaging studies suggest that facial expression recognition is processed in the bilateral posterior superior temporal sulcus (pSTS). Our recent repetitive transcranial magnetic stimulation (rTMS) study demonstrated that the bilateral pSTS is causally involved in expression recognition, although the involvement of the right pSTS is greater than that of the left pSTS.
OBJECTIVE/HYPOTHESIS: In this study, we used dual-site TMS to investigate whether the left pSTS is functionally connected to the right pSTS during expression recognition. We predicted that if this connection exists, simultaneous TMS disruption of the bilateral pSTS would impair expression recognition to a greater extent than unilateral stimulation of the right pSTS alone.
METHODS: Participants attended two TMS sessions. In Session 1, participants performed an expression recognition task while rTMS was delivered to the face-sensitive right pSTS (experimental site) or the object-sensitive right lateral occipital complex (control site), or no rTMS was delivered (behavioural control). In Session 2, the same experimental design was used, except that continuous theta-burst stimulation (cTBS) was delivered to the left pSTS immediately before behavioural testing commenced. Session order was counter-balanced across participants.
RESULTS: In Session 1, rTMS to the right pSTS impaired performance accuracy compared to the control conditions. Crucially, in Session 2 the size of this impairment effect doubled after cTBS was delivered to the left pSTS.
CONCLUSIONS: Our results provide evidence for a causal functional connection between the left and right pSTS during expression recognition. In addition, this study further demonstrates the utility of dual-site TMS for investigating causal functional links between brain regions.
Highlights:
- Dual-site TMS was used to test causal functional connectivity between the left and right pSTS during expression recognition.
- rTMS delivered to the right pSTS during a facial expression recognition task impaired recognition.
- cTBS delivered to the left pSTS before the task doubled the impairment effect of rTMS to the right pSTS.
- The results demonstrate causal functional connectivity between the left and right pSTS during expression recognition, and the utility of dual-site TMS for investigating interregional causal functional connectivity.
Affiliation(s)
- Ryan Elson
- Department of Psychology, University of York, Heslington, York, YO10 5DD, UK
- David Pitcher
- Department of Psychology, University of York, Heslington, York, YO10 5DD, UK
7.
Liang Y, Liu B, Ji J, Li X. Network Representations of Facial and Bodily Expressions: Evidence From Multivariate Connectivity Pattern Classification. Front Neurosci 2019; 13:1111. PMID: 31736683. PMCID: PMC6828617. DOI: 10.3389/fnins.2019.01111.
Abstract
Emotions can be perceived from both facial and bodily expressions. Our previous study found that facial expressions can be successfully decoded from functional connectivity (FC) patterns. However, the role of FC patterns in the recognition of bodily expressions remained unclear, and no neuroimaging studies had adequately addressed whether emotions perceived from facial and bodily expressions rely upon common or distinct neural networks. To address this, the present study collected functional magnetic resonance imaging (fMRI) data in a block-design experiment with facial and bodily expression videos as stimuli (three emotions: anger, fear, and joy) and conducted multivariate pattern classification based on the estimated FC patterns. We found that, in addition to facial expressions, bodily expressions could also be successfully decoded from large-scale FC patterns. Emotion classification accuracies for facial expressions were higher than those for bodily expressions. A further contributive FC analysis showed that emotion-discriminative networks were widely distributed in both hemispheres, containing regions ranging from primary visual areas to higher-level cognitive areas. Moreover, for a given emotion, the discriminative FCs for facial and bodily expressions were distinct. Together, our findings highlight the key role of FC patterns in emotion processing, indicate how large-scale FC patterns reconfigure during the processing of facial and bodily expressions, and suggest a distributed neural representation for emotion recognition. Furthermore, our results suggest that the human brain employs separate network representations for facial and bodily expressions of the same emotions. This study provides new evidence for network representations of emotion perception and may further our understanding of the mechanisms underlying body-language emotion recognition.
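The contributive-FC idea, asking which connections carry the discriminative information, can be sketched by scoring each connection's separability between two emotion conditions. The data below are toy FC features, and the t-statistic-like score is a simple stand-in for the authors' contribution measure:

```python
import numpy as np

rng = np.random.default_rng(3)
n_edges, n_trials = 10, 30

# Toy FC features for two emotions; only edges 2 and 7 differ between conditions
fc_a = rng.normal(size=(n_trials, n_edges))
fc_b = rng.normal(size=(n_trials, n_edges))
fc_b[:, [2, 7]] += 1.5

# Separability per edge: |mean difference| / pooled std (a t-like score)
diff = np.abs(fc_a.mean(0) - fc_b.mean(0))
pooled = np.sqrt((fc_a.var(0, ddof=1) + fc_b.var(0, ddof=1)) / 2)
score = diff / pooled

# The highest-scoring edges form the candidate emotion-discriminative network
top_edges = np.argsort(score)[::-1][:2]
print("most discriminative edges:", sorted(top_edges.tolist()))
```

Running the same ranking separately for facial and bodily stimuli, as the study does, lets one check whether the top-ranked connections overlap between the two stimulus types.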
Affiliation(s)
- Yin Liang
- Faculty of Information Technology, Beijing Artificial Intelligence Institute, Beijing University of Technology, Beijing, China
- Baolin Liu
- Tianjin Key Laboratory of Cognitive Computing and Application, School of Computer Science and Technology, Tianjin University, Tianjin, China; School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, China; State Key Laboratory of Intelligent Technology and Systems, National Laboratory for Information Science and Technology, Tsinghua University, Beijing, China
- Junzhong Ji
- Faculty of Information Technology, Beijing Artificial Intelligence Institute, Beijing University of Technology, Beijing, China
- Xianglin Li
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, China
8.
Ge S, Wang P, Liu H, Lin P, Gao J, Wang R, Iramina K, Zhang Q, Zheng W. Neural Activity and Decoding of Action Observation Using Combined EEG and fNIRS Measurement. Front Hum Neurosci 2019; 13:357. PMID: 31680910. PMCID: PMC6803538. DOI: 10.3389/fnhum.2019.00357.
Abstract
In a social world, observing the actions of others is fundamental to understanding what they are doing, as well as their intentions and feelings. Studies of the neural basis and decoding of action observation are important for understanding action-related processes and have implications for cognitive and social neuroscience and human-machine interaction (HMI). In the current study, we first investigated temporal-spatial dynamics during action observation using a combined 64-channel electroencephalography (EEG) and 48-channel functional near-infrared spectroscopy (fNIRS) system. We measured brain activation while 16 healthy participants observed three action tasks: (1) grasping a cup with the intention of drinking; (2) grasping a cup with the intention of moving it; and (3) touching a cup with an unclear intention. The EEG and fNIRS source analysis results revealed the dynamic involvement of both the mirror neuron system (MNS) and the theory of mind (ToM)/mentalizing network during action observation, and suggested that the extent to which these two systems were engaged was determined by the clarity of the intention of the observed action. Based on the differences in neural activity observed among the action-observation tasks in the first experiment, we conducted a second experiment to classify the neural processes underlying action observation using a feature classification method. We constructed complex brain networks from the EEG and fNIRS data. Fusing features from both the EEG and fNIRS complex brain networks resulted in a classification accuracy of 72.7% for the three action-observation tasks. This study provides a theoretical and empirical basis for elucidating the neural mechanisms of action observation and intention understanding, and a feasible method for decoding the underlying neural processes.
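The EEG-fNIRS feature-fusion step can be illustrated with a toy classifier in which features from the two modalities are simply concatenated and decoded together. Everything here is invented (feature counts, effect sizes, and a leave-one-out nearest-centroid classifier in place of the authors' method); it only shows why fusing complementary modalities can outperform either alone.

```python
import numpy as np

rng = np.random.default_rng(4)
n_per_class, n_eeg, n_fnirs = 20, 12, 6

def make_class(eeg_shift, fnirs_shift):
    """Toy network features for one task: a weak class signal in each modality."""
    eeg = rng.normal(size=(n_per_class, n_eeg)) + eeg_shift
    fnirs = rng.normal(size=(n_per_class, n_fnirs)) + fnirs_shift
    return np.hstack([eeg, fnirs])

# Three 'action observation tasks'; tasks 1 and 2 are separable from task 0
# only via the EEG and fNIRS features, respectively
X = np.vstack([make_class(0.0, 0.0), make_class(1.2, 0.0), make_class(0.0, 1.2)])
y = np.repeat([0, 1, 2], n_per_class)

def loo_nearest_centroid(feats, labels):
    """Leave-one-out nearest-centroid classification accuracy."""
    correct = 0
    for i in range(len(labels)):
        mask = np.ones(len(labels), bool)
        mask[i] = False
        cents = np.array([feats[mask & (labels == k)].mean(0) for k in np.unique(labels)])
        correct += np.argmin(np.linalg.norm(cents - feats[i], axis=1)) == labels[i]
    return correct / len(labels)

acc_fused = loo_nearest_centroid(X, y)            # EEG + fNIRS fused features
acc_eeg = loo_nearest_centroid(X[:, :n_eeg], y)   # EEG features alone
print(f"fused accuracy: {acc_fused:.2f}, EEG-only: {acc_eeg:.2f}")
```

Because each modality carries a class distinction the other lacks, the fused feature vector separates all three classes while the single-modality decoder cannot.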
Affiliation(s)
- Sheng Ge
- Key Laboratory of Child Development and Learning Science of Ministry of Education, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Peng Wang
- Key Laboratory of Child Development and Learning Science of Ministry of Education, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Hui Liu
- Key Laboratory of Child Development and Learning Science of Ministry of Education, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
- Pan Lin
- Department of Psychology and Cognition and Human Behavior Key Laboratory of Hunan Province, Hunan Normal University, Changsha, China
- College of Biomedical Engineering, South-Central University for Nationalities, Wuhan, China
- Junfeng Gao
- College of Biomedical Engineering, South-Central University for Nationalities, Wuhan, China
- Ruimin Wang
- Department of Graduate School of Systems Life Sciences, Kyushu University, Fukuoka, Japan
- Keiji Iramina
- Department of Graduate School of Systems Life Sciences, Kyushu University, Fukuoka, Japan
- Quan Zhang
- Neural Systems Group, Massachusetts General Hospital, Harvard Medical School, Charlestown, MA, United States
- Wenming Zheng
- Key Laboratory of Child Development and Learning Science of Ministry of Education, School of Biological Science and Medical Engineering, Southeast University, Nanjing, China
9.
Gu J, Cao L, Liu B. Modality-general representations of valences perceived from visual and auditory modalities. Neuroimage 2019; 203:116199. PMID: 31536804. DOI: 10.1016/j.neuroimage.2019.116199.
Abstract
Valence is a dimension of emotion and can be positive, negative, or neutral. Valence can be expressed through the visual and auditory modalities, and within each modality it can be conveyed by different types of stimuli (face, body, voice, or music). This study focused on modality-general representations of valence, that is, valence information shared not only across the visual and auditory modalities but also across different types of stimuli within each modality. Functional magnetic resonance imaging (fMRI) data were collected while subjects made affective judgments on silent videos (face and body) and audio clips (voice and music). A searchlight analysis located four areas that might be sensitive to modality-general valence representations: the bilateral postcentral gyrus, the left middle temporal gyrus (MTG), and the right middle frontal gyrus (MFG). Cross-modal classification based on multivoxel pattern analysis (MVPA) was then performed as a validation analysis, which suggested that only the left postcentral gyrus could successfully distinguish the three valences (positive vs. negative vs. neutral) across the different types of stimuli (face, body, voice, and music); the classification was also successful in the left MTG across the face and body stimulus types. A univariate analysis further found valence-specific activation differences across stimulus types in the MTG. Our study shows that the left postcentral gyrus is informative for valence representation, and extends research on valence representation by demonstrating modality-general representations of valence not only across the visual and auditory modalities but also across different types of stimuli within each modality.
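Cross-modal MVPA of the kind validated here, train on one modality and test on the other, can be sketched as follows. All sizes and signal strengths are invented, and a nearest-centroid rule stands in for the actual classifier; the toy data simply build in a valence pattern shared across modalities plus a modality-specific component.

```python
import numpy as np

rng = np.random.default_rng(5)
n_per_cell, n_vox = 15, 40

# A modality-general valence code: each valence has a voxel pattern shared
# across modalities, plus modality-specific structure and trial noise
valence_patterns = {v: rng.normal(size=n_vox) for v in ("positive", "negative", "neutral")}
visual_offset = rng.normal(scale=0.5, size=n_vox)    # modality-specific component
auditory_offset = rng.normal(scale=0.5, size=n_vox)

def trials(valence, modality_offset):
    """Simulated single-trial voxel patterns for one valence in one modality."""
    return valence_patterns[valence] + modality_offset + rng.normal(size=(n_per_cell, n_vox))

train = {v: trials(v, visual_offset) for v in valence_patterns}    # face/body trials
test = {v: trials(v, auditory_offset) for v in valence_patterns}   # voice/music trials

# Train centroids on visual trials, test on auditory trials (cross-modal MVPA)
centroids = {v: x.mean(0) for v, x in train.items()}

def classify(sample):
    return min(centroids, key=lambda v: np.linalg.norm(sample - centroids[v]))

correct = sum(classify(s) == v for v, x in test.items() for s in x)
acc = correct / (3 * n_per_cell)
print(f"cross-modal (visual -> auditory) accuracy: {acc:.2f}")
```

Above-chance transfer in this direction (and the reverse) is the signature of a modality-general code; if only the modality-specific components carried valence, the cross-modal accuracy would stay near 1/3.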
Affiliation(s)
- Jin Gu
- College of Intelligence and Computing, Tianjin University, Tianjin, 300350, PR China
- Linjing Cao
- College of Intelligence and Computing, Tianjin University, Tianjin, 300350, PR China
- Baolin Liu
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, PR China.
10.
Gu J, Liu B, Li X, Wang P, Wang B. Cross-modal representations in early visual and auditory cortices revealed by multi-voxel pattern analysis. Brain Imaging Behav 2019; 14:1908-1920. PMID: 31183774. DOI: 10.1007/s11682-019-00135-2.
Abstract
Primary sensory cortices can respond not only to their defined sensory modality but also to cross-modal information. Beyond this observed cross-modal phenomenon, it is worth investigating whether cross-modal information can be used to categorize stimuli and what effect other factors, such as experience and imagination, may have on cross-modal processing. In this study, we investigated cross-modal information processing in the early visual cortex (EVC, comprising visual areas V1, V2, and V3) and the auditory cortex (primary (A1) and secondary (A2) auditory cortex). Images and sound clips were presented to participants separately in two experiments in which participants' imagination and expectations were restricted by an orthogonal fixation task, and the data were collected by functional magnetic resonance imaging (fMRI). Using multi-voxel pattern analysis (MVPA), we successfully decoded the categories of the cross-modal stimuli in all ROIs except V1. Familiar sounds further yielded higher classification accuracies in V2 and V3 than unfamiliar sounds. A cross-classification analysis showed no significant similarity between the activity patterns induced by the different stimulus modalities. Thus, even though the cross-modal representation is robust when top-down expectations and mental imagery are restricted, as in our experiments, sound experience affected cross-modal representation in V2 and V3. In addition, primary sensory cortices may receive information from different modalities in different ways, so the activity patterns evoked by the two modalities were not similar enough for successful cross-classification.
Affiliation(s)
- Jin Gu
- College of Intelligence and Computing, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, 300350, People's Republic of China
- Baolin Liu
- School of Computer and Communication Engineering, University of Science and Technology Beijing, Beijing, 100083, People's Republic of China.
- Xianglin Li
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, Shandong, 264003, People's Republic of China
- Peiyuan Wang
- Department of Radiology, Yantai Affiliated Hospital of Binzhou Medical University, Yantai, Shandong, 264003, People's Republic of China
- Bin Wang
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, Shandong, 264003, People's Republic of China
11.
Hesse E, Mikulan E, Sitt JD, Garcia MDC, Silva W, Ciraolo C, Vaucheret E, Raimondo F, Baglivo F, Adolfi F, Herrera E, Bekinschtein TA, Petroni A, Lew S, Sedeno L, Garcia AM, Ibanez A. Consistent Gradient of Performance and Decoding of Stimulus Type and Valence From Local and Network Activity. IEEE Trans Neural Syst Rehabil Eng 2019; 27:619-629. PMID: 30869625. DOI: 10.1109/tnsre.2019.2903921.
Abstract
The individual differences approach focuses on the variation of behavioral and neural signatures across subjects. In this context, we searched for intracranial neural markers of performance in three individuals with distinct behavioral patterns (efficient, borderline, and inefficient) in a dual-valence task assessing facial and lexical emotion recognition. First, we performed a preliminary study to replicate well-established evoked responses in relevant brain regions. Then, we examined time-series data and network connectivity, combined with multivariate pattern analyses and machine learning, to explore electrophysiological differences in resting-state versus task-related activity across subjects. Next, using the same methodological approach, we assessed the neural decoding of performance for different dimensions of the task. The classification of time-series data mirrored the behavioral gradient across subjects for stimulus type but not for valence. However, network-based measures reflected the subjects' hierarchical profiles for both stimulus type and valence, and therefore serve as sensitive markers for capturing distributed processes such as emotional valence discrimination, which relies on an extended set of regions. Network measures combined with classification methods may offer useful insights for studying single subjects and understanding inter-individual performance variability. Promisingly, this approach could eventually be extrapolated to other neuroscientific techniques.
12.
Cao L, Xu J, Yang X, Li X, Liu B. Abstract Representations of Emotions Perceived From the Face, Body, and Whole-Person Expressions in the Left Postcentral Gyrus. Front Hum Neurosci 2018; 12:419. PMID: 30405375. PMCID: PMC6200969. DOI: 10.3389/fnhum.2018.00419.
Abstract
Emotions can be perceived through the face, body, and whole-person, while previous studies on the abstract representations of emotions only focused on the emotions of the face and body. It remains unclear whether emotions can be represented at an abstract level regardless of all three sensory cues in specific brain regions. In this study, we used the representational similarity analysis (RSA) to explore the hypothesis that the emotion category is independent of all three stimulus types and can be decoded based on the activity patterns elicited by different emotions. Functional magnetic resonance imaging (fMRI) data were collected when participants classified emotions (angry, fearful, and happy) expressed by videos of faces, bodies, and whole-persons. An abstract emotion model was defined to estimate the neural representational structure in the whole-brain RSA, which assumed that the neural patterns were significantly correlated in within-emotion conditions ignoring the stimulus types but uncorrelated in between-emotion conditions. A neural representational dissimilarity matrix (RDM) for each voxel was then compared to the abstract emotion model to examine whether specific clusters could identify the abstract representation of emotions that generalized across stimulus types. The significantly positive correlations between neural RDMs and models suggested that the abstract representation of emotions could be successfully captured by the representational space of specific clusters. The whole-brain RSA revealed an emotion-specific but stimulus category-independent neural representation in the left postcentral gyrus, left inferior parietal lobe (IPL) and right superior temporal sulcus (STS). 
Further cluster-based MVPA revealed that, in cross-modal classification, only the left postcentral gyrus successfully distinguished the three emotions for two stimulus-type pairs (face-body and body-whole person), and distinguished happy versus angry/fearful, which can be regarded as positive versus negative, for all three stimulus-type pairs. Our study suggests that abstract representations of the three emotions (angry, fearful, and happy) extend from face and body stimuli to whole-person stimuli, and these findings support abstract representations of emotions in the left postcentral gyrus.
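The model-comparison logic described in this abstract can be sketched in a few lines. This is a toy illustration, not the authors' pipeline: the 3 emotions x 3 stimulus types design, the 50-dimensional "voxel patterns", and the noise levels are all assumed for demonstration, using only numpy and scipy.

```python
# Toy sketch of abstract-emotion RSA: build a model RDM that is similar
# within an emotion (across stimulus types) and dissimilar between
# emotions, then correlate it with a neural RDM from simulated patterns.
import numpy as np
from scipy.stats import spearmanr
from scipy.spatial.distance import pdist, squareform

emotions = ["angry", "fearful", "happy"]
stimuli = ["face", "body", "whole-person"]
conditions = [(e, s) for e in emotions for s in stimuli]  # 9 conditions
n = len(conditions)

# Abstract emotion model RDM: 0 (similar) within an emotion regardless
# of stimulus type, 1 (dissimilar) between emotions.
model_rdm = np.array([[0.0 if ei == ej else 1.0
                       for (ej, _) in conditions]
                      for (ei, _) in conditions])

rng = np.random.default_rng(0)
# Simulated "voxel patterns": a shared emotion signal plus noise, so
# within-emotion conditions correlate across stimulus types.
emo_signal = {e: rng.normal(size=50) for e in emotions}
patterns = np.array([emo_signal[e] + 0.5 * rng.normal(size=50)
                     for (e, _) in conditions])

# Neural RDM: 1 - Pearson correlation between condition patterns.
neural_rdm = squareform(pdist(patterns, metric="correlation"))

# Compare model and neural RDMs on the upper triangle (Spearman),
# as is standard in RSA.
iu = np.triu_indices(n, k=1)
rho, p = spearmanr(model_rdm[iu], neural_rdm[iu])
print(f"model-neural Spearman rho = {rho:.2f}")
```

In a searchlight version, the neural RDM would be recomputed from the voxels around each brain location and the resulting rho map thresholded across subjects.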
Collapse
Affiliation(s)
- Linjing Cao
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Junhai Xu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Xiaoli Yang
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Xianglin Li
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, China
- Baolin Liu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, China; State Key Laboratory of Intelligent Technology and Systems, Tsinghua National Laboratory for Information Science and Technology, Tsinghua University, Beijing, China
Collapse
|
13
|
Miao Q, Zhang G, Yan W, Liu B. Investigating the Brain Neural Mechanism when Signature Objects were Masked during a Scene Categorization Task using Functional MRI. Neuroscience 2018; 388:248-262. [PMID: 30056114 DOI: 10.1016/j.neuroscience.2018.07.030] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2018] [Revised: 07/17/2018] [Accepted: 07/18/2018] [Indexed: 11/17/2022]
Abstract
Objects play vital roles in scene categorization. Although a number of studies have investigated the neural responses during object recognition and object-based scene recognition, few have examined the neural mechanism underlying object-masked scene categorization. Here, we used functional magnetic resonance imaging (fMRI) to measure changes in brain activations and functional connectivity (FC) while subjects performed a visual scene-categorization task with different numbers of 'signature objects' masked. The object-selective region in the lateral occipital complex (LOC) showed decreased activations and changed FC with the default mode network (DMN), indicating changes in object attention after the masking of signature objects. Changes in the top-down modulation effect were revealed in the FC from the dorsolateral prefrontal cortex (DLPFC) to the LOC and the extrastriate visual cortex, which may participate in conscious object recognition. Whole-brain analyses showed the participation of the fronto-parietal network (FPN) in scene-categorization judgment, with the right DLPFC serving as a core hub in this network. Another core hub was found in the left middle temporal gyrus (MTG), whose connections with the middle cingulate cortex (MCC), supramarginal gyrus (SMG), and insula may serve the processing of motor responses and of the semantic relations between objects and scenes. Brain-behavior correlation analysis substantiated the contributions of the FC to the different processes in the object-masked scene-categorization task. Altogether, the results suggest that masking of objects significantly affected object attention, cognitive demand, the top-down modulation effect, and semantic judgment.
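The two analyses this abstract leans on, seed-based FC and brain-behavior correlation, can be sketched as follows. This is a minimal toy setup, not the authors' pipeline: the region names (DLPFC, LOC), the 20 subjects, 200 timepoints, and the coupling/behavior model are all assumed for illustration, using only numpy and scipy.

```python
# Toy sketch: (1) FC between two regions as the Pearson correlation of
# their time series, per subject; (2) brain-behavior correlation of
# those FC values with a behavioral score across subjects.
import numpy as np
from scipy.stats import pearsonr

rng = np.random.default_rng(1)
n_subjects, n_timepoints = 20, 200

fc_values, behavior = [], []
for _ in range(n_subjects):
    # Simulated ROI time series (e.g., DLPFC and LOC): a shared signal
    # whose strength varies across subjects and drives behavior.
    coupling = rng.uniform(0.0, 1.0)
    shared = rng.normal(size=n_timepoints)
    dlpfc = shared + rng.normal(size=n_timepoints)
    loc = coupling * shared + rng.normal(size=n_timepoints)
    # Per-subject FC: Pearson correlation of the two time series.
    fc, _ = pearsonr(dlpfc, loc)
    fc_values.append(fc)
    behavior.append(coupling + 0.2 * rng.normal())  # toy task score

# Brain-behavior correlation: does stronger DLPFC-LOC FC track the
# behavioral score across subjects?
r, p = pearsonr(fc_values, behavior)
print(f"FC-behavior r = {r:.2f}, p = {p:.3f}")
```

In real data, the per-subject time series would come from anatomically or functionally defined ROIs after standard preprocessing, and the across-subject correlation would typically be corrected for multiple comparisons over connections.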
Collapse
Affiliation(s)
- Qiaomu Miao
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300350, PR China
- Gaoyan Zhang
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300350, PR China
- Weiran Yan
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300350, PR China
- Baolin Liu
- School of Computer Science and Technology, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin 300350, PR China; State Key Laboratory of Intelligent Technology and Systems, National Laboratory for Information Science and Technology, Tsinghua University, Beijing 100084, PR China.
Collapse
|