1. Lee J, Park S. Multi-modal Representation of the Size of Space in the Human Brain. J Cogn Neurosci 2024;36:340-361. PMID: 38010320. DOI: 10.1162/jocn_a_02092.
Abstract
To estimate the size of an indoor space, we must analyze the visual boundaries that limit the spatial extent and acoustic cues from sound reflected off interior surfaces. We used fMRI to examine how the brain processes the geometric size of indoor scenes when various types of sensory cues are presented individually or together. Specifically, we asked whether the size of space is represented in a modality-specific way or in an integrative way that combines multimodal cues. In a block-design study, images or sounds that depict small- and large-sized indoor spaces were presented. Visual stimuli were real-world pictures of empty spaces that were small or large. Auditory stimuli were sounds convolved with different reverberations. By using a multivoxel pattern classifier, we asked whether the two sizes of space can be classified in visual, auditory, and visual-auditory combined conditions. We identified both sensory-specific and multimodal representations of the size of space. To further investigate the nature of the multimodal region, we specifically examined whether it contained multimodal information in a coexistent or integrated form. We found that angular gyrus and the right medial frontal gyrus had modality-integrated representation, displaying sensitivity to the match in the spatial size information conveyed through image and sound. Background functional connectivity analysis further demonstrated that the connection between sensory-specific regions and modality-integrated regions increases in the multimodal condition compared with single-modality conditions. Our results suggest that spatial size perception relies on both sensory-specific and multimodal representations, as well as their interplay during multimodal perception.
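For readers unfamiliar with the multivoxel pattern classification this abstract describes, the sketch below shows the general shape of such an analysis: a linear classifier trained on block-wise ROI voxel patterns to distinguish small- from large-space conditions, scored with leave-one-run-out cross-validation. The design parameters (8 runs, 48 blocks, 200 voxels), the synthetic data, and the LinearSVC classifier are illustrative assumptions, not the authors' actual pipeline.

```python
# Minimal MVPA sketch on synthetic data: classify small- vs. large-space
# blocks from ROI voxel patterns with leave-one-run-out cross-validation.
import numpy as np
from sklearn.model_selection import LeaveOneGroupOut, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_blocks, n_voxels, n_runs = 48, 200, 8                   # hypothetical design
X = rng.normal(size=(n_blocks, n_voxels))                 # one ROI pattern per block
y = np.tile([0, 1], n_blocks // 2)                        # 0 = small space, 1 = large space
runs = np.repeat(np.arange(n_runs), n_blocks // n_runs)   # run labels define CV folds

clf = make_pipeline(StandardScaler(), LinearSVC())
acc = cross_val_score(clf, X, y, groups=runs, cv=LeaveOneGroupOut())
print(f"mean leave-one-run-out accuracy: {acc.mean():.2f}")  # ~chance on pure noise
```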
2. Lee J, Park S. Multi-modal representation of the size of space in the human brain. bioRxiv [Preprint] 2023:2023.07.24.550343. PMID: 37546991; PMCID: PMC10402083. DOI: 10.1101/2023.07.24.550343.
Abstract
To estimate the size of an indoor space, we must analyze the visual boundaries that limit the spatial extent and acoustic cues from sound reflected off interior surfaces. We used fMRI to examine how the brain processes the geometric size of indoor scenes when various types of sensory cues are presented individually or together. Specifically, we asked whether the size of space is represented in a modality-specific way or in an integrative way that combines multimodal cues. In a block-design study, images or sounds that depict small- and large-sized indoor spaces were presented. Visual stimuli were real-world pictures of empty spaces that were small or large. Auditory stimuli were sounds convolved with different reverberations. By using a multi-voxel pattern classifier, we asked whether the two sizes of space can be classified in visual, auditory, and visual-auditory combined conditions. We identified both sensory-specific and multimodal representations of the size of space. To further investigate the nature of the multimodal region, we specifically examined whether it contained multimodal information in a coexistent or integrated form. We found that the angular gyrus (AG) and the right inferior frontal gyrus (IFG) pars opercularis had modality-integrated representation, displaying sensitivity to the match in the spatial size information conveyed through image and sound. Background functional connectivity analysis further demonstrated that the connection between sensory-specific regions and modality-integrated regions increases in the multimodal condition compared with single-modality conditions. Our results suggest that spatial size perception relies on both sensory-specific and multimodal representations, as well as their interplay during multimodal perception.
Affiliation(s)
- Jaeeun Lee, Department of Psychology, University of Minnesota, Minneapolis, MN
- Soojin Park, Department of Psychology, Yonsei University, Seoul, South Korea
3. Xu S, Zhang Z, Li L, Zhou Y, Lin D, Zhang M, Zhang L, Huang G, Liu X, Becker B, Liang Z. Functional connectivity profiles of the default mode and visual networks reflect temporal accumulative effects of sustained naturalistic emotional experience. Neuroimage 2023;269:119941. PMID: 36791897. DOI: 10.1016/j.neuroimage.2023.119941.
Abstract
Determining and decoding emotional brain processes under ecologically valid conditions remains a key challenge in affective neuroscience. Current functional magnetic resonance imaging (fMRI) based emotion-decoding studies rely mainly on brief, isolated episodes of emotion induction, while studies of sustained emotional experience in naturalistic environments that mirror daily life are scarce. Here we used 12 different 10-minute movie clips as ecologically valid emotion-evoking procedures in n = 52 individuals to explore emotion-specific fMRI functional connectivity (FC) profiles at the whole-brain level and at high spatial resolution (432 parcels, including cortical and subcortical structures). Machine-learning-based decoding and cross-validation procedures allowed us to identify FC profiles that accurately distinguish sustained happiness from sadness and that generalize across subjects, movie clips, and parcellations. Both network-based and subnetwork-based classification results suggested that emotion manifests as a distributed representation across multiple networks rather than within a single functional network or subnetwork. Further, the results showed that functional networks associated with the visual network (VN) and default mode network (DMN), especially VN-DMN, contributed strongly to emotion classification. To estimate the temporal accumulative effect of naturalistic movie-evoked emotions, we divided each 10-minute episode into three stages: early stimulation (1-200 s), middle stimulation (201-400 s), and late stimulation (401-600 s), and examined classification performance at each stage. Late stimulation contributed most to classification (accuracy = 85.32%, F1-score = 85.62%) compared with the early and middle stages, implying that continuous exposure to emotional stimulation leads to more intense emotions and further enhances emotion-specific distinguishable representations. The present work demonstrates that sustained happiness and sadness under naturalistic conditions are expressed in emotion-specific network profiles and that these expressions may play different roles in the generation and modulation of emotions. These findings elucidate the importance of network-level adaptations for sustained emotional experiences in naturalistic contexts and open new avenues for imaging network-level contributions under naturalistic conditions.
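The FC-profile decoding this abstract describes follows a common recipe: parcel-wise time series are reduced to a correlation matrix, the vectorized upper triangle serves as the feature vector, and a classifier is cross-validated so that test subjects are never seen in training. The sketch below illustrates that recipe on synthetic data, including the late-stimulation windowing (401-600 s) highlighted in the abstract. The parcel count, 1 Hz sampling, and linear SVM are illustrative assumptions, not the authors' code.

```python
# Minimal FC-profile decoding sketch on synthetic data: parcel time series ->
# correlation-based FC -> vectorized upper triangle -> cross-subject
# classification of happy vs. sad movie clips.
import numpy as np
from sklearn.model_selection import GroupKFold, cross_val_score
from sklearn.svm import SVC

rng = np.random.default_rng(1)
n_subjects, n_parcels, n_seconds = 52, 100, 600   # 10-min clip, 1 Hz sampling (assumed)

def fc_features(ts):
    """Vectorize the upper triangle of a Fisher z-transformed parcel correlation matrix."""
    fc = np.corrcoef(ts.T)                        # (n_parcels, n_parcels)
    iu = np.triu_indices(ts.shape[1], k=1)
    return np.arctanh(np.clip(fc[iu], -0.999, 0.999))

X, y, groups = [], [], []
for subj in range(n_subjects):
    for label in (0, 1):                          # 0 = sad clip, 1 = happy clip
        ts = rng.normal(size=(n_seconds, n_parcels))  # stand-in for real time series
        # Late-stimulation stage (401-600 s), the window that decoded best in
        # the study; pass ts unsliced for whole-clip features instead.
        X.append(fc_features(ts[400:]))
        y.append(label)
        groups.append(subj)

X, y, groups = np.array(X), np.array(y), np.array(groups)

# GroupKFold keeps each subject's clips in a single fold, so accuracy reflects
# generalization to unseen subjects, mirroring the study's cross-validation scheme.
acc = cross_val_score(SVC(kernel="linear"), X, y, groups=groups, cv=GroupKFold(n_splits=5))
print(f"mean cross-subject accuracy: {acc.mean():.2f}")
```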
Affiliation(s)
- Shuyue Xu, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen 518060, China; Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, China
- Zhiguo Zhang, Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen, China; Peng Cheng Laboratory, Shenzhen 518055, China; Marshall Laboratory of Biomedical Engineering, Shenzhen 518060, China
- Linling Li, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen 518060, China; Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, China
- Yongjie Zhou, Department of Psychiatric Rehabilitation, Shenzhen Kangning Hospital, Shenzhen, China
- Danyi Lin, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen 518060, China; Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, China
- Min Zhang, Institute of Computing and Intelligence, Harbin Institute of Technology, Shenzhen, China
- Li Zhang, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen 518060, China; Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, China
- Gan Huang, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen 518060, China; Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, China
- Xiqin Liu, Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, MOE Key Laboratory for Neuroinformation, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 611731, China
- Benjamin Becker, Center of Psychosomatic Medicine, Sichuan Provincial Center for Mental Health, MOE Key Laboratory for Neuroinformation, Sichuan Provincial People's Hospital, University of Electronic Science and Technology of China, Chengdu 611731, China
- Zhen Liang, School of Biomedical Engineering, Medical School, Shenzhen University, Shenzhen 518060, China; Guangdong Provincial Key Laboratory of Biomedical Measurements and Ultrasound Imaging, Shenzhen 518060, China; Marshall Laboratory of Biomedical Engineering, Shenzhen 518060, China