1. Lankinen K, Ahveninen J, Jas M, Raij T, Ahlfors SP. Neuronal Modeling of Cross-Sensory Visual Evoked Magnetoencephalography Responses in the Auditory Cortex. J Neurosci 2024; 44:e1119232024. PMID: 38508715; PMCID: PMC11044114; DOI: 10.1523/jneurosci.1119-23.2024.
Abstract
Previous studies have demonstrated that auditory cortex activity can be influenced by cross-sensory visual inputs. Intracortical laminar recordings in nonhuman primates have suggested a feedforward (FF) type profile for auditory evoked activity but a feedback (FB) type profile for visual evoked activity in the auditory cortex. To test whether cross-sensory visual evoked activity in the auditory cortex is associated with FB inputs also in humans, we analyzed magnetoencephalography (MEG) responses from eight human subjects (six females) evoked by simple auditory or visual stimuli. In the estimated MEG source waveforms for auditory cortex regions of interest, the auditory evoked response showed peaks at 37 and 90 ms and the visual evoked response a peak at 125 ms. The inputs to the auditory cortex were modeled through FF- and FB-type connections targeting different cortical layers using the Human Neocortical Neurosolver (HNN), which links cellular- and circuit-level mechanisms to MEG signals. HNN modeling suggested that the experimentally observed auditory response could be explained by an FF input followed by an FB input, whereas the cross-sensory visual response could be adequately explained by an FB input alone. Thus, the combined MEG and HNN results support the hypothesis that cross-sensory visual input in the auditory cortex is of the FB type. The results also illustrate how the dynamic patterns of estimated MEG source activity can provide information about the characteristics of the input into a cortical area in terms of the hierarchical organization among areas.
Affiliation(s)
- Kaisu Lankinen, Jyrki Ahveninen, Mainak Jas, Tommi Raij, Seppo P. Ahlfors: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, Massachusetts 02129; Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115
2. Wang G, Zheng C, Wu X, Deng Z, Sperandio I, Goodale MA, Chen J. The contribution of semantic distance knowledge to size constancy in perception and grasping when visual cues are limited. Neuropsychologia 2024; 196:108838. PMID: 38401629; DOI: 10.1016/j.neuropsychologia.2024.108838.
Abstract
To achieve a stable perception of object size in spite of variations in viewing distance, our visual system needs to combine retinal image information and distance cues. Previous research has shown that not only retinal cues but also extraretinal sensory signals can provide reliable information about depth, and that different neural networks (perception versus action) can exhibit preferences in the use of these different sources of information during size-distance computations. Semantic knowledge of distance, a purely cognitive signal, can also provide distance information. Do the perception and action systems show differences in their ability to use this information in calculating object size and distance? To address this question, we presented 'glow-in-the-dark' objects of different physical sizes at different real distances in a completely dark room. Participants viewed the objects monocularly through a 1-mm pinhole. They either estimated the size and distance of the objects or attempted to grasp them. Semantic knowledge was manipulated by providing an auditory cue about the actual distance of the object: "20 cm", "30 cm", and "40 cm". We found that semantic knowledge of distance contributed to some extent to size constancy operations during perceptual estimation and grasping, but size constancy was never fully restored. Importantly, the contribution of knowledge about distance to size constancy was equivalent between perception and action. Overall, our study reveals similarities and differences between the perception and action systems in the use of semantic distance knowledge and suggests that this cognitive signal is useful but not a reliable depth cue for size constancy under restricted viewing conditions.
Affiliation(s)
- Gexiu Wang, Chao Zheng, Xiaoqian Wu, Zhiqing Deng: Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, Guangdong Province, 510631, China
- Irene Sperandio: Department of Psychology and Cognitive Science, University of Trento, Rovereto, TN, 38068, Italy
- Melvyn A. Goodale: Western Institute for Neuroscience and the Department of Psychology, The University of Western Ontario, London, ON, N6A 5C2, Canada
- Juan Chen: Center for the Study of Applied Psychology, Guangdong Key Laboratory of Mental Health and Cognitive Science, and the School of Psychology, South China Normal University, Guangzhou, Guangdong Province, 510631, China; Key Laboratory of Brain, Cognition and Education Sciences (South China Normal University), Ministry of Education, Guangzhou, Guangdong Province, 510631, China
3. Li Y, Wang J, Liang J, Zhu C, Zhang Z, Luo W. The impact of degraded vision on emotional perception of audiovisual stimuli: An event-related potential study. Neuropsychologia 2024; 194:108785. PMID: 38159799; DOI: 10.1016/j.neuropsychologia.2023.108785.
Abstract
Emotion recognition becomes challenging for individuals when visual signals are degraded in real-life scenarios. Recently, researchers have conducted many studies on the distinct neural activity elicited by clear versus degraded audiovisual stimuli. These findings addressed the "how" question, but the precise processing stage at which the distinct activity occurs remains unknown. It is therefore important to use event-related potentials (ERPs) to explore the "when" question, namely the time course of the neural activity evoked by degraded audiovisual stimuli. In the present research, we established two conditions: clear auditory + degraded visual (AcVd) and clear auditory + clear visual (AcVc) multisensory conditions. We enlisted 31 participants to evaluate the emotional valence of audiovisual stimuli. The resulting data were analyzed using time-domain ERP and microstate analyses. The current results suggest that degraded vision impairs the early-stage processing of audiovisual stimuli, with the superior parietal lobule (SPL) regulating audiovisual processing in a top-down fashion. Additionally, our findings indicate that negative and positive stimuli elicited a larger early posterior negativity (EPN) than neutral stimuli, pointing toward a subjective motivation-related attentional regulation. In sum, in the early stage of emotional audiovisual processing, the degraded visual signal affected the perception of the physical attributes of audiovisual stimuli and further influenced emotion extraction, leading to a different allocation of top-down attentional resources in the later stage.
Affiliation(s)
- Yuchen Li: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Institute of Psychology, Shandong Second Medical University, Weifang, 216053, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, 116029, China
- Jing Wang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, 116029, China
- Junyu Liang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; School of Psychology, South China Normal University, Guangzhou, 510631, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, 116029, China
- Chuanlin Zhu: School of Educational Science, Yangzhou University, Yangzhou, 225002, China
- Zhao Zhang: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Institute of Psychology, Shandong Second Medical University, Weifang, 216053, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, 116029, China
- Wenbo Luo: Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, 116029, China; Key Laboratory of Brain and Cognitive Neuroscience, Dalian, 116029, China
4. Lankinen K, Ahveninen J, Jas M, Raij T, Ahlfors SP. Neuronal modeling of magnetoencephalography responses in auditory cortex to auditory and visual stimuli. bioRxiv [Preprint] 2024:2023.06.16.545371. PMID: 37398025; PMCID: PMC10312796; DOI: 10.1101/2023.06.16.545371.
Abstract
Previous studies have demonstrated that auditory cortex activity can be influenced by cross-sensory visual inputs. Intracortical recordings in non-human primates (NHP) have suggested a bottom-up feedforward (FF) type laminar profile for auditory evoked but top-down feedback (FB) type for cross-sensory visual evoked activity in the auditory cortex. To test whether this principle applies also to humans, we analyzed magnetoencephalography (MEG) responses from eight human subjects (six females) evoked by simple auditory or visual stimuli. In the estimated MEG source waveforms for an auditory cortex region of interest, auditory evoked responses showed peaks at 37 and 90 ms and cross-sensory visual responses at 125 ms. The inputs to the auditory cortex were then modeled through FF and FB type connections targeting different cortical layers using the Human Neocortical Neurosolver (HNN), which consists of a neocortical circuit model linking the cellular- and circuit-level mechanisms to MEG. The HNN models suggested that the measured auditory response could be explained by an FF input followed by an FB input, and the cross-sensory visual response by an FB input. Thus, the combined MEG and HNN results support the hypothesis that cross-sensory visual input in the auditory cortex is of FB type. The results also illustrate how the dynamic patterns of the estimated MEG/EEG source activity can provide information about the characteristics of the input into a cortical area in terms of the hierarchical organization among areas.
Affiliation(s)
- Kaisu Lankinen, Jyrki Ahveninen, Mainak Jas, Tommi Raij, Seppo P. Ahlfors: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA 02129; Department of Radiology, Harvard Medical School, Boston, MA 02115
5. Kotlarz P, Lankinen K, Hakonen M, Turpin T, Polimeni JR, Ahveninen J. Multilayer Network Analysis across Cortical Depths in Resting-State 7T fMRI. bioRxiv [Preprint] 2023:2023.12.23.573208. PMID: 38187540; PMCID: PMC10769454; DOI: 10.1101/2023.12.23.573208.
Abstract
In graph theory, "multilayer networks" represent systems involving several interconnected topological levels. A neuroscience example is the hierarchy of connections between different cortical depths, or "laminae". This hierarchy is becoming non-invasively accessible in humans using ultra-high-resolution functional MRI (fMRI). Here, we applied multilayer graph theory to examine functional connectivity across different cortical depths in humans, using 7T fMRI (1-mm³ voxels; 30 participants). Blood oxygenation level dependent (BOLD) signals were derived from five depths between the white matter and the pial surface. We then compared networks in which the inter-regional connections were limited to a single cortical depth only ("layer-by-layer matrices") to those considering all possible connections between regions and cortical depths ("multilayer matrix"). We utilized global and local graph-theory features that quantitatively characterize network attributes such as network composition, nodal centrality, path-based measures, and hub segregation. Detecting functional differences between cortical depths was improved using multilayer connectomics compared to the layer-by-layer versions. Superficial aspects of the cortex dominated information transfer, whereas deeper aspects dominated clustering. These differences were largest in frontotemporal and limbic brain regions. fMRI functional connectivity across different cortical depths may thus contain neurophysiologically relevant information. Multilayer connectomics could provide a methodological framework for studies on how information flows across this hierarchy.
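The layer-by-layer vs. multilayer contrast can be illustrated with a toy supra-adjacency matrix, in which each cortical depth contributes one layer of a region-by-region network and interlayer couplings connect the same region across adjacent depths. This is a generic sketch of the multilayer-connectomics idea under assumed toy sizes and an arbitrary coupling strength, not the authors' analysis pipeline.

```python
import numpy as np

rng = np.random.default_rng(0)
n_regions, n_depths = 4, 5   # toy sizes; the study used many regions and 5 depths

# One symmetric functional-connectivity matrix per depth ("layer-by-layer" view)
layer_mats = []
for _ in range(n_depths):
    a = rng.random((n_regions, n_regions))
    a = (a + a.T) / 2.0
    np.fill_diagonal(a, 0.0)
    layer_mats.append(a)

# "Multilayer" view: supra-adjacency matrix with intralayer blocks on the
# diagonal and identity-coupled interlayer blocks linking the same region
# across adjacent depths.
n = n_regions * n_depths
supra = np.zeros((n, n))
for d, a in enumerate(layer_mats):
    supra[d*n_regions:(d+1)*n_regions, d*n_regions:(d+1)*n_regions] = a
omega = 0.5                  # interlayer coupling strength (arbitrary here)
for d in range(n_depths - 1):
    i, j = d * n_regions, (d + 1) * n_regions
    supra[i:i+n_regions, j:j+n_regions] = omega * np.eye(n_regions)
    supra[j:j+n_regions, i:i+n_regions] = omega * np.eye(n_regions)

# Simple nodal feature: strength (weighted degree) of each node, per depth
strength = supra.sum(axis=1).reshape(n_depths, n_regions)
```

Graph-theory features such as centrality or path-based measures can then be computed on `supra` as a whole, which is what lets the multilayer analysis detect cross-depth structure that the per-layer matrices cannot.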
Affiliation(s)
- Parker Kotlarz: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School, Boston, MA, USA
- Kaisu Lankinen, Maria Hakonen: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- Tori Turpin
- Jonathan R. Polimeni: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA; Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, MA, USA
- Jyrki Ahveninen: Athinoula A. Martinos Center for Biomedical Imaging, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
6. Faes LK, De Martino F, Huber L(R). Cerebral blood volume sensitive layer-fMRI in the human auditory cortex at 7T: Challenges and capabilities. PLoS One 2023; 18:e0280855. PMID: 36758009; PMCID: PMC9910709; DOI: 10.1371/journal.pone.0280855.
Abstract
The development of ultra-high-field fMRI signal readout strategies and contrasts has made it possible to image the human brain in vivo and non-invasively at increasingly high spatial resolutions, down to cortical layers and columns. One emergent layer-fMRI acquisition method with increasing popularity is the cerebral blood volume sensitive sequence named vascular space occupancy (VASO). This approach has been shown to be mostly sensitive to locally specific changes of laminar microvasculature, without unwanted biases from trans-laminar draining veins. Until now, however, VASO has not been applied in the technically challenging cortical area of the auditory cortex. Here, we describe the main challenges we encountered when developing a VASO protocol for auditory neuroscientific applications and the solutions we adopted. With the resulting protocol, we present preliminary results of laminar responses to sounds and, as a proof of concept for future investigations, map the topographic representation of frequency preference (tonotopy) in the auditory cortex.
Affiliation(s)
- Lonike K. Faes: Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, Maastricht, The Netherlands
- Federico De Martino: Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, Maastricht, The Netherlands; Department of Radiology, Center for Magnetic Resonance Research, University of Minnesota, Minneapolis, Minnesota, United States of America
- Laurentius (Renzo) Huber: Faculty of Psychology and Neuroscience, Department of Cognitive Neuroscience, Maastricht University, Maastricht, The Netherlands
7. Lankinen K, Ahlfors SP, Mamashli F, Blazejewska AI, Raij T, Turpin T, Polimeni JR, Ahveninen J. Cortical depth profiles of auditory and visual 7 T functional MRI responses in human superior temporal areas. Hum Brain Mapp 2023; 44:362-372. PMID: 35980015; PMCID: PMC9842898; DOI: 10.1002/hbm.26046.
Abstract
Invasive neurophysiological studies in nonhuman primates have shown different laminar activation profiles to auditory vs. visual stimuli in auditory cortices and adjacent polymodal areas. Means to examine the underlying feedforward vs. feedback type influences noninvasively have been limited in humans. Here, using 1-mm isotropic resolution 3D echo-planar imaging at 7 T, we studied the intracortical depth profiles of functional magnetic resonance imaging (fMRI) blood oxygenation level dependent (BOLD) signals to brief auditory (noise bursts) and visual (checkerboard) stimuli. BOLD percent-signal-changes were estimated at 11 equally spaced intracortical depths, within regions-of-interest encompassing auditory (Heschl's gyrus, Heschl's sulcus, planum temporale, and posterior superior temporal gyrus) and polymodal (middle and posterior superior temporal sulcus) areas. Effects of differing BOLD signal strengths for auditory and visual stimuli were controlled via normalization and statistical modeling. The BOLD depth profile shapes, modeled with quadratic regression, were significantly different for auditory vs. visual stimuli in auditory cortices, but not in polymodal areas. The different depth profiles could reflect sensory-specific feedforward versus cross-sensory feedback influences, previously shown in laminar recordings in nonhuman primates. The results suggest that intracortical BOLD profiles can help distinguish between feedforward and feedback type influences in the human brain. Further experimental studies are still needed to clarify how underlying signal strength influences BOLD depth profiles under different stimulus conditions.
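The quadratic-regression comparison of depth profiles described above can be sketched as follows. The profile values are synthetic stand-ins for the normalized BOLD percent-signal-changes at the 11 cortical depths, invented here purely to show the mechanics: the second-order (curvature) coefficient summarizes profile shape and can be compared between stimulus conditions.

```python
import numpy as np

# 11 equally spaced cortical depths, 0 = white matter, 1 = pial surface
depths = np.linspace(0.0, 1.0, 11)

# Synthetic normalized depth profiles (illustrative, not measured data):
# a bowed profile vs. a more monotonic one, with a little noise
profile_a = (1.0 - 0.8 * (depths - 0.5) ** 2
             + 0.02 * np.random.default_rng(1).normal(size=11))
profile_b = (0.6 + 0.4 * depths
             + 0.02 * np.random.default_rng(2).normal(size=11))

# Quadratic regression of signal on depth; np.polyfit returns coefficients
# highest order first, so index 0 is the curvature term
coef_a = np.polyfit(depths, profile_a, deg=2)
coef_b = np.polyfit(depths, profile_b, deg=2)

# A shape difference between conditions appears as a difference in the
# curvature coefficients, which could then be tested across subjects
curvature_diff = coef_a[0] - coef_b[0]
```

Normalizing the profiles before fitting, as done in the study, keeps overall BOLD signal strength from masquerading as a shape difference.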
Affiliation(s)
- Kaisu Lankinen, Seppo P. Ahlfors, Fahimeh Mamashli, Anna I. Blazejewska, Tommi Raij: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA; Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA
- Tori Turpin: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA
- Jonathan R. Polimeni: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA; Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA; Division of Health Sciences and Technology, Massachusetts Institute of Technology, Cambridge, Massachusetts, USA
- Jyrki Ahveninen: Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts, USA; Department of Radiology, Harvard Medical School, Boston, Massachusetts, USA