1.
Senkowski D, Engel AK. Multi-timescale neural dynamics for multisensory integration. Nat Rev Neurosci 2024:10.1038/s41583-024-00845-7. [PMID: 39090214] [DOI: 10.1038/s41583-024-00845-7]
Abstract
Carrying out any everyday task, be it driving in traffic, conversing with friends or playing basketball, requires rapid selection, integration and segregation of stimuli from different sensory modalities. At present, even the most advanced artificial intelligence-based systems are unable to replicate the multisensory processes that the human brain routinely performs, but how neural circuits in the brain carry out these processes is still not well understood. In this Perspective, we discuss recent findings that shed fresh light on the oscillatory neural mechanisms that mediate multisensory integration (MI), including power modulations, phase resetting, phase-amplitude coupling and dynamic functional connectivity. We then consider studies that also suggest multi-timescale dynamics in intrinsic ongoing neural activity and during stimulus-driven bottom-up and cognitive top-down neural network processing in the context of MI. We propose a new concept of MI that emphasizes the critical role of neural dynamics at multiple timescales within and across brain networks, enabling the simultaneous integration, segregation, hierarchical structuring and selection of information in different time windows. To highlight predictions from our multi-timescale concept of MI, real-world scenarios in which multi-timescale processes may coordinate MI in a flexible and adaptive manner are considered.
Affiliation(s)
- Daniel Senkowski
  - Department of Psychiatry and Neurosciences, Charité - Universitätsmedizin Berlin, Berlin, Germany
- Andreas K Engel
  - Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Hamburg, Germany.
2.
Ahveninen J, Lee HJ, Yu HY, Lee CC, Chou CC, Ahlfors SP, Kuo WJ, Jääskeläinen IP, Lin FH. Visual Stimuli Modulate Local Field Potentials But Drive No High-Frequency Activity in Human Auditory Cortex. J Neurosci 2024; 44:e0890232023. [PMID: 38129133] [PMCID: PMC10869150] [DOI: 10.1523/jneurosci.0890-23.2023]
Abstract
Neuroimaging studies suggest cross-sensory visual influences in human auditory cortices (ACs). Whether these influences reflect active visual processing in human ACs, which drives neuronal firing and concurrent broadband high-frequency activity (BHFA; >70 Hz), or whether they merely modulate sound processing is still debatable. Here, we presented auditory, visual, and audiovisual stimuli to 16 participants (7 women, 9 men) with stereo-EEG depth electrodes implanted near ACs for presurgical monitoring. Anatomically normalized group analyses were facilitated by inverse modeling of intracranial source currents. Analyses of intracranial event-related potentials (iERPs) suggested cross-sensory responses to visual stimuli in ACs, which lagged the earliest auditory responses by several tens of milliseconds. Visual stimuli also modulated the phase of intrinsic low-frequency oscillations and triggered 15-30 Hz event-related desynchronization in ACs. However, BHFA, a putative correlate of neuronal firing, was not significantly increased in ACs after visual stimuli, not even when they coincided with auditory stimuli. Intracranial recordings demonstrate cross-sensory modulations, but no indication of active visual processing in human ACs.
Affiliation(s)
- Jyrki Ahveninen
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts 02129
  - Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115
- Hsin-Ju Lee
  - Physical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario M4N 3M5, Canada
  - Department of Medical Biophysics, University of Toronto, Toronto, Ontario M5G 1L7, Canada
- Hsiang-Yu Yu
  - Department of Epilepsy, Neurological Institute, Taipei Veterans General Hospital, Taipei 11217, Taiwan
  - School of Medicine, National Yang Ming Chiao Tung University, Taipei 112304, Taiwan
- Cheng-Chia Lee
  - School of Medicine, National Yang Ming Chiao Tung University, Taipei 112304, Taiwan
  - Department of Neurosurgery, Neurological Institute, Taipei Veterans General Hospital, Taipei 11217, Taiwan
- Chien-Chen Chou
  - Department of Epilepsy, Neurological Institute, Taipei Veterans General Hospital, Taipei 11217, Taiwan
  - School of Medicine, National Yang Ming Chiao Tung University, Taipei 112304, Taiwan
- Seppo P Ahlfors
  - Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, Massachusetts 02129
  - Department of Radiology, Harvard Medical School, Boston, Massachusetts 02115
- Wen-Jui Kuo
  - Institute of Neuroscience, National Yang Ming Chiao Tung University, Taipei 112304, Taiwan
- Iiro P Jääskeläinen
  - Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, FI-00076 AALTO, Finland
  - International Laboratory of Social Neurobiology, Institute of Cognitive Neuroscience, Higher School of Economics, Moscow 101000, Russia
- Fa-Hsuan Lin
  - Physical Sciences Platform, Sunnybrook Research Institute, Toronto, Ontario M4N 3M5, Canada
  - Department of Medical Biophysics, University of Toronto, Toronto, Ontario M5G 1L7, Canada
  - Brain and Mind Laboratory, Department of Neuroscience and Biomedical Engineering, Aalto University School of Science, Espoo, FI-00076 AALTO, Finland
3.
Shan Y, Wang H, Yang Y, Wang J, Zhao W, Huang Y, Wang H, Han B, Pan N, Jin X, Fan X, Liu Y, Wang J, Wang C, Zhang H, Chen S, Liu T, Yan T, Si T, Yin L, Li X, Cosci F, Zhang X, Zhang G, Gao K, Zhao G. Evidence of a large current of transcranial alternating current stimulation directly to deep brain regions. Mol Psychiatry 2023; 28:5402-5410. [PMID: 37468529] [DOI: 10.1038/s41380-023-02150-8]
Abstract
Deep brain regions such as the hippocampus, insula, and amygdala are involved in neuropsychiatric disorders, including chronic insomnia and depression. Our recent reports showed that transcranial alternating current stimulation (tACS) with a current of 15 mA and a frequency of 77.5 Hz, delivered through a montage of the forehead and both mastoids, was safe and effective in treating chronic insomnia and depression over 8 weeks. However, there has been no physical evidence that such a large 15 mA alternating current can deliver electrical currents to deep brain tissue in awake humans. Here, we directly recorded local field potentials (LFPs) in the hippocampus, insula, and amygdala of 11 adult patients with drug-resistant epilepsy who were implanted with stereoelectroencephalography (SEEG) electrodes. Patients received 77.5 Hz tACS at currents from 1 mA to 15 mA, applied for five minutes at each current for a total of 40 min; at 15 mA, an additional 55 min was applied, for a total of 60 min at that current. Linear regression analysis revealed that the average LFPs of the remaining contacts on both sides of the hippocampus, insula, and amygdala were statistically associated with the applied currents in each patient (p < 0.05-0.01), except for the left insula of one subject (p = 0.053). Alternating currents greater than 7 mA were required to produce significant differences in LFPs in the three brain regions compared to LFPs at 0 mA (p < 0.05), and these differences remained significant after adjusting for multiple comparisons (p < 0.05). Our study provides direct evidence that this tACS procedure can deliver electrical currents to deep brain tissues, opening a realistic avenue for modulating or treating neuropsychiatric disorders associated with the hippocampus, insula, and amygdala.
Affiliation(s)
- Yongzhi Shan
  - Department of Neurosurgery, Xuanwu Hospital, National Center for Neurological Disorders, National Clinical Research Center for Geriatric Diseases, Capital Medical University, Beijing, 100053, China
  - China International Neuroscience Institute (CHINA-INI), Beijing, 100053, China
  - Beijing Municipal Geriatric Medical Research Center, Beijing, 100053, China
- Hongxing Wang
  - Division of Neuropsychiatry and Psychosomatics, Department of Neurology, Xuanwu Hospital, National Center for Neurological Disorders, National Clinical Research Center for Geriatric Diseases, Capital Medical University, Beijing, 100053, China
  - Beijing Institute of Brain Disorders, Beijing, 100069, China
- Yanfeng Yang
  - Department of Neurosurgery, Xuanwu Hospital, National Center for Neurological Disorders, National Clinical Research Center for Geriatric Diseases, Capital Medical University, Beijing, 100053, China
  - China International Neuroscience Institute (CHINA-INI), Beijing, 100053, China
  - Beijing Municipal Geriatric Medical Research Center, Beijing, 100053, China
- Jiahao Wang
  - Beijing Key Laboratory of Bioelectromagnetism, Institute of Electrical Engineering, Chinese Academy of Sciences, Beijing, 100190, China
  - School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing, 100049, China
- Wenfeng Zhao
  - Division of Neuropsychiatry and Psychosomatics, Department of Neurology, Xuanwu Hospital, National Center for Neurological Disorders, National Clinical Research Center for Geriatric Diseases, Capital Medical University, Beijing, 100053, China
- Yuda Huang
  - Department of Neurosurgery, Xuanwu Hospital, National Center for Neurological Disorders, National Clinical Research Center for Geriatric Diseases, Capital Medical University, Beijing, 100053, China
  - China International Neuroscience Institute (CHINA-INI), Beijing, 100053, China
  - Beijing Municipal Geriatric Medical Research Center, Beijing, 100053, China
- Huang Wang
  - Division of Neuropsychiatry and Psychosomatics, Department of Neurology, Xuanwu Hospital, National Center for Neurological Disorders, National Clinical Research Center for Geriatric Diseases, Capital Medical University, Beijing, 100053, China
- Bing Han
  - Division of Neuropsychiatry and Psychosomatics, Department of Neurology, Xuanwu Hospital, National Center for Neurological Disorders, National Clinical Research Center for Geriatric Diseases, Capital Medical University, Beijing, 100053, China
- Na Pan
  - Division of Neuropsychiatry and Psychosomatics, Department of Neurology, Xuanwu Hospital, National Center for Neurological Disorders, National Clinical Research Center for Geriatric Diseases, Capital Medical University, Beijing, 100053, China
- Xiukun Jin
  - Division of Neuropsychiatry and Psychosomatics, Department of Neurology, Xuanwu Hospital, National Center for Neurological Disorders, National Clinical Research Center for Geriatric Diseases, Capital Medical University, Beijing, 100053, China
- Xiaotong Fan
  - Department of Neurosurgery, Xuanwu Hospital, National Center for Neurological Disorders, National Clinical Research Center for Geriatric Diseases, Capital Medical University, Beijing, 100053, China
  - China International Neuroscience Institute (CHINA-INI), Beijing, 100053, China
  - Beijing Municipal Geriatric Medical Research Center, Beijing, 100053, China
- Yunyun Liu
  - Department of Neurosurgery, Xuanwu Hospital, National Center for Neurological Disorders, National Clinical Research Center for Geriatric Diseases, Capital Medical University, Beijing, 100053, China
  - China International Neuroscience Institute (CHINA-INI), Beijing, 100053, China
  - Beijing Municipal Geriatric Medical Research Center, Beijing, 100053, China
- Jun Wang
  - Department of Neurosurgery, Xuanwu Hospital, National Center for Neurological Disorders, National Clinical Research Center for Geriatric Diseases, Capital Medical University, Beijing, 100053, China
  - China International Neuroscience Institute (CHINA-INI), Beijing, 100053, China
  - Beijing Municipal Geriatric Medical Research Center, Beijing, 100053, China
- Changming Wang
  - Department of Neurosurgery, Xuanwu Hospital, National Center for Neurological Disorders, National Clinical Research Center for Geriatric Diseases, Capital Medical University, Beijing, 100053, China
  - China International Neuroscience Institute (CHINA-INI), Beijing, 100053, China
  - Beijing Municipal Geriatric Medical Research Center, Beijing, 100053, China
- Huaqiang Zhang
  - Department of Neurosurgery, Xuanwu Hospital, National Center for Neurological Disorders, National Clinical Research Center for Geriatric Diseases, Capital Medical University, Beijing, 100053, China
  - China International Neuroscience Institute (CHINA-INI), Beijing, 100053, China
  - Beijing Municipal Geriatric Medical Research Center, Beijing, 100053, China
- Sichang Chen
  - Department of Neurosurgery, Xuanwu Hospital, National Center for Neurological Disorders, National Clinical Research Center for Geriatric Diseases, Capital Medical University, Beijing, 100053, China
  - China International Neuroscience Institute (CHINA-INI), Beijing, 100053, China
  - Beijing Municipal Geriatric Medical Research Center, Beijing, 100053, China
- Ting Liu
  - Department of Neurosurgery, Xuanwu Hospital, National Center for Neurological Disorders, National Clinical Research Center for Geriatric Diseases, Capital Medical University, Beijing, 100053, China
  - China International Neuroscience Institute (CHINA-INI), Beijing, 100053, China
  - Beijing Municipal Geriatric Medical Research Center, Beijing, 100053, China
- Tianyi Yan
  - School of Life Science, Beijing Institute of Technology, Beijing, 100081, China
- Tianmei Si
  - Peking University Sixth Hospital, Peking University Institute of Mental Health, National Clinical Research Center for Mental Disorders, Beijing, 100191, China
- Lu Yin
  - Medical Research & Biometrics Centre, Fuwai Hospital, National Centre for Cardiovascular Diseases, Peking Union Medical College & Chinese Academy of Medical Sciences, Beijing, 102300, China
- Xinmin Li
  - Department of Psychiatry, Faculty of Medicine and Dentistry, University of Alberta, Edmonton, Alberta, T6G 2B7, Canada
- Fiammetta Cosci
  - Department of Health Sciences, University of Florence, Florence, 50135, Italy
- Xiangyang Zhang
  - CAS Key Laboratory of Mental Health, Chinese Academy of Sciences, Beijing, 100101, China
- Guanghao Zhang
  - Beijing Key Laboratory of Bioelectromagnetism, Institute of Electrical Engineering, Chinese Academy of Sciences, Beijing, 100190, China
  - School of Electronic, Electrical and Communication Engineering, University of Chinese Academy of Sciences, Beijing, 100049, China
- Keming Gao
  - Department of Psychiatry, University Hospitals Cleveland Medical Center, Cleveland, Ohio, USA
  - Case Western Reserve University School of Medicine, Cleveland, OH, 44106, USA
- Guoguang Zhao
  - Department of Neurosurgery, Xuanwu Hospital, National Center for Neurological Disorders, National Clinical Research Center for Geriatric Diseases, Capital Medical University, Beijing, 100053, China
  - China International Neuroscience Institute (CHINA-INI), Beijing, 100053, China
  - Beijing Municipal Geriatric Medical Research Center, Beijing, 100053, China
  - Center of Epilepsy, Beijing Institute of Brain Disorders, Beijing, 100069, China
4.
Murray CA, Shams L. Crossmodal interactions in human learning and memory. Front Hum Neurosci 2023; 17:1181760. [PMID: 37266327] [PMCID: PMC10229776] [DOI: 10.3389/fnhum.2023.1181760]
Abstract
Most studies of memory and perceptual learning in humans have employed unisensory settings to simplify the study paradigm. However, in daily life we are often surrounded by complex and cluttered scenes made up of many objects and sources of sensory stimulation. Our experiences are therefore highly multisensory, both when passively observing the world and when acting and navigating. We argue that human learning and memory systems evolved to operate under these multisensory and dynamic conditions. The nervous system exploits the rich array of sensory inputs in this process, is sensitive to the relationships among those inputs, continuously updates sensory representations, and encodes memory traces based on the relationship between the senses. We review recent findings that demonstrate a range of human learning and memory phenomena in which interactions between the visual and auditory modalities play an important role, and suggest possible neural mechanisms that may underlie some surprising recent findings. We conclude by outlining open questions and directions for future research on human perceptual learning and memory.
Affiliation(s)
- Carolyn A. Murray
  - Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States
- Ladan Shams
  - Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States
  - Department of Bioengineering, Neuroscience Interdepartmental Program, University of California, Los Angeles, Los Angeles, CA, United States
5.
Gan S, Li W. Aberrant neural correlates of multisensory processing of audiovisual social cues related to social anxiety: An electrophysiological study. Front Psychiatry 2023; 14:1020812. [PMID: 36761870] [PMCID: PMC9902659] [DOI: 10.3389/fpsyt.2023.1020812]
Abstract
BACKGROUND: Social anxiety disorder (SAD) is characterized by abnormal fear of social cues. Although unisensory processing of social stimuli associated with social anxiety (SA) has been well described, how multisensory processing relates to SA remains unclear. Using electroencephalography (EEG), we investigated the neural correlates and temporal dynamics of multisensory processing in SAD.
METHODS: Twenty-five SAD participants and 23 healthy control (HC) participants were presented with angry and neutral faces, voices, and emotionally congruent face-voice combinations while completing an emotional categorization task.
RESULTS: Face-voice combinations facilitated auditory processing at multiple stages, indicated by an accelerated auditory N1 latency, attenuated auditory N1 and P250 amplitudes, and decreased theta power. In addition, bimodal inputs elicited cross-modal integrative activity, indicated by enhanced visual P1, N170, and P3/LPP amplitudes and a superadditive response of the P1 and P3/LPP. More importantly, excessively greater integrative activity (at the P3/LPP amplitude) was found in SAD participants, and this abnormal integrative activity in both early and late temporal stages was related to a larger interpretation bias of miscategorizing neutral face-voice combinations as angry.
CONCLUSION: The neural correlates of multisensory processing were aberrant in SAD and were related to an interpretation bias toward multimodal social cues across multiple processing stages. Our findings suggest that a deficit in multisensory processing may be an important factor in the psychopathology of SA.
Affiliation(s)
- Shuzhen Gan
  - Shanghai Changning Mental Health Center, Shanghai, China
  - Shanghai Mental Health Center, Shanghai, China
- Weijun Li
  - Research Center of Brain and Cognitive Neuroscience, Liaoning Normal University, Dalian, China
  - Key Laboratory of Brain and Cognitive Neuroscience, Dalian, Liaoning, China
6.
Wang Y, Yang Y, Cao G, Guo J, Wei P, Feng T, Dai Y, Huang J, Kang G, Zhao G. SEEG-Net: An explainable and deep learning-based cross-subject pathological activity detection method for drug-resistant epilepsy. Comput Biol Med 2022; 148:105703. [PMID: 35791972] [DOI: 10.1016/j.compbiomed.2022.105703]
Abstract
OBJECTIVE: Precise preoperative evaluation of drug-resistant epilepsy (DRE) requires accurate analysis of invasive stereoelectroencephalography (SEEG). With the tremendous breakthroughs of artificial intelligence (AI), previous studies can help clinical experts identify pathological activity automatically. However, they still face limitations in real-world clinical DRE scenarios, such as sample imbalance, cross-subject domain shift, and poor interpretability. Our objective is to propose a model that addresses these problems and achieves high-sensitivity SEEG pathological activity detection on two real clinical datasets.
METHODS: Our proposed SEEG-Net introduces a multiscale convolutional neural network (MSCNN) to increase the receptive field of the model and to learn multiple SEEG frequency-domain features as well as local and global features. Moreover, we designed a novel focal domain generalization loss (FDG-loss) to enhance the weight of target samples and to learn domain-consistent features. To enhance the interpretability and flexibility of SEEG-Net, we explain it from multiple perspectives, including significantly different features, interpretable models, and interpretation of the model's learning process with Grad-CAM++.
RESULTS: The performance of the proposed method was verified on a public benchmark multicenter SEEG dataset and a private clinical SEEG dataset for a robust comparison. The experimental results demonstrate that SEEG-Net achieves the highest sensitivity, is state-of-the-art in cross-subject (different-patient) evaluation, and deals well with the problems noted above. We also provide an SEEG processing and database construction workflow that maintains consistency with real-world clinical scenarios.
SIGNIFICANCE: SEEG-Net increases the sensitivity of SEEG pathological activity detection. At the same time, it addresses several obstacles to AI assistance in clinical DRE, building a bridge between AI algorithms and clinical practice.
Affiliation(s)
- Yiping Wang
  - Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, No. 10 Xitucheng Road, Haidian District, Beijing, 100876, China
- Yanfeng Yang
  - Department of Neurosurgery, Xuan Wu Hospital, Capital Medical University, No. 45 Changchun Street, Xicheng District, Beijing, 100053, China
- Gongpeng Cao
  - Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, No. 10 Xitucheng Road, Haidian District, Beijing, 100876, China
- Jinjie Guo
  - Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, No. 10 Xitucheng Road, Haidian District, Beijing, 100876, China
- Penghu Wei
  - Department of Neurosurgery, Xuan Wu Hospital, Capital Medical University, No. 45 Changchun Street, Xicheng District, Beijing, 100053, China
- Tao Feng
  - Department of Neurosurgery, Xuan Wu Hospital, Capital Medical University, No. 45 Changchun Street, Xicheng District, Beijing, 100053, China
- Yang Dai
  - Department of Neurosurgery, Xuan Wu Hospital, Capital Medical University, No. 45 Changchun Street, Xicheng District, Beijing, 100053, China
- Jinguo Huang
  - Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, No. 10 Xitucheng Road, Haidian District, Beijing, 100876, China
- Guixia Kang
  - Key Laboratory of Universal Wireless Communications, Ministry of Education, Beijing University of Posts and Telecommunications, No. 10 Xitucheng Road, Haidian District, Beijing, 100876, China
- Guoguang Zhao
  - Department of Neurosurgery, Xuan Wu Hospital, Capital Medical University, No. 45 Changchun Street, Xicheng District, Beijing, 100053, China
7.
A studyforrest extension, MEG recordings while watching the audio-visual movie "Forrest Gump". Sci Data 2022; 9:206. [PMID: 35562378] [PMCID: PMC9106652] [DOI: 10.1038/s41597-022-01299-1]
Abstract
Naturalistic stimuli, such as movies, are being increasingly used to map brain function because of their high ecological validity. The pioneering studyforrest and other naturalistic neuroimaging projects have provided free access to multiple movie-watching functional magnetic resonance imaging (fMRI) datasets to prompt the community for naturalistic experimental paradigms. However, sluggish blood-oxygenation-level-dependent fMRI signals are incapable of resolving neuronal activity with the temporal resolution at which it unfolds. Instead, magnetoencephalography (MEG) measures changes in the magnetic field produced by neuronal activity and is able to capture rich dynamics of the brain at the millisecond level while watching naturalistic movies. Herein, we present the first public prolonged MEG dataset collected from 11 participants while watching the 2 h long audio-visual movie “Forrest Gump”. Minimally preprocessed data was also provided to facilitate the use of the dataset. As a studyforrest extension, we envision that this dataset, together with fMRI data from the studyforrest project, will serve as a foundation for exploring the neural dynamics of various cognitive functions in real-world contexts.
Measurement(s): Brain activity measurement • Brain structure
Technology Type(s): Magnetoencephalography • Magnetic Resonance Imaging
Factor Type(s): Audiovisual movie
Sample Characteristic - Organism: Homo sapiens
8.
Brang D, Plass J, Sherman A, Stacey WC, Wasade VS, Grabowecky M, Ahn E, Towle VL, Tao JX, Wu S, Issa NP, Suzuki S. Visual cortex responds to sound onset and offset during passive listening. J Neurophysiol 2022; 127:1547-1563. [PMID: 35507478] [DOI: 10.1152/jn.00164.2021]
Abstract
Sounds enhance our ability to detect, localize, and respond to co-occurring visual targets. Research suggests that sounds improve visual processing by resetting the phase of ongoing oscillations in visual cortex. However, it remains unclear what information is relayed from the auditory system to visual areas and if sounds modulate visual activity even in the absence of visual stimuli (e.g., during passive listening). Using intracranial electroencephalography (iEEG) in humans, we examined the sensitivity of visual cortex to three forms of auditory information during a passive listening task: auditory onset responses, auditory offset responses, and rhythmic entrainment to sounds. Because some auditory neurons respond to both sound onsets and offsets, visual timing and duration processing may benefit from each. Additionally, if auditory entrainment information is relayed to visual cortex, it could support the processing of complex stimulus dynamics that are aligned between auditory and visual stimuli. Results demonstrate that in visual cortex, amplitude-modulated sounds elicited transient onset and offset responses in multiple areas, but no entrainment to sound modulation frequencies. These findings suggest that activity in visual cortex (as measured with iEEG in response to auditory stimuli) may not be affected by temporally fine-grained auditory stimulus dynamics during passive listening (though it remains possible that this signal may be observable with simultaneous auditory-visual stimuli). Moreover, auditory responses were maximal in low-level visual cortex, potentially implicating a direct pathway for rapid interactions between auditory and visual cortices. This mechanism may facilitate perception by time-locking visual computations to environmental events marked by auditory discontinuities.
Affiliation(s)
- David Brang
  - Department of Psychology, University of Michigan, Ann Arbor, MI, United States
- John Plass
  - Department of Psychology, University of Michigan, Ann Arbor, MI, United States
- Aleksandra Sherman
  - Department of Cognitive Science, Occidental College, Los Angeles, CA, United States
- William C Stacey
  - Department of Neurology, University of Michigan, Ann Arbor, MI, United States
- Marcia Grabowecky
  - Department of Psychology, Northwestern University, Evanston, IL, United States
- EunSeon Ahn
  - Department of Psychology, University of Michigan, Ann Arbor, MI, United States
- Vernon L Towle
  - Department of Neurology, The University of Chicago, Chicago, IL, United States
- James X Tao
  - Department of Neurology, The University of Chicago, Chicago, IL, United States
- Shasha Wu
  - Department of Neurology, The University of Chicago, Chicago, IL, United States
- Naoum P Issa
  - Department of Neurology, The University of Chicago, Chicago, IL, United States
- Satoru Suzuki
  - Department of Psychology, Northwestern University, Evanston, IL, United States
9.
Benetti S, Collignon O. Cross-modal integration and plasticity in the superior temporal cortex. Handb Clin Neurol 2022; 187:127-143. [PMID: 35964967] [DOI: 10.1016/b978-0-12-823493-8.00026-2]
Abstract
In congenitally deaf people, temporal regions typically believed to be primarily auditory enhance their response to nonauditory information. The neural mechanisms and functional principles underlying this phenomenon, as well as its impact on auditory recovery after sensory restoration, remain debated. In this chapter, we demonstrate that the cross-modal recruitment of temporal regions by visual inputs in congenitally deaf people follows organizational principles known to be present in the hearing brain. We propose that the functional and structural mechanisms allowing optimal convergence of multisensory information in the temporal cortex of hearing people also provide the neural scaffolding for feeding visual or tactile information into the deafened temporal areas. Innate in nature, such anatomo-functional links between the auditory and other sensory systems would represent the common substrate of both early multisensory integration and the expression of selective cross-modal plasticity in the superior temporal cortex.
Affiliation(s)
- Stefania Benetti
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Trento, Italy
- Olivier Collignon
- Center for Mind/Brain Sciences - CIMeC, University of Trento, Trento, Italy; Institute for Research in Psychology and Neuroscience, Faculty of Psychology and Educational Science, UC Louvain, Louvain-la-Neuve, Belgium.
10
Kulkarni A, Kegler M, Reichenbach T. Effect of visual input on syllable parsing in a computational model of a neural microcircuit for speech processing. J Neural Eng 2021; 18. [PMID: 34547737 DOI: 10.1088/1741-2552/ac28d3] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/14/2021] [Accepted: 09/21/2021] [Indexed: 11/12/2022]
Abstract
Objective. Seeing a person talking can help us understand them, particularly in a noisy environment. However, how the brain integrates the visual information with the auditory signal to enhance speech comprehension remains poorly understood. Approach. Here we address this question in a computational model of a cortical microcircuit for speech processing. The model consists of an excitatory and an inhibitory neural population that together create oscillations in the theta frequency range. When stimulated with speech, the theta rhythm becomes entrained to the onsets of syllables, such that the onsets can be inferred from the network activity. We investigate how well the obtained syllable parsing performs when different types of visual stimuli are added. In particular, we consider currents related to the rate of syllables as well as currents related to the mouth-opening area of a talking face. Main results. We find that currents targeting the excitatory neuronal population can influence speech comprehension, either boosting or impeding it, depending on the temporal delay and on whether the currents are excitatory or inhibitory. In contrast, currents that act on the inhibitory neurons do not significantly impact speech comprehension. Significance. Our results suggest neural mechanisms for the integration of visual information with the acoustic information in speech and make experimentally testable predictions.
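The delay-dependence reported in this abstract has a simple signal-level intuition: a visually driven current helps only if it arrives at the right time relative to the syllable onsets. The following toy simulation is a hedged sketch, not the authors' microcircuit model; all parameters (sampling rate, kernel shape, noise level) are assumptions. It shows that an onset-aligned current makes the syllable-locked response easier to read out of a noisy trace, while a strongly delayed current does not:

```python
# Toy illustration (NOT the authors' model): how the temporal delay of a
# visually driven current changes how well syllable onsets can be read out
# from a noisy "network output". An aligned current reinforces the
# onset-locked response; a misaligned one adds activity at the wrong times.
import numpy as np

rng = np.random.default_rng(0)
fs, dur = 200, 30                              # 200 Hz, 30 s of "speech"
n = fs * dur
onsets = np.zeros(n)
onsets[np.arange(0, n, int(fs / 4))] = 1.0     # 4 syllables per second

t_k = np.arange(int(0.1 * fs)) / fs
kernel = np.exp(-t_k / 0.03)                   # assumed 100 ms response kernel

def output(delay_ms):
    """Network trace: onset-locked response + delayed visual current + noise."""
    aud = np.convolve(onsets, kernel)[:n]
    vis = np.convolve(np.roll(onsets, int(delay_ms / 1000 * fs)), kernel)[:n]
    return aud + vis + 0.5 * rng.standard_normal(n)

def parse_quality(trace):
    # correlation between the trace and the true onset-locked response
    target = np.convolve(onsets, kernel)[:n]
    return np.corrcoef(trace, target)[0, 1]

q_aligned = parse_quality(output(0))
q_late = parse_quality(output(120))            # 120 ms misalignment
print(round(q_aligned, 2), round(q_late, 2))   # aligned should score higher
```

Under these assumed settings the aligned current yields the higher readout correlation, mirroring the abstract's point that the benefit or cost of a visual current depends on its timing.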
Affiliation(s)
- Anirudh Kulkarni
- Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, SW7 2AZ London, United Kingdom
- Mikolaj Kegler
- Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, SW7 2AZ London, United Kingdom
- Tobias Reichenbach
- Department of Bioengineering and Centre for Neurotechnology, Imperial College London, South Kensington Campus, SW7 2AZ London, United Kingdom; Department Artificial Intelligence in Biomedical Engineering, Friedrich-Alexander-Universität Erlangen-Nürnberg, Konrad-Zuse-Strasse 3/5, 91056 Erlangen, Germany
11
Direct Structural Connections between Auditory and Visual Motion-Selective Regions in Humans. J Neurosci 2021; 41:2393-2405. [PMID: 33514674 DOI: 10.1523/jneurosci.1552-20.2021] [Citation(s) in RCA: 7] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/18/2020] [Revised: 12/23/2020] [Accepted: 01/04/2021] [Indexed: 11/21/2022] Open
Abstract
In humans, the occipital middle-temporal region (hMT+/V5) specializes in the processing of visual motion, whereas the planum temporale (hPT) specializes in auditory motion processing. It has been hypothesized that these regions might communicate directly to achieve fast and optimal exchange of multisensory motion information. Here we investigated, for the first time in humans (male and female), the presence of direct white matter connections between visual and auditory motion-selective regions using a combined fMRI and diffusion MRI approach. We found evidence supporting the potential existence of direct white matter connections between individually and functionally defined hMT+/V5 and hPT. We show that projections between hMT+/V5 and hPT do not overlap with large white matter bundles, such as the inferior longitudinal fasciculus and the inferior fronto-occipital fasciculus. Moreover, we did not find evidence suggesting the presence of projections between the fusiform face area and hPT, supporting the functional specificity of hMT+/V5-hPT connections. Finally, the potential presence of hMT+/V5-hPT connections was corroborated in a large sample of participants (n = 114) from the Human Connectome Project. Together, this study provides a first indication of potential direct occipitotemporal projections between hMT+/V5 and hPT, which may support the exchange of motion information between functionally specialized auditory and visual regions. SIGNIFICANCE STATEMENT Perceiving and integrating moving signals across the senses is arguably one of the most important perceptual skills for the survival of living organisms. To create a unified representation of movement, the brain must integrate motion information from the separate senses. Our study provides support for the potential existence of direct connections between motion-selective regions in the occipital/visual (hMT+/V5) and temporal/auditory (hPT) cortices in humans. This connection could represent the structural scaffolding for the rapid and optimal exchange and integration of multisensory motion information. These findings suggest the existence of computationally specific pathways that allow information flow between areas that share a similar computational goal.
12
Keitel A, Gross J, Kayser C. Shared and modality-specific brain regions that mediate auditory and visual word comprehension. eLife 2020; 9:e56972. [PMID: 32831168 PMCID: PMC7470824 DOI: 10.7554/elife.56972] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2020] [Accepted: 08/18/2020] [Indexed: 12/22/2022] Open
Abstract
Visual speech carried by lip movements is an integral part of communication. Yet, it remains unclear to what extent visual and acoustic speech comprehension are mediated by the same brain regions. Using multivariate classification of full-brain MEG data, we first probed where the brain represents acoustically and visually conveyed word identities. We then tested where these sensory-driven representations are predictive of participants' trial-wise comprehension. The comprehension-relevant representations of auditory and visual speech converged only in anterior angular and inferior frontal regions and were spatially dissociated from those representations that best reflected the sensory-driven word identity. These results provide a neural explanation for the behavioural dissociation of acoustic and visual speech comprehension and suggest that cerebral representations encoding word identities may be more modality-specific than often assumed.
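The classification logic behind this abstract can be sketched with simulated data: a decoder trained on a region whose channels carry word-specific activity patterns recovers word identity well above chance, while the same decoder applied to a pure-noise region stays at chance. This is an illustrative toy (a nearest-centroid decoder on fabricated data stands in for the authors' MEG classifier; all sizes and names are assumptions):

```python
# Toy sketch of regional multivariate decoding of word identity.
# Not MEG data and not the authors' pipeline; purely illustrative.
import numpy as np

rng = np.random.default_rng(0)
n_words, n_trials_per, n_chan = 4, 30, 20
labels = np.repeat(np.arange(n_words), n_trials_per)   # 120 trials

def make_region(informative):
    # word-specific channel patterns for the informative region, none otherwise
    if informative:
        patterns = rng.standard_normal((n_words, n_chan))
    else:
        patterns = np.zeros((n_words, n_chan))
    return patterns[labels] + rng.standard_normal((labels.size, n_chan))

def loo_accuracy(X):
    """Leave-one-out nearest-centroid decoding of word identity."""
    correct = 0
    for i in range(labels.size):
        mask = np.ones(labels.size, bool)
        mask[i] = False
        cents = np.array([X[mask & (labels == w)].mean(0)
                          for w in range(n_words)])
        pred = np.argmin(((cents - X[i]) ** 2).sum(1))
        correct += pred == labels[i]
    return correct / labels.size

acc_info = loo_accuracy(make_region(True))
acc_noise = loo_accuracy(make_region(False))
print(acc_info, acc_noise)   # informative region well above chance (0.25)
```

Searching such a decoder across regions is, in spirit, how one maps where word identity is represented; relating per-trial decoder evidence to behaviour is the analogous step for comprehension-relevance.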
Affiliation(s)
- Anne Keitel
- Psychology, University of Dundee, Dundee, United Kingdom
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Joachim Gross
- Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, United Kingdom
- Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
- Christoph Kayser
- Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Bielefeld, Germany
13
Responses to Visual Speech in Human Posterior Superior Temporal Gyrus Examined with iEEG Deconvolution. J Neurosci 2020; 40:6938-6948. [PMID: 32727820 PMCID: PMC7470920 DOI: 10.1523/jneurosci.0279-20.2020] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/21/2020] [Revised: 06/01/2020] [Accepted: 06/02/2020] [Indexed: 12/22/2022] Open
Abstract
Experimentalists studying multisensory integration compare neural responses to multisensory stimuli with responses to the component modalities presented in isolation. This procedure is problematic for multisensory speech perception since audiovisual speech and auditory-only speech are easily intelligible but visual-only speech is not. To overcome this confound, we developed intracranial encephalography (iEEG) deconvolution. Individual stimuli always contained both auditory and visual speech, but jittering the onset asynchrony between modalities allowed the time courses of the unisensory responses and the interaction between them to be estimated independently. We applied this procedure to electrodes implanted in human epilepsy patients (both male and female) over the posterior superior temporal gyrus (pSTG), a brain area known to be important for speech perception. iEEG deconvolution revealed sustained positive responses to visual-only speech and larger, phasic responses to auditory-only speech. Confirming results from scalp EEG, responses to audiovisual speech were weaker than responses to auditory-only speech, demonstrating a subadditive multisensory neural computation. Leveraging the spatial resolution of iEEG, we extended these results to show that subadditivity is most pronounced in more posterior aspects of the pSTG. Across electrodes, subadditivity correlated with visual responsiveness, supporting a model in which visual speech enhances the efficiency of auditory speech processing in pSTG. The ability to separate neural processes may make iEEG deconvolution useful for studying a variety of complex cognitive and perceptual tasks. SIGNIFICANCE STATEMENT Understanding speech is one of the most important human abilities. Speech perception uses information from both the auditory and visual modalities. It has been difficult to study neural responses to visual speech because visual-only speech is difficult or impossible to comprehend, unlike auditory-only and audiovisual speech. We used intracranial encephalography deconvolution to overcome this obstacle. We found that visual speech evokes a positive response in the human posterior superior temporal gyrus, enhancing the efficiency of auditory speech processing.
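The deconvolution idea in this abstract can be sketched in a few lines: because the audiovisual asynchrony is jittered from trial to trial, regressing the recording on lagged onset indicators separates the auditory and visual response kernels even though the two onsets always co-occur. This is a toy simulation under assumed kernels and noise levels, not the authors' analysis code:

```python
# Toy deconvolution via ordinary least squares on jittered onsets.
# Ground-truth kernels (phasic "auditory", sustained "visual") are assumptions.
import numpy as np

rng = np.random.default_rng(1)
fs, n_lags, n_trials = 100, 40, 200            # 100 Hz; 400 ms kernels
sig_len = n_trials * fs                        # one trial per second
t_kernel = np.arange(n_lags) / fs

h_aud = np.exp(-t_kernel / 0.05) * np.sin(2 * np.pi * 8 * t_kernel)
h_vis = 0.5 * np.exp(-t_kernel / 0.25)

aud_onsets = np.arange(n_trials) * fs + 20
jitter = rng.integers(-10, 11, size=n_trials)  # +/- 100 ms asynchrony
vis_onsets = aud_onsets + jitter

def stick(onsets):
    s = np.zeros(sig_len)
    s[onsets] = 1.0
    return s

# simulated recording: both responses always present, plus noise
y = (np.convolve(stick(aud_onsets), h_aud)[:sig_len]
     + np.convolve(stick(vis_onsets), h_vis)[:sig_len]
     + 0.05 * rng.standard_normal(sig_len))

# design matrix: lagged copies of each onset train
X = np.column_stack([np.roll(stick(o), lag)
                     for o in (aud_onsets, vis_onsets)
                     for lag in range(n_lags)])
beta, *_ = np.linalg.lstsq(X, y, rcond=None)
h_aud_hat, h_vis_hat = beta[:n_lags], beta[n_lags:]

r_aud = np.corrcoef(h_aud, h_aud_hat)[0, 1]
r_vis = np.corrcoef(h_vis, h_vis_hat)[0, 1]
print(round(r_aud, 2), round(r_vis, 2))        # both should be close to 1
```

The jitter is what makes the design matrix well conditioned: with a fixed asynchrony, the auditory and visual regressors would be perfectly collinear and the two kernels could not be separated.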