1.
Van Caenegem EE, Moreno-Verdú M, Waltzing BM, Hamoline G, McAteer SM, Frahm L, Hardwick RM. Multisensory approach in Mental Imagery: ALE meta-analyses comparing Motor, Visual and Auditory Imagery. Neurosci Biobehav Rev 2024; 167:105902. [PMID: 39303775] [DOI: 10.1016/j.neubiorev.2024.105902]
Abstract
Mental Imagery is a topic of longstanding and widespread scientific interest. Individual studies have typically focused on a single modality of Mental Imagery (e.g. Motor, Visual, Auditory). Relatively little work has directly compared and contrasted the brain networks associated with these different modalities of Imagery. The present study integrates data from 439 neuroimaging experiments to identify both modality-specific and shared neural networks involved in Mental Imagery. Comparing the networks involved in Motor, Visual, and Auditory Imagery identified a pattern whereby each form of Imagery preferentially recruited 'higher level' associative brain regions involved in the associated 'real' experience. Results also indicate significant overlap in a left-lateralized network including the pre-supplementary motor area, ventral premotor cortex and inferior parietal lobule. This pattern of results supports the existence of a 'core' network that supports the attentional, spatial, and decision-making demands of Mental Imagery. Together, these results offer new insights into the brain networks underlying human imagination.
Affiliation(s)
- Elise E Van Caenegem
- Brain, Action, And Skill Laboratory, Institute of Neurosciences, UCLouvain, Belgium.
- Marcos Moreno-Verdú
- Brain, Action, And Skill Laboratory, Institute of Neurosciences, UCLouvain, Belgium
- Baptiste M Waltzing
- Brain, Action, And Skill Laboratory, Institute of Neurosciences, UCLouvain, Belgium
- Gautier Hamoline
- Brain, Action, And Skill Laboratory, Institute of Neurosciences, UCLouvain, Belgium
- Siobhan M McAteer
- Brain, Action, And Skill Laboratory, Institute of Neurosciences, UCLouvain, Belgium
- Lennart Frahm
- Institute of Neuroscience and Medicine, Brain & Behaviour (INM7), Research Centre Jülich, Jülich, Germany; Department of Psychiatry, Psychotherapy and Psychosomatics, School of Medicine, RWTH Aachen University, Aachen, Germany
- Robert M Hardwick
- Brain, Action, And Skill Laboratory, Institute of Neurosciences, UCLouvain, Belgium
2.
Gao D, Liang X, Ting Q, Nichols ES, Bai Z, Xu C, Cai M, Liu L. A meta-analysis of letter-sound integration: Assimilation and accommodation in the superior temporal gyrus. Hum Brain Mapp 2024; 45:e26713. [PMID: 39447213] [PMCID: PMC11501095] [DOI: 10.1002/hbm.26713]
Abstract
Letter-sound integration is a relatively recent cultural invention, yet the ability to perform it is readily acquired even though dedicated neural circuitry has not had time to evolve in the brain. Leading theories of how the brain accommodates literacy acquisition include the neural recycling hypothesis and the assimilation-accommodation hypothesis. The neural recycling hypothesis proposes that a new cultural skill is developed by "invading" preexisting neural structures that support a similar cognitive function, while the assimilation-accommodation hypothesis holds that a new cognitive skill relies on direct invocation of preexisting systems (assimilation) and adds brain areas based on task requirements (accommodation). Both theories agree that letter-sound integration may be achieved by reusing preexisting, functionally similar neural bases, but they differ in their proposals of how this occurs. We examined the evidence for each hypothesis by systematically comparing the similarities and differences between letter-sound integration and two other types of preexisting and functionally similar audiovisual (AV) processes, namely object-sound and speech-sound integration, using an activation likelihood estimation (ALE) meta-analysis. All three types of AV integration recruited the left posterior superior temporal gyrus (STG), while speech-sound integration additionally activated the bilateral middle STG, and letter-sound integration directly invoked the AV areas involved in speech-sound integration. These findings suggest that letter-sound integration may reuse the STG areas for speech-sound and object-sound integration through an assimilation-accommodation mechanism.
Affiliation(s)
- Danqi Gao
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Xitong Liang
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Qi Ting
- Department of Brain Cognition and Intelligent Medicine, Beijing University of Posts and Telecommunications, Beijing, China
- Zilin Bai
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Chaoying Xu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Mingnan Cai
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
- Li Liu
- State Key Laboratory of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing, China
3.
Cai XL, Pu CC, Zhou SZ, Wang Y, Huang J, Lui SSY, Møller A, Cheung EFC, Madsen KH, Xue R, Yu X, Chan RCK. Anterior cingulate glutamate levels associate with functional activation and connectivity during sensory integration in schizophrenia: a multimodal 1H-MRS and fMRI study. Psychol Med 2023; 53:4904-4914. [PMID: 35791929] [DOI: 10.1017/s0033291722001817]
Abstract
BACKGROUND Glutamatergic dysfunction has been implicated in sensory integration deficits in schizophrenia, yet how glutamatergic function contributes to the behavioural impairments and neural activities of sensory integration remains unknown. METHODS Fifty schizophrenia patients and 43 healthy controls completed behavioural assessments of sensory integration and underwent magnetic resonance spectroscopy (MRS) to measure anterior cingulate cortex (ACC) glutamate levels. The correlation between glutamate levels and behavioural sensory integration deficits was examined in each group. A subsample of 20 pairs of patients and controls further completed an audiovisual sensory integration functional magnetic resonance imaging (fMRI) task. Blood Oxygenation Level Dependent (BOLD) activation and task-dependent functional connectivity (FC) were assessed from the fMRI data. Full factorial analyses were performed to examine the Group-by-Glutamate Level interaction effects on fMRI measurements (group differences in the correlation between glutamate levels and fMRI measurements) and the correlation between glutamate levels and fMRI measurements within each group. RESULTS We found that schizophrenia patients exhibited impaired sensory integration, which was positively correlated with ACC glutamate levels. Multimodal analyses showed significant Group-by-Glutamate Level interaction effects on BOLD activation as well as on task-dependent FC in a 'cortico-subcortical-cortical' network (including the medial frontal gyrus, precuneus, ACC, middle cingulate gyrus, thalamus and caudate), with positive correlations in patients and negative correlations in controls. CONCLUSIONS Our findings indicate that ACC glutamate influences neural activity in a large-scale network during sensory integration, but with opposite directionality in schizophrenia patients and healthy people. This implicates a crucial role for the glutamatergic system in sensory integration processing in schizophrenia.
Affiliation(s)
- Xin-Lu Cai
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Sino-Danish College, University of Chinese Academy of Sciences, Beijing, China
- Sino-Danish Centre for Education and Research, Beijing, China
- Cheng-Cheng Pu
- Peking University Sixth Hospital, Peking University Institute of Mental Health, Beijing, China
- NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing, China
- Shu-Zhe Zhou
- Peking University Sixth Hospital, Peking University Institute of Mental Health, Beijing, China
- NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing, China
- Yi Wang
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Jia Huang
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Simon S Y Lui
- Department of Psychiatry, School of Clinical Medicine, The University of Hong Kong, Hong Kong Special Administrative Region, China
- Arne Møller
- Sino-Danish College, University of Chinese Academy of Sciences, Beijing, China
- Sino-Danish Centre for Education and Research, Beijing, China
- Centre of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark
- Department of Nuclear Medicine and PET Centre, Aarhus University Hospital, Aarhus, Denmark
- Eric F C Cheung
- Castle Peak Hospital, Hong Kong Special Administrative Region, China
- Kristoffer H Madsen
- Sino-Danish Centre for Education and Research, Beijing, China
- Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital, Amager and Hvidovre, Denmark
- Department of Applied Mathematics and Computer Science, Technical University of Denmark, Kongens Lyngby, Denmark
- Rong Xue
- Sino-Danish College, University of Chinese Academy of Sciences, Beijing, China
- Sino-Danish Centre for Education and Research, Beijing, China
- State Key Laboratory of Brain and Cognitive Science, Institute of Biophysics, Chinese Academy of Sciences, Beijing, China
- Beijing Institute for Brain Disorders, Beijing, China
- Xin Yu
- Peking University Sixth Hospital, Peking University Institute of Mental Health, Beijing, China
- NHC Key Laboratory of Mental Health (Peking University), National Clinical Research Center for Mental Disorders (Peking University Sixth Hospital), Beijing, China
- Raymond C K Chan
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Sino-Danish College, University of Chinese Academy of Sciences, Beijing, China
- Sino-Danish Centre for Education and Research, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Department of Diagnostic Radiology, the University of Hong Kong, Hong Kong Special Administrative Region, China
4.
Krason A, Vigliocco G, Mailend ML, Stoll H, Varley R, Buxbaum LJ. Benefit of visual speech information for word comprehension in post-stroke aphasia. Cortex 2023; 165:86-100. [PMID: 37271014] [PMCID: PMC10850036] [DOI: 10.1016/j.cortex.2023.04.011]
Abstract
Aphasia is a language disorder that often involves speech comprehension impairments affecting communication. In face-to-face settings, speech is accompanied by mouth and facial movements, but little is known about the extent to which they benefit aphasic comprehension. This study investigated the benefit of visual information accompanying speech for word comprehension in people with aphasia (PWA) and the neuroanatomic substrates of any benefit. Thirty-six PWA and 13 matched neurotypical control participants performed a picture-word verification task in which they indicated whether a picture of an animate/inanimate object matched a subsequent word produced by an actress in a video. Stimuli were either audiovisual (with visible mouth and facial movements) or auditory-only (still picture of a silhouette), with the audio being clear (unedited) or degraded (6-band noise-vocoding). We found that visual speech information was more beneficial for neurotypical participants than for PWA, and more beneficial for both groups when speech was degraded. A multivariate lesion-symptom mapping analysis for the degraded speech condition showed that lesions to the superior temporal gyrus, underlying insula, primary and secondary somatosensory cortices, and inferior frontal gyrus were associated with reduced benefit of audiovisual compared to auditory-only speech, suggesting that the integrity of these fronto-temporo-parietal regions may facilitate cross-modal mapping. These findings provide initial insights into the impact of audiovisual information on comprehension in aphasia and the brain regions mediating any benefit.
Affiliation(s)
- Anna Krason
- Experimental Psychology, University College London, UK; Moss Rehabilitation Research Institute, Elkins Park, PA, USA.
- Gabriella Vigliocco
- Experimental Psychology, University College London, UK; Moss Rehabilitation Research Institute, Elkins Park, PA, USA
- Marja-Liisa Mailend
- Moss Rehabilitation Research Institute, Elkins Park, PA, USA; Department of Special Education, University of Tartu, Tartu Linn, Estonia
- Harrison Stoll
- Moss Rehabilitation Research Institute, Elkins Park, PA, USA; Applied Cognitive and Brain Science, Drexel University, Philadelphia, PA, USA
- Laurel J Buxbaum
- Moss Rehabilitation Research Institute, Elkins Park, PA, USA; Department of Rehabilitation Medicine, Thomas Jefferson University, Philadelphia, PA, USA
5.
Zhou HY, Zhang YJ, Hu HX, Yan YJ, Wang LL, Lui SSY, Chan RCK. Neural correlates of audiovisual speech synchrony perception and its relationship with autistic traits. Psych J 2023; 12:514-523. [PMID: 36517928] [DOI: 10.1002/pchj.624]
Abstract
The anterior insula (AI) plays a central role in coordinating attention and integrating information from multiple sensory modalities. AI dysfunction may contribute to both the sensory and the social impairments of autism spectrum disorder (ASD). Little is known regarding the brain mechanisms that guide multisensory integration, and how such neural activity might be affected by autistic-like symptoms in the general population. In this study, 72 healthy young adults performed an audiovisual speech synchrony judgment (SJ) task during fMRI scanning. We aimed to investigate SJ-related brain activations and connectivity, with a focus on the AI. Compared with synchronous speech, asynchrony perception triggered stronger activations in the bilateral AI and other frontal-cingulate-parietal regions. In contrast, synchrony perception resulted in greater involvement of the primary auditory and visual areas, indicating multisensory validation and fusion. Moreover, the AI demonstrated a stronger connection with the anterior cingulate cortex (ACC) in the audiovisual asynchronous (vs. synchronous) condition. To facilitate asynchrony detection, the AI may integrate auditory and visual speech stimuli and generate a control signal to the ACC that further supports conflict resolution and response selection. Correlation analysis, however, suggested that audiovisual synchrony perception and its related AI activation and connectivity did not vary significantly with levels of autistic traits. These findings provide novel evidence for the neural mechanisms underlying multisensory temporal processing in healthy people. Future research should examine whether these findings extend to individuals with ASD.
Affiliation(s)
- Han-Yu Zhou
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Shanghai Key Laboratory of Mental Health and Psychological Crisis Intervention, School of Psychology and Cognitive Science, East China Normal University, Shanghai, China
- Yi-Jing Zhang
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Hui-Xin Hu
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yong-Jie Yan
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Sino-Danish College of University of Chinese Academy of Sciences, Beijing, China
- Sino-Danish Centre for Education and Research, Beijing, China
- Ling-Ling Wang
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Simon S Y Lui
- Department of Psychiatry, School of Clinical Medicine, The University of Hong Kong, Hong Kong Special Administrative Region, China
- Raymond C K Chan
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China
- Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
6.
Scheliga S, Kellermann T, Lampert A, Rolke R, Spehr M, Habel U. Neural correlates of multisensory integration in the human brain: an ALE meta-analysis. Rev Neurosci 2023; 34:223-245. [PMID: 36084305] [DOI: 10.1515/revneuro-2022-0065]
Abstract
Previous fMRI research identified the superior temporal sulcus as a central integration area for audiovisual stimuli. However, less is known about a general multisensory integration network across the senses. We therefore conducted an activation likelihood estimation (ALE) meta-analysis spanning multiple sensory modalities to identify a common brain network. We included 49 studies covering all Aristotelian senses, i.e., auditory, visual, tactile, gustatory, and olfactory stimuli. The analysis revealed significant activation in the bilateral superior temporal gyrus, middle temporal gyrus, thalamus, right insula, and left inferior frontal gyrus. We assume these regions to be part of a general multisensory integration network comprising different functional roles: the thalamus operates as a first subcortical relay projecting sensory information to higher cortical integration centers in the superior temporal gyrus/sulcus, while conflict-processing regions such as the insula and inferior frontal gyrus facilitate the integration of incongruent information. We additionally performed meta-analytic connectivity modelling and found that each brain region showed co-activations within the identified multisensory integration network. By including multiple sensory modalities in our meta-analysis, the results may therefore provide evidence for a common brain network that supports different functional roles for multisensory integration.
Affiliation(s)
- Sebastian Scheliga
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Thilo Kellermann
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany; JARA-Institute Brain Structure Function Relationship, Pauwelsstraße 30, 52074 Aachen, Germany
- Angelika Lampert
- Institute of Physiology, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Roman Rolke
- Department of Palliative Medicine, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany
- Marc Spehr
- Department of Chemosensation, RWTH Aachen University, Institute for Biology, Worringerweg 3, 52074 Aachen, Germany
- Ute Habel
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical Faculty RWTH Aachen University, Pauwelsstraße 30, 52074 Aachen, Germany; JARA-Institute Brain Structure Function Relationship, Pauwelsstraße 30, 52074 Aachen, Germany
7.
Li J, Yang Y, Viñas-Guasch N, Yang Y, Bi HY. Differences in brain functional networks for audiovisual integration during reading between children and adults. Ann N Y Acad Sci 2023; 1520:127-139. [DOI: 10.1111/nyas.14943]
Abstract
Building robust letter-to-sound correspondences is a prerequisite for developing reading capacity. However, the neural mechanisms underlying the development of audiovisual integration for reading are largely unknown. This study used functional magnetic resonance imaging during a lexical decision task to investigate the functional brain networks that support audiovisual integration during reading in developing child readers (10-12 years old) and skilled adult readers (20-28 years old). The results revealed enhanced connectivity in a prefrontal-superior temporal network (including the right medial frontal gyrus, right superior frontal gyrus, and left superior temporal gyrus) in adults relative to children, reflecting the development of attentional modulation of audiovisual integration in reading. Furthermore, the connectivity strength of this network was correlated with reading accuracy. Collectively, this study is the first to elucidate the differences in the brain networks of audiovisual integration for reading between children and adults, advancing our understanding of the neurodevelopment of multisensory integration in high-level human cognition.
Affiliation(s)
- Junjun Li
- CAS Key Laboratory of Behavioral Science, Center for Brain Science and Learning Difficulties, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yang Yang
- CAS Key Laboratory of Behavioral Science, Center for Brain Science and Learning Difficulties, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Yinghui Yang
- CAS Key Laboratory of Behavioral Science, Center for Brain Science and Learning Difficulties, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China; China Welfare Institute Information and Research Center, Soong Ching Ling Children Development Center, Shanghai, China
- Hong-Yan Bi
- CAS Key Laboratory of Behavioral Science, Center for Brain Science and Learning Difficulties, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
8.
Rodríguez-Nieto G, Seer C, Sidlauskaite J, Vleugels L, Van Roy A, Hardwick R, Swinnen S. Inhibition, Shifting and Updating: Inter and intra-domain commonalities and differences from an executive functions activation likelihood estimation meta-analysis. Neuroimage 2022; 264:119665. [PMID: 36202157] [DOI: 10.1016/j.neuroimage.2022.119665]
Abstract
Executive functions are higher-order mental processes that support goal-directed behavior. Among these processes, Inhibition, Updating, and Shifting have been considered core executive domains. In this meta-analysis, we comprehensively investigate the neural networks of these executive domains and we synthesize for the first time the neural convergences and divergences among the most frequently used executive paradigms within those domains. A systematic search yielded 1055 published neuroimaging studies (including 26,191 participants in total). Our study revealed that a fronto-parietal network was shared by the three main domains. Furthermore, we executed conjunction analyses among the paradigms of the same domain to extract the core distinctive components of the main executive domains. This approach showed that Inhibition and Shifting are characterized by a strongly lateralized neural activation in the right and left hemisphere, respectively. In addition, both networks overlapped with the Updating network but not with each other. Remarkably, our study detected heterogeneity among the paradigms from the same domain. More specifically, analysis of Inhibition tasks revealed differing activations for Response Inhibition compared to Interference Control paradigms, suggesting that Inhibition encompasses relatively heterogeneous sub-functions. Shifting analyses revealed a bilateral overlap of the Wisconsin Card Sorting Task with the Updating network, but this pattern was absent for Rule Switching and Dual Task paradigms. Moreover, our Updating meta-analyses revealed the neural signatures associated with the specific modules of the Working Memory model from Baddeley and Hitch. To our knowledge, this is the most comprehensive meta-analysis of executive functions to date. Its paradigm-driven analyses provide a unique contribution to a better understanding of the neural convergences and divergences among executive processes that are relevant for clinical applications, such as cognitive enhancement and neurorehabilitation interventions.
Affiliation(s)
- Geraldine Rodríguez-Nieto
- Movement Control and Neuroplasticity Research Group, Biomedical Sciences, KU Leuven, Tervuursevest 101 box 1501, Leuven 3001, Belgium; Leuven Brain Institute (LBI), KU Leuven, Oude Markt 13, Leuven 5005, Belgium
- Caroline Seer
- Movement Control and Neuroplasticity Research Group, Biomedical Sciences, KU Leuven, Tervuursevest 101 box 1501, Leuven 3001, Belgium; Leuven Brain Institute (LBI), KU Leuven, Oude Markt 13, Leuven 5005, Belgium
- Justina Sidlauskaite
- Movement Control and Neuroplasticity Research Group, Biomedical Sciences, KU Leuven, Tervuursevest 101 box 1501, Leuven 3001, Belgium; Leuven Brain Institute (LBI), KU Leuven, Oude Markt 13, Leuven 5005, Belgium
- Lore Vleugels
- Movement Control and Neuroplasticity Research Group, Biomedical Sciences, KU Leuven, Tervuursevest 101 box 1501, Leuven 3001, Belgium; Leuven Brain Institute (LBI), KU Leuven, Oude Markt 13, Leuven 5005, Belgium; Institute of Neuroscience, UC Louvain, Av. Mounier 54, Bruxelles 1200, Belgium
- Anke Van Roy
- Movement Control and Neuroplasticity Research Group, Biomedical Sciences, KU Leuven, Tervuursevest 101 box 1501, Leuven 3001, Belgium; Leuven Brain Institute (LBI), KU Leuven, Oude Markt 13, Leuven 5005, Belgium
- Robert Hardwick
- Movement Control and Neuroplasticity Research Group, Biomedical Sciences, KU Leuven, Tervuursevest 101 box 1501, Leuven 3001, Belgium; Leuven Brain Institute (LBI), KU Leuven, Oude Markt 13, Leuven 5005, Belgium; Institute of Neuroscience, UC Louvain, Av. Mounier 54, Bruxelles 1200, Belgium
- Stephan Swinnen
- Movement Control and Neuroplasticity Research Group, Biomedical Sciences, KU Leuven, Tervuursevest 101 box 1501, Leuven 3001, Belgium; Leuven Brain Institute (LBI), KU Leuven, Oude Markt 13, Leuven 5005, Belgium.
9.
Ross LA, Molholm S, Butler JS, Del Bene VA, Foxe JJ. Neural correlates of multisensory enhancement in audiovisual narrative speech perception: a fMRI investigation. Neuroimage 2022; 263:119598. [PMID: 36049699] [DOI: 10.1016/j.neuroimage.2022.119598]
Abstract
This fMRI study investigated the effect of seeing the articulatory movements of a speaker while listening to a naturalistic narrative stimulus. Its goal was to identify regions of the language network showing multisensory enhancement under synchronous audiovisual conditions. We expected this enhancement to emerge in regions known to underlie the integration of auditory and visual information, such as the posterior superior temporal gyrus, as well as in parts of the broader language network, including the semantic system. To this end, we presented 53 participants with a continuous narration of a story in auditory-alone, visual-alone, and both synchronous and asynchronous audiovisual speech conditions while recording brain activity using BOLD fMRI. We found multisensory enhancement in an extensive network of regions underlying multisensory integration and parts of the semantic network, as well as in extralinguistic regions not usually associated with multisensory integration, namely the primary visual cortex and the bilateral amygdala. The analysis also revealed involvement of thalamic regions along the visual and auditory pathways more commonly associated with early sensory processing. We conclude that under natural listening conditions, multisensory enhancement not only involves sites of multisensory integration but also many regions of the wider semantic network, and includes regions associated with extralinguistic sensory, perceptual and cognitive processing.
Affiliation(s)
- Lars A Ross
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; Department of Imaging Sciences, University of Rochester Medical Center, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA.
| | - Sophie Molholm
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA
| | - John S Butler
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA; School of Mathematical Sciences, Technological University Dublin, Kevin Street Campus, Dublin, Ireland
| | - Victor A Del Bene
- The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA; University of Alabama at Birmingham, Heersink School of Medicine, Department of Neurology, Birmingham, Alabama, 35233, USA
| | - John J Foxe
- The Frederick J. and Marion A. Schindler Cognitive Neurophysiology Laboratory, The Ernest J. Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, New York, 14642, USA; The Cognitive Neurophysiology Laboratory, Departments of Pediatrics and Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, New York, 10461, USA.
10
Zhang L, Du Y. Lip movements enhance speech representations and effective connectivity in auditory dorsal stream. Neuroimage 2022; 257:119311. [PMID: 35589000 DOI: 10.1016/j.neuroimage.2022.119311]
Abstract
Viewing a speaker's lip movements facilitates speech perception, especially under adverse listening conditions, but the neural mechanisms of this perceptual benefit at the phonemic and feature levels remain unclear. This fMRI study addressed this question by quantifying regional multivariate representation and network organization underlying audiovisual speech-in-noise perception. Behaviorally, valid lip movements improved recognition of place of articulation, aiding phoneme identification. Lip movements also enhanced neural representations of phonemes in left auditory dorsal stream regions, including frontal speech motor areas and the supramarginal gyrus (SMG). Moreover, lip movements promoted neural representations of place of articulation and voicing features differentially across these regions, with voicing enhanced in Broca's area and place of articulation better encoded in left ventral premotor cortex and SMG. Dynamic causal modeling (DCM) analysis showed that these local changes were accompanied by strengthened effective connectivity along the dorsal stream. Furthermore, the neurite orientation dispersion of the left arcuate fasciculus, the structural backbone of the auditory dorsal stream, predicted the visual enhancements of neural representations and effective connectivity. Our findings provide novel insight for speech science: lip movements promote both local phonemic and feature encoding and network connectivity in the dorsal pathway, and this functional enhancement is mediated by the microstructural architecture of the circuit.
Affiliation(s)
- Lei Zhang
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China 100101; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China 100049
| | - Yi Du
- CAS Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China 100101; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China 100049; CAS Center for Excellence in Brain Science and Intelligence Technology, Shanghai, China 200031; Chinese Institute for Brain Research, Beijing, China 102206.
11
Fiber tracing and microstructural characterization among audiovisual integration brain regions in neonates compared with young adults. Neuroimage 2022; 254:119141. [PMID: 35342006 DOI: 10.1016/j.neuroimage.2022.119141]
Abstract
Audiovisual integration (AVI) has been associated with cognitive-processing and behavioral advantages, as well as with various socio-cognitive disorders. While some studies have identified brain regions instantiating this ability shortly after birth, little is known about the structural pathways connecting them. The goal of the present study was to reconstruct fiber tracts linking AVI regions in the newborn in-vivo brain and assess their adult-likeness by comparing them with analogous fiber tracts of young adults. We performed probabilistic tractography and compared connective probabilities between a sample of term-born neonates (N = 311; the Developing Human Connectome Project, dHCP, http://www.developingconnectome.org) and young adults (N = 311; the Human Connectome Project, https://www.humanconnectome.org/) by means of a classification algorithm. Furthermore, we computed Dice coefficients to assess between-group spatial similarity of the reconstructed fibers and used diffusion metrics to characterize neonates' AVI brain network in terms of microstructural properties, interhemispheric differences and the association with perinatal covariates and biological sex. Overall, our results indicate that the AVI fiber bundles were successfully reconstructed in the vast majority of neonates, similarly to adults. Connective probability distributional similarities and spatial overlaps of AVI fibers between the two groups differed across the reconstructed fibers. There was a rank-order correspondence of the fibers' connective strengths across the groups. Additionally, the study revealed patterns of diffusion metrics in line with early white matter developmental trajectories and a developmental advantage for females. Altogether, these findings deliver evidence of meaningful structural connections among AVI regions in the newborn in-vivo brain.
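The Dice coefficient used here to quantify spatial similarity between group-level tract maps has a simple closed form: twice the intersection of two binary masks divided by the sum of their sizes. A minimal sketch, with masks flattened to 0/1 lists rather than the 3-D images a real pipeline would use:

```python
# Dice coefficient for spatial overlap of two binarized masks
# (e.g., a neonate and an adult group fiber-tract map).

def dice(mask_a, mask_b):
    """Dice = 2|A ∩ B| / (|A| + |B|) for equal-length binary masks."""
    inter = sum(a and b for a, b in zip(mask_a, mask_b))
    total = sum(mask_a) + sum(mask_b)
    return 2.0 * inter / total if total else 1.0  # both empty: perfect overlap

a = [1, 1, 1, 0, 0, 0]  # toy 6-voxel masks
b = [0, 1, 1, 1, 0, 0]
```

Dice ranges from 0 (no overlap) to 1 (identical masks); for the toy masks above it is 2/3.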
12
Peelle JE, Spehar B, Jones MS, McConkey S, Myerson J, Hale S, Sommers MS, Tye-Murray N. Increased Connectivity among Sensory and Motor Regions during Visual and Audiovisual Speech Perception. J Neurosci 2022; 42:435-442. [PMID: 34815317 PMCID: PMC8802926 DOI: 10.1523/jneurosci.0114-21.2021]
Abstract
In everyday conversation, we usually process the talker's face as well as the sound of the talker's voice. Access to visual speech information is particularly useful when the auditory signal is degraded. Here, we used fMRI to monitor brain activity while adult humans (n = 60) were presented with visual-only, auditory-only, and audiovisual words. The audiovisual words were presented in quiet and in several signal-to-noise ratios. As expected, audiovisual speech perception recruited both auditory and visual cortex, with some evidence for increased recruitment of premotor cortex in some conditions (including in substantial background noise). We then investigated neural connectivity using psychophysiological interaction analysis with seed regions in both primary auditory cortex and primary visual cortex. Connectivity between auditory and visual cortices was stronger in audiovisual conditions than in unimodal conditions, including a wide network of regions in posterior temporal cortex and prefrontal cortex. In addition to whole-brain analyses, we also conducted a region-of-interest analysis on the left posterior superior temporal sulcus (pSTS), implicated in many previous studies of audiovisual speech perception. We found evidence for both activity and effective connectivity in pSTS for visual-only and audiovisual speech, although these were not significant in whole-brain analyses. Together, our results suggest a prominent role for cross-region synchronization in understanding both visual-only and audiovisual speech that complements activity in integrative brain regions like pSTS.
Significance Statement: In everyday conversation, we usually process the talker's face as well as the sound of the talker's voice. Access to visual speech information is particularly useful when the auditory signal is hard to understand (e.g., background noise). Prior work has suggested that specialized regions of the brain may play a critical role in integrating information from visual and auditory speech. Here, we show that a complementary mechanism relying on synchronized brain activity among sensory and motor regions may also play a critical role. These findings encourage reconceptualizing audiovisual integration in the context of coordinated network activity.
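The psychophysiological interaction (PPI) analysis mentioned above rests on a simple construction: the interaction regressor is the element-wise product of a demeaned seed-region time series and a centered task regressor. The sketch below is didactic only; real fMRI packages form the interaction on the deconvolved neural signal before reconvolving with a hemodynamic response, and the time courses here are invented.

```python
# Didactic sketch of a PPI interaction regressor:
# (seed time series, demeaned) * (task regressor, centered).

def ppi_regressor(seed, task):
    """Return the interaction term seed_demeaned * task_centered."""
    seed_mean = sum(seed) / len(seed)
    seed_d = [s - seed_mean for s in seed]     # demean seed time course
    task_mean = sum(task) / len(task)
    task_c = [x - task_mean for x in task]     # center task indicator
    return [s * x for s, x in zip(seed_d, task_c)]

seed = [1.0, 2.0, 3.0, 2.0]  # hypothetical auditory-cortex time course
task = [0, 0, 1, 1]          # hypothetical audiovisual-condition indicator
```

In a GLM, this interaction term is entered alongside the seed and task regressors, so a significant interaction weight indicates task-modulated coupling with the seed region.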
Affiliation(s)
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri 63110
| | - Brent Spehar
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri 63110
| | - Michael S Jones
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri 63110
| | - Sarah McConkey
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri 63110
| | - Joel Myerson
- Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, Missouri 63130
| | - Sandra Hale
- Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, Missouri 63130
| | - Mitchell S Sommers
- Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, Missouri 63130
| | - Nancy Tye-Murray
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri 63110
13
McCormick K, Lacey S, Stilla R, Nygaard LC, Sathian K. Neural Basis of the Sound-Symbolic Crossmodal Correspondence Between Auditory Pseudowords and Visual Shapes. Multisens Res 2021; 35:29-78. [PMID: 34384048 PMCID: PMC9196751 DOI: 10.1163/22134808-bja10060]
Abstract
Sound symbolism refers to the association between the sounds of words and their meanings, often studied using the crossmodal correspondence between auditory pseudowords, e.g., 'takete' or 'maluma', and pointed or rounded visual shapes, respectively. In a functional magnetic resonance imaging study, participants were presented with pseudoword-shape pairs that were sound-symbolically congruent or incongruent. We found no significant congruency effects in the blood oxygenation level-dependent (BOLD) signal when participants were attending to visual shapes. During attention to auditory pseudowords, however, we observed greater BOLD activity for incongruent compared to congruent audiovisual pairs bilaterally in the intraparietal sulcus and supramarginal gyrus, and in the left middle frontal gyrus. We compared this activity to independent functional contrasts designed to test competing explanations of sound symbolism, but found no evidence for mediation via language, and only limited evidence for accounts based on multisensory integration and a general magnitude system. Instead, we suggest that the observed incongruency effects are likely to reflect phonological processing and/or multisensory attention. These findings advance our understanding of sound-to-meaning mapping in the brain.
Affiliation(s)
- Kelly McCormick
- Department of Psychology, Emory University, Atlanta, GA 30322, USA
| | - Simon Lacey
- Department of Neurology, Milton S. Hershey Medical Center, Penn State College of Medicine, Hershey, PA 17033-0859, USA
- Department of Neural and Behavioral Sciences, Milton S. Hershey Medical Center, Penn State College of Medicine, Hershey, PA 17033-0859, USA
| | - Randall Stilla
- Winship Cancer Institute, Emory University, Atlanta, GA 30322, USA
| | - Lynne C. Nygaard
- Department of Psychology, Emory University, Atlanta, GA 30322, USA
| | - K. Sathian
- Department of Neurology, Milton S. Hershey Medical Center, Penn State College of Medicine, Hershey, PA 17033-0859, USA
- Department of Neural and Behavioral Sciences, Milton S. Hershey Medical Center, Penn State College of Medicine, Hershey, PA 17033-0859, USA
- Department of Psychology, Milton S. Hershey Medical Center, Penn State College of Medicine, Hershey, PA 17033-0859, USA
14
Xie Y, Li Y, Duan H, Xu X, Zhang W, Fang P. Theta Oscillations and Source Connectivity During Complex Audiovisual Object Encoding in Working Memory. Front Hum Neurosci 2021; 15:614950. [PMID: 33762914 PMCID: PMC7982740 DOI: 10.3389/fnhum.2021.614950]
Abstract
Working memory is a limited capacity memory system that involves the short-term storage and processing of information. Neuroscientific studies of working memory have mostly focused on the essential roles of neural oscillations during item encoding from single sensory modalities (e.g., visual and auditory). However, the characteristics of neural oscillations during multisensory encoding in working memory are rarely studied. Our study investigated the oscillation characteristics of neural signals in scalp electrodes and mapped functional brain connectivity while participants encoded complex audiovisual objects in a working memory task. Experimental results showed that theta oscillations (4–8 Hz) were prominent and topographically distributed across multiple cortical regions, including prefrontal (e.g., superior frontal gyrus), parietal (e.g., precuneus), temporal (e.g., inferior temporal gyrus), and occipital (e.g., cuneus) cortices. Furthermore, neural connectivity at the theta oscillation frequency was significant in these cortical regions during audiovisual object encoding compared with single modality object encoding. These results suggest that local oscillations and interregional connectivity via theta activity play an important role during audiovisual object encoding and may contribute to the formation of working memory traces from multisensory items.
Affiliation(s)
- Yuanjun Xie
- School of Education, Xin Yang College, Xinyang, China; Department of Radiology, Xijing Hospital, Fourth Military Medical University, Xi'an, China
| | - Yanyan Li
- School of Education, Xin Yang College, Xinyang, China
| | - Haidan Duan
- School of Education, Xin Yang College, Xinyang, China
| | - Xiliang Xu
- School of Education, Xin Yang College, Xinyang, China
| | - Wenmo Zhang
- Department of Fundamental, Army Logistical University, Chongqing, China; Department of Social Medicine and Health and Management, College of Military Preventive Medicine, Army Medical University, Chongqing, China
| | - Peng Fang
- Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
15
Gawęda Ł, Moritz S. The role of expectancies and emotional load in false auditory perceptions among patients with schizophrenia spectrum disorders. Eur Arch Psychiatry Clin Neurosci 2021; 271:713-722. [PMID: 31493150 PMCID: PMC8119254 DOI: 10.1007/s00406-019-01065-2]
Abstract
Cognitive models suggest that top-down and emotional processes increase false perceptions in schizophrenia spectrum disorders (SSD). However, little is known about how these processes interact to produce false auditory perceptions. The present study investigated the separate and joint effects of expectancies and emotional load on false auditory perceptions in SSD. Thirty-three patients with SSD and 33 matched healthy controls were assessed with a false perception task. Participants were asked to detect a target stimulus (a word) in a white-noise background (the word was present in 60% of trials and absent in 40%). Conditions varied in the level of expectancy: (1) no cue prior to the stimulus, (2) semantic priming, (3) semantic priming accompanied by a video of a man's mouth articulating the word. The words used were neutral or emotionally negative. Symptom severity was assessed with the Positive and Negative Syndrome Scale. Higher expectancy significantly increased the likelihood of false auditory perceptions only among the patients with SSD (the group × expectancy interaction was significant), and this effect was unrelated to general cognitive performance. Emotional load had no impact on false auditory perceptions in either group. Patients made more high-confidence false auditory perceptions than controls did. False auditory perceptions correlated significantly with the severity of positive symptoms and disorganization, but not with other dimensions. Perception in SSD thus appears susceptible to top-down processes, increasing the likelihood of high-confidence false auditory perceptions.
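Because the task is a yes/no detection design (word present on 60% of trials), performance and response bias can be summarized with standard signal-detection measures. The sketch below is illustrative, not the analysis reported in the paper: it computes d′ (sensitivity) and c (criterion) from hit and false-alarm rates using the standard-library inverse normal CDF.

```python
# Signal-detection summary for a yes/no detection task:
# d' = z(hit rate) - z(false-alarm rate); criterion c = -(z(H) + z(FA)) / 2.
from statistics import NormalDist

_z = NormalDist().inv_cdf  # inverse standard-normal CDF

def dprime(hit_rate, fa_rate):
    """Sensitivity: how well targets are separated from noise."""
    return _z(hit_rate) - _z(fa_rate)

def criterion(hit_rate, fa_rate):
    """Response bias: negative values indicate a liberal 'yes' bias."""
    return -0.5 * (_z(hit_rate) + _z(fa_rate))
```

On this view, the patients' pattern (more high-confidence false alarms under strong expectancy) would appear as a more liberal criterion rather than a change in d′.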
Affiliation(s)
- Łukasz Gawęda
- Psychopathology and Early Intervention Lab, II Department of Psychiatry, The Medical University of Warsaw, Ul. Kondratowicza 8, 03-242, Warsaw, Poland.
- Department of Psychiatry and Psychotherapy, University Medical Center Hamburg-Eppendorf, Hamburg, Germany.
| | - Steffen Moritz
- Department of Psychiatry and Psychotherapy, University Medical Center Hamburg-Eppendorf, Hamburg, Germany
16
Michaelis K, Erickson LC, Fama ME, Skipper-Kallal LM, Xing S, Lacey EH, Anbari Z, Norato G, Rauschecker JP, Turkeltaub PE. Effects of age and left hemisphere lesions on audiovisual integration of speech. Brain Lang 2020; 206:104812. [PMID: 32447050 PMCID: PMC7379161 DOI: 10.1016/j.bandl.2020.104812]
Abstract
Neuroimaging studies have implicated left temporal lobe regions in audiovisual integration of speech and inferior parietal regions in temporal binding of incoming signals. However, it remains unclear which regions are necessary for audiovisual integration, especially when the auditory and visual signals are offset in time. Aging also influences integration, but the nature of this influence is unresolved. We used a McGurk task to test audiovisual integration and sensitivity to the timing of audiovisual signals in two older adult groups: left hemisphere stroke survivors and controls. We observed a positive relationship between age and audiovisual speech integration in both groups, and an interaction indicating that lesions reduce sensitivity to timing offsets between signals. Lesion-symptom mapping demonstrated that damage to the left supramarginal gyrus and planum temporale reduces temporal acuity in audiovisual speech perception. This suggests that a process mediated by these structures identifies asynchronous audiovisual signals that should not be integrated.
Affiliation(s)
- Kelly Michaelis
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA
| | - Laura C Erickson
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Neuroscience Department, Georgetown University Medical Center, Washington DC, USA
| | - Mackenzie E Fama
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Department of Speech-Language Pathology & Audiology, Towson University, Towson, MD, USA
| | - Laura M Skipper-Kallal
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA
| | - Shihui Xing
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Department of Neurology, First Affiliated Hospital of Sun Yat-Sen University, Guangzhou, China
| | - Elizabeth H Lacey
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Research Division, MedStar National Rehabilitation Hospital, Washington DC, USA
| | - Zainab Anbari
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA
| | - Gina Norato
- Clinical Trials Unit, National Institute of Neurological Disorders and Stroke, National Institutes of Health, Bethesda, MD, USA
| | - Josef P Rauschecker
- Neuroscience Department, Georgetown University Medical Center, Washington DC, USA
| | - Peter E Turkeltaub
- Neurology Department and Center for Brain Plasticity and Recovery, Georgetown University Medical Center, Washington DC, USA; Research Division, MedStar National Rehabilitation Hospital, Washington DC, USA.
17
Age-related hearing loss influences functional connectivity of auditory cortex for the McGurk illusion. Cortex 2020; 129:266-280. [PMID: 32535378 DOI: 10.1016/j.cortex.2020.04.022]
Abstract
Age-related hearing loss affects hearing at high frequencies and is associated with difficulties in understanding speech. Increased audio-visual integration has recently been found in age-related hearing impairment, but the brain mechanisms that contribute to this effect remain unclear. We used functional magnetic resonance imaging in elderly subjects with normal hearing and in elderly subjects with mild to moderate uncompensated hearing loss. Audio-visual integration was studied using the McGurk task, in which an illusory fused percept can occur if incongruent auditory and visual syllables are presented. The paradigm included unisensory stimuli (auditory only, visual only), congruent audio-visual stimuli and incongruent (McGurk) audio-visual stimuli. An illusory percept was reported in over 60% of incongruent trials. These McGurk illusion rates were equal in both groups of elderly subjects and correlated positively with speech-in-noise perception and daily listening effort. Normal-hearing participants showed an increased neural response in left pre- and postcentral gyri and right middle frontal gyrus for incongruent (McGurk) stimuli compared to congruent audio-visual stimuli. Activation patterns, however, did not differ between groups. Task-modulated functional connectivity did differ between groups: when comparing incongruent (McGurk) with congruent audio-visual stimuli, hard-of-hearing participants showed increased connectivity from auditory cortex to visual, parietal and frontal areas relative to normal-hearing participants. These results suggest that changes in the functional connectivity of auditory cortex, rather than in activation strength, accompany age-related hearing loss during processing of audio-visual McGurk stimuli.
18
Saarinen T, Kujala J, Laaksonen H, Jalava A, Salmelin R. Task-Modulated Corticocortical Synchrony in the Cognitive-Motor Network Supporting Handwriting. Cereb Cortex 2020; 30:1871-1886. [PMID: 31670795 PMCID: PMC7132916 DOI: 10.1093/cercor/bhz210]
Abstract
Both motor and cognitive aspects of behavior depend on dynamic, accurately timed neural processes in large-scale brain networks. Here, we studied synchronous interplay between cortical regions during production of cognitive-motor sequences in humans. Specifically, variants of handwriting that differed in motor variability, linguistic content, and memorization of movement cues were contrasted to unveil functional sensitivity of corticocortical connections. Data-driven magnetoencephalography mapping (n = 10) uncovered modulation of mostly left-hemispheric corticocortical interactions, as quantified by relative changes in phase synchronization. At low frequencies (~2–13 Hz), enhanced frontoparietal synchrony was related to regular handwriting, whereas premotor cortical regions synchronized for simple loop production and temporo-occipital areas for a writing task substituting normal script with loop patterns. At the beta-to-gamma band (~13–45 Hz), enhanced synchrony was observed for regular handwriting in the central and frontoparietal regions, including connections between the sensorimotor and supplementary motor cortices and between the parietal and dorsal premotor/precentral cortices. Interpreted within a modular framework, these modulations of synchrony mainly highlighted interactions of the putative pericentral subsystem of hand coordination and the frontoparietal subsystem mediating working memory operations. As part of cortical dynamics, interregional phase synchrony varies depending on task demands in production of cognitive-motor sequences.
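Interregional phase synchrony of the kind quantified here is commonly measured with the phase-locking value (PLV): the magnitude of the average unit phasor of the phase difference between two signals. The sketch below is illustrative rather than the authors' pipeline; it operates on precomputed instantaneous phases, which in practice would come from band-limited MEG signals via a Hilbert or wavelet transform.

```python
# Phase-locking value between two phase time series.
import cmath

def plv(phases_a, phases_b):
    """PLV = |mean(exp(i*(phi_a - phi_b)))|: 1 = perfect locking, ~0 = none."""
    n = len(phases_a)
    acc = sum(cmath.exp(1j * (a - b)) for a, b in zip(phases_a, phases_b))
    return abs(acc) / n
```

A constant phase lag between regions yields PLV = 1 even though the signals are not identical, which is exactly the property that makes PLV a measure of synchrony rather than amplitude coupling.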
Affiliation(s)
- Timo Saarinen
- Department of Neuroscience and Biomedical Engineering, Aalto University, FI-00076 AALTO, Espoo, Finland
- Aalto NeuroImaging, Aalto University, FI-00076 AALTO, Espoo, Finland
- Address correspondence to Timo Saarinen, Department of Neuroscience and Biomedical Engineering, Aalto University, P.O. Box 12200, FI-00076 AALTO, Espoo, Finland.
| | - Jan Kujala
- Department of Neuroscience and Biomedical Engineering, Aalto University, FI-00076 AALTO, Espoo, Finland
- Department of Psychology, University of Jyväskylä, FI-40014, Jyväskylä, Finland
| | - Hannu Laaksonen
- Department of Neuroscience and Biomedical Engineering, Aalto University, FI-00076 AALTO, Espoo, Finland
- Aalto NeuroImaging, Aalto University, FI-00076 AALTO, Espoo, Finland
| | - Antti Jalava
- Department of Neuroscience and Biomedical Engineering, Aalto University, FI-00076 AALTO, Espoo, Finland
| | - Riitta Salmelin
- Department of Neuroscience and Biomedical Engineering, Aalto University, FI-00076 AALTO, Espoo, Finland
- Aalto NeuroImaging, Aalto University, FI-00076 AALTO, Espoo, Finland
19
Zhou HY, Cheung EFC, Chan RCK. Audiovisual temporal integration: Cognitive processing, neural mechanisms, developmental trajectory and potential interventions. Neuropsychologia 2020; 140:107396. [PMID: 32087206 DOI: 10.1016/j.neuropsychologia.2020.107396]
Abstract
To integrate auditory and visual signals into a unified percept, the paired stimuli must co-occur within a limited time window known as the Temporal Binding Window (TBW). The width of the TBW, a proxy of audiovisual temporal integration ability, has been found to be correlated with higher-order cognitive and social functions. A comprehensive review of studies investigating audiovisual TBW reveals several findings: (1) a wide range of top-down processes and bottom-up features can modulate the width of the TBW, facilitating adaptation to the changing and multisensory external environment; (2) a large-scale brain network works in coordination to ensure successful detection of audiovisual (a)synchrony; (3) developmentally, audiovisual TBW follows a U-shaped pattern across the lifespan, with a protracted developmental course into late adolescence and rebounding in size again in late life; (4) an enlarged TBW is characteristic of a number of neurodevelopmental disorders; and (5) the TBW is highly flexible via perceptual and musical training. Interventions targeting the TBW may be able to improve multisensory function and ameliorate social communicative symptoms in clinical populations.
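One simple proxy for the TBW width described above is the span of stimulus-onset asynchronies (SOAs) at which "synchronous" responses stay at or above 50%. The sketch below estimates that span by linear interpolation between sampled SOAs; this is a simplification (studies typically fit psychometric functions instead), and the data points are invented.

```python
# Estimate temporal binding window (TBW) width from simultaneity-judgment data:
# the width of the SOA interval where the proportion of "synchronous"
# responses is at or above a threshold, via linear interpolation.

def tbw_width(soas, p_sync, threshold=0.5):
    """soas: sorted SOAs (ms); p_sync: proportion 'synchronous' at each SOA."""
    def cross(i):  # interpolate the threshold crossing between points i, i+1
        x0, x1, y0, y1 = soas[i], soas[i + 1], p_sync[i], p_sync[i + 1]
        return x0 + (threshold - y0) * (x1 - x0) / (y1 - y0)
    left = next(cross(i) for i in range(len(soas) - 1)
                if p_sync[i] < threshold <= p_sync[i + 1])
    right = next(cross(i) for i in range(len(soas) - 1)
                 if p_sync[i] >= threshold > p_sync[i + 1])
    return right - left

soas = [-400, -200, 0, 200, 400]       # ms; negative = auditory leading
p    = [0.10, 0.60, 0.95, 0.70, 0.20]  # hypothetical response proportions
```

Note the asymmetry built into such data: the window is usually wider when vision leads, so the left and right crossings need not be mirror images.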
Affiliation(s)
- Han-Yu Zhou
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
| | | | - Raymond C K Chan
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China.
20
Zhou HY, Shi LJ, Yang HX, Cheung EFC, Chan RCK. Audiovisual temporal integration and rapid temporal recalibration in adolescents and adults: Age-related changes and its correlation with autistic traits. Autism Res 2019; 13:615-626. [PMID: 31808321 DOI: 10.1002/aur.2249]
Abstract
Temporal structure is a key factor in determining the relatedness of multisensory stimuli. Stimuli that are close in time are more likely to be integrated into a unified perceptual representation. To investigate age-related developmental differences in audiovisual temporal integration and rapid temporal recalibration, we administered simultaneity judgment (SJ) tasks to a group of adolescents (11-14 years) and young adults (18-28 years). No age-related changes were found in the width of the temporal binding window within which participants are highly likely to combine multisensory stimuli. The main distinction between adolescents and adults was in audiovisual temporal recalibration. Although participants in both age groups could rapidly recalibrate based on the previous trial for speech stimuli (i.e., syllable utterances), adults, but not adolescents, showed short-term recalibration for simple non-speech stimuli. In both adolescents and adults, no significant correlation was found between audiovisual temporal integration ability and autistic or schizotypal traits. These findings provide new information on the developmental trajectory of basic multisensory function and may have implications for neurodevelopmental disorders (e.g., autism) with altered audiovisual temporal integration. Autism Res 2020, 13: 615-626. © 2019 International Society for Autism Research, Wiley Periodicals, Inc. LAY SUMMARY: Utilizing temporal cues to integrate and separate audiovisual information is a fundamental ability underlying higher-order social communicative functions. This study examines the developmental changes in the ability to detect audiovisual asynchrony and rapidly adjust sensory decisions based on previous sensory input. In healthy adolescents and young adults, the correlation between autistic traits and audiovisual integration ability failed to reach significance. Therefore, more research is needed to examine whether impairment in basic sensory functions is correlated with the broader autism phenotype in nonclinical populations. These results may help us understand altered multisensory integration in people with autism.
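The temporal binding window described in this abstract is commonly estimated by fitting a Gaussian to the proportion of "simultaneous" responses across stimulus onset asynchronies (SOAs) in an SJ task. A minimal sketch of that fitting step; the SOAs and response proportions below are illustrative, not data from the study:

```python
import numpy as np
from scipy.optimize import curve_fit

def gaussian(soa, amp, mu, sigma):
    """Proportion of 'simultaneous' responses as a function of SOA (ms)."""
    return amp * np.exp(-((soa - mu) ** 2) / (2 * sigma ** 2))

# Illustrative data: audiovisual SOAs (negative = auditory leading) and
# the proportion of trials judged simultaneous at each SOA.
soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], float)
p_simult = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.85, 0.55, 0.25, 0.10])

(amp, mu, sigma), _ = curve_fit(gaussian, soas, p_simult, p0=[1.0, 0.0, 150.0])

# One common convention takes the window as the SOA range over which the
# fitted curve exceeds half its peak (full width at half maximum).
half_width = sigma * np.sqrt(2 * np.log(2))
print(f"peak near {mu:.0f} ms, binding window ~ +/-{half_width:.0f} ms")
```

A wider fitted `sigma` corresponds to a wider binding window, i.e., a greater tolerance for audiovisual asynchrony.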
Collapse
Affiliation(s)
- Han-Yu Zhou
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China.,Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
| | - Li-Juan Shi
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China.,School of Education, Hunan University of Science and Technology, Xiangtan, China
| | - Han-Xue Yang
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China.,Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
| | - Eric F C Cheung
- Castle Peak Hospital, Hong Kong Special Administrative Region, China
| | - Raymond C K Chan
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China.,Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
| |
Collapse
|
21
|
Hardwick RM, Caspers S, Eickhoff SB, Swinnen SP. Neural correlates of action: Comparing meta-analyses of imagery, observation, and execution. Neurosci Biobehav Rev 2018; 94:31-44. [DOI: 10.1016/j.neubiorev.2018.08.003] [Citation(s) in RCA: 289] [Impact Index Per Article: 41.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/14/2017] [Revised: 08/03/2018] [Accepted: 08/03/2018] [Indexed: 11/30/2022]
|
22
|
Casado-Aranda LA, Van der Laan LN, Sánchez-Fernández J. Neural correlates of gender congruence in audiovisual commercials for gender-targeted products: An fMRI study. Hum Brain Mapp 2018; 39:4360-4372. [PMID: 29964348 DOI: 10.1002/hbm.24276] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/14/2018] [Revised: 05/08/2018] [Accepted: 05/31/2018] [Indexed: 11/07/2022] Open
Abstract
This article explores neural and self-report responses to gender congruence in product-voice combinations in commercials. An fMRI study was carried out in which participants (n = 30) were presented with gender-targeted pictures of characteristic male or female products accompanied by either gender congruent or incongruent voices. The findings show that attitudes are more positive toward commercials with gender congruent than with gender incongruent product-voice combinations. fMRI analyses revealed that primary visual brain areas, namely calcarine and cuneus, responded more strongly to congruent than incongruent combinations, suggesting that participants enhanced their endogenous attention toward congruent commercials. Incongruent combinations, by contrast, elicited stronger activation in areas related to the perception of conflicts in information processing and error monitoring, such as the supramarginal and inferior parietal gyri and the superior and middle temporal gyri. Interestingly, increased activation in the posterior cingulate cortex (an area related to value encoding) predicted more positive attitudes toward congruent commercials. Together, these results advance our understanding of the neural correlates of processing congruent and incongruent audiovisual stimuli. These findings may advise advertising professionals designing campaigns for everyday products, namely to use congruent rather than incongruent product-voice combinations.
Collapse
Affiliation(s)
- Luis-Alberto Casado-Aranda
- Department of Marketing and Market Research, University of Granada, Campus Universitario la Cartuja, Granada, Spain
| | - Laura Nynke Van der Laan
- University of Amsterdam, Amsterdam School of Communication Research (ASCoR), NG Amsterdam, The Netherlands
| | - Juan Sánchez-Fernández
- Department of Marketing and Market Research, University of Granada, Campus Universitario la Cartuja, Granada, Spain
| |
Collapse
|
23
|
Rauschecker JP. Where did language come from? Precursor mechanisms in nonhuman primates. Curr Opin Behav Sci 2018; 21:195-204. [PMID: 30778394 PMCID: PMC6377164 DOI: 10.1016/j.cobeha.2018.06.003] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
At first glance, the monkey brain looks like a smaller version of the human brain. Indeed, the anatomical and functional architecture of the cortical auditory system in monkeys is very similar to that of humans, with dual pathways segregated into a ventral and a dorsal processing stream. Yet, monkeys do not speak. Repeated attempts to pin this inability on one particular cause have failed. A closer look at the necessary components of language, according to Darwin, reveals that all of them got a significant boost during evolution from nonhuman to human primates. The vocal-articulatory system, in particular, has developed into the most sophisticated of all human sensorimotor systems with about a dozen effectors that, in combination with each other, result in an auditory communication system like no other. This sensorimotor network possesses all the ingredients of an internal model system that permits the emergence of sequence processing, as required for phonology and syntax in modern languages.
Collapse
Affiliation(s)
- Josef P Rauschecker
- Department of Neuroscience, Georgetown University, Washington, DC 20057, USA
| |
Collapse
|
24
|
Rosemann S, Thiel CM. Audio-visual speech processing in age-related hearing loss: Stronger integration and increased frontal lobe recruitment. Neuroimage 2018; 175:425-437. [PMID: 29655940 DOI: 10.1016/j.neuroimage.2018.04.023] [Citation(s) in RCA: 58] [Impact Index Per Article: 8.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/23/2017] [Revised: 03/09/2018] [Accepted: 04/09/2018] [Indexed: 11/19/2022] Open
Abstract
Hearing loss is associated with difficulties in understanding speech, especially under adverse listening conditions. In these situations, seeing the speaker improves speech intelligibility in hearing-impaired participants. On the neuronal level, previous research has shown cross-modal plastic reorganization in the auditory cortex following hearing loss, leading to altered processing of auditory, visual and audio-visual information. However, how reduced auditory input affects audio-visual speech perception in hearing-impaired subjects is largely unknown. Here we investigated the impact of mild to moderate age-related hearing loss on processing audio-visual speech using functional magnetic resonance imaging. Normal-hearing and hearing-impaired participants performed two audio-visual speech integration tasks: a sentence detection task inside the scanner and the McGurk illusion outside the scanner. Both tasks consisted of congruent and incongruent audio-visual conditions, as well as auditory-only and visual-only conditions. We found a significantly stronger McGurk illusion in the hearing-impaired participants, which indicates stronger audio-visual integration. Neurally, hearing loss was associated with an increased recruitment of frontal brain areas when processing incongruent audio-visual, auditory and also visual speech stimuli, which may reflect the increased effort to perform the task. Hearing loss modulated both the audio-visual integration strength measured with the McGurk illusion and brain activation in frontal areas in the sentence task, showing stronger integration and higher brain activation with increasing hearing loss. Incongruent compared to congruent audio-visual speech revealed an opposite brain activation pattern in left ventral postcentral gyrus in both groups, with higher activation in hearing-impaired participants in the incongruent condition. Our results indicate that even mild to moderate hearing loss impacts audio-visual speech processing, accompanied by changes in brain activation particularly involving frontal areas. These changes are modulated by the extent of hearing loss.
Collapse
Affiliation(s)
- Stephanie Rosemann
- Biological Psychology, Department of Psychology, Department for Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany.
| | - Christiane M Thiel
- Biological Psychology, Department of Psychology, Department for Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Cluster of Excellence "Hearing4all", Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
| |
Collapse
|
25
|
McCormick K, Lacey S, Stilla R, Nygaard LC, Sathian K. Neural basis of the crossmodal correspondence between auditory pitch and visuospatial elevation. Neuropsychologia 2018; 112:19-30. [PMID: 29501792 DOI: 10.1016/j.neuropsychologia.2018.02.029] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/25/2017] [Revised: 02/22/2018] [Accepted: 02/26/2018] [Indexed: 01/09/2023]
Abstract
Crossmodal correspondences refer to associations between otherwise unrelated stimulus features in different sensory modalities. For example, high and low auditory pitches are associated with high and low visuospatial elevation, respectively. The neural mechanisms underlying crossmodal correspondences are currently unknown. Here, we used functional magnetic resonance imaging (fMRI) to investigate the neural basis of the pitch-elevation correspondence. Pitch-elevation congruency effects were observed bilaterally in the inferior frontal and insular cortex, the right frontal eye field and right inferior parietal cortex. Independent functional localizers failed to provide strong evidence for any of three proposed mechanisms for crossmodal correspondences: semantic mediation, magnitude estimation, and multisensory integration. Instead, pitch-elevation congruency effects overlapped with areas selective for visually presented non-word strings relative to sentences, and with regions sensitive to audiovisual asynchrony. Taken together with the prior literature, the observed congruency effects are most consistent with mediation by multisensory attention.
Collapse
Affiliation(s)
- Kelly McCormick
- Department of Neurology, Emory University, Atlanta, GA 30322, USA; Department of Psychology, Emory University, Atlanta, GA 30322, USA
| | - Simon Lacey
- Department of Neurology, Emory University, Atlanta, GA 30322, USA
| | - Randall Stilla
- Department of Neurology, Emory University, Atlanta, GA 30322, USA
| | - Lynne C Nygaard
- Department of Psychology, Emory University, Atlanta, GA 30322, USA
| | - K Sathian
- Department of Neurology, Emory University, Atlanta, GA 30322, USA; Department of Rehabilitation Medicine, Emory University, Atlanta, GA 30322, USA; Department of Psychology, Emory University, Atlanta, GA 30322, USA; Center for Visual and Neurocognitive Rehabilitation, Atlanta VAMC, Decatur, GA 30033, USA.
| |
Collapse
|
26
|
Jiang J, Borowiak K, Tudge L, Otto C, von Kriegstein K. Neural mechanisms of eye contact when listening to another person talking. Soc Cogn Affect Neurosci 2017; 12:319-328. [PMID: 27576745 PMCID: PMC5390711 DOI: 10.1093/scan/nsw127] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/26/2016] [Accepted: 08/24/2016] [Indexed: 11/14/2022] Open
Abstract
Eye contact occurs frequently and voluntarily during face-to-face verbal communication. However, the neural mechanisms underlying eye contact when it is accompanied by spoken language remain unexplored to date. Here we used a novel approach, fixation-based event-related functional magnetic resonance imaging (fMRI), to simulate the listener making eye contact with a speaker during verbal communication. Participants’ eye movements and fMRI data were recorded simultaneously while they were freely viewing a pre-recorded speaker talking. The eye tracking data were then used to define events for the fMRI analyses. The results showed that eye contact in contrast to mouth fixation involved visual cortical areas (cuneus, calcarine sulcus), brain regions related to theory of mind/intentionality processing (temporoparietal junction, posterior superior temporal sulcus, medial prefrontal cortex) and the dorsolateral prefrontal cortex. In addition, increased effective connectivity was found between these regions for eye contact in contrast to mouth fixations. The results provide the first evidence for neural mechanisms underlying eye contact when watching and listening to another person talking. The network we found might be well suited for processing the intentions of communication partners during eye contact in verbal communication.
Collapse
Affiliation(s)
- Jing Jiang
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany.,Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin 10117, Germany.,Institute of Psychology, Humboldt-Universität zu Berlin, Berlin 12489, Germany
| | - Kamila Borowiak
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany.,Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin 10117, Germany
| | - Luke Tudge
- Berlin School of Mind and Brain, Humboldt-Universität zu Berlin, Berlin 10117, Germany
| | - Carolin Otto
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany
| | - Katharina von Kriegstein
- Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig 04103, Germany.,Institute of Psychology, Humboldt-Universität zu Berlin, Berlin 12489, Germany
| |
Collapse
|
27
|
Erickson LC, Rauschecker JP, Turkeltaub PE. Meta-analytic connectivity modeling of the human superior temporal sulcus. Brain Struct Funct 2016; 222:267-285. [PMID: 27003288 DOI: 10.1007/s00429-016-1215-z] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2015] [Accepted: 03/06/2016] [Indexed: 12/11/2022]
Abstract
The superior temporal sulcus (STS) is a critical region for multiple neural processes in the human brain (Hein and Knight, J Cogn Neurosci 20(12):2125-2136, 2008). To better understand the multiple functions of the STS it would be useful to know more about its consistent functional coactivations with other brain regions. We used the meta-analytic connectivity modeling technique to determine consistent functional coactivation patterns across experiments and behaviors associated with bilateral anterior, middle, and posterior anatomical STS subregions. Based on prevailing models for the cortical organization of audition and language, we broadly hypothesized that across various behaviors the posterior STS (pSTS) would coactivate with dorsal-stream regions, whereas the anterior STS (aSTS) would coactivate with ventral-stream regions. The results revealed distinct coactivation patterns for each STS subregion, with some overlap in the frontal and temporal areas, and generally similar coactivation patterns for the left and right STS. Quantitative comparison of STS subregion coactivation maps demonstrated that the pSTS coactivated more strongly than other STS subregions in the same hemisphere with dorsal-stream regions, such as the inferior parietal lobule (only left pSTS), homotopic pSTS, precentral gyrus and supplementary motor area. In contrast, the aSTS showed more coactivation with some ventral-stream regions, such as the homotopic anterior temporal cortex and left inferior frontal gyrus, pars orbitalis (only right aSTS). These findings demonstrate consistent coactivation maps across experiments and behaviors for different anatomical STS subregions, which may help future studies consider various STS functions in the broader context of generalized coactivations for individuals with and without neurological disorders.
Collapse
Affiliation(s)
- Laura C Erickson
- Neurology Department, Georgetown University Medical Center, 4000 Reservoir Road NW, Building D, Suite 165, Washington, DC, 20057, USA.,Neuroscience Department, Georgetown University Medical Center, 3900 Reservoir Road NW, New Research Building, Room WP19, Washington, DC, 20057, USA
| | - Josef P Rauschecker
- Neuroscience Department, Georgetown University Medical Center, 3900 Reservoir Road NW, New Research Building, Room WP19, Washington, DC, 20057, USA.,Institute for Advanced Study, Technische Universität München, Lichtenbergstraße 2, 85748, Garching bei München, Germany
| | - Peter E Turkeltaub
- Neurology Department, Georgetown University Medical Center, 4000 Reservoir Road NW, Building D, Suite 165, Washington, DC, 20057, USA. .,Research Division, MedStar National Rehabilitation Hospital, 102 Irving St NW, Washington, DC, 20010, USA.
| |
Collapse
|
28
|
Xia Z, Hoeft F, Zhang L, Shu H. Neuroanatomical anomalies of dyslexia: Disambiguating the effects of disorder, performance, and maturation. Neuropsychologia 2015; 81:68-78. [PMID: 26679527 DOI: 10.1016/j.neuropsychologia.2015.12.003] [Citation(s) in RCA: 40] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/30/2015] [Revised: 09/17/2015] [Accepted: 12/06/2015] [Indexed: 12/26/2022]
Abstract
An increasing body of studies has revealed neuroanatomical impairments in developmental dyslexia. However, whether these structural anomalies are driven by dyslexia (disorder-specific effects), absolute reading performance (performance-dependent effects), and/or further influenced by age (maturation-sensitive effects) remains elusive. To help disentangle these sources, the current study used a novel disorder (dyslexia vs. control) by maturation (younger vs. older) factorial design in 48 carefully matched Chinese children. This design not only allows for direct comparison between dyslexics and controls matched for chronological age and reading ability, but also enables examination of the influence of maturation and its interaction with dyslexia. Voxel-based morphometry (VBM) showed that dyslexic children had reduced regional gray matter volume in the left temporo-parietal cortex (spanning Heschl's gyrus, the planum temporale and the supramarginal gyrus), middle frontal gyrus and superior occipital gyrus, and reduced regional white matter in bilateral parieto-occipital regions (left cuneus and right precuneus) compared with both age-matched and reading-level-matched controls. Therefore, maturational stage-invariant neurobiological signatures of dyslexia were found in brain regions that have been associated with impairments in the auditory/phonological and attentional systems. On the other hand, maturational stage-dependent effects on dyslexia were observed in three regions (left ventral occipito-temporal cortex, left dorsal pars opercularis and genu of the corpus callosum), all of which were previously reported to be involved in fluent reading and its development. These striking dissociations collectively suggest potential atypical developmental trajectories of dyslexia, where underlying mechanisms are currently unknown but may be driven by interactions between genetic and/or environmental factors. In summary, despite the limitation of a relatively small sample size, this is the first study to disambiguate the effects of disorder, reading performance and maturational stage on the neuroanatomical anomalies of dyslexia. These results will hopefully encourage future research to place greater emphasis on taking a developmental perspective on dyslexia, which may, in turn, further our understanding of the etiological basis of this neurodevelopmental disorder and ultimately optimize early identification and remediation.
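The VBM group contrast described in this abstract is, at its core, a mass-univariate comparison of smoothed tissue-volume maps between groups. A minimal sketch with synthetic data (the arrays, sample sizes and planted effect are illustrative; real pipelines use cluster-level or FDR correction rather than Bonferroni):

```python
import numpy as np
from scipy import stats

rng = np.random.default_rng(1)
n_voxels = 1000

# Synthetic smoothed gray-matter volume maps for two matched groups
# of 24 children each (one value per voxel per subject).
controls = rng.normal(loc=1.0, scale=0.1, size=(24, n_voxels))
dyslexic = rng.normal(loc=1.0, scale=0.1, size=(24, n_voxels))
dyslexic[:, :50] -= 0.15          # planted regional reduction, voxels 0-49

# Mass-univariate two-sample t-test at every voxel, as in VBM.
t, p = stats.ttest_ind(dyslexic, controls, axis=0)

# Naive multiple-comparison control via Bonferroni, for illustration only.
significant = p < (0.05 / n_voxels)
print(f"{significant.sum()} voxels survive correction")
```

The surviving voxels cluster in the region where the reduction was planted, mirroring how VBM localizes group differences; the study's additional reading-level-matched comparison would repeat the same contrast against a second control group.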
Collapse
Affiliation(s)
- Zhichao Xia
- State Key Lab of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China; Center for Collaboration and Innovation in Brain and Learning Sciences, Beijing Normal University, Beijing 100875, China; Division of Child and Adolescent Psychiatry, Department of Psychiatry, UCSF, 401 Parnassus Ave, San Francisco, CA 94143, USA
| | - Fumiko Hoeft
- Division of Child and Adolescent Psychiatry, Department of Psychiatry, UCSF, 401 Parnassus Ave, San Francisco, CA 94143, USA; Haskins Laboratories, 300 George St #900, New Haven, CT 06511, USA; Department of Neuropsychiatry, Keio University School of Medicine, 35 Shinanomachi Shinjuku Tokyo, 160-8582, Japan
| | - Linjun Zhang
- College of Chinese Studies, Beijing Language and Culture University, Beijing 100083, China
| | - Hua Shu
- State Key Lab of Cognitive Neuroscience and Learning & IDG/McGovern Institute for Brain Research, Beijing Normal University, Beijing 100875, China; Center for Collaboration and Innovation in Brain and Learning Sciences, Beijing Normal University, Beijing 100875, China.
| |
Collapse
|
29
|
Man K, Damasio A, Meyer K, Kaplan JT. Convergent and invariant object representations for sight, sound, and touch. Hum Brain Mapp 2015; 36:3629-40. [PMID: 26047030 DOI: 10.1002/hbm.22867] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2014] [Revised: 05/21/2015] [Accepted: 05/21/2015] [Indexed: 12/30/2022] Open
Abstract
We continuously perceive objects in the world through multiple sensory channels. In this study, we investigated the convergence of information from different sensory streams within the cerebral cortex. We presented volunteers with three common objects via three different modalities (sight, sound, and touch) and used multivariate pattern analysis of functional magnetic resonance imaging data to map the cortical regions containing information about the identity of the objects. We could reliably predict which of the three stimuli a subject had seen, heard, or touched from the pattern of neural activity in the corresponding early sensory cortices. Intramodal classification was also successful in large portions of the cerebral cortex beyond the primary areas, with multiple regions showing convergence of information from two or all three modalities. Using crossmodal classification, we also searched for brain regions that would represent objects in a similar fashion across different modalities of presentation. We trained a classifier to distinguish objects presented in one modality and then tested it on the same objects presented in a different modality. We detected audiovisual invariance in the right temporo-occipital junction, audiotactile invariance in the left postcentral gyrus and parietal operculum, and visuotactile invariance in the right postcentral and supramarginal gyri. Our maps of multisensory convergence and crossmodal generalization reveal the underlying organization of the association cortices, and may be related to the neural basis for mental concepts.
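The crossmodal classification logic described here (train on patterns from one modality, test on the same objects in another) can be sketched with synthetic voxel patterns; the data, noise levels and classifier choice below are illustrative assumptions, not the study's actual pipeline:

```python
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(0)
n_voxels, n_trials = 50, 60

# Synthetic "ROI" patterns: three objects share a modality-invariant
# template plus a modality-specific offset and trial noise.
object_templates = rng.normal(size=(3, n_voxels))

def simulate(modality_offset):
    labels = rng.integers(0, 3, n_trials)
    patterns = (object_templates[labels]
                + modality_offset
                + 0.5 * rng.normal(size=(n_trials, n_voxels)))
    return patterns, labels

X_vis, y_vis = simulate(0.2 * rng.normal(size=n_voxels))   # "visual" runs
X_aud, y_aud = simulate(0.2 * rng.normal(size=n_voxels))   # "auditory" runs

# Crossmodal decoding: fit on visual trials, test on auditory trials.
# Above-chance accuracy (> 1/3) indicates a modality-invariant object code.
clf = LinearSVC(dual=False).fit(X_vis, y_vis)
accuracy = clf.score(X_aud, y_aud)
print(f"crossmodal accuracy: {accuracy:.2f} (chance = 0.33)")
```

In a region without a shared code the templates would differ between modalities, and crossmodal accuracy would fall to chance even when intramodal decoding succeeds.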
Collapse
Affiliation(s)
- Kingson Man
- Brain and Creativity Institute, University of Southern California, Los Angeles, California, 90089
| | - Antonio Damasio
- Brain and Creativity Institute, University of Southern California, Los Angeles, California, 90089
| | - Kaspar Meyer
- Brain and Creativity Institute, University of Southern California, Los Angeles, California, 90089.,Institute of Anesthesiology, University Hospital, University of Zurich, Zurich, Switzerland
| | - Jonas T Kaplan
- Brain and Creativity Institute, University of Southern California, Los Angeles, California, 90089
| |
Collapse
|
30
|
Prediction and constraint in audiovisual speech perception. Cortex 2015; 68:169-81. [PMID: 25890390 DOI: 10.1016/j.cortex.2015.03.006] [Citation(s) in RCA: 128] [Impact Index Per Article: 12.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/01/2014] [Revised: 01/28/2015] [Accepted: 03/08/2015] [Indexed: 11/23/2022]
Abstract
During face-to-face conversational speech, listeners must efficiently process a rapid and complex stream of multisensory information. Visual speech can serve as a critical complement to auditory information because it provides cues to both the timing of the incoming acoustic signal (the amplitude envelope, influencing attention and perceptual sensitivity) and its content (place and manner of articulation, constraining lexical selection). Here we review behavioral and neurophysiological evidence regarding listeners' use of visual speech information. Multisensory integration of audiovisual speech cues improves recognition accuracy, particularly for speech in noise. Even when speech is intelligible based solely on auditory information, adding visual information may reduce the cognitive demands placed on listeners by increasing the precision of prediction. Electrophysiological studies demonstrate that oscillatory cortical entrainment to speech in auditory cortex is enhanced when visual speech is present, increasing sensitivity to important acoustic cues. Neuroimaging studies also suggest increased activity in auditory cortex when congruent visual information is available, but additionally emphasize the involvement of heteromodal regions of posterior superior temporal sulcus as playing a role in integrative processing. We interpret these findings in a framework of temporally-focused lexical competition in which visual speech information affects auditory processing to increase sensitivity to acoustic information through an early integration mechanism, and a late integration stage that incorporates specific information about a speaker's articulators to constrain the number of possible candidates in a spoken utterance. Ultimately it is words compatible with both auditory and visual information that most strongly determine successful speech perception during everyday listening. Thus, audiovisual speech perception is accomplished through multiple stages of integration, supported by distinct neuroanatomical mechanisms.
Collapse
|