1
Xue K, Chen J, Wei Y, Chen Y, Han S, Wang C, Zhang Y, Song X, Cheng J. Altered dynamic functional connectivity of auditory cortex and medial geniculate nucleus in first-episode, drug-naïve schizophrenia patients with and without auditory verbal hallucinations. Front Psychiatry 2022; 13:963634. [PMID: 36159925 PMCID: PMC9489854 DOI: 10.3389/fpsyt.2022.963634] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/07/2022] [Accepted: 08/18/2022] [Indexed: 11/25/2022] Open
Abstract
BACKGROUND AND OBJECTIVE Auditory verbal hallucination (AVH), a key feature of schizophrenia, has drawn increasing concern. Altered dynamic functional connectivity (dFC) patterns involving auditory-related regions have rarely been reported in schizophrenia patients with AVH. The goal of this research was to identify dFC abnormalities of auditory-related regions in first-episode, drug-naïve schizophrenia patients with and without AVH using resting-state functional magnetic resonance imaging (rs-fMRI). METHODS A total of 107 schizophrenia patients with AVH, 85 schizophrenia patients without AVH (NAVH), and 104 matched healthy controls (HC) underwent rs-fMRI examinations. Seed-based dFC of the primary auditory cortex (Heschl's gyrus, HES), auditory association cortex (AAC, including Brodmann's areas 22 and 42), and medial geniculate nucleus (MGN) was computed to build a whole-brain dFC map, followed by inter-group comparisons and correlation analyses. RESULTS In comparison with the NAVH and HC groups, the AVH group showed increased dFC from the left AAC to the right middle temporal gyrus and right middle occipital gyrus; decreased dFC from the left HES to the left superior occipital gyrus, left cuneus gyrus, and left precuneus gyrus; decreased dFC from the right HES to the posterior cingulate gyrus; and decreased dFC from the left MGN to the bilateral calcarine gyrus, bilateral cuneus gyrus, and bilateral lingual gyrus. In the AVH group, the Auditory Hallucination Rating Scale (AHRS) score was significantly positively correlated with the dFC values of cluster 1 (bilateral calcarine gyrus, cuneus gyrus, lingual gyrus, superior occipital gyrus, precuneus gyrus, and posterior cingulate gyrus) and cluster 2 (right middle temporal gyrus and right middle occipital gyrus) of the left AAC seed, cluster 1 (bilateral calcarine gyrus, cuneus gyrus, lingual gyrus, superior occipital gyrus, precuneus gyrus, and posterior cingulate gyrus) of the right AAC seed, and cluster 2 (posterior cingulate gyrus) of the right HES seed. In both the AVH and NAVH groups, a significant negative correlation was also found between the dFC values of cluster 2 (posterior cingulate gyrus) of the right HES seed and the PANSS negative sub-scores. CONCLUSIONS Schizophrenia patients with AVH showed multiple regions of abnormal dFC when auditory-related cortices and the medial geniculate nucleus were used as seeds, particularly involving the occipital lobe, default mode network (DMN), and middle temporal lobe, implying that distinct dFC patterns of auditory-related areas could underpin a neural mechanism of AVH in schizophrenia.
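Seed-based dFC of this kind is commonly implemented as sliding-window correlations between a seed time series and every voxel, with window-wise maps then compared across groups and correlated with symptom scores such as the AHRS. Below is a minimal sketch of that generic approach, not the authors' pipeline; window length, step size, and variable names are assumptions for illustration only.

```python
import numpy as np

def sliding_window_dfc(seed_ts, voxel_ts, win_len=30, step=2):
    """Windowed seed-to-voxel Pearson correlations; returns (n_windows, n_voxels).
    seed_ts: (T,) mean time series of a seed ROI (e.g., left HES);
    voxel_ts: (T, V) denoised voxel time series."""
    n_t, n_vox = voxel_ts.shape
    starts = list(range(0, n_t - win_len + 1, step))
    dfc = np.empty((len(starts), n_vox))
    for i, s in enumerate(starts):
        seed_w = seed_ts[s:s + win_len]
        vox_w = voxel_ts[s:s + win_len]
        seed_z = (seed_w - seed_w.mean()) / seed_w.std()
        vox_z = (vox_w - vox_w.mean(axis=0)) / vox_w.std(axis=0)
        dfc[i] = seed_z @ vox_z / win_len   # Pearson r per voxel in this window
    return dfc

def dfc_variability(dfc):
    """Temporal variability of dFC (SD across windows), one value per voxel;
    a common summary compared across groups (AVH vs. NAVH vs. HC)."""
    return dfc.std(axis=0)
```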
Affiliation(s)
- Kangkang Xue
- Department of Magnetic Resonance Imaging, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
| | - Jingli Chen
- Department of Magnetic Resonance Imaging, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
| | - Yarui Wei
- Department of Magnetic Resonance Imaging, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
| | - Yuan Chen
- Department of Magnetic Resonance Imaging, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
| | - Shaoqiang Han
- Department of Magnetic Resonance Imaging, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
| | - Caihong Wang
- Department of Magnetic Resonance Imaging, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
| | - Yong Zhang
- Department of Magnetic Resonance Imaging, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
| | - Xueqin Song
- Department of Psychiatry, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
| | - Jingliang Cheng
- Department of Magnetic Resonance Imaging, The First Affiliated Hospital of Zhengzhou University, Zhengzhou, China
2
Aedo-Jury F, Cottereau BR, Celebrini S, Séverac Cauquil A. Antero-Posterior vs. Lateral Vestibular Input Processing in Human Visual Cortex. Front Integr Neurosci 2020; 14:43. [PMID: 32848650 PMCID: PMC7430162 DOI: 10.3389/fnint.2020.00043] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2019] [Accepted: 07/10/2020] [Indexed: 11/13/2022] Open
Abstract
Visuo-vestibular integration is crucial for locomotion, yet the cortical mechanisms involved remain poorly understood. We combined binaural monopolar galvanic vestibular stimulation (GVS) and functional magnetic resonance imaging (fMRI) to characterize the cortical networks activated during antero-posterior and lateral stimulations in humans. We focused on functional areas that selectively respond to egomotion-consistent optic flow patterns: the human middle temporal complex (hMT+), V6, the ventral intraparietal (VIP) area, the cingulate sulcus visual (CSv) area and the posterior insular cortex (PIC). Areas hMT+, CSv, and PIC were equivalently responsive during lateral and antero-posterior GVS while areas VIP and V6 were highly activated during antero-posterior GVS, but remained silent during lateral GVS. Using psychophysiological interaction (PPI) analyses, we confirmed that a cortical network including areas V6 and VIP is engaged during antero-posterior GVS. Our results suggest that V6 and VIP play a specific role in processing multisensory signals specific to locomotion during navigation.
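The PPI analysis mentioned here tests how a seed region's coupling with the rest of the brain changes with the experimental condition. The sketch below is a deliberately simplified, generic illustration of how the PPI regressor is formed (full implementations, e.g., in SPM or FSL, deconvolve the seed signal to the neural level first); all variable names are assumptions.

```python
import numpy as np

def ppi_design(seed_bold, task_onoff):
    """Simplified PPI design columns: [seed, task, seed-by-task interaction].
    seed_bold: (T,) seed-region BOLD signal; task_onoff: (T,) 1 during one
    condition's blocks (e.g., antero-posterior GVS), 0 otherwise. A full PPI
    deconvolves seed_bold to a neural-level signal before forming the product
    and reconvolving with the HRF; that step is omitted here for brevity."""
    seed_c = seed_bold - seed_bold.mean()
    task_c = task_onoff - task_onoff.mean()
    interaction = seed_c * task_c
    return np.column_stack([seed_c, task_c, interaction])
```

The beta for the interaction column, with seed and task included as covariates, indexes condition-dependent changes in coupling with the seed, which is the PPI effect of interest.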
Affiliation(s)
- Felipe Aedo-Jury
- Centre de Recherche Cerveau et Cognition, Université Toulouse III Paul Sabatier, Toulouse, France
- Centre National de la Recherche Scientifique, Toulouse, France
| | - Benoit R. Cottereau
- Centre de Recherche Cerveau et Cognition, Université Toulouse III Paul Sabatier, Toulouse, France
- Centre National de la Recherche Scientifique, Toulouse, France
| | - Simona Celebrini
- Centre de Recherche Cerveau et Cognition, Université Toulouse III Paul Sabatier, Toulouse, France
- Centre National de la Recherche Scientifique, Toulouse, France
| | - Alexandra Séverac Cauquil
- Centre de Recherche Cerveau et Cognition, Université Toulouse III Paul Sabatier, Toulouse, France
- Centre National de la Recherche Scientifique, Toulouse, France
3
Rahman MS, Barnes KA, Crommett LE, Tommerdahl M, Yau JM. Auditory and tactile frequency representations are co-embedded in modality-defined cortical sensory systems. Neuroimage 2020; 215:116837. [PMID: 32289461 PMCID: PMC7292761 DOI: 10.1016/j.neuroimage.2020.116837] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/13/2019] [Revised: 03/17/2020] [Accepted: 04/06/2020] [Indexed: 11/18/2022] Open
Abstract
Sensory information is represented and elaborated in hierarchical cortical systems that are thought to be dedicated to individual sensory modalities. This traditional view of sensory cortex organization has been challenged by recent evidence of multimodal responses in primary and association sensory areas. Although it is indisputable that sensory areas respond to multiple modalities, it remains unclear whether these multimodal responses reflect selective information processing for particular stimulus features. Here, we used fMRI adaptation to identify brain regions that are sensitive to the temporal frequency information contained in auditory, tactile, and audiotactile stimulus sequences. A number of brain regions distributed over the parietal and temporal lobes exhibited frequency-selective temporal response modulation for both auditory and tactile stimulus events, as indexed by repetition suppression effects. A smaller set of regions responded to crossmodal adaptation sequences in a frequency-dependent manner. Despite an extensive overlap of multimodal frequency-selective responses across the parietal and temporal lobes, representational similarity analysis revealed a cortical "regional landscape" that clearly reflected distinct somatosensory and auditory processing systems that converged on modality-invariant areas. These structured relationships between brain regions were also evident in spontaneous signal fluctuation patterns measured at rest. Our results reveal that multimodal processing in human cortex can be feature-specific and that multimodal frequency representations are embedded in the intrinsically hierarchical organization of cortical sensory systems.
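The "regional landscape" result rests on representational similarity analysis, which compares regions by the similarity of their response profiles. A minimal sketch of that general idea follows; input structure and names are assumptions, and this is an illustration rather than the authors' analysis.

```python
import numpy as np
from scipy.spatial.distance import pdist, squareform

def regional_landscape(region_profiles):
    """Correlation-distance matrix between regional response profiles.
    region_profiles: dict mapping a region name to a 1-D response profile
    (e.g., adaptation responses across auditory/tactile frequency conditions).
    Clustering or MDS on the returned matrix can reveal modality groupings
    and convergence zones."""
    names = sorted(region_profiles)
    profiles = np.vstack([region_profiles[name] for name in names])
    rdm = squareform(pdist(profiles, metric="correlation"))   # 1 - Pearson r
    return names, rdm
```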
Affiliation(s)
- Md Shoaibur Rahman
- Department of Neuroscience, Baylor College of Medicine, Houston, One Baylor Plaza, Houston, TX, 77030, USA
| | - Kelly Anne Barnes
- Department of Neuroscience, Baylor College of Medicine, Houston, One Baylor Plaza, Houston, TX, 77030, USA; Department of Behavioral and Social Sciences, San Jacinto College - South, Houston, 13735 Beamer Rd, S13.269, Houston, TX, 77089, USA
| | - Lexi E Crommett
- Department of Neuroscience, Baylor College of Medicine, Houston, One Baylor Plaza, Houston, TX, 77030, USA
| | - Mark Tommerdahl
- Department of Biomedical Engineering, University of North Carolina at Chapel Hill, CB No. 7575, Chapel Hill, NC, 27599, USA
| | - Jeffrey M Yau
- Department of Neuroscience, Baylor College of Medicine, Houston, One Baylor Plaza, Houston, TX, 77030, USA.
4
Shiohama T, McDavid J, Levman J, Takahashi E. The left lateral occipital cortex exhibits decreased thickness in children with sensorineural hearing loss. Int J Dev Neurosci 2019; 76:34-40. [PMID: 31173823 DOI: 10.1016/j.ijdevneu.2019.05.009] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/22/2019] [Revised: 05/10/2019] [Accepted: 05/30/2019] [Indexed: 10/26/2022] Open
Abstract
Patients with sensorineural hearing loss (SNHL) tend to show language delay, executive functioning deficits, and visual cognitive impairment even after intervention with hearing amplification or cochlear implants, which suggests altered brain structure and function in SNHL patients. In this study, we investigated structural brain MRI in 30 children with SNHL (18 with mild to moderate [M-M] SNHL and 12 with moderately severe to profound [M-P] SNHL) compared with gender- and age-matched normal controls (NC). Region-based analyses did not show statistically significant differences in the volumes of the cerebrum, basal ganglia, cerebellum, or ventricles between SNHL and NC. On surface-based analyses, global and lobar cortical surface area, thickness, and volume did not differ significantly between SNHL and NC participants. Regional surface areas, cortical thicknesses, and cortical volumes were significantly smaller in M-P SNHL than in NC in the left middle occipital cortex and left inferior occipital cortex after correction for multiple comparisons using random field theory (p < 0.02). These regions were identified as areas known to be related to high-level visual cognition, including the human middle temporal area, lateral occipital area, occipital face area, and V8. The observed regional decreases in thickness in M-P SNHL may be associated with dysfunctions of visual cognition in SNHL detectable in a clinical setting.
Affiliation(s)
- Tadashi Shiohama
- Division of Newborn Medicine, Department of Medicine, Boston Children's Hospital, Harvard Medical School, 300 Longwood Avenue, Boston, MA, 02115, USA.,Department of Pediatrics, Chiba University Hospital, Inohana 1-8-1, Chiba-shi, Chiba, 2608670, Japan
| | - Jeremy McDavid
- Division of Newborn Medicine, Department of Medicine, Boston Children's Hospital, Harvard Medical School, 300 Longwood Avenue, Boston, MA, 02115, USA
| | - Jacob Levman
- Division of Newborn Medicine, Department of Medicine, Boston Children's Hospital, Harvard Medical School, 300 Longwood Avenue, Boston, MA, 02115, USA.,Department of Mathematics, Statistics and Computer Science, St. Francis Xavier University, 2323 Notre Dame Ave, Antigonish, Nova Scotia, B2G 2W5, Canada
| | - Emi Takahashi
- Division of Newborn Medicine, Department of Medicine, Boston Children's Hospital, Harvard Medical School, 300 Longwood Avenue, Boston, MA, 02115, USA
5
Wang X, Gu J, Xu J, Li X, Geng J, Wang B, Liu B. Decoding natural scenes based on sounds of objects within scenes using multivariate pattern analysis. Neurosci Res 2018; 148:9-18. [PMID: 30513353 DOI: 10.1016/j.neures.2018.11.009] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/27/2018] [Revised: 11/21/2018] [Accepted: 11/30/2018] [Indexed: 10/27/2022]
Abstract
Scene recognition plays an important role in spatial navigation and scene classification. It remains unknown whether the occipitotemporal cortex represents the semantic association between scenes and the sounds of objects within those scenes. In this study, we used functional magnetic resonance imaging (fMRI) and multivariate pattern analysis to assess whether different scenes could be discriminated based on the patterns evoked by sounds of objects within the scenes. We found that patterns evoked by scenes could be predicted from patterns evoked by sounds of objects within the scenes in the posterior fusiform area (pF), lateral occipital area (LO), and superior temporal sulcus (STS). Further functional connectivity analysis revealed significant correlations between pF, LO, and the parahippocampal place area (PPA), but not between STS and the other three regions, under the scene and sound conditions. A distinct network for processing scenes and sounds was discovered using a seed-to-voxel analysis with STS as the seed. This study may provide a cross-modal channel of scene decoding through the sounds of objects within scenes in the occipitotemporal cortex, complementing the single-modal channel of scene decoding based on global scene properties or objects within the scenes.
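Cross-modal decoding of this kind is typically done by training a classifier on patterns from one modality and testing it on the other. The following is a minimal, generic sketch under assumed inputs (ROI patterns and shared category labels); it is not the authors' implementation.

```python
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import LinearSVC

def crossmodal_decoding(X_sound, y_sound, X_scene, y_scene):
    """Train on sound-evoked ROI patterns, test on scene-evoked patterns.
    X_*: (n_trials, n_voxels) single-trial patterns; y_*: scene-category labels
    shared by both modalities. Above-chance test accuracy would indicate a
    representational format common to the two modalities."""
    clf = make_pipeline(StandardScaler(), LinearSVC())
    clf.fit(X_sound, y_sound)
    return clf.score(X_scene, y_scene)
```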
Affiliation(s)
- Xiaojing Wang
- College of Intelligence and Computing, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, 300350, China
| | - Jin Gu
- College of Intelligence and Computing, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, 300350, China
| | - Junhai Xu
- College of Intelligence and Computing, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, 300350, China
| | - Xianglin Li
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, Shandong 264003, China
| | - Junzu Geng
- Department of Radiology, Yantai Affiliated Hospital of Binzhou Medical University, Yantai, Shandong, 264003, China
| | - Bin Wang
- Medical Imaging Research Institute, Binzhou Medical University, Yantai, Shandong 264003, China
| | - Baolin Liu
- College of Intelligence and Computing, Tianjin Key Laboratory of Cognitive Computing and Application, Tianjin University, Tianjin, 300350, China; State Key Laboratory of Intelligent Technology and Systems, National Laboratory for Information Science and Technology, Tsinghua University, Beijing, 100084, China.
6
Embodying functionally relevant action sounds in patients with spinal cord injury. Sci Rep 2018; 8:15641. [PMID: 30353071 PMCID: PMC6199269 DOI: 10.1038/s41598-018-34133-z] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/29/2018] [Accepted: 10/06/2018] [Indexed: 02/06/2023] Open
Abstract
Growing evidence indicates that perceptual-motor codes may be associated with and influenced by actual bodily states. Following a spinal cord injury (SCI), for example, individuals exhibit reduced visual sensitivity to biological motion. However, a dearth of direct evidence exists about whether profound alterations in sensorimotor traffic between the body and brain influence audio-motor representations. We tested 20 wheelchair-bound individuals with lower skeletal-level SCI who were unable to feel and move their lower limbs, but have retained upper limb function. In a two-choice, matching-to-sample auditory discrimination task, the participants were asked to determine which of two action sounds matched a sample action sound presented previously. We tested aural discrimination ability using sounds that arose from wheelchair, upper limb, lower limb, and animal actions. Our results indicate that an inability to move the lower limbs did not lead to impairment in the discrimination of lower limb-related action sounds in SCI patients. Importantly, patients with SCI discriminated wheelchair sounds more quickly than individuals with comparable auditory experience (i.e. physical therapists) and inexperienced, able-bodied subjects. Audio-motor associations appear to be modified and enhanced to incorporate external salient tools that now represent extensions of their body schemas.
7
Giovannelli F, Giganti F, Righi S, Peru A, Borgheresi A, Zaccara G, Viggiano M, Cincotta M. Audio–visual integration effect in lateral occipital cortex during an object recognition task: An interference pilot study. Brain Stimul 2016; 9:574-6. [DOI: 10.1016/j.brs.2016.02.009] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2015] [Revised: 02/10/2016] [Accepted: 02/11/2016] [Indexed: 11/29/2022] Open
8
Role of features and categories in the organization of object knowledge: Evidence from adaptation fMRI. Cortex 2016; 78:174-194. [DOI: 10.1016/j.cortex.2016.01.006] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2015] [Revised: 12/11/2015] [Accepted: 01/05/2016] [Indexed: 11/29/2022]
9
Sarmiento BR, Matusz PJ, Sanabria D, Murray MM. Contextual factors multiplex to control multisensory processes. Hum Brain Mapp 2015; 37:273-88. [PMID: 26466522 DOI: 10.1002/hbm.23030] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2015] [Revised: 10/02/2015] [Accepted: 10/05/2015] [Indexed: 12/22/2022] Open
Abstract
This study analyzed high-density event-related potentials (ERPs) within an electrical neuroimaging framework to provide insights regarding the interaction between multisensory processes and stimulus probabilities. Specifically, we identified the spatiotemporal brain mechanisms by which the proportion of temporally congruent and task-irrelevant auditory information influences stimulus processing during a visual duration discrimination task. The spatial position (top/bottom) of the visual stimulus was indicative of how frequently the visual and auditory stimuli would be congruent in their duration (i.e., context of congruence). Stronger influences of irrelevant sound were observed when contexts associated with a high proportion of auditory-visual congruence repeated and also when contexts associated with a low proportion of congruence switched. Context of congruence and context transition resulted in weaker brain responses at 228 to 257 ms poststimulus to conditions giving rise to larger behavioral cross-modal interactions. Importantly, a control oddball task revealed that both congruent and incongruent audiovisual stimuli triggered equivalent non-linear multisensory interactions when congruence was not a relevant dimension. Collectively, these results are well explained by statistical learning, which links a particular context (here: a spatial location) with a certain level of top-down attentional control that further modulates cross-modal interactions based on whether a particular context repeated or changed. The current findings shed new light on the importance of context-based control over multisensory processing, whose influences multiplex across finer and broader time scales.
Affiliation(s)
- Beatriz R Sarmiento
- Brain, Mind and Behavior Research Center, Universidad De Granada, Spain.,Departamento De Psicología Experimental, Universidad De Granada, Spain
| | - Pawel J Matusz
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology and Department of Clinical Neurosciences, University Hospital Centre and University of Lausanne, Lausanne, Switzerland.,Faculty in Wroclaw, University of Social Sciences and Humanities, Wroclaw, Poland.,Department of Experimental Psychology, Attention, Brain and Cognitive Development Group, University of Oxford, United Kingdom
| | - Daniel Sanabria
- Brain, Mind and Behavior Research Center, Universidad De Granada, Spain.,Departamento De Psicología Experimental, Universidad De Granada, Spain
| | - Micah M Murray
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Radiology and Department of Clinical Neurosciences, University Hospital Centre and University of Lausanne, Lausanne, Switzerland.,Electroencephalography Brain Mapping Core, Centre for Biomedical Imaging (CIBM), Lausanne and Geneva, Switzerland.,Department of Ophthalmology, University of Lausanne, Jules-Gonin Eye Hospital, Lausanne, Switzerland.,Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
10
Man K, Damasio A, Meyer K, Kaplan JT. Convergent and invariant object representations for sight, sound, and touch. Hum Brain Mapp 2015; 36:3629-40. [PMID: 26047030 DOI: 10.1002/hbm.22867] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/16/2014] [Revised: 05/21/2015] [Accepted: 05/21/2015] [Indexed: 12/30/2022] Open
Abstract
We continuously perceive objects in the world through multiple sensory channels. In this study, we investigated the convergence of information from different sensory streams within the cerebral cortex. We presented volunteers with three common objects via three different modalities-sight, sound, and touch-and used multivariate pattern analysis of functional magnetic resonance imaging data to map the cortical regions containing information about the identity of the objects. We could reliably predict which of the three stimuli a subject had seen, heard, or touched from the pattern of neural activity in the corresponding early sensory cortices. Intramodal classification was also successful in large portions of the cerebral cortex beyond the primary areas, with multiple regions showing convergence of information from two or all three modalities. Using crossmodal classification, we also searched for brain regions that would represent objects in a similar fashion across different modalities of presentation. We trained a classifier to distinguish objects presented in one modality and then tested it on the same objects presented in a different modality. We detected audiovisual invariance in the right temporo-occipital junction, audiotactile invariance in the left postcentral gyrus and parietal operculum, and visuotactile invariance in the right postcentral and supramarginal gyri. Our maps of multisensory convergence and crossmodal generalization reveal the underlying organization of the association cortices, and may be related to the neural basis for mental concepts.
Affiliation(s)
- Kingson Man
- Brain and Creativity Institute, University of Southern California, Los Angeles, California, 90089
| | - Antonio Damasio
- Brain and Creativity Institute, University of Southern California, Los Angeles, California, 90089
| | - Kaspar Meyer
- Brain and Creativity Institute, University of Southern California, Los Angeles, California, 90089.,Institute of Anesthesiology, University Hospital, University of Zurich, Zurich, Switzerland
| | - Jonas T Kaplan
- Brain and Creativity Institute, University of Southern California, Los Angeles, California, 90089
11
Involvement of the human midbrain and thalamus in auditory deviance detection. Neuropsychologia 2015; 68:51-8. [DOI: 10.1016/j.neuropsychologia.2015.01.001] [Citation(s) in RCA: 36] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2014] [Revised: 12/29/2014] [Accepted: 01/01/2015] [Indexed: 10/24/2022]
12
Raz A, Grady SM, Krause BM, Uhlrich DJ, Manning KA, Banks MI. Preferential effect of isoflurane on top-down vs. bottom-up pathways in sensory cortex. Front Syst Neurosci 2014; 8:191. [PMID: 25339873 PMCID: PMC4188029 DOI: 10.3389/fnsys.2014.00191] [Citation(s) in RCA: 68] [Impact Index Per Article: 6.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/12/2014] [Accepted: 09/18/2014] [Indexed: 12/31/2022] Open
Abstract
The mechanism of loss of consciousness (LOC) under anesthesia is unknown. Because consciousness depends on activity in the cortico-thalamic network, anesthetic actions on this network are likely critical for LOC. Competing theories stress the importance of anesthetic actions on bottom-up “core” thalamo-cortical (TC) vs. top-down cortico-cortical (CC) and matrix TC connections. We tested these models using laminar recordings in rat auditory cortex in vivo and murine brain slices. We selectively activated bottom-up vs. top-down afferent pathways using sensory stimuli in vivo and electrical stimulation in brain slices, and compared effects of isoflurane on responses evoked via the two pathways. Auditory stimuli in vivo and core TC afferent stimulation in brain slices evoked short latency current sinks in middle layers, consistent with activation of core TC afferents. By contrast, visual stimuli in vivo and stimulation of CC and matrix TC afferents in brain slices evoked responses mainly in superficial and deep layers, consistent with projection patterns of top-down afferents that carry visual information to auditory cortex. Responses to auditory stimuli in vivo and core TC afferents in brain slices were significantly less affected by isoflurane compared to responses triggered by visual stimuli in vivo and CC/matrix TC afferents in slices. At a just-hypnotic dose in vivo, auditory responses were enhanced by isoflurane, whereas visual responses were dramatically reduced. At a comparable concentration in slices, isoflurane suppressed both core TC and CC/matrix TC responses, but the effect on the latter responses was far greater than on core TC responses, indicating that at least part of the differential effects observed in vivo were due to local actions of isoflurane in auditory cortex. These data support a model in which disruption of top-down connectivity contributes to anesthesia-induced LOC, and have implications for understanding the neural basis of consciousness.
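The laminar current sinks described here are conventionally obtained from current source density (CSD) analysis, i.e., the second spatial derivative of the laminar field potentials. The sketch below shows the standard one-dimensional estimator under assumed contact spacing and conductivity; it is a generic illustration, not the authors' code.

```python
import numpy as np

def csd_standard(lfp, spacing_um=100.0, sigma=0.3):
    """Standard (second-spatial-derivative) current source density estimate.
    lfp: (n_channels, n_times) laminar field potentials ordered superficial to
    deep; spacing_um: contact spacing in micrometres (assumed); sigma: nominal
    tissue conductivity in S/m (assumed). Sinks (net inward current) are
    negative with this sign convention, so early middle-layer sinks index
    core thalamocortical input."""
    h = spacing_um * 1e-6                                  # spacing in metres
    d2v_dz2 = (lfp[:-2] - 2.0 * lfp[1:-1] + lfp[2:]) / h ** 2
    return -sigma * d2v_dz2                                # (n_channels - 2, n_times)
```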
Affiliation(s)
- Aeyal Raz
- Department of Anesthesiology, School of Medicine and Public Health, University of Wisconsin Madison, WI, USA ; Department of Anesthesiology, Rabin Medical Center, Petah-Tikva, Israel, Affiliated with Sackler School of Medicine, Tel Aviv University Tel Aviv, Israel
| | - Sean M Grady
- Department of Anesthesiology, School of Medicine and Public Health, University of Wisconsin Madison, WI, USA
| | - Bryan M Krause
- Neuroscience Training Program, University of Wisconsin Madison, WI, USA
| | - Daniel J Uhlrich
- Department of Neuroscience, University of Wisconsin Madison, WI, USA
| | - Karen A Manning
- Department of Neuroscience, University of Wisconsin Madison, WI, USA
| | - Matthew I Banks
- Department of Anesthesiology, School of Medicine and Public Health, University of Wisconsin Madison, WI, USA ; Department of Neuroscience, University of Wisconsin Madison, WI, USA
13
Abstract
This psychophysics study investigated whether prior auditory conditioning influences how a sound interacts with visual perception. In the conditioning phase, subjects were presented with three pure tones ( = conditioned stimuli, CS) that were paired with positive, negative or neutral unconditioned stimuli. As unconditioned reinforcers we employed pictures (highly pleasant, unpleasant and neutral) or monetary outcomes (+50 euro cents, −50 cents, 0 cents). In the subsequent visual selective attention paradigm, subjects were presented with near-threshold Gabors displayed in their left or right hemifield. Critically, the Gabors were presented in synchrony with one of the conditioned sounds. Subjects discriminated whether the Gabors were presented in their left or right hemifields. Participants determined the location more accurately when the Gabors were presented in synchrony with positive relative to neutral sounds irrespective of reinforcer type. Thus, previously rewarded relative to neutral sounds increased the bottom-up salience of the visual Gabors. Our results are the first demonstration that prior auditory conditioning is a potent mechanism to modulate the effect of sounds on visual perception.
14
Peelle JE. Methodological challenges and solutions in auditory functional magnetic resonance imaging. Front Neurosci 2014; 8:253. [PMID: 25191218 PMCID: PMC4139601 DOI: 10.3389/fnins.2014.00253] [Citation(s) in RCA: 51] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2014] [Accepted: 07/29/2014] [Indexed: 02/06/2023] Open
Abstract
Functional magnetic resonance imaging (fMRI) studies involve substantial acoustic noise. This review covers the difficulties posed by such noise for auditory neuroscience, as well as a number of possible solutions that have emerged. Acoustic noise can affect the processing of auditory stimuli by making them inaudible or unintelligible, and can result in reduced sensitivity to auditory activation in auditory cortex. Equally importantly, acoustic noise may also lead to increased listening effort, meaning that even when auditory stimuli are perceived, neural processing may differ from when the same stimuli are presented in quiet. These and other challenges have motivated a number of approaches for collecting auditory fMRI data. Although using a continuous echoplanar imaging (EPI) sequence provides high quality imaging data, these data may also be contaminated by background acoustic noise. Traditional sparse imaging has the advantage of avoiding acoustic noise during stimulus presentation, but at a cost of reduced temporal resolution. Recently, three classes of techniques have been developed to circumvent these limitations. The first is Interleaved Silent Steady State (ISSS) imaging, a variation of sparse imaging that involves collecting multiple volumes following a silent period while maintaining steady-state longitudinal magnetization. The second involves active noise control to limit the impact of acoustic scanner noise. Finally, novel MRI sequences that reduce the amount of acoustic noise produced during fMRI make the use of continuous scanning a more practical option. Together these advances provide unprecedented opportunities for researchers to collect high-quality data of hemodynamic responses to auditory stimuli using fMRI.
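The sparse-imaging logic reviewed here hinges on placing stimuli in silent gaps and timing the acquisition to catch the delayed BOLD peak. The toy schedule generator below illustrates that timing arithmetic only; all durations are illustrative assumptions, not recommended acquisition parameters.

```python
def sparse_schedule(n_trials, ta=2.0, silent_gap=7.0, stim_duration=3.0):
    """Toy sparse-sampling schedule. Each trial is a silent gap in which the
    stimulus is presented (late in the gap, so the evoked BOLD response peaks
    near the next acquisition), followed by a single volume acquired in `ta`
    seconds. All timings are placeholders for illustration."""
    events = []
    t = 0.0
    for trial in range(n_trials):
        stim_onset = t + (silent_gap - stim_duration)
        acq_onset = t + silent_gap
        events.append({"trial": trial, "stim_onset": stim_onset, "acq_onset": acq_onset})
        t = acq_onset + ta          # effective TR = silent_gap + ta
    return events
```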
Affiliation(s)
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis St. Louis, MO, USA
15
Kassuba T, Klinge C, Hölig C, Röder B, Siebner HR. Short-term plasticity of visuo-haptic object recognition. Front Psychol 2014; 5:274. [PMID: 24765082 PMCID: PMC3980106 DOI: 10.3389/fpsyg.2014.00274] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/16/2014] [Accepted: 03/14/2014] [Indexed: 11/13/2022] Open
Abstract
Functional magnetic resonance imaging (fMRI) studies have provided ample evidence for the involvement of the lateral occipital cortex (LO), fusiform gyrus (FG), and intraparietal sulcus (IPS) in visuo-haptic object integration. Here we applied 30 min of sham (non-effective) or real offline 1 Hz repetitive transcranial magnetic stimulation (rTMS) to perturb neural processing in left LO immediately before subjects performed a visuo-haptic delayed-match-to-sample task during fMRI. In this task, subjects had to match sample (S1) and target (S2) objects presented sequentially within or across vision and/or haptics in both directions (visual-haptic or haptic-visual) and decide whether or not S1 and S2 were the same objects. Real rTMS transiently decreased activity at the site of stimulation and remote regions such as the right LO and bilateral FG during haptic S1 processing. Without affecting behavior, the same stimulation gave rise to relative increases in activation during S2 processing in the right LO, left FG, bilateral IPS, and other regions previously associated with object recognition. Critically, the modality of S2 determined which regions were recruited after rTMS. Relative to sham rTMS, real rTMS induced increased activations during crossmodal congruent matching in the left FG for haptic S2 and the temporal pole for visual S2. In addition, we found stronger activations for incongruent than congruent matching in the right anterior parahippocampus and middle frontal gyrus for crossmodal matching of haptic S2 and in the left FG and bilateral IPS for unimodal matching of visual S2, only after real but not sham rTMS. The results imply that a focal perturbation of the left LO triggers modality-specific interactions between the stimulated left LO and other key regions of object processing possibly to maintain unimpaired object recognition. This suggests that visual and haptic processing engage partially distinct brain networks during visuo-haptic object matching.
Affiliation(s)
- Tanja Kassuba
- Danish Research Centre for Magnetic Resonance, Copenhagen University Hospital Hvidovre Hvidovre, Denmark ; NeuroImageNord/Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf Hamburg, Germany ; Department of Neurology, Christian-Albrechts-University Kiel, Germany
| | - Corinna Klinge
- NeuroImageNord/Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf Hamburg, Germany ; Department of Psychiatry, Warneford Hospital Oxford, UK
| | - Cordula Hölig
- NeuroImageNord/Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf Hamburg, Germany ; Biological Psychology and Neuropsychology, University of Hamburg Hamburg, Germany
| | - Brigitte Röder
- Biological Psychology and Neuropsychology, University of Hamburg Hamburg, Germany
| | - Hartwig R Siebner
- Danish Research Centre for Magnetic Resonance, Copenhagen University Hospital Hvidovre Hvidovre, Denmark ; NeuroImageNord/Department of Systems Neuroscience, University Medical Center Hamburg-Eppendorf Hamburg, Germany ; Department of Neurology, Christian-Albrechts-University Kiel, Germany
16
Mind the blind brain to understand the sighted one! Is there a supramodal cortical functional architecture? Neurosci Biobehav Rev 2014; 41:64-77. [DOI: 10.1016/j.neubiorev.2013.10.006] [Citation(s) in RCA: 103] [Impact Index Per Article: 10.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2013] [Revised: 08/13/2013] [Accepted: 10/03/2013] [Indexed: 11/20/2022]
17
Coherent activity between auditory and visual modalities during the induction of peacefulness. Cogn Neurodyn 2014; 7:301-9. [PMID: 24427206 DOI: 10.1007/s11571-012-9234-9] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2012] [Revised: 10/31/2012] [Accepted: 12/14/2012] [Indexed: 10/27/2022] Open
Abstract
Multisensory integration involves combining information from different senses to create a unified perception. The diverse characteristics of the different sensory systems make it interesting to determine how cooperation and competition contribute to emotional experiences. Therefore, the aims of this study were to estimate the bias arising from matched attributes of the auditory and visual modalities and to characterize the frequency-specific brain activity patterns (theta, alpha, beta, and gamma) related to a peaceful mood using magnetoencephalography. The present study provides evidence of auditory dominance in perceptual bias during multimodal processing of peaceful consciousness. Coherence analysis suggested that theta oscillations act as a transmitter of emotion signals, with the left and right hemispheres being active in peaceful and fearful moods, respectively. Notably, hemispheric lateralization was also apparent in the alpha and beta oscillations, which might govern simple or pure information (e.g., from a single modality) in the right hemisphere but complex or mixed information (e.g., from multiple modalities) in the left hemisphere.
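Coherence between two signals, averaged within a frequency band, is the core quantity behind the analysis described above. A minimal sketch follows, using standard magnitude-squared coherence under assumed inputs (sensor or source time courses and band limits); the authors' exact band definitions and estimator may differ.

```python
import numpy as np
from scipy.signal import coherence

def band_coherence(x, y, sfreq, band):
    """Magnitude-squared coherence between two time courses (e.g., an auditory
    and a visual MEG sensor/source signal), averaged within a frequency band
    such as theta or alpha."""
    freqs, cxy = coherence(x, y, fs=sfreq, nperseg=int(2 * sfreq))
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return cxy[in_band].mean()
```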
18
Lanz F, Moret V, Rouiller EM, Loquet G. Multisensory Integration in Non-Human Primates during a Sensory-Motor Task. Front Hum Neurosci 2013; 7:799. [PMID: 24319421 PMCID: PMC3837444 DOI: 10.3389/fnhum.2013.00799] [Citation(s) in RCA: 10] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/19/2013] [Accepted: 11/03/2013] [Indexed: 12/12/2022] Open
Abstract
Daily, our central nervous system receives inputs via several sensory modalities, processes them, and integrates the information in order to produce suitable behavior. Remarkably, such multisensory integration binds all of this information into a unified percept. One approach to investigating this property is to show that perception is better and faster when multimodal stimuli are used compared with unimodal stimuli. This forms the first part of the present study, conducted in a non-human primate model (n = 2) engaged in a detection sensory-motor task in which visual and auditory stimuli were displayed individually or simultaneously. The measured parameters were the reaction time (RT) between stimulus onset and onset of arm movement, the percentages of successes and errors, and the evolution of these parameters with training. As expected, RTs were shorter when the subjects were exposed to combined stimuli. The gains for both subjects were around 20 and 40 ms compared with the auditory and visual stimulus alone, respectively. Moreover, the number of correct responses increased in response to bimodal stimuli. We interpret this multisensory advantage through a redundant-signal effect, which decreases perceptual ambiguity, increases the speed of stimulus detection, and improves performance accuracy. The second part of the study presents single-unit recordings from the premotor cortex (PM) of the same subjects during the sensory-motor task. Response patterns to sensory/multisensory stimulation are documented and the proportions of specific response types are reported. Characterization of bimodal neurons indicates a mechanism of audio-visual integration, possibly through a decrease of inhibition. Nevertheless, the neural processing leading to the faster motor responses from PM, a polysensory association cortical area, remains unclear.
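Redundant-signal gains like those reported here are commonly evaluated against the race-model (Miller) inequality, F_AV(t) <= F_A(t) + F_V(t). The sketch below is a simplified, generic version of that check, not the authors' analysis; all names and quantile choices are assumptions.

```python
import numpy as np

def race_model_violation(rt_av, rt_a, rt_v, probs=np.linspace(0.05, 0.95, 19)):
    """Compare the empirical audiovisual RT CDF with the race-model bound.
    rt_*: reaction-time arrays (s) for audiovisual, auditory-only, and
    visual-only trials. Returns, at a grid of time points, how much the AV CDF
    exceeds F_A(t) + F_V(t); positive values suggest integration beyond
    statistical facilitation."""
    t = np.quantile(np.concatenate([rt_av, rt_a, rt_v]), probs)

    def ecdf(rt):
        return np.searchsorted(np.sort(rt), t, side="right") / rt.size

    bound = np.minimum(1.0, ecdf(rt_a) + ecdf(rt_v))
    return ecdf(rt_av) - bound
```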
Affiliation(s)
- Florian Lanz
- Domain of Physiology, Department of Medicine, Fribourg Cognition Center, University of Fribourg , Fribourg , Switzerland
19
Doi H, Shinohara K. Unconscious presentation of fearful face modulates electrophysiological responses to emotional prosody. Cereb Cortex 2013; 25:817-32. [PMID: 24108801 DOI: 10.1093/cercor/bht282] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022] Open
Abstract
Cross-modal integration of visual and auditory emotional cues is supposed to be advantageous in the accurate recognition of emotional signals. However, the neural locus of cross-modal integration between affective prosody and unconsciously presented facial expression in the neurologically intact population is still elusive at this point. The present study examined the influences of unconsciously presented facial expressions on the event-related potentials (ERPs) in emotional prosody recognition. In the experiment, fearful, happy, and neutral faces were presented without awareness by continuous flash suppression simultaneously with voices containing laughter and a fearful shout. The conventional peak analysis revealed that the ERPs were modulated interactively by emotional prosody and facial expression at multiple latency ranges, indicating that audio-visual integration of emotional signals takes place automatically without conscious awareness. In addition, the global field power during the late-latency range was larger for shout than for laughter only when a fearful face was presented unconsciously. The neural locus of this effect was localized to the left posterior fusiform gyrus, giving support to the view that the cortical region, traditionally considered to be unisensory region for visual processing, functions as the locus of audiovisual integration of emotional signals.
Affiliation(s)
- Hirokazu Doi
- Graduate School of Biomedical Sciences, Nagasaki University, Nagasaki City, Nagasaki, Japan
| | - Kazuyuki Shinohara
- Graduate School of Biomedical Sciences, Nagasaki University, Nagasaki City, Nagasaki, Japan
20
Tian X, Poeppel D. The effect of imagination on stimulation: the functional specificity of efference copies in speech processing. J Cogn Neurosci 2013; 25:1020-36. [PMID: 23469885 DOI: 10.1162/jocn_a_00381] [Citation(s) in RCA: 83] [Impact Index Per Article: 7.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
The computational role of efference copies is widely appreciated in action and perception research, but their properties for speech processing remain murky. We tested the functional specificity of auditory efference copies using magnetoencephalography recordings in an unconventional pairing: We used a classical cognitive manipulation (mental imagery--to elicit internal simulation and estimation) with a well-established experimental paradigm (one shot repetition--to assess neuronal specificity). Participants performed tasks that differentially implicated internal prediction of sensory consequences (overt speaking, imagined speaking, and imagined hearing) and their modulatory effects on the perception of an auditory (syllable) probe were assessed. Remarkably, the neural responses to overt syllable probes vary systematically, both in terms of directionality (suppression, enhancement) and temporal dynamics (early, late), as a function of the preceding covert mental imagery adaptor. We show, in the context of a dual-pathway model, that internal simulation shapes perception in a context-dependent manner.
Affiliation(s)
- Xing Tian
- New York University, New York, NY, USA.
21
Tyll S, Bonath B, Schoenfeld MA, Heinze HJ, Ohl FW, Noesselt T. Neural basis of multisensory looming signals. Neuroimage 2013; 65:13-22. [DOI: 10.1016/j.neuroimage.2012.09.056] [Citation(s) in RCA: 42] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/21/2012] [Revised: 09/03/2012] [Accepted: 09/20/2012] [Indexed: 10/27/2022] Open
22
Kassuba T, Klinge C, Hölig C, Röder B, Siebner HR. Vision holds a greater share in visuo-haptic object recognition than touch. Neuroimage 2013; 65:59-68. [DOI: 10.1016/j.neuroimage.2012.09.054] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.9] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2012] [Revised: 09/19/2012] [Accepted: 09/20/2012] [Indexed: 10/27/2022] Open
23
Shergill SS, White TP, Joyce DW, Bays PM, Wolpert DM, Frith CD. Modulation of somatosensory processing by action. Neuroimage 2012; 70:356-62. [PMID: 23277112 DOI: 10.1016/j.neuroimage.2012.12.043] [Citation(s) in RCA: 57] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/16/2012] [Revised: 11/29/2012] [Accepted: 12/18/2012] [Indexed: 11/28/2022] Open
Abstract
Psychophysical evidence suggests that sensations arising from our own movements are diminished when predicted by motor forward models and that these models may also encode the timing and intensity of movement. Here we report a functional magnetic resonance imaging study in which the effects on sensation of varying the occurrence, timing and force of movements were measured. We observed that tactile-related activity in a region of secondary somatosensory cortex is reduced when sensation is associated with movement and further that this reduction is maximal when movement and sensation occur synchronously. Motor force is not represented in the degree of attenuation but rather in the magnitude of this region's response. These findings provide neurophysiological correlates of previously-observed behavioural forward-model phenomena, and advocate the adopted approach for the study of clinical conditions in which forward-model deficits have been posited to play a crucial role.
Affiliation(s)
- Sukhwinder S Shergill
- Department of Psychosis Studies, Institute of Psychiatry, King's College London, UK.
24
Dorsal stream activity and connectivity associated with action priming of ambiguous apparent motion. Neuroimage 2012; 63:687-97. [DOI: 10.1016/j.neuroimage.2012.07.010] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/21/2012] [Revised: 07/03/2012] [Accepted: 07/08/2012] [Indexed: 11/19/2022] Open
25
A corticostriatal neural system enhances auditory perception through temporal context processing. J Neurosci 2012; 32:6177-82. [PMID: 22553024 DOI: 10.1523/jneurosci.5153-11.2012] [Citation(s) in RCA: 78] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
The temporal context of an acoustic signal can greatly influence its perception. The present study investigated the neural correlates underlying perceptual facilitation by regular temporal contexts in humans. Participants listened to temporally regular (periodic) or temporally irregular (nonperiodic) sequences of tones while performing an intensity discrimination task. Participants performed significantly better on intensity discrimination during periodic than nonperiodic tone sequences. There was greater activation in the putamen for periodic than nonperiodic sequences. Conversely, there was greater activation in bilateral primary and secondary auditory cortices (planum polare and planum temporale) for nonperiodic than periodic sequences. Across individuals, greater putamen activation correlated with lesser auditory cortical activation in both right and left hemispheres. These findings suggest that temporal regularity is detected in the putamen, and that such detection facilitates temporal-lobe cortical processing associated with superior auditory perception. Thus, this study reveals a corticostriatal system associated with contextual facilitation for auditory perception through temporal regularity processing.
26
Ahveninen J, Jääskeläinen IP, Belliveau JW, Hämäläinen M, Lin FH, Raij T. Dissociable influences of auditory object vs. spatial attention on visual system oscillatory activity. PLoS One 2012; 7:e38511. [PMID: 22693642 PMCID: PMC3367912 DOI: 10.1371/journal.pone.0038511] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/17/2011] [Accepted: 05/09/2012] [Indexed: 12/03/2022] Open
Abstract
Given that both auditory and visual systems have anatomically separate object identification (“what”) and spatial (“where”) pathways, it is of interest whether attention-driven cross-sensory modulations occur separately within these feature domains. Here, we investigated how auditory “what” vs. “where” attention tasks modulate activity in visual pathways using cortically constrained source estimates of magnetoencephalograpic (MEG) oscillatory activity. In the absence of visual stimuli or tasks, subjects were presented with a sequence of auditory-stimulus pairs and instructed to selectively attend to phonetic (“what”) vs. spatial (“where”) aspects of these sounds, or to listen passively. To investigate sustained modulatory effects, oscillatory power was estimated from time periods between sound-pair presentations. In comparison to attention to sound locations, phonetic auditory attention was associated with stronger alpha (7–13 Hz) power in several visual areas (primary visual cortex; lingual, fusiform, and inferior temporal gyri, lateral occipital cortex), as well as in higher-order visual/multisensory areas including lateral/medial parietal and retrosplenial cortices. Region-of-interest (ROI) analyses of dynamic changes, from which the sustained effects had been removed, suggested further power increases during Attend Phoneme vs. Location centered at the alpha range 400–600 ms after the onset of second sound of each stimulus pair. These results suggest distinct modulations of visual system oscillatory activity during auditory attention to sound object identity (“what”) vs. sound location (“where”). The alpha modulations could be interpreted to reflect enhanced crossmodal inhibition of feature-specific visual pathways and adjacent audiovisual association areas during “what” vs. “where” auditory attention.
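The alpha-power effects described here rest on band-limited power estimates of sensor- or source-space time courses. A minimal filter-Hilbert sketch of that quantity follows, with the 7-13 Hz band taken from the abstract; filter order and other settings are assumptions, and this is an illustration rather than the authors' pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def alpha_power(ts, sfreq, band=(7.0, 13.0)):
    """Band-limited power via zero-phase band-pass filtering and the Hilbert
    envelope. ts: (n_signals, n_times) sensor- or source-space time courses;
    sfreq: sampling rate in Hz; band: frequency band of interest."""
    b, a = butter(4, np.asarray(band) / (sfreq / 2.0), btype="bandpass")
    filtered = filtfilt(b, a, ts, axis=-1)
    envelope = np.abs(hilbert(filtered, axis=-1))
    return (envelope ** 2).mean(axis=-1)    # mean alpha power per signal
```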
Affiliation(s)
- Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Harvard Medical School, Charlestown, Massachusetts, United States of America.
27
Kassuba T, Menz MM, Röder B, Siebner HR. Multisensory interactions between auditory and haptic object recognition. Cereb Cortex 2012; 23:1097-107. [PMID: 22518017 DOI: 10.1093/cercor/bhs076] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
Object manipulation produces characteristic sounds and causes specific haptic sensations that facilitate the recognition of the manipulated object. To identify the neural correlates of audio-haptic binding of object features, healthy volunteers underwent functional magnetic resonance imaging while they matched a target object to a sample object within and across audition and touch. By introducing a delay between the presentation of sample and target stimuli, it was possible to dissociate haptic-to-auditory and auditory-to-haptic matching. We hypothesized that only semantically coherent auditory and haptic object features activate cortical regions that host unified conceptual object representations. The left fusiform gyrus (FG) and posterior superior temporal sulcus (pSTS) showed increased activation during crossmodal matching of semantically congruent but not incongruent object stimuli. In the FG, this effect was found for haptic-to-auditory and auditory-to-haptic matching, whereas the pSTS only displayed a crossmodal matching effect for congruent auditory targets. Auditory and somatosensory association cortices showed increased activity during crossmodal object matching which was, however, independent of semantic congruency. Together, the results show multisensory interactions at different hierarchical stages of auditory and haptic object processing. Object-specific crossmodal interactions culminate in the left FG, which may provide a higher order convergence zone for conceptual object knowledge.
Affiliation(s)
- Tanja Kassuba
- Danish Research Centre for Magnetic Resonance, Copenhagen University Hospital Hvidovre, 2650 Hvidovre, Denmark.
28
Schepers IM, Hipp JF, Schneider TR, Röder B, Engel AK. Functionally specific oscillatory activity correlates between visual and auditory cortex in the blind. Brain 2012; 135:922-34. [DOI: 10.1093/brain/aws014] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/04/2023] Open
29
Wang J, Baucom LB, Shinkareva SV. Decoding abstract and concrete concept representations based on single-trial fMRI data. Hum Brain Mapp 2012; 34:1133-47. [PMID: 23568269 DOI: 10.1002/hbm.21498] [Citation(s) in RCA: 40] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/18/2011] [Revised: 09/21/2011] [Accepted: 09/26/2011] [Indexed: 11/08/2022] Open
Abstract
Previously, multi-voxel pattern analysis has been used to decode words referring to concrete object categories. In this study we investigated whether single-trial brain activity was sufficient to distinguish abstract (e.g., mercy) versus concrete (e.g., barn) concept representations. Multiple neuroimaging studies have identified differences in the processing of abstract versus concrete concepts based on activity averaged across time using univariate methods. Here we used multi-voxel pattern analysis to decode functional magnetic resonance imaging (fMRI) data acquired while participants performed a semantic similarity judgment task on triplets of either abstract or concrete words with similar meanings. Classifiers were trained to identify individual trials as concrete or abstract. Cross-validated accuracies for classifying trials as abstract or concrete were significantly above chance (P < 0.05) for all participants. Discriminating information was distributed across multiple brain regions. Moreover, accuracy in identifying single-trial data from any one participant as abstract or concrete was also reliably above chance (P < 0.05) when the classifier was trained solely on data from other participants. These results suggest that abstract and concrete concepts differ in their whole-brain neural activity patterns even within the short time window of a single trial.
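The cross-participant result above is the classic leave-one-subject-out decoding scheme: train on all but one participant, test on the held-out one. A minimal sketch under assumed inputs follows; it is a generic illustration, not the authors' classifier or feature set.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

def cross_subject_decoding(X, y, subject_ids):
    """Leave-one-subject-out decoding of abstract vs. concrete trials.
    X: (n_trials, n_features) single-trial patterns pooled over participants;
    y: (n_trials,) labels; subject_ids: (n_trials,) participant identifiers."""
    clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
    accuracies = []
    for held_out in np.unique(subject_ids):
        train = subject_ids != held_out
        test = ~train
        clf.fit(X[train], y[train])
        accuracies.append(clf.score(X[test], y[test]))
    return np.asarray(accuracies)   # one accuracy per held-out participant
```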
Affiliation(s)
- Jing Wang
- Department of Psychology, University of South Carolina, Columbia, South Carolina 29208, USA
30
31
Naumer MJ, van den Bosch JJF, Wibral M, Kohler A, Singer W, Kaiser J, van de Ven V, Muckli L. Investigating human audio-visual object perception with a combination of hypothesis-generating and hypothesis-testing fMRI analysis tools. Exp Brain Res 2011; 213:309-20. [PMID: 21503649 PMCID: PMC3155044 DOI: 10.1007/s00221-011-2669-0] [Citation(s) in RCA: 9] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2010] [Accepted: 03/28/2011] [Indexed: 11/18/2022]
Abstract
Primate multisensory object perception involves distributed brain regions. To investigate the network character of these regions in the human brain, we applied data-driven group spatial independent component analysis (ICA) to a functional magnetic resonance imaging (fMRI) data set acquired during a passive audio-visual (AV) experiment with common object stimuli. We labeled three group-level independent component (IC) maps as auditory (A), visual (V), and AV, based on their spatial layouts and activation time courses. The overlap between these IC maps defined a distributed network of multisensory candidate regions, including superior temporal, ventral occipito-temporal, posterior parietal, and prefrontal regions. In an independent second fMRI experiment, we explicitly tested their involvement in AV integration. Activations in nine of these twelve regions met the max-criterion (A < AV > V) for multisensory integration. Comparison of this approach with a general linear model-based region-of-interest definition revealed its complementary value for multisensory neuroimaging. In conclusion, we estimated functional networks of uni- and multisensory functional connectivity from one dataset and validated their functional roles in an independent dataset. These findings demonstrate the particular value of ICA for multisensory neuroimaging research and of using independent datasets to test hypotheses generated by a data-driven analysis.
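The max-criterion mentioned above can be illustrated with a short sketch: for each candidate region, the audio-visual response must exceed both unimodal responses. The arrays, subject and ROI counts, and threshold below are hypothetical placeholders, not the authors' data or exact test.

import numpy as np
from scipy import stats

# Hypothetical per-subject mean betas for auditory (A), visual (V), and
# audio-visual (AV) conditions in each candidate region
rng = np.random.default_rng(1)
n_subjects, n_rois = 18, 12
A = rng.normal(size=(n_subjects, n_rois))
V = rng.normal(size=(n_subjects, n_rois))
AV = rng.normal(loc=0.3, size=(n_subjects, n_rois))

alpha = 0.05
for roi in range(n_rois):
    # One-sided paired tests: AV must exceed both unimodal responses
    _, p_a = stats.ttest_rel(AV[:, roi], A[:, roi], alternative="greater")
    _, p_v = stats.ttest_rel(AV[:, roi], V[:, roi], alternative="greater")
    met = (p_a < alpha) and (p_v < alpha)
    print(f"ROI {roi:2d}: AV>A p={p_a:.3f}, AV>V p={p_v:.3f}, "
          f"max-criterion {'met' if met else 'not met'}")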
Affiliation(s)
- Marcus J Naumer
- Crossmodal Neuroimaging Lab, Institute of Medical Psychology, Goethe-University of Frankfurt, Heinrich-Hoffmann-Strasse 10, 60528 Frankfurt am Main, Germany.
32
Hales JB, Brewer JB. The timing of associative memory formation: frontal lobe and anterior medial temporal lobe activity at associative binding predicts memory. J Neurophysiol 2011; 105:1454-63. [PMID: 21248058 DOI: 10.1152/jn.00902.2010] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The process of associating items encountered over time and across variable time delays is fundamental for creating memories in daily life, such as for stories and episodes. Forming associative memory for temporally discontiguous items involves medial temporal lobe structures and additional neocortical processing regions, including prefrontal cortex, parietal lobe, and lateral occipital regions. However, most prior memory studies, using concurrently presented stimuli, have failed to examine the temporal aspect of successful associative memory formation to identify when activity in these brain regions is predictive of associative memory formation. In the current study, functional MRI data were acquired while subjects were shown pairs of sequentially presented visual images with a fixed interitem delay within pairs. This design allowed the entire time course of the trial to be analyzed, starting from onset of the first item, across the 5.5-s delay period, and through offset of the second item. Subjects then completed a postscan recognition test for the items and associations they encoded during the scan and their confidence for each. After controlling for item-memory strength, we isolated brain regions selectively involved in associative encoding. Consistent with prior findings, increased regional activity predicting subsequent associative memory success was found in anterior medial temporal lobe regions of left perirhinal and entorhinal cortices and in left prefrontal cortex and lateral occipital regions. The temporal separation within each pair, however, allowed extension of these findings by isolating the timing of regional involvement, showing that increased response in these regions occurs during binding but not during maintenance.
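To illustrate how the trial phases described above could be modelled separately, the sketch below builds a design matrix with distinct regressors for the first item, the 5.5-s delay, and the second (binding) item. The TR, onsets, and condition names are hypothetical placeholders, and nilearn is used here only as one convenient tool; it is not necessarily what the authors used.

import numpy as np
import pandas as pd
from nilearn.glm.first_level import make_first_level_design_matrix

tr, n_scans = 2.0, 300
frame_times = np.arange(n_scans) * tr

# One example pair trial, split into item-1, delay, and item-2 (binding)
# phases, labelled by subsequent associative memory (here: remembered)
events = pd.DataFrame({
    "onset":      [10.0, 12.0, 17.5],
    "duration":   [2.0, 5.5, 2.0],
    "trial_type": ["item1_remembered", "delay_remembered", "item2_remembered"],
})

design = make_first_level_design_matrix(frame_times, events, hrf_model="glover")
print(design.columns.tolist())

# A binding-related subsequent-memory contrast would then compare the
# item2_remembered regressor against a corresponding item2_forgotten one.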
Affiliation(s)
- J B Hales
- Department of Neurosciences, University of California, San Diego, California, USA
33
Congruence of happy and sad emotion in music and faces modifies cortical audiovisual activation. Neuroimage 2010; 54:2973-82. [PMID: 21073970 DOI: 10.1016/j.neuroimage.2010.11.017] [Citation(s) in RCA: 49] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2010] [Revised: 10/25/2010] [Accepted: 11/04/2010] [Indexed: 11/21/2022] Open
Abstract
BACKGROUND The powerful emotion-inducing properties of music are well known, yet music may evoke differing emotional responses depending on environmental factors. We hypothesized that the neural mechanisms engaged while listening to music differ depending on whether accompanying visual stimuli convey the same emotion as the music or an incongruent emotion. METHODS We designed this study to determine the effect of auditory (happy and sad instrumental music) and visual stimuli (happy and sad faces), congruent or incongruent for emotional content, on audiovisual processing using fMRI blood oxygenation level-dependent (BOLD) signal contrast. The study used a conventional block design. A block consisted of three emotional ON periods: music alone (happy or sad music), faces alone (happy or sad faces), and music combined with faces, in which the music excerpt was played while presenting either emotionally congruent or incongruent faces. RESULTS We found activity in the superior temporal gyrus (STG) and fusiform gyrus (FG) to be differentially modulated by music and faces depending on the congruence of emotional content. There was a greater BOLD response in STG when the emotion signaled by the music and faces was congruent. Furthermore, the magnitude of these changes differed for happy congruence and sad congruence; i.e., the activation of STG when happy music was presented with happy faces was greater than the activation seen when sad music was presented with sad faces. In contrast, incongruent stimuli diminished the BOLD response in STG and elicited greater signal change in bilateral FG. Behavioral testing supplemented these findings by showing that subjects' ratings of emotion in faces were influenced by the emotion in the music. When presented with happy music, happy faces were rated as happier (p=0.051) and sad faces as less sad (p=0.030). When presented with sad music, happy faces were rated as less happy (p=0.008) and sad faces as sadder (p=0.002). INTERPRETATION Happy-sad congruence across modalities may enhance activity in auditory regions, whereas incongruence appears to affect the perception of visual affect, leading to increased activation in face-processing regions such as the FG. We suggest that a greater understanding of the neural bases of happy-sad congruence across modalities can shed light on basic mechanisms of affective perception and experience and may lead to novel insights in the study of emotion regulation and the therapeutic use of music.
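The behavioral comparisons reported above amount to paired tests of face-emotion ratings under different music conditions. A minimal sketch is given below; the rating arrays, scale, and subject count are hypothetical placeholders rather than the study's data.

import numpy as np
from scipy import stats

# Hypothetical per-subject mean ratings of happy faces (e.g., 1-9 scale)
# when accompanied by happy versus sad music
rng = np.random.default_rng(2)
happy_face_happy_music = rng.normal(loc=7.5, scale=1.0, size=20)
happy_face_sad_music = rng.normal(loc=6.8, scale=1.0, size=20)

t, p = stats.ttest_rel(happy_face_happy_music, happy_face_sad_music)
print(f"happy faces, happy vs sad music: t={t:.2f}, p={p:.3f}")

# Analogous paired tests would cover the remaining comparisons
# (sad faces with happy vs sad music, and so on).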
34
Meyer GF, Greenlee M, Wuerger S. Interactions between auditory and visual semantic stimulus classes: evidence for common processing networks for speech and body actions. J Cogn Neurosci 2010; 23:2291-308. [PMID: 20954938 DOI: 10.1162/jocn.2010.21593] [Citation(s) in RCA: 31] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Incongruencies between auditory and visual signals negatively affect human performance and cause selective activation in neuroimaging studies; therefore, they are increasingly used to probe audiovisual integration mechanisms. An open question is whether the increased BOLD response reflects computational demands in integrating mismatching low-level signals or reflects simultaneous unimodal conceptual representations of the competing signals. To address this question, we explore the effect of semantic congruency within and across three signal categories (speech, body actions, and unfamiliar patterns) for signals with matched low-level statistics. In a localizer experiment, unimodal (auditory and visual) and bimodal stimuli were used to identify ROIs. All three semantic categories cause overlapping activation patterns. We find no evidence for areas that show greater BOLD response to bimodal stimuli than predicted by the sum of the two unimodal responses. Conjunction analysis of the unimodal responses in each category identifies a network including posterior temporal, inferior frontal, and premotor areas. Semantic congruency effects are measured in the main experiment. We find that incongruent combinations of two meaningful stimuli (speech and body actions) but not combinations of meaningful with meaningless stimuli lead to increased BOLD response in the posterior STS (pSTS) bilaterally, the left SMA, the inferior frontal gyrus, the inferior parietal lobule, and the anterior insula. These interactions are not seen in premotor areas. Our findings are consistent with the hypothesis that pSTS and frontal areas form a recognition network that combines sensory categorical representations (in pSTS) with action hypothesis generation in inferior frontal gyrus/premotor areas. We argue that the same neural networks process speech and body actions.
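The conjunction of unimodal responses described above is often implemented as a minimum-statistic conjunction: a voxel survives only if both unimodal effects exceed the threshold. The sketch below illustrates that logic with hypothetical t-maps and an arbitrary threshold; it is not the authors' exact procedure.

import numpy as np

# Hypothetical voxelwise t-maps for the unimodal auditory and visual contrasts
rng = np.random.default_rng(3)
t_auditory = rng.normal(size=10000)
t_visual = rng.normal(size=10000)

t_threshold = 3.1  # illustrative voxelwise cut-off

# Minimum-statistic conjunction: both unimodal effects must be supra-threshold
conjunction = np.minimum(t_auditory, t_visual) > t_threshold
print("voxels in the audio-visual conjunction:", int(conjunction.sum()))

# A superadditivity test would instead contrast the bimodal response against
# the sum of the unimodal responses (AV > A + V) at each voxel.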
Affiliation(s)
- Georg F Meyer
- School of Psychology, Liverpool University, Eleanor Rathbone Building, Liverpool, United Kingdom.
35
Altmann CF, Júnior CGDO, Heinemann L, Kaiser J. Processing of spectral and amplitude envelope of animal vocalizations in the human auditory cortex. Neuropsychologia 2010; 48:2824-32. [DOI: 10.1016/j.neuropsychologia.2010.05.024] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2009] [Revised: 05/09/2010] [Accepted: 05/12/2010] [Indexed: 11/28/2022]
36
Naumer MJ, Ratz L, Yalachkov Y, Polony A, Doehrmann O, Van De Ven V, Müller NG, Kaiser J, Hein G. Visuohaptic convergence in a corticocerebellar network. Eur J Neurosci 2010; 31:1730-6. [DOI: 10.1111/j.1460-9568.2010.07208.x] [Citation(s) in RCA: 21] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]