1. Pasqualotto A, Cochrane A, Bavelier D, Altarelli I. A novel task and methods to evaluate inter-individual variation in audio-visual associative learning. Cognition 2024;242:105658. PMID: 37952371. DOI: 10.1016/j.cognition.2023.105658.
Abstract
Learning audio-visual associations is foundational to a number of real-world skills, such as reading acquisition or social communication. Characterizing individual differences in such learning has therefore been of interest to researchers in the field. Here, we present a novel audio-visual associative learning task designed to efficiently capture inter-individual differences in learning, with the added feature of using non-linguistic stimuli, so as to unconfound the language and reading proficiency of the learner from their more domain-general learning capability. By fitting trial-by-trial performance in our novel learning task using simple-to-use statistical tools, we demonstrate the expected inter-individual variability in learning rate as well as high precision in its estimation. We further demonstrate that the learning rate measured in this way is linked to working memory performance in Italian-speaking (N = 58) and French-speaking (N = 51) adults. Finally, we investigate the extent to which learning rate in our task, which measures cross-modal audio-visual associations while mitigating familiarity confounds, predicts reading ability across participants with different linguistic backgrounds. The present work thus introduces a novel non-linguistic audio-visual associative learning task that can be used across languages. In doing so, it brings a new tool to researchers in the various domains that rely on multi-sensory integration, from reading to social cognition and socio-emotional learning.
Affiliation(s)
- Angela Pasqualotto, Aaron Cochrane, Daphne Bavelier: Faculty of Psychology and Education Sciences (FPSE), University of Geneva, Geneva, Switzerland; Campus Biotech, Geneva, Switzerland
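The learning-rate estimation described in the abstract above relies on fitting trial-by-trial performance with a simple statistical model. Purely as an illustrative sketch (not the authors' actual analysis code), the snippet below fits a three-parameter exponential learning curve to simulated binary trial outcomes by maximum likelihood; the parameter names, simulated data, and starting values are all assumptions made for the example.

```python
import numpy as np
from scipy.optimize import minimize

rng = np.random.default_rng(0)

def learning_curve(trial, start, asymptote, rate):
    """Expected accuracy on each trial: exponential approach to an asymptote."""
    return asymptote + (start - asymptote) * np.exp(-trial / rate)

def neg_log_likelihood(params, trials, correct):
    """Bernoulli negative log-likelihood of the observed 0/1 outcomes."""
    start, asymptote, rate = params
    p = np.clip(learning_curve(trials, start, asymptote, rate), 1e-6, 1 - 1e-6)
    return -np.sum(correct * np.log(p) + (1 - correct) * np.log(1 - p))

# Simulated learner: 200 trials, improving from ~55% to ~90% correct.
trials = np.arange(200)
true_p = learning_curve(trials, 0.55, 0.90, 40.0)
correct = rng.binomial(1, true_p)

fit = minimize(
    neg_log_likelihood,
    x0=[0.5, 0.8, 30.0],                       # starting guesses
    args=(trials, correct),
    bounds=[(0.0, 1.0), (0.0, 1.0), (1.0, 500.0)],
    method="L-BFGS-B",
)
start_hat, asymptote_hat, rate_hat = fit.x
print(f"start={start_hat:.2f}, asymptote={asymptote_hat:.2f}, learning rate (trials)={rate_hat:.1f}")
```

The fitted time constant is the kind of per-participant learning-rate estimate that could then be related to working memory or reading measures.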
2. Pan N, Zheng K, Zhao Y, Zhang D, Dong C, Xu J, Li X, Zheng Y. Morphometry Difference of the Hippocampal Formation Between Blind and Sighted Individuals. Front Neurosci 2021;15:715749. PMID: 34803579. PMCID: PMC8601390. DOI: 10.3389/fnins.2021.715749.
Abstract
Detailed morphometric alterations of the human hippocampal formation (HF) in blind individuals remain understudied. Fifty subjects were recruited from Yantai Affiliated Hospital of Binzhou Medical University, comprising 16 congenitally blind, 14 late blind, and 20 sighted controls. Volume and shape analyses were conducted between the blind (congenital or late) and sighted groups to examine (sub)regional alterations of the HF. No significant difference in hippocampal volume was observed between the blind and sighted subjects. Rightward asymmetry of hippocampal volume was found for both congenitally and late blind individuals, while no significant hemispheric difference was observed for the sighted controls. Shape analysis showed that the superior and inferior parts of both the hippocampal head and tail expanded, whereas the medial and lateral parts contracted, in the blind individuals compared to the sighted controls. The morphometric alterations in the congenitally blind and late blind individuals were nearly identical. Significant expansion of the superior part of the hippocampal tail was observed in the left hippocampus for both the congenitally and late blind groups after FDR correction. These results suggest that cross-modal plasticity may occur in both hemispheres of the HF to support navigation ability in the absence of visual cues, and that this alteration is more prominent in the left hemisphere.
Affiliation(s)
- Ningning Pan: School of Information Science and Engineering, Shandong Normal University, Jinan, China; Master of Public Administration Education Center, Xinjiang Agricultural University, Xinjiang, China
- Ke Zheng: College of Intelligence and Computing, Tianjin Key Lab of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Yanna Zhao: School of Information Science and Engineering, Shandong Normal University, Jinan, China
- Dan Zhang: Department of Mathematics and Computer Science, Eindhoven University of Technology, Eindhoven, Netherlands
- Changxu Dong: School of Information Science and Engineering, Shandong Normal University, Jinan, China
- Junhai Xu: College of Intelligence and Computing, Tianjin Key Lab of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Xianglin Li: Medical Imaging Research Institute, Binzhou Medical University, Yantai, China
- Yuanjie Zheng: School of Information Science and Engineering, Shandong Normal University, Jinan, China
3. Csonka M, Mardmomen N, Webster PJ, Brefczynski-Lewis JA, Frum C, Lewis JW. Meta-Analyses Support a Taxonomic Model for Representations of Different Categories of Audio-Visual Interaction Events in the Human Brain. Cereb Cortex Commun 2021;2:tgab002. PMID: 33718874. PMCID: PMC7941256. DOI: 10.1093/texcom/tgab002.
Abstract
Our ability to perceive meaningful action events involving objects, people, and other animate agents is characterized in part by an interplay of visual and auditory sensory processing and their cross-modal interactions. However, this multisensory ability can be altered or dysfunctional in some hearing and sighted individuals, and in some clinical populations. The present meta-analysis sought to test current hypotheses regarding neurobiological architectures that may mediate audio-visual multisensory processing. Reported coordinates from 82 neuroimaging studies (137 experiments) that revealed some form of audio-visual interaction in discrete brain regions were compiled, converted to a common coordinate space, and then organized along specific categorical dimensions to generate activation likelihood estimate (ALE) brain maps and various contrasts of those derived maps. The results revealed brain regions (cortical "hubs") preferentially involved in multisensory processing along different stimulus category dimensions, including 1) living versus nonliving audio-visual events, 2) audio-visual events involving vocalizations versus actions by living sources, 3) emotionally valent events, and 4) dynamic-visual versus static-visual audio-visual stimuli. These meta-analysis results are discussed in the context of neurocomputational theories of semantic knowledge representations and perception, and the brain volumes of interest are available for download to facilitate data interpretation for future neuroimaging studies.
Affiliation(s)
- Matt Csonka, Nadia Mardmomen, Paula J Webster, Julie A Brefczynski-Lewis, Chris Frum, James W Lewis: Department of Neuroscience, Rockefeller Neuroscience Institute, West Virginia University, Morgantown, WV 26506, USA
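The meta-analysis above is built on activation likelihood estimation (ALE). As a rough sketch of the core idea only (the published pipeline additionally handles coordinate-space conversion, sample-size-dependent kernel widths, and permutation-based thresholding), each experiment's reported foci can be modeled as 3D Gaussians, combined into a per-experiment modeled-activation map, and then united across experiments; the grid size, FWHM, and foci below are arbitrary.

```python
import numpy as np

def gaussian_map(shape, focus, fwhm_vox):
    """Probability-like 3D Gaussian centered on one reported focus (voxel coordinates)."""
    sigma = fwhm_vox / 2.355
    zz, yy, xx = np.indices(shape)
    d2 = (xx - focus[0])**2 + (yy - focus[1])**2 + (zz - focus[2])**2
    return np.exp(-d2 / (2 * sigma**2))

def modeled_activation(shape, foci, fwhm_vox):
    """Per-experiment MA map: voxel-wise maximum over that experiment's foci."""
    maps = [gaussian_map(shape, f, fwhm_vox) for f in foci]
    return np.max(maps, axis=0)

def ale(experiments, shape=(20, 20, 20), fwhm_vox=3.0):
    """ALE = probability that at least one experiment 'activates' each voxel."""
    ma_maps = [modeled_activation(shape, foci, fwhm_vox) for foci in experiments]
    return 1.0 - np.prod([1.0 - ma for ma in ma_maps], axis=0)

# Two toy 'experiments', each a list of (x, y, z) foci on a 20^3 grid.
experiments = [
    [(5, 5, 5), (10, 10, 10)],
    [(6, 5, 5), (15, 12, 8)],
]
ale_map = ale(experiments)
print("peak ALE value:", ale_map.max().round(3))
```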
4. Zhou HY, Wang YM, Zhang RT, Cheung EFC, Pantelis C, Chan RCK. Neural Correlates of Audiovisual Temporal Binding Window in Individuals With Schizotypal and Autistic Traits: Evidence From Resting-State Functional Connectivity. Autism Res 2020;14:668-680. PMID: 33314710. DOI: 10.1002/aur.2456.
Abstract
Temporal proximity is an important cue for multisensory integration. Previous evidence indicates that individuals with autism and schizophrenia are more likely to integrate multisensory inputs over a longer temporal binding window (TBW). However, whether such deficits in audiovisual temporal integration extend to subclinical populations with high schizotypal and autistic traits is unclear. Using audiovisual simultaneity judgment (SJ) tasks for nonspeech and speech stimuli, our results suggested that the width of the audiovisual TBW was not significantly correlated with self-reported schizotypal and autistic traits in a group of young adults. Functional magnetic resonance imaging (fMRI) resting-state activity was also acquired to explore the neural correlates underlying inter-individual variability in TBW width. Across the entire sample, stronger resting-state functional connectivity (rsFC) between the left superior temporal cortex and the left precuneus, and weaker rsFC between the left cerebellum and the right dorsolateral prefrontal cortex, were correlated with a narrower TBW for speech stimuli. Meanwhile, stronger rsFC between the left anterior superior temporal gyrus and the right inferior temporal gyrus was correlated with a wider audiovisual TBW for non-speech stimuli. The TBW-related rsFC was not affected by levels of subclinical traits. In conclusion, this study indicates that audiovisual temporal processing may not be affected by autistic and schizotypal traits, and that rsFC between brain regions involved in multisensory and timing processing may account for the inter-individual differences in TBW width. LAY SUMMARY: Individuals with ASD and schizophrenia are more likely to perceive asynchronous auditory and visual events as occurring simultaneously even if they are well separated in time. We investigated whether similar difficulties in audiovisual temporal processing were present in subclinical populations with high autistic and schizotypal traits. We found that the ability to detect audiovisual asynchrony was not affected by different levels of autistic and schizotypal traits. We also found that connectivity of some brain regions engaged in multisensory and timing tasks might explain an individual's tendency to bind multisensory information within a wide or narrow time window. Autism Res 2021, 14: 668-680. © 2020 International Society for Autism Research and Wiley Periodicals LLC.
Affiliation(s)
- Han-Yu Zhou, Yong-Ming Wang, Rui-Ting Zhang, Raymond C K Chan: Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Eric F C Cheung: Castle Peak Hospital, Hong Kong Special Administrative Region, China
- Christos Pantelis: Melbourne Neuropsychiatry Centre, Department of Psychiatry, The University of Melbourne & Melbourne Health, Carlton South, Victoria, Australia; Florey Institute for Neurosciences and Mental Health, Parkville, Victoria, Australia
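The temporal binding window (TBW) in the study above is measured with audiovisual simultaneity judgment (SJ) tasks. One common analysis, shown here only as a hedged sketch under assumed details rather than the authors' exact procedure, fits a Gaussian to the proportion of "simultaneous" responses across stimulus onset asynchronies (SOAs) and derives a window width from the fit; the data values and the 75%-of-peak criterion are illustrative assumptions.

```python
import numpy as np
from scipy.optimize import curve_fit

def sj_curve(soa, amplitude, mu, sigma):
    """Proportion of 'simultaneous' responses as a Gaussian over SOA (ms)."""
    return amplitude * np.exp(-(soa - mu)**2 / (2 * sigma**2))

# Hypothetical group-averaged SJ data: negative SOA = auditory leading.
soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], dtype=float)
p_simult = np.array([0.10, 0.25, 0.55, 0.85, 0.95, 0.90, 0.70, 0.35, 0.15])

params, _ = curve_fit(sj_curve, soas, p_simult, p0=[1.0, 0.0, 150.0])
amplitude, mu, sigma = params

# One conventional definition: TBW = range of SOAs where the fitted curve
# exceeds 75% of its peak (the criterion choice is an assumption).
criterion = 0.75
half_width = sigma * np.sqrt(-2 * np.log(criterion))
print(f"point of subjective simultaneity: {mu:.0f} ms, TBW width: {2 * half_width:.0f} ms")
```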
5. Baker JM, Gillam RB, Jordan KE. Children's neural activity during number line estimations assessed by functional near-infrared spectroscopy (fNIRS). Brain Cogn 2020;144:105601. PMID: 32739744. PMCID: PMC7855273. DOI: 10.1016/j.bandc.2020.105601.
Abstract
Number line estimation (NLE) is an educational task in which children estimate the location of a value (e.g., 25) on a blank line that represents a numerical range (e.g., 0-100). NLE performance is a strong predictor of success in mathematics, and error patterns on this task help provide a glimpse into how children may represent number internally. However, a missing and fundamental element of this puzzle is the identification of neural correlates of NLE in children. That is, understanding possible neural signatures related to NLE performance will provide valuable insight into the cognitive processes that underlie children's development of NLE ability. Using functional near-infrared spectroscopy (fNIRS), we provide the first investigation of concurrent behavioral and cortical signatures of NLE performance in children. Specifically, our results highlight significant fronto-parietal changes in cortical activation in response to increases in NLE scale (e.g., 0-100 vs. 0-100,000). Furthermore, our results demonstrate that NLE performance feedback (auditory, visual, or audiovisual), as well as children's grade (2nd vs. 3rd) influence cortical responding during an NLE task.
Affiliation(s)
- Joseph M Baker: Center for Interdisciplinary Brain Sciences Research, Division of Interdisciplinary Brain Sciences, Department of Psychiatry and Behavioral Sciences, School of Medicine, Stanford University, United States
- Ronald B Gillam: Department of Communicative Disorders and Deaf Education, Utah State University, United States
- Kerry E Jordan: Department of Psychology, Utah State University, United States
6. Hämäläinen JA, Parviainen T, Hsu YF, Salmelin R. Dynamics of brain activation during learning of syllable-symbol paired associations. Neuropsychologia 2019;129:93-103. PMID: 30930303. DOI: 10.1016/j.neuropsychologia.2019.03.016.
Abstract
Initial stages of reading acquisition require the learning of letter and speech sound combinations. While the long-term effects of audio-visual learning are rather well studied, relatively little is known about the short-term learning effects at the brain level. Here we examined the cortical dynamics of short-term learning using magnetoencephalography (MEG) and electroencephalography (EEG) in two experiments that respectively addressed active and passive learning of the association between shown symbols and heard syllables. In experiment 1, learning was based on feedback provided after each trial. The learning of the audio-visual associations was contrasted with items for which the feedback was meaningless. In experiment 2, learning was based on statistical learning through passive exposure to audio-visual stimuli that were consistently presented with each other and contrasted with audio-visual stimuli that were randomly paired with each other. After 5-10 min of training and exposure, learning-related changes emerged in neural activation around 200 and 350 ms in the two experiments. The MEG results showed activity changes at 350 ms in caudal middle frontal cortex and posterior superior temporal sulcus, and at 500 ms in temporo-occipital cortex. Changes in brain activity coincided with a decrease in reaction times and an increase in accuracy scores. Changes in EEG activity were observed starting at the auditory P2 response followed by later changes after 300 ms. The results show that the short-term learning effects emerge rapidly (manifesting in later stages of audio-visual integration processes) and that these effects are modulated by selective attention processes.
Affiliation(s)
- Jarmo A Hämäläinen, Tiina Parviainen: Centre for Interdisciplinary Brain Research, Department of Psychology, P.O. Box 35, 40014 University of Jyväskylä, Finland
- Yi-Fang Hsu: Department of Educational Psychology and Counseling, National Taiwan Normal University, 10610 Taipei, Taiwan; Institute for Research Excellence in Learning Sciences, National Taiwan Normal University, 10610 Taipei, Taiwan
- Riitta Salmelin: Department of Neuroscience and Biomedical Engineering, 00076 Aalto University, Finland; Aalto NeuroImaging, 00076 Aalto University, Finland
7. Single-Trial Phase Entrainment of Theta Oscillations in Sensory Regions Predicts Human Associative Memory Performance. J Neurosci 2018;38:6299-6309. PMID: 29899027. DOI: 10.1523/jneurosci.0349-18.2018.
Abstract
Episodic memories are rich in sensory information and often contain integrated information from different sensory modalities. For instance, we can store memories of a recent concert with visual and auditory impressions being integrated in one episode. Theta oscillations have recently been implicated in playing a causal role in synchronizing and effectively binding the different modalities together in memory. However, an open question is whether momentary fluctuations in theta synchronization predict the likelihood of associative memory formation for multisensory events. To address this question, we entrained the visual and auditory cortex at theta frequency (4 Hz) in a synchronous or asynchronous manner by modulating the luminance and volume of movies and sounds at 4 Hz, with a phase offset of 0° or 180°. EEG activity from human subjects (both sexes) was recorded while they memorized the association between a movie and a sound. Associative memory performance was significantly enhanced in the 0° compared with the 180° condition. Source-level analysis demonstrated that the physical stimuli effectively entrained their respective cortical areas with a corresponding phase offset. The findings suggest a successful replication of a previous study (Clouter et al., 2017). Importantly, the strength of entrainment during encoding correlated with the efficacy of associative memory such that small phase differences between visual and auditory cortex predicted a high likelihood of correct retrieval in a later recall test. These findings suggest that theta oscillations serve a specific function in the episodic memory system: binding the contents of different modalities into coherent memory episodes.

SIGNIFICANCE STATEMENT: How multisensory experiences are bound to form a coherent episodic memory representation is one of the fundamental questions in human episodic memory research. Evidence from the animal literature suggests that the relative timing between an input and theta oscillations in the hippocampus is crucial for memory formation. We precisely controlled the timing between visual and auditory stimuli and the neural oscillations at 4 Hz using a multisensory entrainment paradigm. Human associative memory formation depends on coincident timing between sensory streams processed by the corresponding brain regions. We provide evidence for a significant role of the relative timing of neural theta activity in human episodic memory at the single-trial level, which reveals a crucial mechanism underlying human episodic memory.
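The entrainment manipulation above modulates movie luminance and sound volume at 4 Hz with a 0° or 180° phase offset between the two streams. The snippet below is a minimal sketch of how such amplitude-modulation envelopes could be generated; the sampling rate, modulation depth, and duration are arbitrary choices, not the study's stimulus parameters.

```python
import numpy as np

def am_envelope(duration_s, fs, freq_hz=4.0, phase_deg=0.0, depth=1.0):
    """Sinusoidal amplitude-modulation envelope in [1 - depth, 1]."""
    t = np.arange(int(duration_s * fs)) / fs
    carrier = 0.5 * (1 + np.sin(2 * np.pi * freq_hz * t + np.deg2rad(phase_deg)))
    return 1.0 - depth * (1.0 - carrier)

fs = 60.0          # e.g., one envelope value per video frame
duration = 3.0     # seconds

luminance_env = am_envelope(duration, fs, phase_deg=0.0)       # visual stream
volume_env_sync = am_envelope(duration, fs, phase_deg=0.0)     # 0° offset condition
volume_env_async = am_envelope(duration, fs, phase_deg=180.0)  # 180° offset condition

# Sanity check: the asynchronous auditory envelope is anti-phase to the visual one.
print(np.corrcoef(luminance_env, volume_env_sync)[0, 1].round(2),
      np.corrcoef(luminance_env, volume_env_async)[0, 1].round(2))
```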
8. Kitada R, Sasaki AT, Okamoto Y, Kochiyama T, Sadato N. Role of the precuneus in the detection of incongruency between tactile and visual texture information: A functional MRI study. Neuropsychologia 2014;64:252-262. DOI: 10.1016/j.neuropsychologia.2014.09.028.
9. Lu Y, Paraskevopoulos E, Herholz SC, Kuchenbuch A, Pantev C. Temporal processing of audiovisual stimuli is enhanced in musicians: evidence from magnetoencephalography (MEG). PLoS One 2014;9:e90686. PMID: 24595014. PMCID: PMC3940930. DOI: 10.1371/journal.pone.0090686.
Abstract
Numerous studies have demonstrated that the structural and functional differences between professional musicians and non-musicians are found not only within a single modality, but also with regard to multisensory integration. In this study we combined psychophysical and neurophysiological measurements to investigate the processing of non-musical audiovisual events presented synchronously or at various levels of asynchrony. We hypothesize that long-term multisensory experience alters temporal audiovisual processing already at a non-musical stage. Behaviorally, musicians scored significantly better than non-musicians in judging whether the auditory and visual stimuli were synchronous or asynchronous. At the neural level, the statistical analysis for the audiovisual asynchronous response revealed three clusters of activations, including the ACC and the SFG and two bilaterally located activations in IFG and STG, in both groups. Musicians, in comparison to the non-musicians, responded to synchronous audiovisual events with enhanced neuronal activity in a broad left posterior temporal region that covers the STG, the insula and the postcentral gyrus. Musicians also showed significantly greater activation in the left cerebellum when confronted with an audiovisual asynchrony. Taken together, our MEG results form a strong indication that long-term musical training alters basic audiovisual temporal processing already at an early stage (directly after the auditory N1 wave), while the psychophysical results indicate that musical training may also provide behavioral benefits in the accuracy of estimates regarding the timing of audiovisual events.
Affiliation(s)
- Yao Lu, Anja Kuchenbuch, Christo Pantev: Institute for Biomagnetism and Biosignalanalysis, University of Münster, Münster, Germany
10. Plaza M, Capelle L, Maigret G, Chaby L. Strengths and weaknesses of multimodal processing in a group of adults with gliomas. Neurocase 2013;19:302-312. PMID: 22554225. DOI: 10.1080/13554794.2012.667128.
Abstract
The present study aimed to analyze the multimodal skills that would be spared, altered, or impaired by gliomas that slowly infiltrate various and diversely localized areas in the cerebral hemispheres. Ten patients and 60 healthy controls were evaluated using four multimodal processing paradigms across 11 tasks. Our objectives were as follows: (a) to describe the strengths and weaknesses of the glioma patients' multimodal processing performance after accounting for task specificity and their individual performances compared to those of the control group; (b) to determine the correlation between lesion localization and impairments; and (c) to identify the tasks that were most sensitive to tumor infiltration and plasticity limits. Our results show that patients as a whole were efficient at most tasks; however, the patients exhibited difficulties in the productive picture-naming task, the receptive verbal judgment task, and the visual/graphic portion of the dual-attention task. The individual case reports show that the difficulties were distributed across the patients and did not correlate with lesion localization and tumor type.
11. Li J, Liu Y, Qin W, Jiang J, Qiu Z, Xu J, Yu C, Jiang T. Age of onset of blindness affects brain anatomical networks constructed using diffusion tensor tractography. Cereb Cortex 2012;23:542-551. PMID: 22371309. DOI: 10.1093/cercor/bhs034.
Abstract
Studying blindness with various onset ages may elucidate the ways that unimodal sensory deprivation at different periods of development shape the human brain. In order to determine the effect of the onset age on brain anatomical networks, we extended a previous study of 17 early blind (EB) subjects with an additional 97 subjects with various onset ages. We constructed binary anatomical networks of these subjects and sighted controls (SC) using diffusion tensor tractography and calculated the topological properties of the network. Based on onset age, the subjects were divided into congenitally blind (CB), EB, adolescent-blind (AB), and late-blind (LB) subgroups. The LB subjects demonstrated a greater connectivity density and a higher global efficiency, similar to the SC. The CB and EB subgroups showed large group differences from the other groups in their topological networks, specifically, a reduced connectivity density and a decreased global efficiency compared with the SC, especially in the frontal and occipital cortices. Additionally, significant correlations were found between age of onset and the topological properties of the anatomical network in the blind. Our results suggest that visual experience during an early period of development is critical for establishing an intact efficient anatomical network in the human brain.
Affiliation(s)
- Jiajia Li: LIAMA Center for Computational Medicine, National Laboratory of Pattern Recognition, Institute of Automation, The Chinese Academy of Sciences, Beijing 100190, China
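The study above compares connectivity density and global efficiency of binary anatomical networks derived from diffusion tensor tractography. As an illustration only (the actual network construction, parcellation, and statistics are far more involved), both metrics can be computed from a binary adjacency matrix, for example with networkx; the toy matrix below is hypothetical.

```python
import numpy as np
import networkx as nx

def network_metrics(adjacency):
    """Density and global efficiency of a binary, undirected network."""
    G = nx.from_numpy_array(np.asarray(adjacency))
    density = nx.density(G)                # fraction of possible edges present
    efficiency = nx.global_efficiency(G)   # mean inverse shortest-path length
    return density, efficiency

# Toy 5-node binary network standing in for a thresholded tractography matrix.
A = np.array([
    [0, 1, 1, 0, 0],
    [1, 0, 1, 1, 0],
    [1, 1, 0, 0, 0],
    [0, 1, 0, 0, 1],
    [0, 0, 0, 1, 0],
])
density, efficiency = network_metrics(A)
print(f"density={density:.2f}, global efficiency={efficiency:.2f}")
```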
12. Butler AJ, James KH. Cross-modal versus within-modal recall: differences in behavioral and brain responses. Behav Brain Res 2011;224:387-396. PMID: 21723328. DOI: 10.1016/j.bbr.2011.06.017.
Abstract
Although human experience is multisensory in nature, previous research has focused predominantly on memory for unisensory as opposed to multisensory information. In this work, we sought to investigate behavioral and neural differences between the cued recall of cross-modal audiovisual associations versus within-modal visual or auditory associations. Participants were presented with cue-target associations comprised of pairs of nonsense objects, pairs of nonsense sounds, objects paired with sounds, and sounds paired with objects. Subsequently, they were required to recall the modality of the target given the cue while behavioral accuracy, reaction time, and blood oxygenation level dependent (BOLD) activation were measured. Successful within-modal recall was associated with modality-specific reactivation in primary perceptual regions, and was more accurate than cross-modal retrieval. When auditory targets were correctly or incorrectly recalled using a cross-modal visual cue, there was re-activation in auditory association cortex, and recall of information from cross-modal associations activated the hippocampus to a greater degree than within-modal associations. Findings support theories that propose an overlap between regions active during perception and memory, and show that behavioral and neural differences exist between within- and cross-modal associations. Overall the current study highlights the importance of the role of multisensory information in memory.
Affiliation(s)
- Andrew J Butler: Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA
13. Physical and perceptual factors shape the neural mechanisms that integrate audiovisual signals in speech comprehension. J Neurosci 2011;31:11338-11350. PMID: 21813693. DOI: 10.1523/jneurosci.6510-10.2011.
Abstract
Face-to-face communication challenges the human brain to integrate information from auditory and visual senses with linguistic representations. Yet the role of bottom-up physical (spectrotemporal structure) input and top-down linguistic constraints in shaping the neural mechanisms specialized for integrating audiovisual speech signals is currently unknown. Participants were presented with speech and sinewave speech analogs in visual, auditory, and audiovisual modalities. Before the fMRI study, they were trained to perceive physically identical sinewave speech analogs as speech (SWS-S) or nonspeech (SWS-N). Comparing audiovisual integration (interactions) of speech, SWS-S, and SWS-N revealed a posterior-anterior processing gradient within the left superior temporal sulcus/gyrus (STS/STG): Bilateral posterior STS/STG integrated audiovisual inputs regardless of spectrotemporal structure or speech percept; in left mid-STS, the integration profile was primarily determined by the spectrotemporal structure of the signals; more anterior STS regions discarded spectrotemporal structure and integrated audiovisual signals constrained by stimulus intelligibility and the availability of linguistic representations. In addition to this "ventral" processing stream, a "dorsal" circuitry encompassing posterior STS/STG and left inferior frontal gyrus differentially integrated audiovisual speech and SWS signals. Indeed, dynamic causal modeling and Bayesian model comparison provided strong evidence for a parallel processing structure encompassing a ventral and a dorsal stream, with speech intelligibility training enhancing the connectivity between posterior and anterior STS/STG. In conclusion, audiovisual speech comprehension emerges in an interactive process with the integration of auditory and visual signals being progressively constrained by stimulus intelligibility along the STS and by spectrotemporal structure in a dorsal fronto-temporal circuitry.
14. Chan CCH, Wong AWK, Ting KH, Whitfield-Gabrieli S, He J, Lee TMC. Cross auditory-spatial learning in early-blind individuals. Hum Brain Mapp 2011;33:2714-2727. PMID: 21932260. DOI: 10.1002/hbm.21395.
Abstract
Cross-modal processing enables the utilization of information received via different sensory organs to facilitate more complicated human actions. We used functional MRI on early-blind individuals to study the neural processes associated with cross auditory-spatial learning. The auditory signals, converted from echoes of ultrasonic signals emitted from a navigation device, were novel to the participants. The subjects were trained repeatedly for 4 weeks in associating the auditory signals with different distances. Subjects' blood-oxygenation-level-dependent responses were captured at baseline and after training using a sound-to-distance judgment task. Whole-brain analyses indicated that the task used in the study involved auditory discrimination as well as spatial localization. The learning process was shown to be mediated by the inferior parietal cortex and the hippocampus, suggesting the integration and binding of auditory features to distances. The right cuneus was found to possibly serve a general rather than a specific role, forming an occipital-enhanced network for cross auditory-spatial learning. This functional network is likely to be unique to those with early blindness, since the normal-vision counterparts shared activities only in the parietal cortex.
Affiliation(s)
- Chetwyn C H Chan: Applied Cognitive Neuroscience Laboratory, Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hong Kong, China
15. Yalachkov Y, Kaiser J, Görres A, Seehaus A, Naumer MJ. Smoking experience modulates the cortical integration of vision and haptics. Neuroimage 2011;59:547-555. PMID: 21835248. DOI: 10.1016/j.neuroimage.2011.07.041.
Abstract
Human neuroplasticity of multisensory integration has been studied mainly in the context of natural or artificial training situations in healthy subjects. However, regular smokers also offer the opportunity to assess the impact of intensive daily multisensory interactions with smoking-related objects on the neural correlates of crossmodal object processing. The present functional magnetic resonance imaging study revealed that smokers show a comparable visuo-haptic integration pattern for both smoking paraphernalia and control objects in the left lateral occipital complex (LOC), a region playing a crucial role in crossmodal object recognition. Moreover, the degree of nicotine dependence correlated positively with the magnitude of visuo-haptic integration in the left LOC for smoking-associated but not for control objects. In contrast, in the left LOC non-smokers displayed a visuo-haptic integration pattern for control objects, but not for smoking paraphernalia. This suggests that prolonged smoking-related multisensory experiences in smokers facilitate the merging of visual and haptic inputs in the lateral occipital complex for the respective stimuli. Studying clinical populations who engage in compulsive activities may represent an ecologically valid approach to investigating the neuroplasticity of multisensory integration.
Affiliation(s)
- Yavor Yalachkov: Institute of Medical Psychology, Goethe-University, Heinrich-Hoffmann-Strasse 10, D-60528 Frankfurt am Main, Germany
16. Henke K. A model for memory systems based on processing modes rather than consciousness. Nat Rev Neurosci 2011;11:523-532. PMID: 20531422. DOI: 10.1038/nrn2850.
Abstract
Prominent models of human long-term memory distinguish between memory systems on the basis of whether learning and retrieval occur consciously or unconsciously. Episodic memory formation requires the rapid encoding of associations between different aspects of an event, which, according to these models, depends on the hippocampus and on consciousness. However, recent evidence indicates that the hippocampus mediates rapid associative learning with and without consciousness in humans and animals, for both long-term and short-term retention. Consciousness seems to be a poor criterion for differentiating between declarative (or explicit) and nondeclarative (or implicit) types of memory. A new model is therefore required in which memory systems are distinguished based on the processing operations involved rather than by consciousness.
Affiliation(s)
- Katharina Henke: University of Bern, Muesmattstrasse 45, 3000 Bern 9, Switzerland
17. Joassin F, Maurage P, Campanella S. The neural network sustaining the crossmodal processing of human gender from faces and voices: An fMRI study. Neuroimage 2011;54:1654-1661. DOI: 10.1016/j.neuroimage.2010.08.073.
18. Gaudes CC, Petridou N, Dryden IL, Bai L, Francis ST, Gowland PA. Detection and characterization of single-trial fMRI BOLD responses: paradigm free mapping. Hum Brain Mapp 2010;32:1400-1418. PMID: 20963818. DOI: 10.1002/hbm.21116.
Abstract
This work presents a novel method of mapping the brain's response to single stimuli in space and time without prior knowledge of the paradigm timing: paradigm free mapping (PFM). This method is based on deconvolution of the hemodynamic response from the voxel time series assuming a linear response and using a ridge-regression algorithm. Statistical inference is performed by defining a spatio-temporal t-statistic and by controlling for multiple comparisons using the false discovery rate procedure. The methodology was validated on five subjects who performed self-paced and visually cued finger tapping at 7 Tesla, with moderate (TR = 2 s) and high (TR = 0.4 s) temporal resolution. The results demonstrate that detection of single-trial BOLD events is feasible without a priori information on the stimulus paradigm. The proposed method opens up the possibility of designing temporally unconstrained paradigms to study the cortical response to unpredictable mental events.
Affiliation(s)
- César Caballero Gaudes: Sir Peter Mansfield Magnetic Resonance Centre, School of Physics and Astronomy, University of Nottingham, Nottingham
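Paradigm free mapping, as summarized above, deconvolves the hemodynamic response from each voxel's time series with ridge regression under a linear BOLD model. The code below is only a schematic single-voxel sketch of that step, with an assumed gamma-shaped HRF, arbitrary TR, noise level, and ridge penalty; the published method adds spatio-temporal t-statistics and false-discovery-rate control on top.

```python
import numpy as np
from scipy.linalg import toeplitz
from scipy.stats import gamma

def hrf(tr, duration=30.0):
    """Simple gamma-shaped hemodynamic response function sampled every TR seconds."""
    t = np.arange(0, duration, tr)
    h = gamma.pdf(t, a=6)          # peaks around 5 s
    return h / h.max()

def ridge_deconvolve(y, tr, lam=1.0):
    """Estimate the neuronal-like signal s from y ≈ H s via ridge regression."""
    h = hrf(tr)
    n = len(y)
    col = np.zeros(n)
    col[:len(h)] = h
    H = toeplitz(col, np.zeros(n))              # convolution (design) matrix
    return np.linalg.solve(H.T @ H + lam * np.eye(n), H.T @ y)

# Simulated voxel: two isolated events at 40 s and 120 s, TR = 2 s.
tr, n_scans = 2.0, 100
s_true = np.zeros(n_scans)
s_true[[20, 60]] = 1.0
y = np.convolve(s_true, hrf(tr))[:n_scans]
y += 0.05 * np.random.default_rng(1).normal(size=n_scans)

s_hat = ridge_deconvolve(y, tr, lam=0.5)
print("scans with the largest estimated responses:", np.argsort(s_hat)[-2:])
```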
19.
20. Naumer MJ, Doehrmann O, Müller NG, Muckli L, Kaiser J, Hein G. Cortical plasticity of audio-visual object representations. Cereb Cortex 2008;19:1641-1653. PMID: 19015373. PMCID: PMC2693620. DOI: 10.1093/cercor/bhn200.
Abstract
Several regions in human temporal and frontal cortex are known to integrate visual and auditory object features. The processing of audio-visual (AV) associations in these regions has been found to be modulated by object familiarity. The aim of the present study was to explore training-induced plasticity in human cortical AV integration. We used functional magnetic resonance imaging to analyze the neural correlates of AV integration for unfamiliar artificial object sounds and images in naïve subjects (PRE-training) and after a behavioral training session in which subjects acquired associations between some of these sounds and images (POST-training). In the PRE-training session, unfamiliar artificial object sounds and images were mainly integrated in right inferior frontal cortex (IFC). The POST-training results showed extended integration-related IFC activations bilaterally, and a recruitment of additional regions in bilateral superior temporal gyrus/sulcus and intraparietal sulcus. Furthermore, training-induced differential response patterns to mismatching compared with matching (i.e., associated) artificial AV stimuli were most pronounced in left IFC. These effects were accompanied by complementary training-induced congruency effects in right posterior middle temporal gyrus and fusiform gyrus. Together, these findings demonstrate that short-term cross-modal association learning was sufficient to induce plastic changes of both AV integration of object stimuli and mechanisms of AV congruency processing.
Affiliation(s)
- Marcus J Naumer: Institute of Medical Psychology, Goethe-University, Heinrich-Hoffmann-Strasse 10, Frankfurt am Main, Germany
21. Semantics and the multisensory brain: How meaning modulates processes of audio-visual integration. Brain Res 2008;1242:136-150. DOI: 10.1016/j.brainres.2008.03.071.
22. Decreasing task-related brain activity over repeated functional MRI scans and sessions with no change in performance: implications for serial investigations. Exp Brain Res 2008;192:231-239. DOI: 10.1007/s00221-008-1574-7.
23. Wysoski SG, Benuskova L, Kasabov N. Adaptive Spiking Neural Networks for Audiovisual Pattern Recognition. Neural Information Processing 2008. DOI: 10.1007/978-3-540-69162-4_42.
24. Automatic auditory change detection in humans is influenced by visual–auditory associative learning. Neuroreport 2007;18:1697-1701. DOI: 10.1097/wnr.0b013e3282f0d118.
25. Hein G, Doehrmann O, Müller NG, Kaiser J, Muckli L, Naumer MJ. Object familiarity and semantic congruency modulate responses in cortical audiovisual integration areas. J Neurosci 2007;27:7881-7887. PMID: 17652579. PMCID: PMC6672730. DOI: 10.1523/jneurosci.1740-07.2007.
Abstract
The cortical integration of auditory and visual features is crucial for efficient object recognition. Previous studies have shown that audiovisual (AV) integration is affected by where and when auditory and visual features occur. However, because relatively little is known about the impact of what is integrated, we here investigated the impact of semantic congruency and object familiarity on the neural correlates of AV integration. We used functional magnetic resonance imaging to identify regions involved in the integration of both (congruent and incongruent) familiar animal sounds and images and of arbitrary combinations of unfamiliar artificial sounds and object images. Unfamiliar object images and sounds were integrated in the inferior frontal cortex (IFC), possibly reflecting learning of novel AV associations. Integration of familiar, but semantically incongruent combinations also correlated with IFC activation and additionally involved the posterior superior temporal sulcus (pSTS). For highly familiar semantically congruent AV pairings, we again found AV integration effects in pSTS and additionally in superior temporal gyrus. These findings demonstrate that the neural correlates of object-related AV integration reflect both semantic congruency and familiarity of the integrated sounds and images.
Affiliation(s)
- Grit Hein: Cognitive Neurology Unit; Brain Imaging Center; Helen Wills Neuroscience Institute, University of California, Berkeley, California 94720
- Oliver Doehrmann: Institute of Medical Psychology, Johann Wolfgang Goethe-University, D-60528 Frankfurt am Main, Germany
- Jochen Kaiser: Institute of Medical Psychology, Johann Wolfgang Goethe-University, D-60528 Frankfurt am Main, Germany
- Lars Muckli: Brain Imaging Center; Department of Neurophysiology, Max Planck Institute for Brain Research, D-60528 Frankfurt am Main, Germany
- Marcus J. Naumer: Brain Imaging Center; Institute of Medical Psychology, Johann Wolfgang Goethe-University, D-60528 Frankfurt am Main, Germany; Department of Neurophysiology, Max Planck Institute for Brain Research, D-60528 Frankfurt am Main, Germany
26. Degerman A, Rinne T, Pekkola J, Autti T, Jääskeläinen IP, Sams M, Alho K. Human brain activity associated with audiovisual perception and attention. Neuroimage 2007;34:1683-1691. PMID: 17204433. DOI: 10.1016/j.neuroimage.2006.11.019.
Abstract
Coherent perception of objects in our environment often requires perceptual integration of auditory and visual information. Recent behavioral data suggest that audiovisual integration depends on attention. The current study investigated the neural basis of audiovisual integration using 3-Tesla functional magnetic resonance imaging (fMRI) in 12 healthy volunteers during attention to auditory or visual features, or audiovisual feature combinations of abstract stimuli (simultaneous harmonic sounds and colored circles). Audiovisual attention was found to modulate activity in the same frontal, temporal, parietal and occipital cortical regions as auditory and visual attention. In addition, attention to audiovisual feature combinations produced stronger activity in the superior temporal cortices than attention to only auditory or visual features. These modality-specific areas might be involved in attention-dependent perceptual binding of synchronous auditory and visual events into coherent audiovisual objects. Furthermore, the modality-specific temporal auditory and occipital visual cortical areas showed attention-related modulations during both auditory and visual attention tasks. This result supports the proposal that attention to stimuli in one modality can spread to encompass synchronously presented stimuli in another modality.
27. Tanabe HC, Honda M, Sadato N. Functionally segregated neural substrates for arbitrary audiovisual paired-association learning. J Neurosci 2005;25:6409-6418. PMID: 16000632. PMCID: PMC6725270. DOI: 10.1523/jneurosci.0636-05.2005.
Abstract
To clarify the neural substrates and their dynamics during crossmodal association learning, we conducted functional magnetic resonance imaging (MRI) during audiovisual paired-association learning of delayed matching-to-sample tasks. Thirty subjects were involved in the study; 15 performed an audiovisual paired-association learning task, and the remainder completed a control visuo-visual task. Each trial consisted of the successive presentation of a pair of stimuli. Subjects were asked to identify predefined audiovisual or visuo-visual pairs by trial and error. Feedback for each trial was given regardless of whether the response was correct or incorrect. During the delay period, several areas showed an increase in the MRI signal as learning proceeded: crossmodal activity increased in unimodal areas corresponding to visual or auditory areas, and polymodal responses increased in the occipitotemporal junction and parahippocampal gyrus. This pattern was not observed in the visuo-visual intramodal paired-association learning task, suggesting that crossmodal associations might be formed by binding unimodal sensory areas via polymodal regions. In both the audiovisual and visuo-visual tasks, the MRI signal in the superior temporal sulcus (STS) in response to the second stimulus and feedback peaked during the early phase of learning and then decreased, indicating that the STS might be key to the creation of paired associations, regardless of stimulus type. In contrast to the activity changes in the regions discussed above, there was constant activity in the frontoparietal circuit during the delay period in both tasks, implying that the neural substrates for the formation and storage of paired associates are distinct from working memory circuits.
Affiliation(s)
- Hiroki C Tanabe: Division of Cerebral Integration, Department of Cerebral Research, National Institute for Physiological Sciences, Okazaki, Aichi 444-8585, Japan
28. Little DM, Thulborn KR. Prototype-distortion category learning: a two-phase learning process across a distributed network. Brain Cogn 2006;60:233-243. PMID: 16406637. DOI: 10.1016/j.bandc.2005.06.004.
Abstract
This paper reviews a body of work conducted in our laboratory that applies functional magnetic resonance imaging (fMRI) to better understand the biological response and change that occur during prototype-distortion learning. We review results from two experiments (Little, Klein, Shobat, McClure, & Thulborn, 2004; Little & Thulborn, 2005) that support a two-stage model of increasing neuronal efficiency: an initial period of recruitment of tissue across a distributed network, followed by a period of increasing specialization with decreasing activation volume across the same network. Across the two studies, participants learned to classify patterns of random-dot distortions (Posner & Keele, 1968) into categories. At four points across this learning process, subjects underwent fMRI examination using a category-matching task. A large-scale network that was altered across the protocol was identified, including the frontal eye fields, both the inferior and superior parietal lobules, and visual cortex. As behavioral performance increased, the volume of activation within these regions first increased and later in the protocol decreased. Based on our review of this work we propose that: (i) category learning is reflected as specialization of the same network initially implicated in completing the novel task, and (ii) this network encompasses regions not previously reported to be affected by prototype-distortion learning.
Affiliation(s)
- Deborah M Little: Center for Stroke Research, Department of Neurology and Rehabilitation, University of Illinois at Chicago, 60612, USA
29. Prince SE, Daselaar SM, Cabeza R. Neural correlates of relational memory: successful encoding and retrieval of semantic and perceptual associations. J Neurosci 2005;25:1203-1210. PMID: 15689557. PMCID: PMC6725951. DOI: 10.1523/jneurosci.2540-04.2005.
Abstract
Using event-related functional magnetic resonance imaging, we identified brain regions involved in successful relational memory (RM) during encoding and retrieval for semantic and perceptual associations or in general, independent of phase and content. Participants were scanned while encoding and later retrieving associations between pairs of words (semantic RM) or associations between words and fonts (perceptual RM). Encoding success activity (ESA) was identified by comparing study-phase activity for items subsequently remembered (hits) versus forgotten (misses) and retrieval success activity (RSA) by comparing test-phase activity for hits versus misses. The study yielded three main sets of findings. First, ESA-RSA differences were found within the medial temporal lobes (MTLs) and within the prefrontal cortex (PFC). Within the left MTL, ESA was greater in the anterior hippocampus, and RSA was greater in the posterior parahippocampal cortex/hippocampus. This finding is consistent with the notion of an encoding-retrieval gradient along the longitudinal MTL axis. Within the left PFC, ESA was greater in ventrolateral PFC, and RSA was greater in dorsolateral and anterior PFC. This is the first evidence of a dissociation in successful encoding and retrieval activity within left PFC. Second, consistent with the transfer-appropriate processing principle, some ESA regions were reactivated during RSA in a content-specific manner. For semantic RM, these regions included the left ventrolateral PFC, whereas for perceptual RM, they included occipitoparietal and right parahippocampal regions. Finally, only one region in the entire brain was associated with RM in general (i.e., for both semantic and perceptual ESA and RSA): the left hippocampus. This finding highlights the fundamental role of the hippocampus in RM.
Affiliation(s)
- Steven E Prince: Center for Cognitive Neuroscience and Department of Psychological and Brain Sciences, Duke University, Durham, North Carolina 27708, USA
30. Little DM, Thulborn KR. Correlations of cortical activation and behavior during the application of newly learned categories. Cogn Brain Res 2005;25:33-47. PMID: 15936179. DOI: 10.1016/j.cogbrainres.2005.04.015.
Abstract
Large individual differences are commonly observed during the early stages of category learning in both functional MRI (fMRI) activation maps and behavioral data. The current investigation characterizes this variability by correlating the volume of activation with behavioral performance. Healthy subjects were trained to classify patterns of random dots into categories. Training was carried out using a 4-choice categorization task with feedback. Functional MRI was performed prior to any training and then following each of 3 training sessions. The fMRI sessions involved the presentation of 3 separate paradigms which required the skill imparted by the training to determine whether two patterns of dots belonged to the same category. Contrasts between the 3 paradigms allowed the examination of the effects of training and of familiarity with the task. For fMRI performed with those materials used during training, increases in the volume of activation were observed initially. As behavioral performance continued to improve, reductions in activation were observed across regions involved in visuospatial processing and spatial attention. These reductions in activation were observed only for those materials used in training and only after high levels of performance were achieved. The magnitude of these reductions in activation correlated with each individual's own rate of learning. The present data support the observation that at least two stages of cortical activation underlie the use of newly learned categories. The first, recruitment of nearby tissue, is observed as initial increases in the volumes of activation. These initial stages of recruitment are followed by specialization across the same network which is observed as a reduction in activation with continued improvements in behavioral performance.
Affiliation(s)
- Deborah M Little
- Department of Neurology and Rehabilitation and Anatomy and Cell Biology, Center for Stroke Research, University of Illinois at Chicago, Chicago, IL 60612, USA.
31
Little DM, Klein R, Shobat DM, McClure ED, Thulborn KR. Changing patterns of brain activation during category learning revealed by functional MRI. Brain Res Cogn Brain Res 2005; 22:84-93. [PMID: 15561504 DOI: 10.1016/j.cogbrainres.2004.07.011] [Citation(s) in RCA: 28] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 07/23/2004] [Indexed: 11/22/2022]
Abstract
Functional magnetic resonance imaging (fMRI) was used to investigate neural changes as a function of category learning in healthy adults (n=8). Subjects were trained to classify patterns of dots into four categories over 4 consecutive days. fMRI was performed before training and again after each training session to monitor the changes that accompanied learning. During fMRI, subjects determined whether two patterns of dots were members of the same category. The behavioral effect of training was observed as increased response accuracy achieved within shortened response times. fMRI showed initial increases in volumes of activation distributed across the known visuospatial processing networks. The regions affected by learning were those involved in the planning and execution of eye movements (frontal and supplementary eye fields, FEF and SEF), spatial attention (superior and inferior parietal lobules, SPL and IPL), and visual processing (primary, secondary, and tertiary visual cortices). The volumes of activation then decreased as training progressed further. Of the two proposed mechanisms for learning, strengthened connectivity within a given network versus selection of different networks, our data support the former.
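One simple way to quantify the behavioral improvement described above is to fit session-wise accuracy with a saturating exponential, whose rate parameter summarizes how quickly performance approaches its asymptote. The sketch below uses hypothetical accuracies and a functional form chosen for illustration; it is not taken from the paper.

import numpy as np
from scipy.optimize import curve_fit

def learning_curve(session, asymptote, gain, rate):
    """Saturating exponential: accuracy rises from an initial level toward an asymptote."""
    return asymptote - gain * np.exp(-rate * session)

sessions = np.arange(4)                        # 4 consecutive training days
accuracy = np.array([0.30, 0.55, 0.72, 0.80])  # hypothetical 4-choice accuracy per day

params, _ = curve_fit(learning_curve, sessions, accuracy, p0=[0.85, 0.55, 1.0])
asymptote, gain, rate = params
print(f"asymptote = {asymptote:.2f}, rate = {rate:.2f} per session")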
Affiliation(s)
- Deborah M Little
- Center for Magnetic Resonance Research, College of Medicine, University of Illinois, Room 1193, MC 707, 1801 W. Taylor Street, Chicago, IL 60612, USA.
32
Amaro E, Williams SCR, Shergill SS, Fu CHY, MacSweeney M, Picchioni MM, Brammer MJ, McGuire PK. Acoustic noise and functional magnetic resonance imaging: current strategies and future prospects. J Magn Reson Imaging 2002; 16:497-510. [PMID: 12412026 DOI: 10.1002/jmri.10186] [Citation(s) in RCA: 119] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/10/2022] Open
Abstract
Functional magnetic resonance imaging (fMRI) has become the method of choice for studying the neural correlates of cognitive tasks. Nevertheless, the scanner produces acoustic noise during image acquisition, which is a problem for studies of the auditory pathway and of language processing in general. The scanner acoustic noise not only produces activation in brain regions involved in auditory processing, but also interferes with stimulus presentation. Several strategies can be used to address this problem, including modifications of hardware and software. Although reducing the acoustic noise at its source would be ideal, this would require substantial hardware modifications to the current base of installed MRI systems; the most common strategy therefore relies on software modifications. In this work we consider three main types of acquisition: compressed, partially silent, and silent. For each implementation, paradigms using block and event-related designs are assessed. We also provide new data from a silent event-related (SER) design, which show a higher blood oxygen level-dependent (BOLD) response to a simple auditory cue than a conventional image acquisition does.
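The timing logic behind sparse, partially silent acquisition can be sketched as follows; all numerical values are illustrative assumptions rather than parameters from the paper. Each trial presents the stimulus during a silent gap and acquires a single clustered volume a few seconds later, so that acquisition noise neither masks the stimulus nor coincides with the peak of the auditory BOLD response.

SILENT_GAP = 7.0         # seconds of silence per trial (no gradient noise) -- illustrative value
ACQ_DURATION = 2.0       # seconds needed to acquire one clustered volume -- illustrative value
STIM_TO_ACQ_DELAY = 5.0  # stimulus-to-acquisition delay aimed at the BOLD peak -- illustrative value
N_TRIALS = 10

trial_length = SILENT_GAP + ACQ_DURATION
for trial in range(N_TRIALS):
    t0 = trial * trial_length
    stim_onset = t0 + 1.0                        # stimulus presented early in the silent gap
    acq_onset = stim_onset + STIM_TO_ACQ_DELAY   # clustered volume acquired ~5 s after the stimulus
    print(f"trial {trial:2d}: stimulus at {stim_onset:6.1f} s, "
          f"acquisition {acq_onset:.1f}-{acq_onset + ACQ_DURATION:.1f} s")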
Affiliation(s)
- Edson Amaro
- Institute of Psychiatry, King's College London, UK.
33
Fort A, Delpuech C, Pernier J, Giard MH. Early auditory-visual interactions in human cortex during nonredundant target identification. BRAIN RESEARCH. COGNITIVE BRAIN RESEARCH 2002; 14:20-30. [PMID: 12063127 DOI: 10.1016/s0926-6410(02)00058-7] [Citation(s) in RCA: 128] [Impact Index Per Article: 5.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
A common finding of behavioral studies is that objects characterized by redundant multisensory cues are identified more rapidly than the same objects presented in either unimodal condition. In a previous electrophysiological study in humans, we described a network of crossmodal interactions that could be associated with this facilitation effect [M.H. Giard, F. Peronnet, J. Cogn. Neurosci. 11(5) (1999) 473-490]. Here, we sought to determine whether the recognition of objects characterized by nonredundant bimodal components may still induce crossmodal neural interactions. Subjects had to identify three objects defined either by auditory or visual features alone, or by the combination of nonredundant auditory and visual features. As expected, behavioral measures showed no sign of facilitation in bimodal processing. Yet, event-related potential analysis revealed early (<200 ms latency) crossmodal activities in sensory-specific and nonspecific cortical areas, which depended in part on the sensory dominance of each subject in performing the task. Comparing the interaction patterns involved in redundant and nonredundant cue processing provides evidence for the robustness of the principle of crossmodal neural synergy, which applies whatever the stimulus content (redundant or nonredundant information), and for the flexibility of the neural networks of integration, which are sensitive both to the nature of the perceptual task and to the sensory skill of the individual in that particular task.
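The additive-model test for crossmodal interactions mentioned above can be illustrated with simulated waveforms (the following is a toy example, not the authors' data or code): the bimodal event-related potential is compared with the sum of the two unimodal potentials, and any residual within the first 200 ms is taken as an early auditory-visual interaction.

import numpy as np

rng = np.random.default_rng(2)
fs = 500                                # sampling rate in Hz (assumed)
times = np.arange(0, 0.4, 1 / fs)       # 0-400 ms epoch

def toy_erp(peak_latency, amplitude):
    """Gaussian-shaped evoked response plus noise (simulation only)."""
    return amplitude * np.exp(-((times - peak_latency) ** 2) / (2 * 0.02 ** 2)) \
        + rng.normal(0, 0.1, times.size)

erp_a = toy_erp(0.10, 2.0)     # auditory-alone average
erp_v = toy_erp(0.15, 1.5)     # visual-alone average
erp_av = toy_erp(0.12, 4.0)    # bimodal average (deliberately more than additive)

interaction = erp_av - (erp_a + erp_v)  # additive-model residual: AV - (A + V)
early = times < 0.2                     # window of interest: below 200 ms
peak_idx = np.abs(interaction[early]).argmax()
print(f"peak early interaction {interaction[early][peak_idx]:.2f} (a.u.) "
      f"at {times[early][peak_idx] * 1000:.0f} ms")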
Affiliation(s)
- Alexandra Fort
- INSERM U280: Mental Processes and Brain Activation, 151 Cours Albert Thomas, 69424 Lyon Cedex 03, France
34
Calvert GA, Hansen PC, Iversen SD, Brammer MJ. Detection of audio-visual integration sites in humans by application of electrophysiological criteria to the BOLD effect. Neuroimage 2001; 14:427-38. [PMID: 11467916 DOI: 10.1006/nimg.2001.0812] [Citation(s) in RCA: 321] [Impact Index Per Article: 14.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
Electrophysiological studies in nonhuman primates and other mammals have shown that sensory cues from different modalities that appear at the same time and in the same location can increase the firing rate of multisensory cells in the superior colliculus to a level exceeding that predicted by summing the responses to the unimodal inputs. In contrast, spatially disparate multisensory cues can induce a profound response depression. We have previously demonstrated using functional magnetic resonance imaging (fMRI) that similar indices of crossmodal facilitation and inhibition are detectable in human cortex when subjects listen to speech while viewing visually congruent and incongruent lip and mouth movements. Here, we have used fMRI to investigate whether similar BOLD signal changes are observable during the crossmodal integration of nonspeech auditory and visual stimuli, matched or mismatched solely on the basis of their temporal synchrony, and if so, whether these crossmodal effects occur in brain areas similar to those identified during the integration of audio-visual speech. Subjects were exposed to synchronous and asynchronous auditory (white noise bursts) and visual (black/white alternating checkerboard) stimuli and to each modality in isolation. Synchronous and asynchronous bimodal inputs produced superadditive BOLD response enhancement and response depression across a large network of polysensory areas. The most highly significant of these crossmodal gains and decrements were observed in the superior colliculi. Other regions exhibiting these crossmodal interactions included cortex within the superior temporal sulcus, intraparietal sulcus, insula, and several foci in the frontal lobe, including within the superior and ventromedial frontal gyri. These data demonstrate the efficacy of using an analytic approach informed by electrophysiology to identify multisensory integration sites in humans and suggest that the particular network of brain areas implicated in these crossmodal integrative processes depends on the nature of the correspondence between the different sensory inputs (e.g. space, time, and/or form).
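In simplified form, the electrophysiologically derived criterion described above can be expressed as a voxel-wise test for superadditivity and response depression. The sketch below uses made-up response estimates and omits the statistical machinery of the actual analysis; it only illustrates the inequality logic (synchronous AV > A + V, asynchronous AV < A + V).

import numpy as np

rng = np.random.default_rng(3)
n_voxels = 10000
beta_a = rng.normal(1.0, 0.5, n_voxels)       # auditory-alone response estimates
beta_v = rng.normal(1.0, 0.5, n_voxels)       # visual-alone response estimates
beta_sync = rng.normal(2.2, 0.6, n_voxels)    # synchronous audio-visual condition
beta_async = rng.normal(1.6, 0.6, n_voxels)   # asynchronous audio-visual condition

unimodal_sum = beta_a + beta_v
superadditive = beta_sync > unimodal_sum      # crossmodal gain for synchronous input
depressed = beta_async < unimodal_sum         # crossmodal decrement for asynchronous input

candidate_sites = superadditive & depressed
print(f"{candidate_sites.sum()} of {n_voxels} voxels satisfy both illustrative criteria")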
Affiliation(s)
- G A Calvert
- Oxford Centre for Functional Magnetic Resonance Imaging of the Brain (FMRIB), Oxford, OX3 1DU, UK
35
Fernández G, Tendolkar I. Integrated brain activity in medial temporal and prefrontal areas predicts subsequent memory performance: human declarative memory formation at the system level. Brain Res Bull 2001; 55:1-9. [PMID: 11427332 DOI: 10.1016/s0361-9230(01)00494-4] [Citation(s) in RCA: 77] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/19/2022]
Abstract
After an era in which lesion studies identified the declarative memory system and its essential anatomical structures, functional imaging and event-related potential studies have begun to delineate the neural underpinnings of declarative memory formation at the system level. By memory formation, we refer to those mnemonic processes present during encoding that transform perceptual representations into enduring memories. Recent studies have revealed that distinct regions in medial temporal and prefrontal areas exhibit more neural activity during successful than unsuccessful memory formation. We attempt to identify the nature of the processes underlying these subsequent memory effects. The reviewed data suggest specific mnemonic operations in the medial temporal lobe that may be integrated with semantic/perceptual operations and with supporting operations in the prefrontal cortex. The formation of relational and non-relational memories may be supported by distinct subregions within these two brain regions. While the medial temporal lobe may have a serial organizational structure, with a processing hierarchy, interactions between medial temporal and prefrontal areas seem to occur in a parallel and bi-directional fashion. Interacting with this system, emotionally arousing events enhance neural activity in the amygdala, which in turn may modulate processing in other brain regions responsible for declarative memory formation.
Affiliation(s)
- G Fernández
- Department of Epileptology, University of Bonn, Bonn, Germany.
36
Hecke PV. Current awareness. NMR IN BIOMEDICINE 2000; 13:314-319. [PMID: 10960923 DOI: 10.1002/1099-1492(200008)13:5<314::aid-nbm627>3.0.co;2-0] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/23/2023]
Abstract
In order to keep subscribers up-to-date with the latest developments in their field, John Wiley & Sons are providing a current awareness service in each issue of the journal. The bibliography contains newly published material in the field of NMR in biomedicine. Each bibliography is divided into 9 sections: 1 Books, Reviews & Symposia; 2 General; 3 Technology; 4 Brain and Nerves; 5 Neuropathology; 6 Cancer; 7 Cardiac, Vascular and Respiratory Systems; 8 Liver, Kidney and Other Organs; 9 Muscle and Orthopaedic. Within each section, articles are listed in alphabetical order with respect to author. If, in the preceding period, no publications are located relevant to any one of these headings, that section will be omitted.
Affiliation(s)
- PV Hecke
- Katholieke Universiteit Leuven, Faculteit der Geneeskunde, Biomedische NMR Eenheid, Onderwijs en Navorsing, Gasthuisberg, B-3000 Leuven, Belgium