1
Alwashmi K, Meyer G, Rowe F, Ward R. Enhancing learning outcomes through multisensory integration: A fMRI study of audio-visual training in virtual reality. Neuroimage 2024; 285:120483. [PMID: 38048921] [DOI: 10.1016/j.neuroimage.2023.120483]
Abstract
The integration of information from different sensory modalities is a fundamental process that enhances perception and performance in real and virtual reality (VR) environments. Understanding these mechanisms, especially during learning tasks that exploit novel multisensory cue combinations, provides opportunities for the development of new rehabilitative interventions. This study aimed to investigate how functional brain changes support behavioural performance improvements during an audio-visual (AV) learning task. Twenty healthy participants underwent 30 min of daily VR training for four weeks. The task was an AV adaptation of a 'scanning training' paradigm that is commonly used in hemianopia rehabilitation. Functional magnetic resonance imaging (fMRI) and performance data were collected at baseline, after two and four weeks of training, and four weeks post-training. We show that behavioural performance, operationalised as mean reaction time (RT) reduction in VR, improved significantly. In separate tests in a controlled laboratory environment, we showed that the behavioural performance gains in the VR training environment transferred to a significant mean RT reduction for the trained AV voluntary task on a computer screen. Enhancements were observed in both the visual-only and AV conditions, with the latter demonstrating a faster response time supported by the presence of audio cues. The behavioural learning effect also transferred to two additional tasks that were tested: a visual search task and an involuntary visual task. Our fMRI results reveal an increase in functional activation (BOLD signal) in multisensory brain regions involved in early-stage AV processing: the thalamus, the caudal inferior parietal lobe and the cerebellum. These functional changes were observed only for the trained, multisensory, task and not for unimodal visual stimulation. Functional activation changes in the thalamus were significantly correlated with behavioural performance improvements.
This study demonstrates that incorporating spatial auditory cues to voluntary visual training in VR leads to augmented brain activation changes in multisensory integration, resulting in measurable performance gains across tasks. The findings highlight the potential of VR-based multisensory training as an effective method for enhancing cognitive function and as a potentially valuable tool in rehabilitative programmes.
Affiliation(s)
- Kholoud Alwashmi
- Faculty of Health and Life Sciences, University of Liverpool, United Kingdom; Department of Radiology, Princess Nourah bint Abdulrahman University, Saudi Arabia.
- Georg Meyer
- Digital Innovation Facility, University of Liverpool, United Kingdom
- Fiona Rowe
- Institute of Population Health, University of Liverpool, United Kingdom
- Ryan Ward
- Digital Innovation Facility, University of Liverpool, United Kingdom; School of Computer Science and Mathematics, Liverpool John Moores University, United Kingdom
2
The development of audio-visual temporal precision precedes its rapid recalibration. Sci Rep 2022; 12:21591. [PMID: 36517503] [PMCID: PMC9751280] [DOI: 10.1038/s41598-022-25392-y]
Abstract
Through development, multisensory systems reach a balance between stability and flexibility: the systems optimally integrate cross-modal signals from the same events while remaining adaptive to environmental changes. Is continuous intersensory recalibration required to shape optimal integration mechanisms, or does multisensory integration develop prior to recalibration? Here, we examined the development of multisensory integration and rapid recalibration in the temporal domain by re-analyzing published datasets for audio-visual, audio-tactile, and visual-tactile combinations. Results showed that children reach an adult level of precision in audio-visual simultaneity perception and show the first signs of rapid recalibration at 9 years of age. In contrast, there was very weak rapid recalibration for other cross-modal combinations at all ages, even when adult levels of temporal precision had developed. Thus, the development of audio-visual rapid recalibration appears to require the maturation of temporal precision. It may serve to accommodate distance-dependent travel-time differences between light and sound.
3
Tao Q, Chan CCH, Luo YJ, Li JJ, Ting KH, Wang J, Lee TMC. How does experience modulate auditory spatial processing in individuals with blindness? Brain Topogr 2013; 28:506-19. [PMID: 24322827] [PMCID: PMC4408360] [DOI: 10.1007/s10548-013-0339-1]
Abstract
Comparing early- and late-onset blindness in individuals offers a unique model for studying the influence of visual experience on neural processing. This study investigated how prior visual experience modulates auditory spatial processing among blind individuals. BOLD responses of early- and late-onset blind participants were captured while performing a sound localization task. The task required participants to listen to novel “Bat-ears” sounds, analyze the spatial information embedded in the sounds, and identify, out of 15 locations, where the sound would have been emitted. In addition to sound localization, participants were assessed on visuospatial working memory and general intellectual abilities. The results revealed common increases in BOLD responses in the middle occipital gyrus, superior frontal gyrus, precuneus, and precentral gyrus during sound localization for both groups. Between-group dissociations, however, were found in the right middle occipital gyrus and left superior frontal gyrus. The BOLD responses in the left superior frontal gyrus were significantly correlated with accuracy on sound localization and visuospatial working memory abilities among the late-onset blind participants. In contrast, accuracy on sound localization correlated only with BOLD responses in the right middle occipital gyrus among their early-onset counterparts. The findings support the notion that early-onset blind individuals rely more on the occipital areas as a result of cross-modal plasticity for auditory spatial processing, while late-onset blind individuals rely more on the prefrontal areas, which subserve visuospatial working memory.
Collapse
Affiliation(s)
- Qian Tao
- Applied Cognitive Neuroscience Laboratory, Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Hung Hom, Kowloon, Hong Kong, China
4
Beer AL, Plank T, Meyer G, Greenlee MW. Combined diffusion-weighted and functional magnetic resonance imaging reveals a temporal-occipital network involved in auditory-visual object processing. Front Integr Neurosci 2013; 7:5. [PMID: 23407860] [PMCID: PMC3570774] [DOI: 10.3389/fnint.2013.00005]
Abstract
Functional magnetic resonance imaging (fMRI) showed that the superior temporal and occipital cortices are involved in multisensory integration. Probabilistic fiber tracking based on diffusion-weighted MRI suggests that multisensory processing is supported by white matter connections between auditory cortex and the temporal and occipital lobe. Here, we present a combined functional MRI and probabilistic fiber tracking study that reveals multisensory processing mechanisms that remained undetected by either technique alone. Ten healthy participants passively observed visually presented lip or body movements, heard speech or body action sounds, or were exposed to a combination of both. Bimodal stimulation engaged a temporal-occipital brain network including the multisensory superior temporal sulcus (msSTS), the lateral superior temporal gyrus (lSTG), and the extrastriate body area (EBA). A region-of-interest (ROI) analysis showed multisensory interactions (e.g., subadditive responses to bimodal compared to unimodal stimuli) in the msSTS, the lSTG, and the EBA region. Moreover, sounds elicited responses in the medial occipital cortex. Probabilistic tracking revealed white matter tracts between the auditory cortex and the medial occipital cortex, the inferior occipital cortex (IOC), and the superior temporal sulcus (STS). However, STS terminations of auditory cortex tracts showed limited overlap with the msSTS region. Instead, msSTS was connected to primary sensory regions via intermediate nodes in the temporal and occipital cortex. Similarly, the lSTG and EBA regions showed limited direct white matter connections but instead were connected via intermediate nodes. Our results suggest that multisensory processing in the STS is mediated by separate brain areas that form a distinct network in the lateral temporal and inferior occipital cortex.
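The "subadditive responses" criterion mentioned in the ROI analysis compares the bimodal response against the sum of the two unimodal responses. A minimal sketch of that additive test; the helper function and the beta values are illustrative, not taken from the paper:

```python
def multisensory_interaction(auditory: float, visual: float, bimodal: float) -> str:
    """Classify an ROI response against the additive prediction A + V.

    Responses above A + V are superadditive, below are subadditive,
    and equal responses are simply additive.
    """
    additive = auditory + visual
    if bimodal > additive:
        return "superadditive"
    if bimodal < additive:
        return "subadditive"
    return "additive"

# Hypothetical mean ROI beta estimates (arbitrary units) in the direction
# the study reports for regions such as msSTS, lSTG, and EBA:
# the bimodal response (1.5) falls short of the additive sum (0.8 + 1.1).
print(multisensory_interaction(auditory=0.8, visual=1.1, bimodal=1.5))
```

In practice this comparison is run per participant on fitted GLM betas and tested statistically, rather than classified with a hard threshold as in this sketch.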
Affiliation(s)
- Anton L. Beer
- Institut für Psychologie, Universität Regensburg, Regensburg, Germany
- Experimental and Clinical Neurosciences Programme, Universität Regensburg, Regensburg, Germany
- Tina Plank
- Institut für Psychologie, Universität Regensburg, Regensburg, Germany
- Georg Meyer
- Department of Experimental Psychology, University of Liverpool, Liverpool, UK
- Mark W. Greenlee
- Institut für Psychologie, Universität Regensburg, Regensburg, Germany
- Experimental and Clinical Neurosciences Programme, Universität Regensburg, Regensburg, Germany
5
Fifer JM, Barutchu A, Shivdasani MN, Crewther SG. Verbal and novel multisensory associative learning in adults. F1000Res 2013; 2:34. [PMID: 24627770] [PMCID: PMC3907154] [DOI: 10.12688/f1000research.2-34.v2]
Abstract
To date, few studies have focused on the behavioural differences between the learning of multisensory auditory-visual and intra-modal associations. More specifically, the relative benefits of novel auditory-visual and verbal-visual associations for learning have not been directly compared. In Experiment 1, 20 adult volunteers completed three paired associate learning tasks: non-verbal novel auditory-visual (novel-AV), verbal-visual (verbal-AV; using pseudowords), and visual-visual (shape-VV). Participants were directed to make a motor response to matching novel and arbitrarily related stimulus pairs. Feedback was provided to facilitate trial and error learning. The results of Signal Detection Theory analyses suggested a multisensory enhancement of learning, with significantly higher discriminability measures (d-prime) in both the novel-AV and verbal-AV tasks than the shape-VV task. Motor reaction times were also significantly faster during the verbal-AV task than during the non-verbal learning tasks. Experiment 2 (n = 12) used a forced-choice discrimination paradigm to assess whether a difference in unisensory stimulus discriminability could account for the learning trends in Experiment 1. Participants were significantly slower at discriminating unisensory pseudowords than the novel sounds and visual shapes, which was notable given that these stimuli produced superior learning. Together the findings suggest that verbal information has an added enhancing effect on multisensory associative learning in adults.
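The discriminability measure (d-prime) from the Signal Detection Theory analyses is the z-transformed hit rate minus the z-transformed false-alarm rate. A minimal sketch with hypothetical rates (the study's actual data are not reproduced here):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Signal Detection Theory discriminability: d' = z(H) - z(FA).

    Rates must lie strictly between 0 and 1; in practice extreme rates
    are corrected (e.g. the log-linear rule) before the z-transform.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(false_alarm_rate)

# Hypothetical per-task rates for one learner: better pair discrimination
# in the novel-AV task than in the shape-VV task yields a higher d'.
print(f"novel-AV d' = {d_prime(0.85, 0.20):.2f}")
print(f"shape-VV d' = {d_prime(0.70, 0.30):.2f}")
```

A chance-level performer (equal hit and false-alarm rates) gets d' = 0, which makes the measure a bias-free alternative to raw accuracy for comparing the learning tasks.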
Affiliation(s)
- Joanne M Fifer
- School of Psychological Science, La Trobe University, Bundoora, 3086, Australia
- Ayla Barutchu
- Florey Institute of Neuroscience and Mental Health, University of Melbourne, Melbourne, 3010, Australia
- Sheila G Crewther
- School of Psychological Science, La Trobe University, Bundoora, 3086, Australia
6
Beer AL, Vartak D, Greenlee MW. Nicotine facilitates memory consolidation in perceptual learning. Neuropharmacology 2012; 64:443-51. [PMID: 22749926] [DOI: 10.1016/j.neuropharm.2012.06.019]
Abstract
Perceptual learning is a special type of non-declarative learning that involves experience-dependent plasticity in sensory cortices. The cholinergic system is known to modulate declarative learning. In particular, reduced levels or efficacy of the neurotransmitter acetylcholine were found to facilitate declarative memory consolidation. However, little is known about the role of the cholinergic system in memory consolidation of non-declarative learning. Here we compared two groups of non-smoking men who learned a visual texture discrimination task (TDT). One group received chewing tobacco containing nicotine for 1 h directly following the TDT training. The other group received a similar tasting control substance without nicotine. Electroencephalographic recordings during substance consumption showed reduced alpha activity and P300 latencies in the nicotine group compared to the control group. When re-tested on the TDT the following day, both groups responded more accurately and more rapidly than during training. These improvements were specific to the retinal location and orientation of the texture elements of the TDT suggesting that learning involved early visual cortex. A group comparison showed that learning effects were more pronounced in the nicotine group than in the control group. These findings suggest that oral consumption of nicotine enhances the efficacy of nicotinic acetylcholine receptors. Our findings further suggest that enhanced efficacy of the cholinergic system facilitates memory consolidation in perceptual learning (and possibly other types of non-declarative learning). In that regard acetylcholine seems to affect consolidation processes in perceptual learning in a different manner than in declarative learning. Alternatively, our findings might reflect dose-dependent cholinergic modulation of memory consolidation. This article is part of a Special Issue entitled 'Cognitive Enhancers'.
Affiliation(s)
- Anton L Beer
- Institut für Psychologie, Universität Regensburg, Universitätsstr. 31, 93053 Regensburg, Germany.
7
Batson MA, Beer AL, Seitz AR, Watanabe T. Spatial shifts of audio-visual interactions by perceptual learning are specific to the trained orientation and eye. Seeing Perceiving 2012; 24:579-94. [PMID: 22353537] [DOI: 10.1163/187847611x603738]
Abstract
A large proportion of the human cortex is devoted to visual processing. Contrary to the traditional belief that multimodal integration takes place in multimodal processing areas separate from visual cortex, several studies have found that sounds may directly alter processing in visual brain areas. Furthermore, recent findings show that perceptual learning can change the perceptual mechanisms that relate auditory and visual senses. However, there is still a debate about the systems involved in cross-modal learning. Here, we investigated the specificity of audio-visual perceptual learning. Audio-visual cueing effects were tested on a Gabor orientation task and an object discrimination task in the presence of lateralised sound cues before and after eight days of cross-modal task-irrelevant perceptual learning. During training, the sound cues were paired with visual stimuli that were misaligned at a proximal (trained) visual field location relative to the sound. Training was performed with one eye patched and with only one Gabor orientation. Consistent with previous findings, we found that cross-modal perceptual training shifted the audio-visual cueing effect towards the trained retinotopic location. However, this shift in audio-visual tuning was only observed for the trained stimulus (Gabors), at the trained orientation, and in the trained eye. This specificity suggests that multimodal interactions resulting from cross-modal (audio-visual) task-irrelevant perceptual learning involve so-called unisensory visual processing areas in humans. Our findings provide further support for recent anatomical and physiological findings that suggest relatively early interactions in cross-modal processing.
8
Diffusion tensor imaging shows white matter tracts between human auditory and visual cortex. Exp Brain Res 2011; 213:299-308. [PMID: 21573953] [DOI: 10.1007/s00221-011-2715-y]
Abstract
Although it is known that sounds can affect visual perception, the neural correlates of crossmodal interactions are still disputed. Previous tracer studies in non-human primates revealed direct anatomical connections between auditory and visual brain areas. We examined the structural connectivity of the auditory cortex in normal humans by diffusion-weighted tensor magnetic resonance imaging and probabilistic tractography. Tracts were seeded in Heschl's region or the planum temporale. Fibres crossed hemispheres at the posterior corpus callosum. Ipsilateral fibres seeded in Heschl's region projected to the superior temporal sulcus, the supramarginal gyrus, the intraparietal sulcus, and the occipital cortex including the calcarine sulcus. Fibres seeded in the planum temporale terminated primarily in the superior temporal sulcus, the supramarginal gyrus, the central sulcus and adjacent regions. Our findings suggest the existence of direct white matter connections between auditory and visual cortex, in addition to subcortical, temporal and parietal connections.