1. Alwashmi K, Meyer G, Rowe F, Ward R. Enhancing learning outcomes through multisensory integration: A fMRI study of audio-visual training in virtual reality. Neuroimage 2024;285:120483. PMID: 38048921. DOI: 10.1016/j.neuroimage.2023.120483.
Abstract
The integration of information from different sensory modalities is a fundamental process that enhances perception and performance in both real and virtual environments (VR). Understanding these mechanisms, especially during learning tasks that exploit novel multisensory cue combinations, provides opportunities for the development of new rehabilitative interventions. This study aimed to investigate how functional brain changes support behavioural performance improvements during an audio-visual (AV) learning task. Twenty healthy participants underwent 30 min of daily VR training for four weeks. The task was an AV adaptation of a 'scanning training' paradigm that is commonly used in hemianopia rehabilitation. Functional magnetic resonance imaging (fMRI) and performance data were collected at baseline, after two and four weeks of training, and four weeks post-training. We show that behavioural performance, operationalised as mean reaction time (RT) reduction in VR, significantly improves. In separate tests in a controlled laboratory environment, the behavioural performance gains from the VR training environment transferred to a significant mean RT reduction for the trained AV voluntary task on a computer screen. Enhancements were observed in both the visual-only and AV conditions, with the latter demonstrating faster response times supported by the presence of audio cues. The behavioural learning effect also transferred to two additional tasks that were tested: a visual search task and an involuntary visual task. Our fMRI results reveal an increase in functional activation (BOLD signal) in multisensory brain regions involved in early-stage AV processing: the thalamus, the caudal inferior parietal lobe and the cerebellum. These functional changes were observed only for the trained, multisensory task and not for unimodal visual stimulation. Functional activation changes in the thalamus were significantly correlated with behavioural performance improvements.
This study demonstrates that incorporating spatial auditory cues into voluntary visual training in VR leads to augmented brain activation changes in multisensory integration regions, resulting in measurable performance gains across tasks. The findings highlight the potential of VR-based multisensory training as an effective method for enhancing cognitive function and as a potentially valuable tool in rehabilitative programmes.
Affiliation(s)
- Kholoud Alwashmi
- Faculty of Health and Life Sciences, University of Liverpool, United Kingdom; Department of Radiology, Princess Nourah bint Abdulrahman University, Saudi Arabia
- Georg Meyer
- Digital Innovation Facility, University of Liverpool, United Kingdom
- Fiona Rowe
- Institute of Population Health, University of Liverpool, United Kingdom
- Ryan Ward
- Digital Innovation Facility, University of Liverpool, United Kingdom; School of Computer Science and Mathematics, Liverpool John Moores University, United Kingdom
2. Feldman JI, Dunham K, DiCarlo GE, Cassidy M, Liu Y, Suzman E, Williams ZJ, Pulliam G, Kaiser S, Wallace MT, Woynaroski TG. A Randomized Controlled Trial for Audiovisual Multisensory Perception in Autistic Youth. J Autism Dev Disord 2023;53:4318-4335. PMID: 36028729. PMCID: PMC9417081. DOI: 10.1007/s10803-022-05709-6.
Abstract
Differences in audiovisual integration are commonly observed in autism. Temporal binding windows (TBWs) of audiovisual speech can be trained (i.e., narrowed) in non-autistic adults; this study evaluated a computer-based perceptual training in autistic youth and assessed whether treatment outcomes varied according to individual characteristics. Thirty autistic youth aged 8-21 were randomly assigned to a brief perceptual training (n = 15) or a control condition (n = 15). At post-test, the perceptual training group did not differ, on average, from the control group on TBWs for trained and untrained stimuli or on perception of the McGurk illusion. The training benefited youth with higher language and nonverbal IQ scores; however, it widened TBWs in youth with co-occurring cognitive and language impairments.
Affiliation(s)
- Jacob I Feldman
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, MCE 8310 South Tower, 1215 21st Avenue South, Nashville, TN, 37232, USA
- Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA
- Kacie Dunham
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Gabriella E DiCarlo
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Mass General Brigham Neurology Residency Program, Harvard Medical School, Boston, MA, USA
- Medical Scientist Training Program, Vanderbilt University, Nashville, TN, USA
- Margaret Cassidy
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- National Institutes of Health, Bethesda, MD, USA
- Yupeng Liu
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Washington University School of Medicine, Washington University in St. Louis, St. Louis, MO, USA
- Evan Suzman
- Master's Program in Biomedical Science, Vanderbilt University, Nashville, TN, USA
- Southwestern School of Medicine, University of Texas, Dallas, TX, USA
- Zachary J Williams
- Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Medical Scientist Training Program, Vanderbilt University, Nashville, TN, USA
- Grace Pulliam
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Sophia Kaiser
- Cognitive Studies Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Mark T Wallace
- Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Pharmacology, Vanderbilt University, Nashville, TN, USA
- Tiffany G Woynaroski
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, MCE 8310 South Tower, 1215 21st Avenue South, Nashville, TN, 37232, USA
- Frist Center for Autism & Innovation, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
3. Murray CA, Shams L. Crossmodal interactions in human learning and memory. Front Hum Neurosci 2023;17:1181760. PMID: 37266327. PMCID: PMC10229776. DOI: 10.3389/fnhum.2023.1181760.
Abstract
Most studies of memory and perceptual learning in humans have employed unisensory settings to simplify the study paradigm. However, in daily life we are often surrounded by complex and cluttered scenes made up of many objects and sources of sensory stimulation. Our experiences are, therefore, highly multisensory, both when passively observing the world and when acting and navigating. We argue that human learning and memory systems evolved to operate under these multisensory and dynamic conditions. The nervous system exploits the rich array of sensory inputs in this process: it is sensitive to the relationships between the sensory inputs, continuously updates sensory representations, and encodes memory traces based on the relationships between the senses. We review recent findings that demonstrate a range of human learning and memory phenomena in which interactions between the visual and auditory modalities play an important role, suggest possible neural mechanisms that may underlie some surprising recent findings, and outline open questions as well as directions for future research to unravel human perceptual learning and memory.
Affiliation(s)
- Carolyn A. Murray
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States
- Ladan Shams
- Department of Psychology, University of California, Los Angeles, Los Angeles, CA, United States
- Department of Bioengineering, Neuroscience Interdepartmental Program, University of California, Los Angeles, Los Angeles, CA, United States
4. Exposure to multisensory and visual static or moving stimuli enhances processing of nonoptimal visual rhythms. Atten Percept Psychophys 2022;84:2655-2669. PMID: 36241841. PMCID: PMC9630188. DOI: 10.3758/s13414-022-02569-1.
Abstract
Research has shown that visual moving and multisensory stimuli can efficiently convey rhythmic information. It is possible, therefore, that the previously reported auditory dominance in rhythm perception is due to the use of nonoptimal visual stimuli. Yet it remains unknown whether exposure to multisensory or visual-moving rhythms would benefit the processing of rhythms consisting of nonoptimal static visual stimuli. Using a perceptual learning paradigm, we tested whether the visual component of the multisensory training pair can affect processing of metric simple, two-integer-ratio, nonoptimal visual rhythms. Participants were trained with static (AVstat), moving-inanimate (AVinan), or moving-animate (AVan) visual stimuli along with auditory tones and a regular beat. In the pre- and post-training tasks, participants responded whether two static-visual rhythms differed or not. Results showed improved post-training performance for all training groups, irrespective of the type of visual stimulation. To assess whether this benefit was auditory driven, we introduced visual-only training with a moving (Vinan) or static (Vstat) stimulus and a regular beat. Comparisons between Vinan and Vstat showed that, even in the absence of auditory information, training with visual-only moving or static stimuli resulted in enhanced post-training performance. Overall, our findings suggest that audiovisual and visual static or moving training can benefit processing of nonoptimal visual rhythms.
5. Perceptual training narrows the temporal binding window of audiovisual integration in both younger and older adults. Neuropsychologia 2022;173:108309. PMID: 35752266. DOI: 10.1016/j.neuropsychologia.2022.108309.
Abstract
There is a growing body of evidence to suggest that multisensory processing changes with advancing age, usually in the form of an enlarged temporal binding window, and some studies link these multisensory changes to negative clinical outcomes. Perceptual training regimes represent a promising means of enhancing the precision of multisensory integration in ageing; however, to date, the vast majority of studies examining the efficacy of multisensory perceptual learning have focused solely on healthy young adults. Here, we measured the temporal binding windows of younger and older participants before and after training on an audiovisual temporal discrimination task to assess (i) how perceptual training affected the shape of the temporal binding window and (ii) whether training effects were similar in both age groups. Our results replicated previous findings of an enlarged temporal binding window in older adults and provide further evidence that both younger and older participants can improve the precision of their audiovisual timing estimation via perceptual training. We also show that this training protocol led to a narrowing of the temporal binding window associated with the sound-induced flash illusion in both age groups, indicating a general refinement of audiovisual integration. However, while younger adults also displayed a general reduction in crossmodal interactions following training, this effect was not observed in the older adult group. Together, our results suggest that perceptual training narrows the temporal binding window of audiovisual integration in both younger and older adults but has less of an impact on prior expectations regarding the source of audiovisual signals in older adults.
6. Atilgan H, Bizley JK. Training enhances the ability of listeners to exploit visual information for auditory scene analysis. Cognition 2021;208:104529. PMID: 33373937. PMCID: PMC7868888. DOI: 10.1016/j.cognition.2020.104529.
Abstract
The ability to use temporal relationships between cross-modal cues facilitates perception and behavior. Previously, we observed that temporally correlated changes in the size of a visual stimulus and the intensity of an auditory stimulus influenced the ability of listeners to perform an auditory selective attention task (Maddox, Atilgan, Bizley, & Lee, 2015). Participants detected timbral changes in a target sound while ignoring those in a simultaneously presented masker. When the visual stimulus was temporally coherent with the target sound, performance was significantly better than when the visual stimulus was temporally coherent with the masker, despite the visual stimulus conveying no task-relevant information. Here, we trained observers to detect audiovisual temporal coherence and asked whether this changed the way in which they were able to exploit visual information in the auditory selective attention task. We observed that, after training, participants were able to benefit from temporal coherence between the visual stimulus and both the target and masker streams, relative to the condition in which the visual stimulus was coherent with neither sound. However, we did not observe such changes in a second group that was trained to discriminate modulation-rate differences between temporally coherent audiovisual streams, although this group did show an improvement in overall performance. A control group did not change its performance between pre-test and post-test and did not change how it exploited visual information. These results provide insights into how crossmodal experience may optimize multisensory integration.
7. Feldman JI, Dunham K, Conrad JG, Simon DM, Cassidy M, Liu Y, Tu A, Broderick N, Wallace MT, Woynaroski TG. Plasticity of Temporal Binding in Children with Autism Spectrum Disorder: A Single Case Experimental Design Perceptual Training Study. Res Autism Spectr Disord 2020;74:101555. PMID: 32440308. PMCID: PMC7241431. DOI: 10.1016/j.rasd.2020.101555.
Abstract
BACKGROUND: Many children with autism spectrum disorder (ASD) demonstrate atypical responses to multisensory stimuli. These disruptions, which are frequently seen in response to audiovisual speech, may produce cascading effects on the broader development of children with ASD. Perceptual training has been shown to enhance multisensory speech perception in typically developed adults. This study was the first to examine the effects of perceptual training on audiovisual speech perception in children with ASD.
METHOD: A multiple-baseline-across-participants design was utilized with four 7- to 13-year-old children with ASD. The dependent variable, which was probed outside the training task each day using a simultaneity judgment task in baseline, intervention, and maintenance conditions, was the audiovisual temporal binding window (TBW), an index of multisensory temporal acuity. During perceptual training, participants completed the same simultaneity judgment task with feedback on their accuracy after each trial in easy-, medium-, and hard-difficulty blocks.
RESULTS: A functional relation between the multisensory perceptual training program and TBW size was not observed. Of the three participants who entered training, one demonstrated a strong effect, characterized by a fairly immediate change in TBW trend. The two remaining participants demonstrated a less clear response (i.e., longer latency to effect, lack of functional independence). The first participant to enter the training condition demonstrated some maintenance of a narrower TBW post-training.
CONCLUSIONS: Results indicate TBWs in children with ASD may be malleable, but additional research is needed and may entail further adaptation of the multisensory perceptual training paradigm.
Affiliation(s)
- Jacob I. Feldman
- Department of Hearing and Speech Sciences, Vanderbilt University, MCE 8310 South Tower, 1215 21st Avenue South, Nashville, TN 37232, USA
- Kacie Dunham
- Vanderbilt Brain Institute, Vanderbilt University, 465 21st Avenue South, Nashville, TN, USA
- Neuroscience Graduate Program, Vanderbilt University, Nashville, TN, USA
- Julie G. Conrad
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Present address: College of Medicine, University of Illinois, Chicago, IL, USA
- David M. Simon
- Vanderbilt Brain Institute, Vanderbilt University, 465 21st Avenue South, Nashville, TN, USA
- Neuroscience Graduate Program, Vanderbilt University, Nashville, TN, USA
- Present address: axialHealthcare, Nashville, TN, USA
- Margaret Cassidy
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Yupeng Liu
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Alexander Tu
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Present address: College of Medicine, University of Nebraska Medical Center, Omaha, NE, USA
- Neill Broderick
- Department of Pediatrics, Vanderbilt University Medical Center, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Mark T. Wallace
- Vanderbilt Brain Institute, Vanderbilt University, 465 21st Avenue South, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Psychology, Vanderbilt University, Nashville, TN, USA
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Pharmacology, Vanderbilt University, Nashville, TN, USA
- Tiffany G. Woynaroski
- Vanderbilt Brain Institute, Vanderbilt University, 465 21st Avenue South, Nashville, TN, USA
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
8. Zerr M, Freihorst C, Schütz H, Sinke C, Müller A, Bleich S, Münte TF, Szycik GR. Brief Sensory Training Narrows the Temporal Binding Window and Enhances Long-Term Multimodal Speech Perception. Front Psychol 2019;10:2489. PMID: 31749748. PMCID: PMC6848860. DOI: 10.3389/fpsyg.2019.02489.
Abstract
Our ability to integrate multiple sensory-based representations of our surroundings supplies us with a more holistic view of our world. Our nervous system uses many complex algorithms to construct a coherent percept. One cue for solving this 'binding problem' is temporal structure: environmental information propagates at different speeds (e.g., sound versus electromagnetic waves) and incurs different sensory processing times, so the temporal relationship of a stimulus pair derived from the same event must be flexibly adjusted by our brain. This tolerance can be conceptualized in the form of the cross-modal temporal binding window (TBW). Several studies have shown the plasticity of the TBW and its importance concerning audio-visual illusions, synesthesia, and psychiatric disturbances. Using three audio-visual paradigms, we investigated the influence of the length (short vs. long) and modality (uni- vs. multimodal) of a perceptual training aimed at reducing the TBW in a healthy population. We also investigated the influence of the TBW on speech intelligibility, where participants had to integrate auditory and visual speech information from a videotaped speaker. We showed that simple sensory training can change the TBW and is capable of optimizing speech perception at a very naturalistic level. While training length had no differential effect on the malleability of the TBW, the multisensory trainings induced a significantly stronger narrowing of the TBW than their unisensory counterparts. Furthermore, a narrowing of the TBW was associated with better performance in speech perception, meaning that participants showed a greater capacity for integrating information from different sensory modalities in situations in which one modality is impaired. All effects persisted for at least seven days.
Our findings show the significance of multisensory temporal processing for ecologically valid measures and have important clinical implications for interventions that may be used to alleviate debilitating conditions (e.g., autism, schizophrenia) in which multisensory temporal function has been shown to be impaired.
Affiliation(s)
- Michael Zerr
- Department of Psychosomatic Medicine and Psychotherapy, Hannover Medical School, Hanover, Germany
- Christina Freihorst
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
- Helene Schütz
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
- Christopher Sinke
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
- Astrid Müller
- Department of Psychosomatic Medicine and Psychotherapy, Hannover Medical School, Hanover, Germany
- Stefan Bleich
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany
- Thomas F Münte
- Department of Neurology, University of Lübeck, Lübeck, Germany; Institute of Psychology II, University of Lübeck, Lübeck, Germany
- Gregor R Szycik
- Department of Psychiatry, Social Psychiatry and Psychotherapy, Hannover Medical School, Hanover, Germany