1
Lucia S, Aydin M, Bianco V, Fiorini L, Mussini E, Di Russo F. Effect of anticipatory multisensory integration on sensory-motor performance. Brain Struct Funct 2023. PMID: 36808005. DOI: 10.1007/s00429-023-02620-3. Received 10/05/2022; accepted 02/10/2023.
Abstract
Multisensory integration (MSI) is a phenomenon that occurs in sensory areas after the presentation of multimodal stimuli. Nowadays, little is known about the anticipatory top-down processes taking place in the preparation stage of processing before the stimulus onset. Considering that the top-down modulation of modality-specific inputs might affect the MSI process, this study attempts to understand whether the direct modulation of the MSI process, beyond the well-known sensory effects, may lead to additional changes in multisensory processing also in non-sensory areas (i.e., those related to task preparation and anticipation). To this aim, event-related potentials (ERPs) were analyzed both before and after auditory and visual unisensory and multisensory stimuli during a discriminative response task (Go/No-go type). Results showed that MSI did not affect motor preparation in premotor areas, while cognitive preparation in the prefrontal cortex was increased and correlated with response accuracy. Early post-stimulus ERP activities were also affected by MSI and correlated with response time. Collectively, the present results point to the plasticity accommodating nature of the MSI processes, which are not limited to perception and extend to anticipatory cognitive preparation for task execution. Further, the enhanced cognitive control emerging during MSI is discussed in the context of Bayesian accounts of augmented predictive processing related to increased perceptual uncertainty.
Affiliation(s)
- Stefania Lucia
- Department of Movement, Human and Health Sciences, "Foro Italico" University of Rome, Rome, Italy.
- Merve Aydin
- Department of Movement, Human and Health Sciences, "Foro Italico" University of Rome, Rome, Italy
- Valentina Bianco
- Department of Movement, Human and Health Sciences, "Foro Italico" University of Rome, Rome, Italy
- Linda Fiorini
- Department of Movement, Human and Health Sciences, "Foro Italico" University of Rome, Rome, Italy
- IMT School for Advanced Studies, Lucca, Italy
- Elena Mussini
- Department of Movement, Human and Health Sciences, "Foro Italico" University of Rome, Rome, Italy
- Department of Neuroscience, Imaging and Clinical Sciences, "G. d'Annunzio" University of Chieti-Pescara, Chieti, Italy
- Francesco Di Russo
- Department of Movement, Human and Health Sciences, "Foro Italico" University of Rome, Rome, Italy
- IRCCS Fondazione Santa Lucia, Rome, Italy
2
Gabriel GA, Harris LR, Henriques DYP, Pandi M, Campos JL. Multisensory visual-vestibular training improves visual heading estimation in younger and older adults. Front Aging Neurosci 2022;14:816512. PMID: 36092809; PMCID: PMC9452741. DOI: 10.3389/fnagi.2022.816512. Received 11/16/2021; accepted 08/01/2022.
Abstract
Self-motion perception (e.g., when walking/driving) relies on the integration of multiple sensory cues, including visual, vestibular, and proprioceptive signals. Changes in the efficacy of multisensory integration have been observed in older adults (OAs), which can sometimes lead to errors in perceptual judgments and have been associated with functional declines such as increased falls risk. The objectives of this study were to determine whether passive, visual-vestibular self-motion heading perception could be improved by providing feedback during multisensory training, and whether training-related effects might be more apparent in OAs vs. younger adults (YAs). We also investigated the extent to which training might transfer to improved standing balance. OAs and YAs were passively translated and asked to judge their direction of heading relative to straight-ahead (left/right). Each participant completed three conditions: (1) vestibular-only (passive physical motion in the dark), (2) visual-only (cloud-of-dots display), and (3) bimodal (congruent vestibular and visual stimulation). Measures of heading precision and bias were obtained for each condition. Over the course of 3 days, participants made bimodal heading judgments and received feedback ("correct"/"incorrect") on 900 training trials. Post-training, participants' biases and precision in all three sensory conditions (vestibular, visual, bimodal), and their standing-balance performance, were assessed. Results demonstrated improved overall precision (i.e., reduced just-noticeable differences, JNDs) in heading perception after training. Pre- vs. post-training difference scores showed that improvements in JNDs were found only in the visual-only condition. Particularly notable is that 27% of OAs initially could not discriminate their heading at all in the visual-only condition pre-training, but post-training obtained visual-only thresholds similar to those of the other participants. While OAs seemed to show optimal integration both pre- and post-training (i.e., no significant differences between predicted and observed JNDs), YAs showed optimal integration only post-training. There were no significant effects of training on bimodal or vestibular-only heading estimates, nor on standing-balance performance. These results indicate that it may be possible to improve unimodal (visual) heading perception using a multisensory (visual-vestibular) training paradigm. The results may also help to inform interventions targeting tasks for which effective self-motion perception is important.
Affiliation(s)
- Grace A. Gabriel
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Laurence R. Harris
- Department of Psychology, York University, Toronto, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
- Denise Y. P. Henriques
- Centre for Vision Research, York University, Toronto, ON, Canada
- Department of Kinesiology, York University, Toronto, ON, Canada
- Maryam Pandi
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Jennifer L. Campos
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Centre for Vision Research, York University, Toronto, ON, Canada
3
Induction Mechanism of Auditory-Assisted Vision for Target Search Localization in Mixed Reality (MR) Environments. Aerospace 2022. DOI: 10.3390/aerospace9070340.
Abstract
In MR (mixed reality) environments, visual search is often used for search and localization missions. Existing search and localization technologies suffer from problems such as a limited field of view and information overload: they cannot satisfy the need for rapid, precise localization of a specific flying object within a group of air and space targets under modern air and space situational requirements, and the resulting inefficient interactions throughout the mission degrade human decision-making and judgment. To address this problem, we carried out a multimodal optimization study of auditory-assisted visual search for localization in an MR environment. In a spherical spatial coordinate system, the position of a target flight object is uniquely determined by its height h, distance r, and azimuth θ; there is therefore a need to study the cross-modal connections between auditory elements and these three coordinates during visual search. In this paper, an experiment was designed to study the correlation between intuitive auditory perception and vision, and the associated cognitive induction mechanism. The experiment covered three cross-modal mappings: pitch–height, volume–distance, and vocal-tract alternation–spatial direction. The conclusions are as follows: (1) High, medium, and low pitches bias visual cognition toward the high, middle, and low regions of visual space, respectively. (2) Loud, medium, and low volumes bias visual cognition toward the near, middle, and far regions of visual space, respectively. (3) Left short sounds, right short sounds, left short-and-long sounds, and right short-and-long sounds bias visual cognition toward the left, right, left-rear, and right-rear directions of visual space, respectively; based on an HRTF application, this vocal-tract alternation scheme is expected to significantly improve the efficiency of visual interaction. (4) Incorporating auditory factors significantly reduces the cognitive load of search and localization, and greatly improves the efficiency and accuracy of searching for and positioning space-flying objects. These findings can be applied to research on various types of target search and localization technologies in MR environments and provide a theoretical basis for subsequent study of spatial information perception and cognitive induction mechanisms in MR environments with visual–auditory coupling.
4
Basharat A, Thayanithy A, Barnett-Cowan M. A Scoping Review of Audiovisual Integration Methodology: Screening for Auditory and Visual Impairment in Younger and Older Adults. Front Aging Neurosci 2022;13:772112. PMID: 35153716; PMCID: PMC8829696. DOI: 10.3389/fnagi.2021.772112. Received 09/07/2021; accepted 12/17/2021.
Abstract
With the rise of the aging population, many scientists studying multisensory integration have turned toward understanding how this process may change with age. This scoping review was conducted to understand and describe the scope and rigor with which researchers studying audiovisual sensory integration screen for hearing and vision impairment. A structured search in three licensed databases (Scopus, PubMed, and PsycINFO) using the key concepts of multisensory integration, audiovisual modality, and aging revealed 2,462 articles, which were screened for inclusion by two reviewers. Articles were included if they (1) tested healthy older adults (minimum mean or median age of 60) with younger adults as a comparison (mean or median age between 18 and 35), (2) measured auditory and visual integration, (3) were written in English, and (4) reported behavioral outcomes. Articles were excluded if they (1) tested taste exclusively, (2) tested olfaction exclusively, (3) tested somatosensation exclusively, (4) tested emotion perception, (5) were not written in English, (6) were clinical commentaries, editorials, interviews, letters, newspaper articles, abstracts only, or non-peer-reviewed literature (e.g., theses), or (7) focused on neuroimaging without a behavioral component. Data pertaining to the details of each study (e.g., country of publication, year of publication) were extracted; of greater importance to our research question, data pertaining to the screening measures used for hearing and vision impairment (e.g., type of test used, whether hearing and visual aids were worn, thresholds used) were extracted, collated, and summarized. Our search revealed that only 64% of studies screened for age-abnormal hearing impairment, only 51% screened for age-abnormal vision impairment, and consistent definitions of normal or abnormal vision and hearing were not used among the studies that screened for sensory abilities. A total of 1,624 younger adults and 4,778 older participants were included in the scoping review, with males composing approximately 44% and females 56% of the total sample, and most of the data were obtained from only four countries. We recommend that studies investigating the effects of aging on multisensory integration screen for normal vision and hearing using the World Health Organization's (WHO) hearing loss and visual impairment cut-off scores, in order to maintain consistency with other aging researchers. As mild cognitive impairment (MCI) has been defined as a "transitional" or "transitory" stage between normal aging and dementia, and because approximately 3–5% of the aging population will develop MCI each year, it is important that researchers aiming to study a healthy aging population appropriately screen for MCI. One of our secondary aims was to determine how often researchers screened for cognitive impairment and the types of tests that were used to do so. Our results revealed that only 55 out of 72 studies tested for neurological and cognitive function, and only a subset used standardized tests. Additionally, among the studies that used standardized tests, the cut-off scores used were not always adequate for screening out mild cognitive impairment. A further secondary aim of this scoping review was to determine the feasibility of a future meta-analysis to quantitatively evaluate the results (i.e., whether findings obtained from studies using self-reported vision and hearing screening methods differ significantly from those measuring vision and hearing impairment in the lab) and to assess the scope of this problem. We found that it may not be feasible to conduct a meta-analysis with the entire dataset of this scoping review. However, a meta-analysis could be conducted if stricter parameters are used (e.g., focusing on accuracy or response time data only).

Systematic Review Registration: https://doi.org/10.17605/OSF.IO/GTUHD
5
Zhang S, Xu W, Zhu Y, Tian E, Kong W. Impaired Multisensory Integration Predisposes the Elderly People to Fall: A Systematic Review. Front Neurosci 2020;14:411. PMID: 32410958; PMCID: PMC7198912. DOI: 10.3389/fnins.2020.00411. Received 11/23/2019; accepted 04/06/2020.
Abstract
Background: This systematic review pooled the latest data and reviewed the relevant studies to examine the effect of multisensory integration on balance function in the elderly. Methods: PubMed, Web of Science, and Scopus were searched for eligible studies published prior to May 2019. The search was limited to studies published in Chinese or English. The quality of the included studies was assessed against the Newcastle-Ottawa Scale or an 11-item checklist recommended by the Agency for Healthcare Research and Quality (AHRQ). Any disagreement among reviewers was resolved by comparing notes and reaching a consensus. Results: Eight hundred thirty-nine records were identified, and 17 of them were included in the systematic review. The results supported our assumption that multisensory integration affects balance function in the elderly. All 17 studies were judged to be of high or moderate quality. Conclusions: The systematic review found that impaired multisensory integration could predispose elderly people to falls. Accurate assessment of multisensory integration can help identify impaired balance function in the elderly and minimize the risk of falls, and our results provide a new basis for further understanding of the balance maintenance mechanism. Further research is warranted to explore changes in the brain areas related to multisensory integration in the elderly.
Affiliation(s)
- Sulin Zhang
- Department of Otorhinolaryngology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Institute of Otorhinolaryngology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Wenchao Xu
- Department of Otorhinolaryngology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Yuting Zhu
- Department of Otorhinolaryngology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- E Tian
- Department of Otorhinolaryngology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Weijia Kong
- Department of Otorhinolaryngology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Institute of Otorhinolaryngology, Union Hospital, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
- Key Laboratory of Neurological Disorders of Education Ministry, Tongji Medical College, Huazhong University of Science and Technology, Wuhan, China
6
Brooks CJ, Chan YM, Anderson AJ, McKendrick AM. Audiovisual Temporal Perception in Aging: The Role of Multisensory Integration and Age-Related Sensory Loss. Front Hum Neurosci 2018;12:192. PMID: 29867415; PMCID: PMC5954093. DOI: 10.3389/fnhum.2018.00192. Received 10/27/2017; accepted 04/20/2018.
Abstract
Within each sensory modality, age-related deficits in temporal perception contribute to the difficulties older adults experience when performing everyday tasks. Since perceptual experience is inherently multisensory, older adults also face the added challenge of appropriately integrating or segregating the auditory and visual cues present in our dynamic environment into coherent representations of distinct objects. As such, many studies have investigated how older adults perform when integrating temporal information across audition and vision. This review covers both direct judgments about temporal information (the sound-induced flash illusion, temporal order, perceived synchrony, and temporal rate discrimination) and judgments regarding stimuli containing temporal information (the audiovisual bounce effect and speech perception). Although an age-related increase in integration has been demonstrated on a variety of tasks, research specifically investigating the ability of older adults to integrate temporal auditory and visual cues has produced disparate results. In this short review, we explore what factors could underlie these divergent findings. We conclude that both task-specific differences and age-related sensory loss play a role in the reported disparity in age-related effects on the integration of auditory and visual temporal information.
Affiliation(s)
- Cassandra J Brooks
- Department of Optometry and Vision Sciences, The University of Melbourne, Melbourne, VIC, Australia
- Yu Man Chan
- Department of Optometry and Vision Sciences, The University of Melbourne, Melbourne, VIC, Australia
- Andrew J Anderson
- Department of Optometry and Vision Sciences, The University of Melbourne, Melbourne, VIC, Australia
- Allison M McKendrick
- Department of Optometry and Vision Sciences, The University of Melbourne, Melbourne, VIC, Australia
7
Roudaia E, Calabro F, Vaina L, Newell F. Aging Impairs Audiovisual Facilitation of Object Motion Within Self-Motion. Multisens Res 2018;31:251-272. DOI: 10.1163/22134808-00002600. Received 03/10/2017; accepted 07/27/2017.
Abstract
The presence of a moving sound has been shown to facilitate the detection of an independently moving visual target embedded among an array of identical moving objects simulating forward self-motion (Calabro et al., Proc. R. Soc. B, 2011). Given that the perception of object motion within self-motion declines with aging, we investigated whether older adults can also benefit from the presence of a congruent dynamic sound when detecting object motion within self-motion. Visual stimuli consisted of nine identical spheres randomly distributed inside a virtual rectangular prism. For 1 s, all the spheres expanded outward, simulating forward observer translation at a constant speed. One of the spheres (the target) had independent motion, either approaching or moving away from the observer at three different speeds. In the visual condition, stimuli contained no sound. In the audiovisual condition, the visual stimulus was accompanied by a broadband noise sound co-localized with the target, whose loudness increased or decreased congruent with the target's direction. Participants reported which of the spheres had independent motion. Younger participants showed higher target detection accuracy in the audiovisual compared to the visual condition at the slowest speed level. Older participants showed overall poorer target detection accuracy than the younger participants, and the presence of the sound had no effect on older participants' target detection accuracy at any speed level. These results indicate that aging may impair cross-modal integration in some contexts. Potential reasons for the absence of auditory facilitation in older adults are discussed.
Affiliation(s)
- Eugenie Roudaia
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland
- Finnegan J. Calabro
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Department of Psychiatry and Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Lucia M. Vaina
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Department of Neurology, Harvard Medical School, Boston, MA, USA
- Fiona N. Newell
- School of Psychology and Institute of Neuroscience, Trinity College Dublin, Dublin, Ireland