1
Basharat A, Thayanithy A, Barnett-Cowan M. A Scoping Review of Audiovisual Integration Methodology: Screening for Auditory and Visual Impairment in Younger and Older Adults. Front Aging Neurosci 2022; 13:772112. [PMID: 35153716; PMCID: PMC8829696; DOI: 10.3389/fnagi.2021.772112]
Abstract
With the rise of the aging population, many scientists studying multisensory integration have turned toward understanding how this process may change with age. This scoping review was conducted to understand and describe the scope and rigor with which researchers studying audiovisual sensory integration screen for hearing and vision impairment. A structured search in three licensed databases (Scopus, PubMed, and PsycINFO) using the key concepts of multisensory integration, audiovisual modality, and aging revealed 2,462 articles, which were screened for inclusion by two reviewers. Articles were included if they (1) tested healthy older adults (minimum mean or median age of 60) with younger adults as a comparison (mean or median age between 18 and 35), (2) measured auditory and visual integration, (3) were written in English, and (4) reported behavioral outcomes. Articles were excluded if they (1) tested taste exclusively, (2) tested olfaction exclusively, (3) tested somatosensation exclusively, (4) tested emotion perception, (5) were not written in English, (6) were clinical commentaries, editorials, interviews, letters, newspaper articles, abstracts only, or non-peer-reviewed literature (e.g., theses), or (7) focused on neuroimaging without a behavioral component. Data pertaining to the details of each study (e.g., country and year of publication) were extracted; of greater importance to our research question, data pertaining to the screening measures used for hearing and vision impairment (e.g., the type of test used, whether hearing and visual aids were worn, and the thresholds applied) were extracted, collated, and summarized. Our search revealed that only 64% of studies screened for age-abnormal hearing impairment, 51% screened for age-abnormal vision impairment, and consistent definitions of normal or abnormal vision and hearing were not used among the studies that screened for sensory abilities. A total of 1,624 younger adults and 4,778 older participants were included in the scoping review, with males composing approximately 44% and females 56% of the total sample, and most of the data were obtained from only four countries. We recommend that studies investigating the effects of aging on multisensory integration screen for normal vision and hearing using the World Health Organization's (WHO) hearing loss and visual impairment cut-off scores, to maintain consistency with other aging researchers. As mild cognitive impairment (MCI) has been defined as a "transitional" or "transitory" stage between normal aging and dementia, and because approximately 3-5% of the aging population will develop MCI each year, it is important that researchers aiming to study a healthy aging population appropriately screen for MCI. One of our secondary aims was to determine how often researchers screened for cognitive impairment and which tests were used to do so. Our results revealed that only 55 out of 72 studies tested for neurological and cognitive function, and only a subset used standardized tests. Additionally, among the studies that used standardized tests, the cut-off scores used were not always adequate for screening out mild cognitive impairment.
An additional secondary aim of this scoping review was to assess the scope of this problem and to determine whether a future meta-analysis could quantitatively evaluate the results (i.e., whether findings from studies using self-reported vision and hearing screening differ significantly from findings of studies measuring vision and hearing in the lab). We found that conducting a meta-analysis with the entire dataset of this scoping review may not be feasible; however, a meta-analysis could be conducted under stricter parameters (e.g., focusing only on accuracy or response-time data). Systematic Review Registration: https://doi.org/10.17605/OSF.IO/GTUHD.
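To make the WHO-based screening recommendation in this abstract concrete, here is a minimal sketch of such a check. The specific cut-offs (better-ear pure-tone average below 20 dB HL; best-corrected acuity of logMAR 0.3, i.e., Snellen 6/12, or better), the function name, and its parameters are illustrative assumptions, not values quoted in the review; consult the current WHO definitions before reuse.

```python
# Hypothetical screening sketch: flags participants whose better-ear pure-tone
# average (PTA) or best-corrected visual acuity falls outside WHO-style
# cut-offs. The thresholds below are assumed grade boundaries, not values
# taken from the abstract above.

def passes_sensory_screen(better_ear_pta_db: float,
                          acuity_logmar: float,
                          hearing_cutoff_db: float = 20.0,
                          acuity_cutoff_logmar: float = 0.3) -> bool:
    """Return True if the participant meets both assumed cut-offs."""
    normal_hearing = better_ear_pta_db < hearing_cutoff_db
    normal_vision = acuity_logmar <= acuity_cutoff_logmar  # lower logMAR = better
    return normal_hearing and normal_vision

# Example: a 25 dB HL better-ear PTA would be flagged for exclusion.
print(passes_sensory_screen(better_ear_pta_db=25.0, acuity_logmar=0.1))  # False
```

Applying one explicit rule of this kind across studies is precisely the consistency the review argues for, in place of the heterogeneous ad hoc definitions it found.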
Affiliation(s)
- Aysha Basharat
- Department of Kinesiology, University of Waterloo, Waterloo, ON, Canada
2
Vaina LM, Calabro FJ, Samal A, Rana KD, Mamashli F, Khan S, Hämäläinen M, Ahlfors SP, Ahveninen J. Auditory cues facilitate object movement processing in human extrastriate visual cortex during simulated self-motion: A pilot study. Brain Res 2021; 1765:147489. [PMID: 33882297; DOI: 10.1016/j.brainres.2021.147489]
Abstract
Visual segregation of moving objects is a considerable computational challenge when the observer moves through space. Recent psychophysical studies suggest that directionally congruent, moving auditory cues can substantially improve the parsing of object motion in such settings, but the exact brain mechanisms and visual processing stages that mediate these effects are still incompletely known. Here, we utilized multivariate pattern analyses (MVPA) of MRI-informed magnetoencephalography (MEG) source estimates to examine how crossmodal auditory cues facilitate motion detection during the observer's self-motion. During MEG recordings, participants identified a target object that moved either forward or backward within a visual scene that included nine identically textured objects simulating forward observer translation. Auditory motion cues (1) improved the behavioral accuracy of target localization, (2) significantly modulated MEG source activity in area V2 and the human middle temporal complex (hMT+), and (3) increased the accuracy with which the target's movement direction could be decoded from hMT+ activity using MVPA. The auditory-cue increase in decoding accuracy in hMT+ remained significant when superior temporal activations in or near auditory cortices were regressed out of the hMT+ source activity, to control for source-estimation biases caused by point spread. Taken together, these results suggest that parsing object motion from self-motion-induced optic flow in human extrastriate visual cortex can be facilitated by crossmodal influences from the auditory system.
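As a rough illustration of the MVPA step described above, the sketch below runs a cross-validated linear classifier on trial-wise feature vectors, analogous to decoding target motion direction (forward vs. backward) from hMT+ source activity. The data are synthetic placeholders, and the pipeline choices are assumptions; the paper's actual MRI-informed source estimation and point-spread controls are not reproduced here.

```python
# Cross-validated decoding of a binary trial label from multivariate activity
# patterns (here: random placeholder data standing in for hMT+ source features).
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import StratifiedKFold, cross_val_score
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
n_trials, n_features = 120, 40          # trials x (source vertices * time bins)
X = rng.standard_normal((n_trials, n_features))
y = rng.integers(0, 2, size=n_trials)   # 0 = backward, 1 = forward

clf = make_pipeline(StandardScaler(), LogisticRegression(max_iter=1000))
cv = StratifiedKFold(n_splits=5, shuffle=True, random_state=0)
scores = cross_val_score(clf, X, y, cv=cv)
print(f"decoding accuracy: {scores.mean():.2f} (chance = 0.50)")
```

With real data, the cue effect reported in the paper would appear as higher cross-validated accuracy on auditory-cued trials than on uncued trials.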
Affiliation(s)
- Lucia M Vaina
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA; Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Harvard Medical School-Department of Neurology, Massachusetts General Hospital and Brigham and Women's Hospital, MA, USA
- Finnegan J Calabro
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA; Department of Psychiatry and Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- Abhisek Samal
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Kunjan D Rana
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Fahimeh Mamashli
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- Sheraz Khan
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- Matti Hämäläinen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- Seppo P Ahlfors
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
- Jyrki Ahveninen
- Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA; Department of Radiology, Harvard Medical School, Boston, MA, USA
3
Jones SA, Noppeney U. Ageing and multisensory integration: A review of the evidence, and a computational perspective. Cortex 2021; 138:1-23. [PMID: 33676086; DOI: 10.1016/j.cortex.2021.02.001]
Abstract
The processing of multisensory signals is crucial for effective interaction with the environment, but our ability to perform this vital function changes as we age. In the first part of this review, we summarise existing research into the effects of healthy ageing on multisensory integration. We note that age differences vary substantially with the paradigms and stimuli used: older adults often receive at least as much benefit (to both accuracy and response times) as younger controls from congruent multisensory stimuli, but are also consistently more negatively impacted by the presence of intersensory conflict. In the second part, we outline a normative Bayesian framework that provides a principled and computationally informed perspective on the key ingredients involved in multisensory perception, and how these are affected by ageing. Applying this framework to the existing literature, we conclude that changes to sensory reliability, prior expectations (together with attentional control), and decisional strategies all contribute to the age differences observed. However, we find no compelling evidence of any age-related changes to the basic inference mechanisms involved in multisensory perception.
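A worked example of one building block of the normative Bayesian framework the review describes: under forced fusion with Gaussian noise, the optimal estimate is a reliability-weighted average of the unisensory estimates, so age-related changes in sensory reliability shift the cue weights without altering the inference rule itself. The numbers and function below are illustrative assumptions, not data from the review.

```python
# Maximum-likelihood (reliability-weighted) audiovisual cue fusion.
def fuse(s_a, sigma_a, s_v, sigma_v):
    """Fuse auditory and visual estimates, weighting by inverse variance."""
    w_a = sigma_a**-2 / (sigma_a**-2 + sigma_v**-2)   # auditory weight
    s_hat = w_a * s_a + (1 - w_a) * s_v               # fused estimate
    sigma_hat = (sigma_a**-2 + sigma_v**-2) ** -0.5   # fused noise (< either cue)
    return s_hat, sigma_hat

# Equal reliabilities: estimates are averaged.
print(fuse(s_a=10.0, sigma_a=1.0, s_v=12.0, sigma_v=1.0))  # (11.0, ~0.71)
# Noisier vision (e.g., with age): the fused estimate shifts toward audition.
print(fuse(s_a=10.0, sigma_a=1.0, s_v=12.0, sigma_v=2.0))  # (10.4, ~0.89)
```

On this view, an older observer with noisier vision should weight audition more heavily, which is one way the framework accommodates age differences without positing any change to the basic inference mechanism, consistent with the review's conclusion.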
Affiliation(s)
- Samuel A Jones
- The Staffordshire Centre for Psychological Research, Staffordshire University, Stoke-on-Trent, UK.
- Uta Noppeney
- Donders Institute for Brain, Cognition & Behaviour, Radboud University, Nijmegen, the Netherlands.
4
Abstract
The everyday environment brings to our sensory systems competing inputs from different modalities. The ability to filter these multisensory inputs in order to identify and efficiently utilize useful spatial cues is necessary to detect and process the relevant information. In the present study, we investigated how feature-based attention affects the detection of motion across sensory modalities. We were particularly interested in how subjects use intramodal, cross-modal auditory, and combined audiovisual motion cues to attend to specific visual motion signals. The results showed that, in most cases, both visual and auditory cues enhance feature-based orienting to a transparent visual motion pattern presented among distractor motion patterns. Whereas previous studies have shown cross-modal effects of spatial attention, our results demonstrate a cross-modal spread of feature-based attention, using cues matched for the detection threshold of the visual target. These effects were very robust in comparisons of valid vs. invalid cues, as well as in comparisons between cued and uncued valid trials. The effects of intramodal visual, cross-modal auditory, and bimodal cues also increased as a function of motion-cue salience. Our results suggest that orienting to visual motion patterns among distractors can be facilitated not only by intramodal priors, but also by feature-based cross-modal information from the auditory system.
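A minimal sketch of the cue-validity contrasts described above: the validity effect is the performance difference between validly and invalidly cued trials, and the cueing benefit the difference between cued-valid and uncued trials. All numbers and names below are hypothetical, and hit rate is assumed as the measure for illustration; the study's actual design and dependent variables are not reproduced.

```python
# Hypothetical per-subject detection rates by cue condition.
import statistics

hit_rates = {
    "valid_cue":   [0.82, 0.79, 0.85, 0.88],
    "invalid_cue": [0.61, 0.58, 0.66, 0.63],
    "uncued":      [0.70, 0.69, 0.73, 0.75],
}

valid = statistics.mean(hit_rates["valid_cue"])
invalid = statistics.mean(hit_rates["invalid_cue"])
uncued = statistics.mean(hit_rates["uncued"])
print(f"validity effect (valid - invalid): {valid - invalid:+.2f}")
print(f"cueing benefit (valid - uncued):   {valid - uncued:+.2f}")
```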
Affiliation(s)
- G. M. Hanada
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- J. Ahveninen
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- F. J. Calabro
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Department of Psychiatry and Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA, USA
- A. Yengo-Kahn
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- L. M. Vaina
- Brain and Vision Research Laboratory, Department of Biomedical Engineering, Boston University, Boston, MA, USA
- Harvard Medical School, Athinoula A. Martinos Center for Biomedical Imaging, Department of Radiology, Massachusetts General Hospital, Charlestown, MA, USA
- Harvard Medical School, Department of Neurology, Massachusetts General Hospital and Brigham and Women’s Hospital, MA, USA