1
Huntley MK, Nguyen A, Albrecht MA, Marinovic W. Tactile cues are more intrinsically linked to motor timing than visual cues in visual-tactile sensorimotor synchronization. Atten Percept Psychophys 2024; 86:1022-1037. PMID: 38263510; PMCID: PMC11062975; DOI: 10.3758/s13414-023-02828-9.
Abstract
Many tasks, such as driving a car, require precise synchronization with external sensory stimuli. This study investigates whether combined visual-tactile information provides additional benefits to movement synchrony over separate visual and tactile stimuli, and explores the relationship with the temporal binding window for multisensory integration. In Experiment 1, participants completed a sensorimotor synchronization task to examine movement variability and a simultaneity judgment task to measure the temporal binding window. Results showed similar synchronization variability between visual-tactile and tactile-only stimuli, both significantly lower than visual-only. In Experiment 2, participants completed a visual-tactile sensorimotor synchronization task with cross-modal stimuli presented inside (stimulus-onset asynchrony 80 ms) and outside (stimulus-onset asynchrony 400 ms) the temporal binding window to examine the temporal accuracy of movement execution. Participants synchronized their movement with the first stimulus in the cross-modal pair, either the visual or tactile stimulus. Results showed significantly greater temporal accuracy when only one stimulus was presented inside the window and the second stimulus was outside the window than when both stimuli were presented inside the window, with movement execution being more accurate when attending to the tactile stimulus. Overall, these findings indicate there may be a modality-specific benefit to sensorimotor synchronization performance, such that tactile cues are weighted more strongly than visual cues because tactile information is more intrinsically linked to motor timing than visual information. Further, our findings indicate that the visual-tactile temporal binding window is related to the temporal accuracy of movement execution.
Affiliation(s)
- Michelle K Huntley
- School of Population Health, Curtin University, Perth, Western Australia, Australia.
- School of Psychology and Public Health, La Trobe University, Wodonga, Victoria, Australia.
- An Nguyen
- School of Population Health, Curtin University, Perth, Western Australia, Australia
- Matthew A Albrecht
- Western Australia Centre for Road Safety Research, School of Psychological Science, University of Western Australia, Perth, Western Australia, Australia
- Welber Marinovic
- School of Population Health, Curtin University, Perth, Western Australia, Australia
2
Marsicano G, Bertini C, Ronconi L. Alpha-band sensory entrainment improves audiovisual temporal acuity. Psychon Bull Rev 2024; 31:874-885. PMID: 37783899; DOI: 10.3758/s13423-023-02388-x.
Abstract
Visual and auditory stimuli are transmitted from the environment to sensory cortices with different timing, requiring the brain to encode when sensory inputs must be segregated or integrated into a single percept. The probability that different audiovisual (AV) stimuli are integrated into a single percept even when presented asynchronously is reflected in the construct of the temporal binding window (TBW). There is strong interest in testing whether it is possible to broaden or shrink the TBW by using neuromodulatory approaches that can speed up or slow down ongoing alpha oscillations, which have repeatedly been hypothesized to be an important determinant of the TBW's size. Here, we employed a web-based sensory entrainment protocol combined with a simultaneity judgment task using simple flash-beep stimuli. The aim was to test whether AV temporal acuity could be modulated trial by trial by synchronizing ongoing neural oscillations in the prestimulus period to a rhythmic sensory stream presented in the upper (∼12 Hz) or lower (∼8.5 Hz) alpha range. As a control, we implemented a nonrhythmic condition where only the first and the last entrainers were employed. Results show that upper alpha entrainment shrinks the AV TBW and improves AV temporal acuity when compared with the lower alpha and control conditions. Our findings represent a proof of concept of the efficacy of sensory entrainment to improve AV temporal acuity in a trial-by-trial manner, and they strengthen the idea that alpha oscillations may reflect the temporal unit of AV temporal binding.
Affiliation(s)
- Gianluca Marsicano
- Department of Psychology, University of Bologna, Viale Berti Pichat 5, 40121, Bologna, Italy
- Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Via Rasi e Spinelli 176, 47023, Cesena, Italy
- Caterina Bertini
- Department of Psychology, University of Bologna, Viale Berti Pichat 5, 40121, Bologna, Italy
- Centre for Studies and Research in Cognitive Neuroscience, University of Bologna, Via Rasi e Spinelli 176, 47023, Cesena, Italy
- Luca Ronconi
- School of Psychology, Vita-Salute San Raffaele University, Via Olgettina 58, 20132, Milan, Italy.
- Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy.
3
Alwashmi K, Meyer G, Rowe F, Ward R. Enhancing learning outcomes through multisensory integration: A fMRI study of audio-visual training in virtual reality. Neuroimage 2024; 285:120483. PMID: 38048921; DOI: 10.1016/j.neuroimage.2023.120483.
Abstract
The integration of information from different sensory modalities is a fundamental process that enhances perception and performance in real and virtual environments (VR). Understanding these mechanisms, especially during learning tasks that exploit novel multisensory cue combinations, provides opportunities for the development of new rehabilitative interventions. This study aimed to investigate how functional brain changes support behavioural performance improvements during an audio-visual (AV) learning task. Twenty healthy participants underwent 30 min of daily VR training for four weeks. The task was an AV adaptation of a 'scanning training' paradigm that is commonly used in hemianopia rehabilitation. Functional magnetic resonance imaging (fMRI) and performance data were collected at baseline, after two and four weeks of training, and four weeks post-training. We show that behavioural performance, operationalised as mean reaction time (RT) reduction in VR, significantly improves. In separate tests in a controlled laboratory environment, we showed that the behavioural performance gains in the VR training environment transferred to a significant mean RT reduction for the trained AV voluntary task on a computer screen. Enhancements were observed in both the visual-only and AV conditions, with the latter demonstrating a faster response time supported by the presence of audio cues. The behavioural learning effect also transferred to two additional tasks that were tested: a visual search task and an involuntary visual task. Our fMRI results reveal an increase in functional activation (BOLD signal) in multisensory brain regions involved in early-stage AV processing: the thalamus, the caudal inferior parietal lobe, and the cerebellum. These functional changes were observed only for the trained multisensory task and not for unimodal visual stimulation. Functional activation changes in the thalamus were significantly correlated with behavioural performance improvements.
This study demonstrates that incorporating spatial auditory cues into voluntary visual training in VR leads to increased activation changes in multisensory integration regions, resulting in measurable performance gains across tasks. The findings highlight the potential of VR-based multisensory training as an effective method for enhancing cognitive function and as a potentially valuable tool in rehabilitative programmes.
Affiliation(s)
- Kholoud Alwashmi
- Faculty of Health and Life Sciences, University of Liverpool, United Kingdom; Department of Radiology, Princess Nourah bint Abdulrahman University, Saudi Arabia.
- Georg Meyer
- Digital Innovation Facility, University of Liverpool, United Kingdom
- Fiona Rowe
- Institute of Population Health, University of Liverpool, United Kingdom
- Ryan Ward
- Digital Innovation Facility, University of Liverpool, United Kingdom; School of Computer Science and Mathematics, Liverpool John Moores University, United Kingdom
4
Wang L, Lin L, Ren J. The characteristics of audiovisual temporal integration in streaming-bouncing bistable motion perception: considering both implicit and explicit processing perspectives. Cereb Cortex 2023; 33:11541-11555. PMID: 37874024; DOI: 10.1093/cercor/bhad388.
Abstract
This study explored the behavioral and neural activity characteristics of audiovisual temporal integration in motion perception from both implicit and explicit perspectives. The streaming-bouncing bistable paradigm (SB task) was employed to investigate implicit temporal integration, while the corresponding simultaneity judgment task (SJ task) was used to examine explicit temporal integration. The behavioral results revealed a negative correlation between implicit and explicit temporal processing. In the ERP results of both tasks, three neural phases (PD100, ND180, and PD290) in the fronto-central region were identified as reflecting integration effects, and the auditory-evoked multisensory N1 component may serve as a primary component responsible for cross-modal temporal processing. However, there were significant differences between the VA ERPs in the SB and SJ tasks, and the influence of speed on implicit and explicit integration effects also varied. These results, building upon the validation of previous temporal renormalization theory, suggest that implicit and explicit temporal integration operate under distinct processing modes within a shared neural network. This underscores the brain's flexibility and adaptability in cross-modal temporal processing.
Affiliation(s)
- Luning Wang
- School of Psychology, Shanghai University of Sport, No. 399, Changhai Road, Yangpu District, Shanghai, 200438, China
- Liyue Lin
- School of Psychology, Shanghai University of Sport, No. 399, Changhai Road, Yangpu District, Shanghai, 200438, China
- Jie Ren
- China Table Tennis College, Shanghai University of Sport, No. 399, Changhai Road, Yangpu District, Shanghai, 200438, China
5
Jiang Y, Qiao R, Shi Y, Tang Y, Hou Z, Tian Y. The effects of attention in auditory-visual integration revealed by time-varying networks. Front Neurosci 2023; 17:1235480. PMID: 37600005; PMCID: PMC10434229; DOI: 10.3389/fnins.2023.1235480.
Abstract
Attention and audiovisual integration are crucial subjects in the field of brain information processing. A large number of previous studies have sought to determine the relationship between them through specific experiments, but they have failed to reach a unified conclusion. These studies explored the relationship through the frameworks of early, late, and parallel integration, though network analysis has been employed only sparingly. In this study, we employed time-varying network analysis, which offers a comprehensive and dynamic insight into cognitive processing, to explore the relationship between attention and auditory-visual integration. The combination of high-spatial-resolution functional magnetic resonance imaging (fMRI) and high-temporal-resolution electroencephalography (EEG) was used. First, a generalized linear model (GLM) was employed to find the task-related fMRI activations, which were selected as regions of interest (ROIs) to serve as nodes of the time-varying network. Then, the electrical activity of the auditory-visual cortex was estimated via the normalized minimum norm estimation (MNE) source localization method. Finally, the time-varying network was constructed using the adaptive directed transfer function (ADTF) technique. Task-related fMRI activations were mainly observed in the bilateral temporoparietal junction (TPJ), superior temporal gyrus (STG), and primary visual and auditory areas. The time-varying network analysis revealed that V1/A1↔STG connectivity occurred before TPJ↔STG connectivity. Therefore, the results support the theory that auditory-visual integration occurs before attention, aligning with the early integration framework.
Affiliation(s)
- Yuhao Jiang
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Central Nervous System Drug Key Laboratory of Sichuan Province, Luzhou, China
- Rui Qiao
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Yupan Shi
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Yi Tang
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Zhengjun Hou
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Yin Tian
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
6
Pepper JL, Usherwood B, Bampouras TM, Nuttall HE. Age-related changes to the attentional modulation of temporal binding. Atten Percept Psychophys 2023; 85:1905-1919. PMID: 37495933; PMCID: PMC10545588; DOI: 10.3758/s13414-023-02756-8.
Abstract
During multisensory integration, the time range within which visual and auditory information can be perceived as synchronous and bound together is known as the temporal binding window (TBW). With increasing age, the TBW becomes wider, such that older adults erroneously, and often dangerously, integrate sensory inputs that are asynchronous. Recent research suggests that attentional cues can narrow the width of the TBW in younger adults, sharpening temporal perception and increasing the accuracy of integration. However, due to their age-related declines in attentional control, it is not yet known whether older adults can deploy attentional resources to narrow the TBW in the same way as younger adults. This study investigated the age-related changes to the attentional modulation of the TBW. Thirty younger and 30 older adults completed a cued-spatial-attention version of the stream-bounce illusion, assessing the extent to which the visual and auditory stimuli were integrated when presented at three different stimulus-onset asynchronies, and when attending to a validly cued or invalidly cued location. A 2 × 2 × 3 mixed ANOVA revealed that when participants attended to the validly cued location (i.e., when attention was present), susceptibility to the stream-bounce illusion decreased. However, crucially, this attentional manipulation significantly affected audiovisual integration in younger adults, but not in older adults. These findings suggest that older adults have multisensory integration-related attentional deficits. Directions for future research and practical applications surrounding treatments to improve the safety of older adults' perception and navigation through the environment are discussed.
Affiliation(s)
- Jessica L. Pepper
- Department of Psychology, Fylde College, Lancaster University, Lancaster, UK LA1 4YF
- Barrie Usherwood
- Department of Psychology, Fylde College, Lancaster University, Lancaster, UK LA1 4YF
- Theodoros M. Bampouras
- School of Sport and Exercise Sciences, Liverpool John Moores University, Liverpool, UK L3 3AF
- Helen E. Nuttall
- Department of Psychology, Fylde College, Lancaster University, Lancaster, UK LA1 4YF
7
Bertonati G, Casado-Palacios M, Crepaldi M, Parmiggiani A, Maviglia A, Torazza D, Campus C, Gori M. MultiTab: A Novel Portable Device to Evaluate Multisensory Skills. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. PMID: 38083497; DOI: 10.1109/embc40787.2023.10341048.
Abstract
To infer the spatial-temporal features of an external event, we are guided by multisensory cues, and extensive research has shown enhanced perception when information from different sensory modalities is integrated. In this scenario, the motor system also seems to have an important role in boosting perception. With the present work, we introduce and validate a novel portable technology, named MultiTab, which is able to provide auditory and visual stimulation as well as to measure the user's manual responses. Our preliminary results indicate that MultiTab reliably induces multisensory integration in a spatial localization task, shown by significantly reduced manual response times for the localization of audiovisual stimuli compared to unisensory stimuli. Clinical relevance: The current work presents a novel portable device that could contribute to the clinical evaluation of multisensory processing as well as spatial perception. In addition, by promoting and recording manual actions, MultiTab could be especially suitable for the design of rehabilitative protocols using multisensory motor training.
8
Kassim FM, Lahooti SK, Keay EA, Iyyalol R, Rodger J, Albrecht MA, Martin-Iverson MT. Dexamphetamine widens temporal and spatial binding windows in healthy participants. J Psychiatry Neurosci 2023; 48:E90-E98. PMID: 36918195; PMCID: PMC10019325; DOI: 10.1503/jpn.220149.
Abstract
BACKGROUND The pathophysiology of psychosis is complex, but a better understanding of stimulus binding windows (BWs) could help to improve our knowledge base. Previous studies have shown that dopamine release is associated with psychosis and widened BWs. We can probe BW mechanisms using drugs of specific interest to psychosis. Therefore, we were interested in understanding how manipulation of the dopamine or catecholamine systems affects psychosis and BWs. We aimed to investigate the effect of dexamphetamine, a dopamine-releasing stimulant, on the BWs in a unimodal illusion: the tactile funneling illusion (TFI). METHODS We conducted a randomized, double-blind, counterbalanced, placebo-controlled crossover study to investigate funnelling and errors of localization. We administered dexamphetamine (0.45 mg/kg) to 46 participants. We manipulated 5 spatial (5-1 cm) and 3 temporal (0, 500 and 750 ms) conditions in the TFI. RESULTS We found that dexamphetamine increased the funnelling illusion (p = 0.009) and increased the error of localization in a delay-dependent manner (p = 0.03). We also found that dexamphetamine significantly increased the error of localization at 500 ms temporal separation and 4 cm spatial separation (p[interaction] = 0.009; p[500 ms, 4 cm vs. baseline] = 0.01). LIMITATIONS Although amphetamine-induced models of psychosis are a useful approach to understanding the physiology of psychosis related to dopamine hyperactivity, dexamphetamine is equally effective at releasing noradrenaline and dopamine, and, therefore, we were unable to tease apart the effects of the 2 systems on BWs in our study. CONCLUSION We found that dexamphetamine increases illusory perception on the unimodal TFI in healthy participants, which suggests that dopamine or other catecholamines have a role in increasing tactile spatial and temporal BWs.
Affiliation(s)
- Faiz M Kassim
- From the Department of Psychiatry, St. Paul's Hospital Millennium Medical College, Addis Ababa, Ethiopia (Kassim); the Psychopharmacology Unit, School of Biomedical Sciences, University of Western Australia, Perth, WA, Australia (Kassim, Lahooti, Keay, Martin-Iverson); the Psychiatry, Graylands Hospital, Mt Claremont, Perth, WA, Australia (Iyyalol); the Experimental and Regenerative Neurosciences, School of Biological Sciences, University of Western Australia, Crawley, WA, Australia (Rodger); the Brain Plasticity Group, Perron Institute for Neurological and Translational Science, Nedlands, WA, Australia (Rodger); the Western Australian Centre for Road Safety Research, School of Psychological Science, University of Western Australia, Perth, WA, Australia (Albrecht)
- Samra Krakonja Lahooti
- Elizabeth Ann Keay
- Rajan Iyyalol
- Jennifer Rodger
- Matthew A Albrecht
- Mathew T Martin-Iverson
9
Kim H, Lee IK. Studying the Effects of Congruence of Auditory and Visual Stimuli on Virtual Reality Experiences. IEEE Trans Vis Comput Graph 2022; 28:2080-2090. PMID: 35167477; DOI: 10.1109/tvcg.2022.3150514.
Abstract
Studies in virtual reality (VR) have introduced numerous multisensory simulation techniques for more immersive VR experiences. However, although they primarily focus on expanding sensory types or increasing individual sensory quality, they lack consensus in designing appropriate interactions between different sensory stimuli. This paper explores how the congruence between auditory and visual (AV) stimuli, which are the sensory stimuli typically provided by VR devices, affects the cognition and experience of VR users as a critical interaction factor in promoting multisensory integration. We defined the types of (in)congruence between AV stimuli, and then designed 12 virtual spaces with different types or degrees of congruence between AV stimuli. We then evaluated the presence, immersion, motion sickness, and cognition changes in each space. We observed the following key findings: 1) there is a limit to the degree of temporal or spatial incongruence that can be tolerated, with few negative effects on user experience until that point is exceeded; 2) users are tolerant of semantic incongruence; 3) a simulation that considers synesthetic congruence contributes to the user's sense of immersion and presence. Based on these insights, we identified the essential considerations for designing sensory simulations in VR and proposed future research directions.
10
Peng X, Jiang H, Yang J, Shi R, Feng J, Liang Y. Effects of Temporal Characteristics on Pilots Perceiving Audiovisual Warning Signals Under Different Perceptual Loads. Front Psychol 2022; 13:808150. PMID: 35222196; PMCID: PMC8867071; DOI: 10.3389/fpsyg.2022.808150.
Abstract
Our research aimed to investigate the effectiveness of auditory, visual, and audiovisual warning signals for capturing pilots' attention, and how stimulus-onset asynchronies (SOAs) in audiovisual stimuli affect how pilots perceive bimodal warning signals under different perceptual load conditions. In Experiment 1 (low perceptual load), participants discriminated the location (right vs. left) of visual targets preceded by five different types of warning signals. In Experiment 2 (high perceptual load), participants completed the same location task plus a digit detection task in a rapid serial visual presentation (RSVP) stream. The main effect of warning signals in the two experiments showed that visual and auditory cues presented simultaneously (AV) effectively and efficiently captured pilots' attention in both high and low load conditions. Specifically, auditory (A), AV, and visual preceding the auditory stimulus by 100 ms (VA100) signals increased spatial orienting to the valid position in the low load condition. As visual perceptual load increased, auditory preceding the visual stimulus by 100 ms (AV100) and A warning signals produced stronger spatial orienting. The results may inform the design of cockpit display interfaces, contributing to immediate flight crew awareness.
Affiliation(s)
- Xing Peng
- Institute of Aviation Human Factors and Cognitive Neuroscience, College of Flight Technology, Civil Aviation Flight University of China, Guanghan, China
- Hao Jiang
- Institute of Aviation Human Factors and Cognitive Neuroscience, College of Flight Technology, Civil Aviation Flight University of China, Guanghan, China
- Jiazhong Yang
- Institute of Aviation Human Factors and Cognitive Neuroscience, College of Flight Technology, Civil Aviation Flight University of China, Guanghan, China
- Rong Shi
- Institute of Aviation Human Factors and Cognitive Neuroscience, College of Flight Technology, Civil Aviation Flight University of China, Guanghan, China
| | - Junyi Feng
- Technical Support Center, Operation Control Department, Beijing Capital Airlines, Beijing, China
| | - Yaowei Liang
- Institute of Aviation Human Factors and Cognitive Neuroscience, College of Flight Technology, Civil Aviation Flight University of China, Guanghan, China.,Flying Department of Southwest Branch, Air China Limited, Chengdu, China
| |
Collapse
|
11
|
Marsicano G, Cerpelloni F, Melcher D, Ronconi L. Lower multisensory temporal acuity in individuals with high schizotypal traits: a web-based study. Sci Rep 2022; 12:2782. [PMID: 35177673 PMCID: PMC8854550 DOI: 10.1038/s41598-022-06503-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/04/2021] [Accepted: 01/25/2022] [Indexed: 12/02/2022] Open
Abstract
Natural events are often multisensory, requiring the brain to combine information from the same spatial location and timing across different senses. The importance of temporal coincidence has led to the introduction of the temporal binding window (TBW) construct, defined as the time range within which multisensory inputs are highly likely to be perceptually bound into a single entity. Anomalies in TBWs have been linked to confused perceptual experiences and inaccurate filtering of sensory inputs coming from different environmental sources. Indeed, larger TBWs have been associated with disorders such as schizophrenia and autism and are also correlated with a higher level of subclinical traits of these conditions in the general population. Here, we tested the feasibility of using a web-based version of a classic audio-visual simultaneity judgment (SJ) task with simple flash-beep stimuli in order to measure multisensory temporal acuity and its relationship with schizotypal traits as measured in the general population. Results show that: (i) the response distributions obtained in the web-based SJ task closely resembled those reported by studies carried out in controlled laboratory settings, and (ii) lower multisensory temporal acuity was associated with higher schizotypal traits in the “cognitive-perceptual” domains. Our findings reveal the possibility of adequately using a web-based audio-visual SJ task outside a controlled laboratory setting, available to a more diverse and representative pool of participants. These results provide additional evidence for a close relationship between lower multisensory acuity and the expression of schizotypal traits in the general population.
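The core measurement in an SJ task of this kind is estimating the point of subjective simultaneity (PSS) and the TBW from the proportion of "simultaneous" responses at each SOA. The sketch below uses made-up data, a scaled Gaussian fit, and a 75%-of-peak criterion for the window; the study's exact stimuli, fitting procedure, and TBW criterion may differ.

```python
import numpy as np

# Hypothetical SJ data: SOAs in ms (negative = auditory first) and the
# proportion of "simultaneous" responses at each SOA. Values are made up.
soas = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400], dtype=float)
p_sync = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.85, 0.55, 0.20, 0.08])

def gaussian(soa, amp, pss, sigma):
    # Scaled Gaussian: peak height amp, centre = PSS, spread = sigma.
    return amp * np.exp(-((soa - pss) ** 2) / (2.0 * sigma ** 2))

# Dependency-free least-squares fit by coarse grid search
# (scipy.optimize.curve_fit would do the same job more precisely).
best = (np.inf, 0.0, 0.0, 1.0)
for amp in np.linspace(0.7, 1.1, 21):
    for pss in np.linspace(-100, 100, 41):
        for sigma in np.linspace(80, 300, 23):
            sse = float(np.sum((gaussian(soas, amp, pss, sigma) - p_sync) ** 2))
            if sse < best[0]:
                best = (sse, amp, pss, sigma)

_, amp, pss, sigma = best
# One common TBW convention: the SOA range over which the fitted curve
# stays above a criterion fraction of its peak (here 75%); FWHM is another.
half_width = sigma * np.sqrt(-2.0 * np.log(0.75))
print(f"PSS ~ {pss:.0f} ms, TBW half-width (75% criterion) ~ {half_width:.0f} ms")
```

A rightward-shifted PSS here would indicate that the auditory stimulus must lead for the pair to feel simultaneous; a wider fitted curve corresponds to the enlarged TBW the abstract associates with higher schizotypal traits.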
Affiliation(s)
- Gianluca Marsicano
- School of Psychology, Vita-Salute San Raffaele University, Milan, Italy; Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Filippo Cerpelloni
- Center for Mind/Brain Sciences and Department of Psychology and Cognitive Science, University of Trento, Rovereto, Italy; Laboratory of Biological Psychology, Department of Brain and Cognition, Leuven Brain Institute, KU Leuven, Leuven, Belgium; Institute of Research in Psychology (IPSY) & Institute of Neuroscience (IoNS), University of Louvain (UCLouvain), Leuven, Belgium
- David Melcher
- Center for Mind/Brain Sciences and Department of Psychology and Cognitive Science, University of Trento, Rovereto, Italy; Psychology Program, Division of Science, New York University Abu Dhabi, Abu Dhabi, United Arab Emirates
- Luca Ronconi
- School of Psychology, Vita-Salute San Raffaele University, Milan, Italy; Division of Neuroscience, IRCCS San Raffaele Scientific Institute, Milan, Italy
12
Wang L, Lin L, Sun Y, Hou S, Ren J. The effect of movement speed on audiovisual temporal integration in streaming-bouncing illusion. Exp Brain Res 2022; 240:1139-1149. [PMID: 35147722 DOI: 10.1007/s00221-022-06312-y] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2021] [Accepted: 01/18/2022] [Indexed: 11/04/2022]
Abstract
Motion perception in real situations is often stimulated by multisensory information. Speed is an essential characteristic of moving objects; however, at present, it is not clear whether speed affects the process of audiovisual temporal integration in motion perception. Therefore, this study used a streaming-bouncing task (a bistable motion perception task; SB task) combined with a simultaneity judgment task (SJ task) to explore the effect of speed on audiovisual temporal integration from implicit and explicit perspectives. The experiment had a within-subjects design, two speed conditions (fast/slow), eleven audiovisual conditions [stimulus onset asynchrony (SOA): 0 ms/ ± 60 ms/ ± 120 ms/ ± 180 ms/ ± 240 ms/ ± 300 ms], and a visual-only condition. A total of 30 subjects were recruited for the study. These participants completed the SB task and the SJ task successively. The results showed the following outcomes: (1) the optimal times needed to induce the "bouncing" illusion and the maximum audiovisual bounce-inducing effect (ABE) magnitude were much earlier than the optimal time for audiovisual synchrony, (2) speed as a bottom-up factor could affect the proportion of "bouncing" perception in SB illusions but did not affect the ABE magnitude, (3) speed could also affect the ability of audiovisual temporal integration in motion perception, such that the point of subjective simultaneity (PSS) in fast speed conditions was earlier than in slow speed conditions in the SJ task, and (4) performance in the SB task and SJ task was not correlated. In conclusion, the time at which maximum audiovisual integration occurred differed from the optimal time for synchrony perception; moreover, speed could affect audiovisual temporal integration in motion perception, but only in explicit temporal tasks.
Affiliation(s)
- Luning Wang
- School of Psychology, Shanghai University of Sport, Shanghai, 200438, China
- Liyue Lin
- School of Psychology, Shanghai University of Sport, Shanghai, 200438, China
- Yujia Sun
- China Table Tennis College, Shanghai University of Sport, Shanghai, 200438, China
- Shuang Hou
- School of Psychology, Shanghai University of Sport, Shanghai, 200438, China
- Jie Ren
- China Table Tennis College, Shanghai University of Sport, Shanghai, 200438, China
13
Visual field differences in temporal synchrony processing for audio-visual stimuli. PLoS One 2021; 16:e0261129. [PMID: 34914735 PMCID: PMC8675747 DOI: 10.1371/journal.pone.0261129] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2020] [Accepted: 11/24/2021] [Indexed: 11/19/2022] Open
Abstract
Audio-visual integration relies on temporal synchrony between visual and auditory inputs. However, visual and auditory stimuli differ in their traveling and transmission speeds; audio-visual synchrony perception therefore operates flexibly. The processing speed of visual stimuli affects the perception of audio-visual synchrony. The present study examined the effects of the visual field in which visual stimuli are presented on the processing of audio-visual temporal synchrony. The point of subjective simultaneity, the temporal binding window, and the rapid recalibration effect were measured using temporal order judgment, simultaneity judgment, and stream/bounce perception, because different mechanisms of temporal processing have been suggested for these three paradigms. The results indicate that, in the temporal order judgment task, auditory stimuli had to be presented earlier relative to visual stimuli in the central visual field than in the peripheral visual field condition for subjective simultaneity to be perceived. Meanwhile, the subjective simultaneity bandwidth was broader in the central visual field than in the peripheral visual field during the simultaneity judgment task. In the stream/bounce perception task, neither the point of subjective simultaneity nor the temporal binding window differed between the two visual fields. Moreover, rapid recalibration occurred in both visual fields during the simultaneity judgment tasks. However, during the temporal order judgment task and stream/bounce perception, rapid recalibration occurred only in the central visual field. These results suggest that differences in visual processing speed across the visual field modulate the temporal processing of audio-visual stimuli. Furthermore, the three tasks (temporal order judgment, simultaneity judgment, and stream/bounce perception) each have distinct functional characteristics for audio-visual synchrony perception. Future studies are necessary to confirm whether compensation for differences in the temporal resolution of the visual field in later cortical visual pathways contributes to these visual field differences in audio-visual temporal synchrony.
14
Marin A, Störmer VS, Carver LJ. Expectations about dynamic visual objects facilitates early sensory processing of congruent sounds. Cortex 2021; 144:198-211. [PMID: 34673436 DOI: 10.1016/j.cortex.2021.08.006] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/16/2021] [Revised: 05/17/2021] [Accepted: 08/05/2021] [Indexed: 11/17/2022]
Abstract
The perception of a moving object can lead to the expectation of its sound, yet little is known about how visual expectations influence auditory processing. We examined how visual perception of an object moving continuously across the visual field influences early auditory processing of a sound that occurred congruently or incongruently with the object's motion. In Experiment 1, electroencephalogram (EEG) activity was recorded from adults who passively viewed a ball that appeared either on the left or right boundary of a display and continuously traversed along the horizontal midline to make contact and elicit a bounce sound off the opposite boundary. Our main analysis focused on the auditory-evoked event-related potential. For audio-visual (AV) trials, a sound accompanied the visual input when the ball contacted the opposite boundary (AV-synchronous), or the sound occurred before contact (AV-asynchronous). We also included audio-only and visual-only trials. AV-synchronous sounds elicited an earlier and attenuated auditory response relative to AV-asynchronous or audio-only events. In Experiment 2, we examined the roles of expectancy and multisensory integration in influencing this response. In addition to the audio-only, AV-synchronous, and AV-asynchronous conditions, participants were shown a ball that became occluded prior to reaching the boundary of the display, but elicited an expected sound at the point of occluded collision. The auditory response during the AV-occluded condition resembled that of the AV-synchronous condition, suggesting that expectations induced by a moving object can influence early auditory processing. Broadly, the results suggest that dynamic visual stimuli can help generate expectations about the timing of sounds, which then facilitates the processing of auditory information that matches these expectations.
Affiliation(s)
- Andrew Marin
- University of California, San Diego (UCSD), Psychology Department, La Jolla, CA, USA
- Viola S Störmer
- Dartmouth College, Department of Psychological and Brain Sciences, Hanover, NH, USA
- Leslie J Carver
- University of California, San Diego (UCSD), Psychology Department, La Jolla, CA, USA
15
Martolini C, Cappagli G, Signorini S, Gori M. Effects of Increasing Stimulated Area in Spatiotemporally Congruent Unisensory and Multisensory Conditions. Brain Sci 2021; 11:brainsci11030343. [PMID: 33803142 PMCID: PMC7999573 DOI: 10.3390/brainsci11030343] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/20/2021] [Revised: 02/27/2021] [Accepted: 02/27/2021] [Indexed: 11/16/2022] Open
Abstract
Research has shown that the ability to integrate complementary sensory inputs into a unique and coherent percept based on spatiotemporal coincidence, namely multisensory integration, can improve perceptual precision. Despite the extensive research on multisensory integration, very little is known about the principal mechanisms responsible for the spatial interaction of multiple sensory stimuli. Furthermore, it is not clear whether the size of the spatialized stimulation can affect unisensory and multisensory perception. The present study aims to unravel whether increasing the stimulated area has a detrimental or beneficial effect on sensory thresholds. Sixteen typical adults were asked to discriminate unimodal (visual, auditory, tactile), bimodal (audio-visual, audio-tactile, visuo-tactile) and trimodal (audio-visual-tactile) stimulation produced by one, two, three or four devices positioned on the forearm. Results related to unisensory conditions indicate that increasing the stimulated area has a detrimental effect on auditory and tactile accuracy and on visual reaction times, suggesting that the size of the stimulated area affects these unisensory percepts. Concerning multisensory stimulation, our findings indicate that integrating auditory and tactile information improves sensory precision only when the stimulation area is augmented to four devices, suggesting that multisensory interaction occurs for expanded spatial areas.
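The standard quantitative benchmark for the kind of precision gain described above is maximum-likelihood (optimal) cue combination, in which each modality is weighted by its reliability. The abstract does not state that this model was fitted; the sketch below is purely illustrative, with made-up noise levels.

```python
import numpy as np

# Illustrative unisensory noise levels (standard deviations of the
# discrimination estimates); arbitrary units, not taken from the study.
sigma_a, sigma_t = 12.0, 8.0  # auditory, tactile

# Reliability (inverse-variance) weights: the more precise cue
# (tactile here) dominates the combined estimate.
w_a = sigma_t ** 2 / (sigma_a ** 2 + sigma_t ** 2)
w_t = sigma_a ** 2 / (sigma_a ** 2 + sigma_t ** 2)

# Predicted bimodal noise: always below the better unisensory cue,
# which is the "improved sensory precision" signature of integration.
sigma_at = np.sqrt((sigma_a ** 2 * sigma_t ** 2) / (sigma_a ** 2 + sigma_t ** 2))
print(f"weights: auditory {w_a:.2f}, tactile {w_t:.2f}; bimodal sigma {sigma_at:.2f}")
```

Comparing the measured bimodal threshold against this prediction is how studies typically decide whether integration was statistically optimal or merely present.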
Affiliation(s)
- Chiara Martolini (corresponding author)
- Unit for Visually Impaired People, Center for Human Technologies, Istituto Italiano di Tecnologia, via Enrico Melen 83, 16152 Genoa, Italy
- Giulia Cappagli
- Unit for Visually Impaired People, Center for Human Technologies, Istituto Italiano di Tecnologia, via Enrico Melen 83, 16152 Genoa, Italy
- Sabrina Signorini
- Center of Child Neuro-Ophthalmology, IRCCS Mondino Foundation, via Mondino 2, 27100 Pavia, Italy
- Monica Gori
- Unit for Visually Impaired People, Center for Human Technologies, Istituto Italiano di Tecnologia, via Enrico Melen 83, 16152 Genoa, Italy
16
Atilgan H, Bizley JK. Training enhances the ability of listeners to exploit visual information for auditory scene analysis. Cognition 2021; 208:104529. [PMID: 33373937 PMCID: PMC7868888 DOI: 10.1016/j.cognition.2020.104529] [Citation(s) in RCA: 6] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/16/2020] [Revised: 11/24/2020] [Accepted: 11/25/2020] [Indexed: 11/25/2022]
Abstract
The ability to use temporal relationships between cross-modal cues facilitates perception and behavior. Previously we observed that temporally correlated changes in the size of a visual stimulus and the intensity in an auditory stimulus influenced the ability of listeners to perform an auditory selective attention task (Maddox, Atilgan, Bizley, & Lee, 2015). Participants detected timbral changes in a target sound while ignoring those in a simultaneously presented masker. When the visual stimulus was temporally coherent with the target sound, performance was significantly better than when the visual stimulus was temporally coherent with the masker, despite the visual stimulus conveying no task-relevant information. Here, we trained observers to detect audiovisual temporal coherence and asked whether this changed the way in which they were able to exploit visual information in the auditory selective attention task. We observed that after training, participants were able to benefit from temporal coherence between the visual stimulus and both the target and masker streams, relative to the condition in which the visual stimulus was coherent with neither sound. However, we did not observe such changes in a second group that were trained to discriminate modulation rate differences between temporally coherent audiovisual streams, although they did show an improvement in their overall performance. A control group did not change their performance between pretest and post-test and did not change how they exploited visual information. These results provide insights into how crossmodal experience may optimize multisensory integration.
17
Yang W, Li S, Xu J, Li Z, Yang X, Ren Y. Selective and divided attention modulates audiovisual integration in adolescents. Cognitive Development 2020. [DOI: 10.1016/j.cogdev.2020.100922] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
18
Carlsen AN, Maslovat D, Kaga K. An unperceived acoustic stimulus decreases reaction time to visual information in a patient with cortical deafness. Sci Rep 2020; 10:5825. [PMID: 32242039 PMCID: PMC7118083 DOI: 10.1038/s41598-020-62450-9] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/07/2019] [Accepted: 03/13/2020] [Indexed: 11/16/2022] Open
Abstract
Responding to multiple stimuli of different modalities has been shown to reduce reaction time (RT), yet many different processes can potentially contribute to multisensory response enhancement. To investigate the neural circuits involved in voluntary response initiation, an acoustic stimulus of varying intensities (80, 105, or 120 dB) was presented during a visual RT task to a patient with profound bilateral cortical deafness and an intact auditory brainstem response. Despite being unable to consciously perceive sound, RT was reliably shortened (~100 ms) on trials where the unperceived acoustic stimulus was presented, confirming the presence of multisensory response enhancement. Although the exact locus of this enhancement is unclear, these results cannot be attributed to involvement of the auditory cortex. Thus, these data provide new and compelling evidence that activation from subcortical auditory processing circuits can contribute to other cortical or subcortical areas responsible for the initiation of a response, without the need for conscious perception.
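Multisensory RT speed-ups like the one reported here are commonly evaluated against Miller's race-model inequality, which bounds how fast redundant-signal responses can get through statistical facilitation alone. The abstract does not state that this analysis was used, so the sketch below is purely illustrative, with synthetic reaction times.

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic reaction times (ms), illustration only: two unimodal
# conditions and a faster bimodal condition.
rt_v = rng.normal(350, 40, 500)
rt_a = rng.normal(330, 40, 500)
rt_av = rng.normal(280, 35, 500)

def ecdf(rts, t):
    # Empirical cumulative distribution P(RT <= t) at each time point.
    return np.mean(rts[:, None] <= t, axis=0)

t = np.arange(150, 501, 10)

# Race-model (Miller) bound: under pure statistical facilitation,
# F_av(t) <= F_a(t) + F_v(t). A positive violation at any t suggests
# genuine coactivation rather than a race between independent channels.
bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
violation = ecdf(rt_av, t) - bound
print(f"max race-model violation: {violation.max():.3f}")
```

With these synthetic numbers the bimodal CDF exceeds the bound at fast RTs, the pattern usually taken as evidence for integration rather than mere redundancy.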
Affiliation(s)
- Dana Maslovat
- School of Kinesiology, University of British Columbia, Vancouver, Canada
- Kimitaka Kaga
- National Institute of Sensory Organs, National Tokyo Medical Center, Tokyo, Japan
19
Elshout JA, Van der Stoep N, Nijboer TCW, Van der Stigchel S. Motor congruency and multisensory integration jointly facilitate visual information processing before movement execution. Exp Brain Res 2020; 238:667-673. [PMID: 32036413 PMCID: PMC7080670 DOI: 10.1007/s00221-019-05714-9] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2019] [Accepted: 12/18/2019] [Indexed: 10/25/2022]
Abstract
Attention allows us to select important sensory information and enhances sensory information processing. Attention and our motor system are tightly coupled: attention is shifted to the target location before a goal-directed eye- or hand movement is executed. Congruent eye-hand movements to the same target can boost the effect of this pre-movement shift of attention. Moreover, visual information processing can be enhanced by, for example, auditory input presented in spatial and temporal proximity of visual input via multisensory integration (MSI). In this study, we investigated whether the combination of MSI and motor congruency can synergistically enhance visual information processing beyond what can be observed using motor congruency alone. Participants performed congruent eye- and hand movements during a 2-AFC visual discrimination task. The discrimination target was presented in the planning phase of the movements at the movement target location or a movement irrelevant location. Three conditions were compared: (1) a visual target without sound, (2) a visual target with sound spatially and temporally aligned (MSI) and (3) a visual target with sound temporally misaligned (no MSI). Performance was enhanced at the movement-relevant location when congruent motor actions and MSI coincide compared to the other conditions. Congruence in the motor system and MSI together therefore lead to enhanced sensory information processing beyond the effects of motor congruency alone, before a movement is executed. Such a synergy implies that the boost of attention previously observed for the independent factors is not at ceiling level, but can be increased even further when the right conditions are met.
Affiliation(s)
- J A Elshout
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- N Van der Stoep
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- T C W Nijboer
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands; Center of Excellence for Rehabilitation Medicine, Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht University and De Hoogstraat Rehabilitation, 3583 TM, Utrecht, The Netherlands
- S Van der Stigchel
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
20
Zhou HY, Cheung EFC, Chan RCK. Audiovisual temporal integration: Cognitive processing, neural mechanisms, developmental trajectory and potential interventions. Neuropsychologia 2020; 140:107396. [PMID: 32087206 DOI: 10.1016/j.neuropsychologia.2020.107396] [Citation(s) in RCA: 38] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/17/2019] [Revised: 02/14/2020] [Accepted: 02/15/2020] [Indexed: 12/21/2022]
Abstract
To integrate auditory and visual signals into a unified percept, the paired stimuli must co-occur within a limited time window known as the Temporal Binding Window (TBW). The width of the TBW, a proxy of audiovisual temporal integration ability, has been found to be correlated with higher-order cognitive and social functions. A comprehensive review of studies investigating audiovisual TBW reveals several findings: (1) a wide range of top-down processes and bottom-up features can modulate the width of the TBW, facilitating adaptation to the changing and multisensory external environment; (2) a large-scale brain network works in coordination to ensure successful detection of audiovisual (a)synchrony; (3) developmentally, audiovisual TBW follows a U-shaped pattern across the lifespan, with a protracted developmental course into late adolescence and rebounding in size again in late life; (4) an enlarged TBW is characteristic of a number of neurodevelopmental disorders; and (5) the TBW is highly flexible via perceptual and musical training. Interventions targeting the TBW may be able to improve multisensory function and ameliorate social communicative symptoms in clinical populations.
Affiliation(s)
- Han-Yu Zhou
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Raymond C K Chan
- Neuropsychology and Applied Cognitive Neuroscience Laboratory, CAS Key Laboratory of Mental Health, Institute of Psychology, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
21
Kim S, Kim J. Effects of Multimodal Association on Ambiguous Perception in Binocular Rivalry. Perception 2019; 48:796-819. [DOI: 10.1177/0301006619867023] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
When the two eyes view dissimilar images, an observer typically reports an ambiguous percept called binocular rivalry, in which subjective perception fluctuates between the two inputs. This perceptual instability is often composed of exclusive dominance of each image and a transition state called the piecemeal state, in which the two images are intermingled in a patchwork manner. Herein, we investigated the effects of multimodal associations (a sensory-congruent pair, an arbitrary pair, and a reversed pair) on the piecemeal state in order to see how each level of association affects ambiguous perception during binocular rivalry. To induce the multisensory associations, we designed a matching task with audiovisual feedback in which subjects were required to respond according to given pairing rules. We found that explicit audiovisual associations can substantially affect the piecemeal state during binocular rivalry, and that this congruency effect, which reduces the amount of visual ambiguity, originates primarily from explicit audiovisual association training rather than common sensory features. Furthermore, when one stimulus is associated with multiple counterparts, recent and preexisting associations work collectively to influence perceptual ambiguity during rivalry. Our findings show that learned multimodal association directly affects the temporal dynamics of ambiguous perception during binocular rivalry by modulating not only exclusive dominance but also the piecemeal state in a systematic manner.
Affiliation(s)
- Sungyong Kim
- Graduate School of Culture Technology, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
- Jeounghoon Kim
- Graduate School of Culture Technology, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea; School of Humanities and Social Sciences, Korea Advanced Institute of Science and Technology, Daejeon, Republic of Korea
22
Living and Working in a Multisensory World: From Basic Neuroscience to the Hospital. Multimodal Technologies and Interaction 2019. [DOI: 10.3390/mti3010002] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/24/2023] Open
Abstract
The intensive care unit (ICU) of a hospital is an environment subjected to ceaseless noise. Patient alarms contribute to the saturated auditory environment and often overwhelm healthcare providers with constant and false alarms. This may lead to alarm fatigue and prevent optimum patient care. In response, a multisensory alarm system developed with consideration for human neuroscience and basic music theory is proposed as a potential solution. The integration of auditory, visual, and other sensory output within an alarm system can be used to convey more meaningful clinical information about patient vital signs in the ICU and operating room to ultimately improve patient outcomes.
23
Sanders P, Thompson B, Corballis P, Searchfield G. On the Timing of Signals in Multisensory Integration and Crossmodal Interactions: a Scoping Review. Multisens Res 2019; 32:533-573. [PMID: 31137004 DOI: 10.1163/22134808-20191331] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2018] [Accepted: 04/24/2019] [Indexed: 11/19/2022]
Abstract
A scoping review was undertaken to explore research investigating early interactions and integration of auditory and visual stimuli in the human brain. The focus was on methods used to study low-level multisensory temporal processing using simple stimuli in humans, and how this research has informed our understanding of multisensory perception. The study of multisensory temporal processing probes how the relative timing between signals affects perception. Several tasks, illusions, computational models, and neuroimaging techniques were identified in the literature search. Research into early audiovisual temporal processing in special populations was also reviewed. Recent research has continued to provide support for early integration of crossmodal information. These early interactions can influence higher-level factors, and vice versa. Temporal relationships between auditory and visual stimuli influence multisensory perception, and likely play a substantial role in solving the 'correspondence problem' (how the brain determines which sensory signals belong together, and which should be segregated).
Affiliation(s)
- Philip Sanders
- Section of Audiology, University of Auckland, Auckland, New Zealand; Centre for Brain Research, University of Auckland, New Zealand; Brain Research New Zealand - Rangahau Roro Aotearoa, New Zealand
- Benjamin Thompson
- Centre for Brain Research, University of Auckland, New Zealand; School of Optometry and Vision Science, University of Auckland, Auckland, New Zealand; School of Optometry and Vision Science, University of Waterloo, Waterloo, Canada
- Paul Corballis
- Centre for Brain Research, University of Auckland, New Zealand; Department of Psychology, University of Auckland, Auckland, New Zealand
- Grant Searchfield
- Section of Audiology, University of Auckland, Auckland, New Zealand; Centre for Brain Research, University of Auckland, New Zealand; Brain Research New Zealand - Rangahau Roro Aotearoa, New Zealand
24
Yang W, Guo A, Li Y, Qiu J, Li S, Yin S, Chen J, Ren Y. Audio-Visual Spatiotemporal Perceptual Training Enhances the P300 Component in Healthy Older Adults. Front Psychol 2018; 9:2537. [PMID: 30618958 PMCID: PMC6297778 DOI: 10.3389/fpsyg.2018.02537] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/14/2018] [Accepted: 11/28/2018] [Indexed: 11/13/2022] Open
Abstract
Cognitive and perceptual abilities, such as those associated with vision and hearing, generally decline with age. According to several studies, audio-visual perceptual training can improve perceptual competence for visual and auditory stimuli, suggesting that perceptual training is effective and beneficial. However, whether audio-visual perceptual training can induce far-transfer effects to forms of cognitive processing that are not directly trained remains unclear in older adults. In this study, the classic P300 component, a neurophysiological indicator of cognitive processing of a stimulus, was selected as an evaluation index of the training effect. We trained both young and older adults on the ability to judge the temporal and spatial consistency of visual and auditory stimuli. P300 amplitudes were significantly greater in the posttraining session than in the pretraining session in older adults (P = 0.001). However, perceptual training had no significant effect (P = 0.949) on the P300 component in young adults. Our results illustrate that audio-visual perceptual training can lead to far-transfer effects in healthy older adults. These findings highlight the robust malleability of the aging brain, and further provide evidence to motivate exploration to improve cognitive abilities in older adults.
Affiliation(s)
- Weiping Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China; Brain Cognition Research Center (BCRC), Faculty of Education, Hubei University, Wuhan, China
- Ao Guo
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Yueying Li
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Jiajing Qiu
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Shengnan Li
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Shufei Yin
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Jianxin Chen
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Yanna Ren
- Department of Psychology, Medical Humanities College, Guiyang College of Traditional Chinese Medicine, Guiyang, China
25
Feldman JI, Dunham K, Cassidy M, Wallace MT, Liu Y, Woynaroski TG. Audiovisual multisensory integration in individuals with autism spectrum disorder: A systematic review and meta-analysis. Neurosci Biobehav Rev 2018; 95:220-234. [PMID: 30287245 PMCID: PMC6291229 DOI: 10.1016/j.neubiorev.2018.09.020] [Citation(s) in RCA: 87] [Impact Index Per Article: 14.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/18/2018] [Revised: 09/10/2018] [Accepted: 09/25/2018] [Indexed: 02/04/2023]
Abstract
An ever-growing literature has aimed to determine how individuals with autism spectrum disorder (ASD) differ from their typically developing (TD) peers on measures of multisensory integration (MSI) and to ascertain the degree to which differences in MSI are associated with the broad range of symptoms associated with ASD. Findings, however, have been highly variable across the studies carried out to date. The present work systematically reviews and quantitatively synthesizes the large literature on audiovisual MSI in individuals with ASD to evaluate the cumulative evidence for (a) group differences between individuals with ASD and TD peers, (b) correlations between MSI and autism symptoms in individuals with ASD and (c) study level factors that may moderate findings (i.e., explain differential effects) observed across studies. To identify eligible studies, a comprehensive search strategy was employed using the ProQuest search engine, PubMed database, forwards and backwards citation searches, direct author contact, and hand-searching of select conference proceedings. A significant between-group difference in MSI was evident in the literature, with individuals with ASD demonstrating worse audiovisual integration on average across studies compared to TD controls. This effect was moderated by mean participant age, such that between-group differences were more pronounced in younger samples. The mean correlation between MSI and autism and related symptomatology was also significant, indicating that increased audiovisual integration in individuals with ASD is associated with better language/communication abilities and/or reduced autism symptom severity in the extant literature. This effect was moderated by whether the stimuli were linguistic versus non-linguistic in nature, such that correlation magnitudes tended to be significantly greater when linguistic stimuli were utilized in the measure of MSI. Limitations and future directions for primary and meta-analytic research are discussed.
Affiliation(s)
- Jacob I Feldman
- Department of Hearing and Speech Sciences, Vanderbilt University, 1215 21st Ave S, MCE South Tower 8310, Nashville, TN, 37232, USA.
- Kacie Dunham
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Margaret Cassidy
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Mark T Wallace
- Department of Psychology, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Pharmacology, Vanderbilt University, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, 110 Magnolia Cir, Nashville, TN, 37203, USA; Vanderbilt Brain Institute, Vanderbilt University, 465 21st Avenue South, Nashville, TN, 37232, USA; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S, MCE South Tower 8310, Nashville, TN, 37232, USA.
- Yupeng Liu
- Neuroscience Undergraduate Program, Vanderbilt University, Nashville, TN, USA
- Tiffany G Woynaroski
- Vanderbilt Kennedy Center, Vanderbilt University Medical Center, 110 Magnolia Cir, Nashville, TN, 37203, USA; Vanderbilt Brain Institute, Vanderbilt University, 465 21st Avenue South, Nashville, TN, 37232, USA; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave S, MCE South Tower 8310, Nashville, TN, 37232, USA.
26
Tugac N, Gonzalez D, Noguchi K, Niechwiej-Szwedo E. The role of somatosensory input in target localization during binocular and monocular viewing while performing a high precision reaching and placement task. Exp Eye Res 2018; 183:76-83. [PMID: 30125540 DOI: 10.1016/j.exer.2018.08.013] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/16/2018] [Revised: 08/15/2018] [Accepted: 08/16/2018] [Indexed: 11/25/2022]
Abstract
Binocular vision provides the most accurate and precise depth information; however, many people have impairments in binocular visual function. It is possible that other sensory inputs could be used to obtain reliable depth information when binocular vision is not available. However, it is currently unknown whether depth information from another modality improves target localization in depth during action execution. Therefore, the goal of this study was to assess whether somatosensory input improves target localization during the performance of a precision placement task. Visually normal young adults (n = 15) performed a bead-threading task during binocular and monocular viewing in two experimental conditions where needle location was specified by 1) vision only, or 2) vision and somatosensory input, which was provided by the non-dominant limb. Performance on the task was assessed using spatial and temporal kinematic measures. In accordance with the hypothesis, results showed that the interval spent placing the bead on the needle was significantly shorter during monocular viewing when somatosensory input was available in comparison to the vision-only condition. In contrast, results showed no evidence that somatosensory input about the needle location affects trajectory control. These findings demonstrate that the central nervous system relies predominantly on visual input during reach execution; however, somatosensory input can be used to facilitate performance of the precision placement task.
Affiliation(s)
- Naime Tugac
- Department of Kinesiology, University of Waterloo, Waterloo, Canada
- David Gonzalez
- Department of Kinesiology, University of Waterloo, Waterloo, Canada
- Kimihiro Noguchi
- Department of Mathematics, Western Washington University, Bellingham, USA
27
Finotti G, Migliorati D, Costantini M. Multisensory integration, body representation and hyperactivity of the immune system. Conscious Cogn 2018; 63:61-73. [PMID: 29957448 DOI: 10.1016/j.concog.2018.06.009] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/18/2017] [Revised: 06/05/2018] [Accepted: 06/06/2018] [Indexed: 10/28/2022]
Abstract
Multisensory stimuli are integrated over a delimited window of temporal asynchronies. This window is highly variable across individuals, but the origins of this variability are still not clear. We hypothesized that immune system functioning could partially account for this variability. In two experiments, we investigated the relationship between key aspects of multisensory integration in allergic participants and healthy controls. First, we tested the temporal constraint of multisensory integration, as measured by the temporal binding window. Second, we tested multisensory body representation, as indexed by the Rubber Hand Illusion (RHI). Results showed that allergic participants have a narrower temporal binding window and are less susceptible to the RHI than healthy controls. Overall, we provide evidence linking multisensory integration processes and the activity of the immune system. The present findings are discussed within the context of the effect of immune molecules on the brain mechanisms enabling multisensory integration and multisensory body representation.
Affiliation(s)
- Gianluca Finotti
- Centre for Brain Science, Department of Psychology, University of Essex, United Kingdom; Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio, Chieti, Italy; Institute for Advanced Biomedical Technologies - ITAB, University G. d'Annunzio, Chieti, Italy.
- Daniele Migliorati
- Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio, Chieti, Italy; Institute for Advanced Biomedical Technologies - ITAB, University G. d'Annunzio, Chieti, Italy
- Marcello Costantini
- Centre for Brain Science, Department of Psychology, University of Essex, United Kingdom; Department of Neuroscience, Imaging and Clinical Sciences, University G. d'Annunzio, Chieti, Italy; Institute for Advanced Biomedical Technologies - ITAB, University G. d'Annunzio, Chieti, Italy.
28
Stevenson RA, Sheffield SW, Butera IM, Gifford RH, Wallace MT. Multisensory Integration in Cochlear Implant Recipients. Ear Hear 2018; 38:521-538. [PMID: 28399064 DOI: 10.1097/aud.0000000000000435] [Citation(s) in RCA: 53] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
Speech perception is inherently a multisensory process involving integration of auditory and visual cues. Multisensory integration in cochlear implant (CI) recipients is a unique circumstance in that the integration occurs after auditory deprivation and the provision of hearing via the CI. Despite the clear importance of multisensory cues for perception, in general, and for speech intelligibility, specifically, the topic of multisensory perceptual benefits in CI users has only recently begun to emerge as an area of inquiry. We review the research that has been conducted on multisensory integration in CI users to date and suggest a number of areas needing further research. The overall pattern of results indicates that many CI recipients show at least some perceptual gain that can be attributable to multisensory integration. The extent of this gain, however, varies based on a number of factors, including age of implantation and specific task being assessed (e.g., stimulus detection, phoneme perception, word recognition). Although both children and adults with CIs obtain audiovisual benefits for phoneme, word, and sentence stimuli, neither group shows demonstrable gain for suprasegmental feature perception. Additionally, only early-implanted children and the highest performing adults obtain audiovisual integration benefits similar to individuals with normal hearing. Increasing age of implantation in children is associated with poorer gains resultant from audiovisual integration, suggesting a sensitive period in development for the brain networks that subserve these integrative functions, as well as length of auditory experience. This finding highlights the need for early detection of and intervention for hearing loss, not only in terms of auditory perception, but also in terms of the behavioral and perceptual benefits of audiovisual processing. Importantly, patterns of auditory, visual, and audiovisual responses suggest that underlying integrative processes may be fundamentally different between CI users and typical-hearing listeners. Future research, particularly in low-level processing tasks such as signal detection, will help to further assess mechanisms of multisensory integration for individuals with hearing loss, both with and without CIs.
Affiliation(s)
- Ryan A Stevenson
- Department of Psychology, University of Western Ontario, London, Ontario, Canada; Brain and Mind Institute, University of Western Ontario, London, Ontario, Canada; Walter Reed National Military Medical Center, Audiology and Speech Pathology Center, Bethesda, Maryland; Vanderbilt Brain Institute, Nashville, Tennessee; Vanderbilt Kennedy Center, Nashville, Tennessee; Department of Psychology, Vanderbilt University, Nashville, Tennessee; Department of Psychiatry, Vanderbilt University Medical Center, Nashville, Tennessee; and Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee
29
Audiovisual integration in depth: multisensory binding and gain as a function of distance. Exp Brain Res 2018; 236:1939-1951. [PMID: 29700577 PMCID: PMC6010498 DOI: 10.1007/s00221-018-5274-7] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/06/2017] [Accepted: 02/19/2018] [Indexed: 11/01/2022]
Abstract
The integration of information across sensory modalities is dependent on the spatiotemporal characteristics of the stimuli that are paired. Despite large variation in the distance over which events occur in our environment, relatively little is known regarding how stimulus-observer distance affects multisensory integration. Prior work has suggested that exteroceptive stimuli are integrated over larger temporal intervals in near relative to far space, and that larger multisensory facilitations are evident in far relative to near space. Here, we sought to examine the interrelationship between these previously established distance-related features of multisensory processing. Participants performed an audiovisual simultaneity judgment and a redundant target task in near and far space, while audiovisual stimuli were presented at a range of temporal delays (i.e., stimulus onset asynchronies). In line with previous findings, temporal acuity was poorer in near relative to far space. Furthermore, reaction time to asynchronously presented audiovisual targets suggested a temporal window for fast detection, a range of stimulus asynchronies that was also larger in near as compared to far space. However, the range of reaction times over which multisensory response enhancement was observed was limited to a restricted range of relatively small (i.e., 150 ms) asynchronies, and did not differ significantly between near and far space. Furthermore, for synchronous presentations, these distance-related (i.e., near vs. far) modulations in temporal acuity and multisensory gain correlated negatively at the individual-subject level. Thus, the findings support the conclusion that multisensory temporal binding and gain are asymmetrically modulated as a function of distance from the observer, and indicate that this relationship is specific to temporally synchronous audiovisual stimulus presentations.
30
Normal temporal binding window but no sound-induced flash illusion in people with one eye. Exp Brain Res 2018; 236:1825-1834. [DOI: 10.1007/s00221-018-5263-x] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/19/2017] [Accepted: 04/12/2018] [Indexed: 10/17/2022]
31
Absent Audiovisual Integration Elicited by Peripheral Stimuli in Parkinson's Disease. Parkinsons Dis 2018; 2018:1648017. [PMID: 29850014 PMCID: PMC5924975 DOI: 10.1155/2018/1648017] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 09/16/2017] [Revised: 01/01/2018] [Accepted: 01/29/2018] [Indexed: 01/22/2023]
Abstract
The basal ganglia, which have been shown to be a significant multisensory hub, are disordered in Parkinson's disease (PD). This study investigated the audiovisual integration of peripheral stimuli in PD patients with/without sleep disturbances. Thirty-six age-matched normal controls (NC) and 30 PD patients were recruited for an auditory/visual discrimination experiment. The mean response times for each participant were analyzed using repeated-measures ANOVA and the race model. The results showed that responses to all stimuli were significantly delayed in PD compared to NC (all p < 0.01). The response to audiovisual stimuli was significantly faster than that to unimodal stimuli in both NC and PD (p < 0.001). Additionally, audiovisual integration was absent in PD; however, it did occur in NC. Further analysis showed that there was no significant audiovisual integration in PD with/without cognitive impairment or in PD with/without sleep disturbances. Furthermore, audiovisual facilitation was not associated with Hoehn and Yahr stage, disease duration, or the presence of sleep disturbances (all p > 0.05). The current results show that audiovisual multisensory integration for peripheral stimuli is absent in PD regardless of sleep disturbances, and further suggest that abnormal audiovisual integration might be a potential early manifestation of PD.
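The race-model analysis this abstract refers to is conventionally a test of Miller's race-model inequality: integration is inferred when the audiovisual response-time distribution beats the bound given by the summed unimodal distributions. A minimal sketch with made-up reaction times (not the study's data, and the authors' exact procedure may differ):

```python
# Sketch of a race-model inequality test; all RTs below are hypothetical.

def ecdf(rts, t):
    """Empirical probability of having responded by time t."""
    return sum(rt <= t for rt in rts) / len(rts)

def race_model_violations(rt_a, rt_v, rt_av, times):
    """Time points where the audiovisual CDF exceeds the race-model
    bound P(A <= t) + P(V <= t), suggesting genuine integration."""
    return [t for t in times
            if ecdf(rt_av, t) > min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))]

rt_a  = [320, 340, 360, 380, 400]   # auditory-only RTs (ms), hypothetical
rt_v  = [330, 350, 370, 390, 410]   # visual-only RTs (ms), hypothetical
rt_av = [250, 260, 270, 280, 290]   # audiovisual RTs (ms), hypothetical

viol = race_model_violations(rt_a, rt_v, rt_av, range(240, 320, 10))
```

An absence of integration, as reported here for the PD group, would correspond to `viol` coming back empty.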
32
Murray MM, Thelen A, Ionta S, Wallace MT. Contributions of Intraindividual and Interindividual Differences to Multisensory Processes. J Cogn Neurosci 2018; 31:360-376. [PMID: 29488852 DOI: 10.1162/jocn_a_01246] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
Most evidence on the neural and perceptual correlates of sensory processing derives from studies that have focused on only a single sensory modality and averaged the data from groups of participants. Although valuable, such studies ignore the substantial interindividual and intraindividual differences that are undoubtedly at play. Such variability plays an integral role in both the behavioral/perceptual realms and in the neural correlates of these processes, but substantially less is known when compared with group-averaged data. Recently, it has been shown that the presentation of stimuli from two or more sensory modalities (i.e., multisensory stimulation) not only results in the well-established performance gains but also gives rise to reductions in behavioral and neural response variability. To better understand the relationship between neural and behavioral response variability under multisensory conditions, this study investigated both behavior and brain activity in a task requiring participants to discriminate moving versus static stimuli presented in either a unisensory or multisensory context. EEG data were analyzed with respect to intraindividual and interindividual differences in RTs. The results showed that trial-by-trial variability of RTs was significantly reduced under audiovisual presentation conditions as compared with visual-only presentations across all participants. Intraindividual variability of RTs was linked to changes in correlated activity between clusters within an occipital to frontal network. In addition, interindividual variability of RTs was linked to differential recruitment of medial frontal cortices. The present findings highlight differences in the brain networks that support behavioral benefits during unisensory versus multisensory motion detection and provide an important view into the functional dynamics within neuronal networks underpinning intraindividual performance differences.
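The trial-by-trial RT variability reduction described in this abstract is often quantified with a simple dispersion index such as the coefficient of variation; a toy sketch with hypothetical reaction times (not the study's data):

```python
import statistics

def cv(rts):
    """Coefficient of variation of reaction times (SD / mean):
    a scale-free index of trial-by-trial variability."""
    return statistics.stdev(rts) / statistics.mean(rts)

# Hypothetical single-participant RTs (ms), for illustration only.
rt_visual_only = [420, 455, 480, 510, 390, 465]
rt_audiovisual = [400, 410, 405, 415, 395, 408]

# In this toy data, multisensory trials are both faster on average
# and less variable trial to trial, the pattern the study reports.
```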
Affiliation(s)
- Micah M Murray
- Vaudois University Hospital Center and University of Lausanne; Center for Biomedical Imaging of Lausanne and Geneva; Fondation Asile des Aveugles and University of Lausanne; Vanderbilt University Medical Center
- Silvio Ionta
- Vaudois University Hospital Center and University of Lausanne; Fondation Asile des Aveugles and University of Lausanne; ETH Zürich
- Mark T Wallace
- Vanderbilt University Medical Center; Vanderbilt University
33
Task-irrelevant sounds influence both temporal order and apparent-motion judgments about tactile stimuli applied to crossed and uncrossed hands. Atten Percept Psychophys 2017; 80:773-783. [DOI: 10.3758/s13414-017-1476-5] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
34
Barutchu A, Spence C, Humphreys GW. Multisensory enhancement elicited by unconscious visual stimuli. Exp Brain Res 2017; 236:409-417. [PMID: 29197998 PMCID: PMC5809521 DOI: 10.1007/s00221-017-5140-z] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2017] [Accepted: 09/26/2017] [Indexed: 12/19/2022]
Abstract
The merging of information from different senses (i.e., multisensory integration) can facilitate information processing. Processing enhancements have been observed with signals that are irrelevant to the task at hand, and with cues that are non-predictive. Such findings are consistent with the notion that multiple sensory signals are sometimes integrated automatically. Multisensory enhancement has even been reported with stimuli that have been presented subliminally, though only with meaningful multisensory relations that have already been learned. Whether multisensory effects can occur without either learning or awareness has, though, not been clearly established in the literature to date. Here, we present a case study of a patient with Posterior Cortical Atrophy, who was unable to consciously perceive visual stimuli with our task parameters, yet who nevertheless still exhibited signs of multisensory enhancement even with unlearned relations between audiovisual stimuli. In a simple speeded detection task, both response speed and the variability of reaction times decreased in a similar manner to controls for multisensory stimuli. These results are consistent with the view that conscious perception of stimuli and prior learning are not always prerequisites for multisensory integration to enhance human performance.
Affiliation(s)
- Ayla Barutchu
- Department of Experimental Psychology, University of Oxford, Oxford, OX1 3UD, UK.
- Charles Spence
- Department of Experimental Psychology, University of Oxford, Oxford, OX1 3UD, UK
- Glyn W Humphreys
- Department of Experimental Psychology, University of Oxford, Oxford, OX1 3UD, UK
35
Abstract
Purpose of Review: The integration of information across sensory modalities into unified percepts is a fundamental sensory process upon which a multitude of cognitive processes are based. We review the body of literature exploring aging-related changes in audiovisual integration published over the last five years. Specifically, we review the impact of changes in temporal processing, the influence of the effectiveness of sensory inputs, the role of working memory, and newer studies of intra-individual variability during these processes.
Recent Findings: Work in the last five years on bottom-up influences of sensory perception has garnered significant attention. Temporal processing, a driving factor of multisensory integration, has now been shown to decouple from multisensory integration in aging, despite their co-decline with age. The impact of stimulus effectiveness also changes with age: older adults show maximal benefit from multisensory gain at high signal-to-noise ratios. Following sensory decline, high working memory capacity has been shown to be somewhat of a protective factor against age-related declines in audiovisual speech perception, particularly in noise. Finally, newer research is emerging that focuses on the general intra-individual variability observed with aging.
Summary: Overall, the studies of the past five years have replicated and expanded on previous work highlighting the role of bottom-up sensory changes with aging and their influence on audiovisual integration, as well as the top-down influence of working memory.
Affiliation(s)
- Sarah H Baum
- Department of Psychology, University of Washington
- Ryan Stevenson
- Department of Psychology, Western University; Brain and Mind Institute, Western University; Department of Psychiatry, Schulich School of Medicine and Dentistry, Western University; Program in Neuroscience, Schulich School of Medicine and Dentistry, Western University; Centre for Vision Research, York University
36
Stevenson RA, Baum SH, Krueger J, Newhouse PA, Wallace MT. Links between temporal acuity and multisensory integration across life span. J Exp Psychol Hum Percept Perform 2017; 44:106-116. [PMID: 28447850 DOI: 10.1037/xhp0000424] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
The temporal relationship between individual pieces of information from the different sensory modalities is one of the stronger cues to integrate such information into a unified perceptual gestalt, conveying numerous perceptual and behavioral advantages. Temporal acuity, however, varies greatly over the life span. It has previously been hypothesized that changes in temporal acuity in both development and healthy aging may thus play a key role in integrative abilities. This study tested the temporal acuity of 138 individuals ranging in age from 5 to 80. Temporal acuity and multisensory integration abilities were tested both within and across modalities (audition and vision) with simultaneity judgment and temporal order judgment tasks. We observed that temporal acuity, both within and across modalities, improved throughout development into adulthood and subsequently declined with healthy aging, as did the ability to integrate multisensory speech information. Of importance, throughout development, temporal acuity of simple stimuli (i.e., flashes and beeps) predicted individuals' abilities to integrate more complex speech information. However, in the aging population, although temporal acuity declined with healthy aging and was accompanied by declines in integrative abilities, temporal acuity was not able to predict integration at the individual level. Together, these results suggest that the impact of temporal acuity on multisensory integration varies throughout the life span. Although the maturation of temporal acuity drives the rise of multisensory integrative abilities during development, it is unable to account for changes in integrative abilities in healthy aging. The differential relationships between age, temporal acuity, and multisensory integration suggest an important role for experience in these processes.
Affiliation(s)
- Ryan A Stevenson
- Department of Psychology, Brain and Mind Institute, University of Western Ontario
- Sarah H Baum
- Department of Psychology, University of Washington
- Paul A Newhouse
- Department of Psychiatry and Behavioral Sciences, Vanderbilt University Medical Center
37
Simultaneity judgment using olfactory-visual, visual-gustatory, and olfactory-gustatory combinations. PLoS One 2017; 12:e0174958. [PMID: 28376116 PMCID: PMC5380340 DOI: 10.1371/journal.pone.0174958] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2016] [Accepted: 03/17/2017] [Indexed: 11/19/2022] Open
Abstract
Vision is a physical sense, whereas olfaction and gustation are chemical senses. Active sensing might function in vision, olfaction, and gustation, whereas passive sensing might function in vision and olfaction but not gustation. To investigate whether each sensory property affected synchrony perception, participants in this study performed simultaneity judgment (SJ) for three cross-modal combinations using visual (red LED light), olfactory (coumarin), and gustatory (NaCl solution) stimuli. We calculated the half-width at half-height (HWHH) and point of subjective simultaneity (PSS) on the basis of temporal distributions of simultaneous response rates in each combination. Although HWHH did not differ significantly among the three cross-modal combinations, HWHH exhibited a higher value in cross-modal combinations involving one or two chemical stimuli than in the combination of two physical stimuli reported in a previous study. The PSS of the olfactory-visual combination was approximately equal to the point of objective simultaneity (POS), whereas the PSS of the visual-gustatory and olfactory-gustatory combinations deviated significantly from the POS. To establish whether these phenomena are specific to the chemical senses, future work will need to determine whether the same results are reproduced when performing SJ for cross-modal combinations using visual, olfactory, and gustatory stimuli other than red LED light, coumarin, and NaCl solution.
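The HWHH and PSS measures used in this abstract are typically obtained by fitting a Gaussian to the proportion of "simultaneous" responses across SOAs: the PSS is the fitted peak location and the HWHH is sigma x sqrt(2 ln 2). A rough, self-contained sketch using a coarse grid search on synthetic data (the authors' actual fitting routine may differ):

```python
import math

def gaussian(soa, amp, pss, sigma):
    return amp * math.exp(-((soa - pss) ** 2) / (2 * sigma ** 2))

def fit_sj(soas, p_sim):
    """Coarse grid-search Gaussian fit to simultaneity-judgment data:
    p_sim[i] is the proportion of 'simultaneous' responses at soas[i] (ms)."""
    amp = max(p_sim)                      # fix amplitude at the observed peak
    best, best_err = None, float("inf")
    for pss in range(-100, 101, 5):       # candidate PSS values (ms)
        for sigma in range(20, 301, 5):   # candidate widths (ms)
            err = sum((gaussian(s, amp, pss, sigma) - p) ** 2
                      for s, p in zip(soas, p_sim))
            if err < best_err:
                best, best_err = (pss, sigma), err
    pss, sigma = best
    hwhh = sigma * math.sqrt(2 * math.log(2))  # half-width at half-height
    return pss, hwhh

# Synthetic SJ data generated from a known curve (PSS = 20 ms, sigma = 100 ms).
soas = [-300, -200, -100, 0, 20, 100, 200, 300]
p_sim = [gaussian(s, 1.0, 20, 100) for s in soas]
pss, hwhh = fit_sj(soas, p_sim)   # recovers PSS = 20 ms, HWHH ~ 117.7 ms
```

A wider fitted HWHH corresponds to a broader temporal binding window, the pattern the abstract reports for combinations involving chemical stimuli.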
38
Stevenson RA, Park S, Cochran C, McIntosh LG, Noel JP, Barense MD, Ferber S, Wallace MT. The associations between multisensory temporal processing and symptoms of schizophrenia. Schizophr Res 2017; 179:97-103. [PMID: 27746052 PMCID: PMC5463449 DOI: 10.1016/j.schres.2016.09.035] [Citation(s) in RCA: 96] [Impact Index Per Article: 13.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/23/2016] [Revised: 09/23/2016] [Accepted: 09/28/2016] [Indexed: 11/29/2022]
Abstract
Recent neurobiological accounts of schizophrenia have included an emphasis on changes in sensory processing. These sensory and perceptual deficits can have a cascading effect onto higher-level cognitive processes and clinical symptoms. One form of sensory dysfunction that has been consistently observed in schizophrenia is altered temporal processing. In this study, we investigated temporal processing within and across the auditory and visual modalities in individuals with schizophrenia (SCZ) and age-matched healthy controls. Individuals with SCZ showed auditory and visual temporal processing abnormalities, as well as multisensory temporal processing dysfunction that extended beyond that attributable to unisensory processing dysfunction. Most importantly, these multisensory temporal deficits were associated with the severity of hallucinations. This link between atypical multisensory temporal perception and clinical symptomatology suggests that clinical symptoms of schizophrenia may be at least partly a result of cascading effects from (multi)sensory disturbances. These results are discussed in terms of underlying neural bases and the possible implications for remediation.
Affiliation(s)
- Ryan A. Stevenson
- The University of Western Ontario, Department of Psychology, London, ON, Canada; The University of Western Ontario, Brain and Mind Institute, London, ON, Canada
- Sohee Park
- Vanderbilt University, Department of Psychology, Nashville, TN, USA
- Channing Cochran
- Vanderbilt University, Department of Psychology, Nashville, TN, USA
- Jean-Paul Noel
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
- Morgan D. Barense
- University of Toronto, Department of Psychology, Toronto, ON, Canada; Rotman Research Institute, Toronto, ON, Canada
- Susanne Ferber
- University of Toronto, Department of Psychology, Toronto, ON, Canada; Rotman Research Institute, Toronto, ON, Canada
- Mark T. Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA; Vanderbilt University Medical Center, Department of Hearing and Speech Sciences, Nashville, TN, USA; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN, USA; Vanderbilt University, Department of Psychology, Nashville, TN, USA; Vanderbilt University Medical Center, Department of Psychiatry, Nashville, TN, USA
| |
Collapse
39

40
Li Q, Yu H, Wu Y, Gao N. The spatial reliability of task-irrelevant sounds modulates bimodal audiovisual integration: An event-related potential study. Neurosci Lett 2016; 629:149-154. [PMID: 27392755 DOI: 10.1016/j.neulet.2016.07.003]
Abstract
The integration of multiple sensory inputs is essential for perception of the external world. The spatial factor is a fundamental property of multisensory audiovisual integration. Previous studies of the spatial constraints on bimodal audiovisual integration have mainly focused on the spatial congruity of audiovisual information. However, the effect of spatial reliability within audiovisual information on bimodal audiovisual integration remains unclear. In this study, we used event-related potentials (ERPs) to examine the effect of spatial reliability of task-irrelevant sounds on audiovisual integration. Three relevant ERP components emerged: the first at 140-200ms over a wide central area, the second at 280-320ms over the fronto-central area, and a third at 380-440ms over the parieto-occipital area. Our results demonstrate that ERP amplitudes elicited by audiovisual stimuli with reliable spatial relationships are larger than those elicited by stimuli with inconsistent spatial relationships. In addition, we hypothesized that spatial reliability within an audiovisual stimulus enhances feedback projections to the primary visual cortex from multisensory integration regions. Overall, our findings suggest that the spatial linking of visual and auditory information depends on spatial reliability within an audiovisual stimulus and occurs at a relatively late stage of processing.
Affiliation(s)
- Qi Li
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China
- Hongtao Yu
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China
- Yan Wu
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China
- Ning Gao
- School of Computer Science and Technology, Changchun University of Science and Technology, Changchun, China
41
Van der Stoep N, Van der Stigchel S, Nijboer TCW, Spence C. Visually Induced Inhibition of Return Affects the Integration of Auditory and Visual Information. Perception 2016; 46:6-17. [DOI: 10.1177/0301006616661934]
Abstract
Multisensory integration (MSI) and exogenous spatial attention can both speed up responses to perceptual events. Recently, it has been shown that audiovisual integration at exogenously attended locations is reduced relative to unattended locations. This effect was observed at short cue-target intervals (200–250 ms). At longer intervals, however, the initial benefits of exogenous shifts of spatial attention at the cued location are often replaced by response time (RT) costs (also known as Inhibition of Return, IOR). Given these opposing cueing effects at shorter versus longer intervals, we decided to investigate whether MSI would also be affected by IOR. Uninformative exogenous visual spatial cues were presented between 350 and 450 ms prior to the onset of auditory, visual, and audiovisual targets. As expected, IOR was observed for visual targets (invalid cue RT < valid cue RT). For auditory and audiovisual targets, neither IOR nor any spatial cueing effects were observed. The amount of relative multisensory response enhancement and race model inequality violation were larger for uncued as compared with cued locations, indicating that IOR reduces MSI. The results are discussed in the context of changes in unisensory signal strength at cued as compared with uncued locations.
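The race model inequality referred to above (Miller's bound) compares the cumulative RT distribution for audiovisual targets against the sum of the unisensory cumulative distributions; where the multisensory CDF exceeds that bound, responses are faster than any race between independent unisensory processes would allow, which is taken as evidence of integration. The entry does not give the analysis details, so this is only a sketch with simulated RT data:

```python
import numpy as np

def ecdf(rts, t_grid):
    """Empirical cumulative RT distribution evaluated on a time grid (ms)."""
    rts = np.sort(np.asarray(rts, dtype=float))
    return np.searchsorted(rts, t_grid, side="right") / rts.size

# Simulated RT samples (ms) for auditory, visual, and audiovisual targets
# (illustrative values only, not the study's data).
rng = np.random.default_rng(0)
rt_a = rng.normal(320, 40, 200)
rt_v = rng.normal(340, 40, 200)
rt_av = rng.normal(260, 35, 200)  # faster than either unisensory condition

t = np.arange(150, 601, 10)
bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)  # race-model bound
violation = ecdf(rt_av, t) - bound

# Positive values: the multisensory CDF exceeds the race-model bound at
# that latency, i.e., the race model is violated there.
print(f"max violation: {violation.max():.3f}")
```

In practice the violation is usually tested over the fast quantiles of the RT distribution, with larger violations at uncued than cued locations supporting the paper's conclusion that IOR reduces MSI.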
Affiliation(s)
- N. Van der Stoep
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- S. Van der Stigchel
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- T. C. W. Nijboer
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands; Brain Center Rudolf Magnus and Center of Excellence for Rehabilitation Medicine, University Medical Center Utrecht, Utrecht, The Netherlands; De Hoogstraat Rehabilitation, Utrecht, The Netherlands
- C. Spence
- Department of Experimental Psychology, Oxford University, Oxford, UK
42
Murray MM, Lewkowicz DJ, Amedi A, Wallace MT. Multisensory Processes: A Balancing Act across the Lifespan. Trends Neurosci 2016; 39:567-579. [PMID: 27282408 PMCID: PMC4967384 DOI: 10.1016/j.tins.2016.05.003]
Abstract
Multisensory processes are fundamental in scaffolding perception, cognition, learning, and behavior. How and when stimuli from different sensory modalities are integrated rather than treated as separate entities is poorly understood. We review how the relative reliance on stimulus characteristics versus learned associations dynamically shapes multisensory processes. We illustrate the dynamism in multisensory function across two timescales: one long term that operates across the lifespan and one short term that operates during the learning of new multisensory relations. In addition, we highlight the importance of task contingencies. We conclude that these highly dynamic multisensory processes, based on the relative weighting of stimulus characteristics and learned associations, provide both stability and flexibility to brain functions over a wide range of temporal scales.
Affiliation(s)
- Micah M Murray
- The Laboratory for Investigative Neurophysiology (The LINE), Department of Clinical Neurosciences and Department of Radiology, University Hospital Centre and University of Lausanne, Lausanne, Switzerland; Electroencephalography Brain Mapping Core, Centre for Biomedical Imaging (CIBM), Lausanne, Switzerland; Department of Ophthalmology, University of Lausanne, Jules Gonin Eye Hospital, Lausanne, Switzerland; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- David J Lewkowicz
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, USA
- Amir Amedi
- Department of Medical Neurobiology, Institute for Medical Research Israel-Canada (IMRIC), Hadassah Medical School, Hebrew University of Jerusalem, Jerusalem, Israel; Interdisciplinary and Cognitive Science Program, The Edmond & Lily Safra Center for Brain Sciences (ELSC), Hebrew University of Jerusalem, Jerusalem, Israel
- Mark T Wallace
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Kennedy Center for Research on Human Development, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry, Vanderbilt University, Nashville, TN, USA; Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN, USA
43
Noel JP, Lukowska M, Wallace M, Serino A. Multisensory simultaneity judgment and proximity to the body. J Vis 2016; 16:21. [PMID: 26891828 PMCID: PMC4777235 DOI: 10.1167/16.3.21]
Abstract
The integration of information across different sensory modalities is known to be dependent upon the statistical characteristics of the stimuli to be combined. For example, the spatial and temporal proximity of stimuli are important determinants with stimuli that are close in space and time being more likely to be bound. These multisensory interactions occur not only for singular points in space/time, but over “windows” of space and time that likely relate to the ecological statistics of real-world stimuli. Relatedly, human psychophysical work has demonstrated that individuals are highly prone to judge multisensory stimuli as co-occurring over a wide range of time—a so-called simultaneity window (SW). Similarly, there exists a spatial representation of peripersonal space (PPS) surrounding the body in which stimuli related to the body and to external events occurring near the body are highly likely to be jointly processed. In the current study, we sought to examine the interaction between these temporal and spatial dimensions of multisensory representation by measuring the SW for audiovisual stimuli through proximal–distal space (i.e., PPS and extrapersonal space). Results demonstrate that the audiovisual SWs within PPS are larger than outside PPS. In addition, we suggest that this effect is likely due to an automatic and additional computation of these multisensory events in a body-centered reference frame. We discuss the current findings in terms of the spatiotemporal constraints of multisensory interactions and the implication of distinct reference frames on this process.
44
Oculomotor interference of bimodal distractors. Vision Res 2016; 123:46-55. [PMID: 27164053 PMCID: PMC4894297 DOI: 10.1016/j.visres.2016.04.002]
Abstract
Highlights
- Bimodal distractors evoked more oculomotor competition than unimodal distractors.
- The direction of interference was dependent on the spatial layout of the scene.
- Close distractors caused deviation towards the distractor; remote distractors caused deviation away.
- Saccade averaging and trajectory deviation were similarly affected by distractors.
- Interfering effects were most pronounced in the spatial domain.
When executing an eye movement to a target location, the presence of an irrelevant distracting stimulus can influence the saccade metrics and latency. The present study investigated the influence of distractors of different sensory modalities (i.e. auditory, visual and audiovisual) which were presented at various distances (i.e. close or remote) from a visual target. The interfering effects of a bimodal distractor were more pronounced in the spatial domain than in the temporal domain. The results indicate that the direction of interference depended on the spatial layout of the visual scene. The close bimodal distractor caused the saccade endpoint and saccade trajectory to deviate towards the distractor, whereas the remote bimodal distractor caused a deviation away from the distractor. Furthermore, saccade averaging and trajectory deviation evoked by a bimodal distractor were larger compared to the effects evoked by a unimodal distractor. This indicates that a bimodal distractor evoked stronger spatial oculomotor competition than a unimodal distractor and that the direction of the interference depended on the distance between the target and the distractor. Together, these findings suggest that the oculomotor vector to irrelevant bimodal input is enhanced and that interference by multisensory input is stronger than that by unisensory input.
45
Van der Stoep N, Van der Stigchel S, Nijboer TCW, Van der Smagt MJ. Audiovisual integration in near and far space: effects of changes in distance and stimulus effectiveness. Exp Brain Res 2016; 234:1175-88. [PMID: 25788009 PMCID: PMC4828496 DOI: 10.1007/s00221-015-4248-2]
Abstract
A factor that is often not considered in multisensory research is the distance from which information is presented. Interestingly, various studies have shown that the distance at which information is presented can modulate the strength of multisensory interactions. In addition, our everyday multisensory experience in near and far space is rather asymmetrical in terms of retinal image size and stimulus intensity. This asymmetry is the result of the relation between the stimulus-observer distance and its retinal image size and intensity: an object that is further away is generally smaller on the retina as compared to the same object when it is presented nearer. Similarly, auditory intensity decreases as the distance from the observer increases. We investigated how each of these factors alone, and their combination, affected audiovisual integration. Unimodal and bimodal stimuli were presented in near and far space, with and without controlling for distance-dependent changes in retinal image size and intensity. Audiovisual integration was enhanced for stimuli that were presented in far space as compared to near space, but only when the stimuli were not corrected for visual angle and intensity. The same decrease in intensity and retinal size in near space did not enhance audiovisual integration, indicating that these results cannot be explained by changes in stimulus efficacy or an increase in distance alone, but rather by an interaction between these factors. The results are discussed in the context of multisensory experience and spatial uncertainty, and underline the importance of studying multisensory integration in depth.
Affiliation(s)
- N Van der Stoep
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- S Van der Stigchel
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- T C W Nijboer
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands; Brain Center Rudolf Magnus and Center of Excellence for Rehabilitation Medicine, University Medical Center Utrecht and De Hoogstraat Rehabilitation Center, Utrecht, The Netherlands
- M J Van der Smagt
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
46
Krueger Fister J, Stevenson RA, Nidiffer AR, Barnett ZP, Wallace MT. Stimulus intensity modulates multisensory temporal processing. Neuropsychologia 2016; 88:92-100. [PMID: 26920937 DOI: 10.1016/j.neuropsychologia.2016.02.016]
Abstract
One of the more challenging feats that multisensory systems must perform is to determine which sensory signals originate from the same external event, and thus should be integrated or "bound" into a singular perceptual object or event, and which signals should be segregated. Two important stimulus properties impacting this process are the timing and effectiveness of the paired stimuli. It has been well established that the more temporally aligned two stimuli are, the greater the degree to which they influence one another's processing. In addition, the less effective the individual unisensory stimuli are in eliciting a response, the greater the benefit when they are combined. However, the interaction between stimulus timing and stimulus effectiveness in driving multisensory-mediated behaviors has never been explored - which was the purpose of the current study. Participants were presented with either high- or low-intensity audiovisual stimuli in which stimulus onset asynchronies (SOAs) were parametrically varied, and were asked to report on the perceived synchrony/asynchrony of the paired stimuli. Our results revealed an interaction between the temporal relationship (SOA) and intensity of the stimuli. Specifically, individuals were more tolerant of larger temporal offsets (i.e., more likely to call them synchronous) when the paired stimuli were less effective. This interaction was also seen in response time (RT) distributions. Behavioral gains in RTs were seen with synchronous relative to asynchronous presentations, but this effect was more pronounced with high-intensity stimuli. These data suggest that stimulus effectiveness plays an underappreciated role in the perception of the timing of multisensory events, and reinforces the interdependency of the principles of multisensory integration in determining behavior and shaping perception.
Affiliation(s)
- Juliane Krueger Fister
- Neuroscience Graduate Program, Vanderbilt University Medical Center, United States; Vanderbilt Brain Institute, United States
- Ryan A Stevenson
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, United States; Vanderbilt Brain Institute, United States; Vanderbilt University Kennedy Center, United States; Department of Psychology, University of Toronto, Canada
- Aaron R Nidiffer
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, United States
- Zachary P Barnett
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, United States
- Mark T Wallace
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, United States; Vanderbilt Brain Institute, United States; Vanderbilt University Kennedy Center, United States; Department of Psychology, Vanderbilt University, United States; Department of Psychiatry, Vanderbilt University, United States
47
Interactions between space and effectiveness in human multisensory performance. Neuropsychologia 2016; 88:83-91. [PMID: 26826522 DOI: 10.1016/j.neuropsychologia.2016.01.031]
Abstract
Several stimulus factors are important in multisensory integration, including the spatial and temporal relationships of the paired stimuli as well as their effectiveness. Changes in these factors have been shown to dramatically change the nature and magnitude of multisensory interactions. Typically, these factors are considered in isolation, although there is a growing appreciation for the fact that they are likely to be strongly interrelated. Here, we examined interactions between two of these factors - spatial location and effectiveness - in dictating performance in the localization of an audiovisual target. A psychophysical experiment was conducted in which participants reported the perceived location of visual flashes and auditory noise bursts presented alone and in combination. Stimuli were presented at four spatial locations relative to fixation (0°, 30°, 60°, 90°) and at two intensity levels (high, low). Multisensory combinations were always spatially coincident and of the matching intensity (high-high or low-low). In responding to visual stimuli alone, localization accuracy decreased and response times (RTs) increased as stimuli were presented at more eccentric locations. In responding to auditory stimuli, performance was poorest at the 30° and 60° locations. For both visual and auditory stimuli, accuracy was greater and RTs were faster for more intense stimuli. For responses to visual-auditory stimulus combinations, performance enhancements were found at locations in which the unisensory performance was lowest, results concordant with the concept of inverse effectiveness. RTs for these multisensory presentations frequently violated race-model predictions, implying integration of these inputs, and a significant location-by-intensity interaction was observed. Performance gains under multisensory conditions were larger as stimuli were positioned at more peripheral locations, and this increase was most pronounced for the low-intensity conditions. These results provide strong support that the effects of stimulus location and effectiveness on multisensory integration are interdependent, with both contributing to the overall effectiveness of the stimuli in driving the resultant multisensory response.
48
Gau R, Noppeney U. How prior expectations shape multisensory perception. Neuroimage 2016; 124:876-886. [DOI: 10.1016/j.neuroimage.2015.09.045]
49
van der Stoep N, Serino A, Farnè A, Di Luca M, Spence C. Depth: the Forgotten Dimension in Multisensory Research. Multisens Res 2016. [DOI: 10.1163/22134808-00002525]
Abstract
The last quarter of a century has seen a dramatic rise of interest in the spatial constraints on multisensory integration. However, until recently, the majority of this research has investigated integration in the space directly in front of the observer. The space around us, however, extends in three spatial dimensions in the front and to the rear beyond such a limited area. The question to be addressed in this review concerns whether multisensory integration operates according to the same rules throughout the whole of three-dimensional space. The results reviewed here not only show that the space around us seems to be divided into distinct functional regions, but they also suggest that multisensory interactions are modulated by the region of space in which stimuli happen to be presented. We highlight a number of key limitations with previous research in this area, including: (1) The focus on only a very narrow region of two-dimensional space in front of the observer; (2) the use of static stimuli in most research; (3) the study of observers who themselves have been mostly static; and (4) the study of isolated observers. All of these factors may change the way in which the senses interact at any given distance, as can the emotional state/personality of the observer. In summarizing these salient issues, we hope to encourage researchers to consider these factors in their own research in order to gain a better understanding of the spatial constraints on multisensory integration as they affect us in our everyday life.
Affiliation(s)
- N. van der Stoep
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- A. Serino
- Center for Neuroprosthetics, EPFL, Lausanne, Switzerland
- A. Farnè
- ImpAct Team, Lyon Neuroscience Research Center, INSERM U1028, CNRS UMR5292, 69000 Lyon, France
- M. Di Luca
- School of Psychology, CNCR, University of Birmingham, Birmingham, United Kingdom
- C. Spence
- Department of Experimental Psychology, Oxford University, Oxford, United Kingdom
50
Van der Stoep N, Spence C, Nijboer TCW, Van der Stigchel S. On the relative contributions of multisensory integration and crossmodal exogenous spatial attention to multisensory response enhancement. Acta Psychol (Amst) 2015; 162:20-8. [PMID: 26436587 DOI: 10.1016/j.actpsy.2015.09.010]
Abstract
Two processes that can give rise to multisensory response enhancement (MRE) are multisensory integration (MSI) and crossmodal exogenous spatial attention. It is, however, currently unclear what the relative contribution of each of these is to MRE. We investigated this issue using two tasks that are generally assumed to measure MSI (a redundant target effect task) and crossmodal exogenous spatial attention (a spatial cueing task). One block of trials consisted of unimodal auditory and visual targets designed to provide a unimodal baseline. In two other blocks of trials, the participants were presented with spatially and temporally aligned and misaligned audiovisual (AV) targets (0, 50, 100, and 200 ms SOA). In the integration block, the participants were instructed to respond to the onset of the first target stimulus that they detected (A or V). The instruction for the cueing block was to respond only to the onset of the visual targets. The targets could appear at one of three locations: left, center, and right. The participants were instructed to respond only to lateral targets. The results indicated that MRE was caused by MSI at 0 ms SOA. At 50 ms SOA, both crossmodal exogenous spatial attention and MSI contributed to the observed MRE, whereas the MRE observed at the 100 and 200 ms SOAs was attributable to crossmodal exogenous spatial attention, alerting, and temporal preparation. These results therefore suggest that there may be a temporal window in which both MSI and exogenous crossmodal spatial attention can contribute to multisensory response enhancement.
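Relative multisensory response enhancement is conventionally computed as the percentage gain of the multisensory mean RT over the fastest unisensory mean RT. The entry does not state its exact formula, so the sketch below uses the standard definition with hypothetical RT means, not the study's data:

```python
def relative_mre(mean_rt_a, mean_rt_v, mean_rt_av):
    """Relative multisensory response enhancement (%): percentage speedup of
    the mean audiovisual RT relative to the fastest unisensory mean RT."""
    fastest_unisensory = min(mean_rt_a, mean_rt_v)
    return 100.0 * (fastest_unisensory - mean_rt_av) / fastest_unisensory

# Hypothetical mean RTs (ms): AV responses 30 ms faster than the
# fastest unisensory condition.
print(f"{relative_mre(320.0, 350.0, 290.0):.1f}%")  # → 9.4%
```

A positive value indicates a multisensory benefit; by itself, however, it does not distinguish MSI from crossmodal exogenous attention, alerting, or temporal preparation, which is precisely why the study compares conditions across SOAs.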
Affiliation(s)
- N Van der Stoep
- Utrecht University, Department of Experimental Psychology, Helmholtz Institute, Utrecht, The Netherlands
- C Spence
- Oxford University, Department of Experimental Psychology, Oxford, United Kingdom
- T C W Nijboer
- Utrecht University, Department of Experimental Psychology, Helmholtz Institute, Utrecht, The Netherlands; Brain Center Rudolf Magnus and Center of Excellence for Rehabilitation Medicine, University Medical Center Utrecht and De Hoogstraat Rehabilitation, The Netherlands
- S Van der Stigchel
- Utrecht University, Department of Experimental Psychology, Helmholtz Institute, Utrecht, The Netherlands