1. Crosse MJ, Foxe JJ, Tarrit K, Freedman EG, Molholm S. Resolution of impaired multisensory processing in autism and the cost of switching sensory modality. Commun Biol 2022; 5:601. PMID: 35773473; PMCID: PMC9246932; DOI: 10.1038/s42003-022-03519-1.
Abstract
Children with autism spectrum disorders (ASD) exhibit alterations in multisensory processing, which may contribute to the prevalence of social and communicative deficits in this population. Resolution of multisensory deficits has been observed in teenagers with ASD for complex, social speech stimuli; however, whether this resolution extends to more basic multisensory processing deficits remains unclear. Here, in a cohort of 364 participants, we show using simple, non-social audiovisual stimuli that deficits in multisensory processing observed in high-functioning children and teenagers with ASD are not evident in adults with the disorder. Computational modelling indicated that multisensory processing transitions from a default state of competition to one of facilitation, and that this transition is delayed in ASD. Further analysis revealed group differences in how sensory channels are weighted, and how this is impacted by preceding cross-sensory inputs. Our findings indicate that there is a complex and dynamic interplay among the sensory systems that differs considerably in individuals with ASD.

Crosse et al. study a cohort of 364 participants with autism spectrum disorders (ASD) and matched controls, and show that deficits in multisensory processing observed in high-functioning children and teenagers with ASD are not evident in adults with the disorder. Using computational modelling, they go on to demonstrate a delayed transition of multisensory processing from a default state of competition to one of facilitation in ASD, as well as differences in sensory weighting and in the ability to switch between sensory modalities, shedding light on an interplay among the sensory systems that differs in individuals with ASD.
Affiliation(s)
- Michael J Crosse
- The Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine, Bronx, NY, USA
- The Dominick P. Purpura Department of Neuroscience, Rose F. Kennedy Intellectual and Developmental Disabilities Research Center, Albert Einstein College of Medicine, Bronx, NY, USA
- Trinity Centre for Biomedical Engineering, Department of Mechanical, Manufacturing & Biomedical Engineering, Trinity College Dublin, Dublin, Ireland
- John J Foxe
- The Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine, Bronx, NY, USA
- The Dominick P. Purpura Department of Neuroscience, Rose F. Kennedy Intellectual and Developmental Disabilities Research Center, Albert Einstein College of Medicine, Bronx, NY, USA
- The Cognitive Neurophysiology Laboratory, Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
- Katy Tarrit
- The Cognitive Neurophysiology Laboratory, Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
- Edward G Freedman
- The Cognitive Neurophysiology Laboratory, Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
- Sophie Molholm
- The Cognitive Neurophysiology Laboratory, Department of Pediatrics, Albert Einstein College of Medicine, Bronx, NY, USA
- The Dominick P. Purpura Department of Neuroscience, Rose F. Kennedy Intellectual and Developmental Disabilities Research Center, Albert Einstein College of Medicine, Bronx, NY, USA
- The Cognitive Neurophysiology Laboratory, Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY, USA
2. Audiovisual Integration for Saccade and Vergence Eye Movements Increases with Presbycusis and Loss of Selective Attention on the Stroop Test. Brain Sci 2022; 12:591. PMID: 35624979; PMCID: PMC9139407; DOI: 10.3390/brainsci12050591.
Abstract
Multisensory integration is a capacity that allows us to merge information from different sensory modalities in order to improve the salience of the signal. Audiovisual integration is one of the most common forms of multisensory integration, as vision and hearing are the two senses humans use most frequently. However, the literature on how age-related hearing loss (presbycusis) affects audiovisual integration abilities is almost nonexistent, despite the growing prevalence of presbycusis in the population. In that context, this study aimed to assess the relationship between presbycusis and audiovisual integration using tests of saccade and vergence eye movements to visual vs. audiovisual targets, with a pure tone as the auditory signal. Tests were run with the REMOBI and AIDEAL technologies coupled with the Pupil Core eye tracker. Hearing abilities, eye movement characteristics (latency, peak velocity, average velocity, amplitude) for saccade and vergence eye movements, and the Stroop Victoria test were measured in 69 elderly and 30 young participants. The results indicated (i) a dual pattern of aging effects on audiovisual integration for convergence (a decrease in the aged group relative to the young one, but an increase with age within the elderly group) and (ii) an improvement of audiovisual integration for saccades in people with presbycusis associated with lower selective attention scores on the Stroop test, regardless of age. These results bring new insight into a largely unexplored topic: audio-visuomotor integration in normal aging and in presbycusis. They highlight the potential value of using eye movement targets in 3D space and pure tone sounds to objectively evaluate audio-visuomotor integration capacities.
3. Albini F, Pisoni A, Salvatore A, Calzolari E, Casati C, Marzoli SB, Falini A, Crespi SA, Godi C, Castellano A, Bolognini N, Vallar G. Aftereffects to Prism Exposure without Adaptation: A Single Case Study. Brain Sci 2022; 12:480. PMID: 35448011; PMCID: PMC9028811; DOI: 10.3390/brainsci12040480.
Abstract
Visuo-motor adaptation to optical prisms (Prism Adaptation, PA), which displace the visual scene laterally, is a behavioral method used for the experimental investigation of visuomotor plasticity and, in clinical settings, for temporarily ameliorating and rehabilitating unilateral spatial neglect. This study investigated the build-up of PA, and the presence of the typically occurring subsequent aftereffects (AEs), in a brain-damaged patient (TMA) suffering from apperceptive agnosia and a right visual half-field defect, with bilateral atrophy of the parieto-occipital cortices, regions involved in PA and AEs. Base-right prisms and control neutral lenses were used. PA was achieved by repeated pointing movements toward three types of stimuli: visual, auditory, and bimodal audio-visual. The presence and magnitude of AEs were assessed by proprioceptive, visual, visuo-proprioceptive, and auditory-proprioceptive straight-ahead pointing tasks. The patient's brain connectivity was investigated by Diffusion Tensor Imaging (DTI). Unlike control participants, TMA did not show any adaptation to prism exposure, but her AEs were largely preserved. These findings indicate that AEs may occur even in the absence of PA, as indexed by the reduction of the pointing error, showing a dissociation between the classical measures of PA and AEs. In the PA process, error reduction and its feedback may be less central to the build-up of AEs than the sensorimotor pointing activity per se.
Affiliation(s)
- Federica Albini
- Department of Psychology, University of Milano-Bicocca, 20126 Milano, Italy
- Alberto Pisoni
- Department of Psychology, University of Milano-Bicocca, 20126 Milano, Italy
- Anna Salvatore
- Department of Psychology, University of Milano-Bicocca, 20126 Milano, Italy
- Elena Calzolari
- Neuro-Otology Unit, Division of Brain Sciences, Imperial College London, London SW7 2AZ, UK
- Carlotta Casati
- Experimental Laboratory of Research in Clinical Neuropsychology, IRCCS Istituto Auxologico Italiano, 20155 Milano, Italy
- Department of Neurorehabilitation Sciences, IRCCS Istituto Auxologico Italiano, 20155 Milano, Italy
- Stefania Bianchi Marzoli
- Laboratory of Neuro-Ophthalmology and Ocular Electrophysiology, IRCCS Istituto Auxologico Italiano, 20155 Milano, Italy
- Andrea Falini
- Neuroradiology Unit and CERMAC, IRCCS San Raffaele Scientific Institute, Vita-Salute San Raffaele University, 20132 Milano, Italy
- Sofia Allegra Crespi
- Neuroradiology Unit and CERMAC, IRCCS San Raffaele Scientific Institute, Vita-Salute San Raffaele University, 20132 Milano, Italy
- Claudia Godi
- Neuroradiology Unit and CERMAC, IRCCS San Raffaele Scientific Institute, Vita-Salute San Raffaele University, 20132 Milano, Italy
- Antonella Castellano
- Neuroradiology Unit and CERMAC, IRCCS San Raffaele Scientific Institute, Vita-Salute San Raffaele University, 20132 Milano, Italy
- Nadia Bolognini
- Department of Psychology, University of Milano-Bicocca, 20126 Milano, Italy
- Experimental Laboratory of Research in Clinical Neuropsychology, IRCCS Istituto Auxologico Italiano, 20155 Milano, Italy
- Giuseppe Vallar
- Department of Psychology, University of Milano-Bicocca, 20126 Milano, Italy
- Experimental Laboratory of Research in Clinical Neuropsychology, IRCCS Istituto Auxologico Italiano, 20155 Milano, Italy
4. Visually guided saccades and acoustic distractors: no evidence for the remote distractor effect or global effect. Exp Brain Res 2020; 239:59-66. PMID: 33098653; DOI: 10.1007/s00221-020-05959-9.
Abstract
A remote visual distractor increases saccade reaction time (RT) to a visual target, an effect thought to reflect the time required to resolve conflict between target- and distractor-related information within a common retinotopic representation in the superior colliculus (SC) (i.e., the remote distractor effect: RDE). Notably, because the SC serves as a sensorimotor interface, it is possible that an RDE may also arise when an acoustic distractor is paired with a visual target; that is, the conflict between saccade generation signals may be sensory-independent. To address that issue, we employed a traditional RDE experiment involving a visual target with visual proximal and remote distractors (Experiment 1), and an experiment wherein a visual target was presented with acoustic proximal and remote distractors (Experiment 2). Both experiments also included no-distractor trials. Experiment 1 RTs showed a reliable RDE, whereas Experiment 2 RTs for proximal and remote distractors were shorter than their no-distractor counterparts. Accordingly, the findings demonstrate that the RDE is sensory-specific and arises from conflicting visual signals within a common retinotopic map. Moreover, the Experiment 2 findings indicate that an acoustic distractor supports an intersensory facilitation that optimizes oculomotor planning.
5. Bailey HD, Mullaney AB, Gibney KD, Kwakye LD. Audiovisual Integration Varies With Target and Environment Richness in Immersive Virtual Reality. Multisens Res 2018; 31:689-713. PMID: 31264608; DOI: 10.1163/22134808-20181301.
Abstract
We are continually bombarded by information arriving at each of our senses; however, the brain seems to effortlessly integrate this separate information into a unified percept. Although multisensory integration has been researched extensively using simple computer tasks and stimuli, much less is known about how multisensory integration functions in real-world contexts. Additionally, several recent studies have demonstrated that multisensory integration varies tremendously across naturalistic stimuli. Virtual reality can be used to study multisensory integration in realistic settings because it combines realism with precise control over the environment and stimulus presentation. In the current study, we investigated whether multisensory integration, as measured by the redundant signals effect (RSE), is observable in naturalistic environments using virtual reality, and whether it differs as a function of target and/or environment cue-richness. Participants detected auditory, visual, and audiovisual targets that varied in cue-richness within three distinct virtual worlds that also varied in cue-richness. We demonstrated integrative effects in each environment-by-target pairing and further showed a modest effect on multisensory integration as a function of target cue-richness, but only in the cue-rich environment. Our study is the first to definitively show that minimal and more naturalistic tasks elicit comparable redundant signals effects. Our results also suggest that multisensory integration may function differently depending on the features of the environment. These results have important implications for the design of virtual multisensory environments that are currently being used for training, educational, and entertainment purposes.
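Several entries in this list (5, 8, and 12) test whether multisensory RT facilitation exceeds simple probability summation, i.e. whether Miller's race-model inequality is violated. As a minimal illustrative sketch (not the authors' analysis code; the function name and quantile grid are arbitrary choices), the inequality P(RT_AV ≤ t) ≤ P(RT_A ≤ t) + P(RT_V ≤ t) can be evaluated from empirical reaction-time samples like this:

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, quantiles=np.linspace(0.05, 0.95, 10)):
    """Evaluate Miller's race-model inequality at a grid of quantiles.

    Returns cdf_AV(t) - min(cdf_A(t) + cdf_V(t), 1) at the latencies t
    corresponding to the given quantiles of the multisensory RTs.
    Positive values indicate facilitation beyond probability summation,
    i.e. evidence of multisensory integration.
    """
    rt_a, rt_v, rt_av = map(np.asarray, (rt_a, rt_v, rt_av))
    t = np.quantile(rt_av, quantiles)

    def ecdf(rts, t):
        # empirical CDF: fraction of RTs at or below each latency t
        return np.searchsorted(np.sort(rts), t, side="right") / len(rts)

    bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)
    return ecdf(rt_av, t) - bound

# Example with simulated reaction times (ms): if the multisensory CDF
# exceeds the race-model bound at early quantiles, integration is inferred.
rng = np.random.default_rng(1)
rt_aud = rng.normal(330, 40, 400)
rt_vis = rng.normal(310, 40, 400)
rt_av = np.minimum(rng.normal(330, 40, 400), rng.normal(310, 40, 400)) - 15
violations = race_model_violation(rt_aud, rt_vis, rt_av)
```

In practice, analyses such as the "geometric measure of Miller's inequality" mentioned in entry 8 summarize the positive part of this violation curve (e.g., its area), but the per-quantile difference above is the core quantity.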
Affiliation(s)
- Kyla D Gibney
- Department of Neuroscience, Oberlin College, Oberlin, OH, USA
6. Calzolari E, Albini F, Bolognini N, Vallar G. Multisensory and Modality-Specific Influences on Adaptation to Optical Prisms. Front Hum Neurosci 2017; 11:568. PMID: 29213233; PMCID: PMC5702769; DOI: 10.3389/fnhum.2017.00568.
Abstract
Visuo-motor adaptation to optical prisms displacing the visual scene (prism adaptation, PA) is a method used for investigating visuo-motor plasticity in healthy individuals and, in clinical settings, for the rehabilitation of unilateral spatial neglect. In the standard paradigm, the adaptation phase involves repeated pointings to visual targets while wearing optical prisms that displace the visual scene laterally. Here we explored differences in PA, and its aftereffects (AEs), as related to the sensory modality of the target. Visual, auditory, and multisensory (audio-visual) targets were used in the adaptation phase, while participants wore prisms displacing the visual field rightward by 10°. Proprioceptive, visual, visual-proprioceptive, and auditory-proprioceptive straight-ahead shifts were measured. Pointing to auditory and to audio-visual targets in the adaptation phase produced proprioceptive, visual-proprioceptive, and auditory-proprioceptive AEs, as typical visual targets did. This finding reveals that cross-modal plasticity effects involve both the auditory and the visual modality, and their interactions (Experiment 1). Even a shortened PA phase, requiring only 24 pointings to visual and audio-visual targets (Experiment 2), was sufficient to bring about AEs, as compared to the standard 92-pointing procedure. Finally, pointings to auditory targets also caused AEs, although PA with a reduced number of pointings (24) to auditory targets brought about smaller AEs than the 92-pointing procedure (Experiment 3). Together, results from the three experiments extend to the auditory modality the sensorimotor plasticity underlying the typical AEs produced by PA to visual targets. Importantly, PA to auditory targets appears characterized by less accurate pointings and error correction, suggesting that the auditory component of the PA process may be less central to the build-up of the AEs than the sensorimotor pointing activity per se. These findings highlight both the effectiveness of a reduced number of pointings for bringing about AEs, and the possibility of inducing PA with auditory targets, which may be used as a compensatory route in patients with visual deficits.
Affiliation(s)
- Elena Calzolari
- Department of Psychology and NeuroMI, University of Milano-Bicocca, Milan, Italy
- Neuro-Otology Unit, Division of Brain Sciences, Imperial College London, London, United Kingdom
- Federica Albini
- Department of Psychology and NeuroMI, University of Milano-Bicocca, Milan, Italy
- Nadia Bolognini
- Department of Psychology and NeuroMI, University of Milano-Bicocca, Milan, Italy
- Neuropsychological Laboratory, Istituto Auxologico Italiano, Istituto di Ricovero e Cura a Carattere Scientifico, Milan, Italy
- Giuseppe Vallar
- Department of Psychology and NeuroMI, University of Milano-Bicocca, Milan, Italy
- Neuropsychological Laboratory, Istituto Auxologico Italiano, Istituto di Ricovero e Cura a Carattere Scientifico, Milan, Italy
7. Tinelli F, Cioni G, Purpura G. Development and Implementation of a New Telerehabilitation System for Audiovisual Stimulation Training in Hemianopia. Front Neurol 2017; 8:621. PMID: 29209271; PMCID: PMC5702450; DOI: 10.3389/fneur.2017.00621.
Abstract
Telerehabilitation, defined as the use of communication technologies to provide rehabilitation remotely, although still underused, could be as efficient and effective as conventional clinical rehabilitation practices. The literature describes the use of telerehabilitation in adult patients with various diseases, whereas it is seldom used in clinical practice with children and adolescents. We have developed a new audiovisual telerehabilitation (AVT) system, based on the multisensory capabilities of the human brain, to provide a new tool for adults and children with visual field defects, aimed at improving ocular movements toward the blind hemifield. The apparatus consists of a semicircular structure on which visual and acoustic stimuli are positioned. A camera integrated into the mechanical structure at the center of the panel monitors eye and head movements. Patients use this training system through customized software on a tablet. From the hospital, the therapist has complete control over the training process, and the results of the training sessions are automatically available within a few minutes on the hospital website. In this paper, we report the AVT system protocol and preliminary results from its use by three adult patients. All three showed improvements in visual detection abilities, with long-term effects. In the future, we will test this apparatus with children and their families. Since interventions for visual field impairments carry a substantial cost for individuals and for the welfare system, we expect that our research could have a profound socio-economic impact by avoiding prolonged and intensive hospital stays.
Affiliation(s)
- Francesca Tinelli
- Department of Developmental Neuroscience, IRCCS Stella Maris Foundation, Pisa, Italy
- Giovanni Cioni
- Department of Developmental Neuroscience, IRCCS Stella Maris Foundation, Pisa, Italy
- Department of Clinical and Experimental Medicine, University of Pisa, Pisa, Italy
- Giulia Purpura
- Department of Developmental Neuroscience, IRCCS Stella Maris Foundation, Pisa, Italy
8. Gibney KD, Aligbe E, Eggleston BA, Nunes SR, Kerkhoff WG, Dean CL, Kwakye LD. Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity. Front Integr Neurosci 2017; 11:1. PMID: 28163675; PMCID: PMC5247431; DOI: 10.3389/fnint.2017.00001.
Abstract
The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, whether attention is necessary for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and a McGurk task under various perceptual load conditions: no load (multisensory task with visual distractors present), low load (multisensory task while detecting the presence of a yellow letter among the visual distractors), and high load (multisensory task while detecting the presence of a number among the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than unisensory trials. However, multisensory response times violated the race model only in the no- and low-perceptual-load conditions. Additionally, a geometric measure of Miller's inequality showed a decrease in multisensory integration on the speeded detection task with increasing perceptual load. Surprisingly, participants who did not show integration in the no-load condition showed diverging changes with increasing load: no change in integration for the McGurk task, but increases in integration for the detection task. The results of this study indicate that attention plays a crucial role in multisensory integration for both highly complex and simple multisensory tasks, and that attention may interact differently with multisensory processing in individuals who do not strongly integrate multisensory information.
Affiliation(s)
- Kyla D Gibney
- Department of Neuroscience, Oberlin College, Oberlin, OH, USA
- Sarah R Nunes
- Department of Neuroscience, Oberlin College, Oberlin, OH, USA
- Leslie D Kwakye
- Department of Neuroscience, Oberlin College, Oberlin, OH, USA
9. Kurela L, Wallace M. Serotonergic Modulation of Sensory and Multisensory Processing in Superior Colliculus. Multisens Res 2017. DOI: 10.1163/22134808-00002552.
Abstract
The ability to integrate information across the senses is vital for coherent perception of, and interaction with, the world. While much is known regarding the organization and function of multisensory neurons within the mammalian superior colliculus (SC), very little is understood at a mechanistic level. One open question in this regard is the role of neuromodulatory networks in shaping multisensory responses. Although the SC receives substantial serotonergic projections from the raphe nuclei, and serotonergic receptors are distributed throughout the SC, the potential role of serotonin (5-HT) signaling in multisensory function is poorly understood. To begin to fill this knowledge void, the current study provides physiological evidence for the influences of 5-HT signaling on auditory, visual, and audiovisual responses of individual neurons in the intermediate and deep layers of the SC, with a focus on the 5-HT2A receptor. Using single-unit extracellular recordings in combination with pharmacological methods, we demonstrate that alterations in 5-HT2A receptor signaling change receptive field (RF) architecture as well as the responsivity and integrative abilities of SC neurons when assessed at the level of the single neuron. In contrast, little change was seen in the local field potential (LFP). These results are the first to implicate the serotonergic system in multisensory processing, and are an important step toward understanding how modulatory networks mediate multisensory integration in the SC.
Affiliation(s)
- LeAnne R. Kurela
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN 37232, USA
- Mark T. Wallace
- Vanderbilt Brain Institute, Vanderbilt University, Nashville, TN 37232, USA
- Department of Hearing & Speech Sciences, Vanderbilt University, Nashville, TN 37232, USA
- Department of Psychology, Vanderbilt University, Nashville, TN 37232, USA
- Department of Psychiatry, Vanderbilt University, Nashville, TN 37232, USA
10. Diederich A, Colonius H, Kandil FI. Prior knowledge of spatiotemporal configuration facilitates crossmodal saccadic response. Exp Brain Res 2016; 234:2059-2076. DOI: 10.1007/s00221-016-4609-5.
11. Caruso VC, Pages DS, Sommer MA, Groh JM. Similar prevalence and magnitude of auditory-evoked and visually evoked activity in the frontal eye fields: implications for multisensory motor control. J Neurophysiol 2016; 115:3162-73. PMID: 26936983; DOI: 10.1152/jn.00935.2015.
Abstract
Saccadic eye movements can be elicited by more than one type of sensory stimulus. This implies substantial transformations of signals originating in different sense organs as they reach a common motor output pathway. In this study, we compared the prevalence and magnitude of auditory- and visually evoked activity in a structure implicated in oculomotor processing, the primate frontal eye fields (FEF). We recorded from 324 single neurons while 2 monkeys performed delayed saccades to visual or auditory targets. We found that 64% of FEF neurons were active on presentation of auditory targets and 87% were active during auditory-guided saccades, compared with 75% and 84% for visual targets and saccades. As saccade onset approached, the average level of population activity in the FEF became indistinguishable on visual and auditory trials. FEF activity was better correlated with the movement vector than with the target location for both modalities. In summary, the large proportion of auditory-responsive neurons in the FEF, the similarity between visual and auditory activity levels at the time of the saccade, and the strong correlation between activity and saccade vector suggest that auditory signals are tailored to roughly match the strength of visual signals present in the FEF, facilitating access to a common motor output pathway.
Affiliation(s)
- Valeria C Caruso
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
- Daniel S Pages
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
- Marc A Sommer
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
- Department of Biomedical Engineering, Duke University, Durham, North Carolina
- Jennifer M Groh
- Duke Institute for Brain Sciences, Duke University, Durham, North Carolina
- Center for Cognitive Neuroscience, Duke University, Durham, North Carolina
- Department of Psychology and Neuroscience, Duke University, Durham, North Carolina
- Department of Neurobiology, Duke University, Durham, North Carolina
12. Makovac E, Buonocore A, McIntosh RD. Audio-visual integration and saccadic inhibition. Q J Exp Psychol (Hove) 2015; 68:1295-305. PMID: 25599266; DOI: 10.1080/17470218.2014.979210.
Abstract
Saccades involve a continuous selection between competing targets at different locations. This competition has mostly been investigated in the visual context, and it is well known that a visual distractor can interfere with a saccade toward a visual target. Here, we investigated whether multimodal, audio-visual targets confer stronger resilience against visual distraction. Saccades to audio-visual targets had shorter latencies than saccades to unisensory stimuli. This facilitation exceeded the level that could be explained by simple probability summation, indicating that multisensory integration had occurred. The magnitude of inhibition induced by a visual distractor was comparable for saccades to unisensory and multisensory targets, but the duration of the inhibition was shorter for multimodal targets. We conclude that multisensory integration can allow a saccade plan to be re-established more rapidly following saccadic inhibition.
Affiliation(s)
- Elena Makovac
- Human Cognitive Neuroscience, Psychology, University of Edinburgh, Edinburgh, UK
13. Wallace MT, Stevenson RA. The construct of the multisensory temporal binding window and its dysregulation in developmental disabilities. Neuropsychologia 2014; 64:105-23. PMID: 25128432; PMCID: PMC4326640; DOI: 10.1016/j.neuropsychologia.2014.08.005.
Abstract
Behavior, perception and cognition are strongly shaped by the synthesis of information across the different sensory modalities. Such multisensory integration often results in performance and perceptual benefits that reflect the additional information conferred by having cues from multiple senses providing redundant or complementary information. The spatial and temporal relationships of these cues provide powerful statistical information about how these cues should be integrated or "bound" in order to create a unified perceptual representation. Much recent work has examined the temporal factors that are integral in multisensory processing, with many studies focused on the construct of the multisensory temporal binding window - the epoch of time within which stimuli from different modalities are likely to be integrated and perceptually bound. Emerging evidence suggests that this temporal window is altered in a series of neurodevelopmental disorders, including autism, dyslexia and schizophrenia. In addition to their role in sensory processing, these deficits in multisensory temporal function may play an important role in the perceptual and cognitive weaknesses that characterize these clinical disorders. Within this context, focus on improving the acuity of multisensory temporal function may have important implications for the amelioration of the "higher-order" deficits that serve as the defining features of these disorders.
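The binding-window construct above is often operationalized as the span of onset asynchronies over which observers still report simultaneity. A sketch of one common estimate, on hypothetical data (real studies typically fit a psychometric model rather than interpolate):

```python
import numpy as np

# hypothetical simultaneity-judgment data: audiovisual onset asynchrony (ms)
# vs. proportion of trials judged "simultaneous"
soa = np.array([-400, -300, -200, -100, 0, 100, 200, 300, 400])
p_sync = np.array([0.05, 0.15, 0.45, 0.80, 0.95, 0.85, 0.55, 0.20, 0.08])

def binding_window_width(soa, p, criterion=0.5):
    """Width of the SOA range where p("simultaneous") stays above
    `criterion`, read off each flank of the curve by linear interpolation."""
    fine = np.linspace(soa.min(), soa.max(), 2001)
    above = fine[np.interp(fine, soa, p) >= criterion]
    return above.max() - above.min()

width = binding_window_width(soa, p_sync)
print(width)  # roughly 400 ms for this toy data set
```

The asymmetry of the toy curve (wider tolerance for visual-leading asynchronies) mirrors what the abstract's literature typically reports.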
Affiliation(s)
- Mark T Wallace
- Vanderbilt Brain Institute, Vanderbilt University, 465 21st Avenue South, Nashville, TN 37232, USA; Department of Hearing & Speech Sciences, Vanderbilt University, Nashville, TN, USA; Department of Psychology, Vanderbilt University, Nashville, TN, USA; Department of Psychiatry, Vanderbilt University, Nashville, TN, USA.
- Ryan A Stevenson
- Department of Psychology, University of Toronto, Toronto, ON, Canada

14
Steenken R, Weber L, Colonius H, Diederich A. Designing driver assistance systems with crossmodal signals: multisensory integration rules for saccadic reaction times apply. PLoS One 2014; 9:e92666. [PMID: 24800823 PMCID: PMC4011748 DOI: 10.1371/journal.pone.0092666] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Received: 11/12/2013] [Accepted: 02/25/2014] [Indexed: 11/19/2022]
Abstract
Modern driver assistance systems make increasing use of auditory and tactile signals in order to reduce the driver's visual information load. This entails potential crossmodal interaction effects that need to be taken into account in designing an optimal system. Here we show that saccadic reaction times to visual targets (cockpit or outside mirror), presented in a driving simulator environment and accompanied by auditory or tactile accessories, follow some well-known spatiotemporal rules of multisensory integration, usually found under confined laboratory conditions. Auditory nontargets speed up reaction time by about 80 ms. The effect tends to be maximal when the nontarget is presented 50 ms before the target and when target and nontarget are spatially coincident. The effect of a tactile nontarget (vibrating steering wheel) was less pronounced and not spatially specific. It is shown that the average reaction times are well-described by the stochastic "time window of integration" model for multisensory integration developed by the authors. This two-stage model postulates that crossmodal interaction occurs only if the peripheral processes from the different sensory modalities terminate within a fixed temporal interval, and that the amount of crossmodal interaction manifests itself in an increase or decrease of second stage processing time. A qualitative test is consistent with the model prediction that the probability of interaction, but not the amount of crossmodal interaction, depends on target-nontarget onset asynchrony. A quantitative model fit yields estimates of individual participants' parameters, including the size of the time window. Some consequences for the design of driver assistance systems are discussed.
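The two-stage "time window of integration" logic described here can be illustrated with a small Monte Carlo simulation. All parameter values below are hypothetical, and the published model is fit analytically rather than simulated; this is only a sketch of the mechanism:

```python
import numpy as np

def twin_srt(rate_v, rate_a, soa, window, mu2, delta, n=200_000, seed=1):
    """Monte Carlo sketch of the two-stage TWIN model (focused attention,
    visual target). Peripheral processing times are exponential; the
    crossmodal interaction (a `delta`-ms speedup of the second stage)
    applies only on trials where the auditory nontarget's peripheral
    process finishes first and the visual one ends within `window` ms."""
    rng = np.random.default_rng(seed)
    v = rng.exponential(1 / rate_v, n)         # visual peripheral stage (ms)
    a = rng.exponential(1 / rate_a, n) + soa   # auditory stage, onset-shifted
    integrate = (a < v) & (v < a + window)
    srt = v + mu2 - delta * integrate          # second stage has mean mu2
    return srt.mean(), integrate.mean()

# hypothetical parameters: auditory leads by 50 ms, 200-ms window, 40-ms gain
mean_srt, p_int = twin_srt(rate_v=1/50, rate_a=1/30, soa=-50,
                           window=200, mu2=150, delta=40)
print(mean_srt < 200, 0 < p_int < 1)
```

Consistent with the model prediction quoted in the abstract, shifting `soa` changes the probability of integration (`p_int`) while the per-trial interaction gain `delta` stays fixed.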
Affiliation(s)
- Rike Steenken
- Department of Psychology, European Medical School, Carl von Ossietzky Universität, Oldenburg, Germany
- Lars Weber
- OFFIS, Department for Transportation, Human-Centred Design, Oldenburg, Germany
- Hans Colonius
- Department of Psychology, Cluster of Excellence “Hearing4all”, and Research Center Neurosensory Science, European Medical School, Carl von Ossietzky Universität, Oldenburg, Germany
- Adele Diederich
- School of Humanities and Social Sciences, Jacobs University, Bremen, Germany

15
Sarko DK, Ghose D, Wallace MT. Convergent approaches toward the study of multisensory perception. Front Syst Neurosci 2013; 7:81. [PMID: 24265607 PMCID: PMC3820972 DOI: 10.3389/fnsys.2013.00081] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.6] [Received: 07/10/2013] [Accepted: 10/20/2013] [Indexed: 11/13/2022]
Abstract
Classical analytical approaches for examining multisensory processing in individual neurons have relied heavily on changes in mean firing rate to assess the presence and magnitude of multisensory interaction. However, neurophysiological studies within individual sensory systems have illustrated that important sensory and perceptual information is encoded in forms that go beyond these traditional spike-based measures. Here we review analytical tools as they are used within individual sensory systems (auditory, somatosensory, and visual) to advance our understanding of how sensory cues are effectively integrated across modalities (e.g., audiovisual cues facilitating speech processing). Specifically, we discuss how methods used to assess response variability (Fano factor, or FF), local field potentials (LFPs), current source density (CSD), oscillatory coherence, spike synchrony, and receiver operating characteristics (ROC) represent particularly promising tools for understanding the neural encoding of multisensory stimulus features. The utility of each approach and how it might optimally be applied toward understanding multisensory processing is placed within the context of exciting new data that is just beginning to be generated. Finally, we address how underlying encoding mechanisms might shape, and be tested alongside, the known behavioral and perceptual benefits that accompany multisensory processing.
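Two of the measures surveyed in this abstract, the Fano factor and ROC analysis, reduce to a few lines of code. A sketch on made-up spike counts (not data from the review):

```python
import numpy as np

def fano_factor(counts):
    """Fano factor: variance-to-mean ratio of spike counts across trials."""
    return np.var(counts, ddof=1) / np.mean(counts)

def roc_auc(counts_a, counts_b):
    """ROC area: probability that a random count from condition B exceeds
    one from condition A, counting ties as 0.5 (Mann-Whitney statistic)."""
    a = np.asarray(counts_a)[:, None]
    b = np.asarray(counts_b)[None, :]
    return (b > a).mean() + 0.5 * (b == a).mean()

# made-up spike counts per trial for a unisensory vs. multisensory condition
uni = np.array([3, 4, 2, 5, 3, 4, 3, 2])
multi = np.array([6, 7, 5, 8, 6, 7, 6, 5])
ff = fano_factor(uni)
auc = roc_auc(uni, multi)
print(round(ff, 2), round(auc, 3))
```

An AUC near 1 means an ideal observer could discriminate the two conditions from single-trial counts, which is the sense in which ROC analysis goes beyond comparing mean firing rates.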
Affiliation(s)
- Diana K. Sarko
- Department of Anatomy, Cell Biology and Physiology, Edward Via College of Osteopathic Medicine, Spartanburg, SC, USA
- Dipanwita Ghose
- Department of Anesthesiology, Vanderbilt University Medical Center, Nashville, TN, USA
- Mark T. Wallace
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA

16
Spence C. Just how important is spatial coincidence to multisensory integration? Evaluating the spatial rule. Ann N Y Acad Sci 2013; 1296:31-49. [DOI: 10.1111/nyas.12121] [Citation(s) in RCA: 115] [Impact Index Per Article: 10.5] [Indexed: 11/30/2022]
Affiliation(s)
- Charles Spence
- Department of Experimental Psychology, Oxford University

17
Van Barneveld DCPBM, Van Wanrooij MM. The influence of static eye and head position on the ventriloquist effect. Eur J Neurosci 2013; 37:1501-10. [PMID: 23463919 DOI: 10.1111/ejn.12176] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Received: 09/28/2012] [Revised: 12/20/2012] [Accepted: 01/30/2013] [Indexed: 11/28/2022]
Abstract
Orienting responses to audiovisual events have shorter reaction times and better accuracy and precision when images and sounds in the environment are aligned in space and time. How the brain constructs an integrated audiovisual percept is a computational puzzle because the auditory and visual senses are represented in different reference frames: the retina encodes visual locations with respect to the eyes; whereas the sound localisation cues are referenced to the head. In the well-known ventriloquist effect, the auditory spatial percept of the ventriloquist's voice is attracted toward the synchronous visual image of the dummy, but does this visual bias on sound localisation operate in a common reference frame by correctly taking into account eye and head position? Here we studied this question by independently varying initial eye and head orientations, and the amount of audiovisual spatial mismatch. Human subjects pointed head and/or gaze to auditory targets in elevation, and were instructed to ignore co-occurring visual distracters. Results demonstrate that different initial head and eye orientations are accurately and appropriately incorporated into an audiovisual response. Effectively, sounds and images are perceptually fused according to their physical locations in space independent of an observer's point of view. Implications for neurophysiological findings and modelling efforts that aim to reconcile sensory and motor signals for goal-directed behaviour are discussed.
Affiliation(s)
- Denise C P B M Van Barneveld
- Department of Biophysics, Donders Institute for Brain, Cognition and Behaviour, Radboud University Nijmegen, P.O. Box 9010, 6500 GL, Nijmegen, The Netherlands

18
Modeling Multisensory Processes in Saccadic Responses. Front Neurosci 2013. [DOI: 10.1201/9781439812174-18] [Citation(s) in RCA: 0] [Impact Index Per Article: 0]
19
Abstract
The Mozart Effect is a phenomenon whereby certain pieces of music induce temporary enhancement in "spatial temporal reasoning." To determine whether the Mozart Effect can improve surgical performance, 55 male volunteers (mean age = 20.6 years, range = 16-27), novice to surgery, were timed as they completed an activity course on a laparoscopic simulator. Subjects were then randomized for exposure to 1 of 2 musical pieces, by Mozart (n = 21) or Dream Theater (n = 19), after which they repeated the course. Following a 15-minute exposure to a nonmusical piece, subjects were exposed to one of the pieces and performed the activity course a third time. An additional group (n = 15) that was not corandomized performed the tasks without any exposure to music. The percent improvements in completion time between 3 successive trials were calculated for each subject and group means compared. In 2 of the tasks, subjects exposed to the Dream Theater piece achieved approximately 30% more improvement (26.7 ± 8.3%) than those exposed to the Mozart piece (20.2 ± 7.8%, P = .021) or to no music (20.4 ± 9.1%, P = .049). Distinct patterns of covariance between baseline performance and subsequent improvement were observed for the different musical conditions and tasks. The data confirm the existence of a Mozart Effect and demonstrate for the first time its practical applicability. Prior exposure to certain pieces may enhance performance in practical skills requiring spatial temporal reasoning.
20
Ghose D, Barnett ZP, Wallace MT. Impact of response duration on multisensory integration. J Neurophysiol 2012; 108:2534-44. [PMID: 22896723 DOI: 10.1152/jn.00286.2012] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.5] [Indexed: 11/22/2022]
Abstract
Multisensory neurons in the superior colliculus (SC) have been shown to have large receptive fields that are heterogeneous in nature. These neurons have the capacity to integrate their different sensory inputs, a process that has been shown to depend on the physical characteristics of the stimuli that are combined (i.e., spatial and temporal relationship and relative effectiveness). Recent work has highlighted the interdependence of these factors in driving multisensory integration, adding a layer of complexity to our understanding of multisensory processes. In the present study our goal was to add to this understanding by characterizing how stimulus location impacts the temporal dynamics of multisensory responses in cat SC neurons. The results illustrate that locations within the spatial receptive fields (SRFs) of these neurons can be divided into those showing short-duration responses and long-duration response profiles. Most importantly, discharge duration appears to be a good determinant of multisensory integration, such that short-duration responses are typically associated with a high magnitude of multisensory integration (i.e., superadditive responses) while long-duration responses are typically associated with low integrative capacity. These results further reinforce the complexity of the integrative features of SC neurons and show that the large SRFs of these neurons are characterized by vastly differing temporal dynamics, dynamics that strongly shape the integrative capacity of these neurons.
Affiliation(s)
- Dipanwita Ghose
- Department of Psychology, Vanderbilt University, Nashville, Tennessee 37240, USA.

21
Schiller PH, Kwak MC, Slocum WM. Visual and auditory cue integration for the generation of saccadic eye movements in monkeys and lever pressing in humans. Eur J Neurosci 2012; 36:2500-4. [PMID: 22621264 DOI: 10.1111/j.1460-9568.2012.08133.x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Indexed: 11/30/2022]
Abstract
This study examined how effectively visual and auditory cues can be integrated in the brain for the generation of motor responses. The latencies with which saccadic eye movements are produced in humans and monkeys form, under certain conditions, a bimodal distribution, the first mode of which has been termed express saccades. In humans, a much higher percentage of express saccades is generated when both visual and auditory cues are provided compared with the single presentation of these cues [H. C. Hughes et al. (1994) J. Exp. Psychol. Hum. Percept. Perform., 20, 131-153]. In this study, we addressed two questions: first, do monkeys also integrate visual and auditory cues for express saccade generation as do humans and second, does such integration take place in humans when, instead of eye movements, the task is to press levers with fingers? Our results show that (i) in monkeys, as in humans, the combined visual and auditory cues generate a much higher percentage of express saccades than do singly presented cues and (ii) the latencies with which levers are pressed by humans are shorter when both visual and auditory cues are provided compared with the presentation of single cues, but the distribution in all cases is unimodal; response latencies in the express range seen in the execution of saccadic eye movements are not obtained with lever pressing.
Affiliation(s)
- Peter H Schiller
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology, 77 Massachusetts Avenue, Cambridge, MA, USA.

22

23
Sarko D, Nidiffer A, Powers A III, Ghose D, Hillock-Dunn R, Fister M, Krueger J, Wallace M. Spatial and Temporal Features of Multisensory Processes. Front Neurosci 2011. [DOI: 10.1201/9781439812174-15] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Indexed: 11/11/2022]
24
Sarko D, Nidiffer A, Powers A III, Ghose D, Hillock-Dunn R, Fister M, Krueger J, Wallace M. Spatial and Temporal Features of Multisensory Processes. Front Neurosci 2011. [DOI: 10.1201/b11092-15] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Indexed: 11/11/2022]
25
Computing an optimal time window of audiovisual integration in focused attention tasks: illustrated by studies on effect of age and prior knowledge. Exp Brain Res 2011; 212:327-37. [PMID: 21626414 DOI: 10.1007/s00221-011-2732-x] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.0] [Received: 03/10/2011] [Accepted: 05/12/2011] [Indexed: 10/18/2022]
Abstract
The concept of a "time window of integration" holds that information from different sensory modalities must not be perceived too far apart in time in order to be integrated into a multisensory perceptual event. Empirical estimates of window width differ widely, however, ranging from 40 to 600 ms depending on context and experimental paradigm. Searching for theoretical derivation of window width, Colonius and Diederich (Front Integr Neurosci 2010) developed a decision-theoretic framework using a decision rule that is based on the prior probability of a common source, the likelihood of temporal disparities between the unimodal signals, and the payoff for making right or wrong decisions. Here, this framework is extended to the focused attention task where subjects are asked to respond to signals from a target modality only. Evoking the framework of the time-window-of-integration (TWIN) model, an explicit expression for optimal window width is obtained. The approach is probed on two published focused attention studies. The first is a saccadic reaction time study assessing the efficiency with which multisensory integration varies as a function of aging. Although the window widths for young and older adults differ by nearly 200 ms, presumably due to their different peripheral processing speeds, neither of them deviates significantly from the optimal values. In the second study, head saccadic reactions times to a perfectly aligned audiovisual stimulus pair had been shown to depend on the prior probability of spatial alignment. Intriguingly, they reflected the magnitude of the time-window widths predicted by our decision-theoretic framework, i.e., a larger time window is associated with a higher prior probability.
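The decision rule sketched in this abstract weighs the prior probability of a common source against the likelihood of the observed temporal disparity. A toy version of such a rule (Gaussian disparity likelihood under a common source, uniform likelihood under independent sources; all parameter values hypothetical, not those of the cited framework):

```python
import numpy as np

def p_common_source(disparity_ms, prior=0.5, sigma=60.0, range_ms=600.0):
    """Posterior probability that the auditory and visual signals share a
    common source, given their temporal disparity. Assumes a Gaussian
    likelihood (sd `sigma`) under a common source and a uniform likelihood
    over `range_ms` when the sources are independent."""
    like_common = (np.exp(-0.5 * (disparity_ms / sigma) ** 2)
                   / (sigma * np.sqrt(2 * np.pi)))
    like_indep = 1.0 / range_ms
    num = prior * like_common
    return num / (num + (1 - prior) * like_indep)

# small disparities favour integration; large ones favour segregation
print(p_common_source(0.0) > 0.7, p_common_source(300.0) < 0.01)
```

Raising `prior` widens the range of disparities for which integration is the favoured decision, matching the abstract's observation that a higher prior probability of alignment goes with a larger time window.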
26
Bolognini N, Maravita A. Uncovering Multisensory Processing through Non-Invasive Brain Stimulation. Front Psychol 2011; 2:46. [PMID: 21716922 PMCID: PMC3110874 DOI: 10.3389/fpsyg.2011.00046] [Citation(s) in RCA: 16] [Impact Index Per Article: 1.2] [Received: 11/30/2010] [Accepted: 03/04/2011] [Indexed: 02/04/2023]
Abstract
Most current knowledge about the mechanisms of multisensory integration of environmental stimuli by the human brain derives from neuroimaging experiments. However, neuroimaging studies do not always provide conclusive evidence about the causal role of a given area for multisensory interactions, since these techniques can mainly derive correlations between brain activations and behavior. Conversely, techniques of non-invasive brain stimulation (NIBS) represent a unique and powerful approach to inform models of causal relations between specific brain regions and individual cognitive and perceptual functions. Although NIBS has been widely used in cognitive neuroscience, its use in the study of multisensory processing in the human brain is quite a novel field of research. In this paper, we review and discuss recent studies that have used two techniques of NIBS, namely transcranial magnetic stimulation and transcranial direct current stimulation, for investigating the causal involvement of unisensory and heteromodal cortical areas in multisensory processing, the effects of multisensory cues on cortical excitability in unisensory areas, and the putative functional connections among different cortical areas subserving multisensory interactions. The emerging view is that NIBS is an essential tool available to neuroscientists seeking causal relationships between a given area or network and multisensory processes. With its already large and fast-increasing usage, future work using NIBS in isolation, as well as in conjunction with different neuroimaging techniques, could substantially improve our understanding of multisensory processing in the human brain.
Affiliation(s)
- Nadia Bolognini
- Department of Psychology, University of Milano-Bicocca, Milan, Italy

27
Van Wanrooij MM, Bremen P, John Van Opstal A. Acquired prior knowledge modulates audiovisual integration. Eur J Neurosci 2010; 31:1763-71. [PMID: 20584180 DOI: 10.1111/j.1460-9568.2010.07198.x] [Citation(s) in RCA: 30] [Impact Index Per Article: 2.1] [Indexed: 11/30/2022]
Abstract
Orienting responses to audiovisual events in the environment can benefit markedly by the integration of visual and auditory spatial information. However, logically, audiovisual integration would only be considered successful for stimuli that are spatially and temporally aligned, as these would be emitted by a single object in space-time. As humans do not have prior knowledge about whether novel auditory and visual events do indeed emanate from the same object, such information needs to be extracted from a variety of sources. For example, expectation about alignment or misalignment could modulate the strength of multisensory integration. If evidence from previous trials would repeatedly favour aligned audiovisual inputs, the internal state might also assume alignment for the next trial, and hence react to a new audiovisual event as if it were aligned. To test for such a strategy, subjects oriented a head-fixed pointer as fast as possible to a visual flash that was consistently paired, though not always spatially aligned, with a co-occurring broadband sound. We varied the probability of audiovisual alignment between experiments. Reaction times were consistently lower in blocks containing only aligned audiovisual stimuli than in blocks also containing pseudorandomly presented spatially disparate stimuli. Results demonstrate dynamic updating of the subject's prior expectation of audiovisual congruency. We discuss a model of prior probability estimation to explain the results.
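The "dynamic updating of the subject's prior expectation" described here can be caricatured as a leaky running average of recent alignment evidence. This is a deliberate simplification, not the estimator the authors propose:

```python
def update_alignment_prior(prior, aligned, gamma=0.9):
    """Leaky running estimate of the probability that the audio and visual
    events are spatially aligned; `gamma` sets the memory of past trials."""
    return gamma * prior + (1 - gamma) * (1.0 if aligned else 0.0)

# a mostly aligned trial history pulls the expectation above its 0.5 start
p = 0.5
for aligned in [True, True, True, False, True, True, True, True]:
    p = update_alignment_prior(p, aligned)
print(round(p, 2))
```

In a block containing only aligned stimuli the estimate converges toward 1, which on this account is why reaction times were consistently lower there than in blocks with occasional disparate stimuli.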
Affiliation(s)
- Marc M Van Wanrooij
- Radboud University Nijmegen, Donders Institute of Brain, Cognition and Behaviour, Department of Biophysics, Geert Grooteplein 21, 6525 EZ Nijmegen, The Netherlands.

28
Bolognini N, Fregni F, Casati C, Olgiati E, Vallar G. Brain polarization of parietal cortex augments training-induced improvement of visual exploratory and attentional skills. Brain Res 2010; 1349:76-89. [DOI: 10.1016/j.brainres.2010.06.053] [Citation(s) in RCA: 86] [Impact Index Per Article: 6.1] [Received: 09/28/2009] [Revised: 06/13/2010] [Accepted: 06/22/2010] [Indexed: 10/19/2022]
29
Falchier A, Schroeder CE, Hackett TA, Lakatos P, Nascimento-Silva S, Ulbert I, Karmos G, Smiley JF. Projection from visual areas V2 and prostriata to caudal auditory cortex in the monkey. Cereb Cortex 2009; 20:1529-38. [PMID: 19875677 DOI: 10.1093/cercor/bhp213] [Citation(s) in RCA: 105] [Impact Index Per Article: 7.0] [Indexed: 11/12/2022]
Abstract
Studies in humans and monkeys report widespread multisensory interactions at or near primary visual and auditory areas of neocortex. The range and scale of these effects has prompted increased interest in interconnectivity between the putatively "unisensory" cortices at lower hierarchical levels. Recent anatomical tract-tracing studies have revealed direct projections from auditory cortex to primary visual area (V1) and secondary visual area (V2) that could serve as a substrate for auditory influences over low-level visual processing. To better understand the significance of these connections, we looked for reciprocal projections from visual cortex to caudal auditory cortical areas in macaque monkeys. We found direct projections from area prostriata and the peripheral visual representations of area V2. Projections were more abundant after injections of temporoparietal area and caudal parabelt than after injections of caudal medial belt and the contiguous areas near the fundus of the lateral sulcus. Only one injection was confined to primary auditory cortex (area A1) and did not demonstrate visual connections. The projections from visual areas originated mainly from infragranular layers, suggestive of a "feedback"-type projection. The selective localization of these connections to peripheral visual areas and caudal auditory cortex suggests that they are involved in spatial localization.
Affiliation(s)
- Arnaud Falchier
- Cognitive Neuroscience and Schizophrenia Program, Nathan Kline Institute for Psychiatric Research, Orangeburg, NY 10962, USA

30
Smiley JF, Falchier A. Multisensory connections of monkey auditory cerebral cortex. Hear Res 2009; 258:37-46. [PMID: 19619628 DOI: 10.1016/j.heares.2009.06.019] [Citation(s) in RCA: 71] [Impact Index Per Article: 4.7] [Received: 05/28/2009] [Revised: 06/26/2009] [Accepted: 06/29/2009] [Indexed: 11/16/2022]
Abstract
Functional studies have demonstrated multisensory responses in auditory cortex, even in the primary and early auditory association areas. The features of somatosensory and visual responses in auditory cortex suggest that they are involved in multiple processes including spatial, temporal and object-related perception. Tract tracing studies in monkeys have demonstrated several potential sources of somatosensory and visual inputs to auditory cortex. These include potential somatosensory inputs from the retroinsular (RI) and granular insula (Ig) cortical areas, and from the thalamic posterior (PO) nucleus. Potential sources of visual responses include peripheral field representations of areas V2 and prostriata, as well as the superior temporal polysensory area (STP) in the superior temporal sulcus, and the magnocellular medial geniculate thalamic nucleus (MGm). Besides these sources, there are several other thalamic, limbic and cortical association structures that have multisensory responses and may contribute cross-modal inputs to auditory cortex. These connections demonstrated by tract tracing provide a list of potential inputs, but in most cases their significance has not been confirmed by functional experiments. It is possible that the somatosensory and visual modulation of auditory cortex are each mediated by multiple extrinsic sources.
Affiliation(s)
- John F Smiley
- Cognitive Neuroscience and Schizophrenia Program, Nathan Kline Institute for Psychiatric Research, 140 Old Orangeburg Road, Orangeburg, NY 10962, USA.

31
Van Wanrooij MM, Bell AH, Munoz DP, Van Opstal AJ. The effect of spatial-temporal audiovisual disparities on saccades in a complex scene. Exp Brain Res 2009; 198:425-37. [PMID: 19415249 PMCID: PMC2733184 DOI: 10.1007/s00221-009-1815-4] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.6] [Received: 10/01/2008] [Accepted: 04/11/2009] [Indexed: 11/30/2022]
Abstract
In a previous study we quantified the effect of multisensory integration on the latency and accuracy of saccadic eye movements toward spatially aligned audiovisual (AV) stimuli within a rich AV-background (Corneil et al. in J Neurophysiol 88:438–454, 2002). In those experiments both stimulus modalities belonged to the same object, and subjects were instructed to foveate that source, irrespective of modality. Under natural conditions, however, subjects have no prior knowledge as to whether visual and auditory events originated from the same, or from different objects in space and time. In the present experiments we included these possibilities by introducing various spatial and temporal disparities between the visual and auditory events within the AV-background. Subjects had to orient fast and accurately to the visual target, thereby ignoring the auditory distractor. We show that this task belies a dichotomy, as it was quite difficult to produce fast responses (<250 ms) that were not aurally driven. Subjects therefore made many erroneous saccades. Interestingly, for the spatially aligned events the inability to ignore auditory stimuli produced shorter reaction times, but also more accurate responses than for the unisensory target conditions. These findings, which demonstrate effective multisensory integration, are similar to the previous study, and the same multisensory integration rules are applied (Corneil et al. in J Neurophysiol 88:438–454, 2002). In contrast, with increasing spatial disparity, integration gradually broke down, as the subjects’ responses became bistable: saccades were directed either to the auditory (fast responses), or to the visual stimulus (late responses). Interestingly, also in this case responses were faster and more accurate than to the respective unisensory stimuli.
Affiliation(s)
- Marc M Van Wanrooij
- Department of Biophysics, Donders Institute of Brain, Cognition and Behaviour, Radboud University Nijmegen, 6525 EZ Nijmegen, The Netherlands

32
Time-Window-of-Integration (TWIN) Model for Saccadic Reaction Time: Effect of Auditory Masker Level on Visual–Auditory Spatial Interaction in Elevation. Brain Topogr 2009; 21:177-84. [PMID: 19337824 DOI: 10.1007/s10548-009-0091-8] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.6] [Received: 01/31/2009] [Accepted: 03/19/2009] [Indexed: 10/20/2022]
33
Royal DW, Carriere BN, Wallace MT. Spatiotemporal architecture of cortical receptive fields and its impact on multisensory interactions. Exp Brain Res 2009; 198:127-36. [PMID: 19308362 DOI: 10.1007/s00221-009-1772-y] [Citation(s) in RCA: 44] [Impact Index Per Article: 2.9] [Received: 10/23/2008] [Accepted: 03/05/2009] [Indexed: 11/29/2022]
Abstract
Recent electrophysiology studies have suggested that neuronal responses to multisensory stimuli may possess a unique temporal signature. To evaluate this temporal dynamism, unisensory and multisensory spatiotemporal receptive fields (STRFs) of neurons in the cortex of the cat anterior ectosylvian sulcus were constructed. Analyses revealed that the multisensory STRFs of these neurons differed significantly from the component unisensory STRFs and their linear summation. Most notably, multisensory responses were found to have higher peak firing rates, shorter response latencies, and longer discharge durations. More importantly, multisensory STRFs were characterized by two distinct temporal phases of enhanced integration that reflected the shorter response latencies and longer discharge durations. These findings further our understanding of the temporal architecture of cortical multisensory processing, and thus provide important insights into the possible functional role(s) played by multisensory cortex in spatially directed perceptual processes.
Affiliation(s)
- David W Royal
- Kennedy Center for Research on Human Development, Vanderbilt University, Nashville, TN 37232, USA.
34
Steenken R, Colonius H, Diederich A, Rach S. Visual–auditory interaction in saccadic reaction time: Effects of auditory masker level. Brain Res 2008; 1220:150-6. [PMID: 17900544] [DOI: 10.1016/j.brainres.2007.08.034]
Abstract
Saccadic reaction time (SRT) to a visual target tends to be shorter when auditory stimuli are presented in close temporal and spatial proximity, even when subjects are instructed to ignore the auditory non-target (focused attention paradigm). Observed SRT reductions typically range between 10 and 50 ms and decrease as spatial disparity between the stimuli increases. Previous studies using pairs of visual and auditory stimuli differing in both azimuth and vertical position suggest that the amount of SRT facilitation decreases not with the physical but with the perceivable distance between visual target and auditory accessory. Here we probe this hypothesis by presenting an additional white-noise masker background of 3 s duration. Increasing the masker level had a diametrical effect on SRTs in spatially coincident vs. disparate stimulus configurations: saccadic responses to coincident visual-auditory stimuli are slowed down, whereas saccadic responses to disparate stimuli are speeded up. As verified in a separate auditory localization task, localizability of the auditory accessory decreases with masker level. The SRT results are accounted for by a conceptual model positing that increasing masker level enlarges the area of possible auditory stimulus locations: it implies that perceivable distances decrease for disparate stimulus configurations and increase for coincident stimulus pairs.
Affiliation(s)
- Rike Steenken
- Department of Psychology, University of Oldenburg, P.O. Box 2503, 26111 Oldenburg, Germany.
35
When a high-intensity "distractor" is better then a low-intensity one: modeling the effect of an auditory or tactile nontarget stimulus on visual saccadic reaction time. Brain Res 2008; 1242:219-30. [PMID: 18573240] [DOI: 10.1016/j.brainres.2008.05.081]
Abstract
In a focused attention task saccadic reaction time (SRT) to a visual target stimulus (LED) was measured with an auditory (white noise burst) or tactile (vibration applied to palm) nontarget presented in ipsi- or contralateral position to the target. Crossmodal facilitation of SRT was observed under all configurations and stimulus onset asynchrony (SOA) values ranging from -250 ms (nontarget prior to target) to 50 ms. This study specifically addressed the effect of varying nontarget intensity. While facilitation effects for auditory nontargets are somewhat more pronounced than for tactile ones, decreasing intensity slightly reduced facilitation for both types of nontargets. The time course of crossmodal mean SRT over SOA and the pattern of facilitation observed here suggest the existence of two distinct underlying mechanisms: (a) a spatially unspecific crossmodal warning triggered by the nontarget being detected early enough before the arrival of the target plus (b) a spatially specific multisensory integration mechanism triggered by the target processing time terminating within the time window of integration. It is shown that the time window of integration (TWIN) model introduced by the authors gives a reasonable quantitative account of the data relating observed SRT to the unobservable probability of integration and crossmodal warning for each SOA value under a high and low intensity level of the nontarget.
36
Bolognini N, Leo F, Passamonti C, Stein BE, Làdavas E. Multisensory-mediated auditory localization. Perception 2008; 36:1477-85. [PMID: 18265830] [DOI: 10.1068/p5846]
Abstract
Multisensory integration is a powerful mechanism for maximizing sensitivity to sensory events. We examined its effects on auditory localization in healthy human subjects. The specific objective was to test whether the relative intensity and location of a seemingly irrelevant visual stimulus would influence auditory localization in accordance with the inverse effectiveness and spatial rules of multisensory integration that have been developed from neurophysiological studies with animals [Stein and Meredith, 1993 The Merging of the Senses (Cambridge, MA: MIT Press)]. Subjects were asked to localize a sound in one condition in which a neutral visual stimulus was either above threshold (supra-threshold) or at threshold. In both cases the spatial disparity of the visual and auditory stimuli was systematically varied. The results reveal that stimulus salience is a critical factor in determining the effect of a neutral visual cue on auditory localization. Visual bias and, hence, perceptual translocation of the auditory stimulus appeared when the visual stimulus was supra-threshold, regardless of its location. However, this was not the case when the visual stimulus was at threshold. In this case, the influence of the visual cue was apparent only when the two cues were spatially coincident and resulted in an enhancement of stimulus localization. These data suggest that the brain uses multiple strategies to integrate multisensory information.
Affiliation(s)
- Nadia Bolognini
- Department of Psychology, University of Milano-Bicocca, via dell'Innovazione 10, 20126 Milan, Italy.
37
Steenken R, Diederich A, Colonius H. Time course of auditory masker effects: tapping the locus of audiovisual integration? Neurosci Lett 2008; 435:78-83. [PMID: 18355963] [DOI: 10.1016/j.neulet.2008.02.017]
Abstract
In a focused attention paradigm, saccadic reaction time (SRT) to a visual target tends to be shorter when an auditory accessory stimulus is presented in close temporal and spatial proximity. Observed SRT reductions typically diminish as spatial disparity between the stimuli increases. Here a visual target LED (500 ms duration) was presented above or below the fixation point and a simultaneously presented auditory accessory (2 ms duration) could appear at the same or the opposite vertical position. SRT enhancement was about 35 ms in the coincident and 10 ms in the disparate condition. In order to further probe the audiovisual integration mechanism, in addition to the auditory non-target an auditory masker (200 ms duration) was presented before, simultaneous to, or after the accessory stimulus. In all interstimulus interval (ISI) conditions, SRT enhancement went down both in the coincident and disparate configuration, but this decrement was fairly stable across the ISI values. If multisensory integration solely relied on a feed-forward process, one would expect a monotonic decrease of the masker effect with increasing ISI in the backward masking condition. It is therefore conceivable that the relatively high-energetic masker causes a broad excitatory response of SC neurons. During this state, the spatial audio-visual information from multisensory association areas is fed back and merged with the spatially unspecific excitation pattern induced by the masker. Assuming that a certain threshold of activation has to be achieved in order to generate a saccade in the correct direction, the blurred joint output of noise and spatial audio-visual information needs more time to reach this threshold prolonging SRT to an audio-visual object.
Affiliation(s)
- Rike Steenken
- Department of Psychology, University of Oldenburg, P.O. Box 2503, 26111 Oldenburg, Germany.
38
Diederich A, Colonius H. Crossmodal interaction in saccadic reaction time: separating multisensory from warning effects in the time window of integration model. Exp Brain Res 2007; 186:1-22. [PMID: 18004552] [DOI: 10.1007/s00221-007-1197-4]
Abstract
In a focused attention task saccadic reaction time (SRT) to a visual target stimulus (LED) was measured with an auditory (white noise burst) or tactile (vibration applied to palm) non-target presented in ipsi- or contralateral position to the target. Crossmodal facilitation of SRT was observed under all configurations and stimulus onset asynchrony (SOA) values ranging from -500 (non-target prior to target) to 0 ms, but the effect was larger for ipsi- than for contralateral presentation within an SOA range from -200 ms to 0. The time-window-of-integration (TWIN) model (Colonius and Diederich in J Cogn Neurosci 16:1000, 2004) is extended here to separate the effect of a spatially unspecific warning effect of the non-target from a spatially specific and genuine multisensory integration effect.
Affiliation(s)
- Adele Diederich
- School of Humanities and Social Sciences, Jacobs University Bremen, P.O. Box 750 561, 28725, Bremen, Germany.
39
Diederich A, Colonius H. Modeling spatial effects in visual-tactile saccadic reaction time. Percept Psychophys 2007; 69:56-67. [PMID: 17515216] [DOI: 10.3758/bf03194453]
Abstract
Saccadic reaction time (SRT) to visual targets tends to be shorter when nonvisual stimuli are presented in close temporal or spatial proximity, even when subjects are instructed to ignore the accessory input. Here, we investigate visual-tactile interaction effects on SRT under varying spatial configurations. SRT to bimodal stimuli was reduced by up to 30 msec, in comparison with responses to unimodal visual targets. In contrast to previous findings, the amount of multisensory facilitation did not decrease with increases in the physical distance between the target and the nontarget but depended on (1) whether the target and the nontarget were presented in the same hemifield (ipsilateral) or in different hemifields (contralateral), (2) the eccentricity of the stimuli, and (3) the frequency of the vibrotactile nontarget. The time-window-of-integration (TWIN) model for SRT (Colonius & Diederich, 2004) is shown to yield an explicit characterization of the observed multisensory spatial interaction effects through the removal of the peripheral-processing effects of stimulus location and tactile frequency.
Affiliation(s)
- Adele Diederich
- School of Humanities and Social Sciences, Jacobs University Bremen, Bremen, Germany.
40
Abstract
Congruent information conveyed over different sensory modalities often facilitates a variety of cognitive processes, including speech perception (Sumby & Pollack, 1954). Since auditory processing is substantially faster than visual processing, auditory-visual integration can occur over a surprisingly wide temporal window (Stein, 1998). We investigated the processing architecture mediating the integration of acoustic digit names with corresponding symbolic visual forms. The digits "1" or "2" were presented in auditory, visual, or bimodal format at several stimulus onset asynchronies (SOAs; 0, 75, 150, and 225 msec). The reaction times (RTs) for echoing unimodal auditory stimuli were approximately 100 msec faster than the RTs for naming their visual forms. Correspondingly, bimodal facilitation violated race model predictions, but only at SOA values greater than 75 msec. These results indicate that the acoustic and visual information are pooled prior to verbal response programming. However, full expression of this bimodal summation is dependent on the central coincidence of the visual and auditory inputs. These results are considered in the context of studies demonstrating multimodal activation of regions involved in speech production.
41
Diederich A, Colonius H. Why two "Distractors" are better than one: modeling the effect of non-target auditory and tactile stimuli on visual saccadic reaction time. Exp Brain Res 2007; 179:43-54. [PMID: 17216154] [DOI: 10.1007/s00221-006-0768-0]
Abstract
Saccadic reaction time (SRT) was measured in a focused attention task with a visual target stimulus (LED) and auditory (white noise burst) and tactile (vibration applied to palm) stimuli presented as non-targets at five different onset times (SOAs) with respect to the target. Mean SRT was reduced (i) when the number of non-targets was increased and (ii) when target and non-targets were all presented in the same hemifield; (iii) this facilitation first increases and then decreases as the time point of presenting the non-targets is shifted from early to late relative to the target presentation. These results are consistent with the time-window-of-integration (TWIN) model (Colonius and Diederich in J Cogn Neurosci 16:1000-1009, 2004) which distinguishes a peripheral stage of independent sensory channels racing against each other from a second stage of neural integration of the input and preparation of an oculomotor response. Cross-modal interaction manifests itself in an increase or decrease of second stage processing time. For the first time, without making specific distributional assumptions on the processing times, TWIN is shown to yield numerical estimates for the facilitative effects of the number of non-targets and of the spatial configuration of target and non-targets. More generally, the TWIN model framework suggests that multisensory integration is a function of unimodal stimulus properties, like intensity, in the first stage and of cross-modal stimulus properties, like spatial disparity, in the second stage.
Affiliation(s)
- Adele Diederich
- School of Humanities and Social Sciences, International University Bremen, P.O. Box 750 561, 28725 Bremen, Germany.
42
Whitchurch EA, Takahashi TT. Combined auditory and visual stimuli facilitate head saccades in the barn owl (Tyto alba). J Neurophysiol 2006; 96:730-45. [PMID: 16672296] [DOI: 10.1152/jn.00072.2006]
Abstract
The barn owl naturally responds to an auditory or visual stimulus in its environment with a quick head turn toward the source. We measured these head saccades evoked by auditory, visual, and simultaneous, co-localized audiovisual stimuli to quantify multisensory interactions in the barn owl. Stimulus levels ranged from near to well above saccadic threshold. In accordance with previous human psychophysical findings, the owl's saccade reaction times (SRTs) and errors to unisensory stimuli were inversely related to stimulus strength. Auditory saccades characteristically had shorter reaction times but were less accurate than visual saccades. Audiovisual trials, over a large range of tested stimulus combinations, had auditory-like SRTs and visual-like errors, suggesting that barn owls are able to use both auditory and visual cues to produce saccades with the shortest possible SRT and greatest accuracy. These results support a model of sensory integration in which the faster modality initiates the saccade and the slower modality remains available to refine saccade trajectory.
43
Colonius H, Diederich A. The race model inequality: Interpreting a geometric measure of the amount of violation. Psychol Rev 2006; 113:148-54. [PMID: 16478305] [DOI: 10.1037/0033-295x.113.1.148]
Abstract
An inequality by J. O. Miller (1982) has become the standard tool to test the race model for redundant signals reaction times (RTs), as an alternative to a neural summation mechanism. It stipulates that the RT distribution function to redundant stimuli is never larger than the sum of the distribution functions for 2 single stimuli. When many different experimental conditions are to be compared, a numerical index of violation is very desirable. Widespread practice is to take a certain area with contours defined by the distribution functions for single and redundant stimuli. Here this area is shown to equal the difference between 2 mean RT values. This result provides an intuitive interpretation of the index and makes it amenable to simple statistical testing. An extension of this approach to 3 redundant signals is presented.
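Miller's inequality can be checked directly on reaction-time samples by comparing the empirical redundant-signals CDF with the sum of the two unimodal CDFs. A minimal sketch in Python; the RT distributions below are simulated, hypothetical stand-ins for real data, not values from any study cited here:

```python
import numpy as np

rng = np.random.default_rng(0)

def ecdf(samples, t):
    """Empirical cumulative distribution function of `samples`, evaluated at times `t`."""
    return np.searchsorted(np.sort(samples), t, side="right") / len(samples)

# Hypothetical RT samples in ms (illustrative distributions only).
rt_a = rng.normal(230, 30, 5000)                    # auditory-alone trials
rt_v = rng.normal(260, 35, 5000)                    # visual-alone trials
rt_av = np.minimum(rng.normal(230, 30, 5000),
                   rng.normal(260, 35, 5000)) - 15  # redundant trials, coactivation-like speedup

t = np.linspace(100, 400, 301)
bound = np.minimum(ecdf(rt_a, t) + ecdf(rt_v, t), 1.0)  # Miller's race model bound
violation = ecdf(rt_av, t) - bound                      # positive wherever the race model fails

print("max violation:", violation.max())
print("race model violated:", bool((violation > 0).any()))
```

Per the result discussed in the abstract, the area under the positive part of `violation` (integrated over `t`) can be read as a difference between two mean RTs, which is what makes it amenable to simple statistical testing.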
Affiliation(s)
- Hans Colonius
- Department of Psychology, Oldenburg University, Oldenburg, Germany.
44
Bolognini N, Rasi F, Coccia M, Làdavas E. Visual search improvement in hemianopic patients after audio-visual stimulation. Brain 2005; 128:2830-42. [PMID: 16219672] [DOI: 10.1093/brain/awh656]
Abstract
One of the most effective techniques in the rehabilitation of visual field defects is based on the implementation of oculomotor strategies to compensate for visual field loss. In the present study we developed a new rehabilitation approach based on audio-visual stimulation of the visual field. Since it has been demonstrated that audio-visual interaction in multisensory neurons can temporarily improve visual perception in patients with hemianopia, the aim of the present study was to verify whether systematic audio-visual stimulation might induce a long-lasting amelioration of visual field disorders. Eight patients with chronic visual field defects were trained to detect the presence of visual targets. During the training, the visual stimulus could be presented alone (unimodal condition) or together with an acoustic stimulus (crossmodal conditions). In the crossmodal conditions, the spatial disparity between the visual and the acoustic stimuli was systematically varied (0, 16 and 32 degrees of disparity). Furthermore, the temporal interval between the acoustic stimulus and the visual target in the crossmodal conditions was gradually reduced from 500 to 0 ms. Patients underwent the treatment for 4 h daily over a period of nearly 2 weeks. The results showed a progressive improvement of visual detections during the training and an improvement of visual oculomotor exploration that allowed patients to compensate efficiently for the loss of vision. More interestingly, treatment gains transferred to functional measures assessing visual field exploration and to daily-life activities, and remained stable at the 1-month follow-up session. These findings are very promising with respect to the possibility of exploiting human multisensory capabilities to recover from unimodal sensory impairments.
Affiliation(s)
- Nadia Bolognini
- Dipartimento di Psicologia, Università degli Studi di Bologna, Bologna, Italy
45
Kirchner H, Colonius H. Interstimulus contingency facilitates saccadic responses in a bimodal go/no-go task. Brain Res Cogn Brain Res 2005; 25:261-72. [PMID: 16040236] [DOI: 10.1016/j.cogbrainres.2005.06.006]
Abstract
The saccadic response to a suddenly appearing visual target stimulus is faster when an accessory auditory stimulus is presented in its spatiotemporal proximity. This multisensory facilitation of reaction time is usually considered a mandatory bottom-up process. Here, we report that it can be modulated by the predictability of the target location provided by an accessory stimulus, thereby indicating a form of top-down processing. Subjects were asked to make a saccade in the direction of a visual target randomly appearing left or right from fixation. An accessory auditory stimulus was presented either at the same location or opposite to the target, with the probability varying over blocks of presentation. Thus, the auditory stimulus contained probabilistic information about the target location (interstimulus contingency). A certain percentage of the trials were catch trials in which the auditory accompanying stimulus (Experiment 1) or the visual target (Experiment 2) was presented alone and the subjects were asked to withhold their response. In particular with visual catch trials, varying the predictability of target location resulted in reaction time facilitation in the bimodal trials, with both high (80%) and low predictability (20%), but only when both stimuli were presented within a small time window (40 ms). As subjects could not possibly follow the task instructions in this short period explicitly, we conclude that they utilized the interstimulus contingency information implicitly, thus revealing an extremely fast involuntary top-down control on saccadic eye movements.
Affiliation(s)
- Holle Kirchner
- Centre de Recherche Cerveau et Cognition, Faculté de Medecine, F-31062 Toulouse Cedex, France.
46
Perrault TJ, Vaughan JW, Stein BE, Wallace MT. Superior Colliculus Neurons Use Distinct Operational Modes in the Integration of Multisensory Stimuli. J Neurophysiol 2005; 93:2575-86. [PMID: 15634709] [DOI: 10.1152/jn.00926.2004]
Abstract
Many neurons in the superior colliculus (SC) integrate sensory information from multiple modalities, giving rise to significant response enhancements. Although enhanced multisensory responses have been shown to depend on the spatial and temporal relationships of the stimuli as well as on their relative effectiveness, these factors alone do not appear sufficient to account for the substantial heterogeneity in the magnitude of the multisensory products that have been observed. Toward this end, the present experiments have revealed that there are substantial differences in the operations used by different multisensory SC neurons to integrate their cross-modal inputs, suggesting that intrinsic differences in these neurons may also play an important deterministic role in multisensory integration. In addition, the integrative operation employed by a given neuron was found to be well correlated with the neuron's dynamic range. In total, four categories of SC neurons were identified based on how their multisensory responses changed relative to the predicted addition of the two unisensory inputs as stimulus effectiveness was altered. Despite the presence of these categories, a general rule was that the most robust multisensory enhancements were seen with combinations of the least effective unisensory stimuli. Together, these results provide a better quantitative picture of the integrative operations performed by multisensory SC neurons and suggest mechanistic differences in the way in which these neurons synthesize cross-modal information.
Affiliation(s)
- Thomas J Perrault
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, North Carolina 27157, USA
47
Abstract
Motion is a potent sub-modality of vision. Motion cues alone can be used to segment images into figure and ground and break camouflage. Specific patterns of motion support vivid percepts of form, guide locomotion by specifying directional heading and the passage of objects, and in case of an impending collision, the time to impact. Visual motion also drives smooth pursuit eye movements (SPEMs) that serve to stabilize the retinal image of objects in motion. In contrast, the auditory system does not appear to be particularly sensitive to motion. We review the ambiguous status of auditory motion processing from the psychophysical and electrophysiological perspectives. We then report the results of two experiments that use ocular tracking performance as an objective measure of the perception of auditory motion in humans. We examine ocular tracking of auditory motion, visual motion, combined auditory + visual motion and imagined motion in both the frontal plane and in depth. The results demonstrate that ocular tracking of auditory motion is no better than ocular tracking of imagined motion. These results are consistent with the suggestion that, unlike the visual system, the human auditory system is not endowed with low-level motion sensitive elements. We hypothesize however, that auditory information may gain access to a recently described high-level motion processing system that is heavily dependent on 'top-down' influences, including attention.
Affiliation(s)
- Leanne Boucher
- Dartmouth College, Department of Psychological and Brain Sciences, 6207 Moore Hall, Hanover, NH 03755, USA
48
Diederich A, Colonius H. Bimodal and trimodal multisensory enhancement: Effects of stimulus onset and intensity on reaction time. Percept Psychophys 2004; 66:1388-404. [PMID: 15813202] [DOI: 10.3758/bf03195006]
Abstract
Manual reaction times to visual, auditory, and tactile stimuli presented simultaneously, or with a delay, were measured to test for multisensory interaction effects in a simple detection task with redundant signals. Responses to trimodal stimulus combinations were faster than those to bimodal combinations, which in turn were faster than reactions to unimodal stimuli. Response enhancement increased with decreasing auditory and tactile stimulus intensity and was a U-shaped function of stimulus onset asynchrony. Distribution inequality tests indicated that the multisensory interaction effects were larger than predicted by separate activation models, including the difference between bimodal and trimodal response facilitation. The results are discussed with respect to previous findings in a focused attention task and are compared with multisensory integration rules observed in bimodal and trimodal superior colliculus neurons in the cat and monkey.
Affiliation(s)
- Adele Diederich
- School of Humanities and Social Sciences, International University Bremen, D-28725 Bremen, Germany.
49
Sakata S, Yamamori T, Sakurai Y. Behavioral studies of auditory-visual spatial recognition and integration in rats. Exp Brain Res 2004; 159:409-17. [PMID: 15249987] [DOI: 10.1007/s00221-004-1962-6]
Abstract
Rodents are useful animal models in the study of the molecular and cellular mechanisms underlying various neural functions. For studying behavioral properties associated with multisensory functions in rats, we measured the speed and accuracy of target detection by the reaction-time procedure. In the first experiment, we utilized simple two-alternative-choice tasks, in which spatial cues are visual or auditory modalities, and conducted a cross-modal transfer test in order to determine whether rats recognize amodal spatial information. Rats showed successful performance in the cross-modal transfer test and the speed to respond to sensory stimuli was constant under a rule-consistent condition despite the change in cue modality. In the second experiment, we developed audiovisual two-alternative-choice tasks, in which both auditory and visual stimuli were simultaneously presented but one of the two modalities was task-relevant, in order to determine whether the response to the sensory stimulation of one modality is enhanced by the stimulation of a different modality. If bimodal stimuli were spatially coincident, the speed for detecting the relevant stimulus was shortened and the extent of the effect was comparable to those in past studies of humans and other mammals. These results indicate the cross-modal spatial abilities of rats and our present paradigms may provide useful behavioral tasks for studying the neural bases of multisensory processing and integration in rats.
Affiliation(s)
- Shuzo Sakata
- Division of Speciation Mechanisms 1, National Institute for Basic Biology, 38 Nishigonaka, Myodaiji, 444-8585 Okazaki, Japan
50
Colonius H, Diederich A. Multisensory Interaction in Saccadic Reaction Time: A Time-Window-of-Integration Model. J Cogn Neurosci 2004; 16:1000-9. [PMID: 15298787] [DOI: 10.1162/0898929041502733]
Abstract
Saccadic reaction time to visual targets tends to be faster when stimuli from another modality (in particular, audition and touch) are presented in close temporal or spatial proximity even when subjects are instructed to ignore the accessory input (focused attention task). Multisensory interaction effects measured in neural structures involved in saccade generation (in particular, the superior colliculus) have demonstrated a similar spatio-temporal dependence. Neural network models of multisensory spatial integration have been shown to generate convergence of the visual, auditory, and tactile reference frames and the sensorimotor coordinate transformations necessary for coordinated head and eye movements. However, because these models do not capture the temporal coincidences critical for multisensory integration to occur, they cannot easily predict multisensory effects observed in behavioral data such as saccadic reaction times. This article proposes a quantitative stochastic framework, the time-window-of-integration model, to account for the temporal rules of multisensory integration. Saccadic responses collected from a visual–tactile focused attention task are shown to be consistent with the time-window-of-integration model predictions.
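The TWIN model's core rule — cross-modal integration occurs only when the non-target's peripheral process wins the first-stage race and the target's peripheral process terminates within the time window — lends itself to a short Monte Carlo sketch. All parameter values below are illustrative assumptions, not estimates fitted in the paper:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 200_000

# Illustrative parameters in ms (assumptions, not fitted values).
tau = -50        # SOA: non-target onset 50 ms before the visual target
omega = 200      # width of the time window of integration
delta = 40       # second-stage facilitation when integration occurs
mu_second = 120  # mean second-stage (saccade preparation) time

# First stage: independent peripheral processing times racing each other.
V = rng.exponential(70, n)   # visual target channel
A = rng.exponential(50, n)   # auditory non-target channel

# Integration occurs iff the non-target wins the peripheral race and the
# target's peripheral process terminates within the window omega.
integrated = (A + tau < V) & (V <= A + tau + omega)
p_int = integrated.mean()

# Second stage: integrated trials are sped up by delta.
srt = V + mu_second - delta * integrated
print(f"P(integration) = {p_int:.3f}")
print(f"mean SRT = {srt.mean():.1f} ms vs. unimodal {V.mean() + mu_second:.1f} ms")
```

Sweeping `tau` in such a simulation reproduces the model's signature prediction: the probability of integration, and hence the mean facilitation, varies non-monotonically with SOA.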
Affiliation(s)
- Hans Colonius
- Department of Psychology, Universitaet Oldenburg, Germany.