201
Hugenschmidt CE, Hayasaka S, Peiffer AM, Laurienti PJ. Applying capacity analyses to psychophysical evaluation of multisensory interactions. Information Fusion 2010; 11:12-20. [PMID: 20161039; PMCID: PMC2753979; DOI: 10.1016/j.inffus.2009.04.004]
Abstract
Determining when, if, and how information from separate sensory channels has been combined is a fundamental goal of research on multisensory processing in the brain. This can be a particular challenge in psychophysical data, as there is no direct recording of neural output. The most common way to characterize multisensory interactions in behavioral data is to compare responses to multisensory stimulation with the race model, a model of parallel, independent processing constructed from the probability of responses to the two unisensory stimuli that make up the multisensory stimulus. If observed multisensory reaction times are faster than those predicted by the model, it is inferred that information from the two channels is being combined rather than processed independently. Recently, behavioral research has been published employing capacity analyses, in which comparisons between two conditions are carried out at the level of the integrated hazard function. Capacity analyses seem to be a particularly appealing technique for evaluating multisensory functioning, as they describe relationships between conditions across the entire distribution curve and are relatively easy and intuitive to interpret. The current paper presents a capacity analysis of a behavioral data set previously analyzed using the race model. While applications of capacity analyses are still somewhat limited due to their novelty, it is hoped that this exploration of capacity and race model analyses will encourage the use of this promising new technique both in multisensory research and in other applicable fields.
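The comparison described above can be sketched numerically. In a minimal version (function names and the simulated RT samples are illustrative, not the paper's data), the integrated hazard H(t) is estimated as -log S(t) from the empirical survivor function, and the first-terminating ("OR") capacity coefficient of Townsend and Nozawa is C(t) = H_AV(t) / (H_A(t) + H_V(t)):

```python
import numpy as np

def integrated_hazard(rts, t):
    """H(t) = -log S(t), estimated from the empirical survivor function."""
    rts = np.asarray(rts)
    surv = np.mean(rts > t)                  # empirical S(t)
    surv = max(surv, 1.0 / (rts.size + 1))   # guard against log(0)
    return -np.log(surv)

def capacity_or(rt_av, rt_a, rt_v, t):
    """OR capacity coefficient C(t) = H_AV / (H_A + H_V)."""
    denom = integrated_hazard(rt_a, t) + integrated_hazard(rt_v, t)
    return integrated_hazard(rt_av, t) / denom if denom > 0 else float("nan")

# Illustrative RT samples (ms): redundant-target responses are fastest.
rng = np.random.default_rng(0)
rt_a = 300 + rng.exponential(80, 500)
rt_v = 320 + rng.exponential(80, 500)
rt_av = 260 + rng.exponential(50, 500)

# C(t) > 1 at time t suggests supercapacity (faster than parallel
# independent channels); C(t) < 1 suggests limited capacity.
print(capacity_or(rt_av, rt_a, rt_v, t=400.0))
```

In practice C(t) is evaluated over a grid of t values covering the RT distribution, which is what gives the analysis its whole-distribution character.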
Affiliation(s)
- Christina E. Hugenschmidt
- Wake Forest University School of Medicine Department of Radiology, Medical Center Boulevard, Winston-Salem, NC 27103, USA
- Neuroscience Program, Medical Center Boulevard, Winston-Salem, NC 27103, USA
- Satoru Hayasaka
- Wake Forest University School of Medicine Department of Radiology, Medical Center Boulevard, Winston-Salem, NC 27103, USA
- Ann M. Peiffer
- Wake Forest University School of Medicine Department of Radiology, Medical Center Boulevard, Winston-Salem, NC 27103, USA
- Paul J. Laurienti
- Wake Forest University School of Medicine Department of Radiology, Medical Center Boulevard, Winston-Salem, NC 27103, USA
202
The auditory redundant signals effect: An influence of number of stimuli or number of percepts? Atten Percept Psychophys 2009; 71:1375-84. [PMID: 19633352; DOI: 10.3758/app.71.6.1375]
203
Angelaki DE, Gu Y, DeAngelis GC. Multisensory integration: psychophysics, neurophysiology, and computation. Curr Opin Neurobiol 2009; 19:452-8. [PMID: 19616425; DOI: 10.1016/j.conb.2009.06.008]
Abstract
Fundamental observations and principles derived from traditional physiological studies of multisensory integration have been difficult to reconcile with computational and psychophysical studies that share the foundation of probabilistic (Bayesian) inference. We review recent work on multisensory integration, focusing on experiments that bridge single-cell electrophysiology, psychophysics, and computational principles. These studies show that multisensory (visual-vestibular) neurons can account for near-optimal cue integration during the perception of self-motion. Unlike the nonlinear (superadditive) interactions emphasized in some previous studies, visual-vestibular neurons accomplish near-optimal cue integration through subadditive linear summation of their inputs, consistent with recent computational theories. Important issues remain to be resolved, including the observation that variations in cue reliability appear to change the weights that neurons apply to their different sensory inputs.
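At the behavioral level, the near-optimal cue integration reviewed here reduces, for independent Gaussian cues, to reliability-weighted linear combination: each cue is weighted by its inverse variance, and the combined estimate is more reliable than either cue alone. A minimal sketch (variable names and values are illustrative):

```python
def integrate_cues(est_vis, var_vis, est_vest, var_vest):
    """Reliability-weighted combination of two independent Gaussian cues."""
    w_vis = (1 / var_vis) / (1 / var_vis + 1 / var_vest)
    w_vest = 1.0 - w_vis
    combined_est = w_vis * est_vis + w_vest * est_vest
    # Precisions (inverse variances) add, so combined_var <= min(var).
    combined_var = 1.0 / (1 / var_vis + 1 / var_vest)
    return combined_est, combined_var

# Example: heading estimates (deg) from a visual and a vestibular cue.
est, var = integrate_cues(10.0, 4.0, 14.0, 8.0)
print(est, var)
```

In this example the visual cue (variance 4) receives twice the weight of the vestibular cue (variance 8), and the combined variance (8/3) falls below both unisensory variances, which is the signature of near-optimal integration the paper tests at the neuronal level.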
Affiliation(s)
- Dora E Angelaki
- Department of Anatomy & Neurobiology, Washington University School of Medicine, St. Louis, MO 63110, USA.
204
Abstract
Pooling and synthesizing signals across different senses often enhances responses to the event from which they are derived. Here, we examine whether multisensory response enhancements are attributable to a redundant target effect (two stimuli rather than one) or if there is some special quality inherent in the combination of cues from different senses. To test these possibilities, the performance of animals in localizing and detecting spatiotemporally concordant visual and auditory stimuli was examined when these stimuli were presented individually (visual or auditory) or in cross-modal (visual-auditory) and within-modal (visual-visual, auditory-auditory) combinations. Performance enhancements proved to be far greater for combinations of cross-modal than within-modal stimuli and support the idea that the behavioral products derived from multisensory integration are not attributable to simple target redundancy. One likely explanation is that whereas cross-modal signals offer statistically independent samples of the environment, within-modal signals can exhibit substantial covariance, and consequently multisensory integration can yield more substantial error reduction than unisensory integration.
205
Ma WJ, Zhou X, Ross LA, Foxe JJ, Parra LC. Lip-reading aids word recognition most in moderate noise: a Bayesian explanation using high-dimensional feature space. PLoS One 2009; 4:e4638. [PMID: 19259259; PMCID: PMC2645675; DOI: 10.1371/journal.pone.0004638]
Abstract
Watching a speaker's facial movements can dramatically enhance our ability to comprehend words, especially in noisy environments. From a general doctrine of combining information from different sensory modalities (the principle of inverse effectiveness), one would expect that the visual signals would be most effective at the highest levels of auditory noise. In contrast, we find, in accord with a recent paper, that visual information improves performance more at intermediate levels of auditory noise than at the highest levels, and we show that a novel visual stimulus containing only temporal information does the same. We present a Bayesian model of optimal cue integration that can explain these conflicts. In this model, words are regarded as points in a multidimensional space and word recognition is a probabilistic inference process. When the dimensionality of the feature space is low, the Bayesian model predicts inverse effectiveness; when the dimensionality is high, the enhancement is maximal at intermediate auditory noise levels. When the auditory and visual stimuli differ slightly in high noise, the model makes a counterintuitive prediction: as sound quality increases, the proportion of reported words corresponding to the visual stimulus should first increase and then decrease. We confirm this prediction in a behavioral experiment. We conclude that auditory-visual speech perception obeys the same notion of optimality previously observed only for simple multisensory stimuli.
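A toy version of this setup can be simulated: words are random points in a d-dimensional feature space, each modality delivers a noisy observation of the spoken word, and recognition picks the word nearest the precision-weighted combination of the observations. This is only a sketch of the generative idea, not the paper's model or fits; all names and parameter values are illustrative:

```python
import numpy as np

def recog_accuracy(d, sigma_a, sigma_v=None, n_words=50, n_trials=2000, seed=0):
    """P(correct) for nearest-word identification in a d-dim feature space."""
    rng = np.random.default_rng(seed)
    words = rng.normal(size=(n_words, d))          # word prototypes
    true = rng.integers(0, n_words, n_trials)      # spoken word per trial
    x_a = words[true] + sigma_a * rng.normal(size=(n_trials, d))
    if sigma_v is None:                            # auditory-only condition
        x = x_a
    else:                                          # audiovisual condition
        x_v = words[true] + sigma_v * rng.normal(size=(n_trials, d))
        w = (1 / sigma_a**2) / (1 / sigma_a**2 + 1 / sigma_v**2)
        x = w * x_a + (1 - w) * x_v                # precision-weighted fusion
    dist = ((x[:, None, :] - words[None]) ** 2).sum(-1)
    return np.mean(np.argmin(dist, axis=1) == true)

print(recog_accuracy(16, 1.0), recog_accuracy(16, 1.0, 1.0))
```

Sweeping `sigma_a` at fixed `sigma_v` in this kind of simulation is the natural way to ask where the audiovisual gain peaks; per the paper, the answer depends on the dimensionality d.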
Affiliation(s)
- Wei Ji Ma
- Department of Neuroscience, Baylor College of Medicine, Houston, Texas, United States of America.
206
Diederich A, Colonius H. Crossmodal interaction in speeded responses: time window of integration model. Prog Brain Res 2009; 174:119-35. [PMID: 19477335; DOI: 10.1016/s0079-6123(09)01311-9]
Abstract
Saccadic reaction time (SRT) to a visual stimulus tends to be faster when an auditory and/or somatosensory stimulus is presented in close temporal or spatial proximity, even when participants are instructed to ignore the accessory input (focused attention task). The time course of SRT as a function of stimulus onset asynchrony (SOA) is consistent with the time-window-of-integration (TWIN) model assuming a peripheral stage of parallel processing in separate sensory channels followed by a secondary stage of multisensory integration. TWIN has been shown to account for effects of the spatial configuration of the stimuli, for the effect of increasing the number of nontargets presented together with the target, for a possible warning effect of the nontarget, for effects of increasing the intensity of the nontarget, and for the effect of background noise on multisensory integration. Moreover, it has been able to accommodate some effects of aging on multisensory integration. There is empirical support for TWIN's tenet of the separability between spatial and temporal factors on multisensory integration. Besides presenting many features of TWIN within the context of crossmodal interaction modeling efforts, some possible directions on how the TWIN framework could serve to elucidate the link between perception and action are shown.
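TWIN's central assumption can be sketched by Monte Carlo simulation: peripheral processing times race in parallel, and integration (a fixed facilitation Δ of the second stage) occurs only when the nontarget's peripheral process wins the race and the target's terminates within a window of width ω after it. Exponential peripheral stages, a deterministic second stage, and all parameter values below are illustrative simplifications, not fitted to data:

```python
import numpy as np

def twin_mean_srt(soa, n=100_000, mu_v=60.0, mu_a=40.0,
                  omega=200.0, delta=30.0, second_stage=120.0, seed=1):
    """Monte Carlo mean SRT under a simplified TWIN sketch.

    soa: nontarget onset minus target onset (ms); negative = nontarget first.
    Integration event: the nontarget's peripheral process terminates first,
    and the target's terminates within the window of width omega after it.
    """
    rng = np.random.default_rng(seed)
    v = rng.exponential(mu_v, n)             # target (visual) peripheral time
    a = rng.exponential(mu_a, n) + soa       # nontarget time on the target clock
    integrated = (a < v) & (v < a + omega)   # window-of-integration event
    rt = v + second_stage - delta * integrated
    return rt.mean()

# Facilitation is larger when the nontarget leads the target in time.
print(twin_mean_srt(soa=-100.0), twin_mean_srt(soa=50.0))
```

Plotting `twin_mean_srt` over a grid of SOA values reproduces the characteristic time course that TWIN is fitted to in the studies summarized above.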
Affiliation(s)
- Adele Diederich
- School of Humanities and Social Sciences, Jacobs University, Bremen, Germany.
207
Passamonti C, Bertini C, Làdavas E. Audio-visual stimulation improves oculomotor patterns in patients with hemianopia. Neuropsychologia 2008; 47:546-55. [PMID: 18983860; DOI: 10.1016/j.neuropsychologia.2008.10.008]
Abstract
Patients with visual field disorders often exhibit impairments in visual exploration and a typical defective oculomotor scanning behaviour. Recent evidence [Bolognini, N., Rasi, F., Coccia, M., & Làdavas, E. (2005b). Visual search improvement in hemianopic patients after audio-visual stimulation. Brain, 128, 2830-2842] suggests that systematic audio-visual stimulation of the blind hemifield can improve accuracy and search times in visual exploration, probably due to the stimulation of the Superior Colliculus (SC), an important multisensory structure involved in both the initiation and execution of saccades. The aim of the present study is to verify this hypothesis by studying the effects of multisensory training on oculomotor scanning behaviour. Oculomotor responses during a visual search task and a reading task were studied before and after visual (control) or audio-visual (experimental) training in a group of 12 patients with chronic visual field defects and 12 control subjects. Eye movements were recorded using an infra-red technique which measured a range of spatial and temporal variables. Prior to treatment, patients' performance was significantly different from that of controls in relation to fixation and saccade parameters; after Audio-Visual Training, all patients showed an improvement in ocular exploration characterized by fewer fixations and refixations, quicker and larger saccades, and reduced scanpath length. Overall, these improvements led to a reduction of total exploration time. Similarly, reading parameters were significantly affected by the training, with respect to specific impairments observed in both left- and right-hemianopia readers. Our findings provide evidence that Audio-Visual Training, by stimulating the SC, may induce a more organized pattern of visual exploration through the implementation of efficient oculomotor strategies. Interestingly, the improvement was found to be stable at a 1-year follow-up session, indicating long-term persistence of treatment effects on the oculomotor system.
208
Coello Y, Bartolo A, Amiri B, Devanne H, Houdayer E, Derambure P. Perceiving what is reachable depends on motor representations: evidence from a transcranial magnetic stimulation study. PLoS One 2008; 3:e2862. [PMID: 18682848; PMCID: PMC2483935; DOI: 10.1371/journal.pone.0002862]
Abstract
BACKGROUND Visually determining what is reachable in peripersonal space requires information about the egocentric location of objects but also information about the possibilities of action with the body, which are context dependent. The aim of the present study was to test the role of motor representations in the visual perception of peripersonal space. METHODOLOGY Seven healthy participants underwent a TMS study while performing a right-left decision (control) task or perceptually judging whether a visual target was reachable or not with their right hand. An actual grasping movement task was also included. Single-pulse TMS was delivered on 80% of the trials over the left motor and premotor cortex and over a control site (the temporo-occipital area), at 90% of the resting motor threshold and at different SOAs (50, 100, 200, or 300 ms). PRINCIPAL FINDINGS Results showed a facilitation effect of TMS on reaction times in all tasks, whatever the site stimulated, up until 200 ms after stimulus presentation. However, the facilitation effect was on average 34 ms smaller when stimulating the motor cortex in the perceptual judgement task, especially for stimuli located at the boundary of peripersonal space. CONCLUSION This study provides the first evidence that motor brain areas participate in the visual determination of what is reachable. We discuss how motor representations may feed the perceptual system with information about possible interactions with nearby objects and thus may contribute to the perception of the boundary of peripersonal space.
Affiliation(s)
- Yann Coello
- Lab URECA, EA 1059, Université de Lille-Nord de France, Lille, France.
209
Ma WJ, Pouget A. Linking neurons to behavior in multisensory perception: a computational review. Brain Res 2008; 1242:4-12. [PMID: 18602905; DOI: 10.1016/j.brainres.2008.04.082]
Abstract
A large body of psychophysical and physiological findings has characterized how information is integrated across multiple senses. This work has focused on two major issues: how do we integrate information, and when do we integrate, i.e., how do we decide if two signals come from the same source or different sources. Recent studies suggest that humans and animals use Bayesian strategies to solve both problems. With regard to how to integrate, computational studies have also started to shed light on the neural basis of this Bayes-optimal computation, suggesting that, if neuronal variability is Poisson-like, a simple linear combination of population activity is all that is required for optimality. We review both sets of developments, which together lay out a path towards a complete neural theory of multisensory perception.
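The linearity claim can be illustrated directly. Assuming independent Poisson neurons with identical Gaussian tuning in both populations (a textbook probabilistic-population-code setup, not the paper's own code), the log posterior over the stimulus is linear in the spike counts up to stimulus-independent terms, so simply summing the two population responses reproduces the product of the single-cue posteriors, i.e. the Bayes-optimal combination:

```python
import numpy as np

def log_posterior(counts, stim_grid, pref, sigma=15.0, gain=1.0):
    """Log posterior over stimulus for independent Poisson neurons with
    Gaussian tuning f_i(s) = gain * exp(-(s - pref_i)^2 / (2 sigma^2))."""
    tuning = gain * np.exp(-(stim_grid[:, None] - pref[None, :]) ** 2
                           / (2 * sigma**2))
    logp = counts @ np.log(tuning).T - tuning.sum(axis=1)  # Poisson log-lik
    return logp - logp.max()

pref = np.linspace(-90, 90, 37)     # preferred stimuli of the neurons
grid = np.linspace(-90, 90, 181)    # hypothesis grid for the stimulus
rng = np.random.default_rng(0)

# Two "cues": Poisson responses of two populations to the same stimulus s=10.
f = 5.0 * np.exp(-(10.0 - pref) ** 2 / (2 * 15.0**2))
r1, r2 = rng.poisson(f), rng.poisson(f)

# Summing counts (doubled gain) vs. multiplying single-cue posteriors:
lp_sum = log_posterior(r1 + r2, grid, pref, gain=10.0)
lp_prod = (log_posterior(r1, grid, pref, gain=5.0)
           + log_posterior(r2, grid, pref, gain=5.0))
print(grid[np.argmax(lp_sum)])
```

The two log posteriors differ only by a stimulus-independent constant, which is the computational sense in which "a simple linear combination of population activity is all that is required."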
Affiliation(s)
- Wei Ji Ma
- Department of Brain and Cognitive Sciences, University of Rochester, Rochester, NY 14627, USA.
210
When a high-intensity "distractor" is better than a low-intensity one: modeling the effect of an auditory or tactile nontarget stimulus on visual saccadic reaction time. Brain Res 2008; 1242:219-30. [PMID: 18573240; DOI: 10.1016/j.brainres.2008.05.081]
Abstract
In a focused attention task saccadic reaction time (SRT) to a visual target stimulus (LED) was measured with an auditory (white noise burst) or tactile (vibration applied to palm) nontarget presented in ipsi- or contralateral position to the target. Crossmodal facilitation of SRT was observed under all configurations and stimulus onset asynchrony (SOA) values ranging from -250 ms (nontarget prior to target) to 50 ms. This study specifically addressed the effect of varying nontarget intensity. While facilitation effects for auditory nontargets are somewhat more pronounced than for tactile ones, decreasing intensity slightly reduced facilitation for both types of nontargets. The time course of crossmodal mean SRT over SOA and the pattern of facilitation observed here suggest the existence of two distinct underlying mechanisms: (a) a spatially unspecific crossmodal warning triggered by the nontarget being detected early enough before the arrival of the target plus (b) a spatially specific multisensory integration mechanism triggered by the target processing time terminating within the time window of integration. It is shown that the time window of integration (TWIN) model introduced by the authors gives a reasonable quantitative account of the data relating observed SRT to the unobservable probability of integration and crossmodal warning for each SOA value under a high and low intensity level of the nontarget.
211
Hecht D, Reiner M, Karni A. Multisensory enhancement: gains in choice and in simple response times. Exp Brain Res 2008; 189:133-43. [PMID: 18478210; DOI: 10.1007/s00221-008-1410-0]
Abstract
Human observers can detect combinations of multisensory signals faster than each of the corresponding signals presented separately. In simple detection tasks, this facilitation in response times may reflect an enhancement in the perceptual processing stage and/or in the motor response stage. The current study compared the multisensory enhancements obtained in simple and choice response times (SRT and CRT, respectively) in bi- and tri-sensory (audio-visual-haptic) signal combinations using an identical experimental setup that differed only in the task: detecting the signals (SRT) or reporting the signals' location (CRT). Our measurements show that RTs were faster in the multisensory combination conditions than in the single-stimulus conditions and that the absolute multisensory gains were larger in CRT than in SRT. These results can be interpreted in two ways. According to a serial-stages model, the larger multisensory gains in CRT may suggest that when combinations of multisensory signals are presented, an additional enhancement occurs in the cognitive processing stages engaged in the CRT, beyond the enhancement in the perceptual and motor stages common to both SRT and CRT. Alternatively, the results suggest that multisensory enhancement reflects task-dependent interactions within and between multiple processing levels rather than facilitated processing modules. Thus, the larger absolute multisensory gains in CRT may reflect the inverse effectiveness principle, and Bayesian statistics, in that the maximal multisensory enhancements occur in the more difficult (less precise) uni-sensory conditions, i.e., in the CRT.
Affiliation(s)
- David Hecht
- The Touch Laboratory, Gutwirth Building, Department of Education in Technology and Science, Technion-Israel Institute of Technology, Haifa 32000, Israel.
212
Abstract
The brain effectively integrates multisensory information to enhance perception. For example, audiovisual stimuli typically yield faster responses than isolated unimodal ones (redundant signal effect, RSE). Here, we show that the audiovisual RSE is likely subserved by a neural site of integration (neural coactivation), rather than by an independent-channels mechanism such as race models. This neural site is probably the superior colliculus (SC), because an RSE explainable by neural coactivation does not occur with purple or blue stimuli, which are invisible to the SC; such an RSE only occurs for spatially and temporally coincident audiovisual stimuli, in strict adherence with the multisensory responses in the SC of the cat. These data suggest that audiovisual integration in humans occurs very early during sensory processing, in the SC.
213
Stein BE, Stanford TR. Multisensory integration: current issues from the perspective of the single neuron. Nat Rev Neurosci 2008; 9:255-66. [PMID: 18354398; DOI: 10.1038/nrn2331]
Abstract
For thousands of years science philosophers have been impressed by how effectively the senses work together to enhance the salience of biologically meaningful events. However, they really had no idea how this was accomplished. Recent insights into the underlying physiological mechanisms reveal that, in at least one circuit, this ability depends on an intimate dialogue among neurons at multiple levels of the neuraxis; this dialogue cannot take place until long after birth and might require a specific kind of experience. Understanding the acquisition and usage of multisensory integration in the midbrain and cerebral cortex of mammals has been aided by a multiplicity of approaches. Here we examine some of the fundamental advances that have been made and some of the challenging questions that remain.
Affiliation(s)
- Barry E Stein
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, North Carolina 27157, USA.
214
Carriere BN, Royal DW, Wallace MT. Spatial heterogeneity of cortical receptive fields and its impact on multisensory interactions. J Neurophysiol 2008; 99:2357-68. [PMID: 18287544; DOI: 10.1152/jn.01386.2007]
Abstract
Investigations of multisensory processing at the level of the single neuron have illustrated the importance of the spatial and temporal relationship of the paired stimuli and their relative effectiveness in determining the product of the resultant interaction. Although these principles provide a good first-order description of the interactive process, they were derived by treating space, time, and effectiveness as independent factors. In the anterior ectosylvian sulcus (AES) of the cat, previous work hinted that the spatial receptive field (SRF) architecture of multisensory neurons might play an important role in multisensory processing due to differences in the vigor of responses to identical stimuli placed at different locations within the SRF. In this study the impact of SRF architecture on cortical multisensory processing was investigated using semichronic single-unit electrophysiological experiments targeting a multisensory domain of the cat AES. The visual and auditory SRFs of AES multisensory neurons exhibited striking response heterogeneity, with SRF architecture appearing to play a major role in the multisensory interactions. The deterministic role of SRF architecture was tightly coupled to the manner in which stimulus location modulated the responsiveness of the neuron. Thus multisensory stimulus combinations at weakly effective locations within the SRF resulted in large (often superadditive) response enhancements, whereas combinations at more effective spatial locations resulted in smaller (additive/subadditive) interactions. These results provide important insights into the spatial organization and processing capabilities of cortical multisensory neurons, features that may provide important clues as to the functional roles played by this area in spatially directed perceptual processes.
Affiliation(s)
- Brian N Carriere
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, NC 27157-1010, USA.
215
Hecht D, Reiner M, Karni A. Enhancement of response times to bi- and tri-modal sensory stimuli during active movements. Exp Brain Res 2007; 185:655-65. [PMID: 17992522; DOI: 10.1007/s00221-007-1191-x]
Abstract
Simultaneous activation of two sensory modalities can improve perception and enhance performance. This multi-sensory enhancement had been previously observed only in conditions wherein participants were not performing any movement. Since tactile perception is attenuated during active movements, we investigated whether a bi- and a tri-modal enhancement can occur also when participants are presented with tactile stimuli, while engaged in active movements. Participants held a pen-like stylus and performed bidirectional writing-like movements inside a restricted workspace. During these movements participants were given a uni-modal sensory signal (visual--a thin gray line; auditory--a brief sound; haptic--a mechanical resisting force delivered through the stylus) or a bi- or tri-modal combination of these uni-modal signals, and their task was to respond, by pressing a button on the stylus, as soon as any one of these three stimuli was detected. Results showed that a combination of tri-modal signals was detected faster than any of the bi-modal combinations, which in turn were detected faster than any of the uni-modal signals. These facilitations exceeded the "Race model" predictions. A breakdown of the time gained in the bi-modal combinations by hemispace, hands and gender, provide further support for the "inverse effectiveness" principle, as the maximal bi-modal enhancements occurred for the least effective uni-modal responses.
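The "Race model" predictions referred to above are Miller's race model inequality, which generalizes to any number of channels: if the channels race independently, the redundant-signal CDF cannot exceed the sum of the unisensory CDFs at any time. A sketch of the test on simulated RTs (data illustrative; the fast coactivated responses are constructed to violate the bound):

```python
import numpy as np

def ecdf(rts, t):
    """Empirical CDF of reaction times evaluated on a grid t."""
    return np.mean(np.asarray(rts)[:, None] <= t, axis=0)

def race_violation(rt_multi, rt_unis, t_grid):
    """Positive values: the redundant-signal CDF exceeds the race-model
    bound (Miller's inequality, summed over any number of channels)."""
    bound = np.minimum(sum(ecdf(rt, t_grid) for rt in rt_unis), 1.0)
    return ecdf(rt_multi, t_grid) - bound

rng = np.random.default_rng(2)
rt_v = 250 + rng.exponential(70, 400)   # visual-only RTs (ms)
rt_a = 270 + rng.exponential(70, 400)   # auditory-only
rt_h = 290 + rng.exponential(70, 400)   # haptic-only
rt_vah = 210 + rng.exponential(40, 400) # tri-modal, very fast responses

t_grid = np.linspace(200, 500, 61)
viol = race_violation(rt_vah, [rt_v, rt_a, rt_h], t_grid)
print(viol.max())
```

A positive maximum of `viol` at some t is the standard evidence that facilitation exceeds what independent parallel channels can produce, motivating coactivation accounts like the one the abstract describes.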
Affiliation(s)
- David Hecht
- The Touch Laboratory, Department of Education in Technology and Science, Technion - Israel Institute of Technology, Gutwirth Building, Haifa 32000, Israel.
216
Impact of contingency manipulations on accessory stimulus effects. Percept Psychophys 2007; 69:1117-25. [DOI: 10.3758/bf03193949]
217
Diederich A, Colonius H. Modeling spatial effects in visual-tactile saccadic reaction time. Percept Psychophys 2007; 69:56-67. [PMID: 17515216; DOI: 10.3758/bf03194453]
Abstract
Saccadic reaction time (SRT) to visual targets tends to be shorter when nonvisual stimuli are presented in close temporal or spatial proximity, even when subjects are instructed to ignore the accessory input. Here, we investigate visual-tactile interaction effects on SRT under varying spatial configurations. SRT to bimodal stimuli was reduced by up to 30 msec, in comparison with responses to unimodal visual targets. In contrast to previous findings, the amount of multisensory facilitation did not decrease with increases in the physical distance between the target and the nontarget but depended on (1) whether the target and the nontarget were presented in the same hemifield (ipsilateral) or in different hemifields (contralateral), (2) the eccentricity of the stimuli, and (3) the frequency of the vibrotactile nontarget. The time-window-of-integration (TWIN) model for SRT (Colonius & Diederich, 2004) is shown to yield an explicit characterization of the observed multisensory spatial interaction effects through the removal of the peripheral-processing effects of stimulus location and tactile frequency.
Affiliation(s)
- Adele Diederich
- School of Humanities and Social Sciences, Jacobs University Bremen, Bremen, Germany.
218
Gillmeister H, Eimer M. Tactile enhancement of auditory detection and perceived loudness. Brain Res 2007; 1160:58-68. [PMID: 17573048; DOI: 10.1016/j.brainres.2007.03.041]
Abstract
To study the effects of touch on auditory processing, we examined whether uninformative and irrelevant tactile stimuli presented together with task-relevant sounds can improve auditory detection (Experiment 1), and enhance perceived loudness (Experiment 2). We demonstrated that irrelevant tactile signals facilitate the detection of faint tones, and increase auditory intensity ratings. These crossmodal facilitation effects were found for synchronous when compared to asynchronous auditory-tactile stimulation, and were stronger for weaker than for louder sounds. They are interpreted in terms of a multisensory integration mechanism that increases the strength of auditory signals, and adheres to the rules of inverse effectiveness and temporal (but not spatial) co-occurrence. This integration might be mediated by auditory-tactile multisensory neurons in regions of auditory association cortex that are also involved in auditory detection and loudness discrimination.
219
Rowland BA, Quessy S, Stanford TR, Stein BE. Multisensory integration shortens physiological response latencies. J Neurosci 2007; 27:5879-84. [PMID: 17537958; PMCID: PMC6672269; DOI: 10.1523/jneurosci.4986-06.2007]
Abstract
Individual superior colliculus (SC) neurons integrate information from multiple sensory sources to enhance their physiological response. The response of an SC neuron to a cross-modal stimulus combination can not only exceed the best component unisensory response but can also exceed their arithmetic sum (i.e., superadditivity). The present experiments were designed to investigate the temporal profile of multisensory integration in this model system. We found that cross-modal stimuli frequently shortened physiological response latencies (mean shift, 6.2 ms) and that response enhancement was greatest in the initial phase of the response (the phenomenon of initial response enhancement). The vast majority of the responses studied evidenced superadditive computations, most often at the beginning of the multisensory response.
Affiliation(s)
- Benjamin A Rowland
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, North Carolina 27157, USA.
|
220
|
Gondan M, Vorberg D, Greenlee MW. Modality shift effects mimic multisensory interactions: an event-related potential study. Exp Brain Res 2007; 182:199-214. [PMID: 17562033 DOI: 10.1007/s00221-007-0982-4] [Citation(s) in RCA: 19] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/13/2006] [Accepted: 05/08/2007] [Indexed: 11/26/2022]
Abstract
A frequent approach to study interactions of the auditory and the visual system is to measure event-related potentials (ERPs) to auditory, visual, and auditory-visual stimuli (A, V, AV). A nonzero result of the AV - (A + V) comparison indicates that the sensory systems interact at a specific processing stage. Two possible biases weaken the conclusions drawn by this approach: first, subtracting two ERPs from one requires that A, V, and AV do not share any common activity. We have shown before (Gondan and Röder in Brain Res 1073-1074:389-397, 2006) that the problem of common activity can be avoided using an additional tactile stimulus (T) and evaluating the ERP difference (T + TAV) - (TA + TV). A second possible confound is the modality shift effect (MSE): for example, the auditory N1 is increased if an auditory stimulus follows a visual stimulus, whereas it is smaller if the modality is unchanged (ipsimodal stimulus). Bimodal stimuli might be affected less by MSEs because at least one component always matches the preceding trial. Consequently, an apparent amplitude modulation of the N1 would be observed in AV. We tested the influence of MSEs on auditory-visual interactions by comparing the results of AV - (A + V) using (a) all stimuli and using (b) only ipsimodal stimuli. (a) and (b) differed at around 150 ms, indicating that AV - (A + V) is indeed affected by the MSE. We then demonstrate, both formally and empirically, that (T + TAV) - (TA + TV) is robust against possible biases due to the MSE.
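The two interaction contrasts compared in this abstract can be sketched in a few lines. This is an illustrative reconstruction, not the authors' code; the array shapes, condition labels, and random data are assumptions:

```python
import numpy as np

# Hypothetical trial-averaged ERPs (channels x time points) for the four
# conditions of the MSE-robust design: tactile alone (T), and tactile
# paired with auditory, visual, or both (TA, TV, TAV).
rng = np.random.default_rng(0)
erp = {cond: rng.standard_normal((32, 500))
       for cond in ("T", "TA", "TV", "TAV")}

# Classical contrast: AV - (A + V). Any activity common to all conditions
# (or a modality shift effect) survives the subtraction and masquerades as
# a multisensory interaction.
# MSE-robust contrast: (T + TAV) - (TA + TV). Activity elicited by every
# stimulus enters twice with opposite signs and therefore cancels.
interaction = (erp["T"] + erp["TAV"]) - (erp["TA"] + erp["TV"])
print(interaction.shape)  # (32, 500)
```

The cancellation is easy to verify: adding the same waveform to all four conditions leaves the (T + TAV) - (TA + TV) difference unchanged.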
Affiliation(s)
- Matthias Gondan
- Department of Psychology, University of Regensburg, 93050 Regensburg, Germany.
|
221
|
Abstract
Single-neuron studies have highlighted dramatic enhancements in neural activity consequent to multisensory integration. Most notable are 'superadditive' enhancements in which the multisensory response exceeds the sum of those evoked by the modality-specific stimulus components individually. Although all multisensory enhancements may have perceptual/behavioral consequences, superadditivity, which suggests a nonlinear combination of modality-specific influences, seems to have had a disproportionate influence within the multisensory literature. This influence has been reinforced by the increasing application of noninvasive techniques such as functional imaging and event-related potential recording, which depend on response nonlinearities to demonstrate underlying multisensory processes. In promoting the idea that many multisensory behaviors may not rely on superadditivity, we consider more recent single-neuron studies that place its incidence in context.
Affiliation(s)
- Terrence R Stanford
- Department of Neurobiology and Anatomy, Wake Forest University School of Medicine, Winston-Salem, North Carolina, USA.
|
222
|
Abstract
Congruent information conveyed over different sensory modalities often facilitates a variety of cognitive processes, including speech perception (Sumby & Pollack, 1954). Since auditory processing is substantially faster than visual processing, auditory-visual integration can occur over a surprisingly wide temporal window (Stein, 1998). We investigated the processing architecture mediating the integration of acoustic digit names with corresponding symbolic visual forms. The digits "1" or "2" were presented in auditory, visual, or bimodal format at several stimulus onset asynchronies (SOAs; 0, 75, 150, and 225 msec). The reaction times (RTs) for echoing unimodal auditory stimuli were approximately 100 msec faster than the RTs for naming their visual forms. Correspondingly, bimodal facilitation violated race model predictions, but only at SOA values greater than 75 msec. These results indicate that the acoustic and visual information are pooled prior to verbal response programming. However, full expression of this bimodal summation is dependent on the central coincidence of the visual and auditory inputs. These results are considered in the context of studies demonstrating multimodal activation of regions involved in speech production.
|
223
|
Kitagawa N, Spence C. Audiotactile multisensory interactions in human information processing. Jpn Psychol Res 2006. [DOI: 10.1111/j.1468-5884.2006.00317.x] [Citation(s) in RCA: 48] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/30/2022]
|
224
|
Whitchurch EA, Takahashi TT. Combined auditory and visual stimuli facilitate head saccades in the barn owl (Tyto alba). J Neurophysiol 2006; 96:730-45. [PMID: 16672296 DOI: 10.1152/jn.00072.2006] [Citation(s) in RCA: 41] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/22/2022] Open
Abstract
The barn owl naturally responds to an auditory or visual stimulus in its environment with a quick head turn toward the source. We measured these head saccades evoked by auditory, visual, and simultaneous, co-localized audiovisual stimuli to quantify multisensory interactions in the barn owl. Stimulus levels ranged from near to well above saccadic threshold. In accordance with previous human psychophysical findings, the owl's saccade reaction times (SRTs) and errors to unisensory stimuli were inversely related to stimulus strength. Auditory saccades characteristically had shorter reaction times but were less accurate than visual saccades. Audiovisual trials, over a large range of tested stimulus combinations, had auditory-like SRTs and visual-like errors, suggesting that barn owls are able to use both auditory and visual cues to produce saccades with the shortest possible SRT and greatest accuracy. These results support a model of sensory integration in which the faster modality initiates the saccade and the slower modality remains available to refine saccade trajectory.
|
225
|
Rach S, Diederich A. Visual-tactile integration: does stimulus duration influence the relative amount of response enhancement? Exp Brain Res 2006; 173:514-20. [PMID: 16636793 DOI: 10.1007/s00221-006-0452-4] [Citation(s) in RCA: 11] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/08/2005] [Accepted: 03/12/2006] [Indexed: 11/30/2022]
Abstract
Responses to multiple stimuli from different modalities tend to be faster than responses to each of these stimuli alone. Neurophysiological studies on higher mammals and behavioral studies on humans suggest that the relative amount of enhancement is inversely related to stimulus intensity. In two experiments, the duration of visual and tactile stimuli was varied to investigate whether duration, as a further determinant of stimulus effectiveness, is also inversely related to the relative amount of response enhancement. Visual and tactile stimuli were presented left or right of fixation, either in the same or in different hemifields. Participants were required to gaze only at visual stimuli and to ignore tactile stimuli (focused attention paradigm). Saccadic reaction times were recorded. Results from both experiments show that the relative amount of response enhancement was largest for the shortest stimulus duration and decreased with increasing stimulus duration, i.e., inverse effectiveness of stimulus duration.
Affiliation(s)
- Stefan Rach
- School of Humanities and Social Sciences, International University Bremen, Bremen, Germany.
|
226
|
Lagarde J, Kelso JAS. Binding of movement, sound and touch: multimodal coordination dynamics. Exp Brain Res 2006; 173:673-88. [PMID: 16528497 DOI: 10.1007/s00221-006-0410-1] [Citation(s) in RCA: 37] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/04/2005] [Accepted: 02/14/2006] [Indexed: 12/26/2022]
Abstract
Very little is known about the coordination of movement in combination with stimuli such as sound and touch. The present research investigates the hypothesis that both the type of action (e.g., a flexion or extension movement) and the sensory modality (e.g., auditory or tactile) determine the stability of multimodal coordination. We performed a parametric study in which the ability to synchronize movement, touch and sound was explored over a broad range of stimulus frequencies or rates. As expected, synchronization of finger movement with external auditory and tactile stimuli was successfully established and maintained across all frequencies. In the key experimental conditions, participants were instructed to synchronize peak flexion of the index finger with touch and peak extension with sound (and vice versa). In this situation, tactile and auditory stimuli were delivered counter-phase to each other. Two key effects were observed. First, switching between multimodal coordination patterns occurred, with transitions selecting one multimodal pattern (flexion with sound and extension with touch) more often than its partner. This finding indicates that the stability of multimodal coordination is influenced by both the type of action and the stimulus modality. Second, at higher rates, transitions from coherent to incoherent phase relations between touch, movement and sound occurred, attesting to the breakdown of multimodal coordination. Because timing errors in multimodal coordination were systematically altered when compared to unimodal control conditions, we are led to consider the role played by time delays in multimodal coordination dynamics.
Affiliation(s)
- J Lagarde
- Laboratory Efficience Deficience Motrice, University Montpellier-1, 700 Avenue Pic Saint Loup, 34090 Montpellier, France.
|
227
|
Colonius H, Diederich A. The race model inequality: Interpreting a geometric measure of the amount of violation. Psychol Rev 2006; 113:148-54. [PMID: 16478305 DOI: 10.1037/0033-295x.113.1.148] [Citation(s) in RCA: 86] [Impact Index Per Article: 4.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
An inequality by J. O. Miller (1982) has become the standard tool to test the race model for redundant signals reaction times (RTs), as an alternative to a neural summation mechanism. It stipulates that the RT distribution function to redundant stimuli is never larger than the sum of the distribution functions for 2 single stimuli. When many different experimental conditions are to be compared, a numerical index of violation is very desirable. Widespread practice is to take a certain area with contours defined by the distribution functions for single and redundant stimuli. Here this area is shown to equal the difference between 2 mean RT values. This result provides an intuitive interpretation of the index and makes it amenable to simple statistical testing. An extension of this approach to 3 redundant signals is presented.
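The geometric index discussed in this abstract is straightforward to compute from empirical distribution functions. Below is a minimal sketch (assumed function and variable names, hypothetical data; not the authors' code) that accumulates the positive part of F_AV(t) - [F_A(t) + F_V(t)]:

```python
import numpy as np

def race_violation_area(rt_av, rt_a, rt_v, t_grid):
    """Area by which redundant-signals RTs violate Miller's inequality.

    The race model bound is F_AV(t) <= F_A(t) + F_V(t). The index is the
    integral over t of the positive part of F_AV - (F_A + F_V); the paper
    shows this area equals a difference between two mean RT values.
    rt_av, rt_a, rt_v: 1-D arrays of reaction times (hypothetical data).
    t_grid: increasing array of evaluation times spanning the RT range.
    """
    def ecdf(rt, t):
        # Empirical distribution function evaluated on the grid
        return np.searchsorted(np.sort(rt), t, side="right") / rt.size

    diff = ecdf(rt_av, t_grid) - (ecdf(rt_a, t_grid) + ecdf(rt_v, t_grid))
    positive = np.clip(diff, 0.0, None)
    # Left-rectangle integration over the grid
    return float(np.sum(positive[:-1] * np.diff(t_grid)))

# Hypothetical RTs (ms): redundant-target responses fast enough to
# violate the race model bound
t = np.linspace(50.0, 500.0, 1000)
area = race_violation_area(np.array([120.0, 140.0]),
                           np.array([300.0, 320.0]),
                           np.array([310.0, 330.0]), t)
print(area > 0.0)  # True
```

A zero area is consistent with the race model; a positive area indicates faster redundant-signals responses than any race among independent channels allows.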
Affiliation(s)
- Hans Colonius
- Department of Psychology, Oldenburg University, Oldenburg, Germany.
|
228
|
Akerfelt A, Colonius H, Diederich A. Visual-tactile saccadic inhibition. Exp Brain Res 2005; 169:554-63. [PMID: 16328301 DOI: 10.1007/s00221-005-0168-x] [Citation(s) in RCA: 16] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/03/2004] [Accepted: 09/21/2005] [Indexed: 10/25/2022]
Abstract
In an eye movement countermanding paradigm it is demonstrated for the first time that a tactile stimulus can be an effective stop signal when human participants are to inhibit saccades to a visual target. Estimated stop signal processing times were 90-140 ms, comparable to results with auditory stop signals, but shorter than those commonly found for manual responses. Two of the three participants significantly slowed their reactions in expectation of the stop signal as revealed by a control experiment without stop signals. All participants produced slower responses in the shortest stop signal delay condition than predicted by the race model (Logan and Cowan 1984) along with hypometric saccades on stop failure trials, suggesting that the race model may need to be elaborated to include some component of interaction of stop and go signal processing.
Affiliation(s)
- Annika Akerfelt
- Department of Psychology, University of Oldenburg, P.O. Box 2503, 26111, Oldenburg, Germany.
|