1. Multisensory-driven facilitation within the peripersonal space is modulated by the expectations about stimulus location on the body. Sci Rep 2022; 12:20061. PMID: 36414633; PMCID: PMC9681840; DOI: 10.1038/s41598-022-21469-w.
Abstract
Compelling evidence from human and non-human studies suggests that responses to multisensory events are speeded when stimuli occur within the space surrounding the bodily self (i.e., the peripersonal space; PPS). However, some human studies did not find such an effect. We propose that these dissonant findings might actually uncover a specific mechanism that modulates PPS boundaries according to sensory regularities. We exploited a visuo-tactile paradigm wherein participants provided speeded responses to tactile stimuli and rated their perceived intensity while ignoring simultaneous visual stimuli, appearing near the stimulated hand (VTNear) or far from it (VTFar; near the non-stimulated hand). Tactile stimuli could be delivered to one hand only (unilateral task) or to both hands randomly (bilateral task). Results revealed that a space-dependent multisensory enhancement (i.e., faster responses and higher perceived intensity in VTNear than in VTFar) was present when highly predictable tactile stimulation led the PPS to be circumscribed around the stimulated hand (unilateral task). Conversely, when stimulus location was unpredictable (bilateral task), participants showed comparable multisensory enhancement in both bimodal conditions, suggesting that the PPS widened to include both hands. We propose that the detection of environmental regularities actively shapes PPS boundaries, thus optimizing the detection of and reaction to incoming sensory stimuli.
2. Fleming JT, Noyce AL, Shinn-Cunningham BG. Audio-visual spatial alignment improves integration in the presence of a competing audio-visual stimulus. Neuropsychologia 2020; 146:107530. PMID: 32574616; DOI: 10.1016/j.neuropsychologia.2020.107530.
Abstract
In order to parse the world around us, we must constantly determine which sensory inputs arise from the same physical source and should therefore be perceptually integrated. Temporal coherence between auditory and visual stimuli drives audio-visual (AV) integration, but the role played by AV spatial alignment is less well understood. Here, we manipulated AV spatial alignment and collected electroencephalography (EEG) data while human subjects performed a free-field variant of the "pip and pop" AV search task. In this paradigm, visual search is aided by a spatially uninformative auditory tone, the onsets of which are synchronized to changes in the visual target. In Experiment 1, tones were either spatially aligned or spatially misaligned with the visual display. Regardless of AV spatial alignment, we replicated the key pip and pop result of improved AV search times. Mirroring the behavioral results, we found an enhancement of early event-related potentials (ERPs), particularly the auditory N1 component, in both AV conditions. We demonstrate that both top-down and bottom-up attention contribute to these N1 enhancements. In Experiment 2, we tested whether spatial alignment influences AV integration in a more challenging context with competing multisensory stimuli. An AV foil was added that visually resembled the target and was synchronized to its own stream of synchronous tones. The visual components of the AV target and AV foil occurred in opposite hemifields; the two auditory components were also in opposite hemifields and were either spatially aligned or spatially misaligned with the visual components to which they were synchronized. Search was fastest when the auditory and visual components of the AV target (and the foil) were spatially aligned. Attention modulated ERPs in both spatial conditions, but importantly, the scalp topography of early evoked responses shifted only when stimulus components were spatially aligned, signaling the recruitment of different neural generators likely related to multisensory integration. These results suggest that AV integration depends on AV spatial alignment when stimuli in both modalities compete for selective integration, a common scenario in real-world perception.
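To make the ERP measure concrete, here is a minimal sketch of how an auditory N1 amplitude is typically quantified: epochs time-locked to stimulus onset are averaged, and the mean voltage in a canonical post-stimulus window is taken. This is a generic illustration in Python with simulated data and assumed parameter values (sampling rate, window), not the authors' analysis pipeline.

```python
import numpy as np

# Generic ERP sketch with simulated data (not the authors' pipeline).
fs = 500                            # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.5, 1 / fs)    # epoch: -100 to +500 ms around onset

rng = np.random.default_rng(1)
# Single-trial epochs: a negative deflection near 100 ms plus noise,
# mimicking an auditory N1 buried in ongoing EEG.
n1_shape = -2.0 * np.exp(-((t - 0.10) ** 2) / (2 * 0.02 ** 2))
epochs = n1_shape + rng.normal(0.0, 4.0, size=(200, t.size))

# Averaging across trials cancels activity not phase-locked to the stimulus.
erp = epochs.mean(axis=0)

# N1 amplitude: mean voltage in an 80-120 ms window (a common choice).
window = (t >= 0.08) & (t <= 0.12)
print(f"N1 mean amplitude: {erp[window].mean():.2f} uV")
```

An N1 "enhancement" like the one reported would show up as a more negative value of this measure in the bimodal conditions than in a unimodal baseline.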
Affiliation(s)
- Justin T Fleming
- Speech and Hearing Bioscience and Technology Program, Division of Medical Sciences, Harvard Medical School, Boston, MA, USA
- Abigail L Noyce
- Department of Psychological and Brain Sciences, Boston University, Boston, MA, USA
3. Keough M, Derrick D, Gick B.
Abstract
Speech research during recent years has moved progressively away from its traditional focus on audition toward a more multisensory approach. In addition to audition and vision, many somatosenses, including proprioception, pressure, vibration, and aerotactile sensation, are highly relevant modalities for experiencing and/or conveying speech. In this article, we review both long-standing cross-modal effects stemming from decades of audiovisual speech research and new findings related to somatosensory effects. Cross-modal effects in speech perception to date are found to be constrained by temporal congruence and signal relevance, but appear to be unconstrained by spatial congruence. Far from taking place in a one-, two- or even three-dimensional space, the literature reveals that speech occupies a highly multidimensional sensory space. We argue that future research in cross-modal effects should expand to consider each of these modalities both separately and in combination with other modalities in speech.
Affiliation(s)
- Megan Keough
- Interdisciplinary Speech Research Lab, Department of Linguistics, University of British Columbia, Vancouver, British Columbia V6T 1Z4, Canada
- Donald Derrick
- New Zealand Institute of Brain and Behaviour, University of Canterbury, Christchurch 8140, New Zealand
- MARCS Institute for Brain, Behaviour and Development, Western Sydney University, Penrith, New South Wales 2751, Australia
- Bryan Gick
- Interdisciplinary Speech Research Lab, Department of Linguistics, University of British Columbia, Vancouver, British Columbia V6T 1Z4, Canada
- Haskins Laboratories, Yale University, New Haven, CT 06511, USA
4. Zelic G, Mottet D, Lagarde J. Perceptuo-motor compatibility governs multisensory integration in bimanual coordination dynamics. Exp Brain Res 2015; 234:463-74. DOI: 10.1007/s00221-015-4476-5.
5. Misselhorn J, Daume J, Engel AK, Friese U. A matter of attention: Crossmodal congruence enhances and impairs performance in a novel trimodal matching paradigm. Neuropsychologia 2015. PMID: 26209356; DOI: 10.1016/j.neuropsychologia.2015.07.022.
Abstract
A novel crossmodal matching paradigm including vision, audition, and somatosensation was developed in order to investigate the interaction between attention and crossmodal congruence in multisensory integration. To that end, all three modalities were stimulated concurrently while a bimodal focus was defined blockwise. Congruence between stimulus intensity changes in the attended modalities had to be evaluated. We found that crossmodal congruence improved performance when both the attended modalities and the task-irrelevant distractor were congruent. When the attended modalities were incongruent, the distractor impaired performance through its congruence with one of the attended modalities. The magnitude of crossmodal enhancement or impairment differed between attentional conditions: the largest crossmodal effects were seen for visual-tactile matching, intermediate effects for audio-visual matching, and the smallest effects for audio-tactile matching. We conclude that these differences in crossmodal matching likely reflect characteristics of the multisensory neural network architecture. We discuss our results with respect to the timing of perceptual processing and state hypotheses for future physiological studies. Finally, etiological questions are addressed.
Affiliation(s)
- Jonas Misselhorn
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246 Hamburg, Germany
- Jonathan Daume
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246 Hamburg, Germany
- Andreas K Engel
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246 Hamburg, Germany
- Uwe Friese
- Department of Neurophysiology and Pathophysiology, University Medical Center Hamburg-Eppendorf, Martinistr. 52, 20246 Hamburg, Germany
6. Pannunzi M, Pérez-Bellido A, Pereda-Baños A, López-Moliner J, Deco G, Soto-Faraco S. Deconstructing multisensory enhancement in detection. J Neurophysiol 2014; 113:1800-18. PMID: 25520431; DOI: 10.1152/jn.00341.2014.
Abstract
The mechanisms responsible for the integration of sensory information from different modalities have become a topic of intense interest in psychophysics and neuroscience. Many authors now claim that early, sensory-based cross-modal convergence improves performance in detection tasks. An important strand of supporting evidence for this claim is based on statistical models such as the Pythagorean model or the probabilistic summation model. These models establish statistical benchmarks representing the best predicted performance under the assumption that there are no interactions between the two sensory paths. Following this logic, when observed detection performance surpasses the predictions of these models, it is often inferred that such improvement indicates cross-modal convergence. We present a theoretical analysis scrutinizing some of these models and the statistical criteria most frequently used to infer early cross-modal interactions during detection tasks. Our analysis shows how some common misinterpretations of these models lead to their inadequate use and, in turn, to contradictory results and misleading conclusions. To further illustrate the latter point, we introduce a model that accounts for performance in multimodal detection tasks but whose surpassing of the Pythagorean or probabilistic summation benchmark can be explained without resorting to early cross-modal interactions. Finally, we report three experiments that put our theoretical interpretation to the test and further propose how to adequately measure multimodal interactions in audiotactile detection tasks.
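For concreteness, the two benchmarks named above can be written down directly: probabilistic summation predicts the bimodal hit rate from independent unimodal channels, P(AT) = P(A) + P(T) − P(A)P(T), and the Pythagorean model combines unimodal sensitivities as d'_AT = sqrt(d'_A² + d'_T²). The sketch below (Python, with hypothetical values) computes both; observed bimodal performance above these numbers is what is often, per the authors sometimes wrongly, taken as evidence of early cross-modal convergence.

```python
import math

# Hypothetical unimodal performance (illustrative values only).
p_auditory = 0.60    # hit rate, auditory-alone
p_tactile = 0.55     # hit rate, tactile-alone
d_auditory = 1.2     # sensitivity d', auditory-alone
d_tactile = 1.0      # sensitivity d', tactile-alone

# Probabilistic summation: detection by either of two independent channels.
p_prob_sum = p_auditory + p_tactile - p_auditory * p_tactile

# Pythagorean model: unimodal sensitivities combine like orthogonal vectors.
d_pythagorean = math.sqrt(d_auditory**2 + d_tactile**2)

print(f"Probabilistic summation benchmark: P = {p_prob_sum:.3f}")
print(f"Pythagorean benchmark: d' = {d_pythagorean:.3f}")
```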
Affiliation(s)
- Joan López-Moliner
- Universitat de Barcelona, Barcelona, Spain; Institute for Brain, Cognition and Behaviour (IR3C), Barcelona, Spain
- Gustavo Deco
- Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
- Salvador Soto-Faraco
- Universitat Pompeu Fabra, Barcelona, Spain; Institució Catalana de Recerca i Estudis Avançats (ICREA), Barcelona, Spain
7. van Atteveldt N, Murray MM, Thut G, Schroeder CE. Multisensory integration: flexible use of general operations. Neuron 2014; 81:1240-1253. PMID: 24656248; DOI: 10.1016/j.neuron.2014.02.044.
Abstract
Research into the anatomical substrates and "principles" for integrating inputs from separate sensory surfaces has yielded divergent findings. This suggests that multisensory integration is flexible and context dependent and underlines the need for dynamically adaptive neuronal integration mechanisms. We propose that flexible multisensory integration can be explained by a combination of canonical, population-level integrative operations, such as oscillatory phase resetting and divisive normalization. These canonical operations subsume multisensory integration into a fundamental set of principles as to how the brain integrates all sorts of information, and they are being used proactively and adaptively. We illustrate this proposition by unifying recent findings from different research themes such as timing, behavioral goal, and experience-related differences in integration.
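One of the canonical operations the authors invoke, divisive normalization, has a standard textbook form: each unit's driving input is divided by a semi-saturation constant plus the pooled activity of all units. The sketch below illustrates that form with hypothetical drive values; the pool, constants, and numbers are illustrative assumptions, not taken from the paper.

```python
import numpy as np

def divisive_normalization(drives, sigma=1.0):
    """Textbook divisive normalization: each unit's drive is divided by a
    semi-saturation constant plus the summed drive of the whole pool."""
    drives = np.asarray(drives, dtype=float)
    return drives / (sigma + drives.sum())

# Hypothetical drives to a two-unit pool (auditory, visual):
resp_a = divisive_normalization([4.0, 0.0])    # auditory alone
resp_v = divisive_normalization([0.0, 3.0])    # visual alone
resp_av = divisive_normalization([4.0, 3.0])   # bimodal stimulation

# The bimodal response is sub-additive relative to the unisensory responses
# (0.5 < 0.8 and 0.375 < 0.75 here): one signature such a canonical
# operation produces without any multisensory-specific machinery.
print(resp_a, resp_v, resp_av)
```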
Affiliation(s)
- Nienke van Atteveldt
- Neuroimaging & Neuromodeling group, Netherlands Institute for Neuroscience, Royal Netherlands Academy of Arts and Sciences, Meibergdreef 47, 1105 BA Amsterdam, The Netherlands; Department of Educational Neuroscience, Faculty of Psychology & Education and Institute LEARN!, VU University Amsterdam, van der Boechorststraat 1, 1081 BT Amsterdam, The Netherlands; Department of Cognitive Neuroscience, Faculty of Psychology & Neuroscience, Maastricht University, P.O. Box 616, 6200 MD Maastricht, The Netherlands
- Micah M Murray
- The Laboratory for Investigative Neurophysiology (the LINE), Neuropsychology and Neurorehabilitation Service and Radiodiagnostic Service, University Hospital Center and University of Lausanne, Avenue Pierre Decker 5, 1011 Lausanne, Switzerland; EEG Brain Mapping Core, Centre for Biomedical Imaging (CIBM), Rue du Bugnon 46, 1011 Lausanne, Switzerland
- Gregor Thut
- Institute of Neuroscience and Psychology, University of Glasgow, 58 Hillhead Street, Glasgow, G12 8QB, UK
- Charles E Schroeder
- Columbia University, Department of Psychiatry, and the New York State Psychiatric Institute, 1051 Riverside Drive, New York, NY 10032, USA; Nathan S. Kline Institute, Cognitive Neuroscience & Schizophrenia Program, 140 Old Orangeburg Road, Orangeburg, NY 10962, USA
8. Segregated audio–tactile events destabilize the bimanual coordination of distinct rhythms. Exp Brain Res 2012; 219:409-19. DOI: 10.1007/s00221-012-3103-y.
9. Zelic G, Mottet D, Lagarde J. Behavioral impact of unisensory and multisensory audio-tactile events: pros and cons for interlimb coordination in juggling. PLoS One 2012; 7:e32308. PMID: 22384211; PMCID: PMC3288083; DOI: 10.1371/journal.pone.0032308.
Abstract
Recent behavioral neuroscience research revealed that elementary reactive behavior can be improved during cross-modal sensory interactions, thanks to underlying multisensory integration mechanisms. Can this benefit be generalized to the ongoing coordination of movements under severe physical constraints? We chose a juggling task to examine this question. A central issue well known in juggling lies in establishing and maintaining a specific temporal coordination among balls, hands, eyes, and posture. Here, we tested whether providing additional timing information about the ball and hand motions, by means of periodic external auditory and tactile stimulation (the latter presented at the wrists), improved the jugglers' behavior. One specific combination of auditory and tactile metronomes led to a decrease in the spatiotemporal variability of the jugglers' performance: a simple sound associated with left and right tactile cues presented in antiphase to each other, which corresponded to the temporal pattern of hand movements in the juggling task. In contrast, no improvements were obtained with other auditory and tactile combinations. We even found degraded performance when tactile events were presented alone. The nervous system thus appears able to efficiently integrate environmental information conveyed by different sensory modalities, but only if that information matches specific features of the coordination pattern. We discuss the possible implications of these results for understanding the neuronal integration process involved in audio-tactile interaction in the context of complex voluntary movement, considering also the well-known gating effect of movement on vibrotactile perception.
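The dependent measure here, the spatiotemporal variability of a rhythmic coordination pattern, is commonly quantified in this literature with circular statistics on the relative phase between movement events and metronome beats. Below is a minimal, generic sketch (not the authors' code) that computes the circular standard deviation of hypothetical relative-phase samples clustered near antiphase; a lower value would correspond to the more stable performance reported for the matching audio-tactile combination.

```python
import numpy as np

def circular_sd(phases_rad):
    """Circular standard deviation of phase angles in radians.
    Lower values indicate a more stable coordination pattern."""
    # Mean resultant length R of the unit vectors for each phase sample.
    R = np.abs(np.mean(np.exp(1j * np.asarray(phases_rad))))
    return np.sqrt(-2.0 * np.log(R))

# Hypothetical relative-phase samples (e.g., catch events vs. metronome
# beats), clustered near antiphase (pi radians) with some jitter:
rng = np.random.default_rng(0)
phases = np.pi + rng.normal(0.0, 0.3, size=200)

print(f"Circular SD: {circular_sd(phases):.3f} rad")
```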
Affiliation(s)
- Julien Lagarde
- Movement To Health, EuroMov, Montpellier 1 University, Montpellier, France