1
Böing S, Van der Stigchel S, Van der Stoep N. The impact of acute asymmetric hearing loss on multisensory integration. Eur J Neurosci 2024; 59:2373-2390. [PMID: 38303554] [DOI: 10.1111/ejn.16263]
Abstract
Humans have the remarkable ability to integrate information from different senses, which greatly facilitates the detection, localization and identification of events in the environment. About 466 million people worldwide suffer from hearing loss, yet the impact of hearing loss on how the senses work together is rarely investigated. Here, we investigate how a common sensory impairment, asymmetric conductive hearing loss (AHL), alters the way our senses interact by examining human orienting behaviour with normal hearing (NH) and acute AHL. This type of hearing loss disrupts auditory localization. We hypothesized that this creates a conflict between auditory and visual spatial estimates and alters how auditory and visual inputs are integrated to facilitate multisensory spatial perception. We analysed the spatial and temporal properties of saccades to auditory, visual and audiovisual stimuli before and after plugging the right ear of participants. Both spatial and temporal aspects of multisensory integration were affected by AHL. Compared with NH, AHL caused participants to make slow, inaccurate and imprecise saccades towards auditory targets. Surprisingly, increased weight on visual input resulted in accurate audiovisual localization with AHL. This came at a cost: saccade latencies for audiovisual targets increased significantly. The larger the auditory localization errors, the less participants were able to benefit from audiovisual integration in terms of saccade latency. Our results indicate that observers immediately change sensory weights to effectively deal with acute AHL and preserve audiovisual accuracy in a way that cannot be fully explained by statistical models of optimal cue integration.
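The statistical model of optimal cue integration against which the abstract compares its data is the standard maximum-likelihood (inverse-variance-weighted) scheme. The sketch below is illustrative only, not the authors' code; the function name and the example numbers are assumptions chosen to mimic a biased, noisy auditory estimate under hearing loss.

```python
# Minimal sketch (NOT the authors' code) of maximum-likelihood cue integration:
# each cue's spatial estimate is weighted by its inverse variance (reliability),
# so a noisier auditory estimate under hearing loss shifts weight toward vision.

def mle_integrate(mu_a, var_a, mu_v, var_v):
    """Optimally combine auditory and visual location estimates (deg)."""
    w_a = (1 / var_a) / (1 / var_a + 1 / var_v)  # auditory weight
    w_v = 1 - w_a                                # visual weight
    mu_av = w_a * mu_a + w_v * mu_v              # fused estimate
    var_av = 1 / (1 / var_a + 1 / var_v)         # fused variance (never larger than either cue's)
    return mu_av, var_av

# Hypothetical example: plugging one ear biases audition (+10 deg) and makes it
# noisy (variance 16); the fused estimate stays close to the reliable visual cue.
mu_av, var_av = mle_integrate(mu_a=10.0, var_a=16.0, mu_v=0.0, var_v=1.0)
```

On this account, audiovisual accuracy is preserved simply because the visual weight dominates; the abstract's point is that the observed latency costs go beyond what this weighting model predicts.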
Affiliation(s)
- Sanne Böing, Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Stefan Van der Stigchel, Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Nathan Van der Stoep, Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
2
Wang X, Tang X, Wang A, Zhang M. Non-spatial inhibition of return attenuates audiovisual integration owing to modality disparities. Atten Percept Psychophys 2023. [PMID: 38127253] [DOI: 10.3758/s13414-023-02825-y]
Abstract
Although previous studies have investigated the relationship between inhibition of return (IOR) and multisensory integration, the influence of non-spatial IOR has not been explored. The present study aimed to investigate the influence of non-spatial IOR on audiovisual integration by using a "prime-neutral cue-target" paradigm. In Experiment 1, which manipulated prime validity and target modality, the targets were positioned centrally, revealing significant non-spatial IOR effects in the visual, auditory and audiovisual modalities. Analysis of relative multisensory response enhancement (rMRE) indicated substantial audiovisual integration enhancement in both valid and invalid target conditions, with weaker enhancement for valid targets than for invalid targets. In Experiment 2, the targets were positioned above and below to rule out repetition blindness (RB); this experiment successfully replicated the results of Experiment 1. Notably, in both experiments the correlation between modality differences and rMRE for valid targets indicated that differences in signal strength between the visual and auditory modalities contributed to a reduction in audiovisual integration, whereas the absence of such a correlation for invalid targets suggests that attention may play a key role in this process. The present study highlights how non-spatial IOR reduces audiovisual integration and sheds light on the complex interaction between attention and multisensory integration.
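The rMRE measure used in this abstract can be made concrete with a short sketch. A common textbook formulation (the paper's exact computation may differ, and the function name and example values here are assumptions) expresses multisensory facilitation as the percentage speed-up of the mean audiovisual reaction time relative to the faster of the two unisensory means:

```python
# Hedged sketch of a common formulation of relative multisensory response
# enhancement (rMRE): the percentage speed-up of audiovisual responses
# relative to the faster unisensory mean reaction time.
# (Illustrative only; the papers' exact computation may differ.)

def rmre(rt_a_mean, rt_v_mean, rt_av_mean):
    """rMRE in percent from mean RTs (ms); positive = multisensory facilitation."""
    fastest_uni = min(rt_a_mean, rt_v_mean)
    return 100.0 * (fastest_uni - rt_av_mean) / fastest_uni

# Hypothetical example: 320 ms auditory, 350 ms visual, 290 ms audiovisual mean RT.
enhancement = rmre(320.0, 350.0, 290.0)
```

A smaller rMRE for valid than invalid targets, as reported above, means less audiovisual facilitation at previously primed (inhibited) identities.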
Affiliation(s)
- Xiaoxue Wang, Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China
- Xiaoyu Tang, School of Psychology, Liaoning Normal University, Dalian, China
- Aijun Wang, Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China
- Ming Zhang, Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China; Department of Psychology, Suzhou University of Science and Technology, Suzhou, China; Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
3
Feenders G. Attentional capture or multisensory integration? (Commentary on Bean et al., 2021). Eur J Neurosci 2023; 58:3714-3718. [PMID: 37697730] [DOI: 10.1111/ejn.16131]
Affiliation(s)
- Gesa Feenders, Animal Physiology and Behaviour Group, Cluster of Excellence Hearing4all, Department of Neuroscience, School of Medicine and Health Sciences, University of Oldenburg, Oldenburg, Germany
4
Li S, Zhang T, Zu G, Wang A, Zhang M. Electrophysiological evidence of crossmodal correspondence between auditory pitch and visual elevation affecting inhibition of return. Brain Cogn 2023; 171:106075. [PMID: 37625284] [DOI: 10.1016/j.bandc.2023.106075]
Abstract
Inhibition of return (IOR) has been shown to be weakened by audiovisual integration because of the increased perceptual salience of targets. Although other audiovisual interactions, such as crossmodal correspondence, have also been shown to facilitate attentional processes, to the best of our knowledge, no study has investigated the interaction between crossmodal correspondence and IOR. The present study employed Posner's spatial cueing paradigm and manipulated cue validity, crossmodal correspondence congruency and the time interval between the auditory and visual stimuli (AV interval) to explore the effect of crossmodal correspondence on the IOR effect. The behavioral results showed a reduced IOR effect under the correspondence congruency condition in contrast to the correspondence incongruency condition at an AV interval of 200 ms, whereas at an AV interval of 80 ms, the decreased IOR effect under crossmodal correspondence congruency was eliminated. The electrophysiological results showed a reduced amplitude difference in P2 between valid and invalid cue conditions when the crossmodal correspondence effect decreased the IOR effect. The present study provides the first evidence that crossmodal correspondence weakens the IOR effect, a weakening that can be eliminated by audiovisual integration.
Affiliation(s)
- Shuqi Li, Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Tianyang Zhang, School of Public Health, Jiangsu Key Laboratory of Preventive and Translational Medicine for Geriatric Diseases, Medical College of Soochow University, Suzhou, China
- Guangyao Zu, Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Aijun Wang, Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
- Ming Zhang, Department of Psychology, Suzhou University of Science and Technology, Suzhou, China; Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
5
Bertonati G, Casado-Palacios M, Crepaldi M, Parmiggiani A, Maviglia A, Torazza D, Campus C, Gori M. MultiTab: A Novel Portable Device to Evaluate Multisensory Skills. Annu Int Conf IEEE Eng Med Biol Soc 2023; 2023:1-4. [PMID: 38083497] [DOI: 10.1109/embc40787.2023.10341048]
Abstract
To infer the spatial and temporal features of an external event, we are guided by multisensory cues; extensive research has shown that perception is enhanced when information coming from different sensory modalities is integrated. In this scenario, the motor system also seems to play an important role in boosting perception. With the present work, we introduce and validate a novel portable technology, named MultiTab, which is able to provide auditory and visual stimulation as well as to measure the user's manual responses. Our preliminary results indicate that MultiTab reliably induces multisensory integration in a spatial localization task, shown by significantly reduced manual response times for the localization of audiovisual stimuli compared to unisensory stimuli. Clinical relevance: the current work presents a novel portable device that could contribute to the clinical evaluation of multisensory processing as well as spatial perception. In addition, by promoting and recording manual actions, MultiTab could be especially suitable for the design of rehabilitative protocols using multisensory motor training.
6
Tang X, Yuan M, Shi Z, Gao M, Ren R, Wei M, Gao Y. Multisensory integration attenuates visually induced oculomotor inhibition of return. J Vis 2022; 22:7. [PMID: 35297999] [PMCID: PMC8944392] [DOI: 10.1167/jov.22.4.7]
Abstract
Inhibition of return (IOR) is a mechanism of the attention system involving bias toward novel stimuli and delayed generation of responses to targets at previously attended locations. According to the two-component theory, IOR consists of a perceptual component and an oculomotor component (oculomotor IOR [O-IOR]), depending on whether the eye movement system is activated. Previous studies have shown that multisensory integration weakens IOR when attention is paid to both visual and auditory modalities. However, it remains unclear whether the O-IOR effect attenuated by multisensory integration also occurs when the oculomotor system is activated. Here, using two eye movement experiments, we investigated the effect of multisensory integration on O-IOR using the exogenous spatial cueing paradigm. In Experiment 1, we found a greater visual O-IOR effect compared with audiovisual and auditory O-IOR under divided modality attention. The relative multisensory response enhancement (rMRE) and violations of Miller's bound showed a greater magnitude of multisensory integration in the cued location compared with the uncued location. In Experiment 2, the magnitude of the audiovisual O-IOR effect was significantly less than that of the visual O-IOR under single visual modality selective attention. Implications of the effect of multisensory integration on O-IOR under conditions of oculomotor system activation are discussed, shedding new light on the two-component theory of IOR.
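The Miller's bound test mentioned in this abstract can be sketched as follows. This is a generic illustration under assumptions (toy RT samples, hypothetical function name), not the study's analysis code: the race model predicts that the audiovisual RT cumulative distribution cannot exceed the sum of the unisensory CDFs, and positive deviations from that bound are taken as evidence of integration.

```python
# Sketch (assumptions, not the study's code) of testing Miller's race-model
# inequality: G_AV(t) <= G_A(t) + G_V(t). Time points where the observed
# audiovisual CDF exceeds this bound are "violations", evidence that the
# senses interact rather than merely race.
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Per-time-point violation of Miller's bound (positive = violated)."""
    cdf = lambda rts, t: np.mean(np.asarray(rts)[:, None] <= t, axis=0)
    bound = np.minimum(cdf(rt_a, t_grid) + cdf(rt_v, t_grid), 1.0)  # Miller's bound
    return cdf(rt_av, t_grid) - bound

# Toy RT samples (ms); real analyses use many trials per condition.
t = np.arange(150, 501, 10)
viol = race_model_violation([300, 320, 340], [310, 330, 350], [240, 260, 280], t)
# Any positive entry in `viol` marks a violation of the race model at that latency.
```

Comparing the size of such violations at cued versus uncued locations is one way to quantify where multisensory integration is stronger, as the abstract describes.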
Affiliation(s)
- Xiaoyu Tang, School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Mengying Yuan, School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Zhongyu Shi, School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Min Gao, School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Rongxia Ren, Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Ming Wei, School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Yulin Gao, Department of Psychology, Jilin University, Changchun, China
7
Ischer M, Coppin G, De Marles A, Essellier M, Porcherot C, Cayeux I, Margot C, Sander D, Delplanque S. Exogenous capture of visual spatial attention by olfactory-trigeminal stimuli. PLoS One 2021; 16:e0252943. [PMID: 34111171] [PMCID: PMC8191882] [DOI: 10.1371/journal.pone.0252943]
Abstract
The extent to which a nasal whiff of scent can exogenously orient visual spatial attention remains poorly understood in humans. In a series of seven studies, we investigated the existence of an exogenous capture of visual spatial attention by purely trigeminal (i.e., CO2) and both olfactory and trigeminal stimuli (i.e., eucalyptol). We chose these stimuli because they activate the trigeminal system, which can be considered an alert system; they are thus presumably relevant to the individual and prone to capture attention. We used them as lateralized cues in a variant of a visual spatial cueing paradigm. In valid trials, trigeminal cues and visual targets were presented on the same side, whereas in invalid trials they were presented on opposite sides. To characterize the dynamics of the cross-modal attentional capture, we manipulated the interval between the onset of the trigeminal cues and the visual targets (from 580 to 1870 ms). Reaction times in valid trigeminal trials were shorter than in all other trials, but only when this interval was around 680 or 1170 ms for CO2 and around 610 ms for eucalyptol. This result indicates that both pure trigeminal and olfactory-trigeminal stimuli can exogenously capture humans' visual spatial attention. We discuss the importance of considering the dynamics of this cross-modal attentional capture.
Affiliation(s)
- Matthieu Ischer, Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Department of Psychology, University of Geneva, Geneva, Switzerland
- Géraldine Coppin, Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Department of Psychology, University of Geneva, Geneva, Switzerland; Swiss Distance University Institute (UniDistance/FernUni), Brig, Switzerland
- Axel De Marles, Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland
- Myriam Essellier, Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Department of Psychology, University of Geneva, Geneva, Switzerland
- David Sander, Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Department of Psychology, University of Geneva, Geneva, Switzerland
- Sylvain Delplanque, Swiss Center for Affective Sciences, University of Geneva, Geneva, Switzerland; Department of Psychology, University of Geneva, Geneva, Switzerland
8
Van der Stoep N, Van der Smagt MJ, Notaro C, Spock Z, Naber M. The additive nature of the human multisensory evoked pupil response. Sci Rep 2021; 11:707. [PMID: 33436889] [PMCID: PMC7803952] [DOI: 10.1038/s41598-020-80286-1]
Abstract
Pupillometry has received increased interest for its usefulness in measuring various sensory processes as an alternative to behavioural assessments. This is also apparent for multisensory investigations. Studies of the multisensory pupil response, however, have produced conflicting results. Some studies observed super-additive multisensory pupil responses, indicative of multisensory integration (MSI). Others observed additive multisensory pupil responses even though reaction time (RT) measures were indicative of MSI. Therefore, in the present study, we investigated the nature of the multisensory pupil response by combining the methodological approaches of previous studies while using supra-threshold stimuli only. In two experiments, we presented observers with auditory and visual stimuli that evoked an onset pupil response (be it constriction or dilation) in a simple detection task and a change detection task. In both experiments, the RT data indicated MSI, as shown by race model inequality violation. Still, the multisensory pupil response in both experiments could best be explained by linear summation of the unisensory pupil responses. We conclude that the multisensory pupil response for supra-threshold stimuli is additive in nature and cannot be used as a measure of MSI, as only a departure from additivity can unequivocally demonstrate an interaction between the senses.
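The additivity logic in this abstract can be sketched directly. This is an illustrative sketch under assumptions (toy baseline-corrected traces, hypothetical function name), not the paper's analysis pipeline: compare the observed audiovisual pupil trace against the linear sum of the unisensory traces, since only a reliable super-additive excess would indicate integration.

```python
# Illustrative sketch (assumptions, not the paper's pipeline) of the additivity
# test for the multisensory pupil response: subtract the sum of the unisensory
# traces from the audiovisual trace. Zero excess = additive (no evidence of
# integration); a reliable positive excess = super-additive.
import numpy as np

def additivity_excess(pupil_a, pupil_v, pupil_av):
    """Per-sample excess of the AV pupil response over the A + V sum."""
    a, v, av = (np.asarray(x, dtype=float) for x in (pupil_a, pupil_v, pupil_av))
    return av - (a + v)

# Toy baseline-corrected traces where AV exactly equals the unisensory sum,
# i.e. the additive outcome the paper reports for supra-threshold stimuli.
a = np.array([0.00, 0.02, 0.05, 0.08])
v = np.array([0.00, 0.01, 0.03, 0.05])
av = a + v
excess = additivity_excess(a, v, av)  # all zeros: no super-additivity
```

In practice the excess would be tested statistically across trials and time points; the sketch only shows why an additive AV response carries no diagnostic weight for MSI.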
Affiliation(s)
- Nathan Van der Stoep, Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Langeveld Building, Room H0.26, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- M J Van der Smagt, Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Langeveld Building, Room H0.26, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- C Notaro, Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Langeveld Building, Room H0.26, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- Z Spock, Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Langeveld Building, Room H0.26, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
- M Naber, Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Langeveld Building, Room H0.26, Heidelberglaan 1, 3584 CS, Utrecht, The Netherlands
9
Fear-related signals are prioritised in visual, somatosensory and spatial systems. Neuropsychologia 2020; 150:107698. [PMID: 33253690] [DOI: 10.1016/j.neuropsychologia.2020.107698]
Abstract
The human brain has evolved a multifaceted fear system, allowing threat detection to enable rapid adaptive responses crucial for survival. Although many cortical and subcortical brain areas are believed to be involved in the survival circuits detecting and responding to threat, the amygdala reportedly has a crucial role in the fear system. Here, we review evidence demonstrating that fearful faces, a specific category of salient stimuli indicating the presence of threat in the surroundings, are preferentially processed in the fear system and in the connected sensory cortices, even when they are presented outside of awareness or are irrelevant to the task. In the visual domain, we discuss evidence from hemianopic patients showing that fearful faces, via a subcortical colliculo-pulvinar-amygdala pathway, receive privileged visual processing even in the absence of awareness and facilitate responses towards visual stimuli in the intact visual field. Moreover, evidence showing that somatosensory cortices prioritise fear-related signals, to the extent that tactile processing is enhanced in the presence of fearful faces, will also be reported. Finally, we will review evidence revealing that fearful faces have a pivotal role in modulating responses in peripersonal space (PPS), in line with the defensive functional definition of PPS.
10
Ellena G, Starita F, Haggard P, Làdavas E. The spatial logic of fear. Cognition 2020; 203:104336. [DOI: 10.1016/j.cognition.2020.104336]
11
Deploying attention to the target location of a pointing action modulates audiovisual processes at nontarget locations. Atten Percept Psychophys 2020; 82:3507-3520. [PMID: 32676805] [DOI: 10.3758/s13414-020-02065-4]
Abstract
The current study examined how the deployment of spatial attention at the onset of a pointing movement influenced audiovisual crossmodal interactions at the target of the pointing action and at nontarget locations. These interactions were quantified by measuring susceptibility to the fission (i.e., reporting two visual flashes when one flash is paired with two auditory beeps) and fusion (i.e., reporting one flash when two flashes are paired with one beep) audiovisual illusions. At movement onset, unimodal (auditory or visual) or bimodal (audiovisual) stimuli were presented either at the target of the pointing action or in an adjacent, nontarget location. In Experiment 1, perceptual accuracy in the unimodal and bimodal conditions was lower in the nontarget condition than in the target condition. The fission illusion was uninfluenced by target condition. However, the fusion illusion was more likely to be reported at the target than at the nontarget location. In Experiment 2, the stimuli from Experiment 1 were further presented at a location near where the eyes were fixated (i.e., the congruent condition), where the hand was aiming (i.e., the target), or in a location where neither the eyes were fixated nor the hand was aiming. The results yielded the greatest susceptibility to the fusion illusion when the visual location and movement end points were congruent, relative to when either the movement or fixation was incongruent. Although attention may facilitate the processing of unisensory and multisensory cues in general, attention might have the strongest influence on the audiovisual integration mechanisms that underlie the sound-induced fusion illusion.
12
Ellena G, Battaglia S, Làdavas E. The spatial effect of fearful faces in the autonomic response. Exp Brain Res 2020; 238:2009-2018. [PMID: 32617883] [DOI: 10.1007/s00221-020-05829-4]
Abstract
Peripersonal space (PPS) corresponds to the space around the body and is defined by the location in space where multimodal inputs from bodily and external stimuli are integrated. Its extent varies according to the characteristics of external stimuli, e.g., the salience of an emotional facial expression. In the present study, we investigated the psychophysiological correlates of this extension phenomenon. Specifically, we investigated whether an approaching human face showing either an emotionally negative (fearful) or positive (joyful) facial expression would differentially modulate PPS representation, compared to the same face with a neutral expression. To this aim, we continuously recorded the skin conductance response (SCR) of 27 healthy participants while they watched approaching 3D avatar faces showing fearful, joyful or neutral expressions, and then pressed a button to respond to tactile stimuli delivered on their cheeks at three possible delays (visuo-tactile trials). The results revealed that the SCR to fearful faces, but not joyful or neutral faces, was modulated by the apparent distance from the participant's body: SCR increased from very far space to far and then to near space. We propose that the proximity of the fearful face provided a cue to the presence of a threat in the environment and elicited a robust and urgent organization of defensive responses. In contrast, there would be no need to organize defensive responses to joyful or neutral faces and, as a consequence, no SCR differences were found across spatial positions. These results confirm the defensive function of PPS.
Affiliation(s)
- Giulia Ellena, Department of Psychology, University of Bologna, Bologna, Italy; Centre for Studies and Research in Cognitive Neuroscience, CsrNC, University of Bologna, Bologna, Italy
- Simone Battaglia, Department of Psychology, University of Bologna, Bologna, Italy; Centre for Studies and Research in Cognitive Neuroscience, CsrNC, University of Bologna, Bologna, Italy
- Elisabetta Làdavas, Department of Psychology, University of Bologna, Bologna, Italy; Centre for Studies and Research in Cognitive Neuroscience, CsrNC, University of Bologna, Bologna, Italy
13
Shaw LH, Freedman EG, Crosse MJ, Nicholas E, Chen AM, Braiman MS, Molholm S, Foxe JJ. Operating in a Multisensory Context: Assessing the Interplay Between Multisensory Reaction Time Facilitation and Inter-sensory Task-switching Effects. Neuroscience 2020; 436:122-135. [PMID: 32325100] [DOI: 10.1016/j.neuroscience.2020.04.013]
Abstract
Individuals respond faster to presentations of bisensory stimuli (e.g. audio-visual targets) than to presentations of either unisensory constituent in isolation (i.e. to the auditory-alone or visual-alone components of an audio-visual stimulus). This well-established multisensory speeding effect, termed the redundant signals effect (RSE), is not predicted by simple linear summation of the unisensory response time probability distributions. Rather, the speeding is typically faster than this prediction, leading researchers to ascribe the RSE to a so-called co-activation account. According to this account, multisensory neural processing occurs whereby the unisensory inputs are integrated to produce more effective sensory-motor activation. However, the typical paradigm used to test for RSE involves random sequencing of unisensory and bisensory inputs in a mixed design, raising the possibility of an alternate attention-switching account. This intermixed design requires participants to switch between sensory modalities on many task trials (e.g. from responding to a visual stimulus to an auditory stimulus). Here we show that much, if not all, of the RSE under this paradigm can be attributed to slowing of reaction times to unisensory stimuli resulting from modality switching, and is not in fact due to speeding of responses to AV stimuli. As such, the present data do not support a co-activation account, but rather suggest that switching and mixing costs akin to those observed during classic task-switching paradigms account for the observed RSE.
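The attention-switching account described above can be sketched as a simple decomposition. This is a hedged illustration (toy data, hypothetical function name), not the authors' analysis: classify each unisensory trial in a mixed sequence by whether its modality repeats or switches relative to the previous trial, then compare mean RTs.

```python
# Hedged sketch of the switching-cost decomposition suggested by the abstract:
# in a mixed design, unisensory trials preceded by a different modality
# ("switch" trials) are slower than modality-repeat trials. A large switch
# cost inflates mixed-design unisensory RTs and can mimic a redundant-signals
# effect without any genuine audiovisual speeding.
def switch_cost(trials):
    """trials: list of (modality, rt_ms) in presentation order; modality in {'A','V','AV'}.
    Returns mean switch RT minus mean repeat RT for unisensory trials."""
    repeat, switch = [], []
    for (prev_mod, _), (mod, rt) in zip(trials, trials[1:]):
        if mod == 'AV':  # only unisensory trials are classified
            continue
        (repeat if mod == prev_mod else switch).append(rt)
    mean = lambda xs: sum(xs) / len(xs) if xs else float('nan')
    return mean(switch) - mean(repeat)  # positive = slowing on switch trials

# Toy sequence where switch trials are slower than repeat trials.
seq = [('A', 300), ('A', 295), ('V', 360), ('V', 330), ('A', 365), ('AV', 280)]
cost = switch_cost(seq)
```

On the co-activation account the RSE reflects faster AV responses; on this account much of it reflects `cost`-like slowing of the unisensory baselines, which is the contrast the paper tests.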
Affiliation(s)
- Luke H Shaw, The Cognitive Neurophysiology Laboratory, The Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY 14642, USA
- Edward G Freedman, The Cognitive Neurophysiology Laboratory, The Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY 14642, USA
- Michael J Crosse, The Cognitive Neurophysiology Laboratory, Department of Pediatrics & Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY 10461, USA
- Eric Nicholas, The Cognitive Neurophysiology Laboratory, The Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY 14642, USA
- Allen M Chen, The Cognitive Neurophysiology Laboratory, The Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY 14642, USA
- Matthew S Braiman, The Cognitive Neurophysiology Laboratory, The Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY 14642, USA
- Sophie Molholm, The Cognitive Neurophysiology Laboratory, The Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY 14642, USA; The Cognitive Neurophysiology Laboratory, Department of Pediatrics & Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY 10461, USA
- John J Foxe, The Cognitive Neurophysiology Laboratory, The Del Monte Institute for Neuroscience, Department of Neuroscience, University of Rochester School of Medicine and Dentistry, Rochester, NY 14642, USA; The Cognitive Neurophysiology Laboratory, Department of Pediatrics & Neuroscience, Albert Einstein College of Medicine & Montefiore Medical Center, Bronx, NY 10461, USA
14
Carlsen AN, Maslovat D, Kaga K. An unperceived acoustic stimulus decreases reaction time to visual information in a patient with cortical deafness. Sci Rep 2020; 10:5825. [PMID: 32242039] [PMCID: PMC7118083] [DOI: 10.1038/s41598-020-62450-9]
Abstract
Responding to multiple stimuli of different modalities has been shown to reduce reaction time (RT), yet many different processes can potentially contribute to multisensory response enhancement. To investigate the neural circuits involved in voluntary response initiation, an acoustic stimulus of varying intensity (80, 105, or 120 dB) was presented during a visual RT task to a patient with profound bilateral cortical deafness and an intact auditory brainstem response. Although the patient was unable to consciously perceive sound, RT was reliably shortened (~100 ms) on trials where the unperceived acoustic stimulus was presented, confirming the presence of multisensory response enhancement. Although the exact locus of this enhancement is unclear, these results cannot be attributed to involvement of the auditory cortex. Thus, these data provide new and compelling evidence that activation from subcortical auditory processing circuits can contribute to other cortical or subcortical areas responsible for the initiation of a response, without the need for conscious perception.
Affiliation(s)
Affiliation(s)
- Dana Maslovat
- School of Kinesiology, University of British Columbia, Vancouver, Canada
- Kimitaka Kaga
- National Institute of Sensory Organs, National Tokyo Medical Center, Tokyo, Japan
15
Zuanazzi A, Noppeney U. The Intricate Interplay of Spatial Attention and Expectation: a Multisensory Perspective. Multisens Res 2020; 33:383-416. [DOI: 10.1163/22134808-20201482]
Abstract
Attention (i.e., task relevance) and expectation (i.e., signal probability) are two critical top-down mechanisms guiding perceptual inference. Attention prioritizes processing of information that is relevant for observers’ current goals. Prior expectations encode the statistical structure of the environment. Research to date has mostly conflated spatial attention and expectation. Most notably, the Posner cueing paradigm manipulates spatial attention using probabilistic cues that indicate where the subsequent stimulus is likely to be presented. Only recently have studies attempted to dissociate the mechanisms of attention and expectation and characterized their interactive (i.e., synergistic) or additive influences on perception. In this review, we will first discuss methodological challenges that are involved in dissociating the mechanisms of attention and expectation. Second, we will review research that was designed to dissociate attention and expectation in the unisensory domain. Third, we will review the broad field of crossmodal endogenous and exogenous spatial attention that investigates the impact of attention across the senses. This raises the critical question of whether attention relies on amodal or modality-specific mechanisms. Fourth, we will discuss recent studies investigating the role of both spatial attention and expectation in multisensory perception, where the brain constructs a representation of the environment based on multiple sensory inputs. We conclude that spatial attention and expectation are closely intertwined in almost all circumstances of everyday life. Yet, despite their intimate relationship, attention and expectation rely on partly distinct neural mechanisms: while attentional resources are mainly shared across the senses, expectations can be formed in a modality-specific fashion.
Affiliation(s)
- Arianna Zuanazzi
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, UK
- Department of Psychology, New York University, New York, NY, USA
- Uta Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, UK
- Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
16
Elshout JA, Van der Stoep N, Nijboer TCW, Van der Stigchel S. Motor congruency and multisensory integration jointly facilitate visual information processing before movement execution. Exp Brain Res 2020; 238:667-673. [PMID: 32036413] [PMCID: PMC7080670] [DOI: 10.1007/s00221-019-05714-9]
Abstract
Attention allows us to select important sensory information and enhances sensory information processing. Attention and our motor system are tightly coupled: attention is shifted to the target location before a goal-directed eye- or hand movement is executed. Congruent eye-hand movements to the same target can boost the effect of this pre-movement shift of attention. Moreover, visual information processing can be enhanced by, for example, auditory input presented in spatial and temporal proximity of visual input via multisensory integration (MSI). In this study, we investigated whether the combination of MSI and motor congruency can synergistically enhance visual information processing beyond what can be observed using motor congruency alone. Participants performed congruent eye- and hand movements during a 2-AFC visual discrimination task. The discrimination target was presented in the planning phase of the movements at the movement target location or a movement-irrelevant location. Three conditions were compared: (1) a visual target without sound, (2) a visual target with sound spatially and temporally aligned (MSI) and (3) a visual target with sound temporally misaligned (no MSI). Performance was enhanced at the movement-relevant location when congruent motor actions and MSI coincided, compared with the other conditions. Congruence in the motor system and MSI together therefore lead to enhanced sensory information processing beyond the effects of motor congruency alone, before a movement is executed. Such a synergy implies that the boost of attention previously observed for the independent factors is not at ceiling level, but can be increased even further when the right conditions are met.
Affiliation(s)
- J A Elshout
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands.
- N Van der Stoep
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- T C W Nijboer
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Center of Excellence for Rehabilitation Medicine, Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht University and De Hoogstraat Rehabilitation, 3583 TM, Utrecht, The Netherlands
- S Van der Stigchel
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
17
Cue-target onset asynchrony modulates interaction between exogenous attention and audiovisual integration. Cogn Process 2020; 21:261-270. [PMID: 31953644] [DOI: 10.1007/s10339-020-00950-2]
Abstract
Previous studies have shown that exogenous attention decreases audiovisual integration (AVI); however, whether the interaction between exogenous attention and AVI is influenced by cue-target onset asynchrony (CTOA) remains unclear. To clarify this matter, twenty participants were recruited to perform an auditory/visual discrimination task, and they were instructed to respond to the target stimuli as rapidly and accurately as possible. The analysis of the mean response times showed an effective cueing effect under all cued conditions and significant response facilitation for all audiovisual stimuli. A further comparison of the differences between the probability of audiovisual cumulative distributive functions (CDFs) and race model CDFs showed that the AVI latency was shortened under the cued condition relative to that under the no-cue condition, and there was a significant break point when the CTOA was 200 ms, with a decrease in the AVI upon going from 100 to 200 ms and an increase upon going from 200 to 400 ms. These results indicated different mechanisms for the interaction between exogenous attention and the AVI under the shorter and longer CTOA conditions and further suggested that there may be a temporal window in which the AVI effect is mainly affected by exogenous attention, but the interaction might be interfered with by endogenous attention when exceeding the temporal window.
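The comparison of empirical audiovisual CDFs against race-model CDFs described above can be sketched in a few lines. This is a minimal illustration of Miller's race-model inequality test, not the study's analysis code; the function name and the toy RT values are hypothetical:

```python
import numpy as np

def race_model_violation(rt_a, rt_v, rt_av, t_grid):
    """Compare the audiovisual RT CDF with the race-model bound
    P(RT_AV <= t) <= P(RT_A <= t) + P(RT_V <= t).
    Positive return values indicate a race-model violation,
    i.e., evidence for audiovisual integration rather than
    statistical facilitation."""
    # Empirical CDF: fraction of RTs at or below each latency in t_grid
    cdf = lambda rts, ts: np.mean(np.asarray(rts)[:, None] <= ts, axis=0)
    race_bound = np.minimum(cdf(rt_a, t_grid) + cdf(rt_v, t_grid), 1.0)
    return cdf(rt_av, t_grid) - race_bound

# Toy RTs (ms): audiovisual responses faster than either unisensory channel
rt_a = [300, 320, 340]
rt_v = [310, 330, 350]
rt_av = [200, 210, 220]
violation = race_model_violation(rt_a, rt_v, rt_av, np.array([215.0, 400.0]))
```

Positive values of `violation` at some latency indicate that audiovisual responses are faster than any race between independent unisensory channels could produce.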
18
Van der Stoep N, Van der Stigchel S, Van Engelen RC, Biesbroek JM, Nijboer TCW. Impairments in Multisensory Integration after Stroke. J Cogn Neurosci 2019; 31:885-899. [PMID: 30883294] [DOI: 10.1162/jocn_a_01389]
Abstract
The integration of information from multiple senses leads to a plethora of behavioral benefits, most predominantly to faster and better detection, localization, and identification of events in the environment. Although previous studies of multisensory integration (MSI) in humans have provided insights into the neural underpinnings of MSI, studies of MSI at a behavioral level in individuals with brain damage are scarce. Here, a well-known psychophysical paradigm (the redundant target paradigm) was employed to quantify MSI in a group of stroke patients. The relation between MSI and lesion location was analyzed using lesion subtraction analysis. Twenty-one patients with ischemic infarctions and 14 healthy control participants responded to auditory, visual, and audiovisual targets in the left and right visual hemifield. Responses to audiovisual targets were faster than to unisensory targets. This could be due to MSI or statistical facilitation. Comparing the audiovisual RTs to the winner of a race between unisensory signals allowed us to determine whether participants could integrate auditory and visual information. The results indicated that (1) 33% of the patients showed an impairment in MSI; (2) patients with MSI impairment had left hemisphere and brainstem/cerebellar lesions; and (3) the left caudate, left pallidum, left putamen, left thalamus, left insula, left postcentral and precentral gyrus, left central opercular cortex, left amygdala, and left OFC were more often damaged in patients with MSI impairments. These results are the first to demonstrate the impact of brain damage on MSI in stroke patients using a well-established psychophysical paradigm.
Affiliation(s)
- Tanja C W Nijboer
- Helmholtz Institute, Utrecht University; Brain Center Rudolf Magnus, University Medical Center, Utrecht University; Center for Brain Rehabilitation Medicine, Utrecht Medical Center, Utrecht University
19
Diederich A, Colonius H. Multisensory Integration and Exogenous Spatial Attention: A Time-window-of-integration Analysis. J Cogn Neurosci 2019; 31:699-710. [PMID: 30822208] [DOI: 10.1162/jocn_a_01386]
Abstract
Although it is well documented that occurrence of an irrelevant and nonpredictive sound facilitates motor responses to a subsequent target light appearing nearby, the cause of this "exogenous spatial cuing effect" has been under discussion. On the one hand, it has been postulated to be the result of a shift of visual spatial attention possibly triggered by parietal and/or cortical supramodal "attention" structures. On the other hand, the effect has been considered to be due to multisensory integration based on the activation of multisensory convergence structures in the brain. Recent RT experiments have suggested that multisensory integration and exogenous spatial cuing differ in their temporal profiles of facilitation: When the nontarget occurs 100-200 msec before the target, facilitation is likely driven by crossmodal exogenous spatial attention, whereas multisensory integration effects are still seen when target and nontarget are presented nearly simultaneously. Here, we develop an extension of the time-window-of-integration model that combines both mechanisms within the same formal framework. The model is illustrated by fitting it to data from a focused attention task with a visual target and an auditory nontarget presented at horizontally or vertically varying positions. Results show that both spatial cuing and multisensory integration may coexist in a single trial in bringing about the crossmodal facilitation of RT effects. Moreover, the formal analysis via time window of integration allows one to predict and quantify the contribution of either mechanism as they occur across different spatiotemporal conditions.
20
Sanders P, Thompson B, Corballis P, Searchfield G. On the Timing of Signals in Multisensory Integration and Crossmodal Interactions: a Scoping Review. Multisens Res 2019; 32:533-573. [PMID: 31137004] [DOI: 10.1163/22134808-20191331]
Abstract
A scoping review was undertaken to explore research investigating early interactions and integration of auditory and visual stimuli in the human brain. The focus was on methods used to study low-level multisensory temporal processing using simple stimuli in humans, and how this research has informed our understanding of multisensory perception. The study of multisensory temporal processing probes how the relative timing between signals affects perception. Several tasks, illusions, computational models, and neuroimaging techniques were identified in the literature search. Research into early audiovisual temporal processing in special populations was also reviewed. Recent research has continued to provide support for early integration of crossmodal information. These early interactions can influence higher-level factors, and vice versa. Temporal relationships between auditory and visual stimuli influence multisensory perception, and likely play a substantial role in solving the 'correspondence problem' (how the brain determines which sensory signals belong together, and which should be segregated).
Affiliation(s)
- Philip Sanders
- Section of Audiology, University of Auckland, Auckland, New Zealand; Centre for Brain Research, University of Auckland, New Zealand; Brain Research New Zealand - Rangahau Roro Aotearoa, New Zealand
- Benjamin Thompson
- Centre for Brain Research, University of Auckland, New Zealand; School of Optometry and Vision Science, University of Auckland, Auckland, New Zealand; School of Optometry and Vision Science, University of Waterloo, Waterloo, Canada
- Paul Corballis
- Centre for Brain Research, University of Auckland, New Zealand; Department of Psychology, University of Auckland, Auckland, New Zealand
- Grant Searchfield
- Section of Audiology, University of Auckland, Auckland, New Zealand; Centre for Brain Research, University of Auckland, New Zealand; Brain Research New Zealand - Rangahau Roro Aotearoa, New Zealand
21
Bazilinskyy P, de Winter J. Crowdsourced Measurement of Reaction Times to Audiovisual Stimuli With Various Degrees of Asynchrony. Hum Factors 2018; 60:1192-1206. [PMID: 30036098] [PMCID: PMC6207992] [DOI: 10.1177/0018720818787126]
Abstract
OBJECTIVE This study was designed to replicate past research concerning reaction times to audiovisual stimuli with different stimulus onset asynchrony (SOA) using a large sample of crowdsourcing respondents. BACKGROUND Research has shown that reaction times are fastest when an auditory and a visual stimulus are presented simultaneously and that SOA causes an increase in reaction time, this increase being dependent on stimulus intensity. Research on audiovisual SOA has been conducted with small numbers of participants. METHOD Participants (N = 1,823) each performed 176 reaction time trials consisting of 29 SOA levels and three visual intensity levels, using CrowdFlower, with a compensation of US$0.20 per participant. Results were verified with a local Web-in-lab study (N = 34). RESULTS The results replicated past research, with a V shape of mean reaction time as a function of SOA, the V shape being stronger for lower-intensity visual stimuli. The level of SOA affected mainly the right side of the reaction time distribution, whereas the fastest 5% was hardly affected. The variability of reaction times was higher for the crowdsourcing study than for the Web-in-lab study. CONCLUSION Crowdsourcing is a promising medium for reaction time research that involves small temporal differences in stimulus presentation. The observed effects of SOA can be explained by an independent-channels mechanism and also by some participants not perceiving the auditory or visual stimulus, hardware variability, misinterpretation of the task instructions, or lapses in attention. APPLICATION The obtained knowledge on the distribution of reaction times may benefit the design of warning systems.
Affiliation(s)
- Pavlo Bazilinskyy
- Department of BioMechanical Engineering, Faculty of Mechanical, Maritime and Materials Engineering, Delft University of Technology, Mekelweg 2, 2628 CD Delft, the Netherlands
22
Schut MJ, Van der Stoep N, Van der Stigchel S. Auditory spatial attention is encoded in a retinotopic reference frame across eye-movements. PLoS One 2018; 13:e0202414. [PMID: 30125311] [PMCID: PMC6101386] [DOI: 10.1371/journal.pone.0202414]
Abstract
The retinal location of visual information changes each time we move our eyes. Although it is now known that visual information is remapped in retinotopic coordinates across eye-movements (saccades), it is currently unclear how head-centered auditory information is remapped across saccades. Keeping track of the location of a sound source in retinotopic coordinates requires a rapid multi-modal reference frame transformation when making saccades. To reveal this reference frame transformation, we designed an experiment where participants attended an auditory or visual cue and executed a saccade. After the saccade had landed, an auditory or visual target could be presented either at the prior retinotopic location or at an uncued location. We observed that both auditory and visual targets presented at prior retinotopic locations were reacted to faster than targets at other locations. In a second experiment, we observed that spatial attention pointers obtained via audition are available in retinotopic coordinates immediately after an eye-movement is made. In a third experiment, we found evidence for an asymmetric cross-modal facilitation of information that is presented at the retinotopic location. In line with prior single cell recording studies, this study provides the first behavioral evidence for immediate auditory and cross-modal transsaccadic updating of spatial attention. These results indicate that our brain has efficient solutions for solving the challenges in localizing sensory input that arise in a dynamic context.
Affiliation(s)
- Martijn Jan Schut
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- Nathan Van der Stoep
- Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
23
Audiovisual integration in depth: multisensory binding and gain as a function of distance. Exp Brain Res 2018; 236:1939-1951. [PMID: 29700577] [PMCID: PMC6010498] [DOI: 10.1007/s00221-018-5274-7]
Abstract
The integration of information across sensory modalities is dependent on the spatiotemporal characteristics of the stimuli that are paired. Despite large variation in the distance over which events occur in our environment, relatively little is known regarding how stimulus-observer distance affects multisensory integration. Prior work has suggested that exteroceptive stimuli are integrated over larger temporal intervals in near relative to far space, and that larger multisensory facilitations are evident in far relative to near space. Here, we sought to examine the interrelationship between these previously established distance-related features of multisensory processing. Participants performed an audiovisual simultaneity judgment and redundant target task in near and far space, while audiovisual stimuli were presented at a range of temporal delays (i.e., stimulus onset asynchronies). In line with the previous findings, temporal acuity was poorer in near relative to far space. Furthermore, reaction time to asynchronously presented audiovisual targets suggested a temporal window for fast detection-a range of stimuli asynchronies that was also larger in near as compared to far space. However, the range of reaction times over which multisensory response enhancement was observed was limited to a restricted range of relatively small (i.e., 150 ms) asynchronies, and did not differ significantly between near and far space. Furthermore, for synchronous presentations, these distance-related (i.e., near vs. far) modulations in temporal acuity and multisensory gain correlated negatively at an individual subject level. Thus, the findings support the conclusion that multisensory temporal binding and gain are asymmetrically modulated as a function of distance from the observer, and specifies that this relationship is specific for temporally synchronous audiovisual stimulus presentations.
24
Minakata K, Gondan M. Differential coactivation in a redundant signals task with weak and strong go/no-go stimuli. Q J Exp Psychol (Hove) 2018; 72:922-929. [PMID: 29642781] [DOI: 10.1177/1747021818772033]
Abstract
When participants respond to stimuli of two sources, response times (RTs) are often faster when both stimuli are presented together relative to the RTs obtained when presented separately (redundant signals effect [RSE]). Race models and coactivation models can explain the RSE. In race models, separate channels process the two stimulus components, and the faster processing time determines the overall RT. In audiovisual experiments, the RSE is often higher than predicted by race models, and coactivation models have been proposed that assume integrated processing of the two stimuli. Where does coactivation occur? We implemented a go/no-go task with randomly intermixed weak and strong auditory, visual, and audiovisual stimuli. In one experimental session, participants had to respond to strong stimuli and withhold their response to weak stimuli. In the other session, these roles were reversed. Interestingly, coactivation was only observed in the experimental session in which participants had to respond to strong stimuli. If weak stimuli served as targets, results were widely consistent with the race model prediction. The pattern of results contradicts the inverse effectiveness law. We present two models that explain the result in terms of absolute and relative thresholds.
Affiliation(s)
- Katsumi Minakata
- DTU Management Engineering, Technical University of Denmark, Kongens Lyngby, Denmark
- Matthias Gondan
- Department of Psychology, University of Copenhagen, Copenhagen, Denmark
25
Domínguez-Borràs J, Rieger SW, Corradi-Dell'Acqua C, Neveu R, Vuilleumier P. Fear Spreading Across Senses: Visual Emotional Events Alter Cortical Responses to Touch, Audition, and Vision. Cereb Cortex 2017; 27:68-82. [PMID: 28365774] [PMCID: PMC5939199] [DOI: 10.1093/cercor/bhw337]
Abstract
Attention and perception are potentiated for emotionally significant stimuli, promoting efficient reactivity and survival. But does such enhancement extend to stimuli simultaneously presented across different sensory modalities? We used functional magnetic resonance imaging in humans to examine the effects of visual emotional signals on concomitant sensory inputs in auditory, somatosensory, and visual modalities. First, we identified sensory areas responsive to task-irrelevant tones, touches, or flickers, presented bilaterally while participants attended to either a neutral or a fearful face. Then, we measured whether these responses were modulated by the emotional content of the face. Sensory responses in primary cortices were enhanced for auditory and tactile stimuli when these appeared with fearful faces, compared with neutral, but striate cortex responses to the visual stimuli were reduced in the left hemisphere, plausibly as a consequence of sensory competition. Finally, conjunction and functional connectivity analyses identified 2 distinct networks presumably responsible for these emotional modulatory processes, involving cingulate, insular, and orbitofrontal cortices for the increased sensory responses, and ventrolateral prefrontal cortex for the decreased sensory responses. These results suggest that emotion tunes the excitability of sensory systems across multiple modalities simultaneously, allowing the individual to adaptively process incoming inputs in a potentially threatening environment.
Affiliation(s)
- Judith Domínguez-Borràs
- Laboratory for Behavioral Neurology and Imaging of Cognition, Department of Neuroscience, University Medical Center, CH-1211 Geneva, Switzerland
- Swiss Center for Affective Sciences, University of Geneva, Campus Biotech, CH-1202 Geneva, Switzerland
- Sebastian Walter Rieger
- Swiss Center for Affective Sciences, University of Geneva, Campus Biotech, CH-1202 Geneva, Switzerland
- Geneva Neuroscience Center, University of Geneva, CH-1211 Geneva, Switzerland
- Corrado Corradi-Dell'Acqua
- Laboratory for Behavioral Neurology and Imaging of Cognition, Department of Neuroscience, University Medical Center, CH-1211 Geneva, Switzerland
- Swiss Center for Affective Sciences, University of Geneva, Campus Biotech, CH-1202 Geneva, Switzerland
- Department of Psychology, FPSE, University of Geneva, CH-1205, Geneva, Switzerland
- Rémi Neveu
- Laboratory for Behavioral Neurology and Imaging of Cognition, Department of Neuroscience, University Medical Center, CH-1211 Geneva, Switzerland
- Swiss Center for Affective Sciences, University of Geneva, Campus Biotech, CH-1202 Geneva, Switzerland
- Patrik Vuilleumier
- Laboratory for Behavioral Neurology and Imaging of Cognition, Department of Neuroscience, University Medical Center, CH-1211 Geneva, Switzerland
- Swiss Center for Affective Sciences, University of Geneva, Campus Biotech, CH-1202 Geneva, Switzerland
- Geneva Neuroscience Center, University of Geneva, CH-1211 Geneva, Switzerland
- Department of Neurology, University Hospital, CH-1211 Geneva, Switzerland
26
Van der Stoep N, Van der Stigchel S, Nijboer TCW, Spence C. Visually Induced Inhibition of Return Affects the Integration of Auditory and Visual Information. Perception 2016; 46:6-17. [DOI: 10.1177/0301006616661934]
Abstract
Multisensory integration (MSI) and exogenous spatial attention can both speed up responses to perceptual events. Recently, it has been shown that audiovisual integration at exogenously attended locations is reduced relative to unattended locations. This effect was observed at short cue-target intervals (200–250 ms). At longer intervals, however, the initial benefits of exogenous shifts of spatial attention at the cued location are often replaced by response time (RT) costs (also known as Inhibition of Return, IOR). Given these opposing cueing effects at shorter versus longer intervals, we decided to investigate whether MSI would also be affected by IOR. Uninformative exogenous visual spatial cues were presented between 350 and 450 ms prior to the onset of auditory, visual, and audiovisual targets. As expected, IOR was observed for visual targets (invalid cue RT < valid cue RT). For auditory and audiovisual targets, neither IOR nor any spatial cueing effects were observed. The amount of relative multisensory response enhancement and race model inequality violation was larger for uncued as compared with cued locations, indicating that IOR reduces MSI. The results are discussed in the context of changes in unisensory signal strength at cued as compared with uncued locations.
Affiliation(s)
- N. Van der Stoep
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- S. Van der Stigchel
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- T. C. W. Nijboer
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands; Brain Center Rudolf Magnus and Center of Excellence for Rehabilitation Medicine, University Medical Center Utrecht, Utrecht, The Netherlands; and De Hoogstraat Rehabilitation, Utrecht, The Netherlands
- C. Spence
- Department of Experimental Psychology, Oxford University, Oxford, UK
27
Diederich A, Colonius H, Kandil FI. Prior knowledge of spatiotemporal configuration facilitates crossmodal saccadic response. Exp Brain Res 2016; 234:2059-2076. [DOI: 10.1007/s00221-016-4609-5]
28
van der Stoep N, Serino A, Farnè A, Di Luca M, Spence C. Depth: the Forgotten Dimension in Multisensory Research. Multisens Res 2016. [DOI: 10.1163/22134808-00002525]
Abstract
The last quarter of a century has seen a dramatic rise of interest in the spatial constraints on multisensory integration. However, until recently, the majority of this research has investigated integration in the space directly in front of the observer. The space around us, however, extends in three spatial dimensions in the front and to the rear beyond such a limited area. The question to be addressed in this review concerns whether multisensory integration operates according to the same rules throughout the whole of three-dimensional space. The results reviewed here not only show that the space around us seems to be divided into distinct functional regions, but they also suggest that multisensory interactions are modulated by the region of space in which stimuli happen to be presented. We highlight a number of key limitations with previous research in this area, including: (1) The focus on only a very narrow region of two-dimensional space in front of the observer; (2) the use of static stimuli in most research; (3) the study of observers who themselves have been mostly static; and (4) the study of isolated observers. All of these factors may change the way in which the senses interact at any given distance, as can the emotional state/personality of the observer. In summarizing these salient issues, we hope to encourage researchers to consider these factors in their own research in order to gain a better understanding of the spatial constraints on multisensory integration as they affect us in our everyday life.
Affiliation(s)
- N. van der Stoep
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- A. Serino
- Center for Neuroprosthetics, EPFL, Lausanne, Switzerland
- A. Farnè
- ImpAct Team, Lyon Neuroscience Research Center, INSERM U1028, CNRS UMR5292, 69000 Lyon, France
- M. Di Luca
- School of Psychology, CNCR, University of Birmingham, Birmingham, United Kingdom
- C. Spence
- Department of Experimental Psychology, Oxford University, Oxford, United Kingdom