1
Einhäuser W, Neubert CR, Grimm S, Bendixen A. High visual salience of alert signals can lead to a counterintuitive increase of reaction times. Sci Rep 2024;14:8858. [PMID: 38632303] [PMCID: PMC11024089] [DOI: 10.1038/s41598-024-58953-4]
Abstract
It is often assumed that rendering an alert signal more salient yields faster responses to this alert. Yet, there might be a trade-off between attracting attention and distracting from task execution. Here we tested this in four behavioral experiments with eye-tracking using an abstract alert-signal paradigm. Participants performed a visual discrimination task (primary task) while occasional alert signals occurred in the visual periphery, accompanied by a congruently lateralized tone. Participants had to respond to the alert before proceeding with the primary task. When the visual salience (contrast) or auditory salience (tone intensity) of the alert was increased, participants directed their gaze to the alert more quickly. This confirms that more salient alerts attract attention more efficiently. Increasing auditory salience yielded quicker responses for the alert and primary tasks, apparently confirming faster responses altogether. However, increasing visual salience did not yield similar benefits: instead, it increased the time between fixating the alert and responding, as high-salience alerts interfered with alert-task execution. Such task interference by high-salience alert signals counteracts their more efficient attentional guidance. The design of alert signals must be adapted to a "sweet spot" that optimizes this stimulus-dependent trade-off between maximally rapid attentional orienting and minimal task interference.
Affiliation(s)
- Wolfgang Einhäuser: Physics of Cognition Group, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
- Christiane R Neubert: Cognitive Systems Lab, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
- Sabine Grimm: Physics of Cognition Group, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany; BioCog - Cognitive and Biological Psychology, Institute of Psychology, Leipzig University, Leipzig, Germany
- Alexandra Bendixen: Cognitive Systems Lab, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
2
Wang X, Ren P, Miao X, Zhang X, Qian Y, Chi L. Attention Load Regulates the Facilitation of Audio-Visual Information on Landing Perception in Badminton. Percept Mot Skills 2023;130:1687-1713. [PMID: 37284745] [DOI: 10.1177/00315125231180893]
Abstract
Based on the high temporal sensitivity of the auditory modality and the advantage of audio-visual integration in motion perception and anticipation, we investigated the effect of audio-visual information on landing perception in badminton in two experiments and explored the regulatory role of attention load. Experienced badminton players were asked to predict the landing position of the shuttle under video (visual) or audio-video (audio-visual) presentation, while we manipulated flight information or attention load. Experiment 1 showed that the addition of auditory information facilitated landing perception regardless of whether the visual information was rich, that is, whether or not it contained the early flight trajectory. Experiment 2 showed that attention load regulated this facilitation of multi-modal integration: the benefit of audio-visual information was impaired under high load, indicating that audio-visual integration tends to be guided by attention in a top-down manner. The results support the superiority effect of multi-modal integration and suggest that adding auditory perception training to sports training could significantly improve athletes' performance.
Affiliation(s)
- Xiaoting Wang: School of Psychology, Beijing Sport University, Beijing, China
- Pengfei Ren: School of Physical Education, Yan'an University, Yan'an, China
- Xiuying Miao: School of Psychology, Beijing Sport University, Beijing, China
- Xin Zhang: School of Psychology, Beijing Sport University, Beijing, China
- Yiming Qian: Department of Psychology, Tsinghua University, Beijing, China
- Lizhong Chi: School of Psychology, Beijing Sport University, Beijing, China
3
Tang X, Yuan M, Shi Z, Gao M, Ren R, Wei M, Gao Y. Multisensory integration attenuates visually induced oculomotor inhibition of return. J Vis 2022;22:7. [PMID: 35297999] [PMCID: PMC8944392] [DOI: 10.1167/jov.22.4.7]
Abstract
Inhibition of return (IOR) is a mechanism of the attention system involving a bias toward novel stimuli and delayed responses to targets at previously attended locations. According to the two-component theory, IOR consists of a perceptual component and an oculomotor component (oculomotor IOR [O-IOR]), depending on whether the eye movement system is activated. Previous studies have shown that multisensory integration weakens IOR when attention is paid to both the visual and auditory modalities. However, it remains unclear whether this attenuation of the O-IOR effect by multisensory integration also occurs when the oculomotor system is activated. Here, in two eye movement experiments, we investigated the effect of multisensory integration on O-IOR using the exogenous spatial cueing paradigm. In Experiment 1, we found a greater visual O-IOR effect compared with audiovisual and auditory O-IOR in divided modality attention. The relative multisensory response enhancement (rMRE) and violations of Miller's bound showed a greater magnitude of multisensory integration in the cued location than in the uncued location. In Experiment 2, the magnitude of the audiovisual O-IOR effect was significantly smaller than that of the visual O-IOR in single visual modality selective attention. We discuss the implications for the effect of multisensory integration on O-IOR under conditions of oculomotor system activation, shedding new light on the two-component theory of IOR.
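The two integration measures named above are standard, computable quantities: Miller's bound (the race-model inequality) states that, without integration, the audiovisual CDF can never exceed the sum of the unisensory CDFs, F_AV(t) ≤ F_A(t) + F_V(t), and the rMRE expresses the audiovisual speed-up relative to the faster unisensory mean RT. A minimal sketch of both checks, assuming synthetic reaction-time arrays in milliseconds (the data and function names are illustrative, not the authors' analysis code):

```python
import numpy as np

def cdf(rts, t_grid):
    """Empirical cumulative distribution function of reaction times."""
    rts = np.asarray(rts)
    return np.array([(rts <= t).mean() for t in t_grid])

def miller_bound_violation(rt_a, rt_v, rt_av, t_grid):
    """Race-model (Miller) inequality: F_AV(t) <= F_A(t) + F_V(t).
    Positive values indicate violations, i.e., evidence of integration."""
    bound = np.minimum(cdf(rt_a, t_grid) + cdf(rt_v, t_grid), 1.0)
    return cdf(rt_av, t_grid) - bound

def rmre(rt_a, rt_v, rt_av):
    """Relative multisensory response enhancement (%): speed-up of the
    mean audiovisual RT relative to the faster unisensory mean RT."""
    fastest_uni = min(np.mean(rt_a), np.mean(rt_v))
    return 100.0 * (fastest_uni - np.mean(rt_av)) / fastest_uni

# Illustrative synthetic data (ms); real analyses use per-condition trials.
rng = np.random.default_rng(0)
rt_a = rng.normal(420, 60, 200)   # auditory-only trials
rt_v = rng.normal(400, 60, 200)   # visual-only trials
rt_av = rng.normal(360, 55, 200)  # audiovisual trials
t = np.linspace(250, 600, 50)
print(f"max Miller violation: {miller_bound_violation(rt_a, rt_v, rt_av, t).max():.3f}")
print(f"rMRE: {rmre(rt_a, rt_v, rt_av):.1f}%")
```

On real data, the violation curve is usually inspected over the fastest RT quantiles, where race-model violations are expected to emerge.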
Affiliation(s)
- Xiaoyu Tang: School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Mengying Yuan: School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Zhongyu Shi: School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Min Gao: School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Rongxia Ren: Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Ming Wei: School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Yulin Gao: Department of Psychology, Jilin University, Changchun, China
4
Peng X, Jiang H, Yang J, Shi R, Feng J, Liang Y. Effects of Temporal Characteristics on Pilots Perceiving Audiovisual Warning Signals Under Different Perceptual Loads. Front Psychol 2022;13:808150. [PMID: 35222196] [PMCID: PMC8867071] [DOI: 10.3389/fpsyg.2022.808150]
Abstract
Our research aimed to investigate the effectiveness of auditory, visual, and audiovisual warning signals for capturing a pilot's attention, and how stimulus onset asynchronies (SOAs) in audiovisual stimuli affect pilots' perception of bimodal warning signals under different perceptual load conditions. In Experiment 1 (low perceptual load), participants discriminated the location (right vs. left) of visual targets preceded by five different types of warning signals. In Experiment 2 (high perceptual load), participants completed the same location task plus a digit detection task in a rapid serial visual presentation (RSVP) stream. The main effect of warning signals in both experiments showed that visual and auditory cues presented simultaneously (AV) could effectively and efficiently capture the attention of pilots under both high and low load conditions. Specifically, auditory (A), AV, and visual-preceding-auditory-by-100-ms (VA100) signals increased spatial orienting to the valid position under low load. As visual perceptual load increased, auditory-preceding-visual-by-100-ms (AV100) and A warning signals produced stronger spatial orienting. The results are expected to support the optimized design of cockpit display interfaces, contributing to immediate flight crew awareness.
Affiliation(s)
- Xing Peng: Institute of Aviation Human Factors and Cognitive Neuroscience, College of Flight Technology, Civil Aviation Flight University of China, Guanghan, China
- Hao Jiang: Institute of Aviation Human Factors and Cognitive Neuroscience, College of Flight Technology, Civil Aviation Flight University of China, Guanghan, China
- Jiazhong Yang: Institute of Aviation Human Factors and Cognitive Neuroscience, College of Flight Technology, Civil Aviation Flight University of China, Guanghan, China
- Rong Shi: Institute of Aviation Human Factors and Cognitive Neuroscience, College of Flight Technology, Civil Aviation Flight University of China, Guanghan, China
- Junyi Feng: Technical Support Center, Operation Control Department, Beijing Capital Airlines, Beijing, China
- Yaowei Liang: Institute of Aviation Human Factors and Cognitive Neuroscience, College of Flight Technology, Civil Aviation Flight University of China, Guanghan, China; Flying Department of Southwest Branch, Air China Limited, Chengdu, China
5
Guiding spatial attention by multimodal reward cues. Atten Percept Psychophys 2021;84:655-670. [PMID: 34964093] [DOI: 10.3758/s13414-021-02422-x]
Abstract
Our attention is constantly captured and guided by visual and/or auditory inputs. One key contributor to selecting relevant information from the environment is reward prospect. Intriguingly, while both multimodal signal processing and reward effects on attention have been widely studied, research on multimodal reward signals is lacking. Here, we investigated this using a Posner task featuring peripheral cues of different modalities (audiovisual/visual/auditory), reward prospect (reward/no-reward), and cue-target stimulus-onset asynchronies (SOAs 100-1,300 ms). We found that audiovisual and visual reward cues (but not auditory ones) enhanced cue-validity effects, albeit with different time courses (Experiment 1). While the reward-modulated validity effect of visual cues was pronounced at short SOAs, the effect of audiovisual reward cues emerged at longer SOAs. Follow-up experiments exploring the effects of visual (Experiment 2) and auditory (Experiment 3) reward cues in isolation showed that reward modulated performance only in the visual condition. This suggests that the differential effect of visual and auditory reward cues in Experiment 1 is not merely a result of the mixed cue context, but confirms that visual reward cues have a stronger impact on attentional guidance in this paradigm. Taken together, it seems that adding an auditory reward cue to the inherently dominant visual one led to a shift/extension of the validity effect in time, rather than increasing its amplitude. While generally in line with a multimodal cuing benefit, this specific pattern highlights that different reward signals are not simply combined in a linear fashion but lead to a qualitatively different process.
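The cue-validity effect modulated in these experiments is the difference between mean RTs on invalidly and validly cued trials, computed per cue modality, reward condition, and SOA. A minimal sketch of that computation, assuming a hypothetical trial-level table (column names and values are illustrative):

```python
import pandas as pd

# Hypothetical trial-level data: one row per trial, RT in ms.
trials = pd.DataFrame({
    "modality": ["visual", "visual", "audiovisual", "audiovisual"],
    "reward":   ["reward", "reward", "no-reward", "no-reward"],
    "soa_ms":   [100, 100, 1300, 1300],
    "validity": ["valid", "invalid", "valid", "invalid"],
    "rt_ms":    [352.0, 389.0, 401.0, 404.0],
})

# Validity effect = mean RT(invalid) - mean RT(valid) per condition cell.
mean_rt = trials.pivot_table(index=["modality", "reward", "soa_ms"],
                             columns="validity", values="rt_ms")
mean_rt["validity_effect_ms"] = mean_rt["invalid"] - mean_rt["valid"]
print(mean_rt)
```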
6
Mendonça R, Garrido MV, Semin GR. The Effect of Simultaneously Presented Words and Auditory Tones on Visuomotor Performance. Multisens Res 2021;34:1-28. [PMID: 34062511] [DOI: 10.1163/22134808-bja10052]
Abstract
The experiment reported here used a variation of the spatial cueing task to examine the effects of unimodal and bimodal attention-orienting primes on target identification latencies and eye gaze movements. The primes were a nonspatial auditory tone and visual words known to drive attention in line with the dominant writing and reading direction, introducing a semantic temporal bias (past-future) on the horizontal dimension. As expected, past-related word primes gave rise to shorter response latencies in the left hemifield and future-related words in the right. This congruency effect was differentiated by asymmetric performance in the right hemispace following future words, and was driven by the left-to-right trajectory of scanning habits, which facilitated search times and eye gaze movements to lateralized targets. The auditory tone prime alone acted as an alarm signal, boosting visual search and reducing response latencies. Bimodal priming, i.e., temporal visual words paired with the auditory tone, impaired performance by delaying visual attention and response times relative to the unimodal visual word condition. We conclude that bimodal primes were no more effective in capturing participants' spatial attention than the unimodal auditory and visual primes. Their contribution to the literature on multisensory integration is discussed.
Affiliation(s)
- Rita Mendonça: William James Center for Research, ISPA - Instituto Universitário, Rua Jardim do Tabaco, 34, 1149-041 Lisboa, Portugal
- Margarida V Garrido: Iscte - Instituto Universitário de Lisboa, Cis-Iscte, Av. Das Forças Armadas, 1649-026 Lisboa, Portugal
- Gün R Semin: William James Center for Research, ISPA - Instituto Universitário, Rua Jardim do Tabaco, 34, 1149-041 Lisboa, Portugal; Faculty of Social and Behavioral Sciences, Utrecht University, 3584 CS Utrecht, The Netherlands
7
Ren Y, Zhang Y, Hou Y, Li J, Bi J, Yang W. Exogenous Bimodal Cues Attenuate Age-Related Audiovisual Integration. Iperception 2021;12:20416695211020768. [PMID: 34104386] [PMCID: PMC8165524] [DOI: 10.1177/20416695211020768]
Abstract
Previous studies have demonstrated that exogenous attention decreases audiovisual integration (AVI); however, whether the AVI differs when exogenous attention is elicited by bimodal versus unimodal cues, and how this effect ages, remain unclear. To clarify this matter, 20 older adults and 20 younger adults were recruited to conduct an auditory/visual discrimination task following bimodal audiovisual cues or unimodal auditory/visual cues. Responses to all stimulus types were faster in younger adults than in older adults, and responses were faster to audiovisual stimuli than to auditory or visual stimuli. Analysis using the race model revealed that the AVI was lower in the exogenous-cue conditions than in the no-cue condition for both older and younger adults. The AVI was observed in all exogenous-cue conditions for the younger adults (visual cue > auditory cue > audiovisual cue); for older adults, however, the AVI was found only in the visual-cue condition. In addition, the AVI was lower in older adults than in younger adults under the no-cue and visual-cue conditions. These results suggest that exogenous attention decreases the AVI, that the AVI is lower when exogenous attention is elicited by bimodal rather than unimodal cues, and that the AVI under exogenous attention is reduced in older adults compared with younger adults.
Affiliation(s)
- Yanna Ren: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Ying Zhang: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Yawei Hou: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Junyuan Li: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Junhao Bi: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Weiping Yang: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
8
Marucci M, Di Flumeri G, Borghini G, Sciaraffa N, Scandola M, Pavone EF, Babiloni F, Betti V, Aricò P. The impact of multisensory integration and perceptual load in virtual reality settings on performance, workload and presence. Sci Rep 2021;11:4831. [PMID: 33649348] [PMCID: PMC7921449] [DOI: 10.1038/s41598-021-84196-8]
Abstract
Real-world experience is typically multimodal. Evidence indicates that the facilitation in the detection of multisensory stimuli is modulated by perceptual load, the amount of information involved in processing the stimuli. Here, we used a realistic virtual reality environment while concomitantly acquiring electroencephalography (EEG) and galvanic skin response (GSR) data to investigate how multisensory signals impact target detection in two conditions, high and low perceptual load. Different multimodal stimuli (auditory and vibrotactile) were presented, alone or in combination with the visual target. Results showed that only in the high load condition did multisensory stimuli significantly improve performance compared to visual stimulation alone. Multisensory stimulation also decreased the EEG-based workload. The perceived workload, according to the NASA Task Load Index questionnaire, was instead reduced only by the trimodal condition (i.e., visual, auditory, tactile). This trimodal stimulation was more effective in enhancing the sense of presence, that is, the feeling of being in the virtual environment, than the bimodal or unimodal stimulation. We also show that the GSR components were higher in the high load task than in the low load condition. Finally, multimodal stimulation (visual-audio-tactile, VAT, and visual-audio, VA) induced a significant decrease in latency and a significant increase in the amplitude of the P300 potentials with respect to unimodal (visual) and bimodal visual-tactile stimulation, suggesting faster and more effective processing and detection of stimuli when auditory stimulation is included. Overall, these findings provide insights into the relationship between multisensory integration and human behavior and cognition.
Affiliation(s)
- Matteo Marucci: Department of Psychology, Sapienza University of Rome, Via dei Marsi 78, 00185, Rome, Italy; Braintrends Ltd, Rome, Italy
- Gianluca Di Flumeri: IRCCS Fondazione Santa Lucia, Rome, Italy; Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy; BrainSigns Srl, Via Sesto Celere 7/C, 00152, Rome, Italy
- Gianluca Borghini: IRCCS Fondazione Santa Lucia, Rome, Italy; Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy; BrainSigns Srl, Via Sesto Celere 7/C, 00152, Rome, Italy
- Nicolina Sciaraffa: IRCCS Fondazione Santa Lucia, Rome, Italy; Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy; BrainSigns Srl, Via Sesto Celere 7/C, 00152, Rome, Italy
- Michele Scandola: Npsy-Lab.VR, Human Sciences Department, University of Verona, Verona, Italy
- Fabio Babiloni: IRCCS Fondazione Santa Lucia, Rome, Italy; Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy; BrainSigns Srl, Via Sesto Celere 7/C, 00152, Rome, Italy; College of Computer Science and Technology, Hangzhou Dianzi University, Hangzhou, China
- Viviana Betti: Department of Psychology, Sapienza University of Rome, Via dei Marsi 78, 00185, Rome, Italy; IRCCS Fondazione Santa Lucia, Rome, Italy
- Pietro Aricò: IRCCS Fondazione Santa Lucia, Rome, Italy; Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy; BrainSigns Srl, Via Sesto Celere 7/C, 00152, Rome, Italy
9
Li Q. Semantic Congruency Modulates the Effect of Attentional Load on the Audiovisual Integration of Animate Images and Sounds. Iperception 2020;11:2041669520981096. [PMID: 33456746] [PMCID: PMC7783684] [DOI: 10.1177/2041669520981096]
Abstract
Attentional processes play a complex and multifaceted role in the integration of input from different sensory modalities. However, whether increased attentional load disrupts the audiovisual (AV) integration of common objects that involve semantic content remains unclear. Furthermore, knowledge regarding how semantic congruency interacts with attentional load to influence the AV integration of common objects is limited. We investigated these questions by examining AV integration under various attentional-load conditions. AV integration was assessed by adopting an animal identification task using unisensory (animal images and sounds) and AV stimuli (semantically congruent AV objects and semantically incongruent AV objects), while attentional load was manipulated by using a rapid serial visual presentation task. Our results indicate that attentional load did not attenuate the integration of semantically congruent AV objects. However, semantically incongruent animal sounds and images were not integrated (as there was no multisensory facilitation), and the interference effect produced by the semantically incongruent AV objects was reduced by increased attentional-load manipulations. These findings highlight the critical role of semantic congruency in modulating the effect of attentional load on the AV integration of common objects.
Affiliation(s)
- Qingqing Li: Cognitive Neuroscience Laboratory, Graduate School of Natural Science and Technology, Okayama University, Okayama, Japan
10
Ren Y, Li S, Wang T, Yang W. Age-Related Shifts in Theta Oscillatory Activity During Audio-Visual Integration Regardless of Visual Attentional Load. Front Aging Neurosci 2020;12:571950. [PMID: 33192463] [PMCID: PMC7556010] [DOI: 10.3389/fnagi.2020.571950]
Abstract
Audio-visual integration (AVI) is higher under attended than unattended conditions. Here, we explore the AVI effect when attentional resources are competed for by additional visual distractors, and its aging effect, using single- and dual-tasks. The AVI effect was higher under the single-task attentional-load condition than under the no-load and dual-task attentional-load conditions (all P < 0.05) in both older and younger groups, but it was weaker and delayed in older adults compared to younger adults under all attentional-load conditions (all P < 0.05). The non-phase-locked oscillation analysis of AVI showed higher theta and alpha oscillatory activity for the single-task attentional-load condition than for the no-load and dual-task conditions, and the AVI oscillatory activity occurred mainly at Cz, CP1 and Oz in older adults but at Fz, FC1, and Cz in younger adults. In the dual task, the AVI effect was significantly negatively correlated with theta activity at FC1 (r2 = 0.1468, P = 0.05) and Cz (r2 = 0.1447, P = 0.048) and with alpha activity at Fz (r2 = 0.1557, P = 0.043), FC1 (r2 = 0.1042, P = 0.008), and Cz (r2 = 0.0897, P = 0.010) in older adults but not in younger adults. These results suggest a reduction in AVI ability for peripheral stimuli in older adults and a shift of AVI oscillation from anterior to posterior regions as an adaptive mechanism.
Affiliation(s)
- Yanna Ren: Department of Psychology, College of Humanities and Management, Guizhou University of Traditional Chinese Medicine, Guiyang, China
- Shengnan Li: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Tao Wang: Department of Light and Chemical Engineering, Guizhou Light Industry Technical College, Guiyang, China
- Weiping Yang: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
11
Zuanazzi A, Noppeney U. The Intricate Interplay of Spatial Attention and Expectation: a Multisensory Perspective. Multisens Res 2020;33:383-416. [DOI: 10.1163/22134808-20201482]
Abstract
Attention (i.e., task relevance) and expectation (i.e., signal probability) are two critical top-down mechanisms guiding perceptual inference. Attention prioritizes processing of information that is relevant for observers’ current goals. Prior expectations encode the statistical structure of the environment. Research to date has mostly conflated spatial attention and expectation. Most notably, the Posner cueing paradigm manipulates spatial attention using probabilistic cues that indicate where the subsequent stimulus is likely to be presented. Only recently have studies attempted to dissociate the mechanisms of attention and expectation and characterized their interactive (i.e., synergistic) or additive influences on perception. In this review, we will first discuss methodological challenges that are involved in dissociating the mechanisms of attention and expectation. Second, we will review research that was designed to dissociate attention and expectation in the unisensory domain. Third, we will review the broad field of crossmodal endogenous and exogenous spatial attention that investigates the impact of attention across the senses. This raises the critical question of whether attention relies on amodal or modality-specific mechanisms. Fourth, we will discuss recent studies investigating the role of both spatial attention and expectation in multisensory perception, where the brain constructs a representation of the environment based on multiple sensory inputs. We conclude that spatial attention and expectation are closely intertwined in almost all circumstances of everyday life. Yet, despite their intimate relationship, attention and expectation rely on partly distinct neural mechanisms: while attentional resources are mainly shared across the senses, expectations can be formed in a modality-specific fashion.
Affiliation(s)
- Arianna Zuanazzi: Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, UK; Department of Psychology, New York University, New York, NY, USA
- Uta Noppeney: Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, UK; Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, The Netherlands
12
Cue-target onset asynchrony modulates interaction between exogenous attention and audiovisual integration. Cogn Process 2020;21:261-270. [PMID: 31953644] [DOI: 10.1007/s10339-020-00950-2]
Abstract
Previous studies have shown that exogenous attention decreases audiovisual integration (AVI); however, whether the interaction between exogenous attention and AVI is influenced by cue-target onset asynchrony (CTOA) remains unclear. To clarify this matter, twenty participants were recruited to perform an auditory/visual discrimination task and were instructed to respond to the target stimuli as rapidly and accurately as possible. Analysis of the mean response times showed an effective cueing effect under all cued conditions and significant response facilitation for all audiovisual stimuli. A further comparison of the differences between the audiovisual cumulative distribution functions (CDFs) and the race model CDFs showed that the AVI latency was shortened under the cued condition relative to the no-cue condition, with a significant break point at a CTOA of 200 ms: the AVI decreased from 100 to 200 ms and increased from 200 to 400 ms. These results indicate different mechanisms for the interaction between exogenous attention and the AVI under shorter and longer CTOA conditions, and further suggest that there may be a temporal window in which the AVI effect is mainly affected by exogenous attention, whereas endogenous attention might interfere with the interaction once this window is exceeded.
13
Multisensory feature integration in (and out) of the focus of spatial attention. Atten Percept Psychophys 2019;82:363-376. [DOI: 10.3758/s13414-019-01813-5]
14
Stenzel H, Francombe J, Jackson PJB. Limits of Perceived Audio-Visual Spatial Coherence as Defined by Reaction Time Measurements. Front Neurosci 2019;13:451. [PMID: 31191211] [PMCID: PMC6538976] [DOI: 10.3389/fnins.2019.00451]
Abstract
The ventriloquism effect describes the phenomenon of audio and visual signals with common features, such as a voice and a talking face, merging perceptually into one percept even when they are spatially misaligned. The boundaries of this fusion of spatially misaligned stimuli are of interest for the design of multimedia products, to ensure a perceptually satisfactory experience. They have mainly been studied using continuous judgment scales and forced-choice measurement methods, with results that vary greatly between studies. The current experiment evaluates audio-visual fusion using reaction time (RT) measurements as an indirect measure, to overcome this variability. A two-alternative forced-choice (2AFC) word recognition test was designed and tested with noise and multi-talker speech background distractors. Visual signals were presented centrally and audio signals were presented at between 0° and 31° of audio-visual offset in azimuth. RT data were analyzed separately for the underlying Simon effect and for attentional effects. In the case of the attentional effects, three models were identified, but no single model could explain the observed RTs for all participants, so data were grouped and analyzed accordingly. The results show that significant differences in RTs are measured from 5° to 10° onwards for the Simon effect. The attentional effect varied at the same audio-visual offset for two of the three defined participant groups. In contrast with prior research, these results suggest that, even for speech signals, small audio-visual offsets influence spatial integration subconsciously.
Affiliation(s)
- Hanne Stenzel: Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, United Kingdom
- Philip J. B. Jackson: Centre for Vision, Speech and Signal Processing, University of Surrey, Guildford, United Kingdom
15
Marsja E, Marsh JE, Hansson P, Neely G. Examining the Role of Spatial Changes in Bimodal and Uni-Modal To-Be-Ignored Stimuli and How They Affect Short-Term Memory Processes. Front Psychol 2019;10:299. [PMID: 30914983] [PMCID: PMC6421315] [DOI: 10.3389/fpsyg.2019.00299]
Abstract
This study examines the potential vulnerability of short-term memory processes to distraction by spatial changes within to-be-ignored bimodal, vibratory, and auditory stimuli. Participants were asked to recall sequences of serially presented digits or locations of dots while being exposed to to-be-ignored stimuli. On unexpected occasions, the bimodal to-be-ignored sequence, vibratory to-be-ignored sequence, or auditory to-be-ignored sequence changed its spatial origin from one side of the body (e.g., ear and arm, arm only, ear only) to the other. It was expected that the bimodal stimuli would make the spatial change more salient than the uni-modal stimuli and that this, in turn, would yield greater disruption of serial short-term memory in both the verbal and spatial domains. Our results support this assumption, as a disruptive effect of the spatial deviant was only observed when it was presented within the bimodal to-be-ignored sequence: uni-modal to-be-ignored sequences, whether vibratory or auditory, had no impact on either verbal or spatial short-term memory. Implications for models of attention capture and the potential special attention-capturing role of bimodal stimuli are discussed.
Affiliation(s)
- Erik Marsja: Department of Psychology, Umeå University, Umeå, Sweden
- John E Marsh: School of Psychology, University of Central Lancashire, Preston, United Kingdom
- Gregory Neely: Department of Psychology, Umeå University, Umeå, Sweden
16
Hemispheric asymmetry: Looking for a novel signature of the modulation of spatial attention in multisensory processing. Psychon Bull Rev 2018;24:690-707. [PMID: 27586002] [PMCID: PMC5486865] [DOI: 10.3758/s13423-016-1154-y]
Abstract
The extent to which attention modulates multisensory processing in a top-down fashion is still a subject of debate among researchers. Typically, cognitive psychologists interested in this question have manipulated the participants’ attention in terms of single/dual tasking or focal/divided attention between sensory modalities. We suggest an alternative approach, one that builds on the extensive older literature highlighting hemispheric asymmetries in the distribution of spatial attention. Specifically, spatial attention in vision, audition, and touch is typically biased preferentially toward the right hemispace, especially under conditions of high perceptual load. We review the evidence demonstrating such an attentional bias toward the right in extinction patients and healthy adults, along with the evidence of such rightward-biased attention in multisensory experimental settings. We then evaluate those studies that have demonstrated either a more pronounced multisensory effect in right than in left hemispace, or else similar effects in the two hemispaces. The results suggest that the influence of rightward-biased attention is more likely to be observed when the crossmodal signals interact at later stages of information processing and under conditions of higher perceptual load—that is, conditions under which attention is perhaps a compulsory enhancer of information processing. We therefore suggest that the spatial asymmetry in attention may provide a useful signature of top-down attentional modulation in multisensory processing.
17
Development of a puff- and suction-type pressure stimulator for human tactile studies. Behav Res Methods 2017;50:703-710. [PMID: 28411335] [DOI: 10.3758/s13428-017-0895-5]
Abstract
In this study, we developed a tactile stimulator capable of administering either puff- or suction-type stimuli. The system is composed of three parts: a control unit, an air-handling unit, and a stimulation unit. The control unit controls the type, intensity, and timing of stimulation. The air-handling unit delivers the stimulation power quantitatively to the stimulation unit, as commanded by the control unit. The stimulation unit stably administers either type of pressure to the skin, without any change of the tactor. Although the design of the stimulator is simple, it allows five levels of stimulation intensity (2-6 psi) and control of stimulation time in 0.1-s steps, as confirmed by tests. Preliminary electroencephalographic and event-related potential (ERP) studies of our system in humans confirmed the presence of N100 and P300 waves at standard electrode position C3, which are related to perception and cognition, respectively, in the somatosensory area of the brain. In addition, different stimulation types (puff and suction) and intensities (2 and 6 psi) were reflected in different peak-to-peak amplitudes and slopes of the mean ERP signal. The system developed in this study is expected to contribute to human tactile studies by providing the ability to administer puff- or suction-type stimuli interchangeably.
18
Gibney KD, Aligbe E, Eggleston BA, Nunes SR, Kerkhoff WG, Dean CL, Kwakye LD. Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity. Front Integr Neurosci 2017;11:1. [PMID: 28163675] [PMCID: PMC5247431] [DOI: 10.3389/fnint.2017.00001]
Abstract
The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and a McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors were present), low load (multisensory task while detecting the presence of a yellow letter among the visual distractors), and high load (multisensory task while detecting the presence of a number among the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than on unisensory trials. However, multisensory response times violated the race model under the no and low perceptual load conditions only. Additionally, a geometric measure of Miller's inequality showed a decrease in multisensory integration in the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration in the no load condition: no change in integration for the McGurk task with increasing load, but increases in integration for the detection task. These results indicate that attention plays a crucial role in multisensory integration for both highly complex and simple multisensory tasks, and that attention may interact differently with multisensory processing in individuals who do not strongly integrate multisensory information.
Affiliation(s)
- Kyla D Gibney: Department of Neuroscience, Oberlin College, Oberlin, OH, USA
- Sarah R Nunes: Department of Neuroscience, Oberlin College, Oberlin, OH, USA
- Leslie D Kwakye: Department of Neuroscience, Oberlin College, Oberlin, OH, USA
19
Ahtamad M, Spence C, Ho C, Gray R. Warning Drivers about Impending Collisions Using Vibrotactile Flow. IEEE Trans Haptics 2016;9:134-141. [PMID: 26625421] [DOI: 10.1109/toh.2015.2501798]
Abstract
Vibrotactile collision warning signals that create a sensation of motion across a driver's body result in faster brake reaction times (BRTs) to potential collision events. To date, however, such warnings have only simulated linear motion. We extended this research by exploring the effectiveness of collision warnings that incorporate vibrotactile patterns or "vibrotactile flow". In Experiment 1, expanding and contracting vibrotactile flow warnings were compared with a static warning (all tactors activated simultaneously) and a no warning condition in a car following scenario. Both vibrotactile flow warnings produced significantly faster BRTs than the static and no warning conditions. However, there was no directional effect. That is, there was no significant difference between contracting and expanding signals. Warnings that utilize vibrotactile flow therefore appear to provide an effective means of informing drivers about potential collision events. However, unlike comparable warnings utilizing linear motion, their effectiveness does not seem to depend on the precise relationship between the warning and collision events. Experiment 2 demonstrated that a tactile warning incorporating linear motion produced significantly faster BRTs than an expanding vibrotactile flow warning. Taken together, these results suggest that vibrotactile warnings that simulate linear motion may be more effective than vibrotactile flow warnings.
20
Tang X, Wu J, Shen Y. The interactions of multisensory integration with endogenous and exogenous attention. Neurosci Biobehav Rev 2015;61:208-24. [PMID: 26546734] [DOI: 10.1016/j.neubiorev.2015.11.002]
Abstract
Stimuli from multiple sensory organs can be integrated into a coherent representation through multiple phases of multisensory processing; this phenomenon is called multisensory integration. Multisensory integration can interact with attention. Here, we propose a framework in which attention modulates multisensory processing in both endogenous (goal-driven) and exogenous (stimulus-driven) ways. Moreover, multisensory integration exerts not only bottom-up but also top-down control over attention. Specifically, we propose the following: (1) endogenous attentional selectivity acts on multiple levels of multisensory processing to determine the extent to which simultaneous stimuli from different modalities can be integrated; (2) integrated multisensory events exert top-down control on attentional capture via multisensory search templates that are stored in the brain; (3) integrated multisensory events can capture attention efficiently, even in quite complex circumstances, due to their increased salience compared to unimodal events and can thus improve search accuracy; and (4) within a multisensory object, endogenous attention can spread from one modality to another in an exogenous manner.
Affiliation(s)
- Xiaoyu Tang: College of Psychology, Liaoning Normal University, 850 Huanghe Road, Shahekou District, Dalian, Liaoning, 116029, China; Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, 3-1-1 Tsushima-naka, Okayama, 700-8530, Japan
- Jinglong Wu: Key Laboratory of Biomimetic Robots and System, Ministry of Education, State Key Laboratory of Intelligent Control and Decision of Complex Systems, Beijing Institute of Technology, 5 Nandajie, Zhongguancun, Haidian, Beijing 100081, China; Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, 3-1-1 Tsushima-naka, Okayama, 700-8530, Japan
- Yong Shen: Neurodegenerative Disease Research Center, School of Life Sciences, University of Science and Technology of China, CAS Key Laboratory of Brain Functions and Disease, Hefei, China; Center for Advanced Therapeutic Strategies for Brain Disorders, Roskamp Institute, Sarasota, FL 34243, USA
21
Effects of Unilateral Cochlear Implantation on Balance Control and Sensory Organization in Adult Patients with Profound Hearing Loss. Biomed Res Int 2015;2015:621845. [PMID: 26583121] [PMCID: PMC4637149] [DOI: 10.1155/2015/621845]
Abstract
Many studies have examined the consequences of vestibular dysfunction related to cochlear implantation for balance control. This pilot study aimed to assess the effects of unilateral cochlear implantation on the modalities of balance control and on sensorimotor strategies. Posturographic and vestibular evaluations were performed in 10 patients (55 ± 20 years) with profound hearing loss who were candidates for unilateral multichannel cochlear implantation. The evaluation was carried out shortly before and one year after surgery. Posturographic tests were also performed in 10 age-matched healthy participants (63 ± 16 years). Vestibular compensation was observed within one year. In addition, the postural performance of the patients increased within one year after cochlear implantation, especially in the more complex situations in which sensory information is either unavailable or conflicting. Before surgery, postural performance was higher in the control group than in the patients' group. One year after cochlear implantation, postural control was close to normal. The improvement in postural performance could be explained by a mechanism of vestibular compensation. In addition, the recovery of auditory information following cochlear implantation could lead to an extended exploration of the environment, possibly favoring the development of new balance strategies.
22
Trojan J. Representations of body and space: theoretical concepts and controversies. Cogn Process 2015. [PMID: 26224274] [DOI: 10.1007/s10339-015-0724-7]
Abstract
Recent years have seen a revived interest in how body and space are represented perceptually and how they affect human cognition and behaviour. Various conceptualisations of body and space have been proposed, alternately stressing neurophysiological, cognitive, or social aspects, but unified approaches are scarce. This short paper will give an overview of different views on body and space. At least three relevant dimensions can be identified in which concepts of body and space may differ: (1) perspective: while we conceptually differentiate between body and space perception, they imply each other and the underlying mechanisms overlap. (2) Level: representations of body and space may emerge at different processing levels, from spinal mechanisms guiding reflex movements to those we construct in our imagination. (3) Affect: representations of body and space are closely linked to affect, but this relationship has not received enough attention yet. Despite many empirical findings, our current views on body and space representations remain ambiguous. One problem may lie in the implicit diversity of "bodies" and "spaces" examined in different studies. Specifications of these concepts may help understand existing results better and are important for guiding future research.
Affiliation(s)
- Jörg Trojan: University of Koblenz-Landau, Landau, Germany
23
Ferri F, Tajadura-Jiménez A, Väljamäe A, Vastano R, Costantini M. Emotion-inducing approaching sounds shape the boundaries of multisensory peripersonal space. Neuropsychologia 2015;70:468-75. [DOI: 10.1016/j.neuropsychologia.2015.03.001]
24
Baldwin CL, Lewis BA. Perceived urgency mapping across modalities within a driving context. Appl Ergon 2014;45:1270-1277. [PMID: 23910716] [DOI: 10.1016/j.apergo.2013.05.002]
Abstract
Hazard mapping is essential to effective driver-vehicle interface (DVI) design. Determining which modality to use for situations of different criticality requires an understanding of the relative impact of signal parameters within each modality on perceptions of urgency and annoyance. Towards this goal we obtained psychometric functions for visual, auditory and tactile interpulse interval (IPI), visual color, signal word, and auditory fundamental frequency on perceptions of urgency, annoyance, and acceptability. Results indicate that manipulation of IPI in the tactile modality, relative to visual and auditory, has greater utility (greater impact on urgency than annoyance). Manipulations of color were generally rated as less annoying and more acceptable than auditory and tactile stimuli; but they were also rated as lower in urgency relative to other modality manipulations. Manipulation of auditory fundamental frequency resulted in high ratings of both urgency and annoyance. Results of the current investigation can be used to guide DVI design and evaluation.
Affiliation(s)
- Carryl L Baldwin: Department of Psychology, George Mason University, MS 3Fa, Fairfax, VA 22030, USA
- Bridget A Lewis: Department of Psychology, George Mason University, MS 3Fa, Fairfax, VA 22030, USA
25
Steenken R, Weber L, Colonius H, Diederich A. Designing driver assistance systems with crossmodal signals: multisensory integration rules for saccadic reaction times apply. PLoS One 2014;9:e92666. [PMID: 24800823] [PMCID: PMC4011748] [DOI: 10.1371/journal.pone.0092666]
Abstract
Modern driver assistance systems make increasing use of auditory and tactile signals in order to reduce the driver's visual information load. This entails potential crossmodal interaction effects that need to be taken into account in designing an optimal system. Here we show that saccadic reaction times to visual targets (cockpit or outside mirror), presented in a driving simulator environment and accompanied by auditory or tactile accessory stimuli, follow some well-known spatiotemporal rules of multisensory integration usually found under confined laboratory conditions. Auditory nontargets speed up reaction time by about 80 ms. The effect tends to be maximal when the nontarget is presented 50 ms before the target and when target and nontarget are spatially coincident. The effect of a tactile nontarget (vibrating steering wheel) was less pronounced and not spatially specific. The average reaction times are well described by the stochastic "time window of integration" (TWIN) model for multisensory integration developed by the authors. This two-stage model postulates that crossmodal interaction occurs only if the peripheral processes from the different sensory modalities terminate within a fixed temporal interval, and that the amount of crossmodal interaction manifests itself in an increase or decrease of second-stage processing time. A qualitative test is consistent with the model prediction that the probability of interaction, but not the amount of crossmodal interaction, depends on target-nontarget onset asynchrony. A quantitative model fit yields estimates of individual participants' parameters, including the size of the time window. Some consequences for the design of driver assistance systems are discussed.
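The two-stage structure described here lends itself to a direct Monte Carlo simulation. The sketch below assumes exponential first-stage (peripheral) processing times and a normally distributed second stage; all parameter values and distributional choices are illustrative assumptions, not the fitted model from this paper:

```python
import numpy as np

rng = np.random.default_rng(1)

def twin_mean_rt(soa_ms, window_ms=200.0, delta_ms=50.0, n=100_000):
    """Monte Carlo mean RT under a time-window-of-integration scheme.

    First stage: target (visual) and nontarget (auditory) peripheral
    processing times race; crossmodal interaction occurs only if the
    nontarget finishes before the target and the target terminates
    within `window_ms` of it. Second stage: central processing, sped
    up by `delta_ms` on interaction trials. `soa_ms` is the lead time
    of the nontarget relative to the target.
    """
    visual = rng.exponential(100.0, n)            # target peripheral stage
    auditory = rng.exponential(70.0, n) - soa_ms  # nontarget, shifted by SOA
    interact = (auditory < visual) & (visual - auditory < window_ms)
    second_stage = rng.normal(250.0, 30.0, n) - delta_ms * interact
    return (visual + second_stage).mean(), interact.mean()

for soa in (0, 50, 100):
    mean_rt, p_interaction = twin_mean_rt(soa)
    print(f"SOA {soa:>3} ms: mean RT {mean_rt:6.1f} ms, "
          f"P(interaction) {p_interaction:.2f}")
```

The sketch reproduces the model's qualitative signature: varying the SOA changes the probability of interaction, while the size of the crossmodal effect on interaction trials (delta_ms) stays fixed.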
Affiliation(s)
- Rike Steenken: Department of Psychology, European Medical School, Carl von Ossietzky Universität, Oldenburg, Germany
- Lars Weber: OFFIS, Department for Transportation, Human-Centred Design, Oldenburg, Germany
- Hans Colonius: Department of Psychology, Cluster of Excellence “Hearing4all”, and Research Center Neurosensory Science, European Medical School, Carl von Ossietzky Universität, Oldenburg, Germany
- Adele Diederich: School of Humanities and Social Sciences, Jacobs University, Bremen, Germany
26
Shestopalova L, Bőhm TM, Bendixen A, Andreou AG, Georgiou J, Garreau G, Hajdu B, Denham SL, Winkler I. Do audio-visual motion cues promote segregation of auditory streams? Front Neurosci 2014;8:64. [PMID: 24778604] [PMCID: PMC3985028] [DOI: 10.3389/fnins.2014.00064]
Abstract
An audio-visual experiment using moving sound sources was designed to investigate whether the analysis of auditory scenes is modulated by synchronous presentation of visual information. Listeners were presented with an alternating sequence of two pure tones delivered by two separate sound sources. In different conditions, the two sound sources were either stationary or moving on random trajectories around the listener. Both the sounds and the movement trajectories were derived from recordings in which two humans were moving with loudspeakers attached to their heads. Visualized movement trajectories modeled by a computer animation were presented together with the sounds. In the main experiment, behavioral reports on sound organization were collected from young healthy volunteers. The proportion and stability of the different sound organizations were compared between the conditions in which the visualized trajectories matched the movement of the sound sources and when the two were independent of each other. The results corroborate earlier findings that separation of sound sources in space promotes segregation. However, no additional effect of auditory movement per se on the perceptual organization of sounds was obtained. Surprisingly, the presentation of movement-congruent visual cues did not strengthen the effects of spatial separation on segregating auditory streams. Our findings are consistent with the view that bistability in the auditory modality can occur independently from other modalities.
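The "proportion and stability" measures referred to here are typically computed from continuous perceptual reports by assuming that each report holds until the next one. Below is a minimal sketch under that assumption; the report times and labels are invented for illustration.

```python
import numpy as np

def percept_stats(report_times, labels, t_end):
    """Fraction of listening time spent in each reported organization and
    mean phase duration (a simple stability index)."""
    t = np.asarray(report_times, dtype=float)
    durations = np.diff(np.append(t, t_end))  # each report holds until the next
    labels = np.asarray(labels)
    total = t_end - t[0]
    proportions = {lab: durations[labels == lab].sum() / total
                   for lab in np.unique(labels)}
    return proportions, durations.mean()

# Invented report stream: 'I' = integrated, 'S' = segregated percept
props, mean_phase = percept_stats([0, 12, 30, 41, 55], list("ISISI"), t_end=70)
print(props, f"mean phase duration = {mean_phase:.1f} s")
```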
Affiliation(s)
- Lidia Shestopalova
- Pavlov Institute of Physiology, Russian Academy of Sciences, St. Petersburg, Russia
- Tamás M Bőhm
- Research Centre for Natural Sciences, Institute of Cognitive Neuroscience and Psychology, Hungarian Academy of Sciences, Budapest, Hungary; Department of Telecommunications and Media Informatics, Budapest University of Technology and Economics, Budapest, Hungary
- Alexandra Bendixen
- Auditory Psychophysiology Lab, Department of Psychology, Cluster of Excellence "Hearing4all", European Medical School, Carl von Ossietzky University of Oldenburg, Oldenburg, Germany
- Andreas G Andreou
- Department of Electrical and Computer Engineering, Johns Hopkins University, Baltimore, MD, USA; Department of Electrical and Computer Engineering, University of Cyprus, Nicosia, Cyprus
- Julius Georgiou
- Department of Electrical and Computer Engineering, University of Cyprus, Nicosia, Cyprus
- Guillaume Garreau
- Department of Electrical and Computer Engineering, University of Cyprus, Nicosia, Cyprus
- Botond Hajdu
- Research Centre for Natural Sciences, Institute of Cognitive Neuroscience and Psychology, Hungarian Academy of Sciences, Budapest, Hungary
- Susan L Denham
- School of Psychology, Cognition Institute, University of Plymouth, Plymouth, UK
- István Winkler
- Research Centre for Natural Sciences, Institute of Cognitive Neuroscience and Psychology, Hungarian Academy of Sciences, Budapest, Hungary; Department of Cognitive and Neuropsychology, Institute of Psychology, University of Szeged, Szeged, Hungary
27
Wan X, Spence C, Mu B, Zhou X, Ho C. Assessing the benefits of multisensory audiotactile stimulation for overweight individuals. Exp Brain Res 2013;232:1085-93. [DOI: 10.1007/s00221-013-3792-x]
28
Lanz F, Moret V, Rouiller EM, Loquet G. Multisensory Integration in Non-Human Primates during a Sensory-Motor Task. Front Hum Neurosci 2013;7:799. [PMID: 24319421] [PMCID: PMC3837444] [DOI: 10.3389/fnhum.2013.00799]
Abstract
Every day our central nervous system receives inputs via several sensory modalities, processes them, and integrates the information to produce suitable behavior. Remarkably, such multisensory integration binds all of this information into a unified percept. One way to begin investigating this property is to show that perception is better and faster with multimodal than with unimodal stimuli. This forms the first part of the present study, conducted in a non-human primate model (n = 2) engaged in a sensory-motor detection task in which visual and auditory stimuli were presented individually or simultaneously. The measured parameters were the reaction time (RT) between stimulus onset and the onset of arm movement, the percentages of successes and errors, and the evolution of these parameters over the course of training. As expected, RTs were shorter when the subjects were exposed to combined stimuli: the gains for both subjects were around 20 and 40 ms relative to the auditory and visual stimulus alone, respectively. Moreover, the number of correct responses increased in response to bimodal stimuli. We interpreted this multisensory advantage in terms of the redundant signal effect, which decreases perceptual ambiguity, increases the speed of stimulus detection, and improves response accuracy. The second part of the study presents single-unit recordings from the premotor cortex (PM) of the same subjects during the sensory-motor task. Response patterns to sensory and multisensory stimulation are documented and the proportions of the specific response types are reported. The characterization of bimodal neurons points to a mechanism of audio-visual integration, possibly through a decrease of inhibition. Nevertheless, the neural processing by which PM, a polysensory association cortical area, produces faster motor responses remains unclear.
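The redundant signal effect invoked here is conventionally probed with Miller's race-model inequality, F_AV(t) ≤ F_A(t) + F_V(t): if the bimodal RT distribution exceeds that bound at any latency, a race between separate channels cannot explain the speed-up and coactivation is inferred. The sketch below shows the generic test on simulated reaction times; it illustrates the standard method, not the analysis reported in this study.

```python
import numpy as np

def ecdf(samples, t):
    """Empirical CDF of `samples` evaluated at the points in `t`."""
    s = np.sort(np.asarray(samples))
    return np.searchsorted(s, t, side="right") / s.size

def race_model_test(rt_a, rt_v, rt_av, quantiles=np.arange(5, 100, 5)):
    """Return a latency grid and F_AV - min(1, F_A + F_V);
    positive values violate the race-model inequality (coactivation)."""
    grid = np.percentile(np.concatenate([rt_a, rt_v, rt_av]), quantiles)
    bound = np.minimum(1.0, ecdf(rt_a, grid) + ecdf(rt_v, grid))
    return grid, ecdf(rt_av, grid) - bound

# Simulated RTs (ms) standing in for real data; the bimodal condition is
# ~30 ms faster, roughly matching the gains reported above
rng = np.random.default_rng(1)
rt_a = rng.normal(330, 40, 300)
rt_v = rng.normal(350, 45, 300)
rt_av = rng.normal(305, 35, 300)

grid, violation = race_model_test(rt_a, rt_v, rt_av)
print(np.round(violation[:5], 3))  # violations, if any, appear at fast quantiles
```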
Affiliation(s)
- Florian Lanz
- Domain of Physiology, Department of Medicine, Fribourg Cognition Center, University of Fribourg, Fribourg, Switzerland
29
Lewis BA, Baldwin CL. Equating Perceived Urgency Across Auditory, Visual, and Tactile Signals. Proc Hum Factors Ergon Soc Annu Meet 2012. [DOI: 10.1177/1071181312561379]
Abstract
Determining the most effective modality for drawing an operator's attention to a specific situation has been a topic of recent interest. Making this determination requires ensuring that the signals being compared have been equated for salience and perceived urgency. We conducted an experiment to examine how perceptions of urgency and annoyance change with physical parameters across the auditory, visual, and tactile modalities. While urgency ratings in the low, medium, and high range were found in each modality, parameters such as interpulse interval had a greater impact on perceived urgency than on annoyance in the auditory and tactile modalities, while having relatively little impact in the visual modality. The results can be used to design alerts and warnings with pre-specified urgency levels while minimizing annoyance, and they have implications for both research and interface design.
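One common way to equate urgency across modalities, consistent with the approach described here, is to fit each modality's urgency ratings as a function of the manipulated parameter and then invert the fits to find parameter values that produce a matched urgency level. The sketch below assumes a linear relation between urgency and log interpulse interval; all numbers are invented for illustration and do not come from the study.

```python
import numpy as np

# Invented mean urgency ratings (1-10 scale) at four interpulse intervals (ms);
# the study's actual parameter levels and ratings differ.
ipi = np.array([100.0, 200.0, 400.0, 800.0])
urgency_auditory = np.array([8.4, 6.9, 5.1, 3.2])
urgency_tactile = np.array([7.9, 6.5, 4.8, 3.0])

def ipi_for_urgency(ipi_levels, ratings, target):
    """Fit urgency ~ a*log(IPI) + b, then invert the fit to get the IPI
    expected to produce the target urgency rating."""
    slope, intercept = np.polyfit(np.log(ipi_levels), ratings, 1)
    return float(np.exp((target - intercept) / slope))

target = 6.0
print(f"auditory IPI for urgency {target}: ~{ipi_for_urgency(ipi, urgency_auditory, target):.0f} ms")
print(f"tactile  IPI for urgency {target}: ~{ipi_for_urgency(ipi, urgency_tactile, target):.0f} ms")
```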
30
Baldwin CL, Spence C, Bliss JP, Brill JC, Wogalter MS, Mayhorn CB, Ferris TK. Multimodal Cueing: The Relative Benefits of the Auditory, Visual, and Tactile Channels in Complex Environments. Proc Hum Factors Ergon Soc Annu Meet 2012. [DOI: 10.1177/1071181312561404]
Abstract
Determining the most effective modality, or combination of modalities, for presenting time-sensitive information to operators in complex environments is critical to effective display design. This panel of display-design experts will briefly review the most important empirical research on the key issues to be considered, including the temporal demands of the situation, the complexity of the information to be presented, and questions of information reliability and trust. The discussion will focus on the relative benefits and potential costs of presenting information in one modality versus another, and on the conditions under which a multisensory display may be preferable. Panelists and audience members will discuss the implications of existing knowledge for the design of alerts and warnings in complex environments such as aviation, driving, medicine, and education.
31
Barrett DJK, Krumbholz K. Evidence for multisensory integration in the elicitation of prior entry by bimodal cues. Exp Brain Res 2012;222:11-20. [PMID: 22975896] [PMCID: PMC3442165] [DOI: 10.1007/s00221-012-3191-8]
Abstract
This study reports an experiment investigating the relative effects of intramodal, crossmodal and bimodal cues on visual and auditory temporal order judgements. Pairs of visual or auditory targets, separated by varying stimulus onset asynchronies, were presented to either side of a central fixation (±45°), and participants were asked to identify the target that had occurred first. In some of the trials, one of the targets was preceded by a short, non-predictive visual, auditory or audiovisual cue stimulus. The cue and target stimuli were presented at the exact same locations in space. The point of subjective simultaneity revealed a consistent spatiotemporal bias towards targets at the cued location. For the visual targets, the intramodal cue elicited the largest, and the crossmodal cue the smallest, bias. The bias elicited by the bimodal cue fell between the intramodal and crossmodal cue biases, with significant differences between all cue types. The pattern for the auditory targets was similar apart from a scaling factor and greater variance, so the differences between the cue conditions did not reach significance. These results provide evidence for multisensory integration in exogenous attentional cueing. The magnitude of the bimodal cueing effect was equivalent to the average of the facilitation elicited by the intramodal and crossmodal cues. Under the assumption that the visual and auditory cues were equally informative, this is consistent with the notion that exogenous attention, like perception, integrates multimodal information in an optimal way.
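The point of subjective simultaneity (PSS) reported in temporal order judgement studies of this kind is usually estimated by fitting a psychometric function to the proportion of "cued target first" responses across SOAs; the 50% point is the PSS, and its shift away from zero quantifies the prior-entry bias. A minimal sketch with invented data follows (the SOA sign convention below is an assumption, not the paper's):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(soa, pss, slope):
    """P('cued target perceived first'); pss is the 50% point."""
    return 1.0 / (1.0 + np.exp(-(soa - pss) / slope))

# soa = uncued onset minus cued onset (ms), so positive values mean the
# cued target was physically first. Response proportions are invented.
soa = np.array([-90.0, -60.0, -30.0, 0.0, 30.0, 60.0, 90.0])
p_cued_first = np.array([0.08, 0.18, 0.42, 0.63, 0.81, 0.93, 0.97])

(pss, slope), _ = curve_fit(logistic, soa, p_cued_first, p0=(0.0, 20.0))
print(f"PSS = {pss:.1f} ms")  # negative: the cued side enjoys prior entry
```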
32
Thurlings ME, Brouwer AM, Van Erp JBF, Blankertz B, Werkhoven PJ. Does bimodal stimulus presentation increase ERP components usable in BCIs? J Neural Eng 2012;9:045005. [PMID: 22831989] [DOI: 10.1088/1741-2560/9/4/045005]
33
Kim HS, Choi MH, Yeon HW, Jun JH, Yi JH, Park JR, Lim DW, Chung SC. A new tactile stimulator using a planar coil type actuator. Sens Actuators A Phys 2012;178:209-216. [DOI: 10.1016/j.sna.2012.02.044]
34
Oskarsson PA, Eriksson L, Carlander O. Enhanced perception and performance by multimodal threat cueing in simulated combat vehicle. Hum Factors 2012;54:122-137. [PMID: 22409107] [DOI: 10.1177/0018720811424895]
Abstract
OBJECTIVE In a simulated combat vehicle, uni-, bi-, and trimodal cueing of the direction to a threat were compared to investigate whether multisensory redundant information can enhance dynamic perception and performance. BACKGROUND Previous research has shown that multimodal display presentation can enhance the perception of information and task performance. METHOD Two experiments were performed in a simulated combat vehicle with the instruction to turn the vehicle toward the threat as quickly and accurately as possible after threat-cue onset. In Experiment 1, the direction to the threat was presented by four display types: visual head-down display, tactile belt, 3-D audio, and a trimodal display combining the three. In Experiment 2, the direction to the threat was presented by three display types: visual head-up display (HUD) with 3-D audio, tactile belt with 3-D audio, and a trimodal display combining HUD, tactile belt, and 3-D audio. RESULTS In Experiment 1, the trimodal display provided the best overall performance and perception of threat direction. In Experiment 2, both the trimodal and HUD-3-D audio displays led to the best overall performance, and the trimodal display provided the best overall perception of threat direction. None of the trimodal displays induced higher mental workload or secondary-task interference. CONCLUSION The trimodal displays enhanced overall perception and performance in the dynamically framed threat scenario and did not entail higher mental workload or decreased spare capacity. APPLICATION Trimodal displays with redundant information may contribute to safer and more reliable peak performance in time-critical dynamic tasks, especially in extreme and stressful situations with high perceptual or mental workload.
35
Kim HS, Yeon HW, Choi MH, Kim JH, Choi JS, Park JY, Jun JH, Yi JH, Tack GR, Chung SC. Development of a tactile stimulator with simultaneous visual and auditory stimulation using E-Prime software. Comput Methods Biomech Biomed Engin 2013;16:481-7. [PMID: 22149159] [DOI: 10.1080/10255842.2011.625018]
Abstract
In this study, a tactile stimulator was developed that can stimulate the visual and auditory senses simultaneously using the E-Prime software. The study sought to address problems with systematic stimulation control and other shortcomings of previously developed tactile stimulators. The newly developed system consists of three units: a control unit, a drive unit, and a vibrator. The system is small, lightweight, and simple in structure, has low electrical consumption, and provides up to 35 stimulation channels and various combinations of visual and auditory stimulation without delay, thereby correcting the systematic problems of earlier designs. It was designed to stimulate any part of the body, including the fingers. Because the stimulator uses E-Prime, software widely used in the study of the visual and auditory senses, it is expected to be highly practical, supporting diverse stimulus combinations such as tactile-visual, tactile-auditory, visual-auditory, and tactile-visual-auditory stimulation.
Affiliation(s)
- Hyung-Sik Kim
- Department of Biomedical Engineering, College of Biomedical and Health Science, Research Institute of Biomedical Engineering, Konkuk University, 322 Danwol-dong, Chungju-si, Chungcheongbuk-do 380-701, South Korea
36
37
Sperdin HF, Cappe C, Murray MM. Auditory-somatosensory multisensory interactions in humans: dissociating detection and spatial discrimination. Neuropsychologia 2010;48:3696-705. [PMID: 20833194] [DOI: 10.1016/j.neuropsychologia.2010.09.001]
Abstract
Simple reaction times (RTs) to auditory-somatosensory (AS) multisensory stimuli are facilitated over their unisensory counterparts both when the stimuli are delivered to the same location and when they are separated. In two experiments we addressed the possibility that top-down and/or task-related influences can dynamically impact the spatial representations mediating these effects and the extent to which multisensory facilitation is observed. Participants performed a simple detection task in response to auditory, somatosensory, or simultaneous AS stimuli that were either spatially aligned or misaligned by lateralizing the stimuli. We also informed the participants that they would be retrogradely queried (on one-third of trials) about the side on which a given stimulus in a given sensory modality had been presented. In this way, we sought to have participants attend to all possible spatial locations and sensory modalities while nonetheless performing a simple detection task. Experiment 1 provided no cues prior to stimulus delivery; Experiment 2 included spatially uninformative cues (50% of trials). In both experiments, multisensory conditions significantly facilitated detection RTs, with no evidence for differences according to spatial alignment (though general benefits of cuing were observed in Experiment 2). Facilitated detection thus occurs even when participants attend to spatial information. Performance with the probes, quantified using sensitivity (d'), was impaired following multisensory trials in general, and significantly more so following misaligned multisensory trials. This indicates that spatial information was not available, despite being task-relevant. The collective results support a model wherein early AS interactions may result in a loss of spatial acuity for unisensory information.
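The probe analysis here rests on the standard signal-detection computation of d'. For reference, a minimal implementation follows, with a log-linear correction for extreme rates; the trial counts are invented for illustration.

```python
from scipy.stats import norm

def d_prime(hits, misses, fas, crs):
    """d' = z(hit rate) - z(false-alarm rate), with a log-linear
    correction so rates of 0 or 1 do not produce infinite z-scores."""
    hit_rate = (hits + 0.5) / (hits + misses + 1.0)
    fa_rate = (fas + 0.5) / (fas + crs + 1.0)
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

# Invented probe counts for aligned vs. misaligned multisensory trials
print(f"aligned:    d' = {d_prime(70, 30, 20, 80):.2f}")
print(f"misaligned: d' = {d_prime(58, 42, 25, 75):.2f}")  # lower -> less spatial information
```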
Affiliation(s)
- Holger F Sperdin
- Neuropsychology and Neurorehabilitation Service, Department of Clinical Neurosciences, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland
38
Auditory, tactile, and multisensory cues facilitate search for dynamic visual stimuli. Atten Percept Psychophys 2010;72:1654-65. [DOI: 10.3758/app.72.6.1654]
39
Occelli V, Spence C, Zampini M. Audiotactile interactions in front and rear space. Neurosci Biobehav Rev 2010;35:589-98. [PMID: 20621120] [DOI: 10.1016/j.neubiorev.2010.07.004]
Abstract
The last few years have seen growing interest in the assessment of audiotactile interactions in information processing in peripersonal space. In particular, studies have focused on peri-hand space and, more recently, on the functional differences demonstrated between the space close to the front and the back of the head (i.e., peri-head space). This review describes how audiotactile interactions vary as a function of the region of space in which stimuli are presented (i.e., front vs. rear, peripersonal vs. extrapersonal), drawing on evidence from both monkey and human studies. This evidence, which provides insight into the differential attributes of the frontal and rear regions of space, sheds light on a hitherto neglected research topic and may contribute to the formulation of new rehabilitative approaches to disorders of spatial representation. A tentative explanation of the evolutionary reasons underlying these patterns of results, as well as suggestions for future developments, is also provided.
Affiliation(s)
- Valeria Occelli
- Center for Mind/Brain Sciences, University of Trento, Corso Bettini 31, 38068 Rovereto (TN), Italy.
40
Ngo MK, Spence C. Crossmodal facilitation of masked visual target discrimination by informative auditory cuing. Neurosci Lett 2010;479:102-6. [PMID: 20580658] [DOI: 10.1016/j.neulet.2010.05.035]
Abstract
Temporally synchronous auditory cues can facilitate participants' performance on dynamic visual search tasks, and making auditory cues spatially informative with regard to the target location can reduce search latencies still further. In the present study, we investigated how multisensory integration and temporal and spatial attention might conjointly influence participants' performance on an elevation discrimination task for a masked visual target presented in a rapidly changing sequence of masked visual distractors. Participants were presented with spatially uninformative (centrally presented), spatially valid (on the target side), or spatially invalid tones that were synchronous with the presentation of the visual target. Participants responded significantly more accurately following spatially valid than following uninformative or invalid auditory cues. Participants endogenously shifted their attention to the likely target location indicated by the valid spatial auditory cue (reflecting top-down, cognitive processing mechanisms), which facilitated their processing of the visual target over and above any bottom-up benefit associated solely with the synchronous presentation of the auditory and visual stimuli. These results suggest that crossmodal attention (both spatial and temporal) and multisensory integration can work in parallel to facilitate efficient responding to multisensory information.
Affiliation(s)
- Mary Kim Ngo
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, South Parks Road, OX1 3UD, Oxford, UK.
41
Affiliation(s)
- Charles Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, Oxford, OX1 3UD, United Kingdom.
42
Capturing spatial attention with multisensory cues: a review. Hear Res 2009;258:134-42. [PMID: 19409472] [DOI: 10.1016/j.heares.2009.04.015]
Abstract
The last 30 years have seen numerous studies demonstrating unimodal and crossmodal spatial cuing effects. However, surprisingly few studies have attempted to investigate whether multisensory cues might be any more effective in capturing a person's spatial attention than unimodal cues. Indeed, until very recently, the consensus view was that multisensory cues were, in fact, no more effective. However, the results of several recent studies have overturned this conclusion, by showing that multisensory cues retain their attention-capturing ability under conditions of perceptual load (i.e., when participants are simultaneously engaged in a concurrent attention-demanding task) while their constituent signals (when presented unimodally) do not. Here we review the empirical literature on multisensory spatial cuing effects and highlight the implications that this research has for the design of more effective warning signals in applied settings.