1
Einhäuser W, Neubert CR, Grimm S, Bendixen A. High visual salience of alert signals can lead to a counterintuitive increase of reaction times. Sci Rep 2024; 14:8858. PMID: 38632303; PMCID: PMC11024089; DOI: 10.1038/s41598-024-58953-4.
Abstract
It is often assumed that rendering an alert signal more salient yields faster responses to this alert. Yet, there might be a trade-off between attracting attention and distracting from task execution. Here we tested this in four behavioral experiments with eye tracking, using an abstract alert-signal paradigm. Participants performed a visual discrimination task (primary task) while occasional alert signals occurred in the visual periphery, accompanied by a congruently lateralized tone. Participants had to respond to the alert before proceeding with the primary task. When the visual salience (contrast) or auditory salience (tone intensity) of the alert was increased, participants directed their gaze to the alert more quickly. This confirms that more salient alerts attract attention more efficiently. Increasing auditory salience yielded quicker responses for the alert and primary tasks, apparently confirming faster responses altogether. However, increasing visual salience did not yield similar benefits: instead, it increased the time between fixating the alert and responding, as high-salience alerts interfered with alert-task execution. Such task interference by high-salience alert signals counteracts their more efficient attentional guidance. The design of alert signals must therefore be adapted to a "sweet spot" that optimizes this stimulus-dependent trade-off between maximally rapid attentional orienting and minimal task interference.
Affiliation(s)
- Wolfgang Einhäuser
- Physics of Cognition Group, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
- Christiane R Neubert
- Cognitive Systems Lab, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
- Sabine Grimm
- Physics of Cognition Group, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
- BioCog - Cognitive and Biological Psychology, Institute of Psychology, Leipzig University, Leipzig, Germany
- Alexandra Bendixen
- Cognitive Systems Lab, Institute of Physics, Chemnitz University of Technology, Chemnitz, Germany
2
Chen L, Zhu P, Li J, Song H, Liu H, Shen M, Chen H. The modulation of expectation violation on attention: Evidence from the spatial cueing effects. Cognition 2023; 238:105488. PMID: 37178591; DOI: 10.1016/j.cognition.2023.105488.
Abstract
This study investigated whether and how expectation violation can modulate attention in the exogenous spatial cueing paradigm, under the theoretical framework of the Memory Encoding Cost (MEC) model. The MEC model proposes that exogenous spatial cueing effects are driven mainly by a combination of two distinct mechanisms: attentional facilitation triggered by the presence of an abrupt cue, and attentional suppression induced by memory encoding of the cue. In the current experiments, participants had to identify a target letter that was sometimes preceded by a peripheral onset cue. Various types of expectation violation were introduced by regulating the probability of cue presentation (Experiments 1 & 5), the probability of cue location (Experiments 2 & 4), and the probability of irrelevant sound presentation (Experiment 3). The results showed that expectation violation could enhance the cueing effect (valid vs. invalid cue) in some cases. More crucially, all experiments consistently observed an asymmetrical modulation of expectation violation on the cost (invalid vs. neutral cue) and benefit (valid vs. neutral cue) effects: expectation violation increased the cost effects, while it either did not modulate or decreased (or even reversed) the benefit effects. Furthermore, Experiment 5 provided direct evidence that violation of expectations could enhance the memory encoding of a cue (e.g., its color) and that this memory advantage could manifest quickly, in the early stages of the experiment. The MEC model explains these findings better than traditional models such as the spotlight model: expectation violation can enhance both the attentional facilitation of the cue and the memory encoding of irrelevant cue information. These findings suggest that expectation violation has a general adaptive function in modulating attentional selectivity.
Affiliation(s)
- Luo Chen
- Department of Psychology and Behavioral Sciences, Zhejiang University, Zijingang Campus, 866 Yuhangtang Road, Hangzhou 310007, China
- Ping Zhu
- Department of Psychology and Behavioral Sciences, Zhejiang University, Zijingang Campus, 866 Yuhangtang Road, Hangzhou 310007, China
- Jian Li
- Department of Psychology and Behavioral Sciences, Zhejiang University, Zijingang Campus, 866 Yuhangtang Road, Hangzhou 310007, China
- Huixin Song
- Department of Psychology and Behavioral Sciences, Zhejiang University, Zijingang Campus, 866 Yuhangtang Road, Hangzhou 310007, China
- Huiying Liu
- Department of Psychology and Behavioral Sciences, Zhejiang University, Zijingang Campus, 866 Yuhangtang Road, Hangzhou 310007, China
- Mowei Shen
- Department of Psychology and Behavioral Sciences, Zhejiang University, Zijingang Campus, 866 Yuhangtang Road, Hangzhou 310007, China
- Hui Chen
- Department of Psychology and Behavioral Sciences, Zhejiang University, Zijingang Campus, 866 Yuhangtang Road, Hangzhou 310007, China
3
Wang X, Ren P, Miao X, Zhang X, Qian Y, Chi L. Attention Load Regulates the Facilitation of Audio-Visual Information on Landing Perception in Badminton. Percept Mot Skills 2023; 130:1687-1713. PMID: 37284745; DOI: 10.1177/00315125231180893.
Abstract
Given the high temporal sensitivity of the auditory modality and the advantage of audio-visual integration in motion perception and anticipation, we investigated the effect of audio-visual information on landing perception in badminton through two experiments, and we explored the regulatory role of attention load. Experienced badminton players were asked to predict the landing position of the shuttle under video (visual) or audio-video (audio-visual) presentation conditions while we manipulated the flight information or the attention load. Experiment 1 showed that the addition of auditory information facilitated performance regardless of whether the visual information was rich, that is, whether or not it contained the early flight trajectory. Experiment 2 showed that attention load regulated the facilitation of multi-modal integration on landing perception: the facilitation of audio-visual information was impaired under high load, indicating that audio-visual integration tends to be guided by attention in a top-down manner. The results support the superiority effect of multi-modal integration and suggest that adding auditory perception training to sports training could significantly improve athletes' performance.
Affiliation(s)
- Xiaoting Wang
- School of Psychology, Beijing Sport University, Beijing, China
- Pengfei Ren
- School of Physical Education, Yan'an University, Yan'an, China
- Xiuying Miao
- School of Psychology, Beijing Sport University, Beijing, China
- Xin Zhang
- School of Psychology, Beijing Sport University, Beijing, China
- Yiming Qian
- Department of Psychology, Tsinghua University, Beijing, China
- Lizhong Chi
- School of Psychology, Beijing Sport University, Beijing, China
4
Yuan Y, He X, Yue Z. Working memory load modulates the processing of audiovisual distractors: A behavioral and event-related potentials study. Front Integr Neurosci 2023; 17:1120668. PMID: 36908504; PMCID: PMC9995450; DOI: 10.3389/fnint.2023.1120668.
Abstract
The interplay between different modalities can help us perceive stimuli more effectively. However, very few studies have focused on how multisensory distractors affect task performance. Using behavioral and event-related potential (ERP) techniques, the present study examined whether multisensory audiovisual distractors attract attention more effectively than unisensory distractors, and whether this process is modulated by working memory load. Across three experiments, n-back tasks (1-back and 2-back) were adopted with peripheral auditory, visual, or audiovisual distractors. The visual and auditory distractors were white discs and pure tones (Experiments 1 and 2) or pictures and sounds of animals (Experiment 3), respectively. Behavioral results in Experiment 1 showed a significant interference effect under high working memory load but not under low load: responses to central letters with audiovisual distractors were significantly slower than those to letters without distractors, while no significant difference was found between the unisensory distractor and no-distractor conditions. Similarly, ERP results in Experiments 2 and 3 showed that integration occurred only under the high load condition: an early integration for simple audiovisual distractors (240-340 ms) and a late integration for complex audiovisual distractors (440-600 ms). These findings suggest that multisensory distractors can be integrated and effectively attract attention away from the main task (an interference effect), and that this effect is pronounced only under high working memory load.
Affiliation(s)
- Yichen Yuan
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
- Xiang He
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
- Zhenzhu Yue
- Department of Psychology, Sun Yat-sen University, Guangzhou, China
5
Tang X, Yuan M, Shi Z, Gao M, Ren R, Wei M, Gao Y. Multisensory integration attenuates visually induced oculomotor inhibition of return. J Vis 2022; 22:7. PMID: 35297999; PMCID: PMC8944392; DOI: 10.1167/jov.22.4.7.
Abstract
Inhibition of return (IOR) is a mechanism of the attention system involving bias toward novel stimuli and delayed generation of responses to targets at previously attended locations. According to the two-component theory, IOR consists of a perceptual component and an oculomotor component (oculomotor IOR [O-IOR]) depending on whether the eye movement system is activated. Previous studies have shown that multisensory integration weakens IOR when paying attention to both visual and auditory modalities. However, it remains unclear whether the O-IOR effect attenuated by multisensory integration also occurs when the oculomotor system is activated. Here, using two eye movement experiments, we investigated the effect of multisensory integration on O-IOR using the exogenous spatial cueing paradigm. In Experiment 1, we found a greater visual O-IOR effect compared with audiovisual and auditory O-IOR in divided modality attention. The relative multisensory response enhancement (rMRE) and violations of Miller's bound showed a greater magnitude of multisensory integration in the cued location compared with the uncued location. In Experiment 2, the magnitude of the audiovisual O-IOR effect was significantly less than that of the visual O-IOR in single visual modality selective attention. Implications for the effect of multisensory integration on O-IOR were discussed under conditions of oculomotor system activation, shedding new light on the two-component theory of IOR.
Affiliation(s)
- Xiaoyu Tang
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Mengying Yuan
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Zhongyu Shi
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Min Gao
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Rongxia Ren
- Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Ming Wei
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, China
- Yulin Gao
- Department of Psychology, Jilin University, Changchun, China
6
Guiding spatial attention by multimodal reward cues. Atten Percept Psychophys 2021; 84:655-670. PMID: 34964093; DOI: 10.3758/s13414-021-02422-x.
Abstract
Our attention is constantly captured and guided by visual and/or auditory inputs. One key contributor to selecting relevant information from the environment is reward prospect. Intriguingly, while both multimodal signal processing and reward effects on attention have been widely studied, research on multimodal reward signals is lacking. Here, we investigated this using a Posner task featuring peripheral cues of different modalities (audiovisual/visual/auditory), reward prospect (reward/no-reward), and cue-target stimulus-onset asynchronies (SOAs 100-1,300 ms). We found that audiovisual and visual reward cues (but not auditory ones) enhanced cue-validity effects, albeit with different time courses (Experiment 1). While the reward-modulated validity effect of visual cues was pronounced at short SOAs, the effect of audiovisual reward cues emerged at longer SOAs. Follow-up experiments exploring the effects of visual (Experiment 2) and auditory (Experiment 3) reward cues in isolation showed that reward modulated performance only in the visual condition. This suggests that the differential effect of visual and auditory reward cues in Experiment 1 is not merely a result of the mixed cue context, and confirms that visual reward cues have a stronger impact on attentional guidance in this paradigm. Taken together, it seems that adding an auditory reward cue to the inherently dominant visual one shifted or extended the validity effect in time rather than increasing its amplitude. While generally in line with a multimodal cuing benefit, this specific pattern highlights that different reward signals are not simply combined in a linear fashion but lead to a qualitatively different process.
7
Dozio N, Maggioni E, Pittera D, Gallace A, Obrist M. May I Smell Your Attention: Exploration of Smell and Sound for Visuospatial Attention in Virtual Reality. Front Psychol 2021; 12:671470. PMID: 34366990; PMCID: PMC8339311; DOI: 10.3389/fpsyg.2021.671470.
Abstract
When interacting with technology, attention is mainly driven by audiovisual and, increasingly, haptic stimulation. Olfactory stimuli are widely neglected, although the sense of smell influences many of our daily life choices, affects our behavior, and can catch and direct our attention. In this study, we investigated the effect of smell and sound on visuospatial attention in a virtual environment. We implemented the Bells Test, an established neuropsychological test for assessing attentional and visuospatial disorders, in virtual reality (VR). We conducted an experiment with 24 participants comparing their performance under three experimental conditions (smell, sound, and smell and sound). The results show that multisensory stimuli play a key role in driving participants' attention and highlight asymmetries in directing spatial attention. We discuss the relevance of the results within and beyond human-computer interaction (HCI), particularly with regard to the opportunity to use VR in rehabilitation and assessment procedures for patients with spatial attention deficits.
Affiliation(s)
- Nicolò Dozio
- Politecnico di Milano, Department of Mechanical Engineering, Milan, Italy
- Sussex Computer-Human Interaction Lab, Department of Informatics, University of Sussex, Brighton, United Kingdom
- Emanuela Maggioni
- Sussex Computer-Human Interaction Lab, Department of Informatics, University of Sussex, Brighton, United Kingdom
- Department of Computer Science, University College London, London, United Kingdom
- Dario Pittera
- Sussex Computer-Human Interaction Lab, Department of Informatics, University of Sussex, Brighton, United Kingdom
- Ultraleap Ltd., Bristol, United Kingdom
- Alberto Gallace
- Mind and Behavior Technological Center - MibTec, University of Milano-Bicocca, Milan, Italy
- Marianna Obrist
- Sussex Computer-Human Interaction Lab, Department of Informatics, University of Sussex, Brighton, United Kingdom
- Department of Computer Science, University College London, London, United Kingdom
8
Ren Y, Zhang Y, Hou Y, Li J, Bi J, Yang W. Exogenous Bimodal Cues Attenuate Age-Related Audiovisual Integration. Iperception 2021; 12:20416695211020768. PMID: 34104386; PMCID: PMC8165524; DOI: 10.1177/20416695211020768.
Abstract
Previous studies have demonstrated that exogenous attention decreases audiovisual integration (AVI); however, whether the AVI differs when exogenous attention is elicited by bimodal versus unimodal cues, and how this is affected by aging, remain unclear. To clarify this matter, 20 older adults and 20 younger adults were recruited to perform an auditory/visual discrimination task following bimodal audiovisual cues or unimodal auditory/visual cues. The results showed that responses to all stimulus types were faster in younger adults than in older adults, and that responses to audiovisual stimuli were faster than those to auditory or visual stimuli. Analysis using the race model revealed that the AVI was lower in the exogenous-cue conditions than in the no-cue condition for both older and younger adults. The AVI was observed in all exogenous-cue conditions for the younger adults (visual cue > auditory cue > audiovisual cue); for older adults, however, the AVI was found only in the visual-cue condition. In addition, the AVI was lower in older adults than in younger adults under the no-cue and visual-cue conditions. These results suggest that exogenous attention decreases the AVI, and that the AVI is lower when exogenous attention is elicited by bimodal cues than by unimodal cues. In addition, the AVI was reduced for older adults compared with younger adults under exogenous attention.
Affiliation(s)
- Yanna Ren
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Ying Zhang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Yawei Hou
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Junyuan Li
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Junhao Bi
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Weiping Yang
- Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
9
Conci A, Bilalić M, Gaschler R. Can You See What I Hear? Exp Psychol 2020; 67:186-193. PMID: 32900295; DOI: 10.1027/1618-3169/a000487.
Abstract
Previous research on inattentional blindness (IB) has focused almost entirely on the visual modality. This study extends the paradigm by pairing visual with auditory stimuli. New visual and auditory stimuli were created to investigate the phenomenon of inattention in the visual, auditory, and paired modalities. The goal of the study was to assess to what extent pairing the visual and auditory modalities fosters the detection of change. Participants watched a video sequence and counted predetermined words in a spoken text. IB and inattentional deafness occurred in about 40% of participants when attention was engaged by this difficult (auditory) counting task. Most importantly, participants detected the changes considerably more often (88%) when the change occurred in both modalities rather than in just one. One possible reason for the drastic reduction of IB or deafness in a multimodal context is that the discrepancy between the expected and encountered course of events increases proportionally across sensory modalities.
Affiliation(s)
- Anna Conci
- FernUniversität in Hagen, Hagen, Germany
- Alpen-Adria-Universität Klagenfurt, Klagenfurt, Austria
- Merim Bilalić
- Northumbria University, Newcastle-upon-Tyne, United Kingdom
10
Mühlberg S, Müller MM. Alignment of Continuous Auditory and Visual Distractor Stimuli Is Leading to an Increased Performance. Front Psychol 2020; 11:790. PMID: 32457678; PMCID: PMC7225351; DOI: 10.3389/fpsyg.2020.00790.
Abstract
Information across different senses can affect our behavior in both positive and negative ways. Stimuli aligned with a target stimulus can improve behavioral performance, while competing, transient stimuli often negatively affect task performance. But what about subtle changes in task-irrelevant multisensory stimuli? In this experiment, we tested the effect of the alignment of subtle auditory and visual distractor stimuli on performance in detection and discrimination tasks. Participants performed either a detection or a discrimination task on a centrally presented Gabor patch while being simultaneously exposed to a random dot kinematogram, which alternated its color from green to red at a frequency of 7.5 Hz, and a continuous tone, which was either a frequency-modulated pure tone (audiovisual congruent and incongruent conditions) or white noise (visual control condition). While the modulation frequency of the pure tone initially differed from that of the random dot kinematogram, the modulation frequencies of the two stimuli could align after a variable delay, and we measured accuracy and reaction times around the possible alignment time. We found increased accuracy in the audiovisual congruent condition, suggesting that subtle alignment of multisensory background stimuli can increase performance on the current task.
12
Interference of irrelevant information in multisensory selection depends on attentional set. Atten Percept Psychophys 2019; 82:1176-1195. PMID: 31444699; DOI: 10.3758/s13414-019-01848-8.
Abstract
In the multisensory world in which we live, certain objects and events are of more relevance than others. In the laboratory, this broadly equates to the distinction between targets and distractors. In selection situations like the flanker task, the evidence suggests that the processing of multisensory distractors is influenced by attention. Here, multisensory distractor processing was investigated by modulating attentional set in three experiments in a flanker interference task, in which the targets were unisensory while the distractors were multisensory. Attentional set was modulated by making the target modality either predictable or unpredictable (Experiments 1 vs. 2, respectively). In Experiment 3, this manipulation was implemented on a within-experiment basis. Furthermore, the third experiment compared audiovisual distractors (used in all experiments) with distractors with one feature in a neutral modality (i.e., touch), that never appeared as the target modality in the flanker task. The results demonstrate that there was no interference from the response-compatible crossmodal distractor feature when the target modality was predictable (i.e., blocked). However, when the modality was varied on a trial-by-trial basis, this crossmodal feature significantly influenced information processing. By contrast, a multisensory distractor with a neutral crossmodal feature never influenced behavior. This finding suggests that the processing of multisensory distractors depends on attentional set. When the target modality varies randomly, participants include features from both modalities in their attentional set and the irrelevant crossmodal feature, now part of the set, influences information processing. In contrast, interference from the crossmodal distractor feature does not occur when it is not part of the attentional set.
13
Multisensory feature integration in (and out) of the focus of spatial attention. Atten Percept Psychophys 2019; 82:363-376. DOI: 10.3758/s13414-019-01813-5.
14
Marsja E, Marsh JE, Hansson P, Neely G. Examining the Role of Spatial Changes in Bimodal and Uni-Modal To-Be-Ignored Stimuli and How They Affect Short-Term Memory Processes. Front Psychol 2019; 10:299. PMID: 30914983; PMCID: PMC6421315; DOI: 10.3389/fpsyg.2019.00299.
Abstract
This study examines the potential vulnerability of short-term memory processes to distraction by spatial changes within to-be-ignored bimodal, vibratory, and auditory stimuli. Participants were asked to recall sequences of serially presented digits or locations of dots while being exposed to to-be-ignored stimuli. On unexpected occasions, the bimodal, vibratory, or auditory to-be-ignored sequence changed its spatial origin from one side of the body (e.g., ear and arm, arm only, ear only) to the other. It was expected that the bimodal stimuli would make the spatial change more salient than the uni-modal stimuli would, and that this, in turn, would yield increased distraction of serial short-term memory in both the verbal and spatial domains. Our results support this assumption, as a disruptive effect of the spatial deviant was only observed when it was presented within the bimodal to-be-ignored sequence: uni-modal to-be-ignored sequences, whether vibratory or auditory, had no impact on either verbal or spatial short-term memory. Implications for models of attention capture and the potentially special attention-capturing role of bimodal stimuli are discussed.
Affiliation(s)
- Erik Marsja
- Department of Psychology, Umeå University, Umeå, Sweden
- John E Marsh
- School of Psychology, University of Central Lancashire, Preston, United Kingdom
- Gregory Neely
- Department of Psychology, Umeå University, Umeå, Sweden
15
Lunn J, Sjoblom A, Ward J, Soto-Faraco S, Forster S. Multisensory enhancement of attention depends on whether you are already paying attention. Cognition 2019; 187:38-49. PMID: 30825813; DOI: 10.1016/j.cognition.2019.02.008.
Abstract
Multisensory stimuli are argued to capture attention more effectively than unisensory stimuli due to their ability to elicit a super-additive neuronal response. However, behavioural evidence for enhanced multisensory attentional capture is mixed. Furthermore, the notion of multisensory enhancement of attention conflicts with findings suggesting that multisensory integration may itself depend on top-down attention. The present research resolves this discrepancy by examining how both endogenous attentional settings and the availability of attentional capacity modulate capture by multisensory stimuli. Across a series of four studies, two measures of attentional capture were used which vary in their reliance on endogenous attention: facilitation and distraction. Perceptual load was additionally manipulated to determine whether multisensory stimuli can still capture attention when attention is occupied by a demanding primary task. Multisensory stimuli presented as search targets were consistently detected faster than unisensory stimuli regardless of perceptual load, although they were nevertheless subject to load modulation. In contrast, task-irrelevant multisensory stimuli did not cause greater distraction than unisensory stimuli, suggesting that the enhanced attentional status of multisensory stimuli may be mediated by the availability of endogenous attention. Implications for multisensory alerts in practical settings such as driving and aviation are discussed: such alerts may be advantageous during demanding tasks but less suitable for signaling unexpected events.
16
Tang X, Gao Y, Yang W, Ren Y, Wu J, Zhang M, Wu Q. Bimodal-divided attention attenuates visually induced inhibition of return with audiovisual targets. Exp Brain Res 2019; 237:1093-1107. PMID: 30770958; DOI: 10.1007/s00221-019-05488-0.
Abstract
Inhibition of return (IOR) refers to the slower response to a target appearing at a previously attended location in a cue-target paradigm. It has been extensively explored in the visual and auditory modalities. This study investigated differences between the IOR of audiovisual targets and the IOR of visual targets under conditions of modality-specific selective attention (Experiment 1) and divided-modalities attention (Experiment 2). We employed an exogenous spatial cueing paradigm and manipulated the modality of the targets (visual, auditory, or audiovisual). The participants were asked to detect targets in the visual modality, or in both the visual and auditory modalities, presented on the same (cued) or opposite (uncued) side as the preceding visual peripheral cues. In Experiment 1, we found comparable IOR for visual and audiovisual targets when participants were asked to focus selectively on the visual modality. In Experiment 2, however, the magnitude of IOR was smaller for audiovisual than for visual targets when participants attended to both the visual and auditory modalities. We also observed a reduced multisensory response enhancement effect and race model inequality violation at cued locations relative to uncued locations. These results provide the first evidence of IOR with audiovisual targets. Furthermore, IOR with audiovisual targets decreases when attention is divided across both modalities. The interaction between exogenous spatial attention and audiovisual integration is discussed.
Affiliation(s)
- Xiaoyu Tang
- School of Psychology, Liaoning Collaborative Innovation Center of Children and Adolescents Healthy Personality Assessment and Cultivation, Liaoning Normal University, Dalian, 116029, China.
- Cognitive Neuroscience Laboratory, Okayama University, Okayama, 7008530, Japan.
- Yulin Gao
- Department of Psychology, Jilin University, Changchun, 130012, China
- Weiping Yang
- Department of Psychology, Hubei University, Wuhan, 430062, China
- Yanna Ren
- Department of Psychology, Guiyang University of Chinese Medicine, Guiyang, 550025, China
- Jinglong Wu
- Cognitive Neuroscience Laboratory, Okayama University, Okayama, 7008530, Japan
- Shanghai University of Traditional Chinese Medicine, Shanghai, 201203, China
- Key Laboratory of Biomimetic Robots and Systems, State Key Laboratory of Intelligent Control and Decision of Complex Systems, Beijing Institute of Technology, Beijing, 100081, China
- Ming Zhang
- Department of Psychology, Soochow University, Suzhou, 215123, China
- Qiong Wu
- Cognitive Neuroscience Laboratory, Okayama University, Okayama, 7008530, Japan
17
Curry CM, Des Brisay PG, Rosa P, Koper N. Noise Source and Individual Physiology Mediate Effectiveness of Bird Songs Adjusted to Anthropogenic Noise. Sci Rep 2018; 8:3942. PMID: 29500452. PMCID: PMC5834586. DOI: 10.1038/s41598-018-22253-5.
Abstract
Anthropogenic noise is a pervasive pollutant that alters the behaviour of wildlife that communicates acoustically. Some species adjust their vocalisations to compensate for noise. However, we know little about whether signal adjustments improve communication in noise, the extent to which the effectiveness of adjustments varies with noise source, or how individual physiological variation relates to response capacity. We played noise-adjusted and unadjusted songs to wild Passerculus sandwichensis (Savannah Sparrows) after measuring the adrenocortical responsiveness of individuals. Playbacks using songs adjusted to noisy environments were effective in restoring appropriate conspecific territorial aggression behaviours in some altered acoustic environments. Surprisingly, however, levels of adrenocortical responsiveness that reduced communication errors at some types of infrastructure were correlated with increased errors at others: song adjustments that were effective for individuals with lower adrenocortical responsiveness at pumpjacks were not effective at screwpumps, and vice versa. Our results demonstrate that vocal adjustments can sometimes allow birds to compensate for disruptions in communication caused by anthropogenic noise, but that physiological variation among receivers may alter the effectiveness of these adjustments. Thus, mitigation strategies to minimize anthropogenic noise must account for both the acoustic and physiological impacts of infrastructure.
Affiliation(s)
- Claire M Curry
- Natural Resources Institute, University of Manitoba, 70 Dysart Road, 303 Sinnott Building, Winnipeg, Manitoba, R3T 2M7, Canada; Oklahoma Biological Survey, University of Oklahoma, Norman, OK, USA
- Paulson G Des Brisay
- Natural Resources Institute, University of Manitoba, 70 Dysart Road, 303 Sinnott Building, Winnipeg, Manitoba, R3T 2M7, Canada
- Patricia Rosa
- Natural Resources Institute, University of Manitoba, 70 Dysart Road, 303 Sinnott Building, Winnipeg, Manitoba, R3T 2M7, Canada
- Nicola Koper
- Natural Resources Institute, University of Manitoba, 70 Dysart Road, 303 Sinnott Building, Winnipeg, Manitoba, R3T 2M7, Canada
18
Hemispheric asymmetry: Looking for a novel signature of the modulation of spatial attention in multisensory processing. Psychon Bull Rev 2018; 24:690-707. PMID: 27586002. PMCID: PMC5486865. DOI: 10.3758/s13423-016-1154-y.
Abstract
The extent to which attention modulates multisensory processing in a top-down fashion is still a subject of debate among researchers. Typically, cognitive psychologists interested in this question have manipulated the participants’ attention in terms of single/dual tasking or focal/divided attention between sensory modalities. We suggest an alternative approach, one that builds on the extensive older literature highlighting hemispheric asymmetries in the distribution of spatial attention. Specifically, spatial attention in vision, audition, and touch is typically biased preferentially toward the right hemispace, especially under conditions of high perceptual load. We review the evidence demonstrating such an attentional bias toward the right in extinction patients and healthy adults, along with the evidence of such rightward-biased attention in multisensory experimental settings. We then evaluate those studies that have demonstrated either a more pronounced multisensory effect in right than in left hemispace, or else similar effects in the two hemispaces. The results suggest that the influence of rightward-biased attention is more likely to be observed when the crossmodal signals interact at later stages of information processing and under conditions of higher perceptual load—that is, conditions under which attention is perhaps a compulsory enhancer of information processing. We therefore suggest that the spatial asymmetry in attention may provide a useful signature of top-down attentional modulation in multisensory processing.
19
Zou Z, Chau BKH, Ting KH, Chan CCH. Aging Effect on Audiovisual Integrative Processing in Spatial Discrimination Task. Front Aging Neurosci 2017; 9:374. PMID: 29184494. PMCID: PMC5694625. DOI: 10.3389/fnagi.2017.00374.
Abstract
Multisensory integration is an essential process that people employ daily, from conversing in social gatherings to navigating the nearby environment. The aim of this study was to investigate the impact of aging on the modulation of multisensory integrative processes using event-related potentials (ERPs); the validity of the study was improved by including “noise” in the contrast conditions. Older and younger participants perceived visual and/or auditory stimuli that contained spatial information and responded by indicating the spatial direction (far vs. near and left vs. right) conveyed in the stimuli using different wrist movements. Electroencephalograms (EEGs) were captured in each task trial, along with the accuracy and reaction time of the participants’ motor responses. Older participants showed a greater extent of behavioral improvement in the multisensory (as opposed to unisensory) condition compared to their younger counterparts. Older participants were found to have a fronto-centrally distributed super-additive P2, which was not the case for the younger participants. The P2 amplitude difference between the multisensory condition and the sum of the unisensory conditions correlated significantly with performance on spatial discrimination. The results indicated that aging modulated the integrative process at the perceptual and feedback stages, particularly the evaluation of auditory stimuli. Audiovisual (AV) integration may also serve a functional role during spatial-discrimination processes to compensate for the compromised attention function caused by aging.
Affiliation(s)
- Zhi Zou
- Applied Cognitive Neuroscience Laboratory, Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Bolton K H Chau
- Applied Cognitive Neuroscience Laboratory, Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Kin-Hung Ting
- Applied Cognitive Neuroscience Laboratory, Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong
- Chetwyn C H Chan
- Applied Cognitive Neuroscience Laboratory, Department of Rehabilitation Sciences, The Hong Kong Polytechnic University, Kowloon, Hong Kong
20
Abstract
Mixed results have been found for the impact of auditory information presented during high-perceptual-load visual search tasks, with some studies showing large effects and others indicating inattentional deafness, with such stimuli going largely undetected. In three experiments, we demonstrated that task relatedness is a key factor in whether extraneous auditory stimuli impact high-load visual searches. Experiment 1 addressed a methodological concern (e.g., Lavie, Trends in Cognitive Sciences, 9, 75-82, 2005) regarding the timing of the relative onsets and offsets of task-related, to-be-ignored auditory stimuli and visual search arrays in experiments that have shown auditory distractor effects. Robust auditory distractor effects were found in each timing condition, and no inattentional deafness was observed for high-load searches. Experiments 2 and 3 demonstrated that the relationship between the auditory stimuli and visual targets determined whether attention was captured and whether the response times to identify targets were impacted. Auditory stimuli that named a response-specific category influenced responses to targets mapped exclusively to one response, but not to the same targets mapped nonexclusively. These compatibility effects were larger if the distractors named an actual target item than if they named the category to which the item belonged. This pattern suggests that to-be-ignored auditory information that closely relates to a visual target search task influences the processing of that task, particularly under high perceptual load.
21
Dean CL, Eggleston BA, Gibney KD, Aligbe E, Blackwell M, Kwakye LD. Auditory and visual distractors disrupt multisensory temporal acuity in the crossmodal temporal order judgment task. PLoS One 2017; 12:e0179564. PMID: 28723907. PMCID: PMC5516972. DOI: 10.1371/journal.pone.0179564.
Abstract
The ability to synthesize information across multiple senses is known as multisensory integration and is essential to our understanding of the world around us. Sensory stimuli that occur close in time are likely to be integrated, and the accuracy of this integration depends on our ability to precisely discriminate the relative timing of unisensory stimuli (crossmodal temporal acuity). Previous research has shown that multisensory integration is modulated by both bottom-up stimulus features, such as the temporal structure of unisensory stimuli, and top-down processes such as attention. However, it is currently uncertain how attention alters crossmodal temporal acuity. The present study investigated whether increasing attentional load would decrease crossmodal temporal acuity by utilizing a dual-task paradigm. Participants were asked to judge the temporal order of a flash and a beep presented at various temporal offsets (crossmodal temporal order judgment, CTOJ, task) while also directing their attention to a secondary distractor task in which they detected a target stimulus within a stream of visual or auditory distractors. We found decreased performance on the CTOJ task, as well as increases in both the positive and negative just noticeable difference, with increasing load for both the auditory and visual distractor tasks. This strongly suggests that attention promotes greater crossmodal temporal acuity and that reducing the attentional capacity to process multisensory stimuli results in detriments to multisensory temporal processing. Our study is the first to demonstrate changes in multisensory temporal processing with decreased attentional capacity using a dual-task paradigm and has strong implications for developmental disorders such as autism spectrum disorders and developmental dyslexia, which are associated with alterations in both multisensory temporal processing and attention.
Affiliation(s)
- Cassandra L. Dean
- Department of Neuroscience, Oberlin College, Oberlin, Ohio, United States of America
- Brady A. Eggleston
- Department of Neuroscience, Oberlin College, Oberlin, Ohio, United States of America
- Kyla David Gibney
- Department of Neuroscience, Oberlin College, Oberlin, Ohio, United States of America
- Enimielen Aligbe
- Department of Neuroscience, Oberlin College, Oberlin, Ohio, United States of America
- Marissa Blackwell
- Department of Neuroscience, Oberlin College, Oberlin, Ohio, United States of America
- Leslie Dowell Kwakye
- Department of Neuroscience, Oberlin College, Oberlin, Ohio, United States of America
22
Gibney KD, Aligbe E, Eggleston BA, Nunes SR, Kerkhoff WG, Dean CL, Kwakye LD. Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity. Front Integr Neurosci 2017; 11:1. PMID: 28163675. PMCID: PMC5247431. DOI: 10.3389/fnint.2017.00001.
Abstract
The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and a McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors were present), low load (multisensory task while detecting the presence of a yellow letter among the visual distractors), and high load (multisensory task while detecting the presence of a number among the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than on unisensory trials. However, the increase in multisensory response times violated the race model for the no- and low-load conditions only. Additionally, a geometric measure of Miller’s inequality showed a decrease in multisensory integration in the speeded detection task with increasing perceptual load. Surprisingly, participants who did not show integration in the no-load condition showed diverging changes in integration with increasing load: no change for the McGurk task, but increases for the detection task. The results of this study indicate that attention plays a crucial role in multisensory integration for both highly complex and simple multisensory tasks, and that attention may interact differently with multisensory processing in individuals who do not strongly integrate multisensory information.
Affiliation(s)
- Kyla D Gibney
- Department of Neuroscience, Oberlin College, Oberlin OH, USA
- Sarah R Nunes
- Department of Neuroscience, Oberlin College, Oberlin OH, USA
- Leslie D Kwakye
- Department of Neuroscience, Oberlin College, Oberlin OH, USA
23
van der Stoep N, Serino A, Farnè A, Di Luca M, Spence C. Depth: the Forgotten Dimension in Multisensory Research. Multisens Res 2016. DOI: 10.1163/22134808-00002525.
Abstract
The last quarter of a century has seen a dramatic rise of interest in the spatial constraints on multisensory integration. Until recently, however, the majority of this research has investigated integration in the space directly in front of the observer, even though the space around us extends in three dimensions, in front and to the rear, well beyond such a limited area. The question addressed in this review is whether multisensory integration operates according to the same rules throughout the whole of three-dimensional space. The results reviewed here not only show that the space around us seems to be divided into distinct functional regions, but also suggest that multisensory interactions are modulated by the region of space in which stimuli happen to be presented. We highlight a number of key limitations of previous research in this area, including: (1) the focus on only a very narrow region of two-dimensional space in front of the observer; (2) the use of static stimuli in most research; (3) the study of observers who themselves have been mostly static; and (4) the study of isolated observers. All of these factors may change the way in which the senses interact at any given distance, as can the emotional state/personality of the observer. In summarizing these salient issues, we hope to encourage researchers to consider these factors in their own research in order to gain a better understanding of the spatial constraints on multisensory integration as they affect us in everyday life.
Affiliation(s)
- N. van der Stoep
- Department of Experimental Psychology, Helmholtz Institute, Utrecht University, Utrecht, The Netherlands
- A. Serino
- Center for Neuroprosthetics, EPFL, Lausanne, Switzerland
- A. Farnè
- ImpAct Team, Lyon Neuroscience Research Center, INSERM U1028, CNRS UMR5292, 69000 Lyon, France
- M. Di Luca
- School of Psychology, CNCR, University of Birmingham, Birmingham, United Kingdom
- C. Spence
- Department of Experimental Psychology, Oxford University, Oxford, United Kingdom
24
Tang X, Wu J, Shen Y. The interactions of multisensory integration with endogenous and exogenous attention. Neurosci Biobehav Rev 2015; 61:208-224. PMID: 26546734. DOI: 10.1016/j.neubiorev.2015.11.002.
Abstract
Stimuli from multiple sensory organs can be integrated into a coherent representation through multiple phases of multisensory processing; this phenomenon is called multisensory integration. Multisensory integration can interact with attention. Here, we propose a framework in which attention modulates multisensory processing in both endogenous (goal-driven) and exogenous (stimulus-driven) ways. Moreover, multisensory integration exerts not only bottom-up but also top-down control over attention. Specifically, we propose the following: (1) endogenous attentional selectivity acts on multiple levels of multisensory processing to determine the extent to which simultaneous stimuli from different modalities can be integrated; (2) integrated multisensory events exert top-down control on attentional capture via multisensory search templates that are stored in the brain; (3) integrated multisensory events can capture attention efficiently, even in quite complex circumstances, due to their increased salience compared to unimodal events and can thus improve search accuracy; and (4) within a multisensory object, endogenous attention can spread from one modality to another in an exogenous manner.
Affiliation(s)
- Xiaoyu Tang
- College of Psychology, Liaoning Normal University, 850 Huanghe Road, Shahekou District, Dalian, Liaoning, 116029, China; Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, 3-1-1 Tsushima-naka, Okayama, 700-8530, Japan
- Jinglong Wu
- Key Laboratory of Biomimetic Robots and System, Ministry of Education, State Key Laboratory of Intelligent Control and Decision of Complex Systems, Beijing Institute of Technology, 5 Nandajie, Zhongguancun, Haidian, Beijing 100081, China; Biomedical Engineering Laboratory, Graduate School of Natural Science and Technology, Okayama University, 3-1-1 Tsushima-naka, Okayama, 700-8530, Japan
- Yong Shen
- Neurodegenerative Disease Research Center, School of Life Sciences, University of Science and Technology of China, CAS Key Laboratory of Brain Functions and Disease, Hefei, China; Center for Advanced Therapeutic Strategies for Brain Disorders, Roskamp Institute, Sarasota, FL 34243, USA
25
Mastroberardino S, Santangelo V, Macaluso E. Crossmodal semantic congruence can affect visuo-spatial processing and activity of the fronto-parietal attention networks. Front Integr Neurosci 2015. PMID: 26217199. PMCID: PMC4498104. DOI: 10.3389/fnint.2015.00045.
Abstract
Previous studies have shown that multisensory stimuli can contribute to attention control. Here we investigate whether irrelevant audio–visual stimuli can affect the processing of subsequent visual targets, in the absence of any direct bottom–up signals generated by low-level sensory changes and any goal-related associations between the multisensory stimuli and the visual targets. Each trial included two pictures (cat/dog), one in each visual hemifield, and a central sound that was semantically congruent with one of the two pictures (i.e., either “meow” or “woof” sound). These irrelevant audio–visual stimuli were followed by a visual target that appeared either where the congruent or the incongruent picture had been presented (valid/invalid trials). The visual target was a Gabor patch requiring an orientation discrimination judgment, allowing us to uncouple the visual task from the audio–visual stimuli. Behaviourally we found lower performance for invalid than valid trials, but only when the task demands were high (Gabor target presented together with a Gabor distractor vs. Gabor target alone). The fMRI analyses revealed greater activity for invalid than for valid trials in the dorsal and the ventral fronto-parietal attention networks. The dorsal network was recruited irrespective of task demands, while the ventral network was recruited only when task demands were high and target discrimination required additional top–down control. We propose that crossmodal semantic congruence generates a processing bias associated with the location of congruent picture, and that the presentation of the visual target on the opposite side required updating these processing priorities. We relate the activation of the attention networks to these updating operations. We conclude that the fronto-parietal networks mediate the influence of crossmodal semantic congruence on visuo-spatial processing, even in the absence of any low-level sensory cue and any goal-driven task associations.
Affiliation(s)
- Valerio Santangelo
- Neuroimaging Laboratory, Santa Lucia Foundation, Rome, Italy; Department of Philosophy, Social Sciences & Education, University of Perugia, Perugia, Italy
26
27
Influence of auditory and audiovisual stimuli on the right-left prevalence effect. Psychol Res 2013; 78:400-410. PMID: 24096315. DOI: 10.1007/s00426-013-0518-4.
Abstract
When auditory stimuli are used in two-dimensional spatial compatibility tasks, in which the stimulus and response configurations vary along the horizontal and vertical dimensions simultaneously, a right-left prevalence effect occurs whereby horizontal compatibility dominates over vertical compatibility. The right-left prevalence effects obtained with auditory stimuli are typically larger than those obtained with visual stimuli, even though less attention should be demanded by the horizontal dimension in auditory processing. In the present study, we examined whether auditory or visual dominance occurs when the two-dimensional stimuli are audiovisual, as well as whether there would be cross-modal facilitation of response selection for the horizontal and vertical dimensions. We also examined whether there is an additional benefit of adding a pitch dimension to the auditory stimulus to facilitate vertical coding through the spatial-musical association of response codes (SMARC) effect, whereby pitch is coded in terms of height in space. In Experiment 1, we found a larger right-left prevalence effect for unimodal auditory than for visual stimuli. Neutral, non-pitch-coded audiovisual stimuli did not result in cross-modal facilitation, but did show evidence of visual dominance. The right-left prevalence effect was eliminated in the presence of SMARC audiovisual stimuli, but the effect influenced horizontal rather than vertical coding. Experiment 2 showed that the pitch dimension did not influence response selection on a trial-to-trial basis, but instead altered the salience of the task environment. Taken together, these findings indicate that, in the absence of salient vertical cues, auditory and audiovisual stimuli tend to be coded along the horizontal dimension, and vision tends to dominate audition in this two-dimensional spatial stimulus-response task.
28
Pasqualotto A, Finucane CM, Newell FN. Ambient visual information confers a context-specific, long-term benefit on memory for haptic scenes. Cognition 2013; 128:363-379. DOI: 10.1016/j.cognition.2013.04.011.
29
Yang Z, Mayer AR. An event-related fMRI study of exogenous orienting across vision and audition. Hum Brain Mapp 2013; 35:964-974. PMID: 23288620. DOI: 10.1002/hbm.22227.
Abstract
The orienting of attention to the spatial location of sensory stimuli in one modality based on sensory stimuli presented in another modality (i.e., cross-modal orienting) is a common mechanism for controlling attentional shifts. The neuronal mechanisms of top-down cross-modal orienting have been studied extensively; however, the neuronal substrates of bottom-up audio-visual cross-modal spatial orienting remain to be elucidated. Therefore, behavioral and event-related functional magnetic resonance imaging (fMRI) data were collected while healthy volunteers (N = 26) performed a spatial cross-modal localization task modeled after the Posner cuing paradigm. Behavioral results indicated that although both visual and auditory cues were effective in producing bottom-up shifts of cross-modal spatial attention, reorienting effects were greater for the visual-cue condition. Statistically significant evidence of inhibition of return was not observed for either condition. Functional results indicated that visual cues with auditory targets resulted in greater activation within ventral and dorsal frontoparietal attention networks, visual and auditory "where" streams, primary auditory cortex, and thalamus during reorienting, across both short and long stimulus onset asynchronies. In contrast, no areas of unique activation were associated with reorienting following auditory cues with visual targets. In summary, the current results question whether audio-visual cross-modal orienting is supramodal in nature, suggesting rather that the initial modality of cue presentation heavily influences both behavioral and functional results. In the context of localization tasks, reorienting effects accompanied by activation of the frontoparietal reorienting network are more robust for visual cues with auditory targets than for auditory cues with visual targets.
Affiliation(s)
- Zhen Yang
- The Mind Research Network/Lovelace Biomedical and Environmental Research Institute, Albuquerque, New Mexico 87106
30
Lickliter R, Bahrick LE. The concept of homology as a basis for evaluating developmental mechanisms: exploring selective attention across the life-span. Dev Psychobiol 2013; 55:76-83. PMID: 22711341. PMCID: PMC3962041. DOI: 10.1002/dev.21037.
Abstract
Research with human infants as well as non-human animal embryos and infants has consistently demonstrated the benefits of intersensory redundancy for perceptual learning and memory for redundantly specified information during early development. Studies of infant affect discrimination, face discrimination, numerical discrimination, sequence detection, abstract rule learning, and word comprehension and segmentation have all shown that intersensory redundancy promotes earlier detection of these properties when compared to unimodal exposure to the same properties. Here we explore the idea that such intersensory facilitation is evident across the life-span and that this continuity is an example of a developmental behavioral homology. We present evidence that intersensory facilitation is most apparent during early phases of learning for a variety of tasks, regardless of developmental level, including domains that are novel or tasks that require discrimination of fine detail or speeded responses. Under these conditions, infants, children, and adults all show intersensory facilitation, suggesting a developmental homology. We discuss the challenges of, and propose strategies for, establishing appropriate guidelines for identifying developmental behavioral homologies. We conclude that evaluating the extent to which continuities observed across development are homologous can contribute to a better understanding of the processes of development.
Affiliation(s)
- Robert Lickliter
- Department of Psychology, Florida International University, Miami, FL, USA.
31
Ngo MK, Pierce RS, Spence C. Using multisensory cues to facilitate air traffic management. Hum Factors 2012; 54:1093-1103. [PMID: 23397817] [DOI: 10.1177/0018720812446623]
Abstract
OBJECTIVE In the present study, we sought to investigate whether auditory and tactile cuing could be used to facilitate a complex, real-world air traffic management scenario. BACKGROUND Auditory and tactile cuing provides an effective means of improving both the speed and accuracy of participants' performance in a variety of laboratory-based visual target detection and identification tasks. METHOD A low-fidelity air traffic simulation task was used in which participants monitored and controlled aircraft. The participants had to ensure that the aircraft landed or exited at the correct altitude, speed, and direction and that they maintained a safe separation from all other aircraft and boundaries. The performance measures recorded included en route time, handoff delay, and conflict resolution delay (the performance measure of interest). In a baseline condition, the aircraft in conflict was highlighted in red (visual cue), and in the experimental conditions, this standard visual cue was accompanied by a simultaneously presented auditory, vibrotactile, or audiotactile cue. RESULTS Participants responded significantly more rapidly, but no less accurately, to conflicts when presented with an additional auditory or audiotactile cue than with either a vibrotactile or visual cue alone. CONCLUSION Auditory and audiotactile cues have the potential for improving operator performance by reducing the time it takes to detect and respond to potential visual target events. APPLICATION These results have important implications for the design and use of multisensory cues in air traffic management.
Affiliation(s)
- Mary K Ngo
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, South Parks Rd., Oxford, OX1 3UD, United Kingdom.
32
Barrett DJK, Krumbholz K. Evidence for multisensory integration in the elicitation of prior entry by bimodal cues. Exp Brain Res 2012; 222:11-20. [PMID: 22975896] [PMCID: PMC3442165] [DOI: 10.1007/s00221-012-3191-8]
Abstract
This study reports an experiment investigating the relative effects of intramodal, crossmodal and bimodal cues on visual and auditory temporal order judgements. Pairs of visual or auditory targets, separated by varying stimulus onset asynchronies, were presented to either side of a central fixation (±45°), and participants were asked to identify the target that had occurred first. In some of the trials, one of the targets was preceded by a short, non-predictive visual, auditory or audiovisual cue stimulus. The cue and target stimuli were presented at the exact same locations in space. The point of subjective simultaneity revealed a consistent spatiotemporal bias towards targets at the cued location. For the visual targets, the intramodal cue elicited the largest, and the crossmodal cue the smallest, bias. The bias elicited by the bimodal cue fell between the intramodal and crossmodal cue biases, with significant differences between all cue types. The pattern for the auditory targets was similar apart from a scaling factor and greater variance, so the differences between the cue conditions did not reach significance. These results provide evidence for multisensory integration in exogenous attentional cueing. The magnitude of the bimodal cueing effect was equivalent to the average of the facilitation elicited by the intramodal and crossmodal cues. Under the assumption that the visual and auditory cues were equally informative, this is consistent with the notion that exogenous attention, like perception, integrates multimodal information in an optimal way.
33
Salzer Y, Oron-Gilad T, Ronen A, Parmet Y. Vibrotactile "on-thigh" alerting system in the cockpit. Hum Factors 2011; 53:118-131. [PMID: 21702330] [DOI: 10.1177/0018720811403139]
Abstract
BACKGROUND Alerts in the cockpit must be robust, difficult to ignore, and easily recognized. Tactile alerts can provide a means of directing the pilot's attention in the already visually and auditorily overloaded cockpit environment. OBJECTIVE This research examined the thigh as a placement for a vibrotactile display in the cockpit. The authors (a) report initial findings concerning the loci and properties of the display, (b) evaluate the added value of tactile cuing with respect to the existing audiovisual alerting system, and (c) address the issue of tactile orienting: whether the cue should display "flight" or "fight" orienting. The tactor display prototype was developed by a joint venture of Israel Aerospace Industries, Lahav Division, and the Ben Gurion University of the Negev (patent pending 11/968,405). A vibrotactile display mounted on the thigh provided directional cues in the vertical plane. Two vibrotactile display modes (eight and four tactors) and two response modes (compatible, i.e., fight [toward the vibrotactile cue], and inverse, i.e., flight [away from the vibrotactile cue]) were evaluated. RESULTS Vertical directional orienting can be achieved by a vibrotactile display assembled on the thigh. The four-tactor display mode and the compatible response mode produced more accurate results. CONCLUSION Tactile cues can provide directional orienting in the vertical plane. The benefit of adding compatible tactile cues compared with visual and auditory cues alone has yet to be established. Nevertheless, fight mode, that is, directing the way to escape from hazardous situations, was preferred. APPLICATION Potential applications include providing directional collision alerts within the vertical plane, assisting the pilot's elevation control, or navigation.
34
35
Sperdin HF, Cappe C, Murray MM. Auditory-somatosensory multisensory interactions in humans: dissociating detection and spatial discrimination. Neuropsychologia 2010; 48:3696-705. [PMID: 20833194] [DOI: 10.1016/j.neuropsychologia.2010.09.001]
Abstract
Simple reaction times (RTs) to auditory-somatosensory (AS) multisensory stimuli are facilitated over their unisensory counterparts both when stimuli are delivered to the same location and when they are separated. In two experiments, we addressed the possibility that top-down and/or task-related influences can dynamically impact the spatial representations mediating these effects and the extent to which multisensory facilitation will be observed. Participants performed a simple detection task in response to auditory, somatosensory, or simultaneous AS stimuli that in turn were either spatially aligned or misaligned by lateralizing the stimuli. Additionally, we informed the participants that they would be retrogradely queried (on one-third of trials) regarding the side where a given stimulus in a given sensory modality had been presented. In this way, we sought to have participants attend to all possible spatial locations and sensory modalities while nonetheless performing a simple detection task. Experiment 1 provided no cues prior to stimulus delivery. Experiment 2 included spatially uninformative cues (50% of trials). In both experiments, multisensory conditions significantly facilitated detection RTs, with no evidence for differences according to spatial alignment (though general benefits of cuing were observed in Experiment 2). Facilitated detection thus occurs even when participants are attending to spatial information. Performance with probes, quantified using sensitivity (d'), was impaired following multisensory trials in general and significantly more so following misaligned multisensory trials. This indicates that spatial information is not available, despite being task-relevant. The collective results support a model wherein early AS interactions may result in a loss of spatial acuity for unisensory information.
Affiliation(s)
- Holger F Sperdin
- Neuropsychology and Neurorehabilitation Service, Department of Clinical Neurosciences, Centre Hospitalier Universitaire Vaudois and University of Lausanne, Lausanne, Switzerland
36
Auditory, tactile, and multisensory cues facilitate search for dynamic visual stimuli. Atten Percept Psychophys 2010; 72:1654-65. [DOI: 10.3758/app.72.6.1654]
37
Attention and the multiple stages of multisensory integration: A review of audiovisual studies. Acta Psychol (Amst) 2010; 134:372-84. [PMID: 20427031] [DOI: 10.1016/j.actpsy.2010.03.010]
Abstract
Multisensory integration and crossmodal attention have a large impact on how we perceive the world. Therefore, it is important to know under what circumstances these processes take place and how they affect our performance. So far, no consensus has been reached on whether multisensory integration and crossmodal attention operate independently and whether they represent truly automatic processes. This review describes the constraints under which multisensory integration and crossmodal attention occur and in what brain areas these processes take place. Some studies suggest that multisensory integration and crossmodal attention take place in higher heteromodal brain areas, while others show the involvement of early sensory-specific areas. Additionally, the current literature suggests that multisensory integration and attention interact depending on the processing level at which integration takes place. To shed light on this issue, different frameworks regarding the level at which multisensory interactions take place are discussed. Finally, this review focuses on the question of whether audiovisual interactions and crossmodal attention in particular are automatic processes. Recent studies suggest that this is not always the case. Overall, this review provides evidence for a parallel processing framework, suggesting that both multisensory integration and attentional processes take place and can interact at multiple stages in the brain.
38
Affiliation(s)
- Charles Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, Oxford, OX1 3UD, United Kingdom.
39
Re-examining the contribution of visuospatial working memory to inhibition of return. Psychol Res 2010; 74:524-31. [DOI: 10.1007/s00426-010-0274-7]
40
Santangelo V, Belardinelli MO, Spence C, Macaluso E. Interactions between voluntary and stimulus-driven spatial attention mechanisms across sensory modalities. J Cogn Neurosci 2009; 21:2384-97. [DOI: 10.1162/jocn.2008.21178]
Abstract
In everyday life, the allocation of spatial attention typically entails the interplay between voluntary (endogenous) and stimulus-driven (exogenous) attention. Furthermore, stimuli in different sensory modalities can jointly influence the direction of spatial attention, due to the existence of cross-sensory links in attentional control. Using fMRI, we examined the physiological basis of these interactions. We induced exogenous shifts of auditory spatial attention while participants engaged in an endogenous visuospatial cueing task. Participants discriminated visual targets in the left or right hemifield. A central visual cue preceded the visual targets, predicting the target location on 75% of the trials (endogenous visual attention). In the interval between the endogenous cue and the visual target, task-irrelevant nonpredictive auditory stimuli were briefly presented either in the left or right hemifield (exogenous auditory attention). Consistent with previous unisensory visual studies, activation of the ventral fronto-parietal attentional network was observed when the visual targets were presented at the uncued side (endogenous invalid trials, requiring visuospatial reorienting), as compared with validly cued targets. Critically, we found that the side of the task-irrelevant auditory stimulus modulated these activations, reducing spatial reorienting effects when the auditory stimulus was presented on the same side as the upcoming (invalid) visual target. These results demonstrate that multisensory mechanisms of attentional control can integrate endogenous and exogenous spatial information, jointly determining attentional orienting toward the most relevant spatial location.
Affiliation(s)
- Valerio Santangelo
- 1Santa Lucia Foundation, Rome, Italy
- 2University of Rome “La Sapienza,” Italy
- Marta Olivetti Belardinelli
- 2University of Rome “La Sapienza,” Italy
- 3Interuniversity Center for Research in Natural and Artificial Systems, Rome, Italy
41
Capturing spatial attention with multisensory cues: a review. Hear Res 2009; 258:134-42. [PMID: 19409472] [DOI: 10.1016/j.heares.2009.04.015]
Abstract
The last 30 years have seen numerous studies demonstrating unimodal and crossmodal spatial cuing effects. However, surprisingly few studies have attempted to investigate whether multisensory cues might be any more effective in capturing a person's spatial attention than unimodal cues. Indeed, until very recently, the consensus view was that multisensory cues were, in fact, no more effective. However, the results of several recent studies have overturned this conclusion, by showing that multisensory cues retain their attention-capturing ability under conditions of perceptual load (i.e., when participants are simultaneously engaged in a concurrent attention-demanding task) while their constituent signals (when presented unimodally) do not. Here we review the empirical literature on multisensory spatial cuing effects and highlight the implications that this research has for the design of more effective warning signals in applied settings.
42
Ho C, Santangelo V, Spence C. Multisensory warning signals: when spatial correspondence matters. Exp Brain Res 2009; 195:261-72. [PMID: 19381621] [DOI: 10.1007/s00221-009-1778-5]
43
Spence C, Ho C. Multisensory warning signals for event perception and safe driving. Theor Issues Ergon Sci 2008. [DOI: 10.1080/14639220701816765]
44
Santangelo V, Spence C. Is the exogenous orienting of spatial attention truly automatic? Evidence from unimodal and multisensory studies. Conscious Cogn 2008; 17:989-1015. [DOI: 10.1016/j.concog.2008.02.006]
45
Spence C, Ho C. Tactile and multisensory spatial warning signals for drivers. IEEE Trans Haptics 2008; 1:121-129. [PMID: 27788068] [DOI: 10.1109/toh.2008.14]
Abstract
The last few years have seen many exciting developments in the area of tactile and multisensory interface design. One of the most rapidly moving practical application areas for these findings is the development of warning signals and information displays for drivers. For instance, tactile displays can be used to awaken sleepy drivers, to capture the attention of distracted drivers, and even to present more complex information to drivers who may be visually overloaded. This review highlights the most important potential costs and benefits associated with the use of tactile and multisensory information displays in a vehicular setting. Multisensory displays that are based on the latest cognitive neuroscience research findings can capture driver attention significantly more effectively than their unimodal (i.e., tactile) counterparts. Multisensory displays can also be used to transmit information more efficiently, as well as to reduce driver workload. Finally, we highlight the key questions currently awaiting further research, including: Are tactile warning signals really intuitive? Are there certain regions of the body (or of the space surrounding the body) where tactile/multisensory warning signals are particularly effective? To what extent are the spatial coincidence and temporal synchrony of the individual sensory signals critical to determining the effectiveness of multisensory displays? And, finally, how does the issue of compliance vs. reliance (the 'cry wolf' phenomenon associated with the presentation of signals that are perceived as false alarms) influence the effectiveness of tactile and/or multisensory warning signals?
46
Santangelo V, Van der Lubbe RHJ, Olivetti Belardinelli M, Postma A. Multisensory integration affects ERP components elicited by exogenous cues. Exp Brain Res 2007; 185:269-77. [PMID: 17909764] [DOI: 10.1007/s00221-007-1151-5]
Abstract
Previous studies have shown that the amplitude of event-related brain potentials (ERPs) elicited by a combined audiovisual stimulus is larger than the sum of those elicited by a single auditory and a single visual stimulus. This enlargement is thought to reflect multisensory integration. Based on these data, it may be hypothesized that the speeding up of responses due to exogenous orienting effects induced by bimodal cues exceeds the sum of the effects induced by single unimodal cues. Behavioral data, however, typically reveal no increased orienting effect following bimodal as compared to unimodal cues, which could be due to a failure of multisensory integration of the cues. To examine this possibility, we computed ERPs elicited by both bimodal (audiovisual) and unimodal (either auditory or visual) cues, and determined their exogenous orienting effects on responses to a to-be-discriminated visual target. Interestingly, the posterior P1 component elicited by bimodal cues was larger than the sum of the P1 components elicited by a single auditory and a single visual cue (i.e., a superadditive effect), but no enhanced orienting effect was found on response speed. The latter result suggests that the multisensory integration elicited by our bimodal cues plays no special role in spatial orienting, at least in the present setting.
Affiliation(s)
- Valerio Santangelo
- Department of Psychology, University of Rome La Sapienza, via dei Marsi 78, 00185 Rome, Italy.
47
Santangelo V, Finoia P, Raffone A, Belardinelli MO, Spence C. Perceptual load affects exogenous spatial orienting while working memory load does not. Exp Brain Res 2007; 184:371-82. [PMID: 17763843] [DOI: 10.1007/s00221-007-1108-8]
Abstract
We examined whether or not increasing visual perceptual load or visual working memory (WM) load would affect the exogenous orienting of visuo-spatial attention, in order to assess whether or not exogenous orienting is genuinely automatic. In Experiment 1, we manipulated visual perceptual load by means of a central morphing shape that in some trials morphed into a particular target shape (a rectangle) that participants had to detect. In Experiment 2, the possibility that the presentation of any changing stimulus at fixation would eliminate exogenous orienting was ruled out, by presenting two alternating letters at fixation. In Experiment 3, we manipulated visual WM load by means of arrays consisting of three (low-load) or five (high-load) randomly located coloured squares. The participants had to remember these items in order to judge whether a cued square had been presented in the same or different colour at the end of each trial. In all the experiments, exogenous visuo-spatial attentional orienting was measured by means of an orthogonal spatial cuing task, in which the participants had to discriminate the elevation (up vs. down) of a visual target previously cued by a spatially nonpredictive visual cue. The results showed that increasing the perceptual load of the task eliminated the exogenous orienting of visuo-spatial attention. By contrast, increasing the WM load had no effect on spatial orienting. These results are discussed in terms of the light that they shed on claims regarding the automaticity of visuo-spatial exogenous orienting.
Affiliation(s)
- Valerio Santangelo
- Department of Experimental Psychology, University of Oxford, Oxford, UK.