1. Qian Q, Lu M, Sun D, Wang A, Zhang M. Rewards weaken cross-modal inhibition of return with visual targets. Perception 2023;52:400-411. [PMID: 37186788] [DOI: 10.1177/03010066231175016]
Abstract
Previous studies have shown that rewards weaken visual inhibition of return (IOR). However, the specific mechanisms underlying the influence of rewards on cross-modal IOR remain unclear. Based on the Posner exogenous cue-target paradigm, the present study was conducted to investigate the effect of rewards on exogenous spatial cross-modal IOR in both visual cue with auditory target (VA) and auditory cue with visual target (AV) conditions. The results showed the following: in the AV condition, the IOR effect size in the high-reward condition was significantly lower than that in the low-reward condition. However, in the VA condition, there was no significant IOR in either the high- or low-reward condition and there was no significant difference between the two conditions. In other words, the use of rewards modulated exogenous spatial cross-modal IOR with visual targets; specifically, high rewards may have weakened IOR in the AV condition. Taken together, our study extended the effect of rewards on IOR to cross-modal attention conditions and demonstrated for the first time that higher motivation among individuals under high-reward conditions weakened the cross-modal IOR with visual targets. Moreover, the present study provided evidence for future research on the relationship between reward and attention.
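The IOR effect size compared across reward conditions above is simply the cued-minus-uncued mean reaction-time difference. A minimal sketch with hypothetical RTs (illustrative numbers and function name only, not the study's data or analysis code):

```python
def ior_effect(rt_cued, rt_uncued):
    # Inhibition of return: mean RT at the previously cued location minus
    # mean RT at the uncued location; positive values indicate IOR.
    mean = lambda xs: sum(xs) / len(xs)
    return mean(rt_cued) - mean(rt_uncued)

# hypothetical per-participant mean RTs (ms), auditory cue / visual target (AV)
high_reward = ior_effect([402, 410, 398], [395, 401, 392])
low_reward = ior_effect([430, 442, 436], [400, 408, 404])
```

A positive difference indicates IOR (slower responses at the cued location); a smaller difference under high reward mirrors the weakening the abstract reports.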
Affiliation(s)
- Ming Zhang: Soochow University, China; Okayama University, Japan
2. Effect of Target Semantic Consistency in Different Sequence Positions and Processing Modes on T2 Recognition: Integration and Suppression Based on Cross-Modal Processing. Brain Sci 2023;13:340. [PMID: 36831882] [PMCID: PMC9954507] [DOI: 10.3390/brainsci13020340]
Abstract
In the rapid serial visual presentation (RSVP) paradigm, sound affects participants' recognition of targets. Although many studies have shown that sound improves cross-modal processing, researchers have not yet explored the effects of sound semantic information with respect to different locations and processing modalities after removing sound saliency. In this study, the RSVP paradigm was used to investigate the difference in attention between conditions of semantic consistency and inconsistency with the target (Experiment 1), as well as the difference between top-down (Experiment 2) and bottom-up (Experiment 3) processing of sounds semantically consistent with target 2 (T2) at different sequence locations, after removing sound saliency. The results showed that cross-modal processing significantly attenuated the attentional blink (AB). The early or lagged appearance of sounds consistent with T2 did not affect participants' judgments in the exogenous attention modality, whereas visual target judgments improved with endogenous attention. The sequence location of sounds consistent with T2 influenced the judgment of audio-visual congruency. The results illustrate the effects of sound semantic information in different locations and processing modalities.
3
|
Valzolgher C, Alzaher M, Gaveau V, Coudert A, Marx M, Truy E, Barone P, Farnè A, Pavani F. Capturing Visual Attention With Perturbed Auditory Spatial Cues. Trends Hear 2023; 27:23312165231182289. [PMID: 37611181 PMCID: PMC10467228 DOI: 10.1177/23312165231182289] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/15/2022] [Revised: 05/25/2023] [Accepted: 05/29/2023] [Indexed: 08/25/2023] Open
Abstract
Lateralized sounds can orient visual attention, with benefits for audio-visual processing. Here, we asked to what extent perturbed auditory spatial cues, resulting from cochlear implants (CI) or unilateral hearing loss (uHL), allow this automatic mechanism of information selection from the audio-visual environment. We used a classic paradigm from experimental psychology (capture of visual attention with sounds) to probe the integrity of audio-visual attentional orienting in 60 adults with hearing loss: bilateral CI users (N = 20), unilateral CI users (N = 20), and individuals with uHL (N = 20). For comparison, we also included a group of normal-hearing (NH, N = 20) participants, tested in binaural and monaural listening conditions (i.e., with one ear plugged). All participants also completed a sound localization task to assess spatial hearing skills. Comparable audio-visual orienting was observed in bilateral CI, uHL, and binaural NH participants. By contrast, audio-visual orienting was, on average, absent in unilateral CI users and reduced in NH participants listening with one ear plugged. Spatial hearing skills were better in bilateral CI, uHL, and binaural NH participants than in unilateral CI users and monaurally plugged NH listeners. In unilateral CI users, spatial hearing skills correlated with audio-visual-orienting abilities. These novel results show that audio-visual attentional orienting can be preserved in bilateral CI users and uHL patients to a greater extent than in unilateral CI users. This highlights the importance of assessing the impact of hearing loss beyond auditory difficulties alone, to capture the extent to which it may enable or impede typical interactions with the multisensory environment.
Affiliation(s)
- Chiara Valzolgher: Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy; Integrative, Multisensory, Perception, Action and Cognition Team, Lyon Neuroscience Research Center, Lyon, France
- Mariam Alzaher: Centre de Recherche Cerveau & Cognition, Toulouse, France; Hospices Civils, Toulouse, France
- Valérie Gaveau: Integrative, Multisensory, Perception, Action and Cognition Team, Lyon Neuroscience Research Center, Lyon, France
- Mathieu Marx: Centre de Recherche Cerveau & Cognition, Toulouse, France; Hospices Civils, Toulouse, France
- Eric Truy: Integrative, Multisensory, Perception, Action and Cognition Team, Lyon Neuroscience Research Center, Lyon, France; Hospices Civils de Lyon, Lyon, France
- Pascal Barone: Centre de Recherche Cerveau & Cognition, Toulouse, France
- Alessandro Farnè: Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy; Integrative, Multisensory, Perception, Action and Cognition Team, Lyon Neuroscience Research Center, Lyon, France; Neuro-immersion, Lyon, France
- Francesco Pavani: Center for Mind/Brain Sciences - CIMeC, University of Trento, Rovereto, Italy; Integrative, Multisensory, Perception, Action and Cognition Team, Lyon Neuroscience Research Center, Lyon, France; Centro Interuniversitario di Ricerca « Cognizione, Linguaggio e Sordità », Rovereto, Italy
4. Carlsen AN, Maslovat D, Kaga K. An unperceived acoustic stimulus decreases reaction time to visual information in a patient with cortical deafness. Sci Rep 2020;10:5825. [PMID: 32242039] [PMCID: PMC7118083] [DOI: 10.1038/s41598-020-62450-9]
Abstract
Responding to multiple stimuli of different modalities has been shown to reduce reaction time (RT), yet many different processes can potentially contribute to multisensory response enhancement. To investigate the neural circuits involved in voluntary response initiation, an acoustic stimulus of varying intensities (80, 105, or 120 dB) was presented during a visual RT task to a patient with profound bilateral cortical deafness and an intact auditory brainstem response. Despite being unable to consciously perceive sound, RT was reliably shortened (~100 ms) on trials where the unperceived acoustic stimulus was presented, confirming the presence of multisensory response enhancement. Although the exact locus of this enhancement is unclear, these results cannot be attributed to involvement of the auditory cortex. Thus, these data provide new and compelling evidence that activation from subcortical auditory processing circuits can contribute to other cortical or subcortical areas responsible for the initiation of a response, without the need for conscious perception.
Affiliation(s)
- Dana Maslovat: School of Kinesiology, University of British Columbia, Vancouver, Canada
- Kimitaka Kaga: National Institute of Sensory Organs, National Tokyo Medical Center, Tokyo, Japan
5. Wang Y, Xiao R, Luo C, Yang L. Attentional disengagement from negative natural sounds for high-anxious individuals. Anxiety Stress Coping 2019;32:298-311. [PMID: 30782012] [DOI: 10.1080/10615806.2019.1583539]
Abstract
Background and objectives: Previous studies have not consistently concluded whether high-anxious persons exhibit attentional bias towards negative natural auditory stimuli. The present study explores whether negative auditory stimuli from real life can induce attentional bias and investigates the exact nature of these biases using an emotional spatial cueing task. Design: Experimental study with a mixed factorial design. Method: We created two groups according to the state-trait anxiety scale, namely high (HA) and low (LA) trait anxiety. Participants (N = 68 undergraduate students) were required to respond to an auditory target after receiving a negative (aversive sounds from natural life) or neutral auditory cue. Results: A 2 (Validity: valid/invalid) × 2 (Cue Valence: negative/neutral) × 2 (Anxiety Group: LA/HA) repeated-measures ANOVA on reaction times revealed that participants with high trait anxiety exhibited slower reaction times on invalid trials following negative cues than following neutral cues. Higher levels of trait anxiety were associated with greater difficulty disengaging attention from negative auditory information. Conclusions: The results demonstrate that impaired attentional disengagement was one of the mechanisms by which high-anxious participants exhibited auditory attentional bias to natural negative information.
Affiliation(s)
- Yanmei Wang: Faculty of Education, East China Normal University, Shanghai, People's Republic of China; School of Psychology and Cognitive Science, East China Normal University, Shanghai, People's Republic of China; Shanghai Changning-ECNU Mental Health Centre, Shanghai, People's Republic of China
- Ruiqi Xiao: School of Psychology and Cognitive Science, East China Normal University, Shanghai, People's Republic of China
- Cheng Luo: School of Psychology and Cognitive Science, East China Normal University, Shanghai, People's Republic of China
- Libing Yang: School of Psychology and Cognitive Science, East China Normal University, Shanghai, People's Republic of China
6. Mechanisms underlying auditory and cross-modal emotional attentional biases: Engagement with and disengagement from aversive auditory stimuli. Motivation and Emotion 2018. [DOI: 10.1007/s11031-018-9739-6]
7. Hartmann M, Fischer MH, Mast FW. Sharing a mental number line across individuals? The role of body position and empathy in joint numerical cognition. Q J Exp Psychol (Hove) 2018;72:1732-1740. [PMID: 30304994] [DOI: 10.1177/1747021818809254]
Abstract
A growing body of research shows that the human brain acts differently when performing a task together with another person than when performing the same task alone. In this study, we investigated the influence of a co-actor on numerical cognition using a joint random number generation (RNG) task. We found that participants generated relatively smaller numbers when they were located to the left (vs. right) of a co-actor (Experiment 1), as if the two individuals shared a mental number line and predominantly selected numbers corresponding to their relative body position. Moreover, the mere presence of another person on the left or right side, or the processing of numbers from a loudspeaker on the left or right side, had no influence on the magnitude of generated numbers (Experiment 2), suggesting that a bias in RNG only emerged during interpersonal interactions. Interestingly, the effect of relative body position on RNG was driven by participants with high trait empathic concern towards others, pointing towards a mediating role of feelings of sympathy in joint compatibility effects. Finally, the spatial bias emerged only after the co-actors swapped their spatial positions, suggesting that joint spatial representations are constructed only after the spatial reference frame becomes salient. In contrast to previous studies, our findings cannot be explained by action co-representation, because the consecutive production of numbers does not involve conflict at the motor response level. Our results therefore suggest that spatial reference coding, rather than motor mirroring, can determine joint compatibility effects, and demonstrate how physical properties of interpersonal situations, such as relative body position, shape seemingly abstract cognition.
Affiliation(s)
- Matthias Hartmann: Department of Psychology, University of Bern, Bern, Switzerland; Faculty of Psychology, Swiss Distance Learning University, Brig, Switzerland
- Martin H Fischer: Division of Cognitive Sciences, University of Potsdam, Potsdam, Germany
- Fred W Mast: Department of Psychology, University of Bern, Bern, Switzerland
8. Li Z, Gu R, Zeng X, Qi M, Cen J, Zhang S, Gu J, Chen Q. Eyes and Ears: Cross-Modal Interference of Tinnitus on Visual Processing. Front Psychol 2018;9:1779. [PMID: 30319490] [PMCID: PMC6166004] [DOI: 10.3389/fpsyg.2018.01779]
Abstract
The visual processing capacity of tinnitus patients is worse than that of normal controls, indicating cross-modal interference. However, the mechanism underlying this tinnitus-modulated visual processing is largely unclear. To explore the influence of tinnitus on visual processing, this study used a signal recognition paradigm to observe whether the tinnitus group would display significantly longer reaction times in processing letter symbols (Experiment 1) and emotional faces (Experiment 2) than the control group. Signal detection and signal recognition, which reflect the perceptual and conceptual aspects of visual processing respectively, were manipulated individually in different conditions to identify the pattern of the cross-modal interference of tinnitus. The results showed that the tinnitus group required significantly longer reaction times to detect and recognize the letter symbols and emotional faces than the control group; meanwhile, no between-group difference was detected in signal encoding. In addition, no gender- or distress-modulated effects of processing were found, suggesting the universality of the present findings. Finally, follow-up studies are needed to explore the neural mechanism behind the decline in the speed of visual processing. The positive emotional bias in tinnitus patients also needs to be further verified and discussed.
Highlights:
- The bottom-up visual processing speed is decreased in tinnitus patients.
- Tinnitus primarily interferes with the detection of visual signals in individuals.
Affiliation(s)
- Zhicheng Li: Department of Otolaryngology Head and Neck Surgery, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China; Center for Studies of Psychological Application and Department of Psychology, South China Normal University, Guangzhou, China
- Ruolei Gu: Key Laboratory of Behavioral Science, Institute of Psychology, Chinese Academy of Sciences, Beijing, China; Department of Psychology, University of Chinese Academy of Sciences, Beijing, China
- Xiangli Zeng: Department of Otolaryngology Head and Neck Surgery, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Min Qi: Department of Otolaryngology Head and Neck Surgery, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Jintian Cen: Department of Otolaryngology Head and Neck Surgery, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Shuqi Zhang: Department of Otolaryngology Head and Neck Surgery, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Jing Gu: Department of Otolaryngology Head and Neck Surgery, The Third Affiliated Hospital, Sun Yat-sen University, Guangzhou, China
- Qi Chen: Center for Studies of Psychological Application and Department of Psychology, South China Normal University, Guangzhou, China
9. Paladini RE, Diana L, Zito GA, Nyffeler T, Wyss P, Mosimann UP, Müri RM, Nef T, Cazzoli D. Attentional reorienting triggers spatial asymmetries in a search task with cross-modal spatial cueing. PLoS One 2018;13:e0190677. [PMID: 29293637] [PMCID: PMC5749835] [DOI: 10.1371/journal.pone.0190677]
Abstract
Cross-modal spatial cueing can affect performance in a visual search task. For example, search performance improves if a visual target and an auditory cue originate from the same spatial location, and it deteriorates if they originate from different locations. Moreover, it has recently been postulated that multisensory settings, i.e., experimental settings in which critical stimuli are concurrently presented in different sensory modalities (e.g., visual and auditory), may trigger asymmetries in visuospatial attention, with facilitation observed for visual stimuli presented in the right compared to the left visual space. However, it remains unclear whether auditory cueing of attention differentially affects search performance in the left and right hemifields in audio-visual search tasks. The present study investigated whether spatial asymmetries would occur in a search task with cross-modal spatial cueing. Participants completed a visual search task that contained no auditory cues (i.e., a unimodal visual condition), spatially congruent, spatially incongruent, and spatially non-informative auditory cues. To further assess participants' accuracy in localising the auditory cues, a unimodal auditory spatial localisation task was also administered. The results demonstrated no left/right asymmetries in the unimodal visual search condition. Both an additional incongruent and a spatially non-informative auditory cue resulted in lateral asymmetries, with increased search times for targets presented in the left compared to the right hemifield. No such spatial asymmetry was observed in the congruent condition; however, participants' performance in the congruent condition was modulated by their tone localisation accuracy. The findings of the present study demonstrate that spatial asymmetries in multisensory processing depend on the validity of the cross-modal cues and occur under specific attentional conditions, i.e., when visual attention has to be reoriented towards the left hemifield.
Affiliation(s)
- Rebecca E. Paladini: Gerontechnology and Rehabilitation Group, University of Bern, Bern, Switzerland
- Lorenzo Diana: Perception and Eye Movement Laboratory, Departments of Neurology and Clinical Research, University Hospital Inselspital and University of Bern, Bern, Switzerland
- Giuseppe A. Zito: Department of Bioengineering, Imperial College London, London, United Kingdom
- Thomas Nyffeler: Gerontechnology and Rehabilitation Group, University of Bern, Bern, Switzerland; Perception and Eye Movement Laboratory, Departments of Neurology and Clinical Research, University Hospital Inselspital and University of Bern, Bern, Switzerland; Center of Neurology and Neurorehabilitation, Luzerner Kantonsspital, Switzerland
- Patric Wyss: Gerontechnology and Rehabilitation Group, University of Bern, Bern, Switzerland
- Urs P. Mosimann: Gerontechnology and Rehabilitation Group, University of Bern, Bern, Switzerland; University Hospital of Old Age Psychiatry, University of Bern, Bern, Switzerland
- René M. Müri: Gerontechnology and Rehabilitation Group, University of Bern, Bern, Switzerland; Perception and Eye Movement Laboratory, Departments of Neurology and Clinical Research, University Hospital Inselspital and University of Bern, Bern, Switzerland
- Tobias Nef: Gerontechnology and Rehabilitation Group, University of Bern, Bern, Switzerland; ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
- Dario Cazzoli: Gerontechnology and Rehabilitation Group, University of Bern, Bern, Switzerland; ARTORG Center for Biomedical Engineering Research, University of Bern, Bern, Switzerland
10. Dean CL, Eggleston BA, Gibney KD, Aligbe E, Blackwell M, Kwakye LD. Auditory and visual distractors disrupt multisensory temporal acuity in the crossmodal temporal order judgment task. PLoS One 2017;12:e0179564. [PMID: 28723907] [PMCID: PMC5516972] [DOI: 10.1371/journal.pone.0179564]
Abstract
The ability to synthesize information across multiple senses is known as multisensory integration and is essential to our understanding of the world around us. Sensory stimuli that occur close in time are likely to be integrated, and the accuracy of this integration is dependent on our ability to precisely discriminate the relative timing of unisensory stimuli (crossmodal temporal acuity). Previous research has shown that multisensory integration is modulated by both bottom-up stimulus features, such as the temporal structure of unisensory stimuli, and top-down processes such as attention. However, it is currently uncertain how attention alters crossmodal temporal acuity. The present study investigated whether increasing attentional load would decrease crossmodal temporal acuity by utilizing a dual-task paradigm. Participants were asked to judge the temporal order of a flash and beep presented at various temporal offsets (crossmodal temporal order judgment (CTOJ) task) while also directing their attention to a secondary distractor task in which they detected a target stimulus within a stream of visual or auditory distractors. We found decreased performance on the CTOJ task, as well as increases in both the positive and negative just noticeable difference with increasing load, for both the auditory and visual distractor tasks. This strongly suggests that attention promotes greater crossmodal temporal acuity and that reducing the attentional capacity to process multisensory stimuli results in detriments to multisensory temporal processing. Our study is the first to demonstrate changes in multisensory temporal processing with decreased attentional capacity using a dual-task paradigm and has strong implications for developmental disorders such as autism spectrum disorders and developmental dyslexia, which are associated with alterations in both multisensory temporal processing and attention.
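The just noticeable difference (JND) discussed above is conventionally obtained by fitting a psychometric function to the proportion of "visual first" responses across stimulus onset asynchronies (SOAs). A rough sketch of that kind of analysis, assuming a cumulative-Gaussian fit over hypothetical pooled data (this is not the authors' pipeline; the coarse grid search stands in for a proper optimizer):

```python
import math

def cum_gauss(soa, pss, sigma):
    # probability of a "visual first" response at a given SOA (ms)
    return 0.5 * (1.0 + math.erf((soa - pss) / (sigma * math.sqrt(2.0))))

def fit_toj(soas, p_vis_first):
    # brute-force least-squares fit of the point of subjective simultaneity
    # (PSS) and the slope parameter sigma
    best = (float("inf"), 0, 1)
    for pss in range(-100, 101, 2):
        for sigma in range(5, 200):
            err = sum((cum_gauss(s, pss, sigma) - p) ** 2
                      for s, p in zip(soas, p_vis_first))
            if err < best[0]:
                best = (err, pss, sigma)
    _, pss, sigma = best
    jnd = 0.6745 * sigma  # 75%-point of a cumulative Gaussian, relative to the PSS
    return pss, jnd

# hypothetical pooled data: negative SOA = auditory leading (ms)
soas = [-200, -100, -50, 0, 50, 100, 200]
p_vis_first = [0.02, 0.10, 0.30, 0.50, 0.70, 0.90, 0.98]
pss, jnd = fit_toj(soas, p_vis_first)
```

On this logic, increasing attentional load flattens the psychometric function (larger sigma), which shows up as larger positive and negative JNDs.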
Affiliation(s)
- Cassandra L. Dean: Department of Neuroscience, Oberlin College, Oberlin, Ohio, United States of America
- Brady A. Eggleston: Department of Neuroscience, Oberlin College, Oberlin, Ohio, United States of America
- Kyla David Gibney: Department of Neuroscience, Oberlin College, Oberlin, Ohio, United States of America
- Enimielen Aligbe: Department of Neuroscience, Oberlin College, Oberlin, Ohio, United States of America
- Marissa Blackwell: Department of Neuroscience, Oberlin College, Oberlin, Ohio, United States of America
- Leslie Dowell Kwakye: Department of Neuroscience, Oberlin College, Oberlin, Ohio, United States of America
11. Gibney KD, Aligbe E, Eggleston BA, Nunes SR, Kerkhoff WG, Dean CL, Kwakye LD. Visual Distractors Disrupt Audiovisual Integration Regardless of Stimulus Complexity. Front Integr Neurosci 2017;11:1. [PMID: 28163675] [PMCID: PMC5247431] [DOI: 10.3389/fnint.2017.00001]
Abstract
The intricate relationship between multisensory integration and attention has been extensively researched in the multisensory field; however, the necessity of attention for the binding of multisensory stimuli remains contested. In the current study, we investigated whether diverting attention from well-known multisensory tasks would disrupt integration and whether the complexity of the stimulus and task modulated this interaction. A secondary objective of this study was to investigate individual differences in the interaction of attention and multisensory integration. Participants completed a simple audiovisual speeded detection task and a McGurk task under various perceptual load conditions: no load (multisensory task while visual distractors were present), low load (multisensory task while detecting the presence of a yellow letter among the visual distractors), and high load (multisensory task while detecting the presence of a number among the visual distractors). Consistent with prior studies, we found that increased perceptual load led to decreased reports of the McGurk illusion, thus confirming the necessity of attention for the integration of speech stimuli. Although increased perceptual load led to longer response times for all stimuli in the speeded detection task, participants responded faster on multisensory trials than unisensory trials. However, the increase in multisensory response times violated the race model for the no and low perceptual load conditions only. Additionally, a geometric measure of Miller's inequality showed a decrease in multisensory integration for the speeded detection task with increasing perceptual load. Surprisingly, we found diverging changes in multisensory integration with increasing load for participants who did not show integration in the no load condition: no change in integration for the McGurk task with increasing load, but increases in integration for the detection task. The results of this study indicate that attention plays a crucial role in multisensory integration for both highly complex and simple multisensory tasks, and that attention may interact differently with multisensory processing in individuals who do not strongly integrate multisensory information.
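Miller's race model inequality referenced above bounds the multisensory response-time CDF by the sum of the two unisensory CDFs; positive deviations ("violations") indicate integration beyond mere statistical facilitation. A minimal sketch with hypothetical reaction times (illustrative values and function names, not the study's data):

```python
def ecdf(rts, t):
    # empirical probability that a response has occurred by time t
    return sum(rt <= t for rt in rts) / len(rts)

def race_violation(rt_av, rt_a, rt_v, times):
    # Miller's inequality: P(RT<=t | AV) <= P(RT<=t | A) + P(RT<=t | V).
    # Positive differences mean audiovisual RTs are faster than any race
    # between independent unisensory processes could produce.
    return [ecdf(rt_av, t) - min(1.0, ecdf(rt_a, t) + ecdf(rt_v, t))
            for t in times]

# hypothetical RTs (ms)
rt_a = [310, 330, 350, 370, 390]
rt_v = [300, 320, 340, 360, 380]
rt_av = [240, 255, 270, 330, 360]
violations = race_violation(rt_av, rt_a, rt_v, times=range(200, 401, 25))
```

A "geometric measure" of the inequality, as in the abstract, would then summarize the positive portion of this difference curve (e.g., its area), so that larger values reflect stronger integration.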
Affiliation(s)
- Kyla D Gibney: Department of Neuroscience, Oberlin College, Oberlin, OH, USA
- Sarah R Nunes: Department of Neuroscience, Oberlin College, Oberlin, OH, USA
- Leslie D Kwakye: Department of Neuroscience, Oberlin College, Oberlin, OH, USA
12.
Abstract
Cross-modal attention and multisensory integration are essential for us to perceive the world; our most intuitive impressions of the environment are based on what we see and what we hear. Therefore, it is important to understand the interactions between visual and auditory inputs. Previous studies have shown that multisensory integration can be modulated by attention; however, how top-down attention is controlled or allocated across the sensory modalities remains unclear. In this study, we measured the cortical areas activated by a cue-target spatial attention paradigm in both the visual and auditory fields using functional MRI. The reaction times from the behavioral results indicated that interactions between the two types of stimuli exist. The imaging results indicated that interactions between multisensory inputs can lead to enhancement or depression of the cortical response under top-down spatial attention. Moreover, the activation of the middle temporal gyrus and insula in tasks with irrelevant stimuli appears to indicate that multisensory integration proceeds automatically.
13. Macaluso E, Noppeney U, Talsma D, Vercillo T, Hartcher-O’Brien J, Adam R. The Curious Incident of Attention in Multisensory Integration: Bottom-up vs. Top-down. Multisens Res 2016. [DOI: 10.1163/22134808-00002528]
Abstract
The role attention plays in our experience of a coherent, multisensory world is still controversial. On the one hand, a subset of inputs may be selected for detailed processing and multisensory integration in a top-down manner, i.e., guidance of multisensory integration by attention. On the other hand, stimuli may be integrated in a bottom-up fashion according to low-level properties such as spatial coincidence, thereby capturing attention. Moreover, attention itself is multifaceted and can be described via both top-down and bottom-up mechanisms. Thus, the interaction between attention and multisensory integration is complex and situation-dependent. The authors of this opinion paper are researchers who have contributed to this discussion from behavioural, computational, and neurophysiological perspectives. We posed a series of questions, the goal of which was to illustrate the interplay between bottom-up and top-down processes in various multisensory scenarios, in order to clarify the standpoint taken by each author and with the hope of reaching a consensus. Although divergence of viewpoint emerges in the current responses, there is also considerable overlap: in general, it can be concluded that the amount of influence that attention exerts on multisensory integration depends on the current task as well as the prior knowledge and expectations of the observer. Moreover, stimulus properties such as reliability and salience also determine how open the processing is to influences of attention.
Collapse
Affiliation(s)
| | - Uta Noppeney
- Computational Neuroscience and Cognitive Robotics Centre, University of Birmingham, UK
| | - Durk Talsma
- Department of Experimental Psychology, Ghent University, Henri Dunantlaan 2, B-9000 Ghent, Belgium
| | | | | | - Ruth Adam
- Institute for Stroke and Dementia Research, Klinikum der Universität München, Ludwig-Maximilians-Universität LMU, Munich, Germany
| |
Collapse
|
14
|
Choi W, Lee G, Lee S. Effect of the cognitive-motor dual-task using auditory cue on balance of survivors with chronic stroke: a pilot study. Clin Rehabil 2014; 29:763-70. [DOI: 10.1177/0269215514556093] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2014] [Accepted: 09/25/2014] [Indexed: 11/15/2022]
Abstract
Objective: To investigate the effect of a cognitive-motor dual-task using auditory cues on the balance of patients with chronic stroke. Design: Randomized controlled trial. Setting: Inpatient rehabilitation center. Subjects: Thirty-seven individuals with chronic stroke. Interventions: The participants were randomly allocated to the dual-task group (n = 19) and the single-task group (n = 18). The dual-task group performed a cognitive-motor dual-task in which they carried a circular ring from side to side according to a random auditory cue during treadmill walking. The single-task group walked on a treadmill only. All subjects completed 15 min per session, three times per week, for four weeks, with conventional rehabilitation five times per week over the four weeks. Main measures: Before and after intervention, both static and dynamic balance were measured with a force platform and the Timed Up and Go (TUG) test. Results: The dual-task group showed significant improvement in all variables compared to the single-task group, except for anteroposterior (AP) sway velocity with eyes open and TUG at follow-up: mediolateral (ML) sway velocity with eyes open (dual-task group vs. single-task group: 2.11 mm/s vs. 0.38 mm/s), ML sway velocity with eyes closed (2.91 mm/s vs. 1.35 mm/s), AP sway velocity with eyes closed (4.84 mm/s vs. 3.12 mm/s). After intervention, all variables showed significant improvement in the dual-task group compared to baseline. Conclusion: The study results suggest that the performance of a cognitive-motor dual-task using auditory cues may influence balance improvements in chronic stroke patients.
Collapse
Affiliation(s)
- Wonjae Choi
- Department of Physical Therapy, College of Health and Welfare, Sahmyook University, Seoul, South Korea
- Institute of Rehabilitation Science, Seoul, South Korea
| | - GyuChang Lee
- Department of Physical Therapy, College of Natural Science, Kyungnam University, Changwon-si, South Korea
| | - Seungwon Lee
- Department of Physical Therapy, College of Health and Welfare, Sahmyook University, Seoul, South Korea
| |
Collapse
|
15
|
Wolf D, Schock L, Bhavsar S, Demenescu LR, Sturm W, Mathiak K. Emotional valence and spatial congruency differentially modulate crossmodal processing: an fMRI study. Front Hum Neurosci 2014; 8:659. [PMID: 25221495 PMCID: PMC4145656 DOI: 10.3389/fnhum.2014.00659] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/04/2013] [Accepted: 08/08/2014] [Indexed: 11/13/2022] Open
Abstract
Salient exogenous stimuli modulate attentional processes and lead to attention shifts, even across modalities and at a pre-attentive level. Stimulus properties such as hemispheric laterality and emotional valence influence processing, but their specific interaction in audio-visual attention paradigms remains ambiguous. We conducted an fMRI experiment to investigate the interaction of supramodal spatial congruency, emotional salience, and stimulus presentation side on neural processes of attention modulation. Emotionally neutral auditory deviants were presented in a dichotic listening oddball design. Simultaneously, visual target stimuli (schematic faces) were presented, which displayed either a negative or a positive emotion. These targets were presented in the left or in the right visual field and were either spatially congruent (valid) or incongruent (invalid) with the concurrent deviant auditory stimuli. As expected, we observed that deviant stimuli serve as attention-directing cues for visual target stimuli. Region-of-interest (ROI) analyses suggested differential effects of stimulus valence and spatial presentation on the hemodynamic response in bilateral auditory cortices. These results underline the importance of valence and presentation side for attention guidance by deviant sound events and may hint at a hemispheric specialization for valence and attention processing.
Collapse
Affiliation(s)
- Dhana Wolf
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany; Interdisciplinary Centre for Clinical Research, Medical School, RWTH Aachen University, Aachen, Germany; JARA-Translational Brain Medicine, Research Centre Jülich, Jülich and Aachen, Germany
| | - Lisa Schock
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany; Interdisciplinary Centre for Clinical Research, Medical School, RWTH Aachen University, Aachen, Germany; JARA-Translational Brain Medicine, Research Centre Jülich, Jülich and Aachen, Germany
| | - Saurabh Bhavsar
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany; Interdisciplinary Centre for Clinical Research, Medical School, RWTH Aachen University, Aachen, Germany; JARA-Translational Brain Medicine, Research Centre Jülich, Jülich and Aachen, Germany
| | - Liliana R Demenescu
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany; Interdisciplinary Centre for Clinical Research, Medical School, RWTH Aachen University, Aachen, Germany; JARA-Translational Brain Medicine, Research Centre Jülich, Jülich and Aachen, Germany
| | - Walter Sturm
- Interdisciplinary Centre for Clinical Research, Medical School, RWTH Aachen University, Aachen, Germany; Department of Neurology, Clinical Neuropsychology, Medical School, RWTH Aachen University, Aachen, Germany
| | - Klaus Mathiak
- Department of Psychiatry, Psychotherapy and Psychosomatics, Medical School, RWTH Aachen University, Aachen, Germany; Interdisciplinary Centre for Clinical Research, Medical School, RWTH Aachen University, Aachen, Germany; JARA-Translational Brain Medicine, Research Centre Jülich, Jülich and Aachen, Germany
| |
Collapse
|
16
|
Time-compressed spoken word primes crossmodally enhance processing of semantically congruent visual targets. Atten Percept Psychophys 2013; 76:575-90. [DOI: 10.3758/s13414-013-0569-z] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
|
17
|
Which factors are important for crossmodal attentional effect? Exp Brain Res 2013; 225:491-8. [DOI: 10.1007/s00221-012-3389-9] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2011] [Accepted: 12/18/2012] [Indexed: 11/27/2022]
|
18
|
Yang Z, Mayer AR. An event-related FMRI study of exogenous orienting across vision and audition. Hum Brain Mapp 2013; 35:964-74. [PMID: 23288620 DOI: 10.1002/hbm.22227] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/29/2012] [Revised: 11/02/2012] [Accepted: 11/05/2012] [Indexed: 11/11/2022] Open
Abstract
The orienting of attention to the spatial location of sensory stimuli in one modality based on sensory stimuli presented in another modality (i.e., cross-modal orienting) is a common mechanism for controlling attentional shifts. The neuronal mechanisms of top-down cross-modal orienting have been studied extensively. However, the neuronal substrates of bottom-up audio-visual cross-modal spatial orienting remain to be elucidated. Therefore, behavioral and event-related functional magnetic resonance imaging (FMRI) data were collected while healthy volunteers (N = 26) performed a spatial cross-modal localization task modeled after the Posner cuing paradigm. Behavioral results indicated that although both visual and auditory cues were effective in producing bottom-up shifts of cross-modal spatial attention, reorienting effects were greater for the visual cues condition. Statistically significant evidence of inhibition of return was not observed for either condition. Functional results also indicated that visual cues with auditory targets resulted in greater activation within ventral and dorsal frontoparietal attention networks, visual and auditory "where" streams, primary auditory cortex, and thalamus during reorienting across both short and long stimulus onset asynchronies. In contrast, no areas of unique activation were associated with reorienting following auditory cues with visual targets. In summary, current results question whether audio-visual cross-modal orienting is supramodal in nature, suggesting rather that the initial modality of cue presentation heavily influences both behavioral and functional results. In the context of localization tasks, reorienting effects accompanied by the activation of the frontoparietal reorienting network are more robust for visual cues with auditory targets than for auditory cues with visual targets.
Collapse
Affiliation(s)
- Zhen Yang
- The Mind Research Network/Lovelace Biomedical and Environmental Research Institute, Albuquerque, New Mexico 87106
| | | |
Collapse
|
19
|
Cervantes Constantino F, Pinggera L, Paranamana S, Kashino M, Chait M. Detection of appearing and disappearing objects in complex acoustic scenes. PLoS One 2012; 7:e46167. [PMID: 23029426 PMCID: PMC3459829 DOI: 10.1371/journal.pone.0046167] [Citation(s) in RCA: 35] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/14/2011] [Accepted: 08/30/2012] [Indexed: 11/19/2022] Open
Abstract
The ability to detect sudden changes in the environment is critical for survival. Hearing is hypothesized to play a major role in this process by serving as an "early warning device," rapidly directing attention to new events. Here, we investigate listeners' sensitivity to changes in complex acoustic scenes: what makes certain events "pop out" and grab attention while others remain unnoticed? We use artificial "scenes" populated by multiple pure-tone components, each with a unique frequency and amplitude modulation rate. Importantly, these scenes lack semantic attributes, which may have confounded previous studies, thus allowing us to probe low-level processes involved in auditory change perception. Our results reveal a striking difference between "appear" and "disappear" events. Listeners are remarkably tuned to object appearance: change detection and identification performance are at ceiling; response times are short, with little effect of scene size, suggesting a pop-out process. In contrast, listeners have difficulty detecting disappearing objects, even in small scenes: performance rapidly deteriorates with growing scene size; response times are slow, and even when change is detected, the changed component is rarely successfully identified. We also measured change detection performance when a noise or silent gap was inserted at the time of change or when the scene was interrupted by a distractor that occurred at the time of change but did not mask any scene elements. Gaps adversely affected the processing of item appearance but not disappearance. However, distractors reduced both appearance and disappearance detection. Together, our results suggest a role for neural adaptation and sensitivity to transients in the process of auditory change detection, similar to what has been demonstrated for visual change detection. Importantly, listeners consistently performed better for item addition (relative to deletion) across all scene interruptions used, suggesting a robust perceptual representation of item appearance.
Collapse
Affiliation(s)
| | - Leyla Pinggera
- Ear Institute, University College London, London, United Kingdom
| | | | - Makio Kashino
- NTT Communication Science Laboratories, NTT Corporation, Atsugi, Japan
| | - Maria Chait
- Ear Institute, University College London, London, United Kingdom
- * E-mail:
| |
Collapse
|
20
|
Chen X, Chen Q, Gao D, Yue Z. Interaction between endogenous and exogenous orienting in crossmodal attention. Scand J Psychol 2012; 53:303-8. [DOI: 10.1111/j.1467-9450.2012.00957.x] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
|
21
|
Dewey JA, Carr TH. Is that what I wanted to do? Cued vocalizations influence the phenomenology of controlling a moving object. Conscious Cogn 2012; 21:507-25. [PMID: 22301454 DOI: 10.1016/j.concog.2012.01.004] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/13/2011] [Revised: 01/04/2012] [Accepted: 01/05/2012] [Indexed: 11/17/2022]
Abstract
The phenomenology of controlled action depends on comparisons between predicted and actually perceived sensory feedback called action-effects. We investigated if intervening task-irrelevant but semantically related information influences monitoring processes that give rise to a sense of control. Participants judged whether a moving box "obeyed" or "disobeyed" their own arrow keystrokes (Experiments 1 and 2) or visual cues representing the computer's choices (Experiment 3). During 1s delays between keystrokes/cues and box movements, participants vocalized directions ("up", "down", "left", or "right") cued by letters inside the box. Congruency of cued vocalizations was manipulated relative to previously selected keystrokes and upcoming box movements. In Experiment 1, reported obey moves and feelings of control reflected the true frequency of obey moves, but were also modulated by vocalizations. Incongruent vocalizations reduced reported obey moves, whereas congruent vocalizations increased them. In Experiment 2, vocalizations had stronger effects when their congruence with primary-task box movement was consistent for several consecutive moves before congruence changed. In Experiment 3, analogous impacts of vocalizations occurred when the computer selected the directions and participants judged whether the computer had control of the box. We conclude that predicted and perceived action-effects associated with semantically related but separate and ostensibly irrelevant actions can be conflated with one another. This interference is not restricted to actions performed with the same effector or within the same modality, or even by the same actor. Thus in estimating degrees of control, the mind integrates across ongoing action systems, whether or not they are logically task-relevant.
Collapse
Affiliation(s)
- John A Dewey
- Department of Psychology, Michigan State University, East Lansing, MI 48824, USA.
| | | |
Collapse
|
22
|
Multisensory perceptual learning reshapes both fast and slow mechanisms of crossmodal processing. Cogn Affect Behav Neurosci 2011; 11:1-12. [PMID: 21264643 DOI: 10.3758/s13415-010-0006-x] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Previous research has shown that sounds facilitate perception of visual patterns appearing immediately after the sound but impair perception of patterns appearing after some delay. Here we examined the spatial gradient of the fast crossmodal facilitation effect and the slow inhibition effect in order to test whether they reflect separate mechanisms. We found that crossmodal facilitation is only observed at visual field locations overlapping with the sound, whereas crossmodal inhibition affects the whole hemifield. Furthermore, we tested whether multisensory perceptual learning with misaligned audio-visual stimuli reshapes crossmodal facilitation and inhibition. We found that training shifts crossmodal facilitation towards the trained location without changing its range. By contrast, training narrows the range of inhibition without shifting its position. Our results suggest that crossmodal facilitation and inhibition reflect separate mechanisms that can both be reshaped by multisensory experience even in adult humans. Multisensory links seem to be more plastic than previously thought.
Collapse
|
23
|
The interplay of cue modality and response latency in brain areas supporting crossmodal motor preparation: an event-related fMRI study. Exp Brain Res 2011; 214:9-17. [PMID: 21656217 DOI: 10.1007/s00221-011-2745-5] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2010] [Accepted: 05/18/2011] [Indexed: 10/18/2022]
Abstract
Crossmodal (auditory, visual) motor facilitation can be defined as a cue in one sensory modality eliciting speeded responses to targets in a different sensory modality. We used event-related functional magnetic resonance imaging (fMRI) to isolate brain activity underlying crossmodal motor preparation. Our predictions were that interactions between input modality and processes underlying response selection would be indexed by distinct spatiotemporal brain dynamics. A crossmodal response selection task was designed in which a central, nonspatial cue indicated the response rule (compatible or incompatible) to a lateralized target. Cues and targets appeared in auditory and visual modalities and were separated by a lengthy delay period in which cue-related brain activity could be dissociated. We found faster reaction times to auditory compared with visual cues. Next, we correlated brain activity with behavioural performance using multivariate spatiotemporal partial least squares. We identified a distinct, significant brain-behaviour pattern in which faster reaction times to auditory cues were correlated with higher blood oxygenation level-dependent percent signal change in medial visual, frontoparietal (inferior parietal lobule, superior frontal gyrus and premotor cortex) and subcortical (thalamus and cerebellum) areas. For visual cues, quicker responses were linked to greater activity in the same frontoparietal and subcortical but not medial visual areas. Our results show that both modality-dependent and modality-independent brain areas with different brain-behaviour relationships are implicated in crossmodal motor preparation.
Collapse
|
24
|
Diffusion tensor imaging shows white matter tracts between human auditory and visual cortex. Exp Brain Res 2011; 213:299-308. [PMID: 21573953 DOI: 10.1007/s00221-011-2715-y] [Citation(s) in RCA: 77] [Impact Index Per Article: 5.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2010] [Accepted: 04/26/2011] [Indexed: 10/18/2022]
Abstract
Although it is known that sounds can affect visual perception, the neural correlates of crossmodal interactions are still disputed. Previous tracer studies in non-human primates revealed direct anatomical connections between auditory and visual brain areas. We examined the structural connectivity of the auditory cortex in normal humans by diffusion-weighted tensor magnetic resonance imaging and probabilistic tractography. Tracts were seeded in Heschl's region or the planum temporale. Fibres crossed hemispheres at the posterior corpus callosum. Ipsilateral fibres seeded in Heschl's region projected to the superior temporal sulcus, the supramarginal gyrus and intraparietal sulcus, and the occipital cortex including the calcarine sulcus. Fibres seeded in the planum temporale terminated primarily in the superior temporal sulcus, the supramarginal gyrus, the central sulcus and adjacent regions. Our findings suggest the existence of direct white matter connections between auditory and visual cortex, in addition to subcortical, temporal and parietal connections.
Collapse
|
25
|
Effects of attention on a relative mislocalization with successively presented stimuli. Vision Res 2010; 50:1793-802. [DOI: 10.1016/j.visres.2010.05.036] [Citation(s) in RCA: 14] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/25/2009] [Revised: 05/26/2010] [Accepted: 05/26/2010] [Indexed: 11/20/2022]
|
26
|
Attention and the multiple stages of multisensory integration: A review of audiovisual studies. Acta Psychol (Amst) 2010; 134:372-84. [PMID: 20427031 DOI: 10.1016/j.actpsy.2010.03.010] [Citation(s) in RCA: 166] [Impact Index Per Article: 11.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/10/2009] [Revised: 03/23/2010] [Accepted: 03/27/2010] [Indexed: 11/20/2022] Open
Abstract
Multisensory integration and crossmodal attention have a large impact on how we perceive the world. Therefore, it is important to know under what circumstances these processes take place and how they affect our performance. So far, no consensus has been reached on whether multisensory integration and crossmodal attention operate independently and whether they represent truly automatic processes. This review describes the constraints under which multisensory integration and crossmodal attention occur and in what brain areas these processes take place. Some studies suggest that multisensory integration and crossmodal attention take place in higher heteromodal brain areas, while others show the involvement of early sensory-specific areas. Additionally, the current literature suggests that multisensory integration and attention interact depending on the processing level at which integration takes place. To shed light on this issue, different frameworks regarding the level at which multisensory interactions take place are discussed. Finally, this review focuses on the question of whether audiovisual interactions, and crossmodal attention in particular, are automatic processes. Recent studies suggest that this is not always the case. Overall, this review provides evidence for a parallel processing framework, suggesting that both multisensory integration and attentional processes take place and can interact at multiple stages in the brain.
Collapse
|
27
|
Nuku P, Bekkering H. When one sees what the other hears: Crossmodal attentional modulation for gazed and non-gazed upon auditory targets. Conscious Cogn 2010; 19:135-43. [DOI: 10.1016/j.concog.2009.07.012] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/01/2008] [Revised: 07/16/2009] [Accepted: 07/24/2009] [Indexed: 10/20/2022]
|
28
|
Affiliation(s)
- Charles Spence
- Crossmodal Research Laboratory, Department of Experimental Psychology, University of Oxford, Oxford, OX1 3UD, United Kingdom.
| |
Collapse
|
29
|
Santangelo V, Belardinelli MO, Spence C, Macaluso E. Interactions between Voluntary and Stimulus-driven Spatial Attention Mechanisms across Sensory Modalities. J Cogn Neurosci 2009; 21:2384-97. [DOI: 10.1162/jocn.2008.21178] [Citation(s) in RCA: 37] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/04/2022]
Abstract
In everyday life, the allocation of spatial attention typically entails the interplay between voluntary (endogenous) and stimulus-driven (exogenous) attention. Furthermore, stimuli in different sensory modalities can jointly influence the direction of spatial attention, due to the existence of cross-sensory links in attentional control. Using fMRI, we examined the physiological basis of these interactions. We induced exogenous shifts of auditory spatial attention while participants engaged in an endogenous visuospatial cueing task. Participants discriminated visual targets in the left or right hemifield. A central visual cue preceded the visual targets, predicting the target location on 75% of the trials (endogenous visual attention). In the interval between the endogenous cue and the visual target, task-irrelevant nonpredictive auditory stimuli were briefly presented either in the left or right hemifield (exogenous auditory attention). Consistent with previous unisensory visual studies, activation of the ventral fronto-parietal attentional network was observed when the visual targets were presented at the uncued side (endogenous invalid trials, requiring visuospatial reorienting), as compared with validly cued targets. Critically, we found that the side of the task-irrelevant auditory stimulus modulated these activations, reducing spatial reorienting effects when the auditory stimulus was presented on the same side as the upcoming (invalid) visual target. These results demonstrate that multisensory mechanisms of attentional control can integrate endogenous and exogenous spatial information, jointly determining attentional orienting toward the most relevant spatial location.
Collapse
Affiliation(s)
- Valerio Santangelo
- 1Santa Lucia Foundation, Rome, Italy
- 2University of Rome “La Sapienza,” Italy
| | - Marta Olivetti Belardinelli
- 2University of Rome “La Sapienza,” Italy
- 3Interuniversity Center for Research in Natural and Artificial Systems, Rome, Italy
| | | | | |
Collapse
|
30
|
Competition between auditory and visual spatial cues during visual task performance. Exp Brain Res 2009; 195:593-602. [DOI: 10.1007/s00221-009-1829-y] [Citation(s) in RCA: 12] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2009] [Accepted: 04/23/2009] [Indexed: 11/26/2022]
|
31
|
Beer AL, Watanabe T. Specificity of auditory-guided visual perceptual learning suggests crossmodal plasticity in early visual cortex. Exp Brain Res 2009; 198:353-61. [PMID: 19306091 DOI: 10.1007/s00221-009-1769-6] [Citation(s) in RCA: 17] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2008] [Accepted: 03/05/2009] [Indexed: 11/25/2022]
Abstract
Sounds modulate visual perception. Blind humans show altered brain activity in early visual cortex. However, it is still unclear whether crossmodal activity in visual cortex results from unspecific top-down feedback, a lack of visual input, or genuinely reflects crossmodal interactions at early sensory levels. We examined how sounds affect visual perceptual learning in sighted adults. Visual motion discrimination was tested prior to and following eight sessions in which observers were exposed to irrelevant moving dots while detecting sounds. After training, visual discrimination improved more strongly for motion directions that were paired with a relevant sound during training than for other directions. Crossmodal learning was limited to visual field locations that overlapped with the sound source and was little affected by attention. The specificity and automatic nature of these learning effects suggest that sounds automatically guide visual plasticity at a relatively early level of processing.
Collapse
Affiliation(s)
- Anton L Beer
- Department of Psychology, Boston University, Boston, MA 02215, USA.
| | | |
Collapse
|
32
|
Santangelo V, Spence C. Is the exogenous orienting of spatial attention truly automatic? Evidence from unimodal and multisensory studies. Conscious Cogn 2008; 17:989-1015. [DOI: 10.1016/j.concog.2008.02.006] [Citation(s) in RCA: 67] [Impact Index Per Article: 4.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2007] [Revised: 02/24/2008] [Accepted: 02/28/2008] [Indexed: 11/25/2022]
|
33
|
Abstract
We assessed the influence of multisensory interactions on the exogenous orienting of spatial attention by comparing the ability of auditory, tactile, and audiotactile exogenous cues to capture visuospatial attention under conditions of no perceptual load versus high perceptual load. In Experiment 1, participants discriminated the elevation of visual targets preceded by either unimodal or bimodal cues under conditions of either a high perceptual load (involving the monitoring of a rapidly presented central stream of visual letters for occasionally presented target digits) or no perceptual load (when the central stream was replaced by a fixation point). All of the cues captured spatial attention in the no-load condition, whereas only the bimodal cues captured visuospatial attention in the high-load condition. In Experiment 2, we ruled out the possibility that the presentation of any changing stimulus at fixation (i.e., a passively monitored stream of letters) would eliminate exogenous orienting, which instead appears to be a consequence of high perceptual load conditions (Experiment 1). These results demonstrate that multisensory cues capture spatial attention more effectively than unimodal cues under conditions of concurrent perceptual load.
Collapse
|
34
|
Kayser C, Petkov CI, Logothetis NK. Visual modulation of neurons in auditory cortex. Cereb Cortex 2008; 18:1560-74. [PMID: 18180245 DOI: 10.1093/cercor/bhm187] [Citation(s) in RCA: 367] [Impact Index Per Article: 22.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/12/2022]
Abstract
Our brain integrates the information provided by the different sensory modalities into a coherent percept, and recent studies suggest that this process is not restricted to higher association areas. Here we evaluate the hypothesis that auditory cortical fields are involved in cross-modal processing by probing individual neurons for audiovisual interactions. We find that visual stimuli modulate auditory processing both at the level of field potentials and single-unit activity and already in primary and secondary auditory fields. These interactions strongly depend on a stimulus' efficacy in driving the neurons but occur independently of stimulus category and for naturalistic as well as artificial stimuli. In addition, interactions are sensitive to the relative timing of audiovisual stimuli and are strongest when visual stimuli lead by 20-80 msec. Exploring the underlying mechanisms, we find that enhancement correlates with the resetting of slow (approximately 10 Hz) oscillations to a phase angle of optimal excitability. These results demonstrate that visual stimuli can modulate the firing of neurons in auditory cortex in a manner that depends on stimulus efficacy and timing. These neurons thus meet the criteria for sensory integration and provide the auditory modality with multisensory contextual information about co-occurring environmental events.
Collapse
Affiliation(s)
- Christoph Kayser
- Max Planck Institute for Biological Cybernetics, Spemannstrasse 38, 72076 Tübingen, Germany.
| | | | | |
Collapse
|