1
Wang X, Tang X, Wang A, Zhang M. Non-spatial inhibition of return attenuates audiovisual integration owing to modality disparities. Atten Percept Psychophys 2024; 86:2315-2328. [PMID: 38127253] [DOI: 10.3758/s13414-023-02825-y]
Abstract
Although previous studies have investigated the relationship between inhibition of return (IOR) and multisensory integration, the influence of non-spatial IOR has not been explored. The present study investigated the influence of non-spatial IOR on audiovisual integration using a "prime-neutral cue-target" paradigm. Experiment 1 manipulated prime validity and target modality with centrally positioned targets, revealing significant non-spatial IOR effects in the visual, auditory, and audiovisual modalities. Analysis of relative multisensory response enhancement (rMRE) indicated substantial audiovisual integration in both the valid and invalid target conditions, with weaker enhancement for valid targets than for invalid targets. Experiment 2 positioned the targets above and below fixation to rule out repetition blindness (RB) and replicated the results of Experiment 1. Notably, in both experiments the correlation between modality differences and rMRE for valid targets indicated that differences in signal strength between the visual and auditory modalities contributed to the reduction in audiovisual integration, whereas the absence of such a correlation for invalid targets suggests that attention may play a key role in this process. The present study highlights how non-spatial IOR reduces audiovisual integration and sheds light on the complex interaction between attention and multisensory integration.
Affiliation(s)
- Xiaoxue Wang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China
- Xiaoyu Tang
- School of Psychology, Liaoning Normal University, Dalian, China
- Aijun Wang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China
- Ming Zhang
- Department of Psychology, Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, People's Republic of China
- Department of Psychology, Suzhou University of Science and Technology, Suzhou, China
- Cognitive Neuroscience Laboratory, Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
2
Jiang Y, Qiao R, Shi Y, Tang Y, Hou Z, Tian Y. The effects of attention in auditory-visual integration revealed by time-varying networks. Front Neurosci 2023; 17:1235480. [PMID: 37600005] [PMCID: PMC10434229] [DOI: 10.3389/fnins.2023.1235480]
Abstract
Attention and audiovisual integration are crucial topics in brain information processing. Many previous studies have sought to determine the relationship between them through specific experiments but have failed to reach a unified conclusion. These studies explored the relationship within the frameworks of early, late, and parallel integration, whereas network analysis has been employed only sparingly. In this study, we used time-varying network analysis, which offers a comprehensive and dynamic view of cognitive processing, to explore the relationship between attention and auditory-visual integration, combining high-spatial-resolution functional magnetic resonance imaging (fMRI) with high-temporal-resolution electroencephalography (EEG). First, a generalized linear model (GLM) was used to identify task-related fMRI activations, which were selected as regions of interest (ROIs) to serve as nodes of the time-varying network. Then, the electrical activity of the auditory-visual cortex was estimated via the normalized minimum norm estimation (MNE) source localization method. Finally, the time-varying network was constructed using the adaptive directed transfer function (ADTF). Task-related fMRI activations were mainly observed in the bilateral temporoparietal junction (TPJ), superior temporal gyrus (STG), and primary visual and auditory areas, and the time-varying network analysis revealed that V1/A1↔STG connections emerged before TPJ↔STG connections. These results support the view that auditory-visual integration occurs before attention, consistent with the early integration framework.
Affiliation(s)
- Yuhao Jiang
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Central Nervous System Drug Key Laboratory of Sichuan Province, Luzhou, China
- Rui Qiao
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Yupan Shi
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Yi Tang
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Zhengjun Hou
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
- Yin Tian
- Institute for Advanced Sciences, Chongqing University of Posts and Telecommunications, Chongqing, China
- Guangyang Bay Laboratory, Chongqing Institute for Brain and Intelligence, Chongqing, China
3
Abstract
In the rapid serial visual presentation (RSVP) paradigm, response accuracy for a target decreases when it appears within a short time window (200-500 ms) after a previous target, a phenomenon termed the attentional blink (AB). Although mechanisms of cross-modal processing that reduce the AB have been documented, differences across modal attentional conditions have not been explored. In the present study, we used the RSVP paradigm to investigate the effect of auditory-driven enhancement of visual target perception on the AB under modality-specific selective attention (Experiment 1) and bimodal divided attention (Experiment 2). The results showed that cross-modal attentional enhancement was not moderated by stimulus salience and that accuracy was higher when the attended sound appeared simultaneously with the target. These results indicate that audiovisual enhancement reduced the AB and that the stronger attentional enhancement in the bimodal divided-attention condition led to the disappearance of the AB.
4
Abstract
Previous studies have found that processing of a second stimulus is slower when its modality differs from that of the first stimulus, termed the modality shift effect. People also tend to respond more slowly to the second stimulus when the two stimuli are similar along a semantic dimension, termed the nonspatial repetition inhibition effect. This study explored how the modality shift effect modulates nonspatial repetition inhibition and whether this modulation is influenced by the temporal interval. A cue-target paradigm was adopted in which modality priming and identity priming were manipulated at three interstimulus intervals. Response times under the modality shift condition were slower than those under the modality repeat condition. In modality-shift trials, responses to congruent cue-target combinations were slower than to incongruent ones, indicating crossmodal nonspatial repetition inhibition; this effect decreased with increasing interstimulus interval. These results provide evidence that the additional intervening event proposed in previous studies is not necessary for crossmodal nonspatial repetition inhibition to occur.
Collapse
Affiliation(s)
- Xiaogang Wu
- Suzhou University of Science and Technology, China; Soochow University, China
5
Visual aperiodic temporal prediction increases perceptual sensitivity and reduces response latencies. Acta Psychol (Amst) 2020; 209:103129. [PMID: 32619784] [DOI: 10.1016/j.actpsy.2020.103129]
Abstract
As a predictive organ, the brain can anticipate upcoming events to guide perception and action in the course of adaptive behavior. Classical models of oscillatory entrainment explain the behavioral facilitation that follows periodic stimulation but cannot explain facilitation by aperiodic stimulation. In the present study, by comparing participants' performance with periodic predictable (PP), aperiodic predictable (AP), and aperiodic unpredictable (AU) stimulus streams, we investigated the effect of an aperiodic predictable stream on perceptual sensitivity and response latencies in the visual modality. There was no difference between the PP and AP conditions in sensitivity (d') or reaction times (RTs), both of which differed significantly from the AU condition. Moreover, a significant correlation between d' and RTs was observed when predictability was present. These results indicate that aperiodic predictable stimulus streams increase perceptual sensitivity and reduce response latencies in a top-down manner: individuals proactively and flexibly predict upcoming events based on the temporal structure of visual stimuli in the service of adaptive behavior.