1
Zhao S, Zhou Y, Ma F, Xie J, Feng C, Feng W. The dissociation of semantically congruent and incongruent cross-modal effects on the visual attentional blink. Front Neurosci 2023; 17:1295010. PMID: 38161792; PMCID: PMC10755906; DOI: 10.3389/fnins.2023.1295010.
Abstract
Introduction: Recent studies have found that the sound-induced alleviation of the visual attentional blink, a well-known phenomenon exemplifying the beneficial influence of multisensory integration on time-based attention, was larger when the sound was semantically congruent rather than incongruent with the second visual target (T2). Although this audiovisual congruency effect has been attributed mainly to the semantic conflict carried by the incongruent sound restraining that sound from facilitating T2 processing, it is still unclear whether the integrated semantic information carried by the congruent sound benefits T2 processing.
Methods: To dissociate the congruence-induced benefit and the incongruence-induced reduction in the alleviation of the visual attentional blink at the behavioral and neural levels, the present study combined behavioral measures and event-related potential (ERP) recordings in a visual attentional blink task wherein the T2-accompanying sound, when delivered, could be semantically neutral in addition to congruent or incongruent with respect to T2.
Results: The behavioral data clearly showed that, compared to the neutral sound, the congruent sound improved T2 discrimination during the blink to a greater degree, whereas the incongruent sound improved it to a lesser degree. The T2-locked ERP data revealed that the early occipital cross-modal N195 component (192-228 ms after T2 onset) was uniquely larger in the congruent-sound condition than in the neutral-sound and incongruent-sound conditions, whereas the late parietal cross-modal N440 component (400-500 ms) was prominent only in the incongruent-sound condition.
Discussion: These findings provide strong evidence that the modulating effect of audiovisual semantic congruency on the sound-induced alleviation of the visual attentional blink comprises not only a late incongruence-induced cost but also an early congruence-induced benefit, thereby demonstrating for the first time an unequivocal congruent-sound-induced benefit in alleviating the limitation of time-based visual attention.
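The N195 and N440 effects reported here are defined as ERP activity within fixed post-T2 time windows (192-228 ms and 400-500 ms). As a rough illustration of how such window-based component measures are commonly quantified, the sketch below averages a T2-locked waveform within each window. The arrays, sampling rate, and condition labels are hypothetical placeholders, not data or code from the paper.

```python
import numpy as np

# Hypothetical T2-locked ERP: one averaged waveform per sound condition,
# sampled at 500 Hz from -100 ms to +600 ms relative to T2 onset.
sfreq = 500.0
times = np.arange(-0.100, 0.600, 1.0 / sfreq)          # seconds
rng = np.random.default_rng(0)
erp = {cond: rng.normal(0.0, 1.0, times.size)           # placeholder data (microvolts)
       for cond in ("congruent", "neutral", "incongruent")}

def mean_amplitude(waveform, times, tmin, tmax):
    """Mean amplitude of a waveform within [tmin, tmax] seconds."""
    mask = (times >= tmin) & (times <= tmax)
    return waveform[mask].mean()

# Window-based measures analogous to the N195 and N440 windows above.
for cond, wave in erp.items():
    n195 = mean_amplitude(wave, times, 0.192, 0.228)
    n440 = mean_amplitude(wave, times, 0.400, 0.500)
    print(f"{cond:>11}: N195 window = {n195:+.2f} uV, N440 window = {n440:+.2f} uV")
```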
Affiliation(s)
- Song Zhao: Department of Psychology, School of Education, Soochow University, Suzhou, China
- Yuxin Zhou: Department of Psychology, School of Education, Soochow University, Suzhou, China
- Fangfang Ma: Department of Psychology, School of Education, Soochow University, Suzhou, China
- Jimei Xie: Department of Psychology, School of Education, Soochow University, Suzhou, China
- Chengzhi Feng: Department of Psychology, School of Education, Soochow University, Suzhou, China
- Wenfeng Feng: Department of Psychology, School of Education, Soochow University, Suzhou, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
2
Zhao S, Wang C, Chen M, Zhai M, Leng X, Zhao F, Feng C, Feng W. Cross-modal enhancement of spatially unpredictable visual target discrimination during the attentional blink. Atten Percept Psychophys 2023; 85:2178-2195. PMID: 37312000; DOI: 10.3758/s13414-023-02739-9.
Abstract
The attentional blink can be substantially reduced by delivering a task-irrelevant sound synchronously with the second target (T2) embedded in a rapid serial visual presentation stream, an effect that is further modulated by the semantic congruency between the sound and T2. The present study extended this cross-modal boost during the attentional blink, and its modulation by audiovisual semantic congruency, into the spatial domain by showing that a spatially uninformative, semantically congruent (but not incongruent) sound could even improve the discrimination of a spatially unpredictable T2 during the attentional blink. T2-locked event-related potential (ERP) data revealed that the early cross-modal P195 difference component (184-234 ms) over the occipital scalp contralateral to the T2 location was larger preceding accurate than inaccurate discriminations of semantically congruent, but not incongruent, audiovisual T2s. Interestingly, the N2pc component (194-244 ms) associated with visual-spatial attentional allocation was enlarged for incongruent audiovisual T2s relative to congruent audiovisual and unisensory visual T2s only when they were accurately discriminated. These ERP findings suggest that the spatially extended cross-modal boost during the attentional blink involves an early cross-modal interaction strengthening the perceptual processing of T2, without any sound-induced enhancement of visual-spatial attentional allocation toward T2. In contrast, the absence of an accuracy decrease in response to semantically incongruent audiovisual T2s may originate from the semantic mismatch capturing extra visual-spatial attentional resources toward T2.
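The N2pc mentioned here is conventionally derived as the difference between posterior activity contralateral versus ipsilateral to the target's visual field. The sketch below illustrates that contralateral-minus-ipsilateral logic on made-up single-trial data; the electrode names, array shapes, and the use of the 194-244 ms window are illustrative assumptions, not the paper's pipeline.

```python
import numpy as np

sfreq = 500.0
times = np.arange(-0.100, 0.600, 1.0 / sfreq)
rng = np.random.default_rng(1)

# Hypothetical single-trial data from one left and one right occipital channel
# (e.g., PO7/PO8), shape (n_trials, n_times), in microvolts.
n_trials = 100
po7 = rng.normal(0.0, 2.0, (n_trials, times.size))     # left-hemisphere channel
po8 = rng.normal(0.0, 2.0, (n_trials, times.size))     # right-hemisphere channel
t2_side = rng.choice(["left", "right"], n_trials)        # T2 location on each trial

# Contralateral vs. ipsilateral waveforms relative to the T2 location.
contra = np.where(t2_side[:, None] == "left", po8, po7)
ipsi = np.where(t2_side[:, None] == "left", po7, po8)
n2pc_wave = (contra - ipsi).mean(axis=0)                 # contra-minus-ipsi difference

# Mean amplitude in the 194-244 ms window reported in the abstract.
win = (times >= 0.194) & (times <= 0.244)
print(f"N2pc mean amplitude (194-244 ms): {n2pc_wave[win].mean():+.2f} uV")
```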
Affiliation(s)
- Song Zhao: Department of Psychology, School of Education, Soochow University, Suzhou, 215123, Jiangsu, China
- Chongzhi Wang: Department of Psychology, School of Education, Soochow University, Suzhou, 215123, Jiangsu, China
- Minran Chen: Department of Psychology, School of Education, Soochow University, Suzhou, 215123, Jiangsu, China
- Mengdie Zhai: Department of Psychology, School of Education, Soochow University, Suzhou, 215123, Jiangsu, China
- Xuechen Leng: Department of Psychology, School of Education, Soochow University, Suzhou, 215123, Jiangsu, China
- Fan Zhao: Department of Psychology, School of Education, Soochow University, Suzhou, 215123, Jiangsu, China
- Chengzhi Feng: Department of Psychology, School of Education, Soochow University, Suzhou, 215123, Jiangsu, China
- Wenfeng Feng: Department of Psychology, School of Education, Soochow University, Suzhou, 215123, Jiangsu, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, 215123, Jiangsu, China
3
Effect of Target Semantic Consistency in Different Sequence Positions and Processing Modes on T2 Recognition: Integration and Suppression Based on Cross-Modal Processing. Brain Sci 2023; 13:340. PMID: 36831882; PMCID: PMC9954507; DOI: 10.3390/brainsci13020340.
Abstract
In the rapid serial visual presentation (RSVP) paradigm, sound affects participants' recognition of visual targets. Although many studies have shown that sound improves cross-modal processing, researchers have not yet explored the effects of sound semantic information across different sequence positions and processing modes once sound saliency is removed. In this study, the RSVP paradigm was used to investigate the difference in attention between conditions in which the sound was semantically consistent versus inconsistent with the target (Experiment 1), as well as the difference between top-down (Experiment 2) and bottom-up (Experiment 3) processing of sounds semantically consistent with the second target (T2) at different sequence positions, after removing sound saliency. The results showed that cross-modal processing significantly alleviated the attentional blink (AB). The early or lagged appearance of sounds consistent with T2 did not affect participants' judgments under the exogenous attentional mode, whereas visual target judgments were improved under endogenous attention. The sequence position of sounds consistent with T2 influenced judgments of audiovisual congruency. These results illustrate the effects of sound semantic information at different sequence positions and under different processing modes.
4
Zhao S, Wang C, Feng C, Wang Y, Feng W. The interplay between audiovisual temporal synchrony and semantic congruency in the cross-modal boost of the visual target discrimination during the attentional blink. Hum Brain Mapp 2022; 43:2478-2494. PMID: 35122347; PMCID: PMC9057096; DOI: 10.1002/hbm.25797.
Abstract
The visual attentional blink can be substantially reduced by delivering a task-irrelevant sound synchronously with the second visual target (T2), and this effect is further modulated by the semantic congruency between the sound and T2. However, whether the cross-modal benefit originates from audiovisual interactions or from sound-induced alertness remains controversial, and whether the semantic congruency effect is contingent on audiovisual temporal synchrony needs further investigation. The current study investigated these questions by recording event-related potentials (ERPs) in a visual attentional blink task wherein a sound could either synchronize with T2, precede T2 by 200 ms, follow T2 by 100 ms, or be absent, and could be either semantically congruent or incongruent with T2 when delivered. The behavioral data showed that both the cross-modal boost of T2 discrimination and the further semantic modulation were largest when the sound synchronized with T2. In parallel, the ERP data revealed that both the early occipital cross-modal P195 component (192-228 ms after T2 onset) and the late parietal cross-modal N440 component (424-448 ms) were prominent only when the sound synchronized with T2, with the former elicited solely when the sound was semantically congruent and the latter occurring only when the sound was incongruent. These findings demonstrate not only that the cross-modal boost of T2 discrimination during the attentional blink stems from early audiovisual interactions and that the semantic congruency effect depends on audiovisual temporal synchrony, but also that the semantic modulation can unfold at an early stage of visual discrimination processing.
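Cross-modal "difference components" such as the P195 and N440 described here are, in the ERP literature on audiovisual interactions, often isolated with an additive-model contrast that compares the audiovisual response against the sum of the unisensory responses, AV - (A + V). The abstract does not state the exact contrast used in this study, so the sketch below is only a generic illustration of that approach, with all waveforms invented.

```python
import numpy as np

sfreq = 500.0
times = np.arange(-0.100, 0.600, 1.0 / sfreq)
rng = np.random.default_rng(2)

# Hypothetical condition-average ERPs at one occipital electrode (microvolts):
# audiovisual T2 (target plus synchronous sound), visual-only T2, and sound-only.
erp_av = rng.normal(0.0, 1.0, times.size)
erp_v = rng.normal(0.0, 1.0, times.size)
erp_a = rng.normal(0.0, 1.0, times.size)

# Additive-model difference wave: deviations from zero indicate audiovisual
# interactions beyond the sum of the unisensory responses.
interaction = erp_av - (erp_a + erp_v)

def window_mean(wave, tmin, tmax):
    mask = (times >= tmin) & (times <= tmax)
    return wave[mask].mean()

# Windows matching the P195 (192-228 ms) and N440 (424-448 ms) effects above.
print(f"192-228 ms interaction: {window_mean(interaction, 0.192, 0.228):+.2f} uV")
print(f"424-448 ms interaction: {window_mean(interaction, 0.424, 0.448):+.2f} uV")
```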
Affiliation(s)
- Song Zhao: Department of Psychology, School of Education, Soochow University, Suzhou, China; Department of English, School of Foreign Languages, Soochow University, Suzhou, China
- Chongzhi Wang: Department of Psychology, School of Education, Soochow University, Suzhou, China
- Chengzhi Feng: Department of Psychology, School of Education, Soochow University, Suzhou, China
- Yijun Wang: Institute of Semiconductors, Chinese Academy of Sciences, Beijing, China
- Wenfeng Feng: Department of Psychology, School of Education, Soochow University, Suzhou, China; Research Center for Psychology and Behavioral Sciences, Soochow University, Suzhou, China
5
Abstract
In the rapid serial visual presentation (RSVP) paradigm, response accuracy for a target decreases when it appears within a short time window (200-500 ms) after the preceding target. This phenomenon is termed the attentional blink (AB). Although mechanisms of cross-modal processing that reduce the AB have been documented, researchers have not explored the differences across modal attentional conditions. In the present study, we used the RSVP paradigm to investigate the effect of auditory-driven perceptual enhancement of the visual target on the AB under modality-specific selective attention (Experiment 1) and bimodal divided attention (Experiment 2). The results showed that cross-modal attentional enhancement was not moderated by stimulus salience. Moreover, accuracy was higher when the attended sound appeared simultaneously with the target. These results indicate that audiovisual enhancement reduced the AB and that the stronger attentional enhancement in the bimodal divided-attention condition led to the disappearance of the AB.
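Behaviorally, the AB described here is usually quantified as the drop in T2 accuracy (conditional on a correct T1 report) at short T1-T2 lags relative to long lags. The sketch below computes that conditional accuracy and an AB magnitude from made-up trial records; the lag values and field layout are illustrative assumptions, not this study's design.

```python
from collections import defaultdict

# Hypothetical trial records: (T1-T2 lag in ms, T1 correct?, T2 correct?).
trials = [
    (300, True, False), (300, True, True), (300, True, False), (300, False, True),
    (700, True, True), (700, True, True), (700, True, False), (700, True, True),
]

# T2|T1 accuracy: proportion of correct T2 reports among trials with a correct T1.
hits = defaultdict(int)
counts = defaultdict(int)
for lag, t1_correct, t2_correct in trials:
    if t1_correct:                       # condition on a correct T1 report
        counts[lag] += 1
        hits[lag] += int(t2_correct)

acc = {lag: hits[lag] / counts[lag] for lag in counts}
ab_magnitude = acc[700] - acc[300]       # long-lag minus short-lag T2|T1 accuracy
print(f"T2|T1 accuracy by lag: {acc}")
print(f"Attentional blink magnitude: {ab_magnitude:.2f}")
```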
6
Zhao S, Feng C, Liao Y, Huang X, Feng W. Attentional blink suppresses both stimulus-driven and representation-driven cross-modal spread of attention. Psychophysiology 2021; 58:e13761. PMID: 33400294; DOI: 10.1111/psyp.13761.
Abstract
Previous studies have shown that visual attention effects can spread to the task-irrelevant auditory modality automatically, through either a stimulus-driven binding process or a representation-driven priming process. Using an attentional blink paradigm, the present study investigated whether long-latency stimulus-driven and representation-driven cross-modal spread of attention would be inhibited or facilitated when the attentional resources operating at the post-perceptual stage of processing are inadequate, while ensuring that all visual stimuli were spatially attended and that the representations of the visual target object categories were activated, conditions previously thought to be the only endogenous prerequisites for triggering cross-modal spread of attention. The results demonstrated that both types of attentional spreading were completely suppressed during the attentional blink interval but were highly prominent outside it, with the stimulus-driven process being independent of, and the representation-driven process dependent on, audiovisual semantic congruency. These findings provide the first evidence that the occurrence of both stimulus-driven and representation-driven spread of attention is contingent on the amount of post-perceptual attentional resources available for the late consolidation processing of visual stimuli, and that the early detection of visual stimuli and the top-down activation of visual representations are not the sole endogenous prerequisites for triggering either type of cross-modal attentional spreading.
Affiliation(s)
- Song Zhao: Department of Psychology, School of Education, Soochow University, Suzhou, China
- Chengzhi Feng: Department of Psychology, School of Education, Soochow University, Suzhou, China
- Yu Liao: Department of Psychology, School of Education, Soochow University, Suzhou, China
- Xinyin Huang: Department of Psychology, School of Education, Soochow University, Suzhou, China
- Wenfeng Feng: Department of Psychology, School of Education, Soochow University, Suzhou, China
7
Zhao S, Feng C, Huang X, Wang Y, Feng W. Neural Basis of Semantically Dependent and Independent Cross-Modal Boosts on the Attentional Blink. Cereb Cortex 2020; 31:2291-2304. DOI: 10.1093/cercor/bhaa362.
Abstract
The present study recorded event-related potentials (ERPs) in a visual object-recognition task under the attentional blink paradigm to explore the temporal dynamics of the cross-modal boost on the attentional blink and whether this auditory benefit would be modulated by the semantic congruency between T2 and the simultaneous sound. Behaviorally, the present study showed that not only a semantically congruent but also a semantically incongruent sound improved T2 discrimination during the attentional blink interval, although the enhancement was larger for the congruent sound. The ERP results revealed that the behavioral improvements induced by both the semantically congruent and incongruent sounds were closely associated with an early cross-modal interaction on the occipital N195 (192–228 ms). In contrast, the lower T2 accuracy for the incongruent relative to the congruent condition was accompanied by a larger late-occurring centro-parietal N440 (424–448 ms). These findings suggest that the cross-modal boost on the attentional blink is hierarchical: the task-irrelevant but simultaneous sound, irrespective of its semantic relevance, first enables T2 to escape the attentional blink by cross-modally strengthening the early stage of visual object-recognition processing, whereas the semantic conflict of the sound begins to interfere with visual awareness only at a later stage, when the representation of the visual object is extracted.
Affiliation(s)
- Song Zhao: Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Chengzhi Feng: Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Xinyin Huang: Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
- Yijun Wang: Institute of Semiconductors, Chinese Academy of Sciences, Beijing 100083, China
- Wenfeng Feng: Department of Psychology, School of Education, Soochow University, Suzhou, Jiangsu 215123, China
8
Galojan J, Kranczioch C. No Evidence for an Awareness-Dependent Emotional Modulation of the Attentional Blink. Front Psychol 2019; 10:2422. PMID: 31749738; PMCID: PMC6842977; DOI: 10.3389/fpsyg.2019.02422.
Abstract
Pictures of faces with emotional expressions presented before a temporal attention task have been reported to affect temporal attention in an awareness-dependent manner: awareness of a fearful face was linked to an increased deficit in the temporal attention task, while preventing the face from reaching awareness was linked to a decreased deficit, both relative to neutral faces. Here we report the results of two temporal attention experiments that aimed to extend and conceptually replicate this basic finding. The temporal attention task was preceded by an unmasked or a masked fearful face on a trial-by-trial basis. In both experiments, the finding of an awareness-dependent emotional modulation of temporal attention by fearful faces could not be replicated, even when data were pooled across experiments. Pooling the experiments indicated, however, that, independent of awareness level, fearful faces can be associated with slightly worse temporal attention performance than neutral faces, and it suggested a lag-specific practice effect, that is, a reduced deficit in temporal attention in the second half of the experiment.
9
Beniczky S, Rosenzweig I, Scherg M, Jordanov T, Lanfer B, Lantz G, Larsson PG. Ictal EEG source imaging in presurgical evaluation: High agreement between analysis methods. Seizure 2016; 43:1-5. PMID: 27764709; PMCID: PMC5176190; DOI: 10.1016/j.seizure.2016.09.017.
Abstract
Highlights
- There was good agreement between different methods of ictal EEG source imaging.
- Ictal source imaging achieved an accuracy of 73% (86% for operated patients).
- Agreement between all methods did not necessarily imply accurate localization.
Purpose: To determine the agreement between five different methods of ictal EEG source imaging and to assess their accuracy in the presurgical evaluation of patients with focal epilepsy. It was hypothesized that high agreement between methods would be associated with higher localization accuracy.
Methods: EEGs were recorded with a 64-electrode array. Thirty-eight seizures from 22 patients were analyzed using five different methods: phase mapping, dipole fitting, CLARA, cortical-CLARA, and minimum norm. Localization accuracy was determined at the sub-lobar level. The reference standard was the final decision of the multidisciplinary epilepsy surgery team and, for the operated patients, the outcome one year after surgery.
Results: Agreement between all methods was obtained in 13 patients (59%), and agreement between all but one method in an additional six patients (27%). There was a trend toward minimum norm being less accurate than phase mapping, but none of the comparisons reached significance. Source imaging in cases with agreement between all methods was not more accurate than in the other cases. Ictal source imaging achieved an accuracy of 73% (86% for operated patients).
Conclusion: There was good agreement between the different methods of ictal source imaging. However, good inter-method agreement did not necessarily imply accurate source localization, since all methods face the limitations of the inverse solution.
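At the sub-lobar level, both "agreement between methods" and "accuracy" reduce to comparing categorical localizations: agreement asks whether all methods point to the same sub-lobar region, and accuracy asks whether a method's localization matches the reference standard. The sketch below makes that bookkeeping explicit with invented patient records and region labels; it is not the paper's analysis code, and the data are hypothetical.

```python
# Hypothetical per-patient sub-lobar localizations from the five methods,
# plus the reference standard (team decision / postsurgical outcome).
methods = ["phase_mapping", "dipole_fit", "CLARA", "cortical_CLARA", "minimum_norm"]
patients = {
    "P01": {"ref": "L mesial temporal",
            "loc": ["L mesial temporal"] * 4 + ["L lateral temporal"]},
    "P02": {"ref": "R lateral frontal",
            "loc": ["R lateral frontal"] * 5},
    "P03": {"ref": "L lateral temporal",
            "loc": ["L mesial temporal"] * 5},   # full agreement, yet inaccurate
}

full_agreement = 0
correct = {m: 0 for m in methods}
for pid, rec in patients.items():
    localizations = rec["loc"]
    full_agreement += int(len(set(localizations)) == 1)        # all methods agree
    for method, loc in zip(methods, localizations):
        correct[method] += int(loc == rec["ref"])               # matches reference standard

n = len(patients)
print(f"All-method agreement: {full_agreement}/{n} patients")
for method in methods:
    print(f"{method:>15} accuracy: {correct[method]}/{n}")
```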
Affiliation(s)
- Sándor Beniczky: Department of Clinical Neurophysiology, Danish Epilepsy Centre, Dianalund, Denmark; Department of Clinical Neurophysiology, Aarhus University Hospital, Aarhus, Denmark
- Ivana Rosenzweig: Department of Clinical Neurophysiology, Danish Epilepsy Centre, Dianalund, Denmark; Sleep and Brain Plasticity Centre, Department of Neuroimaging, IOPPN, King's College and Imperial College, London, UK
- Göran Lantz: Clinical Neurophysiology Unit, Department of Clinical Sciences, Lund University, Lund, Sweden; Electrical Geodesics, Inc., Eugene, OR, USA
- Pål Gunnar Larsson: Clinical Neurophysiology Section, Department of Neurosurgery, Oslo University Hospital, Norway