1. Are auditory cues special? Evidence from cross-modal distractor-induced blindness. Atten Percept Psychophys 2022;85:889-904. PMID: 35902451; PMCID: PMC10066119; DOI: 10.3758/s13414-022-02540-0.
Abstract
A target that shares features with preceding distractor stimuli is less likely to be detected due to a distractor-driven activation of a negative attentional set. This transient impairment in perceiving the target (distractor-induced blindness/deafness) can be found within vision and audition. Recently, the phenomenon was observed in a cross-modal setting involving an auditory target and additional task-relevant visual information (cross-modal distractor-induced deafness). In the current study, consisting of three behavioral experiments, a visual target, indicated by an auditory cue, had to be detected despite the presence of visual distractors. Multiple distractors consistently led to reduced target detection if cue and target appeared in close temporal proximity, confirming cross-modal distractor-induced blindness. However, the effect on target detection was reduced compared to the effect of cross-modal distractor-induced deafness previously observed for reversed modalities. The physical features defining cue and target could not account for the diminished distractor effect in the current cross-modal task. Instead, this finding may be attributed to the auditory cue acting as an especially efficient release signal of the distractor-induced inhibition. Additionally, a multisensory enhancement of visual target detection by the concurrent auditory signal might have contributed to the reduced distractor effect.

2. Cheng Y, Jackson TB, MacNamara A. Modulation of threat extinction by working memory load: An event-related potential study. Behav Res Ther 2022;150:104031. PMID: 35032699; PMCID: PMC8844280; DOI: 10.1016/j.brat.2022.104031.
Abstract
Distraction is typically discouraged during exposure therapy for anxiety, because it is thought to interfere with extinction learning by diverting attention away from anxiety-provoking stimuli. Working memory load is one form of distraction that might interfere with extinction learning. Alternatively, working memory load might reduce threat responding and benefit extinction learning by engaging prefrontal brain regions that have a reciprocal relationship with brain circuits involved in threat detection and processing. Prior work examining the effect of working memory load on threat extinction has been limited and has yielded mixed results. Here, we used the late positive potential (LPP), an event-related potential that is larger for threatening than for non-threatening stimuli, to assess the effect of working memory load on threat extinction. After acquisition, 38 participants performed three blocks of an extinction task interspersed with low and high working memory load trials. Results showed that, overall, the LPP was reduced under high compared to low working memory load, and that working memory load slowed extinction learning. These results provide empirical evidence in support of limiting distraction during exposure therapy in order to optimize extinction learning efficiency.
Affiliation(s)
- Annmarie MacNamara, Department of Psychological and Brain Sciences, Texas A&M University, College Station, TX, USA

3. Kern L, Niedeggen M. ERP signatures of auditory awareness in cross-modal distractor-induced deafness. Conscious Cogn 2021;96:103241. PMID: 34823076; DOI: 10.1016/j.concog.2021.103241.
Abstract
Previous research has shown that dual-task processes such as the attentional blink do not always transfer from unimodal to cross-modal settings. This study investigated whether such a transfer holds for a distractor-induced impairment of target detection established in vision (distractor-induced blindness, DIB) and recently observed in the auditory modality (distractor-induced deafness, DID). A cross-modal DID effect was confirmed: the detection of an auditory target indicated by a visual cue was impaired if multiple auditory distractors preceded the target. Event-related potentials (ERPs) were used to identify psychophysiological correlates of target detection. A frontal negativity at about 200 ms, followed by a sustained, widespread negativity, was associated with auditory target awareness. In contrast to unimodal findings, the P3 amplitude was not enhanced for hits. The results support the notion that early frontal attentional processes are linked to auditory awareness, whereas the P3 does not seem to be a reliable indicator of target access.
Affiliation(s)
- Lea Kern, Freie Universität Berlin, Department of Education and Psychology, Division General Psychology and Neuropsychology, Habelschwerdter Allee 45, 14195 Berlin, Germany
- Michael Niedeggen, Freie Universität Berlin, Department of Education and Psychology, Division General Psychology and Neuropsychology, Habelschwerdter Allee 45, 14195 Berlin, Germany

4. Rau PLP, Zheng J, Wang L, Zhao J, Wang D. Haptic and Auditory-Haptic Attentional Blink in Spatial and Object-Based Tasks. Multisens Res 2020;33:295-312. PMID: 31883506; DOI: 10.1163/22134808-20191483.
Abstract
Dual-task performance depends on the modalities involved (e.g., vision, audition, haptics), the task types (spatial or object-based), and the order in which different task types are organized. Previous studies on the haptic, and especially the auditory-haptic, attentional blink (AB) are scarce, and the effects of task type and task order have not been fully explored. In this study, 96 participants, divided into four groups of task type combinations, identified an auditory or haptic Target 1 (T1) and a haptic Target 2 (T2) in rapid series of sounds and forces. We observed a haptic AB (i.e., the accuracy of identifying T2 increased with increasing stimulus onset asynchrony between T1 and T2) in the spatial, object-based, and object-spatial tasks, but not in the spatial-object task. Changing the modality of an object-based T1 from haptics to audition eliminated the AB, but a similar haptic-to-auditory change of the modality of a spatial T1 had no effect on the AB (if one exists). Our findings fill a gap in the literature regarding the auditory-haptic AB, and substantiate the importance of modalities, of task types and their order, and of the interaction between them. We explain these findings in terms of how the cerebral cortex is organized for processing spatial and object-based information in different modalities.
Affiliation(s)
- Jian Zheng, Department of Industrial Engineering, Tsinghua University, Beijing, China
- Lijun Wang, State Key Lab of Virtual Reality Technology and Systems, Beihang University, Beijing, China
- Jingyu Zhao, Department of Industrial Engineering, Tsinghua University, Beijing, China
- Dangxiao Wang, State Key Lab of Virtual Reality Technology and Systems, Beihang University, Beijing, China; Beijing Advanced Innovation Center for Biomedical Engineering, Beihang University, Beijing, China; Peng Cheng Laboratory (PCL), Shenzhen, Guangdong Province, China

5. Wahn B, König P. Can Limitations of Visuospatial Attention Be Circumvented? A Review. Front Psychol 2017;8:1896. PMID: 29163278; PMCID: PMC5665179; DOI: 10.3389/fpsyg.2017.01896.
Abstract
In daily life, humans are bombarded with visual input. Yet their attentional capacities for processing this input are severely limited. Several studies have investigated factors that influence these attentional limitations and have identified methods to circumvent them. Here, we provide a review of these findings. We first review studies that have demonstrated limitations of visuospatial attention and investigated physiological correlates of these limitations. We then review studies in multisensory research that have explored whether limitations in visuospatial attention can be circumvented by distributing information processing across several sensory modalities. Finally, we discuss research from the field of joint action that has investigated how limitations of visuospatial attention can be circumvented by distributing task demands across people and providing them with multisensory input. We conclude that limitations of visuospatial attention can be circumvented by distributing attentional processing across sensory modalities when tasks involve spatial as well as object-based attentional processing. However, if only spatial attentional processing is required, limitations of visuospatial attention cannot be circumvented by distributing attentional processing. These findings from multisensory research are applicable to visuospatial tasks that are performed jointly by two individuals. That is, in a joint visuospatial task requiring object-based as well as spatial attentional processing, joint performance is facilitated when task demands are distributed across sensory modalities. Future research could further investigate how applying findings from multisensory research to joint action research may facilitate joint performance. Generally, these findings are applicable to real-world scenarios, such as aviation or car driving, in which limitations of visuospatial attention need to be circumvented.
Affiliation(s)
- Basil Wahn, Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany
- Peter König, Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany; Institut für Neurophysiologie und Pathophysiologie, Universitätsklinikum Hamburg-Eppendorf, Hamburg, Germany

6. Wahn B, König P. Is Attentional Resource Allocation Across Sensory Modalities Task-Dependent? Adv Cogn Psychol 2017;13:83-96. PMID: 28450975; PMCID: PMC5405449; DOI: 10.5709/acp-0209-2.
Abstract
Human information processing is limited by attentional resources. That is, via attentional mechanisms, humans select a limited amount of sensory input to process while other sensory input is neglected. In multisensory research, a matter of ongoing debate is whether there are distinct pools of attentional resources for each sensory modality or whether attentional resources are shared across sensory modalities. Recent studies have suggested that attentional resource allocation across sensory modalities is in part task-dependent. That is, the recruitment of attentional resources across the sensory modalities depends on whether processing involves object-based attention (e.g., the discrimination of stimulus attributes) or spatial attention (e.g., the localization of stimuli). In the present paper, we review findings in multisensory research related to this view. For the visual and auditory sensory modalities, findings suggest that distinct resources are recruited when humans perform object-based attention tasks, whereas for the visual and tactile sensory modalities, partially shared resources are recruited. If object-based attention tasks are time-critical, shared resources are recruited across the sensory modalities. When humans perform an object-based attention task in combination with a spatial attention task, partly shared resources are recruited across the sensory modalities as well. Conversely, for spatial attention tasks, attentional processing consistently involves shared attentional resources across the sensory modalities. Generally, findings suggest that the attentional system flexibly allocates attentional resources depending on task demands. We propose that such flexibility reflects a large-scale optimization strategy that minimizes the brain's costly resource expenditures and simultaneously maximizes the capability to process currently relevant information.
Affiliation(s)
- Basil Wahn, Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany
- Peter König, Institut für Neurophysiologie und Pathophysiologie, Universitätsklinikum Hamburg-Eppendorf, Hamburg, Germany

7. Wahn B, Schwandt J, Krüger M, Crafa D, Nunnendorf V, König P. Multisensory teamwork: using a tactile or an auditory display to exchange gaze information improves performance in joint visual search. Ergonomics 2016;59:781-795. PMID: 26587687; DOI: 10.1080/00140139.2015.1099742.
Abstract
In joint tasks, adjusting to the actions of others is critical for success. For joint visual search tasks, research has shown that when search partners visually receive information about each other's gaze, they use this information to adjust to each other's actions, resulting in faster search performance. The present study used a visual, a tactile, and an auditory display, respectively, to provide search partners with information about each other's gaze. Results showed that search partners performed faster when the gaze information was received via a tactile or auditory display in comparison to receiving it via a visual display or receiving no gaze information. Findings demonstrate the effectiveness of tactile and auditory displays for receiving task-relevant information in joint tasks and are applicable to circumstances in which little or no visual information is available or in which the visual modality is already taxed with a demanding task such as air-traffic control. Practitioner Summary: The present study demonstrates that tactile and auditory displays are effective for receiving information about the actions of others in joint tasks. Findings are applicable to circumstances in which little or no visual information is available or in which the visual modality is already taxed with a demanding task.
Affiliation(s)
- Basil Wahn, Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany
- Jessika Schwandt, Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany
- Matti Krüger, Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany
- Daina Crafa, Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany; Integrated Program in Neuroscience, Douglas Mental Health Institute, McGill University, Montreal, Canada
- Vanessa Nunnendorf, Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany
- Peter König, Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany; Institut für Neurophysiologie und Pathophysiologie, Universitätsklinikum Hamburg-Eppendorf, Hamburg, Germany

8. Wahn B, König P. Attentional Resource Allocation in Visuotactile Processing Depends on the Task, But Optimal Visuotactile Integration Does Not Depend on Attentional Resources. Front Integr Neurosci 2016;10:13. PMID: 27013994; PMCID: PMC4781873; DOI: 10.3389/fnint.2016.00013.
Abstract
Humans constantly process and integrate sensory input from multiple sensory modalities. However, the amount of input that can be processed is constrained by limited attentional resources. A matter of ongoing debate is whether attentional resources are shared across sensory modalities, and whether multisensory integration is dependent on attentional resources. Previous research has suggested that the distribution of attentional resources across sensory modalities depends on the type of task. Here, we tested a novel task combination in a dual task paradigm: participants performed a self-terminated visual search task and a localization task either in separate sensory modalities (i.e., haptics and vision) or both within the visual modality. The tasks interfered considerably. However, participants performed the visual search task faster when the localization task was performed in the tactile modality in comparison to performing both tasks within the visual modality. This finding indicates that tasks performed in separate sensory modalities rely in part on distinct attentional resources. Nevertheless, participants integrated visuotactile information optimally in the localization task even when attentional resources were diverted to the visual search task. Overall, our findings suggest that visual search and tactile localization partly rely on distinct attentional resources, and that optimal visuotactile integration is not dependent on attentional resources.
Affiliation(s)
- Basil Wahn, Neurobiopsychology, Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany
- Peter König, Neurobiopsychology, Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany; Department of Neurophysiology and Pathophysiology, Center of Experimental Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany

9. Wahn B, König P. Audition and vision share spatial attentional resources, yet attentional load does not disrupt audiovisual integration. Front Psychol 2015;6:1084. PMID: 26284008; PMCID: PMC4518141; DOI: 10.3389/fpsyg.2015.01084.
Abstract
Humans continuously receive and integrate information from several sensory modalities. However, attentional resources limit the amount of information that can be processed. It is not yet clear how attentional resources and multisensory processing are interrelated. Specifically, the following questions arise: (1) Are there distinct spatial attentional resources for each sensory modality? and (2) Does attentional load affect multisensory integration? We investigated these questions using a dual task paradigm: participants performed two spatial tasks (a multiple object tracking task and a localization task), either separately (single task condition) or simultaneously (dual task condition). In the multiple object tracking task, participants visually tracked a small subset of several randomly moving objects. In the localization task, participants received either visual, auditory, or redundant visual and auditory location cues. In the dual task condition, we found a substantial decrease in participants' performance relative to the results of the single task condition. Importantly, participants performed equally well in the dual task condition regardless of the location cues' modality. This result suggests that having spatial information coming from different modalities does not facilitate performance, thereby indicating shared spatial attentional resources for the auditory and visual modality. Furthermore, we found that participants integrated redundant multisensory information similarly even when they experienced additional attentional load in the dual task condition. Overall, findings suggest that (1) visual and auditory spatial attentional resources are shared and that (2) audiovisual integration of spatial information occurs at a pre-attentive processing stage.
Affiliation(s)
- Basil Wahn, Neurobiopsychology, Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany
- Peter König, Neurobiopsychology, Institute of Cognitive Science, Universität Osnabrück, Osnabrück, Germany; Department of Neurophysiology and Pathophysiology, Center of Experimental Medicine, University Medical Center Hamburg-Eppendorf, Hamburg, Germany

10. Finoia P, Mitchell DJ, Hauk O, Beste C, Pizzella V, Duncan J. Concurrent brain responses to separate auditory and visual targets. J Neurophysiol 2015;114:1239-47. PMID: 26084914; PMCID: PMC4540000; DOI: 10.1152/jn.01050.2014.
Abstract
In the attentional blink, a target event (T1) strongly interferes with perception of a second target (T2) presented within a few hundred milliseconds. Concurrently, the brain's electromagnetic response to the second target is suppressed, especially a late negative-positive EEG complex including the traditional P3 wave. An influential theory proposes that conscious perception requires access to a distributed, frontoparietal global workspace, explaining the attentional blink by strong mutual inhibition between concurrent workspace representations. Often, however, the attentional blink is reduced or eliminated for targets in different sensory modalities, suggesting a limit to such global inhibition. Using functional magnetic resonance imaging, we confirm that visual and auditory targets produce similar, distributed patterns of frontoparietal activity. In an attentional blink EEG/MEG design, however, an auditory T1 and visual T2 are identified without mutual interference, with largely preserved electromagnetic responses to T2. The results suggest parallel brain responses to target events in different sensory modalities.
Affiliation(s)
- Paola Finoia, MRC Cognition and Brain Sciences Unit, Cambridge, United Kingdom
- Olaf Hauk, MRC Cognition and Brain Sciences Unit, Cambridge, United Kingdom
- Christian Beste, Cognitive Neurophysiology, Department of Child and Adolescent Psychiatry, Universitätsklinikum Carl Gustav Carus an der Technischen Universität Dresden, Dresden, Germany
- Vittorio Pizzella, Institute for Advanced Biomedical Technologies-I.T.A.B., University of Chieti and Pescara "G. D'Annunzio," Chieti, Italy
- John Duncan, MRC Cognition and Brain Sciences Unit, Cambridge, United Kingdom; Department of Experimental Psychology, University of Oxford, Oxford, United Kingdom

11.
Abstract
High perceptual load in a task is known to reduce the visual perception of unattended items (e.g., Lavie, Beck, & Konstantinou, 2014). However, it remains an open question whether perceptual load in one modality (e.g., vision) can affect the detection of stimuli in another modality (e.g., hearing). We report four experiments that establish that high visual perceptual load leads to reduced detection sensitivity in hearing. Participants were requested to detect a tone that was presented during performance of a visual search task of either low or high perceptual load (varied through item similarity). The findings revealed that auditory detection sensitivity was consistently reduced with higher load, and that this effect persisted even when the auditory detection response was made first (before the search response) and when the auditory stimulus was highly expected (50 % present). These findings demonstrate a phenomenon of load-induced deafness and provide evidence for shared attentional capacity across vision and hearing.
Affiliation(s)
- Dana Raveh, Institute of Cognitive Neuroscience, University College London, London, United Kingdom

12. Van der Burg E, Nieuwenstein MR, Theeuwes J, Olivers CNL. Irrelevant auditory and visual events induce a visual attentional blink. Exp Psychol 2012;60:80-9. PMID: 23047915; DOI: 10.1027/1618-3169/a000174.
Abstract
In the present study we investigated whether a task-irrelevant distractor can induce a visual attentional blink pattern. Participants were asked to detect only a visual target letter (A, B, or C) and to ignore the preceding auditory, visual, or audiovisual distractor. An attentional blink was observed regardless of the distractor modality. The magnitude of the attentional blink was greater when the target was preceded by a visual or an audiovisual distractor than when the target letter was preceded by an auditory distractor. The presence of a distractor-induced attentional blink regardless of the distractor modality suggests that the attentional blink phenomenon is at least partly due to an amodal processing limitation.
Affiliation(s)
- Erik Van der Burg, Department of Cognitive Psychology, VU University Amsterdam, The Netherlands

14. Martens S, Kandula M, Duncan J. Restricted attentional capacity within but not between sensory modalities: an individual differences approach. PLoS One 2010;5:e15280. PMID: 21151865; PMCID: PMC2998418; DOI: 10.1371/journal.pone.0015280.
Abstract
Background: Most people show a remarkable deficit in reporting the second of two targets presented in close temporal succession, reflecting an attentional blink (AB). An aspect of the AB that is often ignored is that there are large individual differences in the magnitude of the effect. Here we exploit these individual differences to address a long-standing question: does attention to a visual target come at a cost for attention to an auditory target (and vice versa)? More specifically, the goal of the current study was to investigate (a) whether individuals with a large within-modality AB also show a large cross-modal AB, and (b) whether individual differences in AB magnitude within different modalities correlate or are completely separate.
Methodology/Principal Findings: While minimizing differential task difficulty and chances for a task switch to occur, a significant AB was observed when targets were both presented within the auditory or visual modality, and a positive correlation was found between individual within-modality AB magnitudes. However, neither a cross-modal AB nor a correlation between cross-modal and within-modality AB magnitudes was found.
Conclusion/Significance: The results provide strong evidence that a major source of attentional restriction must lie in modality-specific sensory systems rather than in a central amodal system, effectively settling a long-standing debate. Individuals with a large within-modality AB may be especially committed or focused in their processing of the first target, and to some extent that tendency to focus could cross modalities, reflected in the within-modality correlation. However, what they are focusing (resource allocation, blocking of processing) is strictly within-modality, as it only affects the second target on within-modality trials. The findings show that individual differences in AB magnitude can provide important information about the modular structure of human cognition.
Affiliation(s)
- Sander Martens, Neuroimaging Center, University of Groningen, Groningen, The Netherlands

15. Van der Burg E, Brederoo SG, Nieuwenstein MR, Theeuwes J, Olivers CNL. Audiovisual semantic interference and attention: evidence from the attentional blink paradigm. Acta Psychol (Amst) 2010;134:198-205. PMID: 20176341; DOI: 10.1016/j.actpsy.2010.01.010.
Abstract
In the present study we investigated the role of attention in audiovisual semantic interference, using an attentional blink paradigm. Participants were asked to make an unspeeded response to the identity of a visual target letter. This target letter was preceded at various SOAs by a synchronized audiovisual letter-pair, which was either congruent (e.g., hearing an "F" and viewing an "F") or incongruent (e.g., hearing an "F" and viewing a "Z"). In Experiment 1, participants were asked to match the members of the audiovisual letter-pair. In Experiment 2, participants were asked to ignore the synchronized audiovisual letter-pairs altogether and only report the visual target. In Experiment 3, participants were asked to identify only one of the audiovisual letters (identify the auditory letter and ignore the synchronized visual letter, or vice versa). An attentional blink was found in all three experiments, indicating that the audiovisual letter-pairs were processed. However, a congruency effect on subsequent target detection was observed in Experiments 1 and 3, but not in Experiment 2. The results indicate that attention to the semantic contents of at least one modality is necessary to establish audiovisual semantic interference.
Affiliation(s)
- Erik Van der Burg, Cognitive Psychology, Vrije Universiteit Amsterdam, Amsterdam, The Netherlands