1. Li J, Hua L, Deng SW. Modality-specific impacts of distractors on visual and auditory categorical decision-making: an evidence accumulation perspective. Front Psychol 2024;15:1380196. PMID: 38765839; PMCID: PMC11099231; DOI: 10.3389/fpsyg.2024.1380196.
Abstract
Our brain constantly processes multisensory inputs to make decisions and guide behavior, but how goal-relevant processing is influenced by irrelevant information remains unclear. Here, we investigated the effects of intermodal and intramodal task-irrelevant information on visual and auditory categorical decision-making. In both visual and auditory tasks, we manipulated the modality of irrelevant inputs (visual vs. auditory vs. none) and used linear discrimination analysis of EEG and hierarchical drift-diffusion modeling (HDDM) to identify when and how task-irrelevant information affected decision-relevant processing. The results revealed modality-specific impacts of irrelevant inputs on visual and auditory categorical decision-making. In the visual task, the distinct effects appeared in the neural components: auditory distractors amplified sensory processing, whereas visual distractors amplified the post-sensory process. Conversely, in the auditory task, the distinct effects appeared in behavioral performance and the underlying cognitive processes: visual distractors facilitated behavioral performance and affected both stages, whereas auditory distractors interfered with behavioral performance and impacted sensory processing rather than the post-sensory decision stage. Overall, these findings suggest that auditory distractors affect the sensory processing stage of both tasks, while visual distractors affect the post-sensory decision stage of visual categorical decision-making and both stages of auditory categorical decision-making. This study provides insight into how humans process information from multiple sensory modalities during decision-making by leveraging modality-specific impacts.
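The "evidence accumulation perspective" here refers to drift-diffusion modeling, in which the drift rate indexes the quality of sensory evidence and the decision boundary and non-decision time capture pre- and post-sensory stages. A minimal forward simulation (not the authors' HDDM code; all parameter values are arbitrary) illustrates how a distractor that degrades the sensory stage can be modeled as a reduced drift rate:

```python
import numpy as np

def simulate_ddm(drift, boundary=1.0, ndt=0.3, noise=1.0, dt=0.005,
                 n_trials=500, seed=0):
    """Simulate RTs and choices from a basic drift-diffusion process:
    evidence accumulates from 0 toward +boundary (correct) or -boundary
    (error); ndt adds a fixed non-decision (encoding/motor) time."""
    rng = np.random.default_rng(seed)
    rts, correct = [], []
    for _ in range(n_trials):
        x, t = 0.0, 0.0
        while abs(x) < boundary:
            # Euler step: deterministic drift plus Gaussian diffusion noise
            x += drift * dt + noise * np.sqrt(dt) * rng.standard_normal()
            t += dt
        rts.append(t + ndt)
        correct.append(x > 0)
    return np.array(rts), np.array(correct)

# A distractor that degrades sensory processing maps onto a lower drift
# rate: responses become slower and less accurate.
rt_base, acc_base = simulate_ddm(drift=2.0)
rt_distr, acc_distr = simulate_ddm(drift=1.0)
print(acc_base.mean() > acc_distr.mean(), rt_base.mean() < rt_distr.mean())
```

HDDM fits these parameters hierarchically across participants from observed RT distributions; the sketch only shows the generative model that underlies that fit.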
Affiliation(s)
- Jianhua Li: Department of Psychology, University of Macau, Macau, China; Center for Cognitive and Brain Sciences, University of Macau, Macau, China
- Lin Hua: Center for Cognitive and Brain Sciences, University of Macau, Macau, China; Faculty of Health Sciences, University of Macau, Macau, China
- Sophia W. Deng: Department of Psychology, University of Macau, Macau, China; Center for Cognitive and Brain Sciences, University of Macau, Macau, China
2. Li J, Deng SW. Attentional focusing and filtering in multisensory categorization. Psychon Bull Rev 2024;31:708-720. PMID: 37673842; DOI: 10.3758/s13423-023-02370-7.
Abstract
Selective attention refers to the ability to focus on goal-relevant information while filtering out irrelevant information. In a multisensory context, how do people selectively attend to multiple inputs when making categorical decisions? Here, we examined the role of selective attention in cross-modal categorization in two experiments. In a speeded categorization task, participants were asked to attend to visual or auditory targets and categorize them while ignoring other, irrelevant stimuli. A response-time-extended multinomial processing tree (RT-MPT) model was implemented to estimate the contributions of attentional focusing on task-relevant information and attentional filtering of distractors. The results indicated that the role of selective attention was modality-specific, with differences in attentional focusing and filtering between the visual and auditory modalities. Visual information could be focused on or filtered out more effectively, whereas auditory information was more difficult to filter out, causing greater interference with task-relevant performance. The findings suggest that selective attention plays a critical and differential role across modalities, providing a novel and promising approach to understanding multisensory processing and the focusing and filtering mechanisms of categorical decision-making.
Affiliation(s)
- Jianhua Li: Department of Psychology, University of Macau, Avenida da Universidade, Taipa, Macau; Center for Cognitive and Brain Sciences, University of Macau, Taipa, Macau
- Sophia W. Deng: Department of Psychology, University of Macau, Avenida da Universidade, Taipa, Macau; Center for Cognitive and Brain Sciences, University of Macau, Taipa, Macau
3. Zhu R, Ma X, You X. The effect of working memory load on inattentional deafness during aeronautical decision-making. Appl Ergon 2023;113:104099. PMID: 37480663; DOI: 10.1016/j.apergo.2023.104099.
Abstract
Operating an aircraft requires pilots to handle a large amount of multimodal information, which creates a high working memory load. Detecting auditory alarms in this high-load scenario is crucial for aviation safety. According to cognitive control load theory, an increase in working memory load may enhance distractor interference, resulting in improved detection sensitivity for task-irrelevant stimuli. Therefore, understanding the effect of working memory load on auditory alarm detection is of particular interest in aviation safety research. Three experiments investigated the effects of the storage load and executive function load of working memory on auditory alarm detection during aeronautical decision-making. In Experiments 1 and 2, participants performed an aeronautical decision-making task while also detecting an auditory alarm during the retention interval of a working memory task (visual-spatial, visual-verbal, or auditory-verbal). In Experiment 3, participants were required to detect an auditory alarm while performing 2-back and 3-back aeronautical decision-making tasks. Experiment 1 found that auditory alarm sensitivity was higher under low visual-spatial working memory storage load than under high load. Experiment 2 found that a high storage load of visual-verbal working memory reduced auditory alarm sensitivity, whereas auditory-verbal working memory load did not. Experiment 3 found that, unlike for storage load, auditory alarm sensitivity was stronger under high executive function load than under low executive function load. These findings show that working memory storage load and executive function load have different effects on auditory alarm sensitivity. The relationship between executive function and auditory alarm sensitivity supports cognitive control load theory, while the impact of storage load on auditory alarm sensitivity does not adhere to this theory.
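"Detection sensitivity" in alarm-detection tasks like these is conventionally quantified as d' from signal detection theory: the z-transformed hit rate minus the z-transformed false-alarm rate. A sketch with hypothetical trial counts (the paper's actual data are not reproduced here):

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity d' = z(hit rate) - z(false-alarm rate), with a
    log-linear correction so rates of exactly 0 or 1 stay finite."""
    z = NormalDist().inv_cdf
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    return z(hit_rate) - z(fa_rate)

# Hypothetical counts for alarm detection under low vs. high storage load.
low_load = d_prime(hits=46, misses=4, false_alarms=5, correct_rejections=45)
high_load = d_prime(hits=38, misses=12, false_alarms=9, correct_rejections=41)
print(low_load > high_load)  # lower load -> higher sensitivity, as in Exp. 1
```

The log-linear (add 0.5) correction is one common convention; studies differ in which correction they apply.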
Affiliation(s)
- Rongjuan Zhu: School of Management, Xi'an University of Science and Technology, Xi'an 710054, China
- Xiaoliang Ma: Geovis Spatial Technology Co., Ltd., Xi'an 710100, China
- Xuqun You: Key Laboratory for Behavior and Cognitive Neuroscience of Shaanxi Province, School of Psychology, Shaanxi Normal University, Xi'an 710062, China
4. Speed LJ, Brysbaert M. Ratings of valence, arousal, happiness, anger, fear, sadness, disgust, and surprise for 24,000 Dutch words. Behav Res Methods 2023. PMID: 37783901; DOI: 10.3758/s13428-023-02239-6.
Abstract
Emotion is a fundamental aspect of human life and is therefore critically encoded in language. To facilitate research into the encoding of emotion in language and how emotion associations affect language processing, we present a new set of emotion norms for over 24,000 Dutch words. The norms include ratings of two key dimensions of emotion, valence and arousal, as well as ratings on discrete emotion categories: happiness, anger, fear, sadness, disgust, and surprise. We show that emotional information can predict word processing, such that responses to positive words are facilitated relative to neutral and negative words. We also demonstrate how the emotion ratings relate to personality characteristics. The data are available via the Open Science Framework (https://osf.io/9htuv/) and serve as a valuable resource for research into emotion as well as for applied settings such as healthcare and digital communication.
Affiliation(s)
- Laura J Speed: Centre for Language Studies, Radboud University, Nijmegen, Netherlands
- Marc Brysbaert: Department of Experimental Psychology, Ghent University, Ghent, Belgium
5. Gustafson SJ, Nelson L, Silcox JW. Effect of Auditory Distractors on Speech Recognition and Listening Effort. Ear Hear 2023;44:1121-1132. PMID: 36935395; PMCID: PMC10440215; DOI: 10.1097/aud.0000000000001356.
Abstract
Objectives: Everyday listening environments are filled with competing noise and distractors. Although significant research has examined the effect of competing noise on speech recognition and listening effort, little is understood about the effect of distraction. The framework for understanding effortful listening recognizes the importance of attention-related processes in speech recognition and listening effort; however, it underspecifies the role they play, particularly with respect to distraction. The load theory of attention predicts that resources will be automatically allocated to processing a distractor, but only if the perceptual load of the listening task is low enough. If perceptual load is high (i.e., listening in noise), the resources that would otherwise be allocated to processing a distractor are used to overcome the increased perceptual load and are unavailable for distractor processing. Although there is ample evidence for this theory in the visual domain, there has been little research investigating how the load theory of attention may apply to speech processing. In this study, we sought to measure the effect of distractors on speech recognition and listening effort and to evaluate whether the load theory of attention can be used to understand a listener's resource allocation in the presence of distractors.
Design: Fifteen adult listeners participated in a monosyllabic word repetition task. Test stimuli were presented in quiet or in competing speech (+5 dB signal-to-noise ratio) and in distractor or no-distractor conditions. In conditions with distractors, auditory distractors were presented before the target words on 24% of the trials in quiet and in noise. Percent correct was recorded as a measure of speech recognition, and verbal response time (VRT) was recorded as a measure of listening effort.
Results: A significant interaction was present for speech recognition: distractors reduced speech recognition in the quiet condition but had no effect when noise was present. VRTs were significantly longer when distractors were present, regardless of listening condition.
Conclusions: Consistent with the load theory of attention, distractors significantly reduced speech recognition in the low-perceptual-load condition (i.e., listening in quiet) but did not impact speech recognition scores under high perceptual load (i.e., listening in noise). The increases in VRTs in the presence of distractors under both low and high perceptual load (i.e., quiet and noise) suggest that the load theory of attention may not apply to listening effort. However, the large effect of distractors on VRT in both conditions is consistent with previous work demonstrating that distraction-related shifts of attention can delay processing of the target task. These findings also fit within the framework for understanding effortful listening, which proposes that involuntary attentional shifts deplete cognitive resources, leaving fewer resources readily available to process the signal of interest and resulting in increased listening effort (i.e., elongated VRT).
Affiliation(s)
- Samantha J Gustafson: Department of Communication Sciences and Disorders, University of Utah, Salt Lake City, Utah (these authors contributed equally to this work)
- Loren Nelson: Department of Communication Sciences and Disorders, University of Utah, Salt Lake City, Utah (these authors contributed equally to this work)
- Jack W Silcox: Department of Psychology, University of Utah, Salt Lake City, Utah
6. Takai S, Kanno A, Kawase T, Shirakura M, Suzuki J, Nakasato N, Kawashima R, Katori Y. Possibility of additive effects by the presentation of visual information related to distractor sounds on the contra-sound effects of the N100m responses. Hear Res 2023;434:108778. PMID: 37105052; DOI: 10.1016/j.heares.2023.108778.
Abstract
Auditory-evoked responses can be affected by different types of contralateral sounds and by attention modulation. The present study examined the additive effects of presenting visual information related to contralateral sounds, serving as distractors during dichotic listening tasks, on the contralateral effects of N100m responses in the auditory cortex of 16 subjects (12 males and 4 females). In magnetoencephalography, a 500 ms tone burst at 1000 Hz was presented to the left ear at 70 dB to elicit the N100m response, and a movie clip was used as a distractor stimulus under audio-only, visual-only, and audiovisual conditions. Subjects were instructed to pay attention to the left ear and press a response button each time they heard a tone-burst stimulus in that ear. The results suggest that presenting visual information related to the contralateral sound, which served as a distractor, significantly suppressed the amplitude of the N100m response compared with the contralateral-sound-only condition. In contrast, the visual information did not affect the latency of the N100m response. These results suggest that the integration of contralateral sounds with related movies may have produced a more perceptually loaded stimulus and reduced the intensity of attention to the tone bursts. Our findings suggest that selective attention and saliency mechanisms may have cross-modal effects on other modes of perception.
Affiliation(s)
- Shunsuke Takai: Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, 1-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi 980-8574, Japan
- Akitake Kanno: Department of Advanced Spintronics Medical Engineering, Graduate School of Engineering, Tohoku University, 2-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi 980-8575, Japan; Department of Epileptology, Tohoku University Graduate School of Medicine, 2-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi 980-8575, Japan
- Tetsuaki Kawase: Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, 1-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi 980-8574, Japan; Laboratory of Rehabilitative Auditory Science, Tohoku University Graduate School of Biomedical Engineering, 1-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi 980-8574, Japan; Department of Audiology, Tohoku University Graduate School of Medicine, 1-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi 980-8574, Japan
- Masayuki Shirakura: Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, 1-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi 980-8574, Japan
- Jun Suzuki: Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, 1-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi 980-8574, Japan
- Nobukatsu Nakasato: Department of Advanced Spintronics Medical Engineering, Graduate School of Engineering, Tohoku University, 2-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi 980-8575, Japan; Department of Epileptology, Tohoku University Graduate School of Medicine, 2-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi 980-8575, Japan
- Ryuta Kawashima: Institute of Development, Aging and Cancer, Tohoku University, 4-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi 980-8575, Japan
- Yukio Katori: Department of Otolaryngology-Head and Neck Surgery, Tohoku University Graduate School of Medicine, 1-1 Seiryo-machi, Aoba-ku, Sendai, Miyagi 980-8574, Japan
7. He Y, Yang T, He C, Sun K, Guo Y, Wang X, Bai L, Xue T, Xu T, Guo Q, Liao Y, Liu X, Wu S. Effects of audiovisual interactions on working memory: Use of the combined N-back + Go/NoGo paradigm. Front Psychol 2023;14:1080788. PMID: 36874804; PMCID: PMC9982107; DOI: 10.3389/fpsyg.2023.1080788.
Abstract
Background: Approximately 94% of the sensory information acquired by humans originates from the visual and auditory channels. Such information can be temporarily stored and processed in working memory, but this system has limited capacity. Working memory plays an important role in higher cognitive functions and is controlled by central executive function. Therefore, elucidating the influence of the central executive function on information processing in working memory, such as in audiovisual integration, is of great scientific and practical importance.
Purpose: This study used a paradigm combining N-back and Go/NoGo tasks, with simple Arabic numerals as stimuli, to investigate the effects of cognitive load (modulated by varying the magnitude of N) and audiovisual integration on the central executive function of working memory, as well as their interaction.
Methods: Sixty college students aged 17-21 years were enrolled and performed both unimodal and bimodal tasks to evaluate the central executive function of working memory. The order of the three cognitive tasks was pseudorandomized, and a Latin square design was used to account for order effects. Working memory performance, i.e., reaction time and accuracy, was compared between unimodal and bimodal tasks with repeated-measures analysis of variance (ANOVA).
Results: As cognitive load increased, the presence of auditory stimuli interfered with visual working memory to a moderate to large extent; similarly, as cognitive load increased, the presence of visual stimuli interfered with auditory working memory with a moderate to large effect size.
Conclusion: Our study supports the theory of competing resources, i.e., that visual and auditory information interfere with each other and that the magnitude of this interference is primarily related to cognitive load.
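For readers unfamiliar with the N-back paradigm: a trial is a target when the current stimulus matches the one presented N trials earlier, and load grows with N. A small scoring sketch (illustrative only; the digit stream and response indices are made up, not the study's materials):

```python
def n_back_targets(stream, n):
    """Indices where the stimulus matches the one presented n trials
    back -- the 'match' (target) trials in an N-back task."""
    return [i for i in range(n, len(stream)) if stream[i] == stream[i - n]]

def n_back_score(stream, responses, n):
    """Score a set of 'match' keypresses (given as trial indices):
    returns (hit rate, number of false alarms)."""
    targets = set(n_back_targets(stream, n))
    hits = len(targets & set(responses))
    false_alarms = len(set(responses) - targets)
    return hits / max(len(targets), 1), false_alarms

digits = [3, 1, 4, 1, 5, 1, 4, 1]
print(n_back_targets(digits, 2))  # -> [3, 5, 7]
```

Raising N from 2 to 3 shrinks and shifts the target set, which is how the study modulates cognitive load while keeping the stimuli themselves constant.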
Affiliation(s)
- Yang He: Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
- Tianqi Yang: Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
- Chunyan He: Department of Nursing, Fourth Military Medical University, Xi'an, China
- Kewei Sun: Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
- Yaning Guo: Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
- Xiuchao Wang: Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
- Lifeng Bai: Faculty of Humanities and Social Sciences, Aviation University of Air Force, Changchun, China
- Ting Xue: Faculty of Humanities and Social Sciences, Aviation University of Air Force, Changchun, China
- Tao Xu: Psychology Section, Secondary Sanatorium of Air Force Healthcare Center for Special Services, Hangzhou, China
- Qingjun Guo: Psychology Section, Secondary Sanatorium of Air Force Healthcare Center for Special Services, Hangzhou, China
- Yang Liao: Air Force Medical Center, Air Force Medical University, Beijing, China
- Xufeng Liu: Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
- Shengjun Wu: Department of Military Medical Psychology, Fourth Military Medical University, Xi'an, China
8. Yang W, Li S, Guo A, Li Z, Yang X, Ren Y, Yang J, Wu J, Zhang Z. Auditory attentional load modulates the temporal dynamics of audiovisual integration in older adults: An ERPs study. Front Aging Neurosci 2022;14:1007954. PMID: 36325188; PMCID: PMC9618958; DOI: 10.3389/fnagi.2022.1007954.
Abstract
As perceptual abilities decline with age, older adults increasingly benefit from gaining perceptual information through audiovisual integration. Attending to one or more auditory stimuli while performing other tasks is a common challenge for older adults in everyday life. It is therefore necessary to probe the effects of auditory attentional load on audiovisual integration in older adults. The present study used event-related potentials (ERPs) and a dual-task paradigm [Go/No-go task + rapid serial auditory presentation (RSAP) task] to investigate the temporal dynamics of audiovisual integration. Behavioral results showed that both older and younger adults responded faster and more accurately to audiovisual stimuli than to either visual or auditory stimuli alone. ERPs revealed weaker audiovisual integration under the no-auditory-attentional-load condition at the earlier processing stages and, conversely, stronger integration in the late stages. Moreover, audiovisual integration was greater in older adults than in younger adults in the 60-90, 140-210, and 430-530 ms intervals. Notably, only under the low-load condition, in the 140-210 ms interval, was the audiovisual integration of older adults significantly greater than that of younger adults. These results delineate the temporal dynamics of the interaction between auditory attentional load and audiovisual integration in aging, suggesting that modulating auditory attentional load affects audiovisual integration, enhancing it in older adults.
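ERP studies of audiovisual integration commonly test an additive model: the response to the audiovisual stimulus is compared against the sum of the unimodal responses, and stretches where AV − (A + V) deviates from zero index integration. A toy sketch with synthetic waveforms (all values are illustrative; the 140-210 ms window echoes the interval highlighted above, but nothing here is the study's data):

```python
import numpy as np

def av_integration(erp_av, erp_a, erp_v):
    """Additive-model difference wave AV - (A + V); sustained nonzero
    stretches suggest super- or subadditive interaction rather than
    independent summation of the unimodal responses."""
    return erp_av - (erp_a + erp_v)

# Synthetic waveforms sampled at 1 kHz over 0-600 ms.
t = np.arange(0, 0.6, 0.001)
erp_a = np.sin(2 * np.pi * 5 * t) * np.exp(-t / 0.20)
erp_v = 0.5 * np.sin(2 * np.pi * 4 * t) * np.exp(-t / 0.25)
# Build an AV response with an extra integration "bump" near 175 ms.
erp_av = erp_a + erp_v + 0.3 * np.exp(-((t - 0.175) / 0.03) ** 2)

diff = av_integration(erp_av, erp_a, erp_v)
window = (t >= 0.140) & (t <= 0.210)
print(diff[window].mean() > diff[~window].mean())  # effect sits in the window
```

In practice the difference wave is computed per participant and condition and then tested against zero across the a priori time windows.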
Affiliation(s)
- Weiping Yang: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China; Brain and Cognition Research Center (BCRC), Faculty of Education, Hubei University, Wuhan, China
- Shengnan Li: Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Ao Guo: Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Zimo Li: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Xiangfu Yang: Department of Psychology, Faculty of Education, Hubei University, Wuhan, China
- Yanna Ren (corresponding author): Department of Psychology, College of Humanities and Management, Guizhou University of Traditional Chinese Medicine, Guiyang, China
- Jiajia Yang: Applied Brain Science Lab, Faculty of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan
- Jinglong Wu: Graduate School of Interdisciplinary Science and Engineering in Health Systems, Okayama University, Okayama, Japan; Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
- Zhilin Zhang: Research Center for Medical Artificial Intelligence, Shenzhen Institute of Advanced Technology, Chinese Academy of Sciences, Shenzhen, China
9. The unnoticed zoo: Inattentional deafness to animal sounds in music. Atten Percept Psychophys 2022;85:1238-1252. PMID: 36008746; PMCID: PMC10167135; DOI: 10.3758/s13414-022-02553-9.
Abstract
Inattentional unawareness can potentially occur in several sensory domains but has mainly been described in visual paradigms ("inattentional blindness"; e.g., Simons & Chabris, 1999, Perception, 28, 1059-1074). Dalton and Fraenkel (2012, Cognition, 124, 367-372) introduced "inattentional deafness" by showing that about 70% of participants missed a voice repeatedly saying "I'm a gorilla" while focusing on a primary conversation. The present study expanded this finding within the acoustic domain in several ways. First, we broadened its validity by using 10 acoustic samples, specifically excerpts of popular musical pieces from different genres. Second, we used animal sounds as the secondary acoustic signal; these sounds originate from a completely different acoustic domain and are therefore highly distinct from the primary sound. Participants' task was to count different musical features. Results (N = 37) showed that the frequency of missed animal sounds was higher in participants reporting higher attentional focus and motivation. In additional analyses, however, attentional focus, perceptual load, and feature similarity/saliency did not influence detecting or missing the animal sounds. For 31.2% of the music plays, people did not recognize highly salient animal sounds (with respect to both the type of acoustic source and the frequency spectra) while executing the primary (counting) task. This significant effect supports the idea that inattentional deafness occurs even when the unattended acoustic stimuli are highly salient.
10. Are auditory cues special? Evidence from cross-modal distractor-induced blindness. Atten Percept Psychophys 2022;85:889-904. PMID: 35902451; PMCID: PMC10066119; DOI: 10.3758/s13414-022-02540-0.
Abstract
A target that shares features with preceding distractor stimuli is less likely to be detected due to a distractor-driven activation of a negative attentional set. This transient impairment in perceiving the target (distractor-induced blindness/deafness) can be found within vision and audition. Recently, the phenomenon was observed in a cross-modal setting involving an auditory target and additional task-relevant visual information (cross-modal distractor-induced deafness). In the current study, consisting of three behavioral experiments, a visual target, indicated by an auditory cue, had to be detected despite the presence of visual distractors. Multiple distractors consistently led to reduced target detection if cue and target appeared in close temporal proximity, confirming cross-modal distractor-induced blindness. However, the effect on target detection was reduced compared to the effect of cross-modal distractor-induced deafness previously observed for reversed modalities. The physical features defining cue and target could not account for the diminished distractor effect in the current cross-modal task. Instead, this finding may be attributed to the auditory cue acting as an especially efficient release signal of the distractor-induced inhibition. Additionally, a multisensory enhancement of visual target detection by the concurrent auditory signal might have contributed to the reduced distractor effect.
11. Defining the Role of Attention in Hierarchical Auditory Processing. Audiol Res 2021;11:112-128. PMID: 33805600; PMCID: PMC8006147; DOI: 10.3390/audiolres11010012.
Abstract
Communication in noise is a complex process requiring efficient neural encoding throughout the entire auditory pathway as well as contributions from higher-order cognitive processes (i.e., attention) to extract speech cues for perception. Thus, identifying effective clinical interventions for individuals with speech-in-noise deficits relies on the disentanglement of bottom-up (sensory) and top-down (cognitive) factors to appropriately determine the area of deficit; yet, how attention may interact with early encoding of sensory inputs remains unclear. For decades, attentional theorists have attempted to address this question with cleverly designed behavioral studies, but the neural processes and interactions underlying attention's role in speech perception remain unresolved. While anatomical and electrophysiological studies have investigated the neurological structures contributing to attentional processes and revealed relevant brain-behavior relationships, recent electrophysiological techniques (i.e., simultaneous recording of brainstem and cortical responses) may provide novel insight regarding the relationship between early sensory processing and top-down attentional influences. In this article, we review relevant theories that guide our present understanding of attentional processes, discuss current electrophysiological evidence of attentional involvement in auditory processing across subcortical and cortical levels, and propose areas for future study that will inform the development of more targeted and effective clinical interventions for individuals with speech-in-noise deficits.
12. Melara RD, Varela T, Baidya T. Neural and behavioral effects of perceptual load on auditory selective attention. Behav Brain Res 2021;405:113213. PMID: 33657438; DOI: 10.1016/j.bbr.2021.113213.
Abstract
Healthy adults performed an auditory version of the flanker task under low versus high perceptual load while behavioral and electrophysiological measures were recorded. Participants experienced less attentional interference under low load than under high load, whether analyses were performed between tasks (Garner interference; found in accuracy and RT), between stimuli (flanker congruity; found in accuracy), or between sequences (Gratton effect; found in accuracy). Analysis of event-related potentials to the distractor (flanker), which was physically identical across load conditions, revealed load modulation of task effects in the P1 component (peak amplitude and latency), an early perceptual component peaking approximately 75 ms after distractor onset. As in behavioral performance, ERP analyses showed that auditory attentional disruption in P1 was significantly smaller under low perceptual load. Dipole source analysis suggested activation of prefrontal inhibitory control during low load and of the default mode network during high load. The results are in keeping with the predictions of tectonic theory (Melara & Algom, 2003), but inconsistent with expectations derived from perceptual load theory (Lavie, 1995).
13
Schlossmacher I, Dellert T, Bruchmann M, Straube T. Dissociating neural correlates of consciousness and task relevance during auditory processing. Neuroimage 2020; 228:117712. PMID: 33387630. DOI: 10.1016/j.neuroimage.2020.117712.
Abstract
In recent years, several ERP components have been identified as potential neural correlates of consciousness (NCC), including early negativities and late positivities. Based on experiments in the visual modality, it has recently been shown that awareness is often confounded with reporting it, possibly overestimating the NCC. It is unknown whether similar constraints also exist in the auditory modality. In order to address this gap, we presented spoken words in a sustained inattentional deafness paradigm. Electrophysiological responses were obtained in three physically identical experimental conditions that differed only with respect to the participants' instructions. Participants were either left uninformed or informed about the presence of spoken words while confronted with an auditory distractor task (U/I condition), informed about the words while exposed to the same task as before (I condition), or requested to respond to the now task-relevant speech stimuli (TR condition). After completion of the U/I condition, only informed participants reported awareness of the words. In ERPs, awareness of words in the U/I and I condition was accompanied by an anterior auditory awareness negativity (AAN). Only when stimuli were task-relevant, i.e., during the TR condition, late positivities emerged. Taken together, these results indicate that early negativities but not late positivities index awareness across sensory modalities. Thus, they provide evidence for a recurrent processing framework, which highlights the importance of early sensory processing in conscious perception.
Affiliation(s)
- Insa Schlossmacher, Torge Dellert, Maximilian Bruchmann, Thomas Straube: Institute of Medical Psychology and Systems Neuroscience, University of Münster, Von-Esmarch-Str. 52, 48149 Münster, Germany; Otto Creutzfeldt Center for Cognitive and Behavioral Neuroscience, University of Münster, 48149 Münster, Germany
14
Kern L, Niedeggen M. Distractor-induced deafness: The effect of multiple auditory distractors on conscious target processing. Cortex 2020; 134:181-194. PMID: 33279811. DOI: 10.1016/j.cortex.2020.10.015.
Abstract
Conscious access to a target stimulus embedded in a rapid serial visual presentation can be impaired by the preceding presentation of multiple task-irrelevant distractors. While this phenomenon - labeled distractor-induced blindness (DIB) - is established in vision, it is unknown whether a similar effect can be observed in the auditory modality. Considering the differences in the processing of visual and auditory stimuli, modality-specific effects in the inhibitory mechanisms triggered by distractors can be expected. First, we aimed to find evidence for a distractor-induced deafness (DID) for auditory targets in a behavioral experiment. The target was defined by a transient increase in amplitude in a continuous sinusoidal tone, which was to be detected if accompanied or preceded by a deviant tone (cue). Both events were embedded in separate streams in a binaural rapid serial auditory presentation. Distractors preceded the cue and shared the target's features. As previously observed for DIB, a failure to detect the auditory target critically relied on the presentation of multiple distractor episodes. This DID effect was followed up in a subsequent event-related brain potential (ERP) study to identify the signature of target detection. In contrast to missed targets, hits were characterized by a larger frontal negativity and by a more pronounced centro-parietal P3b wave. Whereas the latter process was also observed in the visual domain, indicating a post-perceptual updating process, the frontal negativity was exclusively observed for auditory DID. This modality-specific process might signal that early attentional control processes support conscious access to relevant auditory events.
Affiliation(s)
- Lea Kern, Michael Niedeggen: FU Berlin, Department of Education and Psychology, Division General Psychology and Neuropsychology, Berlin, Germany
15
de Kerangal M, Vickers D, Chait M. The effect of healthy aging on change detection and sensitivity to predictable structure in crowded acoustic scenes. Hear Res 2020; 399:108074. PMID: 33041093. DOI: 10.1016/j.heares.2020.108074.
Abstract
The auditory system plays a critical role in supporting our ability to detect abrupt changes in our surroundings. Here we study how this capacity is affected in the course of healthy aging. Artificial acoustic 'scenes', populated by multiple concurrent streams of pure tones ('sources'), were used to capture the challenges of listening in complex acoustic environments. Two scene conditions were included: REG scenes consisted of sources characterized by a regular temporal structure; matched RAND scenes contained sources which were temporally random. Changes, manifested as the abrupt disappearance of one of the sources, were introduced in a subset of the trials, and participants ('young' group N = 41, age 20-38 years; 'older' group N = 41, age 60-82 years) were instructed to monitor the scenes for these events. Previous work demonstrated that young listeners exhibit better change detection performance in REG scenes, reflecting sensitivity to temporal structure. Here we sought to determine: (1) whether 'baseline' change detection ability (i.e., in RAND scenes) is affected by age; (2) whether aging affects listeners' sensitivity to temporal regularity; and (3) how change detection capacity relates to listeners' hearing and cognitive profile (a battery of tests that capture hearing and cognitive abilities hypothesized to be affected by aging). The results demonstrated that healthy aging is associated with reduced sensitivity to abrupt scene changes in RAND scenes, but that performance does not correlate with age or standard audiological measures such as pure tone audiometry or speech-in-noise performance. Remarkably, older listeners' change detection performance improved substantially (up to the level exhibited by young listeners) in REG relative to RAND scenes. This suggests that the ability to extract and track the regularity associated with scene sources, even in crowded acoustic environments, is relatively preserved in older listeners.
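The scene construction described in this abstract (multiple concurrent tone streams with regular or random timing, plus an abrupt source disappearance) can be sketched in a few lines. This is an illustrative toy only: the source count, inter-onset-interval ranges, and change time below are assumptions, not the parameters used by de Kerangal et al.

```python
import numpy as np

rng = np.random.default_rng(0)

def make_scene(n_sources=8, duration=6.0, regular=True, change_at=None):
    """Toy acoustic 'scene': each source is an array of tone-onset times.

    regular=True  -> one fixed inter-onset interval per source (REG scenes)
    regular=False -> exponentially jittered onsets (RAND scenes)
    change_at     -> if set, one source disappears at this time (the
                     abrupt change that listeners had to detect)
    Illustrative parameters only, not those of the study.
    """
    scene = []
    for _ in range(n_sources):
        if regular:
            ioi = rng.uniform(0.2, 0.6)          # fixed IOI for this source
            onsets = np.arange(0.0, duration, ioi)
        else:
            iois = rng.exponential(0.4, size=64)  # temporally random IOIs
            onsets = np.cumsum(iois)
            onsets = onsets[onsets < duration]
        scene.append(onsets)
    if change_at is not None:                     # drop one source mid-scene
        victim = rng.integers(n_sources)
        scene[victim] = scene[victim][scene[victim] < change_at]
    return scene

reg_scene = make_scene(regular=True, change_at=4.0)
rand_scene = make_scene(regular=False, change_at=4.0)
```

Change detection then amounts to noticing that one onset stream stops early, which is easier when that stream's inter-onset interval is predictable (REG) than when it is random (RAND).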
Affiliation(s)
- Mathilde de Kerangal: Ear Institute, University College London, 332 Gray's Inn Road, London WC1X 8EE, UK
- Deborah Vickers: Ear Institute, University College London, 332 Gray's Inn Road, London WC1X 8EE, UK; Cambridge Hearing Group, Clinical Neurosciences Department, University of Cambridge, UK
- Maria Chait: Ear Institute, University College London, 332 Gray's Inn Road, London WC1X 8EE, UK
16
Pomper U, Schmid R, Ansorge U. Continuous, Lateralized Auditory Stimulation Biases Visual Spatial Processing. Front Psychol 2020; 11:1183. PMID: 32655440; PMCID: PMC7325992. DOI: 10.3389/fpsyg.2020.01183.
Abstract
Sounds in our environment can easily capture human visual attention. Previous studies have investigated the impact of spatially localized, brief sounds on concurrent visuospatial attention. However, little is known about how the presence of a continuous, lateralized auditory stimulus (e.g., a person talking next to you while driving a car) impacts visual spatial attention (e.g., detection of critical events in traffic). In two experiments, we investigated whether a continuous auditory stream presented from one side biases visual spatial attention toward that side. Participants had to either passively or actively listen to sounds of various semantic complexities (tone pips, spoken digits, and a spoken story) while performing a visual target discrimination task. During both passive and active listening, we observed faster response times to visual targets presented spatially close to the relevant auditory stream. Additionally, we found that higher levels of semantic complexity of the presented sounds led to reduced visual discrimination sensitivity, but only during active listening to the sounds. We provide important novel results by showing that the presence of a continuous, ongoing auditory stimulus can impact visual processing, even when the sounds are not endogenously attended to. Together, our findings demonstrate the impact of ongoing sounds on visual processing in everyday scenarios such as moving about in traffic.
Affiliation(s)
- Ulrich Pomper, Rebecca Schmid: Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria
- Ulrich Ansorge: Department of Cognition, Emotion, and Methods in Psychology, Faculty of Psychology, University of Vienna, Vienna, Austria; Cognitive Science Hub, University of Vienna, Vienna, Austria
17
Volosin M, Horváth J. Task difficulty modulates voluntary attention allocation, but not distraction in an auditory distraction paradigm. Brain Res 2020; 1727:146565. PMID: 31765629. DOI: 10.1016/j.brainres.2019.146565.
Abstract
Keeping task-relevant sensory events in the focus of attention while ignoring irrelevant ones is crucial for optimizing task behavior. This attention-distraction balance might change with the perceptual demands of the ongoing task: while easy tasks might be performed with low attentional effort, difficult ones require enhanced attention. The goal of the present study was to investigate how task difficulty affected allocation of attention and distractibility in an auditory distraction paradigm. Participants performed a tone duration discrimination task in which tones were occasionally presented at a rare pitch (distracters), and task difficulty was manipulated by the duration difference between short and long tones. Short tones were consistently 200 ms long, while long tone duration was 400 ms in the easy, and 260 ms in the difficult condition. Behavioral results and deviant-minus-standard event-related potential (ERP) waveforms suggested similar magnitudes of distraction in both conditions. ERPs without such a subtraction showed that tone onsets were preceded by a negative-going trend, suggesting that participants prepared for tone onsets. In the difficult condition, N1 amplitudes to tone onsets were enhanced, indicating that participants invested more attentional resources. Increased difficulty also slowed down tone offset processing, as reflected by significantly delayed offset-related P1 and N1/N2 waveforms. These results suggest that although task difficulty compels participants to attend to the tones more strongly, this does not have a significant impact on distraction-related processing.
Affiliation(s)
- Márta Volosin: Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Magyar Tudósok körútja 2, H-1117 Budapest, Hungary; Institute of Psychology, University of Szeged, Egyetem utca 2, H-6722 Szeged, Hungary
- János Horváth: Institute of Cognitive Neuroscience and Psychology, Research Centre for Natural Sciences, Hungarian Academy of Sciences, Magyar Tudósok körútja 2, H-1117 Budapest, Hungary; Institute of Psychology, Károli Gáspár University of the Reformed Church in Hungary, Bécsi út 324, H-1037 Budapest, Hungary
18
Paterson E, Sanderson PM, Brecknell B, Paterson NAB, Loeb RG. Comparison of Standard and Enhanced Pulse Oximeter Auditory Displays of Oxygen Saturation. Anesth Analg 2019; 129:997-1004. DOI: 10.1213/ane.0000000000004267.
19
Rapid Ocular Responses Are Modulated by Bottom-up-Driven Auditory Salience. J Neurosci 2019; 39:7703-7714. PMID: 31391262; PMCID: PMC6764203. DOI: 10.1523/jneurosci.0776-19.2019.
Abstract
Despite the prevalent use of alerting sounds in alarms and human-machine interface systems and the long-hypothesized role of the auditory system as the brain's "early warning system," we have only a rudimentary understanding of what determines auditory salience-the automatic attraction of attention by sound-and which brain mechanisms underlie this process. A major roadblock has been the lack of a robust, objective means of quantifying sound-driven attentional capture. Here we demonstrate that: (1) a reliable salience scale can be obtained from crowd-sourcing (N = 911), (2) acoustic roughness appears to be a driving feature behind this scaling, consistent with previous reports implicating roughness in the perceptual distinctiveness of sounds, and (3) crowd-sourced auditory salience correlates with objective autonomic measures. Specifically, we show that a salience ranking obtained from online raters correlated robustly with the superior colliculus-mediated ocular freezing response, microsaccadic inhibition (MSI), measured in naive, passively listening human participants (of either sex). More salient sounds evoked earlier and larger MSI, consistent with a faster orienting response. These results are consistent with the hypothesis that MSI reflects a general reorienting response that is evoked by potentially behaviorally important events regardless of their modality. SIGNIFICANCE STATEMENT: Microsaccades are small, rapid, fixational eye movements that are measurable with sensitive eye-tracking equipment. We reveal a novel, robust link between microsaccade dynamics and the subjective salience of brief sounds (salience rankings obtained from a large number of participants in an online experiment): Within 300 ms of sound onset, the eyes of naive, passively listening participants demonstrate different microsaccade patterns as a function of the sound's crowd-sourced salience. These results position the superior colliculus (hypothesized to underlie microsaccade generation) as an important brain area to investigate in the context of a putative multimodal salience hub. They also demonstrate an objective means for quantifying auditory salience.
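Microsaccades of the kind underlying the MSI measure are typically extracted from eye-tracking traces with a velocity-threshold algorithm in the style of Engbert and Kliegl (2003). The sketch below is a generic version of that approach applied to synthetic gaze data, not the authors' actual pipeline; the threshold multiplier, minimum duration, and noise levels are all assumptions.

```python
import numpy as np

def detect_microsaccades(x, y, fs=1000.0, lam=6.0, min_samples=3):
    """Velocity-threshold (micro)saccade detector, Engbert-Kliegl style.

    x, y: gaze position traces in degrees; fs: sampling rate in Hz.
    A sample is supra-threshold when its 2D velocity exceeds an ellipse
    of lam * a median-based velocity SD per axis. Generic sketch only.
    """
    vx = np.gradient(x) * fs                    # velocity in deg/s
    vy = np.gradient(y) * fs
    # robust (median-based) estimate of the velocity noise per axis
    sx = np.sqrt(np.median(vx ** 2) - np.median(vx) ** 2)
    sy = np.sqrt(np.median(vy ** 2) - np.median(vy) ** 2)
    crit = (vx / (lam * sx)) ** 2 + (vy / (lam * sy)) ** 2 > 1.0
    # keep supra-threshold runs that last at least min_samples
    events, start = [], None
    for i, c in enumerate(crit):
        if c and start is None:
            start = i
        elif not c and start is not None:
            if i - start >= min_samples:
                events.append((start, i))
            start = None
    if start is not None and len(crit) - start >= min_samples:
        events.append((start, len(crit)))
    return events

# toy trace: 600 ms of fixation noise plus one fast ~0.4 deg excursion
rng = np.random.default_rng(1)
n = 600
x = rng.normal(0.0, 0.01, n)
y = rng.normal(0.0, 0.01, n)
x[300:310] += np.linspace(0.0, 0.4, 10)        # 0.4 deg in 10 ms
events = detect_microsaccades(x, y, fs=1000.0)
```

Microsaccadic inhibition would then show up as a drop in the rate of such detected events shortly after sound onset.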
20
Voices to remember: Comparing neural signatures of intentional and non-intentional voice learning and recognition. Brain Res 2019; 1711:214-225. PMID: 30685271. DOI: 10.1016/j.brainres.2019.01.028.
Abstract
Recent electrophysiological evidence suggests a rapid acquisition of novel speaker representations during intentional voice learning. We investigated effects of learning intention on voice recognition, using a variant of the directed forgetting paradigm. In an old/new recognition task following voice learning, we compared performance and event-related brain potentials (ERPs) for studied voices, half of which had been prompted to be remembered (TBR) or forgotten (TBF). Furthermore, to assess incidental encoding of episodic information, participants indicated for each recognized test voice the ear of presentation during study. During study, TBR voices elicited more positive ERPs than TBF voices (from ∼250 ms), possibly reflecting deeper voice encoding. In parallel, subsequent recognition performance was higher for TBR than for TBF voices. Importantly, above-chance recognition for both learning conditions nevertheless suggested a degree of non-intentional voice learning. In a surprise episodic memory test for voice location, above-chance performance was observed for TBR voices only, suggesting that episodic memory for ear of presentation depended on intentional voice encoding. At test, a left posterior ERP OLD/NEW effect for both TBR and TBF voices (from ∼500 ms) reflected recognition of studied voices under both encoding conditions. By contrast, a right frontal ERP OLD/NEW effect for TBF voices only (from ∼800 ms) possibly reflected additional elaborative retrieval processes. Overall, we show that ERPs are sensitive 1) to strategic voice encoding during study (from ∼250 ms), and 2) to voice recognition at test (from ∼500 ms), with the specific pattern of ERP OLD/NEW effects partly depending on previous encoding intention.
21
Folyi T, Wentura D. Involuntary sensory enhancement of gain- and loss-associated tones: A general relevance principle. Int J Psychophysiol 2019; 138:11-26. PMID: 30685230. DOI: 10.1016/j.ijpsycho.2019.01.007.
Abstract
In a recent event-related potential (ERP) study (Folyi et al., 2016), we demonstrated that sensory processing of task-irrelevant tones is enhanced when they have previously been associated with positive or negative affective meaning (by means of monetary gains and losses, respectively) relative to tones with neutral meaning, as indexed by an enhancement of the auditory N1 amplitude. In the present study, we investigated (1) in line with the hypothesis of affective counter-regulation, whether positive versus negative tones can receive differential attentional enhancement depending on motivational context (Experiment 1); and (2) whether the early facilitation of positive and negative tones can operate strictly outside the focus of voluntary attention (Experiment 2). In Experiment 1, we replicated the basic N1 valence effect, but found no moderation by motivational context. In Experiment 2, we found a small valence effect on the N1. By combining data from the three experiments (i.e., our previous experiment and the present ones; N = 72), we found a clear enhancement of N1 amplitudes for valenced tones without moderation by experiment. This pattern of results suggests comparable early attentional enhancement of valenced tones in general: (a) despite different levels of concurrent task-relevant attentional and motivational demands in these experiments; and (b) without prioritizing one valence category over another, supporting our claim that it is the general relevance of tones with high motivational value that governs early attentional facilitation.
Affiliation(s)
- Timea Folyi, Dirk Wentura: Department of Psychology, Saarland University, Germany
22
Olguin A, Bekinschtein TA, Bozic M. Neural Encoding of Attended Continuous Speech under Different Types of Interference. J Cogn Neurosci 2018; 30:1606-1619. PMID: 30004849. DOI: 10.1162/jocn_a_01303.
Abstract
We examined how attention modulates the neural encoding of continuous speech under different types of interference. In an EEG experiment, participants attended to a narrative in English while ignoring a competing stream in the other ear. Four different types of interference were presented to the unattended ear: a different English narrative, a narrative in a language unknown to the listener (Spanish), a well-matched nonlinguistic acoustic interference (Musical Rain), and no interference. Neural encoding of attended and unattended signals was assessed by calculating cross-correlations between their respective envelopes and the EEG recordings. Findings revealed more robust neural encoding for the attended envelopes than for the ignored ones. Critically, however, the type of interfering stream significantly modulated this process: the fully intelligible distractor (English) caused the strongest encoding of both attended and unattended streams and the latest dissociation between them, whereas nonintelligible distractors caused weaker encoding and an earlier dissociation between attended and unattended streams. The results were consistent over the time course of the spoken narrative. These findings suggest that attended and unattended information can be differentiated at different depths of processing analysis, with the locus of selective attention determined by the nature of the competing stream. They provide strong support to flexible accounts of auditory selective attention.
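The envelope-to-EEG cross-correlation analysis described in this abstract can be illustrated on synthetic data. The envelope extraction (rectify and smooth), sampling rate, and lag range below are illustrative assumptions rather than the paper's actual pipeline; the toy "EEG" simply tracks the attended envelope at a built-in 100 ms lag.

```python
import numpy as np

fs = 100.0                       # assumed common rate for envelope and EEG (Hz)
rng = np.random.default_rng(0)

def envelope(x, win=5):
    """Crude amplitude envelope: rectify, then moving-average smooth."""
    kernel = np.ones(win) / win
    return np.convolve(np.abs(x), kernel, mode="same")

def xcorr(env, eeg, max_lag):
    """Normalized cross-correlation of a speech envelope with an EEG
    channel at lags 0..max_lag samples (EEG lagging the envelope)."""
    env = (env - env.mean()) / env.std()
    eeg = (eeg - eeg.mean()) / eeg.std()
    n = len(env)
    return np.array([np.dot(env[: n - k], eeg[k:]) / (n - k)
                     for k in range(max_lag + 1)])

# synthetic data: the EEG tracks the attended envelope at a 100 ms lag,
# while the unattended stream is absent from the response
attended = envelope(rng.normal(size=3000))
ignored = envelope(rng.normal(size=3000))
lag = int(0.1 * fs)              # 100 ms in samples
eeg = np.roll(attended, lag) + 0.5 * rng.normal(size=3000)

r_att = xcorr(attended, eeg, max_lag=30)
r_ign = xcorr(ignored, eeg, max_lag=30)
```

With this construction the attended-stream correlation peaks near the built-in 100 ms lag, while the ignored stream shows no comparable peak, mirroring the attended-versus-ignored encoding contrast the study reports.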
23
Murphy S, Dalton P. Inattentional numbness and the influence of task difficulty. Cognition 2018; 178:1-6. PMID: 29753983. DOI: 10.1016/j.cognition.2018.05.001.
Abstract
Research suggests that clearly detectable stimuli can be missed when attention is focused elsewhere, particularly when the observer is engaged in a complex task. Although this phenomenon has been demonstrated in vision and audition, much less is known about the possibility of a similar phenomenon within touch. Across two experiments, we investigated reported awareness of an unexpected tactile event as a function of the difficulty of a concurrent tactile task. Participants were presented with sequences of tactile stimuli to one hand and performed either an easy or a difficult counting task. On the final trial, an additional tactile stimulus was concurrently presented to the unattended hand. Retrospective reports revealed that more participants in the difficult (vs. easy) condition remained unaware of this unexpected stimulus, even though it was clearly detectable under full attention conditions. These experiments are the first demonstrating the phenomenon of inattentional numbness modulated by concurrent tactile task difficulty.
Affiliation(s)
- Sandra Murphy, Polly Dalton: Department of Psychology, Royal Holloway, University of London, United Kingdom
24
Abstract
Adaptation to female voices causes subsequent voices to be perceived as more male, and vice versa. This contrastive aftereffect disappears under spatial inattention to adaptors, suggesting that voices are not encoded automatically. According to Lavie, Hirst, de Fockert, and Viding (2004), the processing of task-irrelevant stimuli during selective attention depends on perceptual resources and working memory. Possibly due to their social significance, faces may be an exceptional domain: That is, task-irrelevant faces can escape perceptual load effects. Here we tested voice processing, to study whether voice gender aftereffects (VGAEs) depend on low or high perceptual (Exp. 1) or working memory (Exp. 2) load in a relevant visual task. Participants adapted to irrelevant voices while either searching digit displays for a target (Exp. 1) or recognizing studied digits (Exp. 2). We found that the VGAE was unaffected by perceptual load, indicating that task-irrelevant voices, like faces, can also escape perceptual-load effects. Intriguingly, the VGAE was increased under high memory load. Therefore, visual working memory load, but not general perceptual load, determines the processing of task-irrelevant voices.
25
Abstract
Mixed results have been found for the impact of auditory information presented during high-perceptual-load visual search tasks, with some studies showing large effects and others indicating inattentional deafness, with such stimuli going largely undetected. In three experiments, we demonstrated that task relatedness is a key factor in whether extraneous auditory stimuli impact high-load visual searches. Experiment 1 addressed a methodological concern (e.g., Lavie, Trends in Cognitive Sciences, 9, 75-82, 2005) regarding the timing of the relative onsets and offsets of task-related, to-be-ignored auditory stimuli and visual search arrays in experiments that have shown auditory distractor effects. Robust auditory distractor effects were found in each timing condition, and no inattentional deafness for high-load searches. Experiments 2 and 3 demonstrated that the relationship between the auditory stimuli and visual targets determined whether attention was captured and whether the response times to identify targets were impacted. Auditory stimuli that named a response-specific category influenced responses to targets mapped exclusively to one response, but not to the same targets mapped nonexclusively. These compatibility effects were larger if the distractors named an actual target item than if they named the category to which the item belonged. This pattern suggests that to-be-ignored auditory information that closely relates to a visual target search task influences the processing of that task, particularly in a high-perceptual-load search.
26
Murphy S, Spence C, Dalton P. Auditory perceptual load: A review. Hear Res 2017; 352:40-48. DOI: 10.1016/j.heares.2017.02.005.
27
Strauss DJ, Francis AL. Toward a taxonomic model of attention in effortful listening. Cogn Affect Behav Neurosci 2017; 17:809-825. PMID: 28567568; PMCID: PMC5548861. DOI: 10.3758/s13415-017-0513-0.
Abstract
In recent years, there has been increasing interest in studying listening effort. Research on listening effort intersects with the development of active theories of speech perception and contributes to the broader endeavor of understanding speech perception within the context of neuroscientific theories of perception, attention, and effort. Due to the multidisciplinary nature of the problem, researchers vary widely in their precise conceptualization of the catch-all term listening effort. Very recent consensus work stresses the relationship between listening effort and the allocation of cognitive resources, providing a conceptual link to current cognitive neuropsychological theories associating effort with the allocation of selective attention. By linking listening effort to attentional effort, we enable the application of a taxonomy of external and internal attention to the characterization of effortful listening. More specifically, we use a vectorial model to decompose the demand causing listening effort into its mutually orthogonal external and internal components and map the relationship between demanded and exerted effort by means of a resource-limiting term that can represent the influence of motivation as well as vigilance and arousal. Due to its quantitative nature and easy graphical interpretation, this model can be applied to a broad range of problems dealing with listening effort. As such, we conclude that the model provides a good starting point for further research on effortful listening within a more differentiated neuropsychological framework.
Affiliation(s)
- Daniel J Strauss: Systems Neuroscience and Neurotechnology Unit, Neurocenter, Faculty of Medicine, Saarland University & School of Engineering, Building 90.5, 66421 htw saar, Homburg/Saar, Germany; Leibniz-Institute for New Materials, Saarbruecken, Germany; Key Numerics GmbH - Neurocognitive Technologies, Saarbruecken, Germany
- Alexander L Francis: Speech Perception and Cognitive Effort Laboratory, Department of Speech, Language & Hearing Sciences, Purdue University, West Lafayette, IN, USA
28
Dykstra AR, Cariani PA, Gutschalk A. A roadmap for the study of conscious audition and its neural basis. Philos Trans R Soc Lond B Biol Sci 2017; 372:20160103. PMID: 28044014; PMCID: PMC5206271. DOI: 10.1098/rstb.2016.0103.
Abstract
How and which aspects of neural activity give rise to subjective perceptual experience-i.e. conscious perception-is a fundamental question of neuroscience. To date, the vast majority of work concerning this question has come from vision, raising the issue of generalizability of prominent resulting theories. However, recent work has begun to shed light on the neural processes subserving conscious perception in other modalities, particularly audition. Here, we outline a roadmap for the future study of conscious auditory perception and its neural basis, paying particular attention to how conscious perception emerges (and of which elements or groups of elements) in complex auditory scenes. We begin by discussing the functional role of the auditory system, particularly as it pertains to conscious perception. Next, we ask: what are the phenomena that need to be explained by a theory of conscious auditory perception? After surveying the available literature for candidate neural correlates, we end by considering the implications that such results have for a general theory of conscious perception as well as prominent outstanding questions and what approaches/techniques can best be used to address them. This article is part of the themed issue 'Auditory and visual scene analysis'.
Affiliation(s)
- Andrew R Dykstra
- Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
- Alexander Gutschalk
- Department of Neurology, Ruprecht-Karls-Universität Heidelberg, Heidelberg, Germany
29
The Implications of Cognitive Aging for Listening and the Framework for Understanding Effortful Listening (FUEL). Ear Hear 2016; 37 Suppl 1:44S-51S. [DOI: 10.1097/aud.0000000000000309]
30
Sörqvist P, Dahlström Ö, Karlsson T, Rönnberg J. Concentration: The Neural Underpinnings of How Cognitive Load Shields Against Distraction. Front Hum Neurosci 2016; 10:221. [PMID: 27242485 PMCID: PMC4870472 DOI: 10.3389/fnhum.2016.00221]
Abstract
Whether cognitive load—and other aspects of task difficulty—increases or decreases distractibility is the subject of much debate in contemporary psychology. One camp argues that cognitive load usurps executive resources, which otherwise could be used for attentional control, and therefore cognitive load increases distraction. The other camp argues that cognitive load demands high levels of concentration (focal-task engagement), which suppresses peripheral processing and therefore decreases distraction. In this article, we employed a functional magnetic resonance imaging (fMRI) protocol to explore whether higher cognitive load in a visually presented task suppresses task-irrelevant auditory processing in cortical and subcortical areas. The results show that selectively attending to an auditory stimulus facilitates its neural processing in the auditory cortex, and switching the locus-of-attention to the visual modality decreases the neural response in the auditory cortex. When the cognitive load of the task presented in the visual modality increases, the neural response to the auditory stimulus is further suppressed, along with increased activity in networks related to effortful attention. Taken together, the results suggest that higher cognitive load decreases peripheral processing of task-irrelevant information—which decreases distractibility—as a side effect of the increased activity in a focused-attention network.
Affiliation(s)
- Patrik Sörqvist
- Department of Building, Energy and Environmental Engineering, University of Gävle, Gävle, Sweden; Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Örjan Dahlström
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden; Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
- Thomas Karlsson
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden; Center for Medical Image Science and Visualization (CMIV), Linköping University, Linköping, Sweden
- Jerker Rönnberg
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden; Department of Behavioral Sciences and Learning, Linköping University, Linköping, Sweden
31
Lange K, Nowak M, Lauer W. A human factors perspective on medical device alarms: problems with operating alarming devices and responding to device alarms. Biomed Tech (Berl) 2016; 61:147-64. [PMID: 25427057 DOI: 10.1515/bmt-2014-0068]
Abstract
Medical devices emit alarms when a problem with the device or with the patient needs to be addressed by healthcare personnel. At present, problems with device alarms are frequently discussed in the literature, the main message being that patient safety is compromised because device alarms are not as effective and safe as they should - and could - be. There is a general consensus that alarm-related hazards result, to a considerable degree, from the interactions of human users with the device. The present paper addresses key aspects of human perception and cognition that may relate to both operating alarming devices and responding to device alarms. Recent publications have suggested solutions to alarm-related hazards associated with usage errors, based on assumptions about the causal relations between, for example, alarm management and human perception, cognition, and responding. However, although many of these assumptions have face validity, future research should provide objective empirical evidence in order to deepen our understanding of the actual causal relationships, and hence improve and expand the possibilities for taking appropriate action.
32
Murphy S, Dalton P. Out of touch? Visual load induces inattentional numbness. J Exp Psychol Hum Percept Perform 2016; 42:761-5. [PMID: 26974412 PMCID: PMC4873046 DOI: 10.1037/xhp0000218]
Abstract
It is now well known that the absence of attention can leave people unaware of both visual and auditory stimuli (e.g., Dalton & Fraenkel, 2012; Mack & Rock, 1998). However, the possibility of similar effects within the tactile domain has received much less research attention. Here, we introduce a new tactile inattention paradigm and use it to test whether tactile awareness depends on the level of perceptual load in a concurrent visual task. Participants performed a visual search task of either low or high perceptual load, while also responding to the presence or absence of a brief vibration delivered simultaneously to either the left or the right hand (50% of trials). Detection sensitivity to the clearly noticeable tactile stimulus was reduced under high (vs. low) visual perceptual load. These findings provide the first robust demonstration of "inattentional numbness," as well as demonstrating that this phenomenon can be induced by concurrent visual perceptual load.
Affiliation(s)
- Sandra Murphy
- Department of Psychology, Royal Holloway, University of London
- Polly Dalton
- Department of Psychology, Royal Holloway, University of London
33
Sohoglu E, Chait M. Neural dynamics of change detection in crowded acoustic scenes. Neuroimage 2016; 126:164-72. [PMID: 26631816 PMCID: PMC4739509 DOI: 10.1016/j.neuroimage.2015.11.050]
Abstract
Two key questions concerning change detection in crowded acoustic environments are the extent to which cortical processing is specialized for different forms of acoustic change and when in the time-course of cortical processing neural activity becomes predictive of behavioral outcomes. Here, we address these issues by using magnetoencephalography (MEG) to probe the cortical dynamics of change detection in ongoing acoustic scenes containing as many as ten concurrent sources. Each source was formed of a sequence of tone pips with a unique carrier frequency and temporal modulation pattern, designed to mimic the spectrotemporal structure of natural sounds. Our results show that listeners are more accurate and quicker to detect the appearance (than disappearance) of an auditory source in the ongoing scene. Underpinning this behavioral asymmetry are change-evoked responses differing not only in magnitude and latency, but also in their spatial patterns. We find that even the earliest (~50 ms) cortical response to change is predictive of behavioral outcomes (detection times), consistent with the hypothesized role of local neural transients in supporting change detection.
Affiliation(s)
- Ediz Sohoglu
- UCL Ear Institute, 332 Gray's Inn Road, London WC1X 8EE, UK.
- Maria Chait
- UCL Ear Institute, 332 Gray's Inn Road, London WC1X 8EE, UK.
34
Cross-modal perceptual load: the impact of modality and individual differences. Exp Brain Res 2015; 234:1279-91. [DOI: 10.1007/s00221-015-4517-0]
35
Masutomi K, Barascud N, Kashino M, McDermott JH, Chait M. Sound segregation via embedded repetition is robust to inattention. J Exp Psychol Hum Percept Perform 2015; 42:386-400. [PMID: 26480248 PMCID: PMC4763252 DOI: 10.1037/xhp0000147]
Abstract
The segregation of sound sources from the mixture of sounds that enters the ear is a core capacity of human hearing, but the extent to which this process is dependent on attention remains unclear. This study investigated the effect of attention on the ability to segregate sounds via repetition. We utilized a dual-task design in which stimuli to be segregated were presented along with stimuli for a "decoy" task that required continuous monitoring. The task to assess segregation presented a target sound 10 times in a row, each time concurrent with a different distractor sound. McDermott, Wrobleski, and Oxenham (2011) demonstrated that repetition causes the target sound to be segregated from the distractors. Segregation was queried by asking listeners whether a subsequent probe sound was identical to the target. A control task presented similar stimuli but probed discrimination without engaging segregation processes. We present results from 3 different decoy tasks: a visual multiple object tracking task, a rapid serial visual presentation (RSVP) digit encoding task, and a demanding auditory monitoring task. Load was manipulated by using high- and low-demand versions of each decoy task. The data provide converging evidence of a small effect of attention that is nonspecific, in that it affected the segregation and control tasks to a similar extent. In all cases, segregation performance remained high despite the presence of a concurrent, objectively demanding decoy task. The results suggest that repetition-based segregation is robust to inattention.
Affiliation(s)
- Keiko Masutomi
- Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology
- Makio Kashino
- Interdisciplinary Graduate School of Science and Engineering, Tokyo Institute of Technology
- Josh H McDermott
- Department of Brain and Cognitive Sciences, Massachusetts Institute of Technology
36
Giraudet L, St-Louis ME, Scannella S, Causse M. P300 event-related potential as an indicator of inattentional deafness? PLoS One 2015; 10:e0118556. [PMID: 25714746 PMCID: PMC4340620 DOI: 10.1371/journal.pone.0118556]
Abstract
An analysis of airplane accidents reveals that pilots sometimes simply fail to react to critical auditory alerts. This inability of an auditory stimulus to reach consciousness has been termed inattentional deafness. Recent data from the literature suggest that tasks involving high cognitive load consume most of the available attentional capacity, leaving little or none for processing unexpected information. In addition, there is a growing body of evidence for a shared attentional capacity between vision and hearing. In this context, the abundant information in modern cockpits is likely to produce inattentional deafness. We investigated this hypothesis by combining electroencephalographic (EEG) measurements with an ecological aviation task performed under contextual variation of the cognitive load (high or low), including an alarm detection task. Two different audio tones were played: standard tones and deviant tones. Participants were instructed to ignore standard tones and to report deviant tones using a response pad. More than 31% of the deviant tones were not detected in the high load condition. Analysis of the EEG measurements showed a drastic reduction of the auditory P300 amplitude concomitant with this behavioral effect, whereas the N100 component was not affected. We suggest that these behavioral and electrophysiological results provide new insight into why pilots may fail to react to critical auditory information. Relevant applications include the prevention of alarm omission, mental workload measurement, and enhanced warning design.
Affiliation(s)
- Mickaël Causse
- DMIA, ISAE, Université de Toulouse, Toulouse, 31055, France
37
Abstract
High perceptual load in a task is known to reduce the visual perception of unattended items (e.g., Lavie, Beck, & Konstantinou, 2014). However, it remains an open question whether perceptual load in one modality (e.g., vision) can affect the detection of stimuli in another modality (e.g., hearing). We report four experiments that establish that high visual perceptual load leads to reduced detection sensitivity in hearing. Participants were requested to detect a tone that was presented during performance of a visual search task of either low or high perceptual load (varied through item similarity). The findings revealed that auditory detection sensitivity was consistently reduced with higher load, and that this effect persisted even when the auditory detection response was made first (before the search response) and when the auditory stimulus was highly expected (50% present). These findings demonstrate a phenomenon of load-induced deafness and provide evidence for shared attentional capacity across vision and hearing.
Affiliation(s)
- Dana Raveh
- Institute of Cognitive Neuroscience, University College London, London, United Kingdom
38
Auditory attentional capture: implicit and explicit approaches. Psychol Res 2014; 78:313-20. [PMID: 24643575 DOI: 10.1007/s00426-014-0557-5]
Abstract
The extent to which distracting items capture attention despite being irrelevant to the task at hand can be measured either implicitly or explicitly (e.g., Simons, Trends Cogn Sci 4:147-155, 2000). Implicit approaches include the standard attentional capture paradigm in which distraction is measured in terms of reaction time and/or accuracy costs within a focal task in the presence (vs. absence) of a task-irrelevant distractor. Explicit measures include the inattention paradigm in which people are asked directly about their noticing of an unexpected task-irrelevant item. Although the processes of attentional capture have been studied extensively using both approaches in the visual domain, there is much less research on similar processes as they may operate within audition, and the research that does exist in the auditory domain has tended to focus exclusively on either an explicit or an implicit approach. This paper provides an overview of recent research on auditory attentional capture, integrating the key conclusions that may be drawn from both methodological approaches.
39
Amaral AA, Langers DRM. The relevance of task-irrelevant sounds: hemispheric lateralization and interactions with task-relevant streams. Front Neurosci 2013; 7:264. [PMID: 24409115 PMCID: PMC3873511 DOI: 10.3389/fnins.2013.00264]
Abstract
The effect of unattended task-irrelevant auditory stimuli in the context of an auditory task is not well understood. Using human functional magnetic resonance imaging (fMRI), we compared blood oxygenation level dependent (BOLD) signal changes resulting from monotic task-irrelevant stimulation, monotic task-relevant stimulation, and dichotic stimulation with an attended task-relevant stream to one ear and an unattended task-irrelevant stream to the other ear simultaneously. We found strong bilateral BOLD signal changes in the auditory cortex (AC) resulting from monotic stimulation in a passive listening condition. Consistent with previous work, these responses were largest on the side contralateral to stimulation. AC responses to the unattended (task-irrelevant) sounds were preferentially contralateral and strongest for the most difficult condition. Stronger bilateral AC responses occurred during monotic passive listening than to an unattended stream presented in a dichotic condition, with attention focused on one ear. Additionally, the visual cortex showed negative responses compared to the baseline in all stimulus conditions, including passive listening. Our results suggest that during dichotic listening, with attention focused on one ear, (1) the contralateral and the ipsilateral auditory pathways are suppressively interacting; and (2) cross-modal inhibition occurs during purely acoustic stimulation. These findings support the existence of response suppressions within and between modalities in the presence of competing interfering stimuli.
Affiliation(s)
- Ana A Amaral
- International Neuroscience Doctoral Programme, Champalimaud Neuroscience Programme, Champalimaud Centre for the Unknown, Lisbon, Portugal; Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Dave R M Langers
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, Netherlands; National Institute for Health Research, Nottingham Hearing Biomedical Research Unit, School of Medicine, University of Nottingham, Nottingham, UK