1. Richardson ML, Luo J, Zeng FG. Attention-Modulated Cortical Responses as a Biomarker for Tinnitus. Brain Sci 2024;14:421. [PMID: 38790400; PMCID: PMC11118879; DOI: 10.3390/brainsci14050421]
Abstract
Attention plays an important role not only in the awareness and perception of tinnitus but also in its interactions with external sounds. Recent evidence suggests that attention is heightened in the tinnitus brain, likely as a result of relatively local cortical changes specific to deafferentation sites or global changes that help maintain normal cognitive capabilities in individuals with hearing loss. However, most electrophysiological studies have used passive listening paradigms to probe the tinnitus brain and have produced mixed results in terms of finding a distinctive biomarker for tinnitus. Here, we designed a selective attention task in which human adults attended to one of two interleaved tonal (500 Hz and 5 kHz) sequences. In total, 16 tinnitus (5 females) and 13 age- and hearing-matched control (8 females) subjects participated in the study, with the tinnitus subjects matching their tinnitus pitch to 5.4 kHz on average (range = 1.9-10.8 kHz). Cortical responses were recorded in both passive and attentive listening conditions, producing no differences in P1, N1, and P2 between the tinnitus and control subjects under any condition. However, a different pattern of results emerged when the difference between the attended and unattended responses was examined. This attention-modulated cortical response was significantly greater in the tinnitus than in the control subjects: 3.9-times greater for N1 at 5 kHz (95% CI: 2.9 to 5.0, p = 0.007, ηp2 = 0.24) and 3.0-times greater for P2 at 500 Hz (95% CI: 1.9 to 4.5, p = 0.026, ηp2 = 0.17). We interpreted the greater N1 modulation as reflecting local neural changes specific to the tinnitus frequency and the greater P2 modulation as reflecting global changes related to hearing loss. These two cortical measures were used to differentiate between the tinnitus and control subjects, producing 83.3% sensitivity and 76.9% specificity (AUC = 0.81, p = 0.006). These results suggest that the tinnitus brain is more plastic than that of matched non-tinnitus controls and that the attention-modulated cortical response can be developed into a clinically meaningful biomarker for tinnitus.
Affiliations
- Matthew L. Richardson
  - Department of Otolaryngology—Head and Neck Surgery, University of California at Irvine, Irvine, CA 92697, USA
  - Center for Hearing Research, University of California at Irvine, Irvine, CA 92697, USA
- Jiaxin Luo
  - Center for Hearing Research, University of California at Irvine, Irvine, CA 92697, USA
  - Department of Biomedical Engineering, University of California at Irvine, Irvine, CA 92697, USA
- Fan-Gang Zeng
  - Department of Otolaryngology—Head and Neck Surgery, University of California at Irvine, Irvine, CA 92697, USA
  - Center for Hearing Research, University of California at Irvine, Irvine, CA 92697, USA
  - Department of Biomedical Engineering, University of California at Irvine, Irvine, CA 92697, USA
  - Departments of Anatomy and Neurobiology and Cognitive Sciences, University of California at Irvine, Irvine, CA 92697, USA
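The sensitivity, specificity, and AUC figures quoted in this abstract are standard two-group classification metrics. The following sketch (not the authors' analysis; all scores are made up for illustration) shows how they are computed from a single decision score such as an attention-modulated cortical response:

```python
# Illustrative only: AUC via the Mann-Whitney rank statistic, plus
# sensitivity/specificity at a fixed threshold. Scores are hypothetical.

def auc(pos, neg):
    """Probability that a random positive scores above a random negative
    (ties count 0.5); this equals the area under the ROC curve."""
    wins = sum(1.0 if p > n else 0.5 if p == n else 0.0
               for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

def sens_spec(pos, neg, threshold):
    """Sensitivity and specificity when scores >= threshold are called positive."""
    sens = sum(p >= threshold for p in pos) / len(pos)
    spec = sum(n < threshold for n in neg) / len(neg)
    return sens, spec

# Hypothetical attention-modulation scores (e.g., attended-minus-unattended N1)
tinnitus = [3.1, 2.7, 4.0, 1.2, 3.5, 2.9]   # "positives"
controls = [0.8, 1.5, 0.9, 2.1, 1.1]        # "negatives"

print(auc(tinnitus, controls))
print(sens_spec(tinnitus, controls, 2.5))
```

In practice the threshold is usually chosen from the ROC curve (e.g., the point maximizing sensitivity + specificity), which is presumably how the paper's 83.3%/76.9% operating point was selected.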
2. Wang X, Nie S, Wen Y, Zhao Z, Li J, Wang N, Zhang J. Age-related differences in auditory spatial processing revealed by acoustic change complex. Front Hum Neurosci 2024;18:1342931. [PMID: 38681742; PMCID: PMC11045960; DOI: 10.3389/fnhum.2024.1342931]
Abstract
Objectives: Auditory spatial processing abilities mature throughout childhood and degenerate in older adults. This study aimed to compare differences in onset cortical auditory evoked potentials (CAEPs) and location-evoked acoustic change complex (ACC) responses among children, adults, and the elderly, and to investigate the impact of aging and development on ACC responses. Design: One hundred and seventeen people were recruited for the study, including 57 typically developed children, 30 adults, and 30 elderly adults. The onset-CAEP evoked by white noise and the ACC evoked by sequential changes in azimuth were recorded. Latencies and amplitudes as a function of azimuth were analyzed using analysis of variance, Pearson correlation analysis, and a multiple linear regression model. Results: The ACC N1'-P2' amplitudes and latencies in adults, P1'-N1' amplitudes in children, and N1' amplitudes and latencies in the elderly were correlated with the angle of the shift. The N1'-P2' and P2' amplitudes decreased in the elderly compared to adults. In children, the ACC P1'-N1' responses gradually differentiated into the P1'-N1'-P2' complex. Multiple regression analysis showed that N1'-P2' amplitudes (R2 = 0.33) and P2' latencies (R2 = 0.18) were the two most predictive variables in adults, while in the elderly, N1' latencies (R2 = 0.26) explained most of the variance. Although the amplitudes of the onset-CAEP differed at some angles, they could not predict angle changes as effectively as ACC responses. Conclusion: The location-evoked ACC responses varied among children, adults, and the elderly. The N1'-P2' amplitudes and P2' latencies in adults and the N1' latencies in the elderly explained most of the variance in changes of spatial position. Differentiation of the N1' waveform was observed in children. Further research should be conducted across all age groups, along with behavioral assessments, to confirm the relationship between aging and immaturity in objective ACC responses and poorer subjective spatial performance. Significance: ACCs evoked by location changes were assessed in adults, children, and the elderly to explore the impact of aging and development on these responses.
Affiliations
- Ningyu Wang
  - Department of Otolaryngology-Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Juan Zhang
  - Department of Otolaryngology-Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
3. Wikman P, Salmela V, Sjöblom E, Leminen M, Laine M, Alho K. Attention to audiovisual speech shapes neural processing through feedback-feedforward loops between different nodes of the speech network. PLoS Biol 2024;22:e3002534. [PMID: 38466713; PMCID: PMC10957087; DOI: 10.1371/journal.pbio.3002534]
Abstract
Selective attention-related top-down modulation plays a significant role in separating relevant speech from irrelevant background speech when the vocal attributes separating concurrent speakers are small and continuously evolving. Electrophysiological studies have shown that such top-down modulation enhances neural tracking of attended speech. Yet the specific cortical regions involved remain unclear due to the limited spatial resolution of most electrophysiological techniques. To overcome these limitations, we collected both electroencephalography (EEG; high temporal resolution) and functional magnetic resonance imaging (fMRI; high spatial resolution) data while human participants selectively attended to speakers in audiovisual scenes containing overlapping cocktail party speech. To utilise the advantages of the respective techniques, we analysed neural tracking of speech using the EEG data and performed representational dissimilarity-based EEG-fMRI fusion. We observed that attention enhanced neural tracking and modulated EEG correlates throughout the latencies studied. Further, attention-related enhancement of neural tracking fluctuated in predictable temporal profiles. We discuss how such temporal dynamics could arise from a combination of interactions between attention and prediction as well as plastic properties of the auditory cortex. EEG-fMRI fusion revealed attention-related iterative feedforward-feedback loops between hierarchically organised nodes of the ventral auditory object-related processing stream. Our findings support models in which attention facilitates dynamic neural changes in the auditory cortex, ultimately aiding discrimination of relevant sounds from irrelevant ones while conserving neural resources.
Affiliations
- Patrik Wikman
  - Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
  - Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
- Viljami Salmela
  - Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
  - Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
- Eetu Sjöblom
  - Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
- Miika Leminen
  - Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
  - AI and Analytics Unit, Helsinki University Hospital, Helsinki, Finland
- Matti Laine
  - Department of Psychology, Åbo Akademi University, Turku, Finland
- Kimmo Alho
  - Department of Psychology and Logopedics, University of Helsinki, Helsinki, Finland
  - Advanced Magnetic Imaging Centre, Aalto NeuroImaging, Aalto University, Espoo, Finland
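The representational dissimilarity-based EEG-fMRI fusion mentioned in this abstract can be illustrated with a minimal, generic sketch (this is not the authors' pipeline; all data below are synthetic): build a representational dissimilarity matrix (RDM) over conditions for each modality, then correlate their lower triangles. A high correlation suggests a region carries the representational structure seen at a given EEG latency.

```python
# Generic RDM-fusion sketch with synthetic data; not the authors' analysis.
import numpy as np

def rdm(patterns):
    """Condition-by-condition dissimilarity: 1 - Pearson r between patterns."""
    return 1.0 - np.corrcoef(patterns)

def fusion_score(patterns_a, patterns_b):
    """Correlate the lower triangles of the two RDMs."""
    a, b = rdm(patterns_a), rdm(patterns_b)
    idx = np.tril_indices(a.shape[0], k=-1)
    return np.corrcoef(a[idx], b[idx])[0, 1]

rng = np.random.default_rng(0)
truth = rng.standard_normal((8, 50))              # 8 conditions x 50 features
eeg = truth + 0.1 * rng.standard_normal((8, 50))  # noisy copy: shared geometry
fmri_same = truth + 0.1 * rng.standard_normal((8, 50))   # region sharing the geometry
fmri_other = rng.standard_normal((8, 50))                # unrelated region

print(fusion_score(eeg, fmri_same))   # high: shared representational structure
print(fusion_score(eeg, fmri_other))  # near zero: unrelated structure
```

Repeating this comparison across EEG time points and fMRI regions yields the kind of spatiotemporal fusion map the study describes.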
4. Makov S, Pinto D, Har-Shai Yahav P, Miller LM, Zion Golumbic E. "Unattended, distracting or irrelevant": Theoretical implications of terminological choices in auditory selective attention research. Cognition 2023;231:105313. [PMID: 36344304; DOI: 10.1016/j.cognition.2022.105313]
Abstract
For seventy years, auditory selective attention research has focused on studying the cognitive mechanisms of prioritizing the processing of a 'main' task-relevant stimulus in the presence of 'other' stimuli. However, a closer look at this body of literature reveals deep empirical inconsistencies and theoretical confusion regarding the extent to which this 'other' stimulus is processed. We argue that many key debates regarding attention arise, at least in part, from inappropriate terminological choices for experimental variables that may not accurately map onto the cognitive constructs they are meant to describe. Here we critically review the most common or disruptive terminological ambiguities, differentiate between methodology-based and theory-derived terms, and unpack the theoretical assumptions underlying different terminological choices. In particular, we offer an in-depth analysis of the terms 'unattended' and 'distractor' and demonstrate how their use can lead to conflicting theoretical inferences. We also offer a framework for thinking about terminology in a more productive and precise way, in the hope of fostering more fruitful debates and promoting more nuanced and accurate cognitive models of selective attention.
Affiliations
- Shiri Makov
  - The Gonda Multidisciplinary Center for Brain Research, Bar Ilan University, Israel
- Danna Pinto
  - The Gonda Multidisciplinary Center for Brain Research, Bar Ilan University, Israel
- Paz Har-Shai Yahav
  - The Gonda Multidisciplinary Center for Brain Research, Bar Ilan University, Israel
- Lee M Miller
  - The Center for Mind and Brain, University of California, Davis, CA, United States of America
  - Department of Neurobiology, Physiology, & Behavior, University of California, Davis, CA, United States of America
  - Department of Otolaryngology / Head and Neck Surgery, University of California, Davis, CA, United States of America
- Elana Zion Golumbic
  - The Gonda Multidisciplinary Center for Brain Research, Bar Ilan University, Israel
5. Ogino M, Hamada N, Mitsukura Y. Simultaneous multiple-stimulus auditory brain-computer interface with semi-supervised learning and prior probability distribution tuning. J Neural Eng 2022;19. [PMID: 36317357; DOI: 10.1088/1741-2552/ac9edd]
Abstract
Objective. Auditory brain-computer interfaces (BCIs) enable users to select commands based on the brain activity elicited by auditory stimuli. However, existing auditory BCI paradigms cannot increase the number of available commands without decreasing the selection speed, because each stimulus needs to be presented independently and sequentially under the standard oddball paradigm. To solve this problem, we propose a double-stimulus paradigm that simultaneously presents multiple auditory stimuli. Approach. As an addition to an existing auditory BCI paradigm, the most discriminable sound was chosen following a subjective assessment. The new sound was located on the right-hand side and presented simultaneously with an existing sound from the left-hand side. A total of six sounds were used for implementing the auditory BCI with a 6 × 6 letter matrix. We employed semi-supervised learning (SSL) and prior probability distribution tuning to improve the accuracy of the paradigm. The SSL method involved updating the classifier weights, and the prior probability distributions were adjusted using three types of distributions: uniform, empirical, and extended empirical (e-empirical). Performance was evaluated based on BCI accuracy and information transfer rate (ITR). Main results. The double-stimulus paradigm yielded a BCI accuracy of 67.89 ± 11.46% and an ITR of 2.67 ± 1.09 bits min-1 in the absence of SSL and with the uniform distribution. The proposed combination of SSL with the e-empirical distribution improved the BCI accuracy and ITR to 74.59 ± 12.12% and 3.37 ± 1.27 bits min-1, respectively. The event-related potential analysis revealed that contralateral and right-hemispheric dominances contributed to the BCI performance improvement. Significance. Our study demonstrates that a BCI based on multiple simultaneous auditory stimuli, incorporating SSL and the e-empirical prior distribution, can increase the number of commands without sacrificing typing speed, while maintaining an acceptable level of accuracy.
Affiliations
- Mikito Ogino
  - Graduate School of Science and Technology, Keio University, Yokohama, Kanagawa, Japan
- Nozomu Hamada
  - Faculty of Science and Technology, Keio University, Yokohama, Kanagawa, Japan
- Yasue Mitsukura
  - Faculty of Science and Technology, Keio University, Yokohama, Kanagawa, Japan
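The ITR figures quoted in this abstract (bits per minute) are conventionally computed with the Wolpaw formula, sketched below. This is the standard convention in the BCI literature; the paper's exact computation may differ, and the selection rate used here is a placeholder. N is the number of selectable commands (36 for a 6 × 6 letter matrix) and P the selection accuracy.

```python
# Standard Wolpaw ITR formula; selection rate and example values are illustrative.
import math

def wolpaw_bits_per_selection(p, n):
    """Bits conveyed by one selection among n commands at accuracy p,
    assuming errors are spread evenly over the n - 1 wrong commands."""
    if p >= 1.0:
        return math.log2(n)
    if p <= 0.0:
        return 0.0
    return (math.log2(n) + p * math.log2(p)
            + (1 - p) * math.log2((1 - p) / (n - 1)))

def itr_bits_per_min(p, n, selections_per_min):
    """Scale bits per selection by the (paradigm-dependent) selection rate."""
    return wolpaw_bits_per_selection(p, n) * selections_per_min

# e.g., 74.59% accuracy on a 36-command matrix gives about 3.05 bits/selection
print(wolpaw_bits_per_selection(0.7459, 36))
```

Note that bits per selection alone does not determine bits per minute: the reported 3.37 bits min-1 also reflects how long each stimulus sequence takes to present.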
6. Kachlicka M, Laffere A, Dick F, Tierney A. Slow phase-locked modulations support selective attention to sound. Neuroimage 2022;252:119024. [PMID: 35231629; PMCID: PMC9133470; DOI: 10.1016/j.neuroimage.2022.119024]
Abstract
To make sense of complex soundscapes, listeners must select and attend to task-relevant streams while ignoring uninformative sounds. One possible neural mechanism underlying this process is alignment of endogenous oscillations with the temporal structure of the target sound stream. Such a mechanism has been suggested to mediate attentional modulation of neural phase-locking to the rhythms of attended sounds. However, such modulations are also compatible with an alternative framework, in which attention acts as a filter that enhances exogenously driven neural auditory responses. Here we attempted to test several predictions arising from the oscillatory account by playing two tone streams that varied across conditions in tone duration and presentation rate; participants attended to one stream or listened passively. Attentional modulation of the evoked waveform was roughly sinusoidal and scaled with presentation rate, whereas the passive response did not scale with rate. However, there was only limited evidence for continuation of the modulations through the silence between sequences. These results suggest that attentionally driven changes in phase alignment reflect synchronization of slow endogenous activity with the temporal structure of attended stimuli.
Affiliations
- Magdalena Kachlicka
  - Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, England
- Aeron Laffere
  - Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, England
- Fred Dick
  - Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, England
  - Division of Psychology & Language Sciences, UCL, Gower Street, London WC1E 6BT, England
- Adam Tierney
  - Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, England
7. What and where in the auditory systems of sighted and early blind individuals: Evidence from representational similarity analysis. J Neurol Sci 2020;413:116805. [PMID: 32259708; DOI: 10.1016/j.jns.2020.116805]
Abstract
Separate ventral and dorsal streams in the auditory system have been proposed to process sound identification and localization, respectively. Despite the popularity of the dual-pathway model, it remains controversial how much independence the two neural pathways enjoy and whether visual experience can influence this distinct cortical organizational scheme. In this study, representational similarity analysis (RSA) was used to explore the functional roles of distinct cortical regions that lie within either the ventral or dorsal auditory stream of sighted and early blind (EB) participants. We found functionally segregated auditory networks in both the sighted and EB groups, where the anterior superior temporal gyrus (aSTG) and inferior frontal junction (IFJ) were more related to sound identification, while the posterior superior temporal gyrus (pSTG) and inferior parietal lobe (IPL) preferred sound localization. The findings indicate that visual experience may not influence this functional dissociation and that the cortex of the human brain may be organized according to task-specific, modality-independent strategies. Meanwhile, partial overlap of spatial and non-spatial auditory information processing was observed, illustrating the existence of interaction between the two auditory streams. Furthermore, we investigated the effect of visual experience on the neural bases of auditory perception and observed cortical reorganization in EB participants, in whom the middle occipital gyrus was recruited to process auditory information. Our findings delineate the distinct cortical networks that abstractly encode sound identification and localization and confirm the existence of interaction between them from a multivariate perspective. Furthermore, the results suggest that visual experience might not impact the functional specialization of auditory regions.
8. Object-based attention in complex, naturalistic auditory streams. Sci Rep 2019;9:2854. [PMID: 30814547; PMCID: PMC6393668; DOI: 10.1038/s41598-019-39166-6]
Abstract
In vision, objects have been described as the 'units' on which non-spatial attention operates in many natural settings. Here, we test the idea of object-based attention in the auditory domain within ecologically valid auditory scenes, composed of two spatially and temporally overlapping sound streams (speech signal vs. environmental soundscapes in Experiment 1 and two speech signals in Experiment 2). Top-down attention was directed to one or the other auditory stream by a non-spatial cue. To test for high-level, object-based attention effects we introduce an auditory repetition detection task in which participants have to detect brief repetitions of auditory objects, ruling out any possible confounds with spatial or feature-based attention. The participants' responses were significantly faster and more accurate in the valid cue condition compared to the invalid cue condition, indicating a robust cue-validity effect of high-level, object-based auditory attention.
9. Rämä P, Leminen A, Koskenoja-Vainikka S, Leminen M, Alho K. Effect of language experience on selective auditory attention: An event-related potential study. Int J Psychophysiol 2018. [DOI: 10.1016/j.ijpsycho.2018.03.007]
10. Neural correlates of distraction and conflict resolution for nonverbal auditory events. Sci Rep 2017;7:1595. [PMID: 28487563; PMCID: PMC5431653; DOI: 10.1038/s41598-017-00811-7]
Abstract
In everyday situations, auditory selective attention requires listeners to suppress task-irrelevant stimuli and to resolve conflicting information in order to make appropriate goal-directed decisions. Traditionally, these two processes (i.e., distractor suppression and conflict resolution) have been studied separately. In the present study, we measured neuroelectric activity while participants performed a new paradigm in which both processes are quantified. In separate blocks of trials, participants indicated whether two sequential tones shared the same pitch or location, depending on the block's instruction. For the distraction measure, a positive component peaking at ~250 ms was found, a "distraction positivity". Brain electrical source analysis of this component suggests different generators when listeners attended to frequency and to location, with the distraction by location more posterior than the distraction by frequency, providing support for the dual-pathway theory. For the conflict resolution measure, a negative frontocentral component (270-450 ms) was found, which showed similarities with those of prior studies on auditory and visual conflict resolution tasks. The timing and distribution are consistent with two distinct neural processes, with suppression of task-irrelevant information occurring before conflict resolution. This new paradigm may prove useful in clinical populations to assess impairments in filtering out task-irrelevant information and/or resolving conflicting information.
11. Binder M. Neural correlates of audiovisual temporal processing – Comparison of temporal order and simultaneity judgments. Neuroscience 2015;300:432-47. [DOI: 10.1016/j.neuroscience.2015.05.011]
12. Gamble ML, Woldorff MG. Rapid Context-based Identification of Target Sounds in an Auditory Scene. J Cogn Neurosci 2015;27:1675-84. [PMID: 25848684; DOI: 10.1162/jocn_a_00814]
Abstract
To make sense of our dynamic and complex auditory environment, we must be able to parse the sensory input into usable parts and pick out relevant sounds from all the potentially distracting auditory information. Although it is unclear exactly how we accomplish this difficult task, Gamble and Woldorff [Gamble, M. L., & Woldorff, M. G. The temporal cascade of neural processes underlying target detection and attentional processing during auditory search. Cerebral Cortex, 2014] recently reported an ERP study of an auditory target-search task in a temporally and spatially distributed, rapidly presented auditory scene. They reported an early, differential, bilateral activation (beginning at 60 msec) between feature-deviating target stimuli and physically equivalent feature-deviating nontargets, reflecting a rapid target-detection process. This was followed shortly thereafter (at 130 msec) by the lateralized N2ac ERP activation, which reflects the focusing of auditory spatial attention toward the target sound and parallels the attentional-shifting processes widely studied in vision. Here we directly examined the early, bilateral, target-selective effect to better understand its nature and functional role. Participants listened to midline-presented sounds that included target and nontarget stimuli that were randomly either embedded in a brief rapid stream or presented alone. The results indicate that this early bilateral effect results from a template for the target that utilizes its feature deviancy within a stream to enable rapid identification. Moreover, an individual-differences analysis showed that the size of this effect was larger for participants with faster RTs. The findings support the hypothesis that our auditory attentional systems can implement and utilize a context-based relational template for a target sound, making use of additional auditory information in the environment when needing to rapidly detect a relevant sound.
13. Bidet-Caulet A, Buchanan KG, Viswanath H, Black J, Scabini D, Bonnet-Brilhault F, Knight RT. Impaired Facilitatory Mechanisms of Auditory Attention After Damage of the Lateral Prefrontal Cortex. Cereb Cortex 2014;25:4126-34. [PMID: 24925773; DOI: 10.1093/cercor/bhu131]
Abstract
There is growing evidence that auditory selective attention operates via distinct facilitatory and inhibitory mechanisms enabling selective enhancement and suppression of sound processing, respectively. The lateral prefrontal cortex (LPFC) plays a crucial role in the top-down control of selective attention. However, whether the LPFC controls facilitatory, inhibitory, or both attentional mechanisms is unclear. Facilitatory and inhibitory mechanisms were assessed, in patients with LPFC damage, by comparing event-related potentials (ERPs) to attended and ignored sounds with ERPs to these same sounds when attention was equally distributed to all sounds. In control subjects, we observed 2 late frontally distributed ERP components: a transient facilitatory component occurring from 150 to 250 ms after sound onset; and an inhibitory component onsetting at 250 ms. Only the facilitatory component was affected in patients with LPFC damage: this component was absent when attending to sounds delivered in the ear contralateral to the lesion, with the most prominent decreases observed over the damaged brain regions. These findings have 2 important implications: (i) they provide evidence for functionally distinct facilitatory and inhibitory mechanisms supporting late auditory selective attention; (ii) they show that the LPFC is involved in the control of the facilitatory mechanisms of auditory attention.
Affiliations
- Aurélie Bidet-Caulet
  - Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
  - Lyon Neuroscience Research Center, Brain Dynamics and Cognition Team, CRNL, INSERM U1028, CNRS UMR5292, University of Lyon 1, Lyon, France
- Kelly G Buchanan
  - Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
- Humsini Viswanath
  - Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
- Jessica Black
  - Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
- Donatella Scabini
  - Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
- Frédérique Bonnet-Brilhault
  - Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
  - INSERM, UMR930, Université François-Rabelais de Tours, CHRU de Tours, France
- Robert T Knight
  - Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
  - Department of Psychology, University of California, Berkeley, CA, USA
14. Gamble ML, Woldorff MG. The Temporal Cascade of Neural Processes Underlying Target Detection and Attentional Processing During Auditory Search. Cereb Cortex 2014;25:2456-65. [PMID: 24711486; DOI: 10.1093/cercor/bhu047]
Abstract
The posterior visual event-related potential (ERP) component, the N2pc, has been widely used to study lateralized shifts of attention within visual arrays. Recently, Gamble and Luck (2011) reported an auditory analog of this activity (the fronto-central "N2ac"), reflecting the lateralized focusing of attention toward a Target sound among 2 simultaneous auditory stimuli. Here, we directed an electrophysiological approach toward understanding auditory Target search within a more complex auditory environment in which rapidly occurring sounds were distributed across both time and space. Trials consisted of ten 40-ms monaural sounds rapidly presented to the 2 ears: 8 medium-pitch tones and 2 deviant sounds (one high and one low). For each block, one deviant type was designated as the Target, which participants needed to identify within each trial to discriminate its tonal quality. The extracted electrophysiological results included a very early enhancement, starting at approximately 50 ms, of a bilateral negative-polarity auditory brain response to the designated Target Deviant (compared with the Nontarget Deviant), followed at approximately 130 ms by the N2ac activity reflecting the lateralized focusing of attention toward that Target. The results delineate the tightly orchestrated sequence of neural processes underlying the detection of, and focusing of attention toward, Target sounds in complex auditory scenes.
Affiliations
- Marissa L Gamble
  - Center for Cognitive Neuroscience, Duke University, Durham, NC 27708, USA
  - Department of Psychology and Neuroscience, Duke University, Durham, NC, USA
- Marty G Woldorff
  - Center for Cognitive Neuroscience, Duke University, Durham, NC 27708, USA
  - Department of Psychology and Neuroscience, Duke University, Durham, NC, USA
  - Department of Psychiatry, Duke University, Durham, NC, USA
15
|
Mittag M, Inauri K, Huovilainen T, Leminen M, Salo E, Rinne T, Kujala T, Alho K. Attention effects on the processing of task-relevant and task-irrelevant speech sounds and letters. Front Neurosci 2013; 7:231. [PMID: 24348324] [PMCID: PMC3847663] [DOI: 10.3389/fnins.2013.00231]
Abstract
We used event-related brain potentials (ERPs) to study effects of selective attention on the processing of attended and unattended spoken syllables and letters. Participants were presented with syllables randomly occurring in the left or right ear and spoken by different voices, and with a concurrent foveal stream of consonant letters written in darker or lighter fonts. During auditory phonological (AP) and non-phonological tasks, they responded to syllables in a designated ear starting with a vowel and to syllables spoken by female voices, respectively. These syllables occurred infrequently among standard syllables starting with a consonant and spoken by male voices. During visual phonological and non-phonological tasks, they responded to consonant letters with names starting with a vowel and to letters written in dark fonts, respectively. These letters occurred infrequently among standard letters with names starting with a consonant and written in light fonts. To examine genuine effects of attention and task on ERPs not overlapped by ERPs associated with target processing or deviance detection, these effects were studied only in ERPs to auditory and visual standards. During selective listening to syllables in a designated ear, ERPs to the attended syllables were negatively displaced during both phonological and non-phonological auditory tasks. Selective attention to letters elicited an early negative displacement and a subsequent positive displacement (Pd) of ERPs to attended letters, the latter being larger during the visual phonological than the non-phonological task, suggesting a higher demand for attention during the visual phonological task. Active suppression of unattended speech during the AP and non-phonological auditory tasks and during the visual phonological task was suggested by a rejection positivity (RP) to unattended syllables. We also found evidence for suppression of the processing of task-irrelevant visual stimuli in visual ERPs during auditory tasks involving left-ear syllables.
Affiliation(s)
- Maria Mittag
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Karina Inauri
- Division of Cognitive Psychology and Neuropsychology, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Tatu Huovilainen
- Division of Cognitive Psychology and Neuropsychology, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Miika Leminen
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland; Finnish Centre of Excellence in Interdisciplinary Music Research, University of Jyväskylä, Jyväskylä, Finland
- Emma Salo
- Division of Cognitive Psychology and Neuropsychology, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Teemu Rinne
- Division of Cognitive Psychology and Neuropsychology, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Teija Kujala
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland; Cicero Learning Network, University of Helsinki, Helsinki, Finland
- Kimmo Alho
- Division of Cognitive Psychology and Neuropsychology, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland; Helsinki Collegium for Advanced Studies, University of Helsinki, Helsinki, Finland

16
Talja S, Alho K, Rinne T. Source analysis of event-related potentials during pitch discrimination and pitch memory tasks. Brain Topogr 2013; 28:445-58. [PMID: 24043402] [DOI: 10.1007/s10548-013-0307-9]
Abstract
Our previous studies using fMRI have demonstrated that activations in human auditory cortex (AC) are strongly dependent on the characteristics of the task. The present study tested whether source estimation of scalp-recorded event-related potentials (ERPs) can be used to investigate task-dependent AC activations. Subjects were presented with frequency-varying two-part tones during pitch discrimination, pitch n-back memory, and visual tasks identical to our previous fMRI study (Rinne et al., J Neurosci 29:13338-13343, 2009). ERPs and their minimum-norm source estimates in AC were strongly modulated by task at 200-700 ms from tone onset. As in the fMRI study, the pitch discrimination and pitch memory tasks were associated with distinct AC activation patterns. In the pitch discrimination task, increased activity in the anterior AC was detected relatively late at 300-700 ms from tone onset. Therefore, this activity was probably not associated with enhanced pitch processing but rather with the actual discrimination process (comparison between the two parts of tone). Increased activity in more posterior areas associated with the pitch memory task, in turn, occurred at 200-700 ms suggesting that this activity was related to operations on pitch categories after pitch analysis was completed. Finally, decreased activity associated with the pitch memory task occurred at 150-300 ms consistent with the notion that, in the demanding pitch memory task, spectrotemporal analysis is actively halted as soon as category information has been obtained. These results demonstrate that ERP source analysis can be used to complement fMRI to investigate task-dependent activations of human AC.
Affiliation(s)
- Suvi Talja
- Institute of Behavioural Sciences, University of Helsinki, PO Box 9, 00014 Helsinki, Finland

17
Du Y, He Y, Arnott SR, Ross B, Wu X, Li L, Alain C. Rapid tuning of auditory "what" and "where" pathways by training. Cereb Cortex 2013; 25:496-506. [PMID: 24042339] [DOI: 10.1093/cercor/bht251]
Abstract
Behavioral improvement within the first hour of training is commonly explained as procedural learning (i.e., strategy changes resulting from task familiarization). However, it may additionally reflect a rapid adjustment of the perceptual and/or attentional system in a goal-directed task. In support of this latter hypothesis, we show feature-specific gains in performance for groups of participants briefly trained to use either a spectral or spatial difference between 2 vowels presented simultaneously during a vowel identification task. In both groups, the neuromagnetic activity measured during the vowel identification task following training revealed source activity in auditory cortices, prefrontal, inferior parietal, and motor areas. More importantly, the contrast between the 2 groups revealed a striking double dissociation in which listeners trained on spectral or spatial cues showed higher source activity in ventral ("what") and dorsal ("where") brain areas, respectively. These feature-specific effects indicate that brief training can implicitly bias top-down processing to a trained acoustic cue and induce a rapid recalibration of the ventral and dorsal auditory streams during speech segregation and identification.
Affiliation(s)
- Yi Du
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1; Department of Psychology, Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Yu He
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1
- Stephen R Arnott
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1
- Bernhard Ross
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1
- Xihong Wu
- Department of Psychology, Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Liang Li
- Department of Psychology, Speech and Hearing Research Center, Key Laboratory on Machine Perception (Ministry of Education), PKU-IDG/McGovern Institute for Brain Research, Peking University, Beijing 100871, China
- Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada M6A 2E1; Department of Psychology, University of Toronto, Ontario, Canada M8V 2S4

18
Alho K, Rinne T, Herron TJ, Woods DL. Stimulus-dependent activations and attention-related modulations in the auditory cortex: a meta-analysis of fMRI studies. Hear Res 2013; 307:29-41. [PMID: 23938208] [DOI: 10.1016/j.heares.2013.08.001]
Abstract
We meta-analyzed 115 functional magnetic resonance imaging (fMRI) studies reporting auditory-cortex (AC) coordinates for activations related to active and passive processing of pitch and spatial location of non-speech sounds, as well as to active and passive speech and voice processing. We aimed at revealing any systematic differences between AC surface locations of these activations by statistically analyzing the activation loci using the open-source Matlab toolbox VAMCA (Visualization and Meta-analysis on Cortical Anatomy). AC activations associated with pitch processing (e.g., active or passive listening to tones with a varying vs. fixed pitch) had median loci in the middle superior temporal gyrus (STG), lateral to Heschl's gyrus. However, median loci of activations due to the processing of infrequent pitch changes in a tone stream were centered in the STG or planum temporale (PT), significantly posterior to the median loci for other types of pitch processing. Median loci of attention-related modulations due to focused attention to pitch (e.g., attending selectively to low or high tones delivered in concurrent sequences) were, in turn, centered in the STG or superior temporal sulcus (STS), posterior to the median loci for passive pitch processing. Activations due to spatial processing were centered in the posterior STG or PT, significantly posterior to pitch processing loci (processing of infrequent pitch changes excluded). In the right-hemisphere AC, the median locus of spatial attention-related modulations was in the STS, significantly inferior to the median locus for passive spatial processing. Activations associated with speech processing and those associated with voice processing had indistinguishable median loci at the border of mid-STG and mid-STS. Median loci of attention-related modulations due to attention to speech were in the same mid-STG/STS region.
Thus, while attention to the pitch or location of non-speech sounds seems to recruit AC areas less involved in passive pitch or location processing, focused attention to speech predominantly enhances activations in regions that already respond to human vocalizations during passive listening. This suggests that distinct attention mechanisms might be engaged by attention to speech and attention to more elemental auditory features such as tone pitch or location. This article is part of a Special Issue entitled Human Auditory Neuroimaging.
Affiliation(s)
- Kimmo Alho
- Helsinki Collegium for Advanced Studies, University of Helsinki, PO Box 4, FI-00014 Helsinki, Finland; Institute of Behavioural Sciences, University of Helsinki, PO Box 9, FI-00014 Helsinki, Finland

19
Leung AWS, He Y, Grady CL, Alain C. Age differences in the neuroelectric adaptation to meaningful sounds. PLoS One 2013; 8:e68892. [PMID: 23935900] [PMCID: PMC3723892] [DOI: 10.1371/journal.pone.0068892]
Abstract
Much of what we know regarding the effect of stimulus repetition on neuroelectric adaptation comes from studies using artificially produced pure tones or harmonic complex sounds. Little is known about the neural processes associated with the representation of everyday sounds and how these may be affected by aging. In this study, we used real-life, meaningful sounds presented at various azimuth positions and found that auditory evoked responses peaking at about 100 and 180 ms after sound onset decreased in amplitude with stimulus repetition. This neural adaptation was greater in young than in older adults and was more pronounced when the same sound was repeated at the same location. Moreover, the P2 waves showed differential patterns of domain-specific adaptation when location and identity were repeated among young adults. Background noise decreased ERP amplitudes and modulated the magnitude of repetition effects on both the N1 and P2 amplitude, and the effects were comparable in young and older adults. These findings reveal an age-related difference in the neural processes associated with adaptation to meaningful sounds, which may relate to older adults' difficulty in ignoring task-irrelevant stimuli.
Affiliation(s)
- Ada W. S. Leung
- Department of Occupational Therapy and Centre for Neuroscience, University of Alberta, Edmonton, Canada
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
- Yu He
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
- Cheryl L. Grady
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Ontario, Canada
- Department of Psychiatry, University of Toronto, Ontario, Canada
- Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, Ontario, Canada
- Department of Psychology, University of Toronto, Ontario, Canada
- Institute of Medical Sciences, University of Toronto, Ontario, Canada

20
Leung S, Croft R, McKenzie R, Iskra S, Silber B, Cooper N, O'Neill B, Cropley V, Diaz-Trujillo A, Hamblin D, Simpson D. Effects of 2G and 3G mobile phones on performance and electrophysiology in adolescents, young adults and older adults. Clin Neurophysiol 2011; 122:2203-16. [DOI: 10.1016/j.clinph.2011.04.006]
21
Chait M, de Cheveigné A, Poeppel D, Simon JZ. Neural dynamics of attending and ignoring in human auditory cortex. Neuropsychologia 2010; 48:3262-71. [PMID: 20633569] [DOI: 10.1016/j.neuropsychologia.2010.07.007]
Abstract
Studies in all sensory modalities have demonstrated amplification of early brain responses to attended signals, but less is known about the processes by which listeners selectively ignore stimuli. Here we use MEG and a new paradigm to dissociate the effects of selectively attending and ignoring in time. Two different tasks were performed successively on the same acoustic stimuli: triplets of tones (A, B, C) with noise-bursts interspersed between the triplets. In the COMPARE task, subjects were instructed to respond when tones A and C were of the same frequency. In the PASSIVE task, they were instructed to respond as fast as possible to noise-bursts. COMPARE requires attending to A and C and actively ignoring tone B, but PASSIVE involves neither attending to nor ignoring the tones. The data were analyzed separately for frontal and auditory-cortical channels to independently address attentional effects on low-level sensory versus putative control processing. We observe attend/ignore effects as early as 100 ms post-stimulus onset in auditory cortex. These appear to be generated by modulation of exogenous (stimulus-driven) sensory evoked activity. Specifically related to ignoring, we demonstrate that active-ignoring-induced input inhibition involves early selection. We identified a sequence of early (<200 ms post-onset) auditory cortical effects, comprised of onset response attenuation and the emergence of an inhibitory response, and provide new, direct evidence that listeners actively ignoring a sound can reduce their stimulus-related activity in auditory cortex by 100 ms after onset when this is required to execute specific behavioral objectives.
22
Bizley JK, Walker KMM. Sensitivity and selectivity of neurons in auditory cortex to the pitch, timbre, and location of sounds. Neuroscientist 2010; 16:453-69. [PMID: 20530254] [DOI: 10.1177/1073858410371009]
Abstract
We are able to rapidly recognize and localize the many sounds in our environment. We can describe any of these sounds in terms of various independent "features" such as their loudness, pitch, or position in space. However, we still know surprisingly little about how neurons in the auditory brain, specifically the auditory cortex, might form representations of these perceptual characteristics from the information that the ear provides about sound acoustics. In this article, the authors examine evidence that the auditory cortex is necessary for processing the pitch, timbre, and location of sounds, and document how neurons across multiple auditory cortical fields might represent these as trains of action potentials. They conclude by asking whether neurons in different regions of the auditory cortex might not simply be sensitive to each of these three sound features but might instead be selective for one of them. The few studies that have examined neural sensitivity to multiple sound attributes provide only limited support for neural selectivity within auditory cortex. Providing an explanation of the neural basis of feature invariance is thus one of the major challenges for sensory neuroscience in attaining its ultimate goal of understanding how neural firing patterns in the brain give rise to perception.
Affiliation(s)
- Jennifer K Bizley
- Department of Physiology, Anatomy and Genetics, University of Oxford, Oxford, United Kingdom

23
Cortical encoding of pitch: recent results and open questions. Hear Res 2010; 271:74-87. [PMID: 20457240] [PMCID: PMC3098378] [DOI: 10.1016/j.heares.2010.04.015]
Abstract
It is widely appreciated that the key predictor of the pitch of a sound is its periodicity. Neural structures which support pitch perception must therefore be able to reflect the repetition rate of a sound, but this alone is not sufficient. Since pitch is a psychoacoustic property, a putative cortical code for pitch must also be able to account for the relationship between the amount to which a sound is periodic (i.e. its temporal regularity) and the perceived pitch salience, as well as limits in our ability to detect pitch changes or to discriminate rising from falling pitch. Pitch codes must also be robust in the presence of nuisance variables such as loudness or timbre. Here, we review a large body of work on the cortical basis of pitch perception, which illustrates that the distribution of cortical processes that give rise to pitch perception is likely to depend on both the acoustical features and functional relevance of a sound. While previous studies have greatly advanced our understanding, we highlight several open questions regarding the neural basis of pitch perception. These questions can begin to be addressed through a cooperation of investigative efforts across species and experimental techniques, and, critically, by examining the responses of single neurons in behaving animals.
24
Bidet-Caulet A, Mikyska C, Knight RT. Load effects in auditory selective attention: evidence for distinct facilitation and inhibition mechanisms. Neuroimage 2009; 50:277-84. [PMID: 20026231] [DOI: 10.1016/j.neuroimage.2009.12.039]
Abstract
It is unknown whether facilitation and inhibition of stimulus processing represent one or two mechanisms in auditory attention. We performed electrophysiological experiments in humans to address these two competing hypotheses. Participants performed an attention task under low or high memory load. Facilitation and inhibition were measured by recording electrophysiological responses to attended and ignored sounds and comparing them to responses to these same sounds when attention was considered to be equally distributed towards all sounds. We observed two late frontally distributed components: a negative one in response to attended sounds, and a positive one to ignored sounds. These two frontally distributed responses had distinct timing and scalp topographies and were differentially affected by memory load. Taken together, these results provide evidence that attention-mediated top-down control reflects the activity of distinct facilitation and inhibition mechanisms.
Affiliation(s)
- Aurélie Bidet-Caulet
- Helen Wills Neuroscience Institute, University of California, Berkeley, 132 Barker Hall, Berkeley, CA 94720, USA

25
Electrophysiological attention effects in a virtual cocktail-party setting. Brain Res 2009; 1307:78-88. [PMID: 19853586] [DOI: 10.1016/j.brainres.2009.10.044]
Abstract
The selection of one of two concurrent speech messages for comprehension was investigated in healthy young adults in two event-related potential experiments. The stories were presented from virtual locations 30 degrees to the left and right azimuth by convolving the speech message with the appropriate head-related transfer function determined for each individual participant. In addition, task-irrelevant probe stimuli were presented in rapid sequence from the same virtual locations. In experiment 1, phoneme probes (/da/ voiced by the same talkers as the attended and unattended messages) and band-pass filtered noise probes were presented. Phoneme probes coinciding with the attended message gave rise to a fronto-central negativity similar to the Nd attention effect relative to the phoneme probes coinciding with the unattended speech message, whereas noise probes from the attended message's location showed a more positive frontal ERP response compared to probes from the unattended location, resembling the so-called rejection positivity. In experiment 2, phoneme probes (as in experiment 1) and frequency-shifted (+400 Hz) probes were compared. The latter were characterized by a succession of negative and positive components that were modulated by location. The results suggest that at least two different neural mechanisms contribute to stream segregation in a cocktail-party setting: enhanced neural processing of stimuli closely matching the attended message (indexed by the Nd effect) and rejection of stimuli that do not match the attended message at the attended location only (indexed by the rejection positivity).
26
Zimmer U, Macaluso E. Interaural temporal and coherence cues jointly contribute to successful sound movement perception and activation of parietal cortex. Neuroimage 2009; 46:1200-8. [PMID: 19303934] [DOI: 10.1016/j.neuroimage.2009.03.022]
Abstract
The perception of movement in the auditory modality requires dynamic changes in the input that reaches the two ears (e.g. sequential changes of interaural time differences; dynamic ITDs). However, it is still unclear as to what extent these temporal cues interact with other interaural cues to determine successful movement perception, and which brain regions are involved in sound movement processing. Here, we presented trains of white-noise bursts containing either static or dynamic ITDs, and we varied parametrically the level of binaural coherence (BC) of both types of stimuli. Behaviorally, we found that movement discrimination sensitivity decreased with decreasing levels of BC. fMRI analyses highlighted a network of temporal, frontal and parietal regions where activity decreased with decreasing BC. Critically, in the intra-parietal sulcus and the supra-marginal gyrus brain activity decreased with decreasing BC, but only for dynamic-ITD sounds (BC by ITD interaction). Thus, these regions activated selectively when the sounds contained both dynamic ITDs and high levels of BC; i.e. when subjects perceived sound movement. We conclude that sound movement perception requires both dynamic changes of the auditory input and effective sound-source localization, and that parietal cortex utilizes interaural temporal and coherence cues for the successful perception of sound movement.
Affiliation(s)
- U Zimmer
- NeuroImaging Laboratory, Santa Lucia Foundation, Italy