1. Melara RD, Root JC, Edelman JA, Estelle MC, Mohr I, Ahles TA. Effects of Breast Cancer Treatment on Neural Noise: A Longitudinal Design. Arch Clin Neuropsychol 2024:acae066. PMID: 39197121. DOI: 10.1093/arclin/acae066.
Abstract
OBJECTIVE: Cognitive dysfunction has been observed consistently in a subset of breast cancer survivors. Yet the precise neurophysiological origins of cancer-related cognitive decline remain unknown. The current study assessed neural noise (1/f activity in the electroencephalogram [EEG]) in breast cancer survivors as a potential contributor to observed cognitive dysfunction from pre- to post-treatment.
METHODS: We measured EEG in a longitudinal design during performance of the paired-click task and the revised Attention Network Test (ANT-R) to investigate pre- versus post-treatment effects of neural noise in breast cancer patients (n = 20 in paired click; n = 19 in ANT-R) compared with healthy controls (n = 32 in paired click; n = 29 in ANT-R).
RESULTS: In both paradigms, one sensory (paired click) and one cognitive (ANT-R), we found that neural noise was significantly elevated after treatment in patients, remaining constant from pretest to posttest in controls. In the ANT-R, patients responded more slowly than controls on invalid cuing trials. Increased neural noise was associated with poorer alerting and poorer inhibitory control of attention (as measured by behavioral network scores), particularly for patients after treatment.
CONCLUSIONS: The current study is the first to show a deleterious effect of breast cancer and/or cancer treatment on neural noise, pointing to alterations in the relative balance of excitatory and inhibitory synaptic inputs, while also suggesting promising approaches for cognitive rehabilitation.
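The 1/f (aperiodic) EEG measure at the heart of this study can be illustrated in a few lines. The sketch below is not the authors' pipeline (dedicated tools such as specparam/FOOOF are commonly used for this); it simply fits a straight line to the power spectrum in log-log coordinates and recovers the aperiodic exponent from synthetic pink noise.

```python
import numpy as np
from scipy.signal import welch

def aperiodic_exponent(eeg, fs, fmin=2.0, fmax=40.0):
    """Return the 1/f exponent chi of a single-channel power spectrum."""
    freqs, psd = welch(eeg, fs=fs, nperseg=int(2 * fs))
    keep = (freqs >= fmin) & (freqs <= fmax)
    # power ~ 1 / f**chi  =>  log10(power) = -chi * log10(f) + const
    slope, _ = np.polyfit(np.log10(freqs[keep]), np.log10(psd[keep]), 1)
    return -slope

# Sanity check on synthetic pink noise (power ~ 1/f, so chi should be near 1).
rng = np.random.default_rng(0)
fs, n = 250, 250 * 60                        # 60 s at 250 Hz
spectrum = np.fft.rfft(rng.standard_normal(n))
f = np.fft.rfftfreq(n, 1 / fs)
shaping = np.zeros_like(f)
shaping[1:] = f[1:] ** -0.5                  # amplitude ~ f^-0.5 => power ~ 1/f
pink = np.fft.irfft(spectrum * shaping, n)
print(round(aperiodic_exponent(pink, fs), 2))  # close to 1.0
```

A flatter spectrum (smaller exponent) is the "elevated neural noise" direction reported in the abstract.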
Affiliation(s)
- Robert D Melara: Department of Psychology, The City College, City University of New York, 160 Convent Avenue, NAC 7-120, New York, NY 10031, USA
- James C Root: Memorial Sloan Kettering Cancer Center, Department of Psychiatry and Behavioral Science Services, 641 Lexington Avenue, 7th Floor, New York, NY 10022, USA
- Jay A Edelman: Department of Biology, The City College, City University of New York, 160 Convent Avenue, MR 526, New York, NY 10031, USA
- Maria Camilla Estelle: Memorial Sloan Kettering Cancer Center, Department of Psychiatry and Behavioral Science Services, 641 Lexington Avenue, 7th Floor, New York, NY 10022, USA
- Isabella Mohr: Memorial Sloan Kettering Cancer Center, Department of Psychiatry and Behavioral Science Services, 641 Lexington Avenue, 7th Floor, New York, NY 10022, USA
- Tim A Ahles: Memorial Sloan Kettering Cancer Center, Department of Psychiatry and Behavioral Science Services, 641 Lexington Avenue, 7th Floor, New York, NY 10022, USA

2. Brown JA, Bidelman GM. Attention, Musicality, and Familiarity Shape Cortical Speech Tracking at the Musical Cocktail Party. bioRxiv [Preprint] 2023:2023.10.28.562773. PMID: 37961204. PMCID: PMC10634879. DOI: 10.1101/2023.10.28.562773.
Abstract
The "cocktail party problem" challenges our ability to understand speech in noisy environments, which often include background music. Here, we explored the role of background music in speech-in-noise listening. Participants listened to an audiobook in familiar and unfamiliar music while tracking keywords in either speech or song lyrics. We used EEG to measure neural tracking of the audiobook. When speech was masked by music, the modeled peak latency at 50 ms (P1-TRF) was prolonged compared to unmasked speech. Additionally, P1-TRF amplitude was larger in unfamiliar background music, suggesting improved speech tracking. We observed prolonged latencies at 100 ms (N1-TRF) when speech was not the attended stimulus, though only in less musical listeners. Our results suggest early neural representations of speech are enhanced with both attention and concurrent unfamiliar music, indicating familiar music is more distracting. One's ability to perceptually filter "musical noise" at the cocktail party depends on objective musical abilities.
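The TRF components discussed above come from temporal response function modeling, which maps a stimulus feature (e.g., the speech envelope) onto EEG via lagged regression. The sketch below is a minimal, hypothetical version with simulated data and illustrative lags and regularization, not the authors' analysis (which would typically use a dedicated toolbox such as the mTRF-Toolbox).

```python
import numpy as np

def fit_trf(stim, eeg, fs, tmin=0.0, tmax=0.3, lam=1.0):
    """Ridge-regularized TRF: eeg[t] ~ sum_k w[k] * stim[t - k]."""
    lags = np.arange(int(tmin * fs), int(tmax * fs) + 1)
    X = np.column_stack([np.roll(stim, k) for k in lags])
    X[: lags.max()] = 0.0                      # discard wrap-around samples
    w = np.linalg.solve(X.T @ X + lam * np.eye(len(lags)), X.T @ eeg)
    return lags / fs, w                        # lag times (s), TRF weights

# Simulate a "brain" that responds 50 ms after each envelope fluctuation.
rng = np.random.default_rng(1)
fs, n = 100, 100 * 120
stim = rng.standard_normal(n)
true_lag = int(0.05 * fs)                      # 50 ms
eeg = np.roll(stim, true_lag) + 0.5 * rng.standard_normal(n)
times, w = fit_trf(stim, eeg, fs)
print(times[np.argmax(w)])                     # peak latency: 0.05 s
```

The peak latency of the fitted weights is the quantity whose prolongation (under masking, or for unattended speech) the abstract reports.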
Affiliation(s)
- Jane A. Brown: School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Institute for Intelligent Systems, University of Memphis, Memphis, TN 38152, USA
- Gavin M. Bidelman: Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA; Program in Neuroscience, Indiana University, Bloomington, IN, USA; Cognitive Science Program, Indiana University, Bloomington, IN, USA

3. Kwasa JA, Noyce AL, Torres LM, Richardson BN, Shinn-Cunningham BG. Top-down auditory attention modulates neural responses more strongly in neurotypical than ADHD young adults. Brain Res 2023; 1798:148144. PMID: 36328068. PMCID: PMC9749882. DOI: 10.1016/j.brainres.2022.148144.
Abstract
Human cognitive abilities naturally vary along a spectrum, even among those we call "neurotypical". Individuals differ in their ability to selectively attend to goal-relevant auditory stimuli. We sought to characterize this variability in a cohort of people with diverse attentional functioning. We recruited both neurotypical (N = 20) and ADHD (N = 25) young adults, all with normal hearing. Participants listened to one of three concurrent, spatially separated speech streams and reported the order of the syllables in that stream while we recorded electroencephalography (EEG). We tested both the ability to sustain attentional focus on a single "Target" stream and the ability to monitor the Target but flexibly either ignore or switch attention to an unpredictable "Interrupter" stream from another direction that sometimes appeared. Although differences in both stimulus structure and task demands affected behavioral performance, ADHD status did not. In both groups, the Interrupter evoked larger neural responses when it was to be attended compared to when it was irrelevant, including for the P3a "reorienting" response previously described as involuntary. This attentional modulation was weaker in ADHD listeners, even though their behavioral performance was the same. Across the entire cohort, individual performance correlated with the degree of top-down modulation of neural responses. These results demonstrate that listeners differ in their ability to modulate neural representations of sound based on task goals, while suggesting that adults with ADHD may have weaker volitional control of attentional processes than their neurotypical counterparts.
Affiliation(s)
- Jasmine A. Kwasa: Neuroscience Institute, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213, United States; Department of Biomedical Engineering, Boston University, 1 Silber Way, Boston, MA 02215, United States. Corresponding author: 4825 Frew St, A52A Baker Hall, Pittsburgh, PA 15213, United States.
- Abigail L. Noyce: Neuroscience Institute, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213, United States
- Laura M. Torres: Department of Biomedical Engineering, Boston University, 1 Silber Way, Boston, MA 02215, United States
- Benjamin N. Richardson: Neuroscience Institute, Carnegie Mellon University, 5000 Forbes Ave., Pittsburgh, PA 15213, United States

4. Suri H, Rothschild G. Enhanced stability of complex sound representations relative to simple sounds in the auditory cortex. eNeuro 2022; 9:ENEURO.0031-22.2022. PMID: 35868858. PMCID: PMC9347310. DOI: 10.1523/eneuro.0031-22.2022.
Abstract
Typical everyday sounds, such as those of speech or running water, are spectrotemporally complex. The ability to recognize complex sounds (CxS) and their associated meaning is presumed to rely on their stable neural representations across time. The auditory cortex is critical for processing of CxS, yet little is known of the degree of stability of auditory cortical representations of CxS across days. Previous studies have shown that the auditory cortex represents CxS identity with a substantial degree of invariance to basic sound attributes such as frequency. We therefore hypothesized that auditory cortical representations of CxS are more stable across days than those of sounds that lack spectrotemporal structure, such as pure tones (PTs). To test this hypothesis, we recorded responses of identified L2/3 auditory cortical excitatory neurons to both PTs and CxS across days using two-photon calcium imaging in awake mice. Auditory cortical neurons showed significant daily changes of responses to both types of sounds, yet responses to CxS exhibited significantly lower rates of daily change than those to PTs. Furthermore, daily changes in response profiles to PTs tended to be more stimulus-specific, reflecting changes in sound selectivity, as compared to changes of CxS responses. Lastly, the enhanced stability of responses to CxS was evident across longer time intervals as well. Together, these results suggest that spectrotemporally complex sounds are more stably represented in the auditory cortex across time than pure tones. These findings support a role of the auditory cortex in representing CxS identity across time.
Significance statement: The ability to recognize everyday complex sounds such as those of speech or running water is presumed to rely on their stable neural representations. Yet, little is known of the degree of stability of single-neuron sound responses across days. As the auditory cortex is critical for complex sound perception, we hypothesized that the auditory cortical representations of complex sounds are relatively stable across days. To test this, we recorded sound responses of identified auditory cortical neurons across days in awake mice. We found that auditory cortical responses to complex sounds are significantly more stable across days as compared to those of simple pure tones. These findings support a role of the auditory cortex in representing complex sound identity across time.
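One simple way to picture the "stability across days" comparison is to correlate a neuron's response profile over a sound set between two days. The numbers below are invented for illustration, and the paper's actual stability metrics may well differ from a plain Pearson correlation.

```python
import numpy as np

def response_stability(day1, day2):
    """Pearson correlation between two (n_sounds,) response vectors."""
    return float(np.corrcoef(day1, day2)[0, 1])

day1         = np.array([0.2, 1.5, 0.1, 2.0, 0.3])  # responses to 5 sounds, day 1
day2_complex = np.array([0.3, 1.4, 0.2, 1.9, 0.4])  # similar profile the next day
day2_tone    = np.array([1.8, 0.2, 1.5, 0.1, 1.2])  # remapped selectivity

print(response_stability(day1, day2_complex))  # > 0.99: stable representation
print(response_stability(day1, day2_tone))     # negative: selectivity changed
```

A high day-to-day correlation corresponds to the stable complex-sound representations the study reports; a low or negative one corresponds to the more stimulus-specific drift seen for pure tones.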
Affiliation(s)
- Harini Suri: Department of Psychology, University of Michigan, Ann Arbor, MI 48109, USA
- Gideon Rothschild: Department of Psychology, University of Michigan, Ann Arbor, MI 48109, USA; Kresge Hearing Research Institute and Department of Otolaryngology - Head and Neck Surgery, University of Michigan, Ann Arbor, MI 48109, USA

5. Kachlicka M, Laffere A, Dick F, Tierney A. Slow phase-locked modulations support selective attention to sound. Neuroimage 2022; 252:119024. PMID: 35231629. PMCID: PMC9133470. DOI: 10.1016/j.neuroimage.2022.119024.
Abstract
To make sense of complex soundscapes, listeners must select and attend to task-relevant streams while ignoring uninformative sounds. One possible neural mechanism underlying this process is alignment of endogenous oscillations with the temporal structure of the target sound stream. Such a mechanism has been suggested to mediate attentional modulation of neural phase-locking to the rhythms of attended sounds. However, such modulations are compatible with an alternate framework, where attention acts as a filter that enhances exogenously-driven neural auditory responses. Here we attempted to test several predictions arising from the oscillatory account by playing two tone streams varying across conditions in tone duration and presentation rate; participants attended to one stream or listened passively. Attentional modulation of the evoked waveform was roughly sinusoidal and scaled with presentation rate, whereas the passive response did not scale with rate. However, there was only limited evidence for continuation of modulations through the silence between sequences. These results suggest that attentionally-driven changes in phase alignment reflect synchronization of slow endogenous activity with the temporal structure of attended stimuli.
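The frequency-tagging logic used here, reading out how strongly a response waveform is modulated at the stimulus presentation rate, can be sketched as an FFT amplitude measure at the tagged frequency. The synthetic waveforms below merely stand in for attended versus passive evoked responses; this is not the authors' analysis code.

```python
import numpy as np

def amplitude_at_rate(waveform, fs, rate):
    """Amplitude of the waveform's component at a tagged frequency."""
    spec = np.fft.rfft(waveform) / len(waveform)
    freqs = np.fft.rfftfreq(len(waveform), 1 / fs)
    return 2 * np.abs(spec[np.argmin(np.abs(freqs - rate))])

fs, dur, rate = 250, 4.0, 2.0                   # a 2 Hz tone presentation rate
t = np.arange(int(fs * dur)) / fs
attended = 1.0 * np.sin(2 * np.pi * rate * t)   # strong sinusoidal modulation
passive  = 0.2 * np.sin(2 * np.pi * rate * t)   # weak modulation
print(amplitude_at_rate(attended, fs, rate))    # 1.0
print(amplitude_at_rate(passive, fs, rate))     # 0.2
```

Comparing this amplitude across presentation rates (and between attended and passive conditions) is the kind of contrast the abstract describes.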
Affiliation(s)
- Magdalena Kachlicka: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, England
- Aeron Laffere: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, England
- Fred Dick: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, England; Division of Psychology & Language Sciences, UCL, Gower Street, London WC1E 6BT, England
- Adam Tierney: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, England

6. Symons AE, Dick F, Tierney AT. Dimension-selective attention and dimensional salience modulate cortical tracking of acoustic dimensions. Neuroimage 2021; 244:118544. PMID: 34492294. DOI: 10.1016/j.neuroimage.2021.118544.
Abstract
Some theories of auditory categorization suggest that auditory dimensions that are strongly diagnostic for particular categories - for instance voice onset time or fundamental frequency in the case of some spoken consonants - attract attention. However, prior cognitive neuroscience research on auditory selective attention has largely focused on attention to simple auditory objects or streams, and so little is known about the neural mechanisms that underpin dimension-selective attention, or how the relative salience of variations along these dimensions might modulate neural signatures of attention. Here we investigate whether dimensional salience and dimension-selective attention modulate the cortical tracking of acoustic dimensions. In two experiments, participants listened to tone sequences varying in pitch and spectral peak frequency; these two dimensions changed at different rates. Inter-trial phase coherence (ITPC) and amplitude of the EEG signal at the frequencies tagged to pitch and spectral changes provided a measure of cortical tracking of these dimensions. In Experiment 1, tone sequences varied in the size of the pitch intervals, while the size of spectral peak intervals remained constant. Cortical tracking of pitch changes was greater for sequences with larger compared to smaller pitch intervals, with no difference in cortical tracking of spectral peak changes. In Experiment 2, participants selectively attended to either pitch or spectral peak. Cortical tracking was stronger in response to the attended compared to unattended dimension for both pitch and spectral peak. These findings suggest that attention can enhance the cortical tracking of specific acoustic dimensions rather than simply enhancing tracking of the auditory object as a whole.
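Inter-trial phase coherence (ITPC), the tracking measure used in this study, is the length of the mean unit phase vector across trials: 1 when every trial has the same phase at the tagged frequency, near 0 when phases are random. A minimal sketch on synthetic phase data (illustrative only):

```python
import numpy as np

def itpc(phases):
    """ITPC = |mean_k exp(i * phase_k)|, bounded in [0, 1]."""
    return float(np.abs(np.mean(np.exp(1j * np.asarray(phases)))))

rng = np.random.default_rng(3)
locked = 0.1 * rng.standard_normal(200)        # phases clustered near 0 rad
random = rng.uniform(-np.pi, np.pi, 200)       # no phase locking across trials
print(itpc(locked))  # near 1
print(itpc(random))  # near 0
```

In the study, larger ITPC at the frequency tagged to a dimension's change rate indexes stronger cortical tracking of that dimension.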
Collapse
Affiliation(s)
- Ashley E Symons
- Department of Psychological Sciences, Birkbeck College, University of London UK.
| | - Fred Dick
- Department of Psychological Sciences, Birkbeck College, University of London UK; Division of Psychology & Language Sciences, University College London UK
| | - Adam T Tierney
- Department of Psychological Sciences, Birkbeck College, University of London UK
| |
Collapse
|
7
|
Kim S, Emory C, Choi I. Neurofeedback Training of Auditory Selective Attention Enhances Speech-In-Noise Perception. Front Hum Neurosci 2021; 15:676992. [PMID: 34239430 PMCID: PMC8258151 DOI: 10.3389/fnhum.2021.676992] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/08/2021] [Accepted: 05/28/2021] [Indexed: 12/25/2022] Open
Abstract
Selective attention enhances cortical responses to attended sensory inputs while suppressing others, which can be an effective strategy for speech-in-noise (SiN) understanding. Emerging evidence exhibits a large variance in attentional control during SiN tasks, even among normal-hearing listeners. Yet whether training can enhance the efficacy of attentional control and, if so, whether the training effects can be transferred to performance on a SiN task has not been explicitly studied. Here, we introduce a neurofeedback training paradigm designed to reinforce the attentional modulation of auditory evoked responses. Young normal-hearing adults attended one of two competing speech streams consisting of five repeating words (“up”) in a straight rhythm spoken by a female speaker and four straight words (“down”) spoken by a male speaker. Our electroencephalography-based attention decoder classified every single trial using a template-matching method based on pre-defined patterns of cortical auditory responses elicited by either an “up” or “down” stream. The result of decoding was provided on the screen as online feedback. After four sessions of this neurofeedback training over 4 weeks, the subjects exhibited improved attentional modulation of evoked responses to the training stimuli as well as enhanced cortical responses to target speech and better performance during a post-training SiN task. Such training effects were not found in the Placebo Group that underwent similar attention training except that feedback was given only based on behavioral accuracy. These results indicate that the neurofeedback training may reinforce the strength of attentional modulation, which likely improves SiN understanding. Our finding suggests a potential rehabilitation strategy for SiN deficits.
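The template-matching idea behind the single-trial decoder can be sketched as correlating a trial against two pre-defined response templates and choosing the better match. The templates and simulated "trial" below are hypothetical stand-ins, not the study's actual decoder features.

```python
import numpy as np

def decode_trial(trial, template_up, template_down):
    """Classify a trial by which attention template it correlates with best."""
    r_up = np.corrcoef(trial, template_up)[0, 1]
    r_down = np.corrcoef(trial, template_down)[0, 1]
    return "up" if r_up > r_down else "down"

rng = np.random.default_rng(4)
t = np.linspace(0, 1, 200)
template_up = np.sin(2 * np.pi * 5 * t)        # stand-in pattern, "up" stream
template_down = np.sin(2 * np.pi * 4 * t)      # stand-in pattern, "down" stream
trial = template_up + 0.8 * rng.standard_normal(200)  # noisy "attend up" trial
print(decode_trial(trial, template_up, template_down))  # "up"
```

In the study, the decoder's per-trial output (correct/incorrect attention classification) was what the neurofeedback display fed back to participants.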
Affiliation(s)
- Subong Kim: Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, United States
- Caroline Emory: Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, United States
- Inyong Choi: Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, United States; Department of Otolaryngology - Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, United States

8. Balkenhol T, Wallhäusser-Franke E, Rotter N, Servais JJ. Cochlear Implant and Hearing Aid: Objective Measures of Binaural Benefit. Front Neurosci 2020; 14:586119. PMID: 33381008. PMCID: PMC7768047. DOI: 10.3389/fnins.2020.586119.
Abstract
Cochlear implants (CI) improve hearing for the severely hearing impaired. With an extension of implantation candidacy, today many CI listeners use a hearing aid on their contralateral ear, referred to as bimodal listening. It is uncertain, however, whether the brains of bimodal listeners can combine the electrical and acoustical sound information, and how much CI experience is needed to achieve an improved performance with bimodal listening. Patients with bilateral sensorineural hearing loss undergoing implant surgery were tested in their ability to understand speech in quiet and in noise, before and again 3 and 6 months after provision of a CI. Results of these bimodal listeners were compared to age-matched, normal hearing controls (NH). The benefit of adding a contralateral hearing aid was calculated in terms of head shadow, binaural summation, binaural squelch, and spatial release from masking from the results of a sentence recognition test. Beyond that, bimodal benefit was estimated from the difference in amplitudes and latencies of the N1, P2, and N2 potentials of the brain's auditory evoked potential (AEP) response to speech. Data from fifteen participants contributed to the results. CI provision resulted in significant improvement of speech recognition with the CI ear, and in taking advantage of the head shadow effect for understanding speech in noise. Some amount of binaural processing was suggested by a positive binaural summation effect 6 months post-implantation that correlated significantly with symmetry of pure tone thresholds. Moreover, a significant negative correlation existed between binaural summation and latency of the P2 potential. With CI experience, the morphology of the N1 and P2 potentials in the AEP response approximated that of NH, whereas N2 remained different. Significant AEP differences between monaural and binaural processing were shown for NH and for bimodal listeners 6 months post-implantation. Although the grand-averaged difference in N1 amplitude between monaural and binaural listening was similar for NH and the bimodal group, source localization showed group-dependent differences in auditory and speech-relevant cortex, suggesting different processing in the bimodal listeners.
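The four benefit measures named in this abstract are simple differences between speech reception thresholds (SRTs, in dB SNR; lower is better) measured in different spatial configurations. The sketch below follows one common convention from the binaural-hearing literature; the study's exact definitions, signs, and loudspeaker configurations may differ, and the example SRT values are invented.

```python
def head_shadow(srt_noise_near_ear, srt_noise_far_ear):
    """Benefit (dB) of the head shielding the test ear from the noise."""
    return srt_noise_near_ear - srt_noise_far_ear

def summation(srt_monaural_s0n0, srt_binaural_s0n0):
    """Benefit (dB) of adding the second ear, speech and noise both frontal."""
    return srt_monaural_s0n0 - srt_binaural_s0n0

def squelch(srt_monaural, srt_binaural):
    """Benefit (dB) of adding the ear nearer the noise source."""
    return srt_monaural - srt_binaural

def spatial_release(srt_colocated, srt_separated):
    """Benefit (dB) of spatially separating speech and noise (binaural)."""
    return srt_colocated - srt_separated

# Illustrative SRTs (dB SNR) for one hypothetical bimodal listener:
print(head_shadow(2.0, -4.0))   # 6.0 dB
print(summation(0.0, -1.5))     # 1.5 dB
```

A positive value on each measure means the added ear (or the spatial configuration) improved speech-in-noise performance, which is the sense in which the abstract reports a "positive binaural summation effect."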
Affiliation(s)
- Tobias Balkenhol: Department of Otorhinolaryngology Head and Neck Surgery, Medical Faculty Mannheim, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
- Elisabeth Wallhäusser-Franke: Department of Otorhinolaryngology Head and Neck Surgery, Medical Faculty Mannheim, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
- Nicole Rotter: Department of Otorhinolaryngology Head and Neck Surgery, Medical Faculty Mannheim, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany
- Jérôme J Servais: Department of Otorhinolaryngology Head and Neck Surgery, Medical Faculty Mannheim, University Medical Center Mannheim, Heidelberg University, Mannheim, Germany

9. Laffere A, Dick F, Holt LL, Tierney A. Attentional modulation of neural entrainment to sound streams in children with and without ADHD. Neuroimage 2020; 224:117396. PMID: 32979522. DOI: 10.1016/j.neuroimage.2020.117396.
Abstract
To extract meaningful information from complex auditory scenes like a noisy playground, rock concert, or classroom, children can direct attention to different sound streams. One means of accomplishing this might be to align neural activity with the temporal structure of a target stream, such as a specific talker or melody. However, this may be more difficult for children with ADHD, who can struggle with accurately perceiving and producing temporal intervals. In this EEG study, we found that school-aged children's attention to one of two temporally-interleaved isochronous tone 'melodies' was linked to an increase in phase-locking at the melody's rate, and a shift in neural phase that aligned the neural responses with the attended tone stream. Children's attention task performance and neural phase alignment with the attended melody were linked to performance on temporal production tasks, suggesting that children with more robust control over motor timing were better able to direct attention to the time points associated with the target melody. Finally, we found that although children with ADHD performed less accurately on the tonal attention task than typically developing children, they showed the same degree of attentional modulation of phase locking and neural phase shifts, suggesting that children with ADHD may have difficulty with attentional engagement rather than attentional selection.
Affiliation(s)
- Aeron Laffere: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, United Kingdom
- Fred Dick: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, United Kingdom; Division of Psychology & Language Sciences, UCL, Gower Street, London WC1E 6BT, United Kingdom
- Lori L Holt: Department of Psychology, Carnegie Mellon University, 5000 Forbes Avenue, Pittsburgh, PA 15213, United States
- Adam Tierney: Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London WC1E 7HX, United Kingdom

10. Miran S, Presacco A, Simon JZ, Fu MC, Marcus SI, Babadi B. Dynamic estimation of auditory temporal response functions via state-space models with Gaussian mixture process noise. PLoS Comput Biol 2020; 16:e1008172. PMID: 32813712. PMCID: PMC7485982. DOI: 10.1371/journal.pcbi.1008172.
Abstract
Estimating the latent dynamics underlying biological processes is a central problem in computational biology. State-space models with Gaussian statistics are widely used for estimation of such latent dynamics and have been successfully utilized in the analysis of biological data. Gaussian statistics, however, fail to capture several key features of the dynamics of biological processes (e.g., brain dynamics) such as abrupt state changes and exogenous processes that affect the states in a structured fashion. Although Gaussian mixture process noise models have been considered as an alternative to capture such effects, data-driven inference of their parameters is not well-established in the literature. The objective of this paper is to develop efficient algorithms for inferring the parameters of a general class of Gaussian mixture process noise models from noisy and limited observations, and to utilize them in extracting the neural dynamics that underlie auditory processing from magnetoencephalography (MEG) data in a cocktail party setting. We develop an algorithm based on Expectation-Maximization to estimate the process noise parameters from state-space observations. We apply our algorithm to simulated and experimentally-recorded MEG data from auditory experiments in the cocktail party paradigm to estimate the underlying dynamic Temporal Response Functions (TRFs). Our simulation results show that the richer representation of the process noise as a Gaussian mixture significantly improves state estimation and capturing the heterogeneity of the TRF dynamics. Application to MEG data reveals improvements over existing TRF estimation techniques, and provides a reliable alternative to current approaches for probing neural dynamics in a cocktail party scenario, as well as attention decoding in emerging applications such as smart hearing aids. Our proposed methodology provides a framework for efficient inference of Gaussian mixture process noise models, with application to a wide range of biological data with underlying heterogeneous and latent dynamics.
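For orientation, the linear-Gaussian baseline that this paper generalizes is the Kalman filter: a latent state (here, think of a time-varying TRF weight) evolving with process noise, observed through noisy measurements. The paper's contribution, replacing the Gaussian process noise with a Gaussian mixture and fitting its parameters by EM, is deliberately omitted from this minimal scalar sketch.

```python
import numpy as np

def kalman_filter(y, A, C, Q, R, x0, P0):
    """Scalar Kalman filter for x_t = A x_{t-1} + w_t,  y_t = C x_t + v_t."""
    x, P, estimates = x0, P0, []
    for yt in y:
        # Predict one step ahead
        x, P = A * x, A * P * A + Q
        # Update with the new observation
        K = P * C / (C * P * C + R)
        x, P = x + K * (yt - C * x), (1.0 - K * C) * P
        estimates.append(x)
    return np.array(estimates)

# Track a slowly drifting latent gain from noisy observations.
rng = np.random.default_rng(5)
n = 500
truth = np.cumsum(0.05 * rng.standard_normal(n)) + 1.0
y = truth + 0.5 * rng.standard_normal(n)
est = kalman_filter(y, A=1.0, C=1.0, Q=0.05**2, R=0.5**2, x0=1.0, P0=1.0)
print(np.mean((est - truth) ** 2) < np.mean((y - truth) ** 2))  # filtering helps
```

When the true process noise is a Gaussian mixture (e.g., occasional abrupt jumps on top of slow drift), this single-Gaussian filter over-smooths the jumps, which is precisely the failure mode the paper's mixture model addresses.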
Affiliation(s)
- Sina Miran: Starkey Hearing Technologies, Eden Prairie, Minnesota, United States of America
- Alessandro Presacco: Institute for Systems Research, University of Maryland, College Park, Maryland, United States of America
- Jonathan Z. Simon: Institute for Systems Research, University of Maryland, College Park, Maryland, United States of America; Department of Electrical & Computer Engineering, University of Maryland, College Park, Maryland, United States of America; Department of Biology, University of Maryland, College Park, Maryland, United States of America
- Michael C. Fu: Institute for Systems Research, University of Maryland, College Park, Maryland, United States of America; Robert H. Smith School of Business, University of Maryland, College Park, Maryland, United States of America
- Steven I. Marcus: Institute for Systems Research, University of Maryland, College Park, Maryland, United States of America; Department of Electrical & Computer Engineering, University of Maryland, College Park, Maryland, United States of America
- Behtash Babadi: Institute for Systems Research, University of Maryland, College Park, Maryland, United States of America; Department of Electrical & Computer Engineering, University of Maryland, College Park, Maryland, United States of America

11. Makov S, Zion Golumbic E. Irrelevant Predictions: Distractor Rhythmicity Modulates Neural Encoding in Auditory Cortex. Cereb Cortex 2020; 30:5792-5805. DOI: 10.1093/cercor/bhaa153.
Abstract
Dynamic attending theory suggests that predicting the timing of upcoming sounds can assist in focusing attention toward them. However, whether similar predictive processes are also applied to background noises and assist in guiding attention “away” from potential distractors, remains an open question. Here we address this question by manipulating the temporal predictability of distractor sounds in a dichotic listening selective attention task. We tested the influence of distractors’ temporal predictability on performance and on the neural encoding of sounds, by comparing the effects of Rhythmic versus Nonrhythmic distractors. Using magnetoencephalography we found that, indeed, the neural responses to both attended and distractor sounds were affected by distractors’ rhythmicity. Baseline activity preceding the onset of Rhythmic distractor sounds was enhanced relative to nonrhythmic distractor sounds, and sensory response to them was suppressed. Moreover, detection of nonmasked targets improved when distractors were Rhythmic, an effect accompanied by stronger lateralization of the neural responses to attended sounds to contralateral auditory cortex. These combined behavioral and neural results suggest that not only are temporal predictions formed for task-irrelevant sounds, but that these predictions bear functional significance for promoting selective attention and reducing distractibility.
Affiliation(s)
- Shiri Makov: Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan 5290002, Israel
- Elana Zion Golumbic: Gonda Multidisciplinary Brain Research Center, Bar-Ilan University, Ramat-Gan 5290002, Israel

12. Balkenhol T, Wallhäusser-Franke E, Rotter N, Servais JJ. Changes in Speech-Related Brain Activity During Adaptation to Electro-Acoustic Hearing. Front Neurol 2020; 11:161. PMID: 32300327. PMCID: PMC7145411. DOI: 10.3389/fneur.2020.00161.
Abstract
Objectives: Hearing improves significantly with bimodal provision, i.e., a cochlear implant (CI) at one ear and a hearing aid (HA) at the other, but performance shows a high degree of variability, resulting in substantial uncertainty about the performance that can be expected by the individual CI user. The objective of this study was to explore how auditory event-related potentials (AERPs) of bimodal listeners in response to spoken words approximate the electrophysiological response of normal hearing (NH) listeners. Study Design: Explorative prospective analysis during the first 6 months of bimodal listening using a within-subject repeated measures design. Setting: Academic tertiary care center. Participants: Twenty-seven adult participants with bilateral sensorineural hearing loss who received a HiRes 90K CI and continued use of a HA at the non-implanted ear. Age-matched NH listeners served as controls. Intervention: Cochlear implantation. Main Outcome Measures: Obligatory auditory evoked potentials N1 and P2, and the event-related N2 potential in response to monosyllabic words and their reversed sound traces before, as well as 3 and 6 months post-implantation. The task required word/non-word classification. Stimuli were presented within speech-modulated noise. Loudness of word/non-word signals was adjusted individually to achieve the same intelligibility across groups and assessments. Results: Intelligibility improved significantly with bimodal hearing, and the N1-P2 response approximated the morphology seen in NH, with enhanced and earlier responses to the words compared to their reversals. For bimodal listeners, a prominent negative deflection was present between 370 and 570 ms post stimulus onset (N2), irrespective of stimulus type. This was absent for NH controls; hence, this response did not approximate the NH response during the study interval. N2 source localization evidenced extended activation of general cognitive areas in frontal and prefrontal brain areas in the CI group. Conclusions: Prolonged and spatially extended processing in bimodal CI users suggests employment of additional auditory-cognitive mechanisms during speech processing. This does not reduce within 6 months of bimodal experience and may be a correlate of the enhanced listening effort described by CI listeners.
13
Laffere A, Dick F, Tierney A. Effects of auditory selective attention on neural phase: individual differences and short-term training. Neuroimage 2020; 213:116717. [PMID: 32165265 DOI: 10.1016/j.neuroimage.2020.116717] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2019] [Revised: 03/02/2020] [Accepted: 03/04/2020] [Indexed: 02/06/2023] Open
Abstract
How does the brain follow a sound that is mixed with others in a noisy environment? One possible strategy is to allocate attention to task-relevant time intervals. Prior work has linked auditory selective attention to alignment of neural modulations with stimulus temporal structure. However, because this prior research used relatively easy tasks and focused on main effects of attention across participants, relatively little is known about the neural foundations of individual differences in auditory selective attention. Here we investigated individual differences in auditory selective attention by asking participants to perform a 1-back task on a target auditory stream while ignoring a distractor auditory stream presented 180° out of phase. Neural entrainment to the attended auditory stream was strongly linked to individual differences in task performance. Some variability in performance was accounted for by degree of musical training, suggesting a link between long-term auditory experience and auditory selective attention. To investigate whether short-term improvements in auditory selective attention are possible, we gave participants 2 h of auditory selective attention training and found improvements in both task performance and the effects of attention on neural phase angle. Our results suggest that although there are large individual differences in auditory selective attention and attentional modulation of neural phase angle, this skill improves after a small amount of targeted training.
Affiliation(s)
- Aeron Laffere
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK
- Fred Dick
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK; Division of Psychology & Language Sciences, UCL, Gower Street, London, WC1E 6BT, UK
- Adam Tierney
- Department of Psychological Sciences, Birkbeck, University of London, Malet Street, London, WC1E 7HX, UK.
14
Deng Y, Choi I, Shinn-Cunningham B, Baumgartner R. Impoverished auditory cues limit engagement of brain networks controlling spatial selective attention. Neuroimage 2019; 202:116151. [PMID: 31493531 DOI: 10.1016/j.neuroimage.2019.116151] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/22/2019] [Revised: 08/02/2019] [Accepted: 08/31/2019] [Indexed: 12/30/2022] Open
Abstract
Spatial selective attention enables listeners to process a signal of interest in natural settings. However, most past studies on auditory spatial attention used impoverished spatial cues: presenting competing sounds to different ears, using only interaural differences in time (ITDs) and/or intensity (IIDs), or using non-individualized head-related transfer functions (HRTFs). Here we tested the hypothesis that impoverished spatial cues impair spatial auditory attention by only weakly engaging relevant cortical networks. Eighteen normal-hearing listeners reported the content of one of two competing syllable streams simulated at roughly +30° and -30° azimuth. The competing streams consisted of syllables from two different-sex talkers. Spatialization was based on natural spatial cues (individualized HRTFs), individualized IIDs, or generic ITDs. We measured behavioral performance as well as electroencephalographic markers of selective attention. Behaviorally, subjects recalled target streams most accurately with natural cues. Neurally, spatial attention significantly modulated early evoked sensory response magnitudes only for natural cues, not in conditions using only ITDs or IIDs. Consistent with this, parietal oscillatory power in the alpha band (8-14 Hz; associated with filtering out distracting events from unattended directions) showed significantly less attentional modulation with isolated spatial cues than with natural cues. Our findings support the hypothesis that spatial selective attention networks are only partially engaged by impoverished spatial auditory cues. These results not only suggest that studies using unnatural spatial cues underestimate the neural effects of spatial auditory attention, they also illustrate the importance of preserving natural spatial cues in assistive listening devices to support robust attentional control.
Affiliation(s)
- Yuqi Deng
- Biomedical Engineering, Boston University, Boston, MA, 02215, USA
- Inyong Choi
- Communication Sciences & Disorders, University of Iowa, Iowa City, IA, 52242, USA
- Barbara Shinn-Cunningham
- Biomedical Engineering, Boston University, Boston, MA, 02215, USA; Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, 15213, USA
- Robert Baumgartner
- Biomedical Engineering, Boston University, Boston, MA, 02215, USA; Acoustics Research Institute, Austrian Academy of Sciences, Vienna, Austria.
15
Short-term effects of single-dose chloral hydrate on neonatal auditory perception: An auditory event-related potential study. PLoS One 2019; 14:e0212195. [PMID: 30735558 PMCID: PMC6368310 DOI: 10.1371/journal.pone.0212195] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2018] [Accepted: 01/29/2019] [Indexed: 11/19/2022] Open
Abstract
Objective To study the short-term effects of a single dose of chloral hydrate on neonatal auditory perception by measuring auditory event-related potentials (aERPs). Methods Thirty-nine full-term neonates, aged 2–28 days and weighing 2980–4350 g, were divided into two groups: a chloral hydrate group (CH group, n = 17) and a non-chloral hydrate control group (non-CH group, n = 22). The CH group was given a single dose of chloral hydrate (30 mg/kg) orally before aERP measurement. An auditory oddball paradigm was used to elicit aERPs. P2 and N2 components of the ERP were recorded from electrodes at the Fz and Cz locations, and the areas under their curves (P2 and N2 areas) were calculated for comparison between the two groups. Results Significant differences were found in the P2 area between the two groups at Fz and Cz (Fz: F(1,37) = 487.75, P < 0.05; Cz: F(1,37) = 1465.94, P < 0.05). Similarly, a significant difference was also found in the N2 area between the two groups at both locations (Fz: F(1,37) = 153.38, P < 0.05; Cz: F(1,37) = 798.42, P < 0.05). Conclusion A single dose of chloral hydrate impacts neonatal auditory perception in the short term. Long-term effects will be studied in future work.
16
Rogers CS, Payne L, Maharjan S, Wingfield A, Sekuler R. Older adults show impaired modulation of attentional alpha oscillations: Evidence from dichotic listening. Psychol Aging 2019; 33:246-258. [PMID: 29658746 DOI: 10.1037/pag0000238] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
Auditory attention is critical for selectively listening to speech from a single talker in a multitalker environment (e.g., Cherry, 1953). Listening in such situations is notoriously more difficult, and more poorly encoded to long-term memory, in older than in young adults (Tun, O'Kane, & Wingfield, 2002). Recent work by Payne, Rogers, Wingfield, and Sekuler (2017) in young adults demonstrated a neural correlate of auditory attention in the directed dichotic listening task (DDLT), in which listeners attend to one ear while ignoring the other. Measured using electroencephalography, differences in alpha band power (8-14 Hz) between left and right hemisphere parietal regions mark the direction to which auditory attention is focused. Little prior research has been conducted on alpha power modulations in older adults, particularly with regard to auditory attention directed toward speech stimuli. In the current study, an older adult sample was administered the DDLT and delayed recognition procedures used by Payne et al. (2017). Compared to young adults, older adults showed reduced selective attention in the DDLT, evidenced by a higher rate of intrusions from the unattended ear. Moreover, older adults did not exhibit the attention-related alpha modulation evidenced by young adults, nor did their event-related potentials (ERPs) to recognition probes differentiate between attended and unattended probes. Older adults' delayed recognition did not reveal the pattern of suppression of unattended items evidenced by young adults. These results serve as evidence for an age-related decline in selective auditory attention, potentially mediated by age-related decline in the ability to modulate alpha oscillations.
Affiliation(s)
- Chad S Rogers
- Volen National Center for Complex Systems, Brandeis University
- Lisa Payne
- Volen National Center for Complex Systems, Brandeis University
- Sujala Maharjan
- Volen National Center for Complex Systems, Brandeis University
- Robert Sekuler
- Volen National Center for Complex Systems, Brandeis University
17
de Cheveigné A, Di Liberto GM, Arzounian D, Wong DDE, Hjortkjær J, Fuglsang S, Parra LC. Multiway canonical correlation analysis of brain data. Neuroimage 2018; 186:728-740. [PMID: 30496819 DOI: 10.1016/j.neuroimage.2018.11.026] [Citation(s) in RCA: 36] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2018] [Revised: 10/11/2018] [Accepted: 11/16/2018] [Indexed: 01/12/2023] Open
Abstract
Brain data recorded with electroencephalography (EEG), magnetoencephalography (MEG) and related techniques often have poor signal-to-noise ratios due to the presence of multiple competing sources and artifacts. A common remedy is to average responses over repeats of the same stimulus, but this is not applicable for temporally extended stimuli that are presented only once (speech, music, movies, natural sound). An alternative is to average responses over multiple subjects that were presented with identical stimuli, but differences in geometry of brain sources and sensors reduce the effectiveness of this solution. Multiway canonical correlation analysis (MCCA) brings a solution to this problem by allowing data from multiple subjects to be fused in such a way as to extract components common to all. This paper reviews the method, offers application examples that illustrate its effectiveness, and outlines the caveats and risks entailed by the method.
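The fusion step that this abstract describes lends itself to a compact illustration. The sketch below is not the authors' implementation; it is a minimal NumPy reduction of MCCA, assuming the common variant in which each subject's data matrix is PCA-whitened (equalizing individual covariance structure), the whitened matrices are concatenated along the channel dimension, and a final SVD extracts the components common to all subjects. The toy data and all names are invented for illustration.

```python
import numpy as np

def mcca(datasets, n_keep=10):
    """Minimal MCCA sketch: whiten each subject's data, concatenate, then SVD.

    datasets : list of (time x channels) arrays sharing the same time axis.
    Returns the shared components (time x n_keep) and the singular values.
    """
    whitened = []
    for X in datasets:
        X = X - X.mean(axis=0)               # remove per-channel means
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        whitened.append(U)                   # orthonormal basis = whitened data
    Y = np.hstack(whitened)                  # concatenate along channel axis
    U, s, Vt = np.linalg.svd(Y, full_matrices=False)
    shared = U[:, :n_keep] * s[:n_keep]      # components common to all subjects
    return shared, s

# Toy example: three "subjects" observing the same latent source through
# different mixing matrices, plus subject-specific sensor noise.
rng = np.random.default_rng(0)
t = np.linspace(0, 1, 1000)
source = np.sin(2 * np.pi * 7 * t)[:, None]  # common 7 Hz source
subs = [source @ rng.normal(size=(1, 8)) + 0.5 * rng.normal(size=(1000, 8))
        for _ in range(3)]
shared, s = mcca(subs, n_keep=1)
# The leading shared component should track the common source (up to sign).
r = np.corrcoef(shared[:, 0], source[:, 0])[0, 1]
print(f"|correlation with source| = {abs(r):.2f}")
```

Because the source direction is present in every subject's whitened subspace while the noise directions are independent across subjects, the leading singular vector of the concatenated data recovers the common source even though no single-subject average exists.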
Affiliation(s)
- Alain de Cheveigné
- Laboratoire des Systèmes Perceptifs, UMR 8248, CNRS, France; Département d'Etudes Cognitives, Ecole Normale Supérieure, PSL University, Paris, France; UCL Ear Institute, London, United Kingdom.
- Giovanni M Di Liberto
- Laboratoire des Systèmes Perceptifs, UMR 8248, CNRS, France; Département d'Etudes Cognitives, Ecole Normale Supérieure, PSL University, Paris, France
- Dorothée Arzounian
- Laboratoire des Systèmes Perceptifs, UMR 8248, CNRS, France; Département d'Etudes Cognitives, Ecole Normale Supérieure, PSL University, Paris, France
- Daniel D E Wong
- Laboratoire des Systèmes Perceptifs, UMR 8248, CNRS, France; Département d'Etudes Cognitives, Ecole Normale Supérieure, PSL University, Paris, France
- Jens Hjortkjær
- Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, Denmark; Danish Research Centre for Magnetic Resonance, Centre for Functional and Diagnostic Imaging and Research, Copenhagen University Hospital Hvidovre, Denmark
- Søren Fuglsang
- Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, Denmark
18
Fiedler L, Wöstmann M, Herbst SK, Obleser J. Late cortical tracking of ignored speech facilitates neural selectivity in acoustically challenging conditions. Neuroimage 2018; 186:33-42. [PMID: 30367953 DOI: 10.1016/j.neuroimage.2018.10.057] [Citation(s) in RCA: 68] [Impact Index Per Article: 11.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/06/2018] [Revised: 09/12/2018] [Accepted: 10/21/2018] [Indexed: 11/25/2022] Open
Abstract
Listening requires selective neural processing of the incoming sound mixture, which in humans is borne out by a surprisingly clean representation of attended-only speech in auditory cortex. How this neural selectivity is achieved even at negative signal-to-noise ratios (SNR) remains unclear. We show that, under such conditions, a late cortical representation (i.e., neural tracking) of the ignored acoustic signal is key to successful separation of attended and distracting talkers (i.e., neural selectivity). We recorded and modeled the electroencephalographic response of 18 participants who attended to one of two simultaneously presented stories, while the SNR between the two talkers varied dynamically between +6 and -6 dB. The neural tracking showed an increasing early-to-late attention-biased selectivity. Importantly, acoustically dominant (i.e., louder) ignored talkers were tracked neurally by late involvement of fronto-parietal regions, which contributed to enhanced neural selectivity. This neural selectivity, by way of representing the ignored talker, poses a mechanistic neural account of attention under real-life acoustic conditions.
Affiliation(s)
- Lorenz Fiedler
- Department of Psychology, University of Lübeck, Lübeck, Germany.
- Malte Wöstmann
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Sophie K Herbst
- Department of Psychology, University of Lübeck, Lübeck, Germany
- Jonas Obleser
- Department of Psychology, University of Lübeck, Lübeck, Germany.
19
Kida T, Tanaka E, Kakigi R. Adaptive flexibility of the within-hand attentional gradient in touch: An MEG study. Neuroimage 2018; 179:373-384. [DOI: 10.1016/j.neuroimage.2018.06.063] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/09/2018] [Revised: 06/19/2018] [Accepted: 06/21/2018] [Indexed: 10/28/2022] Open
20
Two Sides of the Same Coin: Distinct Sub-Bands in the α Rhythm Reflect Facilitation and Suppression Mechanisms during Auditory Anticipatory Attention. eNeuro 2018; 5:eN-NWR-0141-18. [PMID: 30225355 PMCID: PMC6140117 DOI: 10.1523/eneuro.0141-18.2018] [Citation(s) in RCA: 15] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2018] [Revised: 05/31/2018] [Accepted: 06/21/2018] [Indexed: 11/30/2022] Open
Abstract
Anticipatory attention results in enhanced response to task-relevant stimulus, and reduced processing of unattended input, suggesting the deployment of distinct facilitatory and suppressive mechanisms. α Oscillations are a suitable candidate for supporting these mechanisms. We aimed to examine the role of α oscillations, with a special focus on peak frequencies, in facilitatory and suppressive mechanisms during auditory anticipation, within the auditory and visual regions. Magnetoencephalographic (MEG) data were collected from fourteen healthy young human adults (eight female) performing an auditory task in which spatial attention to sounds was manipulated by visual cues, either informative or not of the target side. By incorporating uninformative cues, we could delineate facilitating and suppressive mechanisms. During anticipation of a visually-cued auditory target, we observed a decrease in α power around 9 Hz in the auditory cortices; and an increase around 13 Hz in the visual regions. Only this power increase in high α significantly correlated with behavior. Importantly, within the right auditory cortex, we showed a larger increase in high α power when attending an ipsilateral sound; and a stronger decrease in low α power when attending a contralateral sound. In summary, we found facilitatory and suppressive attentional mechanisms with distinct timing in task-relevant and task-irrelevant brain areas, differentially correlated to behavior and supported by distinct α sub-bands. We provide new insight into the role of the α peak-frequency by showing that anticipatory attention is supported by distinct facilitatory and suppressive mechanisms, mediated in different low and high sub-bands of the α rhythm, respectively.
21
Mehraei G, Shinn-Cunningham B, Dau T. Influence of talker discontinuity on cortical dynamics of auditory spatial attention. Neuroimage 2018; 179:548-556. [PMID: 29960089 DOI: 10.1016/j.neuroimage.2018.06.067] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/05/2018] [Revised: 06/12/2018] [Accepted: 06/25/2018] [Indexed: 11/25/2022] Open
Abstract
In everyday acoustic scenes, listeners face the challenge of selectively attending to a sound source and maintaining attention on that source long enough to extract meaning. This task is made more daunting by frequent perceptual discontinuities in the acoustic scene: talkers move in space and conversations switch from one speaker to another in a background of many other sources. The inherent dynamics of such switches directly impact our ability to sustain attention. Here we asked how discontinuity in talker voice affects the ability to focus auditory attention on sounds from a particular location, as well as neural correlates of the underlying processes. During electroencephalography recordings, listeners attended to a stream of spoken syllables from one direction while ignoring distracting syllables from a different talker in the opposite hemifield. On some trials, the talker switched locations in the middle of the streams, creating a discontinuity. This switch disrupted attentional modulation of cortical responses; specifically, event-related potentials evoked by syllables in the to-be-attended direction were suppressed and power in alpha oscillations (8-12 Hz) was reduced following the discontinuity. Importantly, at an individual level, the ability to maintain attention to a target stream and report its content, despite the discontinuity, correlates with the magnitude of the disruption of these cortical responses. These results have implications for understanding cortical mechanisms supporting attention. The changes in the cortical responses may serve as a predictor of how well individuals can communicate in complex acoustic scenes and may help in the development of assistive devices and interventions to aid clinical populations.
Affiliation(s)
- Golbarg Mehraei
- Hearing Systems Group, Technical University of Denmark, Ørsteds Plads Building 352, 2800, Kongens Lyngby, Denmark.
- Barbara Shinn-Cunningham
- Center for Research in Sensory Communication and Emerging Neural Technology, Boston University, Boston, MA, 02215, USA; Department of Biomedical Engineering, Boston University, Boston, MA, 02215, USA
- Torsten Dau
- Hearing Systems Group, Technical University of Denmark, Ørsteds Plads Building 352, 2800, Kongens Lyngby, Denmark
22
Miran S, Akram S, Sheikhattar A, Simon JZ, Zhang T, Babadi B. Real-Time Tracking of Selective Auditory Attention From M/EEG: A Bayesian Filtering Approach. Front Neurosci 2018; 12:262. [PMID: 29765298 PMCID: PMC5938416 DOI: 10.3389/fnins.2018.00262] [Citation(s) in RCA: 53] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/18/2017] [Accepted: 04/05/2018] [Indexed: 11/13/2022] Open
Abstract
Humans are able to identify and track a target speaker amid a cacophony of acoustic interference, an ability which is often referred to as the cocktail party phenomenon. Results from several decades of studying this phenomenon have culminated in recent years in various promising attempts to decode the attentional state of a listener in a competing-speaker environment from non-invasive neuroimaging recordings such as magnetoencephalography (MEG) and electroencephalography (EEG). To this end, most existing approaches compute correlation-based measures by either regressing the features of each speech stream to the M/EEG channels (the decoding approach) or vice versa (the encoding approach). To produce robust results, these procedures require multiple trials for training purposes. Also, their decoding accuracy drops significantly when operating at high temporal resolutions. Thus, they are not well-suited for emerging real-time applications such as smart hearing aid devices or brain-computer interface systems, where training data might be limited and high temporal resolutions are desired. In this paper, we close this gap by developing an algorithmic pipeline for real-time decoding of the attentional state. Our proposed framework consists of three main modules: (1) Real-time and robust estimation of encoding or decoding coefficients, achieved by sparse adaptive filtering, (2) Extracting reliable markers of the attentional state, and thereby generalizing the widely-used correlation-based measures thereof, and (3) Devising a near real-time state-space estimator that translates the noisy and variable attention markers to robust and statistically interpretable estimates of the attentional state with minimal delay. Our proposed algorithms integrate various techniques including forgetting factor-based adaptive filtering, ℓ1-regularization, forward-backward splitting algorithms, fixed-lag smoothing, and Expectation Maximization. We validate the performance of our proposed framework using comprehensive simulations as well as application to experimentally acquired M/EEG data. Our results reveal that the proposed real-time algorithms perform nearly as accurately as the existing state-of-the-art offline techniques, while providing a significant degree of adaptivity, statistical robustness, and computational savings.
Affiliation(s)
- Sina Miran
- Department of Electrical and Computer Engineering, University of Maryland College Park, MD, United States
- Alireza Sheikhattar
- Department of Electrical and Computer Engineering, University of Maryland College Park, MD, United States
- Jonathan Z Simon
- Department of Electrical and Computer Engineering, University of Maryland College Park, MD, United States; Institute for Systems Research, University of Maryland College Park, MD, United States; Department of Biology, University of Maryland College Park, MD, United States
- Tao Zhang
- Starkey Hearing Technologies Eden Prairie, MN, United States
- Behtash Babadi
- Department of Electrical and Computer Engineering, University of Maryland College Park, MD, United States; Institute for Systems Research, University of Maryland College Park, MD, United States
23
Melara RD, Ruglass LM, Fertuck EA, Hien DA. Regulation of threat in post-traumatic stress disorder: Associations between inhibitory control and dissociative symptoms. Biol Psychol 2018; 133:89-98. [DOI: 10.1016/j.biopsycho.2018.01.017] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/10/2017] [Revised: 01/12/2018] [Accepted: 01/28/2018] [Indexed: 11/26/2022]
24
Abstract
Auditory perception is our main gateway to communication with others via speech and music, and it also plays an important role in alerting and orienting us to new events. This review provides an overview of selected topics pertaining to the perception and neural coding of sound, starting with the first stage of filtering in the cochlea and its profound impact on perception. The next topic, pitch, has been debated for millennia, but recent technical and theoretical developments continue to provide us with new insights. Cochlear filtering and pitch both play key roles in our ability to parse the auditory scene, enabling us to attend to one auditory object or stream while ignoring others. An improved understanding of the basic mechanisms of auditory perception will aid us in the quest to tackle the increasingly important problem of hearing loss in our aging population.
Affiliation(s)
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Minneapolis, Minnesota 55455;
25
Ulke C, Huang J, Schwabedal JTC, Surova G, Mergl R, Hensch T. Coupling and dynamics of cortical and autonomic signals are linked to central inhibition during the wake-sleep transition. Sci Rep 2017; 7:11804. [PMID: 28924202 PMCID: PMC5603599 DOI: 10.1038/s41598-017-09513-6] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/15/2017] [Accepted: 07/25/2017] [Indexed: 01/04/2023] Open
Abstract
Maintaining temporal coordination across physiological systems is crucial at the wake-sleep transition. As shown in recent studies, the degree of coordination between brain and autonomic arousal influences attention, which highlights a previously unrecognised point of potential failure in the attention system. To investigate how cortical and autonomic dynamics are linked to the attentive process we analysed electroencephalogram, electrocardiogram and skin conductance data of 39 healthy adults recorded during a 2-h resting-state oddball experiment. We related cross-correlations to fluctuation periods of cortical and autonomic signals and correlated obtained measures to event-related potentials N1 and P2, reflecting excitatory and inhibitory processes. Increasing alignment of cortical and autonomic signals and longer periods of vigilance fluctuations corresponded to a larger and earlier P2; no such relations were found for N1. We compared two groups, with (I) and without measurable (II) delay in cortico-autonomic correlations. Individuals in Group II had more stable vigilance fluctuations, larger and earlier P2 and fell asleep more frequently than individuals in Group I. Our results support the hypothesis of a link between cortico-autonomic coupling and dynamics and central inhibition. Quantifying this link could help refine classification in psychiatric disorders with attention and sleep-related symptoms, particularly in ADHD, depression, and insomnia.
Affiliation(s)
- Christine Ulke
- Department of Psychiatry and Psychotherapy, University of Leipzig, Leipzig, Germany; Research Center of the German Depression Foundation, Leipzig, Germany.
- Jue Huang
- Department of Psychiatry and Psychotherapy, University of Leipzig, Leipzig, Germany
- Galina Surova
- Department of Psychiatry and Psychotherapy, University of Leipzig, Leipzig, Germany
- Roland Mergl
- Department of Psychiatry and Psychotherapy, University of Leipzig, Leipzig, Germany
- Tilman Hensch
- Department of Psychiatry and Psychotherapy, University of Leipzig, Leipzig, Germany
26
The impact of visual gaze direction on auditory object tracking. Sci Rep 2017; 7:4640. [PMID: 28680049 PMCID: PMC5498632 DOI: 10.1038/s41598-017-04475-1] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/23/2017] [Accepted: 05/16/2017] [Indexed: 11/25/2022] Open
Abstract
Subjective experience suggests that we are able to direct our auditory attention independently of our visual gaze, e.g., when shadowing a nearby conversation at a cocktail party. But what are the consequences at the behavioural and neural level? While numerous studies have investigated auditory attention and visual gaze independently, little is known about their interaction during selective listening. In the present EEG study, we manipulated visual gaze independently of auditory attention while participants detected targets presented from one of three loudspeakers. We observed increased response times when gaze was directed away from the locus of auditory attention. Further, we found an increase in occipital alpha-band power contralateral to the direction of gaze, indicative of suppression of distracting input. Finally, this condition also led to stronger central theta-band power, which correlated with the observed effect in response times, indicative of differences in top-down processing. Our data suggest that a misalignment between gaze and auditory attention both reduces behavioural performance and modulates the underlying neural processes. The involvement of central theta-band and occipital alpha-band effects is in line with compensatory neural mechanisms such as increased cognitive control and the suppression of task-irrelevant inputs.
27
Shinn-Cunningham B, Best V, Lee AKC. Auditory Object Formation and Selection. Springer Handbook of Auditory Research 2017. [DOI: 10.1007/978-3-319-51662-2_2]
28
Payne L, Rogers CS, Wingfield A, Sekuler R. A right-ear bias of auditory selective attention is evident in alpha oscillations. Psychophysiology 2016; 54:528-535. [PMID: 28039860] [DOI: 10.1111/psyp.12815]
Abstract
Auditory selective attention makes it possible to pick out one speech stream that is embedded in a multispeaker environment. We adapted a cued dichotic listening task to examine suppression of a speech stream lateralized to the nonattended ear, and to evaluate the effects of attention on the right ear's well-known advantage in the perception of linguistic stimuli. After being cued to attend to input from either their left or right ear, participants heard two different four-word streams presented simultaneously to the separate ears. Following each dichotic presentation, participants judged whether a spoken probe word had been in the attended ear's stream. We used EEG signals to track participants' spatial lateralization of auditory attention, which is marked by interhemispheric differences in EEG alpha (8-14 Hz) power. A right-ear advantage (REA) was evident in faster response times and greater sensitivity in distinguishing attended from unattended words. Consistent with the REA, we found strongest parietal and right frontotemporal alpha modulation during the attend-right condition. These findings provide evidence for a link between selective attention and the REA during directed dichotic listening.
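The interhemispheric alpha-power comparison this study relies on can be sketched numerically. The snippet below is an illustration only, not the authors' pipeline: the sampling rate, band edges, and synthetic two-channel signals are all assumptions. It estimates 8-14 Hz power per channel from an FFT spectrum and forms a simple (right - left)/(right + left) lateralization index.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 250  # sampling rate in Hz (assumed for this toy example)

def alpha_power(signal, fs, band=(8.0, 14.0)):
    """Mean spectral power of `signal` inside the alpha band."""
    freqs = np.fft.rfftfreq(signal.size, d=1.0 / fs)
    power = np.abs(np.fft.rfft(signal)) ** 2
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return power[mask].mean()

def lateralization_index(left_chan, right_chan, fs):
    """(right - left) / (right + left) alpha power: positive values mean
    relatively more alpha power over the right hemisphere."""
    pr = alpha_power(right_chan, fs)
    pl = alpha_power(left_chan, fs)
    return (pr - pl) / (pr + pl)

# Synthetic two-channel example: a 10 Hz rhythm that is stronger on the right.
t = np.arange(0, 2.0, 1.0 / fs)
left = 0.5 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
right = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.1 * rng.standard_normal(t.size)
li = lateralization_index(left, right, fs)  # clearly positive here
```

The index is bounded in [-1, 1], which makes it easy to compare across participants regardless of absolute power.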
Affiliation(s)
- Lisa Payne
- Volen Center for Complex Systems, Brandeis University, Waltham, Massachusetts, USA
- Chad S Rogers
- Volen Center for Complex Systems, Brandeis University, Waltham, Massachusetts, USA
- Arthur Wingfield
- Volen Center for Complex Systems, Brandeis University, Waltham, Massachusetts, USA
- Robert Sekuler
- Volen Center for Complex Systems, Brandeis University, Waltham, Massachusetts, USA
29
Akram S, Simon JZ, Babadi B. Dynamic Estimation of the Auditory Temporal Response Function From MEG in Competing-Speaker Environments. IEEE Trans Biomed Eng 2016; 64:1896-1905. [PMID: 28113290] [DOI: 10.1109/tbme.2016.2628884]
Abstract
OBJECTIVE A central problem in computational neuroscience is to characterize brain function, with statistical confidence, using neural activity recorded in response to sensory inputs. Most existing estimation techniques, such as those based on reverse correlation, exhibit two main limitations: first, they are unable to produce dynamic estimates of the neural activity at a resolution comparable with that of the recorded data, and second, they often require heavy averaging across time as well as multiple trials in order to construct statistical confidence intervals for a precise interpretation of the data. In this paper, we address these issues for estimating the auditory temporal response function (TRF) as a parametric computational model of selective auditory attention in competing-speaker environments. METHODS The TRF is a sparse kernel that regresses auditory MEG data with respect to the envelopes of the speech streams. We develop an efficient estimation technique by exploiting the sparsity of the TRF and adopting an ℓ1-regularized least squares estimator capable of producing dynamic TRF estimates, as well as confidence intervals, at sampling resolution from single-trial MEG data. RESULTS We evaluate the performance of the proposed estimator using evoked MEG responses from the human brain in an auditory attention experiment with two competing speakers. The TRFs are estimated dynamically over time with multisecond resolution, a significant improvement over previous results with a temporal resolution of the order of a minute. CONCLUSION Application of our method to MEG data reveals a precise characterization of the modulation of the M50 and M100 evoked responses with respect to the attentional state of the subject at multisecond resolution.
SIGNIFICANCE The proposed estimation technique provides a high-resolution, real-time attention decoding framework in multispeaker environments, with potential application in smart hearing aid technology.
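A minimal sketch of the ℓ1-regularized least squares idea behind sparse TRF estimation, assuming a lagged-envelope design matrix and plain ISTA (iterative soft-thresholding) as the solver. The lag count, regularization weight, and synthetic data are hypothetical, and the paper's actual estimator is considerably more elaborate (dynamic over time, with confidence intervals); this only shows the core regression.

```python
import numpy as np

rng = np.random.default_rng(1)

def lagged_design(envelope, n_lags):
    """Design matrix whose k-th column is the envelope delayed by k samples."""
    n = envelope.size
    X = np.zeros((n, n_lags))
    for k in range(n_lags):
        X[k:, k] = envelope[: n - k]
    return X

def ista_lasso(X, y, lam, n_iter=500):
    """Iterative soft-thresholding for min_w 0.5*||y - Xw||^2 + lam*||w||_1."""
    L = np.linalg.norm(X, 2) ** 2  # Lipschitz constant of the smooth part
    w = np.zeros(X.shape[1])
    for _ in range(n_iter):
        z = w - X.T @ (X @ w - y) / L                          # gradient step
        w = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)  # soft-threshold
    return w

# Synthetic single-trial data: a sparse "TRF" with two nonzero lags plus noise.
envelope = rng.standard_normal(2000)               # stand-in for a speech envelope
true_w = np.zeros(30)
true_w[5], true_w[12] = 1.0, -0.7
X = lagged_design(envelope, 30)
y = X @ true_w + 0.05 * rng.standard_normal(2000)  # stand-in for MEG data
w_hat = ista_lasso(X, y, lam=5.0)                  # recovers the two active lags
```

The ℓ1 penalty drives most lag coefficients exactly to zero, which is what allows stable estimates from a single trial instead of a heavily averaged response.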
30
Petsas T, Harrison J, Kashino M, Furukawa S, Chait M. The effect of distraction on change detection in crowded acoustic scenes. Hear Res 2016; 341:179-189. [PMID: 27598040] [PMCID: PMC5090045] [DOI: 10.1016/j.heares.2016.08.015]
Abstract
In this series of behavioural experiments we investigated the effect of distraction on the maintenance of acoustic scene information in short-term memory. Stimuli were artificial acoustic 'scenes' composed of up to twelve concurrent tone-pip streams ('sources'). A gap (1000 ms) was inserted partway through the scene; in 50% of the trials, a change (the appearance of a new source or the disappearance of an existing source) occurred after the gap. Listeners were instructed to monitor the unfolding soundscapes for these events. Distraction was measured by presenting distractor stimuli during the gap. Experiments 1a and 1b used a dual-task design in which listeners performed a task with varying attentional demands ('High Demand' vs. 'Low Demand') on brief auditory (Experiment 1a) or visual (Experiment 1b) signals presented during the gap. Experiments 2 and 3 required participants to ignore distractor sounds and focus on the change detection task. Our results demonstrate that the maintenance of scene information in short-term memory is influenced by the availability of attentional and/or processing resources during the gap, and that this dependence appears to be modality specific. We also show that these processes are susceptible to bottom-up distraction even when the distractors are not novel but occur on each trial. Change detection performance is systematically linked with the independently determined perceptual salience of the distractor sound. The findings also demonstrate that the present task may be a useful objective means of determining relative perceptual salience.
Affiliation(s)
- Makio Kashino
- Human Information Science Laboratory, NTT Communication Science Laboratories, NTT Corporation, 3-1, Morinosato-Wakamiya, Atsugi-shi, Kanagawa, Japan
- Shigeto Furukawa
- Human Information Science Laboratory, NTT Communication Science Laboratories, NTT Corporation, 3-1, Morinosato-Wakamiya, Atsugi-shi, Kanagawa, Japan
- Maria Chait
- UCL Ear Institute, 332 Gray's Inn Rd, London, UK.
31
Cortese BM, Leslie K, Uhde TW. Differential odor sensitivity in PTSD: Implications for treatment and future research. J Affect Disord 2015; 179:23-30. [PMID: 25845746] [PMCID: PMC4437877] [DOI: 10.1016/j.jad.2015.03.026]
Abstract
BACKGROUND Given that odors enhance the retrieval of autobiographical memories, induce physiological arousal, and trigger trauma-related flashbacks, it is reasonable to hypothesize that odors play a significant role in the pathophysiology of posttraumatic stress disorder (PTSD). For these reasons, this preliminary study sought to examine self-reported, odor-elicited distress in PTSD. METHODS Combat veterans with PTSD (CV+PTSD: N=30), combat veterans without PTSD (CV-PTSD: N=22), and healthy controls (HC: N=21) completed an olfactory questionnaire that provided information on the hedonic valence of odors as well as their ability to elicit distress or relaxation. RESULTS Two main findings were revealed. First, compared to HC, CV+PTSD, but not CV-PTSD, reported a higher prevalence of distress to a limited number of select odors, including fuel (p=.004), blood (p=.02), gunpowder (p=.03), and burning hair (p=.02). Second, in contrast to this increased sensitivity, both groups of veterans showed a blunting effect relative to HC, with lower rates of distress in response to negative hedonic odors (p=.03) and lower rates of relaxation in response to positive hedonic odors (p<.001). LIMITATIONS The study is limited by its use of retrospective survey methods; future investigations would benefit from laboratory measures taken before, during, and after deployment. CONCLUSION The present findings suggest a complex role of olfaction in the biological functions of threat detection. Several theoretical models are discussed. One possible explanation for increased sensitivity to select odors with decreased sensitivity to other odors is the co-occurrence of an attentional bias toward threat odors with selective ignoring of distractor odors. Working together, these processes may optimize survival.
Affiliation(s)
- Kimberly Leslie
- Department of Psychiatry and Behavioral Sciences, MUSC, Charleston, SC, US
- Thomas W. Uhde
- Department of Psychiatry and Behavioral Sciences, MUSC, Charleston, SC, US
32
Paiva TO, Almeida PR, Ferreira-Santos F, Vieira JB, Silveira C, Chaves PL, Barbosa F, Marques-Teixeira J. Similar sound intensity dependence of the N1 and P2 components of the auditory ERP: Averaged and single trial evidence. Clin Neurophysiol 2015; 127:499-508. [PMID: 26154993] [DOI: 10.1016/j.clinph.2015.06.016]
Abstract
OBJECTIVE The literature suggests that the N1 and P2 waves of the auditory ERP are dissociable at the developmental, experimental, and source levels. At the experimental level, inconsistent findings suggest different effects of intensity on the amplitudes of the auditory N1 and P2. Our main goal was to analyze the intensity dependence of the auditory N1 and P2 while controlling for habituation effects. METHODS We examined the intensity dependence of both averaged and single-trial auditory N1 and P2 waves elicited in a repeated-stimulation protocol. RESULTS N1 and P2 revealed similar intensity dependence on both standard and filter-denoised ERP, with a linear tendency for higher intensities to elicit higher absolute peak amplitudes. At the single-trial level, both waves covaried irrespective of stimulus intensity and trial order. CONCLUSIONS Our results suggest that stimulus intensity variation induces similar effects on both the N1 and P2 and partially contradict previous data that classified the P2 as a non-habituating component. SIGNIFICANCE Our findings contribute to the ongoing discussion on the functional significance of the auditory P2 deflection. In addition, the present work demonstrated the applicability of a filter denoising method for single-trial estimation in the analysis of experimental effects on auditory ERP components.
Affiliation(s)
- Tiago O Paiva
- Laboratory of Neuropsychophysiology, Faculty of Psychology and Education Sciences of the University of Porto, Portugal; Faculty of Medicine of the University of Porto, Portugal.
- Pedro R Almeida
- Laboratory of Neuropsychophysiology, Faculty of Psychology and Education Sciences of the University of Porto, Portugal; School of Criminology, Faculty of Law of the University of Porto, Portugal
- Fernando Ferreira-Santos
- Laboratory of Neuropsychophysiology, Faculty of Psychology and Education Sciences of the University of Porto, Portugal
- Joana B Vieira
- The Brain and Mind Institute, University of Western Ontario, Canada
- Pedro L Chaves
- Laboratory of Neuropsychophysiology, Faculty of Psychology and Education Sciences of the University of Porto, Portugal; Faculty of Medicine of the University of Porto, Portugal; Mind, Brain Imaging and Neuroethics Research Unit, University of Ottawa Institute of Mental Health Research, Ottawa, Canada
- Fernando Barbosa
- Laboratory of Neuropsychophysiology, Faculty of Psychology and Education Sciences of the University of Porto, Portugal
- João Marques-Teixeira
- Laboratory of Neuropsychophysiology, Faculty of Psychology and Education Sciences of the University of Porto, Portugal
33
Chen S, Melara RD. Rejection positivity predicts trial-to-trial reaction times in an auditory selective attention task: a computational analysis of inhibitory control. Front Hum Neurosci 2014; 8:585. [PMID: 25191244] [PMCID: PMC4137173] [DOI: 10.3389/fnhum.2014.00585]
Abstract
A series of computer simulations using variants of a formal model of attention (Melara and Algom, 2003) probed the role of rejection positivity (RP), a slow-wave electroencephalographic (EEG) component, in the inhibitory control of distraction. Behavioral and EEG data were recorded as participants performed auditory selective attention tasks. Simulations that modulated processes of distractor inhibition accounted well for reaction-time (RT) performance, whereas those that modulated target excitation did not. A model that incorporated RP from actual EEG recordings in estimating distractor inhibition was superior in predicting changes in RT as a function of distractor salience across conditions. A model that additionally incorporated momentary fluctuations in EEG as the source of trial-to-trial variation in performance precisely predicted individual RTs within each condition. The results lend support to the linking proposition that RP controls the speed of responding to targets through the inhibitory control of distractors.
Affiliation(s)
- Sufen Chen
- Department of Neurology, Montefiore Medical Center, Bronx, NY, USA
- Robert D Melara
- Department of Psychology, North Academic Center, City College, City University of New York, New York, NY, USA
34
Kong YY, Mullangi A, Ding N. Differential modulation of auditory responses to attended and unattended speech in different listening conditions. Hear Res 2014; 316:73-81. [PMID: 25124153] [DOI: 10.1016/j.heares.2014.07.009]
Abstract
This study investigates how top-down attention modulates neural tracking of the speech envelope in different listening conditions. In the quiet conditions, a single speech stream was presented and the subjects paid attention to the speech stream (active listening) or watched a silent movie instead (passive listening). In the competing speaker (CS) conditions, two speakers of opposite genders were presented diotically. Ongoing electroencephalographic (EEG) responses were measured in each condition and cross-correlated with the speech envelope of each speaker at different time lags. In quiet, active and passive listening resulted in similar neural responses to the speech envelope. In the CS conditions, however, the shape of the cross-correlation function was remarkably different between the attended and unattended speech. The cross-correlation with the attended speech showed stronger N1 and P2 responses but a weaker P1 response compared to the cross-correlation with the unattended speech. Furthermore, the N1 response to the attended speech in the CS condition was enhanced and delayed compared with the active listening condition in quiet, while the P2 response to the unattended speaker in the CS condition was attenuated compared with the passive listening in quiet. Taken together, these results demonstrate that top-down attention differentially modulates envelope-tracking neural activity at different time lags and suggest that top-down attention can both enhance the neural responses to the attended sound stream and suppress the responses to the unattended sound stream.
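The core analysis here, cross-correlating ongoing EEG with each speaker's envelope at a range of time lags, can be sketched in a few lines. This is an illustrative toy, not the authors' code: the signal lengths, the 10-sample response delay, and the noise level are all assumptions.

```python
import numpy as np

rng = np.random.default_rng(2)

def lagged_xcorr(eeg, envelope, max_lag):
    """Normalized cross-correlation, with the EEG lagging the envelope
    by 0..max_lag samples (positive lags = neural response delay)."""
    eeg = (eeg - eeg.mean()) / eeg.std()
    env = (envelope - envelope.mean()) / envelope.std()
    n = eeg.size
    return np.array([np.mean(env[: n - lag] * eeg[lag:])
                     for lag in range(max_lag + 1)])

# Synthetic example: the "attended" envelope drives the EEG at a 10-sample
# delay; the "unattended" envelope is unrelated to the response.
attended = rng.standard_normal(5000)
unattended = rng.standard_normal(5000)
eeg = np.roll(attended, 10) + 0.5 * rng.standard_normal(5000)
xc_att = lagged_xcorr(eeg, attended, max_lag=30)      # peaks at lag 10
xc_unatt = lagged_xcorr(eeg, unattended, max_lag=30)  # stays near zero
```

The shape of the resulting cross-correlation function over lags is what the study compares between attended and unattended streams (P1-, N1-, and P2-like deflections at different latencies).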
Affiliation(s)
- Ying-Yee Kong
- Department of Speech Language Pathology & Audiology, Northeastern University, Boston, MA 02115, United States; Bioengineering Program, Northeastern University, Boston, MA 02115, United States.
- Ala Mullangi
- Bioengineering Program, Northeastern University, Boston, MA 02115, United States
- Nai Ding
- Department of Psychology, New York University, NY 10012, United States
35
Bidet-Caulet A, Buchanan KG, Viswanath H, Black J, Scabini D, Bonnet-Brilhault F, Knight RT. Impaired Facilitatory Mechanisms of Auditory Attention After Damage of the Lateral Prefrontal Cortex. Cereb Cortex 2014; 25:4126-34. [PMID: 24925773] [DOI: 10.1093/cercor/bhu131]
Abstract
There is growing evidence that auditory selective attention operates via distinct facilitatory and inhibitory mechanisms enabling selective enhancement and suppression of sound processing, respectively. The lateral prefrontal cortex (LPFC) plays a crucial role in the top-down control of selective attention. However, whether the LPFC controls facilitatory, inhibitory, or both attentional mechanisms is unclear. Facilitatory and inhibitory mechanisms were assessed, in patients with LPFC damage, by comparing event-related potentials (ERPs) to attended and ignored sounds with ERPs to these same sounds when attention was equally distributed to all sounds. In control subjects, we observed 2 late frontally distributed ERP components: a transient facilitatory component occurring from 150 to 250 ms after sound onset; and an inhibitory component onsetting at 250 ms. Only the facilitatory component was affected in patients with LPFC damage: this component was absent when attending to sounds delivered in the ear contralateral to the lesion, with the most prominent decreases observed over the damaged brain regions. These findings have 2 important implications: (i) they provide evidence for functionally distinct facilitatory and inhibitory mechanisms supporting late auditory selective attention; (ii) they show that the LPFC is involved in the control of the facilitatory mechanisms of auditory attention.
Affiliation(s)
- Aurélie Bidet-Caulet
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA; Lyon Neuroscience Research Center, Brain Dynamics and Cognition Team, CRNL, INSERM U1028, CNRS UMR5292, University of Lyon 1, Lyon, France
- Kelly G Buchanan
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
- Humsini Viswanath
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
- Jessica Black
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
- Donatella Scabini
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA
- Frédérique Bonnet-Brilhault
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA; INSERM, UMR930, Université François-Rabelais de Tours, CHRU de Tours, France
- Robert T Knight
- Helen Wills Neuroscience Institute, University of California, Berkeley, CA, USA; Department of Psychology, University of California, Berkeley, CA, USA
36
Choi I, Wang L, Bharadwaj H, Shinn-Cunningham B. Individual differences in attentional modulation of cortical responses correlate with selective attention performance. Hear Res 2014; 314:10-9. [PMID: 24821552] [DOI: 10.1016/j.heares.2014.04.008]
Abstract
Many studies have shown that attention modulates the cortical representation of an auditory scene, emphasizing an attended source while suppressing competing sources. Yet, individual differences in the strength of this attentional modulation and their relationship with selective attention ability are poorly understood. Here, we ask whether differences in how strongly attention modulates cortical responses reflect differences in normal-hearing listeners' selective auditory attention ability. We asked listeners to attend to one of three competing melodies and identify its pitch contour while we measured cortical electroencephalographic responses. The three melodies were either from widely separated pitch ranges ("easy trials") or from a narrow, overlapping pitch range ("hard trials"). The melodies started at slightly different times; listeners attended either the leading or the lagging melody. Because of the timing of the onsets, the leading melody drew attention exogenously. In contrast, attending the lagging melody required listeners to direct top-down attention volitionally. We quantified how attention amplified the auditory N1 response to the attended melody and found large individual differences in the N1 amplification, even though only correctly answered trials were used to quantify the ERP gain. Importantly, listeners with the strongest amplification of the N1 response to the lagging melody in the easy trials were the best performers across other types of trials. Our results raise the possibility that individual differences in the strength of top-down gain control reflect inherent differences in the ability to control top-down attention.
Affiliation(s)
- Inyong Choi
- Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA 02215, USA
- Le Wang
- Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA 02215, USA
- Hari Bharadwaj
- Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA 02215, USA; Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA
- Barbara Shinn-Cunningham
- Center for Computational Neuroscience and Neural Technology, Boston University, Boston, MA 02215, USA; Department of Biomedical Engineering, Boston University, Boston, MA 02215, USA.
37
Nourski KV, Steinschneider M, Oya H, Kawasaki H, Howard MA. Modulation of response patterns in human auditory cortex during a target detection task: an intracranial electrophysiology study. Int J Psychophysiol 2014; 95:191-201. [PMID: 24681353] [DOI: 10.1016/j.ijpsycho.2014.03.006]
Abstract
Selective attention enhances cortical activity representing an attended sound stream in human posterolateral superior temporal gyrus (PLST). It is unclear, however, what mechanisms are associated with a target detection task that necessitates sustained attention (vigilance) to a sound stream. We compared responses elicited by target and non-target sounds, and by sounds presented in a passive-listening paradigm. Subjects were neurosurgical patients undergoing invasive monitoring for medically refractory epilepsy. Stimuli were complex tones, band-limited noise bursts, and speech syllables. High gamma cortical activity (70-150 Hz) was examined in all subjects using subdural grid electrodes implanted over PLST. Additionally, responses were measured from depth electrodes implanted within Heschl's gyrus (HG) in one subject. Responses to target sounds recorded from PLST were increased when compared to responses elicited by the same sounds when they were non-targets, and when they were presented during passive listening. Increases in high gamma activity to target sounds occurred during later portions (after 250 ms) of the response. These increases were related to the task and not to detailed stimulus characteristics. In contrast, earlier activity that did not vary across conditions did represent stimulus acoustic characteristics. Effects observed on PLST were not noted in HG. No consistent effects were noted in the averaged evoked potentials in either cortical region. We conclude that task dependence modulates later activity in PLST during vigilance. Later activity may represent feedback from higher cortical areas. Study of concurrently recorded activity from frontoparietal areas is necessary to further clarify task-related modulation of activity on PLST.
Affiliation(s)
- Kirill V Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA.
- Hiroyuki Oya
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
- Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
- Matthew A Howard
- Department of Neurosurgery, The University of Iowa, Iowa City, IA 52242, USA
38
Challenging perceptual tasks require more attention: The influence of task difficulty on the N1 effect of temporal orienting. Brain Cogn 2014; 84:153-63. [DOI: 10.1016/j.bandc.2013.12.001]
39
Owen JP, Marco EJ, Desai S, Fourie E, Harris J, Hill SS, Arnett AB, Mukherjee P. Abnormal white matter microstructure in children with sensory processing disorders. NeuroImage Clin 2013; 2:844-53. [PMID: 24179836] [PMCID: PMC3778265] [DOI: 10.1016/j.nicl.2013.06.009]
Abstract
Sensory processing disorders (SPD) affect 5-16% of school-aged children and can cause long-term deficits in intellectual and social development. Current theories of SPD implicate primary sensory cortical areas and higher-order multisensory integration (MSI) cortical regions. We investigate the role of white matter microstructural abnormalities in SPD using diffusion tensor imaging (DTI). DTI was acquired in 16 boys, 8-11 years old, with SPD and 24 age-, gender-, handedness- and IQ-matched neurotypical controls. Behavior was characterized using a parent-report sensory behavior measure, the Sensory Profile. Fractional anisotropy (FA), mean diffusivity (MD) and radial diffusivity (RD) were calculated. Tract-based spatial statistics were used to detect significant group differences in white matter integrity and to determine if microstructural parameters were significantly correlated with behavioral measures. Significant decreases in FA and increases in MD and RD were found in the SPD cohort compared to controls, primarily involving posterior white matter including the posterior corpus callosum, posterior corona radiata and posterior thalamic radiations. Strong positive correlations were observed between FA of these posterior tracts and auditory, multisensory, and inattention scores (r = 0.51-0.78; p < 0.001), with strong negative correlations between RD and multisensory and inattention scores (r = -0.61 to -0.71; p < 0.001). To our knowledge, this is the first study to demonstrate reduced white matter microstructural integrity in children with SPD. We find that the disrupted white matter microstructure predominantly involves posterior cerebral tracts and correlates strongly with atypical unimodal and multisensory integration behavior. These findings suggest abnormal white matter as a biological basis for SPD and may also distinguish SPD from overlapping clinical conditions such as autism and attention deficit hyperactivity disorder.
Abnormal posterior white matter microstructure in sensory processing disorders (SPD). Posterior cerebral white matter microstructure correlates with sensory behavior. DTI may help distinguish SPD from autism spectrum disorder and ADHD. DTI may yield prognostic and predictive biomarkers of SPD for clinical use.
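The DTI scalars reported above (FA, MD, RD) have standard closed-form definitions in terms of the diffusion-tensor eigenvalues λ1 ≥ λ2 ≥ λ3: MD is their mean, RD the mean of the two smaller eigenvalues, and FA a normalized measure of their spread. A minimal sketch; the example eigenvalues are hypothetical, merely typical orders of magnitude for brain tissue in mm²/s.

```python
import numpy as np

def dti_scalars(eigvals):
    """FA, MD, and RD from the three eigenvalues of a diffusion tensor."""
    l1, l2, l3 = sorted(eigvals, reverse=True)  # l1 = axial diffusivity
    md = (l1 + l2 + l3) / 3.0                   # mean diffusivity
    rd = (l2 + l3) / 2.0                        # radial diffusivity
    num = np.sqrt((l1 - md) ** 2 + (l2 - md) ** 2 + (l3 - md) ** 2)
    den = np.sqrt(l1 ** 2 + l2 ** 2 + l3 ** 2)
    fa = np.sqrt(1.5) * num / den               # fractional anisotropy, in [0, 1]
    return fa, md, rd

# Strongly anisotropic tensor, as in coherent white matter (values in mm^2/s).
fa_wm, md_wm, rd_wm = dti_scalars([1.7e-3, 0.3e-3, 0.3e-3])
# Isotropic diffusion: FA is zero by construction.
fa_iso, _, _ = dti_scalars([0.8e-3, 0.8e-3, 0.8e-3])
```

The pattern reported in the study (lower FA with higher MD and RD in the SPD cohort) corresponds to eigenvalues becoming more similar and overall diffusion increasing.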
Affiliation(s)
- Julia P. Owen
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, 185 Berry Street, San Francisco, CA 94107, USA
- Program in Bioengineering, University of California, San Francisco, 1700 4th St., San Francisco, CA 94158, USA
- Elysa J. Marco
- Department of Neurology, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
- Shivani Desai
- Department of Neurology, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
- Emily Fourie
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, 185 Berry Street, San Francisco, CA 94107, USA
- Julia Harris
- Department of Neurology, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
- Susanna S. Hill
- Department of Neurology, University of California, San Francisco, 675 Nelson Rising Lane, San Francisco, CA 94158, USA
- Anne B. Arnett
- Department of Psychology, University of Denver, Frontier Hall, 2155 S. Race Street, Denver, CO 80208, USA
- Pratik Mukherjee
- Department of Radiology and Biomedical Imaging, University of California, San Francisco, 185 Berry Street, San Francisco, CA 94107, USA
- Program in Bioengineering, University of California, San Francisco, 1700 4th St., San Francisco, CA 94158, USA
- Corresponding author at: Center for Molecular and Functional Imaging, Department of Radiology and Biomedical Imaging, University of California, San Francisco, UCSF Box 0946, 185 Berry Street, Suite 350, San Francisco, CA 94107, USA. Tel.: +1 415 514 8186; fax: +1 415 353 8593.
40
Lange K. The ups and downs of temporal orienting: a review of auditory temporal orienting studies and a model associating the heterogeneous findings on the auditory N1 with opposite effects of attention and prediction. Front Hum Neurosci 2013; 7:263. [PMID: 23781186] [PMCID: PMC3678089] [DOI: 10.3389/fnhum.2013.00263]
Abstract
The temporal orienting of attention refers to the process of focusing (neural) resources on a particular time point in order to boost the processing of, and responding to, sensory events. Temporal attention is manipulated by varying the task-relevance of events at different time points or by inducing expectations that an event will occur at a particular time point. Notably, the electrophysiological correlates of these manipulations at early processing stages are not identical: auditory studies operationalizing temporal attention through task-relevance consistently found enhancements of early, sensory processing, as shown in the N1 component of the auditory event-related potential (ERP). By contrast, previous work on temporal orienting based on expectations showed mixed results: early, sensory processing was either enhanced, attenuated, or not affected at all. In the present work, I will review existing findings on temporal orienting with a special focus on the auditory modality and present a working model to reconcile the previously heterogeneous results. Specifically, I will suggest that when expectations are used to manipulate attention, this leads both to an orienting of attention and to the generation of precise predictions about the upcoming event. Attention and prediction are assumed to have opposite effects on early auditory processing, with temporal attention increasing and temporal prediction decreasing the associated ERP correlate, the auditory N1. The heterogeneous findings of studies that manipulate temporal orienting by inducing expectations may thus be the consequence of differences in the relative contribution of attention and prediction processes. The model's predictions will be discussed in the context of a functional interpretation of the auditory N1 as an attention-call signal, as presented in a recent model of auditory processing.
Affiliation(s)
- Kathrin Lange
- Institut für Experimentelle Psychologie, Heinrich-Heine-Universität Düsseldorf Düsseldorf, Germany
41
Nardo D, Santangelo V, Macaluso E. Spatial orienting in complex audiovisual environments. Hum Brain Mapp 2013;35:1597-614. [PMID: 23616340] [DOI: 10.1002/hbm.22276]
Abstract
Previous studies on crossmodal spatial orienting typically used simple and stereotyped stimuli in the absence of any meaningful context. This study combined computational models, behavioural measures and functional magnetic resonance imaging to investigate audiovisual spatial interactions in naturalistic settings. We created short videos portraying everyday life situations that included a lateralised visual event and a co-occurring sound, either on the same or on the opposite side of space. Subjects viewed the videos with or without eye-movements allowed (overt or covert orienting). For each video, visual and auditory saliency maps were used to index the strength of stimulus-driven signals, and eye-movements were used as a measure of the efficacy of the audiovisual events for spatial orienting. Results showed that visual salience modulated activity in higher-order visual areas, whereas auditory salience modulated activity in the superior temporal cortex. Auditory salience modulated activity also in the posterior parietal cortex, but only when audiovisual stimuli occurred on the same side of space (multisensory spatial congruence). Orienting efficacy affected activity in the visual cortex, within the same regions modulated by visual salience. These patterns of activation were comparable in overt and covert orienting conditions. Our results demonstrate that, during viewing of complex multisensory stimuli, activity in sensory areas reflects both stimulus-driven signals and their efficacy for spatial orienting; and that the posterior parietal cortex combines spatial information about the visual and the auditory modality.
Affiliation(s)
- Davide Nardo
- Neuroimaging Laboratory, Santa Lucia Foundation, Rome, Italy
42
Kauramäki J, Jääskeläinen IP, Hänninen JL, Auranen T, Nummenmaa A, Lampinen J, Sams M. Two-stage processing of sounds explains behavioral performance variations due to changes in stimulus contrast and selective attention: an MEG study. PLoS One 2012;7:e46872. [PMID: 23071654] [PMCID: PMC3469590] [DOI: 10.1371/journal.pone.0046872]
Abstract
Selectively attending to task-relevant sounds whilst ignoring background noise is one of the most impressive feats performed by the human brain. Here, we studied the underlying neural mechanisms by recording magnetoencephalographic (MEG) responses of 14 healthy human subjects while they performed a near-threshold auditory discrimination task vs. a visual control task of similar difficulty. The auditory stimuli consisted of notch-filtered continuous noise masker sounds and of 1020-Hz target tones occasionally (p = 0.1) replacing 1000-Hz standard tones of 300-ms duration that were embedded at the center of the notches, the widths of which were parametrically varied. As a control for masker effects, tone-evoked responses were additionally recorded without the masker sound. Selective attention to tones significantly increased the amplitude of the onset M100 response at ~100 ms to the standard tones in the presence of the masker sounds, especially with notches narrower than the critical band. Further, attention modulated the sustained response most clearly in the 300-400 ms range from sound onset, and at narrower notches than for the M100, thus selectively reducing the masker-induced suppression of the tone-evoked response. Our results provide evidence of a multiple-stage filtering mechanism of sensory input in the human auditory cortex: 1) early (~100 ms) filtering bilaterally in posterior parts of the secondary auditory areas, and 2) adaptive filtering of attended sounds from the task-irrelevant background masker at a longer latency (~300 ms) in more medial auditory cortical regions, predominantly in the left hemisphere, enhancing the processing of near-threshold sounds.
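The stimulus parameters reported above (300-ms tones, 1000-Hz standards, 1020-Hz targets at p = 0.1) can be sketched as a toy trial generator. This is an illustration of the oddball design only, not the authors' code: the sampling rate is an assumption, and the notch-filtered noise masker is omitted.

```python
import numpy as np

FS = 44100        # audio sampling rate in Hz (assumed; not stated in the abstract)
TONE_DUR = 0.3    # 300-ms tone duration, as reported
P_TARGET = 0.1    # probability of a 1020-Hz target replacing a 1000-Hz standard

def make_tone(freq_hz, dur_s=TONE_DUR, fs=FS):
    """Synthesize a plain sinusoidal tone (no masker, no onset/offset ramps)."""
    t = np.arange(int(dur_s * fs)) / fs
    return np.sin(2 * np.pi * freq_hz * t)

def oddball_sequence(n_trials, seed=0):
    """Draw a standard/target frequency sequence for one oddball run."""
    rng = np.random.default_rng(seed)
    is_target = rng.random(n_trials) < P_TARGET   # each trial is a target w.p. 0.1
    freqs = np.where(is_target, 1020.0, 1000.0)
    return freqs, is_target

freqs, is_target = oddball_sequence(1000)
# With 1000 trials, the empirical target rate lands near the nominal 10%.
```

In the actual experiment each tone was embedded in the center of a spectral notch cut from a continuous noise masker; modeling that masker would require an additional band-stop filtering step not sketched here.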
Affiliation(s)
- Jaakko Kauramäki
- Department of Biomedical Engineering and Computational Science (BECS), Brain and Mind Laboratory, Aalto University School of Science, Espoo, Finland.
43
Lange K. The N1 effect of temporal attention is independent of sound location and intensity: implications for possible mechanisms of temporal attention. Psychophysiology 2012;49:1468-80. [PMID: 23046461] [DOI: 10.1111/j.1469-8986.2012.01460.x]
Abstract
It has been repeatedly shown that the auditory N1 is enhanced for sounds presented at an attended time point. The present study investigated the underlying mechanisms using a temporal cuing paradigm. In each trial, an auditory cue indicated at which time point a second sound could be relevant for response selection. Crucially, in addition to temporal attention, two physical sound features with known effects on the sensory N1 were manipulated: location and intensity. Positive evidence for conjoint effects of attention and location, or of attention and intensity, would corroborate the notion that the sensory N1 was modulated by temporal attention, thus supporting a gain mechanism. However, the N1 effect of temporal attention was not lateralized in the same way as the sensory N1 and, moreover, was independent of sound intensity. Thus, the present results do not provide compelling evidence that temporal attention involves an increase in sensory gain.
Affiliation(s)
- Kathrin Lange
- Institut für Experimentelle Psychologie, Heinrich Heine Universität Düsseldorf, Düsseldorf, Germany.
44
Ferreira-Santos F, Silveira C, Almeida PR, Palha A, Barbosa F, Marques-Teixeira J. The auditory P200 is both increased and reduced in schizophrenia? A meta-analytic dissociation of the effect for standard and target stimuli in the oddball task. Clin Neurophysiol 2011;123:1300-8. [PMID: 22197447] [DOI: 10.1016/j.clinph.2011.11.036]
Abstract
OBJECTIVE Conflicting reports of P200 amplitude and latency in schizophrenia have suggested that this component is increased, reduced, or does not differ from healthy subjects. A systematic review and meta-analysis were undertaken to accurately describe P200 deficits in auditory oddball tasks in schizophrenia. METHODS A systematic search identified 20 studies, which were meta-analyzed. Effect size (ES) estimates were obtained for P200 amplitude and latency for target and standard tones at midline electrodes. RESULTS The ESs obtained for amplitude (Cz) for standard and target stimuli indicate significant effects in opposite directions: standard stimuli elicit a smaller P200 in patients (d = -0.36; 95% CI [-0.26, -0.08]); target stimuli elicit a larger P200 in patients (d = 0.48; 95% CI [0.16, 0.82]). A similar effect occurs for latency at Cz, which is shorter for standards (d = -0.32; 95% CI [-0.54, -0.10]) and longer for targets (d = 0.42; 95% CI [0.23, 0.62]). Meta-regression analyses revealed that samples with more males show larger ESs for the amplitude of target stimuli, while the amount of medication was negatively associated with the ES for the latency of standards. CONCLUSIONS The results obtained suggest that claims of reduced or augmented P200 in schizophrenia based on the sole examination of standard or target stimuli fail to consider the stimulus effect. SIGNIFICANCE Quantification of effects for standard and target stimuli is a required first step to understand the nature of P200 deficits in schizophrenia.
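As an illustration of the effect-size arithmetic behind pooled estimates like those above, a standardized mean difference (Cohen's d) and its large-sample 95% CI can be computed from group summary statistics. This is a generic sketch with made-up numbers, not the meta-analysis code or its data.

```python
import math

def cohens_d_ci(m1, s1, n1, m2, s2, n2, z=1.96):
    """Cohen's d between two groups with an approximate 95% confidence interval.

    Arguments are group means, standard deviations, and sample sizes;
    all example values below are hypothetical.
    """
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * s1**2 + (n2 - 1) * s2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    # Large-sample approximation to the standard error of d
    se = math.sqrt((n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2)))
    return d, (d - z * se, d + z * se)

# Hypothetical example: patients show a smaller mean amplitude than controls,
# giving d = -0.5 with a symmetric CI of roughly +/- 0.51 around it.
d, (lo, hi) = cohens_d_ci(m1=4.0, s1=2.0, n1=30, m2=5.0, s2=2.0, n2=30)
```

A negative d here means the first group scores lower, matching the sign convention of the standard-stimulus result reported in the abstract.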
Affiliation(s)
- F Ferreira-Santos
- Laboratory of Neuropsychophysiology, Faculty of Psychology and Education Sciences, University of Porto, Rua do Dr Manuel Pereira da Silva, 4200-392 Porto, Portugal.
45
Abstract
A recent study provides intriguing insights into how we recognize the sound of everyday objects from the statistical properties of the textures they produce.
Affiliation(s)
- Neil C Rabinowitz
- Department of Physiology, Anatomy and Genetics, University of Oxford, Parks Road, Oxford OX1 3PT, UK.