1. Viswanathan V, Bharadwaj HM, Heinz MG, Shinn-Cunningham BG. Induced alpha and beta electroencephalographic rhythms covary with single-trial speech intelligibility in competition. Sci Rep 2023; 13:10216. [PMID: 37353552] [PMCID: PMC10290148] [DOI: 10.1038/s41598-023-37173-2]
Abstract
Neurophysiological studies suggest that intrinsic brain oscillations influence sensory processing, especially of rhythmic stimuli like speech. Prior work suggests that brain rhythms may mediate perceptual grouping and selective attention to speech amidst competing sound, as well as more linguistic aspects of speech processing like predictive coding. However, we know of no prior studies that have directly tested, at the single-trial level, whether brain oscillations relate to speech-in-noise outcomes. Here, we combined electroencephalography with simultaneous measurement of the intelligibility of spoken sentences amidst two different interfering sounds: multi-talker babble or speech-shaped noise. We find that induced parieto-occipital alpha (7-15 Hz; thought to modulate attentional focus) and frontal beta (13-30 Hz; associated with maintenance of the current sensorimotor state and predictive coding) oscillations covary with trial-wise percent-correct scores; importantly, alpha and beta power provide significant independent contributions to predicting single-trial behavioral outcomes. These results can inform models of speech processing and guide noninvasive measures to index different neural processes that together support complex listening.
Affiliation(s)
- Vibha Viswanathan
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, 15213, USA
- Hari M Bharadwaj
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, 15260, USA
- Michael G Heinz
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, 47907, USA
2. Lim C, Barragan JA, Farrow JM, Wachs JP, Sundaram CP, Yu D. Physiological Metrics of Surgical Difficulty and Multi-Task Requirement during Robotic Surgery Skills. Sensors (Basel) 2023; 23:4354. [PMID: 37177557] [PMCID: PMC10181544] [DOI: 10.3390/s23094354]
Abstract
Previous studies in robotic-assisted surgery (RAS) have examined cognitive workload by modulating surgical task difficulty, and many of these studies have relied on self-reported workload measurements. However, the contributors to cognitive workload, and their effects, are complex and may not be sufficiently summarized by changes in task difficulty alone. This study aims to understand how multi-task requirements contribute to the prediction of cognitive load in RAS under different task difficulties. Multimodal physiological signals (EEG, eye-tracking, HRV) were collected as university students performed simulated RAS tasks at two levels of surgical task difficulty under three different multi-task requirement levels. EEG spectral analysis was sensitive enough to distinguish the degree of cognitive workload under both conditions (surgical task difficulty and multi-task requirement). Eye-tracking measurements also showed differences under both conditions, whereas significant differences in HRV were observed only across multi-task requirement levels. Multimodal neural network models achieved up to 79% accuracy for both conditions.
Affiliation(s)
- Chiho Lim
- School of Industrial Engineering, Purdue University, West Lafayette, IN 47907, USA
- Juan P Wachs
- School of Industrial Engineering, Purdue University, West Lafayette, IN 47907, USA
- Denny Yu
- School of Industrial Engineering, Purdue University, West Lafayette, IN 47907, USA
3. Hardy SM, Jensen O, Wheeldon L, Mazaheri A, Segaert K. Modulation in alpha band activity reflects syntax composition: an MEG study of minimal syntactic binding. Cereb Cortex 2023; 33:497-511. [PMID: 35311899] [PMCID: PMC9890467] [DOI: 10.1093/cercor/bhac080]
Abstract
Successful sentence comprehension requires the binding, or composition, of multiple words into larger structures to establish meaning. Using magnetoencephalography, we investigated the neural mechanisms involved in binding at the syntax level, in a task where contributions from semantics were minimized. Participants were auditorily presented with minimal sentences that required binding (pronoun and pseudo-verb with the corresponding morphological inflection; "she grushes") and pseudo-verb wordlists that did not require binding ("cugged grushes"). Relative to no binding, we found that syntactic binding was associated with a modulation in alpha band (8-12 Hz) activity in left-lateralized language regions. First, we observed a significantly smaller increase in alpha power around the presentation of the target word ("grushes") that required binding (-0.05 to 0.1 s), which we suggest reflects an expectation of binding to occur. Second, during binding of the target word (0.15-0.25 s), we observed significantly decreased alpha phase-locking between the left inferior frontal gyrus and the left middle/inferior temporal cortex, which we suggest reflects alpha-driven cortical disinhibition serving to strengthen communication within the syntax composition neural network. Altogether, our findings highlight the critical role of rapid spatial-temporal alpha band activity in controlling the allocation, transfer, and coordination of the brain's resources during syntax composition.
Affiliation(s)
- Sophie M Hardy
- Centre for Human Brain Health, University of Birmingham, Birmingham B15 2TT, UK
- Department of Psychology, University of Warwick, Coventry CV4 7AL, UK
- Ole Jensen
- Centre for Human Brain Health, University of Birmingham, Birmingham B15 2TT, UK
- Linda Wheeldon
- Department of Foreign Languages and Translations, University of Agder, Kristiansand 4630, Norway
- Ali Mazaheri
- Centre for Human Brain Health, University of Birmingham, Birmingham B15 2TT, UK
- School of Psychology, University of Birmingham, Birmingham B15 2TT, UK
- Katrien Segaert
- Centre for Human Brain Health, University of Birmingham, Birmingham B15 2TT, UK
- School of Psychology, University of Birmingham, Birmingham B15 2TT, UK
4. Bai F, Meyer AS, Martin AE. Neural dynamics differentially encode phrases and sentences during spoken language comprehension. PLoS Biol 2022; 20:e3001713. [PMID: 35834569] [PMCID: PMC9282610] [DOI: 10.1371/journal.pbio.3001713]
Abstract
Human language stands out in the natural world as a biological signal that uses a structured system to combine the meanings of small linguistic units (e.g., words) into larger constituents (e.g., phrases and sentences). However, the physical dynamics of speech (or sign) do not stand in a one-to-one relationship with the meanings listeners perceive. Instead, listeners infer meaning based on their knowledge of the language. The neural readouts of the perceptual and cognitive processes underlying these inferences are still poorly understood. In the present study, we used scalp electroencephalography (EEG) to compare the neural response to phrases (e.g., the red vase) and sentences (e.g., the vase is red), which were close in semantic meaning and had been synthesized to be physically indistinguishable. Differences in structure were well captured in the reorganization of neural phase responses in delta (approximately <2 Hz) and theta bands (approximately 2 to 7 Hz), and in power and power connectivity changes in the alpha band (approximately 7.5 to 13.5 Hz). Consistent with predictions from a computational model, sentences showed more power, more power connectivity, and more phase synchronization than phrases did. Theta-gamma phase-amplitude coupling occurred, but did not differ between the syntactic structures. Spectral-temporal response function (STRF) modeling revealed different encoding states for phrases and sentences, over and above the acoustically driven neural response. Our findings provide a comprehensive description of how the brain encodes and separates linguistic structures in the dynamics of neural responses. They imply that phase synchronization and strength of connectivity are readouts for the constituent structure of language. The results provide a novel basis for future neurophysiological research on linguistic structure representation in the brain and, together with our simulations, support time-based binding as a mechanism of structure encoding in neural dynamics.
Affiliation(s)
- Fan Bai
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands
- Antje S. Meyer
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands
- Andrea E. Martin
- Max Planck Institute for Psycholinguistics, Nijmegen, the Netherlands
- Donders Institute for Brain, Cognition, and Behaviour, Radboud University, Nijmegen, the Netherlands
5. Hauswald A, Keitel A, Chen Y, Rösch S, Weisz N. Degradation levels of continuous speech affect neural speech tracking and alpha power differently. Eur J Neurosci 2022; 55:3288-3302. [PMID: 32687616] [PMCID: PMC9540197] [DOI: 10.1111/ejn.14912]
Abstract
Making sense of a poor auditory signal can pose a challenge. Previous attempts to quantify speech intelligibility in neural terms have usually focused on one of two measures, namely low-frequency speech-brain synchronization or alpha power modulations. However, reports have been mixed concerning the modulation of these measures, an issue aggravated by the fact that they have normally been studied separately. We present two MEG studies analyzing both measures. In study 1, participants listened to unimodal auditory speech with three different levels of degradation (original, 7-channel and 3-channel vocoding). Intelligibility declined with declining clarity, but speech was still intelligible to some extent even for the lowest clarity level (3-channel vocoding). Low-frequency (1-7 Hz) speech tracking suggested a U-shaped relationship, with strongest effects for the medium-degraded speech (7-channel) in bilateral auditory and left frontal regions. To follow up on this finding, we implemented three additional vocoding levels (5-channel, 2-channel and 1-channel) in a second MEG study. Using this wider range of degradation, speech-brain synchronization showed a pattern similar to that in study 1, but further showed that when speech becomes unintelligible, synchronization declines again. The relationship differed for alpha power, which continued to decrease across vocoding levels, reaching a floor effect for 5-channel vocoding. Predicting subjective intelligibility based on models either combining both measures or using each measure alone showed superiority of the combined model. Our findings underline that speech tracking and alpha power are modified differently by the degree of degradation of continuous speech but together contribute to subjective speech understanding.
Affiliation(s)
- Anne Hauswald
- Center of Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Department of Psychology, University of Salzburg, Salzburg, Austria
- Anne Keitel
- Psychology, School of Social Sciences, University of Dundee, Dundee, UK
- Centre for Cognitive Neuroimaging, University of Glasgow, Glasgow, UK
- Ya-Ping Chen
- Center of Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Department of Psychology, University of Salzburg, Salzburg, Austria
- Sebastian Rösch
- Department of Otorhinolaryngology, Paracelsus Medical University, Salzburg, Austria
- Nathan Weisz
- Center of Cognitive Neuroscience, University of Salzburg, Salzburg, Austria
- Department of Psychology, University of Salzburg, Salzburg, Austria
6. Grant AM, Kousaie S, Coulter K, Gilbert AC, Baum SR, Gracco V, Titone D, Klein D, Phillips NA. Age of Acquisition Modulates Alpha Power During Bilingual Speech Comprehension in Noise. Front Psychol 2022; 13:865857. [PMID: 35548507] [PMCID: PMC9083356] [DOI: 10.3389/fpsyg.2022.865857]
Abstract
Research on bilingualism has grown exponentially in recent years. However, the comprehension of speech in noise, given the ubiquity of both bilingualism and noisy environments, has seen only limited focus. Electroencephalogram (EEG) studies in monolinguals show an increase in alpha power when listening to speech in noise, which, in the theoretical context where alpha power indexes attentional control, is thought to reflect an increase in attentional demands. In the current study, English/French bilinguals with similar second language (L2) proficiency and who varied in terms of age of L2 acquisition (AoA) from 0 (simultaneous bilinguals) to 15 years completed a speech perception in noise task. Participants were required to identify the final word of high and low semantically constrained auditory sentences such as "Stir your coffee with a spoon" vs. "Bob could have known about the spoon" in both of their languages and in both noise (multi-talker babble) and quiet during electrophysiological recording. We examined the effects of language, AoA, semantic constraint, and listening condition on participants' induced alpha power during speech comprehension. Our results show an increase in alpha power when participants were listening in their L2, suggesting that listening in an L2 requires additional attentional control compared to the first language, particularly early in processing during word identification. Additionally, despite similar proficiency across participants, our results suggest that under difficult processing demands, AoA modulates the amount of attention required to process the second language.
Affiliation(s)
- Angela M Grant
- Department of Psychology, Centre for Research in Human Development, Concordia University, Montreal, QC, Canada
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- Shanna Kousaie
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- School of Psychology, University of Ottawa, Ottawa, ON, Canada
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Kristina Coulter
- Department of Psychology, Centre for Research in Human Development, Concordia University, Montreal, QC, Canada
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- Annie C Gilbert
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- School of Communication Sciences and Disorders, McGill University, Montreal, QC, Canada
- Shari R Baum
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- School of Communication Sciences and Disorders, McGill University, Montreal, QC, Canada
- Vincent Gracco
- School of Communication Sciences and Disorders, McGill University, Montreal, QC, Canada
- Haskins Laboratories, New Haven, CT, United States
- Debra Titone
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- Department of Psychology, McGill University, Montreal, QC, Canada
- Denise Klein
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- Cognitive Neuroscience Unit, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Department of Neurology and Neurosurgery, Montreal Neurological Institute, McGill University, Montreal, QC, Canada
- Natalie A Phillips
- Department of Psychology, Centre for Research in Human Development, Concordia University, Montreal, QC, Canada
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- Bloomfield Centre for Research in Aging, Lady Davis Institute for Medical Research, and McGill University Memory Clinic, Jewish General Hospital, Montreal, QC, Canada
7. Corcoran AW, Perera R, Koroma M, Kouider S, Hohwy J, Andrillon T. Expectations boost the reconstruction of auditory features from electrophysiological responses to noisy speech. Cereb Cortex 2022; 33:691-708. [PMID: 35253871] [PMCID: PMC9890472] [DOI: 10.1093/cercor/bhac094]
Abstract
Online speech processing imposes significant computational demands on the listening brain, the underlying mechanisms of which remain poorly understood. Here, we exploit the perceptual "pop-out" phenomenon (i.e. the dramatic improvement of speech intelligibility after receiving information about speech content) to investigate the neurophysiological effects of prior expectations on degraded speech comprehension. We recorded electroencephalography (EEG) and pupillometry from 21 adults while they rated the clarity of noise-vocoded and sine-wave synthesized sentences. Pop-out was reliably elicited following visual presentation of the corresponding written sentence, but not following incongruent or neutral text. Pop-out was associated with improved reconstruction of the acoustic stimulus envelope from low-frequency EEG activity, implying that improvements in perceptual clarity were mediated via top-down signals that enhanced the quality of cortical speech representations. Spectral analysis further revealed that pop-out was accompanied by a reduction in theta-band power, consistent with predictive coding accounts of acoustic filling-in and incremental sentence processing. Moreover, delta-band power, alpha-band power, and pupil diameter were all increased following the provision of any written sentence information, irrespective of content. Together, these findings reveal distinctive profiles of neurophysiological activity that differentiate the content-specific processes associated with degraded speech comprehension from the context-specific processes invoked under adverse listening conditions.
Affiliation(s)
- Andrew W Corcoran
- Corresponding author: Room E672, 20 Chancellors Walk, Clayton, VIC 3800, Australia
- Ricardo Perera
- Cognition & Philosophy Laboratory, School of Philosophical, Historical, and International Studies, Monash University, Melbourne, VIC 3800, Australia
- Matthieu Koroma
- Brain and Consciousness Group (ENS, EHESS, CNRS), Département d'Études Cognitives, École Normale Supérieure-PSL Research University, Paris 75005, France
- Sid Kouider
- Brain and Consciousness Group (ENS, EHESS, CNRS), Département d'Études Cognitives, École Normale Supérieure-PSL Research University, Paris 75005, France
- Jakob Hohwy
- Cognition & Philosophy Laboratory, School of Philosophical, Historical, and International Studies, Monash University, Melbourne, VIC 3800, Australia
- Monash Centre for Consciousness & Contemplative Studies, Monash University, Melbourne, VIC 3800, Australia
- Thomas Andrillon
- Monash Centre for Consciousness & Contemplative Studies, Monash University, Melbourne, VIC 3800, Australia
- Paris Brain Institute, Sorbonne Université, Inserm-CNRS, Paris 75013, France
8. Ten Oever S, Sack AT, Oehrn CR, Axmacher N. An engram of intentionally forgotten information. Nat Commun 2021; 12:6443. [PMID: 34750407] [PMCID: PMC8575985] [DOI: 10.1038/s41467-021-26713-x]
Abstract
Successful forgetting of unwanted memories is crucial for goal-directed behavior and mental wellbeing. While memory retention strengthens memory traces, it is unclear what happens to memory traces of events that are actively forgotten. Using intracranial EEG recordings from lateral temporal cortex, we find that memory traces for actively forgotten information are partially preserved and exhibit unique neural signatures. Memory traces of successfully remembered items show stronger encoding-retrieval similarity in gamma frequency patterns. By contrast, encoding-retrieval similarity of item-specific memory traces of actively forgotten items depends on activity at alpha/beta frequencies commonly associated with functional inhibition. Additional analyses revealed selective modification of item-specific patterns of connectivity and top-down information flow from dorsolateral prefrontal cortex to lateral temporal cortex in memory traces of intentionally forgotten items. These results suggest that intentional forgetting relies more on inhibitory top-down connections than intentional remembering, resulting in inhibitory memory traces with unique neural signatures and representational formats.
Affiliation(s)
- Sanne Ten Oever
- Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525XD, Nijmegen, The Netherlands
- Donders Centre for Cognitive Neuroimaging, Radboud University, Kapittelweg 29, 6525EN, Nijmegen, The Netherlands
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, 6229EV, Maastricht, The Netherlands
- Alexander T Sack
- Department of Cognitive Neuroscience, Faculty of Psychology and Neuroscience, Maastricht University, Oxfordlaan 55, 6229EV, Maastricht, The Netherlands
- Department of Psychiatry and Neuropsychology, School for Mental Health and Neuroscience (MHeNs), Brain and Nerve Centre, Maastricht University Medical Centre+ (MUMC+), Debyelaan 25, 6229HX, Maastricht, The Netherlands
- Carina R Oehrn
- Department of Neurology, Philipps-University of Marburg, Biegenstraße 10, 35037, Marburg, Germany
- Center for Mind, Brain and Behavior (CMBB), Philipps-University Marburg, Biegenstraße 10, 35037, Marburg, Germany
- Nikolai Axmacher
- Department of Neuropsychology, Faculty of Psychology, Institute of Cognitive Neuroscience, Ruhr University Bochum, Universitätsstraße 150, 44801, Bochum, Germany
- State Key Laboratory of Cognitive Neuroscience and Learning and IDG/McGovern Institute for Brain Research, Beijing Normal University, 19 Xinjiekou Outer St, Beijing, 100875, China
9. Hearing Aid Noise Reduction Lowers the Sustained Listening Effort During Continuous Speech in Noise-A Combined Pupillometry and EEG Study. Ear Hear 2021; 42:1590-1601. [PMID: 33950865] [DOI: 10.1097/aud.0000000000001050]
Abstract
OBJECTIVES: The investigation of auditory cognitive processes has recently moved from strictly controlled, trial-based paradigms toward the presentation of continuous speech. This also allows the investigation of listening effort on larger time scales (i.e., sustained listening effort). Here, we investigated the modulation of sustained listening effort by a noise reduction algorithm, as applied in hearing aids, in a listening scenario with noisy continuous speech. The investigated directional noise reduction algorithm mainly suppresses noise from the background.
DESIGN: We recorded pupil size and the EEG in 22 participants with hearing loss who listened to audio news clips in the presence of background multi-talker babble noise. We estimated how noise reduction (off, on) and signal-to-noise ratio (SNR; +3 dB, +8 dB) affect pupil size, power in the parietal EEG alpha band (i.e., parietal alpha power), and behavioral performance.
RESULTS: Noise reduction reduced pupil size, while there was no significant effect of SNR. It is important to note that we found interactions of SNR and noise reduction, which suggested that noise reduction reduces pupil size predominantly under the lower SNR. Parietal alpha power showed a similar yet nonsignificant pattern, with increased power under easier conditions. In line with the participants' reports that one of the two presented talkers was more intelligible, we found a reduced pupil size, increased parietal alpha power, and better performance when people listened to the more intelligible talker.
CONCLUSIONS: We show that the modulation of sustained listening effort (e.g., by hearing aid noise reduction), as indicated by pupil size and parietal alpha power, can be studied under more ecologically valid conditions. Concluded mainly from pupil size, we demonstrate that hearing aid noise reduction lowers sustained listening effort. Our study approximates real-world listening scenarios and evaluates the benefit of the signal processing found in a modern hearing aid.
10. Tracking Cognitive Spare Capacity During Speech Perception With EEG/ERP: Effects of Cognitive Load and Sentence Predictability. Ear Hear 2021; 41:1144-1157. [PMID: 32282402] [DOI: 10.1097/aud.0000000000000856]
Abstract
OBJECTIVES: Listening to speech in adverse listening conditions is effortful. Objective assessment of cognitive spare capacity during listening can serve as an index of the effort needed to understand speech. Cognitive spare capacity is influenced both by signal-driven demands posed by listening conditions and by top-down demands intrinsic to spoken language processing, such as memory use and semantic processing. Previous research indicates that electrophysiological responses, particularly alpha oscillatory power, may index listening effort. However, it is not known how these indices respond to memory and semantic processing demands during spoken language processing in adverse listening conditions. The aim of the present study was twofold: first, to assess the impact of memory demands on electrophysiological responses during recognition of degraded, spoken sentences, and second, to examine whether predictable sentence contexts increase or decrease cognitive spare capacity during listening.
DESIGN: Cognitive demand was varied in a memory load task in which young adult participants (n = 20) viewed either low-load (one digit) or high-load (seven digits) sequences of digits, then listened to noise-vocoded spoken sentences that were either predictable or unpredictable, and then reported the final word of the sentence and the digits. Alpha oscillations in the frequency domain and event-related potentials in the time domain of the electrophysiological data were analyzed, as was behavioral accuracy for both words and digits.
RESULTS: Measured during sentence processing, event-related desynchronization of alpha power was greater (more negative) under high load than low load and was also greater for unpredictable than predictable sentences. A complementary pattern was observed for the P300/late positive complex (LPC) to sentence-final words, such that P300/LPC amplitude was reduced under high load compared with low load and for unpredictable compared with predictable sentences. Both words and digits were identified more quickly and accurately on trials in which spoken sentences were predictable.
CONCLUSIONS: Results indicate that during a sentence-recognition task, both cognitive load and sentence predictability modulate electrophysiological indices of cognitive spare capacity, namely alpha oscillatory power and P300/LPC amplitude. Both electrophysiological and behavioral results indicate that a predictive sentence context reduces cognitive demands during listening. Findings contribute to a growing literature on objective measures of cognitive demand during listening and indicate predictable sentence context as a top-down factor that can support ease of listening.
11. Schneider D, Herbst SK, Klatt LI, Wöstmann M. Target enhancement or distractor suppression? Functionally distinct alpha oscillations form the basis of attention. Eur J Neurosci 2021; 55:3256-3265. [PMID: 33973310] [DOI: 10.1111/ejn.15309]
Abstract
Recent advances in attention research have been propelled by the debate on target enhancement versus distractor suppression. A predominant neural correlate of attention is the modulation of alpha oscillatory power (~10 Hz), which signifies shifts of attention in time, space and between sensory modalities. However, the underspecified functional role of alpha oscillations limits the progress of tracking down the neurocognitive basis of attention. In this short opinion article, we review and critically examine a synthesis of three conceptual and methodological aspects that are indispensable for a mechanistic understanding of the role of alpha oscillations for attention. (a) Precise mapping of the anatomical source and the temporal response profile of neural signals reveals distinct alpha oscillatory processes that implement facilitatory versus suppressive components of attention. (b) A testable framework enables unambiguous association of alpha modulation with either target enhancement or different forms of distractor suppression (active vs. automatic). (c) Linking anatomically specified alpha oscillations to behavior reveals the causal nature of alpha oscillations for attention. The three reviewed aspects substantially enrich study design, data analysis and interpretation of results to achieve the goal of understanding how anatomically specified and functionally relevant neural oscillations contribute to the implementation of facilitatory versus suppressive components of attention.
Affiliation(s)
- Daniel Schneider: Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Sophie K Herbst: NeuroSpin, CEA, DRF/Joliot, INSERM, Cognitive Neuroimaging Unit, Université Paris-Saclay, 91191 Gif/Yvette, France
- Laura-Isabelle Klatt: Leibniz Research Centre for Working Environment and Human Factors, Dortmund, Germany
- Malte Wöstmann: Department of Psychology, University of Lübeck, Lübeck, Germany; Center of Brain, Behavior and Metabolism (CBBM), University of Lübeck, Lübeck, Germany
12. Jiang J, Benhamou E, Waters S, Johnson JCS, Volkmer A, Weil RS, Marshall CR, Warren JD, Hardy CJD. Processing of Degraded Speech in Brain Disorders. Brain Sci 2021; 11:394. [PMID: 33804653] [PMCID: PMC8003678] [DOI: 10.3390/brainsci11030394]
Abstract
The speech we hear every day is typically "degraded" by competing sounds and the idiosyncratic vocal characteristics of individual speakers. While the comprehension of "degraded" speech is normally automatic, it depends on dynamic and adaptive processing across distributed neural networks. This presents the brain with an immense computational challenge, making degraded speech processing vulnerable to a range of brain disorders. Therefore, it is likely to be a sensitive marker of neural circuit dysfunction and an index of retained neural plasticity. Considering experimental methods for studying degraded speech and factors that affect its processing in healthy individuals, we review the evidence for altered degraded speech processing in major neurodegenerative diseases, traumatic brain injury and stroke. We develop a predictive coding framework for understanding deficits of degraded speech processing in these disorders, focussing on the "language-led dementias"-the primary progressive aphasias. We conclude by considering prospects for using degraded speech as a probe of language network pathophysiology, a diagnostic tool and a target for therapeutic intervention.
Affiliation(s)
- Jessica Jiang, Elia Benhamou, Jeremy C. S. Johnson, Rimona S. Weil, Jason D. Warren, Chris J. D. Hardy: Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London WC1N 3BG, UK
- Sheena Waters: Preventive Neurology Unit, Wolfson Institute of Preventive Medicine, Queen Mary University of London, London EC1M 6BQ, UK
- Anna Volkmer: Division of Psychology and Language Sciences, University College London, London WC1H 0AP, UK
- Charles R. Marshall: Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, London WC1N 3BG, UK; Preventive Neurology Unit, Wolfson Institute of Preventive Medicine, Queen Mary University of London, London EC1M 6BQ, UK
13. Baroni F, Morillon B, Trébuchon A, Liégeois-Chauvel C, Olasagasti I, Giraud AL. Converging intracortical signatures of two separated processing timescales in human early auditory cortex. Neuroimage 2020; 218:116882. [PMID: 32439539] [DOI: 10.1016/j.neuroimage.2020.116882]
Abstract
Neural oscillations in auditory cortex are argued to support parsing and representing speech constituents at their corresponding temporal scales. Yet, how incoming sensory information interacts with ongoing spontaneous brain activity, what features of the neuronal microcircuitry underlie spontaneous and stimulus-evoked spectral fingerprints, and what these fingerprints entail for stimulus encoding, remain largely open questions. We used a combination of human invasive electrophysiology, computational modeling and decoding techniques to assess the information encoding properties of brain activity and to relate them to a plausible underlying neuronal microarchitecture. We analyzed intracortical auditory EEG activity from 10 patients while they were listening to short sentences. Pre-stimulus neural activity in early auditory cortical regions often exhibited power spectra with a shoulder in the delta range and a small bump in the beta range. Speech decreased power in the beta range, and increased power in the delta-theta and gamma ranges. Using multivariate machine learning techniques, we assessed the spectral profile of information content for two aspects of speech processing: detection and discrimination. We obtained better phase than power information decoding, and a bimodal spectral profile of information content with better decoding at low (delta-theta) and high (gamma) frequencies than at intermediate (beta) frequencies. These experimental data were reproduced by a simple rate model made of two subnetworks with different timescales, each composed of coupled excitatory and inhibitory units, and connected via a negative feedback loop. Modeling and experimental results were similar in terms of pre-stimulus spectral profile (except for the iEEG beta bump), spectral modulations with speech, and spectral profile of information content. Altogether, we provide converging evidence from both univariate spectral analysis and decoding approaches for a dual timescale processing infrastructure in human auditory cortex, and show that it is consistent with the dynamics of a simple rate model.
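The two-timescale architecture described in this abstract can be caricatured in a few lines: two excitatory-inhibitory (E-I) rate pairs with different time constants, linked by a negative feedback loop (the fast excitatory unit drives the slow inhibitory unit, which in turn suppresses the fast excitatory unit). This is a minimal sketch in the spirit of the model, not a reimplementation; every weight, time constant, and noise level below is an invented placeholder.

```python
import numpy as np

def simulate_two_timescale_model(T=10.0, dt=1e-3, seed=0):
    """Euler-integrate two coupled excitatory-inhibitory (E-I) rate pairs.

    A slow pair (long time constants) and a fast pair (short time constants)
    are connected via a negative feedback loop.  All parameters are
    illustrative guesses, not fitted values from the paper.
    """
    rng = np.random.default_rng(seed)
    n = int(T / dt)
    r = np.zeros((n, 4))                           # columns: E_slow, I_slow, E_fast, I_fast
    tau = np.array([0.100, 0.100, 0.010, 0.010])   # time constants in seconds
    f = lambda x: 1.0 / (1.0 + np.exp(-x))         # sigmoidal rate nonlinearity
    for k in range(1, n):
        Es, Is, Ef, If = r[k - 1]
        noise = 0.5 * rng.standard_normal(4)
        u = np.array([
            3.0 * Es - 2.5 * Is + 0.5 + noise[0],             # slow E
            2.5 * Es - 1.0 * Is + 1.0 * Ef + noise[1],        # slow I, fed by fast E
            3.0 * Ef - 2.5 * If - 1.0 * Is + 1.0 + noise[2],  # fast E, inhibited by slow I
            2.5 * Ef - 1.0 * If + noise[3],                   # fast I
        ])
        r[k] = r[k - 1] + (dt / tau) * (-r[k - 1] + f(u))
    return r

rates = simulate_two_timescale_model()
```

With the slow pair ten times slower than the fast pair, the power spectrum of the slow excitatory unit can be inspected (e.g. via `np.fft.rfft`) for low- versus high-frequency structure.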
Affiliation(s)
- Fabiano Baroni: Department of Fundamental Neuroscience, University of Geneva, Geneva, Switzerland; School of Engineering, École Polytechnique Fédérale de Lausanne, Lausanne, Switzerland
- Benjamin Morillon: Aix Marseille Université, Institut National de la Santé et de la Recherche Médicale (INSERM), Institut de Neurosciences des Systèmes (INS), Marseille, France
- Agnès Trébuchon: Aix Marseille Université, Institut National de la Santé et de la Recherche Médicale (INSERM), Institut de Neurosciences des Systèmes (INS), Marseille, France; Clinical Neurophysiology and Epileptology Department, Timone Hospital, Assistance Publique Hôpitaux de Marseille, Marseille, France
- Catherine Liégeois-Chauvel: Aix Marseille Université, Institut National de la Santé et de la Recherche Médicale (INSERM), Institut de Neurosciences des Systèmes (INS), Marseille, France; Department of Neurological Surgery, University of Pittsburgh, PA 15213, USA
- Itsaso Olasagasti: Department of Fundamental Neuroscience, University of Geneva, Geneva, Switzerland
- Anne-Lise Giraud: Department of Fundamental Neuroscience, University of Geneva, Geneva, Switzerland
14. Becker R, Vidaurre D, Quinn AJ, Abeysuriya RG, Parker Jones O, Jbabdi S, Woolrich MW. Transient spectral events in resting state MEG predict individual task responses. Neuroimage 2020; 215:116818. [PMID: 32276062] [DOI: 10.1016/j.neuroimage.2020.116818]
Abstract
Even in response to simple tasks such as hand movement, human brain activity shows remarkable inter-subject variability. Recently, it has been shown that individual spatial variability in fMRI task responses can be predicted from measurements collected at rest, suggesting that the spatial variability is a stable feature, inherent to the individual's brain. However, it is not clear if this is also true for individual variability in the spatio-spectral content of oscillatory brain activity. Here, we show using MEG (N = 89) that we can predict the spatial and spectral content of an individual's task response using features estimated from the individual's resting MEG data. This works by learning when transient spectral 'bursts' or events in the resting state tend to reoccur in the task responses. We applied our method to motor, working memory and language comprehension tasks. All task conditions were predicted significantly above chance. Finally, we found a systematic relationship between genetic similarity (e.g. unrelated subjects vs. twins) and predictability. Our approach can predict individual differences in brain activity and suggests a link between transient spectral events in task and rest that can be captured at the level of individuals.
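One common, simple way to operationalize the transient spectral 'bursts' this abstract refers to is to threshold the band-limited amplitude envelope at a percentile. The study's own approach is more elaborate, so treat the sketch below (band, threshold, and toy signal are all invented) purely as an illustration of the burst concept.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def detect_bursts(x, fs, band=(13.0, 30.0), pct=75):
    """Mark samples where the band-limited amplitude envelope exceeds a
    percentile threshold -- a generic burst criterion (threshold is
    illustrative, not the authors' method)."""
    b, a = butter(4, np.asarray(band) / (fs / 2.0), btype="bandpass")
    env = np.abs(hilbert(filtfilt(b, a, x)))   # amplitude envelope in the band
    thresh = np.percentile(env, pct)
    return env > thresh, env, thresh

fs = 200.0
t = np.arange(0.0, 10.0, 1.0 / fs)
rng = np.random.default_rng(3)
x = rng.standard_normal(t.size)
x[400:600] += 3.0 * np.sin(2 * np.pi * 20.0 * t[400:600])  # injected 20-Hz burst
mask, env, thresh = detect_bursts(x, fs)
```

On this toy signal the injected burst segment sits well above the envelope threshold, while roughly a quarter of all samples exceed it by construction of the percentile.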
Affiliation(s)
- R Becker, D Vidaurre, A J Quinn, R G Abeysuriya, M W Woolrich: Oxford Center for Human Brain Activity (OHBA), Wellcome Centre for Integrative Neuroimaging, University of Oxford, UK
- O Parker Jones, S Jbabdi: FMRIB, Wellcome Centre for Integrative Neuroimaging, Nuffield Department of Clinical Neurosciences, University of Oxford, UK
15. McNair SW, Kayser SJ, Kayser C. Consistent pre-stimulus influences on auditory perception across the lifespan. Neuroimage 2019; 186:22-32. [PMID: 30391564] [PMCID: PMC6347568] [DOI: 10.1016/j.neuroimage.2018.10.085]
Abstract
As we get older, perception in cluttered environments becomes increasingly difficult as a result of changes in peripheral and central neural processes. Given the aging society, it is important to understand the neural mechanisms constraining perception in the elderly. In young participants, the state of rhythmic brain activity prior to a stimulus has been shown to modulate the neural encoding and perceptual impact of this stimulus - yet it remains unclear whether, and if so, how, the perceptual relevance of pre-stimulus activity changes with age. Using the auditory system as a model, we recorded EEG activity during a frequency discrimination task from younger and older human listeners. By combining single-trial EEG decoding with linear modelling we demonstrate consistent statistical relations between pre-stimulus power and the encoding of sensory evidence in short-latency EEG components, and more variable relations between pre-stimulus phase and subjects' decisions in longer-latency components. At the same time, we observed a significant slowing of auditory evoked responses and a flattening of the overall EEG frequency spectrum in the older listeners. Our results point to mechanistically consistent relations between rhythmic brain activity and sensory encoding that emerge despite changes in neural response latencies and the relative amplitude of rhythmic brain activity with age.
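Single-trial pre-stimulus power and phase of the kind analyzed above are commonly obtained by band-pass filtering each trial and taking the analytic (Hilbert) signal. The sketch below illustrates that generic recipe on synthetic data; the band, filter order, and pre-stimulus window are illustrative choices, not the study's pipeline.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def prestimulus_power_phase(trials, fs, band=(8.0, 12.0), t_stim=1.0):
    """Single-trial pre-stimulus band power and phase.

    trials : (n_trials, n_samples) array with stimulus onset at t_stim seconds.
    Returns mean log band power over the pre-stimulus window and the
    instantaneous phase at stimulus onset, one value per trial.
    """
    b, a = butter(4, np.asarray(band) / (fs / 2.0), btype="bandpass")
    analytic = hilbert(filtfilt(b, a, trials, axis=1), axis=1)  # analytic signal
    onset = int(t_stim * fs)
    power = np.log(np.abs(analytic[:, :onset]) ** 2 + 1e-12).mean(axis=1)
    phase = np.angle(analytic[:, onset])
    return power, phase

# toy data: 20 trials of a 10-Hz rhythm in noise, onset at 1.0 s
fs = 250.0
t = np.arange(0.0, 2.0, 1.0 / fs)
rng = np.random.default_rng(1)
trials = np.sin(2 * np.pi * 10.0 * t) + 0.5 * rng.standard_normal((20, t.size))
power, phase = prestimulus_power_phase(trials, fs)
```

The per-trial `power` and `phase` values are what would then enter a decoding or linear model of behavior.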
Affiliation(s)
- Steven W McNair: Institute of Neuroscience and Psychology, University of Glasgow, 62 Hillhead Street, G12 8QB, United Kingdom
- Stephanie J Kayser, Christoph Kayser: Department for Cognitive Neuroscience, Faculty of Biology, Bielefeld University, Universitätsstr. 25, 33615 Bielefeld, Germany; Cognitive Interaction Technology - Center of Excellence, Bielefeld University, Inspiration 1, 33615 Bielefeld, Germany
16. Drijvers L, Özyürek A, Jensen O. Alpha and Beta Oscillations Index Semantic Congruency between Speech and Gestures in Clear and Degraded Speech. J Cogn Neurosci 2018; 30:1086-1097. [DOI: 10.1162/jocn_a_01301]
Abstract
Previous work revealed that visual semantic information conveyed by gestures can enhance degraded speech comprehension, but the mechanisms underlying these integration processes under adverse listening conditions remain poorly understood. We used MEG to investigate how oscillatory dynamics support speech–gesture integration when integration load is manipulated by auditory (e.g., speech degradation) and visual semantic (e.g., gesture congruency) factors. Participants were presented with videos of an actress uttering an action verb in clear or degraded speech, accompanied by a matching (mixing gesture + “mixing”) or mismatching (drinking gesture + “walking”) gesture. In clear speech, alpha/beta power was more suppressed in the left inferior frontal gyrus and motor and visual cortices when integration load increased in response to mismatching versus matching gestures. In degraded speech, beta power was less suppressed over posterior STS and medial temporal lobe for mismatching compared with matching gestures, showing that integration load was lowest when speech was degraded and mismatching gestures could not be integrated and disambiguate the degraded signal. Our results thus provide novel insights on how low-frequency oscillatory modulations in different parts of the cortex support the semantic audiovisual integration of gestures in clear and degraded speech: When speech is clear, the left inferior frontal gyrus and motor and visual cortices engage because higher-level semantic information increases semantic integration load. When speech is degraded, posterior STS/middle temporal gyrus and medial temporal lobe are less engaged because integration load is lowest when visual semantic information does not aid lexical retrieval and speech and gestures cannot be integrated.
Affiliation(s)
- Asli Özyürek: Radboud University; Max Planck Institute for Psycholinguistics, Nijmegen, The Netherlands
17. Miles K, McMahon C, Boisvert I, Ibrahim R, de Lissa P, Graham P, Lyxell B. Objective Assessment of Listening Effort: Coregistration of Pupillometry and EEG. Trends Hear 2017; 21:2331216517706396. [PMID: 28752807] [PMCID: PMC5536372] [DOI: 10.1177/2331216517706396]
Abstract
Listening to speech in noise is effortful, particularly for people with hearing impairment. While it is known that effort is related to a complex interplay between bottom-up and top-down processes, the cognitive and neurophysiological mechanisms contributing to effortful listening remain unknown. Therefore, a reliable physiological measure to assess effort remains elusive. This study aimed to determine whether pupil dilation and alpha power change, two physiological measures suggested to index listening effort, assess similar processes. Listening effort was manipulated by parametrically varying spectral resolution (16- and 6-channel noise vocoding) and speech reception thresholds (SRT; 50% and 80%) while 19 young, normal-hearing adults performed a speech recognition task in noise. Results of off-line sentence scoring showed discrepancies between the target SRTs and the true performance obtained during the speech recognition task. For example, in the SRT80% condition, participants scored an average of 64.7%. Participants’ true performance levels were therefore used for subsequent statistical modelling. Results showed that both measures appeared to be sensitive to changes in spectral resolution (channel vocoding), while pupil dilation only was also significantly related to their true performance levels (%) and task accuracy (i.e., whether the response was correctly or partially recalled). The two measures were not correlated, suggesting they each may reflect different cognitive processes involved in listening effort. This combination of findings contributes to a growing body of research aiming to develop an objective measure of listening effort.
Affiliation(s)
- Kelly Miles: Department of Linguistics, Australian Hearing Hub, Macquarie University, Sydney, Australia; The HEARing Cooperative Research Centre, Melbourne, Australia; Linnaeus Centre for HEaring And Deafness (HEAD), Swedish Institute for Disability Research, Linköping University, Sweden
- Catherine McMahon, Isabelle Boisvert, Ronny Ibrahim: Department of Linguistics, Australian Hearing Hub, Macquarie University, Sydney, Australia; The HEARing Cooperative Research Centre, Melbourne, Australia
- Peter de Lissa: The HEARing Cooperative Research Centre, Melbourne, Australia; Department of Psychology, Australian Hearing Hub, Macquarie University, Sydney, Australia
- Petra Graham: Department of Statistics, Australian Hearing Hub, Macquarie University, Sydney, Australia
- Björn Lyxell: Linnaeus Centre for HEaring And Deafness (HEAD), Swedish Institute for Disability Research, Linköping University, Sweden
18. Vassileiou B, Meyer L, Beese C, Friederici AD. Alignment of alpha-band desynchronization with syntactic structure predicts successful sentence comprehension. Neuroimage 2018; 175:286-296. [PMID: 29627592] [DOI: 10.1016/j.neuroimage.2018.04.008]
Abstract
Sentence comprehension requires the encoding of phrases and their relationships into working memory. To date, despite the importance of neural oscillations in language comprehension, the neural-oscillatory dynamics of sentence encoding are only sparsely understood. Although oscillations in a wide range of frequency bands have been reported both for the encoding of unstructured word lists and for working-memory intensive sentences, it is unclear to what extent these frequency bands subserve processes specific to the working-memory component of sentence comprehension or to general verbal working memory. In our auditory electroencephalography study, we isolated the working-memory component of sentence comprehension by adapting a subsequent memory paradigm to sentence comprehension and assessing oscillatory power changes during successful sentence encoding. Time-frequency analyses and source reconstruction revealed alpha-power desynchronization in left-hemispheric language-relevant regions during successful sentence encoding. We further showed that sentence encoding was more successful when source-level alpha-band desynchronization aligned with computational measures of syntactic (rather than lexical-semantic) difficulty. Our results are a preliminary indication of a domain-general mechanism of cortical disinhibition via alpha-band desynchronization superimposed onto the language-relevant cortex, which is beneficial for encoding sentences into working memory.
Affiliation(s)
- Benedict Vassileiou, Lars Meyer, Caroline Beese, Angela D Friederici: Department of Neuropsychology, Max Planck Institute for Human Cognitive and Brain Sciences, Stephanstraße 1a, 04103 Leipzig, Germany
19. Drijvers L, Özyürek A, Jensen O. Hearing and seeing meaning in noise: Alpha, beta, and gamma oscillations predict gestural enhancement of degraded speech comprehension. Hum Brain Mapp 2018; 39:2075-2087. [PMID: 29380945] [PMCID: PMC5947738] [DOI: 10.1002/hbm.23987]
Abstract
During face‐to‐face communication, listeners integrate speech with gestures. The semantic information conveyed by iconic gestures (e.g., a drinking gesture) can aid speech comprehension in adverse listening conditions. In this magnetoencephalography (MEG) study, we investigated the spatiotemporal neural oscillatory activity associated with gestural enhancement of degraded speech comprehension. Participants watched videos of an actress uttering clear or degraded speech, accompanied by a gesture or not and completed a cued‐recall task after watching every video. When gestures semantically disambiguated degraded speech comprehension, an alpha and beta power suppression and a gamma power increase revealed engagement and active processing in the hand‐area of the motor cortex, the extended language network (LIFG/pSTS/STG/MTG), medial temporal lobe, and occipital regions. These observed low‐ and high‐frequency oscillatory modulations in these areas support general unification, integration and lexical access processes during online language comprehension, and simulation of and increased visual attention to manual gestures over time. All individual oscillatory power modulations associated with gestural enhancement of degraded speech comprehension predicted a listener's correct disambiguation of the degraded verb after watching the videos. Our results thus go beyond the previously proposed role of oscillatory dynamics in unimodal degraded speech comprehension and provide first evidence for the role of low‐ and high‐frequency oscillations in predicting the integration of auditory and visual information at a semantic level.
Affiliation(s)
- Linda Drijvers: Radboud University, Centre for Language Studies, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands; Radboud University, Donders Institute for Brain, Cognition, and Behaviour, Montessorilaan 3, 6525 HR Nijmegen, The Netherlands
- Asli Özyürek: Radboud University, Centre for Language Studies, Erasmusplein 1, 6525 HT Nijmegen, The Netherlands; Radboud University, Donders Institute for Brain, Cognition, and Behaviour, Montessorilaan 3, 6525 HR Nijmegen, The Netherlands; Max Planck Institute for Psycholinguistics, Wundtlaan 1, 6525 XD Nijmegen, The Netherlands
- Ole Jensen: School of Psychology, Centre for Human Brain Health, University of Birmingham, Hills Building, Birmingham B15 2TT, United Kingdom
20. Bidelman GM. Amplified induced neural oscillatory activity predicts musicians' benefits in categorical speech perception. Neuroscience 2017; 348:107-113. [DOI: 10.1016/j.neuroscience.2017.02.015]
21. Dimitrijevic A, Smith ML, Kadis DS, Moore DR. Cortical Alpha Oscillations Predict Speech Intelligibility. Front Hum Neurosci 2017; 11:88. [PMID: 28286478] [PMCID: PMC5323373] [DOI: 10.3389/fnhum.2017.00088]
Abstract
Understanding speech in noise (SiN) is a complex task involving sensory encoding and cognitive resources including working memory and attention. Previous work has shown that brain oscillations, particularly alpha rhythms (8–12 Hz) play important roles in sensory processes involving working memory and attention. However, no previous study has examined brain oscillations during performance of a continuous speech perception test. The aim of this study was to measure cortical alpha during attentive listening in a commonly used SiN task (digits-in-noise, DiN) to better understand the neural processes associated with “top-down” cognitive processing in adverse listening environments. We recruited 14 normal hearing (NH) young adults. DiN speech reception threshold (SRT) was measured in an initial behavioral experiment. EEG activity was then collected: (i) while performing the DiN near SRT; and (ii) while attending to a silent, close-caption video during presentation of identical digit stimuli that the participant was instructed to ignore. Three main results were obtained: (1) during attentive (“active”) listening to the DiN, a number of distinct neural oscillations were observed (mainly alpha with some beta; 15–30 Hz). No oscillations were observed during attention to the video (“passive” listening); (2) overall, alpha event-related synchronization (ERS) of central/parietal sources were observed during active listening when data were grand averaged across all participants. In some participants, a smaller magnitude alpha event-related desynchronization (ERD), originating in temporal regions, was observed; and (3) when individual EEG trials were sorted according to correct and incorrect digit identification, the temporal alpha ERD was consistently greater on correctly identified trials. No such consistency was observed with the central/parietal alpha ERS. These data demonstrate that changes in alpha activity are specific to listening conditions. To our knowledge, this is the first report that shows almost no brain oscillatory changes during a passive task compared to an active task in any sensory modality. Temporal alpha ERD was related to correct digit identification.
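The event-related synchronization and desynchronization (ERS/ERD) measures in this abstract are conventionally expressed as the percent change of band power relative to a pre-stimulus baseline. A minimal sketch of that convention, with arbitrary window choices and synthetic power time courses:

```python
import numpy as np

def erd_ers_percent(power, fs, baseline=(0.0, 0.5)):
    """Classic ERD/ERS index: percent band-power change versus baseline.

    power : (n_trials, n_samples) band-limited power time courses.
    Positive values indicate synchronization (ERS), negative values
    desynchronization (ERD), relative to the baseline window (seconds).
    """
    avg = power.mean(axis=0)                 # average over trials first
    i0, i1 = (int(b * fs) for b in baseline)
    ref = avg[i0:i1].mean()                  # mean baseline power
    return 100.0 * (avg - ref) / ref

# toy power: baseline level 1.0, post-stimulus drop to 0.5 (i.e., an ERD)
fs = 100.0
rng = np.random.default_rng(2)
power = np.concatenate([np.ones((30, 50)), 0.5 * np.ones((30, 50))], axis=1)
power += 0.01 * rng.standard_normal(power.shape)
curve = erd_ers_percent(power, fs)
```

On this toy example the post-stimulus half of `curve` sits near -50%, the signature an ERD analysis would report.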
Affiliation(s)
- Andrew Dimitrijevic: Otolaryngology-Head and Neck Surgery, Sunnybrook Health Sciences Centre, Toronto, ON, Canada; Hurvitz Brain Sciences, Evaluative Clinical Sciences, Sunnybrook Research Institute, Toronto, ON, Canada; Faculty of Medicine, Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, ON, Canada
- Michael L Smith: Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- Darren S Kadis: Pediatric Neuroimaging Research Consortium, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Division of Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, USA
- David R Moore: Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Otolaryngology, University of Cincinnati, Cincinnati, OH, USA
22. Steinmetzger K, Rosen S. Effects of acoustic periodicity and intelligibility on the neural oscillations in response to speech. Neuropsychologia 2017; 95:173-181. [DOI: 10.1016/j.neuropsychologia.2016.12.003]
23. Steinmetzger K, Rosen S. Effects of acoustic periodicity, intelligibility, and pre-stimulus alpha power on the event-related potentials in response to speech. Brain Lang 2017; 164:1-8. [PMID: 27690124] [DOI: 10.1016/j.bandl.2016.09.008]
Abstract
Magneto- and electroencephalographic (M/EEG) signals in response to acoustically degraded speech have been examined by several recent studies. Unambiguously interpreting the results is complicated by the fact that speech signal manipulations affect acoustics and intelligibility alike. In the current EEG study, the acoustic properties of the stimuli were altered and the trials were sorted according to the correctness of the listeners' spoken responses to separate out these two factors. Firstly, more periodicity (i.e. voicing) rendered the event-related potentials (ERPs) more negative during the first second after sentence onset, indicating a greater cortical sensitivity to auditory input with a pitch. Secondly, we observed a larger contingent negative variation (CNV) during sentence presentation when the subjects could subsequently repeat more words correctly. Additionally, slow alpha power (7-10 Hz) before sentences with the least correctly repeated words was increased, which may indicate that subjects were not focussed on the upcoming task.
Affiliation(s)
- Kurt Steinmetzger, Stuart Rosen: Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, London WC1N 1PF, United Kingdom
24. McMahon CM, Boisvert I, de Lissa P, Granger L, Ibrahim R, Lo CY, Miles K, Graham PL. Monitoring Alpha Oscillations and Pupil Dilation across a Performance-Intensity Function. Front Psychol 2016; 7:745. [PMID: 27252671] [PMCID: PMC4877370] [DOI: 10.3389/fpsyg.2016.00745]
Abstract
Listening to degraded speech can be challenging and requires a continuous investment of cognitive resources, which is more challenging still for those with hearing loss. However, while alpha power (8-12 Hz) and pupil dilation have been suggested as objective correlates of listening effort, it is not clear whether they index the same cognitive processes or instead reflect other sensory and/or neurophysiological mechanisms associated with the task. Therefore, the aim of this study was to compare alpha power and pupil dilation during a sentence recognition task across 15 randomized levels of noise (-7 to +7 dB SNR) using highly intelligible (16-channel vocoded) and moderately intelligible (6-channel vocoded) speech. Twenty young normal-hearing adults participated in the study; however, due to extraneous noise, data from only 16 (10 females, 6 males; aged 19-28 years) were used in the electroencephalography (EEG) analysis and 10 in the pupil analysis. Behavioral testing of perceived effort and speech performance was assessed at 3 fixed SNRs per participant and was comparable to sentence recognition performance assessed in the physiological test session for both 16- and 6-channel vocoded sentences. Results showed a significant interaction between SNR and channel vocoding for both the alpha power and the pupil size changes. While both measures significantly decreased with more positive SNRs for the 16-channel vocoding, this was not observed with the 6-channel vocoding. These findings suggest that the two measures may index different processes involved in speech perception, which show similar trends for highly intelligible speech but diverge for more spectrally degraded speech. The results to date suggest that these objective correlates of listening effort, and the cognitive processes involved in listening effort, are not yet sufficiently well understood to be used within a clinical setting.
Affiliation(s)
- Catherine M McMahon
- Department of Linguistics, Macquarie University, Sydney, NSW, Australia; The HEARing CRC, Melbourne, VIC, Australia
| | - Isabelle Boisvert
- Department of Linguistics, Macquarie University, Sydney, NSW, Australia; The HEARing CRC, Melbourne, VIC, Australia
| | - Peter de Lissa
- The HEARing CRC, Melbourne, VIC, Australia; Department of Psychology, Macquarie University, Sydney, NSW, Australia
| | - Louise Granger
- Department of Linguistics, Macquarie University, Sydney, NSW, Australia; The HEARing CRC, Melbourne, VIC, Australia
| | - Ronny Ibrahim
- Department of Linguistics, Macquarie University, Sydney, NSW, Australia; The HEARing CRC, Melbourne, VIC, Australia
| | - Chi Yhun Lo
- Department of Linguistics, Macquarie University, Sydney, NSW, Australia; The HEARing CRC, Melbourne, VIC, Australia
| | - Kelly Miles
- Department of Linguistics, Macquarie University, Sydney, NSW, Australia; The HEARing CRC, Melbourne, VIC, Australia
| | - Petra L Graham
- Department of Statistics, Macquarie University, Sydney, NSW, Australia
| |
|
25
|
Modalities of Thinking: State and Trait Effects on Cross-Frequency Functional Independent Brain Networks. Brain Topogr 2016; 29:477-90. [PMID: 26838167 DOI: 10.1007/s10548-016-0469-3] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/24/2015] [Accepted: 01/11/2016] [Indexed: 10/22/2022]
Abstract
Functional states of the brain are constituted by the temporally attuned activity of spatially distributed neural networks. Such networks can be identified by independent component analysis (ICA) applied to frequency-dependent source-localized EEG data. This methodology allows the identification of networks at high temporal resolution in frequency bands of established location-specific physiological functions. EEG measurements are sensitive to neural activity changes in cortical areas of modality-specific processing. We tested effects of modality-specific processing on functional brain networks. Phasic modality-specific processing was induced via tasks (state effects) and tonic processing was assessed via modality-specific person parameters (trait effects). Modality-specific person parameters and 64-channel EEG were obtained from 70 male, right-handed students. Person parameters were obtained using cognitive style questionnaires, cognitive tests, and thinking modality self-reports. EEG was recorded during four conditions: spatial visualization, object visualization, verbalization, and resting. Twelve cross-frequency networks were extracted from source-localized EEG across six frequency bands using ICA. RMANOVAs, Pearson correlations, and path modelling examined effects of tasks and person parameters on networks. Results identified distinct state- and trait-dependent functional networks. State-dependent networks were characterized by decreased, trait-dependent networks by increased alpha activity in sub-regions of modality-specific pathways. Pathways of competing modalities showed opposing alpha changes. State- and trait-dependent alpha were associated with inhibitory and automated processing, respectively. Antagonistic alpha modulations in areas of competing modalities likely prevent intruding effects of modality-irrelevant processing. Considerable research suggested alpha modulations related to modality-specific states and traits. This study identified the distinct electrophysiological cortical frequency-dependent networks within which they operate.
|
26
|
Becker R, Knock S, Ritter P, Jirsa V. Relating Alpha Power and Phase to Population Firing and Hemodynamic Activity Using a Thalamo-cortical Neural Mass Model. PLoS Comput Biol 2015; 11:e1004352. [PMID: 26335064 PMCID: PMC4559309 DOI: 10.1371/journal.pcbi.1004352] [Citation(s) in RCA: 37] [Impact Index Per Article: 4.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/18/2014] [Accepted: 05/27/2015] [Indexed: 11/18/2022] Open
Abstract
Oscillations are ubiquitous phenomena in the animal and human brain. Among them, the alpha rhythm in human EEG is one of the most prominent examples. However, its precise mechanisms of generation are still poorly understood. It was mainly this lack of knowledge that motivated a number of simultaneous electroencephalography (EEG)-functional magnetic resonance imaging (fMRI) studies. This approach revealed how oscillatory neuronal signatures such as the alpha rhythm are paralleled by changes of the blood oxygenation level dependent (BOLD) signal. Several such studies revealed a negative correlation between the alpha rhythm and the hemodynamic BOLD signal in visual cortex and a positive correlation in the thalamus. In this study we explore the potential generative mechanisms that lead to those observations. We use a bursting-capable Stefanescu-Jirsa 3D (SJ3D) neural-mass model that reproduces a wide repertoire of prominent features of local neuronal-population dynamics. We construct a thalamo-cortical network of coupled SJ3D nodes considering excitatory and inhibitory directed connections. The model suggests that an inverse correlation between cortical multi-unit activity, i.e. the firing of neuronal populations, and narrow-band local field potential oscillations in the alpha band underlies the empirically observed negative correlation between alpha-rhythm power and fMRI signal in visual cortex. Furthermore, the model suggests that the interplay between tonic and bursting mode in thalamus and cortex is critical for this relation. This demonstrates how biophysically meaningful modelling can generate precise and testable hypotheses about the underpinnings of large-scale neuroimaging signals.
Affiliation(s)
- Robert Becker
- Functional Brain Mapping Lab, University of Geneva, Geneva, Switzerland
| | - Stuart Knock
- Institut de Neurosciences des Systèmes, Aix Marseille Université, Marseille, France
| | - Petra Ritter
- Minerva Research Group BrainModes, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
- Dept. Neurology, Charité & Bernstein Center for Computational Neuroscience—University Medicine, Berlin, Germany
- Berlin School of Mind and Brain & Mind and Brain Institute, Humboldt University, Berlin, Germany
| | - Viktor Jirsa
- Institut de Neurosciences des Systèmes, Aix Marseille Université, Marseille, France
- Inserm, UMR 1106, Aix Marseille Université, Marseille, France
| |
|
27
|
Neural alpha dynamics in younger and older listeners reflect acoustic challenges and predictive benefits. J Neurosci 2015; 35:1458-67. [PMID: 25632123 DOI: 10.1523/jneurosci.3250-14.2015] [Citation(s) in RCA: 97] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Speech comprehension in multitalker situations is a notorious real-life challenge, particularly for older listeners. Younger listeners exploit stimulus-inherent acoustic detail, but are they also actively predicting upcoming information? And further, how do older listeners deal with acoustic and predictive information? To understand the neural dynamics of listening difficulties and the corresponding listening strategies, we contrasted neural responses in the alpha-band (∼10 Hz) in younger (20-30 years, n = 18) and healthy older (60-70 years, n = 20) participants under changing task demands in a two-talker paradigm. Electroencephalograms were recorded while participants listened to two spoken digits against a distracting talker and decided whether the second digit was smaller or larger. Acoustic detail (temporal fine structure) and predictiveness (the degree to which the first digit predicted the second) varied orthogonally. Alpha power at widespread scalp sites decreased with increasing acoustic detail (during target digit presentation) but also with increasing predictiveness (in-between target digits). For older compared with younger listeners, acoustic detail had a stronger impact on task performance and alpha power modulation. This suggests that alpha dynamics plays an important role in the changes in listening behavior that occur with age. Last, alpha power variations resulting from stimulus manipulations (of acoustic detail and predictiveness) as well as task-independent overall alpha power were related to subjective listening effort. The present data show that alpha dynamics is a promising neural marker of individual difficulties as well as age-related changes in sensation, perception, and comprehension in complex communication situations.
|
28
|
Wingfield A, Peelle JE. The effects of hearing loss on neural processing and plasticity. Front Syst Neurosci 2015; 9:35. [PMID: 25798095 PMCID: PMC4351590 DOI: 10.3389/fnsys.2015.00035] [Citation(s) in RCA: 22] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2015] [Accepted: 02/19/2015] [Indexed: 11/28/2022] Open
Affiliation(s)
- Arthur Wingfield
- Volen National Center for Complex Systems, Brandeis University, Waltham, MA, USA
| | - Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
| |
|
29
|
Petersen EB, Wöstmann M, Obleser J, Stenfelt S, Lunner T. Hearing loss impacts neural alpha oscillations under adverse listening conditions. Front Psychol 2015; 6:177. [PMID: 25745410 PMCID: PMC4333793 DOI: 10.3389/fpsyg.2015.00177] [Citation(s) in RCA: 48] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/02/2014] [Accepted: 02/04/2015] [Indexed: 12/04/2022] Open
Abstract
Degradations in external, acoustic stimulation have long been suspected to increase the load on working memory (WM). One neural signature of WM load is enhanced power of alpha oscillations (6-12 Hz). However, it is unknown to what extent common internal, auditory degradation, that is, hearing impairment, affects the neural mechanisms of WM when audibility has been ensured via amplification. Using an adapted auditory Sternberg paradigm, we varied the orthogonal factors memory load and background noise level, while the electroencephalogram was recorded. In each trial, participants were presented with 2, 4, or 6 spoken digits embedded in one of three different levels of background noise. After a stimulus-free delay interval, participants indicated whether a probe digit had appeared in the sequence of digits. Participants were healthy older adults (62-86 years), with normal to moderately impaired hearing. Importantly, the background noise levels were individually adjusted and participants were wearing hearing aids to equalize audibility across participants. Irrespective of hearing loss (HL), behavioral performance improved with lower memory load and also with lower levels of background noise. Interestingly, the alpha power in the stimulus-free delay interval was dependent on the interplay between task demands (memory load and noise level) and HL; while alpha power increased with HL during low and intermediate levels of memory load and background noise, it dropped for participants with the relatively most severe HL under the highest memory load and background noise level. These findings suggest that adaptive neural mechanisms for coping with adverse listening conditions break down for higher degrees of HL, even when adequate hearing aid amplification is in place.
Affiliation(s)
- Eline B. Petersen
- Eriksholm Research Centre, Snekkersten, Denmark
- Technical Audiology, Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | - Malte Wöstmann
- International Max Planck Research School on Neuroscience of Communication, Leipzig, Germany
- Max Planck Research Group “Auditory Cognition”, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Jonas Obleser
- Max Planck Research Group “Auditory Cognition”, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Stefan Stenfelt
- Technical Audiology, Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | - Thomas Lunner
- Eriksholm Research Centre, Snekkersten, Denmark
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| |
|
30
|
Birot G, Spinelli L, Vulliémoz S, Mégevand P, Brunet D, Seeck M, Michel CM. Head model and electrical source imaging: a study of 38 epileptic patients. Neuroimage Clin 2014; 5:77-83. [PMID: 25003030 PMCID: PMC4081973 DOI: 10.1016/j.nicl.2014.06.005] [Citation(s) in RCA: 73] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 03/17/2014] [Revised: 05/28/2014] [Accepted: 06/06/2014] [Indexed: 11/18/2022]
Abstract
Electrical source imaging (ESI) aims at reconstructing the electrical brain activity from scalp EEG. When applied to interictal epileptiform discharges (IEDs), this technique is of great use for identifying the irritative zone in focal epilepsies. Inaccuracies in the modeling of electromagnetic field propagation in the head (forward model) may strongly influence ESI and lead to mislocalization of IED generators. However, a systematic study on the influence of the selected head model on the localization precision of IED in a large number of patients with known focus localization has not yet been performed. Here we present such a performance evaluation of different head models in a dataset of 38 epileptic patients who had undergone high-density scalp EEG, intracranial EEG and, for the majority, subsequent surgery. We compared ESI accuracy resulting from three head models: a Locally Spherical Model with Anatomical Constraints (LSMAC), a Boundary Element Model (BEM) and a Finite Element Model (FEM). All of them were computed from the individual MRI of the patient and ESI was performed on averaged IED. We found that all head models provided very similar source locations. In patients with a positive post-operative outcome, at least 74% of the source maxima were within the resection. The median distance from the source maximum to the nearest intracranial electrode showing IED was 13.2, 15.6 and 15.6 mm for LSMAC, BEM and FEM, respectively. The study demonstrates that in clinical applications, the use of highly sophisticated and difficult-to-implement head models is not a crucial factor for accurate ESI.
Affiliation(s)
- Gwénael Birot
- Department of Fundamental and Clinical Neurosciences, University of Geneva, Rue Michel Servet 1, 1211 Genève, Switzerland
| | - Laurent Spinelli
- EEG and Epilepsy Unit, Department of Neurology, University Hospital of Geneva, Rue Gabrielle-Perret-Gentil 4, 1205 Genève, Switzerland
| | - Serge Vulliémoz
- EEG and Epilepsy Unit, Department of Neurology, University Hospital of Geneva, Rue Gabrielle-Perret-Gentil 4, 1205 Genève, Switzerland
| | - Pierre Mégevand
- EEG and Epilepsy Unit, Department of Neurology, University Hospital of Geneva, Rue Gabrielle-Perret-Gentil 4, 1205 Genève, Switzerland
- Department of Neurosurgery, Hofstra North Shore-LIJ School of Medicine, Feinstein Institute for Medical Research, Manhasset, NY 11030, USA
| | - Denis Brunet
- Department of Fundamental and Clinical Neurosciences, University of Geneva, Rue Michel Servet 1, 1211 Genève, Switzerland
| | - Margitta Seeck
- EEG and Epilepsy Unit, Department of Neurology, University Hospital of Geneva, Rue Gabrielle-Perret-Gentil 4, 1205 Genève, Switzerland
| | - Christoph M. Michel
- Department of Fundamental and Clinical Neurosciences, University of Geneva, Rue Michel Servet 1, 1211 Genève, Switzerland
| |
|
31
|
Scharinger M, Herrmann B, Nierhaus T, Obleser J. Simultaneous EEG-fMRI brain signatures of auditory cue utilization. Front Neurosci 2014; 8:137. [PMID: 24926232 PMCID: PMC4044900 DOI: 10.3389/fnins.2014.00137] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/24/2014] [Accepted: 05/17/2014] [Indexed: 11/13/2022] Open
Abstract
Optimal utilization of acoustic cues during auditory categorization is a vital skill, particularly when informative cues become occluded or degraded. Consequently, changing acoustic environments require flexibly choosing and switching amongst the available cues. The present study targets the brain functions underlying such changes in cue utilization. Participants performed a categorization task with immediate feedback on acoustic stimuli from two categories that varied in duration and spectral properties, while we simultaneously recorded Blood Oxygenation Level Dependent (BOLD) responses in fMRI and electroencephalograms (EEGs). In the first half of the experiment, categories could be best discriminated by spectral properties. Halfway through the experiment, spectral degradation rendered the stimulus duration the more informative cue. Behaviorally, degradation decreased the likelihood of utilizing spectral cues. Spectrally degrading the acoustic signal led to increased alpha power compared to nondegraded stimuli. The EEG-informed fMRI analyses revealed that alpha power correlated with BOLD changes in inferior parietal cortex and right posterior superior temporal gyrus (including planum temporale). In both areas, spectral degradation led to a weaker coupling of BOLD response to behavioral utilization of the spectral cue. These data provide converging evidence from behavioral modeling, electrophysiology, and hemodynamics that (a) increased alpha power mediates the inhibition of uninformative (here spectral) stimulus features, and that (b) the parietal attention network supports optimal cue utilization in auditory categorization. The results highlight the complex cortical processing of auditory categorization under realistic listening challenges.
Affiliation(s)
- Mathias Scharinger
- Max Planck Research Group "Auditory Cognition," Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Björn Herrmann
- Max Planck Research Group "Auditory Cognition," Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Till Nierhaus
- Department of Neurology, Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| | - Jonas Obleser
- Max Planck Research Group "Auditory Cognition," Max Planck Institute for Human Cognitive and Brain Sciences, Leipzig, Germany
| |
|
32
|
Becker R, Pefkou M, Michel CM, Hervais-Adelman AG. Corrigendum: Left temporal alpha-band activity reflects single word intelligibility. Front Syst Neurosci 2014; 8:47. [PMID: 24744706 PMCID: PMC3978255 DOI: 10.3389/fnsys.2014.00047] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/27/2014] [Accepted: 03/14/2014] [Indexed: 11/13/2022] Open
Affiliation(s)
- Robert Becker
- Functional Brain Mapping Lab, Department of Fundamental Neuroscience, University of Geneva, University Medical School, Geneva, Switzerland
| | - Maria Pefkou
- Brain and Language Lab, Department of Clinical Neuroscience, University of Geneva, University Medical School, Geneva, Switzerland
| | - Christoph M. Michel
- Functional Brain Mapping Lab, Department of Fundamental Neuroscience, University of Geneva, University Medical School, Geneva, Switzerland
| | - Alexis G. Hervais-Adelman
- Brain and Language Lab, Department of Clinical Neuroscience, University of Geneva, University Medical School, Geneva, Switzerland
| |
|