1.
Miao Y, Suzuki H, Sugano H, Ueda T, Iimura Y, Matsui R, Tanaka T. Causal Connectivity Network Analysis of Ictal Electrocorticogram With Temporal Lobe Epilepsy Based on Dynamic Phase Transfer Entropy. IEEE Trans Biomed Eng 2024; 71:531-541. [PMID: 37624716] [DOI: 10.1109/tbme.2023.3308616]
Abstract
Temporal lobe epilepsy (TLE) has been conceptualized as a brain network disease in which seizures generate connectivity dynamics within and beyond the temporal lobe structures. The hippocampus is a representative epileptogenic focus in TLE. Understanding causal network connectivity during seizures is crucial for revealing how epileptic seizures originating in the hippocampus (HPC) spread to the lateral temporal cortex (LTC), as captured by ictal electrocorticogram (ECoG), particularly in the high-frequency oscillation (HFO) bands. In this study, we proposed a unified-epoch dynamic causality analysis method to investigate the causal influence dynamics between two brain regions (HPC and LTC) in the interictal and ictal phases over the frequency range of 1-500 Hz by introducing the phase transfer entropy (PTE) out/in-ratio and a sliding window. We also proposed PTE-based machine learning algorithms to identify the epileptogenic zone (EZ). Nine patients with a total of 26 seizures were included in this study. We hypothesized that: 1) HPC is the focus, with stronger causal connectivity than LTC in the ictal state at gamma and HFO bands; 2) causal connectivity in the ictal phase shows significant changes compared with the interictal phase; and 3) the PTE out/in-ratio in the HFO band can identify the EZ with the best prediction performance.
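A minimal sketch of the PTE out/in-ratio idea described above, assuming a histogram-based plug-in transfer-entropy estimator applied to Hilbert phases; the paper's exact estimator, binning, band filtering, and sliding-window scheme are not reproduced here:

```python
import numpy as np
from scipy.signal import hilbert

def phase_series(x):
    """Instantaneous phase of a (band-filtered) signal via the Hilbert transform."""
    return np.angle(hilbert(x))

def transfer_entropy(src, tgt, n_bins=8, lag=1):
    """Histogram-based phase transfer entropy from src to tgt, in bits.

    TE = H(tgt_next | tgt_now) - H(tgt_next | tgt_now, src_now),
    estimated from binned phase values (plug-in estimator).
    """
    s, t_now, t_next = src[:-lag], tgt[:-lag], tgt[lag:]
    edges = np.linspace(-np.pi, np.pi, n_bins + 1)
    ds, dt, dn = (np.digitize(v, edges) for v in (s, t_now, t_next))

    def joint_entropy(*cols):
        counts = np.unique(np.stack(cols, axis=1), axis=0, return_counts=True)[1]
        p = counts / counts.sum()
        return -np.sum(p * np.log2(p))

    return (joint_entropy(dn, dt) - joint_entropy(dt)
            - joint_entropy(dn, dt, ds) + joint_entropy(dt, ds))

def pte_out_in_ratio(phases):
    """Per-channel ratio of total outgoing to total incoming PTE."""
    n = phases.shape[0]
    pte = np.zeros((n, n))
    for i in range(n):
        for j in range(n):
            if i != j:
                pte[i, j] = transfer_entropy(phases[i], phases[j])
    return pte.sum(axis=1) / pte.sum(axis=0)  # out / in per channel
```

In a sliding-window analysis, `pte_out_in_ratio` would be applied to successive windows of the band-filtered ECoG; a ratio above 1 marks a channel as a net driver of the network.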
2.
Katanga JA, Hamilton CA, Walker L, Attems J, Thomas AJ. Age-related hearing loss and dementia-related neuropathology: An analysis of the United Kingdom brains for dementia research cohort. Brain Pathol 2023; 33:e13188. [PMID: 37551936] [PMCID: PMC10580004] [DOI: 10.1111/bpa.13188]
Abstract
Age-related hearing loss frequently precedes or coexists with mild cognitive impairment and dementia. The role specific neuropathologies play in this association, as either a cause or a consequence, is unclear. We therefore aimed to investigate whether specific dementia-related neuropathologies were associated with hearing impairment in later life. We analysed ante-mortem hearing impairment data together with post-mortem neuropathological data for 442 participants from the Brains for Dementia Research cohort. Binary logistic regression models were used to estimate the association of hearing impairment with the presence of each dementia-related neuropathology overall, and with specific staged changes. All analyses adjusted for age and sex, and several sensitivity analyses were conducted to test the robustness of the findings. Presence and density of neuritic plaques were associated with higher odds of ante-mortem hearing impairment (OR = 3.65, 95% CI 1.78-7.46 for frequent plaque density). Presence of any Lewy body (LB) disease was likewise associated with hearing impairment (OR = 2.10, 95% CI 1.27-3.48), but this did not increase with higher cortical pathology (OR = 1.53, 95% CI 0.75-3.11). Nonspecific amyloid deposition, neurofibrillary tangle staging, overall Alzheimer's disease (AD) neuropathology level, and cerebrovascular disease were not clearly associated with increased risks of hearing impairment. Our results provide some support for an association between dementia-related neuropathology and hearing loss and suggest that hearing loss may be associated with a high neuritic plaque burden and may be more common in Lewy body disease.
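The regression analysis above can be illustrated with a small self-contained sketch: Newton-Raphson maximum-likelihood logistic regression with Wald confidence intervals, run on synthetic data. The authors presumably used standard statistical software; all variable names and effect sizes below are hypothetical illustrations, not the study data.

```python
import numpy as np

def fit_logistic(X, y, n_iter=25):
    """Maximum-likelihood logistic regression via Newton-Raphson.

    X: (n, p) design matrix including an intercept column; y: 0/1 outcomes.
    Returns coefficients and their standard errors (from the inverse Hessian).
    """
    beta = np.zeros(X.shape[1])
    for _ in range(n_iter):
        p = 1.0 / (1.0 + np.exp(-X @ beta))
        W = p * (1 - p)                      # Bernoulli variance weights
        H = X.T @ (X * W[:, None])           # observed information matrix
        beta += np.linalg.solve(H, X.T @ (y - p))
    se = np.sqrt(np.diag(np.linalg.inv(H)))
    return beta, se

def odds_ratio(beta_k, se_k, z=1.96):
    """Odds ratio and 95% Wald confidence interval for one coefficient."""
    return np.exp(beta_k), (np.exp(beta_k - z * se_k), np.exp(beta_k + z * se_k))
```

With a binary pathology indicator and age as covariates in `X`, `odds_ratio` on the pathology coefficient yields estimates of the kind reported in the abstract (e.g., an OR with its 95% CI), adjusted for the other columns of the design matrix.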
Affiliation(s)
- Jessica A. Katanga
- Translational and Clinical Research Institute, Newcastle University, Newcastle upon Tyne, UK
- Calum A. Hamilton
- Translational and Clinical Research Institute, Newcastle University, Newcastle upon Tyne, UK
- Lauren Walker
- Translational and Clinical Research Institute, Newcastle University, Newcastle upon Tyne, UK
- Johannes Attems
- Translational and Clinical Research Institute, Newcastle University, Newcastle upon Tyne, UK
- Alan J. Thomas
- Translational and Clinical Research Institute, Newcastle University, Newcastle upon Tyne, UK
3.
Van Hirtum T, Somers B, Dieudonné B, Verschueren E, Wouters J, Francart T. Neural envelope tracking predicts speech intelligibility and hearing aid benefit in children with hearing loss. Hear Res 2023; 439:108893. [PMID: 37806102] [DOI: 10.1016/j.heares.2023.108893]
Abstract
Early assessment of hearing aid benefit is crucial, as the extent to which hearing aids provide audible speech information predicts speech and language outcomes. A growing body of research has proposed neural envelope tracking as an objective measure of speech intelligibility, particularly for individuals unable to provide reliable behavioral feedback. However, its potential for evaluating speech intelligibility and hearing aid benefit in children with hearing loss remains unexplored. In this study, we investigated neural envelope tracking in children with permanent hearing loss through two separate experiments. EEG data were recorded while children listened to age-appropriate stories (Experiment 1) or an animated movie (Experiment 2) under aided and unaided conditions (using personal hearing aids) at multiple stimulus intensities. Neural envelope tracking was evaluated using a linear decoder reconstructing the speech envelope from the EEG in the delta band (0.5-4 Hz). Additionally, we calculated temporal response functions (TRFs) to investigate the spatio-temporal dynamics of the response. In both experiments, neural tracking increased with increasing stimulus intensity, but only in the unaided condition. In the aided condition, neural tracking remained stable across a wide range of intensities, as long as speech intelligibility was maintained. Similarly, TRF amplitudes increased with increasing stimulus intensity in the unaided condition, while in the aided condition significant differences were found in TRF latency rather than TRF amplitude. This suggests that decreasing stimulus intensity does not necessarily impact neural tracking. Furthermore, the use of personal hearing aids significantly enhanced neural envelope tracking, particularly in challenging speech conditions that would be inaudible when unaided. 
Finally, we found a strong correlation between neural envelope tracking and behaviorally measured speech intelligibility for both narrated stories (Experiment 1) and movie stimuli (Experiment 2). Altogether, these findings indicate that neural envelope tracking could be a valuable tool for predicting speech intelligibility benefits derived from personal hearing aids in hearing-impaired children. Incorporating narrated stories or engaging movies expands the accessibility of these methods even in clinical settings, offering new avenues for using objective speech measures to guide pediatric audiology decision-making.
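The backward (stimulus-reconstruction) model used to quantify neural envelope tracking might be sketched as follows, assuming a simple ridge-regularized linear decoder over post-stimulus EEG lags; the actual lag range, regularization, and cross-validation scheme are the authors' choices and are not reproduced here:

```python
import numpy as np

def lagged_design(eeg, max_lag):
    """Design matrix whose row t holds the EEG samples at times t..t+max_lag.

    Because the neural response follows the stimulus, the decoder integrates
    EEG occurring just after each envelope sample.
    """
    n_t, n_ch = eeg.shape
    X = np.zeros((n_t, n_ch * (max_lag + 1)))
    for L in range(max_lag + 1):
        X[:n_t - L, L * n_ch:(L + 1) * n_ch] = eeg[L:]
    return X

def train_decoder(eeg, envelope, max_lag=16, lam=100.0):
    """Ridge-regularized linear backward model: EEG lags -> speech envelope."""
    X = lagged_design(eeg, max_lag)
    return np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ envelope)

def neural_tracking(eeg, envelope, w, max_lag=16):
    """Neural tracking = Pearson r between reconstructed and actual envelope."""
    reconstruction = lagged_design(eeg, max_lag) @ w
    return np.corrcoef(reconstruction, envelope)[0, 1]
```

Trained on one part of the recording and evaluated on held-out data, the correlation returned by `neural_tracking` is the tracking measure that is then compared across stimulus intensities and aided/unaided conditions.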
Affiliation(s)
- Tilde Van Hirtum
- KU Leuven - University of Leuven, Department of Neurosciences, Experimental Oto-rhino-laryngology, Herestraat 49 bus 721, 3000 Leuven, Belgium
- Ben Somers
- KU Leuven - University of Leuven, Department of Neurosciences, Experimental Oto-rhino-laryngology, Herestraat 49 bus 721, 3000 Leuven, Belgium
- Benjamin Dieudonné
- KU Leuven - University of Leuven, Department of Neurosciences, Experimental Oto-rhino-laryngology, Herestraat 49 bus 721, 3000 Leuven, Belgium
- Eline Verschueren
- KU Leuven - University of Leuven, Department of Neurosciences, Experimental Oto-rhino-laryngology, Herestraat 49 bus 721, 3000 Leuven, Belgium
- Jan Wouters
- KU Leuven - University of Leuven, Department of Neurosciences, Experimental Oto-rhino-laryngology, Herestraat 49 bus 721, 3000 Leuven, Belgium
- Tom Francart
- KU Leuven - University of Leuven, Department of Neurosciences, Experimental Oto-rhino-laryngology, Herestraat 49 bus 721, 3000 Leuven, Belgium
4.
Van Hirtum T, Somers B, Verschueren E, Dieudonné B, Francart T. Delta-band neural envelope tracking predicts speech intelligibility in noise in preschoolers. Hear Res 2023; 434:108785. [PMID: 37172414] [DOI: 10.1016/j.heares.2023.108785]
Abstract
Behavioral tests are currently the gold standard for measuring speech intelligibility. However, these tests can be difficult to administer in young children due to factors such as motivation, linguistic knowledge, and cognitive skills. It has been shown that measures of neural envelope tracking can be used to predict speech intelligibility and overcome these issues. However, its potential as an objective measure of speech intelligibility in noise remains to be investigated in preschool children. Here, we evaluated neural envelope tracking as a function of signal-to-noise ratio (SNR) in fourteen 5-year-old children. We examined EEG responses to natural, continuous speech presented at different SNRs ranging from -8 dB (very difficult) to 8 dB SNR (very easy). As expected, delta-band (0.5-4 Hz) tracking increased with increasing stimulus SNR. However, this increase was not strictly monotonic, as neural tracking reached a plateau between 0 and 4 dB SNR, similar to the behavioral speech intelligibility outcomes. These findings indicate that neural tracking in the delta band remains stable as long as the acoustical degradation of the speech signal does not produce significant changes in speech intelligibility. Theta-band tracking (4-8 Hz), on the other hand, was drastically reduced and more easily affected by noise in children, making it less reliable as a measure of speech intelligibility. By contrast, neural envelope tracking in the delta band was directly associated with behavioral measures of speech intelligibility. This suggests that neural envelope tracking in the delta band is a valuable tool for evaluating speech-in-noise intelligibility in preschoolers, highlighting its potential as an objective measure of speech perception in difficult-to-test populations.
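Extracting the delta and theta bands analyzed above is a standard zero-phase band-pass filtering step; a sketch, with the caveat that the study's exact filter design (type, order, implementation) is an assumption here:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def band_filter(x, lo, hi, fs, order=4):
    """Zero-phase Butterworth band-pass filter (second-order sections).

    Used to isolate, e.g., delta (0.5-4 Hz) or theta (4-8 Hz) EEG activity
    without introducing phase distortion (forward-backward filtering).
    """
    sos = butter(order, [lo, hi], btype="bandpass", fs=fs, output="sos")
    return sosfiltfilt(sos, x, axis=0)
```

Usage: `delta = band_filter(eeg, 0.5, 4, fs)` and `theta = band_filter(eeg, 4, 8, fs)` before computing band-specific envelope tracking.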
Affiliation(s)
- Tilde Van Hirtum
- KU Leuven - University of Leuven, Department of Neurosciences, Experimental Oto-rhino-laryngology, Herestraat 49 bus 721, Leuven 3000, Belgium
- Ben Somers
- KU Leuven - University of Leuven, Department of Neurosciences, Experimental Oto-rhino-laryngology, Herestraat 49 bus 721, Leuven 3000, Belgium
- Eline Verschueren
- KU Leuven - University of Leuven, Department of Neurosciences, Experimental Oto-rhino-laryngology, Herestraat 49 bus 721, Leuven 3000, Belgium
- Benjamin Dieudonné
- KU Leuven - University of Leuven, Department of Neurosciences, Experimental Oto-rhino-laryngology, Herestraat 49 bus 721, Leuven 3000, Belgium
- Tom Francart
- KU Leuven - University of Leuven, Department of Neurosciences, Experimental Oto-rhino-laryngology, Herestraat 49 bus 721, Leuven 3000, Belgium
5.
Ryan DB, Eckert MA, Sellers EW, Schairer KS, McBee MT, Ridley EA, Smith SL. Performance Monitoring and Cognitive Inhibition during a Speech-in-Noise Task in Older Listeners. Semin Hear 2023; 44:124-139. [PMID: 37122879] [PMCID: PMC10147504] [DOI: 10.1055/s-0043-1767695]
Abstract
The goal of this study was to examine the effect of hearing loss on theta and alpha electroencephalography (EEG) frequency power as measures of performance monitoring and cognitive inhibition, respectively, during a speech-in-noise task. It was hypothesized that hearing loss would be associated with a shift in peak theta and alpha power toward easier conditions compared with normal-hearing adults, reflecting how hearing loss modulates the recruitment of listening effort in easier listening conditions. Nine older adults with normal hearing (ONH) and 10 older adults with hearing loss (OHL) participated in this study. EEG data were collected from all participants while they completed the words-in-noise task. The ONH group showed an inverted U-shaped effect of signal-to-noise ratio (SNR), but there were limited effects of SNR on theta or alpha power in the OHL group. The results of the ONH group support the growing body of literature showing effects of listening conditions on alpha and theta power. The null results of listening condition in the OHL group add to a smaller body of literature suggesting that listening-effort research should include conditions with near-ceiling performance.
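Band-limited power of the kind analyzed here is commonly estimated from a Welch periodogram; a minimal sketch (the study's exact spectral estimator, windowing, and normalization are assumptions):

```python
import numpy as np
from scipy.signal import welch

def band_power(x, fs, band):
    """Mean Welch power spectral density of x within a frequency band (Hz)."""
    f, psd = welch(x, fs=fs, nperseg=int(2 * fs))  # 0.5 Hz resolution
    lo, hi = band
    return psd[(f >= lo) & (f <= hi)].mean()
```

Usage: `theta = band_power(eeg_channel, fs, (4, 8))` and `alpha = band_power(eeg_channel, fs, (8, 12))`, computed per SNR condition and compared across groups.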
Affiliation(s)
- David B. Ryan
- Hearing and Balance Research Program, James H. Quillen VA Medical Center, Mountain Home, Tennessee
- Department of Psychology, East Tennessee State University, Johnson City, Tennessee
- Department of Head and Neck Surgery and Communication Sciences, Duke University School of Medicine, Durham, North Carolina
- Mark A. Eckert
- Department of Otolaryngology - Head and Neck Surgery, Hearing Research Program, Medical University of South Carolina, Charleston, North Carolina
- Eric W. Sellers
- Department of Psychology, East Tennessee State University, Johnson City, Tennessee
- Kim S. Schairer
- Hearing and Balance Research Program, James H. Quillen VA Medical Center, Mountain Home, Tennessee
- Department of Audiology and Speech Language Pathology, East Tennessee State University, Johnson City, Tennessee
- Matthew T. McBee
- Department of Psychology, East Tennessee State University, Johnson City, Tennessee
- Elizabeth A. Ridley
- Department of Psychology, East Tennessee State University, Johnson City, Tennessee
- Sherri L. Smith
- Department of Head and Neck Surgery and Communication Sciences, Duke University School of Medicine, Durham, North Carolina
- Center for the Study of Aging and Human Development, Duke University, Durham, North Carolina
- Department of Population Health Sciences, Duke University School of Medicine, Durham, North Carolina
- Audiology and Speech Pathology Service, Durham Veterans Affairs Healthcare System, Durham, North Carolina
6.
Haumann NT, Petersen B, Vuust P, Brattico E. Age differences in central auditory system responses to naturalistic music. Biol Psychol 2023; 179:108566. [PMID: 37086903] [DOI: 10.1016/j.biopsycho.2023.108566]
Abstract
Aging influences the central auditory system, leading to difficulties in decoding and understanding overlapping sound signals, such as speech in noise or polyphonic music. Studies of central auditory system evoked responses (ERs) have found that, compared with young listeners, older listeners show increased amplitudes (less inhibition) of the P1 and N1 and decreased amplitudes of the P2, mismatch negativity (MMN), and P3a responses. While preceding research has focused on simplified auditory stimuli, we here tested whether the previously observed age-related differences could be replicated with sounds embedded in medium and highly naturalistic musical contexts. Older (age 55-77 years) and younger adults (age 21-31 years) listened to medium naturalistic (synthesized melody) and highly naturalistic (studio recording of a music piece) stimuli. For the medium naturalistic music, the age-group differences in P1, N1, P2, MMN, and P3a amplitudes were all replicated. The age-group differences, however, appeared reduced with the highly naturalistic music compared with the medium naturalistic music. The finding of lower P2 amplitude in older than in young adults was replicated for slow event rates (0.3-2.9 Hz) in the highly naturalistic music. Moreover, the ER latencies suggested a gradual slowing of the auditory processing time course for highly compared with medium naturalistic stimuli, irrespective of age. These results support the idea that age-related differences in ERs can partly be observed with naturalistic stimuli, opening new avenues for including naturalistic stimuli in the investigation of age-related central auditory system disorders.
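Peak amplitudes and latencies of ERs such as P1, N1, and P2 are typically read out from a component-specific time window of the averaged response; a sketch (the window boundaries and peak-picking rule here are illustrative, not the authors' exact procedure):

```python
import numpy as np

def peak_amplitude_latency(erp, fs, window, polarity=1):
    """Peak amplitude and latency of an evoked response within a time window.

    window: (start_s, end_s) relative to stimulus onset. Use polarity=1 for
    positive components (P1, P2, P3a) and polarity=-1 for negative ones
    (N1, MMN).
    """
    i0, i1 = (int(t * fs) for t in window)
    segment = polarity * erp[i0:i1]          # flip so the peak is a maximum
    k = int(np.argmax(segment))
    return polarity * segment[k], (i0 + k) / fs
```

Group comparisons of the kind reported above would then contrast these amplitude and latency values between older and younger listeners for each component and stimulus type.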
Affiliation(s)
- Niels Trusbak Haumann
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus/Aalborg, Universitetsbyen 3, 8000 Aarhus C, Denmark
- Bjørn Petersen
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus/Aalborg, Universitetsbyen 3, 8000 Aarhus C, Denmark
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus/Aalborg, Universitetsbyen 3, 8000 Aarhus C, Denmark
- Elvira Brattico
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University and The Royal Academy of Music, Aarhus/Aalborg, Universitetsbyen 3, 8000 Aarhus C, Denmark
7.
Carter JA, Bidelman GM. Perceptual warping exposes categorical representations for speech in human brainstem responses. Neuroimage 2023; 269:119899. [PMID: 36720437] [PMCID: PMC9992300] [DOI: 10.1016/j.neuroimage.2023.119899]
Abstract
The brain transforms continuous acoustic events into discrete category representations to downsample the speech signal for our perceptual-cognitive systems. Such phonetic categories are highly malleable, and their percepts can change depending on surrounding stimulus context. Previous work suggests this acoustic-phonetic mapping and the perceptual warping of speech emerge in the brain no earlier than auditory cortex. Here, we examined whether these auditory-category phenomena inherent to speech perception occur even earlier in the human brain, at the level of the auditory brainstem. We recorded speech-evoked frequency-following responses (FFRs) during a task designed to induce more or less warping of listeners' perceptual categories depending on the stimulus presentation order of a speech continuum (random, forward, or backward directions). We used a novel clustered stimulus paradigm to rapidly record the high trial counts needed for FFRs concurrent with active behavioral tasks. We found that serial stimulus order caused perceptual shifts (hysteresis) near listeners' category boundary, confirming that identical speech tokens are perceived differently depending on stimulus context. Critically, we further show that neural FFRs during active (but not passive) listening are enhanced for prototypical vs. category-ambiguous tokens and are biased in the direction of listeners' phonetic label even for acoustically identical speech stimuli. These findings were not observed in the stimulus acoustics nor in model FFRs generated via a computational model of cochlear and auditory nerve transduction, confirming a central origin for the effects. Our data reveal that FFRs carry category-level information and suggest top-down processing actively shapes the neural encoding and categorization of speech at subcortical levels. These findings suggest the acoustic-phonetic mapping and perceptual warping in speech perception occur surprisingly early along the auditory neuraxis, which might aid understanding by reducing ambiguity inherent to the speech signal.
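A category boundary of the kind referenced above is commonly estimated by fitting a sigmoid to identification responses along the continuum; a sketch under that standard assumption (the authors' exact psychometric fitting procedure is not reproduced here):

```python
import numpy as np
from scipy.optimize import curve_fit

def sigmoid(x, x0, k):
    """Two-parameter logistic identification function (boundary x0, slope k)."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

def category_boundary(steps, p_responses):
    """Fit a sigmoid to identification rates; x0 estimates the category boundary.

    steps: stimulus positions along the continuum; p_responses: proportion of
    one category label at each step.
    """
    (x0, _k), _ = curve_fit(sigmoid, steps, p_responses,
                            p0=[np.mean(steps), 1.0])
    return x0
```

Hysteresis would then be quantified as the difference between boundaries fitted separately to the forward and backward presentation orders.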
Affiliation(s)
- Jared A Carter
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Division of Clinical Neuroscience, School of Medicine, Hearing Sciences - Scottish Section, University of Nottingham, Glasgow, Scotland, UK
- Gavin M Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, IN, USA; Program in Neuroscience, Indiana University, Bloomington, IN, USA
8.
Lai J, Alain C, Bidelman GM. Cortical-brainstem interplay during speech perception in older adults with and without hearing loss. Front Neurosci 2023; 17:1075368. [PMID: 36816123] [PMCID: PMC9932544] [DOI: 10.3389/fnins.2023.1075368]
Abstract
Introduction: Real-time modulation of brainstem frequency-following responses (FFRs) by online changes in cortical arousal state via the corticofugal (top-down) pathway has been demonstrated previously in young adults and is more prominent in the presence of background noise. FFRs during high cortical arousal states also have a stronger relationship with speech perception. Aging is associated with increased auditory brain responses, which might reflect degraded inhibitory processing within the peripheral and ascending pathways, or changes in attentional control regulation via descending auditory pathways. Here, we tested the hypothesis that online corticofugal interplay is impacted by age-related hearing loss. Methods: We measured EEG in older adults with normal hearing (NH) and with mild-to-moderate hearing loss (HL) while they performed speech identification tasks in different noise backgrounds. We measured α power to index online cortical arousal states during task engagement. Subsequently, we split brainstem speech-FFRs, on a trial-by-trial basis, according to fluctuations in concomitant cortical α power into low-α and high-α FFRs to index cortical-brainstem modulation. Results: We found cortical α power was smaller in the HL than in the NH group. In NH listeners, α-FFR modulation for clear speech (i.e., without noise) resembled that previously observed in younger adults for speech in noise. Cortical-brainstem modulation was further diminished in HL older adults in the clear condition and by noise in NH older adults. Machine-learning classification showed low-α FFR frequency spectra yielded higher accuracy for classifying listeners' perceptual performance in both NH and HL participants. Moreover, low-α FFRs decreased with increased hearing thresholds at 0.5-2 kHz for clear speech, but noise generally reduced low-α FFRs in the HL group. Discussion: Collectively, our study reveals that cortical arousal state actively shapes brainstem speech representations and provides a potential new mechanism for older listeners' difficulties perceiving speech in cocktail party-like listening situations, in the form of a miscoordination between cortical and subcortical levels of auditory processing.
Affiliation(s)
- Jesyin Lai
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States; Department of Diagnostic Imaging, St. Jude Children's Research Hospital, Memphis, TN, United States
- Claude Alain
- Rotman Research Institute, Baycrest Centre for Geriatric Care, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada
- Gavin M. Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States; Department of Speech, Language, and Hearing Sciences, Indiana University, Bloomington, IN, United States; Program in Neuroscience, Indiana University, Bloomington, IN, United States (corresponding author)
9.
Zhu S, Song J, Xia W, Xue Y. Aberrant brain functional network strength related to cognitive impairment in age-related hearing loss. Front Neurol 2022; 13:1071237. [PMID: 36619924] [PMCID: PMC9810801] [DOI: 10.3389/fneur.2022.1071237]
Abstract
Purpose: Age-related hearing loss (ARHL) is a major public health issue that affects elderly adults. However, the neural substrates of the cognitive deficits in patients with ARHL remain to be elucidated. This study aimed to explore the brain regions that show aberrant functional network strength related to cognitive impairment in patients with ARHL. Methods: A total of 27 patients with ARHL and 23 well-matched healthy controls were recruited for the present study. Each subject underwent pure-tone audiometry (PTA), MRI scanning, and cognitive evaluation. We analyzed functional network strength using degree centrality (DC) and sought to identify key nodes that contribute significantly. Subsequent functional connectivity (FC) was analyzed using significant DC nodes as seeds. Results: Compared with controls, patients with ARHL showed decreased DC in the bilateral supramarginal gyrus (SMG). In addition, patients with ARHL showed enhanced DC in the left fusiform gyrus (FG) and right parahippocampal gyrus (PHG). The bilateral SMGs were then used as seeds for FC analysis. With the seed set at the left SMG, patients with ARHL showed decreased connectivity with the right superior temporal gyrus (STG). Moreover, the right SMG showed reduced connectivity with the right middle temporal gyrus (MTG) and increased connectivity with the left middle frontal gyrus (MFG) in patients with ARHL. The reduced DC in the left and right SMGs correlated negatively with Trail Making Test Part B (TMT-B) scores (r = -0.596, p = 0.002 and r = -0.503, p = 0.012, respectively). Conclusion: These findings enrich our understanding of the neural mechanisms underlying cognitive impairment associated with ARHL and may serve as a potential brain network biomarker for investigating and predicting cognitive difficulties.
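Degree centrality as used here counts each region's supra-threshold functional connections; a minimal sketch (the correlation threshold and binarization rule below are illustrative assumptions, not the study's exact pipeline):

```python
import numpy as np

def degree_centrality(timeseries, r_thresh=0.25):
    """Region-wise degree centrality from a functional connectivity matrix.

    timeseries: (n_regions, n_timepoints) BOLD signals. The DC of a region is
    the number of other regions with which it correlates above r_thresh.
    """
    fc = np.corrcoef(timeseries)      # pairwise Pearson correlations
    np.fill_diagonal(fc, 0.0)         # ignore self-connections
    return (fc > r_thresh).sum(axis=1)
```

Regions with group differences in DC (here, the bilateral SMG) would then serve as seeds for follow-up seed-based FC maps.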
Affiliation(s)
- Shaoyun Zhu
- Department of Ultrasound, Nanjing Pukou Central Hospital, Pukou Branch Hospital of Jiangsu Province Hospital, Nanjing, China
- Jiajie Song
- Department of Radiology, Nanjing Pukou Central Hospital, Pukou Branch Hospital of Jiangsu Province Hospital, Nanjing, China
- Wenqing Xia
- Department of Endocrinology, Nanjing First Hospital, Nanjing Medical University, Nanjing, China (corresponding author)
- Yuan Xue
- Department of Otolaryngology, Nanjing Pukou Central Hospital, Pukou Branch Hospital of Jiangsu Province Hospital, Nanjing, China (corresponding author)
10.
Xu XM, Liu Y, Feng Y, Xu JJ, Gao J, Salvi R, Wu Y, Yin X, Chen YC. Degree centrality and functional connections in presbycusis with and without cognitive impairments. Brain Imaging Behav 2022; 16:2725-2734. [DOI: 10.1007/s11682-022-00734-6]
11.
Price CN, Bidelman GM. Musical experience partially counteracts temporal speech processing deficits in putative mild cognitive impairment. Ann N Y Acad Sci 2022; 1516:114-122. [PMID: 35762658] [PMCID: PMC9588638] [DOI: 10.1111/nyas.14853]
Abstract
Mild cognitive impairment (MCI) commonly results in more rapid cognitive and behavioral declines than typical aging. Individuals with MCI can exhibit impaired receptive speech abilities that may reflect neurophysiological changes in auditory-sensory processing prior to usual cognitive deficits. Benefits from current interventions targeting communication difficulties in MCI are limited. Yet, neuroplasticity associated with musical experience has been implicated in improving neural representations of speech and offsetting age-related declines in perception. Here, we asked whether these experience-dependent effects of musical experience might extend to aberrant aging and offer some degree of cognitive protection against MCI. During a vowel categorization task, we recorded single-channel electroencephalograms (EEGs) in older adults with putative MCI to evaluate speech encoding across subcortical and cortical levels of the auditory system. Critically, listeners varied in their duration of formal musical experience (0-21 years). Musical experience sharpened temporal precision in auditory cortical responses, suggesting that musical experience produces more efficient processing of acoustic features by counteracting age-related neural delays. Additionally, robustness of brainstem responses predicted the severity of cognitive decline, suggesting that early speech representations are sensitive to preclinical stages of cognitive impairment. Our results extend prior studies by demonstrating positive benefits of musical experience in older adults with emergent cognitive impairments.
Affiliation(s)
- Caitlin N. Price
- Department of Audiology & Speech Pathology, University of Arkansas for Medical Sciences, Little Rock, Arkansas, USA
- Gavin M. Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University, Bloomington, Indiana, USA
12.
Verschueren E, Gillis M, Decruy L, Vanthornhout J, Francart T. Speech Understanding Oppositely Affects Acoustic and Linguistic Neural Tracking in a Speech Rate Manipulation Paradigm. J Neurosci 2022; 42:7442-7453. [PMID: 36041851] [PMCID: PMC9525161] [DOI: 10.1523/jneurosci.0259-22.2022]
Abstract
When listening to continuous speech, the human brain can track features of the presented speech signal. It has been shown that neural tracking of acoustic features is a prerequisite for speech understanding and can predict speech understanding in controlled circumstances. However, the brain also tracks linguistic features of speech, which may be more directly related to speech understanding. We investigated acoustic and linguistic speech processing as a function of varying speech understanding by manipulating the speech rate. In this paradigm, acoustic and linguistic speech processing are affected simultaneously but in opposite directions: when the speech rate increases, more acoustic information per second is present, whereas tracking linguistic information becomes more challenging as speech grows less intelligible at higher rates. We measured the EEG of 18 participants (4 male) who listened to speech at various speech rates. As expected and confirmed by the behavioral results, speech understanding decreased with increasing speech rate. Accordingly, linguistic neural tracking decreased with increasing speech rate, but acoustic neural tracking increased. This indicates that neural tracking of linguistic representations can capture the gradual effect of decreasing speech understanding. In addition, increased acoustic neural tracking does not necessarily imply better speech understanding. This suggests that, although more challenging to measure because of the low signal-to-noise ratio, linguistic neural tracking may be a more direct predictor of speech understanding. SIGNIFICANCE STATEMENT An increasingly popular method to investigate neural speech processing is to measure neural tracking. Although much research has been done on how the brain tracks acoustic speech features, linguistic speech features have received less attention. In this study, we disentangled the acoustic and linguistic characteristics of neural speech tracking by manipulating the speech rate. A proper way of objectively measuring auditory and language processing paves the way toward clinical applications: an objective measure of speech understanding would allow evaluation without behavioral testing, making it possible to assess hearing loss and adjust hearing aids based on brain responses. Such a measure would benefit populations from whom behavioral measures are difficult to obtain, such as young children or people with cognitive impairments.
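The tracking measures discussed above are typically estimated with linear forward or backward models. As a rough, dependency-free illustration on synthetic data (not the authors' pipeline; all signals and parameters here are invented), acoustic neural tracking can be approximated as the peak lagged correlation between an EEG channel and the stimulus envelope:

```python
import numpy as np

def tracking_score(eeg, envelope, max_lag=50):
    """Peak Pearson correlation between one EEG channel and the speech
    envelope over a range of positive lags (EEG lagging the stimulus).
    A crude stand-in for envelope-tracking measures; real pipelines fit
    temporal response functions over many channels instead."""
    best = -1.0
    for lag in range(max_lag + 1):
        n = len(envelope) - lag
        r = np.corrcoef(envelope[:n], eeg[lag:lag + n])[0, 1]
        best = max(best, r)
    return best

# Synthetic check: "EEG" that follows the envelope at a 10-sample delay.
rng = np.random.default_rng(0)
env = rng.standard_normal(2000)
eeg = np.roll(env, 10) + 0.5 * rng.standard_normal(2000)
```

Here `tracking_score(eeg, env)` is high because the response is a delayed, noisy copy of the envelope; for an unrelated signal it hovers near zero. Real studies additionally evaluate on held-out data to avoid overfitting.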
Collapse
Affiliation(s)
- Eline Verschueren
- Research Group Experimental Oto-rhino-laryngology, Department of Neurosciences, KU Leuven-University of Leuven, Leuven, 3000, Belgium
| | - Marlies Gillis
- Research Group Experimental Oto-rhino-laryngology, Department of Neurosciences, KU Leuven-University of Leuven, Leuven, 3000, Belgium
| | - Lien Decruy
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742
| | - Jonas Vanthornhout
- Research Group Experimental Oto-rhino-laryngology, Department of Neurosciences, KU Leuven-University of Leuven, Leuven, 3000, Belgium
| | - Tom Francart
- Research Group Experimental Oto-rhino-laryngology, Department of Neurosciences, KU Leuven-University of Leuven, Leuven, 3000, Belgium
| |
Collapse
|
13
|
Lai J, Price CN, Bidelman GM. Brainstem speech encoding is dynamically shaped online by fluctuations in cortical α state. Neuroimage 2022; 263:119627. [PMID: 36122686 PMCID: PMC10017375 DOI: 10.1016/j.neuroimage.2022.119627] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/15/2022] [Accepted: 09/12/2022] [Indexed: 11/25/2022] Open
Abstract
Experimental evidence in animals demonstrates that cortical neurons innervate the subcortex bilaterally to tune brainstem auditory coding. Yet, the role of the descending (corticofugal) auditory system in modulating earlier sound processing in humans during speech perception remains unclear. Here, we measured EEG activity as listeners performed speech identification tasks in different noise backgrounds designed to tax perceptual and attentional processing. We hypothesized that brainstem speech coding might be tied to attention and arousal states (indexed by cortical α power) that actively modulate the interplay between brainstem and cortical signal processing. When speech-evoked brainstem frequency-following responses (FFRs) were categorized according to cortical α states, we found low-α FFRs in noise were weaker, correlated positively with behavioral response times, and were more "decodable" via neural classifiers. Our data provide new evidence for online corticofugal interplay in humans and establish that brainstem sensory representations are continuously yoked to (i.e., modulated by) the ebb and flow of cortical states to dynamically update perceptual processing.
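The α-state categorization described above can be sketched as a median split of epochs by alpha-band power. The following is an illustrative reconstruction with synthetic data and assumed parameters (sampling rate, band edges), not the study's actual analysis code:

```python
import numpy as np

def alpha_power(trial, fs):
    """Alpha-band (8-12 Hz) power of one EEG epoch from a plain periodogram."""
    freqs = np.fft.rfftfreq(trial.size, 1.0 / fs)
    psd = np.abs(np.fft.rfft(trial)) ** 2
    return psd[(freqs >= 8.0) & (freqs <= 12.0)].sum()

def split_by_alpha_state(trials, fs):
    """Median-split epochs into low- and high-alpha cortical states,
    analogous to sorting FFR trials by concurrent cortical alpha."""
    power = np.array([alpha_power(tr, fs) for tr in trials])
    return trials[power < np.median(power)], trials[power >= np.median(power)]

# Demo: 10 noise-only epochs vs. 10 epochs carrying a strong 10 Hz rhythm.
fs = 500.0
t = np.arange(500) / fs
rng = np.random.default_rng(0)
trials = np.vstack([
    rng.standard_normal((10, 500)),                                       # low alpha
    5.0 * np.sin(2 * np.pi * 10.0 * t) + rng.standard_normal((10, 500)),  # high alpha
])
low, high = split_by_alpha_state(trials, fs)
```

The median split cleanly separates the two synthetic states; in practice, alpha power would be computed from pre-stimulus cortical activity and the FFRs averaged within each state.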
Collapse
Affiliation(s)
- Jesyin Lai
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Diagnostic Imaging Department, St. Jude Children's Research Hospital, Memphis, TN, USA.
| | - Caitlin N Price
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Department of Audiology and Speech Pathology, University of Arkansas for Medical Sciences, Little Rock, AR, USA
| | - Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, USA; Department of Speech, Language and Hearing Sciences, Indiana University, 2631 East Discovery Parkway, Bloomington, IN 47408, USA; Program in Neuroscience, Indiana University, 1101 E 10th St, Bloomington, IN 47405, USA.
| |
Collapse
|
14
|
Kurthen I, Christen A, Meyer M, Giroud N. Older adults' neural tracking of interrupted speech is a function of task difficulty. Neuroimage 2022; 262:119580. [PMID: 35995377 DOI: 10.1016/j.neuroimage.2022.119580] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/16/2022] [Revised: 08/14/2022] [Accepted: 08/18/2022] [Indexed: 11/16/2022] Open
Abstract
Age-related hearing loss is a highly prevalent condition, which manifests at both the auditory periphery and the brain. It leads to degraded auditory input, which needs to be repaired in order to achieve understanding of spoken language. It is still unclear how older adults with this condition draw on their neural resources to optimally process speech. By presenting interrupted speech to 26 healthy older adults with normal-for-age audiograms, this study investigated neural tracking of degraded auditory input. The electroencephalograms of the participants were recorded while they first listened to and then verbally repeated sentences interrupted by silence at varying interruption rates. Speech tracking was measured by inter-trial phase coherence in response to the stimuli. At interruption rates corresponding to the theta frequency band, speech tracking was highly specific to the interruption rate and positively related to the understanding of interrupted speech. These results suggest that older adults' brain activity adapts by tracking stimulus characteristics, and that this tracking aids the processing of an incomplete auditory stimulus. Further investigation of speech tracking as a candidate training mechanism to alleviate age-related hearing loss is thus encouraged.
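Inter-trial phase coherence, the speech-tracking measure used here, has a compact definition: the length of the mean unit-length phase vector across trials at a given frequency (0 means random phase from trial to trial, 1 means perfect phase locking). A minimal numpy sketch on synthetic trials (invented parameters, not the study's pipeline):

```python
import numpy as np

def itpc(trials, freq, fs):
    """Inter-trial phase coherence at one frequency: project each trial
    onto a complex sinusoid, normalize to unit phasors, and take the
    magnitude of their mean across trials."""
    t = np.arange(trials.shape[1]) / fs
    phasors = trials @ np.exp(-2j * np.pi * freq * t)  # one complex value per trial
    return np.abs(np.mean(phasors / np.abs(phasors)))

# Demo: trials phase-locked to a 10 Hz "interruption rate" vs. random phase.
fs, n = 500.0, 500
t = np.arange(n) / fs
rng = np.random.default_rng(0)
locked = np.sin(2 * np.pi * 10 * t) + 0.3 * rng.standard_normal((100, n))
phases = rng.uniform(0, 2 * np.pi, size=(100, 1))
jittered = np.sin(2 * np.pi * 10 * t + phases) + 0.3 * rng.standard_normal((100, n))
```

`itpc(locked, 10.0, fs)` approaches 1 while `itpc(jittered, 10.0, fs)` stays near zero, mirroring how tracking at the stimulus rate separates from background activity.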
Collapse
Affiliation(s)
- Ira Kurthen
- Department of Psychology, University of Zurich, Binzmuehlestrasse 14/21, Zurich 8050, Switzerland.
| | - Allison Christen
- Department of Psychology, University of Zurich, Binzmuehlestrasse 14/21, Zurich 8050, Switzerland
| | - Martin Meyer
- Department of Comparative Language Science, University of Zurich, Switzerland; Center for the Interdisciplinary Study of Language Evolution, University of Zurich, Switzerland; Cognitive Psychology Unit, University of Klagenfurt, Austria
| | - Nathalie Giroud
- Department of Computational Linguistics, Phonetics and Speech Sciences, University of Zurich, Switzerland; Competence Center for Language & Medicine, University of Zurich, Switzerland; Center for Neuroscience Zurich, University of Zurich, Switzerland
| |
Collapse
|
15
|
Bidelman GM, Chow R, Noly-Gandon A, Ryan JD, Bell KL, Rizzi R, Alain C. Transcranial Direct Current Stimulation Combined With Listening to Preferred Music Alters Cortical Speech Processing in Older Adults. Front Neurosci 2022; 16:884130. [PMID: 35873829 PMCID: PMC9298650 DOI: 10.3389/fnins.2022.884130] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2022] [Accepted: 06/17/2022] [Indexed: 11/13/2022] Open
Abstract
Emerging evidence suggests transcranial direct current stimulation (tDCS) can improve cognitive performance in older adults. Similarly, music listening may improve arousal and stimulate subsequent performance on memory-related tasks. We examined the synergistic effects of tDCS paired with music listening on auditory neurobehavioral measures to investigate causal evidence of short-term plasticity in speech processing among older adults. In a randomized sham-controlled crossover study, we measured how combined anodal tDCS over dorsolateral prefrontal cortex (DLPFC) paired with listening to autobiographically salient music alters neural speech processing in older adults compared to either music listening (sham stimulation) or tDCS alone. EEG assays included both frequency-following responses (FFRs) and auditory event-related potentials (ERPs) to trace neuromodulation-related changes at brainstem and cortical levels. Relative to music without tDCS (sham), we found tDCS alone (without music) modulates the early cortical neural encoding of speech in the time frame of ∼100-150 ms. Whereas tDCS by itself appeared largely to produce suppressive effects (i.e., reducing ERP amplitude), concurrent music with tDCS restored responses to music+sham levels. However, the interpretation of this effect is somewhat ambiguous, as the neural modulation could be attributable to a true effect of tDCS or to the presence or absence of music. Still, the combined benefit of tDCS+music (above tDCS alone) was correlated with listeners' education level, suggesting the benefit of neurostimulation paired with music might depend on listener demographics. tDCS-related changes in speech-FFRs were not observed with DLPFC stimulation. Improvements in working memory from pre- to post-session were also associated with better speech-in-noise listening skills. Our findings provide new causal evidence that combined tDCS+music, relative to tDCS alone, (i) modulates the early (100-150 ms) cortical encoding of speech and (ii) improves working memory, a cognitive skill which may indirectly bolster noise-degraded speech perception in older listeners.
Collapse
Affiliation(s)
- Gavin M. Bidelman
- Department of Speech, Language and Hearing Sciences, Indiana University Bloomington, Bloomington, IN, United States
- School of Communication Sciences and Disorders, The University of Memphis, Memphis, TN, United States
| | - Ricky Chow
- Rotman Research Institute, Baycrest Centre, Toronto, ON, Canada
| | | | - Jennifer D. Ryan
- Rotman Research Institute, Baycrest Centre, Toronto, ON, Canada
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Department of Psychiatry, University of Toronto, Toronto, ON, Canada
- Institute of Medical Science, University of Toronto, Toronto, ON, Canada
| | - Karen L. Bell
- Department of Audiology, San José State University, San Jose, CA, United States
| | - Rose Rizzi
- Department of Speech, Language and Hearing Sciences, Indiana University Bloomington, Bloomington, IN, United States
- School of Communication Sciences and Disorders, The University of Memphis, Memphis, TN, United States
| | - Claude Alain
- Rotman Research Institute, Baycrest Centre, Toronto, ON, Canada
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Institute of Medical Science, University of Toronto, Toronto, ON, Canada
- Music and Health Science Research Collaboratory, University of Toronto, Toronto, ON, Canada
| |
Collapse
|
16
|
Adkisson P, Fridman GY, Steinhardt CR. Difference in Network Effects of Pulsatile and Galvanic Stimulation. ANNUAL INTERNATIONAL CONFERENCE OF THE IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. IEEE ENGINEERING IN MEDICINE AND BIOLOGY SOCIETY. ANNUAL INTERNATIONAL CONFERENCE 2022; 2022:3093-3099. [PMID: 36086346 DOI: 10.1109/embc48229.2022.9871812] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/15/2023]
Abstract
Biphasic pulsatile stimulation is the present standard for neural prosthetic use, and it is used to probe the connectivity and functionality of the brain in brain mapping studies. While pulses have been shown to drive behavioral changes, such as biasing decision making, they have deficits. For example, cochlear implants restore hearing but lack the ability to restore pitch perception. Recent work shows that pulses produce artificial synchrony in networks of neurons and non-linear changes in firing rate with pulse amplitude. Studies also show galvanic stimulation (delivery of current for extended periods of time) produces more naturalistic behavioral responses than pulses. In this paper, we use a winner-take-all decision-making network model to investigate differences between pulsatile and galvanic stimulation at the single-neuron and network level while accurately modeling the effects of pulses on neurons for the first time. Results show pulses bias spike timing and make neurons more resistant to natural network inputs than galvanic stimulation at an equivalent current amplitude. Clinical Relevance: This establishes that pulsatile stimulation may disrupt natural spike timing and network-level interactions, while certain parameterizations of galvanic stimulation avoid these effects and can drive network firing more naturally.
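The winner-take-all dynamic at the heart of such decision-making models can be illustrated with a toy two-unit rate model with mutual inhibition. This is a deliberately simplified sketch, not the paper's spiking network; every parameter here is arbitrary:

```python
def winner_take_all(i1, i2, steps=2000, dt=0.001, tau=0.02, w_inh=2.0):
    """Two rate units with mutual inhibition (toy winner-take-all):
    each unit's drive is its external input minus the other unit's
    inhibition, rectified at zero; the stronger input wins and
    suppresses the other unit to silence."""
    r1 = r2 = 0.0
    for _ in range(steps):  # forward-Euler integration
        r1 += dt / tau * (-r1 + max(0.0, i1 - w_inh * r2))
        r2 += dt / tau * (-r2 + max(0.0, i2 - w_inh * r1))
    return r1, r2
```

With inputs 1.0 vs. 0.8, the first unit settles near its input level while the second is driven to zero; swapping the inputs swaps the winner. Stimulation effects would enter this model as perturbations to the inputs or to spike timing.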
Collapse
|
17
|
Gillis M, Decruy L, Vanthornhout J, Francart T. Hearing loss is associated with delayed neural responses to continuous speech. Eur J Neurosci 2022; 55:1671-1690. [PMID: 35263814 DOI: 10.1111/ejn.15644] [Citation(s) in RCA: 12] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/24/2021] [Revised: 02/21/2022] [Accepted: 02/23/2022] [Indexed: 11/28/2022]
Abstract
We investigated the impact of hearing loss on the neural processing of speech. Using a forward modeling approach, we compared the neural responses to continuous speech of 14 adults with sensorineural hearing loss with those of age-matched normal-hearing peers. Compared to their normal-hearing peers, hearing-impaired listeners had increased neural tracking and delayed neural responses to continuous speech in quiet. The latency also increased with the degree of hearing loss. As speech understanding decreased, neural tracking decreased in both populations; however, a significantly different trend was observed for the latency of the neural responses. For normal-hearing listeners, the latency increased with increasing background noise level. However, for hearing-impaired listeners, this increase was not observed. Our results support the idea that the neural response latency indicates the efficiency of neural speech processing: when more or different brain regions are involved in processing speech, communication pathways in the brain lengthen. These longer pathways hamper information integration among brain regions, which is reflected in longer processing times. Altogether, this suggests decreased neural speech processing efficiency in hearing-impaired listeners, as more time and more or different brain regions are required to process speech. Our results suggest that this reduction in neural speech processing efficiency occurs gradually as hearing deteriorates. From our results, it is apparent that sound amplification does not solve hearing loss. Even when listening to speech in silence at a comfortable loudness, hearing-impaired listeners process speech less efficiently.
Collapse
Affiliation(s)
- Marlies Gillis
- KU Leuven, Department of Neurosciences, ExpORL, Leuven, Belgium
| | - Lien Decruy
- Institute for Systems Research, University of Maryland, College Park, MD, USA
| | | | - Tom Francart
- KU Leuven, Department of Neurosciences, ExpORL, Leuven, Belgium
| |
Collapse
|
18
|
Lesicko AM, Geffen MN. Diverse functions of the auditory cortico-collicular pathway. Hear Res 2022; 425:108488. [DOI: 10.1016/j.heares.2022.108488] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/19/2021] [Revised: 02/27/2022] [Accepted: 03/19/2022] [Indexed: 01/23/2023]
|
19
|
Cheng FY, Xu C, Gold L, Smith S. Rapid Enhancement of Subcortical Neural Responses to Sine-Wave Speech. Front Neurosci 2022; 15:747303. [PMID: 34987356 PMCID: PMC8721138 DOI: 10.3389/fnins.2021.747303] [Citation(s) in RCA: 10] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/26/2021] [Accepted: 12/02/2021] [Indexed: 01/15/2023] Open
Abstract
The efferent auditory nervous system may be a potent force in shaping how the brain responds to behaviorally significant sounds. Previous human experiments using the frequency following response (FFR) have shown efferent-induced modulation of subcortical auditory function online and over short- and long-term time scales; however, a contemporary understanding of FFR generation presents new questions about whether previous effects were constrained solely to the auditory subcortex. The present experiment used sine-wave speech (SWS), an acoustically-sparse stimulus in which dynamic pure tones represent speech formant contours, to evoke FFRSWS. Due to the higher stimulus frequencies used in SWS, this approach biased neural responses toward brainstem generators and allowed for three stimuli (/bɔ/, /bu/, and /bo/) to be used to evoke FFRSWS before and after listeners in a training group were made aware that they were hearing a degraded speech stimulus. All SWS stimuli were rapidly perceived as speech when presented with a SWS carrier phrase, and average token identification reached ceiling performance during a perceptual training phase. Compared to a control group which remained naïve throughout the experiment, training group FFRSWS amplitudes were enhanced post-training for each stimulus. Further, linear support vector machine classification of training group FFRSWS significantly improved post-training compared to the control group, indicating that training-induced neural enhancements were sufficient to bolster machine learning classification accuracy. These results suggest that the efferent auditory system may rapidly modulate auditory brainstem representation of sounds depending on their context and perception as non-speech or speech.
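The study decodes FFRs with a linear support vector machine. As a dependency-free sketch of the same decoding logic, a leave-one-out nearest-centroid classifier (a simpler linear classifier, swapped in here purely for illustration; all data below are synthetic) shows how classification accuracy indexes the separability of neural responses:

```python
import numpy as np

def loo_accuracy(X, y):
    """Leave-one-out accuracy of a nearest-centroid classifier, a
    dependency-free stand-in for the linear SVM used in the study:
    higher accuracy means trial-level responses are more separable."""
    labels = np.unique(y)
    correct = 0
    for i in range(len(y)):
        mask = np.arange(len(y)) != i  # hold out trial i
        cents = {c: X[mask & (y == c)].mean(axis=0) for c in labels}
        pred = min(cents, key=lambda c: np.linalg.norm(X[i] - cents[c]))
        correct += int(pred == y[i])
    return correct / len(y)

# Synthetic "pre-training" vs. "post-training" responses for two tokens:
# post-training classes are more separated, so decoding accuracy rises.
rng = np.random.default_rng(0)
pre = np.vstack([rng.normal(0, 1, (20, 8)), rng.normal(0.3, 1, (20, 8))])
post = np.vstack([rng.normal(0, 1, (20, 8)), rng.normal(2.0, 1, (20, 8))])
y = np.array([0] * 20 + [1] * 20)
```

Training-induced enhancement would show up exactly this way: the same decoder achieves higher accuracy on post-training responses because the class-conditional distributions pull apart.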
Collapse
Affiliation(s)
- Fan-Yin Cheng
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, TX, United States
| | - Can Xu
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, TX, United States
| | - Lisa Gold
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, TX, United States
| | - Spencer Smith
- Department of Speech, Language, and Hearing Sciences, University of Texas at Austin, Austin, TX, United States
| |
Collapse
|
20
|
Kommajosyula SP, Bartlett EL, Cai R, Ling L, Caspary DM. Corticothalamic projections deliver enhanced responses to medial geniculate body as a function of the temporal reliability of the stimulus. J Physiol 2021; 599:5465-5484. [PMID: 34783016 PMCID: PMC10630908 DOI: 10.1113/jp282321] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/01/2021] [Accepted: 11/11/2021] [Indexed: 01/12/2023] Open
Abstract
Ageing and challenging signal-in-noise conditions are known to engage the use of cortical resources to help maintain speech understanding. Extensive corticothalamic projections are thought to provide attentional, mnemonic and cognitive-related inputs in support of sensory inferior colliculus (IC) inputs to the medial geniculate body (MGB). Here we show that a decrease in modulation depth, a temporally less distinct periodic acoustic signal, leads to a jittered ascending temporal code, changing MGB unit responses from adapting responses to responses showing repetition enhancement, posited to aid identification of important communication and environmental sounds. Young-adult male Fischer Brown Norway rats, injected with the inhibitory opsin archaerhodopsin T (ArchT) into the primary auditory cortex (A1), were subsequently studied using optetrodes to record single-units in MGB. Decreasing the modulation depth of acoustic stimuli significantly increased repetition enhancement. Repetition enhancement was blocked by optical inactivation of corticothalamic terminals in MGB. These data support a role for corticothalamic projections in repetition enhancement, implying that predictive anticipation could be used to improve neural representation of weakly modulated sounds. KEY POINTS: In response to a less temporally distinct repeating sound with low modulation depth, medial geniculate body (MGB) single units show a switch from adaptation towards repetition enhancement. Repetition enhancement was reversed by blockade of MGB inputs from the auditory cortex. Collectively, these data argue that diminished acoustic temporal cues such as weak modulation engage cortical processes to enhance coding of those cues in auditory thalamus.
Collapse
Affiliation(s)
- Srinivasa P Kommajosyula
- Department of Pharmacology, Southern Illinois University School of Medicine, Springfield, IL, USA
| | - Edward L Bartlett
- Department of Biological Sciences and the Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
| | - Rui Cai
- Department of Pharmacology, Southern Illinois University School of Medicine, Springfield, IL, USA
| | - Lynne Ling
- Department of Pharmacology, Southern Illinois University School of Medicine, Springfield, IL, USA
| | - Donald M Caspary
- Department of Pharmacology, Southern Illinois University School of Medicine, Springfield, IL, USA
| |
Collapse
|
21
|
Multiple Cases of Auditory Neuropathy Illuminate the Importance of Subcortical Neural Synchrony for Speech-in-noise Recognition and the Frequency-following Response. Ear Hear 2021; 43:605-619. [PMID: 34619687 DOI: 10.1097/aud.0000000000001122] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES The role of subcortical synchrony in speech-in-noise (SIN) recognition and the frequency-following response (FFR) was examined in multiple listeners with auditory neuropathy. Although an absent FFR has been documented in one listener with idiopathic neuropathy who has severe difficulty recognizing SIN, several etiologies cause the neuropathy phenotype. Consequently, it is necessary to replicate absent FFRs and concomitant SIN difficulties in patients with multiple sources and clinical presentations of neuropathy to elucidate fully the importance of subcortical neural synchrony for the FFR and SIN recognition. DESIGN Case series. Three children with auditory neuropathy (two males with neuropathy attributed to hyperbilirubinemia, one female with a rare missense mutation in the OPA1 gene) were compared to age-matched controls with normal hearing (52 for electrophysiology and 48 for speech recognition testing). Tests included standard audiological evaluations, FFRs, and sentence recognition in noise. The three children with neuropathy had a range of clinical presentations, including moderate sensorineural hearing loss, use of a cochlear implant, and a rapid progressive hearing loss. RESULTS Children with neuropathy generally had good speech recognition in quiet but substantial difficulties in noise. These SIN difficulties were somewhat mitigated by a clear speaking style and presenting words in a high semantic context. In the children with neuropathy, FFRs were absent from all tested stimuli. In contrast, age-matched controls had reliable FFRs. CONCLUSION Subcortical synchrony is subject to multiple forms of disruption but results in a consistent phenotype of an absent FFR and substantial difficulties recognizing SIN. These results support the hypothesis that subcortical synchrony is necessary for the FFR. Thus, in healthy listeners, the FFR may reflect subcortical neural processes important for SIN recognition.
Collapse
|
22
|
Price CN, Bidelman GM. Attention reinforces human corticofugal system to aid speech perception in noise. Neuroimage 2021; 235:118014. [PMID: 33794356 PMCID: PMC8274701 DOI: 10.1016/j.neuroimage.2021.118014] [Citation(s) in RCA: 23] [Impact Index Per Article: 7.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/24/2020] [Revised: 03/09/2021] [Accepted: 03/25/2021] [Indexed: 12/13/2022] Open
Abstract
Perceiving speech-in-noise (SIN) demands precise neural coding between brainstem and cortical levels of the hearing system. Attentional processes can then select and prioritize task-relevant cues over competing background noise for successful speech perception. In animal models, brainstem-cortical interplay is achieved via descending corticofugal projections from cortex that shape midbrain responses to behaviorally-relevant sounds. Attentional engagement of corticofugal feedback may assist SIN understanding but has never been confirmed and remains highly controversial in humans. To resolve these issues, we recorded source-level, anatomically constrained brainstem frequency-following responses (FFRs) and cortical event-related potentials (ERPs) to speech via high-density EEG while listeners performed rapid SIN identification tasks. We varied attention with active vs. passive listening scenarios whereas task difficulty was manipulated with additive noise interference. Active listening (but not arousal-control tasks) exaggerated both ERPs and FFRs, confirming attentional gain extends to lower subcortical levels of speech processing. We used functional connectivity to measure the directed strength of coupling between levels and characterize "bottom-up" vs. "top-down" (corticofugal) signaling within the auditory brainstem-cortical pathway. While attention strengthened connectivity bidirectionally, corticofugal transmission disengaged under passive (but not active) SIN listening. Our findings (i) show attention enhances the brain's transcription of speech even prior to cortex and (ii) establish a direct role of the human corticofugal feedback system as an aid to cocktail party speech perception.
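Directed brainstem-cortical coupling of the kind measured above is commonly quantified with Granger-style metrics: signal x "influences" y if x's past reduces the error of predicting y beyond what y's own past achieves. A minimal lagged-regression sketch on synthetic signals (not the study's source-level connectivity pipeline; lag order and signals are invented):

```python
import numpy as np

def granger_gain(x, y, order=2):
    """Directed influence x -> y, Granger-style: log ratio of the residual
    variance of an autoregressive model of y using only y's past vs. using
    both y's and x's past. Positive values mean x's history is predictive."""
    n = len(y)
    Y = y[order:]
    own = np.column_stack([y[order - k:n - k] for k in range(1, order + 1)])
    both = np.column_stack(
        [own] + [x[order - k:n - k] for k in range(1, order + 1)])
    def resid_var(A):
        beta, *_ = np.linalg.lstsq(A, Y, rcond=None)
        return np.var(Y - A @ beta)
    return float(np.log(resid_var(own) / resid_var(both)))

# Demo: y is driven by x's past, not vice versa ("bottom-up" coupling).
rng = np.random.default_rng(0)
x = rng.standard_normal(2000)
y = 0.8 * np.concatenate([[0.0], x[:-1]]) + 0.1 * rng.standard_normal(2000)
```

Here `granger_gain(x, y)` is large while `granger_gain(y, x)` sits near zero, the asymmetry that distinguishes "bottom-up" from "top-down" (corticofugal) signaling in such analyses.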
Collapse
Affiliation(s)
- Caitlin N Price
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, TN 38152, USA.
| | - Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, TN 38152, USA; Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, USA.
| |
Collapse
|
23
|
Defining the Role of Attention in Hierarchical Auditory Processing. Audiol Res 2021; 11:112-128. [PMID: 33805600 PMCID: PMC8006147 DOI: 10.3390/audiolres11010012] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/05/2021] [Revised: 03/07/2021] [Accepted: 03/10/2021] [Indexed: 01/09/2023] Open
Abstract
Communication in noise is a complex process requiring efficient neural encoding throughout the entire auditory pathway as well as contributions from higher-order cognitive processes (i.e., attention) to extract speech cues for perception. Thus, identifying effective clinical interventions for individuals with speech-in-noise deficits relies on the disentanglement of bottom-up (sensory) and top-down (cognitive) factors to appropriately determine the area of deficit; yet, how attention may interact with early encoding of sensory inputs remains unclear. For decades, attentional theorists have attempted to address this question with cleverly designed behavioral studies, but the neural processes and interactions underlying attention's role in speech perception remain unresolved. While anatomical and electrophysiological studies have investigated the neurological structures contributing to attentional processes and revealed relevant brain-behavior relationships, recent electrophysiological techniques (i.e., simultaneous recording of brainstem and cortical responses) may provide novel insight regarding the relationship between early sensory processing and top-down attentional influences. In this article, we review relevant theories that guide our present understanding of attentional processes, discuss current electrophysiological evidence of attentional involvement in auditory processing across subcortical and cortical levels, and propose areas for future study that will inform the development of more targeted and effective clinical interventions for individuals with speech-in-noise deficits.
Collapse
|
24
|
Qian M, Wang Q, Yang L, Wang Z, Hu D, Li B, Li Y, Wu H, Huang Z. The effects of aging on peripheral and central auditory function in adults with normal hearing. Am J Transl Res 2021; 13:549-564. [PMID: 33594309 PMCID: PMC7868840] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2020] [Accepted: 12/15/2020] [Indexed: 06/12/2023]
Abstract
This study was designed to investigate the effects of the aging process on peripheral and central auditory functions in adults with normal hearing. In this study, 149 participants with normal hearing were divided into four groups: aged 20-29, 30-39, 40-49 and 50-59 years for statistical purposes. Electrocochleography (EcochG), transient evoked otoacoustic emissions (TEOAE), the Mandarin Hearing in Noise Test (MHINT) and the Gap Detection Test (GDT) were used. Our study found: (1) MHINT is significantly associated with aging (left ear R²=0.29, right ear R²=0.35). (2) TEOAE amplitude, TEOAE contralateral acoustic stimulation (CS) amplitude, EcochG action potential (AP), EcochG AP latency, EcochG summating potential (SP) and GDT progressively declined with age. (3) The EcochG SP/AP ratio showed no statistically significant difference among age groups. (4) The peripheral auditory function of the right ear declines more slowly than that of the left ear. (5) Hypofunction of the central auditory system accelerates after age 40. The results demonstrate: (1) The age-related decline in the ability to recognize speech in a noisy environment may be the most sensitive indicator of auditory function. (2) The decline of central auditory function is independent of peripheral auditory function, according to the auditory characteristics of the right ear. (3) Auditory function needs to be assessed individually to allow early prevention before age 40.
Collapse
Affiliation(s)
- Minfei Qian
- Department of Otolaryngology-Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Hearing and Speech Center of Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai 200125, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai 200125, China
- Qixuan Wang
- Department of Otolaryngology-Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Hearing and Speech Center of Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai 200125, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai 200125, China
- Lu Yang
- Department of Otolaryngology-Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Hearing and Speech Center of Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai 200125, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai 200125, China
- Zhongying Wang
- Department of Otolaryngology-Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Hearing and Speech Center of Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai 200125, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai 200125, China
- Difei Hu
- Department of Otolaryngology-Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Hearing and Speech Center of Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai 200125, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai 200125, China
- Bei Li
- Department of Otolaryngology-Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Hearing and Speech Center of Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai 200125, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai 200125, China
- Yun Li
- Department of Otolaryngology-Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Hearing and Speech Center of Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai 200125, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai 200125, China
- Hao Wu
- Department of Otolaryngology-Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Hearing and Speech Center of Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai 200125, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai 200125, China
- Zhiwu Huang
- Department of Otolaryngology-Head and Neck Surgery, Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Hearing and Speech Center of Shanghai Ninth People’s Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai 200011, China
- Ear Institute, Shanghai Jiao Tong University School of Medicine, Shanghai 200125, China
- Shanghai Key Laboratory of Translational Medicine on Ear and Nose Diseases, Shanghai 200125, China
Collapse
|
25
|
Subcortical rather than cortical sources of the frequency-following response (FFR) relate to speech-in-noise perception in normal-hearing listeners. Neurosci Lett 2021; 746:135664. [PMID: 33497718 DOI: 10.1016/j.neulet.2021.135664] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/17/2020] [Revised: 12/22/2020] [Accepted: 01/13/2021] [Indexed: 12/27/2022]
Abstract
Scalp-recorded frequency-following responses (FFRs) reflect a mixture of phase-locked activity across the auditory pathway. FFRs have been widely used as a neural barometer of complex listening skills, especially speech-in-noise (SIN) perception. Applying individually optimized source reconstruction to speech-FFRs recorded via EEG (FFREEG), we assessed the relative contributions of subcortical [auditory nerve (AN), brainstem/midbrain (BS)] and cortical [bilateral primary auditory cortex, PAC] source generators with the aim of identifying which source(s) drive the brain-behavior relation between FFRs and SIN listening skills. We found FFR strength declined precipitously from AN to PAC, consistent with diminishing phase-locking along the ascending auditory neuraxis. FFRs to the speech fundamental (F0) were robust to noise across sources, but were largest in subcortical sources (BS > AN > PAC). PAC FFRs were only weakly observed above the noise floor and only at the low pitch of speech (F0≈100 Hz). Brain-behavior regressions revealed (i) AN and BS FFRs were sufficient to describe listeners' QuickSIN scores and (ii) contrary to neuromagnetic (MEG) FFRs, neither left nor right PAC FFREEG related to SIN performance. Our findings suggest subcortical sources not only dominate the electrical FFR but also drive the link between speech-FFRs and SIN processing observed in previous EEG studies of normal-hearing adults.
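The FFR strength at the speech fundamental that this abstract compares across sources is conventionally quantified as spectral magnitude at (or near) F0. A hedged sketch of that generic computation on a synthetic signal (the sampling rate, tolerance window, and test signal are illustrative assumptions, not the study's parameters):

```python
import numpy as np

def ffr_f0_amplitude(signal, fs, f0, tol=5.0):
    """Return the peak FFT magnitude (amplitude-normalized) within
    +/- tol Hz of f0 for a single-channel response."""
    spectrum = np.abs(np.fft.rfft(signal)) / len(signal)
    freqs = np.fft.rfftfreq(len(signal), d=1.0 / fs)
    band = (freqs >= f0 - tol) & (freqs <= f0 + tol)
    return spectrum[band].max()

# Synthetic "response": a 100 Hz phase-locked component in noise,
# mimicking the low pitch of speech (F0 ~ 100 Hz) mentioned above.
fs, dur, f0 = 2000.0, 1.0, 100.0
t = np.arange(0, dur, 1.0 / fs)
rng = np.random.default_rng(1)
resp = 0.5 * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 0.2, t.size)
amp_at_f0 = ffr_f0_amplitude(resp, fs, f0)
amp_off = ffr_f0_amplitude(resp, fs, 317.0)  # off-frequency noise floor
```

Comparing the F0-band peak against an off-frequency estimate of the noise floor, as in the last two lines, is one common way to decide whether a source's FFR is "observed above the noise floor".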
Collapse
|
26
|
Mahmud MS, Ahmed F, Al-Fahad R, Moinuddin KA, Yeasin M, Alain C, Bidelman GM. Decoding Hearing-Related Changes in Older Adults' Spatiotemporal Neural Processing of Speech Using Machine Learning. Front Neurosci 2020; 14:748. [PMID: 32765215 PMCID: PMC7378401 DOI: 10.3389/fnins.2020.00748] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/18/2019] [Accepted: 06/25/2020] [Indexed: 12/25/2022] Open
Abstract
Speech perception in noisy environments depends on complex interactions between sensory and cognitive systems. In older adults, such interactions may be affected, especially in those individuals who have more severe age-related hearing loss. Using a data-driven approach, we assessed the temporal (when in time) and spatial (where in the brain) characteristics of cortical speech-evoked responses that distinguish older adults with or without mild hearing loss. We performed source analyses to estimate cortical surface signals from the EEG recordings during a phoneme discrimination task conducted under clear and noise-degraded conditions. We computed source-level ERPs (i.e., mean activation within each ROI) from each of the 68 ROIs of the Desikan-Killiany (DK) atlas, averaged over 100 randomly chosen trials without replacement, to form feature vectors. We adopted a multivariate feature-selection method called stability selection to choose features that are consistent over a range of model parameters, and used a parameter-optimized support vector machine (SVM) classifier to investigate the time course and brain regions that segregate groups and speech clarity. For clear speech perception, whole-brain data revealed a classification accuracy of 81.50% [area under the curve (AUC) 80.73%; F1-score 82.00%], distinguishing groups within ∼60 ms after speech onset (i.e., as early as the P1 wave). We observed lower accuracy of 78.12% [AUC 77.64%; F1-score 78.00%] and delayed classification performance when speech was embedded in noise, with group segregation at 80 ms. Separate analyses of left (LH) and right hemisphere (RH) regions showed that LH speech activity was better at distinguishing hearing groups than activity measured in the RH.
Moreover, stability selection analysis identified 12 brain regions (among 1428 total spatiotemporal features from 68 regions) where source activity segregated groups with >80% accuracy (clear speech); whereas 16 regions were critical for noise-degraded speech to achieve a comparable level of group segregation (78.7% accuracy). Our results identify critical time-courses and brain regions that distinguish mild hearing loss from normal hearing in older adults and confirm a larger number of active areas, particularly in RH, when processing noise-degraded speech information.
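The decoding pipeline described above feeds trial-averaged source features to a parameter-optimized SVM. As a dependency-free sketch of the classification step, the following trains a minimal linear SVM via Pegasos-style subgradient descent on synthetic two-group features; the study itself used grid-searched SVMs on real source ERPs, so the training scheme, group separation, and all names here are illustrative assumptions:

```python
import numpy as np

def train_linear_svm(X, y, lam=0.01, epochs=200, seed=0):
    """Minimal linear SVM trained with Pegasos-style subgradient
    descent on the hinge loss. y must be in {-1, +1}; returns the
    weight vector w and bias b."""
    rng = np.random.default_rng(seed)
    n, d = X.shape
    w, b = np.zeros(d), 0.0
    t = 0
    for _ in range(epochs):
        for i in rng.permutation(n):
            t += 1
            eta = 1.0 / (lam * t)           # decaying step size
            if y[i] * (X[i] @ w + b) < 1:   # margin violation
                w = (1 - eta * lam) * w + eta * y[i] * X[i]
                b += eta * y[i]
            else:
                w = (1 - eta * lam) * w     # regularization shrink only
    return w, b

def accuracy(X, y, w, b):
    return np.mean(np.sign(X @ w + b) == y)

# Illustrative synthetic "source ERP" feature vectors for two groups
# (e.g., normal-hearing vs. mild hearing loss), mean-shifted apart.
rng = np.random.default_rng(2)
n_per, d = 60, 20
g1 = rng.normal(0.0, 1.0, (n_per, d))
g2 = rng.normal(0.8, 1.0, (n_per, d))
X = np.vstack([g1, g2])
y = np.concatenate([-np.ones(n_per), np.ones(n_per)])
w, b = train_linear_svm(X, y)
acc = accuracy(X, y, w, b)
```

In a real analysis the accuracy would of course be estimated on held-out data (cross-validation), and hyperparameters such as `lam` would be tuned, which is what "parameter-optimized" refers to in the abstract.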
Collapse
Affiliation(s)
- Md Sultan Mahmud
- Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN, United States
- Faruk Ahmed
- Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN, United States
- Rakib Al-Fahad
- Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN, United States
- Kazi Ashraf Moinuddin
- Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN, United States
- Mohammed Yeasin
- Department of Electrical and Computer Engineering, The University of Memphis, Memphis, TN, United States
- Claude Alain
- Rotman Research Institute-Baycrest Centre for Geriatric Care, Toronto, ON, Canada; Department of Psychology, University of Toronto, Toronto, ON, Canada; Institute of Medical Sciences, University of Toronto, Toronto, ON, Canada
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, United States; School of Communication Sciences and Disorders, University of Memphis, Memphis, TN, United States; Department of Anatomy and Neurobiology, University of Tennessee Health Science Center, Memphis, TN, United States
Collapse
|
27
|
Rotman T, Lavie L, Banai K. Rapid Perceptual Learning: A Potential Source of Individual Differences in Speech Perception Under Adverse Conditions? Trends Hear 2020; 24:2331216520930541. [PMID: 32552477 PMCID: PMC7303778 DOI: 10.1177/2331216520930541] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Challenging listening situations (e.g., when speech is rapid or noisy) result in substantial individual differences in speech perception. We propose that rapid auditory perceptual learning is one of the factors contributing to those individual differences. To explore this proposal, we assessed rapid perceptual learning of time-compressed speech in young adults with normal hearing and in older adults with age-related hearing loss. We also assessed the contribution of this learning as well as that of hearing and cognition (vocabulary, working memory, and selective attention) to the recognition of natural-fast speech (NFS; both groups) and speech in noise (younger adults). In young adults, rapid learning and vocabulary were significant predictors of NFS and speech in noise recognition. In older adults, hearing thresholds, vocabulary, and rapid learning were significant predictors of NFS recognition. In both groups, models that included learning fitted the speech data better than models that did not include learning. Therefore, under adverse conditions, rapid learning may be one of the skills listeners could employ to support speech recognition.
Collapse
Affiliation(s)
- Tali Rotman
- Department of Communication Sciences and Disorders, University of Haifa
- Limor Lavie
- Department of Communication Sciences and Disorders, University of Haifa
- Karen Banai
- Department of Communication Sciences and Disorders, University of Haifa
Collapse
|
28
|
Loughrey DG, Pakhomov SVS, Lawlor BA. Altered verbal fluency processes in older adults with age-related hearing loss. Exp Gerontol 2019; 130:110794. [PMID: 31790801 DOI: 10.1016/j.exger.2019.110794] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/08/2019] [Revised: 10/27/2019] [Accepted: 11/24/2019] [Indexed: 11/28/2022]
Abstract
Epidemiological studies have linked age-related hearing loss (ARHL) with an increased risk of neurocognitive decline. Difficulties in speech perception with subsequent changes in brain morphometry, including regions important for lexical-semantic memory, are thought to be a possible mechanism for this relationship. This study investigated differences in automatic and executive lexical-semantic processes on verbal fluency tasks in individuals with acquired hearing loss. The primary outcomes were indices of automatic (clustering/word retrieval at start of task) and executive (switching/word retrieval after start of the task) processes from semantic and phonemic fluency tasks. To extract indices of clustering and switching, we used both manual and computerised methods. There were no differences between groups on indices of executive fluency processes or on any indices from the semantic fluency task. The hearing loss group demonstrated weaker automatic processes on the phonemic fluency task. Further research into differences in lexical-semantic processes with ARHL is warranted.
Collapse
Affiliation(s)
- David G Loughrey
- Global Brain Health Institute, Trinity College Dublin, Ireland; Global Brain Health Institute, University of California, San Francisco, USA; Trinity College Institute of Neuroscience, Trinity College Dublin.
- Brian A Lawlor
- Global Brain Health Institute, Trinity College Dublin, Ireland; Global Brain Health Institute, University of California, San Francisco, USA; Mercer's Institute for Successful Ageing, St James Hospital, Dublin, Ireland
Collapse
|
29
|
Auditory-frontal Channeling in α and β Bands is Altered by Age-related Hearing Loss and Relates to Speech Perception in Noise. Neuroscience 2019; 423:18-28. [PMID: 31705894 DOI: 10.1016/j.neuroscience.2019.10.044] [Citation(s) in RCA: 28] [Impact Index Per Article: 5.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2019] [Revised: 09/19/2019] [Accepted: 10/27/2019] [Indexed: 01/16/2023]
Abstract
Difficulty understanding speech-in-noise (SIN) is a pervasive problem faced by older adults, particularly those with hearing loss. Previous studies have identified structural and functional changes in the brain that contribute to older adults' speech perception difficulties. Yet, many of these studies use neuroimaging techniques that evaluate only gross activation in isolated brain regions. Neural oscillations may provide further insight into the processes underlying SIN perception, as well as the interaction between auditory cortex and prefrontal linguistic brain regions that mediates complex behaviors. We examined frequency-specific neural oscillations and functional connectivity of the EEG in older adults with and without hearing loss during an active SIN perception task. Brain-behavior correlations revealed listeners who were more resistant to the detrimental effects of noise also demonstrated greater modulation of α phase coherence between clean and noise-degraded speech, suggesting α desynchronization reflects release from inhibition and more flexible allocation of neural resources. Additionally, we found top-down β connectivity between prefrontal and auditory cortices strengthened with poorer hearing thresholds despite minimal behavioral differences. This is consistent with the proposal that linguistic brain areas may be recruited to compensate for impoverished auditory inputs through increased top-down predictions that assist SIN perception. Overall, these results emphasize the importance of top-down signaling in low-frequency brain rhythms that help compensate for hearing-related declines and facilitate efficient SIN processing.
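Phase-coherence measures like the α coherence compared across conditions above quantify how consistently oscillatory phase aligns across trials. A hedged sketch of the generic inter-trial phase coherence (phase-locking value) computation at a single frequency, on synthetic trials rather than the study's EEG (the study's exact coherence metric and parameters may differ):

```python
import numpy as np

def phase_coherence(trials, fs, freq):
    """Inter-trial phase coherence at one frequency:
    |mean over trials of exp(i * phase)|, bounded in [0, 1]."""
    n = trials.shape[1]
    k = int(round(freq * n / fs))        # FFT bin nearest to freq
    phases = np.angle(np.fft.rfft(trials, axis=1)[:, k])
    return np.abs(np.mean(np.exp(1j * phases)))

fs, n_trials, n_samp, f_alpha = 250.0, 80, 250, 10.0
t = np.arange(n_samp) / fs
rng = np.random.default_rng(3)
# "Phase-locked" trials: consistent 10 Hz phase with small jitter;
# "unlocked" trials: uniformly random 10 Hz phase. Noise in both.
locked = np.array([np.sin(2*np.pi*f_alpha*t + 0.1*rng.normal())
                   + 0.5*rng.normal(size=n_samp) for _ in range(n_trials)])
unlocked = np.array([np.sin(2*np.pi*f_alpha*t + rng.uniform(0, 2*np.pi))
                     + 0.5*rng.normal(size=n_samp) for _ in range(n_trials)])
plv_locked = phase_coherence(locked, fs, f_alpha)
plv_unlocked = phase_coherence(unlocked, fs, f_alpha)
```

The difference between the two values is the kind of condition-wise modulation (e.g., clean vs. noise-degraded speech) that the brain-behavior correlations in the abstract operate on.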
Collapse
|
30
|
Bidelman GM, Mahmud MS, Yeasin M, Shen D, Arnott SR, Alain C. Age-related hearing loss increases full-brain connectivity while reversing directed signaling within the dorsal-ventral pathway for speech. Brain Struct Funct 2019; 224:2661-2676. [PMID: 31346715 PMCID: PMC6778722 DOI: 10.1007/s00429-019-01922-9] [Citation(s) in RCA: 26] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/13/2019] [Accepted: 07/13/2019] [Indexed: 01/08/2023]
Abstract
Speech comprehension difficulties are ubiquitous to aging and hearing loss, particularly in noisy environments. Older adults' poorer speech-in-noise (SIN) comprehension has been related to abnormal neural representations within various nodes (regions) of the speech network, but how senescent changes in hearing alter the transmission of brain signals remains unspecified. We measured electroencephalograms in older adults with and without mild hearing loss during a SIN identification task. Using functional connectivity and graph-theoretic analyses, we show that hearing-impaired (HI) listeners have more extended (less integrated) communication pathways and less efficient information exchange among widespread brain regions (larger network eccentricity) than their normal-hearing (NH) peers. Parameter-optimized support vector machine classifiers applied to EEG connectivity data showed hearing status could be decoded (>85% accuracy) solely from network-level descriptions of brain activity, and classification was particularly robust using left-hemisphere connections. Notably, we found a reversal of directed neural signaling in the left hemisphere, dependent on hearing status, among specific connections within the dorsal-ventral speech pathways. NH listeners showed an overall net "bottom-up" signaling directed from auditory cortex (A1) to inferior frontal gyrus (IFG; Broca's area), whereas the HI group showed the reverse signal (i.e., "top-down" Broca's → A1). A similar flow reversal was noted between left IFG and motor cortex. Our full-brain connectivity results demonstrate that even mild forms of hearing loss alter how the brain routes information within the auditory-linguistic-motor loop.
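Network eccentricity, the graph metric invoked above, is a node's longest shortest-path distance to any other node; larger eccentricities indicate a less integrated network. A minimal sketch of computing it from a thresholded connectivity matrix on two toy graphs (the threshold and graphs are illustrative, not the study's data or pipeline):

```python
import numpy as np

def eccentricities(conn, threshold):
    """Binarize a symmetric connectivity matrix at `threshold`, then
    return each node's eccentricity (max shortest-path length) via
    Floyd-Warshall. Unreachable pairs yield inf."""
    n = conn.shape[0]
    dist = np.where(conn >= threshold, 1.0, np.inf)
    np.fill_diagonal(dist, 0.0)
    for k in range(n):  # relax all paths through intermediate node k
        dist = np.minimum(dist, dist[:, [k]] + dist[[k], :])
    return dist.max(axis=1)

# Toy contrast: a 5-node chain is less integrated (larger eccentricity)
# than a 5-node star, even though both have exactly four edges.
chain = np.zeros((5, 5))
for i in range(4):
    chain[i, i + 1] = chain[i + 1, i] = 1.0
star = np.zeros((5, 5))
for i in range(1, 5):
    star[0, i] = star[i, 0] = 1.0
ecc_chain = eccentricities(chain, 0.5)
ecc_star = eccentricities(star, 0.5)
```

In this sense, the HI group's "larger network eccentricity" corresponds to a chain-like, elongated topology relative to the more hub-integrated organization of the NH group.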
Collapse
Affiliation(s)
- Gavin M Bidelman
- Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA.
- School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, TN, 38152, USA.
- Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, USA.
| | - Md Sultan Mahmud
- Department of Electrical and Computer Engineering, University of Memphis, Memphis, TN, USA
| | - Mohammed Yeasin
- Department of Electrical and Computer Engineering, University of Memphis, Memphis, TN, USA
| | - Dawei Shen
- Rotman Research Institute-Baycrest Centre for Geriatric Care, Toronto, ON, Canada
| | - Stephen R Arnott
- Rotman Research Institute-Baycrest Centre for Geriatric Care, Toronto, ON, Canada
| | - Claude Alain
- Rotman Research Institute-Baycrest Centre for Geriatric Care, Toronto, ON, Canada
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- Institute of Medical Sciences, University of Toronto, Toronto, ON, Canada
| |
Collapse
|