51. Kim SY, Lim JS, Kong IG, Choi HG. Hearing impairment and the risk of neurodegenerative dementia: A longitudinal follow-up study using a national sample cohort. Sci Rep 2018; 8:15266. PMID: 30323320; PMCID: PMC6189102; DOI: 10.1038/s41598-018-33325-x.
Abstract
This study aimed to explore the risk of dementia in a middle- and older-aged population with severe or profound hearing impairment. Data were collected from the Korean National Health Insurance Service-National Sample Cohort from 2002 to 2013. Participants aged 40 or older were selected. The 4,432 severely hearing-impaired participants were matched 1:4 with 17,728 controls, and the 958 profoundly hearing-impaired participants were matched 1:4 with 3,832 controls who had not reported any hearing impairment. Age, sex, income, region of residence, and histories of hypertension, diabetes mellitus, and dyslipidemia were matched between the hearing-impaired and control groups. Crude (simple) and adjusted (for age, sex, income, region of residence, hypertension, diabetes mellitus, dyslipidemia, ischemic heart disease, cerebrovascular disease, and depression) hazard ratios (HRs) for dementia associated with hearing impairment were analyzed using Cox proportional hazards models. The severe hearing impairment group showed an increased risk of dementia (adjusted HR = 1.17, 95% confidence interval [CI] = 1.04–1.31, P = 0.010). The profound hearing impairment group also showed an increased risk of dementia (adjusted HR = 1.51, 95% CI = 1.14–2.00, P = 0.004). Both severe and profound hearing impairment were associated with an elevated risk of dementia in middle- and older-aged individuals.
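The Cox models in this study report hazard ratios with Wald-style confidence intervals. As a rough illustration only (not the authors' analysis), under constant hazards the crude HR reduces to an incidence-rate ratio, which can be computed with a log-scale Wald CI; the event counts and person-years below are hypothetical:

```python
import math

def incidence_rate_ratio(events_exp, py_exp, events_ctl, py_ctl, z=1.96):
    """Crude incidence-rate ratio with a Wald 95% CI on the log scale.
    Under constant hazards this approximates the crude (unadjusted) HR."""
    irr = (events_exp / py_exp) / (events_ctl / py_ctl)
    se = math.sqrt(1 / events_exp + 1 / events_ctl)  # SE of log(IRR)
    lo = math.exp(math.log(irr) - z * se)
    hi = math.exp(math.log(irr) + z * se)
    return irr, lo, hi

# Hypothetical counts (NOT the study's data): dementia events / person-years
irr, lo, hi = incidence_rate_ratio(events_exp=100, py_exp=20000,
                                   events_ctl=300, py_ctl=90000)
print(f"IRR = {irr:.2f}, 95% CI = {lo:.2f}-{hi:.2f}")
```

The adjusted HRs reported above additionally require covariate adjustment, which a full Cox regression (e.g., via a survival-analysis library) would provide.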
Affiliation(s)
- So Young Kim: Department of Otorhinolaryngology-Head & Neck Surgery, CHA Bundang Medical Center, CHA University, Seongnam, Korea
- Jae-Sung Lim: Department of Neurology, Hallym University Sacred Heart Hospital, Anyang, Korea
- Il Gyu Kong: Department of Otorhinolaryngology-Head & Neck Surgery, Hallym University College of Medicine, Anyang, Korea
- Hyo Geun Choi: Department of Otorhinolaryngology-Head & Neck Surgery, Hallym University College of Medicine, Anyang, Korea
52. Campbell TA, Marsh JE. Commentary: Donepezil enhances understanding of degraded speech in Alzheimer's disease. Front Aging Neurosci 2018; 10:197. PMID: 30057546; PMCID: PMC6053516; DOI: 10.3389/fnagi.2018.00197.
Affiliation(s)
- Tom A Campbell: Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- John E Marsh: Department of Building, Energy and Environmental Engineering, University of Gävle, Gävle, Sweden; School of Psychology, University of Central Lancashire, Preston, United Kingdom
53. Synaptopathy in the Aging Cochlea: Characterizing Early-Neural Deficits in Auditory Temporal Envelope Processing. J Neurosci 2018; 38:7108-7119. PMID: 29976623; DOI: 10.1523/jneurosci.3240-17.2018.
Abstract
Aging listeners, even in the absence of overt hearing loss measured as changes in hearing thresholds, often experience impairments processing temporally complex sounds such as speech in noise. Recent evidence has shown that normal aging is accompanied by a progressive loss of synapses between inner hair cells and auditory nerve fibers. The role of this cochlear synaptopathy in degraded temporal processing with age is not yet understood. Here, we used population envelope following responses, along with other hair cell- and neural-based measures from an age-graded series of male and female CBA/CaJ mice, to study changes in the encoding of stimulus envelopes. By comparing responses obtained before and after the application of the neurotoxin ouabain to the inner ear, we demonstrate that we can study changes in temporal processing on either side of the cochlear synapse. Results show that deficits in neural coding with age emerge at the earliest neural stages of auditory processing and are correlated with the degree of cochlear synaptopathy. These changes are seen before losses in neural thresholds and particularly affect the suprathreshold processing of sound. Responses obtained from more central sources show smaller differences with age, suggesting compensatory gain. These results show that progressive cochlear synaptopathy is accompanied by deficits in temporal coding at the earliest neural generators, which contribute to the suprathreshold sound-processing deficits observed with age.

SIGNIFICANCE STATEMENT Aging listeners often experience difficulty hearing and understanding speech in noisy conditions. The results described here suggest that age-related loss of cochlear synapses may be a significant contributor to those performance declines. We observed aberrant neural coding of sounds in the early auditory pathway, which was accompanied by and correlated with an age-progressive loss of synapses between the inner hair cells and the auditory nerve. Deficits first appeared before changes in hearing thresholds and were largest at the higher sound levels relevant to real-world communication. The noninvasive tests described here may be adapted to detect cochlear synaptopathy in the clinical setting.
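Envelope following responses such as those described above are commonly evoked with sinusoidally amplitude-modulated (SAM) tones, whose envelope oscillates at a fixed modulation rate. A minimal synthesis sketch follows; the carrier frequency, modulation rate, depth, and duration are illustrative assumptions, not the study's actual stimuli:

```python
import math

def sam_tone(carrier_hz, mod_hz, depth, dur_s, fs):
    """Sinusoidally amplitude-modulated (SAM) tone: a carrier whose envelope
    oscillates at the modulation rate; EFRs phase-lock to that envelope."""
    n = int(dur_s * fs)
    return [
        (1.0 + depth * math.sin(2 * math.pi * mod_hz * t / fs))
        * math.sin(2 * math.pi * carrier_hz * t / fs)
        for t in range(n)
    ]

# Hypothetical stimulus: 8 kHz carrier, 1 kHz modulation, 100% depth, 200 ms
tone = sam_tone(carrier_hz=8000, mod_hz=1000, depth=1.0, dur_s=0.2, fs=48000)
```

The envelope following response is then typically measured as phase-locked energy at the modulation rate in the recorded neural signal.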
54. Swords GM, Nguyen LT, Mudar RA, Llano DA. Auditory system dysfunction in Alzheimer disease and its prodromal states: A review. Ageing Res Rev 2018; 44:49-59. PMID: 29630950; DOI: 10.1016/j.arr.2018.04.001.
Abstract
Recent findings suggest that both peripheral and central auditory system dysfunction occur in the prodromal stages of Alzheimer Disease (AD), and therefore may represent early indicators of the disease. In addition, loss of auditory function itself leads to communication difficulties, social isolation and poor quality of life for both patients with AD and their caregivers. Developing a greater understanding of auditory dysfunction in early AD may shed light on the mechanisms of disease progression and carry diagnostic and therapeutic importance. Herein, we review the literature on hearing abilities in AD and its prodromal stages investigated through methods such as pure-tone audiometry, dichotic listening tasks, and evoked response potentials. We propose that screening for peripheral and central auditory dysfunction in at-risk populations is a low-cost and effective means to identify early AD pathology and provides an entry point for therapeutic interventions that enhance the quality of life of AD patients.
Affiliation(s)
- Lydia T Nguyen: Neuroscience Program, University of Illinois at Urbana-Champaign, United States; Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, United States
- Raksha A Mudar: Neuroscience Program, University of Illinois at Urbana-Champaign, United States; Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, United States
- Daniel A Llano: University of Illinois College of Medicine, United States; Neuroscience Program, University of Illinois at Urbana-Champaign, United States; Department of Molecular and Integrative Physiology, University of Illinois at Urbana-Champaign, United States; Beckman Institute for Advanced Science and Technology, Urbana, IL, United States
55. Brainstem-cortical functional connectivity for speech is differentially challenged by noise and reverberation. Hear Res 2018; 367:149-160. PMID: 29871826; DOI: 10.1016/j.heares.2018.05.018.
Abstract
Everyday speech perception is challenged by external acoustic interferences that hinder verbal communication. Here, we directly compared how different levels of the auditory system (brainstem vs. cortex) code speech and how their neural representations are affected by two acoustic stressors: noise and reverberation. We recorded multichannel (64 ch) brainstem frequency-following responses (FFRs) and cortical event-related potentials (ERPs) simultaneously in normal-hearing individuals to speech sounds presented in mild and moderate levels of noise and reverberation. We matched signal-to-noise and direct-to-reverberant ratios to equate the severity between classes of interference. Electrode recordings were parsed into source waveforms to assess the relative contribution of region-specific brain areas [i.e., brainstem (BS), primary auditory cortex (A1), inferior frontal gyrus (IFG)]. Results showed that reverberation was less detrimental to (and in some cases facilitated) the neural encoding of speech compared to additive noise. Inter-regional correlations revealed associations between BS and A1 responses, suggesting subcortical speech representations influence higher auditory-cortical areas. Functional connectivity analyses further showed that directed signaling toward A1 in both feedforward cortico-collicular (BS→A1) and feedback cortico-cortical (IFG→A1) pathways were strong predictors of degraded speech perception and differentiated "good" vs. "poor" perceivers. Our findings demonstrate a functional interplay within the brain's speech network that depends on the form and severity of acoustic interference. We infer that in addition to the quality of neural representations within individual brain regions, listeners' success at the "cocktail party" is modulated based on how information is transferred among subcortical and cortical hubs of the auditory-linguistic network.
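Both quantities matched in this study, the signal-to-noise ratio and the direct-to-reverberant ratio, are power ratios expressed in dB. A minimal sketch of that computation, assuming the two waveforms are available as sample lists (toy data, not the study's stimuli):

```python
import math

def power_db(signal, reference):
    """Ratio of mean powers in dB; applies to SNR (speech vs. noise) and to
    the direct-to-reverberant ratio (direct vs. reverberant energy)."""
    p_sig = sum(x * x for x in signal) / len(signal)
    p_ref = sum(x * x for x in reference) / len(reference)
    return 10 * math.log10(p_sig / p_ref)

# Toy example: doubling the amplitude quadruples the power, i.e. ~6.02 dB
sig = [2.0, -2.0] * 100
ref = [1.0, -1.0] * 100
print(round(power_db(sig, ref), 2))  # → 6.02
```

Equating these two ratios is what allows the study to compare noise and reverberation at matched severities.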
56. Bidelman G, Powers L. Response properties of the human frequency-following response (FFR) to speech and non-speech sounds: level dependence, adaptation and phase-locking limits. Int J Audiol 2018; 57:665-672. DOI: 10.1080/14992027.2018.1470338.
Affiliation(s)
- Gavin Bidelman: Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, USA
- Louise Powers: School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA
57. Bidelman GM. Subcortical sources dominate the neuroelectric auditory frequency-following response to speech. Neuroimage 2018; 175:56-69. PMID: 29604459; DOI: 10.1016/j.neuroimage.2018.03.060.
Abstract
Frequency-following responses (FFRs) are neurophonic potentials that provide a window into the encoding of complex sounds (e.g., speech/music), auditory disorders, and neuroplasticity. While the neural origins of the FFR remain debated, renewed controversy has reemerged after demonstration that FFRs recorded via magnetoencephalography (MEG) are dominated by cortical rather than brainstem structures as previously assumed. Here, we recorded high-density (64 ch) FFRs via EEG and applied state-of-the-art source imaging techniques to multichannel data (discrete dipole modeling, distributed imaging, independent component analysis, computational simulations). Our data confirm a mixture of generators localized to bilateral auditory nerve (AN), brainstem inferior colliculus (BS), and bilateral primary auditory cortex (PAC). However, frequency-specific scrutiny of source waveforms showed the relative contribution of these nuclei to the aggregate FFR varied across stimulus frequencies. Whereas AN and BS sources produced robust FFRs up to ∼700 Hz, PAC showed weak phase-locking with little FFR energy above the speech fundamental (100 Hz). Notably, CLARA imaging further showed PAC activation was eradicated for FFRs >150 Hz, above which only subcortical sources remained active. Our results show (i) the site of FFR generation varies critically with stimulus frequency; and (ii) opposite the pattern observed in MEG, subcortical structures make the largest contribution to electrically recorded FFRs (AN ≥ BS > PAC). We infer that cortical dominance observed in previous neuromagnetic data is likely due to the bias of MEG to superficial brain tissue, underestimating subcortical structures that drive most of the speech-FFR. Cleanly separating subcortical from cortical FFRs can be achieved by ensuring stimulus frequencies are >150-200 Hz, above the phase-locking limit of cortical neurons.
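FFR strength at a given frequency (e.g., the 100 Hz speech fundamental discussed above) is commonly quantified as the magnitude of the response spectrum at that frequency. A stdlib-only sketch of a single-bin DFT follows; the synthetic waveform and sampling rate are illustrative assumptions, not the paper's data or analysis pipeline:

```python
import cmath
import math

def dft_magnitude(x, freq_hz, fs):
    """Amplitude at a single DFT bin: correlate the signal with a complex
    exponential at freq_hz (one bin of the Fourier transform), scaled so a
    unit-amplitude sinusoid at an exact bin frequency yields 1.0."""
    n = len(x)
    acc = sum(x[t] * cmath.exp(-2j * math.pi * freq_hz * t / fs)
              for t in range(n))
    return abs(acc) * 2 / n

fs = 8000
# Synthetic "FFR": strong 100 Hz component (speech F0) plus a weak 300 Hz one,
# over exactly 1 s so both frequencies fall on exact DFT bins
x = [math.sin(2 * math.pi * 100 * t / fs)
     + 0.2 * math.sin(2 * math.pi * 300 * t / fs) for t in range(fs)]
print(round(dft_magnitude(x, 100, fs), 2))  # → 1.0
print(round(dft_magnitude(x, 300, fs), 2))  # → 0.2
```

Comparing such magnitudes across stimulus frequencies is one simple way to examine the frequency-dependent roll-off of phase-locking described above.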
Affiliation(s)
- Gavin M Bidelman: School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, USA; Institute for Intelligent Systems, University of Memphis, Memphis, TN, USA; Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, USA
58. Correa-Jaraba KS, Lindín M, Díaz F. Increased Amplitude of the P3a ERP Component as a Neurocognitive Marker for Differentiating Amnestic Subtypes of Mild Cognitive Impairment. Front Aging Neurosci 2018; 10:19. PMID: 29483869; PMCID: PMC5816051; DOI: 10.3389/fnagi.2018.00019.
Abstract
The event-related potential (ERP) technique has been shown to be useful for evaluating changes in brain electrical activity associated with different cognitive processes, particularly in Alzheimer's disease (AD). Longitudinal studies have shown that a high proportion of people with amnestic mild cognitive impairment (aMCI) go on to develop AD. aMCI is divided into two subtypes according to the presence of memory impairment only (single-domain aMCI: sdaMCI) or impairment of memory and other cognitive domains (multi-domain aMCI: mdaMCI). The main aim of this study was to examine the effects of sdaMCI and mdaMCI on the P3a ERP component associated with the involuntary orientation of attention toward unattended infrequent novel auditory stimuli. Participants performed an auditory-visual distraction-attention task, in which they were asked to ignore the auditory stimuli (standard, deviant, and novel) and to attend to the visual stimuli (responding to some of them: Go stimuli). P3a was identified in the Novel minus Standard difference waveforms, and reaction times (RTs) and hits (in response to Go stimuli) were also analyzed. Participants were classified into three groups: Control, 20 adults (mean age (M): 65.8 years); sdaMCI, 19 adults (M: 67 years); and mdaMCI, 11 adults (M: 71 years). In all groups, the RTs were significantly longer when Go stimuli were preceded by novel (relative to standard) auditory stimuli, suggesting a distraction effect triggered by novel stimuli; mdaMCI participants made significantly fewer hits than control and sdaMCI participants. P3a comprised two consecutive phases in all groups: early-P3a (e-P3a), which may reflect the orienting response toward the irrelevant stimuli, and late-P3a (l-P3a), which may be a correlate of subsequent evaluation of these stimuli. 
The e-P3a amplitude was significantly larger in mdaMCI than in sdaMCI participants, and the l-P3a amplitude was significantly larger in mdaMCI than in sdaMCI and Control participants, indicating greater involuntary capture of attention to unattended novel auditory stimuli and allocation of more attentional resources for the subsequent evaluation of these stimuli in mdaMCI participants. The e-P3a and l-P3a components showed moderate to high sensitivity and specificity for distinguishing between groups, suggesting that both may represent optimal neurocognitive markers for differentiating aMCI subtypes.
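The Novel minus Standard difference waveform used above is computed by averaging the single-trial epochs within each condition and subtracting the grand averages pointwise. A minimal sketch with hypothetical toy epochs (not the study's data):

```python
def difference_waveform(novel_trials, standard_trials):
    """Pointwise Novel-minus-Standard difference wave: average the
    single-trial epochs within each condition, then subtract the averages."""
    def grand_average(trials):
        n = len(trials)
        return [sum(samples) / n for samples in zip(*trials)]
    avg_novel = grand_average(novel_trials)
    avg_standard = grand_average(standard_trials)
    return [nv - st for nv, st in zip(avg_novel, avg_standard)]

# Toy epochs: 2 trials x 4 samples per condition (arbitrary units)
novel = [[0.0, 2.0, 4.0, 2.0], [0.0, 4.0, 6.0, 2.0]]
standard = [[0.0, 1.0, 2.0, 1.0], [0.0, 1.0, 2.0, 1.0]]
print(difference_waveform(novel, standard))  # → [0.0, 2.0, 3.0, 1.0]
```

Components such as the e-P3a and l-P3a are then measured as peaks within latency windows of this difference wave.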
Affiliation(s)
- Kenia S. Correa-Jaraba: Laboratorio de Psicofisioloxía e Neurociencia Cognitiva, Facultade de Psicoloxía, Universidade de Santiago de Compostela, Galicia, Spain
59. Valderrama JT, de la Torre A, Van Dun B. An automatic algorithm for blink-artifact suppression based on iterative template matching: application to single channel recording of cortical auditory evoked potentials. J Neural Eng 2018; 15:016008. DOI: 10.1088/1741-2552/aa8d95.
60. Bidelman GM. Sonification of scalp-recorded frequency-following responses (FFRs) offers improved response detection over conventional statistical metrics. J Neurosci Methods 2018; 293:59-66. DOI: 10.1016/j.jneumeth.2017.09.005.
61. Bidelman GM, Yellamsetty A. Noise and pitch interact during the cortical segregation of concurrent speech. Hear Res 2017; 351:34-44. PMID: 28578876; DOI: 10.1016/j.heares.2017.05.008.
Abstract
Behavioral studies reveal listeners exploit intrinsic differences in voice fundamental frequency (F0) to segregate concurrent speech sounds: the so-called "F0-benefit." More favorable signal-to-noise ratio (SNR) in the environment, an extrinsic acoustic factor, similarly benefits the parsing of simultaneous speech. Here, we examined the neurobiological substrates of these two cues in the perceptual segregation of concurrent speech mixtures. We recorded event-related brain potentials (ERPs) while listeners performed a speeded double-vowel identification task. Listeners heard two concurrent vowels whose F0 differed by zero or four semitones presented in either clean (no noise) or noise-degraded (+5 dB SNR) conditions. Behaviorally, listeners were more accurate in correctly identifying both vowels for larger F0 separations, but F0-benefit was more pronounced at more favorable SNRs (i.e., pitch × SNR interaction). Analysis of the ERPs revealed that only the P2 wave (∼200 ms) showed a similar F0 × SNR interaction as behavior and was correlated with listeners' perceptual F0-benefit. Neural classifiers applied to the ERPs further suggested that speech sounds are segregated neurally within 200 ms based on SNR whereas segregation based on pitch occurs later in time (400-700 ms). The earlier timing of extrinsic SNR compared to intrinsic F0-based segregation implies that the cortical extraction of speech from noise is more efficient than differentiating speech based on pitch cues alone, which may recruit additional cortical processes. Findings indicate that noise and pitch differences interact relatively early in cerebral cortex and that the brain arrives at the identities of concurrent speech mixtures as early as ∼200 ms.
Affiliation(s)
- Gavin M Bidelman: School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, 38152, USA; Institute for Intelligent Systems, University of Memphis, Memphis, TN, 38152, USA; Department of Anatomy and Neurobiology, University of Tennessee Health Sciences Center, Memphis, TN, 38163, USA
- Anusha Yellamsetty: School of Communication Sciences & Disorders, University of Memphis, Memphis, TN, 38152, USA