1.
Alqudah S, Zuriekat M, Shatarah A. Impact of hearing impairment on the mental status of the adults and older adults in Jordanian society. PLoS One 2024; 19:e0298616. PMID: 38437235; PMCID: PMC10911586; DOI: 10.1371/journal.pone.0298616.
Abstract
BACKGROUND Hearing loss is a common disorder, affecting both children and adults worldwide. Individuals with hearing loss suffer from mental health problems that affect their quality of life. OBJECTIVE This study aimed to investigate the social and emotional consequences of hearing loss in a Jordanian population using Arabic versions of the Hearing Handicap Inventory for Adults (HHIA) and the Hearing Handicap Inventory for the Elderly (HHIE). METHODS This study included 300 Jordanian participants aged 18-90 years with hearing loss. Each participant underwent a complete audiological evaluation before answering the questionnaires. RESULTS The median overall scores of the HHIA and HHIE groups were 39 and 65, respectively. Both the HHIA (Cronbach's alpha = 0.79, p < 0.001) and the HHIE (Cronbach's alpha = 0.78, p < 0.001) were significantly associated with the social, emotional, and overall scores. The median emotional and social scores of the older adult group were significantly higher than those of the adult group (Mann-Whitney test, Z = -4.721, p = 0.001). CONCLUSION The present research revealed that psychological disabilities associated with hearing loss in the adult Jordanian population are more frequent and severe than in other nations. This may be attributed to the lack of awareness of the mental consequences of hearing loss among Jordanian healthcare providers and the public.
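Both statistics reported above (Cronbach's alpha for questionnaire reliability, Mann-Whitney U for the group comparison) are standard computations. A minimal Python sketch on synthetic data; the item and group scores below are illustrative placeholders, not the study's data:

```python
import numpy as np
from scipy.stats import mannwhitneyu

def cronbach_alpha(items: np.ndarray) -> float:
    """Internal-consistency reliability for an (n_respondents, n_items) score matrix."""
    items = np.asarray(items, dtype=float)
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()
    total_var = items.sum(axis=1).var(ddof=1)
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
# Synthetic HHIE-style item scores (0/2/4 per item), driven by a shared latent trait.
latent = rng.normal(size=(100, 1))
items = np.clip(np.round(latent + rng.normal(scale=0.7, size=(100, 10))), -1, 1) * 2 + 2
alpha = cronbach_alpha(items)

# Mann-Whitney U on two synthetic groups of overall handicap scores.
adults = rng.normal(39, 10, size=150)
older = rng.normal(65, 10, size=150)
u, p = mannwhitneyu(adults, older, alternative="two-sided")
print(f"alpha={alpha:.2f}, U={u:.0f}, p={p:.3g}")
```

Because the synthetic items share a latent factor, alpha comes out well above zero, and the large median gap between the two groups yields a highly significant U test, mirroring the pattern of results described in the abstract.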
Affiliation(s)
- Safa Alqudah
- Department of Rehabilitation Sciences, Faculty of Applied Medical Sciences, Jordan University of Science and Technology, Irbid, Jordan
- Margaret Zuriekat
- Department of Special Surgery, School of Medicine, The University of Jordan & Jordan University Hospital, Amman, Jordan
- Aya Shatarah
- Bachelor in Speech and Hearing, Jordan University of Science and Technology, Irbid, Jordan
2.
Diao T, Ma X, Fang X, Duan M, Yu L. Compensation in neuro-system related to age-related hearing loss. Acta Otolaryngol 2024; 144:30-34. PMID: 38265951; DOI: 10.1080/00016489.2023.2295400.
Abstract
BACKGROUND Age-related hearing loss (ARHL) is a major cause of chronic disability among the elderly. Individuals with ARHL have trouble not only hearing sounds but also perceiving speech. Because the perception of auditory information relies on integration across widespread brain networks to interpret auditory stimuli, both auditory and extra-auditory systems, which mainly include the visual, motor, and attention systems, play an important role in compensating for ARHL. OBJECTIVES To better understand the compensatory mechanisms of ARHL and inspire better interventions that may alleviate it. METHODS We mainly focus on the existing information on ARHL-related central compensation. The compensatory effects of hearing aids (HAs) and cochlear implants (CIs) on ARHL are also discussed. RESULTS Studies have shown that ARHL can induce cochlear hair cell damage or loss and cochlear synaptopathy, which in turn can induce central compensation involving both auditory and extra-auditory neural networks. HAs and CIs can improve bottom-up processing by enhancing the diminished auditory signal, enabling 'better' input to the auditory pathways and then to the cortex. CONCLUSIONS The central compensation of ARHL and its possible correlation with HAs and CIs are current hotspots in the field and should be a focus of future research.
Affiliation(s)
- Tongxiang Diao
- Department of Otolaryngology, Head and Neck Surgery, People's Hospital, Peking University, Beijing, China
- Xin Ma
- Department of Otolaryngology, Head and Neck Surgery, People's Hospital, Peking University, Beijing, China
- Xuan Fang
- Department of Human Anatomy, Histology & Embryology, School of Basic Medical Sciences, Peking University, Beijing, China
- Maoli Duan
- Department of Clinical Science, Intervention and Technology, Karolinska Institute, Stockholm, Sweden
- Department of Otolaryngology, Head and Neck Surgery & Audiology and Neurotology, Karolinska University Hospital, Karolinska Institute, Stockholm, Sweden
- Lisheng Yu
- Department of Otolaryngology, Head and Neck Surgery, People's Hospital, Peking University, Beijing, China
3.
Herrera C, Whittle N, Leek MR, Brodbeck C, Lee G, Barcenas C, Barnes S, Holshouser B, Yi A, Venezia JH. Cortical networks for recognition of speech with simultaneous talkers. Hear Res 2023; 437:108856. PMID: 37531847; DOI: 10.1016/j.heares.2023.108856.
Abstract
The relative contributions of superior temporal vs. inferior frontal and parietal networks to recognition of speech in a background of competing speech remain unclear, although the contributions themselves are well established. Here, we use fMRI with spectrotemporal modulation transfer function (ST-MTF) modeling to examine the speech information represented in temporal vs. frontoparietal networks for two speech recognition tasks with and without a competing talker. Specifically, 31 listeners completed two versions of a three-alternative forced choice competing speech task: "Unison" and "Competing", in which a female (target) and a male (competing) talker uttered identical or different phrases, respectively. Spectrotemporal modulation filtering (i.e., acoustic distortion) was applied to the two-talker mixtures and ST-MTF models were generated to predict brain activation from differences in spectrotemporal-modulation distortion on each trial. Three cortical networks were identified based on differential patterns of ST-MTF predictions and the resultant ST-MTF weights across conditions (Unison, Competing): a bilateral superior temporal (S-T) network, a frontoparietal (F-P) network, and a network distributed across cortical midline regions and the angular gyrus (M-AG). The S-T network and the M-AG network responded primarily to spectrotemporal cues associated with speech intelligibility, regardless of condition, but the S-T network responded to a greater range of temporal modulations, suggesting a more acoustically driven response. The F-P network responded to the absence of intelligibility-related cues in both conditions, but also to the absence (presence) of target-talker (competing-talker) vocal pitch in the Competing condition, suggesting a generalized response to signal degradation. Task performance was best predicted by activation in the S-T and F-P networks, but in opposite directions (S-T: more activation = better performance; F-P: vice versa). Moreover, S-T network predictions were entirely ST-MTF mediated while F-P network predictions were ST-MTF mediated only in the Unison condition, suggesting an influence from non-acoustic sources (e.g., informational masking) in the Competing condition. Activation in the M-AG network was weakly positively correlated with performance and this relation was entirely superseded by those in the S-T and F-P networks. Regarding contributions to speech recognition, we conclude: (a) superior temporal regions play a bottom-up, perceptual role that is not qualitatively dependent on the presence of competing speech; (b) frontoparietal regions play a top-down role that is modulated by competing speech and scales with listening effort; and (c) performance ultimately relies on dynamic interactions between these networks, with ancillary contributions from networks not involved in speech processing per se (e.g., the M-AG network).
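Spectrotemporal modulation filtering of the kind applied to the two-talker mixtures can be illustrated as masking in the 2D Fourier domain of a spectrogram: temporal modulation rates lie along the time axis, spectral modulation scales along the frequency axis. The toy spectrogram, cutoff values, and axis units below are illustrative assumptions, not the study's filter design:

```python
import numpy as np

def st_modulation_lowpass(spec, t_cut, f_cut, dt, df):
    """Low-pass filter a spectrogram in the spectrotemporal modulation domain.

    spec: (n_freq, n_time) spectrogram; t_cut is the temporal-modulation cutoff
    (Hz), f_cut the spectral-modulation cutoff, with dt/df the axis step sizes.
    """
    F = np.fft.fft2(spec)
    wt = np.fft.fftfreq(spec.shape[1], d=dt)   # temporal modulation rates (Hz)
    wf = np.fft.fftfreq(spec.shape[0], d=df)   # spectral modulation scales
    mask = (np.abs(wf)[:, None] <= f_cut) & (np.abs(wt)[None, :] <= t_cut)
    return np.real(np.fft.ifft2(F * mask))

# Toy spectrogram: a slow 4 Hz ripple plus a fast 32 Hz ripple in time.
dt, df = 0.01, 0.5
t = np.arange(200) * dt
spec = np.outer(np.ones(64), np.sin(2 * np.pi * 4 * t) + np.sin(2 * np.pi * 32 * t))

# With an 8 Hz temporal cutoff, only the slow ripple survives.
smoothed = st_modulation_lowpass(spec, t_cut=8.0, f_cut=4.0, dt=dt, df=df)
```

Varying `t_cut`/`f_cut` trial by trial is one way to produce graded spectrotemporal distortion of the kind the ST-MTF models relate to brain activation.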
Affiliation(s)
- Nicole Whittle
- VA Loma Linda Healthcare System, Loma Linda, CA, United States
- Marjorie R Leek
- VA Loma Linda Healthcare System, Loma Linda, CA, United States; Loma Linda University, Loma Linda, CA, United States
- Grace Lee
- Loma Linda University, Loma Linda, CA, United States
- Samuel Barnes
- Loma Linda University, Loma Linda, CA, United States
- Alex Yi
- VA Loma Linda Healthcare System, Loma Linda, CA, United States; Loma Linda University, Loma Linda, CA, United States
- Jonathan H Venezia
- VA Loma Linda Healthcare System, Loma Linda, CA, United States; Loma Linda University, Loma Linda, CA, United States.
4.
Lee YS, Rogers CS, Grossman M, Wingfield A, Peelle JE. Hemispheric dissociations in regions supporting auditory sentence comprehension in older adults. Aging Brain 2022; 2:100051. PMID: 36908889; PMCID: PMC9997128; DOI: 10.1016/j.nbas.2022.100051.
Abstract
We investigated how the aging brain copes with acoustic and syntactic challenges during spoken language comprehension. Thirty-eight healthy adults aged 54-80 years (M = 66 years) participated in an fMRI experiment wherein listeners indicated the gender of an agent in short spoken sentences that varied in syntactic complexity (object-relative vs subject-relative center-embedded clause structures) and acoustic richness (high vs low spectral detail, but all intelligible). We found widespread activity throughout a bilateral frontotemporal network during successful sentence comprehension. Consistent with prior reports, bilateral inferior frontal gyrus and left posterior superior temporal gyrus were more active in response to object-relative sentences than to subject-relative sentences. Moreover, several regions were significantly correlated with individual differences in task performance: Activity in right frontoparietal cortex and left cerebellum (Crus I & II) showed a negative correlation with overall comprehension. By contrast, left frontotemporal areas and right cerebellum (Lobule VII) showed a negative correlation with accuracy specifically for syntactically complex sentences. In addition, laterality analyses confirmed a lack of hemispheric lateralization in activity evoked by sentence stimuli in older adults. Importantly, we found different hemispheric roles, with a left-lateralized core language network supporting syntactic operations, and right-hemisphere regions coming into play to aid in general cognitive demands during spoken sentence processing. Together our findings support the view that high levels of language comprehension in older adults are maintained by a close interplay between a core left hemisphere language network and additional neural resources in the contralateral hemisphere.
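Laterality analyses of the kind referenced above commonly summarize activity with a laterality index, LI = (L - R) / (L + R), where L and R are activation measures from homologous left- and right-hemisphere regions; values near 0 indicate bilateral activity. A minimal sketch; the +/-0.2 "bilateral" threshold is a common convention, not necessarily the one used in this study:

```python
def laterality_index(left: float, right: float) -> float:
    """LI in [-1, 1]: +1 fully left-lateralized, -1 fully right, ~0 bilateral."""
    if left + right == 0:
        return 0.0
    return (left - right) / (left + right)

def classify(li: float, threshold: float = 0.2) -> str:
    """Label an LI using a conventional (assumed) threshold."""
    if li > threshold:
        return "left-lateralized"
    if li < -threshold:
        return "right-lateralized"
    return "bilateral"

print(classify(laterality_index(3.0, 1.0)))  # LI = 0.5 -> left-lateralized
print(classify(laterality_index(1.1, 1.0)))  # LI ~ 0.05 -> bilateral
```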
Affiliation(s)
- Yune Sang Lee
- Department of Speech, Language, and Hearing, School of Behavioral and Brain Sciences, The University of Texas at Dallas, Richardson, TX, USA
- Chad S. Rogers
- Department of Psychology, Union College, Schenectady, NY, USA
- Murray Grossman
- Department of Neurology, Perelman School of Medicine, University of Pennsylvania, Philadelphia, PA, USA
- Jonathan E. Peelle
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
5.
Ritz H, Wild CJ, Johnsrude IS. Parametric Cognitive Load Reveals Hidden Costs in the Neural Processing of Perfectly Intelligible Degraded Speech. J Neurosci 2022; 42:4619-4628. PMID: 35508382; PMCID: PMC9186799; DOI: 10.1523/jneurosci.1777-21.2022.
Abstract
Speech is often degraded by environmental noise or hearing impairment. People can compensate for degradation, but this requires cognitive effort. Previous research has identified frontotemporal networks involved in effortful perception, but materials in these works were also less intelligible, and so it is not clear whether activity reflected effort or intelligibility differences. We used functional magnetic resonance imaging to assess the degree to which spoken sentences were processed under distraction and whether this depended on speech quality even when intelligibility of degraded speech was matched to that of clear speech (close to 100%). On each trial, male and female human participants either attended to a sentence or to a concurrent multiple object tracking (MOT) task that imposed parametric cognitive load. Activity in bilateral anterior insula reflected task demands; during the MOT task, activity increased as cognitive load increased, and during speech listening, activity increased as speech became more degraded. In marked contrast, activity in bilateral anterior temporal cortex was speech selective and gated by attention when speech was degraded. In this region, performance of the MOT task with a trivial load blocked processing of degraded speech, whereas processing of clear speech was unaffected. As load increased, responses to clear speech in these areas declined, consistent with reduced capacity to process it. This result dissociates cognitive control from speech processing; substantially less cognitive control is required to process clear speech than is required to understand even very mildly degraded, 100% intelligible speech. Perceptual and control systems clearly interact dynamically during real-world speech comprehension. SIGNIFICANCE STATEMENT: Speech is often perfectly intelligible even when degraded, for example, by background sound, phone transmission, or hearing loss. How does degradation alter cognitive demands? Here, we use fMRI to demonstrate a novel and critical role for cognitive control in the processing of mildly degraded but perfectly intelligible speech. We compare speech that is matched for intelligibility but differs in putative control demands, dissociating cognitive control from speech processing. We also impose a parametric cognitive load during perception, dissociating processes that depend on tasks from those that depend on available capacity. Our findings distinguish between frontal and temporal contributions to speech perception and reveal a hidden cost to processing mildly degraded speech, underscoring the importance of cognitive control for everyday speech comprehension.
Affiliation(s)
- Harrison Ritz
- Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada
- Department of Cognitive, Linguistic, and Psychological Sciences, Brown University, Providence, Rhode Island 02912
- Conor J Wild
- Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada
- Ingrid S Johnsrude
- Brain and Mind Institute, University of Western Ontario, London, Ontario N6A 3K7, Canada
- Departments of Psychology and Communication Sciences and Disorders, University of Western Ontario, London, Ontario N6A 3K7, Canada
6.
Vaden KI, Teubner-Rhodes S, Ahlstrom JB, Dubno JR, Eckert MA. Evidence for cortical adjustments to perceptual decision criteria during word recognition in noise. Neuroimage 2022; 253:119042. PMID: 35259524; PMCID: PMC9082296; DOI: 10.1016/j.neuroimage.2022.119042.
Abstract
Extensive increases in cingulo-opercular frontal activity are typically observed during speech recognition in noise tasks. This elevated activity has been linked to a word recognition benefit on the next trial, termed "adaptive control," but how this effect might be implemented has been unclear. The established link between perceptual decision making and cingulo-opercular function may provide an explanation for how those regions benefit subsequent word recognition. In this case, processes that support recognition, such as raising or lowering the decision criteria for more accurate or faster recognition, may be adjusted to optimize performance on the next trial. The current neuroimaging study tested the hypothesis that pre-stimulus cingulo-opercular activity reflects criterion adjustments that determine how much information to collect for word recognition on subsequent trials. Participants included middle-age and older adults (N = 30; age = 58.3 ± 8.8 years; m ± sd) with normal hearing or mild sensorineural hearing loss. During a sparse fMRI experiment, words were presented in multitalker babble at +3 dB or +10 dB signal-to-noise ratio (SNR), which participants were instructed to repeat aloud. Word recognition was significantly poorer with increasing participant age and lower SNR compared to higher SNR conditions. A perceptual decision-making model was used to characterize processing differences based on task response latency distributions. The model showed that significantly less sensory evidence was collected (i.e., lower criteria) for lower compared to higher SNR trials. Replicating earlier observations, pre-stimulus cingulo-opercular activity was significantly predictive of correct recognition on a subsequent trial. Individual differences showed that participants with higher criteria also benefitted the most from pre-stimulus activity. Moreover, trial-level criteria changes were significantly linked to higher versus lower pre-stimulus activity. These results suggest that the cingulo-opercular cortex contributes to criterion adjustments that optimize speech recognition task performance.
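The abstract does not name the specific perceptual decision-making model fit to the response latency distributions. One widely used closed-form approach to recovering a criterion-like parameter (boundary separation a), along with drift rate v and non-decision time Ter, from accuracy and RT summaries is the EZ-diffusion model (Wagenmakers et al.). A sketch under that assumption, not the authors' exact model:

```python
import math

def ez_diffusion(pc: float, vrt: float, mrt: float, s: float = 0.1):
    """EZ-diffusion: drift rate v, boundary separation a, non-decision time Ter
    from proportion correct (pc), RT variance (vrt, s^2), and mean RT (mrt, s).
    pc must lie strictly between 0.5 and 1 (no edge correction applied here)."""
    L = math.log(pc / (1 - pc))                      # logit of accuracy
    x = L * (L * pc**2 - L * pc + pc - 0.5) / vrt
    v = math.copysign(1.0, pc - 0.5) * s * x**0.25   # drift rate
    a = s**2 * L / v                                 # boundary separation (criterion)
    y = -v * a / s**2
    mdt = (a / (2 * v)) * (1 - math.exp(y)) / (1 + math.exp(y))  # mean decision time
    return v, a, mrt - mdt

# Canonical worked example: v ~= 0.0999, a ~= 0.140, Ter ~= 0.300
v, a, ter = ez_diffusion(pc=0.802, vrt=0.112, mrt=0.723)
```

On this reading, "less sensory evidence collected on lower-SNR trials" corresponds to a smaller fitted boundary separation a for those trials.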
Affiliation(s)
- Kenneth I. Vaden
- Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States. Corresponding author: K.I. Vaden Jr.
- Susan Teubner-Rhodes
- Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
- Department of Psychological Sciences, 226 Thach Hall, Auburn University, AL 36849-9027
- Jayne B. Ahlstrom
- Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
- Judy R. Dubno
- Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
- Mark A. Eckert
- Hearing Research Program, Department of Otolaryngology, Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave. MSC 550, Charleston, SC 29455-5500, United States
7.
Eckert MA, Vaden KI, Iuricich F. Cortical asymmetries at different spatial hierarchies relate to phonological processing ability. PLoS Biol 2022; 20:e3001591. PMID: 35381012; PMCID: PMC8982829; DOI: 10.1371/journal.pbio.3001591.
Abstract
The ability to map speech sounds to corresponding letters is critical for establishing proficient reading. People vary in this phonological processing ability, which has been hypothesized to result from variation in hemispheric asymmetries within brain regions that support language. A cerebral lateralization hypothesis predicts that more asymmetric brain structures facilitate the development of foundational reading skills like phonological processing. That is, structural asymmetries are predicted to linearly increase with ability. In contrast, a canalization hypothesis predicts that asymmetries constrain behavioral performance within a normal range. That is, structural asymmetries are predicted to quadratically relate to phonological processing, with average phonological processing occurring in people with the most asymmetric structures. These predictions were examined in relatively large samples of children (N = 424) and adults (N = 300), using a topological asymmetry analysis of T1-weighted brain images and a decoding measure of phonological processing. There was limited evidence of structural asymmetry and phonological decoding associations in classic language-related brain regions. However, and in modest support of the cerebral lateralization hypothesis, small to medium effect sizes were observed where phonological decoding accuracy increased with the magnitude of the largest structural asymmetry across left hemisphere cortical regions, but not right hemisphere cortical regions, for both the adult and pediatric samples. In support of the canalization hypothesis, small to medium effect sizes were observed where phonological decoding in the normal range was associated with increased asymmetries in specific cortical regions for both the adult and pediatric samples, which included performance monitoring and motor planning brain regions that contribute to oral and written language functions. Thus, the relevance of each hypothesis to phonological decoding may depend on the scale of brain organization.
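The two hypotheses make contrasting statistical predictions: cerebral lateralization implies a linear asymmetry-ability relation, canalization an inverted-U (quadratic) one. Distinguishing them amounts to comparing linear and quadratic fits. A toy sketch on synthetic data, not the study's data or its topological analysis:

```python
import numpy as np

def r_squared(x, y, degree):
    """Coefficient of determination for a least-squares polynomial fit."""
    coeffs = np.polyfit(x, y, degree)
    resid = y - np.polyval(coeffs, x)
    return 1 - resid.var() / y.var()

rng = np.random.default_rng(1)
asym = rng.uniform(-1, 1, size=300)                  # structural asymmetry measure
# Toy inverted-U ("canalization-like") relation plus noise.
ability = 1.0 - asym**2 + rng.normal(scale=0.2, size=300)

r2_lin = r_squared(asym, ability, degree=1)
r2_quad = r_squared(asym, ability, degree=2)
print(f"linear R^2={r2_lin:.3f}, quadratic R^2={r2_quad:.3f}")
```

Because the quadratic model nests the linear one, its R^2 can never be lower; the interesting comparison is whether the quadratic term adds explanatory power beyond chance, which in practice is assessed with a model-comparison test rather than raw R^2.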
Affiliation(s)
- Mark A. Eckert
- Hearing Research Program, Department of Otolaryngology—Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina, United States of America
- Kenneth I. Vaden
- Hearing Research Program, Department of Otolaryngology—Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina, United States of America
- Federico Iuricich
- Visual Computing Division, School of Computing, Clemson University, Clemson, South Carolina, United States of America
8.
McClannahan KS, Mainardi A, Luor A, Chiu YF, Sommers MS, Peelle JE. Spoken Word Recognition in Listeners with Mild Dementia Symptoms. J Alzheimers Dis 2022; 90:749-759. PMID: 36189586; PMCID: PMC9885492; DOI: 10.3233/jad-215606.
Abstract
BACKGROUND Difficulty understanding speech is a common complaint of older adults. In quiet, speech perception is often assumed to be relatively automatic. However, higher-level cognitive processes play a key role in successful communication in noise. Limited cognitive resources in adults with dementia may therefore hamper word recognition. OBJECTIVE The goal of this study was to determine the impact of mild dementia on spoken word recognition in quiet and noise. METHODS Participants were adults aged 53-86 years with (n = 16) or without (n = 32) dementia symptoms, as classified by the Clinical Dementia Rating scale. Participants performed a word identification task with two levels of word difficulty (few and many similar sounding words) in quiet and in noise at two signal-to-noise ratios, +6 and +3 dB. Our hypothesis was that listeners with mild dementia symptoms would have more difficulty with speech perception in noise under conditions that tax cognitive resources. RESULTS Listeners with mild dementia symptoms had poorer task accuracy in both quiet and noise, which held after accounting for differences in age and hearing level. Notably, even in quiet, adults with dementia symptoms correctly identified words only about 80% of the time. However, word difficulty was not a factor in task performance for either group. CONCLUSION These results affirm the difficulty that listeners with mild dementia may have with spoken word recognition, both in quiet and in background noise, consistent with a role of cognitive resources in spoken word identification.
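Presenting words "in noise at +6 and +3 dB SNR" means scaling the masker so that the speech-to-noise power ratio hits the target value. A minimal sketch with synthetic signals; the waveforms are placeholders, not the study's stimuli:

```python
import numpy as np

def mix_at_snr(speech, noise, snr_db):
    """Scale `noise` so that 10*log10(P_speech / P_noise) == snr_db, then mix."""
    p_speech = np.mean(speech**2)
    p_noise = np.mean(noise**2)
    gain = np.sqrt(p_speech / (p_noise * 10 ** (snr_db / 10)))
    return speech + gain * noise

rng = np.random.default_rng(2)
fs = 16000
t = np.arange(fs) / fs
speech = np.sin(2 * np.pi * 220 * t)   # placeholder "speech" signal
babble = rng.normal(size=fs)           # placeholder multitalker-babble masker

mixed = mix_at_snr(speech, babble, snr_db=3.0)
```

Lowering `snr_db` from +6 to +3 raises the masker level by 3 dB relative to the speech, which is exactly the manipulation that made the noise conditions harder.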
Affiliation(s)
- Amelia Mainardi
- Department of Otolaryngology, Washington University in St. Louis
- Austin Luor
- Department of Otolaryngology, Washington University in St. Louis
- Yi-Fang Chiu
- Department of Speech, Language and Hearing Sciences, Saint Louis University
- Mitchell S. Sommers
- Department of Psychological and Brain Sciences, Washington University in St. Louis
9.
Eckert MA, Teubner-Rhodes S, Vaden KI, Ahlstrom JB, McClaskey CM, Dubno JR. Unique patterns of hearing loss and cognition in older adults' neural responses to cues for speech recognition difficulty. Brain Struct Funct 2022; 227:203-218. PMID: 34632538; PMCID: PMC9044122; DOI: 10.1007/s00429-021-02398-2.
Abstract
Older adults with hearing loss experience significant difficulties understanding speech in noise, perhaps due in part to limited benefit from supporting executive functions that enable the use of environmental cues signaling changes in listening conditions. Here we examined the degree to which 41 older adults (60.56-86.25 years) exhibited cortical responses to informative listening difficulty cues that communicated the listening difficulty for each trial compared to neutral cues that were uninformative of listening difficulty. Word recognition was significantly higher for informative compared to uninformative cues in a +10 dB signal-to-noise ratio (SNR) condition, and response latencies were significantly shorter for informative cues in the +10 dB SNR and the more challenging +2 dB SNR conditions. Informative cues were associated with elevated blood oxygenation level-dependent contrast in visual and parietal cortex. A cue-SNR interaction effect was observed in the cingulo-opercular (CO) network, such that activity only differed between SNR conditions when an informative cue was presented. That is, participants used the informative cues to prepare for changes in listening difficulty from one trial to the next. This cue-SNR interaction effect was driven by older adults with more low-frequency hearing loss and was not observed for those with more high-frequency hearing loss, poorer set-shifting task performance, and lower frontal operculum gray matter volume. These results suggest that proactive strategies for engaging CO adaptive control may be important for older adults with high-frequency hearing loss to optimize speech recognition in changing and challenging listening conditions.
Affiliation(s)
- Mark A. Eckert
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
- Kenneth I. Vaden
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
- Jayne B. Ahlstrom
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
- Carolyn M. McClaskey
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
- Judy R. Dubno
- Hearing Research Program, Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Avenue, MSC 55, Charleston, SC 29425-5500, USA
10.
Kommajosyula SP, Bartlett EL, Cai R, Ling L, Caspary DM. Corticothalamic projections deliver enhanced responses to medial geniculate body as a function of the temporal reliability of the stimulus. J Physiol 2021; 599:5465-5484. PMID: 34783016; PMCID: PMC10630908; DOI: 10.1113/jp282321.
Abstract
Ageing and challenging signal-in-noise conditions are known to engage the use of cortical resources to help maintain speech understanding. Extensive corticothalamic projections are thought to provide attentional, mnemonic and cognitive-related inputs in support of sensory inferior colliculus (IC) inputs to the medial geniculate body (MGB). Here we show that a decrease in modulation depth, which makes a periodic acoustic signal temporally less distinct, leads to a jittered ascending temporal code, changing MGB unit responses from adaptation to repetition enhancement, posited to aid identification of important communication and environmental sounds. Young-adult male Fischer Brown Norway rats, injected with the inhibitory opsin archaerhodopsin T (ArchT) into the primary auditory cortex (A1), were subsequently studied using optetrodes to record single-units in MGB. Decreasing the modulation depth of acoustic stimuli significantly increased repetition enhancement. Repetition enhancement was blocked by optical inactivation of corticothalamic terminals in MGB. These data support a role for corticothalamic projections in repetition enhancement, implying that predictive anticipation could be used to improve neural representation of weakly modulated sounds. KEY POINTS: In response to a less temporally distinct repeating sound with low modulation depth, medial geniculate body (MGB) single units show a switch from adaptation towards repetition enhancement. Repetition enhancement was reversed by blockade of MGB inputs from the auditory cortex. Collectively, these data argue that diminished acoustic temporal cues such as weak modulation engage cortical processes to enhance coding of those cues in auditory thalamus.
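Modulation depth in stimuli like these typically refers to sinusoidal amplitude modulation (SAM): s(t) = [1 + m*sin(2*pi*f_m*t)] * sin(2*pi*f_c*t), where m in [0, 1] sets how temporally distinct the envelope is. A sketch of generating stimuli at decreasing depths; the carrier and modulation rates below are arbitrary illustrations, not the study's parameters:

```python
import numpy as np

def sam_tone(fc, fm, m, dur, fs):
    """Sinusoidally amplitude-modulated tone with modulation depth m in [0, 1]."""
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + m * np.sin(2 * np.pi * fm * t)
    return envelope * np.sin(2 * np.pi * fc * t), envelope

fs = 44100
for m in (1.0, 0.5, 0.25):  # progressively less distinct temporal envelope
    signal, env = sam_tone(fc=1000.0, fm=10.0, m=m, dur=0.5, fs=fs)
    depth = (env.max() - env.min()) / (env.max() + env.min())
    print(f"m={m}: measured envelope depth = {depth:.3f}")
```

At m = 1 the envelope swings fully between 0 and 2 (maximally distinct periodicity); shrinking m toward 0 flattens the envelope, which is the "temporally less distinct" manipulation the abstract describes.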
Affiliation(s)
- Srinivasa P Kommajosyula
- Department of Pharmacology, Southern Illinois University School of Medicine, Springfield, IL, USA
- Edward L Bartlett
- Department of Biological Sciences and the Weldon School of Biomedical Engineering, Purdue University, West Lafayette, IN, USA
- Rui Cai
- Department of Pharmacology, Southern Illinois University School of Medicine, Springfield, IL, USA
- Lynne Ling
- Department of Pharmacology, Southern Illinois University School of Medicine, Springfield, IL, USA
- Donald M Caspary
- Department of Pharmacology, Southern Illinois University School of Medicine, Springfield, IL, USA
11.
Bhandari P, Demberg V, Kray J. Semantic Predictability Facilitates Comprehension of Degraded Speech in a Graded Manner. Front Psychol 2021; 12:714485. PMID: 34566795; PMCID: PMC8459870; DOI: 10.3389/fpsyg.2021.714485.
Abstract
Previous studies have shown that at moderate levels of spectral degradation, semantic predictability facilitates language comprehension. It is argued that when speech is degraded, listeners have narrowed expectations about the sentence endings; i.e., semantic prediction may be limited to only the most highly predictable sentence completions. The main objectives of this study were to (i) examine whether listeners form narrowed expectations or whether they form predictions across a wide range of probable sentence endings, (ii) assess whether the facilitatory effect of semantic predictability is modulated by perceptual adaptation to degraded speech, and (iii) use and establish a sensitive metric for the measurement of language comprehension. For this, we created 360 German Subject-Verb-Object sentences that varied in semantic predictability of a sentence-final target word in a graded manner (high, medium, and low) and in level of spectral degradation (noise-vocoding with 1, 4, 6, and 8 channels). These sentences were presented auditorily to two groups: one group (n = 48) performed a listening task in an unpredictable channel context in which the degraded speech levels were randomized, while the other group (n = 50) performed the task in a predictable channel context in which the degraded speech levels were blocked. The results showed that at 4-channel noise-vocoding, response accuracy was higher in high-predictability sentences than in medium-predictability sentences, which in turn was higher than in low-predictability sentences. This suggests that, in contrast to the narrowed-expectations view, comprehension of moderately degraded speech is facilitated in a graded manner across low-, medium-, and high-predictability sentences; listeners probabilistically preactivate upcoming words from a wide semantic space rather than limiting prediction to highly probable sentence endings. Additionally, in both channel contexts we did not observe learning effects; i.e., response accuracy did not increase over the course of the experiment, and response accuracy was higher in the predictable than in the unpredictable channel context. We speculate from these observations that when there is no trial-by-trial variation in the level of speech degradation, listeners adapt to speech quality over a long timescale; however, when there is trial-by-trial variation in a high-level semantic feature (e.g., sentence predictability), listeners do not adapt to a low-level perceptual property (e.g., speech quality) over a short timescale.
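For readers unfamiliar with the degradation method used here, channel noise-vocoding can be sketched in a few lines. This is a minimal FFT-based illustration, not the stimulus-generation code used in the study; the 100 Hz lower cutoff, logarithmic band spacing, and envelope smoothing window are all assumptions:

```python
import numpy as np

def noise_vocode(signal, fs, n_channels):
    """Crude FFT-based noise vocoder (illustrative only): split the input
    into logarithmically spaced frequency bands, extract each band's
    amplitude envelope, and use it to modulate band-limited noise."""
    rng = np.random.default_rng(0)
    n = len(signal)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    # Assumed analysis range: 100 Hz up to the Nyquist frequency.
    edges = np.geomspace(100.0, fs / 2.0, n_channels + 1)
    sig_f = np.fft.rfft(signal)
    noise_f = np.fft.rfft(rng.standard_normal(n))
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_mask = (freqs >= lo) & (freqs < hi)
        band = np.fft.irfft(sig_f * band_mask, n=n)
        # Envelope: rectify, then smooth with a short moving average.
        env = np.convolve(np.abs(band), np.ones(64) / 64.0, mode="same")
        carrier = np.fft.irfft(noise_f * band_mask, n=n)
        out += env * carrier
    return out
```

With few channels, spectral detail is largely destroyed while the per-band temporal envelope survives, which is the sense in which the 1-channel condition is the hardest in the experiment.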
Affiliation(s)
- Pratik Bhandari
- Department of Psychology, Saarland University, Saarbrücken, Germany
- Department of Language Science and Technology, Saarland University, Saarbrücken, Germany
- Vera Demberg
- Department of Language Science and Technology, Saarland University, Saarbrücken, Germany
- Department of Computer Science, Saarland University, Saarbrücken, Germany
- Jutta Kray
- Department of Psychology, Saarland University, Saarbrücken, Germany

12
Jafari Z, Kolb BE, Mohajerani MH. Age-related hearing loss and cognitive decline: MRI and cellular evidence. Ann N Y Acad Sci 2021; 1500:17-33. [PMID: 34114212 DOI: 10.1111/nyas.14617]
Abstract
Extensive evidence supports the association between age-related hearing loss (ARHL) and cognitive decline. It is, however, unknown whether a causal relationship exists between the two, or whether both result from shared mechanisms. This paper examines this relationship through a comprehensive review of MRI findings as well as evidence of cellular alterations. Our review of structural MRI studies demonstrates that ARHL is independently linked to accelerated atrophy of total and regional brain volumes and reduced white matter integrity. Resting-state fMRI studies of ARHL also show changes in spontaneous neural activity and brain functional connectivity, and task-based fMRI studies show alterations in brain areas supporting auditory, language, cognitive, and affective processing, independent of age. Although MRI findings support a causal relationship between ARHL and cognitive decline, the contribution of potential shared mechanisms should also be considered. In this regard, the review of cellular evidence indicates their role as possible common mechanisms underlying age-related changes in both hearing and cognition. Considering existing evidence, no single hypothesis can explain the link between ARHL and cognitive decline, and contributions from both causal (i.e., the sensory hypothesis) and shared (i.e., the common cause hypothesis) mechanisms are expected.
Affiliation(s)
- Zahra Jafari
- Department of Neuroscience, Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada
- Bryan E Kolb
- Department of Neuroscience, Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada
- Majid H Mohajerani
- Department of Neuroscience, Canadian Centre for Behavioural Neuroscience, University of Lethbridge, Lethbridge, Alberta, Canada

13
Pauquet J, Thiel CM, Mathys C, Rosemann S. Relationship between Memory Load and Listening Demands in Age-Related Hearing Impairment. Neural Plast 2021; 2021:8840452. [PMID: 34188676 PMCID: PMC8195652 DOI: 10.1155/2021/8840452]
Abstract
Age-related hearing loss has been associated with increased recruitment of frontal brain areas during speech perception to compensate for the decline in auditory input. This additional recruitment may bind resources otherwise needed for understanding speech. However, it is unknown how increased listening demands interact with increasing cognitive demands when processing speech in age-related hearing loss. The current study used a full-sentence working memory task manipulating demands on working memory and listening, and studied participants with untreated mild-to-moderate hearing loss (n = 20) and age-matched normal-hearing participants (n = 19) with functional MRI. On the behavioral level, we found a significant interaction of memory load and listening condition; this was, however, similar for both groups. Under low, but not high, memory load, listening condition significantly influenced task performance. Similarly, under easy but not difficult listening conditions, memory load had a significant effect on task performance. On the neural level, as measured by the BOLD response, we found increased responses under high compared to low memory load in the left supramarginal gyrus, left middle frontal gyrus, and left supplementary motor cortex, regardless of hearing ability. Furthermore, we found increased responses in the bilateral superior temporal gyri under easy compared to difficult listening conditions. We found no group differences, nor interactions of group with memory load or listening condition. This suggests that memory load and listening condition interacted on the behavioral level, but only increased memory load was reflected in increased BOLD responses in frontal and parietal brain regions. Hence, when evaluating listening abilities in elderly participants, memory load should be considered, as it might interfere with the assessed performance. We could not find any further evidence that BOLD responses for the different memory and listening conditions are affected by mild to moderate age-related hearing loss.
Affiliation(s)
- Julia Pauquet
- Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität, 26111 Oldenburg, Germany
- Christiane M. Thiel
- Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität, 26111 Oldenburg, Germany
- Cluster of Excellence “Hearing4all”, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany
- Christian Mathys
- Institute of Radiology and Neuroradiology, Evangelisches Krankenhaus, Carl von Ossietzky Universität Oldenburg, 26122 Oldenburg, Germany
- Research Center Neurosensory Science, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany
- Stephanie Rosemann
- Biological Psychology, Department of Psychology, School of Medicine and Health Sciences, Carl von Ossietzky Universität, 26111 Oldenburg, Germany
- Cluster of Excellence “Hearing4all”, Carl von Ossietzky Universität Oldenburg, 26111 Oldenburg, Germany

14
Griffiths TD, Lad M, Kumar S, Holmes E, McMurray B, Maguire EA, Billig AJ, Sedley W. How Can Hearing Loss Cause Dementia? Neuron 2020; 108:401-412. [PMID: 32871106 PMCID: PMC7664986 DOI: 10.1016/j.neuron.2020.08.003]
Abstract
Epidemiological studies identify midlife hearing loss as an independent risk factor for dementia, estimated to account for 9% of cases. We evaluate candidate brain bases for this relationship. These bases include a common pathology affecting the ascending auditory pathway and multimodal cortex, depletion of cognitive reserve due to an impoverished listening environment, and the occupation of cognitive resources when listening in difficult conditions. We also put forward an alternate mechanism, drawing on new insights into the role of the medial temporal lobe in auditory cognition. In particular, we consider how aberrant activity in the service of auditory pattern analysis, working memory, and object processing may interact with dementia pathology in people with hearing loss. We highlight how the effect of hearing interventions on dementia depends on the specific mechanism and suggest avenues for work at the molecular, neuronal, and systems levels to pin this down.
Affiliation(s)
- Timothy D Griffiths
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK; Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK; Human Brain Research Laboratory, Department of Neurosurgery, University of Iowa Hospitals and Clinics, Iowa City, IA 52242, USA
- Meher Lad
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK
- Sukhbinder Kumar
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK
- Emma Holmes
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Bob McMurray
- Departments of Psychological and Brain Sciences, Communication Sciences and Disorders, Otolaryngology, University of Iowa, Iowa City, IA 52242, USA
- Eleanor A Maguire
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- William Sedley
- Biosciences Institute, Newcastle University Medical School, Newcastle upon Tyne NE2 4HH, UK

15
Holmes E, Zeidman P, Friston KJ, Griffiths TD. Difficulties with Speech-in-Noise Perception Related to Fundamental Grouping Processes in Auditory Cortex. Cereb Cortex 2020; 31:1582-1596. [PMID: 33136138 PMCID: PMC7869094 DOI: 10.1093/cercor/bhaa311]
Abstract
In our everyday lives, we are often required to follow a conversation when background noise is present (“speech-in-noise” [SPIN] perception). SPIN perception varies widely—and people who are worse at SPIN perception are also worse at fundamental auditory grouping, as assessed by figure-ground tasks. Here, we examined the cortical processes that link difficulties with SPIN perception to difficulties with figure-ground perception using functional magnetic resonance imaging. We found strong evidence that the earliest stages of the auditory cortical hierarchy (left core and belt areas) are similarly disinhibited when SPIN and figure-ground tasks are more difficult (i.e., at target-to-masker ratios corresponding to 60% rather than 90% performance)—consistent with increased cortical gain at lower levels of the auditory hierarchy. Overall, our results reveal a common neural substrate for these basic (figure-ground) and naturally relevant (SPIN) tasks—which provides a common computational basis for the link between SPIN perception and fundamental auditory grouping.
Affiliation(s)
- Emma Holmes
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, UCL, London WC1N 3AR, UK
- Peter Zeidman
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, UCL, London WC1N 3AR, UK
- Karl J Friston
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, UCL, London WC1N 3AR, UK
- Timothy D Griffiths
- Wellcome Centre for Human Neuroimaging, UCL Queen Square Institute of Neurology, UCL, London WC1N 3AR, UK; Biosciences Institute, Faculty of Medical Sciences, Newcastle University, Newcastle upon Tyne NE2 4HH, UK

16
Slade K, Plack CJ, Nuttall HE. The Effects of Age-Related Hearing Loss on the Brain and Cognitive Function. Trends Neurosci 2020; 43:810-821. [PMID: 32826080 DOI: 10.1016/j.tins.2020.07.005]
Abstract
Age-related hearing loss (ARHL) is a common problem for older adults, leading to communication difficulties, isolation, and cognitive decline. Recently, hearing loss has been identified as potentially the most modifiable risk factor for dementia. Listening in challenging situations, or when the auditory system is damaged, strains cortical resources, and this may change how the brain responds to cognitively demanding situations more generally. We review the effects of ARHL on brain areas involved in speech perception, from the auditory cortex, through attentional networks, to the motor system. We explore current perspectives on the possible causal relationship between hearing loss, neural reorganisation, and cognitive impairment. Through this synthesis we aim to inspire innovative research and novel interventions for alleviating hearing loss and cognitive decline.
Affiliation(s)
- Kate Slade
- Department of Psychology, Lancaster University, Lancaster, UK
- Christopher J Plack
- Department of Psychology, Lancaster University, Lancaster, UK; Manchester Centre for Audiology and Deafness, School of Health Sciences, University of Manchester, Manchester, UK
- Helen E Nuttall
- Department of Psychology, Lancaster University, Lancaster, UK

17
Zekveld AA, van Scheepen JAM, Versfeld NJ, Kramer SE, van Steenbergen H. The Influence of Hearing Loss on Cognitive Control in an Auditory Conflict Task: Behavioral and Pupillometry Findings. J Speech Lang Hear Res 2020; 63:2483-2492. [PMID: 32610026 DOI: 10.1044/2020_jslhr-20-00107]
Abstract
Purpose The pupil dilation response is sensitive not only to auditory task demand but also to cognitive conflict. Conflict is induced by incompatible trials in auditory Stroop tasks in which participants have to identify the presentation location (left or right ear) of the words "left" or "right." Previous studies demonstrated that the compatibility effect is reduced if the trial is preceded by another incompatible trial (conflict adaptation). Here, we investigated the influence of hearing status on cognitive conflict and conflict adaptation in an auditory Stroop task. Method Two age-matched groups consisting of 32 normal-hearing participants (M age = 52 years, age range: 25-67 years) and 28 participants with hearing impairment (M age = 52 years, age range: 23-64 years) performed an auditory Stroop task. We assessed the effects of hearing status and stimulus compatibility on reaction times (RTs) and pupil dilation responses. We furthermore analyzed the Pearson correlation coefficients between age, degree of hearing loss, and the compatibility effects on the RT and pupil response data across all participants. Results As expected, the RTs were longer and pupil dilation was larger for incompatible relative to compatible trials. Furthermore, these effects were reduced for trials following incompatible (as compared to compatible) trials (conflict adaptation). No general effect of hearing status was observed, but the correlations suggested that higher age and a larger degree of hearing loss were associated with more interference of current incompatibility on RTs. Conclusions Conflict processing and adaptation effects were observed on the RTs and pupil dilation responses in an auditory Stroop task. No general effects of hearing status were observed, but the correlations suggested that higher age and a greater degree of hearing loss were related to reduced conflict processing ability. The current study underlines the relevance of taking into account cognitive control and conflict adaptation processes.
Affiliation(s)
- Adriana A Zekveld
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, De Boelelaan, Amsterdam, the Netherlands
- J A M van Scheepen
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, De Boelelaan, Amsterdam, the Netherlands
- Niek J Versfeld
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, De Boelelaan, Amsterdam, the Netherlands
- Sophia E Kramer
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology-Head and Neck Surgery, Ear and Hearing, Amsterdam Public Health Research Institute, De Boelelaan, Amsterdam, the Netherlands
- Henk van Steenbergen
- Cognitive Psychology Unit, Institute of Psychology, University of Leiden, the Netherlands
- Leiden Institute for Brain and Cognition, the Netherlands

18
Venezia JH, Leek MR, Lindeman MP. Suprathreshold Differences in Competing Speech Perception in Older Listeners With Normal and Impaired Hearing. J Speech Lang Hear Res 2020; 63:2141-2161. [PMID: 32603618 DOI: 10.1044/2020_jslhr-19-00324]
Abstract
Purpose Age-related declines in auditory temporal processing and cognition make older listeners vulnerable to interference from competing speech. This vulnerability may be increased in older listeners with sensorineural hearing loss due to additional effects of spectral distortion and accelerated cognitive decline. The goal of this study was to uncover differences between older hearing-impaired (OHI) listeners and older normal-hearing (ONH) listeners in the perceptual encoding of competing speech signals. Method Age-matched groups of 10 OHI and 10 ONH listeners performed the coordinate response measure task with a synthetic female target talker and a male competing talker at a target-to-masker ratio of +3 dB. Individualized gain was provided to OHI listeners. Each listener completed 50 baseline and 800 "bubbles" trials in which randomly selected segments of the speech modulation power spectrum (MPS) were retained on each trial while the remainder was filtered out. Average performance was fixed at 50% correct by adapting the number of segments retained. Multinomial regression was used to estimate weights showing the regions of the MPS associated with performance (a "classification image" or CImg). Results The CImg weights differed significantly between the groups in two MPS regions: a region encoding the shared phonetic content of the two talkers and a region encoding the competing (male) talker's voice. The OHI listeners demonstrated poorer encoding of the phonetic content and increased vulnerability to interference from the competing talker. Individual differences in CImg weights explained over 75% of the variance in baseline performance in the OHI listeners, whereas differences in high-frequency pure-tone thresholds explained only 10%. Conclusion Suprathreshold deficits in the encoding of low- to mid-frequency (~5-10 Hz) temporal modulations (which may reflect poorer "dip listening") and in auditory grouping at a perceptual and/or cognitive level are responsible for the relatively poor performance of OHI versus ONH listeners on a different-gender competing speech task. Supplemental Material: https://doi.org/10.23641/asha.12568472.
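The "bubbles" logic above (relating randomly revealed regions of the modulation power spectrum to trial outcomes) can be illustrated with a much simpler estimator than the multinomial regression the authors used; the function name and the correct-minus-incorrect contrast below are illustrative assumptions, not the study's analysis:

```python
import numpy as np

def classification_image(masks, correct):
    """Toy 'bubbles' analysis (illustrative only): estimate which revealed
    segments drive performance by contrasting the average segment mask on
    correct vs. incorrect trials. A crude stand-in for the multinomial
    regression used in the study."""
    masks = np.asarray(masks, dtype=float)   # trials x segments, 0/1
    correct = np.asarray(correct, dtype=bool)
    return masks[correct].mean(axis=0) - masks[~correct].mean(axis=0)
```

Segments whose presence systematically accompanies correct responses receive large positive weights, which is the intuition behind a classification image.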
Affiliation(s)
- Jonathan H Venezia
- VA Loma Linda Healthcare System, CA
- Department of Otolaryngology-Head and Neck Surgery, School of Medicine, Loma Linda University, CA
- Marjorie R Leek
- VA Loma Linda Healthcare System, CA
- Department of Otolaryngology-Head and Neck Surgery, School of Medicine, Loma Linda University, CA

19
Jaeger M, Mirkovic B, Bleichner MG, Debener S. Decoding the Attended Speaker From EEG Using Adaptive Evaluation Intervals Captures Fluctuations in Attentional Listening. Front Neurosci 2020; 14:603. [PMID: 32612507 PMCID: PMC7308709 DOI: 10.3389/fnins.2020.00603]
Abstract
Listeners differ in their ability to attend to a speech stream in the presence of a competing sound. Differences in speech intelligibility in noise cannot be fully explained by hearing ability, which suggests the involvement of additional cognitive factors. A better understanding of the temporal fluctuations in the ability to pay selective auditory attention to a desired speech stream may help in explaining these variabilities. In order to better understand the temporal dynamics of selective auditory attention, we developed an online auditory attention decoding (AAD) processing pipeline based on speech envelope tracking in the electroencephalogram (EEG). Participants had to attend to one audiobook story while a second one had to be ignored. Online AAD was applied to track the attention toward the target speech signal. Individual temporal attention profiles were computed by combining an established AAD method with an adaptive staircase procedure. The individual decoding performance over time was analyzed and linked to behavioral performance as well as subjective ratings of listening effort, motivation, and fatigue. The grand average attended speaker decoding profile derived in the online experiment indicated performance above chance level. Parameters describing the individual AAD performance in each testing block indicated significant differences in decoding performance over time to be closely related to the behavioral performance in the selective listening task. Further, an exploratory analysis indicated that subjects with poor decoding performance reported higher listening effort and fatigue compared to good performers. Taken together, our results show that online EEG-based AAD in a complex listening situation is feasible. Adaptive attended speaker decoding profiles over time could be used as an objective measure of behavioral performance and listening effort. The developed online processing pipeline could also serve as a basis for future EEG-based near real-time auditory neurofeedback systems.
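The core of envelope-based auditory attention decoding can be sketched in a few lines. This is a toy backward-model sketch (ordinary least squares, training and testing on the same data for brevity); the function name, data shapes, and decision rule are assumptions rather than the authors' pipeline:

```python
import numpy as np

def decode_attended(eeg, env_a, env_b, train_env):
    """Toy auditory attention decoder (illustrative only): fit a linear
    backward model mapping EEG channels to the attended-speech envelope,
    then label whichever candidate envelope correlates better with the
    reconstruction. A real pipeline would add time lags, regularization,
    and cross-validation."""
    w, *_ = np.linalg.lstsq(eeg, train_env, rcond=None)  # envelope ~ eeg @ w
    recon = eeg @ w
    corr_a = np.corrcoef(recon, env_a)[0, 1]
    corr_b = np.corrcoef(recon, env_b)[0, 1]
    return "A" if corr_a > corr_b else "B"
```

Evaluating this decision over successive windows, as the study does adaptively, yields a decoding-performance profile over time.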
Affiliation(s)
- Manuela Jaeger
- Neuropsychology Lab, Department of Psychology, University of Oldenburg, Oldenburg, Germany; Fraunhofer Institute for Digital Media Technology IDMT, Division Hearing, Speech and Audio Technology, Oldenburg, Germany
- Bojana Mirkovic
- Neuropsychology Lab, Department of Psychology, University of Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4all, University of Oldenburg, Oldenburg, Germany
- Martin G Bleichner
- Neuropsychology Lab, Department of Psychology, University of Oldenburg, Oldenburg, Germany; Neurophysiology of Everyday Life Lab, Department of Psychology, University of Oldenburg, Oldenburg, Germany
- Stefan Debener
- Neuropsychology Lab, Department of Psychology, University of Oldenburg, Oldenburg, Germany; Cluster of Excellence Hearing4all, University of Oldenburg, Oldenburg, Germany; Research Center for Neurosensory Science, University of Oldenburg, Oldenburg, Germany

20
Rogers CS, Jones MS, McConkey S, Spehar B, Van Engen KJ, Sommers MS, Peelle JE. Age-Related Differences in Auditory Cortex Activity During Spoken Word Recognition. Neurobiol Lang 2020; 1:452-473. [PMID: 34327333 PMCID: PMC8318202 DOI: 10.1162/nol_a_00021]
Abstract
Understanding spoken words requires the rapid matching of a complex acoustic stimulus with stored lexical representations. The degree to which brain networks supporting spoken word recognition are affected by adult aging remains poorly understood. In the current study we used fMRI to measure the brain responses to spoken words in two conditions: an attentive listening condition, in which no response was required, and a repetition task. Listeners were 29 young adults (aged 19-30 years) and 32 older adults (aged 65-81 years) without self-reported hearing difficulty. We found largely similar patterns of activity during word perception for both young and older adults, centered on the bilateral superior temporal gyrus. As expected, the repetition condition resulted in significantly more activity in areas related to motor planning and execution (including the premotor cortex and supplemental motor area) compared to the attentive listening condition. Importantly, however, older adults showed significantly less activity in probabilistically defined auditory cortex than young adults when listening to individual words in both the attentive listening and repetition tasks. Age differences in auditory cortex activity were seen selectively for words (no age differences were present for 1-channel vocoded speech, used as a control condition), and could not be easily explained by accuracy on the task, movement in the scanner, or hearing sensitivity (available on a subset of participants). These findings indicate largely similar patterns of brain activity for young and older adults when listening to words in quiet, but suggest less recruitment of auditory cortex by the older adults.
Affiliation(s)
- Chad S. Rogers
- Department of Psychology, Union College, Schenectady, NY, USA
- Michael S. Jones
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
- Sarah McConkey
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
- Brent Spehar
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA
- Kristin J. Van Engen
- Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, MO, USA
- Mitchell S. Sommers
- Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, MO, USA
- Jonathan E. Peelle
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, MO, USA

21
Kommajosyula SP, Cai R, Bartlett E, Caspary DM. Top-down or bottom up: decreased stimulus salience increases responses to predictable stimuli of auditory thalamic neurons. J Physiol 2019; 597:2767-2784. [PMID: 30924931 DOI: 10.1113/jp277450]
Abstract
KEY POINTS Temporal imprecision leads to deficits in the comprehension of signals in cluttered acoustic environments, and the elderly are shown to use cognitive resources to disambiguate these signals. To mimic ageing in young rats, we delivered sound signals that are temporally degraded, which led to temporally imprecise neural codes. Instead of adaptation to repeated stimuli, with degraded signals there was a relative increase in firing rates, similar to that seen in aged rats. We interpret this increase with repetition as a repair mechanism for strengthening the internal representations of degraded signals by higher-order structures. ABSTRACT To better understand speech in challenging environments, older adults increasingly use top-down cognitive and contextual resources. The medial geniculate body (MGB) integrates ascending inputs with descending predictions to dynamically gate auditory representations based on salience and context. A previous MGB single-unit study found an increased preference for predictable sinusoidal amplitude modulated (SAM) stimuli in aged rats relative to young rats. The results suggested that the age-degraded/jittered up-stream acoustic code may engender an increased preference for predictable/repeating acoustic signals, possibly reflecting increased use of top-down resources. In the present study, we recorded from units in young-adult MGB, comparing responses to standard SAM with those evoked by less salient SAM (degraded) stimuli. We hypothesized that degrading the SAM stimulus would simulate the degraded ascending acoustic code seen in the elderly, increasing the preference for predictable stimuli. Single units were recorded from clusters of advanceable tetrodes implanted above the MGB of young-adult awake rats. Less salient SAM significantly increased the preference for predictable stimuli, especially at higher modulation frequencies. Rather than adaptation, higher modulation frequencies elicited increased numbers of spikes with each successive trial/repeat of the less salient SAM. These findings are consistent with previous findings obtained in aged rats, suggesting that less salient acoustic signals engage the additional use of top-down resources, as reflected by an increased preference for repeating stimuli that enhance the representation of complex environmental/communication sounds.
Collapse
Affiliation(s)
- Srinivasa P Kommajosyula
- Southern Illinois University School of Medicine, , Department of Pharmacology, Springfield, IL, USA
| | - Rui Cai
- Southern Illinois University School of Medicine, Department of Pharmacology, Springfield, IL, USA
| | - Edward Bartlett
- Department of Biological Sciences, Purdue University, West Lafayette, IN, USA
| | - Donald M Caspary
- Southern Illinois University School of Medicine, Department of Pharmacology, Springfield, IL, USA
| |
Collapse
|
22
|
Presacco A, Simon JZ, Anderson S. Speech-in-noise representation in the aging midbrain and cortex: Effects of hearing loss. PLoS One 2019; 14:e0213899. [PMID: 30865718 PMCID: PMC6415857 DOI: 10.1371/journal.pone.0213899] [Citation(s) in RCA: 62] [Impact Index Per Article: 12.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2018] [Accepted: 03/04/2019] [Indexed: 01/24/2023] Open
Abstract
Age-related deficits in speech-in-noise understanding pose a significant problem for older adults. Despite the vast number of studies investigating the neural mechanisms behind these communication difficulties, the role of central auditory deficits, beyond peripheral hearing loss, remains unclear. The current study builds on our previous work on the effect of aging in normal-hearing individuals and aims to estimate the effect of peripheral hearing loss on the representation of speech in noise in two critical regions of the aging auditory pathway: the midbrain and cortex. Data from 14 hearing-impaired older adults were added to a previously published dataset of 17 normal-hearing younger adults and 15 normal-hearing older adults. The midbrain response, measured by the frequency-following response (FFR), and the cortical response, measured with magnetoencephalography (MEG), were recorded from subjects listening to speech in quiet and in noise at four signal-to-noise ratios (SNRs): +3, 0, -3, and -6 dB. Both groups of older listeners showed weaker midbrain response amplitudes and overrepresentation of cortical responses compared to younger listeners. No significant differences were found between the two older groups when the midbrain and cortical measurements were analyzed independently. However, significant differences between the older groups emerged when investigating midbrain-cortex relationships: only hearing-impaired older adults showed significant correlations between midbrain and cortical measurements, suggesting that hearing loss may alter reciprocal connections between lower and higher levels of the auditory pathway. 
The overall paucity of differences in midbrain or cortical responses between the two older groups suggests that age-related temporal processing deficits may contribute to older adults' communication difficulties beyond what would be predicted from peripheral hearing loss alone; however, hearing loss does appear to alter the connectivity between midbrain and cortex. These results may have important ramifications for audiology, as they indicate that algorithms in clinical devices, such as hearing aids, should account for age-related temporal processing deficits to maximize user benefit.
Collapse
Affiliation(s)
- Alessandro Presacco
- Department of Otolaryngology, University of California, Irvine, CA, United States of America
- Center for Hearing Research, University of California, Irvine, CA, United States of America
| | - Jonathan Z. Simon
- Department of Electrical & Computer Engineering, University of Maryland, College Park, MD, United States of America
- Department of Biology, University of Maryland, College Park, MD, United States of America
- Institute for Systems Research, University of Maryland, College Park, MD, United States of America
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, MD, United States of America
| | - Samira Anderson
- Neuroscience and Cognitive Science Program, University of Maryland, College Park, MD, United States of America
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, United States of America
| |
Collapse
|
23
|
Peelle JE. Listening Effort: How the Cognitive Consequences of Acoustic Challenge Are Reflected in Brain and Behavior. Ear Hear 2019; 39:204-214. [PMID: 28938250 PMCID: PMC5821557 DOI: 10.1097/aud.0000000000000494] [Citation(s) in RCA: 309] [Impact Index Per Article: 61.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/28/2017] [Accepted: 07/28/2017] [Indexed: 02/04/2023]
Abstract
Everyday conversation frequently includes challenges to the clarity of the acoustic speech signal, including hearing impairment, background noise, and foreign accents. Although an obvious problem is the increased risk of making word identification errors, extracting meaning from a degraded acoustic signal is also cognitively demanding, which contributes to increased listening effort. The concepts of cognitive demand and listening effort are critical in understanding the challenges listeners face in comprehension, which are not fully predicted by audiometric measures. In this article, the authors review converging behavioral, pupillometric, and neuroimaging evidence that understanding acoustically degraded speech requires additional cognitive support and that this cognitive load can interfere with other operations such as language processing and memory for what has been heard. Behaviorally, acoustic challenge is associated with increased errors in speech understanding, poorer performance on concurrent secondary tasks, more difficulty processing linguistically complex sentences, and reduced memory for verbal material. Measures of pupil dilation support the challenge associated with processing a degraded acoustic signal, indirectly reflecting an increase in neural activity. Finally, functional brain imaging reveals that the neural resources required to understand degraded speech extend beyond traditional perisylvian language networks, most commonly including regions of prefrontal cortex, premotor cortex, and the cingulo-opercular network. Far from being exclusively an auditory problem, acoustic degradation presents listeners with a systems-level challenge that requires the allocation of executive cognitive resources. An important point is that a number of dissociable processes can be engaged to understand degraded speech, including verbal working memory and attention-based performance monitoring. 
The specific resources required likely differ as a function of the acoustic, linguistic, and cognitive demands of the task, as well as individual differences in listeners' abilities. A greater appreciation of cognitive contributions to processing degraded speech is critical in understanding individual differences in comprehension ability, variability in the efficacy of assistive devices, and guiding rehabilitation approaches to reducing listening effort and facilitating communication.
Collapse
Affiliation(s)
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in Saint Louis, Saint Louis, Missouri, USA
| |
Collapse
|
24
|
Leon M, Woo C. Environmental Enrichment and Successful Aging. Front Behav Neurosci 2018; 12:155. [PMID: 30083097 PMCID: PMC6065351 DOI: 10.3389/fnbeh.2018.00155] [Citation(s) in RCA: 31] [Impact Index Per Article: 5.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/29/2018] [Accepted: 07/04/2018] [Indexed: 12/18/2022] Open
Abstract
The human brain sustains a slow but progressive decline in function as it ages, and these changes are particularly profound in cognitive processing. A potential contributor to this deterioration is the gradual decline in the functioning of multiple sensory systems and the effects they have on brain areas that mediate cognitive function. In older adults, diminished capacity is typically observed in the visual, auditory, masticatory, olfactory, and motor systems, and these age-related declines are associated with both a decline in cognitive proficiency and a loss of neurons in regions of the brain. We review how the loss of hearing, vision, mastication skills, olfaction, and motor function accompanies cognitive loss, and how improved functioning of these systems may aid in restoring cognitive abilities in older adults. The human brain appears to require a great deal of stimulation to maintain its cognitive efficacy as people age, and environmental enrichment may aid in its maintenance and recovery.
Collapse
Affiliation(s)
- Michael Leon
- Department of Neurobiology and Behavior, University of California, Irvine, Irvine, CA, United States
| | - Cynthia Woo
- Department of Neurobiology and Behavior, University of California, Irvine, Irvine, CA, United States
| |
Collapse
|
25
|
Koeritzer MA, Rogers CS, Van Engen KJ, Peelle JE. The Impact of Age, Background Noise, Semantic Ambiguity, and Hearing Loss on Recognition Memory for Spoken Sentences. J Speech Lang Hear Res 2018; 61:740-751. [PMID: 29450493 PMCID: PMC5963044 DOI: 10.1044/2017_jslhr-h-17-0077] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/27/2017] [Revised: 08/28/2017] [Accepted: 09/20/2017] [Indexed: 05/20/2023]
Abstract
PURPOSE The goal of this study was to determine how background noise, linguistic properties of spoken sentences, and listener abilities (hearing sensitivity and verbal working memory) affect cognitive demand during auditory sentence comprehension. METHOD We tested 30 young adults and 30 older adults. Participants heard lists of sentences in quiet and in 8-talker babble at signal-to-noise ratios of +15 dB and +5 dB, which increased acoustic challenge but left the speech largely intelligible. Half of the sentences contained semantically ambiguous words to additionally manipulate cognitive challenge. Following each list, participants performed a visual recognition memory task in which they viewed written sentences and indicated whether they remembered hearing the sentence previously. RESULTS Recognition memory (indexed by d') was poorer for acoustically challenging sentences, poorer for sentences containing ambiguous words, and differentially poorer for noisy high-ambiguity sentences. Similar patterns were observed for Z-transformed response time data. There were no main effects of age, but age interacted with both acoustic clarity and semantic ambiguity such that older adults' recognition memory was poorer for acoustically degraded high-ambiguity sentences than the young adults'. Within the older adult group, exploratory correlation analyses suggested that poorer hearing ability was associated with poorer recognition memory for sentences in noise, and better verbal working memory was associated with better recognition memory for sentences in noise. CONCLUSIONS Our results demonstrate listeners' reliance on domain-general cognitive processes when listening to acoustically challenging speech, even when speech is highly intelligible. Acoustic challenge and semantic ambiguity both reduce the accuracy of listeners' recognition memory for spoken sentences. SUPPLEMENTAL MATERIALS https://doi.org/10.23641/asha.5848059.
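Recognition memory in this study is indexed by d′, the standard signal-detection sensitivity measure computed from hit and false-alarm rates. As a minimal illustration (not the authors' analysis code; the function name, correction choice, and example counts are assumptions), d′ can be computed with the Python standard library:

```python
from statistics import NormalDist

def d_prime(hits, misses, false_alarms, correct_rejections):
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate).

    A log-linear correction (add 0.5 to each cell) keeps the rates
    away from 0 and 1, avoiding infinite z-scores.
    """
    hit_rate = (hits + 0.5) / (hits + misses + 1)
    fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1)
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Hypothetical example: 20 old sentences (16 recognized) and
# 20 new sentences (4 false alarms)
score = d_prime(16, 4, 4, 16)
```

With equal hit and false-alarm rates d′ is 0; higher values indicate better discrimination of previously heard sentences from new ones.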
Collapse
Affiliation(s)
- Margaret A Koeritzer
- Program in Audiology and Communication Sciences, Washington University in St. Louis, MO
| | - Chad S Rogers
- Department of Otolaryngology, Washington University in St. Louis, MO
| | - Kristin J Van Engen
- Department of Psychological and Brain Sciences and Program in Linguistics, Washington University in St. Louis, MO
| | - Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, MO
| |
Collapse
|
26
|
Is Listening in Noise Worth It? The Neurobiology of Speech Recognition in Challenging Listening Conditions. Ear Hear 2018; 37 Suppl 1:101S-10S. [PMID: 27355759 DOI: 10.1097/aud.0000000000000300] [Citation(s) in RCA: 80] [Impact Index Per Article: 13.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
This review examines findings from functional neuroimaging studies of speech recognition in noise to provide a neural systems level explanation for the effort and fatigue that can be experienced during speech recognition in challenging listening conditions. Neuroimaging studies of speech recognition consistently demonstrate that challenging listening conditions engage neural systems that are used to monitor and optimize performance across a wide range of tasks. These systems appear to improve speech recognition in younger and older adults, but sustained engagement of these systems also appears to produce an experience of effort and fatigue that may affect the value of communication. When considered in the broader context of the neuroimaging and decision making literature, the speech recognition findings from functional imaging studies indicate that the expected value, or expected level of speech recognition given the difficulty of listening conditions, should be considered when measuring effort and fatigue. The authors propose that the behavioral economics or neuroeconomics of listening can provide a conceptual and experimental framework for understanding effort and fatigue that may have clinical significance.
Collapse
|
27
|
A Novel Communication Value Task Demonstrates Evidence of Response Bias in Cases with Presbyacusis. Sci Rep 2017; 7:16512. [PMID: 29184188 PMCID: PMC5705661 DOI: 10.1038/s41598-017-16673-y] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/01/2017] [Accepted: 11/06/2017] [Indexed: 01/21/2023] Open
Abstract
Decision-making about the expected value of an experience or behavior can explain hearing health behaviors in older adults with hearing loss. Forty-four middle-aged to older adults (68.45 ± 7.73 years) performed a task in which they decided whether information from a surgeon or an administrative assistant would be important to their health in hypothetical communication scenarios across visual signal-to-noise ratios (SNRs). Participants could also choose to view the briefly presented sentences multiple times. The number of these effortful attempts to read the stimuli served as a measure of demand for information to make a health-importance decision. Participants with poorer high-frequency hearing more frequently decided that information was important to their health than participants with better high-frequency hearing. This appeared to reflect a response bias, because participants with high-frequency hearing loss demonstrated shorter response latencies when they rated the sentences as important to their health. However, elevated high-frequency hearing thresholds did not predict demand for information to make a health-importance decision. The results highlight the utility of a performance-based measure to characterize effort and expected value in task performance among older adults with hearing loss.
Collapse
|
28
|
Li J, Guo H, Ge L, Cheng L, Wang J, Li H, Zhang K, Xiang J, Chen J, Zhang H, Xu Y. Mechanism of Cerebralcare Granule® for Improving Cognitive Function in Resting-State Brain Functional Networks of Sub-healthy Subjects. Front Neurosci 2017; 11:410. [PMID: 28769748 PMCID: PMC5509764 DOI: 10.3389/fnins.2017.00410] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/04/2017] [Accepted: 06/30/2017] [Indexed: 11/13/2022] Open
Abstract
Cerebralcare Granule® (CG), a Chinese herbal medicine, has been used to ameliorate cognitive impairment induced by ischemia or mental disorders. The ability of CG to improve health status and cognitive function has drawn researchers' attention, but the brain circuits underlying its ameliorative effects remain unclear. The present study aimed to explore the neurobiological mechanisms by which CG improves cognitive function in sub-healthy subjects using resting-state functional magnetic resonance imaging (fMRI). Thirty sub-healthy participants were instructed to take one 2.5-g package of CG three times a day for 3 months. Clinical cognitive functions were assessed with the Chinese Revised Wechsler Adult Intelligence Scale (WAIS-RC) and Wechsler Memory Scale (WMS), and fMRI scans were performed at baseline and at the end of the intervention. Functional brain network data were analyzed using conventional network metrics (CNM) and frequent subgraph mining (FSM). An additional 21 sub-healthy participants were enrolled as an untreated control group for the cognitive assessments. We found that administering CG improved full-scale intelligence quotient (FIQ) and memory quotient (MQ) scores. Following CG treatment, the topological properties of functional brain networks in the CG group were altered in various frontal, temporal, and occipital cortical regions and in several subcortical regions, including essential components of the executive attention network, the salience network, and the sensory-motor network. The nodes involved in the FSM results were largely consistent with the CNM findings, and the changes in nodal metrics correlated with improved cognitive function. These findings indicate that CG can improve sub-healthy subjects' cognitive function by altering functional brain networks, providing a foundation for future studies of the physiological mechanism of CG.
Collapse
Affiliation(s)
- Jing Li
- Department of Humanities and Social Science, Shanxi Medical University, Taiyuan, China
| | - Hao Guo
- Department of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, China
| | - Ling Ge
- Department of Humanities and Social Science, Shanxi Medical University, Taiyuan, China; Department of Medical Psychology, Shanxi Medical College for Continuing Education, Taiyuan, China
| | - Long Cheng
- Department of Psychiatry, First Hospital, First Clinical Medical College of Shanxi Medical University, Taiyuan, China
| | - Junjie Wang
- Department of Humanities and Social Science, Shanxi Medical University, Taiyuan, China
| | - Hong Li
- Department of Humanities and Social Science, Shanxi Medical University, Taiyuan, China
| | - Kerang Zhang
- Department of Psychiatry, First Hospital, First Clinical Medical College of Shanxi Medical University, Taiyuan, China
| | - Jie Xiang
- Department of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, China
| | - Junjie Chen
- Department of Computer Science and Technology, Taiyuan University of Technology, Taiyuan, China
| | - Hui Zhang
- Department of Radiology, First Hospital of Shanxi Medical University, Taiyuan, China
| | - Yong Xu
- Department of Psychiatry, First Hospital, First Clinical Medical College of Shanxi Medical University, Taiyuan, China; MDT Center for Cognitive Impairment and Sleep Disorders, First Hospital, First Clinical Medical College of Shanxi Medical University, Taiyuan, China
| |
Collapse
|
29
|
Cognitive persistence: Development and validation of a novel measure from the Wisconsin Card Sorting Test. Neuropsychologia 2017; 102:95-108. [PMID: 28552783 DOI: 10.1016/j.neuropsychologia.2017.05.027] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/06/2017] [Revised: 05/23/2017] [Accepted: 05/25/2017] [Indexed: 12/30/2022]
Abstract
The Wisconsin Card Sorting Test (WCST) has long been used as a neuropsychological assessment of executive function abilities, in particular, cognitive flexibility or "set-shifting". Recent advances in scoring the task have helped to isolate specific WCST performance metrics that index set-shifting abilities and have improved our understanding of how prefrontal and parietal cortex contribute to set-shifting. We present evidence that the ability to overcome task difficulty to achieve a goal, or "cognitive persistence", is another important prefrontal function that is characterized by the WCST and that can be differentiated from efficient set-shifting. This novel measure of cognitive persistence was developed using the WCST-64 in an adult lifespan sample of 230 participants. The measure was validated using individual variation in cingulo-opercular cortex function in a sub-sample of older adults who had completed a challenging speech recognition in noise fMRI task. Specifically, older adults with higher cognitive persistence were more likely to demonstrate word recognition benefit from cingulo-opercular activity. The WCST-derived cognitive persistence measure can be used to disentangle neural processes involved in set-shifting from those involved in persistence.
Collapse
|
30
|
Peelle JE. Introduction to Special Issue on Age, Hearing, and Speech Comprehension. Exp Aging Res 2016; 42:1-2. [PMID: 26683037 DOI: 10.1080/0361073x.2016.1108714] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022]
Affiliation(s)
- Jonathan E Peelle
- Department of Otolaryngology, Washington University in St. Louis, St. Louis, Missouri, USA
| |
Collapse
|
31
|
Wang S, Yang M, Du S, Yang J, Liu B, Gorriz JM, Ramírez J, Yuan TF, Zhang Y. Wavelet Entropy and Directed Acyclic Graph Support Vector Machine for Detection of Patients with Unilateral Hearing Loss in MRI Scanning. Front Comput Neurosci 2016; 10:106. [PMID: 27807415 PMCID: PMC5069288 DOI: 10.3389/fncom.2016.00106] [Citation(s) in RCA: 41] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/23/2016] [Accepted: 09/28/2016] [Indexed: 12/17/2022] Open
Abstract
Highlights: We develop a computer-aided diagnosis system for unilateral hearing loss detection in structural magnetic resonance imaging. Wavelet entropy is introduced to extract global image features from brain images. A directed acyclic graph is employed to give the support vector machine the ability to handle multi-class problems. The developed system achieves an overall accuracy of 95.1% on the three-class problem of differentiating left-sided and right-sided hearing loss from healthy controls.
Aim: Sensorineural hearing loss (SNHL) is correlated with many neurodegenerative diseases, and computer-vision-based methods are increasingly used to detect it automatically. Materials: We had 49 subjects in total, scanned with a 3.0T MRI (Siemens Medical Solutions, Erlangen, Germany): 14 patients with right-sided hearing loss (RHL), 15 patients with left-sided hearing loss (LHL), and 20 healthy controls (HC). Method: We treated this as a three-class classification problem: RHL, LHL, and HC. Wavelet entropy (WE) was extracted from the magnetic resonance images of each subject and then submitted to a directed acyclic graph support vector machine (DAG-SVM). Results: Ten repetitions of 10-fold cross-validation showed that a 3-level decomposition yields an overall accuracy of 95.10% on this three-class problem, higher than feedforward neural networks, decision trees, and naive Bayes classifiers. Conclusions: This computer-aided diagnosis system is promising. We hope this study will encourage further computer-vision approaches to detecting hearing loss.
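Wavelet entropy summarizes how a signal's energy is distributed across wavelet subbands. The study's exact feature pipeline is not given in the abstract; as a rough one-dimensional sketch under stated assumptions (Haar wavelet, Shannon entropy of relative subband energies; function names are hypothetical), it might look like:

```python
import math

def haar_levels(signal, levels=3):
    """Multi-level 1-D Haar decomposition.

    Returns the detail coefficients of each level plus the final
    approximation. Odd-length inputs drop the trailing sample.
    """
    details = []
    approx = list(signal)
    for _ in range(levels):
        if len(approx) < 2:
            break
        a = [(approx[i] + approx[i + 1]) / math.sqrt(2)
             for i in range(0, len(approx) - 1, 2)]
        d = [(approx[i] - approx[i + 1]) / math.sqrt(2)
             for i in range(0, len(approx) - 1, 2)]
        details.append(d)
        approx = a
    return details, approx

def wavelet_entropy(signal, levels=3):
    """Shannon entropy of the relative energy in each wavelet subband."""
    details, approx = haar_levels(signal, levels)
    subbands = details + [approx]
    energies = [sum(c * c for c in band) for band in subbands]
    total = sum(energies)
    probs = [e / total for e in energies if e > 0]
    return -sum(p * math.log(p) for p in probs)
```

A constant signal concentrates all of its energy in the approximation band and yields zero entropy, while signals with energy spread across scales yield higher values; the abstract's 3-level decomposition corresponds to `levels=3` here, applied (in the study) to 2-D images rather than this 1-D sketch.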
Collapse
Affiliation(s)
- Shuihua Wang
- School of Electronic Science and Engineering, Nanjing University, Nanjing, China; School of Computer Science and Technology, Nanjing Normal University, Nanjing, China; Hunan Provincial Key Laboratory of Network Investigational Technology, Hunan Police Academy, Changsha, China
| | - Ming Yang
- Department of Radiology, Nanjing Children's Hospital, Nanjing Medical University, Nanjing, China; Key Laboratory of Intelligent Computing and Information Processing in Fujian Provincial University, Quanzhou Normal University, Quanzhou, China
| | - Sidan Du
- School of Electronic Science and Engineering, Nanjing University, Nanjing, China
| | - Jiquan Yang
- Jiangsu Key Laboratory of 3D Printing Equipment and Manufacturing, Nanjing, China
| | - Bin Liu
- Department of Radiology, Zhong-Da Hospital of Southeast University, Nanjing, China
| | - Juan M Gorriz
- Department of Signal Theory, Networking and Communications, University of Granada, Granada, Spain
| | - Javier Ramírez
- Department of Signal Theory, Networking and Communications, University of Granada, Granada, Spain
| | - Ti-Fei Yuan
- School of Computer Science and Technology, Nanjing Normal University, Nanjing, China; State Key Lab of CAD & CG, Zhejiang University, Hangzhou, China
| | - Yudong Zhang
- School of Computer Science and Technology, Nanjing Normal University, Nanjing, China; Key Laboratory of Statistical Information Technology and Data Mining, State Statistics Bureau, Chengdu, China
| |
Collapse
|
32
|
Thiel CM, Özyurt J, Nogueira W, Puschmann S. Effects of Age on Long Term Memory for Degraded Speech. Front Hum Neurosci 2016; 10:473. [PMID: 27708570 PMCID: PMC5030220 DOI: 10.3389/fnhum.2016.00473] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2016] [Accepted: 09/07/2016] [Indexed: 12/15/2022] Open
Abstract
Prior research suggests that acoustical degradation impacts encoding of items into memory, especially in elderly subjects. We here aimed to investigate whether acoustically degraded items that are initially encoded into memory are more prone to forgetting as a function of age. Young and old participants were tested with a vocoded and unvocoded serial list learning task involving immediate and delayed free recall. We found that degraded auditory input increased forgetting of previously encoded items, especially in older participants. We further found that working memory capacity predicted forgetting of degraded information in young participants. In old participants, verbal IQ was the most important predictor for forgetting acoustically degraded information. Our data provide evidence that acoustically degraded information, even if encoded, is especially vulnerable to forgetting in old age.
Collapse
Affiliation(s)
- Christiane M Thiel
- Biological Psychology Lab, Cluster of Excellence "Hearing4all", Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany; Research Center Neurosensory Science, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
| | - Jale Özyurt
- Biological Psychology Lab, Cluster of Excellence "Hearing4all", Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
| | - Waldo Nogueira
- Cluster of Excellence "Hearing4all", Department of Otolaryngology, Medical University Hannover, Hannover, Germany
| | - Sebastian Puschmann
- Biological Psychology Lab, Cluster of Excellence "Hearing4all", Department of Psychology, European Medical School, Carl von Ossietzky Universität Oldenburg, Oldenburg, Germany
| |
Collapse
|
33
|
Cardin V. Effects of Aging and Adult-Onset Hearing Loss on Cortical Auditory Regions. Front Neurosci 2016; 10:199. [PMID: 27242405 PMCID: PMC4862970 DOI: 10.3389/fnins.2016.00199] [Citation(s) in RCA: 77] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/01/2015] [Accepted: 04/22/2016] [Indexed: 11/13/2022] Open
Abstract
Hearing loss is a common feature of human aging. It has been argued that dysfunctions in central processing are important contributing factors to hearing loss in older age. Aging also has well-documented consequences for neural structure and function, but it is not clear how these effects interact with those that arise as a consequence of hearing loss. This paper reviews the effects of aging and adult-onset hearing loss on the structure and function of cortical auditory regions. The evidence reviewed suggests that aging and hearing loss result in atrophy of cortical auditory regions and stronger engagement of networks involved in the detection of salient events, adaptive control, and re-allocation of attention. In normal-hearing individuals, these cortical mechanisms are engaged during listening in effortful conditions. As a consequence of aging and hearing loss, therefore, all listening becomes effortful, cognitive load is constantly high, and the amount of available cognitive resources is reduced. This constant effortful listening and reduced cognitive spare capacity could be what accelerates cognitive decline in older adults with hearing loss.
Collapse
Affiliation(s)
- Velia Cardin
- Department of Experimental Psychology, Deafness, Cognition and Language Research Centre, University College London, London, UK; Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| |
Collapse
|