51
Wagner TM, Wagner L, Plontke SK, Rahne T. Enhancing Cochlear Implant Outcomes across Age Groups: The Interplay of Forward Focus and Advanced Combination Encoder Coding Strategies in Noisy Conditions. J Clin Med 2024; 13:1399. [PMID: 38592239 PMCID: PMC10931918 DOI: 10.3390/jcm13051399]
Abstract
Background: Hearing in noise is challenging for cochlear implant users and requires significant listening effort. This study investigated the influence of ForwardFocus and the number of maxima of the Advanced Combination Encoder (ACE) strategy, as well as age, on speech recognition threshold and listening effort in noise. Methods: A total of 33 cochlear implant recipients were included (age ≤ 40 years: n = 15, >40 years: n = 18). The Oldenburg Sentence Test was used to measure 50% speech recognition thresholds (SRT50) in fluctuating and stationary noise. Speech was presented frontally, while three frontal or rear noise sources were used, and the number of ACE maxima varied between 8 and 12. Results: ForwardFocus significantly improved the SRT50 when noise was presented from the back, independent of subject age. The use of 12 maxima further improved the SRT50 when ForwardFocus was activated and when noise and speech were presented frontally. Listening effort was significantly worse in the older age group compared to the younger age group and was reduced by ForwardFocus but not by increasing the number of ACE maxima. Conclusion: ForwardFocus can improve speech recognition in noisy environments and reduce listening effort, especially in older cochlear implant users.
Affiliation(s)
- Telse M. Wagner
- Department of Otorhinolaryngology, University Medicine Halle, Ernst-Grube-Straße 40, 06120 Halle (Saale), Germany; (L.W.); (S.K.P.); (T.R.)
52
Dietze A, Sörös P, Pöntynen H, Witt K, Dietz M. Longitudinal observations of the effects of ischemic stroke on binaural perception. Front Neurosci 2024; 18:1322762. [PMID: 38482140 PMCID: PMC10936579 DOI: 10.3389/fnins.2024.1322762]
Abstract
Acute ischemic stroke, characterized by a localized reduction in blood flow to specific areas of the brain, has been shown to affect binaural auditory perception. In a previous study conducted during the acute phase of ischemic stroke, two tasks of binaural hearing were performed: binaural tone-in-noise detection, and lateralization of stimuli with interaural time- or level differences. Various lesion-specific, as well as individual, differences in binaural performance between patients in the acute phase of stroke and a control group were demonstrated. For the current study, we re-invited the same group of patients, whereupon a subgroup repeated the experiments during the subacute and chronic phases of stroke. Similar to the initial study, this subgroup consisted of patients with lesions in different locations, including cortical and subcortical areas. At the group level, the results from the tone-in-noise detection experiment remained consistent across the three measurement phases, as did the number of deviations from normal performance in the lateralization task. However, the performance in the lateralization task exhibited variations over time among individual patients. Some patients demonstrated improvements in their lateralization abilities, indicating recovery, whereas others' lateralization performance deteriorated during the later stages of stroke. Notably, our analyses did not reveal consistent patterns for patients with similar lesion locations. These findings suggest that recovery processes are more individual than the acute effects of stroke on binaural perception. Individual impairments in binaural hearing abilities after the acute phase of ischemic stroke have been demonstrated and should therefore also be targeted in rehabilitation programs.
Affiliation(s)
- Anna Dietze
- Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Cluster of Excellence “Hearing4all”, University of Oldenburg, Oldenburg, Germany
- Peter Sörös
- Department of Neurology, School of Medicine and Health Sciences, University of Oldenburg, Oldenburg, Germany
- Research Center Neurosensory Science, University of Oldenburg, Oldenburg, Germany
- Henri Pöntynen
- Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Cluster of Excellence “Hearing4all”, University of Oldenburg, Oldenburg, Germany
- Karsten Witt
- Department of Neurology, School of Medicine and Health Sciences, University of Oldenburg, Oldenburg, Germany
- Research Center Neurosensory Science, University of Oldenburg, Oldenburg, Germany
- Department of Neurology, Evangelical Hospital, Oldenburg, Germany
- Mathias Dietz
- Department of Medical Physics and Acoustics, University of Oldenburg, Oldenburg, Germany
- Cluster of Excellence “Hearing4all”, University of Oldenburg, Oldenburg, Germany
- Research Center Neurosensory Science, University of Oldenburg, Oldenburg, Germany
53
Fitzgerald LP, DeDe G, Shen J. Effects of linguistic context and noise type on speech comprehension. Front Psychol 2024; 15:1345619. [PMID: 38375107 PMCID: PMC10875108 DOI: 10.3389/fpsyg.2024.1345619]
Abstract
Introduction Understanding speech in background noise is an effortful endeavor. When acoustic challenges arise, linguistic context may help us fill in perceptual gaps. However, more knowledge is needed regarding how different types of background noise affect our ability to construct meaning from perceptually complex speech input. Additionally, there is limited evidence regarding whether perceptual complexity (e.g., informational masking) and linguistic complexity (e.g., occurrence of contextually incongruous words) interact during processing of speech material that is longer and more complex than a single sentence. Our first research objective was to determine whether comprehension of spoken sentence pairs is impacted by the informational masking from a speech masker. Our second objective was to identify whether there is an interaction between perceptual and linguistic complexity during speech processing. Methods We used multiple measures including comprehension accuracy, reaction time, and processing effort (as indicated by task-evoked pupil response), making comparisons across three different levels of linguistic complexity in two different noise conditions. Context conditions varied by final word, with each sentence pair ending with an expected exemplar (EE), within-category violation (WV), or between-category violation (BV). Forty young adults with typical hearing performed a speech comprehension in noise task over three visits. Each participant heard sentence pairs presented in either multi-talker babble or spectrally shaped steady-state noise (SSN), with the same noise condition across all three visits. Results We observed an effect of context but not noise on accuracy. Further, we observed an interaction of noise and context in peak pupil dilation data. Specifically, the context effect was modulated by noise type: context facilitated processing only in the more perceptually complex babble noise condition. 
Discussion These findings suggest that when perceptual complexity arises, listeners make use of the linguistic context to facilitate comprehension of speech obscured by background noise. Our results extend existing accounts of speech processing in noise by demonstrating how perceptual and linguistic complexity affect our ability to engage in higher-level processes, such as construction of meaning from speech segments that are longer than a single sentence.
Affiliation(s)
- Laura P. Fitzgerald
- Speech Perception and Cognition Laboratory, Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, United States
- Gayle DeDe
- Speech, Language, and Brain Laboratory, Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, United States
- Jing Shen
- Speech Perception and Cognition Laboratory, Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA, United States
54
Sendesen E, Turkyilmaz D. WITHDRAWN: Listening handicap in tinnitus patients by controlling extended high frequencies - Effort or fatigue? Auris Nasus Larynx 2024; 51:198-205. [PMID: 37137796 DOI: 10.1016/j.anl.2023.04.011]
Abstract
This article has been withdrawn: please see Elsevier Policy on Article Withdrawal (https://www.elsevier.com/about/policies/article-withdrawal). This article has been withdrawn at the request of the editor and publisher. The publisher regrets that an error occurred which led to the premature publication of this paper. This error bears no reflection on the article or its authors. The publisher apologizes to the authors and the readers for this unfortunate error.
Affiliation(s)
- Eser Sendesen
- Department of Audiology, Hacettepe University, Ankara, Turkey.
55
Goodwin MV, Hogervorst E, Maidment DW. A qualitative study assessing the barriers and facilitators to physical activity in adults with hearing loss. Br J Health Psychol 2024; 29:95-111. [PMID: 37658583 DOI: 10.1111/bjhp.12689]
Abstract
OBJECTIVES Growing epidemiological evidence has shown that hearing loss is associated with physical inactivity. Currently, there is a dearth of evidence investigating why this occurs. This study aimed to investigate the barriers and facilitators to physical activity in middle-aged and older adults with hearing loss. DESIGN Individual semi-structured qualitative interviews. METHODS A phenomenological approach was taken. Ten adults (≥40 years) were interviewed via videoconferencing. The interview schedule was underpinned by the capability, opportunity, motivation and behaviour (COM-B) model. Reflexive thematic analysis was used to generate themes, which were subsequently mapped onto the COM-B model and behaviour change wheel. RESULTS Nine hearing loss-specific themes were generated, which included the following barriers to physical activity: mental fatigue, interaction with the environment (acoustically challenging environments, difficulties with hearing aids when physically active) and social interactions (perceived stigma). Environmental modifications (digital capabilities of hearing aids), social support (hearing loss-only groups) and hearing loss self-efficacy were reported to facilitate physical activity. CONCLUSIONS Middle-aged and older adults with hearing loss experience hearing-specific barriers to physical activity, which has a deleterious impact on their overall health and well-being. Interventions and public health programmes need to be tailored to account for these additional barriers. Further research is necessary to test potential behaviour change techniques.
Affiliation(s)
- Maria V Goodwin
- School of Sport, Exercise and Health Sciences, Loughborough University, Loughborough, UK
- Eef Hogervorst
- School of Sport, Exercise and Health Sciences, Loughborough University, Loughborough, UK
- David W Maidment
- School of Sport, Exercise and Health Sciences, Loughborough University, Loughborough, UK
56
McClaskey CM. Neural hyperactivity and altered envelope encoding in the central auditory system: Changes with advanced age and hearing loss. Hear Res 2024; 442:108945. [PMID: 38154191 PMCID: PMC10942735 DOI: 10.1016/j.heares.2023.108945]
Abstract
Temporal modulations are ubiquitous features of sound signals that are important for auditory perception. The perception of temporal modulations, or temporal processing, is known to decline with aging and hearing loss, negatively impacting auditory perception in general and speech recognition in particular. However, the neurophysiological literature also provides evidence of exaggerated or enhanced encoding of temporal envelopes specifically in aging and hearing loss, which may arise from changes in inhibitory neurotransmission and neuronal hyperactivity. This review paper describes the physiological changes to the neural encoding of temporal envelopes that have been shown to occur with age and hearing loss and discusses the role of disinhibition and neural hyperactivity in contributing to these changes. Studies in both humans and animal models suggest that aging and hearing loss are associated with stronger neural representations of both periodic amplitude modulation envelopes and naturalistic speech envelopes, but primarily for low-frequency modulations (<80 Hz). Although the frequency dependence of these results is generally taken as evidence of amplified envelope encoding at the cortex and impoverished encoding at the midbrain and brainstem, there is additional evidence to suggest that exaggerated envelope encoding may also occur subcortically, though only for envelopes with low modulation rates. A better understanding of how temporal envelope encoding is altered in aging and hearing loss, and the contexts in which neural responses are exaggerated or diminished, may aid in the development of interventions, assistive devices, and treatment strategies that work to ameliorate age- and hearing-loss-related auditory perceptual deficits.
Affiliation(s)
- Carolyn M McClaskey
- Department of Otolaryngology - Head and Neck Surgery, Medical University of South Carolina, 135 Rutledge Ave, MSC 550, Charleston, SC 29425, United States.
57
Nourski KV, Steinschneider M, Rhone AE, Berger JI, Dappen ER, Kawasaki H, Howard MA III. Intracranial electrophysiology of spectrally degraded speech in the human cortex. Front Hum Neurosci 2024; 17:1334742. [PMID: 38318272 PMCID: PMC10839784 DOI: 10.3389/fnhum.2023.1334742]
Abstract
Introduction Cochlear implants (CIs) are the treatment of choice for severe to profound hearing loss. Variability in CI outcomes remains despite advances in technology and is attributed in part to differences in cortical processing. Studying these differences in CI users is technically challenging. Spectrally degraded stimuli presented to normal-hearing individuals approximate the input to the central auditory system in CI users. This study used intracranial electroencephalography (iEEG) to investigate cortical processing of spectrally degraded speech. Methods Participants were adult neurosurgical epilepsy patients. Stimuli were the utterances /aba/ and /ada/, spectrally degraded using a noise vocoder (1-4 bands) or presented without vocoding. The stimuli were presented in a two-alternative forced choice task. Cortical activity was recorded using depth and subdural iEEG electrodes. Electrode coverage included auditory core in posteromedial Heschl's gyrus (HGPM), superior temporal gyrus (STG), ventral and dorsal auditory-related areas, and prefrontal and sensorimotor cortex. Analysis focused on high gamma (70-150 Hz) power augmentation and alpha (8-14 Hz) suppression. Results Task performance was at chance with 1-2 spectral bands and near ceiling for clear stimuli. Performance was variable with 3-4 bands, permitting identification of good and poor performers. There was no relationship between task performance and participants' demographic, audiometric, neuropsychological, or clinical profiles. Several response patterns were identified based on magnitude and differences between stimulus conditions. HGPM responded strongly to all stimuli. A preference for clear speech emerged within non-core auditory cortex. Good performers typically had strong responses to all stimuli along the dorsal stream, including posterior STG, supramarginal, and precentral gyrus; a minority of sites in STG and supramarginal gyrus had a preference for vocoded stimuli.
In poor performers, responses were typically restricted to clear speech. Alpha suppression was more pronounced in good performers. In contrast, poor performers exhibited a greater involvement of posterior middle temporal gyrus when listening to clear speech. Discussion Responses to noise-vocoded speech provide insights into potential factors underlying CI outcome variability. The results emphasize differences in the balance of neural processing along the dorsal and ventral stream between good and poor performers, identify specific cortical regions that may have diagnostic and prognostic utility, and suggest potential targets for neuromodulation-based CI rehabilitation strategies.
Affiliation(s)
- Kirill V. Nourski
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
- Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, United States
- Mitchell Steinschneider
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
- Departments of Neurology and Neuroscience, Albert Einstein College of Medicine, Bronx, NY, United States
- Ariane E. Rhone
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
- Joel I. Berger
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
- Emily R. Dappen
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
- Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, United States
- Hiroto Kawasaki
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
- Matthew A. Howard III
- Department of Neurosurgery, The University of Iowa, Iowa City, IA, United States
- Iowa Neuroscience Institute, The University of Iowa, Iowa City, IA, United States
- Pappajohn Biomedical Institute, The University of Iowa, Iowa City, IA, United States
58
Valzolgher C, Capra S, Gessa E, Rosi T, Giovanelli E, Pavani F. Sound localization in noisy contexts: performance, metacognitive evaluations and head movements. Cogn Res Princ Implic 2024; 9:4. [PMID: 38191869 PMCID: PMC10774233 DOI: 10.1186/s41235-023-00530-w]
Abstract
Localizing sounds in noisy environments can be challenging. Here, we reproduce real-life soundscapes to investigate the effects of environmental noise on the sound localization experience. We evaluated participants' performance and metacognitive assessments, including measures of sound localization effort and confidence, while also tracking their spontaneous head movements. Normal-hearing participants (N = 30) were engaged in a speech-localization task conducted in three common soundscapes that progressively increased in complexity: nature, traffic, and a cocktail party setting. To control visual information and measure behaviors, we used visual virtual reality technology. The results revealed that the complexity of the soundscape had an impact on both performance errors and metacognitive evaluations. Participants reported increased effort and reduced confidence for sound localization in more complex noise environments. In contrast, the level of soundscape complexity did not influence the use of spontaneous exploratory head-related behaviors. We also observed that, irrespective of the noise condition, participants who made more head rotations and explored a wider extent of space by rotating their heads made lower localization errors. Interestingly, we found preliminary evidence that an increase in spontaneous head movements, specifically the extent of head rotation, leads to a decrease in perceived effort and an increase in confidence at the single-trial level. These findings expand previous observations regarding sound localization in noisy environments by broadening the perspective to also include metacognitive evaluations, exploratory behaviors and their interactions.
Affiliation(s)
- Chiara Valzolgher
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy.
- Sara Capra
- Department of Psychology and Cognitive Sciences (DiPSCo), University of Trento, Trento, Italy
- Elena Gessa
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Tommaso Rosi
- Department of Physics, University of Trento, Trento, Italy
- Elena Giovanelli
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Francesco Pavani
- Center for Mind/Brain Sciences (CIMeC), University of Trento, Corso Bettini 31, 38068, Rovereto, TN, Italy
- Centro Interuniversitario di Ricerca "Cognizione, Linguaggio e Sordità" (CIRCLeS), Trento, Italy
59
Granberg S, Widén S, Gustafsson J. How to remain in working life with hearing loss - health factors for a sustainable work situation. Work 2024; 79:1391-1406. [PMID: 38875067 DOI: 10.3233/wor-230377]
Abstract
BACKGROUND Persons with hearing loss (HL) are a vulnerable group in working life. Studies have shown that they are more likely than the general population to be in part-time work, to be unemployed, to receive disability pension, and to be on sick leave. Many workers with HL also experience unhealthy work conditions, such as jobs where they experience high demands combined with low control as well as safety concerns and social isolation. There is a lack of studies that focus on factors that promote a healthy, sustainable work situation for the target group. OBJECTIVE To investigate health factors that contribute to a sustainable work situation for employees with HL. METHODS The current study was a comparative, observational study with a cross-sectional design including a clinical population of adults with HL. Comparisons were made between workers with HL "in work" and workers with HL on "HL-related sick leave". RESULTS Seven health factors were identified. Those "in work" experienced a healthier work environment, as well as lower levels of mental strain, hearing-related work characteristics and content, cognitively demanding work content, hearing-related symptoms, energy-demanding activities, and bodily aches and pain, than those on "HL-related sick leave". CONCLUSION The results demonstrate a clear pattern regarding health factors for a sustainable working life. The type of job was not related to whether an individual was on sick leave or working. Rather, the work climate and the content of the work mattered.
Affiliation(s)
- Sarah Granberg
- School of Health Sciences, Örebro University, Örebro, Sweden
- Faculty of Medicine and Health, Örebro University, Örebro, Sweden
- Stephen Widén
- School of Health Sciences, Örebro University, Örebro, Sweden
- Faculty of Medicine and Health, Örebro University, Örebro, Sweden
- Johanna Gustafsson
- School of Health Sciences, Örebro University, Örebro, Sweden
- Faculty of Medicine and Health, Örebro University, Örebro, Sweden
60
Frei V, Schmitt R, Meyer M, Giroud N. Processing of Visual Speech Cues in Speech-in-Noise Comprehension Depends on Working Memory Capacity and Enhances Neural Speech Tracking in Older Adults With Hearing Impairment. Trends Hear 2024; 28:23312165241287622. [PMID: 39444375 PMCID: PMC11520018 DOI: 10.1177/23312165241287622]
Abstract
Comprehending speech in noise (SiN) poses a challenge for older hearing-impaired listeners, requiring auditory and working memory resources. Visual speech cues provide additional sensory information supporting speech understanding, but the extent of this visual benefit varies considerably across listeners, which might be accounted for by individual differences in working memory capacity (WMC). In the current study, we investigated behavioral and neurofunctional (i.e., neural speech tracking) correlates of auditory and audio-visual speech comprehension in babble noise and their associations with WMC. Healthy older adults with hearing impairment quantified by pure-tone hearing loss (threshold average: 31.85-57 dB, N = 67) listened to sentences in babble noise in audio-only, visual-only and audio-visual speech modalities and performed a pattern matching and a comprehension task while electroencephalography (EEG) was recorded. Behaviorally, no significant difference in task performance was observed across modalities. However, we did find a significant association between individual working memory capacity and task performance, suggesting a more complex interplay between audio-visual speech cues, working memory capacity and real-world listening tasks. Furthermore, we found that visual speech presentation was accompanied by increased cortical tracking of the speech envelope, particularly in a right-hemispheric auditory topographical cluster. Post hoc, we investigated potential relationships between behavioral performance and neural speech tracking but were not able to establish a significant association. Overall, our results show an increase in neurofunctional correlates of speech associated with congruent visual speech cues, specifically in a right auditory cluster, suggesting multisensory integration.
Affiliation(s)
- Vanessa Frei
- Computational Neuroscience of Speech and Hearing, Department of Computational Linguistics, University of Zurich, Zurich, Switzerland
- International Max Planck Research School for the Life Course: Evolutionary and Ontogenetic Dynamics (LIFE), Berlin, Germany
- Raffael Schmitt
- Computational Neuroscience of Speech and Hearing, Department of Computational Linguistics, University of Zurich, Zurich, Switzerland
- International Max Planck Research School for the Life Course: Evolutionary and Ontogenetic Dynamics (LIFE), Berlin, Germany
- Competence Center Language & Medicine, Center of Medical Faculty and Faculty of Arts and Sciences, University of Zurich, Zurich, Switzerland
- Martin Meyer
- Competence Center Language & Medicine, Center of Medical Faculty and Faculty of Arts and Sciences, University of Zurich, Zurich, Switzerland
- University of Zurich, University Research Priority Program Dynamics of Healthy Aging, Zurich, Switzerland
- Center for Neuroscience Zurich, University and ETH of Zurich, Zurich, Switzerland
- Evolutionary Neuroscience of Language, Department of Comparative Language Science, University of Zurich, Zurich, Switzerland
- Cognitive Psychology Unit, Alpen-Adria University, Klagenfurt, Austria
- Nathalie Giroud
- Computational Neuroscience of Speech and Hearing, Department of Computational Linguistics, University of Zurich, Zurich, Switzerland
- International Max Planck Research School for the Life Course: Evolutionary and Ontogenetic Dynamics (LIFE), Berlin, Germany
- Competence Center Language & Medicine, Center of Medical Faculty and Faculty of Arts and Sciences, University of Zurich, Zurich, Switzerland
- Center for Neuroscience Zurich, University and ETH of Zurich, Zurich, Switzerland
61
Plain B, Pielage H, Kramer SE, Richter M, Saunders GH, Versfeld NJ, Zekveld AA, Bhuiyan TA. Combining Cardiovascular and Pupil Features Using k-Nearest Neighbor Classifiers to Assess Task Demand, Social Context, and Sentence Accuracy During Listening. Trends Hear 2024; 28:23312165241232551. [PMID: 38549351 PMCID: PMC10981225 DOI: 10.1177/23312165241232551]
Abstract
In daily life, both acoustic factors and social context can affect listening effort investment. In laboratory settings, information about listening effort has been deduced from pupil and cardiovascular responses independently. The extent to which these measures can jointly predict listening-related factors is unknown. Here we combined pupil and cardiovascular features to predict acoustic and contextual aspects of speech perception. Data were collected from 29 adults (mean = 64.6 years, SD = 9.2) with hearing loss. Participants performed a speech perception task at two individualized signal-to-noise ratios (corresponding to 50% and 80% of sentences correct) and in two social contexts (the presence and absence of two observers). Seven features were extracted per trial: baseline pupil size, peak pupil dilation, mean pupil dilation, interbeat interval, blood volume pulse amplitude, pre-ejection period and pulse arrival time. These features were used to train k-nearest neighbor classifiers to predict task demand, social context and sentence accuracy. K-fold cross-validation on the group-level data revealed above-chance classification accuracies: task demand, 64.4%; social context, 78.3%; and sentence accuracy, 55.1%. However, classification accuracies diminished when the classifiers were trained and tested on data from different participants. Individually trained classifiers (one per participant) performed better than group-level classifiers: 71.7% (SD = 10.2) for task demand, 88.0% (SD = 7.5) for social context, and 60.0% (SD = 13.1) for sentence accuracy. We demonstrated that classifiers trained on group-level physiological data to predict aspects of speech perception generalized poorly to novel participants. Individually calibrated classifiers hold more promise for future applications.
Affiliation(s)
- Bethany Plain
- Otolaryngology Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, the Netherlands
- Eriksholm Research Centre, Snekkersten, Denmark
| | - Hidde Pielage
- Otolaryngology Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, the Netherlands
- Eriksholm Research Centre, Snekkersten, Denmark
| | - Sophia E. Kramer
- Otolaryngology Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, the Netherlands
| | - Michael Richter
- School of Psychology, Liverpool John Moores University, Liverpool, UK
| | - Gabrielle H. Saunders
- Manchester Centre for Audiology and Deafness (ManCAD), University of Manchester, Manchester, UK
| | - Niek J. Versfeld
- Otolaryngology Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, the Netherlands
| | - Adriana A. Zekveld
- Otolaryngology Head and Neck Surgery, Ear & Hearing, Amsterdam Public Health Research Institute, Vrije Universiteit Amsterdam, Amsterdam UMC, Amsterdam, the Netherlands
| | | |
Collapse
|
62
|
Diao T, Ma X, Fang X, Duan M, Yu L. Compensation in neuro-system related to age-related hearing loss. Acta Otolaryngol 2024; 144:30-34. PMID: 38265951; DOI: 10.1080/00016489.2023.2295400.
Abstract
BACKGROUND: Age-related hearing loss (ARHL) is a major cause of chronic disability among the elderly. Individuals with ARHL have trouble not only with hearing sounds but also with speech perception. Because the perception of auditory information relies on integration across widespread brain networks to interpret auditory stimuli, both the auditory system and extra-auditory systems, mainly the visual, motor, and attention systems, play an important role in compensating for ARHL.
OBJECTIVES: To better understand the compensatory mechanisms of ARHL and inspire better interventions that may alleviate it.
METHODS: We focus mainly on the existing information on ARHL-related central compensation. The compensatory effects of hearing aids (HAs) and cochlear implants (CIs) on ARHL are also discussed.
RESULTS: Studies have shown that ARHL can induce cochlear hair cell damage or loss and cochlear synaptopathy, which in turn can induce central compensation involving both auditory and extra-auditory neural networks. HAs and CIs can improve bottom-up processing by enhancing the diminished auditory signal, enabling 'better' input to the auditory pathways and on to the cortex.
CONCLUSIONS: The central compensation of ARHL and its possible interaction with HAs and CIs are current hotspots in the field and should be a focus of future research.
Affiliation(s)
- Tongxiang Diao
- Department of Otolaryngology, Head and Neck Surgery, People's Hospital, Peking University, Beijing, China
- Xin Ma
- Department of Otolaryngology, Head and Neck Surgery, People's Hospital, Peking University, Beijing, China
- Xuan Fang
- Department of Human Anatomy, Histology & Embryology, School of Basic Medical Sciences, Peking University, Beijing, China
- Maoli Duan
- Department of Clinical Science, Intervention and Technology, Karolinska Institute, Stockholm, Sweden
- Department of Otolaryngology, Head and Neck Surgery & Audiology and Neurotology, Karolinska University Hospital, Karolinska Institute, Stockholm, Sweden
- Lisheng Yu
- Department of Otolaryngology, Head and Neck Surgery, People's Hospital, Peking University, Beijing, China

63
Vaisberg JM, Gilmore S, Qian J, Russo FA. The Benefit of Hearing Aids as Measured by Listening Accuracy, Subjective Listening Effort, and Functional Near Infrared Spectroscopy. Trends Hear 2024; 28:23312165241273346. PMID: 39195628; PMCID: PMC11363059; DOI: 10.1177/23312165241273346.
Abstract
There is broad consensus that listening effort is an important outcome for measuring hearing performance. However, there remains debate on the best ways to measure listening effort. This study sought to measure neural correlates of listening effort using functional near-infrared spectroscopy (fNIRS) in experienced adult hearing aid users. The study evaluated impacts of amplification and signal-to-noise ratio (SNR) on cerebral blood oxygenation, with the expectation that easier listening conditions would be associated with less oxygenation in the prefrontal cortex. Thirty experienced adult hearing aid users repeated sentence-final words from low-context Revised Speech Perception in Noise Test sentences. Participants repeated words at a hard SNR (individual SNR-50) or easy SNR (individual SNR-50 + 10 dB), while wearing hearing aids fit to prescriptive targets or without wearing hearing aids. In addition to assessing listening accuracy and subjective listening effort, prefrontal blood oxygenation was measured using fNIRS. As expected, easier listening conditions (i.e., easy SNR, with hearing aids) led to better listening accuracy, lower subjective listening effort, and lower oxygenation across the entire prefrontal cortex compared to harder listening conditions. Listening accuracy and subjective listening effort were also significant predictors of oxygenation.
Affiliation(s)
- Sean Gilmore
- Department of Psychology, Toronto Metropolitan University, Toronto, ON, Canada
- Jinyu Qian
- Innovation Centre Toronto, Sonova Canada Inc., Kitchener, ON, Canada
- Department of Communicative Disorders and Sciences, University at Buffalo, State University of New York, Buffalo, NY, USA
- Frank A. Russo
- Department of Psychology, Toronto Metropolitan University, Toronto, ON, Canada
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada

64
Ferguson MA, Nakano K, Jayakody DMP. Clinical Assessment Tools for the Detection of Cognitive Impairment and Hearing Loss in the Ageing Population: A Scoping Review. Clin Interv Aging 2023; 18:2041-2051. PMID: 38088948; PMCID: PMC10713803; DOI: 10.2147/cia.s409114.
Abstract
Objective: There is a strong association between cognitive impairment and hearing loss, both highly prevalent in the ageing population. Early detection of both hearing loss and cognitive impairment is essential for managing these conditions and ensuring effective, informed healthcare decisions. The main objective was to identify existing and emerging cognitive and auditory assessment tools used in clinical settings (e.g., memory clinics, audiology clinics) that manage the ageing population.
Methods: A scoping review of peer-reviewed publications was conducted, and results were reported according to the PRISMA-ScR guidelines.
Results: A total of 289 articles were selected for data extraction. The majority of studies (76.1%) were conducted in 2017 or later. Tests of global cognitive function (i.e., the Mini-Mental State Exam and Montreal Cognitive Assessment) were the most commonly used method to detect cognitive impairment in hearing healthcare settings. Behavioral hearing testing (i.e., pure-tone audiometry) was the most commonly used method to detect hearing loss in cognitive healthcare settings. Objective, physiological measures were seldom used in either discipline.
Conclusion: Clinicians' preferences for short, accessible tests likely explain the reliance on tests of global cognitive function and behavioral hearing tests. A rapidly evolving literature has identified inherent limitations of administering global cognitive function tests and pure-tone testing in an ageing population. Using electrophysiological measures as an adjunct to standard methods of assessment may provide more reliable information for clinical recommendations in those with cognitive and hearing impairment, and thereby achieve better healthcare outcomes.
Affiliation(s)
- Melanie A Ferguson
- School of Allied Health, Faculty of Health Sciences, Curtin University, Perth, Australia
- Curtin enAble Institute, Faculty of Health Sciences, Curtin University, Perth, Australia
- Centre for Ear Sciences, Medical School, University of Western Australia, Perth, Australia
- Kento Nakano
- Ear Science Institute Australia, Perth, Australia
- Dona M P Jayakody
- School of Allied Health, Faculty of Health Sciences, Curtin University, Perth, Australia
- Centre for Ear Sciences, Medical School, University of Western Australia, Perth, Australia
- Ear Science Institute Australia, Perth, Australia

65
Kraus F, Obleser J, Herrmann B. Pupil Size Sensitivity to Listening Demand Depends on Motivational State. eNeuro 2023; 10:ENEURO.0288-23.2023. PMID: 37989588; PMCID: PMC10734370; DOI: 10.1523/eneuro.0288-23.2023.
Abstract
Motivation plays a role when a listener needs to understand speech under acoustically demanding conditions. Previous work has demonstrated that pupil-linked arousal is sensitive to both listening demands and motivational state during listening. It is less clear how motivational state affects the temporal evolution of the pupil size and its relation to subsequent behavior. We used an auditory gap detection task (N = 33) to study the joint impact of listening demand and motivational state on the pupil size response and to examine its temporal evolution. Task difficulty and a listener's motivational state were orthogonally manipulated through changes in gap duration and monetary reward prospect. We show that participants' performance decreased with task difficulty, but that reward prospect enhanced performance under hard listening conditions. Pupil size increased with both increased task difficulty and higher reward prospect, and this reward prospect effect was largest under difficult listening conditions. Moreover, pupil size time courses differed between detected and missed gaps, suggesting that the pupil response indicates upcoming behavior. Larger pre-gap pupil size was further associated with faster response times at a trial-by-trial, within-participant level. Our results reiterate the utility of pupil size as an objective and temporally sensitive measure in audiology. However, such assessments of cognitive resource recruitment need to consider the individual's motivational state.
Affiliation(s)
- Frauke Kraus
- Department of Psychology, University of Lübeck, 23562 Lübeck, Germany
- Center of Brain, Behavior, and Metabolism, University of Lübeck, 23562 Lübeck, Germany
- Jonas Obleser
- Department of Psychology, University of Lübeck, 23562 Lübeck, Germany
- Center of Brain, Behavior, and Metabolism, University of Lübeck, 23562 Lübeck, Germany
- Björn Herrmann
- Rotman Research Institute, Baycrest Academy for Research and Education, Toronto M6A 2E1, Ontario, Canada
- Department of Psychology, University of Toronto, Toronto M5S 3G3, Ontario, Canada

66
Mohammadi Y, Østergaard J, Graversen C, Andersen OK, Biurrun Manresa J. Validity and reliability of self-reported and neural measures of listening effort. Eur J Neurosci 2023; 58:4357-4370. PMID: 37984406; DOI: 10.1111/ejn.16187.
Abstract
Listening effort can be defined as a measure of the cognitive resources listeners use to perform a listening task. Various methods have been proposed to measure this effort, yet their reliability remains unestablished, a crucial step before their application in research or clinical settings. This study encompassed 32 participants undertaking speech-in-noise tasks across two sessions, approximately a week apart. They listened to sentences and word lists at varying signal-to-noise ratios (SNRs: -9, -6, -3, and 0 dB), then retained them for roughly 3 s. We evaluated the test-retest reliability of self-reported effort ratings and of theta (4-7 Hz) and alpha (8-13 Hz) oscillatory power, previously suggested as neural markers of listening effort. Additionally, we examined the reliability of the percentage of correct words. Both relative and absolute reliability were assessed using intraclass correlation coefficients (ICC) and Bland-Altman analysis. We also computed the standard error of measurement (SEM) and smallest detectable change (SDC). Our findings indicated heightened frontal midline theta power for word lists compared to sentences during the retention phase under high SNRs (0 dB, -3 dB), likely reflecting a greater memory load for word lists. We observed an effect of SNR on alpha power in the right central region during the listening phase and on frontal theta power during the retention phase for sentences. Overall, the reliability analysis demonstrated satisfactory between-session reliability for correct words and effort ratings. However, the neural measures (frontal midline theta power and right central alpha power) displayed substantial variability, even though group-level outcomes appeared consistent across sessions.
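For reference, the absolute-agreement reliability indices named in this abstract (ICC, SEM, SDC) can be computed for a two-session test-retest design with the standard formulas. The following stdlib-only Python sketch is generic, not the authors' analysis code, and the scores are made-up examples:

```python
import math

def icc_2_1(session1, session2):
    """Two-way random-effects, absolute-agreement, single-measure ICC(2,1)
    for a test-retest design with two sessions (subjects x sessions)."""
    n, k = len(session1), 2
    data = [[a, b] for a, b in zip(session1, session2)]
    grand = sum(sum(row) for row in data) / (n * k)
    row_means = [sum(row) / k for row in data]
    col_means = [sum(col) / n for col in zip(*data)]
    # Mean squares from the two-way ANOVA decomposition
    msr = k * sum((m - grand) ** 2 for m in row_means) / (n - 1)  # subjects
    msc = n * sum((m - grand) ** 2 for m in col_means) / (k - 1)  # sessions
    sse = sum((data[i][j] - row_means[i] - col_means[j] + grand) ** 2
              for i in range(n) for j in range(k))
    mse = sse / ((n - 1) * (k - 1))                               # residual
    return (msr - mse) / (msr + (k - 1) * mse + k * (msc - mse) / n)

def sem_sdc(scores, icc):
    """Standard error of measurement and smallest detectable change (95%)."""
    n = len(scores)
    mean = sum(scores) / n
    sd = math.sqrt(sum((x - mean) ** 2 for x in scores) / (n - 1))
    sem = sd * math.sqrt(1 - icc)
    sdc = 1.96 * math.sqrt(2) * sem
    return sem, sdc

# Hypothetical effort ratings from two sessions for five subjects.
s1 = [10, 12, 14, 16, 18]
s2 = [11, 12, 13, 17, 17]
icc = icc_2_1(s1, s2)
sem, sdc = sem_sdc(s1 + s2, icc)
```

Since SEM = SD·sqrt(1 − ICC) and SDC = 1.96·sqrt(2)·SEM, a between-session change smaller than the SDC cannot be distinguished from measurement error at the 95% confidence level.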
Affiliation(s)
- Yousef Mohammadi
- Integrative Neuroscience, Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
- Jan Østergaard
- Department of Electronic Systems, Aalborg University, Aalborg, Denmark
- Carina Graversen
- Integrative Neuroscience, Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
- Center for Neuroplasticity and Pain (CNAP), Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
- Ole Kaeseler Andersen
- Integrative Neuroscience, Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
- Center for Neuroplasticity and Pain (CNAP), Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
- José Biurrun Manresa
- Center for Neuroplasticity and Pain (CNAP), Department of Health Science and Technology, Aalborg University, Aalborg, Denmark
- Institute for Research and Development in Bioengineering and Bioinformatics (IBB), CONICET-UNER, Oro Verde, Argentina

67
Carraturo S, McLaughlin DJ, Peelle JE, Van Engen KJ. Pupillometry reveals differences in cognitive demands of listening to face mask-attenuated speech. J Acoust Soc Am 2023; 154:3973-3985. PMID: 38149818; DOI: 10.1121/10.0023953.
Abstract
Face masks offer essential protection but also interfere with speech communication. Here, audio-only sentences spoken through four types of masks were presented in noise to young adult listeners. Pupil dilation (an index of cognitive demand), intelligibility, and subjective effort and performance ratings were collected. Dilation increased in response to each mask relative to the no-mask condition and differed significantly where acoustic attenuation was most prominent. These results suggest that the acoustic impact of the mask drives not only the intelligibility of speech, but also the cognitive demands of listening. Subjective effort ratings reflected the same trends as the pupil data.
Affiliation(s)
- Sita Carraturo
- Department of Psychological & Brain Sciences, Washington University in St. Louis, Saint Louis, Missouri 63130, USA
- Drew J McLaughlin
- Basque Center on Cognition, Brain and Language, San Sebastian, Basque Country 20009, Spain
- Jonathan E Peelle
- Department of Communication Sciences and Disorders, Northeastern University, Boston, Massachusetts 02115, USA
- Kristin J Van Engen
- Department of Psychological & Brain Sciences, Washington University in St. Louis, Saint Louis, Missouri 63130, USA

68
Nageswara Rao A, Jeyapaul R, Najar SA, Chaitanya B. Driving errors as a function of listening to music and FM radio: A simulator study. Traffic Inj Prev 2023; 25:49-56. PMID: 37815797; DOI: 10.1080/15389588.2023.2263119.
Abstract
OBJECTIVES: Driving is a dynamic activity that takes place in a constantly changing environment, carrying safety implications not only for the driver but also for other road users. Despite the potentially life-threatening consequences of incorrect driving behavior, drivers often engage in activities unrelated to driving. This study investigates the frequency and types of errors committed by drivers when they are distracted compared to when they are not.
METHODS: A total of 64 young male participants volunteered for the study, completing four driving trials in a driving simulator. The trials comprised different distraction conditions: listening to researcher-selected music, driver-selected music, FM radio conversation, and driving without any auditory distraction. The simulated driving scenario resembled a semi-urban environment, with a track length of 12 km.
RESULTS: Drivers were more prone to making errors when engaged in FM radio conversations than when listening to music. Additionally, errors related to speeding were more prevalent across all experimental conditions.
CONCLUSIONS: These results emphasize the importance of reducing distractions while driving to improve road safety. The findings add to our understanding of the particular distractions that carry higher risks and underscore the need for focused interventions to reduce driver errors, especially those related to FM radio conversations. Future research can examine additional factors that contribute to driving errors and develop effective strategies to promote safer driving practices.
Affiliation(s)
- A Nageswara Rao
- Ergonomics Laboratory, Department of Production Engineering, National Institute of Technology - Tiruchirappalli, Tiruchirappalli, Tamil Nadu, India
- R Jeyapaul
- Ergonomics Laboratory, Department of Production Engineering, National Institute of Technology - Tiruchirappalli, Tiruchirappalli, Tamil Nadu, India
- Sajad Ahmad Najar
- Department of Psychology, Central University of Punjab, Bathinda, Punjab, India
- B Chaitanya
- Cognitive Science Research Centre, Department of Mechanical Engineering, Lakireddy Bali Reddy College of Engineering, Krishna District, Andhra Pradesh, India

69
Anbuhl KL, Diez Castro M, Lee NA, Lee VS, Sanes DH. Cingulate cortex facilitates auditory perception under challenging listening conditions. bioRxiv [Preprint] 2023:2023.11.10.566668. PMID: 38014324; PMCID: PMC10680599; DOI: 10.1101/2023.11.10.566668.
Abstract
We often exert greater cognitive resources (i.e., listening effort) to understand speech under challenging acoustic conditions. This mechanism can be overwhelmed in those with hearing loss, resulting in cognitive fatigue in adults and potentially impeding language acquisition in children. However, the neural mechanisms that support listening effort are uncertain. Evidence from human studies suggests that the cingulate cortex is engaged under difficult listening conditions and may exert top-down modulation of the auditory cortex (AC). Here, we asked whether the gerbil cingulate cortex (Cg) sends anatomical projections to the AC that facilitate perceptual performance. To model challenging listening conditions, we used a sound discrimination task in which stimulus parameters were presented in either 'Easy' or 'Hard' blocks (i.e., long or short stimulus duration, respectively). Gerbils achieved statistically identical psychometric performance in Easy and Hard blocks. Anatomical tracing experiments revealed a strong descending projection from layer 2/3 of the Cg1 subregion of the cingulate cortex to superficial and deep layers of primary and dorsal AC. To determine whether Cg improves task performance under challenging conditions, we bilaterally infused muscimol to inactivate Cg1 and found that psychometric thresholds were degraded for Hard blocks only. To test whether the Cg-to-AC projection facilitates task performance, we chemogenetically inactivated these inputs and found that performance was degraded only during Hard blocks. Taken together, the results reveal a descending cortical pathway that facilitates perceptual performance during challenging listening conditions.
Significance Statement: Sensory perception often occurs under challenging conditions, such as a noisy background or a dim environment, yet stimulus sensitivity can remain unaffected. One hypothesis is that cognitive resources are recruited to the task, thereby facilitating perceptual performance. Here, we identify a top-down cortical circuit, from cingulate to auditory cortex in the gerbil, that supports auditory perceptual performance under challenging listening conditions. This pathway is a plausible circuit supporting effortful listening, and it may be degraded by hearing loss.
70
Kestens K, Van Yper L, Degeest S, Keppler H. The P300 Auditory Evoked Potential: A Physiological Measure of the Engagement of Cognitive Systems Contributing to Listening Effort? Ear Hear 2023; 44:1389-1403. PMID: 37287098; DOI: 10.1097/aud.0000000000001381.
Abstract
OBJECTIVES: This study aimed to explore the potential of the P300 (P3b) as a physiological measure of the engagement of cognitive systems contributing to listening effort.
DESIGN: Nineteen right-handed young adults (mean age: 24.79 years) and 20 right-handed older adults (mean age: 58.90 years) with age-appropriate hearing were included. The P300 was recorded at Fz, Cz, and Pz using a two-stimulus oddball paradigm with the Flemish monosyllabic numbers "one" and "three" as standard and deviant stimuli, respectively. This oddball paradigm was conducted in three listening conditions varying in listening demand: one quiet and two noisy listening conditions (+4 and -2 dB signal-to-noise ratio [SNR]). In each listening condition, physiological, behavioral, and subjective tests of listening effort were administered. P300 amplitude and latency served as a potential physiological measure of the engagement of cognitive systems contributing to listening effort. In addition, the mean reaction time to the deviant stimuli was used as a behavioral measure of listening effort. Last, subjective listening effort was assessed with a visual analog scale. To assess the effects of listening condition and age group on each of these measures, linear mixed models were conducted. Correlation coefficients were calculated to determine the relationship between the physiological, behavioral, and subjective measures.
RESULTS: P300 amplitude and latency, mean reaction time, and subjective scores significantly increased as the listening condition became more taxing. Moreover, a significant group effect was found for all physiological, behavioral, and subjective measures, favoring young adults. Last, no clear relationships between the physiological, behavioral, and subjective measures were found.
CONCLUSIONS: The P300 was considered a physiological measure of the engagement of cognitive systems contributing to listening effort. Because advancing age is associated with hearing loss and cognitive decline, more research is needed on the effects of all these variables on the P300 to further explore its usefulness as a listening effort measure for research and clinical purposes.
Affiliation(s)
- Katrien Kestens
- Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
- Lindsey Van Yper
- Department of Linguistics, The Australian Hearing Hub, Macquarie University, Sydney, Australia
- Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
- Sofie Degeest
- Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
- Hannah Keppler
- Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
- Department of Oto-rhino-laryngology, Ghent University Hospital, Ghent, Belgium

71
Sendesen E, Kılıç S, Erbil N, Aydın Ö, Turkyilmaz D. An Exploratory Study of the Effect of Tinnitus on Listening Effort Using EEG and Pupillometry. Otolaryngol Head Neck Surg 2023; 169:1259-1267. PMID: 37172313; DOI: 10.1002/ohn.367.
Abstract
OBJECTIVE: Previous behavioral studies of listening effort in tinnitus patients did not consider extended high-frequency hearing thresholds and reported conflicting results. This inconsistency may stem from the fact that listening effort was not evaluated via the central nervous system (CNS) and autonomic nervous system (ANS), which are directly related to tinnitus pathophysiology. This study matched hearing thresholds at all frequencies, including the extended high frequencies, to minimize the confound of hearing loss and to evaluate listening effort objectively and simultaneously via the CNS and ANS in tinnitus patients.
STUDY DESIGN: Case-control study.
SETTING: University hospital.
METHODS: Sixteen chronic tinnitus patients and 23 matched healthy controls with normal pure-tone averages and symmetrical hearing thresholds were included. Subjects were evaluated with 0.125 to 20 kHz pure-tone audiometry, the Montreal Cognitive Assessment (MoCA), Tinnitus Handicap Inventory (THI), Visual Analog Scale (VAS), electroencephalography (EEG), and pupillometry.
RESULTS: Pupil dilation and EEG alpha-band power during the "coding" phase of the presented sentences were lower in tinnitus patients than in controls (p < .05). The VAS score was higher in the tinnitus group (p < .01). There was no statistically significant relationship between EEG or pupillometry components and THI or MoCA scores (p > .05).
CONCLUSION: This study suggests that tinnitus patients may need to exert extra effort to listen. Pupillometry may also not be sufficiently reliable for assessing listening effort in ANS-related pathologies. Given the possible listening difficulties in tinnitus patients, reducing listening difficulty, especially in noisy environments, could be added to the goals of tinnitus therapy protocols.
Affiliation(s)
- Eser Sendesen
- Department of Audiology, Hacettepe University, Ankara, Turkey
- Samet Kılıç
- Department of Audiology, Hacettepe University, Ankara, Turkey
- Nurhan Erbil
- Department of Biophysics, Hacettepe University, Ankara, Turkey
- Özgür Aydın
- Department of Biophysics, Hacettepe University, Ankara, Turkey

72
Wu F, Liu H, Liu W. Association between sensation, perception, negative socio-psychological factors and cognitive impairment. Heliyon 2023; 9:e22101. PMID: 38034815; PMCID: PMC10682144; DOI: 10.1016/j.heliyon.2023.e22101.
Abstract
Background: Evidence suggests that sensation and socio-psychological factors may each be associated with cognitive impairment in older adults. However, the joint association between these risk factors and cognitive impairment remains unknown.
Objective: To investigate the association between sensation, perception, negative socio-psychological factors, and cognitive impairment in institutionalized older adults.
Methods: A total of 215 participants were recruited from two public aged care facilities. The Mini-Mental State Examination was used to assess cognitive function. Sensory function was divided into auditory and somatosensory domains, evaluated with pure-tone audiometry and the Nottingham Sensory Assessment, respectively. Albert's test, left and right resolution, and visuospatial distribution were used to evaluate perception. Depression and social isolation were selected as negative socio-psychological factors and were evaluated with the Geriatric Depression Scale and the Lubben Social Network Scale. Multivariate analysis was performed using binary logistic regression.
Results: Participants with moderately severe or severe hearing loss exhibited significant cognitive impairment compared with those with mild hearing loss. Perceptual dysfunction and depression were independently related to cognitive impairment. However, there was no significant association between somatosensory function or social isolation and cognitive impairment in these institutionalized older adults.
Conclusion: More profound hearing loss, abnormal perception, and depression are associated with cognitive impairment in older adults. Subsequent research should examine the causal mechanisms underpinning these associations and explore whether combined interventions can postpone the onset of cognitive impairment.
Affiliation(s)
- Fan Wu
- College of Medicine and Health Science, Wuhan Polytechnic University, 68 Xuefu South Road, Changqing Garden, Wuhan, 430023, Hubei, China
- Hanxin Liu
- College of Medicine and Health Science, Wuhan Polytechnic University, 68 Xuefu South Road, Changqing Garden, Wuhan, 430023, Hubei, China
- Wenbin Liu
- College of Medicine and Health Science, Wuhan Polytechnic University, 68 Xuefu South Road, Changqing Garden, Wuhan, 430023, Hubei, China

73
Shin J, Noh S, Park J, Sung JE. Syntactic complexity differentially affects auditory sentence comprehension performance for individuals with age-related hearing loss. Front Psychol 2023; 14:1264994. PMID: 37965654; PMCID: PMC10641445; DOI: 10.3389/fpsyg.2023.1264994.
Abstract
Objectives This study examined whether older adults with hearing loss (HL) experience greater difficulty in auditory sentence comprehension than those with typical hearing (TH) when the linguistic burden of syntactic complexity was systematically manipulated by varying either sentence type (active vs. passive) or sentence length (3 vs. 4 phrases). Methods A total of 22 individuals with HL and 24 controls participated, completing a sentence comprehension test (SCT), standardized memory assessments, and pure-tone audiometry. Generalized linear mixed-effects models were employed to compare the effects of sentence type and length on SCT accuracy, Pearson correlation coefficients were computed to explore relationships between SCT accuracy and other factors, and stepwise regression analyses were used to identify memory-related predictors of sentence comprehension ability. Results With sentence length controlled, older adults with HL showed disproportionately poorer performance on passive than on active sentences compared to controls. Greater difficulty with passive sentences was linked to working memory capacity, which emerged as the most significant predictor of passive-sentence comprehension among participants with HL. Conclusion Our findings contribute to the understanding of the linguistic-cognitive deficits linked to age-related hearing loss by demonstrating its detrimental impact on the processing of passive sentences. Cognitively healthy adults with hearing difficulties may struggle to comprehend syntactically more complex sentences that impose higher computational demands, particularly on working memory allocation.
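The generalized linear mixed-effects analysis above estimates effects on the log-odds scale. A minimal sketch of the fixed-effect contrasts for the 2 × 2 design, using hypothetical cell accuracies (a full GLMM would additionally fit per-participant and per-item random intercepts rather than these raw cell means):

```python
import math

def logit(p):
    """Log-odds, the scale on which a logistic mixed model operates."""
    return math.log(p / (1.0 - p))

acc = {  # hypothetical mean proportions correct, not the study's data
    ("active", 3): 0.95, ("active", 4): 0.92,
    ("passive", 3): 0.85, ("passive", 4): 0.78,
}

# Main effect of sentence type: passive minus active, averaged over length.
type_effect = ((logit(acc[("passive", 3)]) + logit(acc[("passive", 4)])) / 2
               - (logit(acc[("active", 3)]) + logit(acc[("active", 4)])) / 2)

# Main effect of length: 4-phrase minus 3-phrase, averaged over type.
length_effect = ((logit(acc[("active", 4)]) + logit(acc[("passive", 4)])) / 2
                 - (logit(acc[("active", 3)]) + logit(acc[("passive", 3)])) / 2)
```

With these illustrative numbers both contrasts are negative (accuracy drops for passive sentences and for longer sentences), with the type effect the larger of the two, mirroring the pattern the abstract reports.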
Affiliation(s)
- Jee Eun Sung
- Department of Communication Disorders, Ewha Womans University, Seoul, Republic of Korea
74
Downey R, Gagné N, Mohanathas N, Campos JL, Pichora-Fuller KM, Bherer L, Lussier M, Phillips NA, Wittich W, St-Onge N, Gagné JP, Li K. At-home computerized executive-function training to improve cognition and mobility in normal-hearing adults and older hearing aid users: a multi-centre, single-blinded randomized controlled trial. BMC Neurol 2023; 23:378. [PMID: 37864139 PMCID: PMC10588173 DOI: 10.1186/s12883-023-03405-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/07/2023] [Accepted: 09/26/2023] [Indexed: 10/22/2023] Open
Abstract
BACKGROUND Hearing loss predicts cognitive decline and falls risk. It has been argued that degraded hearing makes listening effortful, causing competition for higher-level cognitive resources needed for secondary cognitive or motor tasks. Therefore, executive function training has the potential to improve cognitive performance, in turn improving mobility, especially when older adults with hearing loss are engaged in effortful listening. Moreover, research using mobile neuroimaging and ecologically valid measures of cognition and mobility in this population is limited. The objective of this research is to examine the effect of at-home cognitive training on dual-task performance using laboratory and simulated real-world conditions in normal-hearing adults and older hearing aid users. We hypothesize that executive function training will lead to greater improvements in cognitive-motor dual-task performance compared to a wait-list control group. We also hypothesize that executive function training will lead to the largest dual-task improvements in older hearing aid users, followed by normal-hearing older adults, and then middle-aged adults. METHODS A multi-site (Concordia University and KITE-Toronto Rehabilitation Institute, University Health Network) single-blinded randomized controlled trial will be conducted whereby participants are randomized to either 12 weeks of at-home computerized executive function training or a wait-list control. Participants will consist of normal-hearing middle-aged adults (45-60 years old) and older adults (65-80 years old), as well as older hearing aid users (65-80 years old, ≥ 6 months hearing aid experience). Separate samples will undergo the same training protocol and the same pre- and post-evaluations of cognition, hearing, and mobility across sites. The primary dual-task outcome measures will involve either static balance (KITE site) or treadmill walking (Concordia site) with a secondary auditory-cognitive task. 
Dual-task performance will be assessed in an immersive virtual reality environment in KITE's StreetLab and brain activity will be measured using functional near infrared spectroscopy at Concordia's PERFORM Centre. DISCUSSION This research will establish the efficacy of an at-home cognitive training program on complex auditory and motor functioning under laboratory and simulated real-world conditions. This will contribute to rehabilitation strategies in order to mitigate or prevent physical and cognitive decline in older adults with hearing loss. TRIAL REGISTRATION Identifier: NCT05418998. https://clinicaltrials.gov/ct2/show/NCT05418998.
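Dual-task performance of the kind this trial measures is commonly summarized as a dual-task cost; a sketch under the usual definition (the definition is standard in the mobility literature, but the specific scores below are hypothetical, not from the protocol):

```python
def dual_task_cost(single_task_score, dual_task_score):
    """Percent decline from single- to dual-task performance:
    cost (%) = (single - dual) / single * 100."""
    return (single_task_score - dual_task_score) / single_task_score * 100.0

# Hypothetical gait speeds (m/s) before and after cognitive training:
pre_training_cost = dual_task_cost(1.20, 0.96)    # ~20% cost
post_training_cost = dual_task_cost(1.20, 1.08)   # ~10% cost

# A reduced cost after training would indicate that walking degrades less
# when the secondary auditory-cognitive task is added.
improvement = pre_training_cost - post_training_cost
```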
Affiliation(s)
- Rachel Downey
- Department of Psychology, Concordia University, Montréal, Québec, Canada.
- PERFORM Centre, Concordia University, Montréal, Québec, Canada.
- Nathan Gagné
- Department of Psychology, Concordia University, Montréal, Québec, Canada
- Niroshica Mohanathas
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Jennifer L Campos
- Department of Psychology, University of Toronto, Toronto, ON, Canada
- KITE-Toronto Rehabilitation Institute, University Health Network, Toronto, ON, Canada
- Louis Bherer
- Département de Médecine, Université de Montréal, Montréal, Québec, Canada
- Centre de Recherche de L'Institut de Cardiologie de Montréal, Montréal, Québec, Canada
- Centre de Recherche de L'Institut Universitaire de Gériatrie de Montréal, Montréal, Québec, Canada
- Maxime Lussier
- Département de Médecine, Université de Montréal, Montréal, Québec, Canada
- Centre de Recherche de L'Institut de Cardiologie de Montréal, Montréal, Québec, Canada
- Natalie A Phillips
- Department of Psychology, Concordia University, Montréal, Québec, Canada
- PERFORM Centre, Concordia University, Montréal, Québec, Canada
- Walter Wittich
- École d'optométrie, Université de Montréal, Montréal, Québec, Canada
- Nancy St-Onge
- PERFORM Centre, Concordia University, Montréal, Québec, Canada
- Department of Health, Kinesiology and Applied Physiology, Concordia University, Montreal, QC, Canada
- Jean-Pierre Gagné
- École d'orthophonie Et d'audiologie, Université de Montréal, Montréal, Québec, Canada
- Karen Li
- Department of Psychology, Concordia University, Montréal, Québec, Canada
- PERFORM Centre, Concordia University, Montréal, Québec, Canada
75
Huber M, Reuter L, Weitgasser L, Pletzer B, Rösch S, Illg A. Hearing loss, depression, and cognition in younger and older adult CI candidates. Front Neurol 2023; 14:1272210. [PMID: 37900591 PMCID: PMC10613094 DOI: 10.3389/fneur.2023.1272210] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/03/2023] [Accepted: 09/04/2023] [Indexed: 10/31/2023] Open
Abstract
Background and Aim Hearing loss in old age is associated with cognitive decline and with depression. Our study aimed to investigate the relationship between hearing loss, cognitive decline, and secondary depressive symptoms in a sample of younger and older cochlear implant candidates with severe to profound hearing loss. Methods This study is part of a larger cohort study designed to provide baseline data before CI. Sixty-one cochlear implant candidates with hearing loss from adulthood onwards (>18 years) were enrolled. All had symmetrical sensorineural hearing loss in both ears (four-frequency pure-tone average (PTA) difference of no more than 20 dB between ears). Individuals with primary affective disorders, psychosis, below-average intelligence, poor German language skills, visual impairment, or a medical diagnosis with potential impact on cognition (e.g., neurodegenerative diseases) were excluded. Four-frequency hearing thresholds (dB, PTA, better ear) were collected. Subjective hearing in noise was assessed with the Abbreviated Profile of Hearing Aid Benefit (APHAB). Clinical and subclinical depressive symptoms were assessed with the Beck Depression Inventory (BDI-II). Cognitive status was assessed with a neurocognitive test battery. Results Our findings revealed a significant negative association between subjective hearing in noise (APHAB subscale "Background Noise") and BDI-II. However, we did not observe any link between hearing thresholds, depression, and cognition. Additionally, no differences emerged between younger (25-54 years) and older (55-75 years) participants.
Unexpectedly, further unplanned analyses revealed correlations between subjective hearing in quiet environments (APHAB) and cognitive performance [phonemic fluency (Regensburg Word Fluency Test), cognitive flexibility (TMT-B), and nonverbal episodic memory (Nonverbal Learning Test)], and between subjective hearing of aversive/loud sounds (APHAB) and both cognitive performance [semantic word fluency (RWT), inhibition (Go/Nogo)] and depression. Duration of hearing loss and speech recognition in quiet (Freiburg Monosyllables) were not related to depression or cognitive performance. Conclusion The impacts of hearing loss on mood and on cognition appear to be independent of each other, suggesting relationships with distinct aspects of hearing loss. These results underscore the importance of considering not only conventional audiometric measures such as hearing thresholds but also variables related to hearing abilities during verbal communication in everyday life, both in quiet and in noisy settings.
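The exploratory associations above rest on Pearson correlations; a pure-Python sketch of the computation with hypothetical paired scores (the values illustrate the formula only, not the study's direction or effect size):

```python
def pearson_r(xs, ys):
    """Pearson correlation: covariance divided by the product of the
    (unnormalized) standard deviations; the shared n cancels."""
    n = len(xs)
    mean_x, mean_y = sum(xs) / n, sum(ys) / n
    cov = sum((x - mean_x) * (y - mean_y) for x, y in zip(xs, ys))
    sd_x = sum((x - mean_x) ** 2 for x in xs) ** 0.5
    sd_y = sum((y - mean_y) ** 2 for y in ys) ** 0.5
    return cov / (sd_x * sd_y)

# Hypothetical paired questionnaire and test scores for six participants:
scores_a = [20, 35, 40, 55, 60, 72]
scores_b = [3, 5, 9, 10, 14, 18]
r = pearson_r(scores_a, scores_b)   # strong positive correlation here
```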
Affiliation(s)
- Maria Huber
- Department of Otorhinolaryngology, Head and Neck Surgery, Paracelsus Medical University Salzburg, Salzburg, Austria
- Lisa Reuter
- Clinic for Otorhinolaryngology, Medical University of Hannover, Hannover, Germany
- Lennart Weitgasser
- Department of Otorhinolaryngology, Head and Neck Surgery, Paracelsus Medical University Salzburg, Salzburg, Austria
- Belinda Pletzer
- Department of Psychology, Center for Neurocognitive Research, University of Salzburg, Salzburg, Austria
- Sebastian Rösch
- Department of Otorhinolaryngology, Head and Neck Surgery, Paracelsus Medical University Salzburg, Salzburg, Austria
- Angelika Illg
- Clinic for Otorhinolaryngology, Medical University of Hannover, Hannover, Germany
76
Ness T, Langlois VJ, Kim AE, Novick JM. The State of Cognitive Control in Language Processing. PERSPECTIVES ON PSYCHOLOGICAL SCIENCE 2023:17456916231197122. [PMID: 37819251 DOI: 10.1177/17456916231197122] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/13/2023]
Abstract
Understanding language requires readers and listeners to cull meaning from fast-unfolding messages that often contain conflicting cues pointing to incompatible ways of interpreting the input (e.g., "The cat was chased by the mouse"). This article reviews mounting evidence from multiple methods demonstrating that cognitive control plays an essential role in resolving conflict during language comprehension. How does cognitive control accomplish this task? Psycholinguistic proposals have conspicuously failed to address this question. We introduce an account in which cognitive control aids language processing when cues conflict by sending top-down biasing signals that strengthen the interpretation supported by the most reliable evidence available. We also provide a computationally plausible model that solves the critical problem of how cognitive control "knows" which way to direct its biasing signal by allowing linguistic knowledge itself to issue crucial guidance. Such a mental architecture can explain a range of experimental findings, including how moment-to-moment shifts in cognitive-control state (its level of activity within a person) directly impact how quickly and successfully language comprehension is achieved.
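The biasing account described above can be caricatured in a few lines: two candidate interpretations receive evidence scores, and a cognitive-control gain sharpens the competition in favor of the better-supported one. This is a toy illustration of the general idea, not the authors' model; all numbers are hypothetical.

```python
import math

def interpret(evidence, control_gain=1.0):
    """Softmax over interpretation scores; raising the gain strengthens
    whichever interpretation has the more reliable evidence."""
    scaled = [control_gain * e for e in evidence]
    z = sum(math.exp(s) for s in scaled)
    return [math.exp(s) / z for s in scaled]

# "The cat was chased by the mouse": hypothetical evidence scores for the
# syntax-supported parse (mouse as agent) vs. the plausibility-supported
# one (cat as agent).
low_control = interpret([0.6, 0.4], control_gain=1.0)
high_control = interpret([0.6, 0.4], control_gain=8.0)
```

With low control the two readings stay close to equiprobable; with high control the syntax-supported reading dominates, mimicking a biasing signal resolving the conflict.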
Affiliation(s)
- Tal Ness
- Department of Hearing and Speech Sciences and Program in Neuroscience and Cognitive Science, University of Maryland, College Park
- Valerie J Langlois
- Institute for Cognitive Science and Department of Psychology and Neuroscience, University of Colorado, Boulder
- Albert E Kim
- Institute for Cognitive Science and Department of Psychology and Neuroscience, University of Colorado, Boulder
- Jared M Novick
- Department of Hearing and Speech Sciences and Program in Neuroscience and Cognitive Science, University of Maryland, College Park
77
Oiticica J, Vasconcelos LGE, Horiuti MB. White noise effect on listening effort among patients with chronic tinnitus and normal hearing thresholds. Braz J Otorhinolaryngol 2023; 90:101340. [PMID: 39492232 PMCID: PMC10630604 DOI: 10.1016/j.bjorl.2023.101340] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2023] [Revised: 09/07/2023] [Accepted: 09/28/2023] [Indexed: 11/05/2024] Open
Abstract
OBJECTIVE This study investigated the effects of white noise (WN) on listening effort (LE) in subjects with chronic tinnitus and normal hearing thresholds. The study was a prospective, non-randomized, before-and-after, intra-participant intervention. METHODS Twenty-five subjects completed the following tests: conventional and high-frequency audiometry, acuphenometry, screening questionnaires for depression and anxiety symptoms, the Tinnitus Handicap Inventory (THI), the Montreal Cognitive Assessment, and the working memory (WM) test from the Working Memory Assessment Battery of the Federal University of Minas Gerais (WMAB) as the LE measure, under two conditions: No Added Noise (NAN) and with Added Noise (AN). RESULTS Seventeen participants (68%) performed better in the AN condition. Data analysis revealed a 45% improvement in the WMAB total span count in the AN setting, which was statistically significant (p = 0.001). CONCLUSION The subgroup of participants without traces of anxiety symptoms, with at most mild traces of depressive symptoms, with unilateral tinnitus, and with a THI level up to grade 2 showed improved WM performance in the presence of WN, which suggests a release of cognitive resources and less auditory effort under these combined conditions. EVIDENCE LEVEL 4.
Affiliation(s)
- Jeanne Oiticica
- Otorhinolaryngology/LIM32, Hospital das Clínicas HCFMUSP, Faculdade de Medicina, Universidade de São Paulo, São Paulo 01246-000, Brazil.
- Laura G E Vasconcelos
- Otorhinolaryngology/LIM32, Hospital das Clínicas HCFMUSP, Faculdade de Medicina, Universidade de São Paulo, São Paulo 01246-000, Brazil
- Mirella B Horiuti
- Otorhinolaryngology/LIM32, Hospital das Clínicas HCFMUSP, Faculdade de Medicina, Universidade de São Paulo, São Paulo 01246-000, Brazil
78
Jiang J, Johnson JCS, Requena-Komuro MC, Benhamou E, Sivasathiaseelan H, Chokesuwattanaskul A, Nelson A, Nortley R, Weil RS, Volkmer A, Marshall CR, Bamiou DE, Warren JD, Hardy CJD. Comprehension of acoustically degraded speech in Alzheimer's disease and primary progressive aphasia. Brain 2023; 146:4065-4076. [PMID: 37184986 PMCID: PMC10545509 DOI: 10.1093/brain/awad163] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/05/2022] [Revised: 04/20/2023] [Accepted: 04/27/2023] [Indexed: 05/17/2023] Open
Abstract
Successful communication in daily life depends on accurate decoding of speech signals that are acoustically degraded by challenging listening conditions. This process presents the brain with a demanding computational task that is vulnerable to neurodegenerative pathologies. However, despite recent intense interest in the link between hearing impairment and dementia, comprehension of acoustically degraded speech in these diseases has been little studied. Here we addressed this issue in a cohort of 19 patients with typical Alzheimer's disease and 30 patients representing the three canonical syndromes of primary progressive aphasia (non-fluent/agrammatic variant primary progressive aphasia; semantic variant primary progressive aphasia; logopenic variant primary progressive aphasia), compared to 25 healthy age-matched controls. As a paradigm for the acoustically degraded speech signals of daily life, we used noise-vocoding: synthetic division of the speech signal into frequency channels constituted from amplitude-modulated white noise, such that fewer channels convey less spectrotemporal detail thereby reducing intelligibility. We investigated the impact of noise-vocoding on recognition of spoken three-digit numbers and used psychometric modelling to ascertain the threshold number of noise-vocoding channels required for 50% intelligibility by each participant. Associations of noise-vocoded speech intelligibility threshold with general demographic, clinical and neuropsychological characteristics and regional grey matter volume (defined by voxel-based morphometry of patients' brain images) were also assessed. Mean noise-vocoded speech intelligibility threshold was significantly higher in all patient groups than healthy controls, and significantly higher in Alzheimer's disease and logopenic variant primary progressive aphasia than semantic variant primary progressive aphasia (all P < 0.05). 
In a receiver operating characteristic analysis, vocoded intelligibility threshold discriminated Alzheimer's disease, non-fluent variant and logopenic variant primary progressive aphasia patients very well from healthy controls. Further, this central hearing measure correlated with overall disease severity but not with peripheral hearing or clear speech perception. Neuroanatomically, after correcting for multiple voxel-wise comparisons in predefined regions of interest, impaired noise-vocoded speech comprehension across syndromes was significantly associated (P < 0.05) with atrophy of left planum temporale, angular gyrus and anterior cingulate gyrus: a cortical network that has previously been widely implicated in processing degraded speech signals. Our findings suggest that the comprehension of acoustically altered speech captures an auditory brain process relevant to daily hearing and communication in major dementia syndromes, with novel diagnostic and therapeutic implications.
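The psychometric-modelling step above fits intelligibility as a function of the number of noise-vocoding channels and reads off the 50% point. A sketch with hypothetical data and a simple grid-search fit (the study's actual fitting procedure may differ):

```python
import math

# Hypothetical per-participant data: proportion of spoken numbers correctly
# recognized at each number of noise-vocoding channels. Not the study's data.
channels = [1, 2, 3, 4, 6, 8, 16]
prop_correct = [0.02, 0.10, 0.35, 0.60, 0.85, 0.95, 0.99]

def logistic(x, midpoint, slope):
    """Psychometric function; its midpoint is the 50%-correct threshold."""
    return 1.0 / (1.0 + math.exp(-slope * (x - midpoint)))

def fit_threshold(xs, ys):
    """Least-squares grid search over (midpoint, slope); returns the fitted
    midpoint, i.e. the threshold number of channels for 50% intelligibility."""
    best = None
    for mid in [m / 10 for m in range(10, 101)]:      # 1.0 .. 10.0 channels
        for slope in [s / 10 for s in range(1, 51)]:  # 0.1 .. 5.0
            err = sum((logistic(x, mid, slope) - y) ** 2
                      for x, y in zip(xs, ys))
            if best is None or err < best[0]:
                best = (err, mid, slope)
    return best[1]

threshold_50 = fit_threshold(channels, prop_correct)
```

A higher fitted threshold means the listener needs more spectrotemporal detail to reach 50% intelligibility, which is the direction of impairment the patient groups showed.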
Affiliation(s)
- Jessica Jiang
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Jeremy C S Johnson
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Maï-Carmen Requena-Komuro
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Kidney Cancer Program, UT Southwestern Medical Centre, Dallas, TX 75390, USA
- Elia Benhamou
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Harri Sivasathiaseelan
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Anthipa Chokesuwattanaskul
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Division of Neurology, Department of Internal Medicine, King Chulalongkorn Memorial Hospital, Thai Red Cross Society, Bangkok 10330, Thailand
- Annabel Nelson
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Ross Nortley
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Wexham Park Hospital, Frimley Health NHS Foundation Trust, Slough SL2 4HL, UK
- Rimona S Weil
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Anna Volkmer
- Division of Psychology and Language Sciences, University College London, London WC1H 0AP, UK
- Charles R Marshall
- Preventive Neurology Unit, Wolfson Institute of Population Health, Queen Mary University of London, London EC1M 6BQ, UK
- Doris-Eva Bamiou
- UCL Ear Institute and UCL/UCLH Biomedical Research Centre, National Institute of Health Research, University College London, London WC1X 8EE, UK
- Jason D Warren
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
- Chris J D Hardy
- Dementia Research Centre, Department of Neurodegenerative Disease, UCL Queen Square Institute of Neurology, University College London, London WC1N 3AR, UK
79
Shetty HN, Raju S, Singh S S. The relationship between age, acceptable noise level, and listening effort in middle-aged and older-aged individuals. J Otol 2023; 18:220-229. [PMID: 37877073 PMCID: PMC10593579 DOI: 10.1016/j.joto.2023.09.004] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/12/2023] [Revised: 09/14/2023] [Accepted: 09/18/2023] [Indexed: 10/26/2023] Open
Abstract
Objective The purpose of the study was to evaluate listening effort in adults who experience varied annoyance towards noise. Materials and methods Fifty native Kannada-speaking adults aged 41-68 years participated. We evaluated each participant's acceptable noise level (ANL) while listening to speech. Listening effort was then assessed with a sentence-final word-identification and recall test at 0 dB SNR (a less favorable condition) and 4 dB SNR (a relatively favorable condition), with repeat and recall scores obtained for each condition. Results The regression model revealed that listening effort increased by 0.6% at 0 dB SNR and by 0.5% at 4 dB SNR with each additional year of age, and by 0.9% at 0 dB SNR and by 0.7% at 4 dB SNR with each 1 dB increase in ANL. With age controlled, moderate and mild negative correlations were noted between listening effort and annoyance towards noise at 0 dB SNR and 4 dB SNR, respectively. Conclusion Listening effort increases with age, and the effect is greater in less favorable than in relatively favorable conditions. However, when annoyance towards noise was controlled, the impact of age on listening effort was reduced; conversely, listening effort correlated with the level of annoyance once the age effect was controlled. Furthermore, listening effort could be predicted from the ANL to a moderate degree.
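The reported regression slopes can be turned into illustrative predictions. In the sketch below only the slopes come from the abstract; intercepts are deliberately omitted, so the output is a predicted change in listening effort, not an absolute level.

```python
# Per-unit slopes reported in the abstract (percentage points of listening
# effort): per year of age and per dB of acceptable noise level (ANL).
SLOPES = {
    "0 dB SNR": (0.6, 0.9),
    "4 dB SNR": (0.5, 0.7),
}

def effort_change(condition, years_older, anl_increase_db):
    """Predicted increase in listening effort relative to a baseline
    listener, combining the age and ANL slopes for one SNR condition."""
    age_slope, anl_slope = SLOPES[condition]
    return age_slope * years_older + anl_slope * anl_increase_db

# Ten additional years of age plus a 5 dB higher ANL at 0 dB SNR:
delta_0db = effort_change("0 dB SNR", 10, 5)   # 0.6*10 + 0.9*5 = 10.5
delta_4db = effort_change("4 dB SNR", 10, 5)   # 0.5*10 + 0.7*5 = 8.5
```

The smaller change at 4 dB SNR reflects the abstract's conclusion that age and annoyance weigh more heavily in the less favorable condition.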
Affiliation(s)
- Suma Raju
- Department of Speech-Language Pathology, JSS Institute of Speech and Hearing, Mysuru, Karnataka, India
- Sanjana Singh S
- Department of Audiology, JSS Institute of Speech and Hearing, Mysuru, Karnataka, India
80
Arican-Dinc B, Gable SL. Responsiveness in romantic partners' interactions. Curr Opin Psychol 2023; 53:101652. [PMID: 37515977 DOI: 10.1016/j.copsyc.2023.101652] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/02/2023] [Revised: 06/23/2023] [Accepted: 06/25/2023] [Indexed: 07/31/2023]
Abstract
Close relationships, such as romantic partner dyads, involve numerous social exchanges in myriad contexts. During these exchanges, when one interaction partner discloses information, the other partner typically communicates a response. The discloser then evaluates the extent to which that response conveys that the responder understood their thoughts, goals, and needs, validated their position, and cared for their well-being. The degree to which the discloser believes the partner showed this understanding, validation, and caring is known as perceived responsiveness. Perceived responsiveness has long been viewed as a fundamental construct in the development and maintenance of intimacy in romantic relationships, and it is a common currency at the heart of interactions across multiple contexts, such as social support, gratitude, and capitalization interactions. Being a responsive interaction partner starts with understanding what the other is conveying and how they view the information; thus, a critical first step in conveying responsiveness is listening. But while listening is the first step, and an indicator of the responder's motivation, the responder must also have the ability and motivation to convey their understanding, validation, and caring to the discloser.
81
Rahne T, Wagner TM, Kopsch AC, Plontke SK, Wagner L. Influence of Age on Speech Recognition in Noise and Hearing Effort in Listeners with Age-Related Hearing Loss. J Clin Med 2023; 12:6133. [PMID: 37834776 PMCID: PMC10573265 DOI: 10.3390/jcm12196133] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/30/2023] [Revised: 09/20/2023] [Accepted: 09/20/2023] [Indexed: 10/15/2023] Open
Abstract
The aim of this study was to measure how age affects the speech recognition threshold (SRT50) on the Oldenburg Sentence Test (OLSA) and the listening effort at the corresponding signal-to-noise ratio (SNRcut). The study also investigated the effect of the spatial configuration of sound sources and of the noise signal on SRT50 and SNRcut. To this end, olnoise and icra5 noise were presented from one or more spatial locations to the front and back. Ninety-nine participants with age-related hearing loss, aged 18-80 years (age groups 18-30, 31-40, 41-50, 51-60, 61-70, and 71-80), took part. Speech recognition and listening effort in noise were measured and compared between age groups, spatial sound configurations, and noise signals. Speech recognition in noise declined with age, with the decline becoming significant from the 51-60 age group. The age-related change in SRT50 was greater for icra5 noise than for olnoise. For all age groups, SRT50 and SNRcut were better for icra5 noise than for olnoise. The measured age-related reference data for SRT50 and SNRcut can be used in further studies of listeners with age-related hearing loss and hearing aid or implant users.
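Sentence tests such as the OLSA estimate SRT50 adaptively: the SNR is stepped down after correct responses and up after errors until performance converges on 50% correct. A sketch with a simulated listener; the listener's true SRT, the step size, and the trial count are hypothetical, and real matrix tests use more refined adaptive rules.

```python
import math
import random

def simulated_response(snr_db, true_srt_db=-7.0, slope=1.0):
    """Listener whose probability correct is logistic in SNR
    (exactly 0.5 at the true SRT)."""
    p_correct = 1.0 / (1.0 + math.exp(-slope * (snr_db - true_srt_db)))
    return random.random() < p_correct

def run_staircase(start_snr=0.0, step=1.0, trials=200, seed=1):
    """1-up/1-down staircase: harder after a correct trial, easier after
    an error; the average of later reversal SNRs estimates SRT50."""
    random.seed(seed)
    snr, last_direction, reversal_snrs = start_snr, None, []
    for _ in range(trials):
        direction = -1 if simulated_response(snr) else +1
        if last_direction is not None and direction != last_direction:
            reversal_snrs.append(snr)
        last_direction = direction
        snr += direction * step
    late = reversal_snrs[4:]   # discard early reversals (approach phase)
    return sum(late) / len(late)

srt50_estimate = run_staircase()   # should land near the true SRT of -7 dB
```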
82
Philips M, Schneck SM, Levy DF, Wilson SM. Modality-Specificity of the Neural Correlates of Linguistic and Non-Linguistic Demand. NEUROBIOLOGY OF LANGUAGE (CAMBRIDGE, MASS.) 2023; 4:516-535. [PMID: 37841966 PMCID: PMC10575553 DOI: 10.1162/nol_a_00114] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/15/2022] [Accepted: 06/28/2023] [Indexed: 10/17/2023]
Abstract
Imaging studies of language processing in clinical populations can be complicated to interpret for several reasons, one being the difficulty of matching the effortfulness of processing across individuals or tasks. To better understand how effortful linguistic processing is reflected in functional activity, we investigated the neural correlates of task difficulty in linguistic and non-linguistic contexts in the auditory modality and then compared our findings to a recent analogous experiment in the visual modality in a different cohort. Nineteen neurologically normal individuals were scanned with fMRI as they performed a linguistic task (semantic matching) and a non-linguistic task (melodic matching), each with two levels of difficulty. We found that left hemisphere frontal and temporal language regions, as well as the right inferior frontal gyrus, were modulated by linguistic demand and not by non-linguistic demand. This was broadly similar to what was previously observed in the visual modality. In contrast, the multiple demand (MD) network, a set of brain regions thought to support cognitive flexibility in many contexts, was modulated neither by linguistic demand nor by non-linguistic demand in the auditory modality. This finding was in striking contradistinction to what was previously observed in the visual modality, where the MD network was robustly modulated by both linguistic and non-linguistic demand. Our findings suggest that while the language network is modulated by linguistic demand irrespective of modality, modulation of the MD network by linguistic demand is not inherent to linguistic processing, but rather depends on specific task factors.
Affiliation(s)
- Mackenzie Philips
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Sarah M. Schneck
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Deborah F. Levy
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- Stephen M. Wilson
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN, USA
- School of Health and Rehabilitation Sciences, University of Queensland, Brisbane, Australia
83
Herrera C, Whittle N, Leek MR, Brodbeck C, Lee G, Barcenas C, Barnes S, Holshouser B, Yi A, Venezia JH. Cortical networks for recognition of speech with simultaneous talkers. Hear Res 2023; 437:108856. [PMID: 37531847 DOI: 10.1016/j.heares.2023.108856] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/28/2022] [Revised: 07/05/2023] [Accepted: 07/21/2023] [Indexed: 08/04/2023]
Abstract
The relative contributions of superior temporal vs. inferior frontal and parietal networks to recognition of speech in a background of competing speech remain unclear, although the contributions themselves are well established. Here, we use fMRI with spectrotemporal modulation transfer function (ST-MTF) modeling to examine the speech information represented in temporal vs. frontoparietal networks for two speech recognition tasks with and without a competing talker. Specifically, 31 listeners completed two versions of a three-alternative forced choice competing speech task: "Unison" and "Competing", in which a female (target) and a male (competing) talker uttered identical or different phrases, respectively. Spectrotemporal modulation filtering (i.e., acoustic distortion) was applied to the two-talker mixtures and ST-MTF models were generated to predict brain activation from differences in spectrotemporal-modulation distortion on each trial. Three cortical networks were identified based on differential patterns of ST-MTF predictions and the resultant ST-MTF weights across conditions (Unison, Competing): a bilateral superior temporal (S-T) network, a frontoparietal (F-P) network, and a network distributed across cortical midline regions and the angular gyrus (M-AG). The S-T network and the M-AG network responded primarily to spectrotemporal cues associated with speech intelligibility, regardless of condition, but the S-T network responded to a greater range of temporal modulations suggesting a more acoustically driven response. The F-P network responded to the absence of intelligibility-related cues in both conditions, but also to the absence (presence) of target-talker (competing-talker) vocal pitch in the Competing condition, suggesting a generalized response to signal degradation. Task performance was best predicted by activation in the S-T and F-P networks, but in opposite directions (S-T: more activation = better performance; F-P: vice versa). 
Moreover, S-T network predictions were entirely ST-MTF mediated while F-P network predictions were ST-MTF mediated only in the Unison condition, suggesting an influence from non-acoustic sources (e.g., informational masking) in the Competing condition. Activation in the M-AG network was weakly positively correlated with performance and this relation was entirely superseded by those in the S-T and F-P networks. Regarding contributions to speech recognition, we conclude: (a) superior temporal regions play a bottom-up, perceptual role that is not qualitatively dependent on the presence of competing speech; (b) frontoparietal regions play a top-down role that is modulated by competing speech and scales with listening effort; and (c) performance ultimately relies on dynamic interactions between these networks, with ancillary contributions from networks not involved in speech processing per se (e.g., the M-AG network).
Affiliation(s)
- Nicole Whittle
- VA Loma Linda Healthcare System, Loma Linda, CA, United States
- Marjorie R Leek
- VA Loma Linda Healthcare System, Loma Linda, CA, United States; Loma Linda University, Loma Linda, CA, United States
- Grace Lee
- Loma Linda University, Loma Linda, CA, United States
- Samuel Barnes
- Loma Linda University, Loma Linda, CA, United States
- Alex Yi
- VA Loma Linda Healthcare System, Loma Linda, CA, United States; Loma Linda University, Loma Linda, CA, United States
- Jonathan H Venezia
- VA Loma Linda Healthcare System, Loma Linda, CA, United States; Loma Linda University, Loma Linda, CA, United States.
84
An S, Jo E, Jun SB, Sung JE. Effects of cochlear implantation on cognitive decline in older adults: A systematic review and meta-analysis. Heliyon 2023; 9:e19703. [PMID: 37809368 PMCID: PMC10558942 DOI: 10.1016/j.heliyon.2023.e19703]
Abstract
Background: Hearing loss has been reported as the most significant modifiable risk factor for dementia, but it is still unknown whether auditory rehabilitation can practically prevent cognitive decline. We aimed to systematically analyze the longitudinal effects of auditory rehabilitation via cochlear implants (CIs). Methods: In this systematic review and meta-analysis, we searched relevant literature published from January 1, 2000 to April 30, 2022, using electronic databases, and selected studies in which CIs were performed mainly on older adults and follow-up assessments were conducted in both domains: speech perception and cognitive function. A random-effects meta-analysis was conducted for each domain and for each timepoint comparison (pre-CI vs. six months post-CI; six months post-CI vs. 12 months post-CI; pre-CI vs. 12 months post-CI), and heterogeneity was assessed using Cochran's Q test. Findings: Of the 1918 retrieved articles, 20 research papers (648 CI subjects) were included. The results demonstrated that speech perception was rapidly enhanced after CI, whereas cognitive function improved at different rates for different subtypes: executive function improved significantly and steadily up to 12 months post-CI (g = 0.281, p < 0.001; g = 0.115, p = 0.003; g = 0.260, p < 0.001, in the order of the timepoint comparisons above); verbal memory was significantly enhanced at six months post-CI and maintained until 12 months post-CI (g = 0.296, p = 0.002; g = 0.095, p = 0.427; g = 0.401, p < 0.001); non-verbal memory showed no considerable progress at six months post-CI but significant improvement at 12 months post-CI (g = -0.053, p = 0.723; g = 0.112, p = 0.089; g = 0.214, p = 0.023). Interpretation: These outcomes demonstrate that auditory rehabilitation via CIs can have a long-term positive impact on cognitive abilities. Given that older adults' cognitive abilities are on a trajectory of progressive decline with age, these results highlight the need to increase the adoption of CIs among this population.
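The pooled Hedges' g values and Cochran's Q reported above come from standard random-effects machinery: inverse-variance weights plus an estimate of between-study variance. A minimal sketch using the DerSimonian-Laird estimator, with hypothetical per-study effect sizes rather than this meta-analysis's data:

```python
import math

def dersimonian_laird(effects, variances):
    """Random-effects pooling of per-study effect sizes (e.g., Hedges' g)."""
    w = [1.0 / v for v in variances]                      # fixed-effect weights
    fixed = sum(wi * g for wi, g in zip(w, effects)) / sum(w)
    # Cochran's Q: weighted squared deviations from the fixed-effect estimate
    q = sum(wi * (g - fixed) ** 2 for wi, g in zip(w, effects))
    df = len(effects) - 1
    c = sum(w) - sum(wi ** 2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - df) / c)                         # between-study variance
    w_re = [1.0 / (v + tau2) for v in variances]          # random-effects weights
    pooled = sum(wi * g for wi, g in zip(w_re, effects)) / sum(w_re)
    se = math.sqrt(1.0 / sum(w_re))
    return pooled, se, q, tau2

# Hypothetical per-study Hedges' g values and sampling variances (not the paper's data)
g_values = [0.30, 0.12, 0.26, 0.40]
variances = [0.020, 0.030, 0.025, 0.040]
pooled, se, q, tau2 = dersimonian_laird(g_values, variances)
```

With real data, each study contributes one g and its sampling variance per timepoint comparison; Q compared against a chi-square distribution with k - 1 degrees of freedom gives the heterogeneity test.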
Affiliation(s)
- Sora An
- Department of Communication Disorders, Ewha Womans University, Seoul, 03760, Republic of Korea
- Eunha Jo
- Department of Communication Disorders, Ewha Womans University, Seoul, 03760, Republic of Korea
- Sang Beom Jun
- Department of Electronic and Electrical Engineering, Ewha Womans University, Seoul, 03760, Republic of Korea
- Graduate Program in Smart Factory, Ewha Womans University, Seoul, 03760, Republic of Korea
- Department of Brain and Cognitive Sciences, Ewha Womans University, Seoul, 03760, Republic of Korea
- Jee Eun Sung
- Department of Communication Disorders, Ewha Womans University, Seoul, 03760, Republic of Korea
85
Portelli D, Ciodaro F, Loteta S, Alberti G, Bruno R. Audiological assessment with Matrix sentence test of percutaneous vs transcutaneous bone-anchored hearing aids: a pilot study. Eur Arch Otorhinolaryngol 2023; 280:4065-4072. [PMID: 36933021 DOI: 10.1007/s00405-023-07918-w]
Abstract
PURPOSE: The study evaluated whether there were differences between two types of bone-anchored hearing aids (BAHA), percutaneous vs transcutaneous implants, in terms of audiological and psychosocial outcomes. METHODS: Eleven patients were enrolled. Inclusion criteria were: conductive or mixed hearing loss in the implanted ear with a bone conduction pure-tone average (BC PTA) of the hearing threshold at 500, 1000, 2000, and 3000 Hz ≤ 55 dB HL, and age > 5 years. Patients were assigned to two groups: percutaneous implant (BAHA Connect) and transcutaneous implant (BAHA Attract). Pure-tone audiometry, speech audiometry, free-field pure-tone and speech audiometry with the hearing aid, and the Matrix sentence test were performed. The Satisfaction with Amplification in Daily Life (SADL) questionnaire, the Abbreviated Profile of Hearing Aid Benefit (APHAB) questionnaire, and the Glasgow Benefit Inventory (GBI) were used to assess the psychosocial and audiological benefits provided by the implant and the change in quality of life after surgery. RESULTS: No differences were found when comparing Matrix speech recognition thresholds (SRTs). The APHAB and GBI questionnaires showed no statistically significant differences on any subscale or on the global score. The SADL questionnaire showed a difference on the "Personal Image" subscale, with a better score for the transcutaneous implant, and its global score also differed significantly between groups; the other subscales showed no significant differences. A Spearman's ρ correlation test was used to evaluate whether age influenced the SRT results; no correlation was found between age and SRT. The same test confirmed a negative correlation between SRT and the global benefit score of the APHAB questionnaire. CONCLUSION: The current research found no statistically significant differences between percutaneous and transcutaneous implants. The Matrix sentence test showed that the two implants are comparable for speech-in-noise intelligibility. In practice, the choice of implant type can therefore be made according to the patient's personal needs, the surgeon's experience, and the patient's anatomy.
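Spearman's ρ, used twice in the study above, is simply a Pearson correlation computed on rank-transformed data. A tie-free sketch with invented SRT and APHAB benefit values (not the study's data; the invented values are perfectly monotone, so ρ comes out exactly -1):

```python
import math

def pearson(a, b):
    """Plain Pearson correlation coefficient."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

def spearman_rho(a, b):
    """Spearman's rho: Pearson correlation of the ranks.
    This sketch assumes no ties (tied values would need midranks)."""
    def ranks(v):
        order = sorted(range(len(v)), key=lambda i: v[i])
        r = [0] * len(v)
        for rank, i in enumerate(order, start=1):
            r[i] = rank
        return r
    return pearson(ranks(a), ranks(b))

# Hypothetical SRTs (dB SNR; lower is better) and APHAB global benefit scores:
# better (lower) SRT pairs with higher benefit, i.e., a negative correlation
srt = [-7.1, -6.4, -8.0, -5.2, -6.9]
benefit = [31.0, 24.5, 40.2, 18.3, 28.7]
rho = spearman_rho(srt, benefit)
```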
Affiliation(s)
- Daniele Portelli
- Unit of Otorhinolaryngology, Department of Adult and Development Age Human Pathology "Gaetano Barresi", Policlinico "G. Martino", University of Messina, Via Consolare Valeria 1, 98125, Messina, ME, Italy.
- Francesco Ciodaro
- Unit of Otorhinolaryngology, Department of Adult and Development Age Human Pathology "Gaetano Barresi", Policlinico "G. Martino", University of Messina, Via Consolare Valeria 1, 98125, Messina, ME, Italy
- Sabrina Loteta
- Unit of Otorhinolaryngology, Department of Adult and Development Age Human Pathology "Gaetano Barresi", Policlinico "G. Martino", University of Messina, Via Consolare Valeria 1, 98125, Messina, ME, Italy
- Giuseppe Alberti
- Unit of Otorhinolaryngology, Department of Adult and Development Age Human Pathology "Gaetano Barresi", Policlinico "G. Martino", University of Messina, Via Consolare Valeria 1, 98125, Messina, ME, Italy
- Rocco Bruno
- Unit of Otorhinolaryngology, Department of Adult and Development Age Human Pathology "Gaetano Barresi", Policlinico "G. Martino", University of Messina, Via Consolare Valeria 1, 98125, Messina, ME, Italy
86
Visentin C, Pellegatti M, Garraffa M, Di Domenico A, Prodi N. Individual characteristics moderate listening effort in noisy classrooms. Sci Rep 2023; 13:14285. [PMID: 37652970 PMCID: PMC10471719 DOI: 10.1038/s41598-023-40660-1]
Abstract
Comprehending the teacher's message when other students are chatting is challenging. Even though the sound environment is the same for the whole class, differences in individual performance can be observed, which might depend on a variety of personal factors and their specific interaction with the listening condition. This study was designed to explore the role of individual characteristics (reading comprehension, inhibitory control, noise sensitivity) when primary school children perform a listening comprehension task in the presence of a two-talker masker. The results indicated that this type of noise impairs children's accuracy, effort, and motivation during the task. Its specific impact depended on the level and was modulated by the child's characteristics. In particular, reading comprehension was found to support task accuracy, whereas inhibitory control moderated the effect of listening condition on the two measures of listening effort included in the study (response time and self-ratings), though with a different pattern of association for each. A moderating effect of noise sensitivity on perceived listening effort was also observed. Understanding the relationship between individual characteristics and the classroom sound environment has practical implications for the acoustic design of spaces that promote students' well-being and support their learning performance.
Affiliation(s)
- Chiara Visentin
- Department of Engineering, University of Ferrara, Via Saragat 1, 44122, Ferrara, Italy.
- Institute for Renewable Energy, Eurac Research, Via A. Volta/A. Volta Straße 13/A, 39100, Bolzano-Bozen, Italy.
- Matteo Pellegatti
- Department of Engineering, University of Ferrara, Via Saragat 1, 44122, Ferrara, Italy
- Maria Garraffa
- School of Health Sciences, University of East Anglia, Norwich Research Park, Norwich, Norfolk, NR4 7TJ, UK
- Alberto Di Domenico
- Department of Psychological, Health and Territorial Sciences, University of Chieti-Pescara, Via dei Vestini 31, 66100, Chieti, Italy
- Nicola Prodi
- Department of Engineering, University of Ferrara, Via Saragat 1, 44122, Ferrara, Italy
87
Rogers CS, Jones MS, McConkey S, McLaughlin DJ, Peelle JE. Real-time feedback reduces participant motion during task-based fMRI. bioRxiv 2023:2023.01.12.523791. [PMID: 36711722 PMCID: PMC9882243 DOI: 10.1101/2023.01.12.523791]
Abstract
The potential negative impact of head movement during fMRI has long been appreciated. Although a variety of prospective and retrospective approaches have been developed to help mitigate these effects, reducing head movement in the first place remains the most appealing strategy for optimizing data quality. Real-time interventions, in which participants are provided feedback regarding their scan-to-scan motion, have recently shown promise in reducing motion during resting-state fMRI. However, whether feedback might similarly reduce motion during task-based fMRI is an open question. In particular, it is unclear whether participants can effectively monitor motion feedback while attending to task-related demands. Here we assessed whether a combination of real-time and between-run feedback could reduce head motion during task-based fMRI. During an auditory word repetition task, 78 adult participants (aged 19-81) were pseudorandomly assigned to receive feedback or not. Feedback was provided by FIRMM software, which used real-time calculation of realignment parameters to estimate participant motion. We quantified movement using framewise displacement (FD). We found that motion feedback resulted in a statistically significant reduction in participant head motion, with a small-to-moderate effect size (reducing average FD from 0.347 to 0.282). Reductions were most apparent in high-motion events. We conclude that under some circumstances real-time feedback may reduce head motion during task-based fMRI, although its effectiveness may depend on the specific participant population and task demands of a given study.
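Framewise displacement in the Power style sums the absolute frame-to-frame changes in the six realignment parameters, with rotations converted to arc length on a 50 mm sphere. A sketch with made-up realignment parameters (the conversion radius and the zero-FD convention for the first frame are the common defaults, not values from this paper):

```python
def framewise_displacement(params, radius=50.0):
    """Framewise displacement per frame: sum of absolute changes in the six
    realignment parameters; rotations (radians) become mm via arc length on
    a sphere of the given radius."""
    fd = [0.0]  # FD is undefined for the first frame; set to 0 by convention
    for prev, cur in zip(params, params[1:]):
        d = [abs(c - p) for p, c in zip(prev, cur)]
        trans = sum(d[:3])                    # translations, already in mm
        rot = sum(radius * r for r in d[3:])  # rotations, converted to mm
        fd.append(trans + rot)
    return fd

# Hypothetical realignment parameters: (x, y, z in mm; pitch, roll, yaw in rad)
motion = [
    (0.00, 0.00, 0.00, 0.000, 0.000, 0.000),
    (0.05, 0.02, 0.01, 0.001, 0.000, 0.000),
    (0.05, 0.02, 0.01, 0.001, 0.000, 0.000),
]
fd = framewise_displacement(motion)
```

The second frame moves 0.08 mm in translation plus 0.05 mm of rotational arc, giving FD = 0.13 mm; the third frame does not move, giving FD = 0.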
Affiliation(s)
- Michael S Jones
- Department of Otolaryngology, Washington University in St. Louis
- Sarah McConkey
- Department of Otolaryngology, Washington University in St. Louis
- Jonathan E Peelle
- Center for Cognitive and Brain Health, Northeastern University
- Department of Communication Sciences and Disorders, Northeastern University
- Department of Psychology, Northeastern University
88
McHaney JR, Hancock KE, Polley DB, Parthasarathy A. Sensory representations and pupil-indexed listening effort provide complementary contributions to multi-talker speech intelligibility. bioRxiv 2023:2023.08.13.553131. [PMID: 37645975 PMCID: PMC10462058 DOI: 10.1101/2023.08.13.553131]
Abstract
Optimal speech perception in noise requires successful separation of the target speech stream from multiple competing background speech streams. The ability to segregate these competing speech streams depends on the fidelity of bottom-up neural representations of sensory information in the auditory system and top-down influences of effortful listening. Here, we use objective neurophysiological measures of bottom-up temporal processing, envelope-following responses (EFRs) to amplitude-modulated tones, and investigate their interactions with pupil-indexed listening effort, as they relate to performance on the Quick Speech-in-Noise (QuickSIN) test in young adult listeners with clinically normal hearing thresholds. We developed an approach using ear-canal electrodes and adjusting electrode montages for modulation rate ranges, which extended the range of reliable EFR measurements to rates as high as 1024 Hz. Pupillary responses revealed changes in listening effort at the two most difficult signal-to-noise ratios (SNRs), but behavioral deficits at the hardest SNR only. Neither pupil-indexed listening effort nor the slope of the EFR decay function independently related to QuickSIN performance. However, a linear model using the combination of EFRs and pupil metrics significantly explained variance in QuickSIN performance. These results suggest a synergistic interaction between bottom-up sensory coding and top-down measures of listening effort as they relate to speech perception in noise. These findings can inform the development of next-generation tests for hearing deficits in listeners with normal hearing thresholds that incorporate a multi-dimensional approach to understanding speech intelligibility deficits.
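The abstract's key statistical point, that two predictors can jointly explain variance that neither explains alone, falls out of the standard two-predictor regression identity. A sketch with hypothetical suppression-style data (invented numbers, not the study's EFR or pupil measures):

```python
import math

def pearson(a, b):
    """Plain Pearson correlation coefficient."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

def r2_two_predictors(y, x1, x2):
    """R^2 of y regressed jointly on x1 and x2, from pairwise correlations
    (the standard identity for a two-predictor OLS model)."""
    r1, r2, r12 = pearson(x1, y), pearson(x2, y), pearson(x1, x2)
    return (r1 ** 2 + r2 ** 2 - 2 * r1 * r2 * r12) / (1 - r12 ** 2)

# Hypothetical scores: y depends on the *difference* of two highly correlated
# predictors, so each predictor alone correlates only weakly with y
x1 = [1.0, 2.0, 3.0, 4.0, 5.0, 6.0]
noise = [0.5, -0.3, 0.2, -0.4, 0.1, -0.1]
x2 = [a + e for a, e in zip(x1, noise)]
y = [a - b for a, b in zip(x1, x2)]   # equals -noise exactly
r2_joint = r2_two_predictors(y, x1, x2)
r2_alone = max(pearson(x1, y) ** 2, pearson(x2, y) ** 2)
```

Here y is an exact linear combination of x1 and x2, so the joint R² approaches 1 even though each predictor alone explains under 20% of the variance.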
Affiliation(s)
- Jacie R. McHaney
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA
- Kenneth E. Hancock
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA
- Daniel B. Polley
- Department of Otolaryngology – Head and Neck Surgery, Harvard Medical School, Boston, MA
- Eaton-Peabody Laboratories, Massachusetts Eye and Ear, Boston, MA
- Aravindakshan Parthasarathy
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA
- Department of Bioengineering, University of Pittsburgh, Pittsburgh, PA
89
Cui ME, Herrmann B. Eye Movements Decrease during Effortful Speech Listening. J Neurosci 2023; 43:5856-5869. [PMID: 37491313 PMCID: PMC10423048 DOI: 10.1523/jneurosci.0240-23.2023]
Abstract
Hearing impairment affects many older adults but is often diagnosed decades after speech comprehension in noisy situations has become effortful. Accurate assessment of listening effort may thus help diagnose hearing impairment earlier. However, pupillometry, the most widely used approach to assessing listening effort, has limitations that hinder its use in practice. The current study explores a novel way to assess listening effort through eye movements. Building on cognitive and neurophysiological work, we examine the hypothesis that eye movements decrease when speech listening becomes challenging. In three experiments with human participants of both sexes, we demonstrate, consistent with this hypothesis, that fixation duration increases and spatial gaze dispersion decreases with increasing speech masking. Eye movements decreased during effortful speech listening across different visual scenes (free viewing, object tracking) and speech materials (simple sentences, naturalistic stories). In contrast, pupillometry was less sensitive to speech masking during story listening, suggesting that pupillometric measures may not be as effective for assessing listening effort in naturalistic speech-listening paradigms. Our results reveal a critical link between eye movements and cognitive load, suggesting that neural activity in brain regions that support the regulation of eye movements, such as the frontal eye field and superior colliculus, is modulated when listening is effortful. SIGNIFICANCE STATEMENT: Assessment of listening effort is critical for early diagnosis of age-related hearing loss. Pupillometry is the most used method but has several disadvantages. The current study explores a novel way to assess listening effort through eye movements. We examine the hypothesis that eye movements decrease when speech listening becomes effortful and demonstrate, consistent with this hypothesis, that fixation duration increases and gaze dispersion decreases with increasing speech masking. Eye movements decreased during effortful speech listening across different visual scenes (free viewing, object tracking) and speech materials (sentences, naturalistic stories). Our results reveal a critical link between eye movements and cognitive load, suggesting that neural activity in brain regions supporting the regulation of eye movements is modulated when listening is effortful.
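One simple way to quantify spatial gaze dispersion of the kind described above is the RMS distance of gaze samples from their centroid; the exact metric is an assumption here, not necessarily the study's definition. A sketch with invented gaze traces:

```python
import math

def spatial_gaze_dispersion(xs, ys):
    """RMS distance of gaze samples from their centroid (screen units):
    larger values mean gaze wandered more widely over the trial."""
    cx = sum(xs) / len(xs)
    cy = sum(ys) / len(ys)
    return math.sqrt(
        sum((x - cx) ** 2 + (y - cy) ** 2 for x, y in zip(xs, ys)) / len(xs)
    )

# Hypothetical gaze samples: widely dispersed vs. tightly clustered fixation
free_viewing = [(0, 0), (4, 0), (0, 3), (4, 3)]
effortful = [(2.0, 1.5), (2.1, 1.5), (2.0, 1.6), (1.9, 1.4)]
d_free = spatial_gaze_dispersion(*zip(*free_viewing))
d_effort = spatial_gaze_dispersion(*zip(*effortful))
```

Under the study's hypothesis, dispersion computed this way would shrink as masking (and thus listening effort) increases, as in the clustered trace above.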
Affiliation(s)
- M Eric Cui
- Rotman Research Institute, Baycrest Academy for Research and Education, North York, Ontario M6A 2E1, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario M5S 1A1, Canada
- Björn Herrmann
- Rotman Research Institute, Baycrest Academy for Research and Education, North York, Ontario M6A 2E1, Canada
- Department of Psychology, University of Toronto, Toronto, Ontario M5S 1A1, Canada
90
Aschenbrenner AJ, Crawford JL, Peelle JE, Fagan AM, Benzinger TLS, Morris JC, Hassenstab J, Braver TS. Increased cognitive effort costs in healthy aging and preclinical Alzheimer's disease. Psychol Aging 2023; 38:428-442. [PMID: 37067479 PMCID: PMC10440282 DOI: 10.1037/pag0000742]
Abstract
Life-long engagement in cognitively demanding activities may mitigate the declines in cognitive ability observed in healthy or pathological aging. However, the "mental costs" associated with completing cognitive tasks also increase with age and may be partly attributed to increases in preclinical levels of Alzheimer's disease (AD) pathology, specifically amyloid. We tested whether cognitive effort costs increase in a domain-general manner among older adults and, further, whether such age-related increases in cognitive effort costs are associated with working memory (WM) capacity or amyloid burden, a signature pathology of AD. In two experiments, we administered a behavioral measure of cognitive effort costs (cognitive effort discounting) to a sample of older adults recruited from online sources (Experiment 1) or from ongoing longitudinal studies of aging and dementia (Experiment 2). Experiment 1 compared age-related differences in cognitive effort costs across two domains, WM and speech comprehension. Experiment 2 compared cognitive effort costs between a group of participants who were rated positive for amyloid and those with no evidence of amyloid. Results showed that age-related increases in cognitive effort costs were evident in both domains. Cost estimates were highly correlated between the WM and speech comprehension tasks but did not correlate with WM capacity. In addition, older adults who were amyloid positive had higher cognitive effort costs than those who were amyloid negative. Cognitive effort costs may index a domain-general trait that consistently increases in aging. Differences in cognitive effort costs associated with amyloid burden suggest a potential neurobiological mechanism for age-related differences.
Affiliation(s)
- Jennifer L Crawford
- Department of Psychological and Brain Sciences, Washington University in St. Louis
- Anne M Fagan
- Department of Neurology, Washington University in St. Louis
- John C Morris
- Department of Neurology, Washington University in St. Louis
- Todd S Braver
- Department of Psychological and Brain Sciences, Washington University in St. Louis
91
Villard S, Perrachione TK, Lim SJ, Alam A, Kidd G. Energetic and informational masking place dissociable demands on listening effort: Evidence from simultaneous electroencephalography and pupillometry. J Acoust Soc Am 2023; 154:1152-1167. [PMID: 37610284 PMCID: PMC10449482 DOI: 10.1121/10.0020539]
Abstract
The task of processing speech masked by concurrent speech/noise can pose a substantial challenge to listeners. However, performance on such tasks may not directly reflect the amount of listening effort they elicit. Changes in pupil size and neural oscillatory power in the alpha range (8-12 Hz) are prominent neurophysiological signals known to reflect listening effort; however, measurements obtained through these two approaches are rarely correlated, suggesting that they may respond differently depending on the specific cognitive demands (and, by extension, the specific type of effort) elicited by specific tasks. This study aimed to compare changes in pupil size and alpha power elicited by different types of auditory maskers (highly confusable intelligible speech maskers, speech-envelope-modulated speech-shaped noise, and unmodulated speech-shaped noise maskers) in young, normal-hearing listeners. Within each condition, the target-to-masker ratio was set at the participant's individually estimated 75% correct point on the psychometric function. The speech masking condition elicited a significantly greater increase in pupil size than either of the noise masking conditions, whereas the unmodulated noise masking condition elicited a significantly greater increase in alpha oscillatory power than the speech masking condition, suggesting that the effort needed to solve these respective tasks may have different neural origins.
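Fixing each listener's target-to-masker ratio (TMR) at an individually estimated percent-correct point, as in this study's 75%-correct procedure, amounts to inverting a fitted psychometric function. A sketch assuming a logistic form with invented parameters (the real fit's shape and values are not given here):

```python
import math

def pc(tmr_db, midpoint, slope):
    """Logistic psychometric function: proportion correct vs. TMR in dB."""
    return 1.0 / (1.0 + math.exp(-slope * (tmr_db - midpoint)))

def tmr_at(p, midpoint, slope):
    """Invert the logistic: the TMR at which proportion correct equals p."""
    return midpoint + math.log(p / (1.0 - p)) / slope

# Hypothetical individual fit: 50% point at -4 dB TMR, slope 0.8 per dB
t75 = tmr_at(0.75, midpoint=-4.0, slope=0.8)
```

Setting every participant to their own 75% point equates task difficulty across listeners and masker types, so that differences in pupil size or alpha power reflect effort rather than accuracy differences.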
Affiliation(s)
- Sarah Villard
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Tyler K Perrachione
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Sung-Joo Lim
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Ayesha Alam
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Gerald Kidd
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
92
Yasmin S, Irsik VC, Johnsrude IS, Herrmann B. The effects of speech masking on neural tracking of acoustic and semantic features of natural speech. Neuropsychologia 2023; 186:108584. [PMID: 37169066 DOI: 10.1016/j.neuropsychologia.2023.108584]
Abstract
Listening environments contain background sounds that mask speech and lead to communication challenges. Sensitivity to slow acoustic fluctuations in speech can help segregate speech from background noise. Semantic context can also facilitate speech perception in noise, for example, by enabling prediction of upcoming words. However, not much is known about how different degrees of background masking affect the neural processing of acoustic and semantic features during naturalistic speech listening. In the current electroencephalography (EEG) study, participants listened to engaging, spoken stories masked at different levels of multi-talker babble to investigate how neural activity in response to acoustic and semantic features changes with acoustic challenges, and how such effects relate to speech intelligibility. The pattern of neural response amplitudes associated with both acoustic and semantic speech features across masking levels was U-shaped, such that amplitudes were largest for moderate masking levels. This U-shape may be due to increased attentional focus when speech comprehension is challenging, but manageable. The latency of the neural responses increased linearly with increasing background masking, and neural latency change associated with acoustic processing most closely mirrored the changes in speech intelligibility. Finally, tracking responses related to semantic dissimilarity remained robust until severe speech masking (-3 dB SNR). The current study reveals that neural responses to acoustic features are highly sensitive to background masking and decreasing speech intelligibility, whereas neural responses to semantic features are relatively robust, suggesting that individuals track the meaning of the story well even in moderate background sound.
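The latency shifts described above can be illustrated with a simple lagged-correlation analysis between the stimulus envelope and the EEG; this is a crude stand-in for the regularized temporal response function models such studies typically use, shown here only to make the idea concrete. A sketch on synthetic data:

```python
import math

def pearson(a, b):
    """Plain Pearson correlation coefficient."""
    ma, mb = sum(a) / len(a), sum(b) / len(b)
    num = sum((x - ma) * (y - mb) for x, y in zip(a, b))
    den = math.sqrt(sum((x - ma) ** 2 for x in a) * sum((y - mb) ** 2 for y in b))
    return num / den

def tracking_lag(envelope, eeg, max_lag):
    """Lag (in samples) at which the EEG best correlates with the stimulus
    envelope -- a crude proxy for neural response latency."""
    best_lag, best_r = 0, -2.0
    for lag in range(max_lag + 1):
        r = pearson(envelope[: len(envelope) - lag], eeg[lag:])
        if r > best_r:
            best_lag, best_r = lag, r
    return best_lag, best_r

# Synthetic data: the "EEG" is the envelope delayed by 3 samples plus a DC offset
env = [0, 1, 2, 3, 2, 1, 0, 1, 3, 2, 1, 0, 2, 3, 1, 0]
eeg = [0.0, 0.0, 0.0] + [v + 0.5 for v in env[:-3]]
lag, r = tracking_lag(env, eeg, max_lag=5)
```

In the study's terms, increasing background masking would shift the best-correlating lag later, and that latency shift tracked the drop in speech intelligibility.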
Affiliation(s)
- Sonia Yasmin
- Department of Psychology & the Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada.
- Vanessa C Irsik
- Department of Psychology & the Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada
- Ingrid S Johnsrude
- Department of Psychology & the Brain and Mind Institute, The University of Western Ontario, London, ON, N6A 3K7, Canada; School of Communication and Speech Disorders, The University of Western Ontario, London, ON, N6A 5B7, Canada
- Björn Herrmann
- Rotman Research Institute, Baycrest, M6A 2E1, Toronto, ON, Canada; Department of Psychology, University of Toronto, M5S 1A1, Toronto, ON, Canada
93
Wisniewski MG, Zakrzewski AC. Effortful listening produces both enhancement and suppression of alpha in the EEG. Auditory Perception & Cognition 2023; 6:289-299. [PMID: 38665905 PMCID: PMC11044958 DOI: 10.1080/25742442.2023.2218239]
Abstract
Introduction: Adverse listening conditions can drive increased mental effort during listening. Neuromagnetic alpha oscillations (8-13 Hz) may index this listening effort, but inconsistencies regarding the direction of the relationship are abundant. We performed source analyses on high-density EEG data collected during a speech-on-speech listening task to address the possibility that opposing alpha power relationships among alpha-producing brain sources drive this inconsistency. Methods: Listeners (N = 20) heard two simultaneously presented sentences of the form "Ready [call sign] go to [color] [number] now." They either reported the color/number pair of a "Baron" call-sign sentence (active: high effort) or ignored the stimuli (passive: low effort). Independent component analysis (ICA) was used to segregate temporally distinct sources in the EEG. Results: Analysis of independent components (ICs) revealed simultaneous alpha enhancements (e.g., for somatomotor mu ICs) and suppressions (e.g., for left temporal ICs) across different brain sources. The active condition exhibited stronger enhancement for left somatomotor mu rhythm ICs but stronger suppression for central occipital ICs. Discussion: This study shows that both alpha enhancement and suppression are associated with increases in listening effort. Literature inconsistencies could partially relate to some source activities overwhelming others in scalp recordings.
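Alpha power itself is band-limited spectral power. A bare-bones sketch using a plain unwindowed DFT on a synthetic one-second trace; this illustrates the quantity being compared across conditions, not the study's source-level ICA pipeline:

```python
import cmath
import math

def band_share(signal, fs, f_lo, f_hi):
    """Share of total (mean-removed) spectral power falling in [f_lo, f_hi] Hz,
    computed with a plain unwindowed DFT over the positive frequencies."""
    n = len(signal)
    mean = sum(signal) / n
    x = [s - mean for s in signal]
    power, freqs = [], []
    for k in range(n // 2 + 1):
        coef = sum(x[t] * cmath.exp(-2j * math.pi * k * t / n) for t in range(n))
        power.append(abs(coef) ** 2)
        freqs.append(k * fs / n)
    band = sum(p for f, p in zip(freqs, power) if f_lo <= f <= f_hi)
    return band / sum(power)

# Synthetic 1 s "EEG" trace: a 10 Hz alpha component plus a weaker 40 Hz component
fs = 128
sig = [math.sin(2 * math.pi * 10 * t / fs) + 0.3 * math.sin(2 * math.pi * 40 * t / fs)
       for t in range(fs)]
alpha_share = band_share(sig, fs, 8.0, 13.0)
```

Because the 10 Hz component dominates, most of the power falls in the 8-13 Hz alpha band; alpha "enhancement" vs. "suppression" in the study corresponds to this band-limited power rising or falling relative to a baseline, per source.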
Affiliation(s)
- Matthew G. Wisniewski
- Department of Psychological Sciences, Kansas State University, Manhattan, Kansas, USA
94
Aedo-Sanchez C, Oliveros J, Aranguiz C, Muñoz C, Lazo-Maturana C, Aguilar-Vidal E. Subclinical hearing loss associated with aging. J Otol 2023; 18:111-117. [PMID: 37497327 PMCID: PMC10366586 DOI: 10.1016/j.joto.2023.05.002]
Abstract
Objective: To contribute to clarifying the existence of subclinical hearing deficits associated with aging. Design: We studied and compared the auditory perceptual and electrophysiological performance of normal-hearing young and adult subjects (tonal audiometry, high-frequency tone thresholds, digit triplets in noise, and click-evoked auditory brainstem responses). Study sample: 45 normal-hearing volunteers were evaluated and divided into two groups according to age: 27 subjects in the "young group" (mean 22.1 years) and 18 subjects (mean 42.22 years) in the "adult group." Results: In the perceptual tests, the adult group presented significantly worse tonal thresholds at high frequencies (12 and 16 kHz) and worse performance on the digit-triplet test in noise. In the electrophysiological test using the auditory brainstem response technique, the adult group presented significantly lower wave I and wave V amplitudes and longer wave V latencies at the supra-threshold level. At the threshold level, we observed a significantly longer wave V latency in the adult group. In addition, in a partial correlation analysis controlling for hearing level, we observed a (negative) relationship between age and speech-in-noise performance and high-frequency thresholds. No significant association was observed between age and the auditory brainstem response. Conclusion: The results are compatible with subclinical hearing loss associated with aging.
Affiliation(s)
- Cristian Aedo-Sanchez
- Departamento de Tecnología Médica, Facultad de Medicina, Universidad de Chile, Chile
- José Oliveros
- Escuela de Tecnología Médica, Facultad de Medicina, Universidad de Chile, Chile
- Constanza Aranguiz
- Escuela de Tecnología Médica, Facultad de Medicina, Universidad de Chile, Chile
- Camila Muñoz
- Escuela de Tecnología Médica, Facultad de Medicina, Universidad de Chile, Chile
- Claudia Lazo-Maturana
- Departamento de Tecnología Médica, Facultad de Medicina, Universidad de Chile, Chile
- Enzo Aguilar-Vidal
- Departamento de Tecnología Médica, Facultad de Medicina, Universidad de Chile, Chile
95
Trau-Margalit A, Fostick L, Harel-Arbeli T, Nissanholtz-Gannot R, Taitelbaum-Swead R. Speech recognition in noise task among children and young-adults: a pupillometry study. Front Psychol 2023; 14:1188485. [PMID: 37425148] [PMCID: PMC10328119] [DOI: 10.3389/fpsyg.2023.1188485]
Abstract
Introduction: Children face unique challenges when listening to speech in noisy environments. The present study used pupillometry, an established method for quantifying listening and cognitive effort, to detect temporal changes in pupil dilation during a speech-recognition-in-noise task among school-aged children and young adults. Methods: Thirty school-aged children and 31 young adults listened to sentences amidst four-talker babble noise in two signal-to-noise ratio (SNR) conditions: a high-accuracy condition (+10 dB and +6 dB for children and adults, respectively) and a low-accuracy condition (+5 dB and +2 dB for children and adults, respectively). They were asked to repeat the sentences while pupil size was measured continuously during the task. Results: During the auditory processing phase, both groups displayed pupil dilation; however, adults exhibited greater dilation than children, particularly in the low-accuracy condition. In the second (retention) phase, only children demonstrated increased pupil dilation, whereas adults consistently exhibited a decrease in pupil size. Additionally, the children's group showed increased pupil dilation during the response phase. Discussion: Although adults and school-aged children produce similar behavioural scores, group differences in dilation patterns indicate that their underlying auditory processing differs. A second peak of pupil dilation among the children suggests that their cognitive effort during speech recognition in noise lasts longer than in adults, continuing past the first auditory-processing peak. These findings support the presence of effortful listening among children and highlight the need to identify and alleviate listening difficulties in school-aged children so that proper intervention strategies can be provided.
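Phase-wise pupillometry analyses of the kind described in this abstract are typically computed as baseline-corrected pupil dilation averaged within each task phase. The sketch below illustrates that computation on a synthetic trial; the sampling rate, phase boundaries, and simulated trace are illustrative assumptions, not values or code from the study.

```python
import numpy as np

def phase_dilation(pupil_trace, fs, baseline_s, phases):
    """Baseline-corrected mean pupil dilation per task phase.

    pupil_trace : 1-D array of pupil-diameter samples for one trial
    fs          : sampling rate in Hz (illustrative; eye trackers vary)
    baseline_s  : seconds of pre-stimulus baseline at the start of the trace
    phases      : dict of name -> (start_s, end_s) relative to stimulus onset
    """
    n_base = int(baseline_s * fs)
    baseline = pupil_trace[:n_base].mean()      # mean pre-stimulus diameter
    corrected = pupil_trace - baseline          # subtractive baseline correction
    out = {}
    for name, (t0, t1) in phases.items():
        i0 = n_base + int(t0 * fs)
        i1 = n_base + int(t1 * fs)
        out[name] = corrected[i0:i1].mean()     # mean dilation within the phase
    return out

# Synthetic trial: 1 s flat baseline, then a dilation bump during the task
fs = 60  # Hz
trace = np.concatenate([np.full(60, 3.0),
                        3.0 + 0.4 * np.sin(np.linspace(0, np.pi, 300))])
phases = {"auditory": (0.0, 2.5), "retention": (2.5, 5.0)}
print(phase_dilation(trace, fs, baseline_s=1.0, phases=phases))
```

Comparing the per-phase means between groups (as the study does between children and adults) then reduces to ordinary statistics on these scalars.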
Affiliation(s)
- Avital Trau-Margalit
- Department of Communication Disorders, Speech Perception and Listening Effort Lab in the Name of Prof. Mordechai Himelfarb, Ariel University, Ariel, Israel
- Leah Fostick
- Department of Communication Disorders, Auditory Perception Lab in the Name of Laurent Levy, Ariel University, Ariel, Israel
- Tami Harel-Arbeli
- Department of Gerontology, University of Haifa, Haifa, Israel
- Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Riki Taitelbaum-Swead
- Department of Communication Disorders, Speech Perception and Listening Effort Lab in the Name of Prof. Mordechai Himelfarb, Ariel University, Ariel, Israel
- Meuhedet Health Services, Tel Aviv, Israel
96
Goderie T, Hendricks S, Cocchi C, Maroger ID, Mekking D, Mosnier I, Musacchio A, Vernick D, Smits C. The International Standard Set of Outcome Measures for the Assessment of Hearing in People with Osteogenesis Imperfecta. Otol Neurotol 2023. [PMID: 37317476] [DOI: 10.1097/mao.0000000000003921]
Abstract
OBJECTIVE: To recommend a minimum standard set of clinician-reported outcome measures (CROMs) and patient-reported outcome measures (PROMs) on hearing for people with osteogenesis imperfecta (OI). This project is part of the larger "Key4OI" project initiated by the Care4BrittleBones Foundation, whose goal is to improve the quality of life of people with OI. Key4OI provides a standard set of outcome measures covering a large set of domains affecting the well-being of people with OI. METHODS: An international team of experts in OI, comprising specialists in audiological science, medical specialists, and an expert patient representative, used a modified Delphi consensus process to select CROMs and PROMs to evaluate hearing problems in people with OI. In addition, focus groups of people with OI identified key consequences of their hearing loss. These criteria were matched to categories of preselected questionnaires to select the PROM that best matched their specific hearing-related concerns. RESULTS: Consensus was reached on PROMs for adults and on CROMs for adults and children. The CROMs focused on specific audiological outcome measures and standardized follow-up. CONCLUSIONS: This project resulted in a clear consensus statement for the standardization of hearing-related PROMs and CROMs and for the follow-up management of patients with OI. This standardization of outcome measurement will facilitate comparability of research and easier international cooperation in OI and hearing loss. Furthermore, it can improve the standard of care for people with OI and hearing loss by incorporating the recommendations into care pathways.
Affiliation(s)
- Sebastian Hendricks
- Department of Audiology and Audiovestibular Medicine, Sight and Sound Centre, Great Ormond Street Hospital for Children NHS FT, London, UK
- Dagmar Mekking
- Care4BrittleBones Foundation, Wassenaar, the Netherlands
- Isabelle Mosnier
- Technologies et thérapie génique pour la surdité, Institut de l'audition, Institut Pasteur/Inserm/Université Paris Cité, Paris, France; Unité Fonctionnelle Implants Auditifs, ORL, GH Pitié-Salpêtrière, AP-HP Sorbonne Université, Paris, France
- Angela Musacchio
- Department of Sensorial Organs, Audiology Operative Unit, Sapienza University of Rome, Rome, Italy
- David Vernick
- Harvard Medical School, Beth Israel Lahey Hospital, Department of Surgery, Division of Otolaryngology, Boston, Massachusetts
97
Sulas E, Hasan PY, Zhang Y, Patou F. Streamlining experiment design in cognitive hearing science using OpenSesame. Behav Res Methods 2023; 55:1965-1979. [PMID: 35794416] [PMCID: PMC10250502] [DOI: 10.3758/s13428-022-01886-5]
Abstract
Auditory science increasingly builds on concepts and testing paradigms that originated in behavioral psychology and cognitive neuroscience, an evolution whose resulting discipline is now known as cognitive hearing science. Experimental cognitive hearing science paradigms call for hybrid cognitive and psychobehavioral tests, such as those relating the attentional system, working memory, and executive functioning to low-level auditory acuity or speech intelligibility. Building complex multi-stimulus experiments can rapidly become time-consuming and error-prone. Platform-based experiment design can help streamline the implementation of cognitive hearing science experimental paradigms, promote the standardization of experiment design practices, and ensure reliability and control. Here, we introduce a set of features for the open-source Python-based OpenSesame platform that allows the rapid implementation of custom behavioral and cognitive hearing science tests, including complex multichannel audio stimuli, while interfacing with various synchronous inputs/outputs. Our integration includes advanced audio playback capabilities with multiple loudspeakers, an adaptive procedure, and compatibility with standard I/Os and their synchronization through an implementation of the Lab Streaming Layer protocol. We exemplify the capabilities of this extended OpenSesame platform with an implementation of the three-alternative forced-choice amplitude modulation detection test and discuss reliability and performance. The new features are available free of charge from GitHub: https://github.com/elus-om/BRM_OMEXP
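Adaptive procedures of the kind mentioned here are commonly transformed staircases; a 2-down-1-up rule, for instance, converges near the 70.7%-correct point. The sketch below shows such a staircase driving a simulated three-alternative forced-choice (3AFC) run. It is a generic illustration of the adaptive logic, not the OpenSesame extension's actual API (see the GitHub link for that); the level unit, step size, and simulated listener are assumptions.

```python
import random

class Staircase:
    """2-down-1-up adaptive staircase (converges near 70.7% correct).

    Tracks a stimulus level (e.g., AM depth in dB): two consecutive
    correct answers lower the level, one error raises it.
    """
    def __init__(self, start, step, n_reversals):
        self.level = start
        self.step = step
        self.n_reversals = n_reversals
        self.correct_streak = 0
        self.reversals = []          # levels at which direction flipped
        self.last_dir = 0            # +1 up, -1 down, 0 not yet moved

    def update(self, correct):
        if correct:
            self.correct_streak += 1
            if self.correct_streak == 2:
                self.correct_streak = 0
                self._move(-1)
        else:
            self.correct_streak = 0
            self._move(+1)

    def _move(self, direction):
        if self.last_dir and direction != self.last_dir:
            self.reversals.append(self.level)   # record reversal level
        self.last_dir = direction
        self.level += direction * self.step

    @property
    def done(self):
        return len(self.reversals) >= self.n_reversals

    def threshold(self):
        return sum(self.reversals) / len(self.reversals)

# Simulated listener: detects the AM interval whenever the level is above
# -12 dB; otherwise guesses among the 3 intervals (chance = 1/3).
random.seed(1)
sc = Staircase(start=0.0, step=2.0, n_reversals=8)
while not sc.done:
    correct = sc.level > -12.0 or random.random() < 1 / 3
    sc.update(correct)
print(round(sc.threshold(), 1))  # estimate near the listener's -12 dB boundary
```

In an experiment platform, the simulated response would be replaced by the participant's keypress, with stimulus generation and response collection handled by the platform's trial loop.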
98
Tai Y, Shahsavarani S, Khan RA, Schmidt SA, Husain FT. An Inverse Relationship Between Gray Matter Volume and Speech-in-Noise Performance in Tinnitus Patients with Normal Hearing Sensitivity. J Assoc Res Otolaryngol 2023; 24:385-395. [PMID: 36869165] [PMCID: PMC10335974] [DOI: 10.1007/s10162-023-00895-1]
Abstract
Speech-in-noise (SiN) recognition difficulties are often reported in patients with tinnitus. Although brain structural changes, such as reduced gray matter (GM) volume in auditory and cognitive processing regions, have been reported in the tinnitus population, it remains unclear how such changes influence speech understanding, such as SiN performance. In this study, pure-tone audiometry and the Quick Speech-in-Noise test were conducted on individuals with tinnitus and normal hearing and on hearing-matched controls. T1-weighted structural MRI images were obtained from all participants. After preprocessing, GM volumes were compared between the tinnitus and control groups using whole-brain and region-of-interest analyses. Further, regression analyses were performed to examine the correlation between regional GM volume and SiN scores in each group. The results showed decreased GM volume in the right inferior frontal gyrus in the tinnitus group relative to the control group. In the tinnitus group, SiN performance was negatively correlated with GM volume in the left cerebellum (Crus I/II) and the left superior temporal gyrus; no significant correlation between SiN performance and regional GM volume was found in the control group. Even with clinically defined normal hearing and SiN performance comparable to that of controls, tinnitus appears to change the association between SiN recognition and regional GM volume. This change may reflect compensatory mechanisms used by individuals with tinnitus who maintain behavioral performance.
Affiliation(s)
- Yihsin Tai
- Department of Speech Pathology and Audiology, Ball State University, Muncie, IN, USA
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, Champaign, IL, USA
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Somayeh Shahsavarani
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, Champaign, IL, USA
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Mortimer B. Zuckerman Mind Brain Behavior Institute, Columbia University, New York, NY, USA
- Rafay A Khan
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Sara A Schmidt
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Fatima T Husain
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, Champaign, IL, USA
- Beckman Institute for Advanced Science and Technology, University of Illinois at Urbana-Champaign, Urbana, IL, USA
- Neuroscience Program, University of Illinois at Urbana-Champaign, Urbana, IL, USA
99
Martínez-Vilavella G, Pujol J, Blanco-Hinojo L, Deus J, Rivas I, Persavento C, Sunyer J, Foraster M. The effects of exposure to road traffic noise at school on central auditory pathway functional connectivity. Environ Res 2023; 226:115574. [PMID: 36841520] [DOI: 10.1016/j.envres.2023.115574]
Abstract
As the world becomes more urbanized, more people are exposed to traffic, and the risks associated with higher exposure to road traffic noise increase. Excessive exposure to environmental noise could potentially interfere with the functional maturation of the auditory brain in developing individuals. The aim of the present study was to assess the association between exposure to annual average road traffic noise (LAeq) in schools and the functional connectivity of key elements of the central auditory pathway in schoolchildren. A total of 229 children aged 8 to 12 years (49.2% girls) from 34 representative schools in the city of Barcelona were evaluated. LAeq was obtained as the mean of measurements taken on two consecutive days inside classrooms before lessons started, following standard procedures, to obtain an indicator of long-term road traffic noise levels. A region-of-interest functional connectivity magnetic resonance imaging (MRI) approach was adopted. Functional connectivity maps were generated for the inferior colliculus, the medial geniculate body of the thalamus, and the primary auditory cortex as key levels of the central auditory pathway. Road traffic noise in schools was significantly associated with stronger connectivity between the inferior colliculus and a bilateral thalamic region adjacent to the medial geniculate body, and with stronger connectivity between the medial geniculate body and a bilateral brainstem region adjacent to the inferior colliculus. This functional connectivity strengthening did not extend to the cerebral cortex. The anatomy of the association, implicating subcortical relays, suggests that prolonged road traffic noise exposure in developing individuals may accelerate maturation in the basic elements of the auditory pathway. Future research is warranted to establish whether such faster maturation at early pathway levels may ultimately reduce the developmental potential of the whole auditory system.
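Seed-based ROI connectivity maps of the kind described above are usually computed by correlating the seed region's mean time series with every target voxel and Fisher z-transforming the result for group statistics. The sketch below illustrates that computation on synthetic data; the array shapes and the simulated "coupled voxel" are assumptions for illustration, not data or code from the study.

```python
import numpy as np

def seed_fc_map(seed_ts, voxel_ts):
    """Seed-based functional connectivity: Pearson r between the seed's
    mean time series and every voxel, Fisher z-transformed.

    seed_ts  : (T,) mean BOLD time series of the seed ROI
    voxel_ts : (T, V) time series of V target voxels
    """
    s = (seed_ts - seed_ts.mean()) / seed_ts.std()
    v = (voxel_ts - voxel_ts.mean(axis=0)) / voxel_ts.std(axis=0)
    r = (s @ v) / len(s)      # Pearson correlation per voxel
    return np.arctanh(r)      # Fisher z for group-level statistics

rng = np.random.default_rng(0)
T, V = 200, 5
seed = rng.standard_normal(T)
vox = rng.standard_normal((T, V))
vox[:, 0] += 2 * seed          # voxel 0 strongly coupled to the seed
z = seed_fc_map(seed, vox)
print(z.round(2))
```

Group comparisons (e.g., noise-exposed vs. less-exposed schools) then test the z-maps voxel-wise, typically with cluster-level correction.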
Affiliation(s)
- Gerard Martínez-Vilavella
- MRI Research Unit, Department of Radiology, Hospital del Mar, Barcelona, Spain; Department of Clinical and Health Psychology, Autonomous University of Barcelona, Barcelona, Spain
- Jesus Pujol
- MRI Research Unit, Department of Radiology, Hospital del Mar, Barcelona, Spain; CIBER de Salud Mental, Instituto de Salud Carlos III, Barcelona, Spain
- Laura Blanco-Hinojo
- MRI Research Unit, Department of Radiology, Hospital del Mar, Barcelona, Spain; CIBER de Salud Mental, Instituto de Salud Carlos III, Barcelona, Spain; IMIM (Hospital del Mar Medical Research Institute), Barcelona, Spain
- Joan Deus
- MRI Research Unit, Department of Radiology, Hospital del Mar, Barcelona, Spain; Department of Clinical and Health Psychology, Autonomous University of Barcelona, Barcelona, Spain
- Ioar Rivas
- ISGlobal, Barcelona, Spain; Pompeu Fabra University (UPF), Barcelona, Spain; CIBER Epidemiología y Salud Pública (CIBEREsp), Spain
- Cecilia Persavento
- ISGlobal, Barcelona, Spain; Pompeu Fabra University (UPF), Barcelona, Spain; CIBER Epidemiología y Salud Pública (CIBEREsp), Spain
- Jordi Sunyer
- ISGlobal, Barcelona, Spain; Pompeu Fabra University (UPF), Barcelona, Spain; CIBER Epidemiología y Salud Pública (CIBEREsp), Spain; IMIM (Hospital del Mar Medical Research Institute), Barcelona, Spain
- Maria Foraster
- ISGlobal, Barcelona, Spain; Pompeu Fabra University (UPF), Barcelona, Spain; CIBER Epidemiología y Salud Pública (CIBEREsp), Spain; PHAGEX Research Group, Blanquerna School of Health Science, Universitat Ramon Llull (URL), Barcelona, Spain
100
da Silva K, Ribeiro VV, Santos ADN, Almeida SBS, Cruz PJA, Behlau M. Influence of Teachers' Vocal Quality on Students' Learning and/or Cognition: A Scoping Review. J Voice 2023:S0892-1997(23)00079-6. [PMID: 37147140] [DOI: 10.1016/j.jvoice.2023.02.022]
Abstract
OBJECTIVE: To verify whether a teacher's vocal quality can influence students' cognition. METHODS: The present study is a scoping review performed to answer the research question: Can a teacher's vocal quality influence students' learning and cognition? The electronic search was performed in the PubMed, Lilacs, SciELO, Scopus, Web of Science, and Embase databases, in addition to a manual search of citations and gray literature. Two independent authors performed selection and extraction. Data were extracted on study design: the sample, the cognitive tests used, the cognitive skills assessed, the type of altered voice (real or simulated), whether vocal quality was assessed alone or in association with environmental noise, and the main outcomes evaluated. RESULTS: The initial search identified 476 articles, of which 13 were selected for analysis. Seven studies (54%) evaluated the impact of altered voices on cognitive abilities in isolation; these found that altered voices could negatively influence children's cognitive performance. The other six studies (46%) included competitive noise together with altered voices in their analyses, and four concluded that competitive noise, rather than altered voice, influenced students' cognitive performance. CONCLUSION: An altered voice appears to affect the cognitive tasks involved in the learning process. Competitive noise associated with the presentation of deviant voices had a stronger influence on cognitive performance than altered voice alone, demonstrating that cognitive performance is sensitive to the stages of information acquisition (input of acoustic signals).
Affiliation(s)
- Kelly da Silva
- Speech-Language Pathology Course, Campus Lagarto, Universidade Federal de Sergipe - UFS, Lagarto, SE, Brazil; Post-graduate Program in Human Communication Disorders, Universidade Federal de São Paulo, São Paulo, SP, Brazil
- Vanessa Veis Ribeiro
- Faculdade de Ceilândia, Speech-Language Pathology Course, Universidade de Brasília - UnB, Brasília, DF, Brazil; Voice Specialization Course, Centro de Estudos da Voz, São Paulo, SP, Brazil
- Allicia Diely Nunes Santos
- Speech-Language Pathology Course, Campus Lagarto, Universidade Federal de Sergipe - UFS, Lagarto, SE, Brazil
- Pablo Jordão Alcântara Cruz
- Speech-Language Pathology Course, Campus Lagarto, Universidade Federal de Sergipe - UFS, Lagarto, SE, Brazil
- Mara Behlau
- Post-graduate Program in Human Communication Disorders, Universidade Federal de São Paulo, São Paulo, SP, Brazil; Voice Specialization Course, Centro de Estudos da Voz, São Paulo, SP, Brazil