1
Phillips I, Bieber RE, Dirks C, Grant KW, Brungart DS. Age Impacts Speech-in-Noise Recognition Differently for Nonnative and Native Listeners. J Speech Lang Hear Res 2024;67:1602-1623. PMID: 38569080. DOI: 10.1044/2024_jslhr-23-00470.
Abstract
PURPOSE The purpose of this study was to explore potential differences in suprathreshold auditory function between native and nonnative speakers of English as a function of age. METHOD Retrospective analyses were performed on three large data sets containing suprathreshold auditory tests completed by 5,572 participants between 18 and 65 years of age who self-identified as native or nonnative speakers of English, including a binaural tone detection test, a digit identification test, and a sentence recognition test. RESULTS The analyses show a significant interaction between increasing age and participant group on tests involving speech-based stimuli (digit strings, sentences) but not on the binaural tone detection test. For both speech tests, differences in speech recognition emerged between groups during early adulthood, and increasing age had a more negative impact on word recognition for nonnative than for native participants. Age-related declines in performance were 2.9 times faster for digit strings and 3.3 times faster for sentences for nonnative participants compared to native participants. CONCLUSIONS This set of analyses extends the existing literature by examining interactions between aging and self-identified native English speaker status in several auditory domains in a cohort of adults spanning young adulthood through middle age. The finding that older nonnative English speakers in this age cohort may have greater-than-expected deficits in speech-in-noise perception may have clinical implications for how these individuals should be diagnosed and treated for hearing difficulties.
Affiliation(s)
- Ian Phillips
- Audiology & Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD
- Henry M Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD
- Rebecca E Bieber
- Audiology & Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD
- Henry M Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD
- Coral Dirks
- Audiology & Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD
- Ken W Grant
- Audiology & Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD
- Douglas S Brungart
- Audiology & Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD
2
Lander DM, Liu S, Roup CM. Associations Between Auditory Working Memory, Self-Perceived Listening Effort, and Hearing Difficulty in Adults With Mild Traumatic Brain Injury. Ear Hear 2024;45:695-709. PMID: 38229218. DOI: 10.1097/aud.0000000000001462.
Abstract
OBJECTIVES Mild traumatic brain injury (TBI) can have persistent effects in the auditory domain (e.g., difficulty listening in noise), despite individuals having normal pure-tone auditory sensitivity. Individuals with a history of mild TBI often perceive hearing difficulty and greater listening effort in complex listening situations. The purpose of the present study was to examine self-perceived hearing difficulty, listening effort, and performance on an auditory processing test battery in adults with a history of mild TBI compared with a control group. DESIGN Twenty adults ages 20 to 53 years participated, divided into a mild TBI group (n = 10) and a control group (n = 10). Perceived hearing difficulties were measured using the Adult Auditory Processing Scale and the Hearing Handicap Inventory for Adults. Listening effort was measured using the National Aeronautics and Space Administration-Task Load Index. Listening effort ratings were obtained at baseline, after each auditory processing test, and at the completion of the test battery. The auditory processing test battery included (1) dichotic word recognition, (2) the 500-Hz masking level difference (MLD), (3) the Listening in Spatialized Noise-Sentences test, and (4) the Word Auditory Recognition and Recall Measure (WARRM). RESULTS Results indicated that individuals with a history of mild TBI perceived significantly greater degrees of hearing difficulty and listening effort than the control group. There were no significant group differences on two of the auditory processing tasks (dichotic word recognition and Listening in Spatialized Noise-Sentences). The mild TBI group exhibited significantly poorer performance than the control group on the 500-Hz MLD and the WARRM, a measure of auditory working memory. Greater degrees of self-perceived hearing difficulty were significantly associated with greater listening effort and poorer auditory working memory. Greater listening effort was also significantly associated with poorer auditory working memory. CONCLUSIONS Results demonstrate that adults with a history of mild TBI may experience subjective hearing difficulty and listening effort when listening in challenging acoustic environments. Poorer auditory working memory on the WARRM task was observed for the adults with mild TBI and was associated with greater hearing difficulty and listening effort. Taken together, the present study suggests that the conventional clinical audiometric battery alone may not provide enough information about auditory processing deficits in individuals with a history of mild TBI. The results support the use of a multifaceted battery of auditory processing tasks and subjective measures when evaluating individuals with a history of mild TBI.
Affiliation(s)
- Devan M Lander
- Department of Speech & Hearing Science, The Ohio State University, Columbus, Ohio, USA
- Shuang Liu
- Independent Statistical Consultant, Columbus, Ohio, USA
- Christina M Roup
- Department of Speech & Hearing Science, The Ohio State University, Columbus, Ohio, USA
3
Patro C, Monfiletto A, Singer A, Srinivasan NK, Mishra SK. Midlife Speech Perception Deficits: Impact of Extended High-Frequency Hearing, Peripheral Neural Function, and Cognitive Abilities. Ear Hear 2024 (online ahead of print). PMID: 38556645. DOI: 10.1097/aud.0000000000001504.
Abstract
OBJECTIVES The objectives of the present study were to investigate the effects of age-related changes in extended high-frequency (EHF) hearing, peripheral neural function, working memory, and executive function on speech perception deficits in middle-aged individuals with clinically normal hearing. DESIGN We administered a comprehensive assessment battery to 37 participants spanning the age range of 20 to 56 years. This battery encompassed various evaluations, including standard and EHF pure-tone audiometry, ranging from 0.25 to 16 kHz. In addition, we conducted auditory brainstem response assessments with varying stimulation rates and levels, a spatial release from masking (SRM) task, and cognitive evaluations that involved the Trail Making test (TMT) for assessing executive function and the Abbreviated Reading Span test (ARST) for measuring working memory. RESULTS The results indicated a decline in hearing sensitivities at EHFs and an increase in completion times for the TMT with age. In addition, as age increased, there was a corresponding decrease in the amount of SRM. The declines in SRM were associated with age-related declines in hearing sensitivity at EHFs and TMT performance. While we observed an age-related decline in wave I responses, this decline was primarily driven by age-related reductions in EHF thresholds. In addition, the results obtained using the ARST did not show an age-related decline. Neither the auditory brainstem response results nor ARST scores were correlated with the amount of SRM. CONCLUSIONS These findings suggest that speech perception deficits in middle age are primarily linked to declines in EHF hearing and executive function, rather than cochlear synaptopathy or working memory.
Affiliation(s)
- Chhayakanta Patro
- Department of Speech Language Pathology & Audiology, Towson University, Towson, Maryland, USA
- Angela Monfiletto
- Department of Speech Language Pathology & Audiology, Towson University, Towson, Maryland, USA
- Aviya Singer
- Department of Speech Language Pathology & Audiology, Towson University, Towson, Maryland, USA
- Nirmal Kumar Srinivasan
- Department of Speech Language Pathology & Audiology, Towson University, Towson, Maryland, USA
- Srikanta Kumar Mishra
- Department of Speech, Language and Hearing Sciences, The University of Texas at Austin, Austin, Texas, USA
4
Fernández-Vargas M, Macedo-Lima M, Remage-Healey L. Acute Aromatase Inhibition Impairs Neural and Behavioral Auditory Scene Analysis in Zebra Finches. eNeuro 2024;11:ENEURO.0423-23.2024. PMID: 38467426. PMCID: PMC10960633. DOI: 10.1523/eneuro.0423-23.2024.
Abstract
Auditory perception can be significantly disrupted by noise. To discriminate sounds from noise, auditory scene analysis (ASA) extracts the functionally relevant sounds from acoustic input. The zebra finch communicates in noisy environments. Neurons in their secondary auditory pallial cortex (caudomedial nidopallium, NCM) can encode song against a background chorus, or scene, and this capacity may aid behavioral ASA. Furthermore, song processing is modulated by the rapid synthesis of neuroestrogens when hearing conspecific song. To examine whether neuroestrogens support neural and behavioral ASA in both sexes, we retrodialyzed fadrozole (an aromatase inhibitor; FAD) and recorded in vivo awake extracellular NCM responses to songs and scenes. We found that FAD affected neural encoding of songs by decreasing responsiveness and timing reliability in inhibitory (narrow-spiking), but not in excitatory (broad-spiking), neurons. Congruently, FAD decreased neural encoding of songs in scenes for both cell types, particularly in females. Behaviorally, we trained birds using operant conditioning and tested their ability to detect songs in scenes after administering FAD orally or injecting it bilaterally into NCM. Oral FAD increased response bias and decreased correct rejections in females, but not in males. FAD in NCM did not affect performance. Thus, FAD in the NCM impaired neuronal ASA, but this did not lead to behavioral disruption, suggesting resilience or compensatory responses. Moreover, impaired performance after systemic FAD suggests the involvement of other aromatase-rich networks outside the auditory pathway in ASA. This work highlights how transient disruption of estrogen synthesis can modulate higher-order processing in an animal model of vocal communication.
Affiliation(s)
- Marcela Fernández-Vargas
- Neuroscience and Behavior Program, Center for Neuroendocrine Studies, University of Massachusetts Amherst, Amherst, Massachusetts 01003
- Matheus Macedo-Lima
- Neuroscience and Behavior Program, Center for Neuroendocrine Studies, University of Massachusetts Amherst, Amherst, Massachusetts 01003
- Luke Remage-Healey
- Neuroscience and Behavior Program, Center for Neuroendocrine Studies, University of Massachusetts Amherst, Amherst, Massachusetts 01003
5
Bransby L, Rosenich E, Maruff P, Lim YY. How Modifiable Are Modifiable Dementia Risk Factors? A Framework for Considering the Modifiability of Dementia Risk Factors. J Prev Alzheimers Dis 2024;11:22-37. PMID: 38230714. PMCID: PMC10995020. DOI: 10.14283/jpad.2023.119.
Abstract
Many risk factors for dementia, identified from observational studies, are potentially modifiable. This raises the possibility that targeting key modifiable dementia risk factors may reduce the prevalence of dementia, which has led to the development of dementia risk reduction and prevention strategies, such as intervention trials and dementia prevention guidelines. However, what has rarely been considered in the studies that inform these strategies is the extent to which modifiable dementia risk factors can (1) be identified by individuals and (2) be readily modified by individuals. Characteristics of modifiable dementia risk factors, such as how readily they can be identified and targeted, as well as when they should be targeted, can influence the design and success of strategies for reducing dementia risk. This review aims to develop a framework for classifying the degree of modifiability of dementia risk factors for research studies. The extent to which these modifiable dementia risk factors could be modified by an individual seeking to reduce their dementia risk is determined, as well as the resources that might be needed for both risk factor identification and modification, and whether modification may be optimal in early life (aged <45 years), midlife (aged 45-65 years), or late life (aged >65 years). Finally, barriers that could influence the ability of an individual to engage in risk factor modification and, ultimately, dementia risk reduction are discussed.
Affiliation(s)
- Lisa Bransby
- Turner Institute for Brain and Mental Health, 18 Innovation Walk, Clayton, VIC 3800, Australia
6
Suri H, Salgado-Puga K, Wang Y, Allen N, Lane K, Granroth K, Olivei A, Nass N, Rothschild G. A Cortico-Striatal Circuit for Sound-Triggered Prediction of Reward Timing. bioRxiv 2023:2023.11.21.568134. PMID: 38045246. PMCID: PMC10690153. DOI: 10.1101/2023.11.21.568134.
Abstract
A crucial aspect of auditory perception is the ability to use sound cues to predict future events and to time actions accordingly. For example, distinct smartphone notification sounds reflect a call that needs to be answered within a few seconds, or a text that can be read later; the sound of an approaching vehicle signals when it is safe to cross the street. Other animals similarly use sounds to plan, time, and execute behaviors such as hunting, evading predation, and tending to offspring. However, the neural mechanisms that underlie sound-guided prediction of upcoming salient event timing are not well understood. To address this gap, we employed an appetitive sound-triggered reward time prediction behavior in head-fixed mice. We find that mice trained on this task reliably estimate the time from a sound cue to an upcoming reward on the scale of a few seconds, as demonstrated by learning-dependent, well-timed increases in reward-predictive licking. Moreover, mice showed a dramatic impairment in their ability to use sound to predict delayed reward when the auditory cortex was inactivated, demonstrating its causal involvement. To identify the neurophysiological signatures of auditory cortical reward-timing prediction, we recorded local field potentials during learning and performance of this behavior and found that the magnitude of auditory cortical responses to the sound prospectively encoded the duration of the anticipated sound-reward time interval. Next, we explored how and where these sound-triggered time interval prediction signals propagate from the auditory cortex to time and initiate the consequent action. We targeted the monosynaptic projections from the auditory cortex to the posterior striatum and found that chemogenetic inactivation of these projections impairs the animals' ability to predict sound-triggered delayed reward.
Simultaneous neural recordings in the auditory cortex and posterior striatum during task performance revealed coordination of neural activity across these regions during the sound cue predicting the time interval to reward. Collectively, our findings identify an auditory cortical-striatal circuit supporting sound-triggered timing-prediction behaviors.
7
Xu N, Qin X, Zhou Z, Shan W, Ren J, Yang C, Lu L, Wang Q. Age differentially modulates the cortical tracking of the lower and higher level linguistic structures during speech comprehension. Cereb Cortex 2023;33:10463-10474. PMID: 37566910. DOI: 10.1093/cercor/bhad296.
Abstract
Speech comprehension requires listeners to rapidly parse continuous speech into hierarchically organized linguistic structures (i.e., syllable, word, phrase, and sentence) and to entrain neural activity to the rhythm of different linguistic levels. Aging is accompanied by changes in speech processing, but it remains unclear how aging affects different levels of linguistic representation. Here, we recorded magnetoencephalography signals in older and younger groups as subjects actively and passively listened to continuous speech in which the hierarchical linguistic structures of word, phrase, and sentence were tagged at 4, 2, and 1 Hz, respectively. A newly developed parameterization algorithm was applied to separate periodic linguistic tracking from the aperiodic component. We found enhanced lower-level (word-level) tracking, reduced higher-level (phrasal- and sentential-level) tracking, and a reduced aperiodic offset in older compared with younger adults. Furthermore, we observed that attentional modulation of sentential-level tracking was larger for younger than for older adults. Notably, the neuro-behavioral analyses showed that subjects' behavioral accuracy was positively correlated with higher-level linguistic tracking and inversely correlated with lower-level linguistic tracking. Overall, these results suggest that enhanced lower-level linguistic tracking, reduced higher-level linguistic tracking, and less flexible attentional modulation may underpin the aging-related decline in speech comprehension.
Affiliation(s)
- Na Xu
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- National Clinical Research Center for Neurological Diseases, Beijing 100070, China
- Xiaoxiao Qin
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- National Clinical Research Center for Neurological Diseases, Beijing 100070, China
- Ziqi Zhou
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- National Clinical Research Center for Neurological Diseases, Beijing 100070, China
- Wei Shan
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- National Clinical Research Center for Neurological Diseases, Beijing 100070, China
- Jiechuan Ren
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- National Clinical Research Center for Neurological Diseases, Beijing 100070, China
- Chunqing Yang
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- National Clinical Research Center for Neurological Diseases, Beijing 100070, China
- Lingxi Lu
- Center for the Cognitive Science of Language, Beijing Language and Culture University, Beijing 100083, China
- Qun Wang
- Department of Neurology, Beijing Tiantan Hospital, Capital Medical University, Beijing 100070, China
- National Clinical Research Center for Neurological Diseases, Beijing 100070, China
- Beijing Institute of Brain Disorders, Collaborative Innovation Center for Brain Disorders, Capital Medical University, Beijing 100069, China
8
Cohn M, Barreda S, Zellou G. Differences in a Musician's Advantage for Speech-in-Speech Perception Based on Age and Task. J Speech Lang Hear Res 2023;66:545-564. PMID: 36729698. DOI: 10.1044/2022_jslhr-22-00259.
Abstract
PURPOSE This study investigates the debate over whether musicians have an advantage in speech-in-noise perception from years of targeted auditory training. We also consider the effect of age on any such advantage, comparing musicians and nonmusicians (age range: 18-66 years), all of whom had normal hearing. We manipulate the degree of fundamental frequency (f0) separation between the competing talkers, as well as use different tasks, to probe attentional differences that might shape a musician's advantage across ages. METHOD Participants included 29 musicians and 26 nonmusicians. They completed two tasks varying in attentional demands: (a) a selective attention task where listeners identify the target sentence presented with a one-talker interferer (Experiment 1), and (b) a divided attention task where listeners hear two vowels played simultaneously and identify both competing vowels (Experiment 2). In both paradigms, f0 separation between the two voices was manipulated (Δf0 = 0, 0.156, 0.306, 1, 2, 3 semitones). RESULTS Results show that increasing f0 separation leads to higher accuracy on both tasks. Additionally, we find evidence for a musician's advantage across the two studies. In the sentence identification task, younger adult musicians show higher accuracy overall, as well as a stronger reliance on f0 separation; yet this advantage declines with musicians' age. In the double vowel task, musicians of all ages show an across-the-board advantage in detecting two vowels, and use f0 separation more to aid in stream separation, but show no consistent advantage in identifying both vowels. CONCLUSIONS Overall, we find support for a hybrid auditory encoding-attention account of music-to-speech transfer. The musician's advantage includes f0, but the benefit also depends on the attentional demands of the task and listeners' age. Taken together, this study suggests a complex relationship between age, musical experience, and speech-in-speech paradigm in shaping a musician's advantage. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21956777.
Affiliation(s)
- Michelle Cohn
- Phonetics Lab, Department of Linguistics, University of California, Davis
- Santiago Barreda
- Phonetics Lab, Department of Linguistics, University of California, Davis
- Georgia Zellou
- Phonetics Lab, Department of Linguistics, University of California, Davis
9
Maillard E, Joyal M, Murray MM, Tremblay P. Are musical activities associated with enhanced speech perception in noise in adults? A systematic review and meta-analysis. Curr Res Neurobiol 2023. DOI: 10.1016/j.crneur.2023.100083.
10
Kaplan Neeman R, Roziner I, Muchnik C. A Clinical Paradigm for Listening Effort Assessment in Middle-Aged Listeners. Front Psychol 2022;13:820227. PMID: 35250756. PMCID: PMC8891448. DOI: 10.3389/fpsyg.2022.820227.
Abstract
Listening effort (LE) is known to characterize speech recognition in noise regardless of hearing sensitivity and age. Whereas the behavioral dual-task paradigm effectively manifests the cognitive cost that listeners exert when processing speech in background noise, there is no consensus on a clinical procedure that best expresses LE. In order to assess the cognitive load underlying speech recognition in noise and to support counseling on coping strategies, a feasible clinical paradigm is warranted. The ecological validity of such a paradigm might best be demonstrated in middle-aged adults, who exhibit intact hearing sensitivity yet experience difficulties in degraded listening conditions, unaware of the cognitive cost implicated in speech recognition in noise. To this end, we constructed a dual-task paradigm consisting of a primary task of sentences-in-noise recognition and a secondary task of simple visual colored-shape matching. The research objective was to develop a clinical paradigm for the assessment of LE in middle-aged adults. Participants were 17 middle-aged adults (mean age 52.81 years) and 23 young adults (mean age 24.90 years). All participants had normal hearing for their age. Speech stimuli consisted of the Hebrew Matrix sentences-in-noise test. Speech reception thresholds in noise (SRTn) were obtained for 80% correct identification. Visual stimuli were colored geometric shapes. Outcome measures were obtained initially for each task separately, to establish performance ability, and then for both tasks performed simultaneously. Reaction time and accuracy on the secondary task were the defined metrics for LE. Results: LE was evident in both groups but was more pronounced in the middle-aged group, as manifested in the visual accuracy and reaction time metrics. Both groups maintained 80% correct recognition in noise in the dual task, but the middle-aged group required an SNR 1.4 dB better than that of the young adult group. Moreover, the middle-aged group showed a greater prolongation of reaction time in order to maintain correct recognition. Conclusion: A dual-task paradigm consisting of a sentences-in-noise primary task combined with a simple secondary task successfully showed different manifestations of LE in middle-aged adults compared with young adults, supporting the use of such a paradigm in a clinical setting.
Affiliation(s)
- Ricky Kaplan Neeman
- Department of Communication Disorders, The Stanley Steyer School of Health Professions, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Hearing, Speech and Language Center, Sheba Medical Center, Ramat-Gan, Israel
- Correspondence: Ricky Kaplan Neeman
- Ilan Roziner
- Department of Communication Disorders, The Stanley Steyer School of Health Professions, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Chava Muchnik
- Department of Communication Disorders, The Stanley Steyer School of Health Professions, Sackler Faculty of Medicine, Tel Aviv University, Tel Aviv, Israel
- Hearing, Speech and Language Center, Sheba Medical Center, Ramat-Gan, Israel
11
Tuomainen O, Taschenberger L, Rosen S, Hazan V. Speech modifications in interactive speech: effects of age, sex and noise type. Philos Trans R Soc Lond B Biol Sci 2022;377:20200398. PMID: 34775827. PMCID: PMC8591383. DOI: 10.1098/rstb.2020.0398.
Abstract
When attempting to maintain conversations in noisy communicative settings, talkers typically modify their speech to make themselves understood by the listener. In this study, we investigated the impact of background interference type and talker age on speech adaptations, vocal effort, and communicative success. We measured speech acoustics (articulation rate, mid-frequency energy, fundamental frequency), vocal effort (the correlation between mid-frequency energy and fundamental frequency), and task completion time in 114 participants aged 8-80 years carrying out an interactive problem-solving task in good and noisy listening conditions (quiet, non-speech noise, background speech). We found greater changes in fundamental frequency and mid-frequency energy in non-speech noise than in background speech, and similar reductions in articulation rate in both. However, older participants (50+ years) increased vocal effort in both types of background interference, whereas younger children (under 13 years) increased vocal effort only in background speech. The presence of background interference did not lead to longer task completion times. These results suggest that when the background interference imposes a higher cognitive load, as in the case of speech from other talkers, children and older talkers need to exert more vocal effort to ensure successful communication. We discuss these findings within the communication effort framework. This article is part of the theme issue 'Voice modulation: from origin and mechanism to social impact (Part II)'.
Affiliation(s)
- Outi Tuomainen
- Speech Hearing and Phonetic Sciences, University College London, 2 Wakefield Street, London WC1N 1PF, UK
- Department of Linguistics, University of Potsdam, Haus 14, Karl-Liebknecht-Straße 24-25, 14476 Potsdam, Germany
- Linda Taschenberger
- Speech Hearing and Phonetic Sciences, University College London, 2 Wakefield Street, London WC1N 1PF, UK
- Stuart Rosen
- Speech Hearing and Phonetic Sciences, University College London, 2 Wakefield Street, London WC1N 1PF, UK
- Valerie Hazan
- Speech Hearing and Phonetic Sciences, University College London, 2 Wakefield Street, London WC1N 1PF, UK
12
Xu D, Newell MD, Francis AL. Fall-related Injuries Mediate the Relationship between Self-Reported Hearing Loss and Mortality in Middle-Aged and Older Adults. J Gerontol A Biol Sci Med Sci 2021;76:e213-e220. PMID: 33929532. DOI: 10.1093/gerona/glab123.
Abstract
BACKGROUND Hearing loss is associated with a greater risk of death in older adults. This relationship has been attributed to an increased risk of injury, particularly due to falling, in individuals with hearing loss. However, the link between hearing loss and mortality across the lifespan is less clear. METHODS We used structural equation modeling and mediation analysis to investigate the relationship between hearing loss, falling, injury, and mortality across the adult lifespan in public-use data from the National Health Interview Survey and the National Death Index. We examined (1) the association between self-reported hearing problems and later mortality, (2) the associations between self-reported hearing problems and the risk, degree, and type of injury, (3) the mediating role of falling and injury in the association between self-reported hearing problems and mortality, and (4) whether these relationships differ in young (18-39), middle-aged (40-59), and older (60+) age groups. RESULTS In all three age ranges, those reporting hearing problems were more likely to fall, to sustain an injury, and to sustain a serious injury than those not reporting hearing problems. While there was no significant association between hearing loss and mortality in the youngest category, there was for middle-aged and older participants, and for both groups fall-related injury was a significant mediator of this relationship. CONCLUSIONS Fall-related injury mediates the relationship between hearing loss and mortality for middle-aged as well as older adults, suggesting a need for further research into mechanisms and remediation.
Affiliation(s)
- Dongjuan Xu
- School of Nursing, Purdue University
- Center on Aging and the Life Course, Purdue University
- Melissa D Newell
- Department of Speech, Language and Hearing Sciences, Purdue University
- Alexander L Francis
- Department of Speech, Language and Hearing Sciences, Purdue University
- Center on Aging and the Life Course, Purdue University