1. Ceuleers D, Dhooge I, Baudonck N, Swinnen F, Kestens K, Keppler H. Dual-Task Interference in the Assessment of Listening Effort Before and After Cochlear Implantation in Adults: A Longitudinal Study. J Speech Lang Hear Res 2025:1-13. PMID: 39772699. DOI: 10.1044/2024_jslhr-24-00449.
Abstract
PURPOSE This study aimed to assess the magnitude and direction of dual-task interference in a listening effort dual-task paradigm in individuals with severe-to-profound hearing loss before cochlear implantation and in the short and long term afterward. DESIGN The study sample consisted of 26 adult cochlear implantation candidates with severe-to-profound hearing loss. The dual-task paradigm combined a primary speech understanding task, administered in quiet and in a favorable and an unfavorable noise condition, with a secondary visual memory task. The dual-task effect for both tasks and the derived patterns of dual-task interference were determined. Participants were evaluated at four test moments: before cochlear implantation and at 3, 6, and 12 months after implantation. RESULTS Across all listening conditions, the pattern of dual-task interference shifted after implantation: before implantation, scores on the primary speech understanding task were worse or stable in the dual-task condition relative to the baseline condition, whereas after implantation, primary-task scores were stable or better, respectively. This indicates that more attention could be allocated to the primary speech understanding task during the dual-task condition after implantation, implying decreased listening effort. CONCLUSIONS Listening effort decreased after cochlear implantation. This study provides additional insight into the evolution of dual-task interference after cochlear implantation and highlights the importance of interpreting both the primary and the secondary task when a dual-task paradigm is used to assess listening effort.
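The dual-task effect (DTE) underlying these interference patterns is conventionally computed as the proportional change from single-task (baseline) to dual-task performance, and the joint sign of the primary- and secondary-task DTEs defines the pattern. The sketch below illustrates that arithmetic; the function names, the 5% change threshold, and the three-way labels are illustrative assumptions, not the authors' scoring procedure.

```python
def dual_task_effect(single_score, dual_score):
    """Proportional dual-task effect in percent, for scores where
    higher is better; negative values indicate a dual-task cost."""
    return (dual_score - single_score) / single_score * 100.0

def interference_pattern(primary_dte, secondary_dte, threshold=5.0):
    """Classify the joint primary/secondary pattern. The +/-5%
    threshold deciding when a change counts as a real cost or
    benefit is an illustrative assumption."""
    def label(dte):
        if dte <= -threshold:
            return "cost"
        if dte >= threshold:
            return "benefit"
        return "stable"
    return label(primary_dte), label(secondary_dte)

# Example: speech understanding drops from 80% to 60% words correct
# under dual-tasking, while visual memory stays at 70%.
primary = dual_task_effect(80.0, 60.0)    # -25.0
secondary = dual_task_effect(70.0, 70.0)  # 0.0
print(interference_pattern(primary, secondary))  # ('cost', 'stable')
```

A pre-to-post shift from ('cost', 'stable') toward ('stable', ...) or ('benefit', ...) on the primary task is the kind of pattern change the abstract describes.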
Affiliation(s)
- Ingeborg Dhooge: Department of Head and Skin, Ghent University, Belgium; Department of Otorhinolaryngology, Head and Neck Surgery, Ghent University Hospital, Belgium
- Nele Baudonck: Department of Otorhinolaryngology, Head and Neck Surgery, Ghent University Hospital, Belgium
- Freya Swinnen: Department of Otorhinolaryngology, Head and Neck Surgery, Ghent University Hospital, Belgium
- Katrien Kestens: Department of Rehabilitation Sciences, Ghent University, Belgium
- Hannah Keppler: Department of Otorhinolaryngology, Head and Neck Surgery, Ghent University Hospital, Belgium; Department of Rehabilitation Sciences, Ghent University, Belgium
2. Marsja E, Holmer E, Stenbäck V, Micula A, Tirado C, Danielsson H, Rönnberg J. Fluid Intelligence Partially Mediates the Effect of Working Memory on Speech Recognition in Noise. J Speech Lang Hear Res 2025; 68:399-410. PMID: 39666895. DOI: 10.1044/2024_jslhr-24-00465.
Abstract
PURPOSE Although the existing literature has explored the link between cognitive functioning and speech recognition in noise, the specific role of fluid intelligence still needs to be studied. Given the established association between working memory capacity (WMC) and fluid intelligence and the predictive power of WMC for speech recognition in noise, we aimed to elucidate the mediating role of fluid intelligence. METHOD We used data from the n200 study, a longitudinal investigation into aging, hearing ability, and cognitive functioning. We analyzed two age-matched samples: participants with hearing aids and a group with normal hearing. WMC was assessed using the Reading Span task, and fluid intelligence was measured with Raven's Progressive Matrices. Speech recognition in noise was evaluated using Hagerman sentences presented to target 80% speech-reception thresholds in four-talker babble. Data were analyzed using mediation analysis to examine fluid intelligence as a mediator between WMC and speech recognition in noise. RESULTS We found a partial mediating effect of fluid intelligence on the relationship between WMC and speech recognition in noise, and that hearing status did not moderate this effect. In other words, WMC and fluid intelligence were related, and fluid intelligence partially explained the influence of WMC on speech recognition in noise. CONCLUSIONS This study shows the importance of fluid intelligence in speech recognition in noise, regardless of hearing status. Future research should use other advanced statistical techniques and explore various speech recognition tests and background maskers to deepen our understanding of the interplay between WMC and fluid intelligence in speech recognition.
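The mediation tested here (WMC → fluid intelligence → speech recognition in noise) can be sketched with the classic product-of-coefficients decomposition, in which the total effect c splits exactly into a direct effect c' and an indirect effect a·b. The code below is a minimal stdlib illustration on synthetic data, not the authors' analysis; the variable names and effect sizes are invented.

```python
import random
import statistics

def ols_slope(x, y):
    """Slope of the simple OLS regression of y on x."""
    mx, my = statistics.fmean(x), statistics.fmean(y)
    return (sum((a - mx) * (b - my) for a, b in zip(x, y))
            / sum((a - mx) ** 2 for a in x))

def simple_mediation(x, m, y):
    """Product-of-coefficients decomposition for one mediator:
    total effect c = direct effect c' + indirect effect a*b."""
    c = ols_slope(x, y)   # X -> Y (total effect)
    a = ols_slope(x, m)   # X -> M path
    # b (M -> Y controlling for X) via Frisch-Waugh-Lovell
    # residualization; constant offsets do not affect the slope.
    m_res = [mi - a * xi for xi, mi in zip(x, m)]
    y_res = [yi - c * xi for xi, yi in zip(x, y)]
    b = ols_slope(m_res, y_res)
    return {"total": c, "indirect": a * b, "direct": c - a * b}

# Synthetic demo: WMC -> fluid intelligence -> speech-in-noise score,
# with true indirect effect 0.6 * 0.5 = 0.3 and direct effect 0.4.
rng = random.Random(42)
wmc = [rng.gauss(0, 1) for _ in range(2000)]
gf = [0.6 * x + rng.gauss(0, 1) for x in wmc]
sin_score = [0.4 * x + 0.5 * g + rng.gauss(0, 1)
             for x, g in zip(wmc, gf)]
effects = simple_mediation(wmc, gf, sin_score)
print({k: round(v, 2) for k, v in effects.items()})
```

With enough data the estimated indirect effect lands near 0.3 and the direct effect near 0.4, i.e., a partial mediation of the kind the abstract reports.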
Affiliation(s)
- Erik Marsja: Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Emil Holmer: Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Victoria Stenbäck: Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden; Division of Education, Teaching and Learning, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Andreea Micula: Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden; National Institute of Public Health, University of Southern Denmark, Copenhagen; Eriksholm Research Centre, Oticon A/S, Copenhagen, Denmark
- Carlos Tirado: Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Henrik Danielsson: Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Jerker Rönnberg: Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
3. Carlie J, Sahlén B, Andersson K, Johansson R, Whitling S, Brännström KJ. Culturally and linguistically diverse children's retention of spoken narratives encoded in quiet and in babble noise. J Exp Child Psychol 2025; 249:106088. PMID: 39316884. DOI: 10.1016/j.jecp.2024.106088.
Abstract
Multi-talker noise impedes children's speech processing and may affect children listening in their second language more than children listening in their first language. Evidence suggests that multi-talker noise may also impede children's memory retention and learning. A total of 80 culturally and linguistically diverse children aged 7 to 9 years listened to narratives in two listening conditions: quiet and multi-talker noise (signal-to-noise ratio +6 dB). Repeated recall (immediate and delayed) was measured with a 1-week retention interval. Retention was calculated as the difference in recall accuracy per question between immediate and delayed recall. Working memory capacity was assessed, and the children's degree of school language (Swedish) exposure was quantified. Immediate narrative recall was lower for the narrative encoded in noise than for the one encoded in quiet. At delayed recall, performance was similar for both listening conditions. Children with higher degrees of school language exposure and higher working memory capacity had better narrative recall overall, but these factors were not associated with an effect of listening condition or with retention. Multi-talker babble noise thus does not impair culturally and linguistically diverse primary school children's retention of spoken narratives as measured by multiple-choice questions. Although a quiet listening condition allows for superior encoding compared with a noisy one, details are likely lost during memory consolidation and re-consolidation.
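The +6 dB signal-to-noise ratio used for the babble condition is defined by the ratio of speech power to noise power. A minimal sketch of how a masker can be scaled to hit a target SNR, assuming simple per-sample lists (this is generic stimulus arithmetic, not the authors' pipeline):

```python
import math
import random

def scale_noise_to_snr(speech, noise, target_snr_db):
    """Return a gain-adjusted copy of `noise` such that the
    speech-to-noise power ratio equals `target_snr_db` (e.g., +6 dB)."""
    p_speech = sum(s * s for s in speech) / len(speech)
    p_noise = sum(n * n for n in noise) / len(noise)
    gain = math.sqrt(p_speech / (p_noise * 10 ** (target_snr_db / 10)))
    return [gain * n for n in noise]

def snr_db(speech, noise):
    """SNR in dB: 10 * log10(speech power / noise power)."""
    p_s = sum(s * s for s in speech) / len(speech)
    p_n = sum(n * n for n in noise) / len(noise)
    return 10 * math.log10(p_s / p_n)

# Demo: a 1 kHz tone and white noise at a 16 kHz sample rate,
# with the noise scaled to sit 6 dB below the tone.
rng = random.Random(0)
tone = [math.sin(2 * math.pi * 1000 * t / 16000) for t in range(16000)]
babble = [rng.gauss(0, 0.5) for _ in range(16000)]
scaled = scale_noise_to_snr(tone, babble, 6.0)
print(round(snr_db(tone, scaled), 2))  # 6.0
```

The mixed stimulus would then simply be the sample-wise sum of `tone` and `scaled`.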
Affiliation(s)
- Johanna Carlie: Logopedics, Phoniatrics and Audiology, Department of Clinical Sciences in Lund, Lund University, 221 00 Lund, Sweden
- Birgitta Sahlén: Logopedics, Phoniatrics and Audiology, Department of Clinical Sciences in Lund, Lund University, 221 00 Lund, Sweden
- Ketty Andersson: Logopedics, Phoniatrics and Audiology, Department of Clinical Sciences in Lund, Lund University, 221 00 Lund, Sweden
- Roger Johansson: Department of Psychology, Lund University, 221 00 Lund, Sweden
- Susanna Whitling: Logopedics, Phoniatrics and Audiology, Department of Clinical Sciences in Lund, Lund University, 221 00 Lund, Sweden
- K. Jonas Brännström: Logopedics, Phoniatrics and Audiology, Department of Clinical Sciences in Lund, Lund University, 221 00 Lund, Sweden
4. Fernandez LB, Pickering MJ, Naylor G, Hadley LV. Uses of Linguistic Context in Speech Listening: Does Acquired Hearing Loss Lead to Reduced Engagement of Prediction? Ear Hear 2024; 45:1107-1114. PMID: 38880953. PMCID: PMC11325976. DOI: 10.1097/aud.0000000000001515.
Abstract
Research investigating the complex interplay of cognitive mechanisms involved in speech listening for people with hearing loss has been gaining prominence. In particular, linguistic context allows the use of several cognitive mechanisms that are not well distinguished in hearing science, namely those relating to "postdiction", "integration", and "prediction". We offer the perspective that an unacknowledged impact of hearing loss is the differential use of predictive mechanisms relative to age-matched individuals with normal hearing. As evidence, we first review how degraded auditory input leads to reduced prediction in people with normal hearing, then consider the literature exploring context use in people with acquired postlingual hearing loss. We argue that no research on hearing loss has directly assessed prediction. Because current interventions for hearing do not fully alleviate difficulty in conversation, and avoidance of spoken social interaction may be a mediator between hearing loss and cognitive decline, this perspective could lead to greater understanding of cognitive effects of hearing loss and provide insight regarding new targets for intervention.
Affiliation(s)
- Leigh B. Fernandez: Department of Social Sciences, Psycholinguistics Group, University of Kaiserslautern-Landau, Kaiserslautern, Germany
- Martin J. Pickering: Department of Psychology, University of Edinburgh, Edinburgh, United Kingdom
- Graham Naylor: Hearing Sciences—Scottish Section, School of Medicine, University of Nottingham, Glasgow, United Kingdom
- Lauren V. Hadley: Hearing Sciences—Scottish Section, School of Medicine, University of Nottingham, Glasgow, United Kingdom
5. Shende SA, Jones SE, Mudar RA. Alpha and theta oscillations on a visual strategic processing task in age-related hearing loss. Front Neurosci 2024; 18:1382613. PMID: 39086839. PMCID: PMC11289776. DOI: 10.3389/fnins.2024.1382613.
Abstract
Introduction: Emerging evidence suggests changes in several cognitive control processes in individuals with age-related hearing loss (ARHL). However, value-directed strategic processing, which involves selectively processing salient information based on high value, has been relatively unexplored in ARHL. Our previous work has shown behavioral changes in strategic processing in individuals with ARHL. The current study examined event-related alpha and theta oscillations linked to a visual, value-directed strategic processing task in 19 individuals with mild untreated ARHL and 17 normal-hearing controls of comparable age and education. Methods: Five unique word lists were presented in which words were assigned high or low value based on letter case, and electroencephalography (EEG) data were recorded during task performance. Results: A main effect of group was observed in early time periods: greater theta synchronization was seen in the ARHL group than in the control group. An interaction between group and value was observed at later time points, with greater theta synchronization for high- versus low-value information in those with ARHL. Discussion: Our findings provide evidence for oscillatory changes tied to a visual task of value-directed strategic processing in individuals with mild untreated ARHL. This points toward modality-independent neurophysiological changes in cognitive control in individuals with mild degrees of ARHL and adds to the rapidly growing literature on the cognitive consequences of ARHL.
Affiliation(s)
- Shraddha A. Shende: Department of Communication Sciences and Disorders, Illinois State University, Normal, IL, United States
- Sarah E. Jones: Department of Speech and Hearing Science, University of Illinois Urbana-Champaign, Champaign, IL, United States
- Raksha A. Mudar: Department of Speech and Hearing Science, University of Illinois Urbana-Champaign, Champaign, IL, United States
6. Holmer E, Rönnberg J, Asutay E, Tirado C, Ekberg M. Facial mimicry interference reduces working memory accuracy for facial emotion expressions. PLoS One 2024; 19:e0306113. PMID: 38924006. PMCID: PMC11207140. DOI: 10.1371/journal.pone.0306113.
Abstract
Facial mimicry, the tendency to imitate facial expressions of other individuals, has been shown to play a critical role in the processing of emotion expressions. At the same time, there is evidence suggesting that its role might change when the cognitive demands of the situation increase. In such situations, understanding another person is dependent on working memory. However, whether facial mimicry influences working memory representations for facial emotion expressions is not fully understood. In the present study, we experimentally interfered with facial mimicry by using established behavioral procedures, and investigated how this interference influenced working memory recall for facial emotion expressions. Healthy, young adults (N = 36) performed an emotion expression n-back paradigm with two levels of working memory load, low (1-back) and high (2-back), and three levels of mimicry interference: high, low, and no interference. Results showed that, after controlling for block order and individual differences in the perceived valence and arousal of the stimuli, the high level of mimicry interference impaired accuracy when working memory load was low (1-back) but, unexpectedly, not when load was high (2-back). Working memory load had a detrimental effect on performance in all three mimicry conditions. We conclude that facial mimicry might support working memory for emotion expressions when task load is low, but that the supporting effect possibly is reduced when the task becomes more cognitively challenging.
Affiliation(s)
- Emil Holmer: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Linköping University, Linköping, Sweden
- Jerker Rönnberg: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; Linnaeus Centre HEAD, Linköping University, Linköping, Sweden
- Erkin Asutay: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden; JEDI Lab, Linköping University, Linköping, Sweden
- Carlos Tirado: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Mattias Ekberg: Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
7. Andéol G, Paraouty N, Giraudet F, Wallaert N, Isnard V, Moulin A, Suied C. Predictors of Speech-in-Noise Understanding in a Population of Occupationally Noise-Exposed Individuals. Biology 2024; 13:416. PMID: 38927296. PMCID: PMC11200776. DOI: 10.3390/biology13060416.
Abstract
Understanding speech in noise is particularly difficult for individuals occupationally exposed to noise, owing to a mix of noise-induced auditory lesions and the energetic masking of speech signals. For years, monitoring conventional audiometric thresholds has been the usual method to check and preserve auditory function. Recently, suprathreshold deficits, notably difficulties in understanding speech in noise, have pointed to the need for new monitoring tools. The present study aims to identify the variables that best predict speech-in-noise understanding in order to suggest a new method of hearing-status monitoring. Physiological (distortion-product otoacoustic emissions, electrocochleography) and behavioral (amplitude- and frequency-modulation detection thresholds, conventional and extended high-frequency audiometric thresholds) variables were collected in a population of individuals with relatively homogeneous occupational noise exposure. These variables were used as predictors in a statistical model (random forest) to predict scores on three different speech-in-noise tests and a self-report of speech-in-noise ability. The extended high-frequency threshold appears to be the best predictor and is therefore an interesting candidate for a new way of monitoring noise-exposed professionals.
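Variable importance of the kind extracted from a random forest can be illustrated with model-agnostic permutation importance: shuffle one predictor column and measure the drop in explained variance. The sketch below is a stdlib approximation with an invented toy model, not the authors' random-forest pipeline (which would typically use a forest implementation's built-in impurity- or permutation-based importances):

```python
import random
import statistics

def r_squared(y_true, y_pred):
    """Proportion of variance in y_true explained by y_pred."""
    mean_y = statistics.fmean(y_true)
    ss_res = sum((t - p) ** 2 for t, p in zip(y_true, y_pred))
    ss_tot = sum((t - mean_y) ** 2 for t in y_true)
    return 1.0 - ss_res / ss_tot

def permutation_importance(predict, X, y, n_repeats=20, seed=0):
    """Mean drop in R^2 when one predictor column is shuffled;
    a larger drop means the model leans harder on that predictor."""
    rng = random.Random(seed)
    baseline = r_squared(y, predict(X))
    importances = []
    for j in range(len(X[0])):
        drops = []
        for _ in range(n_repeats):
            col = [row[j] for row in X]
            rng.shuffle(col)
            X_perm = [row[:j] + [v] + row[j + 1:]
                      for row, v in zip(X, col)]
            drops.append(baseline - r_squared(y, predict(X_perm)))
        importances.append(statistics.fmean(drops))
    return importances

# Toy demo: the outcome depends on predictor 0 only; predictor 1
# is irrelevant noise, so its importance should be ~0.
rng = random.Random(1)
X = [[rng.gauss(0, 1), rng.gauss(0, 1)] for _ in range(300)]
y = [2.0 * row[0] + rng.gauss(0, 0.1) for row in X]

def fitted_model(rows):            # stand-in for a trained forest
    return [2.0 * row[0] for row in rows]

imp = permutation_importance(fitted_model, X, y)
print(imp[0] > imp[1])  # True: predictor 0 dominates
```

In the study's setting, a predictor like the extended high-frequency threshold emerging with the largest importance is what "best predictor" refers to.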
Affiliation(s)
- Guillaume Andéol: Institut de Recherche Biomédicale des Armées, 1 Place Valérie André, 91220 Brétigny-sur-Orge, France
- Nihaad Paraouty: iAudiogram—My Medical Assistant SAS, 51100 Reims, France
- Fabrice Giraudet: Department of Neurosensory Biophysics, INSERM U1107 NEURO-DOL, School of Medicine, Université Clermont Auvergne, 63000 Clermont-Ferrand, France
- Nicolas Wallaert: iAudiogram—My Medical Assistant SAS, 51100 Reims, France; Laboratoire des Systèmes Perceptifs, UMR CNRS 8248, Département d'Etudes Cognitives, Ecole Normale Supérieure, Université Paris Sciences et Lettres (PSL), 75005 Paris, France; Department of Otorhinolaryngology-Head and Neck Surgery, Rennes University Hospital, 35000 Rennes, France
- Vincent Isnard: Institut de Recherche Biomédicale des Armées, 1 Place Valérie André, 91220 Brétigny-sur-Orge, France
- Annie Moulin: Centre de Recherche en Neurosciences de Lyon, CRNL Inserm U1028—CNRS UMR5292—UCBLyon1, Perception Attention Memory Team, Bâtiment 452 B, 95 Bd Pinel, 69675 Bron Cedex, France
- Clara Suied: Institut de Recherche Biomédicale des Armées, 1 Place Valérie André, 91220 Brétigny-sur-Orge, France
8. Shen J, Sun J, Zhang Z, Sun B, Li H, Liu Y. The Effect of Hearing Loss and Working Memory Capacity on Context Use and Reliance on Context in Older Adults. Ear Hear 2024; 45:787-800. PMID: 38273447. DOI: 10.1097/aud.0000000000001470.
Abstract
OBJECTIVES Older adults often complain of difficulty communicating in noisy environments. Contextual information is considered an important cue for identifying everyday speech. To date, it has not been clear exactly how context use (CU) and reliance on context in older adults are affected by hearing status and cognitive function. The present study examined the effects of semantic context on speech recognition, recall, perceived listening effort (LE), and noise tolerance, and further explored the impacts of hearing loss and working memory capacity on CU and reliance on context among older adults. DESIGN Fifty older adults with normal hearing and 56 older adults with mild-to-moderate hearing loss, aged 60 to 95 years, participated in this study. A median split of the backward digit span further classified the participants into high working memory (HWM) and low working memory (LWM) capacity groups. Each participant performed high- and low-context Repeat and Recall tests, including a sentence repeat and delayed recall task, subjective assessments of LE, and tolerable time under seven signal-to-noise ratios (SNRs). CU was calculated as the difference between high- and low-context sentences for each outcome measure. The proportion of context use (PCU) in high-context performance was taken as the reliance on context, that is, the degree to which participants relied on context when they repeated and recalled high-context sentences. RESULTS Semantic context helped improve speech recognition and delayed recall, reduced perceived LE, and prolonged noise tolerance in older adults with and without hearing loss. In addition, the adverse effects of hearing loss on repeat-task performance were more pronounced in low context than in high context, whereas the effects on recall tasks and noise tolerance time were more significant in high context than in low context. Compared with other tasks, CU and PCU in repeat tasks were more affected by hearing status and working memory capacity. In the repeat phase, hearing loss increased older adults' reliance on context in relatively challenging listening environments: at SNRs of 0 and -5 dB, the PCU (repeat) of the hearing loss group was significantly greater than that of the normal-hearing group, whereas there was no significant difference between the two hearing groups at the remaining SNRs. In addition, older adults with LWM had significantly greater CU and PCU in repeat tasks than those with HWM, especially at SNRs with moderate task demands. CONCLUSIONS Taken together, semantic context not only improved speech perception intelligibility but also released cognitive resources for memory encoding in older adults. Mild-to-moderate hearing loss and LWM capacity in older adults significantly increased the use of and reliance on semantic context, which was also modulated by SNR level.
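The CU and PCU indices defined in the abstract are simple difference and ratio scores. A minimal sketch of that arithmetic (the example numbers are invented):

```python
def context_use(high_context_score, low_context_score):
    """CU: absolute benefit of semantic context on an outcome
    measure (e.g., percent words correct)."""
    return high_context_score - low_context_score

def proportion_context_use(high_context_score, low_context_score):
    """PCU: share of high-context performance attributable to
    context, taken here as the index of reliance on context."""
    if high_context_score == 0:
        return 0.0
    return (high_context_score - low_context_score) / high_context_score

# Example at one SNR: 85% correct with context, 60% without.
print(context_use(85.0, 60.0))                       # 25.0
print(round(proportion_context_use(85.0, 60.0), 3))  # 0.294
```

A larger PCU at a given SNR means a larger fraction of what the listener got right depended on the sentence context rather than the acoustics alone.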
Affiliation(s)
- Jiayuan Shen: School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Zhejiang, China
- Jiayu Sun: Department of Otolaryngology, Head and Neck Surgery, Shanghai Ninth People's Hospital, Shanghai JiaoTong University School of Medicine, Shanghai, China
- Zhikai Zhang: Department of Otolaryngology, Head and Neck Surgery, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Baoxuan Sun: Training Department, Widex Hearing Aid (Shanghai) Co., Ltd, Shanghai, China
- Haitao Li: Department of Neurology, Beijing Friendship Hospital, Capital Medical University, Beijing, China (contributed equally; co-corresponding author)
- Yuhe Liu: Department of Otolaryngology, Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China (contributed equally; co-corresponding author)
9. Ceuleers D, Keppler H, Degeest S, Baudonck N, Swinnen F, Kestens K, Dhooge I. Auditory, Visual, and Cognitive Abilities in Normal-Hearing Adults, Hearing Aid Users, and Cochlear Implant Users. Ear Hear 2024; 45:679-694. PMID: 38192017. DOI: 10.1097/aud.0000000000001458.
Abstract
OBJECTIVES Speech understanding is considered a bimodal and bidirectional process in which visual information (i.e., speechreading) and cognitive functions (i.e., top-down processes) are also involved. The purpose of the present study was therefore twofold: (1) to investigate the auditory (A), visual (V), and cognitive (C) abilities of normal-hearing individuals, hearing aid (HA) users, and cochlear implant (CI) users, and (2) to determine an auditory, visual, cognitive (AVC) profile providing a comprehensive overview of a person's speech processing abilities and covering the broader variety of factors involved in speech understanding. DESIGN Three matched groups of subjects participated in this study: (1) 31 normal-hearing adults (mean age = 58.76 years), (2) 31 adults with moderate to severe hearing loss using HAs (mean age = 59.31 years), and (3) 31 adults with severe to profound hearing loss using a CI (mean age = 58.86 years). The audiological assessments consisted of pure-tone audiometry and speech audiometry in quiet and in noise. For evaluation of (audio)visual speech processing abilities, the Test for (Audio) Visual Speech Perception was used. The cognitive test battery consisted of the letter-number sequencing task, the letter detection test, and an auditory Stroop test, measuring working memory and processing speed, selective attention, and cognitive flexibility and inhibition, respectively. Differences between the three groups were examined using one-way analysis of variance or the Kruskal-Wallis test, depending on the normality of the variables. Furthermore, a principal component analysis was conducted to determine the AVC profile. RESULTS Normal-hearing individuals scored better on both auditory and cognitive measures than HA users and CI users listening in their best aided condition. No significant differences were found for speech understanding in the visual condition, despite a larger audiovisual gain for the HA and CI users. Furthermore, an AVC profile was composed from the different auditory, visual, and cognitive assessments. On the basis of that profile, one comprehensive score can be determined for auditory, visual, and cognitive functioning. In the future, these scores could be used in auditory rehabilitation to identify each patient's specific strengths and weaknesses across the abilities involved in everyday speech understanding. CONCLUSIONS Individuals with hearing loss should be evaluated from a broader perspective that considers more than the typical auditory abilities alone. Cognitive and visual abilities are also important to take into account to obtain a more complete overview of speech understanding abilities in daily life.
Affiliation(s)
- Dorien Ceuleers: Department of Head and Skin, Ghent University, Ghent, Belgium
- Hannah Keppler: Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium; Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
- Sofie Degeest: Department of Head and Skin, Ghent University, Ghent, Belgium; Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium; Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
- Nele Baudonck: Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium
- Freya Swinnen: Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium
- Katrien Kestens: Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
- Ingeborg Dhooge: Department of Head and Skin, Ghent University, Ghent, Belgium; Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium
10. Carlie J, Sahlén B, Johansson R, Andersson K, Whitling S, Brännström KJ. The Effect of Background Noise, Bilingualism, Socioeconomic Status, and Cognitive Functioning on Primary School Children's Narrative Listening Comprehension. J Speech Lang Hear Res 2024; 67:960-973. PMID: 38363725. DOI: 10.1044/2023_jslhr-22-00637.
Abstract
PURPOSE This study focuses on 7- to 9-year-old children attending primary school in Swedish areas of low socioeconomic status, where most children's school language is their second language. The aim was to better understand what factors influence these children's narrative listening comprehension, both in an ideal listening condition (quiet) and in a listening condition typical of the primary school classroom (multitalker babble noise). METHOD A total of 86 typically developing 7- to 9-year-olds performed a narrative listening comprehension test (Lyssna, Förstå och Minnas [LFM]; English translation: Listen, Comprehend, and Remember) in two listening conditions: quiet and multitalker babble noise. They also performed the crosslinguistic nonword repetition test and a digit span backwards (DSB) test. A predictive statistical model including these factors, the children's degree of school language exposure, parental education level, and age was derived. RESULTS Listening condition had the strongest predictive value for LFM performance, followed by school language exposure and nonword repetition accuracy. Parental education level was also a significant predictor. There was a significant three-way interaction between listening condition, age, and DSB performance. CONCLUSIONS Multitalker babble noise has a negative effect on children's narrative listening comprehension. This effect could be explained by age differences in the ability to allocate working memory capacity during the narrative listening comprehension task, suggesting that younger children may be more vulnerable to missing information when listening in background noise than their older peers. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.25209248
Affiliation(s)
- Johanna Carlie: Department of Logopedics, Phoniatrics and Audiology, Clinical Sciences Lund, Lund University, Sweden
- Birgitta Sahlén: Department of Logopedics, Phoniatrics and Audiology, Clinical Sciences Lund, Lund University, Sweden
- Ketty Andersson: Department of Logopedics, Phoniatrics and Audiology, Clinical Sciences Lund, Lund University, Sweden
- Susanna Whitling: Department of Logopedics, Phoniatrics and Audiology, Clinical Sciences Lund, Lund University, Sweden
- Karl Jonas Brännström: Department of Logopedics, Phoniatrics and Audiology, Clinical Sciences Lund, Lund University, Sweden
11. Bosen AK, Doria GM. Identifying Links Between Latent Memory and Speech Recognition Factors. Ear Hear 2024; 45:351-369. PMID: 37882100. PMCID: PMC10922378. DOI: 10.1097/aud.0000000000001430.
Abstract
OBJECTIVES The link between memory ability and speech recognition accuracy is often examined by correlating summary measures of performance across various tasks, but interpretation of such correlations critically depends on assumptions about how these measures map onto underlying factors of interest. The present work presents an alternative approach, wherein latent factor models are fit to trial-level data from multiple tasks to directly test hypotheses about the underlying structure of memory and the extent to which latent memory factors are associated with individual differences in speech recognition accuracy. Latent factor models with different numbers of factors were fit to the data and compared to one another to select the structures which best explained vocoded sentence recognition in a two-talker masker across a range of target-to-masker ratios, performance on three memory tasks, and the link between sentence recognition and memory. DESIGN Young adults with normal hearing (N = 52 for the memory tasks, of which 21 participants also completed the sentence recognition task) completed three memory tasks and one sentence recognition task: reading span, auditory digit span, visual free recall of words, and recognition of 16-channel vocoded Perceptually Robust English Sentence Test Open-set sentences in the presence of a two-talker masker at target-to-masker ratios between +10 and 0 dB. Correlations between summary measures of memory task performance and sentence recognition accuracy were calculated for comparison to prior work, and latent factor models were fit to trial-level data and compared against one another to identify the number of latent factors which best explains the data. Models with one or two latent factors were fit to the sentence recognition data and models with one, two, or three latent factors were fit to the memory task data. 
Based on findings with these models, full models that linked one speech factor to one, two, or three memory factors were fit to the full data set. Models were compared via Expected Log pointwise Predictive Density and post hoc inspection of model parameters. RESULTS Summary measures were positively correlated across memory tasks and sentence recognition. Latent factor models revealed that sentence recognition accuracy was best explained by a single factor that varied across participants. Memory task performance was best explained by two latent factors, of which one was generally associated with performance on all three tasks and the other was specific to digit span recall accuracy at lists of six digits or more. When these models were combined, the general memory factor was closely related to the sentence recognition factor, whereas the factor specific to digit span had no apparent association with sentence recognition. CONCLUSIONS Comparison of latent factor models enables testing hypotheses about the underlying structure linking cognition and speech recognition. This approach showed that multiple memory tasks assess a common latent factor that is related to individual differences in sentence recognition, although performance on some tasks was associated with multiple factors. Thus, while these tasks provide some convergent assessment of common latent factors, caution is needed when interpreting what they tell us about speech recognition.
12
|
Moradi S, Engdahl B, Johannessen A, Selbæk G, Aarhus L, Haanes GG. Hearing loss, hearing aid use, and performance on the Montreal cognitive assessment (MoCA): findings from the HUNT study in Norway. Front Neurosci 2024; 17:1327759. [PMID: 38260012 PMCID: PMC10800991 DOI: 10.3389/fnins.2023.1327759] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 10/26/2023] [Accepted: 12/19/2023] [Indexed: 01/24/2024]
Abstract
Purpose To evaluate the associations between hearing status, hearing aid use, and performance on the Montreal Cognitive Assessment (MoCA) in older adults in a cross-sectional study in Norway. Methods This study utilized data from the fourth wave of the Trøndelag Health Study (HUNT4, 2017-2019). Hearing thresholds at frequencies of 0.5, 1, 2, and 4 kHz (PTA4) in the better hearing ear were used to determine participants' hearing status [normal hearing (PTA4 hearing threshold, ≤ 15 dB), or slight (PTA4, 16-25 dB), mild (PTA4, 26-40 dB), moderate (PTA4, 41-55 dB), or severe (PTA4, ≥ 56 dB) hearing loss]. Both standard scoring and alternate MoCA scoring for people with hearing loss (deleting MoCA items that rely on auditory function) were used in data analysis. The analysis was adjusted for the confounders age, sex, education, and health covariates. Results The pattern of results for the alternate scoring was similar to that for standard scoring. Compared with the normal-hearing group, only individuals with moderate or severe hearing loss performed worse on the MoCA. In addition, people with slight hearing loss performed better on the MoCA than those with moderate or severe hearing loss. Within the hearing loss group, hearing aid use was associated with better performance on the MoCA. No interaction between hearing aid use and hearing status was observed for performance on the MoCA test. Conclusion While hearing loss was associated with poorer performance on the MoCA, hearing aid use was found to be associated with better performance. Future randomized controlled trials are needed to further examine the effect of hearing aid use on MoCA performance. When compared with standard scoring, the alternate MoCA scoring had no effect on the pattern of results.
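The PTA4 banding above is simple arithmetic and can be sketched in a few lines of Python (a minimal illustration; the function names are our own, but the dB HL cut-offs are those quoted in the abstract):

```python
def pta4(thresholds_db):
    """Four-frequency pure-tone average: mean hearing threshold (dB HL)
    at 0.5, 1, 2, and 4 kHz in the better hearing ear."""
    if len(thresholds_db) != 4:
        raise ValueError("expected thresholds at 0.5, 1, 2, and 4 kHz")
    return sum(thresholds_db) / 4.0

def hearing_status(pta4_db):
    """Map a PTA4 value onto the categories used in the HUNT4 analysis."""
    if pta4_db <= 15:
        return "normal hearing"
    elif pta4_db <= 25:
        return "slight hearing loss"
    elif pta4_db <= 40:
        return "mild hearing loss"
    elif pta4_db <= 55:
        return "moderate hearing loss"
    else:
        return "severe hearing loss"
```

For example, thresholds of 10, 10, 20, and 20 dB HL give a PTA4 of 15 dB, which still falls in the normal-hearing band.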
Affiliation(s)
- Shahram Moradi
- Research Group for Disability and Inclusion, Faculty of Health and Social Sciences, Department of Health, Social and Welfare Studies, University of South-Eastern Norway, Campus Porsgrunn, Porsgrunn, Norway
- Research Group for Health Promotion in Settings, Department of Health, Social and Welfare Studies, University of South-Eastern Norway, Tønsberg, Norway
- Bo Engdahl
- Department of Physical Health and Ageing, Norwegian Institute of Public Health, Oslo, Norway
- Aud Johannessen
- Faculty of Health and Social Sciences, Department of Health, Social and Welfare Studies, University of South-Eastern Norway, Campus Vestfold, Horten, Norway
- Norwegian National Centre for Ageing and Health, Tønsberg, Norway
- Geir Selbæk
- Norwegian National Centre for Ageing and Health, Tønsberg, Norway
- Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Oslo, Norway
- Geriatric Department, Oslo University Hospital, Oslo, Norway
- Lisa Aarhus
- Department of Occupational Medicine and Epidemiology, National Institute of Occupational Health, Oslo, Norway
- Medical Department, Diakonhjemmet Hospital, Oslo, Norway
- Gro Gade Haanes
- Faculty of Health and Social Sciences, Department of Health, Social and Welfare Studies, University of South-Eastern Norway, Campus Vestfold, Horten, Norway
- USN Research Group of Older Peoples’ Health, Department of Nursing and Health Sciences, Faculty of Health and Social Sciences, University of South-Eastern Norway, Drammen, Norway
13
|
Wang S, Wong LLN. An Exploration of the Memory Performance in Older Adult Hearing Aid Users on the Integrated Digit-in-Noise Test. Trends Hear 2024; 28:23312165241253653. [PMID: 38715401 PMCID: PMC11080745 DOI: 10.1177/23312165241253653] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/27/2023] [Revised: 04/09/2024] [Accepted: 04/19/2024] [Indexed: 05/12/2024]
Abstract
This study aimed to preliminarily investigate the associations between performance on the integrated Digit-in-Noise Test (iDIN) and performance on measures of general cognition and working memory (WM). The study recruited 81 older adult hearing aid users between 60 and 95 years of age with bilateral moderate to severe hearing loss. The Chinese version of the Montreal Cognitive Assessment Basic (MoCA-BC) was used to screen older adults for mild cognitive impairment. Speech reception thresholds (SRTs) were measured using 2- to 5-digit sequences of the Mandarin iDIN. The differences in SRT between five-digit and two-digit sequences (SRT5-2), and between five-digit and three-digit sequences (SRT5-3), were used as indicators of memory performance. The results were compared to those from the Digit Span Test and Corsi Blocks Tapping Test, which evaluate WM and attention capacity. SRT5-2 and SRT5-3 demonstrated significant correlations with the three cognitive function tests (rs ranging from -.705 to -.528). Furthermore, SRT5-2 and SRT5-3 were significantly higher in participants who failed the MoCA-BC screening compared to those who passed. The findings show associations between performance on the iDIN and performance on memory tests. However, further validation and exploration are needed to fully establish its effectiveness and efficacy.
Affiliation(s)
- Shangqiguo Wang
- Unit of Human Communication, Development, and Information Sciences, Faculty of Education, The University of Hong Kong, Hong Kong SAR, China
- Lena L. N. Wong
- Unit of Human Communication, Development, and Information Sciences, Faculty of Education, The University of Hong Kong, Hong Kong SAR, China
14
|
Visentin C, Pellegatti M, Garraffa M, Di Domenico A, Prodi N. Individual characteristics moderate listening effort in noisy classrooms. Sci Rep 2023; 13:14285. [PMID: 37652970 PMCID: PMC10471719 DOI: 10.1038/s41598-023-40660-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 12/21/2022] [Accepted: 08/16/2023] [Indexed: 09/02/2023]
Abstract
Comprehending the teacher's message when other students are chatting is challenging. Even though the sound environment is the same for a whole class, differences in individual performance can be observed, which might depend on a variety of personal factors and their specific interaction with the listening condition. This study was designed to explore the role of individual characteristics (reading comprehension, inhibitory control, noise sensitivity) when primary school children perform a listening comprehension task in the presence of a two-talker masker. The results indicated that this type of noise impairs children's accuracy, effort, and motivation during the task. Its specific impact varied across outcome measures and was modulated by the child's characteristics. In particular, reading comprehension was found to support task accuracy, whereas inhibitory control moderated the effect of listening condition on the two measures of listening effort included in the study (response time and self-ratings), albeit with a different pattern of association. A moderation effect of noise sensitivity on perceived listening effort was also observed. Understanding the relationship between individual characteristics and the classroom sound environment has practical implications for the acoustic design of spaces that promote students' well-being and support their learning performance.
Affiliation(s)
- Chiara Visentin
- Department of Engineering, University of Ferrara, Via Saragat 1, 44122, Ferrara, Italy
- Institute for Renewable Energy, Eurac Research, Via A. Volta/A. Volta Straße 13/A, 39100, Bolzano-Bozen, Italy
- Matteo Pellegatti
- Department of Engineering, University of Ferrara, Via Saragat 1, 44122, Ferrara, Italy
- Maria Garraffa
- School of Health Sciences, University of East Anglia, Norwich Research Park, Norwich, Norfolk, NR4 7TJ, UK
- Alberto Di Domenico
- Department of Psychological, Health and Territorial Sciences, University of Chieti-Pescara, Via dei Vestini 31, 66100, Chieti, Italy
- Nicola Prodi
- Department of Engineering, University of Ferrara, Via Saragat 1, 44122, Ferrara, Italy
15
|
Higgins NC, Pupo DA, Ozmeral EJ, Eddins DA. Head movement and its relation to hearing. Front Psychol 2023; 14:1183303. [PMID: 37448716 PMCID: PMC10338176 DOI: 10.3389/fpsyg.2023.1183303] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Received: 03/09/2023] [Accepted: 06/07/2023] [Indexed: 07/15/2023]
Abstract
Head position at any point in time plays a fundamental role in shaping the auditory information that reaches a listener, information that continuously changes as the head moves and reorients to different listening situations. The connection between hearing science and the kinesthetics of head movement has gained interest due to technological advances that have increased the feasibility of providing behavioral and biological feedback to assistive listening devices that can interpret movement patterns that reflect listening intent. Increasing evidence also shows that the negative impact of hearing deficits on mobility, gait, and balance may be mitigated by prosthetic hearing device intervention. Better understanding of the relationships between head movement, full-body kinetics, and hearing health should lead to improved signal processing strategies across a range of assistive and augmented hearing devices. The purpose of this review is to introduce the wider hearing community to the kinesiology of head movement and to place it in the context of hearing and communication with the goal of expanding the field of ecologically specific listener behavior.
Affiliation(s)
- Nathan C. Higgins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- Daniel A. Pupo
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- School of Aging Studies, University of South Florida, Tampa, FL, United States
- Erol J. Ozmeral
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
- David A. Eddins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, United States
16
|
Moradi S, Rönnberg J. Perceptual Doping: A Hypothesis on How Early Audiovisual Speech Stimulation Enhances Subsequent Auditory Speech Processing. Brain Sci 2023; 13:601. [PMID: 37190566 DOI: 10.3390/brainsci13040601] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 03/08/2023] [Revised: 03/27/2023] [Accepted: 03/30/2023] [Indexed: 04/05/2023]
Abstract
Face-to-face communication is one of the most common means of communication in daily life. We benefit from both auditory and visual speech signals that lead to better language understanding. People prefer face-to-face communication when access to auditory speech cues is limited because of background noise in the surrounding environment or in the case of hearing impairment. We demonstrated that an early, short period of exposure to audiovisual speech stimuli facilitates subsequent auditory processing of speech stimuli for correct identification, but early auditory exposure does not. We called this effect “perceptual doping,” as early audiovisual speech stimulation dopes, or recalibrates, auditory phonological and lexical maps in the mental lexicon in a way that results in better processing of auditory speech signals for correct identification. This short opinion paper provides an overview of perceptual doping and how it differs from similar auditory perceptual aftereffects following exposure to audiovisual speech materials, its underlying cognitive mechanism, and its potential usefulness in the aural rehabilitation of people with hearing difficulties.
Affiliation(s)
- Shahram Moradi
- Department of Health, Social and Welfare Studies, Faculty of Health and Social Sciences, University of South-Eastern Norway, 3918 Porsgrunn, Norway
- Jerker Rönnberg
- Department of Behavioral Sciences and Learning, Linnaeus Centre HEAD, Linköping University, 581 83 Linköping, Sweden
17
|
Rönnberg J, Sharma A, Signoret C, Campbell TA, Sörqvist P. Editorial: Cognitive hearing science: Investigating the relationship between selective attention and brain activity. Front Neurosci 2022; 16:1098340. [PMID: 36583104 PMCID: PMC9793772 DOI: 10.3389/fnins.2022.1098340] [Received: 11/14/2022] [Accepted: 11/22/2022] [Indexed: 12/15/2022]
Affiliation(s)
- Jerker Rönnberg
- Department of Behavioral Sciences and Learning, Linnaeus Centre HEAD, Linköping University, Linköping, Sweden
- Anu Sharma
- Department of Speech, Language and Hearing Sciences, University of Colorado at Boulder, Boulder, CO, United States
- Carine Signoret
- Department of Behavioral Sciences and Learning, Linnaeus Centre HEAD, Linköping University, Linköping, Sweden
- Tom A. Campbell
- Faculty of Information Technology and Communication Sciences, Tampere University, Tampere, Finland
- Patrik Sörqvist
- Department of Building Engineering, Energy Systems and Sustainability Science, University of Gävle, Gävle, Sweden