1. Marsja E, Holmer E, Stenbäck V, Micula A, Tirado C, Danielsson H, Rönnberg J. Fluid Intelligence Partially Mediates the Effect of Working Memory on Speech Recognition in Noise. J Speech Lang Hear Res 2025; 68:399-410. [PMID: 39666895 DOI: 10.1044/2024_jslhr-24-00465]
Abstract
PURPOSE Although the existing literature has explored the link between cognitive functioning and speech recognition in noise, the specific role of fluid intelligence still needs to be studied. Given the established association between working memory capacity (WMC) and fluid intelligence and the predictive power of WMC for speech recognition in noise, we aimed to elucidate the mediating role of fluid intelligence. METHOD We used data from the n200 study, a longitudinal investigation into aging, hearing ability, and cognitive functioning. We analyzed two age-matched samples: participants with hearing aids and a group with normal hearing. WMC was assessed using the Reading Span task, and fluid intelligence was measured with Raven's Progressive Matrices. Speech recognition in noise was evaluated using Hagerman sentences presented to target 80% speech-reception thresholds in four-talker babble. Data were analyzed using mediation analysis to examine fluid intelligence as a mediator between WMC and speech recognition in noise. RESULTS We found a partial mediating effect of fluid intelligence on the relationship between WMC and speech recognition in noise, and that hearing status did not moderate this effect. In other words, WMC and fluid intelligence were related, and fluid intelligence partially explained the influence of WMC on speech recognition in noise. CONCLUSIONS This study shows the importance of fluid intelligence in speech recognition in noise, regardless of hearing status. Future research should use other advanced statistical techniques and explore various speech recognition tests and background maskers to deepen our understanding of the interplay between WMC and fluid intelligence in speech recognition.
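The mediation analysis described above can be made concrete with a small sketch: one regression gives the WMC-to-fluid-intelligence path (a), a second gives the fluid-intelligence-to-speech-recognition path (b), and a bootstrap puts a confidence interval on the indirect effect a*b. This is a minimal illustration with simulated data, not the authors' analysis code; the variable names (`wmc`, `gf`, `srt`) are assumptions.

```python
# Minimal mediation sketch: WMC -> fluid intelligence (Gf) -> speech-in-noise SRT.
# Illustrative only; variable names and simulated data are assumptions.
import numpy as np
import statsmodels.api as sm

rng = np.random.default_rng(0)
n = 200
wmc = rng.normal(size=n)                           # working memory capacity (z-scored)
gf = 0.5 * wmc + rng.normal(size=n)                # fluid intelligence, partly driven by WMC
srt = -0.3 * wmc - 0.4 * gf + rng.normal(size=n)   # lower SRT = better recognition

def paths(wmc, gf, srt):
    a = sm.OLS(gf, sm.add_constant(wmc)).fit().params[1]   # path a: WMC -> Gf
    X = sm.add_constant(np.column_stack([wmc, gf]))
    fit = sm.OLS(srt, X).fit()
    c_prime, b = fit.params[1], fit.params[2]              # direct effect, path b
    return a * b, c_prime                                  # indirect, direct

# Nonparametric bootstrap of the indirect effect a*b.
boot = np.array([paths(wmc[idx], gf[idx], srt[idx])[0]
                 for idx in rng.integers(0, n, size=(2000, n))])
lo, hi = np.percentile(boot, [2.5, 97.5])
indirect, direct = paths(wmc, gf, srt)
print(f"indirect a*b = {indirect:.3f}, 95% CI [{lo:.3f}, {hi:.3f}]; direct c' = {direct:.3f}")
```

A nonzero indirect effect alongside a nonzero direct effect is the "partial mediation" pattern the abstract reports.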
Affiliation(s)
- Erik Marsja
- Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Emil Holmer
- Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Victoria Stenbäck
- Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Division of Education, Teaching and Learning, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Andreea Micula
- Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- National Institute of Public Health, University of Southern Denmark, Copenhagen
- Eriksholm Research Centre, Oticon A/S, Copenhagen, Denmark
- Carlos Tirado
- Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Henrik Danielsson
- Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Jerker Rönnberg
- Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
2. Moberly AC, Du L, Tamati TN. Individual Differences in the Recognition of Spectrally Degraded Speech: Associations With Neurocognitive Functions in Adult Cochlear Implant Users and With Noise-Vocoded Simulations. Trends Hear 2025; 29:23312165241312449. [PMID: 39819389 PMCID: PMC11742172 DOI: 10.1177/23312165241312449]
Abstract
When listening to speech under adverse conditions, listeners compensate using neurocognitive resources. A clinically relevant form of adverse listening is listening through a cochlear implant (CI), which provides a spectrally degraded signal. CI listening is often simulated through noise-vocoding. This study investigated the neurocognitive mechanisms supporting recognition of spectrally degraded speech in adult CI users and normal-hearing (NH) peers listening to noise-vocoded speech, with the hypothesis that an overlapping set of neurocognitive functions would contribute to speech recognition in both groups. Ninety-seven adults with either a CI (54 CI individuals, mean age 66.6 years, range 45-87 years) or age-normal hearing (43 NH individuals, mean age 66.8 years, range 50-81 years) participated. Listeners heard materials varying in linguistic complexity, consisting of isolated words, meaningful sentences, anomalous sentences, high-variability sentences, and audiovisually (AV) presented sentences. Participants were also tested for vocabulary knowledge, nonverbal reasoning, working memory capacity, inhibition-concentration, and speed of lexical and phonological access. Linear regression analyses with robust standard errors were performed, regressing performance on each speech recognition task on the neurocognitive functions. Nonverbal reasoning contributed to meaningful sentence recognition in NH peers and anomalous sentence recognition in CI users. Speed of lexical access contributed to performance on most speech tasks for CI users but not for NH peers. Finally, inhibition-concentration and vocabulary knowledge contributed to AV sentence recognition in NH listeners alone. Findings suggest that the complexity of speech materials may determine the particular contributions of neurocognitive skills, and that NH processing of noise-vocoded speech may not represent how CI listeners process speech.
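A minimal sketch of the analysis named above, ordinary least squares with heteroskedasticity-robust standard errors in statsmodels; the file and column names are illustrative assumptions, not the study's actual variables.

```python
# Sketch of OLS with robust (HC3) standard errors, as described above.
# File and column names are illustrative assumptions.
import pandas as pd
import statsmodels.formula.api as smf

df = pd.read_csv("scores.csv")  # hypothetical file: one row per participant

# Regress sentence recognition accuracy on neurocognitive predictors;
# cov_type="HC3" requests heteroskedasticity-robust standard errors.
model = smf.ols(
    "sentence_accuracy ~ nonverbal_reasoning + lexical_speed + working_memory",
    data=df,
).fit(cov_type="HC3")
print(model.summary())
```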
Affiliation(s)
- Aaron C. Moberly
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
- Liping Du
- Department of Biostatistics, Vanderbilt University Medical Center, Nashville, TN, USA
- Terrin N. Tamati
- Department of Otolaryngology – Head & Neck Surgery, Vanderbilt University Medical Center, Nashville, TN, USA
3. Abbas M, Szpiro SFA, Karawani H. Interconnected declines in audition, vision, and cognition in healthy aging. Sci Rep 2024; 14:30805. [PMID: 39730569 DOI: 10.1038/s41598-024-81154-y]
Abstract
Age-related sensory declines are unavoidable and closely linked to decreased visual, auditory, and cognitive functions. However, the interrelations of these declines remain poorly understood. Despite extensive studies in each domain, shared age-related characteristics are complex and may not consistently manifest direct relationships at the individual level. We investigated the link between visual and auditory perceptual declines in healthy aging and their relation to cognitive function using six psychophysical and three cognitive tasks. Eighty healthy young and older adults participated, and the results revealed a general age-related decline: young adults consistently outperformed older adults in all tasks. Critically, performance in visual tasks significantly correlated with performance in auditory tasks in older adults. This suggests a domain-general decline in perception, where declines in vision are related to declines in audition within individuals. Additionally, perceptual performance in older adults decreased monotonically year by year. Working memory performance significantly correlated with perceptual performance across both age groups and modalities, further supporting the hypothesis of a domain-general decline. These findings highlight the complex and interconnected nature of sensory and cognitive declines in aging, providing a foundation for future translational research focused on enhancing cognitive and perceptual abilities to promote healthy aging and ultimately improve the quality of life for older adults.
Affiliation(s)
- Mais Abbas
- Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Sciences, University of Haifa, Haifa, Israel
- Sarit F A Szpiro
- Department of Special Education, University of Haifa, Haifa, Israel
- Edmond J. Safra Brain Research Center, University of Haifa, Haifa, Israel
- The Haifa Brain and Behavior Hub, University of Haifa, Haifa, Israel
- Hanin Karawani
- Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Sciences, University of Haifa, Haifa, Israel
- The Haifa Brain and Behavior Hub, University of Haifa, Haifa, Israel
4. Wang S, Wong LLN, Chen Y. Development of the Mandarin Reading Span Test and confirmation of its relationship with speech perception in noise. Int J Audiol 2024; 63:1009-1018. [PMID: 38270384 DOI: 10.1080/14992027.2024.2305685]
Abstract
OBJECTIVE This study aimed to develop a dual-task Mandarin Reading Span Test (RST) to assess verbal working memory related to speech perception in noise. DESIGN The test material was developed taking into account psycholinguistic factors (i.e., sentence structure, number of syllables, word familiarity, and sentence plausibility) to achieve good test reliability and face validity. The relationship between the 28-sentence Mandarin RST and speech perception in noise was confirmed using three speech-perception-in-noise measures containing varying levels of contextual and linguistic information. STUDY SAMPLE The study comprised 42 young adults with normal hearing and 56 older adults who were hearing aid users with moderate to severe hearing loss. RESULTS In older hearing aid users, the 28-sentence RST showed significant correlations with speech reception thresholds as measured by three Mandarin sentence-in-noise tests (rs or r = -.681 to -.419) but not with the 2-digit-sequence Digit-in-Noise Test. CONCLUSION The newly developed dual-task Mandarin RST, constructed with careful psycholinguistic consideration, demonstrates a significant relationship with sentence perception in noise. This suggests that the Mandarin RST could serve as a measure of verbal working memory.
Affiliation(s)
- Shangqiguo Wang
- Unit of Human Communication, Learning, and Development, Faculty of Education, The University of Hong Kong, Hong Kong SAR, China
- Lena L N Wong
- Unit of Human Communication, Learning, and Development, Faculty of Education, The University of Hong Kong, Hong Kong SAR, China
- Yuan Chen
- Department of Special Education and Counselling, Integrated Center for Wellbeing (I-WELL), The Education University of Hong Kong, Taipo, New Territories, China
5. Bsharat-Maalouf D, Schmidtke J, Degani T, Karawani H. Through the Pupils' Lens: Multilingual Effort in First and Second Language Listening. Ear Hear 2024. [PMID: 39660813 DOI: 10.1097/aud.0000000000001602]
Abstract
OBJECTIVES The present study aimed to examine the involvement of listening effort among multilinguals in their first (L1) and second (L2) languages in quiet and noisy listening conditions and investigate how the presence of a constraining context within sentences influences listening effort. DESIGN A group of 46 young adult Arabic (L1)-Hebrew (L2) multilinguals participated in a listening task. This task aimed to assess participants' perceptual performance and the effort they exert (as measured through pupillometry) while listening to single words and sentences presented in their L1 and L2, in quiet and noisy environments (signal-to-noise ratio = 0 dB). RESULTS Listening in quiet was easier than in noise, as supported by both perceptual and pupillometry results. Perceptually, multilinguals performed similarly and reached ceiling levels in both languages in quiet. However, under noisy conditions, perceptual accuracy was significantly lower in L2, especially when processing sentences. Critically, pupil dilation was larger and more prolonged when listening to L2 than to L1 stimuli. This difference was observed even in the quiet condition. Contextual support resulted in better perceptual performance of high-predictability sentences compared with low-predictability sentences, but only in L1 under noisy conditions. In L2, pupillometry showed increased effort when listening to high-predictability sentences compared with low-predictability sentences, but this increased effort did not lead to better understanding. In fact, in noise, speech perception was lower for high-predictability L2 sentences than for low-predictability ones. CONCLUSIONS The findings underscore the importance of examining listening effort in multilingual speech processing and suggest that increased effort may be present in multilinguals' L2 within clinical and educational settings.
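Pupillometric effort measures of the kind reported here are conventionally summarized as baseline-corrected dilation: pupil size in a pre-stimulus window is subtracted before the post-onset response is averaged or peak-picked. The sketch below shows that computation under an assumed trial layout; the sampling rate, window lengths, and data are illustrative.

```python
# Baseline-corrected pupil dilation per trial: subtract the mean pupil size in a
# pre-stimulus window, then summarize the post-onset response.
# Trial layout, window lengths, and the 60 Hz sampling rate are assumptions.
import numpy as np

fs = 60                                   # samples per second (tracker dependent)
baseline_s, onset_s = 1.0, 1.0
rng = np.random.default_rng(1)
pupil = rng.normal(4.0, 0.1, size=(40, 6 * fs))   # fake data: 40 trials x 6 s

base = pupil[:, : int(baseline_s * fs)].mean(axis=1, keepdims=True)
corrected = pupil - base                  # dilation relative to baseline
window = corrected[:, int(onset_s * fs):] # post-onset analysis window
mean_dilation = window.mean(axis=1)       # one summary value per trial
peak_dilation = window.max(axis=1)
print(mean_dilation.mean(), peak_dilation.mean())
```

Larger and more prolonged dilation in one condition than another (here, L2 versus L1) is then read as greater listening effort.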
Affiliation(s)
- Dana Bsharat-Maalouf
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Jens Schmidtke
- Haifa Center for German and European Studies, University of Haifa, Haifa, Israel
- Tamar Degani
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Hanin Karawani
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
6. Kattner F, Föcker J, Moshona CC, Marsh JE. When softer sounds are more distracting: Task-irrelevant whispered speech causes disruption of serial recall. J Acoust Soc Am 2024; 156:3632-3648. [PMID: 39589332 DOI: 10.1121/10.0034454]
Abstract
Two competing accounts propose that the disruption of short-term memory by irrelevant speech arises either due to interference-by-process (e.g., changing-state effect) or attentional capture, but it is unclear how whispering affects the irrelevant speech effect. According to the interference-by-process account, whispered speech should be less disruptive due to its reduced periodic spectro-temporal fine structure and lower amplitude modulations. In contrast, the attentional account predicts more disruption by whispered speech, possibly via enhanced listening effort in the case of a comprehended language. In two experiments, voiced and whispered speech (spoken sentences or monosyllabic words) were presented while participants memorized the order of visually presented letters. In both experiments, a changing-state effect was observed regardless of the phonation (sentences produced more disruption than "steady-state" words). Moreover, whispered speech (lower fluctuation strength) was more disruptive than voiced speech when participants understood the language (Experiment 1), but not when the language was incomprehensible (Experiment 2). The results suggest two functionally distinct mechanisms of auditory distraction: While changing-state speech causes automatic interference with seriation processes regardless of its meaning or intelligibility, whispering appears to contain cues that divert attention from the focal task primarily when presented in a comprehended language, possibly via enhanced listening effort.
Affiliation(s)
- Florian Kattner
- Institute for Mind, Brain and Behavior, Health and Medical University, Schiffbauergasse 14, 14467 Potsdam, Germany
- Julia Föcker
- College of Health and Science, School of Psychology, Sport Science and Wellbeing, University of Lincoln, Brayford Pool, Lincoln, LN6 7TS, United Kingdom
- Cleopatra Christina Moshona
- Engineering Acoustics, Institute of Fluid Dynamics and Technical Acoustics, Technische Universität Berlin, Einsteinufer 25, 10587 Berlin, Germany
- John E Marsh
- School of Psychology and Humanities, University of Central Lancashire, Preston, PR1 2HE, United Kingdom
- Department of Health, Learning and Technology, Luleå University of Technology, Luleå, Sweden
7. Taitelbaum-Swead R, Ben-David BM. The Role of Early Intact Auditory Experience on the Perception of Spoken Emotions, Comparing Prelingual to Postlingual Cochlear Implant Users. Ear Hear 2024; 45:1585-1599. [PMID: 39004788 DOI: 10.1097/aud.0000000000001550]
Abstract
OBJECTIVES Cochlear implants (CI) are remarkably effective, but have limitations regarding the transformation of the spectro-temporal fine structures of speech. This may impair processing of spoken emotions, which involves the identification and integration of semantic and prosodic cues. Our previous study found spoken-emotions-processing differences between CI users with postlingual deafness (postlingual CI) and normal hearing (NH) matched controls (age range, 19 to 65 years). Postlingual CI users over-relied on semantic information in incongruent trials (prosody and semantics present different emotions), but rated congruent trials (same emotion) similarly to controls. Postlingual CI users' intact early auditory experience may explain this pattern of results. The present study examined whether CI users without intact early auditory experience (prelingual CI) would generally perform worse on spoken emotion processing than NH and postlingual CI users, and whether CI use would affect prosodic processing in both CI groups. First, we compared prelingual CI users with their NH controls. Second, we compared the results of the present study to those of our previous study (Taitelbaum-Swead et al., 2022; postlingual CI). DESIGN Fifteen prelingual CI users and 15 NH controls (age range, 18 to 31 years) listened to spoken sentences composed of different combinations (congruent and incongruent) of three discrete emotions (anger, happiness, sadness) and neutrality (performance baseline), presented in prosodic and semantic channels (Test for Rating of Emotions in Speech paradigm). Listeners were asked to rate (six-point scale) the extent to which each of the predefined emotions was conveyed by the sentence as a whole (integration of prosody and semantics), or to focus only on one channel (rating the target emotion [RTE]) and ignore the other (selective attention). In addition, all participants performed standard tests of speech perception. Performance on the Test for Rating of Emotions in Speech was compared with the previous study (postlingual CI). RESULTS When asked to focus on one channel, semantics or prosody, both CI groups showed a decrease in prosodic RTE (compared with controls), but only the prelingual CI group showed a decrease in semantic RTE. When the task called for channel integration, both groups of CI users used semantic emotional information to a greater extent than their NH controls. Both groups of CI users rated sentences that did not present the target emotion higher than their NH controls, indicating some degree of confusion. However, only the prelingual CI group rated congruent sentences lower than their NH controls, suggesting reduced accumulation of information across channels. For prelingual CI users, individual differences in identification of monosyllabic words were significantly related to semantic identification and semantic-prosodic integration. CONCLUSIONS Taken together with our previous study, we found that the degradation of acoustic information by the CI impairs the processing of prosodic emotions in both CI user groups. This distortion appears to lead CI users to over-rely on semantic information when asked to integrate across channels. Early intact auditory exposure among CI users was found to be necessary for the effective identification of semantic emotions, as well as for the accumulation of emotional information across the two channels. Results suggest that interventions for spoken-emotion processing should not ignore the onset of hearing loss.
Affiliation(s)
- Riki Taitelbaum-Swead
- Department of Communication Disorders, Speech Perception and Listening Effort Lab in the name of Prof. Mordechai Himelfarb, Ariel University, Israel
- Meuhedet Health Services, Tel Aviv, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- KITE Research Institute, Toronto Rehabilitation Institute-University Health Network, Toronto, ON, Canada
8. Wang S, Wong LLN. Development of the Mandarin Digit-in-Noise Test and Examination of the Effect of the Number of Digits Used in the Test. Ear Hear 2024; 45:572-582. [PMID: 37990396 DOI: 10.1097/aud.0000000000001447]
Abstract
OBJECTIVES The study aimed to develop and validate the Mandarin digit-in-noise (DIN) test using four digit-sequence lengths (two-, three-, four-, and five-digit sequences). Test-retest reliability and criterion validity were evaluated, and how the number of digits affected the results was examined. The research might lead to more informed choice of DIN tests for populations with specific cognitive needs, such as memory impairment. DESIGN The International Collegium of Rehabilitative Audiology guideline for developing the DIN was adapted to create test materials. The test-retest reliability and psychometric function of each digit sequence were determined among young normal-hearing adults. The criterion validity of each digit sequence was determined by comparing the measured performance of older adult hearing aid users with that obtained from two other well-established sentence-in-noise tests: the Mandarin hearing-in-noise test and the Mandarin Chinese matrix test. The relation between the speech reception thresholds (SRTs) of each digit sequence of the DIN test and working memory capacity, measured using the digit span test and the reading span test, was explored among older adult hearing aid users. Together, the study sample consisted of 54 young normal-hearing adults and 56 older adult hearing aid users. RESULTS The slopes associated with the two-, three-, four-, and five-digit DIN test were 16.58, 18.79, 20.42, and 21.09 %/dB, respectively, and the mean SRTs were -11.11, -10.99, -10.56, and -10.02 dB SNR, respectively. Test-retest SRTs did not differ by more than 0.74 dB across all digit sequences, suggesting good test-retest reliability. Spearman rank-order correlation coefficients between SRTs obtained using the DIN across the four digit-sequence lengths and the two sentence-in-noise tests were uniformly high (rs = 0.9) when data from all participants were considered. Results from the digit span test and reading span test correlated significantly with the results of the five-digit sequences (rs = -0.37 and -0.42, respectively) but not with the results of the two-, three-, and four-digit sequences among older hearing aid users. CONCLUSIONS While the three-digit sequence was found to be appropriate for clinical use in the assessment of auditory perception, the two-digit sequence could be used for hearing screening. The five-digit sequence could be difficult for older hearing aid users, and with its SRT related to working memory capacity, its use in the evaluation of speech perception should be investigated further. The Mandarin DIN test was found to be reliable, and its results are in line with SRTs obtained using standardized sentence tests, suggesting good criterion validity.
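The slope and SRT values reported above come from fitted psychometric functions. A minimal sketch of such a fit follows, with invented data points: a logistic curve is fit to proportion correct versus SNR, and the SRT (the SNR at 50% correct) and the midpoint slope in %/dB are read off the parameters.

```python
# Fit a logistic psychometric function to proportion correct vs. SNR and report
# the SRT (SNR at 50% correct) and the slope at the SRT in %/dB. Data invented.
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, srt, slope):
    # Parameterized so that `slope` is the derivative (proportion/dB) at snr=srt:
    # f'(srt) = 4*slope * f(1-f) = slope when f = 0.5.
    return 1.0 / (1.0 + np.exp(-4.0 * slope * (snr - srt)))

snr = np.array([-14.0, -12.0, -10.0, -8.0, -6.0])   # dB SNR
p_correct = np.array([0.12, 0.30, 0.55, 0.80, 0.95])

(srt, slope), _ = curve_fit(logistic, snr, p_correct, p0=(-10.0, 0.15))
print(f"SRT = {srt:.2f} dB SNR, slope at SRT = {100 * slope:.1f} %/dB")
```

Steeper slopes, as reported for the longer digit sequences, mean a small SNR change moves performance further along the curve.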
Affiliation(s)
- Shangqiguo Wang
- Faculty of Education, The University of Hong Kong, Pokfulam, Hong Kong, China
9. Shen J, Sun J, Zhang Z, Sun B, Li H, Liu Y. The Effect of Hearing Loss and Working Memory Capacity on Context Use and Reliance on Context in Older Adults. Ear Hear 2024; 45:787-800. [PMID: 38273447 DOI: 10.1097/aud.0000000000001470]
Abstract
OBJECTIVES Older adults often complain of difficulty in communicating in noisy environments. Contextual information is considered an important cue for identifying everyday speech. To date, it has not been clear exactly how context use (CU) and reliance on context in older adults are affected by hearing status and cognitive function. The present study examined the effects of semantic context on the performance of speech recognition, recall, perceived listening effort (LE), and noise tolerance, and further explored the impacts of hearing loss and working memory capacity on CU and reliance on context among older adults. DESIGN Fifty older adults with normal hearing and 56 older adults with mild-to-moderate hearing loss between the ages of 60 and 95 years participated in this study. A median split of backward digit span further classified the participants into high working memory (HWM) and low working memory (LWM) capacity groups. Each participant performed high- and low-context Repeat and Recall tests, including a sentence repeat and delayed recall task, subjective assessments of LE, and tolerable time under seven signal-to-noise ratios (SNRs). CU was calculated as the difference between high- and low-context sentences for each outcome measure. The proportion of context use (PCU) in high-context performance was taken as the reliance on context, to explain the degree to which participants relied on context when they repeated and recalled high-context sentences. RESULTS Semantic context helps improve the performance of speech recognition and delayed recall, reduces perceived LE, and prolongs noise tolerance in older adults with and without hearing loss. In addition, the adverse effects of hearing loss on the performance of repeat tasks were more pronounced in low context than in high context, whereas the effects on recall tasks and noise tolerance time were more significant in high context than in low context. Compared with other tasks, the CU and PCU in repeat tasks were more affected by hearing status and working memory capacity. In the repeat phase, hearing loss increased older adults' reliance on context in relatively challenging listening environments, as shown by the fact that when the SNR was 0 and -5 dB, the PCU (repeat) of the hearing loss group was significantly greater than that of the normal-hearing group, whereas there was no significant difference between the two hearing groups at the remaining SNRs. In addition, older adults with LWM had significantly greater CU and PCU in repeat tasks than those with HWM, especially at SNRs with moderate task demands. CONCLUSIONS Taken together, semantic context not only improved speech perception intelligibility but also released cognitive resources for memory encoding in older adults. Mild-to-moderate hearing loss and LWM capacity in older adults significantly increased the use of and reliance on semantic context, which was also modulated by the level of SNR.
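CU and PCU as defined above are simple difference and ratio scores. The sketch below shows one plausible reading of those definitions with invented scores; the exact normalization used in the study may differ.

```python
# Context use (CU) = high-context score minus low-context score.
# Proportion of context use (PCU) = CU / high-context score, i.e. the share of
# high-context performance attributable to context. This normalization is an
# assumption about the paper's definition; the scores below are invented.
high_context_repeat = 0.85   # proportion repeated correctly, high-context sentences
low_context_repeat = 0.60    # proportion repeated correctly, low-context sentences

cu = high_context_repeat - low_context_repeat
pcu = cu / high_context_repeat
print(f"CU = {cu:.2f}, PCU = {pcu:.2%}")
```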
Affiliation(s)
- Jiayuan Shen
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Zhejiang, China
- Jiayu Sun
- Department of Otolaryngology, Head and Neck Surgery, Shanghai Ninth People's Hospital, Shanghai JiaoTong University School of Medicine, Shanghai, China
- Zhikai Zhang
- Department of Otolaryngology, Head and Neck Surgery, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Baoxuan Sun
- Training Department, Widex Hearing Aid (Shanghai) Co., Ltd, Shanghai, China
- Haitao Li
- Department of Neurology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- These authors contributed equally to this work and are co-corresponding authors
- Yuhe Liu
- Department of Otolaryngology, Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- These authors contributed equally to this work and are co-corresponding authors
10. Bosen AK, Doria GM. Identifying Links Between Latent Memory and Speech Recognition Factors. Ear Hear 2024; 45:351-369. [PMID: 37882100 PMCID: PMC10922378 DOI: 10.1097/aud.0000000000001430]
Abstract
OBJECTIVES The link between memory ability and speech recognition accuracy is often examined by correlating summary measures of performance across various tasks, but interpretation of such correlations critically depends on assumptions about how these measures map onto underlying factors of interest. The present work presents an alternative approach, wherein latent factor models are fit to trial-level data from multiple tasks to directly test hypotheses about the underlying structure of memory and the extent to which latent memory factors are associated with individual differences in speech recognition accuracy. Latent factor models with different numbers of factors were fit to the data and compared to one another to select the structures which best explained vocoded sentence recognition in a two-talker masker across a range of target-to-masker ratios, performance on three memory tasks, and the link between sentence recognition and memory. DESIGN Young adults with normal hearing (N = 52 for the memory tasks, of which 21 participants also completed the sentence recognition task) completed three memory tasks and one sentence recognition task: reading span, auditory digit span, visual free recall of words, and recognition of 16-channel vocoded Perceptually Robust English Sentence Test Open-set sentences in the presence of a two-talker masker at target-to-masker ratios between +10 and 0 dB. Correlations between summary measures of memory task performance and sentence recognition accuracy were calculated for comparison to prior work, and latent factor models were fit to trial-level data and compared against one another to identify the number of latent factors which best explains the data. Models with one or two latent factors were fit to the sentence recognition data and models with one, two, or three latent factors were fit to the memory task data. Based on findings with these models, full models that linked one speech factor to one, two, or three memory factors were fit to the full data set. Models were compared via Expected Log pointwise Predictive Density and post hoc inspection of model parameters. RESULTS Summary measures were positively correlated across memory tasks and sentence recognition. Latent factor models revealed that sentence recognition accuracy was best explained by a single factor that varied across participants. Memory task performance was best explained by two latent factors, of which one was generally associated with performance on all three tasks and the other was specific to digit span recall accuracy at lists of six digits or more. When these models were combined, the general memory factor was closely related to the sentence recognition factor, whereas the factor specific to digit span had no apparent association with sentence recognition. CONCLUSIONS Comparison of latent factor models enables testing hypotheses about the underlying structure linking cognition and speech recognition. This approach showed that multiple memory tasks assess a common latent factor that is related to individual differences in sentence recognition, although performance on some tasks was associated with multiple factors. Thus, while these tasks provide some convergent assessment of common latent factors, caution is needed when interpreting what they tell us about speech recognition.
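The model-comparison logic described above, choosing how many latent factors the data support, can be illustrated with a simpler stand-in: cross-validated log-likelihoods of maximum-likelihood factor-analysis models with one to three factors. The paper's Bayesian comparison via expected log pointwise predictive density is analogous in spirit; the simulated data and the scikit-learn machinery here are illustrative assumptions, not the authors' models.

```python
# Stand-in for "how many latent factors?" model comparison: held-out
# log-likelihood of FactorAnalysis models with 1-3 factors. Data are simulated
# so that one general factor underlies six task measures.
import numpy as np
from sklearn.decomposition import FactorAnalysis
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(2)
n = 52
general = rng.normal(size=(n, 1))                  # one shared memory factor
loadings = rng.uniform(0.5, 1.0, size=(1, 6))
scores = general @ loadings + 0.5 * rng.normal(size=(n, 6))  # 6 task measures

for k in (1, 2, 3):
    fa = FactorAnalysis(n_components=k, random_state=0)
    ll = cross_val_score(fa, scores).mean()        # mean held-out log-likelihood
    print(f"{k} factor(s): held-out log-likelihood = {ll:.2f}")
```

With data generated by a single factor, the one-factor model should win; richer structures (like the digit-span-specific factor the paper reports) show up as a gain for additional factors.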
11. Moradi S, Engdahl B, Johannessen A, Selbæk G, Aarhus L, Haanes GG. Hearing loss, hearing aid use, and performance on the Montreal Cognitive Assessment (MoCA): findings from the HUNT study in Norway. Front Neurosci 2024; 17:1327759. [PMID: 38260012 PMCID: PMC10800991 DOI: 10.3389/fnins.2023.1327759]
Abstract
Purpose To evaluate the associations between hearing status and hearing aid use and performance on the Montreal Cognitive Assessment (MoCA) in older adults in a cross-sectional study in Norway. Methods This study utilized data from the fourth wave of the Trøndelag Health Study (HUNT4, 2017-2019). The pure-tone average of hearing thresholds at 0.5, 1, 2, and 4 kHz (PTA4) in the better hearing ear was used to determine participants' hearing status [normal hearing (PTA4 ≤ 15 dB), or slight (PTA4, 16-25 dB), mild (PTA4, 26-40 dB), moderate (PTA4, 41-55 dB), or severe (PTA4, ≥ 56 dB) hearing loss]. Both standard MoCA scoring and an alternate scoring for people with hearing loss (deleting MoCA items that rely on auditory function) were used in the data analysis. The analysis was adjusted for the confounders age, sex, education, and health covariates. Results The pattern of results for the alternate scoring was similar to that for standard scoring. Compared with the normal-hearing group, only individuals with moderate or severe hearing loss performed worse on the MoCA. In addition, people with slight hearing loss performed better on the MoCA than those with moderate or severe hearing loss. Within the hearing loss group, hearing aid use was associated with better MoCA performance. No interaction between hearing aid use and hearing status was observed for performance on the MoCA test. Conclusion While hearing loss was associated with poorer MoCA performance, hearing aid use was associated with better MoCA performance. Future randomized controlled trials are needed to further examine the efficacy of hearing aid use for MoCA performance. Compared with standard scoring, the alternate MoCA scoring had no effect on the pattern of results.
Affiliation(s)
- Shahram Moradi
- Research Group for Disability and Inclusion, Faculty of Health and Social Sciences, Department of Health, Social and Welfare Studies, University of South-Eastern Norway, Campus Porsgrunn, Porsgrunn, Norway
- Research Group for Health Promotion in Settings, Department of Health, Social and Welfare Studies, University of South-Eastern Norway, Tønsberg, Norway
- Bo Engdahl
- Department of Physical Health and Ageing, Norwegian Institute of Public Health, Oslo, Norway
- Aud Johannessen
- Faculty of Health and Social Sciences, Department of Health, Social and Welfare Studies, University of South-Eastern Norway, Campus Vestfold, Horten, Norway
- Norwegian National Centre for Ageing and Health, Tønsberg, Norway
- Geir Selbæk
- Norwegian National Centre for Ageing and Health, Tønsberg, Norway
- Institute of Clinical Medicine, Faculty of Medicine, University of Oslo, Oslo, Norway
- Geriatric Department, Oslo University Hospital, Oslo, Norway
- Lisa Aarhus
- Department of Occupational Medicine and Epidemiology, National Institute of Occupational Health, Oslo, Norway
- Medical Department, Diakonhjemmet Hospital, Oslo, Norway
- Gro Gade Haanes
- Faculty of Health and Social Sciences, Department of Health, Social and Welfare Studies, University of South-Eastern Norway, Campus Vestfold, Horten, Norway
- USN Research Group of Older Peoples’ Health, Department of Nursing and Health Sciences, Faculty of Health and Social Sciences, University of South-Eastern Norway, Drammen, Norway
12. Wang S, Wong LLN. An Exploration of the Memory Performance in Older Adult Hearing Aid Users on the Integrated Digit-in-Noise Test. Trends Hear 2024; 28:23312165241253653. [PMID: 38715401 PMCID: PMC11080745 DOI: 10.1177/23312165241253653]
Abstract
This study aimed to preliminarily investigate the associations between performance on the integrated Digit-in-Noise Test (iDIN) and performance on measures of general cognition and working memory (WM). The study recruited 81 older adult hearing aid users between 60 and 95 years of age with bilateral moderate to severe hearing loss. The Chinese version of the Montreal Cognitive Assessment Basic (MoCA-BC) was used to screen older adults for mild cognitive impairment. Speech reception thresholds (SRTs) were measured using 2- to 5-digit sequences of the Mandarin iDIN. The differences in SRT between five-digit and two-digit sequences (SRT5-2), and between five-digit and three-digit sequences (SRT5-3), were used as indicators of memory performance. The results were compared to those from the Digit Span Test and Corsi Blocks Tapping Test, which evaluate WM and attention capacity. SRT5-2 and SRT5-3 demonstrated significant correlations with the three cognitive function tests (rs ranging from -.705 to -.528). Furthermore, SRT5-2 and SRT5-3 were significantly higher in participants who failed the MoCA-BC screening compared to those who passed. The findings show associations between performance on the iDIN and performance on memory tests. However, further validation and exploration are needed to fully establish its effectiveness and efficacy.
Affiliation(s)
- Shangqiguo Wang
- Unit of Human Communication, Development, and Information Sciences, Faculty of Education, The University of Hong Kong, Hong Kong SAR, China
- Lena L. N. Wong
- Unit of Human Communication, Development, and Information Sciences, Faculty of Education, The University of Hong Kong, Hong Kong SAR, China
13. Shin J, Noh S, Park J, Sung JE. Syntactic complexity differentially affects auditory sentence comprehension performance for individuals with age-related hearing loss. Front Psychol 2023; 14:1264994. [PMID: 37965654 PMCID: PMC10641445 DOI: 10.3389/fpsyg.2023.1264994]
Abstract
Objectives This study examined whether older adults with hearing loss (HL) experience greater difficulties in auditory sentence comprehension compared to those with typical hearing (TH) when the linguistic burdens of syntactic complexity were systematically manipulated by varying either the sentence type (active vs. passive) or sentence length (3- vs. 4-phrases). Methods A total of 22 individuals with HL and 24 controls participated in the study, completing a sentence comprehension test (SCT), standardized memory assessments, and pure-tone audiometry tests. Generalized linear mixed-effects models were employed to compare the effects of sentence type and length on SCT accuracy, Pearson correlation coefficients were computed to explore the relationships between SCT accuracy and other factors, and stepwise regression analyses were employed to identify memory-related predictors of sentence comprehension ability. Results Older adults with HL exhibited poorer performance on passive sentences than on active sentences compared to controls, while sentence length was controlled. Greater difficulty with passive sentences was linked to working memory capacity, which emerged as the most significant predictor of the comprehension of passive sentences among participants with HL. Conclusion Our findings contribute to the understanding of the linguistic-cognitive deficits linked to age-related hearing loss by demonstrating its detrimental impact on the processing of passive sentences. Cognitively healthy adults with hearing difficulties may face challenges in comprehending syntactically more complex sentences that require higher computational demands, particularly in working memory allocation.
Affiliation(s)
- Jee Eun Sung
- Department of Communication Disorders, Ewha Womans University, Seoul, Republic of Korea
14. Khayr R, Karawani H, Banai K. Implicit learning and individual differences in speech recognition: an exploratory study. Front Psychol 2023; 14:1238823. [PMID: 37744578 PMCID: PMC10513179 DOI: 10.3389/fpsyg.2023.1238823]
Abstract
Individual differences in speech recognition in challenging listening environments are pronounced. Studies suggest that implicit learning is one variable that may contribute to this variability. Here, we explored the unique contributions of three indices of implicit learning to individual differences in the recognition of challenging speech. To this end, we assessed three indices of implicit learning (perceptual, statistical, and incidental), three types of challenging speech (natural fast, vocoded, and speech in noise), and cognitive factors associated with speech recognition (vocabulary, working memory, and attention) in a group of 51 young adults. Speech recognition was modeled as a function of the cognitive factors and learning, and the unique contribution of each index of learning was statistically isolated. The three indices of learning were uncorrelated. Whereas all indices of learning had unique contributions to the recognition of natural-fast speech, only statistical learning had a unique contribution to the recognition of speech in noise and vocoded speech. These data suggest that although implicit learning may contribute to the recognition of challenging speech, the contribution may depend on the type of speech challenge and on the learning task.
Affiliation(s)
- Ranin Khayr
- Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Sciences, University of Haifa, Haifa, Israel
15. Wisniewski MG, Zakrzewski AC. Effortful listening produces both enhancement and suppression of alpha in the EEG. Audit Percept Cogn 2023; 6:289-299. [PMID: 38665905 PMCID: PMC11044958 DOI: 10.1080/25742442.2023.2218239]
Abstract
Introduction Adverse listening conditions can drive increased mental effort during listening. Neuromagnetic alpha oscillations (8-13 Hz) may index this listening effort, but inconsistencies regarding the direction of the relationship are abundant. We performed source analyses on high-density EEG data collected during a speech-on-speech listening task to address the possibility that opposing alpha power relationships among alpha-producing brain sources drive this inconsistency. Methods Listeners (N=20) heard two simultaneously presented sentences of the form: "Ready [call sign] go to [color] [number] now." They either reported the color/number pair of a "Baron" call sign sentence (active: high effort), or ignored the stimuli (passive: low effort). Independent component analysis (ICA) was used to segregate temporally distinct sources in the EEG. Results Analysis of independent components (ICs) revealed simultaneous alpha enhancements (e.g., for somatomotor mu ICs) and suppressions (e.g., for left temporal ICs) for different brain sources. The active condition exhibited stronger enhancement for left somatomotor mu rhythm ICs, but stronger suppression for central occipital ICs. Discussion This study shows both alpha enhancement and suppression to be associated with increases in listening effort. Literature inconsistencies could partially relate to some source activities overwhelming others in scalp recordings.
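A sketch of the ICA-then-alpha-power pipeline described above, in MNE-Python style; the file name, component count, and filter settings are illustrative assumptions rather than the authors' pipeline.

```python
# Unmix EEG into temporally distinct sources with ICA, then estimate alpha
# (8-13 Hz) power per independent component. Parameters are illustrative.
import numpy as np
import mne
from mne.preprocessing import ICA
from scipy.signal import welch

raw = mne.io.read_raw_fif("listener01_raw.fif", preload=True)  # hypothetical file
raw.filter(l_freq=1.0, h_freq=40.0)        # ICA behaves better on filtered data

ica = ICA(n_components=20, random_state=0)
ica.fit(raw)                                # segregate temporally distinct sources

sources = ica.get_sources(raw).get_data()   # shape: (n_components, n_samples)
fs = raw.info["sfreq"]
freqs, psd = welch(sources, fs=fs, nperseg=int(2 * fs))
alpha = (freqs >= 8) & (freqs <= 13)
alpha_power = psd[:, alpha].mean(axis=1)    # one alpha estimate per IC

# Contrasting alpha_power between active and passive epochs per IC is what
# can reveal enhancement for some sources and suppression for others.
print(alpha_power.round(3))
```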
Affiliation(s)
- Matthew G. Wisniewski
- Department of Psychological Sciences, Kansas State University, Manhattan, Kansas, USA
16. Windle R, Dillon H, Heinrich A. A review of auditory processing and cognitive change during normal ageing, and the implications for setting hearing aids for older adults. Front Neurol 2023; 14:1122420. [PMID: 37409017 PMCID: PMC10318159 DOI: 10.3389/fneur.2023.1122420]
Abstract
Throughout our adult lives there is a decline in peripheral hearing, auditory processing and elements of cognition that support listening ability. Audiometry provides no information about the status of auditory processing and cognition, and older adults often struggle with complex listening situations, such as speech in noise perception, even if their peripheral hearing appears normal. Hearing aids can address some aspects of peripheral hearing impairment and improve signal-to-noise ratios. However, they cannot directly enhance central processes and may introduce distortion to sound that might act to undermine listening ability. This review paper highlights the need to consider the distortion introduced by hearing aids, specifically when considering normally-ageing older adults. We focus on patients with age-related hearing loss because they represent the vast majority of the population attending audiology clinics. We believe that it is important to recognize that the combination of peripheral and central, auditory and cognitive decline make older adults some of the most complex patients seen in audiology services, so they should not be treated as "standard" despite the high prevalence of age-related hearing loss. We argue that a primary concern should be to avoid hearing aid settings that introduce distortion to speech envelope cues, which is not a new concept. The primary cause of distortion is the speed and range of change to hearing aid amplification (i.e., compression). We argue that slow-acting compression should be considered as a default for some users and that other advanced features should be reconsidered as they may also introduce distortion that some users may not be able to tolerate. We discuss how this can be incorporated into a pragmatic approach to hearing aid fitting that does not require increased loading on audiology services.
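The envelope distortion at issue is governed by how quickly compression gain tracks the signal. The toy single-band compressor below, with adjustable attack and release time constants, illustrates the fast-versus-slow trade-off the review discusses; all parameter values are illustrative, not fitting recommendations.

```python
# Toy single-band compressor: a level estimator with attack/release time
# constants drives gain via a threshold/ratio rule. Fast time constants track
# (and flatten) the speech envelope; slow ones preserve it.
import numpy as np

def compress(x, fs, threshold_db=-30.0, ratio=3.0, attack_s=0.005, release_s=0.100):
    a_att = np.exp(-1.0 / (attack_s * fs))
    a_rel = np.exp(-1.0 / (release_s * fs))
    level = 1e-6
    y = np.empty_like(x)
    for i, sample in enumerate(x):
        mag = abs(sample)
        coeff = a_att if mag > level else a_rel      # attack when level rises
        level = coeff * level + (1.0 - coeff) * mag  # smoothed level estimate
        level_db = 20.0 * np.log10(max(level, 1e-6))
        over = max(0.0, level_db - threshold_db)
        gain_db = -over * (1.0 - 1.0 / ratio)        # reduce gain above threshold
        y[i] = sample * 10.0 ** (gain_db / 20.0)
    return y

fs = 16000
t = np.arange(fs) / fs
# 1 kHz tone gated at 4 Hz as a crude stand-in for a modulated speech envelope.
speech_like = np.sin(2 * np.pi * 1000 * t) * (0.1 + 0.9 * (np.sin(2 * np.pi * 4 * t) > 0))
fast = compress(speech_like, fs, attack_s=0.001, release_s=0.050)
slow = compress(speech_like, fs, attack_s=0.010, release_s=1.000)
# The fast compressor flattens the 4 Hz envelope more than the slow one.
print(np.std(np.abs(fast)), np.std(np.abs(slow)))
```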
Affiliation(s)
- Richard Windle
- Audiology Department, Royal Berkshire NHS Foundation Trust, Reading, United Kingdom
- Harvey Dillon
- NIHR Manchester Biomedical Research Centre, Manchester, United Kingdom
- Department of Linguistics, Macquarie University, North Ryde, NSW, Australia
- Antje Heinrich
- NIHR Manchester Biomedical Research Centre, Manchester, United Kingdom
- Division of Human Communication, Development and Hearing, School of Health Sciences, University of Manchester, Manchester, United Kingdom
17. Großmann W. Listening with an Ageing Brain - a Cognitive Challenge. Laryngorhinootologie 2023; 102:S12-S34. [PMID: 37130528 PMCID: PMC10184676 DOI: 10.1055/a-1973-3038]
Abstract
Hearing impairment has recently been identified as a major modifiable risk factor for cognitive decline in later life and has attracted increasing scientific interest. Sensory and cognitive decline are connected by complex bottom-up and top-down processes, and a sharp distinction between sensation, perception, and cognition is impossible. This review provides a comprehensive overview of the effects of healthy and pathological aging on the auditory and cognitive functions that support speech perception and comprehension, as well as of the specific auditory deficits in the two most common neurodegenerative diseases of old age: Alzheimer disease and Parkinson syndrome. Hypotheses linking hearing loss to cognitive decline are discussed, and current knowledge on the effect of hearing rehabilitation on cognitive functioning is presented. This article thus provides an overview of the complex relationship between hearing and cognition in old age.
Affiliation(s)
- Wilma Großmann
- Universitätsmedizin Rostock, Klinik und Poliklinik für Hals-Nasen-Ohrenheilkunde, Kopf- und Halschirurgie "Otto Körner"
18. Moradi S, Rönnberg J. Perceptual Doping: A Hypothesis on How Early Audiovisual Speech Stimulation Enhances Subsequent Auditory Speech Processing. Brain Sci 2023; 13:601. [PMID: 37190566 DOI: 10.3390/brainsci13040601]
Abstract
Face-to-face communication is one of the most common means of communication in daily life. We benefit from both auditory and visual speech signals that lead to better language understanding. People prefer face-to-face communication when access to auditory speech cues is limited because of background noise in the surrounding environment or in the case of hearing impairment. We demonstrated that an early, short period of exposure to audiovisual speech stimuli facilitates subsequent auditory processing of speech stimuli for correct identification, but early auditory exposure does not. We called this effect “perceptual doping” as an early audiovisual speech stimulation dopes or recalibrates auditory phonological and lexical maps in the mental lexicon in a way that results in better processing of auditory speech signals for correct identification. This short opinion paper provides an overview of perceptual doping and how it differs from similar auditory perceptual aftereffects following exposure to audiovisual speech materials, its underlying cognitive mechanism, and its potential usefulness in the aural rehabilitation of people with hearing difficulties.
Affiliation(s)
- Shahram Moradi
- Department of Health, Social and Welfare Studies, Faculty of Health and Social Sciences, University of South-Eastern Norway, 3918 Porsgrunn, Norway
- Jerker Rönnberg
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Linköping University, 581 83 Linköping, Sweden
19. Moradi S, Engdahl B, Johannessen A, Selbæk G, Aarhus L, Haanes GG. Hearing loss, hearing aid use, and subjective memory complaints: Results of the HUNT study in Norway. Front Neurol 2023; 13:1094270. [PMID: 36712418 PMCID: PMC9875071 DOI: 10.3389/fneur.2022.1094270]
Abstract
Objective This study aimed to explore the association between hearing loss severity, hearing aid use, and subjective memory complaints in a large cross-sectional study in Norway. Methods Data were drawn from the fourth wave of the Trøndelag Health Study (HUNT4 Hearing, 2017-2019). The hearing threshold was defined as the pure-tone average of 0.5, 1, 2, and 4 kHz in the better ear. The participants were divided into five groups: normal hearing or slight/mild/moderate/severe hearing loss. Subjective short-term and long-term memory complaints were measured by the nine-item Meta-Memory Questionnaire (MMQ). The sample included 20,092 individuals (11,675 women, mean age 58.3 years) who completed both the hearing and MMQ tasks. A multivariate analysis of variance (adjusted for the covariates age, sex, education, and health confounders) was used to evaluate the association between hearing status and hearing aid use (in the hearing-impaired groups) and long-term and short-term subjective memory complaints. Results A multivariate analysis of variance, followed by univariate ANOVAs and pairwise comparisons, showed that hearing loss was associated only with more long-term subjective memory complaints and not with short-term subjective memory complaints. In the hearing-impaired groups, the univariate main effect of hearing aid use was only observed for subjective long-term memory complaints and not for subjective short-term memory complaints. Similarly, the univariate interaction of hearing aid use and hearing status was significant for subjective long-term memory complaints and not for subjective short-term memory complaints. Pairwise comparisons, however, revealed no significant differences between hearing loss groups with respect to subjective long-term complaints. Conclusion This cross-sectional study indicates an association between hearing loss and subjective long-term memory complaints but not with subjective short-term memory complaints. In addition, an interaction between hearing status and hearing aid use for subjective long-term memory complaints was observed in the hearing-impaired groups, which calls for future research to examine the effects of hearing aid use on different memory systems.
Affiliation(s)
- Shahram Moradi
- Department of Health, Social and Welfare Studies, Faculty of Health and Social Sciences, University of South-Eastern Norway, Porsgrunn, Norway
- Bo Engdahl
- Department of Physical Health and Ageing, Norwegian Institute of Public Health, Oslo, Norway
- Aud Johannessen
- Department of Health, Social and Welfare Studies, Faculty of Health and Social Sciences, University of South-Eastern Norway, Horten, Norway
- Norwegian National Centre for Ageing and Health, Vestfold Hospital Trust, Tønsberg, Norway
- Geir Selbæk
- Norwegian National Centre for Ageing and Health, Vestfold Hospital Trust, Tønsberg, Norway
- Faculty of Medicine, Institute of Clinical Medicine, University of Oslo, Oslo, Norway
- Geriatric Department, Oslo University Hospital, Oslo, Norway
- Lisa Aarhus
- Department of Occupational Medicine and Epidemiology, National Institute of Occupational Health, Oslo, Norway
- Medical Department, Diakonhjemmet Hospital, Oslo, Norway
- Gro Gade Haanes
- Department of Nursing and Health Sciences, Faculty of Health and Social Sciences, University of South-Eastern Norway, Horten, Norway
20. Homman L, Danielsson H, Rönnberg J. A structural equation mediation model captures the predictions amongst the parameters of the ease of language understanding model. Front Psychol 2023; 14:1015227. [PMID: 36936006 PMCID: PMC10020708 DOI: 10.3389/fpsyg.2023.1015227]
Abstract
Objective The aim of the present study was to assess the validity of the Ease of Language Understanding (ELU) model through a statistical assessment of the relationships among its main parameters: processing speed, phonology, working memory (WM), and dB speech-to-noise ratio (SNR) for a given speech recognition threshold (SRT), in a sample of hearing aid users from the n200 database. Methods Hearing aid users were assessed on several hearing and cognitive tests. Latent structural equation models (SEMs) were applied to investigate the relationships between the main parameters of the ELU model while controlling for age and PTA. Several competing models were assessed. Results Analyses indicated that a mediating SEM was the best fit for the data. The results showed that (i) phonology independently predicted the speech recognition threshold in both easy and adverse listening conditions, (ii) WM was not predictive of dB SNR for a given SRT in the easier listening conditions, and (iii) processing speed was predictive of dB SNR for a given SRT, mediated via WM, in the more adverse conditions. Conclusion The results were in line with the predictions of the ELU model: (i) phonology contributed to dB SNR for a given SRT in all listening conditions, (ii) WM is only invoked when listening conditions are adverse, (iii) better WM capacity aids the understanding of what has been said in adverse listening conditions, and finally (iv) the results highlight the importance and optimization of processing speed in conditions when listening conditions are adverse and WM is activated.
Collapse
Affiliation(s)
- Lina Homman
- Disability Research Division (FuSa), Department of Behavioural Sciences and Learning (IBL), Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- *Correspondence: Lina Homman
| | - Henrik Danielsson
- Disability Research Division (FuSa), Department of Behavioural Sciences and Learning (IBL), Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| | - Jerker Rönnberg
- Disability Research Division (FuSa), Department of Behavioural Sciences and Learning (IBL), Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
| |
Collapse
|
21
|
Baese-Berk MM, Levi SV, Van Engen KJ. Intelligibility as a measure of speech perception: Current approaches, challenges, and recommendations. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2023; 153:68. [PMID: 36732227 DOI: 10.1121/10.0016806] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Accepted: 12/18/2022] [Indexed: 06/18/2023]
Abstract
Intelligibility measures, which assess the number of words or phonemes a listener correctly transcribes or repeats, are commonly used metrics for speech perception research. While these measures have many benefits for researchers, they also come with a number of limitations. By pointing out the strengths and limitations of this approach, including how it fails to capture aspects of perception such as listening effort, this article argues that the role of intelligibility measures must be reconsidered in fields such as linguistics, communication disorders, and psychology. Recommendations for future work in this area are presented.
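As a concrete illustration of the intelligibility metric discussed above, the following is a minimal Python sketch of proportion-words-correct scoring; the position-insensitive matching rule is an assumption, and published protocols differ in how they credit word order and morphological variants.

```python
def word_intelligibility(target: str, response: str) -> float:
    """Proportion of target words present in the listener's response.

    A deliberately simple, position-insensitive scoring rule; each
    response word can be credited at most once.
    """
    target_words = target.lower().split()
    response_words = response.lower().split()
    hits = 0
    for word in target_words:
        if word in response_words:
            response_words.remove(word)  # consume the matched word
            hits += 1
    return hits / len(target_words)

# Hypothetical trial: two of four target words correctly repeated.
print(word_intelligibility("the boy ran home", "a boy ran"))  # 0.5
```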
Collapse
Affiliation(s)
| | - Susannah V Levi
- Department of Communicative Sciences and Disorders, New York University, New York, New York 10012, USA
| | - Kristin J Van Engen
- Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, Missouri 63130, USA
| |
Collapse
|
22
|
Stenbäck V, Marsja E, Hällgren M, Lyxell B, Larsby B. Informational Masking and Listening Effort in Speech Recognition in Noise: The Role of Working Memory Capacity and Inhibitory Control in Older Adults With and Without Hearing Impairment. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2022; 65:4417-4428. [PMID: 36283680 DOI: 10.1044/2022_jslhr-21-00674] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
PURPOSE The study aimed to assess the relationship between (a) speech recognition in noise, mask type, working memory capacity (WMC), and inhibitory control and (b) self-rated listening effort, speech material, and mask type, in older adults with and without hearing impairment. It was of special interest to assess the relationship between WMC, inhibitory control, and speech recognition in noise when informational maskers masked target speech. METHOD A mixed design was used. A group (N = 24) of older individuals with hearing impairment (Mage = 69.7 years) and a group of older normal-hearing adults (Mage = 59.3 years, SD = 6.5) participated in the study. The participants were presented with auditory tests in a sound-attenuated room and with cognitive tests in a quiet office. The participants were asked to rate listening effort after being presented with energetic and informational background maskers in the two speech materials used in this study (i.e., the Hearing In Noise Test and the Hagerman test). Linear mixed-effects models were set up to assess the effects of the two speech materials, energetic and informational maskers, hearing ability, WMC, inhibitory control, and self-rated listening effort. RESULTS Results showed that WMC and inhibitory control were of importance for speech recognition in noise when the maskers were informational, even when controlling for pure-tone average (PTA4) hearing thresholds and age. Concerning listening effort, on the other hand, the results suggest that hearing ability, but not cognitive abilities, is important for self-rated listening effort in speech recognition in noise. CONCLUSIONS Speech-in-noise recognition is more dependent on WMC for older adults in informational maskers than in energetic maskers. Hearing ability is a stronger predictor than cognition for self-rated listening effort. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21357648.
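As an illustration of the analysis family used here, the following is a minimal Python sketch of a linear mixed-effects model with a random intercept per participant, fitted with statsmodels on simulated stand-in data; the variable names and simulated effect sizes are hypothetical, not the study's.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Simulated stand-in: one SNR-at-threshold value per subject x masker type.
rng = np.random.default_rng(1)
n_subj = 48
subject = np.repeat(np.arange(n_subj), 2)
masker = np.tile(["energetic", "informational"], n_subj)
wmc = np.repeat(rng.normal(size=n_subj), 2)          # one WMC score per subject
informational = (masker == "informational")
snr = 2.0 * informational - 0.8 * wmc * informational + rng.normal(size=2 * n_subj)

df = pd.DataFrame({"subject": subject, "masker": masker, "wmc": wmc, "snr": snr})

# Random intercept per participant; the WMC x masker interaction is the
# effect of interest (does WMC matter more under informational masking?).
result = smf.mixedlm("snr ~ wmc * masker", df, groups=df["subject"]).fit()
print(result.summary())
```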
Collapse
Affiliation(s)
- Victoria Stenbäck
- Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Division of Education, Teaching and Learning, Department of Behavioural Sciences and Learning, Linköping University, Sweden
| | - Erik Marsja
- Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
| | - Mathias Hällgren
- Department of Otorhinolaryngology in Östergötland and Department of Biomedical and Clinical Sciences, Linköping University, Sweden
| | - Björn Lyxell
- Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Department of Special Needs Education, University of Oslo, Norway
| | - Birgitta Larsby
- Department of Otorhinolaryngology in Östergötland and Department of Biomedical and Clinical Sciences, Linköping University, Sweden
| |
Collapse
|
23
|
Is Having Hearing Loss Fundamentally Different? Multigroup Structural Equation Modeling of the Effect of Cognitive Functioning on Speech Identification. Ear Hear 2022; 43:1437-1446. [PMID: 34983896 DOI: 10.1097/aud.0000000000001196] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES Previous research suggests that there is a robust relationship between cognitive functioning and speech-in-noise performance for older adults with age-related hearing loss. For normal-hearing adults, on the other hand, the research is not entirely clear. Therefore, the current study aimed to examine the relationship between cognitive functioning, aging, and speech-in-noise performance in a group of older normal-hearing persons and older persons with hearing loss who wear hearing aids. DESIGN We analyzed data from 199 older normal-hearing individuals (mean age = 61.2 years) and 200 older individuals with hearing loss (mean age = 60.9 years) using multigroup structural equation modeling. Four cognitive tasks were used to create a cognitive functioning construct: the reading span task, a visuospatial working memory task, the semantic word-pairs task, and Raven's progressive matrices. Speech-in-noise performance, on the other hand, was measured using Hagerman sentences. The Hagerman sentences were presented via an experimental hearing aid to both the normal-hearing and hearing-impaired groups. Furthermore, the sentences were presented with one of two background noise conditions: the Hagerman original speech-shaped noise or four-talker babble. Each noise condition was also presented with three different hearing aid processing settings: linear processing, fast compression, and noise reduction. RESULTS Cognitive functioning was significantly related to speech-in-noise identification. Moreover, aging had a significant effect on both speech-in-noise performance and cognitive functioning. With regression weights constrained to be equal for the two groups, the final model had the best fit to the data. Importantly, the results showed that the relationship between cognitive functioning and speech-in-noise performance was not different for the two groups. Furthermore, the same pattern was evident for aging: the effects of aging on cognitive functioning and on speech-in-noise performance were not different between groups. CONCLUSION Our findings revealed similar effects of cognitive functioning and aging on speech-in-noise performance in older normal-hearing and aided hearing-impaired listeners. In conclusion, the findings support the Ease of Language Understanding model, as cognitive processes play a critical role in speech-in-noise perception independent of the hearing status of elderly individuals.
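The key model comparison here, a constrained (equal regression weights) versus an unconstrained multigroup model, is typically evaluated with a chi-square difference test. The following is a minimal Python sketch of that test; the fit statistics are invented placeholders, not the study's values.

```python
from scipy.stats import chi2

# Hypothetical fit statistics for a multigroup SEM with regression weights
# free across groups vs. constrained to be equal (all values are made up).
chisq_free, df_free = 412.3, 180
chisq_constrained, df_constrained = 418.1, 186

# A non-significant chi-square difference means the constrained model does
# not fit worse, i.e., the groups can share one set of regression weights.
delta_chisq = chisq_constrained - chisq_free
delta_df = df_constrained - df_free
p = chi2.sf(delta_chisq, delta_df)
print(f"delta chi2 = {delta_chisq:.1f}, delta df = {delta_df}, p = {p:.3f}")
```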
Collapse
|
24
|
Rönnberg J, Signoret C, Andin J, Holmer E. The cognitive hearing science perspective on perceiving, understanding, and remembering language: The ELU model. Front Psychol 2022; 13:967260. [PMID: 36118435 PMCID: PMC9477118 DOI: 10.3389/fpsyg.2022.967260] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2022] [Accepted: 08/08/2022] [Indexed: 11/13/2022] Open
Abstract
The review gives an introductory description of the successive development of data patterns based on comparisons between hearing-impaired and normal-hearing participants' speech understanding skills, which later prompted the formulation of the Ease of Language Understanding (ELU) model. The model builds on the interaction between an input buffer (RAMBPHO, Rapid Automatic Multimodal Binding of PHOnology) and three memory systems: working memory (WM), semantic long-term memory (SLTM), and episodic long-term memory (ELTM). RAMBPHO input may either match or mismatch multimodal SLTM representations. Given a match, lexical access is accomplished rapidly and implicitly within approximately 100-400 ms. Given a mismatch, the prediction is that WM is engaged explicitly to repair the meaning of the input - in interaction with SLTM and ELTM - taking seconds rather than milliseconds. The multimodal and multilevel nature of the representations held in WM and LTM is at the center of the review, these representations being integral parts of the prediction and postdiction components of language understanding. Finally, some hypotheses based on a selective use-disuse mechanism for memory systems are described in relation to mild cognitive impairment and dementia. Alternative speech perception and WM models are evaluated, and recent developments and generalisations, ELU model tests, and boundaries are discussed.
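For readers who find the match/mismatch logic easier to grasp procedurally, the following is a deliberately coarse Python toy of that logic; the ELU model is a verbal/theoretical framework, not an algorithm, so this caricature and its tiny lexicon are purely illustrative.

```python
from dataclasses import dataclass

@dataclass
class LexiconEntry:
    phonology: str
    meaning: str

# A toy SLTM: multimodal phonological representations mapped to meanings.
SLTM = {e.phonology: e for e in [LexiconEntry("kat", "cat"),
                                 LexiconEntry("dog", "dog")]}

def understand(rambpho_input: str):
    entry = SLTM.get(rambpho_input)
    if entry is not None:
        # Match: rapid, implicit lexical access (~100-400 ms in the model).
        return entry.meaning, "implicit"
    # Mismatch: explicit WM-based repair in interaction with SLTM/ELTM,
    # caricatured here as a nearest-neighbour search (slow and effortful).
    best = min(SLTM, key=lambda p: sum(a != b for a, b in zip(p, rambpho_input)))
    return SLTM[best].meaning, "explicit (WM repair)"

print(understand("kat"))  # ('cat', 'implicit')
print(understand("kas"))  # ('cat', 'explicit (WM repair)')
```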
Collapse
Affiliation(s)
- Jerker Rönnberg
- Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
| | | | | | | |
Collapse
|
25
|
Karah H, Karawani H. Auditory Perceptual Exercises in Adults Adapting to the Use of Hearing Aids. Front Psychol 2022; 13:832100. [PMID: 35664209 PMCID: PMC9158114 DOI: 10.3389/fpsyg.2022.832100] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/09/2021] [Accepted: 04/26/2022] [Indexed: 12/30/2022] Open
Abstract
Older adults with age-related hearing loss often use hearing aids (HAs) to compensate. However, certain challenges in speech perception, especially in noise, still exist despite today's HA technology. The current study presents an evaluation of a home-based auditory exercises program that can be used during the adaptation process for HA use. The home-based program was developed at a time when telemedicine became prominent, in part due to the COVID-19 pandemic. The study included 53 older adults with age-related symmetrical sensorineural hearing loss. They were divided into three groups depending on their experience using HAs. Group 1: experienced users (participants who had used bilateral HAs for at least 2 years). Group 2: new users (participants who were fitted with bilateral HAs for the first time). Group 3: non-users. These three groups underwent auditory exercises for 3 weeks. The auditory tasks included auditory detection, auditory discrimination, and auditory identification, as well as comprehension, with basic (syllables) and more complex (sentences) stimuli, presented in quiet and in noisy listening conditions. All participants completed self-assessment questionnaires before and after the auditory exercises program and underwent a cognitive test at the end. Self-assessed improvements in hearing ability were observed across the HA user groups, with significant changes reported by new users. Overall, speech perception in noise was poorer than in quiet. Speech perception accuracy was poorer in the non-user group than in the user groups in all tasks. In sessions where stimuli were presented in quiet, similar performance was observed among new and experienced users. New users performed significantly better than non-users in all speech-in-noise tasks; however, compared to the experienced users, performance differences depended on task difficulty. The findings indicate that HA users, even new users, had better perceptual performance than their peers who did not receive hearing aids.
Collapse
Affiliation(s)
| | - Hanin Karawani
- Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Sciences, University of Haifa, Haifa, Israel
| |
Collapse
|
26
|
Gray R, Sarampalis A, Başkent D, Harding EE. Working-Memory, Alpha-Theta Oscillations and Musical Training in Older Age: Research Perspectives for Speech-on-speech Perception. Front Aging Neurosci 2022; 14:806439. [PMID: 35645774 PMCID: PMC9131017 DOI: 10.3389/fnagi.2022.806439] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2021] [Accepted: 03/24/2022] [Indexed: 12/18/2022] Open
Abstract
During the normal course of aging, perception of speech-on-speech or “cocktail party” speech and use of working memory (WM) abilities change. Musical training, which is a complex activity that integrates multiple sensory modalities and higher-order cognitive functions, reportedly benefits both WM performance and speech-on-speech perception in older adults. This mini-review explores the relationship between musical training, WM and speech-on-speech perception in older age (> 65 years) through the lens of the Ease of Language Understanding (ELU) model. Linking neural-oscillation literature associating speech-on-speech perception and WM with alpha-theta oscillatory activity, we propose that two stages of speech-on-speech processing in the ELU are underpinned by WM-related alpha-theta oscillatory activity, and that effects of musical training on speech-on-speech perception may be reflected in these frequency bands among older adults.
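Since the proposal hinges on alpha (roughly 8-12 Hz) and theta (roughly 4-8 Hz) activity, band-power estimation is the basic building block of such analyses. Below is a minimal Python sketch on a synthetic signal; the sampling rate and band edges are conventional assumptions, not values from the reviewed studies.

```python
import numpy as np
from scipy.signal import welch

# Synthetic "EEG": a 10 Hz (alpha) and a 6 Hz (theta) component plus noise.
fs = 250  # sampling rate in Hz (assumed)
t = np.arange(0, 10, 1 / fs)
eeg = (np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 6 * t)
       + 0.2 * np.random.default_rng(0).normal(size=t.size))

# Welch power spectral density, then integrate within each band.
freqs, psd = welch(eeg, fs=fs, nperseg=2 * fs)

def band_power(lo, hi):
    mask = (freqs >= lo) & (freqs < hi)
    return np.trapz(psd[mask], freqs[mask])

print(f"theta (4-8 Hz):  {band_power(4, 8):.3f}")
print(f"alpha (8-12 Hz): {band_power(8, 12):.3f}")
```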
Collapse
Affiliation(s)
- Ryan Gray
- Department of Experimental Psychology, University of Groningen, Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, Netherlands
- Department of Psychology, Centre for Applied Behavioural Sciences, School of Social Sciences, Heriot-Watt University, Edinburgh, United Kingdom
| | - Anastasios Sarampalis
- Department of Experimental Psychology, University of Groningen, Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, Netherlands
| | - Deniz Başkent
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, Netherlands
- Department of Otorhinolaryngology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
| | - Eleanor E. Harding
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, Netherlands
- Department of Otorhinolaryngology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- *Correspondence: Eleanor E. Harding
| |
Collapse
|
27
|
Holmer E, Schönström K, Andin J. Associations Between Sign Language Skills and Resting-State Functional Connectivity in Deaf Early Signers. Front Psychol 2022; 13:738866. [PMID: 35369269 PMCID: PMC8975249 DOI: 10.3389/fpsyg.2022.738866] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2021] [Accepted: 02/03/2022] [Indexed: 11/13/2022] Open
Abstract
The processing of a language involves a neural language network including temporal, parietal, and frontal cortical regions. This applies to spoken as well as signed languages. Previous research suggests that spoken language proficiency is associated with resting-state functional connectivity (rsFC) between language regions and other regions of the brain. Given the similarities in neural activation for spoken and signed languages, rsFC-behavior associations should also exist for sign language tasks. In this study, we explored the associations between rsFC and two types of linguistic skills in sign language: phonological processing skill and accuracy in elicited sentence production. Fifteen adult deaf early signers were enrolled in a resting-state functional magnetic resonance imaging (fMRI) study. In addition to fMRI data, behavioral tests of sign language phonological processing and sentence reproduction were administered. Using seed-to-voxel connectivity analysis, we investigated associations between behavioral proficiency and rsFC from language-relevant nodes: bilateral inferior frontal gyrus (IFG) and posterior superior temporal gyrus (STG). Results showed that worse sentence processing skill was associated with stronger positive rsFC between the left IFG and left sensorimotor regions. Further, sign language phonological processing skill was associated with positive rsFC from right IFG to middle frontal gyrus/frontal pole, although this association could possibly be explained by domain-general cognitive functions. Our findings suggest a possible connection between rsFC and developmental language outcomes in deaf individuals.
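Conceptually, seed-to-voxel analysis correlates a seed region's resting-state time course with every other voxel's time course. The following is a minimal Python sketch of that core step on synthetic data; real pipelines add nuisance regression, temporal filtering, and spatial preprocessing that are omitted here.

```python
import numpy as np

# Synthetic resting-state data: time points x voxels.
rng = np.random.default_rng(0)
n_timepoints, n_voxels = 240, 5000
voxels = rng.normal(size=(n_timepoints, n_voxels))
seed = voxels[:, :50].mean(axis=1)  # seed = mean signal of a small region

# Pearson r between the seed and every voxel, in one vectorized pass.
seed_z = (seed - seed.mean()) / seed.std()
vox_z = (voxels - voxels.mean(axis=0)) / voxels.std(axis=0)
r = vox_z.T @ seed_z / n_timepoints

# Fisher z-transform, the usual step before group-level statistics.
z = np.arctanh(np.clip(r, -0.999999, 0.999999))
print(r.shape, z[:3])
```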
Collapse
Affiliation(s)
- Emil Holmer
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
- Center for Medical Image Science and Visualization, Linköping, Sweden
- *Correspondence: Emil Holmer
| | | | - Josefine Andin
- Linnaeus Centre HEAD, Swedish Institute for Disability Research, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
| |
Collapse
|
28
|
Vickery B, Fogerty D, Dubno JR. Phonological and semantic similarity of misperceived words in babble: Effects of sentence context, age, and hearing loss. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2022; 151:650. [PMID: 35105039 PMCID: PMC8807001 DOI: 10.1121/10.0009367] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/26/2021] [Revised: 01/03/2022] [Accepted: 01/08/2022] [Indexed: 05/29/2023]
Abstract
This study investigated how age and hearing loss influence the misperceptions made when listening to sentences in babble. Open-set responses to final words in sentences with low and high context were analyzed for younger adults with normal hearing and older adults with normal or impaired hearing. All groups performed similarly in overall accuracy but differed in error type. Misperceptions for all groups were analyzed according to phonological and semantic properties. Comparisons between groups indicated that misperceptions for older adults were more influenced by phonological factors. Furthermore, older adults with hearing loss omitted more responses. Overall, across all groups, results suggest that phonological confusions best explain misperceptions in low-context sentences. In high-context sentences, the meaningful sentence context appears to provide predictive cues that reduce misperceptions. When misperceptions do occur, responses tend to have greater semantic similarity and lesser phonological similarity to the target, compared to low-context sentences. In this way, semantic similarity may index a postdictive process by which ambiguities due to phonological confusions are resolved to conform to the semantic context of the sentence. These patterns demonstrate that context, age, and hearing loss affect the misperceptions, and hence the potential sentence interpretations, made when listening to sentences in babble.
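Phonological similarity between a target and a misperception is commonly quantified with normalized edit distance over phoneme sequences (semantic similarity typically uses embedding-based distances instead). The following is a minimal Python sketch of the phonological side; the toy transcriptions are hypothetical.

```python
def levenshtein(a, b):
    """Edit distance between two phoneme sequences (or strings)."""
    prev = list(range(len(b) + 1))
    for i, ca in enumerate(a, 1):
        cur = [i]
        for j, cb in enumerate(b, 1):
            cur.append(min(prev[j] + 1,                  # deletion
                           cur[j - 1] + 1,               # insertion
                           prev[j - 1] + (ca != cb)))    # substitution
        prev = cur
    return prev[-1]

def phon_similarity(target, response):
    """Normalized similarity in [0, 1]; 1 means identical sequences."""
    d = levenshtein(target, response)
    return 1 - d / max(len(target), len(response))

# Hypothetical misperception: target /kaet/ heard as /kaep/.
print(phon_similarity("kaet", "kaep"))  # 0.75
```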
Collapse
Affiliation(s)
- Blythe Vickery
- Department of Communication Sciences and Disorders, University of South Carolina, Columbia, South Carolina 29208, USA
| | - Daniel Fogerty
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, Champaign, Illinois 61801, USA
| | - Judy R Dubno
- Department of Otolaryngology-Head and Neck Surgery, Medical University of South Carolina, Charleston, South Carolina 29425, USA
| |
Collapse
|
29
|
Nagaraj NK, Yang J, Robinson TL, Magimairaj BM. Auditory closure with visual cues: Relationship with working memory and semantic memory. JASA EXPRESS LETTERS 2021; 1:095202. [PMID: 36154207 DOI: 10.1121/10.0006297] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
The role of working memory (WM) and long-term lexical-semantic memory (LTM) in the perception of interrupted speech, with and without visual cues, was studied in 29 native English speakers. Perceptual stimuli were periodically interrupted sentences filled with speech noise. The memory measures included an LTM semantic fluency task, a verbal WM task, and visuo-spatial WM tasks. Whereas perceptual performance in the audio-only condition demonstrated a significant positive association with listeners' semantic fluency, perception in audio-video mode did not. These results imply that when listening to distorted speech without visual cues, listeners rely on lexical-semantic retrieval from LTM to restore missing speech information.
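To make the stimulus manipulation concrete, the following is a minimal Python sketch of periodically interrupting a signal and filling the silent intervals with noise; the interruption rate, duty cycle, and noise level are assumptions, not the study's parameters.

```python
import numpy as np

fs = 16000                                     # sample rate (assumed)
t = np.arange(0, 2.0, 1 / fs)
speech = np.sin(2 * np.pi * 220 * t)           # stand-in for a sentence
noise = np.random.default_rng(0).normal(scale=0.3, size=t.size)

rate_hz, duty = 2.0, 0.5                       # 2 Hz gating, 50% speech on
gate = np.mod(t * rate_hz, 1.0) < duty         # True = speech interval

# Periodic interruption: gaps in the speech are filled with noise.
interrupted = np.where(gate, speech, noise)
print(interrupted.shape)
```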
Collapse
Affiliation(s)
- Naveen K Nagaraj
- Cognitive Hearing Science Lab, Department of Communicative Disorders and Deaf Education, Utah State University, Logan, Utah 84322, USA
| | - Jing Yang
- Department of Communication Sciences and Disorders, University of Wisconsin-Milwaukee, P.O. Box 413, Milwaukee, Wisconsin 53201, USA
| | - Tanner L Robinson
- Cognitive Hearing Science Lab, Department of Communicative Disorders and Deaf Education, Utah State University, Logan, Utah 84322, USA
| | - Beula M Magimairaj
- Cognitive Hearing Science Lab, Department of Communicative Disorders and Deaf Education, Utah State University, Logan, Utah 84322, USA
| |
Collapse
|
30
|
Brännström KJ, Rudner M, Carlie J, Sahlén B, Gulz A, Andersson K, Johansson R. Listening effort and fatigue in native and non-native primary school children. J Exp Child Psychol 2021; 210:105203. [PMID: 34118494 DOI: 10.1016/j.jecp.2021.105203] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/25/2021] [Revised: 04/15/2021] [Accepted: 05/14/2021] [Indexed: 11/26/2022]
Abstract
Background noise makes listening effortful and may lead to fatigue. This may compromise classroom learning, especially for children with a non-native background. In the current study, we used pupillometry to investigate listening effort and fatigue during listening comprehension under typical (0 dB signal-to-noise ratio [SNR]) and favorable (+10 dB SNR) listening conditions in 63 Swedish primary school children (7-9 years of age) performing a narrative speech-picture verification task. Our sample comprised both native (n = 25) and non-native (n = 38) speakers of Swedish. Results revealed greater pupil dilation, indicating more listening effort, in the typical listening condition compared with the favorable listening condition, and it was primarily the non-native speakers who contributed to this effect (and who also had lower performance accuracy than the native speakers). Furthermore, the native speakers had greater pupil dilation during successful trials, whereas the non-native speakers showed the greatest pupil dilation during unsuccessful trials, especially in the typical listening condition. This set of results indicates that whereas native speakers can apply listening effort to good effect, non-native speakers may have reached their effort ceiling, resulting in poorer listening comprehension. Finally, we found that baseline pupil size decreased over trials, potentially indicating accumulating listening-related fatigue, and this effect was greater in the typical listening condition compared with the favorable listening condition. Collectively, these results provide novel insight into the underlying dynamics of listening effort, fatigue, and listening comprehension in typical classroom conditions compared with favorable classroom conditions, and they demonstrate for the first time how sensitive this interplay is to language experience.
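Pupillometric listening-effort analyses of this kind usually reduce to two trial-wise quantities: a pre-stimulus baseline and a baseline-corrected dilation. The following is a minimal Python sketch on synthetic data; the epoch boundaries and sampling rate are assumptions, not the study's values.

```python
import numpy as np

fs = 60                                    # pupil samples per second (assumed)
n_trials, trial_len = 20, 8 * fs           # 8-s epochs (assumed)
rng = np.random.default_rng(0)
pupil = rng.normal(4.0, 0.1, size=(n_trials, trial_len))  # mm, synthetic

# Baseline = mean of the first second; dilation = mean of a later window
# minus that baseline (window placement is an assumption).
baseline = pupil[:, : 1 * fs].mean(axis=1)
dilation = pupil[:, 2 * fs : 6 * fs].mean(axis=1) - baseline

# A downward drift in baseline across trials is the fatigue-related measure.
slope = np.polyfit(np.arange(n_trials), baseline, 1)[0]
print(f"mean dilation = {dilation.mean():.3f} mm, baseline slope = {slope:.4f}")
```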
Collapse
Affiliation(s)
- K Jonas Brännström
- Logopedics, Phoniatrics and Audiology, Department of Clinical Sciences in Lund, Lund University, 221 85 Lund, Sweden
| | - Mary Rudner
- Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Linköping University, 581 83 Linköping, Sweden
| | - Johanna Carlie
- Logopedics, Phoniatrics and Audiology, Department of Clinical Sciences in Lund, Lund University, 221 85 Lund, Sweden
| | - Birgitta Sahlén
- Logopedics, Phoniatrics and Audiology, Department of Clinical Sciences in Lund, Lund University, 221 85 Lund, Sweden
| | - Agneta Gulz
- Division of Cognitive Science, Lund University, 221 00 Lund, Sweden
| | - Ketty Andersson
- Logopedics, Phoniatrics and Audiology, Department of Clinical Sciences in Lund, Lund University, 221 85 Lund, Sweden
| | - Roger Johansson
- Department of Psychology, Lund University, 221 00 Lund, Sweden.
| |
Collapse
|
31
|
Mesik J, Ray L, Wojtczak M. Effects of Age on Cortical Tracking of Word-Level Features of Continuous Competing Speech. Front Neurosci 2021; 15:635126. [PMID: 33867920 PMCID: PMC8047075 DOI: 10.3389/fnins.2021.635126] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/29/2020] [Accepted: 03/12/2021] [Indexed: 01/17/2023] Open
Abstract
Speech-in-noise comprehension difficulties are common among the elderly population, yet traditional objective measures of speech perception are largely insensitive to this deficit, particularly in the absence of clinical hearing loss. In recent years, a growing body of research in young normal-hearing adults has demonstrated that high-level features related to speech semantics and lexical predictability elicit strong centro-parietal negativity in the EEG signal around 400 ms following the word onset. Here we investigate effects of age on cortical tracking of these word-level features within a two-talker speech mixture, and their relationship with self-reported difficulties with speech-in-noise understanding. While undergoing EEG recordings, younger and older adult participants listened to a continuous narrative story in the presence of a distractor story. We then utilized forward encoding models to estimate cortical tracking of four speech features: (1) word onsets, (2) "semantic" dissimilarity of each word relative to the preceding context, (3) lexical surprisal for each word, and (4) overall word audibility. Our results revealed robust tracking of all features for attended speech, with surprisal and word audibility showing significantly stronger contributions to neural activity than dissimilarity. Additionally, older adults exhibited significantly stronger tracking of word-level features than younger adults, especially over frontal electrode sites, potentially reflecting increased listening effort. Finally, neuro-behavioral analyses revealed trends of a negative relationship between subjective speech-in-noise perception difficulties and the model goodness-of-fit for attended speech, as well as a positive relationship between task performance and the goodness-of-fit, indicating behavioral relevance of these measures. Together, our results demonstrate the utility of modeling cortical responses to multi-talker speech using complex, word-level features and their potential for studying changes in speech processing due to aging and hearing loss.
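The forward encoding approach used here regresses the EEG on time-lagged stimulus features, yielding a temporal response function (TRF) per feature. Below is a minimal ridge-regression Python sketch on synthetic data; the dimensions, lag range, and regularization value are illustrative assumptions, and the wraparound introduced by np.roll is ignored for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 64
n_samples, n_features = 60 * fs, 4        # 1 min of data, 4 word-level features
stim = rng.normal(size=(n_samples, n_features))
eeg = rng.normal(size=n_samples)          # one electrode, synthetic

# Build a design matrix of time-lagged copies of the features (0-600 ms).
lags = np.arange(0, int(0.6 * fs))
X = np.hstack([np.roll(stim, lag, axis=0) for lag in lags])

# Ridge regression: w = (X'X + lambda*I)^-1 X'y.
lam = 1e2
w = np.linalg.solve(X.T @ X + lam * np.eye(X.shape[1]), X.T @ eeg)

# Goodness-of-fit: correlation between predicted and recorded EEG.
pred = X @ w
r = np.corrcoef(pred, eeg)[0, 1]
print(f"TRF shape: {w.reshape(len(lags), n_features).shape}, r = {r:.3f}")
```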
Collapse
Affiliation(s)
- Juraj Mesik
- Department of Psychology, University of Minnesota, Minneapolis, MN, United States
| | | | | |
Collapse
|
32
|
Eddins DA. Select Papers From the 8th Aging and Speech Communication Conference. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:299-301. [PMID: 33561358 DOI: 10.1044/2021_jslhr-21-00031] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Purpose The purpose of this introduction is to briefly describe the conference, Aging and Speech Communication: An International and Interdisciplinary Research Conference, and to introduce the articles featured in this forum, which represent the scope of the biennial conference.
Collapse
Affiliation(s)
- David A Eddins
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL
| |
Collapse
|