1. Fernandez LB, Pickering MJ, Naylor G, Hadley LV. Uses of Linguistic Context in Speech Listening: Does Acquired Hearing Loss Lead to Reduced Engagement of Prediction? Ear Hear 2024; 45:1107-1114. PMID: 38880953; PMCID: PMC11325976; DOI: 10.1097/aud.0000000000001515.
Abstract
Research investigating the complex interplay of cognitive mechanisms involved in speech listening for people with hearing loss has been gaining prominence. In particular, linguistic context allows the use of several cognitive mechanisms that are not well distinguished in hearing science, namely those relating to "postdiction", "integration", and "prediction". We offer the perspective that an unacknowledged impact of hearing loss is the differential use of predictive mechanisms relative to age-matched individuals with normal hearing. As evidence, we first review how degraded auditory input leads to reduced prediction in people with normal hearing, then consider the literature exploring context use in people with acquired postlingual hearing loss. We argue that no research on hearing loss has directly assessed prediction. Because current interventions for hearing loss do not fully alleviate difficulty in conversation, and avoidance of spoken social interaction may be a mediator between hearing loss and cognitive decline, this perspective could lead to greater understanding of the cognitive effects of hearing loss and provide insight regarding new targets for intervention.
Affiliation(s)
- Leigh B. Fernandez
- Department of Social Sciences, Psycholinguistics Group, University of Kaiserslautern-Landau, Kaiserslautern, Germany
- Martin J. Pickering
- Department of Psychology, University of Edinburgh, Edinburgh, United Kingdom
- Graham Naylor
- Hearing Sciences—Scottish Section, School of Medicine, University of Nottingham, Glasgow, United Kingdom
- Lauren V. Hadley
- Hearing Sciences—Scottish Section, School of Medicine, University of Nottingham, Glasgow, United Kingdom
2. Winn MB. The Effort of Repairing a Misperceived Word Can Impair Perception of Following Words, Especially for Listeners With Cochlear Implants. Ear Hear 2024:00003446-990000000-00300. PMID: 38886880; DOI: 10.1097/aud.0000000000001537.
Abstract
OBJECTIVES In clinical and laboratory settings, speech recognition is typically assessed in a way that cannot distinguish accurate auditory perception from misperception that was mentally repaired or inferred from context. Previous work showed that the process of repairing misperceptions elicits greater listening effort, and that this elevated effort lingers well after the sentence is heard. That result suggests that cognitive repair strategies might appear successful when testing a single utterance but fail for everyday continuous conversational speech. The present study tested the hypothesis that the effort of repairing misperceptions carries over to interfere with perception of the words that follow the sentence. DESIGN Stimuli were open-set coherent sentences presented intact or with a word early in the sentence replaced by noise, forcing the listener to use later context to mentally repair the missing word. Sentences were immediately followed by digit triplets, which served to probe effort carried over from the sentence. Control conditions allowed comparison with intact sentences that did not demand mental repair, as well as with listening conditions that removed the need to attend to the post-sentence stimuli or removed the post-sentence digits altogether. Intelligibility scores for the sentences and digits were accompanied by time-series measurements of pupil dilation to assess cognitive load during the task, as well as subjective ratings of effort. Participants included adults with cochlear implants (CIs), along with an age-matched group and a younger group of listeners with typical hearing for comparison. RESULTS For the CI group, needing to repair a missing word during a sentence resulted in more errors on the digits after the sentence, especially when the repair process did not yield a coherent, sensible perception. Sentences that needed repair also contained more errors on the words that were unmasked. All groups showed a substantial increase in pupil dilation when sentences required repair, even when the repair was successful. Younger typical-hearing listeners showed clear differences in moment-to-moment allocation of effort across the different conditions, while the other groups did not. CONCLUSIONS For CI listeners, the effort of repairing misperceptions in a sentence can last long enough to interfere with words that follow the sentence. This pattern could pose a serious problem for everyday communication but would go overlooked in typical testing with single utterances, where a listener has a chance to repair misperceptions before responding. Carryover effort was not predictable from basic intelligibility scores, but can be revealed in behavioral data when sentences are immediately followed by extra probe words such as digits.
Affiliation(s)
- Matthew B Winn
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, Minnesota, USA
3. Mertel K, Dimitrijevic A, Thaut M. Can Music Enhance Working Memory and Speech in Noise Perception in Cochlear Implant Users? Design Protocol for a Randomized Controlled Behavioral and Electrophysiological Study. Audiol Res 2024; 14:611-624. PMID: 39051196; PMCID: PMC11270222; DOI: 10.3390/audiolres14040052. Open access.
Abstract
BACKGROUND A cochlear implant (CI) enables deaf people to understand speech, but due to technical restrictions, users face great limitations in noisy conditions. Music training has been shown to augment shared auditory and cognitive neural networks for processing speech and music and to improve auditory-motor coupling, which benefits speech perception in noisy listening conditions. These are promising prerequisites for studying multi-modal neurologic music training (NMT) for speech-in-noise (SIN) perception in adult CI users. Furthermore, a better understanding of the neurophysiological correlates of performing working memory (WM) and SIN tasks after multi-modal music training may provide clinicians with a better understanding of optimal rehabilitation. METHODS Within 3 months, 81 postlingually deafened adult CI recipients will undergo electrophysiological recordings and a four-week neurologic music therapy multi-modal training, with random assignment to one of three training foci (pitch, rhythm, or timbre). Pre- and post-tests will analyze behavioral outcomes and apply a novel electrophysiological measurement approach that includes neural tracking of speech and alpha oscillation modulations during the sentence-final-word-identification-and-recall test (SWIR-EEG). EXPECTED OUTCOME Short-term multi-modal music training will enhance WM and SIN performance in postlingually deafened adult CI recipients and will be reflected in greater neural tracking and alpha oscillation modulations in prefrontal areas. Prospectively, the outcomes could contribute to understanding the relationship between cognitive functioning and SIN beyond the technical deficits of the CI, and could support targeted clinical application of music training to improve SIN perception and quality of life for postlingually deafened adult CI users.
Affiliation(s)
- Kathrin Mertel
- Music and Health Research Collaboratory (MaHRC), University of Toronto, Toronto, ON M5S 1C5, Canada
- Andrew Dimitrijevic
- Sunnybrook Cochlear Implant Program, Sunnybrook Hospital, Toronto, ON M4N 3M5, Canada
- Michael Thaut
- Music and Health Research Collaboratory (MaHRC), University of Toronto, Toronto, ON M5S 1C5, Canada
4. Smith ML, Winn MB, Fitzgerald MB. A Large-Scale Study of the Relationship Between Degree and Type of Hearing Loss and Recognition of Speech in Quiet and Noise. Ear Hear 2024; 45:915-928. PMID: 38389129; PMCID: PMC11175802; DOI: 10.1097/aud.0000000000001484.
Abstract
OBJECTIVES Understanding speech in noise (SIN) is the dominant complaint of individuals with hearing loss. For decades, the default test of speech perception in routine audiologic assessment has been monosyllabic word recognition in quiet (WRQ), which does not directly address patient concerns, leading some to advocate that measures of SIN should be integrated into routine practice. However, very little is known about how SIN abilities are affected by different types of hearing loss. Here, we examine performance on clinical measures of WRQ and SIN in a large patient base encompassing a variety of hearing loss types, including conductive (CHL), mixed (MHL), and sensorineural (SNHL) losses. DESIGN In a retrospective study, we examined data from 5593 patients (51% female) who underwent audiometric assessment at the Stanford Ear Institute. All individuals completed pure-tone audiometry, monaural WRQ, and monaural QuickSIN. Patient ages ranged from 18 to 104 years (average = 57). The average age in years was 51.1 for normal hearing (NH), 48.5 for CHL, 64.2 for MHL, and 68.5 for SNHL. Generalized linear mixed-effect models and quantile regression were used to determine the relationship between hearing loss type and severity and the different speech-recognition outcome measures. RESULTS Patients with CHL performed similarly to patients with normal hearing on both WRQ and QuickSIN, regardless of hearing loss severity. In patients with MHL or SNHL, WRQ scores remained largely excellent with increasing hearing loss until the loss was moderately severe or worse. In contrast, QuickSIN signal to noise ratio (SNR) losses showed an orderly, systematic decrease as the degree of hearing loss became more severe. This effect scaled with hearing loss type: threshold-QuickSIN relationships were absent for CHL, stronger for MHL, and strongest for SNHL. However, the variability in these data suggests that only 57% of the variance in WRQ scores, and 50% of the variance in QuickSIN SNR losses, could be accounted for by the audiometric thresholds. Patients who would not be differentiated by WRQ scores may thus be differentiable by SIN scores. CONCLUSIONS In this data set, conductive hearing loss had little effect on WRQ scores or QuickSIN SNR losses. For patients with MHL or SNHL, however, speech perception abilities decreased as the severity of the hearing loss increased, and QuickSIN SNR losses revealed deficits in performance at degrees of hearing loss that still yielded largely excellent WRQ scores. The considerable variability in the data suggests that even after classifying patients according to their type of hearing loss, hearing thresholds account for only a portion of the variance in speech perception abilities, particularly in noise. These results are consistent with the idea that variables such as cochlear health and aging add explanatory power over audibility alone.
Affiliation(s)
- Michael L Smith
- Department of Otolaryngology-Head and Neck Surgery, Stanford Ear Institute, Stanford University, Palo Alto, California, USA
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, Minnesota, USA
- Matthew B Winn
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, Minnesota, USA
- Matthew B Fitzgerald
- Department of Otolaryngology-Head and Neck Surgery, Stanford Ear Institute, Stanford University, Palo Alto, California, USA
5. Khayr R, Khnifes R, Shpak T, Banai K. Task-Specific Rapid Auditory Perceptual Learning in Adult Cochlear Implant Recipients: What Could It Mean for Speech Recognition? Ear Hear 2024:00003446-990000000-00285. PMID: 38829780; DOI: 10.1097/aud.0000000000001523.
Abstract
OBJECTIVES Speech recognition in cochlear implant (CI) recipients is quite variable, particularly in challenging listening conditions. Demographic, audiological, and cognitive factors explain some, but not all, of this variance. The literature suggests that rapid auditory perceptual learning explains unique variance in speech recognition in listeners with normal hearing and those with hearing loss. The present study focuses on the early adaptation phase of task-specific rapid auditory perceptual learning. It investigates whether adult CI recipients exhibit this learning and, if so, whether it accounts for portions of the variance in their recognition of fast speech and speech in noise. DESIGN Thirty-six adult CI recipients (ages = 35 to 77, M = 55) completed a battery of general speech recognition tests (sentences in speech-shaped noise, four-talker babble noise, and natural-fast speech), cognitive measures (vocabulary, working memory, attention, and verbal processing speed), and a rapid auditory perceptual learning task with time-compressed speech. Accuracy in the general speech recognition tasks was modeled with a series of generalized mixed models that accounted for demographic, audiological, and cognitive factors before accounting for the contribution of task-specific rapid auditory perceptual learning of time-compressed speech. RESULTS Most CI recipients exhibited early task-specific rapid auditory perceptual learning of time-compressed speech within the first 20 sentences. This early learning made a unique contribution to the recognition of natural-fast speech in quiet and speech in noise, although the contribution to natural-fast speech may reflect the rapid learning that occurred in that task. When accounting for demographic and cognitive characteristics, an increase of 1 SD in the early learning rate was associated with a ~52% increase in the odds of correctly recognizing natural-fast speech in quiet, and a ~19% to 28% increase in the odds of correctly recognizing the different types of speech in noise. Age, vocabulary, attention, and verbal processing speed also made unique contributions to general speech recognition, although their contributions varied between the different tests. CONCLUSIONS Consistent with previous findings in other populations, early task-specific rapid auditory perceptual learning also accounts for some of the individual differences among CI recipients in the recognition of speech in noise and natural-fast speech in quiet. Thus, across populations, the early rapid adaptation phase of task-specific rapid auditory perceptual learning might serve as a skill that supports speech recognition in various adverse conditions. In CI users, the ability to rapidly adapt to ongoing acoustical challenges may be one of the factors associated with good CI outcomes. Overall, CI recipients with higher cognitive resources and faster rapid learning rates had better speech recognition.
Affiliation(s)
- Ranin Khayr
- Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Studies, University of Haifa, Haifa, Israel
- Department of Otolaryngology-Head and Neck Surgery, Bnai-Zion Medical Center, Technion-Bruce Rappaport Faculty of Medicine, Haifa, Israel
- Riyad Khnifes
- Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Studies, University of Haifa, Haifa, Israel
- Department of Otolaryngology-Head and Neck Surgery, Bnai-Zion Medical Center, Technion-Bruce Rappaport Faculty of Medicine, Haifa, Israel
- Talma Shpak
- Department of Otolaryngology-Head and Neck Surgery, Bnai-Zion Medical Center, Technion-Bruce Rappaport Faculty of Medicine, Haifa, Israel
- Karen Banai
- Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Studies, University of Haifa, Haifa, Israel
6. Nagaraj NK. Hearing Loss and Cognitive Decline in the Aging Population: Emerging Perspectives in Audiology. Audiol Res 2024; 14:479-492. PMID: 38920961; PMCID: PMC11200945; DOI: 10.3390/audiolres14030040. Open access.
Abstract
In this perspective article, the author explores the connections between hearing loss, central auditory processing, and cognitive decline, offering insights into the complex dynamics at play. Drawing upon a range of studies, the relationship between age-related central auditory processing disorders and Alzheimer's disease is discussed, with the aim of enhancing our understanding of these interconnected conditions. Highlighting the evolving significance of audiologists in the dual management of cognitive health and hearing impairments, the author focuses on their role in identifying early signs of cognitive impairment and evaluates various cognitive screening tools used in this context. The discussion extends to adaptations of hearing assessments for older adults, especially those diagnosed with dementia, and highlights the significance of objective auditory electrophysiological tests. These tests are presented as vital for assessing the influence of aging and Alzheimer's disease on auditory processing capabilities and for signaling cognitive dysfunction. The article underscores the critical role of audiologists in addressing the challenges faced by the aging population. The perspective calls for further research to improve diagnostic and therapeutic strategies in audiology, and emphasizes the need for a multidisciplinary approach in tackling the nexus of hearing loss, auditory processing, and cognitive decline.
Affiliation(s)
- Naveen K Nagaraj
- Cognitive Hearing Science Lab, Communicative Disorders & Deaf Education, Utah State University, Logan, UT 84322, USA
7. Dor YI, Algom D, Shakuf V, Ben-David BM. Age-related differences in processing of emotions in speech disappear with babble noise in the background. Cogn Emot 2024:1-10. PMID: 38764186; DOI: 10.1080/02699931.2024.2351960.
Abstract
Older adults process emotional speech differently than young adults, relying less on prosody (tone) relative to semantics (words). This study aimed to elucidate the mechanisms underlying these age-related differences via an emotional speech-in-noise test. A sample of 51 young and 47 older adults rated spoken sentences with emotional content on both prosody and semantics, presented against a background of wideband speech-spectrum noise (sensory interference) or multi-talker babble (sensory/cognitive interference). The presence of wideband noise eliminated age-related differences in semantics but not in prosody when processing emotional speech. Conversely, the presence of babble eliminated age-related differences across all measures. The results suggest that both sensory and cognitive-linguistic factors contribute to age-related changes in emotional speech processing. Because real-world conditions typically involve noisy backgrounds, our results highlight the importance of testing under such conditions.
Affiliation(s)
- Yehuda I Dor
- School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Daniel Algom
- School of Psychological Sciences, Tel Aviv University, Tel Aviv, Israel
- Department of Communication Disorders, Achva Academic College, Arugot, Israel
- Vered Shakuf
- Department of Communication Disorders, Achva Academic College, Arugot, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- KITE, Toronto Rehabilitation Institute, University Health Networks (UHN), Toronto, ON, Canada
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
8. Wang S, Wong LLN. Development of the Mandarin Digit-in-Noise Test and Examination of the Effect of the Number of Digits Used in the Test. Ear Hear 2024; 45:572-582. PMID: 37990396; DOI: 10.1097/aud.0000000000001447.
Abstract
OBJECTIVES The study aimed to develop and validate the Mandarin digit-in-noise (DIN) test using four digit sequences (two-, three-, four-, and five-digit). Test-retest reliability and criterion validity were evaluated, and the effect of the number of digits on the results was examined. The research might lead to more informed choice of DIN tests for populations with specific cognitive needs, such as memory impairment. DESIGN The International Collegium of Rehabilitative Audiology guideline for developing the DIN was adapted to create test materials. The test-retest reliability and psychometric function of each digit sequence were determined among young normal-hearing adults. The criterion validity of each digit sequence was determined by comparing the measured performance of older adult hearing aid users with that obtained from two well-established sentence-in-noise tests: the Mandarin hearing-in-noise test and the Mandarin Chinese matrix test. The relation between the speech reception thresholds (SRTs) of each digit sequence of the DIN test and working memory capacity, measured using the digit span test and the reading span test, was explored among older adult hearing aid users. Together, the study sample consisted of 54 young normal-hearing adults and 56 older adult hearing aid users. RESULTS The slopes associated with the two-, three-, four-, and five-digit DIN tests were 16.58, 18.79, 20.42, and 21.09 %/dB, respectively, and the mean SRTs were -11.11, -10.99, -10.56, and -10.02 dB SNR, respectively. Test-retest SRTs did not differ by more than 0.74 dB across all digit sequences, suggesting good test-retest reliability. When data from all participants were considered, Spearman rank-order correlation coefficients between SRTs obtained using the DIN across the four digit sequences and the two sentence-in-noise tests were uniformly high (rs = 0.9). Results from the digit span test and reading span test correlated significantly with the results of the five-digit sequence (rs = -0.37 and -0.42, respectively) but not with the results of the two-, three-, and four-digit sequences among older hearing aid users. CONCLUSIONS While the three-digit sequence was found to be appropriate for clinical assessment of auditory perception, the two-digit sequence could be used for hearing screening. The five-digit sequence could be difficult for older hearing aid users, and because its SRT was related to working memory capacity, its use in the evaluation of speech perception should be investigated further. The Mandarin DIN test was found to be reliable, and its results are in line with SRTs obtained using standardized sentence tests, suggesting good criterion validity.
Affiliation(s)
- Shangqiguo Wang
- Faculty of Education, The University of Hong Kong, Pokfulam, Hong Kong, China
9. Shen J, Sun J, Zhang Z, Sun B, Li H, Liu Y. The Effect of Hearing Loss and Working Memory Capacity on Context Use and Reliance on Context in Older Adults. Ear Hear 2024; 45:787-800. PMID: 38273447; DOI: 10.1097/aud.0000000000001470.
Abstract
OBJECTIVES Older adults often complain of difficulty communicating in noisy environments. Contextual information is considered an important cue for identifying everyday speech. To date, it has not been clear exactly how context use (CU) and reliance on context in older adults are affected by hearing status and cognitive function. The present study examined the effects of semantic context on speech recognition, recall, perceived listening effort (LE), and noise tolerance, and further explored the impacts of hearing loss and working memory capacity on CU and reliance on context among older adults. DESIGN Fifty older adults with normal hearing and 56 older adults with mild-to-moderate hearing loss, aged 60 to 95 years, participated in this study. A median split of backward digit span further classified the participants into high working memory (HWM) and low working memory (LWM) capacity groups. Each participant performed high- and low-context Repeat and Recall tests, including a sentence repeat and delayed recall task, subjective assessments of LE, and tolerable time under seven signal to noise ratios (SNRs). CU was calculated as the difference between high- and low-context sentences for each outcome measure. The proportion of context use (PCU) in high-context performance was taken as the reliance on context, reflecting the degree to which participants relied on context when repeating and recalling high-context sentences. RESULTS Semantic context improved speech recognition and delayed recall, reduced perceived LE, and prolonged noise tolerance in older adults with and without hearing loss. In addition, the adverse effects of hearing loss on repeat-task performance were more pronounced in low context than in high context, whereas the effects on recall tasks and noise tolerance time were more significant in high context than in low context. Compared with other tasks, CU and PCU in repeat tasks were more affected by hearing status and working memory capacity. In the repeat phase, hearing loss increased older adults' reliance on context in relatively challenging listening environments: at SNRs of 0 and -5 dB, the PCU (repeat) of the hearing loss group was significantly greater than that of the normal-hearing group, whereas there was no significant difference between the two hearing groups at the remaining SNRs. In addition, older adults with LWM had significantly greater CU and PCU in repeat tasks than those with HWM, especially at SNRs with moderate task demands. CONCLUSIONS Taken together, semantic context not only improved speech perception intelligibility but also released cognitive resources for memory encoding in older adults. Mild-to-moderate hearing loss and LWM capacity in older adults significantly increased the use of and reliance on semantic context, which was also modulated by the level of SNR.
Affiliation(s)
- Jiayuan Shen
- School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Zhejiang, China
- Jiayu Sun
- Department of Otolaryngology, Head and Neck Surgery, Shanghai Ninth People's Hospital, Shanghai JiaoTong University School of Medicine, Shanghai, China
- Zhikai Zhang
- Department of Otolaryngology, Head and Neck Surgery, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Baoxuan Sun
- Training Department, Widex Hearing Aid (Shanghai) Co., Ltd, Shanghai, China
- Haitao Li
- Department of Neurology, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- These authors contributed equally to this work and are co-corresponding authors
- Yuhe Liu
- Department of Otolaryngology, Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
- These authors contributed equally to this work and are co-corresponding authors
10. Ceuleers D, Keppler H, Degeest S, Baudonck N, Swinnen F, Kestens K, Dhooge I. Auditory, Visual, and Cognitive Abilities in Normal-Hearing Adults, Hearing Aid Users, and Cochlear Implant Users. Ear Hear 2024; 45:679-694. PMID: 38192017; DOI: 10.1097/aud.0000000000001458.
Abstract
OBJECTIVES Speech understanding is considered a bimodal and bidirectional process in which visual information (i.e., speechreading) and cognitive functions (i.e., top-down processes) are involved. The purpose of the present study was therefore twofold: (1) to investigate the auditory (A), visual (V), and cognitive (C) abilities of normal-hearing individuals, hearing aid (HA) users, and cochlear implant (CI) users, and (2) to determine an auditory, visual, cognitive (AVC) profile providing a comprehensive overview of a person's speech processing abilities, containing a broader variety of factors involved in speech understanding. DESIGN Three matched groups of subjects participated in this study: (1) 31 normal-hearing adults (mean age = 58.76 years), (2) 31 adults with moderate to severe hearing loss using HAs (mean age = 59.31 years), and (3) 31 adults with severe to profound hearing loss using a CI (mean age = 58.86 years). The audiological assessments consisted of pure-tone audiometry and speech audiometry in quiet and in noise. For evaluation of the (audio)visual speech processing abilities, the Test for (Audio) Visual Speech perception was used. The cognitive test battery consisted of the letter-number sequencing task, the letter detection test, and an auditory Stroop test, measuring working memory and processing speed, selective attention, and cognitive flexibility and inhibition, respectively. Differences between the three groups were examined using one-way analysis of variance or Kruskal-Wallis tests, depending on the normality of the variables. Furthermore, a principal component analysis was conducted to determine the AVC profile. RESULTS Normal-hearing individuals scored better on both auditory and cognitive abilities than HA users and CI users listening in a best-aided condition. No significant differences were found for speech understanding in a visual condition, despite a larger audiovisual gain for the HA users and CI users. Furthermore, an AVC profile was composed on the basis of the different auditory, visual, and cognitive assessments, making it possible to determine one comprehensive score for auditory, visual, and cognitive functioning. In the future, these scores could be used in auditory rehabilitation to determine specific strengths and weaknesses per individual patient across the different abilities related to speech understanding in daily life. CONCLUSIONS It is suggested that individuals with hearing loss be evaluated from a broader perspective, considering more than only the typical auditory abilities. Cognitive and visual abilities should also be taken into account to obtain a more complete overview of speech understanding abilities in daily life.
Affiliation(s)
- Dorien Ceuleers: Department of Head and Skin, Ghent University, Ghent, Belgium
- Hannah Keppler: Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium; Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
- Sofie Degeest: Department of Head and Skin, Ghent University, Ghent, Belgium; Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium; Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
- Nele Baudonck: Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium
- Freya Swinnen: Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium
- Katrien Kestens: Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
- Ingeborg Dhooge: Department of Head and Skin, Ghent University, Ghent, Belgium; Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium
11
Giuliani NP, Venkitakrishnan S, Wu YH. Input-related demands: vocoded sentences evoke different pupillometrics and subjective listening effort than sentences in speech-shaped noise. Int J Audiol 2024; 63:199-206. PMID: 36519812. PMCID: PMC10947987. DOI: 10.1080/14992027.2022.2150901.
Abstract
OBJECTIVES The Framework for Understanding Effortful Listening (FUEL) suggests five input-related demands can alter listening effort: source, transmission, listener, message, and context factors. We hypothesised that vocoded sentences represented a source-factor degradation and sentences in speech-shaped noise represented a transmission-factor degradation. We used pupillometry and a subjective scale to examine our hypothesis. DESIGN Participants listened to vocoded sentences and sentences in speech-shaped noise at several difficulty levels designed to produce similar word recognition abilities; they also listened to unprocessed sentences. Within-participant pupillometrics and subjective listening effort were analysed. Post-hoc analyses were performed to examine whether word recognition accuracy differentially influenced pupil responses. STUDY SAMPLE Twenty young adults with normal hearing. RESULTS Baseline pupil diameter was significantly smaller, peak pupil dilation was significantly larger, peak pupil dilation latency was significantly shorter, and subjective listening effort was significantly greater for the vocoded sentences than for the sentences in noise. Word recognition ability also affected pupillometrics, but only for the vocoded sentences. CONCLUSIONS Our findings suggest that source-factor degradations result in greater listening effort than transmission-factor degradations. Future research should address how clinical interventions tailored towards different input-related demands may reduce listening effort and improve patient outcomes.
Affiliation(s)
- Nicholas P. Giuliani: Department of Otolaryngology, University of Iowa Hospitals and Clinics, Iowa City, IA, USA
- Soumya Venkitakrishnan: Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA
- Yu-Hsiang Wu: Department of Otolaryngology, University of Iowa Hospitals and Clinics, Iowa City, IA, USA; Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA
12
Illg A, Adams D, Lesinski-Schiedat A, Lenarz T, Kral A. Variability in Receptive Language Development Following Bilateral Cochlear Implantation. J Speech Lang Hear Res 2024; 67:618-632. PMID: 38198368. DOI: 10.1044/2023_jslhr-23-00297.
Abstract
OBJECTIVES The primary aim was to investigate the variability in language development in children aged 5-7.5 years after bilateral cochlear implantation (CI) by the age of 2 years, and any impact of the age at implantation and of additional noncognitive or anatomical disorders at implantation. DESIGN Data from 84 congenitally deaf children who had received simultaneous bilateral CIs at the age of ≤ 24 months were included in this retrospective study. Language comprehension acquisition was evaluated using a standardized German language acquisition test for normal-hearing preschoolers and first graders. Data on speech perception of monosyllables and sentences in quiet and noise were added. RESULTS In a monosyllabic test, the children achieved a median performance of 75.0 ± 12.88%. In the sentence test in quiet, the median performance was 89 ± 12.69%, but dropped to 54 ± 18.92% in noise. A simple analysis showed a significant main effect of age at implantation on monosyllabic word comprehension (p < .001), but no significant effect of comorbidities that lacked cognitive effects (p = .24). Language acquisition values correspond to the normal range for children with normal hearing. Approximately 25% of the variability in the language acquisition tests is accounted for by the outcome of the monosyllabic speech perception test. CONCLUSIONS Congenitally deaf children who are fitted bilaterally in the first year of life can develop age-appropriate language skills by the time they start school. The high variability in the data is partly due to the age at implantation, but additional factors, such as cognitive factors (e.g., working memory), are likely to influence the variability.
Affiliation(s)
- Angelika Illg: Department of Otolaryngology, Medical University Hannover, Germany
- Doris Adams: Department of Otolaryngology, Medical University Hannover, Germany
- Thomas Lenarz: Department of Otolaryngology, Medical University Hannover, Germany
- Andrej Kral: Department of Otolaryngology, Medical University Hannover, Germany
13
Wang S, Wong LLN, Chen Y. Development of the Mandarin Reading Span Test and confirmation of its relationship with speech perception in noise. Int J Audiol 2024:1-10. PMID: 38270384. DOI: 10.1080/14992027.2024.2305685.
Abstract
OBJECTIVE This study aimed to develop a dual-task Mandarin Reading Span Test (RST) to assess verbal working memory related to speech perception in noise. DESIGN The test material was developed taking into account psycholinguistic factors (i.e., sentence structure, number of syllables, word familiarity, and sentence plausibility) to achieve good test reliability and face validity. The relationship between the 28-sentence Mandarin RST and speech perception in noise was confirmed using three speech-perception-in-noise measures containing varying levels of contextual and linguistic information. STUDY SAMPLE The study comprised 42 young adults with normal hearing and 56 older adults who were hearing aid users with moderate to severe hearing loss. RESULTS In the older hearing aid users, the 28-sentence RST showed significant correlations with speech reception thresholds as measured by three Mandarin sentence-in-noise tests (rs or r = -.681 to -.419) but not with the 2-digit-sequence Digit-in-Noise Test. CONCLUSION The newly developed dual-task Mandarin RST, constructed with careful psycholinguistic consideration, demonstrates a significant relationship with sentence perception in noise. This suggests that the Mandarin RST could serve as a measure of verbal working memory.
Affiliation(s)
- Shangqiguo Wang: Unit of Human Communication, Learning, and Development, Faculty of Education, The University of Hong Kong, Hong Kong, Hong Kong SAR, China
- Lena L N Wong: Unit of Human Communication, Learning, and Development, Faculty of Education, The University of Hong Kong, Hong Kong, Hong Kong SAR, China
- Yuan Chen: Department of Special Education and Counselling, Integrated Center for Wellbeing (I-WELL), The Education University of Hong Kong, Taipo, New Territories, China
14
Carolan PJ, Heinrich A, Munro KJ, Millman RE. Divergent effects of listening demands and evaluative threat on listening effort in online and laboratory settings. Front Psychol 2024; 15:1171873. PMID: 38333064. PMCID: PMC10850315. DOI: 10.3389/fpsyg.2024.1171873.
Abstract
Objective Listening effort (LE) varies as a function of listening demands, motivation and resource availability, among other things. Motivation is posited to have a greater influence on listening effort under high, compared to low, listening demands. Methods To test this prediction, we manipulated the listening demands of a speech recognition task using tone vocoders to create moderate and high listening demand conditions. We manipulated motivation using evaluative threat, i.e., informing participants that they must reach a particular "score" for their results to be usable. Resource availability was assessed by means of working memory span and included as a fixed effects predictor. Outcome measures were indices of LE, including reaction times (RTs), self-rated work and self-rated tiredness, in addition to task performance (correct response rates). Given the recent popularity of online studies, we also wanted to examine the effect of experimental context (online vs. laboratory) on the efficacy of manipulations of listening demands and motivation. We carried out two highly similar experiments with two groups of 37 young adults, a laboratory experiment and an online experiment. To make listening demands comparable between the two studies, vocoder settings had to differ. All results were analysed using linear mixed models. Results Results showed that under laboratory conditions, listening demands affected all outcomes, with significantly lower correct response rates, slower RTs and greater self-rated work with higher listening demands. In the online study, listening demands only affected RTs. In addition, motivation affected self-rated work. Resource availability was only a significant predictor for RTs in the online study. Discussion These results show that the influence of motivation and listening demands on LE depends on the type of outcome measures used and the experimental context. It may also depend on the exact vocoder settings. 
A controlled laboratory setting and/or particular vocoder settings may be necessary to observe all expected effects of listening demands and motivation.
Affiliation(s)
- Peter J. Carolan: School of Health Sciences, Manchester Centre for Audiology and Deafness, University of Manchester, Manchester, United Kingdom; Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, United Kingdom
- Antje Heinrich: School of Health Sciences, Manchester Centre for Audiology and Deafness, University of Manchester, Manchester, United Kingdom; Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, United Kingdom
- Kevin J. Munro: School of Health Sciences, Manchester Centre for Audiology and Deafness, University of Manchester, Manchester, United Kingdom; Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, United Kingdom
- Rebecca E. Millman: School of Health Sciences, Manchester Centre for Audiology and Deafness, University of Manchester, Manchester, United Kingdom; Manchester University Hospitals NHS Foundation Trust, Manchester Academic Health Science Centre, Manchester, United Kingdom
15
Shen J, Wu J. Recognition of Speech With Dynamic Pitch Manipulation in Noise: Effects of Manipulation Methods. J Speech Lang Hear Res 2024; 67:269-281. PMID: 37983169. PMCID: PMC11000783. DOI: 10.1044/2023_jslhr-23-00142.
Abstract
PURPOSE Dynamic pitch, defined as the variation in fundamental frequency in speech, is one of the acoustic cues that affect speech recognition in noise. Building on the evidence that a symmetrical manipulation of dynamic pitch led to poorer speech recognition, the present study examined the effect of an asymmetrical manipulation method on speech recognition in noise by younger and older adults. METHOD Speech recognition accuracy in noise was measured in younger adults with normal hearing in Experiment 1, and speech reception threshold (in dB SNR) in older adults with normal hearing to mild-moderate hearing loss in Experiment 2. The dynamic pitch contours of the speech stimuli were manipulated using both symmetrical and asymmetrical methods. RESULTS Younger adults recognized speech in noise better with asymmetrical than with symmetrical manipulation, and with weakened rather than strengthened dynamic pitch. A substantial amount of variability was observed in the group of older listeners. This variability was predominantly predicted by the listeners' age, but not by hearing thresholds or by the ability to perceive dynamic pitch in fluctuating noise. CONCLUSIONS The asymmetrical manipulation of dynamic pitch had a less negative effect than the symmetrical manipulation. This effect also interacted with pitch-change direction. These findings suggest an influence of perceptual naturalness on speech recognition with signal modification. Directions for future research are also discussed.
Affiliation(s)
- Jing Shen: Department of Communication Sciences and Disorders, College of Public Health, Temple University, Philadelphia, PA
- Jingwei Wu: Department of Epidemiology and Biostatistics, College of Public Health, Temple University, Philadelphia, PA
16
Wang S, Wong LLN. An Exploration of the Memory Performance in Older Adult Hearing Aid Users on the Integrated Digit-in-Noise Test. Trends Hear 2024; 28:23312165241253653. PMID: 38715401. PMCID: PMC11080745. DOI: 10.1177/23312165241253653.
Abstract
This study aimed to preliminarily investigate the associations between performance on the integrated Digit-in-Noise Test (iDIN) and performance on measures of general cognition and working memory (WM). The study recruited 81 older adult hearing aid users between 60 and 95 years of age with bilateral moderate to severe hearing loss. The Chinese version of the Montreal Cognitive Assessment Basic (MoCA-BC) was used to screen older adults for mild cognitive impairment. Speech reception thresholds (SRTs) were measured using 2- to 5-digit sequences of the Mandarin iDIN. The differences in SRT between five-digit and two-digit sequences (SRT5-2), and between five-digit and three-digit sequences (SRT5-3), were used as indicators of memory performance. The results were compared to those from the Digit Span Test and Corsi Blocks Tapping Test, which evaluate WM and attention capacity. SRT5-2 and SRT5-3 demonstrated significant correlations with the three cognitive function tests (rs ranging from -.705 to -.528). Furthermore, SRT5-2 and SRT5-3 were significantly higher in participants who failed the MoCA-BC screening compared to those who passed. The findings show associations between performance on the iDIN and performance on memory tests. However, further validation and exploration are needed to fully establish its effectiveness and efficacy.
Affiliation(s)
- Shangqiguo Wang: Unit of Human Communication, Development, and Information Sciences, Faculty of Education, The University of Hong Kong, Hong Kong, SAR, China
- Lena L. N. Wong: Unit of Human Communication, Development, and Information Sciences, Faculty of Education, The University of Hong Kong, Hong Kong, SAR, China
17
Oberfeld D, Staab K, Kattner F, Ellermeier W. Is Recognition of Speech in Noise Related to Memory Disruption Caused by Irrelevant Sound? Trends Hear 2024; 28:23312165241262517. PMID: 39051688. PMCID: PMC11273587. DOI: 10.1177/23312165241262517.
Abstract
Listeners with normal audiometric thresholds show substantial variability in their ability to understand speech in noise (SiN). These individual differences have been reported to be associated with a range of auditory and cognitive abilities. The present study addresses the association between SiN processing and the individual susceptibility of short-term memory to auditory distraction (i.e., the irrelevant sound effect [ISE]). In a sample of 67 young adult participants with normal audiometric thresholds, we measured speech recognition performance in a spatial listening task with two interfering talkers (speech-in-speech identification), audiometric thresholds, binaural sensitivity to the temporal fine structure (interaural phase differences [IPD]), serial memory with and without interfering talkers, and self-reported noise sensitivity. Speech-in-speech processing was not significantly associated with the ISE. The most important predictors of high speech-in-speech recognition performance were a large short-term memory span, low IPD thresholds, bilaterally symmetrical audiometric thresholds, and low individual noise sensitivity. Surprisingly, the susceptibility of short-term memory to irrelevant sound accounted for a substantially smaller amount of variance in speech-in-speech processing than the nondisrupted short-term memory capacity. The data confirm the role of binaural sensitivity to the temporal fine structure, although its association with SiN recognition was weaker than in some previous studies. The inverse association between self-reported noise sensitivity and SiN processing deserves further investigation.
Affiliation(s)
- Daniel Oberfeld: Institute of Psychology, Section Experimental Psychology, Johannes Gutenberg-Universität Mainz, Germany
- Katharina Staab: Department of Marketing and Human Resource Management, Technische Universität Darmstadt, Darmstadt, Germany
- Florian Kattner: Institute of Psychology, Technische Universität Darmstadt, Darmstadt, Germany
- Wolfgang Ellermeier: Institute of Psychology, Technische Universität Darmstadt, Darmstadt, Germany
18
Van Wilderode M, Van Humbeeck N, Krampe R, van Wieringen A. Speech-Identification During Standing as a Multitasking Challenge for Young, Middle-Aged and Older Adults. Trends Hear 2024; 28:23312165241260621. PMID: 39053897. PMCID: PMC11282555. DOI: 10.1177/23312165241260621.
Abstract
While listening, we commonly participate in simultaneous activities. For instance, at receptions people often stand while engaging in conversation. It is known that listening and postural control are associated with each other. Previous studies focused on the interplay of listening and postural control when the speech identification task had rather high cognitive control demands. This study aimed to determine whether listening and postural control interact when the speech identification task requires minimal cognitive control, i.e., when words are presented without background noise or a large memory load. This study included 22 young adults, 27 middle-aged adults, and 21 older adults. Participants performed a speech identification task (auditory single task), a postural control task (posture single task), and combined postural control and speech identification tasks (dual task) to assess the effects of multitasking. The difficulty levels of the listening and postural control tasks were manipulated by altering the level of the words (25 or 30 dB SPL) and the mobility of the platform (stable or moving). The sound level was increased for adults with a hearing impairment. In the dual task, listening performance decreased, especially for middle-aged and older adults, while postural control improved. These results suggest that interaction with postural control occurs even when the cognitive control demands of listening are minimal. Correlational analysis revealed that hearing loss was a better predictor than age of speech identification and postural control.
Affiliation(s)
- Mira Van Wilderode: Department of Neurosciences, Research Group Experimental ORL, KU Leuven, Leuven, Belgium
- Ralf Krampe: Brain & Cognition Group, University of Leuven (KU Leuven), Leuven, Belgium
- Astrid van Wieringen: Department of Neurosciences, Research Group Experimental ORL, KU Leuven, Leuven, Belgium; Department of Special Needs Education, University of Oslo, Oslo, Norway
19
Slugocki C, Kuk F, Korhonen P. Alpha-Band Dynamics of Hearing Aid Wearers Performing the Repeat-Recall Test (RRT). Trends Hear 2024; 28:23312165231222098. PMID: 38549287. PMCID: PMC10981257. DOI: 10.1177/23312165231222098.
Abstract
This study measured electroencephalographic activity in the alpha band, often associated with task difficulty, to physiologically validate self-reported effort ratings from older hearing-impaired listeners performing the Repeat-Recall Test (RRT)-an integrative multipart assessment of speech-in-noise performance, context use, and auditory working memory. Following a single-blind within-subjects design, 16 older listeners (mean age = 71 years, SD = 13, 9 female) with a moderate-to-severe degree of bilateral sensorineural hearing loss performed the RRT while wearing hearing aids at four fixed signal-to-noise ratios (SNRs) of -5, 0, 5, and 10 dB. Performance and subjective ratings of listening effort were assessed for complementary versions of the RRT materials with high/low availability of semantic context. Listeners were also tested with a version of the RRT that omitted the memory (i.e., recall) component. As expected, results showed alpha power to decrease significantly with increasing SNR from 0 through 10 dB. When tested with high context sentences, alpha was significantly higher in conditions where listeners had to recall the sentence materials compared to conditions where the recall requirement was omitted. When tested with low context sentences, alpha power was relatively high irrespective of the memory component. Within-subjects, alpha power was related to listening effort ratings collected across the different RRT conditions. Overall, these results suggest that the multipart demands of the RRT modulate both neural and behavioral measures of listening effort in directions consistent with the expected/designed difficulty of the RRT conditions.
Affiliation(s)
- Christopher Slugocki: Office of Research in Clinical Amplification (ORCA-USA), WS Audiology, Lisle, IL, USA
- Francis Kuk: Office of Research in Clinical Amplification (ORCA-USA), WS Audiology, Lisle, IL, USA
- Petri Korhonen: Office of Research in Clinical Amplification (ORCA-USA), WS Audiology, Lisle, IL, USA
20
Kestens K, Keppler H, Ceuleers D, Lecointre S, De Langhe F, Degeest S. The effect of age on the hearing-related quality of life in normal-hearing adults. J Commun Disord 2023; 106:106386. PMID: 37918084. DOI: 10.1016/j.jcomdis.2023.106386.
Abstract
INTRODUCTION Recently, a new holistic Patient Reported Outcome Measure (PROM) for assessing hearing-related quality of life was developed, named the hearing-related quality of life questionnaire for Auditory-VIsual, COgnitive and Psychosocial functioning (hAVICOP). The purpose of the current study was to evaluate whether the hAVICOP is sufficiently sensitive to detect an age effect on hearing-related quality of life. METHODS One hundred thirteen normal-hearing participants (mean age: 42.13 years; range: 19 to 69 years) completed the entire hAVICOP questionnaire online via the Research Electronic Data Capture (REDCap) interface. The hAVICOP consists of 27 statements across three major subdomains (auditory-visual, cognitive, and psychosocial functioning), each rated on a visual analogue scale ranging from 0 (rarely to never) to 100 (almost always). Mean scores were calculated for each subdomain separately as well as combined into a total score; the worse one's hearing-related quality of life, the lower the score. Linear regression models were run to predict the hAVICOP total score as well as the three subdomain scores from age and sex. RESULTS A significant main effect of age was observed for the hAVICOP total score and all three subdomain scores, indicating a decrease in hearing-related quality of life with increasing age. No significant sex effect was found in any of the analyses. CONCLUSION The hAVICOP is sufficiently sensitive to detect an age effect on hearing-related quality of life within a large group of normal-hearing adults, emphasizing its clinical utility. This age effect on hearing-related quality of life might be related to the interplay of age-related changes in the bottom-up and top-down processes involved during speech processing.
Affiliation(s)
- Katrien Kestens: Department of Rehabilitation Sciences, Ghent University, Corneel Heymanslaan 10 (2P1), 9000 Ghent, Belgium
- Hannah Keppler: Department of Rehabilitation Sciences, Ghent University, Corneel Heymanslaan 10 (2P1), 9000 Ghent, Belgium; Department of Oto-rhino-laryngology, Ghent University Hospital, Corneel Heymanslaan 10 (2P1), 9000 Ghent, Belgium
- Dorien Ceuleers: Department of Head and Skin, Ghent University, Corneel Heymanslaan 10 (2P1), 9000 Ghent, Belgium
- Stephanie Lecointre: Department of Rehabilitation Sciences, Ghent University, Corneel Heymanslaan 10 (2P1), 9000 Ghent, Belgium
- Flore De Langhe: Department of Rehabilitation Sciences, Ghent University, Corneel Heymanslaan 10 (2P1), 9000 Ghent, Belgium
- Sofie Degeest: Department of Rehabilitation Sciences, Ghent University, Corneel Heymanslaan 10 (2P1), 9000 Ghent, Belgium
21
Khayr R, Karawani H, Banai K. Implicit learning and individual differences in speech recognition: an exploratory study. Front Psychol 2023; 14:1238823. PMID: 37744578. PMCID: PMC10513179. DOI: 10.3389/fpsyg.2023.1238823.
Abstract
Individual differences in speech recognition in challenging listening environments are pronounced. Studies suggest that implicit learning is one variable that may contribute to this variability. Here, we explored the unique contributions of three indices of implicit learning to individual differences in the recognition of challenging speech. To this end, we assessed three indices of implicit learning (perceptual, statistical, and incidental), three types of challenging speech (natural fast, vocoded, and speech in noise), and cognitive factors associated with speech recognition (vocabulary, working memory, and attention) in a group of 51 young adults. Speech recognition was modeled as a function of the cognitive factors and learning, and the unique contribution of each index of learning was statistically isolated. The three indices of learning were uncorrelated. Whereas all indices of learning had unique contributions to the recognition of natural-fast speech, only statistical learning had a unique contribution to the recognition of speech in noise and vocoded speech. These data suggest that although implicit learning may contribute to the recognition of challenging speech, the contribution may depend on the type of speech challenge and on the learning task.
Affiliation(s)
- Ranin Khayr: Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Sciences, University of Haifa, Haifa, Israel
22
Nicoras R, Gotowiec S, Hadley LV, Smeds K, Naylor G. Conversation success in one-to-one and group conversation: a group concept mapping study of adults with normal and impaired hearing. Int J Audiol 2023; 62:868-876. PMID: 35875851. DOI: 10.1080/14992027.2022.2095538.
Abstract
OBJECTIVE The concept of conversation success is undefined, although prior work has variously related it to accurate exchange of information, alignment between interlocutors, and good management of misunderstandings. This study aimed (1) to identify factors of conversation success and (2) to explore the importance of these factors in one-to-one versus group conversations. DESIGN Group concept mapping method was applied. Participants responded to two brainstorming prompts ("What does 'successful conversation' look like?" and "Think about a successful conversation you have taken part in. What aspects of that conversation contributed to its success?"). The resulting statements were sorted into related clusters and rated in importance for one-to-one and group conversation. STUDY SAMPLE Thirty-five adults with normal and impaired hearing. RESULTS Seven clusters were identified: (1) Being able to listen easily; (2) Being spoken to in a helpful way; (3) Being engaged and accepted; (4) Sharing information as desired; (5) Perceiving flowing and balanced interaction; (6) Feeling positive emotions; (7) Not having to engage coping mechanisms. Three clusters (1, 2, and 4) were more important in group than in one-to-one conversation. There were no differences by hearing group. CONCLUSIONS These findings emphasise that conversation success is a multifaceted concept.
Affiliation(s)
- Raluca Nicoras: Hearing Sciences - Scottish Section, School of Medicine, University of Nottingham, Nottingham, UK
- Lauren V Hadley: Hearing Sciences - Scottish Section, School of Medicine, University of Nottingham, Nottingham, UK
- Karolina Smeds: Hearing Sciences - Scottish Section, School of Medicine, University of Nottingham, Nottingham, UK; ORCA Europe, WS Audiology, Stockholm, Sweden
- Graham Naylor: Hearing Sciences - Scottish Section, School of Medicine, University of Nottingham, Nottingham, UK
23
Fengler A, Fuchs M, Tretbar K. [How Working Memory Supports Language Comprehension after Cochlear Implantation]. Laryngorhinootologie 2023; 102:658-661. PMID: 37220774. DOI: 10.1055/a-1985-0238.
Abstract
Language comprehension in challenging conditions requires the integration of multimodal information. The necessary resources are provided by working memory. We discuss how adult cochlear implant users benefit from auditory-cognitive training during their rehabilitation process. Working memory capacity strongly influences language comprehension whenever listening effort is increased. Since CI users may have trouble recovering the phonological structure from the speech signal, working memory is required to provide the resources needed to disambiguate between multiple interpretation options. However, whether due to their hearing biography or their advanced age, CI users often show reduced working memory capacities. Previous studies with hearing-impaired adults provide evidence for the potential of combined auditory-cognitive training during the rehabilitation process of CI users.
24
Viswanathan V, Bharadwaj HM, Heinz MG, Shinn-Cunningham BG. Induced alpha and beta electroencephalographic rhythms covary with single-trial speech intelligibility in competition. Sci Rep 2023; 13:10216. PMID: 37353552. PMCID: PMC10290148. DOI: 10.1038/s41598-023-37173-2.
Abstract
Neurophysiological studies suggest that intrinsic brain oscillations influence sensory processing, especially of rhythmic stimuli like speech. Prior work suggests that brain rhythms may mediate perceptual grouping and selective attention to speech amidst competing sound, as well as more linguistic aspects of speech processing like predictive coding. However, we know of no prior studies that have directly tested, at the single-trial level, whether brain oscillations relate to speech-in-noise outcomes. Here, we recorded electroencephalography while simultaneously measuring intelligibility of spoken sentences amidst two different interfering sounds: multi-talker babble or speech-shaped noise. We find that induced parieto-occipital alpha (7-15 Hz; thought to modulate attentional focus) and frontal beta (13-30 Hz; associated with maintenance of the current sensorimotor state and predictive coding) oscillations covary with trial-wise percent-correct scores; importantly, alpha and beta power provide significant independent contributions to predicting single-trial behavioral outcomes. These results can inform models of speech processing and guide noninvasive measures to index different neural processes that together support complex listening.
Affiliation(s)
- Vibha Viswanathan
- Neuroscience Institute, Carnegie Mellon University, Pittsburgh, PA, 15213, USA
- Hari M Bharadwaj
- Department of Communication Science and Disorders, University of Pittsburgh, Pittsburgh, PA, 15260, USA
- Michael G Heinz
- Department of Speech, Language, and Hearing Sciences, Purdue University, West Lafayette, IN, 47907, USA
25
Trau-Margalit A, Fostick L, Harel-Arbeli T, Nissanholtz-Gannot R, Taitelbaum-Swead R. Speech recognition in noise task among children and young-adults: a pupillometry study. Front Psychol 2023; 14:1188485. PMID: 37425148. PMCID: PMC10328119. DOI: 10.3389/fpsyg.2023.1188485.
Abstract
Introduction Children experience unique challenges when listening to speech in noisy environments. The present study used pupillometry, an established method for quantifying listening and cognitive effort, to detect temporal changes in pupil dilation during a speech-recognition-in-noise task among school-aged children and young adults. Methods Thirty school-aged children and 31 young adults listened to sentences amidst four-talker babble noise in two signal-to-noise ratio (SNR) conditions: a high accuracy condition (+10 dB and +6 dB, for children and adults, respectively) and a low accuracy condition (+5 dB and +2 dB, for children and adults, respectively). They were asked to repeat the sentences while pupil size was measured continuously during the task. Results During the auditory processing phase, both groups displayed pupil dilation; however, adults exhibited greater dilation than children, particularly in the low accuracy condition. In the second phase (retention), only children demonstrated increased pupil dilation, whereas adults consistently exhibited a decrease in pupil size. Additionally, the children's group showed increased pupil dilation during the response phase. Discussion Although adults and school-aged children produce similar behavioural scores, group differences in dilation patterns indicate that their underlying auditory processing differs. A second peak of pupil dilation among the children suggests that their cognitive effort during speech recognition in noise lasts longer than in adults, continuing past the first auditory processing peak. These findings support the presence of effortful listening among children and highlight the need to identify and alleviate listening difficulties in school-aged children, in order to provide proper intervention strategies.
Affiliation(s)
- Avital Trau-Margalit
- Department of Communication Disorders, Speech Perception and Listening Effort Lab in the Name of Prof. Mordechai Himelfarb, Ariel University, Ariel, Israel
- Leah Fostick
- Department of Communication Disorders, Auditory Perception Lab in the Name of Laurent Levy, Ariel University, Ariel, Israel
- Tami Harel-Arbeli
- Department of Gerontology, University of Haifa, Haifa, Israel
- Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Riki Taitelbaum-Swead
- Department of Communication Disorders, Speech Perception and Listening Effort Lab in the Name of Prof. Mordechai Himelfarb, Ariel University, Ariel, Israel
- Meuhedet Health Services, Tel Aviv, Israel
26
Windle R, Dillon H, Heinrich A. A review of auditory processing and cognitive change during normal ageing, and the implications for setting hearing aids for older adults. Front Neurol 2023; 14:1122420. PMID: 37409017. PMCID: PMC10318159. DOI: 10.3389/fneur.2023.1122420.
Abstract
Throughout our adult lives there is a decline in peripheral hearing, auditory processing and elements of cognition that support listening ability. Audiometry provides no information about the status of auditory processing and cognition, and older adults often struggle with complex listening situations, such as speech in noise perception, even if their peripheral hearing appears normal. Hearing aids can address some aspects of peripheral hearing impairment and improve signal-to-noise ratios. However, they cannot directly enhance central processes and may introduce distortion to sound that might act to undermine listening ability. This review paper highlights the need to consider the distortion introduced by hearing aids, specifically when considering normally-ageing older adults. We focus on patients with age-related hearing loss because they represent the vast majority of the population attending audiology clinics. We believe that it is important to recognize that the combination of peripheral and central, auditory and cognitive decline makes older adults some of the most complex patients seen in audiology services, so they should not be treated as "standard" despite the high prevalence of age-related hearing loss. We argue that a primary concern should be to avoid hearing aid settings that introduce distortion to speech envelope cues, which is not a new concept. The primary cause of distortion is the speed and range of change to hearing aid amplification (i.e., compression). We argue that slow-acting compression should be considered as a default for some users and that other advanced features should be reconsidered as they may also introduce distortion that some users may not be able to tolerate. We discuss how this can be incorporated into a pragmatic approach to hearing aid fitting that does not require increased loading on audiology services.
Affiliation(s)
- Richard Windle
- Audiology Department, Royal Berkshire NHS Foundation Trust, Reading, United Kingdom
- Harvey Dillon
- NIHR Manchester Biomedical Research Centre, Manchester, United Kingdom
- Department of Linguistics, Macquarie University, North Ryde, NSW, Australia
- Antje Heinrich
- NIHR Manchester Biomedical Research Centre, Manchester, United Kingdom
- Division of Human Communication, Development and Hearing, School of Health Sciences, University of Manchester, Manchester, United Kingdom
27
Hamdy M, El Shennawy A, Mostafa N, Hamdy HS. Working memory and listening fatigue in cochlear implantation. Hear Balance Commun 2023. DOI: 10.1080/21695717.2023.2188813.
Affiliation(s)
- Mona Hamdy
- Audiology Unit, Department of Otolaryngology, Cairo University, Cairo, Egypt
- Amira El Shennawy
- Audiology Unit, Department of Otolaryngology, Cairo University, Cairo, Egypt
- Nourhan Mostafa
- Audiology Unit, Department of Otolaryngology, Cairo University, Cairo, Egypt
28
Lansford KL, Barrett TS, Borrie SA. Cognitive Predictors of Perception and Adaptation to Dysarthric Speech in Young Adult Listeners. J Speech Lang Hear Res 2023; 66:30-47. PMID: 36480697. PMCID: PMC10023189. DOI: 10.1044/2022_jslhr-22-00391.
Abstract
PURPOSE Although recruitment of cognitive-linguistic resources to support dysarthric speech perception and adaptation is presumed by theoretical accounts of effortful listening and supported by cross-disciplinary empirical findings, prospective relationships have received limited attention in the disordered speech literature. This study aimed to examine the predictive relationships between cognitive-linguistic parameters and intelligibility outcomes associated with familiarization with dysarthric speech in young adult listeners. METHOD A cohort of 156 listener participants between the ages of 18 and 50 years completed a three-phase perceptual training protocol (pretest, training, and posttest) with one of three speakers with dysarthria. Additionally, listeners completed the National Institutes of Health Toolbox Cognition Battery to obtain measures of the following cognitive-linguistic constructs: working memory, inhibitory control of attention, cognitive flexibility, processing speed, and vocabulary knowledge. RESULTS Elastic net regression models revealed that select cognitive-linguistic measures and their two-way interactions predicted both initial intelligibility and intelligibility improvement of dysarthric speech. While some consistency across models was shown, unique constellations of select cognitive factors and their interactions predicted initial intelligibility and intelligibility improvement of the three different speakers with dysarthria. CONCLUSIONS Current findings extend empirical support for theoretical models of speech perception in adverse listening conditions to dysarthric speech signals. Although predictive relationships were complex, vocabulary knowledge, working memory, and cognitive flexibility often emerged as important variables across the models.
Affiliation(s)
- Kaitlin L. Lansford
- School of Communication Science & Disorders, Florida State University, Tallahassee
- Stephanie A. Borrie
- Department of Communicative Disorders and Deaf Education, Utah State University, Logan
29
Bsharat-Maalouf D, Degani T, Karawani H. The Involvement of Listening Effort in Explaining Bilingual Listening Under Adverse Listening Conditions. Trends Hear 2023; 27:23312165231205107. PMID: 37941413. PMCID: PMC10637154. DOI: 10.1177/23312165231205107.
Abstract
The current review examines listening effort to uncover how it is implicated in bilingual performance under adverse listening conditions. Various measures of listening effort, including physiological, behavioral, and subjective measures, have been employed to examine listening effort in bilingual children and adults. Adverse listening conditions, stemming from environmental factors, as well as factors related to the speaker or listener, have been examined. The existing literature, although relatively limited to date, points to increased listening effort among bilinguals in their nondominant second language (L2) compared to their dominant first language (L1) and relative to monolinguals. Interestingly, increased effort is often observed even when speech intelligibility remains unaffected. These findings emphasize the importance of considering listening effort alongside speech intelligibility. Building upon the insights gained from the current review, we propose that various factors may modulate the observed effects. These include the particular measure selected to examine listening effort, the characteristics of the adverse condition, as well as factors related to the particular linguistic background of the bilingual speaker. Critically, further research is needed to better understand the impact of these factors on listening effort. The review outlines avenues for future research that would promote a comprehensive understanding of listening effort in bilingual individuals.
Affiliation(s)
- Dana Bsharat-Maalouf
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Tamar Degani
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Hanin Karawani
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
30
Ceuleers D, Baudonck N, Keppler H, Kestens K, Dhooge I, Degeest S. Development of the hearing-related quality of life questionnaire for auditory-visual, cognitive and psychosocial functioning (hAVICOP). J Commun Disord 2023; 101:106291. PMID: 36508852. DOI: 10.1016/j.jcomdis.2022.106291.
Abstract
INTRODUCTION There is a need for a validated and standardized self-assessment instrument to assess the subjective effect of hearing aid (HA) use and/or cochlear implantation (CI) on different aspects of functioning in daily life. The aim of this study was to develop a new holistic Patient Reported Outcome Measure (PROM) to assess hearing-related quality of life. The new PROM is titled the hearing-related quality of life questionnaire for Auditory-VIsual, COgnitive and Psychosocial functioning (hAVICOP). METHODS A conceptual framework was set up and test items were prepared per domain. Preliminary testing involved a semi-structured interview-based assessment in normal-hearing and hearing-impaired adults and an expert panel. For the further psychometric evaluation, a new sample of 15 adult HA users, 20 adult CI users and 20 normal-hearing adults filled in the refined version of the hAVICOP, the Speech, Spatial and Qualities of Hearing Scale, the Nijmegen Cochlear Implant Questionnaire and the TNO-AZL Questionnaire for Adult's Health-Related Quality of Life. Based on these results, a factor analysis was conducted and internal consistency, discriminant validity and concurrent construct validity were determined. RESULTS The final version of the hAVICOP consists of three domains for hearing-related quality of life: (1) auditory-visual functioning, (2) cognitive functioning, and (3) psychosocial functioning. A sufficient internal consistency was found, and discriminant validity and concurrent construct validity were good. CONCLUSIONS A new PROM to assess hearing-related quality of life was developed, named the hAVICOP. In the future the validity and reliability should be examined further.
Affiliation(s)
- Dorien Ceuleers
- Department of Head and Skin, Ghent University, Ghent, Belgium
- Nele Baudonck
- Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium
- Hannah Keppler
- Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium; Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
- Katrien Kestens
- Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
- Ingeborg Dhooge
- Department of Head and Skin, Ghent University, Ghent, Belgium; Department of Otorhinolaryngology, Ghent University Hospital, Ghent, Belgium
- Sofie Degeest
- Department of Rehabilitation Sciences, Ghent University, Ghent, Belgium
31
Central Auditory Processing Disorder in Patients with Amnestic Mild Cognitive Impairment. Behav Neurol 2022; 2022:9001662. PMID: 36567763. PMCID: PMC9779989. DOI: 10.1155/2022/9001662.
Abstract
Background This study was conducted to comprehensively examine the central auditory processing (CAP) abilities of patients with amnestic mild cognitive impairment (aMCI) and to compare the results with cognitively normal elderly controls. Methods A total of 78 participants were screened through pure-tone audiometry and word recognition score in order to exclude peripheral auditory dysfunction. Forty-five people passed the screening tests, and 33 people failed. Finally, 25 patients with aMCI (mean age = 71.52 ± 4.8; male : female = 24 : 76) and 20 controls (mean age = 73.45 ± 4.32; male : female = 45 : 55) were enrolled in the study. Seven CAP tests (frequency pattern test, duration pattern test, Gaps-In-Noise test, dichotic digits test, low-pass filtered word test, speech perception in noise test, and binaural fusion test) were conducted only after the two groups passed the screening. A linear mixed model was applied to analyze all CAP tests except the binaural fusion test. For the binaural fusion test, an independent t-test was used to compare the mean test scores between the two groups. Results The aMCI group had lower mean scores on the frequency pattern test, duration pattern test, Gaps-In-Noise test, dichotic digits test, and speech perception in noise test compared with the control group. Conclusion The aMCI group's CAP abilities were significantly lower than those of the control group. Thus, if cognitive assessment and hearing evaluation are conducted in combination, the sensitivity of the diagnostic process for aMCI will be increased.
32
Beadle J, Kim J, Davis C. Visual Speech Improves Older and Younger Adults' Response Time and Accuracy for Speech Comprehension in Noise. Trends Hear 2022; 26:23312165221145006. PMID: 36524310. PMCID: PMC9761220. DOI: 10.1177/23312165221145006.
Abstract
Past research suggests that older adults expend more cognitive resources when processing visual speech than younger adults. If so, given resource limitations, older adults may not get as large a visual speech benefit as younger ones on a resource-demanding speech processing task. We tested this using a speech comprehension task that required attention across two talkers and a simple response (i.e., the question-and-answer task) and measured response time and accuracy. Specifically, we compared the size of visual speech benefit for older and younger adults. We also examined whether the presence of a visual distractor would reduce the visual speech benefit more for older than younger adults. Twenty-five older adults (12 females, MAge = 72) and 25 younger adults (17 females, MAge = 22) completed the question-and-answer task under time pressure. The task included the following conditions: auditory and visual (AV) speech; AV speech plus visual distractor; and auditory speech with static face images. Both age groups showed a visual speech benefit regardless of whether a visual distractor was also presented. Likewise, the size of the visual speech benefit did not significantly interact with age group for accuracy or the potentially more sensitive response time measure.
Affiliation(s)
- Julie Beadle
- The MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Sydney, Australia
- The HEARing CRC, Australia
- Jeesun Kim
- The MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Sydney, Australia
- Chris Davis
- The MARCS Institute for Brain, Behaviour, and Development, Western Sydney University, Sydney, Australia
- The HEARing CRC, Australia
- Correspondence: Chris Davis, Western Sydney University, The MARCS Institute for Brain, Behaviour and Development, Westmead Innovation Quarter, Building U, Level 4, 160 Hawkesbury Road, Westmead NSW 2145, Australia
33
Gianakas SP, Fitzgerald MB, Winn MB. Identifying Listeners Whose Speech Intelligibility Depends on a Quiet Extra Moment After a Sentence. J Speech Lang Hear Res 2022; 65:4852-4865. PMID: 36472938. PMCID: PMC9934912. DOI: 10.1044/2022_jslhr-21-00622.
Abstract
PURPOSE An extra moment after a sentence is spoken may be important for listeners with hearing loss to mentally repair misperceptions during listening. The current audiologic test battery cannot distinguish between a listener who repaired a misperception versus a listener who heard the speech accurately with no need for repair. This study aims to develop a behavioral method to identify individuals who are at risk for relying on a quiet moment after a sentence. METHOD Forty-three individuals with hearing loss (32 cochlear implant users, 11 hearing aid users) heard sentences that were followed by either 2 s of silence or 2 s of babble noise. Both high- and low-context sentences were used in the task. RESULTS Some individuals showed notable benefit in accuracy scores (particularly for high-context sentences) when given an extra moment of silent time following the sentence. This benefit was highly variable across individuals and sometimes absent altogether. However, the group-level patterns of results were mainly explained by the use of context and successful perception of the words preceding sentence-final words. CONCLUSIONS These results suggest that some but not all individuals improve their speech recognition score by relying on a quiet moment after a sentence, and that this fragility of speech recognition cannot be assessed using one isolated utterance at a time. Reliance on a quiet moment to repair perceptions would potentially impede the perception of an upcoming utterance, making continuous communication in real-world scenarios difficult especially for individuals with hearing loss. The methods used in this study-along with some simple modifications if necessary-could potentially identify patients with hearing loss who retroactively repair mistakes by using clinically feasible methods that can ultimately lead to better patient-centered hearing health care. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21644801.
34
Moberly AC, Afreen H, Schneider KJ, Tamati TN. Preoperative Reading Efficiency as a Predictor of Adult Cochlear Implant Outcomes. Otol Neurotol 2022; 43:e1100-e1106. PMID: 36351224. PMCID: PMC9694592. DOI: 10.1097/mao.0000000000003722.
Abstract
HYPOTHESES 1) Scores of reading efficiency (the Test of Word Reading Efficiency, Second Edition) obtained in adults before cochlear implant surgery will be predictive of speech recognition outcomes 6 months after surgery; and 2) Cochlear implantation will lead to improvements in language processing as measured through reading efficiency from preimplantation to postimplantation. BACKGROUND Adult cochlear implant (CI) users display remarkable variability in speech recognition outcomes. "Top-down" processing, the use of cognitive resources to make sense of degraded speech, contributes to speech recognition abilities in CI users. One area that has received little attention is the efficiency of lexical and phonological processing. In this study, a visual measure of word and nonword reading efficiency, relying on lexical and phonological processing, respectively, was investigated for its ability to predict CI speech recognition outcomes, as well as to identify any improvements after implantation. METHODS Twenty-four postlingually deaf adult CI candidates were tested on the Test of Word Reading Efficiency, Second Edition preoperatively and again 6 months post-CI. Six-month post-CI speech recognition was also assessed across a battery of word and sentence recognition measures. RESULTS Preoperative nonword reading scores were moderately predictive of sentence recognition outcomes, but real word reading scores were not; word recognition scores were not predicted by either. No 6-month post-CI improvement was demonstrated in either word or nonword reading efficiency. CONCLUSION Phonological processing as measured by the Test of Word Reading Efficiency, Second Edition nonword reading moderately predicts 6-month sentence recognition outcomes in adult CI users. Reading efficiency did not improve after implantation, although this could be because of the relatively short duration of CI use.
Affiliation(s)
- Aaron C Moberly
- Department of Otolaryngology-Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Hajera Afreen
- Department of Otolaryngology-Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Kara J Schneider
- Department of Otolaryngology-Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio
35
Stenbäck V, Marsja E, Hällgren M, Lyxell B, Larsby B. Informational Masking and Listening Effort in Speech Recognition in Noise: The Role of Working Memory Capacity and Inhibitory Control in Older Adults With and Without Hearing Impairment. J Speech Lang Hear Res 2022; 65:4417-4428. PMID: 36283680. DOI: 10.1044/2022_jslhr-21-00674.
Abstract
PURPOSE The study aimed to assess the relationship between (a) speech recognition in noise, mask type, working memory capacity (WMC), and inhibitory control and (b) self-rated listening effort, speech material, and mask type, in older adults with and without hearing impairment. It was of special interest to assess the relationship between WMC, inhibitory control, and speech recognition in noise when informational maskers masked target speech. METHOD A mixed design was used. A group (N = 24) of older (Mage = 69.7 years) individuals with hearing impairment and a group of adults with age-normal hearing (Mage = 59.3 years, SD = 6.5) participated in the study. The participants were presented with auditory tests in a sound-attenuated room and with cognitive tests in a quiet office. The participants were asked to rate listening effort after being presented with energetic and informational background maskers in the two speech materials used in this study (i.e., the Hearing In Noise Test and the Hagerman test). Linear mixed-effects models were set up to assess the effect of the two different speech materials, energetic and informational maskers, hearing ability, WMC, inhibitory control, and self-rated listening effort. RESULTS Results showed that WMC and inhibitory control were of importance for speech recognition in noise, even when controlling for pure-tone average 4 hearing thresholds and age, when the maskers were informational. Concerning listening effort, on the other hand, the results suggest that hearing ability, but not cognitive abilities, is important for self-rated listening effort in speech recognition in noise. CONCLUSIONS Speech-in-noise recognition is more dependent on WMC for older adults in informational maskers than in energetic maskers. Hearing ability is a stronger predictor than cognition for self-rated listening effort. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21357648.
Affiliation(s)
- Victoria Stenbäck
- Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Division of Education, Teaching and Learning, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Erik Marsja
- Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Mathias Hällgren
- Department of Otorhinolaryngology in Östergötland and Department of Biomedical and Clinical Sciences, Linköping University, Sweden
- Björn Lyxell
- Disability Research Division, Department of Behavioural Sciences and Learning, Linköping University, Sweden
- Department of Special Needs Education, University of Oslo, Norway
- Birgitta Larsby
- Department of Otorhinolaryngology in Östergötland and Department of Biomedical and Clinical Sciences, Linköping University, Sweden
Collapse
36
Chen Y. Is Cantonese lexical tone information important for sentence recognition accuracy in quiet and in noise? PLoS One 2022; 17:e0276254. [PMID: 36282852 PMCID: PMC9595525 DOI: 10.1371/journal.pone.0276254]
Abstract
In Chinese languages, tones are used to express the lexical meaning of words. It is therefore important to analyze the role of lexical tone in Chinese sentence recognition accuracy. There is a lack of research on the role of Cantonese lexical tones in sentence recognition accuracy. Therefore, this study examined the contribution of lexical tone information to Cantonese sentence recognition accuracy and its cognitive correlates in adults with normal hearing (NH). A text-to-speech synthesis engine was used to synthesize Cantonese daily-use sentences with each word carrying an original or a flat lexical tone, which were then presented to 97 participants in quiet, in speech-shaped noise (SSN), and in two-talker babble (TTB) noise conditions. Both target sentences and noises were presented at 65 dB binaurally via insert headphones. It was found that listeners with NH can almost perfectly recognize a daily-use Cantonese sentence with mismatched lexical tone information in quiet, while their sentence recognition decreases substantially in noise. The same finding was reported for Mandarin, which has a relatively simple tonal system, suggesting that the current results may be applicable to other tonal languages. In addition, working memory (WM) was significantly related to decline in sentence recognition score in the TTB but not in the SSN, when the lexical tones were mismatched. This finding can be explained using the Ease of Language Understanding model and suggests that those with higher WM are less likely to be affected by the degraded lexical information for perceiving daily-use sentences in the TTB.
Affiliation(s)
- Yuan Chen: Department of Special Education and Counselling, Integrated Center for Wellbeing (I-WELL), The Education University of Hong Kong, Taipo, New Territories, Hong Kong SAR, China
37
Winn MB, Teece KH. Effortful Listening Despite Correct Responses: The Cost of Mental Repair in Sentence Recognition by Listeners With Cochlear Implants. J Speech Lang Hear Res 2022; 65:3966-3980. [PMID: 36112516 PMCID: PMC9927629 DOI: 10.1044/2022_jslhr-21-00631]
Abstract
PURPOSE Speech recognition percent correct scores fail to capture the effort of mentally repairing the perception of speech that was initially misheard. This study measured the effort of listening to stimuli specifically designed to elicit mental repair in adults who use cochlear implants (CIs). METHOD CI listeners heard and repeated sentences in which specific words were distorted or masked by noise but recovered based on later context: a signature of mental repair. Changes in pupil dilation were tracked as an index of effort and time-locked with specific landmarks during perception. RESULTS Effort significantly increases when a listener needs to repair a misperceived word, even if the verbal response is ultimately correct. Mental repair of words in a sentence was accompanied by greater prevalence of errors elsewhere in the same sentence, suggesting that effort spreads to consume resources across time. The cost of mental repair in CI listeners was essentially the same as that observed in listeners with normal hearing in previous work. CONCLUSIONS Listening effort as tracked by pupil dilation is better explained by the mental repair and reconstruction of words rather than the appearance of correct or incorrect perception. Linguistic coherence drives effort more heavily than the mere presence of mistakes, highlighting the importance of testing materials that do not constrain coherence by design.
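The core pupillometry operation implied above, baseline-correcting each trial's pupil trace and averaging time-locked to a stimulus landmark, can be illustrated with a generic sketch. The sampling rate, window lengths, and simulated signal are assumptions of this illustration, not the authors' actual pipeline.

```python
# Generic sketch of baseline-corrected, time-locked pupil averaging.
# All parameters (sampling rate, baseline window, trial layout) are illustrative.
import numpy as np

FS = 60  # samples per second (illustrative eye-tracker rate)

def baseline_correct(trials: np.ndarray, onset_idx: int, baseline_s: float = 0.5) -> np.ndarray:
    """Subtract each trial's mean pupil size in the window just before onset."""
    b0 = onset_idx - int(baseline_s * FS)
    baseline = trials[:, b0:onset_idx].mean(axis=1, keepdims=True)
    return trials - baseline

rng = np.random.default_rng(0)
onset = 2 * FS                       # stimulus onset at 2 s into each 6-s trial
t = np.arange(6 * FS) / FS
# Simulated task-evoked dilation: slow rise after onset, plus measurement noise,
# on top of a constant baseline pupil size
dilation = np.where(t > 2, 0.3 * (1 - np.exp(-(t - 2))), 0.0)
trials = 5.0 + dilation + rng.normal(0, 0.05, (30, t.size))

corrected = baseline_correct(trials, onset)
evoked = corrected.mean(axis=0)      # time-locked average across trials
peak = evoked[onset:].max()          # peak task-evoked dilation
print(round(peak, 2))
```

Time-locking the same averaging to different sentence landmarks (e.g., the distorted word versus the disambiguating context) is what lets effort be attributed to specific moments of processing.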
Affiliation(s)
- Matthew B. Winn: Department of Speech-Language-Hearing Sciences, University of Minnesota, Twin Cities, Minneapolis
- Katherine H. Teece: Department of Speech-Language-Hearing Sciences, University of Minnesota, Twin Cities, Minneapolis
38
Is Having Hearing Loss Fundamentally Different? Multigroup Structural Equation Modeling of the Effect of Cognitive Functioning on Speech Identification. Ear Hear 2022; 43:1437-1446. [PMID: 34983896 DOI: 10.1097/aud.0000000000001196]
Abstract
OBJECTIVES Previous research suggests a robust relationship between cognitive functioning and speech-in-noise performance for older adults with age-related hearing loss. For normal-hearing older adults, on the other hand, the research is not entirely clear. Therefore, the current study aimed to examine the relationship between cognitive functioning, aging, and speech-in-noise performance in a group of older normal-hearing persons and a group of older persons with hearing loss who wear hearing aids. DESIGN We analyzed data from 199 older normal-hearing individuals (mean age = 61.2 years) and 200 older individuals with hearing loss (mean age = 60.9 years) using multigroup structural equation modeling. Four cognitive tasks were used to create a cognitive functioning construct: the reading span task, a visuospatial working memory task, the semantic word-pairs task, and Raven's progressive matrices. Speech-in-noise performance was measured using Hagerman sentences, presented via an experimental hearing aid to both the normal-hearing and hearing-impaired groups. The sentences were presented in one of two background noise conditions: the Hagerman original speech-shaped noise or four-talker babble. Each noise condition was also presented with three different hearing aid processing settings: linear processing, fast compression, and noise reduction. RESULTS Cognitive functioning was significantly related to speech-in-noise identification. Moreover, aging had a significant effect on both speech-in-noise performance and cognitive functioning. The final model, with regression weights constrained to be equal for the two groups, had the best fit to the data. Importantly, the relationship between cognitive functioning and speech-in-noise performance did not differ between the two groups. The same pattern was evident for aging: the effects of aging on cognitive functioning and on speech-in-noise performance did not differ between groups.
CONCLUSION Our findings revealed similar effects of cognitive functioning and aging on speech-in-noise performance in older normal-hearing and aided hearing-impaired listeners. In conclusion, the findings support the Ease of Language Understanding model, as cognitive processes play a critical role in speech in noise independent of the hearing status of elderly individuals.
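The multigroup comparison above rests on a standard nested-model test: if constraining regression weights to be equal across groups does not significantly worsen fit, the constrained (equal-weights) model is preferred. A minimal sketch of that chi-square difference test follows; the fit statistics are made-up illustration values, not results from the study.

```python
# Chi-square difference test for nested SEM models: the constrained model
# (equality constraints across groups) versus the freely estimated model.
# The chi-square and df values below are hypothetical illustration numbers.
from scipy.stats import chi2

def chi2_difference(chi2_free, df_free, chi2_constrained, df_constrained):
    """Return (delta chi-square, delta df, p-value) for the added constraints."""
    d_chi2 = chi2_constrained - chi2_free
    d_df = df_constrained - df_free
    return d_chi2, d_df, chi2.sf(d_chi2, d_df)

# Hypothetical example: constraining 4 weights raises chi-square by 3.1
d_chi2, d_df, p = chi2_difference(chi2_free=102.4, df_free=48,
                                  chi2_constrained=105.5, df_constrained=52)
print(p > 0.05)  # non-significant: the equality constraints are tenable
```

A non-significant difference is what licenses the conclusion that the cognition-to-speech-in-noise paths are the same for hearing-impaired and normal-hearing groups.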
39
Rönnberg J, Signoret C, Andin J, Holmer E. The cognitive hearing science perspective on perceiving, understanding, and remembering language: The ELU model. Front Psychol 2022; 13:967260. [PMID: 36118435 PMCID: PMC9477118 DOI: 10.3389/fpsyg.2022.967260]
Abstract
The review gives an introductory description of the successive development of data patterns based on comparisons between hearing-impaired and normal-hearing participants' speech understanding skills, which later prompted the formulation of the Ease of Language Understanding (ELU) model. The model builds on the interaction between an input buffer (RAMBPHO, Rapid Automatic Multimodal Binding of PHOnology) and three memory systems: working memory (WM), semantic long-term memory (SLTM), and episodic long-term memory (ELTM). RAMBPHO input may either match or mismatch multimodal SLTM representations. Given a match, lexical access is accomplished rapidly and implicitly within approximately 100-400 ms. Given a mismatch, the prediction is that WM is engaged explicitly to repair the meaning of the input, in interaction with SLTM and ELTM, taking seconds rather than milliseconds. The multimodal and multilevel nature of the representations held in WM and long-term memory is at the center of the review, these representations being integral parts of the prediction and postdiction components of language understanding. Finally, some hypotheses based on a selective use-disuse of memory systems mechanism are described in relation to mild cognitive impairment and dementia. Alternative speech perception and WM models are evaluated, and recent developments and generalisations, ELU model tests, and boundaries are discussed.
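The match/mismatch logic at the heart of the ELU model can be illustrated with a toy program: input that matches a stored phonological representation triggers rapid implicit lexical access, while degraded input falls below the match threshold and is routed to slower explicit WM repair. The lexicon, similarity measure, threshold, and timings below are schematic assumptions for illustration, not an implementation from the paper.

```python
# Toy illustration of the ELU match/mismatch decision described above.
# Similarity between input and stored word forms stands in for RAMBPHO
# matching; the lexicon and 0.8 threshold are arbitrary choices.
from difflib import SequenceMatcher

LEXICON = {"cat", "cap", "map", "dog"}

def elu_access(phonological_input: str, threshold: float = 0.8):
    """Return (best word, processing route, approximate time course)."""
    best = max(LEXICON, key=lambda w: SequenceMatcher(None, phonological_input, w).ratio())
    sim = SequenceMatcher(None, phonological_input, best).ratio()
    if sim >= threshold:
        return best, "implicit match", "~100-400 ms"
    return best, "explicit WM repair", "seconds"

print(elu_access("cat"))  # intact input: rapid implicit lexical access
print(elu_access("cet"))  # degraded input: mismatch, explicit WM repair
```

The two return branches correspond to the milliseconds-versus-seconds contrast the model predicts for matched versus mismatched input.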
Affiliation(s)
- Jerker Rönnberg: Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
40
Stewart HJ, Cash EK, Hunter LL, Maloney T, Vannest J, Moore DR. Speech cortical activation and connectivity in typically developing children and those with listening difficulties. Neuroimage Clin 2022; 36:103172. [PMID: 36087559 PMCID: PMC9467868 DOI: 10.1016/j.nicl.2022.103172]
Abstract
Listening difficulties (LiD) in people who have normal audiometry are a widespread but poorly understood form of hearing impairment. Recent research suggests that childhood LiD are cognitive rather than auditory in origin. We examined decoding of sentences using a novel combination of behavioral testing and fMRI with 43 typically developing children and 42 age-matched (6-13 years old) children with LiD, categorized by caregiver report (ECLiPS). Both groups had clinically normal hearing. For sentence listening tasks, we found no group differences in fMRI cortical activation by increasingly complex speech stimuli that progressed in emphasis from phonology to intelligibility to semantics. Using resting-state fMRI, we examined the temporal connectivity of cortical auditory and related speech perception networks. We found significant group differences only in cortical connections engaged when processing the more complex speech stimuli. The strength of the affected connections was related to the children's performance on tests of dichotic listening, speech-in-noise, attention, memory, and verbal vocabulary. Together, these results support the novel hypothesis that childhood LiD reflects difficulties in language rather than in auditory or phonological processing.
Affiliation(s)
- Hannah J Stewart: Division of Psychology and Language Sciences, University College London, London, UK; Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Psychology, Lancaster University, Lancaster, UK
- Erin K Cash: Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Lisa L Hunter: Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Thomas Maloney: Pediatric Neuroimaging Research Consortium, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Jennifer Vannest: Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Communication Sciences and Disorders, University of Cincinnati, OH, USA
- David R Moore: Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA; Department of Otolaryngology, College of Medicine, University of Cincinnati, Cincinnati, OH, USA; Manchester Centre for Audiology and Deafness, University of Manchester, Manchester M13 9PL, UK
41
Cowan T, Paroby C, Leibold LJ, Buss E, Rodriguez B, Calandruccio L. Masked-Speech Recognition for Linguistically Diverse Populations: A Focused Review and Suggestions for the Future. J Speech Lang Hear Res 2022; 65:3195-3216. [PMID: 35917458 PMCID: PMC9911100 DOI: 10.1044/2022_jslhr-22-00011]
Abstract
PURPOSE Twenty years ago, von Hapsburg and Peña (2002) wrote a tutorial that reviewed the literature on speech audiometry and bilingualism and outlined valuable recommendations to increase the rigor of the evidence base. This review article returns to that seminal tutorial to reflect on how that advice was applied over the last 20 years and to provide updated recommendations for future inquiry. METHOD We conducted a focused review of the literature on masked-speech recognition for bilingual children and adults. First, we evaluated how studies published since 2002 described bilingual participants. Second, we reviewed the literature on native language masked-speech recognition. Third, we discussed theoretically motivated experimental work. Fourth, we outlined how recent research in bilingual speech recognition can be used to improve clinical practice. RESULTS Research conducted since 2002 commonly describes bilingual samples in terms of their language status, competency, and history. Bilingualism was not consistently associated with poor masked-speech recognition. For example, bilinguals who were exposed to English prior to age 7 years and who were dominant in English performed comparably to monolinguals for masked-sentence recognition tasks. To the best of our knowledge, there are no data to document the masked-speech recognition ability of these bilinguals in their other language compared to a second monolingual group, which is an important next step. Nonetheless, individual factors that commonly vary within bilingual populations were associated with masked-speech recognition and included language dominance, competency, and age of acquisition. We identified methodological issues in sampling strategies that could, in part, be responsible for inconsistent findings between studies. For instance, disparities in socioeconomic status (SES) between recruited bilingual and monolingual groups could cause confounding bias within the research design. 
CONCLUSIONS Dimensions of the bilingual linguistic profile should be considered in clinical practice to inform counseling and (re)habilitation strategies since susceptibility to masking is elevated in at least one language for most bilinguals. Future research should continue to report language status, competency, and history but should also report language stability and demand for use data. In addition, potential confounds (e.g., SES, educational attainment) when making group comparisons between monolinguals and bilinguals must be considered.
Affiliation(s)
- Tiana Cowan: Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Caroline Paroby: Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH
- Lori J. Leibold: Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Emily Buss: Department of Otolaryngology/Head and Neck Surgery, The University of North Carolina at Chapel Hill
- Barbara Rodriguez: Department of Speech and Hearing Sciences, The University of New Mexico, Albuquerque
- Lauren Calandruccio: Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH
42
Ashori M. Working Memory-Based Cognitive Rehabilitation: Spoken Language of Deaf and Hard-of-Hearing Children. J Deaf Stud Deaf Educ 2022; 27:234-244. [PMID: 35543013 DOI: 10.1093/deafed/enac007]
Abstract
This research examined the effect of the Working Memory-based Cognitive Rehabilitation (WMCR) intervention on the spoken language development of deaf and hard-of-hearing (DHH) children. In this clinical trial study, 28 DHH children aged between 5 and 6 years were selected by random sampling. The participants were randomly assigned to experimental and control groups. The experimental group took part in the WMCR intervention over 11 sessions. All participants were assessed pre- and postintervention. Data were collected with the Newsha Development Scale and analyzed through MANCOVA. The results revealed a significant difference in receptive and expressive language scores between the experimental group, which received the WMCR intervention, and the control group. The receptive and expressive language skills of the experimental group showed significant improvement after the intervention. The WMCR intervention is therefore an effective method for developing the spoken language skills of DHH children. These findings have critical implications for teachers, parents, and therapists in supporting young DHH children to develop their language skills.
Affiliation(s)
- Mohammad Ashori: Department of Psychology and Education of People with Special Needs, Faculty of Education and Psychology, University of Isfahan, Isfahan, Iran
43
Nitsan G, Baharav S, Tal-Shir D, Shakuf V, Ben-David BM. Speech Processing as a Far-Transfer Gauge of Serious Games for Cognitive Training in Aging: Randomized Controlled Trial of Web-Based Effectivate Training. JMIR Serious Games 2022; 10:e32297. [PMID: 35900825 PMCID: PMC9400949 DOI: 10.2196/32297]
Abstract
BACKGROUND The number of serious games for cognitive training in aging (SGCTAs) is proliferating in the market, attempting to combat one of the most feared aspects of aging: cognitive decline. However, the efficacy of many SGCTAs is still questionable. Even the measures used to validate SGCTAs are up for debate, with most studies using cognitive measures that gauge improvement in trained tasks, also known as near transfer. This study takes a different approach, testing the efficacy of the SGCTA Effectivate in generating tangible far-transfer improvements in a nontrained task, the Eye tracking of Word Identification in Noise Under Memory Increased Load (E-WINDMIL), which tests speech processing in adverse conditions. OBJECTIVE This study aimed to validate the use of a real-time measure of speech processing as a gauge of the far-transfer efficacy of an SGCTA designed to train executive functions. METHODS In a randomized controlled trial that included 40 participants, we tested 20 (50%) older adults before and after self-administering the SGCTA Effectivate training and compared their performance with that of a control group of 20 (50%) older adults. The E-WINDMIL eye-tracking task was administered to all participants by blinded experimenters in 2 sessions separated by 2 to 8 weeks. RESULTS We tested the change between sessions in the efficiency of segregating the spoken target word from its sound-sharing alternative as the word unfolds in time. Training with the SGCTA Effectivate improved both early and late speech processing in adverse conditions, with higher discrimination scores in the training group than in the control group (early processing: F(1,38) = 7.371, P = .01, ηp² = 0.162; late processing: F(1,38) = 9.003, P = .005, ηp² = 0.192). CONCLUSIONS This study found the E-WINDMIL measure of speech processing to be a valid gauge of the far-transfer effects of executive function training. As the SGCTA Effectivate does not train any auditory task or language processing, our results provide preliminary support for the ability of Effectivate to create a generalized cognitive improvement. Given the crucial role of speech processing in healthy and successful aging, we encourage researchers and developers to use speech processing measures, the E-WINDMIL in particular, to gauge the efficacy of SGCTAs. We advocate for increased industry-wide adoption of far-transfer metrics to gauge SGCTAs.
Affiliation(s)
- Gal Nitsan: Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel; Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Shai Baharav: Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Dalith Tal-Shir: Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Vered Shakuf: Department of Communications Disorders, Achva Academic College, Arugot, Israel
- Boaz M Ben-David: Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel; Toronto Rehabilitation Institute, University Health Networks, Toronto, ON, Canada; Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
44
Sun J, Zhang Z, Sun B, Liu H, Wei C, Liu Y. The effect of aging on context use and reliance on context in speech: A behavioral experiment with Repeat–Recall Test. Front Aging Neurosci 2022; 14:924193. [PMID: 35936762 PMCID: PMC9354826 DOI: 10.3389/fnagi.2022.924193]
Abstract
PURPOSE To elucidate how aging affects the extent of semantic context use and the reliance on semantic context, as measured with the Repeat–Recall Test (RRT). METHODS A younger adult (YA) group aged 18-25 years and an older adult (OA) group aged 50-65 years were recruited. Participants from both groups performed the RRT (sentence repeat and delayed recall tasks, with ratings of subjective listening effort and noise tolerable time) under two noise types and seven signal-to-noise ratios (SNRs). Performance-intensity curves were fitted, and performance at SRT50 and SRT75 was predicted. RESULTS For the repeat task, the OA group used more semantic context and relied more on semantic context than the YA group. For the recall task, the OA group used less semantic context but relied more on context than the YA group. Age did not affect subjective listening effort but significantly affected noise tolerable time. Participants in both age groups could use more context at SRT75 than at SRT50 on the four RRT tasks. At the same SRT, however, the YA group could use more context in the repeat and recall tasks than the OA group. CONCLUSIONS Age affected the use of and reliance on semantic context. Even though the OA group used more context in speech recognition, they failed at speech information maintenance (recall) even with the help of semantic context. The OA group relied more on context while performing the repeat and recall tasks. The amount of context used was also influenced by SRT.
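Fitting a performance-intensity curve and reading off SRT50 and SRT75, as described above, typically means fitting a sigmoid to percent-correct scores across SNRs and inverting it. A minimal sketch on simulated data follows; the logistic form, data points, and parameter values are assumptions of this sketch, not the study's measurements.

```python
# Sketch of performance-intensity curve fitting: a logistic function is
# fitted to percent-correct scores across SNRs, then inverted to predict
# the SNRs yielding 50% and 75% performance (SRT50, SRT75). Data simulated.
import numpy as np
from scipy.optimize import curve_fit

def logistic(snr, srt50, slope):
    """Percent correct as a logistic function of SNR (dB)."""
    return 100.0 / (1.0 + np.exp(-slope * (snr - srt50)))

snrs = np.array([-15, -12, -9, -6, -3, 0, 3], dtype=float)
true_srt50, true_slope = -6.0, 0.6
rng = np.random.default_rng(42)
scores = logistic(snrs, true_srt50, true_slope) + rng.normal(0, 2, snrs.size)

(srt50, slope), _ = curve_fit(logistic, snrs, scores, p0=[-5.0, 0.5])
srt75 = srt50 + np.log(3) / slope  # solving 75 = 100 / (1 + exp(-slope * (x - srt50)))
print(round(srt50, 1), round(srt75, 1))
```

The closed-form step for SRT75 comes from inverting the logistic at 75%: exp(-slope · (x − srt50)) = 1/3, so x = srt50 + ln(3)/slope.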
Affiliation(s)
- Jiayu Sun: Department of Otolaryngology Head and Neck Surgery, Peking University First Hospital, Beijing, China; Department of Otorhinolaryngology, Head and Neck Surgery, Shanghai Ninth People's Hospital, Shanghai Jiao Tong University School of Medicine, Shanghai, China
- Zhikai Zhang: Department of Otolaryngology Head and Neck Surgery, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Baoxuan Sun: Widex Hearing Aid (Shanghai) Co., Ltd, Shanghai, China
- Haotian Liu: Department of Otolaryngology Head and Neck Surgery, West China Hospital of Sichuan University, Chengdu, China
- Chaogang Wei: Department of Otolaryngology Head and Neck Surgery, Peking University First Hospital, Beijing, China
- Yuhe Liu: Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China
45
Bernstein LE, Jordan N, Auer ET, Eberhardt SP. Lipreading: A Review of Its Continuing Importance for Speech Recognition With an Acquired Hearing Loss and Possibilities for Effective Training. Am J Audiol 2022; 31:453-469. [PMID: 35316072 PMCID: PMC9524756 DOI: 10.1044/2021_aja-21-00112]
Abstract
PURPOSE The goal of this review article is to reinvigorate interest in lipreading and lipreading training for adults with acquired hearing loss. Most adults benefit from being able to see the talker when speech is degraded; however, the effect size is related to their lipreading ability, which is typically poor in adults who have experienced normal hearing through most of their lives. Lipreading training has been viewed as a possible avenue for rehabilitation of adults with an acquired hearing loss, but most training approaches have not been particularly successful. Here, we describe lipreading and theoretically motivated approaches to its training, as well as examples of successful training paradigms. We discuss some extensions to auditory-only (AO) and audiovisual (AV) speech recognition. METHOD Visual speech perception and word recognition are described. Traditional and contemporary views of training and perceptual learning are outlined. We focus on the roles of external and internal feedback and the training task in perceptual learning, and we describe results of lipreading training experiments. RESULTS Lipreading is commonly characterized as limited to viseme perception. However, evidence demonstrates subvisemic perception of visual phonetic information. Lipreading words also relies on lexical constraints, not unlike auditory spoken word recognition. Lipreading has been shown to be difficult to improve through training, but under specific feedback and task conditions, training can be successful, and learning can generalize to untrained materials, including AV sentence stimuli in noise. The results on lipreading have implications for AO and AV training and for use of acoustically processed speech in face-to-face communication. CONCLUSION Given its importance for speech recognition with a hearing loss, we suggest that the research and clinical communities integrate lipreading in their efforts to improve speech recognition in adults with acquired hearing loss.
Affiliation(s)
- Lynne E. Bernstein: Department of Speech, Language & Hearing Sciences, George Washington University, Washington, DC
- Nicole Jordan: Department of Speech, Language & Hearing Sciences, George Washington University, Washington, DC
- Edward T. Auer: Department of Speech, Language & Hearing Sciences, George Washington University, Washington, DC
- Silvio P. Eberhardt: Department of Speech, Language & Hearing Sciences, George Washington University, Washington, DC
46
Brungart DS, Sherlock LP, Kuchinsky SE, Perry TT, Bieber RE, Grant KW, Bernstein JGW. Assessment methods for determining small changes in hearing performance over time. J Acoust Soc Am 2022; 151:3866. [PMID: 35778214 DOI: 10.1121/10.0011509]
Abstract
Although the behavioral pure-tone threshold audiogram is considered the gold standard for quantifying hearing loss, assessment of speech understanding, especially in noise, is more relevant to quality of life but is only partly related to the audiogram. Metrics of speech understanding in noise are therefore an attractive target for assessing hearing over time. However, speech-in-noise assessments have more potential sources of variability than pure-tone threshold measures, making it a challenge to obtain results reliable enough to detect small changes in performance. This review examines the benefits and limitations of speech-understanding metrics and their application to longitudinal hearing assessment, and identifies potential sources of variability, including learning effects, differences in item difficulty, and between- and within-individual variations in effort and motivation. We conclude by recommending the integration of non-speech auditory tests, which provide information about aspects of auditory health that have reduced variability and fewer central influences than speech tests, in parallel with the traditional audiogram and speech-based assessments.
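The question above, whether a measured change exceeds test-retest variability, is commonly framed with the standard error of measurement (SEM) and the minimal detectable change (MDC95). A quick sketch with illustrative numbers (not values from the review):

```python
# Standard reliability formulas for judging whether a longitudinal change
# is larger than measurement noise. Input values are illustrative only.
import math

def minimal_detectable_change(sd: float, reliability: float) -> float:
    """MDC95 = 1.96 * sqrt(2) * SEM, where SEM = SD * sqrt(1 - r)."""
    sem = sd * math.sqrt(1.0 - reliability)
    return 1.96 * math.sqrt(2.0) * sem

# e.g., a hypothetical speech-in-noise SRT with SD = 3 dB, test-retest r = 0.85
mdc = minimal_detectable_change(sd=3.0, reliability=0.85)
print(round(mdc, 2))  # an observed change must exceed this (dB) to count as real
```

The formula makes the review's point concrete: every source of variability that lowers test-retest reliability r inflates the change a clinician must observe before concluding that hearing performance has truly shifted.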
Affiliation(s)
- Douglas S Brungart: Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- LaGuinn P Sherlock: Hearing Conservation and Readiness Branch, U.S. Army Public Health Center, E1570 8977 Sibert Road, Aberdeen Proving Ground, Maryland 21010, USA
- Stefanie E Kuchinsky: Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Trevor T Perry: Hearing Conservation and Readiness Branch, U.S. Army Public Health Center, E1570 8977 Sibert Road, Aberdeen Proving Ground, Maryland 21010, USA
- Rebecca E Bieber: Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Ken W Grant: Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
- Joshua G W Bernstein: Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Building 19, Floor 5, 4954 North Palmer Road, Bethesda, Maryland 20889, USA
47
Gray R, Sarampalis A, Başkent D, Harding EE. Working-Memory, Alpha-Theta Oscillations and Musical Training in Older Age: Research Perspectives for Speech-on-speech Perception. Front Aging Neurosci 2022; 14:806439. [PMID: 35645774 PMCID: PMC9131017 DOI: 10.3389/fnagi.2022.806439]
Abstract
During the normal course of aging, perception of speech-on-speech or “cocktail party” speech and use of working memory (WM) abilities change. Musical training, which is a complex activity that integrates multiple sensory modalities and higher-order cognitive functions, reportedly benefits both WM performance and speech-on-speech perception in older adults. This mini-review explores the relationship between musical training, WM and speech-on-speech perception in older age (> 65 years) through the lens of the Ease of Language Understanding (ELU) model. Linking neural-oscillation literature associating speech-on-speech perception and WM with alpha-theta oscillatory activity, we propose that two stages of speech-on-speech processing in the ELU are underpinned by WM-related alpha-theta oscillatory activity, and that effects of musical training on speech-on-speech perception may be reflected in these frequency bands among older adults.
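The alpha-theta oscillatory activity discussed above is typically quantified as spectral power in the theta (roughly 4-8 Hz) and alpha (roughly 8-12 Hz) bands. As a schematic illustration only (simulated signal, illustrative band edges and sampling rate, not the authors' analysis), band power can be estimated from a single channel with Welch's method:

```python
# Sketch of theta/alpha band-power estimation with Welch's method.
# The signal is simulated with a strong 10 Hz (alpha) and weaker 6 Hz
# (theta) component; all parameters are illustrative.
import numpy as np
from scipy.signal import welch

FS = 250  # Hz, illustrative sampling rate
t = np.arange(0, 20, 1 / FS)
rng = np.random.default_rng(7)
sig = 2.0 * np.sin(2 * np.pi * 10 * t) + 0.5 * np.sin(2 * np.pi * 6 * t)
sig += rng.normal(0, 0.5, t.size)

freqs, psd = welch(sig, fs=FS, nperseg=2 * FS)  # 0.5 Hz frequency resolution

def band_power(freqs, psd, lo, hi):
    """Sum PSD bins in [lo, hi) Hz as a relative band-power measure."""
    mask = (freqs >= lo) & (freqs < hi)
    return float(psd[mask].sum())

alpha = band_power(freqs, psd, 8.0, 12.0)
theta = band_power(freqs, psd, 4.0, 8.0)
print(alpha > theta)  # the simulated alpha component dominates
```

Comparing such band-power estimates across conditions or groups is one way the oscillation-WM links reviewed above are operationalized.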
Collapse
Affiliation(s)
- Ryan Gray
- Department of Experimental Psychology, University of Groningen, Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, Netherlands
- Department of Psychology, Centre for Applied Behavioural Sciences, School of Social Sciences, Heriot-Watt University, Edinburgh, United Kingdom
| | - Anastasios Sarampalis
- Department of Experimental Psychology, University of Groningen, Groningen, Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, Netherlands
| | - Deniz Başkent
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, Netherlands
- Department of Otorhinolaryngology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
| | - Eleanor E. Harding
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, Netherlands
- Department of Otorhinolaryngology, University of Groningen, University Medical Center Groningen, Groningen, Netherlands
- *Correspondence: Eleanor E. Harding
| |
Collapse
|
48
|
Neal K, McMahon CM, Hughes SE, Boisvert I. Listening-Based Communication Ability in Adults With Hearing Loss: A Scoping Review of Existing Measures. Front Psychol 2022; 13:786347. [PMID: 35360643 PMCID: PMC8960922 DOI: 10.3389/fpsyg.2022.786347] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/30/2021] [Accepted: 01/31/2022] [Indexed: 11/13/2022] Open
Abstract
Introduction Hearing loss in adults has a pervasive impact on health and well-being. Its effects on everyday listening and communication can directly influence participation across multiple spheres of life. These impacts, however, remain poorly assessed within clinical settings. Whilst various tests and questionnaires that measure listening and communication abilities are available, there is a lack of consensus about which measures assess the factors that are most relevant to optimising auditory rehabilitation. This study aimed to map current measures used in published studies to evaluate listening skills needed for oral communication in adults with hearing loss. Methods A scoping review was conducted using systematic searches in Medline, EMBASE, Web of Science and Google Scholar to retrieve peer-reviewed articles that used one or more linguistic-based measures necessary for oral communication in adults with hearing loss. The range of measures identified and their frequency were charted in relation to auditory hierarchies, linguistic domains, health status domains, and associated neuropsychological and cognitive domains. Results 9121 articles were identified, and 2579 articles reporting on 6714 discrete measures were included for further analysis. The predominant linguistic-based measure reported was word or sentence identification in quiet (65.9%). In contrast, discourse-based measures were used in 2.7% of the included articles. Of the included studies, 36.6% used a self-reported instrument purporting to measure listening for communication. Consistent with previous studies, a large number of self-reported measures were identified (n = 139), but 60.4% of these measures were used in only one study and 80.7% were cited five times or fewer. Discussion Current measures used in published studies to assess listening abilities relevant to oral communication target a narrow set of domains. Concepts of communicative interaction have limited representation in current measurement. The lack of measurement consensus and the heterogeneity amongst assessments limit comparisons across studies. Furthermore, extracted measures rarely consider the broader linguistic, cognitive and interactive elements of communication. Consequently, existing measures may have limited clinical application when assessing the listening-related skills required for communication in daily life, as experienced by adults with hearing loss.
Collapse
Affiliation(s)
- Katie Neal
- Department of Linguistics, Macquarie University, Sydney, NSW, Australia
| | - Catherine M. McMahon
- Department of Linguistics, Macquarie University, Sydney, NSW, Australia
- Hearing, Macquarie University, Sydney, NSW, Australia
| | - Sarah E. Hughes
- Centre for Patient Reported Outcome Research, Institute of Applied Health Research, University of Birmingham, Birmingham, United Kingdom
- National Institute of Health Research (NIHR), Applied Research Collaboration (ARC), West Midlands, United Kingdom
- Faculty of Medicine, Health and Life Science, Swansea University, Swansea, United Kingdom
| | - Isabelle Boisvert
- Hearing, Macquarie University, Sydney, NSW, Australia
- Sydney School of Health Sciences, Faculty of Medicine and Health, The University of Sydney, Sydney, NSW, Australia
| |
Collapse
|
49
|
Feldman A, Patou F, Baumann M, Stockmarr A, Waldemar G, Maier AM, Vogel A. Listen Carefully protocol: an exploratory case-control study of the association between listening effort and cognitive function. BMJ Open 2022; 12:e051109. [PMID: 35264340 PMCID: PMC8915370 DOI: 10.1136/bmjopen-2021-051109] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/11/2022] Open
Abstract
INTRODUCTION A growing body of evidence suggests that hearing loss is a significant and potentially modifiable risk factor for cognitive impairment. Although the mechanisms underlying the associations between cognitive decline and hearing loss are unclear, listening effort has been posited as one of the mechanisms involved in cognitive decline in older age. To date, there has been a lack of research investigating this association, particularly among adults with mild cognitive impairment (MCI). METHODS AND ANALYSIS 15-25 cognitively healthy participants and 15-25 patients with MCI (age 40-85 years) will be recruited to participate in an exploratory study investigating the association between cognitive functioning and listening effort. Both behavioural and objective measures of listening effort will be investigated. The sentence-final word identification and recall (SWIR) test will be administered with single-talker non-intelligible speech background noise while pupil dilation is monitored. Evaluation of cognitive function will be carried out in a clinical setting using a battery of neuropsychological tests. This study is exploratory and serves as a proof of concept, with its findings intended to inform the feasibility of larger-scale trials. ETHICS AND DISSEMINATION Written approval exemption was obtained from the Scientific Ethics Committee of the Capital Region of Denmark (De Videnskabsetiske Komiteer i Region Hovedstaden), reference 19042404, and the project is registered pre-results at clinicaltrials.gov, reference NCT04593290, Protocol ID 19042404. Study results will be disseminated in peer-reviewed journals and at conferences.
Collapse
Affiliation(s)
- Alix Feldman
- Engineering Systems Design, Department of Technology Management and Economics, Technical University of Denmark, Kongens Lyngby, Denmark
| | - François Patou
- Engineering Systems Design, Department of Technology Management and Economics, Technical University of Denmark, Kongens Lyngby, Denmark
- Research and Technology Group, Oticon Medical, Smørum, Denmark
| | - Monika Baumann
- Centre for Applied Audiology Research, Oticon, Smørum, Denmark
| | - Anders Stockmarr
- Statistics and Data Analysis, Department of Mathematics, Technical University of Denmark, Kongens Lyngby, Denmark
| | - Gunhild Waldemar
- Danish Dementia Research Centre, Department of Neurology, Rigshospitalet, Copenhagen, Denmark
- Department of Clinical Medicine, University of Copenhagen, Copenhagen, Denmark
| | - Anja M Maier
- Engineering Systems Design, Department of Technology Management and Economics, Technical University of Denmark, Kongens Lyngby, Denmark
- Department of Design, Manufacturing and Engineering Management, Faculty of Engineering, University of Strathclyde, Glasgow, UK
| | - Asmus Vogel
- Danish Dementia Research Centre, Department of Neurology, Rigshospitalet, Copenhagen, Denmark
- Department of Psychology, University of Copenhagen, Copenhagen, Denmark
| |
Collapse
|
50
|
Bsharat-Maalouf D, Karawani H. Bilinguals' speech perception in noise: Perceptual and neural associations. PLoS One 2022; 17:e0264282. [PMID: 35196339 PMCID: PMC8865662 DOI: 10.1371/journal.pone.0264282] [Citation(s) in RCA: 7] [Impact Index Per Article: 3.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/05/2021] [Accepted: 02/07/2022] [Indexed: 01/26/2023] Open
Abstract
The current study characterized subcortical speech sound processing among monolinguals and bilinguals in quiet and challenging listening conditions and examined the relation between subcortical neural processing and perceptual performance. A total of 59 normal-hearing adults, ages 19–35 years, participated in the study: 29 native Hebrew-speaking monolinguals and 30 Arabic-Hebrew-speaking bilinguals. Auditory brainstem responses to speech sounds were collected in a quiet condition and with background noise. The perception of words and sentences in quiet and background noise conditions was also examined to assess perceptual performance and to evaluate the perceptual-physiological relationship. Perceptual performance was tested among bilinguals in both languages (first language (L1-Arabic) and second language (L2-Hebrew)). The outcomes were similar between monolingual and bilingual groups in quiet. Noise, as expected, resulted in deterioration in perceptual and neural responses, which was reflected in lower accuracy in perceptual tasks compared to quiet, and in more prolonged latencies and diminished neural responses. However, a mixed picture was observed among bilinguals in perceptual and physiological outcomes in noise. In the perceptual measures, bilinguals were significantly less accurate than their monolingual counterparts. However, in neural responses, bilinguals demonstrated earlier peak latencies compared to monolinguals. Our results also showed that perceptual performance in noise was related to subcortical resilience to the disruption caused by background noise. Specifically, in noise, increased brainstem resistance (i.e., fewer changes in the fundamental frequency (F0) representations or fewer shifts in the neural timing) was related to better speech perception among bilinguals. Better perception in L1 in noise was correlated with fewer changes in F0 representations, and more accurate perception in L2 was related to minor shifts in auditory neural timing. This study highlights the value of neural brainstem responses to speech sounds for differentiating individuals with different language histories and for explaining inter-subject variability in bilinguals' perceptual abilities in daily life situations.
Collapse
Affiliation(s)
- Dana Bsharat-Maalouf
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
| | - Hanin Karawani
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
| |
Collapse
|