1
Lew EC, Sares A, Gilbert AC, Zhang Y, Lehmann A, Deroche M. Differences Between French and English in the Use of Suprasegmental Cues for the Short-Term Recall of Word Lists. J Speech Lang Hear Res 2024; 67:3748-3761. [PMID: 39320319 DOI: 10.1044/2024_jslhr-23-00655]
Abstract
PURPOSE Greater recognition of the impact of hearing loss on cognitive functions has led speech/hearing clinics to focus more on auditory memory outcomes. Auditory memory is typically evaluated by scoring participants' recall of a list of unrelated words after they have heard the list read aloud, a method that necessarily involves pitch and timing variations across words. Here, we asked whether these variations could affect performance differently from one language to another. METHOD In a series of online studies evaluating auditory short-term memory in normally hearing adults, we examined how pitch patterns (Experiment 1), timing patterns (Experiment 2), and interactions between the two (Experiment 3) affected free recall of words, cued recall of forgotten words, and mental demand. Note that visual memory was never directly tested; written words were only used after auditory encoding in the cued recall part. Studies were administered in both French and English, always conducted with native listeners. RESULTS Confirming prior work, grouping mechanisms facilitated free recall, but not cued recall (the latter being affected only by longer presentation time) or ratings of mental demand. Critically, grouping by pitch provided more benefit for French than for English listeners, while grouping by time was equally beneficial in both languages. CONCLUSION Pitch is more useful to French- than to English-speaking listeners for encoding spoken words in short-term memory, perhaps due to the syllable-based versus stress-based rhythms inherent to each language. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.27048328.
Affiliation(s)
- Emilia C Lew
- Laboratory for Hearing and Cognition, Department of Psychology, Concordia University, Montréal, Quebec, Canada
- Centre for Research on Brain, Language and Music, Montréal, Québec, Canada
- Anastasia Sares
- Department of Psychology, Colorado State University, Fort Collins
- Annie C Gilbert
- Centre for Research on Brain, Language and Music, Montréal, Québec, Canada
- School of Communication Sciences and Disorders, McGill University, Montréal, Québec, Canada
- Alexandre Lehmann
- Centre for Research on Brain, Language and Music, Montréal, Québec, Canada
- Department of Otolaryngology - Head and Neck Surgery, Faculty of Medicine and Health Sciences, McGill University, Montréal, Québec, Canada
- Mickael Deroche
- Laboratory for Hearing and Cognition, Department of Psychology, Concordia University, Montréal, Quebec, Canada
- Centre for Research on Brain, Language and Music, Montréal, Québec, Canada
2
Brisson V, Tremblay P. Assessing the Impact of Transcranial Magnetic Stimulation on Speech Perception in Noise. J Cogn Neurosci 2024; 36:2184-2207. [PMID: 39023366 DOI: 10.1162/jocn_a_02224]
Abstract
Healthy aging is associated with reduced speech perception in noise (SPiN) abilities. The etiology of these difficulties remains elusive, which prevents the development of new strategies to optimize the speech processing network and reduce these difficulties. The objective of this study was to determine if sublexical SPiN performance can be enhanced by applying TMS to three regions involved in processing speech: the left posterior temporal sulcus, the left superior temporal gyrus, and the left ventral premotor cortex. The second objective was to assess the impact of several factors (age, baseline performance, target, brain structure, and activity) on post-TMS SPiN improvement. The results revealed that participants with lower baseline performance were more likely to improve. Moreover, in older adults, cortical thickness within the target areas was negatively associated with performance improvement, whereas this association was null in younger individuals. No differences between the targets were found. This study suggests that TMS can modulate sublexical SPiN performance, but that the strength and direction of the effects depend on a complex combination of contextual and individual factors.
Affiliation(s)
- Valérie Brisson
- Université Laval, School of Rehabilitation Sciences, Québec, Canada
- Centre de recherche CERVO, Québec, Canada
- Pascale Tremblay
- Université Laval, School of Rehabilitation Sciences, Québec, Canada
- Centre de recherche CERVO, Québec, Canada
3
Mosnier I, Belmin J, Cuda D, Manrique Huarte R, Marx M, Ramos Macias A, Khnifes R, Hilly O, Bovo R, James CJ, Graham PL, Greenham P. Cognitive processing speed improvement after cochlear implantation. Front Aging Neurosci 2024; 16:1444330. [PMID: 39355541 PMCID: PMC11442269 DOI: 10.3389/fnagi.2024.1444330]
Abstract
Background Untreated hearing loss has an effect on cognition. It is hypothesized that the additional processing required to compensate for the sensory loss reduces the cognitive resources available for other tasks and that this could be mitigated by a hearing device. Methods The impact of cochlear implants (CIs) on cognition was tested in 100 subjects, ≥60 years old, with bilateral moderately severe to profound postlinguistic deafness using hearing aids. Data were compared before cochlear implantation and 12 and 18 months after, for the Speech, Spatial and Qualities of Hearing questionnaire, Mini-Mental State Examination (MMSE), Trail Making Test B (TMTB), digit symbol coding (DSC) from the Wechsler Adult Intelligence Scale version IV, and the Timed Up and Go (TUG) test. Subjects were divided into young old (60-64), middle old (65-75), and old old (75+) groups. Cognitive test scores and times were standardized according to available normative data. Results Hearing significantly improved pre- to post-operatively across all age groups. There was no change post-implant in outcomes for the TMTB, TUG, or MMSE tests. Age-corrected values were within normal expectations for all age groups for the TUG and MMSE. However, DSC scores and TMTB times were worse than normal. There was a significant increase in DSC scores between baseline and 12 months for 60- to 64-year-olds (t[153] = 2.608, p = 0.027), which remained at 18 months (t[153] = 2.663, p = 0.023). Discussion The improved attention and processing speed in the youngest age group may be a consequence of reallocation of cognitive resources away from auditory processing due to greatly improved hearing. The oldest age group of participants had cognition scores closest to normal values, suggesting that only the most able older seniors tend to come forward for a CI.
Severely to profoundly deaf individuals with hearing aids or cochlear implants still performed more poorly than age-equivalent normally hearing individuals with respect to cognitive flexibility, attention, working memory, processing speed, and visuoperceptual functions. Because the literature lacks TUG, TMTB, and DSC data for hearing-impaired individuals, the results reported here provide an important set of reference data for use in future research.
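The standardization of cognitive scores and times against normative data described in this abstract can be sketched as a z-score conversion; the function name and the normative values below are hypothetical illustrations, not numbers taken from the article:

```python
def standardize(score: float, norm_mean: float, norm_sd: float,
                lower_is_better: bool = False) -> float:
    """Convert a raw test result to a z score against normative data.

    For timed tests (e.g., TMTB completion time, TUG time), where a
    lower raw value is better, the sign is flipped so that a higher
    z score always means better-than-norm performance.
    """
    z = (score - norm_mean) / norm_sd
    return -z if lower_is_better else z

# Hypothetical example: a TMTB completion time of 95 s against a
# normative mean of 80 s (SD 20 s) flips to z = -0.75 (below norm).
print(standardize(95, 80, 20, lower_is_better=True))  # -> -0.75
```

Flipping the sign for timed tests keeps all standardized measures on a common "higher is better" scale, which is what allows statements such as "TMTB times were worse than normal" to be read alongside the score-based tests.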
Affiliation(s)
- Isabelle Mosnier
- Unité Fonctionnelle Implants Auditifs, ORL, GH Pitié-Salpêtrière, AP-HP Sorbonne Université and Université Paris Cité, Institut Pasteur, AP-HP, Inserm, Fondation Pour l’Audition, Institut de l’Audition, Paris, France
- Joël Belmin
- Sorbonne Université and Hôpital Charles Foix, Paris, France
- Domenico Cuda
- Ospedale Guglielmo da Saliceto, University of Parma, Piacenza, Italy
- Angel Ramos Macias
- Complejo Hospitalario Universitario Insular Materno Infantil, Las Palmas de Gran Canaria, Spain
- Ohad Hilly
- Rabin Medical Center, Petah Tikva, Sackler Faculty of Medicine, Tel Aviv University, Israel
- Petra L. Graham
- School of Mathematical and Physical Sciences, Macquarie University, Sydney, NSW, Australia
- Paula Greenham
- Greenham Research Consulting Ltd., Ashbury, United Kingdom
4
Golob EJ, Olayo RC, Brown DMY, Mock JR. Relations Among Multiple Dimensions of Self-Reported Listening Effort in Response to an Auditory Psychomotor Vigilance Task. J Speech Lang Hear Res 2024; 67:3217-3231. [PMID: 39116317 PMCID: PMC11427424 DOI: 10.1044/2024_jslhr-23-00465]
Abstract
PURPOSE Listening effort is a broad construct, and there is no consensus on how to subdivide listening effort into dimensions. This project focuses on the subjective experience of effortful listening and tests if cognitive workload, mental fatigue, and mood are interrelated dimensions. METHOD Two online studies tested young adults (n = 74 and n = 195) and measured subjective workload, fatigue (subscales of fatigue and energy), and mood (subscales of positive and negative mood) before and after a challenging listening task. In the listening effort task, participants responded to intermittent 1-kHz target tones in continuous white noise for approximately 12 min. RESULTS Correlations and principal component analysis showed that fatigue and mood were distinct but interrelated constructs that weakly correlated with workload. Effortful listening provoked increased fatigue and decreased energy and positive mood yet did not influence negative mood or workload. CONCLUSIONS The findings suggest that self-reported listening effort has multiple dimensions that can have different responses to the same effortful listening episode. The results can help guide evidence-based development of clinical listening effort tests and may reveal mechanisms for how listening effort relates to quality of life in those with hearing impairment. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.26418976.
5
Kronenberger WG, Castellanos I, Pisoni DB. Association of domain-general speed of information processing with spoken language outcomes in prelingually-deaf children with cochlear implants. Hear Res 2024; 450:109069. [PMID: 38889562 PMCID: PMC11260235 DOI: 10.1016/j.heares.2024.109069]
Abstract
Spoken language development after pediatric cochlear implantation requires rapid and efficient processing of novel, degraded auditory signals and linguistic information. These demands for rapid adaptation tax the information processing speed ability of children who receive cochlear implants. This study investigated the association of speed of information processing ability with spoken language outcomes after cochlear implantation in prelingually deaf children aged 4-6 years. Two domain-general (visual, non-linguistic) speed of information processing measures were administered to 21 preschool-aged children with cochlear implants and 23 normal-hearing peers. Measures of speech recognition, language (vocabulary and comprehension), nonverbal intelligence, and executive functioning skills were also obtained from each participant. Speed of information processing was positively associated with speech recognition and language skills in preschool-aged children with cochlear implants but not in normal-hearing peers. This association remained significant after controlling for hearing group, age, nonverbal intelligence, and executive functioning skills. These findings are consistent with models suggesting that domain-general, fast, efficient information processing underlies adaptation to speech perception and language learning following implantation. Assessment and intervention strategies targeting speed of information processing may improve understanding and development of speech-language skills after cochlear implantation.
Affiliation(s)
- William G Kronenberger
- Department of Otolaryngology - Head and Neck Surgery, Indiana University School of Medicine, Indianapolis, IN 46202, USA; Department of Psychiatry, Indiana University School of Medicine, Indianapolis, IN 46202, USA.
- Irina Castellanos
- Department of Otolaryngology - Head and Neck Surgery, Indiana University School of Medicine, Indianapolis, IN 46202, USA; Department of Psychiatry, Indiana University School of Medicine, Indianapolis, IN 46202, USA
- David B Pisoni
- Department of Otolaryngology - Head and Neck Surgery, Indiana University School of Medicine, Indianapolis, IN 46202, USA; Department of Psychiatry, Indiana University School of Medicine, Indianapolis, IN 46202, USA; Department of Psychological and Brain Sciences, Indiana University, Bloomington, IN 47405, USA
6
Mathews L, Schafer EC, Gopal KV, Lam B, Miller S. Speech-in-Noise and Dichotic Auditory Training for Students With Autism Spectrum Disorder. Lang Speech Hear Serv Sch 2024:1-14. [PMID: 39008496 DOI: 10.1044/2024_lshss-23-00168]
Abstract
PURPOSE Individuals diagnosed with autism spectrum disorder (ASD) often exhibit auditory processing issues, including poor speech recognition in background noise and dichotic processing (integration of different stimuli presented to the two ears). Auditory training could mitigate these auditory difficulties. However, few auditory training programs have been designed to target specific listening deficits for students with ASD. The present study summarizes the development of an innovative, one-on-one, clinician-developed speech-in-noise (SIN) training program that has not been previously described and an existing dichotic auditory training program to address common auditory processing deficits in students with ASD. METHOD Twenty verbal students with ASD, ages 7-17 years, completed a one-on-one, clinician-developed SIN training program and a commercially available dichotic training program 2-3 times a week (30-45 min per session) for 12 weeks. Maximum and minimum training levels from the SIN and dichotic training programs were analyzed statistically to document changes in training level over the training period. RESULTS Analyses of the pre- and posttraining data revealed significant improvements in training level for both the SIN and dichotic training programs. CONCLUSIONS Overall, the proposed SIN training resulted in significant improvements in training level and may be used along with dichotic training to improve some of the most common auditory processing issues documented in verbal individuals with ASD requiring minimal support. Both types of auditory training may be implemented in one-on-one therapy in clinics and in the schools.
Affiliation(s)
- Lauren Mathews
- Department of Audiology & Speech-Language Pathology, University of North Texas, Denton
- Erin C Schafer
- Department of Audiology & Speech-Language Pathology, University of North Texas, Denton
- Kamakshi V Gopal
- Department of Audiology & Speech-Language Pathology, University of North Texas, Denton
- Boji Lam
- Department of Audiology & Speech-Language Pathology, University of North Texas, Denton
- Sharon Miller
- Department of Audiology & Speech-Language Pathology, University of North Texas, Denton
7
Salvago P, Vaccaro D, Plescia F, Vitale R, Cirrincione L, Evola L, Martines F. Client Oriented Scale of Improvement in First-Time and Experienced Hearing Aid Users: An Analysis of Five Predetermined Predictability Categories through Audiometric and Speech Testing. J Clin Med 2024; 13:3956. [PMID: 38999521 PMCID: PMC11242641 DOI: 10.3390/jcm13133956]
Abstract
Objectives: The aim of our investigation was to explore the relationship between unaided pure-tone and speech audiometry and self-reported aided performance measured according to five predetermined COSI categories among first-time and experienced hearing aid users. Methods: Data from 286 patients were retrospectively evaluated. We divided the sample into first-time hearing aid users (G1) and experienced hearing aid users (G2). The correlation between unaided tonal and speech audiometry and five preliminarily selected Client Oriented Scale of Improvement (COSI) categories was studied. Results: A greater percentage of hearing aid users aged >80 years and a higher prevalence of severe-to-profound hearing loss were observed in the G2 group (p < 0.05). For the total cohort, a mean hearing threshold of 60.37 ± 18.77 dB HL emerged in the right ear, and 59.97 ± 18.76 dB HL was detected in the left ear (p > 0.05). A significant difference was observed in the group of first-time hearing aid users for the "Television/Radio at normal volume" item, where a lower speech intelligibility threshold (SIT) was associated with higher COSI scores (p = 0.019). Studying the relationship between the speech reception threshold (SRT) and the COSI item "conversation with 1 or 2 in noise" revealed worse speech audiometry in patients who scored ≤2 among experienced hearing aid users (p = 0.00012); a higher mean threshold across 4-8 kHz for the better ear was found within the G2 group among those who scored ≤2 on the COSI item "conversation with 1 or 2 in quiet" (p = 0.043). Conclusions: Our study confirms a poor correlation between unaided tonal and speech audiometry and self-reported patient assessment. Although we included only five COSI categories in this study, it is clear that unaided audiometric tests may guide the choice of proper hearing rehabilitation, but their value in predicting the benefit of hearing aids remains limited.
Affiliation(s)
- Pietro Salvago
- Dipartimento di Biomedicina, Neuroscienze e Diagnostica Avanzata (BiND), Sezione di Audiologia, Università degli Studi di Palermo, Via del Vespro 129, 90127 Palermo, Italy
- Davide Vaccaro
- UOSD Audiologia, Azienda Ospedaliera Universitaria Policlinico-A.O.U.P. "Paolo Giaccone", Via del Vespro 129, 90127 Palermo, Italy
- Fulvio Plescia
- Dipartimento di Promozione della Salute, Materno-Infantile, di Medicina Interna e Specialistica di Eccellenza "G. D'Alessandro", University of Palermo, Via del Vespro 133, 90127 Palermo, Italy
- Rossana Vitale
- UOSD Audiologia, Azienda Ospedaliera Universitaria Policlinico-A.O.U.P. "Paolo Giaccone", Via del Vespro 129, 90127 Palermo, Italy
- Luigi Cirrincione
- Dipartimento di Promozione della Salute, Materno-Infantile, di Medicina Interna e Specialistica di Eccellenza "G. D'Alessandro", University of Palermo, Via del Vespro 133, 90127 Palermo, Italy
- Lucrezia Evola
- UOSD Audiologia, Azienda Ospedaliera Universitaria Policlinico-A.O.U.P. "Paolo Giaccone", Via del Vespro 129, 90127 Palermo, Italy
- Francesco Martines
- Dipartimento di Biomedicina, Neuroscienze e Diagnostica Avanzata (BiND), Sezione di Audiologia, Università degli Studi di Palermo, Via del Vespro 129, 90127 Palermo, Italy
8
Khayr R, Khnifes R, Shpak T, Banai K. Task-Specific Rapid Auditory Perceptual Learning in Adult Cochlear Implant Recipients: What Could It Mean for Speech Recognition? Ear Hear 2024:00003446-990000000-00285. [PMID: 38829780 DOI: 10.1097/aud.0000000000001523]
Abstract
OBJECTIVES Speech recognition in cochlear implant (CI) recipients is quite variable, particularly in challenging listening conditions. Demographic, audiological, and cognitive factors explain some, but not all, of this variance. The literature suggests that rapid auditory perceptual learning explains unique variance in speech recognition in listeners with normal hearing and those with hearing loss. The present study focuses on the early adaptation phase of task-specific rapid auditory perceptual learning. It investigates whether adult CI recipients exhibit this learning and, if so, whether it accounts for portions of the variance in their recognition of fast speech and speech in noise. DESIGN Thirty-six adult CI recipients (ages = 35 to 77, M = 55) completed a battery of general speech recognition tests (sentences in speech-shaped noise, four-talker babble noise, and natural-fast speech), cognitive measures (vocabulary, working memory, attention, and verbal processing speed), and a rapid auditory perceptual learning task with time-compressed speech. Accuracy in the general speech recognition tasks was modeled with a series of generalized mixed models that accounted for demographic, audiological, and cognitive factors before accounting for the contribution of task-specific rapid auditory perceptual learning of time-compressed speech. RESULTS Most CI recipients exhibited early task-specific rapid auditory perceptual learning of time-compressed speech within the course of the first 20 sentences. This early task-specific rapid auditory perceptual learning made a unique contribution to the recognition of natural-fast speech in quiet and speech in noise, although the contribution to natural-fast speech may reflect the rapid learning that occurred in this task.
When accounting for demographic and cognitive characteristics, an increase of 1 SD in the early task-specific rapid auditory perceptual learning rate was associated with a ~52% increase in the odds of correctly recognizing natural-fast speech in quiet, and a ~19% to 28% increase in the odds of correctly recognizing the different types of speech in noise. Age, vocabulary, attention, and verbal processing speed also made unique contributions to general speech recognition. However, their contribution varied between the different general speech recognition tests. CONCLUSIONS Consistent with previous findings in other populations, early task-specific rapid auditory perceptual learning also accounts for some of the individual differences among CI recipients in the recognition of speech in noise and natural-fast speech in quiet. Thus, across populations, the early rapid adaptation phase of task-specific rapid auditory perceptual learning might serve as a skill that supports speech recognition in various adverse conditions. In CI users, the ability to rapidly adapt to ongoing acoustical challenges may be one of the factors associated with good CI outcomes. Overall, CI recipients with higher cognitive resources and faster rapid learning rates had better speech recognition.
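The percent changes in odds reported in this abstract follow from exponentiating a standardized logistic-regression coefficient; a minimal sketch, in which the function name and the coefficient value are hypothetical illustrations chosen to reproduce the ~52% figure, not numbers taken from the article:

```python
import math

def odds_change_per_sd(beta: float) -> float:
    """Percent change in the odds of correct recognition for a +1 SD
    increase in a predictor, given its standardized logistic-regression
    coefficient beta: 100 * (exp(beta) - 1)."""
    return (math.exp(beta) - 1.0) * 100.0

# A hypothetical standardized coefficient of ~0.42 corresponds to
# roughly the ~52% odds increase reported for natural-fast speech in
# quiet; smaller coefficients give the ~19%-28% range for noise.
print(round(odds_change_per_sd(0.42)))  # -> 52
```

This is why odds changes from a generalized mixed model are multiplicative rather than additive: a 2 SD increase in learning rate would scale the odds by exp(2*beta), not by twice the single-SD percentage.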
Affiliation(s)
- Ranin Khayr
- Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Studies, University of Haifa, Haifa, Israel
- Department of Otolaryngology-Head and Neck Surgery, Bnai-Zion Medical Center, Technion-Bruce Rappaport Faculty of Medicine, Haifa, Israel
- Riyad Khnifes
- Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Studies, University of Haifa, Haifa, Israel
- Department of Otolaryngology-Head and Neck Surgery, Bnai-Zion Medical Center, Technion-Bruce Rappaport Faculty of Medicine, Haifa, Israel
- Talma Shpak
- Department of Otolaryngology-Head and Neck Surgery, Bnai-Zion Medical Center, Technion-Bruce Rappaport Faculty of Medicine, Haifa, Israel
- Karen Banai
- Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Studies, University of Haifa, Haifa, Israel
9
Brown VA, Sewell K, Villanueva J, Strand JF. Noisy speech impairs retention of previously heard information only at short time scales. Mem Cognit 2024. [PMID: 38758512 DOI: 10.3758/s13421-024-01583-y]
Abstract
When speech is presented in noise, listeners must recruit cognitive resources to resolve the mismatch between the noisy input and representations in memory. A consequence of this effortful listening is impaired memory for content presented earlier. In the first study on effortful listening, Rabbitt, The Quarterly Journal of Experimental Psychology, 20, 241-248 (1968; Experiment 2) found that recall for a list of digits was poorer when subsequent digits were presented with masking noise than without. Experiment 3 of that study extended this effect to more naturalistic, passage-length materials. Although the findings of Rabbitt's Experiment 2 have been replicated multiple times, no work has assessed the robustness of Experiment 3. We conducted a replication attempt of Rabbitt's Experiment 3 at three signal-to-noise ratios (SNRs). Results at one of the SNRs (Experiment 1a of the current study) were in the opposite direction from what Rabbitt, The Quarterly Journal of Experimental Psychology, 20, 241-248 (1968) reported - that is, speech was recalled more accurately when it was followed by speech presented in noise rather than in the clear - and results at the other two SNRs showed no effect of noise (Experiments 1b and 1c). In addition, reanalysis of a replication of Rabbitt's seminal finding in his second experiment showed that the effect of effortful listening on previously presented information is transient. Thus, effortful listening caused by noise appears to impair memory only for information presented immediately before the noise, which may account for our finding that noise in the second half of a long passage did not impair recall of information presented in the first half of the passage.
Affiliation(s)
- Violet A Brown
- Department of Psychology, Carleton College, Northfield, MN, USA.
- Katrina Sewell
- Department of Psychology, Carleton College, Northfield, MN, USA
- Jed Villanueva
- Department of Psychology, Carleton College, Northfield, MN, USA
- Julia F Strand
- Department of Psychology, Carleton College, Northfield, MN, USA
10
Phillips I, Bieber RE, Dirks C, Grant KW, Brungart DS. Age Impacts Speech-in-Noise Recognition Differently for Nonnative and Native Listeners. J Speech Lang Hear Res 2024; 67:1602-1623. [PMID: 38569080 DOI: 10.1044/2024_jslhr-23-00470]
Abstract
PURPOSE The purpose of this study was to explore potential differences in suprathreshold auditory function among native and nonnative speakers of English as a function of age. METHOD Retrospective analyses were performed on three large data sets containing suprathreshold auditory tests completed by 5,572 participants who were self-identified native and nonnative speakers of English between the ages of 18 and 65 years, including a binaural tone detection test, a digit identification test, and a sentence recognition test. RESULTS The analyses show a significant interaction between increasing age and participant group on tests involving speech-based stimuli (digit strings, sentences) but not on the binaural tone detection test. For both speech tests, differences in speech recognition emerged between groups during early adulthood, and increasing age had a more negative impact on word recognition for nonnative than for native participants. Age-related declines in performance were 2.9 times faster for digit strings and 3.3 times faster for sentences for nonnative participants compared to native participants. CONCLUSIONS This set of analyses extends the existing literature by examining interactions between aging and self-identified native English speaker status in several auditory domains in a cohort of adults spanning young adulthood through middle age. The finding that older nonnative English speakers in this age cohort may have greater-than-expected deficits in speech-in-noise perception may have clinical implications for how these individuals should be diagnosed and treated for hearing difficulties.
Affiliation(s)
- Ian Phillips
- Audiology & Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD
- Henry M Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD
- Rebecca E Bieber
- Audiology & Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD
- Henry M Jackson Foundation for the Advancement of Military Medicine, Inc., Bethesda, MD
- Coral Dirks
- Audiology & Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD
- Ken W Grant
- Audiology & Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD
- Douglas S Brungart
- Audiology & Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD
11
Shen J, Sun J, Zhang Z, Sun B, Li H, Liu Y. The Effect of Hearing Loss and Working Memory Capacity on Context Use and Reliance on Context in Older Adults. Ear Hear 2024; 45:787-800. [PMID: 38273447 DOI: 10.1097/aud.0000000000001470]
Abstract
OBJECTIVES Older adults often complain of difficulty in communicating in noisy environments. Contextual information is considered an important cue for identifying everyday speech. To date, it has not been clear exactly how context use (CU) and reliance on context in older adults are affected by hearing status and cognitive function. The present study examined the effects of semantic context on the performance of speech recognition, recall, perceived listening effort (LE), and noise tolerance, and further explored the impacts of hearing loss and working memory capacity on CU and reliance on context among older adults. DESIGN Fifty older adults with normal hearing and 56 older adults with mild-to-moderate hearing loss between the ages of 60 and 95 years participated in this study. A median split of the backward digit span further classified the participants into high working memory (HWM) and low working memory (LWM) capacity groups. Each participant performed high- and low-context Repeat and Recall tests, including a sentence repeat and delayed recall task, subjective assessments of LE, and tolerable time under seven signal to noise ratios (SNRs). CU was calculated as the difference between high- and low-context sentences for each outcome measure. The proportion of context use (PCU) in high-context performance was taken as the reliance on context to explain the degree to which participants relied on context when they repeated and recalled high-context sentences. RESULTS Semantic context helps improve the performance of speech recognition and delayed recall, reduces perceived LE, and prolongs noise tolerance in older adults with and without hearing loss. In addition, the adverse effects of hearing loss on the performance of repeat tasks were more pronounced in low context than in high context, whereas the effects on recall tasks and noise tolerance time were more significant in high context than in low context. 
Compared with other tasks, the CU and PCU in repeat tasks were more affected by hearing status and working memory capacity. In the repeat phase, hearing loss increased older adults' reliance on context in relatively challenging listening environments: when the SNR was 0 and -5 dB, the PCU (repeat) of the hearing loss group was significantly greater than that of the normal-hearing group, whereas there was no significant difference between the two hearing groups at the remaining SNRs. In addition, older adults with LWM had significantly greater CU and PCU in repeat tasks than those with HWM, especially at SNRs with moderate task demands. CONCLUSIONS Taken together, semantic context not only improved speech perception intelligibility but also released cognitive resources for memory encoding in older adults. Mild-to-moderate hearing loss and LWM capacity in older adults significantly increased the use of and reliance on semantic context, which was also modulated by the level of SNR.
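The CU and PCU measures above amount to a difference and a normalized difference between high- and low-context scores. A minimal sketch, assuming PCU is CU divided by the high-context score (our reading of "proportion of context use in high-context performance"); function names and numbers are illustrative:

```python
def context_use(high: float, low: float) -> float:
    """Context use (CU): high-context minus low-context score on an outcome."""
    return high - low

def proportion_context_use(high: float, low: float) -> float:
    """Proportion of context use (PCU): share of the high-context score
    attributable to context, i.e., CU / high-context score."""
    if high == 0:
        raise ValueError("high-context score must be nonzero")
    return (high - low) / high

# Hypothetical repeat accuracies: 90% with context, 60% without.
cu = context_use(0.90, 0.60)               # CU = 0.30
pcu = proportion_context_use(0.90, 0.60)   # PCU ~= 0.33
```

With these numbers, a third of the high-context performance would be attributed to context; a larger PCU indicates greater reliance on context.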
Affiliation(s)
- Jiayuan Shen, School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Zhejiang, China
- Jiayu Sun, Department of Otolaryngology, Head and Neck Surgery, Shanghai Ninth People's Hospital, Shanghai JiaoTong University School of Medicine, Shanghai, China
- Zhikai Zhang, Department of Otolaryngology, Head and Neck Surgery, Beijing Chao-Yang Hospital, Capital Medical University, Beijing, China
- Baoxuan Sun, Training Department, Widex Hearing Aid (Shanghai) Co., Ltd, Shanghai, China
- Haitao Li, Department of Neurology, Beijing Friendship Hospital, Capital Medical University, Beijing, China (contributed equally; co-corresponding author)
- Yuhe Liu, Department of Otolaryngology, Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing, China (contributed equally; co-corresponding author)

12.
Bosen AK, Doria GM. Identifying Links Between Latent Memory and Speech Recognition Factors. Ear Hear 2024; 45:351-369. [PMID: 37882100 PMCID: PMC10922378 DOI: 10.1097/aud.0000000000001430]
Abstract
OBJECTIVES The link between memory ability and speech recognition accuracy is often examined by correlating summary measures of performance across various tasks, but interpretation of such correlations critically depends on assumptions about how these measures map onto underlying factors of interest. The present work presents an alternative approach, wherein latent factor models are fit to trial-level data from multiple tasks to directly test hypotheses about the underlying structure of memory and the extent to which latent memory factors are associated with individual differences in speech recognition accuracy. Latent factor models with different numbers of factors were fit to the data and compared to one another to select the structures which best explained vocoded sentence recognition in a two-talker masker across a range of target-to-masker ratios, performance on three memory tasks, and the link between sentence recognition and memory. DESIGN Young adults with normal hearing (N = 52 for the memory tasks, of which 21 participants also completed the sentence recognition task) completed three memory tasks and one sentence recognition task: reading span, auditory digit span, visual free recall of words, and recognition of 16-channel vocoded Perceptually Robust English Sentence Test Open-set sentences in the presence of a two-talker masker at target-to-masker ratios between +10 and 0 dB. Correlations between summary measures of memory task performance and sentence recognition accuracy were calculated for comparison to prior work, and latent factor models were fit to trial-level data and compared against one another to identify the number of latent factors which best explains the data. Models with one or two latent factors were fit to the sentence recognition data and models with one, two, or three latent factors were fit to the memory task data. 
Based on findings with these models, full models that linked one speech factor to one, two, or three memory factors were fit to the full data set. Models were compared via expected log pointwise predictive density (ELPD) and post hoc inspection of model parameters. RESULTS Summary measures were positively correlated across memory tasks and sentence recognition. Latent factor models revealed that sentence recognition accuracy was best explained by a single factor that varied across participants. Memory task performance was best explained by two latent factors, of which one was generally associated with performance on all three tasks and the other was specific to digit span recall accuracy at lists of six digits or more. When these models were combined, the general memory factor was closely related to the sentence recognition factor, whereas the factor specific to digit span had no apparent association with sentence recognition. CONCLUSIONS Comparison of latent factor models enables testing hypotheses about the underlying structure linking cognition and speech recognition. This approach showed that multiple memory tasks assess a common latent factor that is related to individual differences in sentence recognition, although performance on some tasks was associated with multiple factors. Thus, while these tasks provide some convergent assessment of common latent factors, caution is needed when interpreting what they tell us about speech recognition.
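The model comparison above rests on expected log pointwise predictive density (ELPD). As a minimal illustration of the quantity's core, the log pointwise predictive density can be computed from posterior predictive densities; this sketch deliberately omits the cross-validation penalty that full ELPD estimates (e.g., LOO) add:

```python
import numpy as np

def log_pointwise_predictive_density(pp_density):
    """Log pointwise predictive density (lppd), the core of ELPD.

    pp_density: array of shape (S, N) with p(y_i | theta_s), the posterior
    predictive density of observation i under posterior draw s.
    Returns sum_i log( (1/S) * sum_s p(y_i | theta_s) ).
    """
    pp = np.asarray(pp_density, dtype=float)
    return float(np.sum(np.log(pp.mean(axis=0))))

# Toy example: 3 posterior draws, 2 observations.
pp = np.array([[0.2, 0.5],
               [0.4, 0.5],
               [0.6, 0.5]])
lppd = log_pointwise_predictive_density(pp)  # log(0.4) + log(0.5)
```

Models with higher ELPD are expected to predict new data better; cross-validated estimates subtract an effective-parameter penalty from lppd.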
13.
Slugocki C, Kuk F, Korhonen P. Cortical sensory gating and reactions to dynamic speech-in-noise in older normal-hearing and hearing-impaired adults. Int J Audiol 2024:1-10. [PMID: 38334072 DOI: 10.1080/14992027.2024.2311663]
Abstract
OBJECTIVE To examine whether cortical sensory gating predicts how older adults with and without hearing loss perform the Tracking of Noise Tolerance (TNT) test. DESIGN Single-blind mixed design. TNT performance was defined by average tolerated noise relative to speech levels (TNTAve) and by an average range of noise levels over a two-minute trial (excursion). Sensory gating of P1-N1-P2 components was measured using pairs of 1 kHz tone pips. STUDY SAMPLE Twenty-three normal-hearing (NH) and 16 hearing-impaired (HI) older adults with a moderate-to-severe degree of sensorineural hearing loss. RESULTS NH listeners tolerated significantly more noise than HI listeners, but the two groups did not differ in their excursion. Both NH and HI listeners exhibited significant gating of P1 amplitudes and N1P2 peak-to-peak amplitudes with no difference in gating magnitudes between listener groups. Sensory gating magnitudes of P1 and N1P2 did not predict TNTAve scores, but N1P2 gating negatively predicted excursion after accounting for listener age and hearing thresholds. CONCLUSIONS Listeners' reactivity to a roving noise (excursion), but not their average noise tolerance (TNTAve), was predicted by sensory gating at N1P2 generators. These results suggest that temporal aspects of speech-in-noise processing may be affected by declines in the central inhibition of older adults.
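The two TNT summary measures lend themselves to a short sketch. This assumes TNTAve is the mean of the tracked noise level expressed relative to the speech level, and reads "range of noise levels" as the max-minus-min of the track; both readings are ours, and the levels below are hypothetical:

```python
import numpy as np

def tnt_summary(noise_track_db, speech_level_db):
    """Summarize one Tracking of Noise Tolerance (TNT) trial.

    noise_track_db: tolerated noise levels (dB) sampled over the trial.
    TNT_Ave: mean tolerated noise level relative to the speech level.
    excursion: max-minus-min range of the tracked noise level.
    """
    track = np.asarray(noise_track_db, dtype=float)
    tnt_ave = float(track.mean() - speech_level_db)
    excursion = float(track.max() - track.min())
    return tnt_ave, excursion

# Hypothetical track: the listener lets the noise rove between 58 and
# 66 dB SPL against 75 dB SPL speech.
ave, exc = tnt_summary([58, 60, 62, 66, 64], speech_level_db=75.0)
# ave = -13.0 (noise tolerated 13 dB below speech), exc = 8.0
```

A more negative TNTAve means less noise tolerated; a larger excursion means a more reactive, roving track.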
Affiliation(s)
- Christopher Slugocki, Office of Research in Clinical Amplification (ORCA-USA), WS Audiology, Lisle, IL, USA
- Francis Kuk, Office of Research in Clinical Amplification (ORCA-USA), WS Audiology, Lisle, IL, USA
- Petri Korhonen, Office of Research in Clinical Amplification (ORCA-USA), WS Audiology, Lisle, IL, USA

14.
Lalonde K, Walker EA, Leibold LJ, McCreery RW. Predictors of Susceptibility to Noise and Speech Masking Among School-Age Children With Hearing Loss or Typical Hearing. Ear Hear 2024; 45:81-93. [PMID: 37415268 PMCID: PMC10771540 DOI: 10.1097/aud.0000000000001403]
Abstract
OBJECTIVES The purpose of this study was to evaluate effects of masker type and hearing group on the relationship between school-age children's speech recognition and age, vocabulary, working memory, and selective attention. This study also explored effects of masker type and hearing group on the time course of maturation of masked speech recognition. DESIGN Participants included 31 children with normal hearing (CNH) and 41 children with mild to severe bilateral sensorineural hearing loss (CHL), between 6.7 and 13 years of age. Children with hearing aids used their personal hearing aids throughout testing. Audiometric thresholds and standardized measures of vocabulary, working memory, and selective attention were obtained from each child, along with masked sentence recognition thresholds in a steady state, speech-spectrum noise (SSN) and in a two-talker speech masker (TTS). Aided audibility through children's hearing aids was calculated based on the Speech Intelligibility Index (SII) for all children wearing hearing aids. Linear mixed effects models were used to examine the contribution of group, age, vocabulary, working memory, and attention to individual differences in speech recognition thresholds in each masker. Additional models were constructed to examine the role of aided audibility on masked speech recognition in CHL. Finally, to explore the time course of maturation of masked speech perception, linear mixed effects models were used to examine interactions between age, masker type, and hearing group as predictors of masked speech recognition. RESULTS Children's thresholds were higher in TTS than in SSN. There was no interaction of hearing group and masker type. CHL had higher thresholds than CNH in both maskers. In both hearing groups and masker types, children with better vocabularies had lower thresholds. An interaction of hearing group and attention was observed only in the TTS. Among CNH, attention predicted thresholds in TTS. 
Among CHL, vocabulary and aided audibility predicted thresholds in TTS. In both maskers, thresholds decreased as a function of age at a similar rate in CNH and CHL. CONCLUSIONS The factors contributing to individual differences in speech recognition differed as a function of masker type. In TTS, the factors contributing to individual difference in speech recognition further differed as a function of hearing group. Whereas attention predicted variance for CNH in TTS, vocabulary and aided audibility predicted variance in CHL. CHL required a more favorable signal to noise ratio (SNR) to recognize speech in TTS than in SSN (mean = +1 dB in TTS, -3 dB in SSN). We posit that failures in auditory stream segregation limit the extent to which CHL can recognize speech in a speech masker. Larger sample sizes or longitudinal data are needed to characterize the time course of maturation of masked speech perception in CHL.
Affiliation(s)
- Kaylah Lalonde, Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Elizabeth A. Walker, Department of Communication Sciences and Disorders, The University of Iowa, Iowa City, IA
- Lori J. Leibold, Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Ryan W. McCreery, Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE

15.
Slugocki C, Kuk F, Korhonen P. Alpha-Band Dynamics of Hearing Aid Wearers Performing the Repeat-Recall Test (RRT). Trends Hear 2024; 28:23312165231222098. [PMID: 38549287 PMCID: PMC10981257 DOI: 10.1177/23312165231222098]
Abstract
This study measured electroencephalographic activity in the alpha band, often associated with task difficulty, to physiologically validate self-reported effort ratings from older hearing-impaired listeners performing the Repeat-Recall Test (RRT)-an integrative multipart assessment of speech-in-noise performance, context use, and auditory working memory. Following a single-blind within-subjects design, 16 older listeners (mean age = 71 years, SD = 13, 9 female) with a moderate-to-severe degree of bilateral sensorineural hearing loss performed the RRT while wearing hearing aids at four fixed signal-to-noise ratios (SNRs) of -5, 0, 5, and 10 dB. Performance and subjective ratings of listening effort were assessed for complementary versions of the RRT materials with high/low availability of semantic context. Listeners were also tested with a version of the RRT that omitted the memory (i.e., recall) component. As expected, results showed alpha power to decrease significantly with increasing SNR from 0 through 10 dB. When tested with high context sentences, alpha was significantly higher in conditions where listeners had to recall the sentence materials compared to conditions where the recall requirement was omitted. When tested with low context sentences, alpha power was relatively high irrespective of the memory component. Within-subjects, alpha power was related to listening effort ratings collected across the different RRT conditions. Overall, these results suggest that the multipart demands of the RRT modulate both neural and behavioral measures of listening effort in directions consistent with the expected/designed difficulty of the RRT conditions.
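Alpha power of the kind reported here is conventionally estimated from the EEG power spectral density. A hedged sketch using Welch's method (the 8-12 Hz band limits and all parameters are illustrative, not taken from the study):

```python
import numpy as np
from scipy.signal import welch

def alpha_power(eeg, fs, band=(8.0, 12.0)):
    """Alpha-band power of one EEG channel via Welch's PSD.

    eeg: 1-D array of samples; fs: sampling rate (Hz). Band power is
    approximated by summing PSD bins inside the band times the bin width.
    """
    freqs, psd = welch(eeg, fs=fs, nperseg=min(len(eeg), int(2 * fs)))
    mask = (freqs >= band[0]) & (freqs <= band[1])
    return float(psd[mask].sum() * (freqs[1] - freqs[0]))

# Synthetic check: a 10 Hz sinusoid carries far more alpha power than
# a 25 Hz sinusoid of equal amplitude.
fs = 250
t = np.arange(0, 10, 1 / fs)
in_band = alpha_power(np.sin(2 * np.pi * 10 * t), fs)
out_of_band = alpha_power(np.sin(2 * np.pi * 25 * t), fs)
```

In practice, per-condition alpha power would be averaged over trials and parieto-occipital channels before comparison across SNRs.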
Affiliation(s)
- Christopher Slugocki, Office of Research in Clinical Amplification (ORCA-USA), WS Audiology, Lisle, IL, USA
- Francis Kuk, Office of Research in Clinical Amplification (ORCA-USA), WS Audiology, Lisle, IL, USA
- Petri Korhonen, Office of Research in Clinical Amplification (ORCA-USA), WS Audiology, Lisle, IL, USA

16.
Thompson E, Feldman JI, Valle A, Davis H, Keceli-Kaysili B, Dunham K, Woynaroski T, Tharpe AM, Picou EM. A Comparison of Listening Skills of Autistic and Non-Autistic Youth While Using and Not Using Remote Microphone Systems. J Speech Lang Hear Res 2023; 66:4618-4634. [PMID: 37870877 PMCID: PMC10721240 DOI: 10.1044/2023_jslhr-22-00720]
Abstract
OBJECTIVES The purposes of this study were to compare (a) listening-in-noise performance (accuracy and effort) and (b) remote microphone (RM) system benefits between autistic and non-autistic youth. DESIGN Groups of autistic and non-autistic youth matched on chronological age and biological sex completed listening-in-noise testing while wearing and not wearing an RM system. Listening-in-noise accuracy and listening effort were evaluated simultaneously using a dual-task paradigm for stimuli varying in type (syllables, words, sentences, and passages). Several putative moderators of RM system effects on outcomes of interest were also evaluated. RESULTS Autistic youth outperformed non-autistic youth on listening-in-noise accuracy in some conditions; listening effort did not differ significantly between the two groups. RM system use resulted in listening-in-noise accuracy improvements that did not differ significantly across groups. Benefits to listening-in-noise accuracy were all large in magnitude. RM system use did not affect listening effort for either group. None of the putative moderators yielded significant, interpretable effects of the RM system on listening-in-noise accuracy or effort for non-autistic youth, indicating that RM system benefits did not vary according to any of the participant characteristics assessed. CONCLUSIONS Contrary to expectations, autistic youth did not demonstrate listening-in-noise deficits compared to non-autistic youth. Both autistic and non-autistic youth appear to experience RM system benefits marked by large gains in listening-in-noise performance. Thus, the use of this technology in educational and other noisy settings where speech perception needs enhancement might benefit both groups of children.
Affiliation(s)
- Emily Thompson, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN
- Jacob I. Feldman, Frist Center for Autism and Innovation, Nashville, TN; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Annalise Valle, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN
- Hilary Davis, Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Bahar Keceli-Kaysili, Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- Kacie Dunham, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN; Vanderbilt Brain Institute, Nashville, TN
- Tiffany Woynaroski, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN; Frist Center for Autism and Innovation, Nashville, TN; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN
- Anne Marie Tharpe, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN; Vanderbilt Kennedy Center, Vanderbilt University Medical Center, Nashville, TN
- Erin M. Picou, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN; Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN

17.
Porto L, Wouters J, van Wieringen A. Speech perception in noise, working memory, and attention in children: A scoping review. Hear Res 2023; 439:108883. [PMID: 37722287 DOI: 10.1016/j.heares.2023.108883]
Abstract
PURPOSE Speech perception in noise is an everyday occurrence for adults and children alike. The factors that influence how well individuals cope with noise during spoken communication are not well understood, particularly in the case of children. This article aims to review the available evidence on how working memory and attention play a role in children's speech perception in noise, how characteristics of measures affect results, and how this relationship differs in non-typical populations. METHOD This article is a scoping review of the literature available on PubMed. Forty articles were included for meeting the inclusion criteria of including children as participants, some measure of speech perception in noise, some measure of attention and/or working memory, and some attempt to establish relationships between the measures. Findings were charted and presented keeping in mind how they relate to the research questions. RESULTS The majority of studies report that attention and especially working memory are involved in speech perception in noise by children. We provide an overview of the impact of certain task characteristics on findings across the literature, as well as how these affect non-typical populations. CONCLUSION While most of the work reviewed here provides evidence suggesting that working memory and attention are important abilities employed by children in overcoming the difficulties imposed by noise during spoken communication, methodological variability still prevents a clearer picture from emerging.
Affiliation(s)
- Lyan Porto, Department of Neurosciences, University of Leuven, Research group Experimental Oto-Rino-Laryngologie, O&N II, Herestraat 49, Leuven 3000, Belgium
- Jan Wouters, Department of Neurosciences, University of Leuven, Research group Experimental Oto-Rino-Laryngologie, O&N II, Herestraat 49, Leuven 3000, Belgium
- Astrid van Wieringen, Department of Neurosciences, University of Leuven, Research group Experimental Oto-Rino-Laryngologie, O&N II, Herestraat 49, Leuven 3000, Belgium; Department of Special Needs Education, University of Oslo, Norway

18.
Shin J, Noh S, Park J, Sung JE. Syntactic complexity differentially affects auditory sentence comprehension performance for individuals with age-related hearing loss. Front Psychol 2023; 14:1264994. [PMID: 37965654 PMCID: PMC10641445 DOI: 10.3389/fpsyg.2023.1264994]
Abstract
Objectives This study examined whether older adults with hearing loss (HL) experience greater difficulty in auditory sentence comprehension than those with typical hearing (TH) when the linguistic burden of syntactic complexity is systematically manipulated by varying either sentence type (active vs. passive) or sentence length (3 vs. 4 phrases). Methods A total of 22 individuals with HL and 24 controls participated in the study, completing a sentence comprehension test (SCT), standardized memory assessments, and pure-tone audiometry. Generalized linear mixed effects models were employed to compare the effects of sentence type and length on SCT accuracy, Pearson correlation coefficients were computed to explore the relationships between SCT accuracy and other factors, and stepwise regression analyses were used to identify memory-related predictors of sentence comprehension ability. Results With sentence length controlled, older adults with HL performed more poorly on passive than on active sentences relative to controls. Greater difficulty with passive sentences was linked to working memory capacity, which emerged as the most significant predictor of passive sentence comprehension among participants with HL. Conclusion Our findings contribute to the understanding of the linguistic-cognitive deficits linked to age-related hearing loss by demonstrating its detrimental impact on the processing of passive sentences. Cognitively healthy adults with hearing difficulties may face challenges in comprehending syntactically more complex sentences that impose higher computational demands, particularly on working memory allocation.
Affiliation(s)
- Jee Eun Sung, Department of Communication Disorders, Ewha Womans University, Seoul, Republic of Korea

19.
Kuchinsky SE, Razeghi N, Pandža NB. Auditory, Lexical, and Multitasking Demands Interactively Impact Listening Effort. J Speech Lang Hear Res 2023; 66:4066-4082. [PMID: 37672797 PMCID: PMC10713022 DOI: 10.1044/2023_jslhr-22-00548]
Abstract
PURPOSE This study examined the extent to which acoustic, linguistic, and cognitive task demands interactively impact listening effort. METHOD Using a dual-task paradigm, on each trial, participants were instructed to perform either a single task or two tasks. In the primary word recognition task, participants repeated Northwestern University Auditory Test No. 6 words presented in speech-shaped noise at either an easier or a harder signal-to-noise ratio (SNR). The words varied in how commonly they occur in the English language (lexical frequency). In the secondary visual task, participants were instructed to press a specific key as soon as a number appeared on screen (simpler task) or one of two keys to indicate whether the visualized number was even or odd (more complex task). RESULTS Manipulation checks revealed that key assumptions of the dual-task design were met. A significant three-way interaction was observed, such that the expected effect of SNR on effort was only observable for words with lower lexical frequency and only when multitasking demands were relatively simpler. CONCLUSIONS This work reveals that variability across speech stimuli can influence the sensitivity of the dual-task paradigm for detecting changes in listening effort. In line with previous work, the results of this study also suggest that higher cognitive demands may limit the ability to detect expected effects of SNR on measures of effort. With implications for real-world listening, these findings highlight that even relatively minor changes in lexical and multitasking demands can alter the effort devoted to listening in noise.
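In dual-task paradigms like this one, listening effort is typically indexed by how much the secondary task slows under dual-task conditions. A minimal sketch of a proportional dual-task cost, one common operationalization (the abstract does not state whether this study used proportional or absolute costs; the reaction times below are hypothetical):

```python
def dual_task_cost(rt_single_ms: float, rt_dual_ms: float) -> float:
    """Proportional dual-task cost: relative slowing of the secondary
    task when performed alongside the primary listening task, used as
    an index of listening effort (larger cost = more effort)."""
    if rt_single_ms <= 0:
        raise ValueError("baseline reaction time must be positive")
    return (rt_dual_ms - rt_single_ms) / rt_single_ms

# Hypothetical secondary-task reaction times (ms).
cost = dual_task_cost(450.0, 540.0)  # 0.2 -> 20% slowing under dual task
```

Comparing costs across SNRs, lexical frequencies, and secondary-task complexities would then expose the interactions the study reports.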
Affiliation(s)
- Stefanie E. Kuchinsky, Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, MD; Applied Research Laboratory for Intelligence and Security, University of Maryland, College Park; Department of Hearing and Speech Sciences, University of Maryland, College Park
- Niki Razeghi, Department of Hearing and Speech Sciences, University of Maryland, College Park
- Nick B. Pandža, Applied Research Laboratory for Intelligence and Security, University of Maryland, College Park; Program in Second Language Acquisition, University of Maryland, College Park; Maryland Language Science Center, University of Maryland, College Park

20.
Chen F, Guo Q, Deng Y, Zhu J, Zhang H. Development of Mandarin Lexical Tone Identification in Noise and Its Relation With Working Memory. J Speech Lang Hear Res 2023; 66:4100-4116. [PMID: 37678219 DOI: 10.1044/2023_jslhr-22-00457]
Abstract
PURPOSE This study aimed to examine the developmental trajectory of Mandarin tone identification in quiet and two noisy conditions: speech-shaped noise (SSN) and multitalker babble noise. In addition, we evaluated the relationship between the development of tone identification and working memory capacity. METHOD Ninety-three typically developing children aged 5-8 years and 23 young adults completed categorical identification of two tonal continua (Tone 1-4 and Tone 2-3) in quiet, SSN, and babble noise. Their working memory was additionally measured using auditory digit span tests. Correlation analyses between digit span scores and boundary widths were performed. RESULTS Six-year-old children had achieved adultlike categorical identification of the Tone 1-4 continuum under both types of noise. Moreover, 6-year-old children could identify the Tone 2-3 continuum as well as adults in SSN. Nonetheless, the child participants, even 8-year-olds, performed worse when tokens from the Tone 2-3 continuum were masked by babble noise. Greater working memory capacity was associated with better tone identification in noise for preschoolers aged 5-6 years; for school-age children aged 7-8 years, however, such a correlation existed only for the Tone 2-3 continuum in SSN. CONCLUSIONS Lexical tone perception might take a prolonged time to achieve adultlike competence in babble noise relative to SSN. Moreover, a significant interaction between masking type and stimulus difficulty was found, as indicated by Tone 2-3 being more susceptible to interference from babble noise than Tone 1-4. Furthermore, correlations between working memory capacity and tone perception in noise varied with developmental stage, stimulus difficulty, and masking type.
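Boundary width, the identification measure correlated with digit span here, is commonly derived from a logistic fit to the identification curve. A sketch under that assumption (the 25%-75% crossover definition and the 7-step continuum below are illustrative, not the study's exact procedure):

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    """Proportion of one category response along the stimulus continuum."""
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

def boundary_width(steps, proportions):
    """Fit a logistic to identification proportions and return the
    distance between the 25% and 75% points (smaller = sharper boundary)."""
    (x0, k), _ = curve_fit(logistic, steps, proportions,
                           p0=[float(np.mean(steps)), 1.0])
    # For a logistic, p = 0.75 at x0 + ln(3)/k and p = 0.25 at x0 - ln(3)/k.
    return 2.0 * np.log(3.0) / k

# Hypothetical 7-step continuum with a boundary at step 4 (slope k = 2).
steps = np.arange(1.0, 8.0)
props = logistic(steps, x0=4.0, k=2.0)
width = boundary_width(steps, props)  # ~= 2 * ln(3) / 2 ~= 1.1 steps
```

A wider boundary (shallower slope) reflects less categorical, noisier identification, which is the quantity correlated with digit span in the study.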
Affiliation(s)
- Fei Chen, School of Foreign Languages, Hunan University, Changsha, China
- Qingqing Guo, School of Foreign Languages, Hunan University, Changsha, China
- Yunhua Deng, Foreign Studies College, Hunan Normal University, Changsha, China
- Jiaqiang Zhu, Research Centre for Language, Cognition, and Neuroscience, Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hung Hom, Hong Kong SAR, China
- Hao Zhang, Center for Clinical Neurolinguistics, School of Foreign Languages and Literature, Shandong University, Jinan, China

21.
Wagner L, Werle ALA, Hoffmann A, Rahne T, Fengler A. Is there an influence of perceptual or cognitive impairment on complex sentence processing in hearing aid users? PLoS One 2023; 18:e0291832. [PMID: 37768903 PMCID: PMC10538791 DOI: 10.1371/journal.pone.0291832]
Abstract
BACKGROUND Hearing-impaired listeners often have difficulty understanding complex sentences. It is not clear whether perceptual or cognitive deficits have more impact on reduced language processing abilities, or how a hearing aid might compensate for them. METHODS In a prospective study with five hearing aid users and five age-matched normal-hearing participants, processing of complex sentences was investigated. Audiometric and working memory tests were performed. Subject- and object-initial sentences from the Oldenburg Corpus of Linguistically and audiologically controlled Sentences (OLACS) were presented to the participants while an electroencephalogram (EEG) was recorded. RESULTS The perceptual difference between object- and subject-initial sentences did not lead to processing changes, whereas the ambiguity in object-initial sentences with feminine or neuter articles evoked a P600 potential. For hearing aid users, this P600 had a longer latency than in normal-hearing subjects. CONCLUSION The EEG is a suitable method for investigating differences in complex speech processing in hearing aid users. Longer P600 latencies indicate higher cognitive effort for processing complex sentences in hearing aid users.
Affiliation(s)
- Luise Wagner, Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Halle (Saale), University Medicine Halle (Saale), Halle, Germany
- Anna-Leoni A. Werle, Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Halle (Saale), University Medicine Halle (Saale), Halle, Germany
- Antonia Hoffmann, Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Halle (Saale), University Medicine Halle (Saale), Halle, Germany
- Torsten Rahne, Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Halle (Saale), University Medicine Halle (Saale), Halle, Germany
- Anja Fengler, Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Leipzig, Leipzig, Germany

22.
Trau-Margalit A, Fostick L, Harel-Arbeli T, Nissanholtz-Gannot R, Taitelbaum-Swead R. Speech recognition in noise task among children and young-adults: a pupillometry study. Front Psychol 2023; 14:1188485. [PMID: 37425148 PMCID: PMC10328119 DOI: 10.3389/fpsyg.2023.1188485]
Abstract
Introduction Children experience unique challenges when listening to speech in noisy environments. The present study used pupillometry, an established method for quantifying listening and cognitive effort, to detect temporal changes in pupil dilation during a speech-recognition-in-noise task among school-aged children and young adults. Methods Thirty school-aged children and 31 young adults listened to sentences amidst four-talker babble noise in two signal-to-noise ratio (SNR) conditions: a high-accuracy condition (+10 dB and +6 dB for children and adults, respectively) and a low-accuracy condition (+5 dB and +2 dB for children and adults, respectively). They were asked to repeat the sentences while pupil size was measured continuously during the task. Results During the auditory processing phase, both groups displayed pupil dilation; however, adults exhibited greater dilation than children, particularly in the low-accuracy condition. In the second (retention) phase, only children demonstrated increased pupil dilation, whereas adults consistently exhibited a decrease in pupil size. Additionally, the children's group showed increased pupil dilation during the response phase. Discussion Although adults and school-aged children produce similar behavioural scores, group differences in dilation patterns indicate that their underlying auditory processing differs. A second peak of pupil dilation among the children suggests that their cognitive effort during speech recognition in noise lasts longer than in adults, continuing past the first auditory-processing peak. These findings provide evidence of effortful listening among children and highlight the need to identify and alleviate listening difficulties in school-aged children in order to provide proper intervention strategies.
Affiliation(s)
- Avital Trau-Margalit
- Department of Communication Disorders, Speech Perception and Listening Effort Lab in the Name of Prof. Mordechai Himelfarb, Ariel University, Ariel, Israel
- Leah Fostick
- Department of Communication Disorders, Auditory Perception Lab in the Name of Laurent Levy, Ariel University, Ariel, Israel
- Tami Harel-Arbeli
- Department of Gerontology, University of Haifa, Haifa, Israel
- Baruch Ivcher School of Psychology, Reichman University, Herzliya, Israel
- Riki Taitelbaum-Swead
- Department of Communication Disorders, Speech Perception and Listening Effort Lab in the Name of Prof. Mordechai Himelfarb, Ariel University, Ariel, Israel
- Meuhedet Health Services, Tel Aviv, Israel
23
Chen F, Zhang K, Guo Q, Lv J. Development of Achieving Constancy in Lexical Tone Identification With Contextual Cues. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2023; 66:1148-1164. [PMID: 36995907 DOI: 10.1044/2022_jslhr-22-00257] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
PURPOSE The aim of this study was to explore when and how Mandarin-speaking children use contextual cues to normalize speech variability in perceiving lexical tones. Two cognitive mechanisms underlying speech normalization (lower-level acoustic normalization and higher-level acoustic-phonemic normalization) were investigated through lexical tone identification tasks in nonspeech and speech contexts, respectively. A further aim was to reveal how domain-general cognitive abilities contribute to the development of the speech normalization process. METHOD In this study, 94 five- to eight-year-old Mandarin-speaking children (50 boys, 44 girls) and 24 young adults (14 men, 10 women) were asked to identify ambiguous Mandarin high-level and mid-rising tones in either speech or nonspeech contexts. We also tested participants' pitch sensitivity through a nonlinguistic pitch discrimination task and their working memory using the digit span task. RESULTS Higher-level acoustic-phonemic normalization of lexical tones emerged at the age of 6 years and was relatively stable thereafter. However, lower-level acoustic normalization was less stable across ages. Neither pitch sensitivity nor working memory affected children's lexical tone normalization. CONCLUSIONS Mandarin-speaking children above 6 years of age successfully achieved constancy in lexical tone normalization based on speech contextual cues. The perceptual normalization of lexical tones was not affected by pitch sensitivity or working memory capacity.
Affiliation(s)
- Fei Chen
- School of Foreign Languages, Hunan University, Changsha, China
- Kaile Zhang
- Centre for Cognitive and Brain Sciences, University of Macau, China
- Qingqing Guo
- School of Foreign Languages, Hunan University, Changsha, China
- Jia Lv
- School of Foreign Languages and Literature, Wuhan University, China
24
Giallini I, Inguscio BMS, Nicastri M, Portanova G, Ciofalo A, Pace A, Greco A, D’Alessandro HD, Mancini P. Neuropsychological Functions and Audiological Findings in Elderly Cochlear Implant Users: The Role of Attention in Postoperative Performance. Audiol Res 2023; 13:236-253. [PMID: 37102772 PMCID: PMC10136178 DOI: 10.3390/audiolres13020022] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/08/2022] [Revised: 03/17/2023] [Accepted: 03/20/2023] [Indexed: 03/29/2023] Open
Abstract
Objectives: The present study aimed to investigate working memory and attention, conventionally considered predictors of better cochlear implant (CI) performance, in a group of elderly CI users, and to disentangle the effects of these cognitive domains on speech perception, seeking potential markers of cognitive decline related to audiometric findings. Methods: Thirty postlingually deafened CI users aged >60 years underwent an audiological evaluation followed by a cognitive assessment of attention and verbal working memory. A correlation analysis evaluated the associations between cognitive variables, while simple regression investigated the relationships between cognitive and audiological variables. Comparative analysis compared variables on the basis of subjects’ attention performance. Results: Attention was found to play a significant role in sound-field and speech perception. Univariate analysis found a significant difference between poor and high attention performers, while regression analysis showed that attention significantly predicted recognition of words presented at a signal-to-noise ratio of +10 dB. Further, the high attention performers showed significantly higher scores than low attention performers on all working memory tasks. Conclusion: Overall, the findings confirmed that better cognitive performance may contribute positively to speech perception outcomes, especially in complex listening situations. Working memory may play a crucial role in the storage and processing of auditory-verbal stimuli, and robust attention may lead to better speech perception in noise. Implementation of cognitive training in the auditory rehabilitation of CI users should be investigated in order to improve cognitive and audiological performance in elderly CI users.
25
Lemel R, Shalev L, Nitsan G, Ben-David BM. Listen up! ADHD slows spoken-word processing in adverse listening conditions: Evidence from eye movements. RESEARCH IN DEVELOPMENTAL DISABILITIES 2023; 133:104401. [PMID: 36577332 DOI: 10.1016/j.ridd.2022.104401] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/23/2022] [Revised: 10/23/2022] [Accepted: 12/16/2022] [Indexed: 06/17/2023]
Abstract
BACKGROUND Cognitive skills such as sustained attention, inhibition, and working memory are essential for speech processing, yet are often impaired in people with ADHD. Offline measures have indicated difficulties in speech recognition against a multi-talker babble (MTB) background for young adults with ADHD (yaADHD). However, to date no study has directly tested online speech processing in adverse conditions for yaADHD. AIMS To gauge the effects of ADHD on segregating the spoken target word from its sound-sharing competitor under MTB and working-memory (WM) load. METHODS AND PROCEDURES Twenty-four yaADHD and 22 matched controls, who differed in sustained attention (SA) but not in WM, were asked to follow spoken instructions presented in MTB to touch a named object, while retaining one (low-load) or four (high-load) digits for later recall. Their eye fixations were tracked. OUTCOMES AND RESULTS In the high-load condition, speech processing was less accurate and slowed by 140 ms for yaADHD. In the low-load condition, the processing advantage shifted from early perceptual to later cognitive stages. Fixation transitions (hesitations) were inflated for yaADHD. CONCLUSIONS AND IMPLICATIONS ADHD slows speech processing in adverse listening conditions and increases hesitation as speech unfolds in time. These effects, detected only by online eyetracking, relate to attentional difficulties. We suggest online speech processing as a novel purview on ADHD. WHAT THIS PAPER ADDS: We suggest speech processing in adverse listening conditions as a novel vantage point on ADHD. Successful speech recognition in noise is essential for performance across daily settings: academic, employment, and social interactions. It involves several executive functions, such as inhibition and sustained attention. Impaired performance in these functions is characteristic of ADHD. However, to date there is only scant research on speech processing in ADHD.
The current study is the first to investigate online speech processing as the word unfolds in time, using eyetracking, in young adults with ADHD (yaADHD). This method uncovered slower speech processing in multi-talker babble noise for yaADHD compared to matched controls. The performance of yaADHD indicated increased hesitation between the spoken word and sound-sharing alternatives (e.g., CANdle-CANdy). These delays and hesitations at the single-word level could accumulate in continuous speech to significantly impair communication in ADHD, with severe implications for quality of life and academic success. Interestingly, whereas yaADHD and controls were matched on standardized WM tests, WM load appears to affect speech processing for yaADHD more than for controls. This suggests that ADHD may lead to inefficient deployment of WM resources that may not be detected when WM is tested alone. Note that these intricate differences could not be detected using traditional offline accuracy measures, further supporting the use of eyetracking in speech tasks. Finally, communication is vital for active living and wellbeing. We suggest paying attention to speech processing in ADHD in treatment and when considering accessibility and inclusion.
Affiliation(s)
- Rony Lemel
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Lilach Shalev
- Constantiner School of Education and Sagol School of Neuroscience, Tel-Aviv University, Tel-Aviv, Israel
- Gal Nitsan
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel; Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel; Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada; Toronto Rehabilitation Institute, University Health Networks (UHN), ON, Canada
26
Homman L, Danielsson H, Rönnberg J. A structural equation mediation model captures the predictions amongst the parameters of the ease of language understanding model. Front Psychol 2023; 14:1015227. [PMID: 36936006 PMCID: PMC10020708 DOI: 10.3389/fpsyg.2023.1015227] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/09/2022] [Accepted: 02/06/2023] [Indexed: 03/06/2023] Open
Abstract
Objective The aim of the present study was to assess the validity of the Ease of Language Understanding (ELU) model through a statistical assessment of the relationships among its main parameters: processing speed, phonology, working memory (WM), and dB speech-to-noise ratio (SNR) for a given speech recognition threshold (SRT), in a sample of hearing aid users from the n200 database. Methods Hearing aid users were assessed on several hearing and cognitive tests. Latent structural equation models (SEMs) were applied to investigate the relationships between the main parameters of the ELU model while controlling for age and pure-tone average (PTA). Several competing models were assessed. Results Analyses indicated that a mediating SEM was the best fit for the data. The results showed that (i) phonology independently predicted speech recognition threshold in both easy and adverse listening conditions, (ii) WM was not predictive of dB SNR for a given SRT in the easier listening conditions, and (iii) processing speed was predictive of dB SNR for a given SRT, mediated via WM, in the more adverse conditions. Conclusion The results were in line with the predictions of the ELU model: (i) phonology contributed to dB SNR for a given SRT in all listening conditions, (ii) WM is only invoked when listening conditions are adverse, (iii) better WM capacity aids the understanding of what has been said in adverse listening conditions, and finally (iv) the results highlight the importance of optimizing processing speed when listening conditions are adverse and WM is activated.
Affiliation(s)
- Lina Homman
- Disability Research Division (FuSa), Department of Behavioural Sciences and Learning (IBL), Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Henrik Danielsson
- Disability Research Division (FuSa), Department of Behavioural Sciences and Learning (IBL), Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Jerker Rönnberg
- Disability Research Division (FuSa), Department of Behavioural Sciences and Learning (IBL), Linköping University, Linköping, Sweden
- Linnaeus Centre HEAD, The Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
27
Bsharat-Maalouf D, Degani T, Karawani H. The Involvement of Listening Effort in Explaining Bilingual Listening Under Adverse Listening Conditions. Trends Hear 2023; 27:23312165231205107. [PMID: 37941413 PMCID: PMC10637154 DOI: 10.1177/23312165231205107] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/28/2023] [Revised: 09/14/2023] [Accepted: 09/15/2023] [Indexed: 11/10/2023] Open
Abstract
The current review examines listening effort to uncover how it is implicated in bilingual performance under adverse listening conditions. Various measures of listening effort, including physiological, behavioral, and subjective measures, have been employed to examine listening effort in bilingual children and adults. Adverse listening conditions, stemming from environmental factors, as well as factors related to the speaker or listener, have been examined. The existing literature, although relatively limited to date, points to increased listening effort among bilinguals in their nondominant second language (L2) compared to their dominant first language (L1) and relative to monolinguals. Interestingly, increased effort is often observed even when speech intelligibility remains unaffected. These findings emphasize the importance of considering listening effort alongside speech intelligibility. Building upon the insights gained from the current review, we propose that various factors may modulate the observed effects. These include the particular measure selected to examine listening effort, the characteristics of the adverse condition, as well as factors related to the particular linguistic background of the bilingual speaker. Critically, further research is needed to better understand the impact of these factors on listening effort. The review outlines avenues for future research that would promote a comprehensive understanding of listening effort in bilingual individuals.
Affiliation(s)
- Dana Bsharat-Maalouf
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Tamar Degani
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Hanin Karawani
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
28
Baese-Berk MM, Levi SV, Van Engen KJ. Intelligibility as a measure of speech perception: Current approaches, challenges, and recommendations. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2023; 153:68. [PMID: 36732227 DOI: 10.1121/10.0016806] [Citation(s) in RCA: 4] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/30/2022] [Accepted: 12/18/2022] [Indexed: 06/18/2023]
Abstract
Intelligibility measures, which assess the number of words or phonemes a listener correctly transcribes or repeats, are commonly used metrics for speech perception research. While these measures have many benefits for researchers, they also come with a number of limitations. By pointing out the strengths and limitations of this approach, including how it fails to capture aspects of perception such as listening effort, this article argues that the role of intelligibility measures must be reconsidered in fields such as linguistics, communication disorders, and psychology. Recommendations for future work in this area are presented.
Affiliation(s)
- Susannah V Levi
- Department of Communicative Sciences and Disorders, New York University, New York, New York 10012, USA
- Kristin J Van Engen
- Department of Psychological and Brain Sciences, Washington University in St. Louis, St. Louis, Missouri 63130, USA
29
Johns MA, Calloway RC, Phillips I, Karuzis VP, Dutta K, Smith E, Shamma SA, Goupell MJ, Kuchinsky SE. Performance on stochastic figure-ground perception varies with individual differences in speech-in-noise recognition and working memory capacity. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2023; 153:286. [PMID: 36732241 PMCID: PMC9851714 DOI: 10.1121/10.0016756] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/18/2022] [Revised: 12/07/2022] [Accepted: 12/10/2022] [Indexed: 06/18/2023]
Abstract
Speech recognition in noisy environments can be challenging and requires listeners to accurately segregate a target speaker from irrelevant background noise. Stochastic figure-ground (SFG) tasks, in which temporally coherent inharmonic pure tones must be identified from a background, have been used to probe the non-linguistic auditory stream segregation processes important for speech-in-noise processing. However, little is known about the relationship between performance on SFG tasks and speech-in-noise tasks, or about the individual differences that may modulate such relationships. In this study, 37 younger normal-hearing adults performed an SFG task with target figure chords consisting of four, six, eight, or ten temporally coherent tones amongst a background of randomly varying tones. Stimuli were designed to be spectrally and temporally flat. An increased number of temporally coherent tones resulted in higher accuracy and faster reaction times (RTs). For ten target tones, faster RTs were associated with better scores on the Quick Speech-in-Noise task. Individual differences in working memory capacity and self-reported musicianship further modulated these relationships. Overall, the results demonstrate that the SFG task could serve as an assessment of auditory stream segregation accuracy and RT that is sensitive to individual differences in cognitive and auditory abilities, even among younger normal-hearing adults.
Affiliation(s)
- Michael A Johns
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742, USA
- Regina C Calloway
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742, USA
- Ian Phillips
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
- Valerie P Karuzis
- Applied Research Laboratory of Intelligence and Security, University of Maryland, College Park, Maryland 20742, USA
- Kelsey Dutta
- Institute for Systems Research, University of Maryland, College Park, Maryland 20742, USA
- Ed Smith
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Shihab A Shamma
- Department of Electrical and Computer Engineering, University of Maryland, College Park, Maryland 20742, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Stefanie E Kuchinsky
- Audiology and Speech Pathology Center, Walter Reed National Military Medical Center, Bethesda, Maryland 20889, USA
30
Książek P, Zekveld AA, Fiedler L, Kramer SE, Wendt D. Time-specific Components of Pupil Responses Reveal Alternations in Effort Allocation Caused by Memory Task Demands During Speech Identification in Noise. Trends Hear 2023; 27:23312165231153280. [PMID: 36938784 PMCID: PMC10028670 DOI: 10.1177/23312165231153280] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/21/2023] Open
Abstract
Daily communication may be effortful due to poor acoustic quality. In addition, memory demands can induce effort, especially for long or complex sentences. In the current study, we tested the impact of memory task demands and speech-to-noise ratio on the time-specific components of effort allocation during speech identification in noise. Thirty normally hearing adults (15 females, mean age 42.2 years) participated. In an established auditory memory test, listeners heard a list of seven sentences in noise, repeated the sentence-final word after each sentence, and, if instructed, recalled the repeated words. We tested the effects of speech-to-noise ratio (SNR; -4 dB, +1 dB) and recall (Recall; Yes, No) on the time-specific components of pupil responses, trial baseline pupil size, and their dynamics (change) across the list. We found three components in the pupil responses (early, middle, and late). While the additional memory task (recall versus no recall) lowered all components' values, the lower SNR (-4 dB versus +1 dB) increased the middle and late components' values. Increasing memory demands (Recall) progressively increased the trial baseline and steepened the decrease of the late component's values. The trial baseline increased most steeply in the +1 dB SNR condition with recall. The findings suggest that adding a recall task to the auditory task alters effort allocation for listening: listeners dynamically re-allocate effort from listening to memorizing under changing memory and acoustic demands. The pupil baseline and the time-specific components of pupil responses provide a comprehensive picture of the interplay of SNR and recall on effort.
Affiliation(s)
- Patrycja Książek
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology/Head and Neck Surgery, Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
- Eriksholm Research Centre, Snekkersten, Denmark
- Adriana A Zekveld
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology/Head and Neck Surgery, Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
- Sophia E Kramer
- Amsterdam UMC, Vrije Universiteit Amsterdam, Otolaryngology/Head and Neck Surgery, Amsterdam Public Health Research Institute, Amsterdam, the Netherlands
- Dorothea Wendt
- Eriksholm Research Centre, Snekkersten, Denmark
- Department of Health Technology, Technical University of Denmark, Lyngby, Denmark
31
Burkhardt P, Müller V, Meister H, Weglage A, Lang-Roth R, Walger M, Sandmann P. Age effects on cognitive functions and speech-in-noise processing: An event-related potential study with cochlear-implant users and normal-hearing listeners. Front Neurosci 2022; 16:1005859. [PMID: 36620447 PMCID: PMC9815545 DOI: 10.3389/fnins.2022.1005859] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/28/2022] [Accepted: 11/15/2022] [Indexed: 12/24/2022] Open
Abstract
A cochlear implant (CI) can partially restore hearing in individuals with profound sensorineural hearing loss. However, electrical hearing with a CI is limited and highly variable. The current study aimed to better understand the different factors contributing to this variability by examining how age affects cognitive functions and cortical speech processing in CI users. Electroencephalography (EEG) was applied while two groups of CI users (young and elderly; N = 13 each) and normal-hearing (NH) listeners (young and elderly; N = 13 each) performed an auditory sentence categorization task, including semantically correct and incorrect sentences presented either with or without background noise. Event-related potentials (ERPs) representing earlier, sensory-driven processes (N1-P2 complex to sentence onset) and later, cognitive-linguistic integration processes (N400 to semantically correct/incorrect sentence-final words) were compared between the different groups and speech conditions. The results revealed reduced amplitudes and prolonged latencies of auditory ERPs in CI users compared to NH listeners, both at earlier (N1, P2) and later processing stages (N400 effect). In addition to this hearing-group effect, CI users and NH listeners showed a comparable background-noise effect, as indicated by reduced hit rates and reduced (P2) and delayed (N1/P2) ERPs in conditions with background noise. Moreover, we observed an age effect in CI users and NH listeners, with young individuals showing better performance on specific cognitive functions (working memory capacity, cognitive flexibility, and verbal learning/retrieval), reduced latencies (N1/P2), decreased N1 amplitudes, and an increased N400 effect compared to the elderly. In sum, our findings extend previous research by showing that CI users' speech processing is impaired not only at earlier (sensory) but also at later (semantic integration) processing stages, both in conditions with and without background noise.
Using objective ERP measures, our study provides further evidence of strong age effects on cortical speech processing, observable in both NH listeners and CI users. We conclude that elderly individuals require more effortful processing at sensory stages of speech processing, which, however, appears to come at the cost of the limited resources available for later semantic integration processes.
Affiliation(s)
- Pauline Burkhardt
- Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Verena Müller
- Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Hartmut Meister
- Jean-Uhrmacher-Institute for Clinical ENT-Research, University of Cologne, Cologne, Germany
- Anna Weglage
- Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Ruth Lang-Roth
- Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Martin Walger
- Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
- Jean-Uhrmacher-Institute for Clinical ENT-Research, University of Cologne, Cologne, Germany
- Pascale Sandmann
- Department of Otorhinolaryngology, Head and Neck Surgery, Audiology and Pediatric Audiology, Cochlear Implant Center, Faculty of Medicine and University Hospital Cologne, University of Cologne, Cologne, Germany
32
Shin S, Warner-Czyz A, Geers A, Katz WF. Speaking Rate, Immediate Memory, and Grammatical Processing in Prelingual Cochlear Implant Recipients. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2022; 65:4637-4651. [PMID: 36475864 DOI: 10.1044/2022_jslhr-22-00163] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/17/2023]
Abstract
PURPOSE This study examined the extent to which prelingual cochlear implant (CI) users show a slowed speaking rate compared with typical-hearing (TH) talkers when repeating various speech stimuli and whether the slowed speech of CI users relates to their immediate verbal memory. METHOD Participants included 10 prelingually deaf teenagers who received CIs before the age of 5 years and 10 age-matched TH teenagers. Participants repeated nonword syllable strings, word strings, and center-embedded sentences, with conditions balanced for syllable length and metrical structure. Participants' digit span forward and backward scores were collected to measure immediate verbal memory. Speaking rate data were analyzed using a mixed-design, repeated-measures analysis of variance, and the relationships between speaking rate and digit spans were evaluated by Pearson correlation. RESULTS Participants with CIs spoke more slowly than their TH peers during the sentence repetition task but not in the nonword string and word string repetition tasks. For the CI group, significant correlations emerged between speaking rate and digit span scores (both forward and backward) for the sentence repetition task but not for the nonword string or word string repetition task. For the TH group, no significant correlations were found. CONCLUSIONS The findings indicate a relation between slowed speech production, reduced immediate verbal memory, and diminished language capabilities of prelingual CI users, particularly for syntactic processing. These results support theories claiming that immediate memory, including components of a central executive, influences the speaking rate of these talkers. Implications for therapies designed to increase speech fluency in CI recipients are discussed. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21644795.
Affiliation(s)
- Sujin Shin
- Department of Communication Sciences and Disorders, University of Redlands, CA
- Andrea Warner-Czyz
- Department of Speech, Language, and Hearing, The University of Texas at Dallas
- Ann Geers
- Department of Speech, Language, and Hearing, The University of Texas at Dallas
- William F Katz
- Department of Speech, Language, and Hearing, The University of Texas at Dallas
33
|
Gianakas SP, Fitzgerald MB, Winn MB. Identifying Listeners Whose Speech Intelligibility Depends on a Quiet Extra Moment After a Sentence. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2022; 65:4852-4865. [PMID: 36472938 PMCID: PMC9934912 DOI: 10.1044/2022_jslhr-21-00622] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/23/2021] [Revised: 05/29/2022] [Accepted: 08/16/2022] [Indexed: 06/03/2023]
Abstract
PURPOSE An extra moment after a sentence is spoken may be important for listeners with hearing loss to mentally repair misperceptions during listening. The current audiologic test battery cannot distinguish a listener who repaired a misperception from a listener who heard the speech accurately with no need for repair. This study aims to develop a behavioral method to identify individuals who are at risk for relying on a quiet moment after a sentence. METHOD Forty-three individuals with hearing loss (32 cochlear implant users, 11 hearing aid users) heard sentences that were followed by either 2 s of silence or 2 s of babble noise. Both high- and low-context sentences were used in the task. RESULTS Some individuals showed notable benefit in accuracy scores (particularly for high-context sentences) when given an extra moment of silent time following the sentence. This benefit was highly variable across individuals and sometimes absent altogether. However, the group-level patterns of results were mainly explained by the use of context and successful perception of the words preceding sentence-final words. CONCLUSIONS These results suggest that some but not all individuals improve their speech recognition score by relying on a quiet moment after a sentence, and that this fragility of speech recognition cannot be assessed using one isolated utterance at a time. Reliance on a quiet moment to repair perceptions would potentially impede the perception of an upcoming utterance, making continuous communication in real-world scenarios difficult, especially for individuals with hearing loss. The methods used in this study, along with some simple modifications if necessary, could potentially identify patients with hearing loss who retroactively repair mistakes by using clinically feasible methods that can ultimately lead to better patient-centered hearing health care. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21644801.
34
|
Fletcher AR, Wisler AA, Gruver ER, Borrie SA. Beyond Speech Intelligibility: Quantifying Behavioral and Perceived Listening Effort in Response to Dysarthric Speech. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2022; 65:4060-4070. [PMID: 36198057 PMCID: PMC9940894 DOI: 10.1044/2022_jslhr-22-00136] [Citation(s) in RCA: 4] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/03/2023]
Abstract
PURPOSE This study investigated whether listener processing of dysarthric speech requires the recruitment of more cognitive resources (i.e., higher levels of listening effort) than neurotypical speech. We also explored relationships between behavioral listening effort, perceived listening effort, and objective measures of word transcription accuracy. METHOD A word recall paradigm was used to index behavioral listening effort. The primary task involved word transcription, whereas a memory task involved recalling words from previous sentences. Nineteen listeners completed the paradigm twice, once while transcribing dysarthric speech and once while transcribing neurotypical speech. Perceived listening effort was rated using a visual analog scale. RESULTS Results revealed significant effects of dysarthria on the likelihood of correct word recall, indicating that the transcription of dysarthric speech required higher levels of behavioral listening effort relative to neurotypical speech. There was also a significant relationship between transcription accuracy and measures of behavioral listening effort, such that listeners who were more accurate in understanding dysarthric speech exhibited smaller changes in word recall when listening to dysarthria. The subjective measure of perceived listening effort did not have a statistically significant correlation with measures of behavioral listening effort or transcription accuracy. CONCLUSIONS Results suggest that cognitive resources, particularly listeners' working memory capacity, are more taxed when deciphering dysarthric versus neurotypical speech. An increased demand on these resources may affect a listener's ability to remember aspects of their conversations with people with dysarthria, even when the speaker is fully intelligible.
Affiliation(s)
- Annalise R. Fletcher
- Department of Communicative Disorders and Deaf Education, Utah State University, Logan
- Alan A. Wisler
- Department of Mathematics and Statistics, Utah State University, Logan
- Emily R. Gruver
- Department of Audiology and Speech-Language Pathology, University of North Texas, Denton
- Stephanie A. Borrie
- Department of Communicative Disorders and Deaf Education, Utah State University, Logan
35
|
Li M, Chen X, Zhu J, Chen F. Audiovisual Mandarin Lexical Tone Perception in Quiet and Noisy Contexts: The Influence of Visual Cues and Speech Rate. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2022; 65:4385-4403. [PMID: 36269618 DOI: 10.1044/2022_jslhr-22-00024] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
PURPOSE Drawing on the theory of embodied cognition, which proposes tight interactions among perception, motor processes, and cognition, this study aimed to test the hypothesis that speech rate-altered Mandarin lexical tone perception in quiet and noisy environments could be affected by bodily dynamic cross-modal information. METHOD Fifty-three adult listeners completed a Mandarin tone perception task with 720 tone stimuli in auditory-only (AO), auditory-facial (AF), and auditory-facial-plus-gestural (AFG) modalities, at fast, normal, and slow speech rates under quiet and noisy conditions. In AF and AFG modalities, both congruent and incongruent audiovisual information were designed and presented. Generalized linear mixed-effects models were constructed to analyze the accuracy of tone perception across different conditions. RESULTS In Mandarin tone perception, the magnitude of enhancement of AF and AFG cues across three speech rates was significantly higher than that of the AO cue in the adverse context of noise, yet additional metaphoric gestures did not show significant differences from the facial information. Furthermore, the performance of auditory tone perception at the fast speech rate was significantly better than that at the normal speech rate when the inputs were incongruent between auditory and visual channels in quiet. CONCLUSIONS This study provided compelling evidence showing that integrated audiovisual information plays a vital role not only in improving lexical tone perception in noise but also in modulating the effects of speech rate on Mandarin tone perception in quiet for native listeners. Our findings support the theory of embodied cognition and have implications for speech and hearing rehabilitation among both young and old clinical populations.
Affiliation(s)
- Manhong Li
- School of Foreign Languages, Hunan University, Changsha, China
- School of Foreign Languages, Hunan First Normal University, Changsha, China
- Xiaoxiang Chen
- School of Foreign Languages, Hunan University, Changsha, China
- Jiaqiang Zhu
- Research Centre for Language, Cognition, and Neuroscience, Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, China
- Fei Chen
- School of Foreign Languages, Hunan University, Changsha, China
36
|
Evaluation of the Benefits of Bilateral Fitting in Bone-Anchored Hearing System Users: Spatial Resolution and Memory for Speech. Ear Hear 2022; 44:530-543. [PMID: 36378104 PMCID: PMC10097484 DOI: 10.1097/aud.0000000000001297] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
Abstract
OBJECTIVES The purpose of this study was to evaluate the benefits of bilateral implantation for bone-anchored hearing system (BAHS) users in terms of spatial resolution abilities and auditory memory for speech. DESIGN This is a prospective, single-center, comparative, single-blinded study where the listeners served as their own control. Twenty-four experienced bone-anchored users with a bilateral conductive or mixed hearing loss participated in the study. After fitting the listeners unilaterally and bilaterally with BAHS sound processor(s) (Ponto 3 SuperPower), spatial resolution was estimated by measuring the minimum audible angle (MAA) to achieve an 80% correct response via a two-alternative forced-choice task (right-left discrimination of noise bursts) in two conditions: both sound processors active (bilateral condition) and only one sound processor active (unilateral condition). In addition, a memory recall test, the Sentence-final Word Identification and Recall (SWIR) test, was performed with five lists of seven sentences for each of the two conditions (unilateral and bilateral). Self-reported performance in everyday life with the listener's own sound processors was also evaluated via a questionnaire (the abbreviated version of the Speech, Spatial and Qualities of Hearing scale). RESULTS The MAA to discriminate noise bursts improved significantly from 75.04° in the unilateral condition to 3.61° in the bilateral condition (p < 0.0001). The average improvement in performance was 54.28°. The SWIR test results showed that the listeners could recall, on average, 55.03% of the last words in a list of seven sentences in the unilateral condition and 57.23% in the bilateral condition.
While the main effect of condition was not significant, there was a significant interaction between condition and repetition (list), revealing significantly higher recall performance in the bilateral condition than in the unilateral condition for the second repetition/list out of five (10.2% difference; p = 0.022). Self-reported performance with bilateral BAHS obtained via the Speech, Spatial and Qualities of Hearing scale questionnaire was, on average, 4.4 for speech, 3.7 for spatial, and 5.1 for qualities of hearing. There was no correlation between self-reported performance in everyday life and bilateral performance in the MAA test, while significant correlations were obtained between self-reported performance and recall performance in the SWIR test. CONCLUSIONS These results showed a large benefit in spatial resolution for users with symmetric bone-conduction thresholds when fitted with two BAHS, although their self-reported performance with bilateral BAHS in everyday life was rather low. In addition, there was no overall benefit of bilateral fitting on memory for speech, despite a benefit observed in one out of five repetitions of the SWIR test. Performance in the SWIR test was correlated with the users' self-reported performance in everyday life, such that users with higher recall ability reported achieving better performance in real life. These findings highlight the advantages of bilateral fitting for spatial resolution, although bilaterally fitted BAHS users continue to experience some difficulties in their daily lives, especially when locating sounds and judging distance and movement. More research is needed to support a higher penetration of bilateral BAHS treatment for bilateral conductive and mixed hearing losses.
37
|
Köse B, Karaman-Demirel A, Çiprut A. Psychoacoustic abilities in pediatric cochlear implant recipients: The relation with short-term memory and working memory capacity. Int J Pediatr Otorhinolaryngol 2022; 162:111307. [PMID: 36116181 DOI: 10.1016/j.ijporl.2022.111307] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/20/2022] [Revised: 08/30/2022] [Accepted: 08/31/2022] [Indexed: 11/25/2022]
Abstract
OBJECTIVE The aim was to investigate school-age children with cochlear implants (CIs) and their typically developing peers in terms of auditory short-term memory (ASTM), auditory working memory (AWM), visuospatial short-term memory (VSTM), visuospatial working memory (VWM), spectral resolution, and monosyllabic word recognition in noise. METHODS Twenty-three prelingually deaf CI users and twenty-three typically developing (TD) peers aged 7-10 years participated. Twelve children with CIs were earlier-implanted (i.e., age at implantation ≤24 months). Children with CIs were compared with TD peers, and correlations between cognitive and psychoacoustic abilities were computed separately for the groups. In addition, regression analyses were conducted to develop models that could predict SMRT (spectral-temporally modulated ripple test) and speech recognition scores. RESULTS The AWM scores of the later-implanted group were significantly lower than those of both the earlier-implanted and TD groups. ASTM scores of TD children were significantly higher than those of both earlier-implanted and later-implanted participants. There was no statistically significant difference between groups in terms of VSTM and VWM. AWM performance was positively correlated with ASTM, SMRT scores, and speech recognition under noisy conditions for pediatric CI recipients. AWM was a statistically significant predictor of the SMRT score, and the SMRT score was a predictor of the speech recognition score under the 0 dB SNR condition. CONCLUSION Most children using CIs are at risk for clinically significant deficits in cognitive abilities such as AWM and ASTM. When evaluating cognitive and psychoacoustic abilities in routine clinical practice, it should be kept in mind that these abilities can influence each other.
Affiliation(s)
- Büşra Köse
- Department of Audiology, School of Medicine, Marmara University, Istanbul, Turkey; Koç University Research Center for Translational Medicine (KUTTAM), Istanbul, Turkey.
- Ayşenur Karaman-Demirel
- Department of Audiology, School of Medicine, Marmara University, Istanbul, Turkey; Vocational School of Health Services, Okan University, Istanbul, Turkey
- Ayça Çiprut
- Department of Audiology, School of Medicine, Marmara University, Istanbul, Turkey
38
|
Rönnberg J, Signoret C, Andin J, Holmer E. The cognitive hearing science perspective on perceiving, understanding, and remembering language: The ELU model. Front Psychol 2022; 13:967260. [PMID: 36118435 PMCID: PMC9477118 DOI: 10.3389/fpsyg.2022.967260] [Citation(s) in RCA: 13] [Impact Index Per Article: 6.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/12/2022] [Accepted: 08/08/2022] [Indexed: 11/13/2022] Open
Abstract
The review gives an introductory description of the successive development of data patterns based on comparisons between hearing-impaired and normal hearing participants' speech understanding skills, later prompting the formulation of the Ease of Language Understanding (ELU) model. The model builds on the interaction between an input buffer (RAMBPHO, Rapid Automatic Multimodal Binding of PHOnology) and three memory systems: working memory (WM), semantic long-term memory (SLTM), and episodic long-term memory (ELTM). RAMBPHO input may either match or mismatch multimodal SLTM representations. Given a match, lexical access is accomplished rapidly and implicitly within approximately 100-400 ms. Given a mismatch, the prediction is that WM is engaged explicitly to repair the meaning of the input - in interaction with SLTM and ELTM - taking seconds rather than milliseconds. The multimodal and multilevel nature of representations held in WM and LTM are at the center of the review, being integral parts of the prediction and postdiction components of language understanding. Finally, some hypotheses based on a selective use-disuse of memory systems mechanism are described in relation to mild cognitive impairment and dementia. Alternative speech perception and WM models are evaluated, and recent developments and generalisations, ELU model tests, and boundaries are discussed.
Affiliation(s)
- Jerker Rönnberg
- Linnaeus Centre HEAD, Department of Behavioural Sciences and Learning, Linköping University, Linköping, Sweden
39
|
Beechey T. Is speech intelligibility what speech intelligibility tests test? THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2022; 152:1573. [PMID: 36182275 DOI: 10.1121/10.0013896] [Citation(s) in RCA: 8] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/04/2022] [Accepted: 08/17/2022] [Indexed: 06/16/2023]
Abstract
Natural, conversational speech signals contain sources of symbolic and iconic information, both of which are necessary for the full understanding of speech. But speech intelligibility tests, which are generally derived from written language, present only symbolic information sources, including lexical semantics and syntactic structures. Speech intelligibility tests exclude almost all sources of information about talkers, including their communicative intentions and their cognitive states and processes. There is no reason to suspect that either hearing impairment or noise selectively affect perception of only symbolic information. We must therefore conclude that diagnosis of good or poor speech intelligibility on the basis of standard speech tests is based on measurement of only a fraction of the task of speech perception. This paper presents a descriptive comparison of information sources present in three widely used speech intelligibility tests and spontaneous, conversational speech elicited using a referential communication task. The aim of this comparison is to draw attention to the differences in not just the signals, but the tasks of listeners perceiving these different speech signals and to highlight the implications of these differences for the interpretation and generalizability of speech intelligibility test results.
Affiliation(s)
- Timothy Beechey
- Hearing Sciences-Scottish Section, School of Medicine, The University of Nottingham, Glasgow G31 2ER, United Kingdom
40
|
Cowan T, Paroby C, Leibold LJ, Buss E, Rodriguez B, Calandruccio L. Masked-Speech Recognition for Linguistically Diverse Populations: A Focused Review and Suggestions for the Future. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2022; 65:3195-3216. [PMID: 35917458 PMCID: PMC9911100 DOI: 10.1044/2022_jslhr-22-00011] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/07/2022] [Revised: 04/12/2022] [Accepted: 05/04/2022] [Indexed: 06/15/2023]
Abstract
PURPOSE Twenty years ago, von Hapsburg and Peña (2002) wrote a tutorial that reviewed the literature on speech audiometry and bilingualism and outlined valuable recommendations to increase the rigor of the evidence base. This review article returns to that seminal tutorial to reflect on how that advice was applied over the last 20 years and to provide updated recommendations for future inquiry. METHOD We conducted a focused review of the literature on masked-speech recognition for bilingual children and adults. First, we evaluated how studies published since 2002 described bilingual participants. Second, we reviewed the literature on native language masked-speech recognition. Third, we discussed theoretically motivated experimental work. Fourth, we outlined how recent research in bilingual speech recognition can be used to improve clinical practice. RESULTS Research conducted since 2002 commonly describes bilingual samples in terms of their language status, competency, and history. Bilingualism was not consistently associated with poor masked-speech recognition. For example, bilinguals who were exposed to English prior to age 7 years and who were dominant in English performed comparably to monolinguals for masked-sentence recognition tasks. To the best of our knowledge, there are no data to document the masked-speech recognition ability of these bilinguals in their other language compared to a second monolingual group, which is an important next step. Nonetheless, individual factors that commonly vary within bilingual populations were associated with masked-speech recognition and included language dominance, competency, and age of acquisition. We identified methodological issues in sampling strategies that could, in part, be responsible for inconsistent findings between studies. For instance, disparities in socioeconomic status (SES) between recruited bilingual and monolingual groups could cause confounding bias within the research design. 
CONCLUSIONS Dimensions of the bilingual linguistic profile should be considered in clinical practice to inform counseling and (re)habilitation strategies since susceptibility to masking is elevated in at least one language for most bilinguals. Future research should continue to report language status, competency, and history but should also report language stability and demand for use data. In addition, potential confounds (e.g., SES, educational attainment) when making group comparisons between monolinguals and bilinguals must be considered.
Affiliation(s)
- Tiana Cowan
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Caroline Paroby
- Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH
- Lori J. Leibold
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, The University of North Carolina at Chapel Hill
- Barbara Rodriguez
- Department of Speech and Hearing Sciences, The University of New Mexico, Albuquerque
- Lauren Calandruccio
- Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH
41
|
Nitsan G, Baharav S, Tal-Shir D, Shakuf V, Ben-David BM. Speech Processing as a Far-Transfer Gauge of Serious Games for Cognitive Training in Aging: Randomized Controlled Trial of Web-Based Effectivate Training. JMIR Serious Games 2022; 10:e32297. [PMID: 35900825 PMCID: PMC9400949 DOI: 10.2196/32297] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/24/2021] [Revised: 04/21/2022] [Accepted: 04/28/2022] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND The number of serious games for cognitive training in aging (SGCTAs) is proliferating in the market and attempting to combat one of the most feared aspects of aging: cognitive decline. However, the efficacy of many SGCTAs is still questionable. Even the measures used to validate SGCTAs are up for debate, with most studies using cognitive measures that gauge improvement in trained tasks, also known as near transfer. This study takes a different approach, testing the efficacy of the SGCTA, Effectivate, in generating tangible far-transfer improvements in a nontrained task, the Eye tracking of Word Identification in Noise Under Memory Increased Load (E-WINDMIL), which tests speech processing in adverse conditions. OBJECTIVE This study aimed to validate the use of a real-time measure of speech processing as a gauge of the far-transfer efficacy of an SGCTA designed to train executive functions. METHODS In a randomized controlled trial that included 40 participants, we tested 20 (50%) older adults before and after self-administering the SGCTA Effectivate training and compared their performance with that of the control group of 20 (50%) older adults. The E-WINDMIL eye-tracking task was administered to all participants by blinded experimenters in 2 sessions separated by 2 to 8 weeks. RESULTS Specifically, we tested the change between sessions in the efficiency of segregating the spoken target word from its sound-sharing alternative, as the word unfolds in time. We found that training with the SGCTA Effectivate improved both early and late speech processing in adverse conditions, with higher discrimination scores in the training group than in the control group (early processing: F1,38=7.371; P=.01; ηp2=0.162 and late processing: F1,38=9.003; P=.005; ηp2=0.192). CONCLUSIONS This study found the E-WINDMIL measure of speech processing to be a valid gauge for the far-transfer effects of executive function training.
As the SGCTA Effectivate does not train any auditory task or language processing, our results provide preliminary support for the ability of Effectivate to create a generalized cognitive improvement. Given the crucial role of speech processing in healthy and successful aging, we encourage researchers and developers to use speech processing measures, the E-WINDMIL in particular, to gauge the efficacy of SGCTAs. We advocate for increased industry-wide adoption of far-transfer metrics to gauge SGCTAs.
Affiliation(s)
- Gal Nitsan
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel; Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Shai Baharav
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Dalith Tal-Shir
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Vered Shakuf
- Department of Communications Disorders, Achva Academic College, Arugot, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel; Toronto Rehabilitation Institute, University Health Networks, Toronto, ON, Canada; Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
42
|
Eser BN, Şerbetçioğlu MB. Auditory Short-Term Memory Evaluation in Noise in Musicians. J Am Acad Audiol 2022; 33:375-380. [PMID: 35817310 DOI: 10.1055/a-1896-5129] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/17/2022]
Abstract
BACKGROUND Working memory, a short-term memory component, is a multicomponent system that manages attention and short-term memory in speech perception under challenging listening conditions. These challenging conditions cause listening effort that can be objectively evaluated by pupillometry. Studies show that auditory working memory is more developed in musicians for complex auditory tasks. PURPOSE This study aims to compare listening effort and short-term memory in noise between musicians and nonmusicians. RESEARCH DESIGN An experimental research design was adopted for the study. STUDY SAMPLE The study was conducted on 22 musicians and 20 nonmusicians between the ages of 20 and 45. DATA COLLECTION AND ANALYSIS Listening effort was measured with pupillometry; performance was measured as the short-term memory score obtained by listening to the 15 word lists of the Verbal Memory Processes Test. Participants were tested under three conditions: quiet, +15 signal-to-noise ratio (SNR), and +5 SNR. RESULTS While nonmusicians showed a significantly higher short-term memory score (STMS) than musicians in the quiet condition, musicians' STMS were significantly higher in both noise conditions (+15 SNR and +5 SNR). The nonmusicians' average percentage of pupil growth was higher than the musicians' in all three conditions. CONCLUSION Overall, musicians had better memory performance in noise and expended less effort in the listening task, as indicated by lower pupil growth. This study objectively evaluated the differences between participants' listening efforts by pupillometry. It was also observed that SNR and music training affect memory performance.
Affiliation(s)
- Büşra Nur Eser
- Department of Audiology, Graduate School of Health Sciences, Istanbul Medipol University, Istanbul, Turkey
43
|
Nitsan G, Banai K, Ben-David BM. One Size Does Not Fit All: Examining the Effects of Working Memory Capacity on Spoken Word Recognition in Older Adults Using Eye Tracking. Front Psychol 2022; 13:841466. [PMID: 35478743 PMCID: PMC9037998 DOI: 10.3389/fpsyg.2022.841466] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/22/2021] [Accepted: 03/14/2022] [Indexed: 11/13/2022] Open
Abstract
Difficulties understanding speech form one of the most prevalent complaints among older adults. Successful speech perception depends on top-down linguistic and cognitive processes that interact with the bottom-up sensory processing of the incoming acoustic information. The relative roles of these processes in age-related difficulties in speech perception, especially when listening conditions are not ideal, are still unclear. In the current study, we asked whether older adults with a larger working memory capacity process speech more efficiently than peers with lower capacity when speech is presented in noise, with another task performed in tandem. Using the Eye-tracking of Word Identification in Noise Under Memory Increased Load (E-WINDMIL), an adapted version of the "visual world" paradigm, 36 older listeners were asked to follow spoken instructions presented in background noise, while retaining digits for later recall under low (single-digit) or high (four-digit) memory load. In critical trials, instructions (e.g., "point at the candle") directed listeners' gaze to pictures of objects whose names shared onset or offset sounds with the name of a competitor that was displayed on the screen at the same time (e.g., candy or sandal). We compared listeners with different memory capacities on the time course for spoken word recognition under the two memory loads by testing eye-fixations on a named object, relative to fixations on an object whose name shared phonology with the named object. Results indicated two trends. (1) For older adults with lower working memory capacity, increased memory load did not affect online speech processing; however, it impaired offline word recognition accuracy. (2) The reverse pattern was observed for older adults with higher working memory capacity: increased task difficulty significantly decreased online speech processing efficiency but had no effect on offline word recognition accuracy.
Results suggest that in older adults, adaptation to adverse listening conditions is at least partially supported by cognitive reserve. Therefore, additional cognitive capacity may lead to greater resilience of older listeners to adverse listening conditions. The differential effects documented by eye movements and accuracy highlight the importance of using both online and offline measures of speech processing to explore age-related changes in speech perception.
Affiliation(s)
- Gal Nitsan
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Karen Banai
- Department of Communication Sciences and Disorders, University of Haifa, Haifa, Israel
- Boaz M. Ben-David
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- Toronto Rehabilitation Institute, University Health Networks, Toronto, ON, Canada
|
44
|
Bieber RE, Brodbeck C, Anderson S. Examining the context benefit in older adults: A combined behavioral-electrophysiologic word identification study. Neuropsychologia 2022; 170:108224. [PMID: 35346650 DOI: 10.1016/j.neuropsychologia.2022.108224] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/28/2021] [Revised: 02/18/2022] [Accepted: 03/22/2022] [Indexed: 10/18/2022]
Abstract
When listening to degraded speech, listeners can use high-level semantic information to support recognition. The literature contains conflicting findings regarding older listeners' ability to benefit from semantic cues in recognizing speech, relative to younger listeners. Electrophysiologic (EEG) measures of lexical access (N400) often show that semantic context does not facilitate lexical access in older listeners; in contrast, auditory behavioral studies indicate that semantic context improves speech recognition in older listeners as much as, or more than, in younger listeners. Many behavioral studies of aging and the context benefit have employed signal degradation or alteration, whereas this stimulus manipulation has been absent in the EEG literature, a possible reason for the inconsistencies between studies. Here we compared the context benefit as a function of age and signal type, using EEG combined with behavioral measures. Non-native accent, a common form of signal alteration which many older adults report as a challenge in daily speech recognition, was utilized for testing. The stimuli included English sentences produced by native speakers of English and Spanish, containing target words differing in cloze probability. Listeners performed a word identification task while 32-channel cortical responses were recorded. Results show that older adults' word identification performance was poorer in the low-predictability and non-native talker conditions than that of younger adults, replicating earlier behavioral findings. However, older adults did not show reductions or delays in the average N400 response as compared to younger listeners, suggesting no age-related reduction in predictive processing capability. Potential sources for discrepancies in the prior literature are discussed.
Affiliation(s)
- Rebecca E Bieber
- Department of Hearing and Speech Sciences, 0100 Lefrak Hall, University of Maryland College Park, College Park, MD 20740, USA
- Christian Brodbeck
- Department of Psychological Sciences, University of Connecticut, Storrs, CT 06269, USA
- Samira Anderson
- Department of Hearing and Speech Sciences, 0100 Lefrak Hall, University of Maryland College Park, College Park, MD 20740, USA
|
45
|
Ma W, Zhang Y, Li X, Liu S, Gao Y, Yang J, Xu L, Liang H, Ren F, Gao F, Wang Y. High-Frequency Hearing Loss Is Associated With Anxiety and Brain Structural Plasticity in Older Adults. Front Aging Neurosci 2022; 14:821537. [PMID: 35360202 PMCID: PMC8961435 DOI: 10.3389/fnagi.2022.821537] [Citation(s) in RCA: 6] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/24/2021] [Accepted: 02/09/2022] [Indexed: 12/01/2022] Open
Abstract
Age-related hearing loss (ARHL) is a symmetrical, slowly progressive sensorineural hearing loss that is common in older adults. Characteristically, the loss begins in the high-frequency region and spreads toward lower frequencies with age. Previous studies have linked ARHL to anxiety, suggesting that brain structure may be involved in compensatory plasticity after partial hearing deprivation. However, the neural mechanisms underlying ARHL-related anxiety remain unclear. The purpose of this cross-sectional study was to explore the interactions among high-frequency hearing loss, anxiety, and brain structure in older adults. Sixty-seven ARHL patients and 68 normal-hearing (NH) controls participated in this study; the inclusion criterion for the ARHL group was a four-frequency (0.5, 1, 2, and 4 kHz) pure-tone average (PTA) > 25 decibels hearing level in the better-hearing ear. All participants completed three-dimensional T1-weighted magnetic resonance imaging (MRI), pure-tone audiometry tests, and anxiety and depression scales. Gray matter volume (GMV) was decreased in 20 brain regions in the ARHL group compared with the NH group, and a positive correlation existed between high-frequency pure-tone audiometry (H-PT) and anxiety scores in the ARHL group. Among the 20 brain regions, the GMVs of the middle cingulate cortex (MCC) and the hippocampal/parahippocampal (H-P) regions were each associated with H-PT and anxiety scores across all participants. Depressive symptoms, in contrast, showed no relationship with the hearing assessments or GMVs. Our findings reveal a crucial role of the MCC and H-P regions in the link between anxiety and hearing loss in older adults.
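The four-frequency PTA inclusion criterion described in this abstract is a simple computation; a minimal sketch, with made-up audiogram values, is:

```python
# Sketch: the four-frequency pure-tone average (PTA) criterion described
# above (0.5, 1, 2, and 4 kHz; ARHL if the PTA of the better-hearing ear
# exceeds 25 dB HL). The threshold values below are hypothetical.

def pta(thresholds_db_hl, freqs_khz=(0.5, 1, 2, 4)):
    """Mean pure-tone threshold (dB HL) over the given frequencies."""
    return sum(thresholds_db_hl[f] for f in freqs_khz) / len(freqs_khz)

def meets_arhl_criterion(left_ear, right_ear, cutoff_db=25):
    """True if even the better (lower-PTA) ear averages above the cutoff."""
    return min(pta(left_ear), pta(right_ear)) > cutoff_db

left = {0.5: 20, 1: 25, 2: 35, 4: 50}   # hypothetical audiogram, dB HL
right = {0.5: 25, 1: 30, 2: 40, 4: 55}
```

Note that the sloping pattern in the example (worse thresholds at 2 and 4 kHz) mirrors the high-frequency onset of ARHL described in the abstract.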
Affiliation(s)
- Wen Ma
- Department of Otolaryngology, Central Hospital Affiliated to Shandong First Medical University, Jinan, China
- Yue Zhang
- School of Life Sciences, Tiangong University, Tianjin, China
- Xiao Li
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
- Siqi Liu
- School of Life Sciences, Tiangong University, Tianjin, China
- Yuting Gao
- School of Life Sciences, Tiangong University, Tianjin, China
- Jing Yang
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
- Longji Xu
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
- Hudie Liang
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
- Fuxin Ren
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
- Fei Gao
- Department of Radiology, Shandong Provincial Hospital Affiliated to Shandong First Medical University, Jinan, China
- Yao Wang
- School of Life Sciences, Tiangong University, Tianjin, China
- School of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, China
|
46
|
Gordon-Salant S, Schwartz MS, Oppler KA, Yeni-Komshian GH. Detection and Recognition of Asynchronous Auditory/Visual Speech: Effects of Age, Hearing Loss, and Talker Accent. Front Psychol 2022; 12:772867. [PMID: 35153900 PMCID: PMC8832148 DOI: 10.3389/fpsyg.2021.772867] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/08/2021] [Accepted: 12/21/2021] [Indexed: 11/13/2022] Open
Abstract
This investigation examined age-related differences in auditory-visual (AV) integration as reflected on perceptual judgments of temporally misaligned AV English sentences spoken by native English and native Spanish talkers. In the detection task, it was expected that slowed auditory temporal processing of older participants, relative to younger participants, would be manifest as a shift in the range over which participants would judge asynchronous stimuli as synchronous (referred to as the "AV simultaneity window"). The older participants were also expected to exhibit greater declines in speech recognition for asynchronous AV stimuli than younger participants. Talker accent was hypothesized to influence listener performance, with older listeners exhibiting a greater narrowing of the AV simultaneity window and much poorer recognition of asynchronous AV foreign-accented speech compared to younger listeners. Participant groups included younger and older participants with normal hearing and older participants with hearing loss. Stimuli were video recordings of sentences produced by native English and native Spanish talkers. The video recordings were altered in 50 ms steps by delaying either the audio or video onset. Participants performed a detection task in which they judged whether the sentences were synchronous or asynchronous, and performed a recognition task for multiple synchronous and asynchronous conditions. Both the detection and recognition tasks were conducted at the individualized signal-to-noise ratio (SNR) corresponding to approximately 70% correct speech recognition performance for synchronous AV sentences. Older listeners with and without hearing loss generally showed wider AV simultaneity windows than younger listeners, possibly reflecting slowed auditory temporal processing in auditory lead conditions and reduced sensitivity to asynchrony in auditory lag conditions. 
However, older and younger listeners were affected similarly by misalignment of auditory and visual signal onsets on the speech recognition task. This suggests that older listeners are negatively impacted by temporal misalignments for speech recognition, even when they do not notice that the stimuli are asynchronous. Overall, the findings show that when listener performance is equated for simultaneous AV speech signals, age effects are apparent in detection judgments but not in recognition of asynchronous speech.
Affiliation(s)
- Sandra Gordon-Salant
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, United States
|
47
|
Bieber RE, Gordon-Salant S. Semantic context and stimulus variability independently affect rapid adaptation to non-native English speech in young adults. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2022; 151:242. [PMID: 35104999 PMCID: PMC8769767 DOI: 10.1121/10.0009170] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/16/2021] [Revised: 11/26/2021] [Accepted: 12/07/2021] [Indexed: 06/14/2023]
Abstract
When speech is degraded or challenging to recognize, young adult listeners with normal hearing are able to quickly adapt, improving their recognition of the speech over a short period of time. This rapid adaptation is robust, but the factors influencing rate, magnitude, and generalization of improvement have not been fully described. Two factors of interest are lexico-semantic information and talker and accent variability; lexico-semantic information promotes perceptual learning for acoustically ambiguous speech, while talker and accent variability are beneficial for generalization of learning. In the present study, rate and magnitude of adaptation were measured for speech varying in level of semantic context, and in the type and number of talkers. Generalization of learning to an unfamiliar talker was also assessed. Results indicate that rate of rapid adaptation was slowed for semantically anomalous sentences, as compared to semantically intact or topic-grouped sentences; however, generalization was seen in the anomalous conditions. Magnitude of adaptation was greater for non-native as compared to native talker conditions, with no difference between single and multiple non-native talker conditions. These findings indicate that the previously documented benefit of lexical information in supporting rapid adaptation is not enhanced by the addition of supra-sentence context.
Affiliation(s)
- Rebecca E Bieber
- Department of Hearing and Speech Sciences, University of Maryland College Park, College Park, Maryland 20742, USA
- Sandra Gordon-Salant
- Department of Hearing and Speech Sciences, University of Maryland College Park, College Park, Maryland 20742, USA
|
48
|
Remote Microphone Systems Can Improve Listening-in-Noise Accuracy and Listening Effort for Youth With Autism. Ear Hear 2022; 43:436-447. [PMID: 35030553 PMCID: PMC8881266 DOI: 10.1097/aud.0000000000001058] [Citation(s) in RCA: 5] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES This study examined whether remote microphone (RM) systems improved listening-in-noise performance in youth with autism. We explored effects of RM system use on both listening-in-noise accuracy and listening effort in a well-characterized sample of participants with autism. We hypothesized that listening-in-noise accuracy would be enhanced and listening effort reduced, on average, when participants used the RM system. Furthermore, we predicted that effects of RM system use on listening-in-noise accuracy and listening effort would vary according to participant characteristics. Specifically, we hypothesized that participants who were chronologically older, had greater nonverbal cognitive and language ability, displayed fewer features of autism, and presented with more typical sensory and multisensory profiles might exhibit greater benefits of RM system use than participants who were younger, had less nonverbal cognitive or language ability, displayed more features of autism, and presented with greater sensory and multisensory disruptions. DESIGN We implemented a within-subjects design to investigate our hypotheses, wherein 32 youth with autism completed listening-in-noise testing with and without an RM system. Listening-in-noise accuracy and listening effort were evaluated simultaneously using a dual-task paradigm for stimuli varying in complexity (i.e., syllable-, word-, sentence-, and passage-level). In addition, several putative moderators of RM system effects (i.e., sensory and multisensory function, language, nonverbal cognition, and broader features of autism) on outcomes of interest were evaluated. RESULTS Overall, RM system use resulted in higher listening-in-noise accuracy in youth with autism compared with no RM system use. 
The observed benefits were all large in magnitude, although the benefits on average were greater for more complex stimuli (e.g., key words embedded in sentences) and relatively smaller for less complex stimuli (e.g., syllables). Notably, none of the putative moderators significantly influenced the effects of the RM system on listening-in-noise accuracy, indicating that RM system benefits did not vary according to any of the participant characteristics assessed. On average, RM system use did not have an effect on listening effort across all youth with autism compared with no RM system use, but instead yielded effects that varied according to participant profile. Specifically, moderated effects indicated that RM system use was associated with increased listening effort for youth who had (a) average to below-average nonverbal cognitive ability, (b) below-average language ability, and (c) reduced audiovisual integration. RM system use was also associated with decreased listening effort for youth with very high nonverbal cognitive ability. CONCLUSIONS This study extends prior work by showing that RM systems have the potential to boost listening-in-noise accuracy for youth with autism. However, this boost in accuracy was coupled with increased listening effort, as indexed by longer reaction times while using an RM system, for some youth with autism, perhaps suggesting greater engagement in the listening-in-noise tasks when using the RM system for youth who had lower cognitive abilities, were less linguistically able, and/or had difficulty integrating seen and heard speech. These findings have important implications for clinical practice, suggesting RM system use in classrooms could potentially improve listening-in-noise performance for some youth with autism.
|
49
|
Lewis JH, Castellanos I, Moberly AC. The Impact of Neurocognitive Skills on Recognition of Spectrally Degraded Sentences. J Am Acad Audiol 2021; 32:528-536. [PMID: 34965599 DOI: 10.1055/s-0041-1732438] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/19/2022]
Abstract
BACKGROUND Recent models theorize that neurocognitive resources are deployed differently during speech recognition depending on task demands, such as the severity of degradation of the signal or modality (auditory vs. audiovisual [AV]). This concept is particularly relevant to the adult cochlear implant (CI) population, considering the large amount of variability among CI users in their spectro-temporal processing abilities. However, disentangling the effects of individual differences in spectro-temporal processing and neurocognitive skills on speech recognition in clinical populations of adult CI users is challenging. Thus, this study investigated the relationship between neurocognitive functions and recognition of spectrally degraded speech in a group of young adult normal-hearing (NH) listeners. PURPOSE The aim of this study was to manipulate the degree of spectral degradation and modality of speech presented to young adult NH listeners to determine whether deployment of neurocognitive skills would be affected. RESEARCH DESIGN Correlational study design. STUDY SAMPLE Twenty-one NH college students. DATA COLLECTION AND ANALYSIS Participants listened to sentences in three spectral-degradation conditions: no degradation (clear sentences); moderate degradation (8-channel noise-vocoded); and high degradation (4-channel noise-vocoded). Thirty sentences were presented in auditory-only (A-only) and AV modalities. Visual assessments from the National Institutes of Health Toolbox Cognitive Battery were completed to evaluate working memory, inhibition-concentration, cognitive flexibility, and processing speed. Analyses of variance compared speech recognition performance across spectral-degradation conditions and modalities. Bivariate correlation analyses were performed between speech recognition performance and the neurocognitive skills in the various test conditions.
RESULTS Main effects on sentence recognition were found for degree of degradation (p < 0.001) and modality (p < 0.001). Inhibition-concentration skills moderately correlated (r = 0.45, p = 0.02) with recognition scores for sentences that were moderately degraded in the A-only condition. No correlations were found among neurocognitive scores and AV speech recognition scores. CONCLUSIONS Inhibition-concentration skills are deployed differentially during sentence recognition, depending on the level of signal degradation. Additional studies will be required to study these relations in actual clinical populations such as adult CI users.
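Noise vocoding of the kind used for the degraded conditions above (4- and 8-channel) reduces speech to a few band envelopes modulating noise carriers. The rough FFT-based sketch below illustrates the idea; the band edges, envelope smoothing, and overall structure are illustrative assumptions, not the study's actual stimulus-processing chain:

```python
# Sketch of an n-channel noise vocoder: split the signal into log-spaced
# bands, extract each band's amplitude envelope, and use it to modulate
# band-limited noise. All parameter values here are illustrative.
import numpy as np

def noise_vocode(signal, fs, n_channels=4, lo=100.0, hi=7000.0, env_ms=10.0):
    rng = np.random.default_rng(0)
    edges = np.geomspace(lo, hi, n_channels + 1)       # log-spaced band edges
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    spec = np.fft.rfft(signal)
    noise_spec = np.fft.rfft(rng.standard_normal(len(signal)))
    smooth = np.ones(max(1, int(fs * env_ms / 1000)))  # moving-average kernel
    smooth /= len(smooth)
    out = np.zeros(len(signal))
    for k in range(n_channels):
        mask = (freqs >= edges[k]) & (freqs < edges[k + 1])
        band = np.fft.irfft(spec * mask, n=len(signal))          # band-pass
        env = np.convolve(np.abs(band), smooth, mode="same")     # envelope
        carrier = np.fft.irfft(noise_spec * mask, n=len(signal)) # band noise
        out += env * carrier
    return out
```

Fewer channels discard more spectral detail, which is why the 4-channel condition is the more degraded of the two.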
Affiliation(s)
- Jessica H Lewis
- Department of Otolaryngology - Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Department of Speech and Hearing Science, The Ohio State University, Columbus, Ohio
- Irina Castellanos
- Department of Otolaryngology - Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio
- Aaron C Moberly
- Department of Otolaryngology - Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio
|
50
|
McGarrigle R, Knight S, Hornsby BWY, Mattys S. Predictors of Listening-Related Fatigue Across the Adult Life Span. Psychol Sci 2021; 32:1937-1951. [PMID: 34751602 DOI: 10.1177/09567976211016410] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Listening-related fatigue is a potentially serious negative consequence of an aging auditory and cognitive system. However, the impact of age on listening-related fatigue and the factors underpinning any such effect remain unexplored. Using data from a large sample of adults (N = 281), we conducted a conditional process analysis to examine potential mediators and moderators of age-related changes in listening-related fatigue. Mediation analyses revealed opposing effects of age on listening-related fatigue: Older adults with greater perceived hearing impairment tended to report increased listening-related fatigue. However, aging was otherwise associated with decreased listening-related fatigue via reductions in both mood disturbance and sensory-processing sensitivity. Results suggested that the effect of auditory attention ability on listening-related fatigue was moderated by sensory-processing sensitivity; for individuals with high sensory-processing sensitivity, better auditory attention ability was associated with increased fatigue. These findings shed light on the perceptual, cognitive, and psychological factors underlying age-related changes in listening-related fatigue.
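The mediation component of a conditional process analysis like the one above rests on products of regression slopes. A minimal sketch with synthetic data (covariates, moderators, and the bootstrapped confidence intervals a real analysis would report are omitted for brevity):

```python
# Sketch: estimating a simple indirect (mediated) effect as a*b, where
# a is the slope of mediator ~ predictor and b the slope of
# outcome ~ mediator. Data below are synthetic and purely illustrative.

from statistics import mean

def slope(x, y):
    """Ordinary least-squares slope of y on x."""
    mx, my = mean(x), mean(y)
    return sum((xi - mx) * (yi - my) for xi, yi in zip(x, y)) / \
           sum((xi - mx) ** 2 for xi in x)

# Synthetic pattern echoing the abstract: mood disturbance decreases with
# age, and fatigue tracks mood disturbance.
age = [30, 40, 50, 60, 70]
mood = [10 - 0.1 * a for a in age]   # mediator falls with age (a < 0)
fatigue = [2 * m for m in mood]      # outcome rises with mediator (b > 0)

a = slope(age, mood)        # ≈ -0.1
b = slope(mood, fatigue)    # ≈ 2.0
indirect = a * b            # negative: aging lowers fatigue via mood
```

A negative indirect effect alongside a positive direct path (via perceived hearing impairment) is exactly the kind of opposing-effects pattern the abstract describes.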
Affiliation(s)
- Ronan McGarrigle
- Department of Psychology, University of York
- Department of Psychology, University of Bradford
- Benjamin W Y Hornsby
- Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine
- Sven Mattys
- Department of Psychology, University of York
|