1. Meijer A, Benard MR, Woonink A, Başkent D, Dirks E. The Auditory Environment at Early Intervention Groups for Young Children With Hearing Loss: Signal to Noise Ratio, Background Noise, and Reverberation. Ear Hear 2025:00003446-990000000-00385. PMID: 39789707. DOI: 10.1097/aud.0000000000001627.
Abstract
OBJECTIVES One important aspect in facilitating language access for children with hearing loss (HL) is the auditory environment. An optimal auditory environment is characterized by high signal to noise ratios (SNRs), low background noise levels, and low reverberation times. In this study, the authors describe the auditory environment of early intervention groups specifically equipped for young children with HL. DESIGN Seven early intervention groups for children with HL were included in the study. A total of 26 young children (22 to 46 months) visiting those groups participated. Language Environmental Analysis recorders were used to record all sounds around a child during one group visit. The recordings were analyzed to estimate SNR levels and background noise levels during the intervention groups. The unoccupied noise levels and reverberation times were measured in the unoccupied room either directly before or after the group visit. RESULTS The average SNR encountered by the children in the intervention groups was +13 dB. The detected speech of the attending professionals achieved the +15 dB SNR recommended by the American Speech-Language-Hearing Association approximately 42% of the time. The unoccupied noise levels were between 29 and 39 dBA, complying with acoustic norms for classroom environments (≤35 dBA, per ANSI/ASA S12.60-2010 Part 1) for six out of seven groups. Reverberation time was between 0.3 and 0.6 sec for all groups, which complies with the acoustic norms for classroom environments for children without HL (0.6 or 0.7 sec, depending on the room size), while only one group complied with the stricter norm for children with HL (0.3 sec). CONCLUSIONS The current findings show characteristics of the auditory environment of a setting that is specifically equipped and designed for groups of children with HL. Maintaining favorable SNRs seems to be the largest challenge to achieve within the constraints of an environment where young children gather, play, and learn. The results underscore the importance of staying attentive to keeping spoken language accessible for children with HL in a group setting.
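Editor's note: for readers unfamiliar with the quantities reported above, the short sketch below illustrates the underlying arithmetic: an SNR is simply the speech level minus the noise level in dB, and the room measurements are checked against the ANSI/ASA S12.60 and hearing-loss-specific reverberation limits cited in the abstract. All numeric inputs are invented for illustration and are not data from the study.

```python
# Minimal sketch: SNR arithmetic and classroom-acoustics norm checks.
# All numeric inputs are illustrative assumptions, not data from the study.

ASHA_RECOMMENDED_SNR_DB = 15.0      # +15 dB SNR recommendation cited in the abstract
ANSI_UNOCCUPIED_NOISE_DBA = 35.0    # ANSI/ASA S12.60-2010 Part 1 limit for core learning spaces
RT60_LIMIT_TYPICAL_S = 0.6          # norm for children without hearing loss (room-size dependent)
RT60_LIMIT_HL_S = 0.3               # stricter recommendation for children with hearing loss

def snr_db(speech_level_db: float, noise_level_db: float) -> float:
    """SNR is simply the level difference when both levels are expressed in dB."""
    return speech_level_db - noise_level_db

def check_room(unoccupied_noise_dba: float, rt60_s: float) -> dict:
    """Compare one room's measurements against the norms mentioned in the abstract."""
    return {
        "meets_ansi_noise_limit": unoccupied_noise_dba <= ANSI_UNOCCUPIED_NOISE_DBA,
        "meets_rt60_typical": rt60_s <= RT60_LIMIT_TYPICAL_S,
        "meets_rt60_hearing_loss": rt60_s <= RT60_LIMIT_HL_S,
    }

if __name__ == "__main__":
    # Hypothetical example: speech at 65 dBA against 52 dBA of activity noise.
    example_snr = snr_db(65.0, 52.0)
    print(f"SNR = {example_snr:+.0f} dB; "
          f"meets ASHA +15 dB target: {example_snr >= ASHA_RECOMMENDED_SNR_DB}")
    print(check_room(unoccupied_noise_dba=33.0, rt60_s=0.4))
```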
Affiliation(s)
- Deniz Başkent: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, The Netherlands
- Evelien Dirks: Dutch Foundation of the Deaf and Hard of Hearing Child (NSDSK), Amsterdam, The Netherlands; Department Tranzo, Tilburg University, The Netherlands
2. Gordon KR, Lewis D, Lowry S, Smith M, Stecker GC, McCreery RW. Remote Microphones Support Speech Recognition in Noise and Reverberation for Children With a Language Disorder. Lang Speech Hear Serv Sch 2025;56:225-233. PMID: 39723922. DOI: 10.1044/2024_lshss-24-00018.
Abstract
PURPOSE Children with typical hearing and various language and cognitive challenges can struggle with processing speech in background noise. Thus, children with a language disorder (LD) are at risk for difficulty with speech recognition in poorer acoustic environments. METHOD The current study compared the effects of background speech-shaped noise (SSN) with and without reverberation on sentence recognition for children with LD (n = 9) and typical language development (TLD; n = 9). We also investigated whether the use of a remote microphone (RM) improved speech recognition for children with LD. RESULTS Children with LD demonstrated poorer sentence recognition than peers with TLD in SSN. Both groups had poorer sentence recognition with SSN + reverberation than SSN alone. Notably, using an RM improved speech recognition for children with LD in SSN and SSN + reverberation. CONCLUSION We discuss educational implications and future research questions to identify how to optimally support speech recognition in noisy environments for children with LD. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.28037984.
Affiliation(s)
- Katherine R Gordon: Center for Childhood Deafness, Language, and Learning, Boys Town National Research Hospital, Omaha, NE
- Dawna Lewis: Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Stephanie Lowry: Center for Childhood Deafness, Language, and Learning, Boys Town National Research Hospital, Omaha, NE
- Maggie Smith: Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Ryan W McCreery: Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
3. Mansouri N, Javanbakht M, Jahan A, Bakhshi E, Shaabani M. Improve the behavioral auditory attention training effects on the Speech-In-Noise perception with simultaneous electrical stimulation in children with hearing loss: A randomized clinical trial. Int J Pediatr Otorhinolaryngol 2025;188:112197. PMID: 39709688. DOI: 10.1016/j.ijporl.2024.112197.
Abstract
BACKGROUND Auditory attention is an important cognitive factor that significantly affects speech perception in noisy environments. Hearing loss can impact attention, and it can impair speech perception in noise. Auditory attention training improves speech perception in noise in children with hearing loss. Could the combination of transcranial electrical current stimulation (tES) and auditory attention training enhance the speed, effectiveness, and stability of the resulting improvements? This investigation explores whether applying electrical stimulation alongside targeted auditory tasks can lead to more pronounced and rapid enhancements in cognitive function. METHODS In this study, 24 children with moderate to severe sensorineural hearing loss were examined. The monaural selective auditory attention test (mSAAT) and the Test of Everyday Attention for Children (TEA-Ch) were used to investigate auditory attention. The words-in-noise tests evaluated speech perception in noise. A go/no-go task was conducted to record the auditory P300 evoked potential. Children were divided into three groups: Group A received auditory attention training, Group B received tDCS, and Group C received the combined method. The tests were repeated immediately and one month after training. RESULTS Improvement in attention and speech perception was significantly greater for the group that received the combined method compared to the groups that received auditory attention training with sham stimulation or tDCS alone (P < 0.001). All three groups showed significant changes one month after the training ended. However, the group that received only tDCS demonstrated a significant decrease in improvement. CONCLUSION The study showed that combining auditory attention training with tDCS can improve speech perception in noise for children with hearing loss. Combining behavioral training with tDCS has a more significant impact than using behavioral training alone, and the combined method leads to more stable improvements than using tDCS alone.
Affiliation(s)
- Nayiere Mansouri: Pediatric Neurorehabilitation Research Center, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran; Department of Audiology, Faculty of Rehabilitation, Tabriz University of Medical Sciences, Tabriz, Iran
- Mohanna Javanbakht: Department of Audiology, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Ali Jahan: Department of Speech Therapy, Faculty of Rehabilitation, Tabriz University of Medical Sciences, Tabriz, Iran
- Enayatollah Bakhshi: Department of Biostatistics and Epidemiology, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Moslem Shaabani: Department of Audiology, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
4. Madhukesh S, Palaniswamy HP, Ganapathy K, Rajashekhar B, Nisha KV. The impact of tinnitus on speech perception in noise: a systematic review and meta-analysis. Eur Arch Otorhinolaryngol 2024;281:6211-6228. PMID: 39060407. PMCID: PMC11564254. DOI: 10.1007/s00405-024-08844-1.
Abstract
PURPOSE Tinnitus is a condition that causes people to hear sounds without an external source. One significant issue arising from this condition is the difficulty in communicating, especially in the presence of noisy backgrounds. The process of understanding speech in challenging situations requires both cognitive and auditory abilities. Since tinnitus presents unique challenges, it is important to investigate how it affects speech perception in noise. METHOD In this review, 32 articles were investigated to determine the effect of tinnitus on speech-in-noise perception performance. A meta-analysis was performed using a random-effects model, and meta-regression was used to explore the moderating effects of age and hearing acuity. RESULTS A total of 32 studies were reviewed, and the results of the meta-analysis revealed that tinnitus significantly impacts speech-in-noise perception performance. Additionally, the regression analysis revealed that age and hearing acuity are not significant predictors of speech-in-noise perception. CONCLUSION Our findings suggest that tinnitus affects speech perception in noisy environments due to cognitive impairments and central auditory processing deficits. Hearing loss and aging also contribute to reduced speech-in-noise performance. Interventions and further research are necessary to address individual challenges associated with continuous subjective tinnitus.
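Editor's note: the review pools effect sizes with a random-effects model. As a generic, hedged illustration of what such pooling involves (not the authors' data or code), the sketch below applies the DerSimonian-Laird estimator to invented per-study effect sizes.

```python
import numpy as np

# Generic DerSimonian-Laird random-effects pooling, for illustration only.
# The effect sizes and variances below are invented; they are not the review's data.
effects = np.array([0.45, 0.62, 0.30, 0.80, 0.55])      # per-study effect sizes (e.g., Hedges' g)
variances = np.array([0.04, 0.06, 0.05, 0.09, 0.03])    # per-study sampling variances

w_fixed = 1.0 / variances
pooled_fixed = np.sum(w_fixed * effects) / np.sum(w_fixed)

# Between-study heterogeneity (tau^2) via the DerSimonian-Laird estimator.
q = np.sum(w_fixed * (effects - pooled_fixed) ** 2)
df = len(effects) - 1
c = np.sum(w_fixed) - np.sum(w_fixed ** 2) / np.sum(w_fixed)
tau2 = max(0.0, (q - df) / c)

# Random-effects weights add tau^2 to each study's variance.
w_random = 1.0 / (variances + tau2)
pooled_random = np.sum(w_random * effects) / np.sum(w_random)
se_random = np.sqrt(1.0 / np.sum(w_random))

print(f"tau^2 = {tau2:.3f}")
print(f"pooled effect (random) = {pooled_random:.3f} "
      f"[95% CI {pooled_random - 1.96 * se_random:.3f}, {pooled_random + 1.96 * se_random:.3f}]")
```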
Affiliation(s)
- Sanjana Madhukesh: Department of Speech and Hearing, Manipal College of Health Professions (MCHP), Manipal Academy of Higher Education (MAHE), Manipal, Karnataka, India
- Hari Prakash Palaniswamy: Department of Speech and Hearing, Manipal College of Health Professions (MCHP), Manipal Academy of Higher Education (MAHE), Manipal, Karnataka, India
- Kanaka Ganapathy: Department of Speech and Hearing, Manipal College of Health Professions (MCHP), Manipal Academy of Higher Education (MAHE), Manipal, Karnataka, India
- Bellur Rajashekhar: Department of Speech and Hearing, Manipal College of Health Professions (MCHP), Manipal Academy of Higher Education (MAHE), Manipal, Karnataka, India
5. Nyman A, Lieberman M, Snickars M, Persson A. Longitudinal follow-up of hearing, speech, and language skills in 6-year-old children with congenital moderate hearing loss. Int J Pediatr Otorhinolaryngol 2024;186:112148. PMID: 39488131. DOI: 10.1016/j.ijporl.2024.112148.
Abstract
OBJECTIVES Children born with moderate hearing loss present with speech and language outcomes at both ends of the spectrum. To explore reasons for this, the objective of this study was to follow up a group of children born with moderate sensorineural hearing loss at 6 years of age (n = 7) by investigating their outcomes in hearing, speech, and language development from the time point of hearing aid fitting at 6 months. Another objective was to investigate the relationship between earlier outcomes on precursor variables and current status in auditory, speech, and language development. METHOD Earlier data on auditory variables and speech and language development from a project with the same participants were compared to the current study outcomes at 6 years of age. Children in this study performed standardized tests of phonology (SVANTE), expressive vocabulary (BNT), and a speech-in-noise test (Hagerman's sentences). Parents reported on their child's functional auditory performance in everyday life (PEACH), and on demographics and general development (questionnaire). Etiology and the frequency of speech- and language-directed intervention from the time point of diagnosis to 6 years of age were collected through medical records. RESULTS Hearing levels were stable over time in all children but one, who had received bilateral cochlear implants. Performance on speech-in-noise testing varied in the aided condition (-0.8 to 8 dB, mean 2.65, SD 3.09) and the unaided condition (7.2 to 21.2 dB, mean 12.06, SD 4.82). Scores on the PEACH indicated further review in four of the seven children. The mean group score on consonant proficiency had increased from 3 to 6 years of age and was within age norms. Vocabulary scores were below the norms of children with typical hearing. Outcomes on vocabulary measures at 2.5 years showed strong, significant correlations with scores on the BNT at 6 years of age (r = 0.87, p = 0.05). Correlations between hours of hearing aid use and vocabulary were not significant at 6 years of age. The frequency of intervention sessions in the first 6 years varied between participants (4-55, mean 19.1, SD 17.1). CONCLUSION Despite homogeneous hearing and other background variables in the participants from birth, large individual variations in speech and language outcomes at 6 years of age were found. Considering the many factors that impact the development of children with moderate hearing loss, the results suggest that monitoring early precursors in auditory, speech, and language development may be helpful in setting commensurate goals for each child. Detecting, as early as possible, additional conditions that may pose challenges to future speech and language development is important. There is ample room for improvement in terms of increasing the frequency of intervention for children with moderate hearing loss and their families.
Affiliation(s)
- Anna Nyman: Division of Speech and Language Pathology, Department of Clinical Science, Intervention and Technology, Karolinska Institute, Stockholm, Sweden; Habilitation and Health, Region Stockholm, Stockholm, Sweden
- Marion Lieberman: Division of Speech and Language Pathology, Department of Clinical Science, Intervention and Technology, Karolinska Institute, Stockholm, Sweden
- Madelen Snickars: Department of Hearing Habilitation for Children and Youth, Karolinska University Hospital, Stockholm, Sweden
- Anna Persson: Division of Speech and Language Pathology, Department of Clinical Science, Intervention and Technology, Karolinska Institute, Stockholm, Sweden; Department of Hearing Habilitation for Children and Youth, Karolinska University Hospital, Stockholm, Sweden; Department of Ear, Nose, Throat, Hearing and Balance, Karolinska Institute, Stockholm, Sweden
6. Anshu K, Kristensen K, Godar SP, Zhou X, Hartley SL, Litovsky RY. Speech Recognition and Spatial Hearing in Young Adults With Down Syndrome: Relationships With Hearing Thresholds and Auditory Working Memory. Ear Hear 2024;45:1568-1584. PMID: 39090791. PMCID: PMC11493531. DOI: 10.1097/aud.0000000000001549.
Abstract
OBJECTIVES Individuals with Down syndrome (DS) have a higher incidence of hearing loss (HL) compared with their peers without developmental disabilities. Little is known about the associations between HL and functional hearing for individuals with DS. This study investigated two aspects of auditory functions, "what" (understanding the content of sound) and "where" (localizing the source of sound), in young adults with DS. Speech reception thresholds in quiet and in the presence of interferers provided insight into speech recognition, that is, the "what" aspect of auditory maturation. Insights into the "where" aspect of auditory maturation were gained from evaluating speech reception thresholds in colocated versus separated conditions (quantifying spatial release from masking) as well as right versus left discrimination and sound location identification. Auditory functions in the "where" domain develop during earlier stages of cognitive development in contrast with the later developing "what" functions. We hypothesized that young adults with DS would exhibit stronger "where" than "what" auditory functioning, albeit with the potential impact of HL. Considering the importance of auditory working memory and receptive vocabulary for speech recognition, we hypothesized that better speech recognition in young adults with DS, in quiet and with speech interferers, would be associated with better auditory working memory ability and receptive vocabulary. DESIGN Nineteen young adults with DS (aged 19 to 24 years) participated in the study and completed assessments on pure-tone audiometry, right versus left discrimination, sound location identification, and speech recognition in quiet and with speech interferers that were colocated or spatially separated. Results were compared with published data from children and adults without DS and HL, tested using similar protocols and stimuli. Digit Span tests assessed auditory working memory. Receptive vocabulary was examined using the Peabody Picture Vocabulary Test Fifth Edition. RESULTS Seven participants (37%) had HL in at least 1 ear; 4 individuals had mild HL, and 3 had moderate HL or worse. Participants with mild or no HL had ≥75% correct at 5° separation on the discrimination task and sound localization root mean square errors (mean ± SD: 8.73° ± 2.63°) within the range of adults in the comparison group. Speech reception thresholds in young adults with DS were higher than those of all comparison groups. However, spatial release from masking did not differ between young adults with DS and comparison groups. Better (lower) speech reception thresholds were associated with better hearing and better auditory working memory ability. Receptive vocabulary did not predict speech recognition. CONCLUSIONS In the absence of HL, young adults with DS exhibited higher accuracy during spatial hearing tasks as compared with speech recognition tasks. Thus, auditory processes associated with the "where" pathways appear to be a relative strength compared with those associated with "what" pathways in young adults with DS. Further, both HL and auditory working memory impairments contributed to difficulties in speech recognition in the presence of speech interferers. Future studies with larger samples are needed to replicate and extend our findings.
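Editor's note: two of the derived measures named above have simple definitions, illustrated with invented numbers in the sketch below: spatial release from masking (the colocated minus the spatially separated speech reception threshold) and the root mean square localization error. This is a generic illustration, not the authors' analysis.

```python
import numpy as np

# Illustrative sketch (not study data): spatial release from masking (SRM) and
# sound-localization root mean square (RMS) error, two metrics named in the abstract.

def spatial_release_from_masking(srt_colocated_db: float, srt_separated_db: float) -> float:
    """SRM = colocated SRT minus spatially separated SRT; positive values mean
    the listener benefits from separating target and masker."""
    return srt_colocated_db - srt_separated_db

def localization_rms_error(response_deg: np.ndarray, target_deg: np.ndarray) -> float:
    """RMS error (degrees) between responded and presented source angles."""
    return float(np.sqrt(np.mean((response_deg - target_deg) ** 2)))

# Hypothetical listener: SRTs of -2 dB (colocated) and -7 dB (separated) -> SRM of 5 dB.
print(spatial_release_from_masking(-2.0, -7.0))

targets = np.array([-45.0, -15.0, 0.0, 15.0, 45.0])
responses = np.array([-38.0, -20.0, 5.0, 10.0, 52.0])
print(round(localization_rms_error(responses, targets), 2), "deg RMS error")
```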
Affiliation(s)
- Kumari Anshu: Waisman Center, University of Wisconsin–Madison, Madison, WI, USA
- Kayla Kristensen: Waisman Center, University of Wisconsin–Madison, Madison, WI, USA
- Shelly P. Godar: Waisman Center, University of Wisconsin–Madison, Madison, WI, USA
- Xin Zhou: Waisman Center, University of Wisconsin–Madison, Madison, WI, USA; currently at The Chinese University of Hong Kong, Hong Kong
- Sigan L. Hartley: Waisman Center, University of Wisconsin–Madison, Madison, WI, USA; School of Human Ecology, University of Wisconsin–Madison, Madison, WI, USA
- Ruth Y. Litovsky: Waisman Center, University of Wisconsin–Madison, Madison, WI, USA; Department of Communication Sciences and Disorders, University of Wisconsin–Madison, Madison, WI, USA
7. Pan Y, Xiao Y. Language and executive function in Mandarin-speaking deaf and hard-of-hearing children aged 3-5. J Deaf Stud Deaf Educ 2024:enae037. PMID: 39277795. DOI: 10.1093/jdsade/enae037.
Abstract
The study aimed to explore spoken language and executive function (EF) characteristics in 3-5-year-old prelingually deaf and hard-of-hearing (DHH) children, and to evaluate the impact of demographic variables and EF on spoken language skills. Forty-eight DHH children and 48 typically developing children who use auditory-oral communication were recruited. All participants underwent EF tests of auditory working memory (WM), inhibitory control, and cognitive flexibility, and parents reported on EF performance. Using the Mandarin Clinical Evaluation of Language for Preschoolers (MCELP), vocabulary comprehension, sentence comprehension, vocabulary naming, sentence structure imitation, and story narration were evaluated only in the DHH group, and their results were compared with the typical developmental levels provided by the MCELP. Results showed that DHH children exhibit deficiencies in different spoken language domains and EF components. While the spoken language skills of DHH children tend to improve as they age, a growing proportion of individuals fail to reach the typical developmental level. The spoken language ability in DHH children was positively correlated with age and EFs, and negatively correlated with aided hearing threshold, while auditory WM could positively predict their spoken language performance.
Affiliation(s)
- Yuchen Pan: School of Chinese Language and Culture, Nanjing Normal University, Nanjing, Jiangsu, China
- Yongtao Xiao: School of Medical Technology and Information Engineering, Zhejiang Chinese Medical University, Hangzhou, Zhejiang, China
8. Nicastri M, Dincer D’Alessandro H, Anderson K, Ciferri M, Cavalcanti L, Greco A, Giallini I, Portanova G, Mancini P. Cross-Cultural Adaptation and Validation of the Listening Inventory for Education-Revised in Italian. Audiol Res 2024;14:822-839. PMID: 39311222. PMCID: PMC11417904. DOI: 10.3390/audiolres14050069.
Abstract
BACKGROUND Listening difficulties may frequently occur in school settings, but until now there have been no tools to identify them in either hearing or hearing-impaired Italian students. This study performed cross-cultural adaptation and validation of the Listening Inventory for Education-Revised for Italian students (LIFE-R-ITA). METHODS The study procedure followed the stages suggested by the Guidelines for the Process of Cross-cultural Adaptation of Self-Report Measures. For the content validation, six students with cochlear implants (8-18 years old) pre-tested the initial version. Whenever a situation did not occur in Italy, the item was adapted to a more typical Italian listening situation. The final version of the LIFE-R-ITA was administered to a sample of 223 hearing students from different school settings and educational levels in order to collect normative data. RESULTS For the LIFE-R-ITA, hearing students showed an average score of 72.26% (SD = 11.93), reflecting some listening difficulties. The subscales (LIFE total, LIFE class, and LIFE social) indicated good internal consistency. All items were shown to be relevant. The most challenging situations occurred when listening in large rooms, especially when other students were making noise. LIFE social scores were significantly worse than those of LIFE class (p < 0.001). CONCLUSIONS The present study provides cross-cultural adaptation and validation for the LIFE-R-ITA along with normative data useful for interpreting the results of students with hearing loss. The LIFE-R-ITA may support teachers and clinicians in assessing students' self-perception of listening at school. Such understanding may help students overcome their listening difficulties by planning and selecting the most effective strategies among classroom interventions.
Affiliation(s)
- Maria Nicastri: Department of Sense Organs, Sapienza University of Rome, 00185 Rome, Italy
- Hilal Dincer D’Alessandro: Department of Audiology, Faculty of Health Sciences, Istanbul University-Cerrahpaşa, 34500 Istanbul, Turkey
- Karen Anderson: Supporting Success for Children with Hearing Loss, Tampa, FL 33625, USA
- Miriana Ciferri: Department of Sense Organs, Sapienza University of Rome, 00185 Rome, Italy
- Luca Cavalcanti: Department of Sense Organs, Sapienza University of Rome, 00185 Rome, Italy
- Antonio Greco: Department of Sense Organs, Sapienza University of Rome, 00185 Rome, Italy
- Ilaria Giallini: Department of Sense Organs, Sapienza University of Rome, 00185 Rome, Italy
- Ginevra Portanova: Department of Sense Organs, Sapienza University of Rome, 00185 Rome, Italy
- Patrizia Mancini: Department of Sense Organs, Sapienza University of Rome, 00185 Rome, Italy
9. Lee M, Ha S. Vocal and early speech development in Korean-acquiring children with hearing loss and typical hearing. Clin Linguist Phon 2024:1-20. PMID: 39041596. DOI: 10.1080/02699206.2024.2380442.
Abstract
This study investigated the vocal and early speech development of Korean-acquiring children with hearing loss (HL) who underwent early auditory amplification compared to their typical hearing (TH) counterparts. The research focused on phonological characteristics of child vocalisation based on samples collected from naturalistic home environments. One-day home recordings using a Language ENvironment Analysis (LENA) recorder were obtained from 6 children with HL and 12 children with TH who ranged from 17 to 23 months of age in Korean monolingual environments. Child volubility, canonical babbling ratio (CBR), consonant distributions, and utterance structures of vocalisations were evaluated through qualitative and quantitative analyses of vocalisation samples collected from LENA recordings. The findings revealed that children with HL displayed comparable vocalisation levels to children with TH, with no significant differences in volubility and CBR. In consonant and utterance shape inventories, noticeable quantitative and qualitative differences were observed between children with HL and those with TH. The study also suggested both universal and language-specific production patterns, revealing the early effects of ambient language on consonant distributions and utterance structures within their vocalisation repertoire. This study emphasised the role of auditory input and the importance of early auditory amplification to support speech development in children with HL.
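Editor's note: the canonical babbling ratio (CBR) reported above is conventionally computed as the number of canonical syllables divided by the total number of syllables in a sample; the exact operationalization in this study may differ. The sketch below shows the basic calculation with invented counts.

```python
# Illustrative sketch of the canonical babbling ratio (CBR) named in the abstract.
# CBR is commonly computed as canonical syllables / total syllables; the exact
# operationalization in the study may differ, and the counts below are invented.

def canonical_babbling_ratio(canonical_syllables: int, total_syllables: int) -> float:
    if total_syllables == 0:
        return 0.0
    return canonical_syllables / total_syllables

# Hypothetical 5-minute sample: 42 canonical syllables out of 260 syllables.
cbr = canonical_babbling_ratio(42, 260)
print(f"CBR = {cbr:.2f}")  # a CBR of about 0.15 is often cited as the canonical-stage benchmark
```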
Affiliation(s)
- Mina Lee: Graduate Program in Speech Language Pathology, Hallym University, Chuncheon, Korea
- Seunghee Ha: Division of Speech Pathology and Audiology, Research Institute of Audiology and Speech Pathology, Hallym University, Chuncheon-si, Korea
10. Nagels L, Gaudrain E, Vickers D, Hendriks P, Başkent D. Prelingually Deaf Children With Cochlear Implants Show Better Perception of Voice Cues and Speech in Competing Speech Than Postlingually Deaf Adults With Cochlear Implants. Ear Hear 2024;45:952-968. PMID: 38616318. PMCID: PMC11175806. DOI: 10.1097/aud.0000000000001489.
Abstract
OBJECTIVES Postlingually deaf adults with cochlear implants (CIs) have difficulties with perceiving differences in speakers' voice characteristics and benefit little from voice differences for the perception of speech in competing speech. However, not much is known yet about the perception and use of voice characteristics in prelingually deaf implanted children with CIs. Unlike CI adults, most CI children became deaf during the acquisition of language. Extensive neuroplastic changes during childhood could make CI children better at using the available acoustic cues than CI adults, or the lack of exposure to a normal acoustic speech signal could make it more difficult for them to learn which acoustic cues they should attend to. This study aimed to examine to what degree CI children can perceive voice cues and benefit from voice differences for perceiving speech in competing speech, comparing their abilities to those of normal-hearing (NH) children and CI adults. DESIGN CI children's voice cue discrimination (experiment 1), voice gender categorization (experiment 2), and benefit from target-masker voice differences for perceiving speech in competing speech (experiment 3) were examined in three experiments. The main focus was on the perception of mean fundamental frequency (F0) and vocal-tract length (VTL), the primary acoustic cues related to speakers' anatomy and perceived voice characteristics, such as voice gender. RESULTS CI children's F0 and VTL discrimination thresholds indicated lower sensitivity to differences compared with their NH-age-equivalent peers, but their mean discrimination thresholds of 5.92 semitones (st) for F0 and 4.10 st for VTL indicated higher sensitivity than postlingually deaf CI adults with mean thresholds of 9.19 st for F0 and 7.19 st for VTL. Furthermore, CI children's perceptual weighting of F0 and VTL cues for voice gender categorization closely resembled that of their NH-age-equivalent peers, in contrast with CI adults. Finally, CI children had more difficulties in perceiving speech in competing speech than their NH-age-equivalent peers, but they performed better than CI adults. Unlike CI adults, CI children showed a benefit from target-masker voice differences in F0 and VTL, similar to NH children. CONCLUSION Although CI children's F0 and VTL voice discrimination scores were overall lower than those of NH children, their weighting of F0 and VTL cues for voice gender categorization and their benefit from target-masker differences in F0 and VTL resembled that of NH children. Together, these results suggest that prelingually deaf implanted CI children can effectively utilize spectrotemporally degraded F0 and VTL cues for voice and speech perception, generally outperforming postlingually deaf CI adults in comparable tasks. These findings underscore the presence of F0 and VTL cues in the CI signal to a certain degree and suggest other factors contributing to the perception challenges faced by CI adults.
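Editor's note: the F0 and VTL thresholds above are expressed in semitones, i.e., 12 times the base-2 logarithm of a ratio between two values. The small helper below converts between ratios and semitones; the example values echo the thresholds quoted in the abstract, but the specific frequencies are illustrative.

```python
import math

# Illustrative helper (not from the article): differences in F0 or VTL are expressed
# in semitones (st), i.e., 12 * log2 of the ratio between two values.

def ratio_to_semitones(ratio: float) -> float:
    return 12.0 * math.log2(ratio)

def semitones_to_ratio(semitones: float) -> float:
    return 2.0 ** (semitones / 12.0)

# Example: a child F0 threshold of 5.92 st corresponds to telling apart two voices
# whose F0s differ by a factor of about 1.41 (e.g., roughly 120 Hz vs 169 Hz).
print(round(semitones_to_ratio(5.92), 2))
print(round(ratio_to_semitones(169 / 120), 2))
```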
Affiliation(s)
- Leanne Nagels: Center for Language and Cognition Groningen (CLCG), University of Groningen, Groningen, The Netherlands; Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- Etienne Gaudrain: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands; CNRS UMR 5292, Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics, Inserm UMRS 1028, Université Claude Bernard Lyon 1, Université de Lyon, Lyon, France
- Deborah Vickers: Cambridge Hearing Group, Sound Lab, Clinical Neurosciences Department, University of Cambridge, Cambridge, United Kingdom
- Petra Hendriks: Center for Language and Cognition Groningen (CLCG), University of Groningen, Groningen, The Netherlands; Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- Deniz Başkent: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands; W.J. Kolff Institute for Biomedical Engineering and Materials Science, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
11. Easwar V, Peng ZE, Boothalingam S, Seeto M. Neural Envelope Processing at Low Frequencies Predicts Speech Understanding of Children With Hearing Loss in Noise and Reverberation. Ear Hear 2024;45:837-849. PMID: 38768048. PMCID: PMC11175738. DOI: 10.1097/aud.0000000000001481.
Abstract
OBJECTIVE Children with hearing loss experience greater difficulty understanding speech in the presence of noise and reverberation relative to their normal hearing peers despite provision of appropriate amplification. The fidelity of fundamental frequency of voice (f0) encoding-a salient temporal cue for understanding speech in noise-could play a significant role in explaining the variance in abilities among children. However, the nature of deficits in f0 encoding and its relationship with speech understanding are poorly understood. To this end, we evaluated the influence of frequency-specific f0 encoding on speech perception abilities of children with and without hearing loss in the presence of noise and/or reverberation. METHODS In 14 school-aged children with sensorineural hearing loss fitted with hearing aids and 29 normal hearing peers, envelope following responses (EFRs) were elicited by the vowel /i/, modified to estimate f0 encoding in low (<1.1 kHz) and higher frequencies simultaneously. EFRs to /i/ were elicited in quiet, in the presence of speech-shaped noise at +5 dB signal to noise ratio, with simulated reverberation time of 0.62 sec, as well as both noise and reverberation. EFRs were recorded using single-channel electroencephalogram between the vertex and the nape while children watched a silent movie with captions. Speech discrimination accuracy was measured using the University of Western Ontario Distinctive Features Differences test in each of the four acoustic conditions. Stimuli for EFR recordings and speech discrimination were presented monaurally. RESULTS Both groups of children demonstrated a frequency-dependent dichotomy in the disruption of f0 encoding, as reflected in EFR amplitude and phase coherence. Greater disruption (i.e., lower EFR amplitudes and phase coherence) was evident in EFRs elicited by low frequencies due to noise and greater disruption was evident in EFRs elicited by higher frequencies due to reverberation. Relative to normal hearing peers, children with hearing loss demonstrated: (a) greater disruption of f0 encoding at low frequencies, particularly in the presence of reverberation, and (b) a positive relationship between f0 encoding at low frequencies and speech discrimination in the hardest listening condition (i.e., when both noise and reverberation were present). CONCLUSIONS Together, these results provide new evidence for the persistence of suprathreshold temporal processing deficits related to f0 encoding in children despite the provision of appropriate amplification to compensate for hearing loss. These objectively measurable deficits may underlie the greater difficulty experienced by children with hearing loss.
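Editor's note: EFR amplitude and phase coherence are the two response metrics named above. As a generic illustration only (synthetic data, an assumed f0 and sampling rate, not the authors' recording or analysis pipeline), the sketch below estimates both quantities across simulated EEG epochs.

```python
import numpy as np

# Sketch of one common way to quantify envelope following response (EFR) strength:
# amplitude and phase coherence at the stimulus f0 across EEG epochs. Synthetic data;
# this is not the authors' analysis pipeline, only a generic illustration.

fs = 1000.0            # sampling rate (Hz), assumed
f0 = 100.0             # assumed voice fundamental frequency (Hz)
n_epochs, n_samples = 200, 1000
t = np.arange(n_samples) / fs

rng = np.random.default_rng(0)
# Each epoch: a weak f0-locked response buried in noise.
epochs = 0.2 * np.sin(2 * np.pi * f0 * t) + rng.normal(0, 1.0, (n_epochs, n_samples))

# Complex spectrum of each epoch at the f0 bin.
freqs = np.fft.rfftfreq(n_samples, 1 / fs)
bin_f0 = int(np.argmin(np.abs(freqs - f0)))
spectra = np.fft.rfft(epochs, axis=1)[:, bin_f0]

efr_amplitude = np.abs(np.mean(spectra)) * 2 / n_samples          # amplitude of the averaged response
phase_coherence = np.abs(np.mean(spectra / np.abs(spectra)))      # 0 = random phase, 1 = perfectly locked

print(f"EFR amplitude ~ {efr_amplitude:.3f} (a.u.), phase coherence = {phase_coherence:.2f}")
```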
Affiliation(s)
- Vijayalakshmi Easwar: Waisman Center, University of Wisconsin Madison, Madison, Wisconsin, USA; Communication Sciences and Disorders, University of Wisconsin Madison, Madison, Wisconsin, USA; Communication Sciences Department, National Acoustic Laboratories, Sydney, Australia; Linguistics, Macquarie University, Sydney, Australia
- Z. Ellen Peng: Waisman Center, University of Wisconsin Madison, Madison, Wisconsin, USA; Boys Town National Research Hospital, Omaha, Nebraska, USA
- Sriram Boothalingam: Waisman Center, University of Wisconsin Madison, Madison, Wisconsin, USA; Communication Sciences and Disorders, University of Wisconsin Madison, Madison, Wisconsin, USA; Communication Sciences Department, National Acoustic Laboratories, Sydney, Australia; Linguistics, Macquarie University, Sydney, Australia
12. Benítez-Barrera CR, Behboudi MH, Maguire MJ. Neural oscillations during predictive sentence processing in young children. Brain Lang 2024;254:105437. PMID: 38878494. DOI: 10.1016/j.bandl.2024.105437.
Abstract
The neural correlates of predictive processing in language, which is critical for efficient sentence comprehension, are well documented in adults. Specifically, adults exhibit alpha power (9-12 Hz) suppression when processing high versus low predictability sentences. This study explores whether young children exhibit similar neural mechanisms. We analyzed EEG data from 29 children aged 3-5 years listening to sentences of varying predictability. Our results revealed significant neural oscillation differences in the 5-12 Hz range between high and low predictability sentences, similar to adult patterns. Crucially, the degree of these differences correlated with children's language abilities. These findings are the first to demonstrate the neural basis of predictive processing in young children and its association with language development.
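Editor's note: alpha power in the 9-12 Hz band is the oscillatory measure at issue above. The following sketch shows one common, generic way to estimate band power from a single EEG segment using Welch's method; the signal is synthetic and the pipeline is not the authors'.

```python
import numpy as np
from scipy.signal import welch

# Generic sketch of estimating alpha-band (9-12 Hz) power from one EEG segment,
# the kind of measure contrasted between high- and low-predictability sentences above.
# Synthetic signal; band edges taken from the abstract; not the authors' pipeline.

fs = 250.0                                   # assumed EEG sampling rate (Hz)
t = np.arange(0, 4.0, 1 / fs)                # a 4-second segment
rng = np.random.default_rng(1)
eeg = 2.0 * np.sin(2 * np.pi * 10 * t) + rng.normal(0, 1.0, t.size)   # 10 Hz rhythm + noise

freqs, psd = welch(eeg, fs=fs, nperseg=int(fs * 2))      # 2-second Welch windows
alpha = (freqs >= 9) & (freqs <= 12)
alpha_power = psd[alpha].sum() * (freqs[1] - freqs[0])   # integrate PSD over the alpha band

print(f"alpha (9-12 Hz) power ~ {alpha_power:.2f} (a.u.)")
```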
Affiliation(s)
- Carlos R Benítez-Barrera: Department of Communication Sciences and Disorders, University of Wisconsin-Madison, United States; Waisman Center, University of Wisconsin-Madison, United States
- Mohammad Hossein Behboudi: School of Behavioral and Brain Sciences, University of Texas at Dallas, United States; Callier Center for Communication Disorders, University of Texas at Dallas, United States
- Mandy J Maguire: School of Behavioral and Brain Sciences, University of Texas at Dallas, United States; Callier Center for Communication Disorders, University of Texas at Dallas, United States
13. Roy A, Bradlow A, Souza P. Effect of frequency compression on fricative perception between normal-hearing English and Mandarin listeners. J Acoust Soc Am 2024;155:3957-3967. PMID: 38921646. DOI: 10.1121/10.0026435.
Abstract
High-frequency speech information is susceptible to inaccurate perception in even mild to moderate forms of hearing loss. Some hearing aids employ frequency-lowering methods such as nonlinear frequency compression (NFC) to help hearing-impaired individuals access high-frequency speech information in more accessible lower-frequency regions. As such techniques cause significant spectral distortion, tests such as the S-Sh Confusion Test help optimize NFC settings to provide high-frequency audibility with the least distortion. Such tests have been traditionally based on speech contrasts pertinent to English. Here, the effects of NFC processing on fricative perception between English and Mandarin listeners are assessed. Small but significant differences in fricative discrimination were observed between the groups. The study demonstrates possible need for language-specific clinical fitting procedures for NFC.
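Editor's note: nonlinear frequency compression is often described with a simple input-output frequency map: frequencies above a start (cutoff) frequency are compressed on a log-frequency axis by a compression ratio. The sketch below implements that textbook-style map with illustrative parameters; it does not represent any specific manufacturer's algorithm or the settings used in this study.

```python
# Sketch of a textbook-style nonlinear frequency compression (NFC) input-output map:
# frequencies above a start frequency are compressed in the log-frequency domain by a
# compression ratio. Parameters are illustrative assumptions only.

def nfc_output_frequency(f_in_hz: float, start_hz: float = 2000.0, ratio: float = 2.0) -> float:
    """Map an input frequency to its NFC output frequency.

    Below the start (cutoff) frequency the mapping is unity; above it, the distance
    from the cutoff on a log-frequency axis is divided by the compression ratio.
    """
    if f_in_hz <= start_hz:
        return f_in_hz
    return start_hz * (f_in_hz / start_hz) ** (1.0 / ratio)

# Example: with a 2 kHz start frequency and 2:1 ratio, fricative energy near 8 kHz
# is relocated to about 4 kHz, where a listener with high-frequency loss may hear it.
for f in (1000, 3000, 6000, 8000):
    print(f, "->", round(nfc_output_frequency(f)))
```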
Affiliation(s)
- Abhijit Roy: Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois 60208, USA
- Ann Bradlow: Department of Linguistics, Northwestern University, Evanston, Illinois 60208, USA
- Pamela Souza: Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois 60208, USA
14. McDaniel J, Krimm H, Schuele CM. SLPs' perceptions of language learning myths about children who are DHH. J Deaf Stud Deaf Educ 2024;29:245-257. PMID: 37742092. PMCID: PMC10950421. DOI: 10.1093/deafed/enad043.
Abstract
This article reports on speech-language pathologists' (SLPs') knowledge related to myths about spoken language learning of children who are deaf and hard of hearing (DHH). The broader study was designed as a step toward narrowing the research-practice gap and providing effective, evidence-based language services to children. In the broader study, SLPs (n = 106) reported their agreement/disagreement with myth statements and true statements (n = 52) about 7 clinical topics related to speech and language development. For the current report, participant responses to 7 statements within the DHH topic were analyzed. Participants exhibited a relative strength in bilingualism knowledge for spoken languages and a relative weakness in audiovisual integration knowledge. Much individual variation was observed. Participants' responses were more likely to align with current evidence about bilingualism if the participants had less experience as an SLP. The findings provide guidance on prioritizing topics for speech-language pathology preservice and professional development.
Affiliation(s)
- Jena McDaniel: Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, United States
- Hannah Krimm: Department of Communication Sciences and Special Education, University of Georgia, Athens, United States
- C Melanie Schuele: Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, United States
15. Peng ZE, Easwar V. Development of amplitude modulation, voice onset time, and consonant identification in noise and reverberation. J Acoust Soc Am 2024;155:1071-1085. PMID: 38341737. DOI: 10.1121/10.0024461.
Abstract
Children's speech understanding is vulnerable to indoor noise and reverberation: e.g., from classrooms. It is unknown how they develop the ability to use temporal acoustic cues, specifically amplitude modulation (AM) and voice onset time (VOT), which are important for perceiving distorted speech. Through three experiments, we investigated the typical development of AM depth detection in vowels (experiment I), categorical perception of VOT (experiment II), and consonant identification (experiment III) in quiet and in speech-shaped noise (SSN) and mild reverberation in 6- to 14-year-old children. Our findings suggested that AM depth detection using a naturally produced vowel at the rate of the fundamental frequency was particularly difficult for children and with acoustic distortions. While the VOT cue salience was monotonically attenuated with increasing signal-to-noise ratio of SSN, its utility for consonant discrimination was completely removed even under mild reverberation. The reverberant energy decay in distorting critical temporal cues provided further evidence that may explain the error patterns observed in consonant identification. By 11-14 years of age, children approached adult-like performance in consonant discrimination and identification under adverse acoustics, emphasizing the need for good acoustics for younger children as they develop auditory skills to process distorted speech in everyday listening environments.
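Editor's note: amplitude modulation depth, one of the temporal cues studied above, is commonly expressed in dB as 20 * log10(m), where m is the modulation index. The sketch below imposes sinusoidal AM of a given depth on a plain tone carrier; the study itself used naturally produced vowels, so this is only an illustration of the parameterization.

```python
import numpy as np

# Illustration of amplitude modulation (AM) depth, one of the temporal cues studied above.
# Depth is expressed in dB as 20 * log10(m), where m is the modulation index.
# The carrier here is a plain tone; the study used naturally produced vowels.

fs = 16000
t = np.arange(0, 0.5, 1 / fs)

def apply_am(carrier: np.ndarray, rate_hz: float, depth_db: float, fs: float) -> np.ndarray:
    """Impose sinusoidal AM of a given depth (in dB re: 100% modulation) on a carrier."""
    m = 10 ** (depth_db / 20)                      # e.g., -6 dB -> m = 0.5
    tt = np.arange(carrier.size) / fs
    return carrier * (1 + m * np.sin(2 * np.pi * rate_hz * tt))

carrier = np.sin(2 * np.pi * 220 * t)              # 220 Hz tone standing in for a vowel's f0 region
modulated = apply_am(carrier, rate_hz=110.0, depth_db=-6.0, fs=fs)
print(f"peak factor with -6 dB AM depth: {np.max(np.abs(modulated)):.2f}")
```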
Affiliation(s)
- Z Ellen Peng: Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
16. Davidson A, Souza P. Relationships Between Auditory Processing and Cognitive Abilities in Adults: A Systematic Review. J Speech Lang Hear Res 2024;67:296-345. PMID: 38147487. DOI: 10.1044/2023_jslhr-22-00716.
Abstract
PURPOSE The contributions from the central auditory and cognitive systems play a major role in communication. Understanding the relationship between auditory and cognitive abilities has implications for auditory rehabilitation for clinical patients. The purpose of this systematic review is to address the question, "In adults, what is the relationship between central auditory processing abilities and cognitive abilities?" METHOD Preferred Reporting Items for Systematic Reviews and Meta-Analyses guidelines were followed to identify, screen, and determine eligibility for articles that addressed the research question of interest. Medical librarians and subject matter experts assisted in search strategy, keyword review, and structuring the systematic review process. To be included, articles needed to have an auditory measure (either behavioral or electrophysiologic), a cognitive measure that assessed individual ability, and the measures needed to be compared to one another. RESULTS Following two rounds of identification and screening, 126 articles were included for full analysis. Central auditory processing (CAP) measures were grouped into categories (behavioral: speech in noise, altered speech, temporal processing, binaural processing; electrophysiologic: mismatch negativity, P50, N200, P200, and P300). The most common CAP measures were sentence recognition in speech-shaped noise and the P300. Cognitive abilities were grouped into constructs, and the most common construct was working memory. The findings were mixed, encompassing both significant and nonsignificant relationships; therefore, the results do not conclusively establish a direct link between CAP and cognitive abilities. Nonetheless, several consistent relationships emerged across different domains. Distorted or noisy speech was related to working memory or processing speed. Auditory temporal order tasks showed significant relationships with working memory, fluid intelligence, or multidomain cognitive measures. For electrophysiology, relationships were observed between some cortical evoked potentials and working memory or executive/inhibitory processes. Significant results were consistent with the hypothesis that assessments of CAP and cognitive processing would be positively correlated. CONCLUSIONS Results from this systematic review summarize relationships between CAP and cognitive processing, but also underscore the complexity of these constructs, the importance of study design, and the need to select an appropriate measure. The relationship between auditory and cognitive abilities is complex but can provide informative context when creating clinical management plans. This review supports a need to develop guidelines and training for audiologists who wish to consider individual central auditory and cognitive abilities in patient care. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.24855174.
17. Lalonde K, Walker EA, Leibold LJ, McCreery RW. Predictors of Susceptibility to Noise and Speech Masking Among School-Age Children With Hearing Loss or Typical Hearing. Ear Hear 2024;45:81-93. PMID: 37415268. PMCID: PMC10771540. DOI: 10.1097/aud.0000000000001403.
Abstract
OBJECTIVES The purpose of this study was to evaluate effects of masker type and hearing group on the relationship between school-age children's speech recognition and age, vocabulary, working memory, and selective attention. This study also explored effects of masker type and hearing group on the time course of maturation of masked speech recognition. DESIGN Participants included 31 children with normal hearing (CNH) and 41 children with mild to severe bilateral sensorineural hearing loss (CHL), between 6.7 and 13 years of age. Children with hearing aids used their personal hearing aids throughout testing. Audiometric thresholds and standardized measures of vocabulary, working memory, and selective attention were obtained from each child, along with masked sentence recognition thresholds in a steady state, speech-spectrum noise (SSN) and in a two-talker speech masker (TTS). Aided audibility through children's hearing aids was calculated based on the Speech Intelligibility Index (SII) for all children wearing hearing aids. Linear mixed effects models were used to examine the contribution of group, age, vocabulary, working memory, and attention to individual differences in speech recognition thresholds in each masker. Additional models were constructed to examine the role of aided audibility on masked speech recognition in CHL. Finally, to explore the time course of maturation of masked speech perception, linear mixed effects models were used to examine interactions between age, masker type, and hearing group as predictors of masked speech recognition. RESULTS Children's thresholds were higher in TTS than in SSN. There was no interaction of hearing group and masker type. CHL had higher thresholds than CNH in both maskers. In both hearing groups and masker types, children with better vocabularies had lower thresholds. An interaction of hearing group and attention was observed only in the TTS. Among CNH, attention predicted thresholds in TTS. Among CHL, vocabulary and aided audibility predicted thresholds in TTS. In both maskers, thresholds decreased as a function of age at a similar rate in CNH and CHL. CONCLUSIONS The factors contributing to individual differences in speech recognition differed as a function of masker type. In TTS, the factors contributing to individual difference in speech recognition further differed as a function of hearing group. Whereas attention predicted variance for CNH in TTS, vocabulary and aided audibility predicted variance in CHL. CHL required a more favorable signal to noise ratio (SNR) to recognize speech in TTS than in SSN (mean = +1 dB in TTS, -3 dB in SSN). We posit that failures in auditory stream segregation limit the extent to which CHL can recognize speech in a speech masker. Larger sample sizes or longitudinal data are needed to characterize the time course of maturation of masked speech perception in CHL.
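Editor's note: the analyses above rely on linear mixed-effects models with child-level predictors. As a hedged, generic sketch of that modeling approach (simulated data, hypothetical variable names, not the authors' model specification), the code below fits a random-intercept model of masked speech recognition thresholds with statsmodels.

```python
import numpy as np
import pandas as pd
import statsmodels.formula.api as smf

# Generic sketch of a linear mixed-effects model of the kind described above:
# masked speech recognition thresholds predicted by age, vocabulary, and masker type,
# with a random intercept per child. Data are simulated and variable names are
# hypothetical; this is not the authors' model specification.

rng = np.random.default_rng(42)
n_children, n_conditions = 40, 2
child = np.repeat(np.arange(n_children), n_conditions)
masker = np.tile(["SSN", "TTS"], n_children)
age = np.repeat(rng.uniform(6.7, 13.0, n_children), n_conditions)
vocab = np.repeat(rng.normal(100, 15, n_children), n_conditions)

# Simulated thresholds: higher (worse) in the two-talker masker, improving with age/vocabulary.
srt = (2.0 + 4.0 * (masker == "TTS") - 0.5 * (age - 10) - 0.05 * (vocab - 100)
       + rng.normal(0, 1.5, n_children * n_conditions))

data = pd.DataFrame({"srt": srt, "age": age, "vocab": vocab, "masker": masker, "child": child})
result = smf.mixedlm("srt ~ age + vocab + masker", data, groups=data["child"]).fit()
print(result.summary())
```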
Affiliation(s)
- Kaylah Lalonde: Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Elizabeth A. Walker: Department of Communication Sciences and Disorders, The University of Iowa, Iowa City, IA
- Lori J. Leibold: Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Ryan W. McCreery: Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
18. Flaherty MM, Price R, Murgia S, Manukian E. Can Playing a Game Improve Children's Speech Recognition? A Preliminary Study of Implicit Talker Familiarity Effects. Am J Audiol 2023:1-16. PMID: 38056473. DOI: 10.1044/2023_aja-23-00156.
Abstract
PURPOSE The goal was to evaluate whether implicit talker familiarization via an interactive computer game, designed for this study, could improve children's word recognition in classroom noise. It was hypothesized that, regardless of age, children would perform better when recognizing words spoken by the talker who was heard during the game they played. METHOD Using a one-group pretest-posttest experimental design, this study examined the impact of short-term implicit voice exposure on children's word recognition in classroom noise. Implicit voice familiarization occurred via an interactive computer game, played at home for 10 min a day for 5 days. In the game, children (8-12 years) heard one voice, intended to become the "familiar talker." Pre- and postfamiliarization, children identified words in prerecorded classroom noise. Four conditions were tested to evaluate talker familiarity and generalization effects. RESULTS Results demonstrated an 11% improvement when recognizing words spoken by the voice heard in the game ("familiar talker"). This was observed only for words that were heard in the game and did not generalize to unfamiliarized words. Before familiarization, younger children had poorer recognition than older children in all conditions; however, after familiarization, there was no effect of age on performance for familiarized stimuli. CONCLUSIONS Implicit short-term exposure to a talker has the potential to improve children's speech recognition. Therefore, leveraging talker familiarity through gameplay shows promise as a viable method for improving children's speech-in-noise recognition. However, given that improvements did not generalize to unfamiliarized words, careful consideration of exposure stimuli is necessary to optimize this approach.
Affiliation(s)
- Mary M Flaherty: Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, Champaign
- Rachael Price: Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, Champaign; Department of Audiology, Children's Hospital of Philadelphia, PA
- Silvia Murgia: Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, Champaign
- Emma Manukian: Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, Champaign
19. Porto L, Wouters J, van Wieringen A. Speech perception in noise, working memory, and attention in children: A scoping review. Hear Res 2023;439:108883. PMID: 37722287. DOI: 10.1016/j.heares.2023.108883.
Abstract
PURPOSE Speech perception in noise is an everyday occurrence for adults and children alike. The factors that influence how well individuals cope with noise during spoken communication are not well understood, particularly in the case of children. This article aims to review the available evidence on how working memory and attention play a role in children's speech perception in noise, how characteristics of measures affect results, and how this relationship differs in non-typical populations. METHOD This article is a scoping review of the literature available on PubMed. Forty articles were included for meeting the inclusion criteria of including children as participants, some measure of speech perception in noise, some measure of attention and/or working memory, and some attempt to establish relationships between the measures. Findings were charted and presented keeping in mind how they relate to the research questions. RESULTS The majority of studies report that attention and especially working memory are involved in speech perception in noise by children. We provide an overview of the impact of certain task characteristics on findings across the literature, as well as how these affect non-typical populations. CONCLUSION While most of the work reviewed here provides evidence suggesting that working memory and attention are important abilities employed by children in overcoming the difficulties imposed by noise during spoken communication, methodological variability still prevents a clearer picture from emerging.
Affiliation(s)
- Lyan Porto: Department of Neurosciences, University of Leuven, Research Group Experimental Oto-Rino-Laryngologie, O&N II, Herestraat 49, Leuven 3000, Belgium
- Jan Wouters: Department of Neurosciences, University of Leuven, Research Group Experimental Oto-Rino-Laryngologie, O&N II, Herestraat 49, Leuven 3000, Belgium
- Astrid van Wieringen: Department of Neurosciences, University of Leuven, Research Group Experimental Oto-Rino-Laryngologie, O&N II, Herestraat 49, Leuven 3000, Belgium; Department of Special Needs Education, University of Oslo, Norway
|
20
|
Lima JVDS, de Morais CFM, Zamberlan-Amorim NE, Mandrá PP, Reis ACMB. Neurocognitive function in children with cochlear implants and hearing aids: a systematic review. Front Neurosci 2023; 17:1242949. [PMID: 37859761 PMCID: PMC10582571 DOI: 10.3389/fnins.2023.1242949] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2023] [Accepted: 09/15/2023] [Indexed: 10/21/2023] Open
Abstract
Purpose To systematically review the existing literature that examines the relationship between cognition, hearing, and language in children using cochlear implants and hearing aids. Method The review was registered in PROSPERO (Registration: CRD 42020203974). The review was based on the Preferred Reporting Items for Systematic Reviews and Meta-Analysis and examined the scientific literature in VHL, MEDLINE, CINAHL, Scopus, WOS, and Embase. It included original observational studies in children using hearing aids and/or cochlear implants who underwent cognitive and auditory and/or language tests. Data were extracted from the studies and their level of evidence was graded with the Oxford Center for Evidence-Based Medicine: Levels of Evidence. Meta-analysis could not be performed due to data heterogeneity. Outcomes are described in a narrative synthesis and in tables. Results The systematic search and subsequent full-text evaluation identified 21 studies, conducted in 10 different countries. Altogether, their samples comprised 1,098 individuals, aged 0.16-12.6 years. The studies assessed the following cognitive domains: memory, nonverbal cognition, reasoning, attention, executive functions, language, perceptual-motor function, visuoconstructive ability, processing speed, and phonological processing/phonological memory. Children with hearing loss using cochlear implants and hearing aids scored significantly lower in many cognitive functions than normal-hearing (NH) children. Neurocognitive functions were correlated with hearing and language outcomes. Conclusion Many cognitive tools were used to assess cognitive function in children with hearing devices. Results suggest that children with cochlear implants and hearing aids have cognitive deficits; these outcomes are mainly correlated with vocabulary. This study highlights the need to understand children's cognitive function and increase the knowledge of the relationship between cognition, language, and hearing in children using cochlear implants and hearing aids.
Affiliation(s)
- Jefferson Vilela da Silva Lima: Postgraduate Program in Rehabilitation and Functional Performance, Ribeirão Preto Medical School, University of São Paulo, Ribeirão Preto, Brazil
- Caroline Favaretto Martins de Morais: Postgraduate Program in Rehabilitation and Functional Performance, Ribeirão Preto Medical School, University of São Paulo, Ribeirão Preto, Brazil
- Nelma Ellen Zamberlan-Amorim: Clinics Hospital of the Ribeirão Preto Medical School (HCFMRP-USP), University of São Paulo, Ribeirão Preto, Brazil
- Patricia Pupin Mandrá: Department of Health Sciences, Ribeirão Preto Medical School, University of São Paulo, Ribeirão Preto, Brazil
|
21
|
Visentin C, Pellegatti M, Garraffa M, Di Domenico A, Prodi N. Individual characteristics moderate listening effort in noisy classrooms. Sci Rep 2023; 13:14285. [PMID: 37652970 PMCID: PMC10471719 DOI: 10.1038/s41598-023-40660-1] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/21/2022] [Accepted: 08/16/2023] [Indexed: 09/02/2023] Open
Abstract
Comprehending the teacher's message when other students are chatting is challenging. Even though the sound environment is the same for a whole class, differences in individual performance can be observed, which might depend on a variety of personal factors and their specific interaction with the listening condition. This study was designed to explore the role of individual characteristics (reading comprehension, inhibitory control, noise sensitivity) when primary school children perform a listening comprehension task in the presence of a two-talker masker. The results indicated that this type of noise impairs children's accuracy, effort, and motivation during the task. Its specific impact depended on the level and was modulated by the child's characteristics. In particular, reading comprehension was found to support task accuracy, whereas inhibitory control moderated the effect of listening condition on the two measures of listening effort included in the study (response time and self-ratings), even though with a different pattern of association. A moderation effect of noise sensitivity on perceived listening effort was also observed. Understanding the relationship between individual characteristics and classroom sound environment has practical implications for the acoustic design of spaces promoting students' well-being, and supporting their learning performance.
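The moderation effects reported above amount to interaction terms between the listening condition and a child-level trait in a regression model. A minimal sketch of that structure in Python, using entirely made-up toy values and placeholder variable names (the study's own analysis was more elaborate and likely mixed-effects):

    import pandas as pd
    import statsmodels.formula.api as smf

    # Toy long-format data: one row per child per listening condition.
    # All numbers are invented purely to show the model structure.
    df = pd.DataFrame({
        "response_time": [1.10, 1.45, 0.95, 1.70, 1.05, 1.30],
        "condition": ["quiet", "two_talker"] * 3,
        "inhibitory_control": [1.2, 1.2, -0.4, -0.4, 0.3, 0.3],
    })

    # Moderation = condition-by-inhibitory-control interaction on listening effort.
    model = smf.ols("response_time ~ C(condition) * inhibitory_control", data=df).fit()
    print(model.params)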
Affiliation(s)
- Chiara Visentin: Department of Engineering, University of Ferrara, Via Saragat 1, 44122 Ferrara, Italy; Institute for Renewable Energy, Eurac Research, Via A. Volta/A. Volta Straße 13/A, 39100 Bolzano-Bozen, Italy
- Matteo Pellegatti: Department of Engineering, University of Ferrara, Via Saragat 1, 44122 Ferrara, Italy
- Maria Garraffa: School of Health Sciences, University of East Anglia, Norwich Research Park, Norwich, Norfolk, NR4 7TJ, UK
- Alberto Di Domenico: Department of Psychological, Health and Territorial Sciences, University of Chieti-Pescara, Via dei Vestini 31, 66100 Chieti, Italy
- Nicola Prodi: Department of Engineering, University of Ferrara, Via Saragat 1, 44122 Ferrara, Italy
|
22
|
Kara HÇ, Kara E, Ataş A. The Effect of Noise and Reverberation on Spatial Perception in Sequential Bilateral Cochlear Implant Users. J Am Acad Audiol 2023; 34:143-152. [PMID: 39471993 DOI: 10.1055/s-0044-1790266] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2024]
Abstract
BACKGROUND Spatial orientation is an executive function that encompasses vital activities and the auditory organization of daily bodily movements, directionality, and environmental information. It is directly linked to vision and hearing and is used throughout life, building complex, learning-based relationships with these systems. PURPOSE The purpose of this study was to examine the effects of noise and reverberation on cochlear implant (CI) users by comparing their localization and auditory performance in quiet, in noise, and in reverberation. RESEARCH DESIGN All subjects completed immittance and audiological testing, a language development test (TIFALDI, Receptive/Expressive Language Test score of 7 years and above), a localization test in noise, and a localization test in reverberation. STUDY SAMPLE Eighteen female and 16 male bilateral CI users with profound sensorineural hearing loss were included. Subjects ranged in age from 8 years 4 months to 10 years 11 months. DATA COLLECTION AND ANALYSIS Data from subjects were collected prospectively and analyzed with SPSS 21. RESULTS Subjects had no difficulty determining direction in quiet, but they had significant difficulty localizing sources at the 135-, 225-, and 315-degree angles, particularly at a signal-to-noise ratio (SNR) of -10 dB and at reverberation times of 0.6 and 0.9 seconds (p ≤ 0.005). Performance of the sequentially implanted users changed significantly both when the SNR was changed and in the presence of reverberation (p < 0.05). CONCLUSION These findings suggest that individuals with hearing loss experience substantial difficulty in noisy and reverberant environments such as schools, and that using assistive listening devices in these conditions is likely to contribute positively to their academic development.
Affiliation(s)
- Halide Çetin Kara: Department of ENT - Audiology, Istanbul University-Cerrahpasa, Istanbul, Türkiye
- Eyyup Kara: Department of Audiology, Istanbul University-Cerrahpasa, Istanbul, Türkiye
- Ahmet Ataş: Department of ENT - Audiology, Koç University, Istanbul, Türkiye
|
23
|
Wiseman KB, McCreery RW, Walker EA. Hearing Thresholds, Speech Recognition, and Audibility as Indicators for Modifying Intervention in Children With Hearing Aids. Ear Hear 2023; 44:787-802. [PMID: 36627755 PMCID: PMC10271969 DOI: 10.1097/aud.0000000000001328] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/12/2023]
Abstract
OBJECTIVES The purpose of this study was to determine if traditional audiologic measures (e.g., pure-tone average, speech recognition) and audibility-based measures predict risk for spoken language delay in children who are hard of hearing (CHH) who use hearing aids (HAs). Audibility-based measures included the Speech Intelligibility Index (SII), HA use, and auditory dosage, a measure of auditory access that weighs each child's unaided and aided audibility by the average hours of HA use per day. The authors also sought to estimate values of these measures at which CHH would be at greater risk for delayed outcomes compared with a group of children with typical hearing (CTH) matched for age and socioeconomic status, potentially signaling a need to make changes to a child's hearing technology or intervention plan. DESIGN The authors compared spoken language outcomes of 182 CHH and 78 CTH and evaluated relationships between language and audiologic measures (e.g., aided SII) in CHH using generalized additive models. They used these models to identify values associated with falling below CTH (by > 1.5 SDs from the mean) on language assessments, putting CHH at risk for language delay. RESULTS Risk for language delay was associated with aided speech recognition in noise performance (<59% phonemes correct, 95% confidence interval [55%, 62%]), aided Speech Intelligibility Index (SII < 0.61, 95% confidence interval [0.53, 0.68]), and auditory dosage (dosage < 6.0, 95% confidence interval [5.3, 6.7]) in CHH. The level of speech recognition in quiet, unaided pure-tone average, and unaided SII that placed children at risk for language delay could not be determined due to imprecise estimates with broad confidence intervals. CONCLUSIONS Results support using aided SII, aided speech recognition in noise measures, and auditory dosage as tools to facilitate clinical decision-making, such as deciding whether changes to a child's hearing technology are warranted. Values identified in this article can complement other metrics (e.g., unaided hearing thresholds, aided speech recognition testing, language assessment) when considering changes to intervention, such as adding language supports, making HA adjustments, or referring for cochlear implant candidacy evaluation.
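Auditory dosage is described above only loosely, as audibility weighted by average daily hours of hearing aid use, so the published formula may differ from the following. This sketch is just one plausible way to operationalize a time-weighted audibility metric; the function name, the 12-hour waking day, and the example values are assumptions for illustration.

    def auditory_dosage(aided_sii, unaided_sii, hours_ha_use, waking_hours=12.0):
        # Time-weighted audibility over a waking day: hours with the hearing aid
        # count at the aided SII (0 to 1), remaining waking hours at the unaided SII.
        # Illustrative only; the metric in the article may be defined differently.
        hours_ha_use = min(hours_ha_use, waking_hours)
        return aided_sii * hours_ha_use + unaided_sii * (waking_hours - hours_ha_use)

    # Example with assumed values: aided SII 0.65, unaided SII 0.20, 9 hours of daily use.
    print(auditory_dosage(0.65, 0.20, 9.0))  # 6.45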
Affiliation(s)
- Elizabeth A. Walker: Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA
|
24
|
Visentin C, Pellegatti M, Garraffa M, Di Domenico A, Prodi N. Be Quiet! Effects of Competing Speakers and Individual Characteristics on Listening Comprehension for Primary School Students. INTERNATIONAL JOURNAL OF ENVIRONMENTAL RESEARCH AND PUBLIC HEALTH 2023; 20:4822. [PMID: 36981730 PMCID: PMC10049310 DOI: 10.3390/ijerph20064822] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/29/2023] [Revised: 03/03/2023] [Accepted: 03/05/2023] [Indexed: 05/06/2023]
Abstract
Students learn in noisy classrooms, where the main sources of noise are their own voices. In this sound environment, students are not equally at risk from background noise interference during lessons, due to the moderation effect of the individual characteristics on the listening conditions. This study investigates the effect of the number of competing speakers on listening comprehension and whether this is modulated by selective attention skills, working memory, and noise sensitivity. Seventy-one primary school students aged 10 to 13 years completed a sentence comprehension task in three listening conditions: quiet, two competing speakers, and four competing speakers. Outcome measures were accuracy, listening effort (response times and self-reported), motivation, and confidence in completing the task. Individual characteristics were assessed in quiet. Results showed that the number of competing speakers has no direct effects on the task, whilst the individual characteristics were found to moderate the effect of the listening conditions. Selective attention moderated the effects on accuracy and response times, working memory on motivation, and noise sensitivity on both perceived effort and confidence. Students with low cognitive abilities and high noise sensitivity were found to be particularly at risk in the condition with two competing speakers.
Affiliation(s)
- Chiara Visentin: Department of Engineering, University of Ferrara, Via Saragat 1, 44122 Ferrara, Italy; Institute for Renewable Energy, Eurac Research, A. Volta Straße/Via A. Volta 13/A, 39100 Bolzano-Bozen, Italy
- Matteo Pellegatti: Department of Engineering, University of Ferrara, Via Saragat 1, 44122 Ferrara, Italy; Institute for Renewable Energy, Eurac Research, A. Volta Straße/Via A. Volta 13/A, 39100 Bolzano-Bozen, Italy
- Maria Garraffa: School of Health Sciences, University of East Anglia, Norwich Research Park, Norwich NR4 7TJ, UK
- Alberto Di Domenico: Department of Psychological, Health and Territorial Sciences, University of Chieti-Pescara, Via dei Vestini 31, 66100 Chieti, Italy
- Nicola Prodi: Department of Engineering, University of Ferrara, Via Saragat 1, 44122 Ferrara, Italy
|
25
|
Nittrouer S, Lowenstein JH. Recognition of Sentences With Complex Syntax in Speech Babble by Adolescents With Normal Hearing or Cochlear Implants. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2023; 66:1110-1135. [PMID: 36758200 PMCID: PMC10205108 DOI: 10.1044/2022_jslhr-22-00407] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/13/2022] [Revised: 10/17/2022] [Accepted: 11/22/2022] [Indexed: 05/25/2023]
Abstract
PURPOSE General language abilities of children with cochlear implants have been thoroughly investigated, especially at young ages, but far less is known about how well they process language in real-world settings, especially in higher grades. This study addressed this gap in knowledge by examining recognition of sentences with complex syntactic structures in backgrounds of speech babble by adolescents with cochlear implants, and peers with normal hearing. DESIGN Two experiments were conducted. First, new materials were developed using young adults with normal hearing as the normative sample, creating a corpus of sentences with controlled, but complex syntactic structures presented in three kinds of babble that varied in voice gender and number of talkers. Second, recognition by adolescents with normal hearing or cochlear implants was examined for these new materials and for sentence materials used with these adolescents at younger ages. Analyses addressed three objectives: (1) to assess the stability of speech recognition across a multiyear age range, (2) to evaluate speech recognition of sentences with complex syntax in babble, and (3) to explore how bottom-up and top-down mechanisms account for performance under these conditions. RESULTS Results showed: (1) Recognition was stable across the ages of 10-14 years for both groups. (2) Adolescents with normal hearing performed similarly to young adults with normal hearing, showing effects of syntactic complexity and background babble; adolescents with cochlear implants showed poorer recognition overall, and diminished effects of both factors. (3) Top-down language and working memory primarily explained recognition for adolescents with normal hearing, but the bottom-up process of perceptual organization primarily explained recognition for adolescents with cochlear implants. CONCLUSIONS Comprehension of language in real-world settings relies on different mechanisms for adolescents with cochlear implants than for adolescents with normal hearing. A novel finding was that perceptual organization is a critical factor. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21965228.
Affiliation(s)
- Susan Nittrouer: Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville
- Joanna H. Lowenstein: Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville
|
26
|
Porter HL, Braza MD, Knox R, Vicente M, Buss E, Leibold LJ. "I think it impacts all areas of his life": Perspectives on hearing from mothers of individuals with Down syndrome. JOURNAL OF APPLIED RESEARCH IN INTELLECTUAL DISABILITIES 2023; 36:333-342. [PMID: 36527178 PMCID: PMC9911370 DOI: 10.1111/jar.13062] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/22/2022] [Revised: 10/31/2022] [Accepted: 11/30/2022] [Indexed: 12/23/2022]
Abstract
BACKGROUND Individuals with Down syndrome are known to have high rates of hearing loss, but it is unclear how this impacts their ability to communicate and function in real-world environments. METHODS Sixteen English-speaking and Spanish-speaking mothers of individuals with Down syndrome ages 6-40 years participated in individual, semi-structured interviews using a videoconferencing platform. Session transcripts were analysed using applied thematic analysis. RESULTS Mothers described listening environments, the impact of hearing on daily life, barriers to successful listening, and strategies to overcome communication barriers for their children with Down syndrome. CONCLUSIONS Hearing was largely discussed in terms of challenges and detriments, suggesting that hearing experiences are predominately considered to negatively impact the functional abilities of individuals with Down syndrome. Background noise and hearing loss were sources of communication difficulties. Parent-reported barriers and strategies can inform ecologically valid research priorities aimed at improving outcomes for individuals with Down syndrome.
Affiliation(s)
- Heather L. Porter: Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, USA
- Meredith D. Braza: Department of Allied Health Sciences, The University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Randi Knox: Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, USA
- Manuel Vicente: Department of Special Education and Communication Disorders, University of Nebraska, Lincoln, Nebraska, USA
- Emily Buss: Department of Otolaryngology/HNS, The University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Lori J. Leibold: Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, USA
|
27
|
Lewis DE. Speech Understanding in Complex Environments by School-Age Children with Mild Bilateral or Unilateral Hearing Loss. Semin Hear 2023; 44:S36-S48. [PMID: 36970648 PMCID: PMC10033204 DOI: 10.1055/s-0043-1764134] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 03/06/2023] Open
Abstract
Numerous studies have shown that children with mild bilateral (MBHL) or unilateral hearing loss (UHL) experience speech perception difficulties in poor acoustics. Much of the research in this area has been conducted via laboratory studies using speech-recognition tasks with a single talker and presentation via earphones and/or from a loudspeaker located directly in front of the listener. Real-world speech understanding is more complex, however, and these children may need to exert greater effort than their peers with normal hearing to understand speech, potentially impacting progress in a number of developmental areas. This article discusses issues and research relative to speech understanding in complex environments for children with MBHL or UHL and implications for real-world listening and understanding.
Affiliation(s)
- Dawna E. Lewis: Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska
|
28
|
Plasticity Changes in Central Auditory Systems of School-Age Children Following a Brief Training With a Remote Microphone System. Ear Hear 2023:00003446-990000000-00109. [PMID: 36706057 DOI: 10.1097/aud.0000000000001329] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/28/2023]
Abstract
OBJECTIVES The objective of this study was to investigate whether a brief speech-in-noise training with a remote microphone (RM) system (favorable listening condition) would contribute to enhanced post-training plasticity changes in the auditory system of school-age children. DESIGN Before training, event-related potentials (ERPs) were recorded from 49 typically developing children, who actively identified two syllables in quiet and in noise (+5 dB signal-to-noise ratio [SNR]). During training, children completed the same syllable identification task as in the pre-training noise condition, but received feedback on their performance. Following random assignment, half of the sample used an RM system during training (experimental group), while the other half did not (control group). That is, during training, children in the experimental group listened to a more favorable speech signal (+15 dB SNR) than children from the control group (+5 dB SNR). ERPs were collected after training at +5 dB SNR to evaluate the effects of training with and without the RM system. Electrical neuroimaging analyses quantified the effects of training in each group on ERP global field power (GFP) and topography, indexing response strength and network changes, respectively. Behavioral speech-perception-in-noise skills of children were also evaluated and compared before and after training. We hypothesized that training with the RM system (experimental group) would lead to greater enhancement of GFP and greater topographical changes post-training than training without the RM system (control group). We also expected greater behavioral improvement on the speech-perception-in-noise task when training with than without the RM system. RESULTS GFP was enhanced after training only in the experimental group. These effects were observed in early time windows corresponding to traditional P1-N1 (100 to 200 msec) and P2-N2 (200 to 400 msec) ERP components. No training effects were observed on response topography. Finally, both groups increased their speech-perception-in-noise skills post-training. CONCLUSIONS Enhanced GFP after training with the RM system indicates plasticity changes in the neural representation of sound resulting from listening to an enriched auditory signal. Further investigation of longer training or auditory experiences with favorable listening conditions is needed to determine if that results in long-term speech-perception-in-noise benefits.
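Global field power has a standard definition: the spatial standard deviation of the average-referenced potential across electrodes at each time point. A minimal NumPy sketch of that computation, assuming an (electrodes x time) array layout; this is not the authors' analysis pipeline:

    import numpy as np

    def global_field_power(erp):
        # erp: array of shape (n_electrodes, n_times), average-referenced ERP.
        # GFP at each time point is the standard deviation across electrodes.
        return np.std(erp, axis=0)

    def mean_gfp_in_window(erp, times, t_start, t_end):
        # Mean GFP within a latency window, e.g. 0.100 to 0.200 s for the P1-N1 range.
        mask = (times >= t_start) & (times <= t_end)
        return global_field_power(erp)[mask].mean()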
|
29
|
Gohari N, Dastgerdi ZH, Rouhbakhsh N, Afshar S, Mobini R. Training Programs for Improving Speech Perception in Noise: A Review. J Audiol Otol 2023; 27:1-9. [PMID: 36710414 PMCID: PMC9884994 DOI: 10.7874/jao.2022.00283] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2022] [Accepted: 10/26/2022] [Indexed: 01/20/2023] Open
Abstract
Understanding speech in the presence of noise is difficult and challenging, even for people with normal hearing. Accurate pitch perception, coding and decoding of temporal and intensity cues, and cognitive factors are involved in speech perception in noise (SPIN); disruption in any of these can be a barrier to SPIN. Because the physiological representations of sounds can be corrected by exercises, training methods for any impairment can be used to improve speech perception. This study describes the various types of bottom-up training methods: pitch training based on fundamental frequency (F0) and harmonics; spatial, temporal, and phoneme training; and top-down training methods, such as cognitive training of functional memory. This study also discusses music training that affects both bottom-up and top-down components and speech training in noise. Given the effectiveness of all these training methods, we recommend identifying the defects underlying SPIN disorders and selecting the best training approach.
Affiliation(s)
- Nasrin Gohari: Hearing Disorders Research Center, Department of Audiology, School of Rehabilitation, Hamadan University of Medical Sciences, Hamadan, Iran
- Zahra Hosseini Dastgerdi (corresponding author): Department of Audiology, School of Rehabilitation, Isfahan University of Medical Sciences, Isfahan, Iran
- Nematollah Rouhbakhsh: Department of Audiology, School of Rehabilitation, Tehran University of Medical Sciences, Tehran, Iran
- Sara Afshar: Hearing Disorders Research Center, Department of Audiology, School of Rehabilitation, Hamadan University of Medical Sciences, Hamadan, Iran
- Razieh Mobini: Hearing Disorders Research Center, Department of Audiology, School of Rehabilitation, Hamadan University of Medical Sciences, Hamadan, Iran
|
30
|
Buss E, Felder J, Miller MK, Leibold LJ, Calandruccio L. Can Closed-Set Word Recognition Differentially Assess Vowel and Consonant Perception for School-Age Children With and Without Hearing Loss? JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2022; 65:3934-3950. [PMID: 36194777 PMCID: PMC9927623 DOI: 10.1044/2022_jslhr-20-00749] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/28/2020] [Revised: 04/02/2022] [Accepted: 06/18/2022] [Indexed: 06/16/2023]
Abstract
PURPOSE Vowels and consonants play different roles in language acquisition and speech recognition, yet standard clinical tests do not assess vowel and consonant perception separately. As a result, opportunities for targeted intervention may be lost. This study evaluated closed-set word recognition tests designed to rely predominantly on either vowel or consonant perception and compared results with sentence recognition scores. METHOD Participants were children (5-17 years of age) and adults (18-38 years of age) with normal hearing and children with sensorineural hearing loss (7-17 years of age). Speech reception thresholds (SRTs) were measured in speech-shaped noise. Children with hearing loss were tested with their hearing aids. Word recognition was evaluated using a three-alternative forced-choice procedure, with a picture-pointing response; monosyllabic target words varied with respect to either consonant or vowel content. Sentence recognition was evaluated for low- and high-probability sentences. In a subset of conditions, stimuli were low-pass filtered to simulate a steeply sloping hearing loss in participants with normal hearing. RESULTS Children's SRTs improved with increasing age for words and sentences. Low-pass filtering had a larger effect for consonant-variable words than vowel-variable words for both children and adults with normal hearing, consistent with the greater high-frequency content of consonants. Children with hearing loss tested with hearing aids tended to perform more poorly than age-matched children with normal hearing, particularly for sentence recognition, but consonant- and vowel-variable word recognition did not appear to be differentially affected by the amount of high- and low-frequency hearing loss. CONCLUSIONS Closed-set recognition of consonant- and vowel-variable words appeared to differentially evaluate vowel and consonant perception but did not vary by configuration of hearing loss in this group of pediatric hearing aid users. Word scores obtained in this manner do not fully characterize the auditory abilities necessary for open-set sentence recognition, but they do provide a general estimate.
Affiliation(s)
- Emily Buss: Department of Otolaryngology/Head and Neck Surgery, The University of North Carolina at Chapel Hill
- Margaret K. Miller: Human Auditory Development Laboratory, Boys Town National Research Hospital, Omaha, NE
- Lori J. Leibold: Human Auditory Development Laboratory, Boys Town National Research Hospital, Omaha, NE
- Lauren Calandruccio: Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH
|
31
|
Stewart HJ, Cash EK, Pinkl J, Nakeva von Mentzer C, Lin L, Hunter LL, Moore DR. Adaptive Hearing Aid Benefit in Children With Mild/Moderate Hearing Loss: A Registered, Double-Blind, Randomized Clinical Trial. Ear Hear 2022; 43:1402-1415. [PMID: 35758427 DOI: 10.1097/aud.0000000000001230] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
OBJECTIVES We completed a registered double-blind randomized control trial to compare acclimatization to two hearing aid fitting algorithms by experienced pediatric hearing aid users with mild to moderate hearing loss. We hypothesized that extended use (up to 13 months) of an adaptive algorithm with integrated directionality and noise reduction, OpenSound Navigator (OSN), would result in improved performance on auditory, cognitive, academic, and caregiver- or self-report measures compared with a control, omnidirectional algorithm (OMNI). DESIGN Forty children aged 6 to 13 years with mild to moderate/severe symmetric sensorineural hearing loss completed this study. They were all experienced hearing aid users and were recruited through the Cincinnati Children's Hospital Medical Center Division of Audiology. The children were divided into 20 pairs based on similarity of age (within 1 year) and hearing loss (level and configuration). Individuals from each pair were randomly assigned to either an OSN (experimental) or OMNI (control) fitting algorithm group. Each child completed an audiology evaluation, hearing aid fitting using physically identical Oticon OPN hearing aids, follow-up audiological appointment, and 2 research visits up to 13 months apart. Research visit outcome measures covered speech perception (in quiet and in noise), novel grammar and word learning, cognition, academic ability, and caregiver report of listening behaviors. Analysis of outcome differences between visits, groups, ages, conditions and their interactions used linear mixed models. Between 22 and 39 children provided useable data for each task. RESULTS Children using the experimental (OSN) algorithm did not show any significant performance differences on the outcome measures compared with those using the control (OMNI) algorithm. Overall performance of all children in the study increased across the duration of the trial on word repetition in noise, sentence repetition in quiet, and caregivers' assessment of hearing ability. There was a significant negative relationship between age at first hearing aid use, final Reading and Mathematical ability, and caregiver rated speech hearing. A significant positive relationship was found between daily hearing aid use and study-long change in performance on the Flanker test of inhibitory control and attention. Logged daily use of hearing aids related to caregiver rated spatial hearing. All results controlled for age at testing/evaluation and false discovery rate. CONCLUSIONS Use of the experimental (OSN) algorithm neither enhanced nor reduced performance on auditory, cognitive, academic or caregiver report measures compared with the control (OMNI) algorithm. However, prolonged hearing aid use led to benefits in hearing, academic skills, attention, and caregiver evaluation.
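The abstract notes that results were controlled for the false discovery rate across many outcome models. A common way to do this is the Benjamini-Hochberg adjustment, sketched below with placeholder p-values; this is illustrative only and not the trial's analysis script.

    from statsmodels.stats.multitest import multipletests

    # Placeholder p-values, one per outcome model (purely illustrative).
    pvals = [0.003, 0.041, 0.200, 0.012, 0.610]
    reject, p_adj, _, _ = multipletests(pvals, alpha=0.05, method="fdr_bh")
    for p, pa, r in zip(pvals, p_adj, reject):
        print(f"p = {p:.3f}  adjusted = {pa:.3f}  significant = {r}")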
Affiliation(s)
- Hannah J Stewart: Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA; Division of Psychology and Language Sciences, University College London, London, United Kingdom; Department of Psychology, Lancaster University, Lancaster, United Kingdom
- Erin K Cash: Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
- Joseph Pinkl: Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA; Department of Research and Development, Gateway Biotechnology Inc., Rootstown, Ohio, USA
- Cecilia Nakeva von Mentzer: Department of Neuroscience, Unit for SLP, Uppsala University, Uppsala, Sweden; Department of Research in Patient Services, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
- Li Lin: Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA; Department of Research in Patient Services, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA
- Lisa L Hunter: Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA; Department of Research in Patient Services, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA; Division of Audiology, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA; Department of Otolaryngology, College of Medicine, University of Cincinnati, Cincinnati, Ohio, USA
- David R Moore: Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA; Department of Research in Patient Services, Cincinnati Children's Hospital Medical Center, Cincinnati, Ohio, USA; Department of Otolaryngology, College of Medicine, University of Cincinnati, Cincinnati, Ohio, USA; Manchester Centre for Audiology and Deafness, University of Manchester, Manchester, United Kingdom
|
32
|
Zussino J, Zupan B, Preston R. Speech, language, and literacy outcomes for children with mild to moderate hearing loss: A systematic review. JOURNAL OF COMMUNICATION DISORDERS 2022; 99:106248. [PMID: 35843068 DOI: 10.1016/j.jcomdis.2022.106248] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/03/2022] [Revised: 07/06/2022] [Accepted: 07/07/2022] [Indexed: 06/15/2023]
Abstract
PURPOSE To systematically review the current literature to describe the speech, language, and literacy skills of children with mild to moderate hearing loss (MMHL). METHOD Systematic searching of seven online databases identified 13 eligible studies examining speech, language, and literacy outcomes for children with MMHL. Studies were rated for quality. Findings were reported via narrative synthesis. RESULTS Many studies reported no significant differences between children with MMHL and hearing peers on speech, language, and literacy measures. Studies that did report significant differences reported that children with MMHL performed significantly more poorly than hearing peers in speech production, receptive morphology, following directions, recalling sentences, expressive morphology, and word and non-word reading. CONCLUSIONS Due to the heterogeneity in participant characteristics, moderating factors reported, and measures used, clear patterns in the outcomes were difficult to find. Further research into speech, language, and literacy outcomes for children with MMHL from early childhood to adolescence (longitudinal studies) is required to describe possible trajectories for children with MMHL, including how moderating factors (such as age of hearing aid fitting, duration of use, and access to early intervention) may be contributing to these trajectories.
Affiliation(s)
- Jenna Zussino: Central Queensland University, Rockhampton, QLD, Australia
- Barbra Zupan: Central Queensland University, Rockhampton, QLD, Australia
- Robyn Preston: Central Queensland University, Townsville, QLD, Australia
|
33
|
The relationships between language, working memory and rapid naming in children with mild to moderate hearing loss. Int J Pediatr Otorhinolaryngol 2022; 158:111156. [PMID: 35490609 DOI: 10.1016/j.ijporl.2022.111156] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/21/2022] [Revised: 03/30/2022] [Accepted: 04/20/2022] [Indexed: 11/21/2022]
Abstract
OBJECTIVE Hearing loss is associated with reduced quality and quantity of auditory input, and with difficulty in cognitive and language skills. This study aimed to investigate the relationship between language, working memory, and rapid naming skills in children with mild to moderate sensorineural hearing loss (MMHL). METHODS Twenty children with MMHL who used bilateral hearing aids and had comparable auditory experience and demographic characteristics were included. The verbal memory subscale of the Working Memory Scale (WMS), consisting of verbal short-term memory (V-STM) and verbal working memory (V-WM) subtests, was administered to all participants. They also completed rapid automatized naming tasks and standardized language measures. RESULTS The language score showed a moderate and significant correlation with the verbal memory (VM) score (p = 0.03, r = 0.48) and a moderate negative correlation with rapid automatized naming (RAN) duration (p = 0.06, r = -0.61). The VM score showed a moderate and significant negative correlation with RAN duration (p = 0.01, r = -0.67). The language level showed a strong and significant positive correlation with V-STM (p = 0.007, r = 0.60), V-WM (p = 0.009, r = 0.58), and VM level (p = 0.003, r = 0.65). The VM subtest levels showed a strong and significant positive correlation with each other (p = 0.017, r = 0.53). The RAN level showed a strong and significant negative correlation with VM (p = 0.001, r = -0.70), V-WM (p = 0.001, r = -0.76), V-STM (p = 0.001, r = -0.69), and language level (p = 0.001, r = -0.77). CONCLUSION The results suggest that the language, verbal working memory, and rapid naming skills of children with MMHL are closely related. It is recommended that the relationships between verbal short-term memory, verbal working memory, rapid naming skills, and language skills be considered in therapeutic and educational settings. To the best of our knowledge, this is the first study to examine the relationships between verbal short-term memory, verbal working memory, duration of rapid automatized naming, and language skills in children with MMHL.
|
34
|
Lewis D, Spratford M, Stecker GC, McCreery RW. Remote-Microphone Benefit in Noise and Reverberation for Children Who are Hard of Hearing. J Am Acad Audiol 2022; 33:330-341. [PMID: 36577441 PMCID: PMC10300232 DOI: 10.1055/s-0042-1755319] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
BACKGROUND Remote-microphone (RM) systems are designed to reduce the impact of poor acoustics on speech understanding. However, there is limited research examining the effects of adding reverberation to noise on speech understanding when using hearing aids (HAs) and RM systems. Given the significant challenges posed by environments with poor acoustics for children who are hard of hearing, we evaluated the ability of a novel RM system to address the effects of noise and reverberation. PURPOSE We assessed the effect of a recently developed RM system on aided speech perception of children who were hard of hearing in noise and reverberation and how their performance compared to peers who are not hard of hearing (i.e., who have hearing thresholds no greater than 15 dB HL). The effect of aided speech audibility on sentence recognition when using an RM system also was assessed. STUDY SAMPLE Twenty-two children with mild to severe hearing loss and 17 children who were not hard of hearing (i.e., with hearing thresholds no greater than 15 dB HL) (7-18 years) participated. DATA COLLECTION AND ANALYSIS An adaptive procedure was used to determine the signal-to-noise ratio for 50 and 95% correct sentence recognition in noise and noise plus reverberation (RT 300 ms). Linear mixed models were used to examine the effect of listening conditions on speech recognition with RMs for both groups of children and the effects of aided audibility on performance across all listening conditions for children who were hard of hearing. RESULTS Children who were hard of hearing had poorer speech recognition for HAs alone than for HAs plus RM. Regardless of hearing status, children had poorer speech recognition in noise plus reverberation than in noise alone. Children who were hard of hearing had poorer speech recognition than peers with thresholds no greater than 15 dB HL when using HAs alone but comparable or better speech recognition with HAs plus RM. Children with better-aided audibility with the HAs showed better speech recognition with the HAs alone and with HAs plus RM. CONCLUSION Providing HAs that maximize speech audibility and coupling them with RM systems has the potential to improve communication access and outcomes for children who are hard of hearing in environments with noise and reverberation.
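The adaptive procedure mentioned above tracks the SNR that yields a criterion percentage of correct sentence recognition. As a rough sketch only, since the study's exact tracking rule, step sizes, and scoring are not given here, a simple one-down/one-up staircase converging on roughly 50% correct could look like this:

    def track_srt(score_trial, start_snr=10.0, step_db=2.0, n_trials=30, n_average=10):
        # score_trial(snr_db) -> True if the sentence was repeated correctly at that SNR.
        # One-down/one-up: lower the SNR after a correct trial, raise it after an error,
        # then estimate the SRT as the mean SNR over the last n_average trials.
        snr, history = start_snr, []
        for _ in range(n_trials):
            history.append(snr)
            snr = snr - step_db if score_trial(snr) else snr + step_db
        return sum(history[-n_average:]) / n_average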
Affiliation(s)
- Dawna Lewis: Audibility, Perception, and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE
- Meredith Spratford: Audibility, Perception, and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE
- Ryan W. McCreery: Audibility, Perception, and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE
|
35
|
Busch T, Brinchmann EI, Braeken J, Wie OB. Receptive Vocabulary of Children With Bilateral Cochlear Implants From 3 to 16 Years of Age. Ear Hear 2022; 43:1866-1880. [PMID: 35426854 PMCID: PMC9592181 DOI: 10.1097/aud.0000000000001220] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 02/04/2023]
Abstract
OBJECTIVES The vocabulary of children with cochlear implants is often smaller than that of their peers with typical hearing, but there is uncertainty regarding the extent of the differences and potential risks and protective factors. Some studies indicate that their receptive vocabulary develops well at first, but that they fail to keep up with their typical hearing peers, causing many CI users to enter school with a receptive vocabulary that is not age-appropriate. To better understand the receptive vocabulary abilities of children with cochlear implants this study explored age-related differences to matched children with typical hearing and associations between vocabulary skills and child-level characteristics. DESIGN A retrospective cross-sectional study with matched controls was conducted at the Norwegian national cochlear implant center at Oslo University Hospital. Eighty-eight children (mean age 8.7 years; range 3.2 to 15.9; 43 girls, 45 boys) who had received bilateral cochlear implants before 3 years of age were compared with two groups of children with typical hearing. One group was matched for maternal education, sex, and chronological age, the other group was matched for maternal education, sex, and hearing age. Receptive vocabulary performance was measured with the British Picture Vocabulary Scale. RESULTS Cochlear implant users' receptive vocabulary was poorer than that of age-matched children with typical hearing ( M = 84.6 standard points, SD = 21.1; children with typical hearing: M = 102.1 standard points, SD = 15.8; mean difference -17.5 standard points, 95% CI [-23.0 to -12.0], p < 0.001; Hedges's g = -0.94, 95% CI [-1.24 to -0.62]), and children with cochlear implants were significantly more likely to perform below the normative range (risk ratio = 2.2, 95% CI [1.42 to 3.83]). However, there was a significant nonlinear U-shaped effect of age on the scores of cochlear implant users, with the difference to the matched typical hearing children being largest (23.9 standard points, on average) around 8.7 years of age and smaller toward the beginning and end of the age range. There was no significant difference compared with children with typical hearing when differences in auditory experience were accounted for. Variability was not significantly different between the groups. Further analysis with a random forest revealed that, in addition to chronological age and hearing age, simultaneous versus sequential implantation, communication mode at school, and social integration were predictors of cochlear implant users' receptive vocabulary. CONCLUSIONS On average, the receptive vocabulary of children with cochlear implants was smaller than that of their typical hearing peers. The magnitude of the difference was changing with age and was the largest for children in early primary school. The nonlinear effect of age might explain some of the ambiguity in previous research findings and could indicate that better intervention is required around school entry. The results emphasize that continuous monitoring and support are crucial to avoid far-reaching negative effects on the children's development and well-being.
Affiliation(s)
- Tobias Busch: Department of Special Needs Education, University of Oslo, Oslo, Norway
- Johan Braeken: Centre for Educational Measurement, University of Oslo, Oslo, Norway
- Ona Bø Wie: Department of Special Needs Education, University of Oslo, Oslo, Norway; Department of Otolaryngology, Oslo University Hospital, Oslo, Norway
|
36
|
Brennan MA, McCreery RW, Massey J. Influence of Audibility and Distortion on Recognition of Reverberant Speech for Children and Adults with Hearing Aid Amplification. J Am Acad Audiol 2022; 33:170-180. [PMID: 34695870 PMCID: PMC9112843 DOI: 10.1055/a-1678-3381] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
BACKGROUND Adults and children with sensorineural hearing loss (SNHL) have trouble understanding speech in rooms with reverberation when using hearing aid amplification. While the use of amplitude compression signal processing in hearing aids may contribute to this difficulty, there is conflicting evidence on the effects of amplitude compression settings on speech recognition. Less clear is the effect of a fast release time for adults and children with SNHL when using compression ratios derived from a prescriptive procedure. PURPOSE The aim of the study is to determine whether release time impacts speech recognition in reverberation for children and adults with SNHL and to determine if these effects of release time and reverberation can be predicted using indices of audibility or temporal and spectral distortion. RESEARCH DESIGN This is a quasi-experimental cohort study. Participants used a hearing aid simulator set to the Desired Sensation Level algorithm m[i/o] for three different amplitude compression release times. Reverberation was simulated using three different reverberation times. PARTICIPANTS Participants were 20 children and 16 adults with SNHL. DATA COLLECTION AND ANALYSES Participants were seated in a sound-attenuating booth and nonsense syllable recognition was measured. Predictions of speech recognition were made using indices of audibility, temporal distortion, and spectral distortion, and the effects of release time and reverberation were analyzed using linear mixed models. RESULTS While nonsense syllable recognition decreased in reverberation, release time did not significantly affect it. Participants with lower audibility were more susceptible to the negative effect of reverberation on nonsense syllable recognition. CONCLUSION We have extended previous work on the effects of reverberation on aided speech recognition to children with SNHL. Variations in release time did not impact the understanding of speech. An index of audibility best predicted nonsense syllable recognition in reverberation and, clinically, these results suggest that patients with less audibility are more susceptible to the negative effects of reverberation on nonsense syllable recognition.
|
37
|
Abstract
OBJECTIVES The purpose of the present study was to determine whether age and hearing ability influence selective attention during childhood. Specifically, we hypothesized that immaturity and disrupted auditory experience impede selective attention during childhood. DESIGN Seventy-seven school-age children (5 to 12 years of age) participated in this study: 61 children with normal hearing and 16 children with bilateral hearing loss who use hearing aids and/or cochlear implants. Children performed selective attention-based behavioral change detection tasks comprised of target and distractor streams in the auditory and visual modalities. In the auditory modality, children were presented with two streams of single-syllable words spoken by a male and female talker. In the visual modality, children were presented with two streams of grayscale images. In each task, children were instructed to selectively attend to the target stream, inhibit attention to the distractor stream, and press a key as quickly as possible when they detected a frequency (auditory modality) or color (visual modality) deviant stimulus in the target, but not distractor, stream. Performance on the auditory and visual change detection tasks was quantified by response sensitivity, which reflects children's ability to selectively attend to deviants in the target stream and inhibit attention to those in the distractor stream. Children also completed a standardized measure of attention and inhibitory control. RESULTS Younger children and children with hearing loss demonstrated lower response sensitivity, and therefore poorer selective attention, than older children and children with normal hearing, respectively. The effect of hearing ability on selective attention was observed across the auditory and visual modalities, although the extent of this group difference was greater in the auditory modality than the visual modality due to differences in children's response patterns. Additionally, children's performance on a standardized measure of attention and inhibitory control related to their performance during the auditory and visual change detection tasks. CONCLUSIONS Overall, the findings from the present study suggest that age and hearing ability influence children's ability to selectively attend to a target stream in both the auditory and visual modalities. The observed differences in response patterns across modalities, however, reveal a complex interplay between hearing ability, task modality, and selective attention during childhood. While the effect of age on selective attention is expected to reflect the immaturity of cognitive and linguistic processes, the effect of hearing ability may reflect altered development of selective attention due to disrupted auditory experience early in life and/or a differential allocation of attentional resources to meet task demands.
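Response sensitivity in change-detection tasks of this kind is commonly computed as d', the z-transformed hit rate minus the z-transformed false-alarm rate. A short sketch follows; the correction for extreme proportions is one common convention and may not match the authors' exact choice, and the example counts are invented.

    from scipy.stats import norm

    def response_sensitivity(hits, misses, false_alarms, correct_rejections):
        # d-prime = z(hit rate) - z(false-alarm rate), with a log-linear correction
        # so that rates of exactly 0 or 1 stay finite.
        hit_rate = (hits + 0.5) / (hits + misses + 1.0)
        fa_rate = (false_alarms + 0.5) / (false_alarms + correct_rejections + 1.0)
        return norm.ppf(hit_rate) - norm.ppf(fa_rate)

    # Invented counts for illustration: higher values indicate better selective attention.
    print(response_sensitivity(18, 2, 3, 17))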
|
38
|
Social communication and quality of life in children using hearing aids. Int J Pediatr Otorhinolaryngol 2022; 152:111000. [PMID: 34883326 DOI: 10.1016/j.ijporl.2021.111000] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 09/08/2021] [Revised: 11/10/2021] [Accepted: 12/02/2021] [Indexed: 11/21/2022]
Abstract
OBJECTIVES This study compared the parent-reported structural language and social communication skills-measured with the Children's Communication Checklist-2 (CCC-2)-and health-related quality of life (HR-QOL)-measured with the Pediatric Quality of Life Inventory (PedsQL)-of children who use hearing aids (HAs) and their typical-hearing (TH) peers. DESIGN The participants were 88 children (age range of 5; 6 to 13; 1 (years; months)) and their parents: 45 children with bilateral moderate to severe hearing loss using HAs who had no additional disabilities and 43 children with typical hearing. The groups were matched based on chronological age, gender, nonverbal IQ, and parental education level. The parents completed questionnaires related to their children's communication skills, including subdomains structural language and social communication, and HR-QOL. RESULTS The HA group had significantly poorer overall communication skills than the TH group (r = 0.49). The children in the HA group scored significantly lower than the TH group on both structural language (r = 0.37) and social communication (r = 0.41). Half of the children in the HA group had overall communication scores that either indicated concern or required further investigation according to the instrument's manual. In terms of psychosocial functioning, which was measured as HR-QOL, the subdomain school functioning was the main driver of the difference between groups, with the HA group being at least twice as likely (OR = 2.52) as the TH group to have poor HR-QOL in the school domain. Better parent-reported social communication was associated with better parent-reported psychosocial functioning in the children using HAs-even when background variables were taken into account. CONCLUSION The results suggest that traditional assessments and interventions targeting structural aspects of language may overlook social communication difficulties in children with HAs, even those with no additional disabilities. As school functioning stood out as the most problematic domain for children with HAs, efforts to improve the well-being of these children should focus on this area.
|
39
|
Gustafson SJ, Camarata S, Hornsby BWY, Bess FH. Perceived Listening Difficulty in the Classroom, Not Measured Noise Levels, Is Associated With Fatigue in Children With and Without Hearing Loss. Am J Audiol 2021; 30:956-967. [PMID: 34464548 PMCID: PMC9126126 DOI: 10.1044/2021_aja-21-00065] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
PURPOSE The purpose of this study was to examine whether classroom noise levels and perceived listening difficulty were related to fatigue reported by children with and without hearing loss. METHOD Measures of classroom noise and reports of classroom listening difficulty were obtained from 79 children (ages 6-12 years) at two time points on two different school days. Forty-four children had mild to moderately severe hearing loss in at least one ear. Multiple regression analyses were conducted to evaluate whether measured noise levels, perceived listening difficulty, hearing status, language abilities, or grade level would predict self-reported fatigue ratings measured using the Pediatric Quality of Life Inventory Multidimensional Fatigue Scale. RESULTS Higher perceived listening difficulty was the only predictor variable associated with greater self-reported fatigue. CONCLUSIONS Measured classroom noise levels showed no systematic relationship with fatigue ratings, suggesting that actual classroom noise levels do not contribute to increased reports of subjective fatigue. Instead, perceived challenges with listening appear to be an important factor for consideration in future work examining listening-related fatigue in children with and without hearing loss.
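A minimal sketch, assuming pandas and statsmodels, of the kind of multiple regression described above (fatigue ratings regressed on measured noise level, perceived listening difficulty, and hearing status); the data frame, column names, and values are hypothetical and are not the authors' data or code.
    # Hypothetical multiple regression predicting fatigue ratings from measured
    # noise levels, perceived listening difficulty, and hearing status.
    import pandas as pd
    import statsmodels.formula.api as smf

    df = pd.DataFrame({
        "fatigue": [55, 62, 48, 70, 66, 59, 73, 51],          # invented ratings
        "noise_dba": [58, 61, 55, 64, 60, 57, 66, 56],        # invented levels (dBA)
        "listening_difficulty": [2, 3, 1, 4, 4, 2, 5, 1],     # invented self-reports
        "hearing_loss": [0, 1, 0, 1, 1, 0, 1, 0],             # 0 = no hearing loss, 1 = hearing loss
    })
    model = smf.ols("fatigue ~ noise_dba + listening_difficulty + hearing_loss", data=df).fit()
    print(model.params)  # fitted regression coefficients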
Collapse
Affiliation(s)
- Samantha J. Gustafson
- Department of Communication Sciences and Disorders, The University of Utah, Salt Lake City
| | - Stephen Camarata
- Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, TN
| | - Benjamin W. Y. Hornsby
- Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, TN
| | - Fred H. Bess
- Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, TN
| |
Collapse
|
40
|
Peng ZE, Pausch F, Fels J. Spatial release from masking in reverberation for school-age children. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 150:3263. [PMID: 34852617 PMCID: PMC8730369 DOI: 10.1121/10.0006752] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/15/2021] [Revised: 09/29/2021] [Accepted: 09/29/2021] [Indexed: 05/06/2023]
Abstract
Understanding speech in noisy environments, such as classrooms, is a challenge for children. When a spatial separation is introduced between the target and masker, as compared to when both are co-located, children demonstrate intelligibility improvement of the target speech. Such intelligibility improvement is known as spatial release from masking (SRM). In most reverberant environments, binaural cues associated with the spatial separation are distorted; the extent to which such distortion affects children's SRM is unknown. Two virtual acoustic environments with reverberation times between 0.4 s and 1.1 s were compared. SRM was measured using a spatial separation with symmetrically displaced maskers to maximize access to binaural cues. The role of informational masking in modulating SRM was investigated through voice similarity between the target and masker. Results showed that, contrary to previous developmental findings on free-field SRM, children's SRM in reverberation has not yet reached maturity in the 7-12 years age range. When reverberation was reduced, an SRM improvement was seen in adults but not in children. Our findings suggest that, even though school-age children have access to binaural cues that are distorted in reverberation, they demonstrate immature use of such cues for speech-in-noise perception, even in mild reverberation.
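As a brief illustration of how SRM is typically quantified (the improvement in speech reception threshold when target and maskers are spatially separated relative to co-located), here is a minimal Python sketch; the threshold values are invented, not data from this study.
    # Spatial release from masking: co-located SRT minus spatially separated SRT,
    # in dB. Positive values indicate a benefit from spatial separation.
    def spatial_release_from_masking(srt_colocated_db, srt_separated_db):
        return srt_colocated_db - srt_separated_db

    print(spatial_release_from_masking(srt_colocated_db=-2.0, srt_separated_db=-8.5))  # 6.5 dB SRM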
Collapse
Affiliation(s)
- Z Ellen Peng
- Institute for Hearing Technology and Acoustics, RWTH Aachen University, Kopernikusstrasse 5, 52074 Aachen, Germany
| | - Florian Pausch
- Institute for Hearing Technology and Acoustics, RWTH Aachen University, Kopernikusstrasse 5, 52074 Aachen, Germany
| | - Janina Fels
- Institute for Hearing Technology and Acoustics, RWTH Aachen University, Kopernikusstrasse 5, 52074 Aachen, Germany
| |
Collapse
|
41
|
Brennan MA, Browning JM, Spratford M, Kirby BJ, McCreery RW. Influence of aided audibility on speech recognition performance with frequency composition for children and adults. Int J Audiol 2021; 60:849-857. [PMID: 33719807 PMCID: PMC8440664 DOI: 10.1080/14992027.2021.1893839] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/12/2019] [Revised: 01/07/2021] [Accepted: 02/11/2021] [Indexed: 01/12/2023]
Abstract
OBJECTIVE The primary purpose of this project was to evaluate the influence of speech audibility on speech recognition with frequency composition, a frequency-lowering algorithm used in hearing aids. DESIGN Participants were tested to determine word and sentence recognition thresholds in background noise, with and without frequency composition. The audibility of speech was quantified using the speech intelligibility index (SII). STUDY SAMPLE Participants included 17 children (ages 6 to 16) and 21 adults (ages 19 to 72) with bilateral mild-to-severe sensorineural hearing loss. RESULTS Word and sentence recognition thresholds did not change significantly with frequency composition. Participants with better aided speech audibility had better speech recognition in noise, regardless of processing condition, than those with poorer aided audibility. For the child participants, changes in the word recognition threshold between processing conditions were predictable from aided speech audibility. However, this relationship depended strongly on one participant with a low SII; otherwise, changes in speech recognition between frequency composition off and on were not predictable from aided speech audibility. CONCLUSION While these results suggest that children who have a low aided SII may benefit from frequency composition, further data are needed to generalise these findings to a greater number of participants and variety of stimuli.
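The speech intelligibility index (SII) used above to quantify aided audibility is, in essence, a band-importance-weighted sum of audibility; the Python sketch below illustrates that idea with invented band values rather than the ANSI S3.5 band tables or the study's measurements.
    # SII as importance-weighted audibility: each band's audibility (0-1) is
    # weighted by its relative importance (weights sum to 1) and summed.
    def sii(band_audibility, band_importance):
        assert abs(sum(band_importance) - 1.0) < 1e-6, "importance weights should sum to 1"
        return sum(a * w for a, w in zip(band_audibility, band_importance))

    audibility = [0.9, 0.8, 0.6, 0.4, 0.2]       # proportion of speech audible per band (invented)
    importance = [0.15, 0.25, 0.30, 0.20, 0.10]  # relative band importance (invented)
    print(sii(audibility, importance))            # 0.615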
Collapse
Affiliation(s)
- Marc A. Brennan
- Department of Special Education and Communication Disorders, University of Nebraska-Lincoln
| | | | | | - Benjamin J. Kirby
- Department of Audiology & Speech-Language Pathology, University of North Texas
| | | |
Collapse
|
42
|
McSweeny C, Cushing SL, Campos JL, Papsin BC, Gordon KA. Functional Consequences of Poor Binaural Hearing in Development: Evidence From Children With Unilateral Hearing Loss and Children Receiving Bilateral Cochlear Implants. Trends Hear 2021; 25:23312165211051215. [PMID: 34661482 PMCID: PMC8527588 DOI: 10.1177/23312165211051215] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Poor binaural hearing in children was hypothesized to contribute to related cognitive and academic deficits. Children with unilateral hearing have normal hearing in one ear but no access to binaural cues. Their cognitive and academic deficits could be distinct from those of children receiving bilateral cochlear implants (CIs) at young ages, who have poor access to spectral cues and impaired binaural sensitivity. Both groups are at risk for vestibular/balance deficits which could further contribute to memory and learning challenges. Eighty-eight children (43 male, 45 female; aged 9.89 ± 3.40 years), grouped by unilateral hearing loss (n = 20), bilateral CI (n = 32), and typically developing (n = 36), completed a battery of sensory, cognitive, and academic tests. Analyses revealed that children in both hearing loss groups had significantly poorer skills (accounting for age) on most tests than their normal hearing peers. Children with unilateral hearing loss had more asymmetric speech perception than children with bilateral CIs (p < .0001), but balance and language deficits (p = .0004 and p < .0001, respectively) were similar in the two hearing loss groups (p > .05). Visuospatial memory deficits occurred in both hearing loss groups (p = .02) but more consistently across tests in children with unilateral hearing loss. Verbal memory was not significantly different from normal (p > .05). Principal component analyses revealed deficits in a main cluster of visuospatial memory, oral language, mathematics, and reading measures (explaining 46.8% of data variability). The remaining components revealed clusters of self-reported hearing, balance and vestibular function, and speech perception deficits. The findings indicate significant developmental impacts of poor binaural hearing in children.
Collapse
Affiliation(s)
- Claire McSweeny
- Archie's Cochlear Implant Lab, Hospital for Sick Children, Toronto, Ontario, Canada
| | - Sharon L Cushing
- Archie's Cochlear Implant Lab, Hospital for Sick Children, Toronto, Ontario, Canada; Department of Otolaryngology, Head & Neck Surgery, Faculty of Medicine, University of Toronto, Ontario, Canada; Department of Otolaryngology, Head & Neck Surgery, Hospital for Sick Children, Toronto, Ontario, Canada
| | - Jennifer L Campos
- KITE-Toronto Rehabilitation Institute, Toronto, Ontario, Canada; Department of Psychology, University of Toronto, Toronto, Ontario, Canada
| | - Blake C Papsin
- Archie's Cochlear Implant Lab, Hospital for Sick Children, Toronto, Ontario, Canada; Department of Otolaryngology, Head & Neck Surgery, Faculty of Medicine, University of Toronto, Ontario, Canada; Department of Otolaryngology, Head & Neck Surgery, Hospital for Sick Children, Toronto, Ontario, Canada
| | - Karen A Gordon
- Archie's Cochlear Implant Lab, Hospital for Sick Children, Toronto, Ontario, Canada; Department of Otolaryngology, Head & Neck Surgery, Faculty of Medicine, University of Toronto, Ontario, Canada
| |
Collapse
|
43
|
Rosa BC, Souza COE, Paccola ECM, Bucuvic ÉC, Jacob RTDS. Phrases in Noise Test (PINT) Brazil: influence of the inter-stimulus interval on the performance of children with hearing impairment. Codas 2021; 33:e20200054. [PMID: 34431856 DOI: 10.1590/2317-1782/20202020054] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/17/2020] [Accepted: 11/13/2020] [Indexed: 11/21/2022] Open
Abstract
PURPOSE This study aimed to investigate, using the PINT Brasil, the influence of the interstimulus interval on the performance of children with moderate and severe hearing loss fitted with hearing aids. METHODS Ten children with normal hearing (CG) and 20 children with hearing loss (SG) participated in the study. Both groups were assessed using the speech perception test PINT Brasil in the PAUSE and NO PAUSE situations. RESULTS When comparing the PAUSE and NO PAUSE situations, only the SG presented a statistically significant difference, indicating that performance was better in the NO PAUSE situation. In this situation, the noise oscillations were smaller, and the noise reduction algorithm, which may cause the loss of message information, was not repeatedly activated. CONCLUSION The interstimulus interval in the PINT Brasil influenced the performance of children with moderate and severe hearing loss fitted with hearing aids, with the NO PAUSE situation yielding the better results.
Collapse
Affiliation(s)
- Bruna Camilo Rosa
- Divisão de Saúde Auditiva, Hospital de Reabilitação de Anomalias Craniofaciais - HRAC, Universidade de São Paulo - USP - Bauru (SP), Brasil
| | - Camila Oliveira E Souza
- Departamento de Fonoaudiologia, Faculdade de Odontologia de Bauru - FOB, Universidade de São Paulo - USP - Bauru (SP), Brasil
| | - Elaine Cristina Moreto Paccola
- Divisão de Saúde Auditiva, Hospital de Reabilitação de Anomalias Craniofaciais - HRAC, Universidade de São Paulo - USP - Bauru (SP), Brasil
| | - Érika Cristina Bucuvic
- Divisão de Saúde Auditiva, Hospital de Reabilitação de Anomalias Craniofaciais - HRAC, Universidade de São Paulo - USP - Bauru (SP), Brasil
| | - Regina Tangerino de Souza Jacob
- Departamento de Fonoaudiologia, Faculdade de Odontologia de Bauru - FOB, Universidade de São Paulo - USP - Bauru (SP), Brasil
| |
Collapse
|
44
|
Flaherty MM, Browning J, Buss E, Leibold LJ. Effects of Hearing Loss on School-Aged Children's Ability to Benefit From F0 Differences Between Target and Masker Speech. Ear Hear 2021; 42:1084-1096. [PMID: 33538428 PMCID: PMC8222052 DOI: 10.1097/aud.0000000000000979] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES The objectives of the study were to (1) evaluate the impact of hearing loss on children's ability to benefit from F0 differences between target/masker speech in the context of aided speech-in-speech recognition and (2) to determine whether compromised F0 discrimination associated with hearing loss predicts F0 benefit in individual children. We hypothesized that children wearing appropriately fitted amplification would benefit from F0 differences, but they would not show the same magnitude of benefit as children with normal hearing. Reduced audibility and poor suprathreshold encoding that degrades frequency discrimination were expected to impair children's ability to segregate talkers based on F0. DESIGN Listeners were 9 to 17 year olds with bilateral, symmetrical, sensorineural hearing loss ranging in degree from mild to severe. A four-alternative, forced-choice procedure was used to estimate thresholds for disyllabic word recognition in a 60-dB-SPL two-talker masker. The same male talker produced target and masker speech. Target words had either the same mean F0 as the masker or were digitally shifted higher than the masker by three, six, or nine semitones. The F0 benefit was defined as the difference in thresholds between the shifted-F0 conditions and the unshifted-F0 condition. Thresholds for discriminating F0 were also measured, using a three-alternative, three-interval forced choice procedure, to determine whether compromised sensitivity to F0 differences due to hearing loss would predict children's ability to benefit from F0. Testing was performed in the sound field, and all children wore their personal hearing aids at user settings. RESULTS Children with hearing loss benefited from an F0 difference of nine semitones between target words and masker speech, with older children generally benefitting more than younger children. Some children benefitted from an F0 difference of six semitones, but this was not consistent across listeners. Thresholds for discriminating F0 improved with increasing age and predicted F0 benefit in the nine-semitone condition. An exploratory analysis indicated that F0 benefit was not significantly correlated with the four-frequency pure-tone average (0.5, 1, 2, and 4 kHz), aided audibility, or consistency of daily hearing aid use, although there was a trend for an association with the low-frequency pure-tone average (0.25 and 0.5 kHz). Comparisons of the present data to our previous study of children with normal hearing demonstrated that children with hearing loss benefitted less than children with normal hearing for the F0 differences tested. CONCLUSIONS The results demonstrate that children with mild-to-severe hearing loss who wear hearing aids benefit from relatively large F0 differences between target and masker speech during aided speech-in-speech recognition. The size of the benefit increases with increasing age, consistent with previously reported age effects for children with normal hearing. However, hearing loss reduces children's ability to capitalize on F0 differences between talkers. Audibility alone does not appear to be responsible for this effect; aided audibility and degree of loss were not primary predictors of performance. The ability to benefit from F0 differences may be limited by immature central processing or aspects of peripheral encoding that are not characterized in standard clinical assessments.
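As an aside on the semitone shifts used above: shifting a mean F0 upward by n semitones multiplies it by 2^(n/12). The short Python sketch below shows the resulting F0 values for an assumed 100-Hz baseline, which is illustrative and not the study's actual talker F0.
    # F0 shifted up by n semitones; a 100-Hz baseline is assumed for illustration.
    def shift_f0(f0_hz, semitones):
        return f0_hz * 2 ** (semitones / 12)

    for n in (3, 6, 9):
        print(n, round(shift_f0(100.0, n), 1))  # 3 -> 118.9 Hz, 6 -> 141.4 Hz, 9 -> 168.2 Hz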
Collapse
Affiliation(s)
- Mary M. Flaherty
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, Champaign, Illinois, USA
| | | | - Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, School of Medicine, University of North Carolina, Chapel Hill, North Carolina, USA
| | - Lori J. Leibold
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, USA
| |
Collapse
|
45
|
Wang LM, Brill LC. Speech and noise levels measured in occupied K-12 classrooms. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 150:864. [PMID: 34470284 DOI: 10.1121/10.0005815] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/12/2021] [Accepted: 07/13/2021] [Indexed: 06/13/2023]
Abstract
This project acquired sound levels logged across six school days and impulse responses in 220 classrooms across four K-12 grades. Seventy-four percent met reverberation time recommendations. Sound levels were processed to estimate occupied signal-to-noise ratios (SNRs) using two approaches: Gaussian mixture modeling, and daily equivalent and statistical levels. A third method, k-means clustering, estimated SNR more precisely, separating the data on nine dimensions into one group with high levels across speech frequencies and one without. The SNRs, calculated as the daily difference between the average levels for the speech and non-speech clusters, were lower than 15 dB in 27.3% of the classrooms and differed from the estimates obtained with the other two methods. The k-means data additionally indicate that speech occurred 30.5%-81.2% of the day, with statistically larger percentages found in grade 3 compared to higher grades. Speech levels exceeded 65 dBA 35% of the day, and non-speech levels exceeded 50 dBA 32% of the day, on average, with grades 3 and 8 experiencing speech levels exceeding 65 dBA statistically more often than the other two grades. Finally, classroom speech and non-speech levels were significantly correlated, with a 0.29 dBA increase in speech levels for every 1 dBA increase in non-speech levels.
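A minimal sketch, assuming NumPy and scikit-learn, of the clustering idea described above: split logged levels into a speech-like and a non-speech cluster with k-means (k = 2) and take the difference of the cluster means as an SNR estimate. The two-dimensional toy features and simulated levels below stand in for the paper's nine dimensions and measured data.
    import numpy as np
    from sklearn.cluster import KMeans

    rng = np.random.default_rng(0)
    speech = rng.normal(loc=[65, 60], scale=3, size=(200, 2))       # simulated speech-dominated levels (dBA)
    non_speech = rng.normal(loc=[48, 45], scale=3, size=(200, 2))   # simulated non-speech levels (dBA)
    levels = np.vstack([speech, non_speech])

    labels = KMeans(n_clusters=2, n_init=10, random_state=0).fit_predict(levels)
    cluster_means = [levels[labels == k].mean() for k in (0, 1)]
    snr_estimate = max(cluster_means) - min(cluster_means)          # speech minus non-speech, in dB
    print(round(snr_estimate, 1))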
Collapse
Affiliation(s)
- Lily M Wang
- Durham School of Architectural Engineering and Construction, University of Nebraska-Lincoln, Omaha, Nebraska 68182, USA
| | - Laura C Brill
- Durham School of Architectural Engineering and Construction, University of Nebraska-Lincoln, Omaha, Nebraska 68182, USA
| |
Collapse
|
46
|
Perception of Child-Directed Versus Adult-Directed Emotional Speech in Pediatric Cochlear Implant Users. Ear Hear 2021; 41:1372-1382. [PMID: 32149924 DOI: 10.1097/aud.0000000000000862] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES Cochlear implants (CIs) are remarkable in allowing individuals with severe to profound hearing loss to perceive speech. Despite these gains in speech understanding, however, CI users often struggle to perceive elements such as vocal emotion and prosody, as CIs are unable to transmit the spectro-temporal detail needed to decode affective cues. This issue becomes particularly important for children with CIs, but little is known about their emotional development. In a previous study, pediatric CI users showed deficits in voice emotion recognition with child-directed stimuli featuring exaggerated prosody. However, the large intersubject variability and differential developmental trajectory known in this population incited us to question the extent to which exaggerated prosody would facilitate performance in this task. Thus, the authors revisited the question with both adult-directed and child-directed stimuli. DESIGN Vocal emotion recognition was measured using both child-directed (CDS) and adult-directed (ADS) speech conditions. Pediatric CI users aged 7 to 19 years, with no cognitive or visual impairments, who communicated orally with English as the primary language participated in the experiment (n = 27). Stimuli comprised 12 sentences selected from the HINT database. The sentences were spoken by male and female talkers in a CDS or ADS manner, in each of the five target emotions (happy, sad, neutral, scared, and angry). The chosen sentences were semantically emotion-neutral. Percent correct emotion recognition scores were analyzed for each participant in each condition (CDS vs. ADS). Children also completed cognitive tests of nonverbal IQ and receptive vocabulary, while parents completed questionnaires on CI and hearing history. It was predicted that the reduced prosodic variations found in the ADS condition would result in lower vocal emotion recognition scores compared with the CDS condition. Moreover, it was hypothesized that cognitive factors, perceptual sensitivity to complex pitch changes, and elements of each child's hearing history may serve as predictors of performance on vocal emotion recognition. RESULTS Consistent with our hypothesis, pediatric CI users scored higher on CDS than on ADS speech stimuli, suggesting that speaking with an exaggerated prosody, akin to "motherese," may be a viable way to convey emotional content. Significant talker effects were also observed, with higher scores found for the female talker in both conditions. Multiple regression analysis showed that nonverbal IQ was a significant predictor of CDS emotion recognition scores, while years of CI use was a significant predictor of ADS scores. Confusion matrix analyses revealed a dependence of results on specific emotions; for the CDS condition's female talker, participants had high sensitivity (d' scores) to happy sentences and low sensitivity to neutral sentences, while for the ADS condition, low sensitivity was found for the scared sentences. CONCLUSIONS In general, participants had higher vocal emotion recognition in the CDS condition, which also had more variability in pitch and intensity and thus more exaggerated prosody, than in the ADS condition. Results suggest that pediatric CI users struggle with vocal emotion perception in general, particularly with adult-directed speech. The authors believe these results have broad implications for understanding how CI users perceive emotions, both from an auditory communication standpoint and from a socio-developmental perspective.
Collapse
|
47
|
Nagels L, Gaudrain E, Vickers D, Hendriks P, Başkent D. School-age children benefit from voice gender cue differences for the perception of speech in competing speech. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 149:3328. [PMID: 34241121 DOI: 10.1121/10.0004791] [Citation(s) in RCA: 13] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/24/2020] [Accepted: 04/08/2021] [Indexed: 06/13/2023]
Abstract
Differences in speakers' voice characteristics, such as mean fundamental frequency (F0) and vocal-tract length (VTL), that primarily define speakers' so-called perceived voice gender facilitate the perception of speech in competing speech. Perceiving speech in competing speech is particularly challenging for children, which may relate to their lower sensitivity, relative to adults, to differences in voice characteristics. This study investigated the development of the benefit from F0 and VTL differences in school-age children (4-12 years) for separating two competing speakers while tasked with comprehending one of them, as well as the relationship between this benefit and the children's corresponding voice discrimination thresholds. Children benefited from differences in F0, VTL, or both cues at all ages tested. This benefit remained proportionally the same across age, although overall accuracy continued to differ from that of adults. Additionally, children's benefit from F0 and VTL differences and their overall accuracy were not related to their discrimination thresholds. Hence, although children's voice discrimination thresholds and speech-in-competing-speech perception abilities develop throughout the school-age years, children already show a benefit from voice gender cue differences early on. Factors other than children's discrimination thresholds seem to relate more closely to their developing speech-in-competing-speech perception abilities.
Collapse
Affiliation(s)
- Leanne Nagels
- Center for Language and Cognition Groningen (CLCG), University of Groningen, Groningen 9712EK, Netherlands
| | - Etienne Gaudrain
- CNRS UMR 5292, Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics, Inserm UMRS 1028, Université Claude Bernard Lyon 1, Université de Lyon, Lyon, France
| | - Deborah Vickers
- Sound Lab, Cambridge Hearing Group, Clinical Neurosciences Department, University of Cambridge, Cambridge CB2 0SZ, United Kingdom
| | - Petra Hendriks
- Center for Language and Cognition Groningen (CLCG), University of Groningen, Groningen 9712EK, Netherlands
| | - Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen 9713GZ, Netherlands
| |
Collapse
|
48
|
Extended high-frequency hearing and head orientation cues benefit children during speech-in-speech recognition. Hear Res 2021; 406:108230. [PMID: 33951577 DOI: 10.1016/j.heares.2021.108230] [Citation(s) in RCA: 17] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Revised: 03/03/2021] [Accepted: 03/18/2021] [Indexed: 12/29/2022]
Abstract
While the audible frequency range for humans spans approximately 20 Hz to 20 kHz, children display enhanced sensitivity relative to adults when detecting extended high frequencies (frequencies above 8 kHz; EHFs), as indicated by better pure tone thresholds. The impact that this increased hearing sensitivity to EHFs may have on children's speech recognition has not been established. One context in which EHF hearing may be particularly important for children is when recognizing speech in the presence of competing talkers. In the present study, we examined the extent to which school-age children (ages 5-17 years) with normal hearing were able to benefit from EHF cues when recognizing sentences in a two-talker speech masker. Two filtering conditions were tested: all stimuli were either full band or were low-pass filtered at 8 kHz to remove EHFs. Given that EHF energy emission in speech is highly dependent on head orientation of the talker (i.e., radiation becomes more directional with increasing frequency), two masker head angle conditions were tested: both co-located maskers were facing 45°, or both were facing 60° relative to the listener. The results demonstrated that regardless of age, children performed better when EHFs were present. In addition, a small change in masker head orientation also impacted performance, with better recognition at 60° compared to 45°. These findings suggest that EHF energy in the speech signal above 8 kHz is beneficial for children in complex listening situations. The magnitude of benefit from EHF cues and talker head orientation cues did not differ between children and adults. Therefore, while EHFs were beneficial for children as young as 5 years of age, children's generally better EHF hearing relative to adults did not provide any additional benefit.
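A minimal sketch, assuming SciPy and a 44.1-kHz signal, of the kind of 8-kHz low-pass filtering used above to remove extended high frequencies; the filter order and design are illustrative choices, not the authors' exact processing.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt

    fs = 44100                                   # assumed sampling rate (Hz)
    t = np.arange(0, 1.0, 1 / fs)
    x = np.sin(2 * np.pi * 1000 * t) + 0.3 * np.sin(2 * np.pi * 12000 * t)  # toy signal with EHF energy

    sos = butter(8, 8000, btype="low", fs=fs, output="sos")  # 8th-order low-pass at 8 kHz
    x_lowpassed = sosfiltfilt(sos, x)                        # zero-phase filtering removes EHFs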
Collapse
|
49
|
Heinrichs-Graham E, Walker EA, Eastman JA, Frenzel MR, Joe TR, McCreery RW. The impact of mild-to-severe hearing loss on the neural dynamics serving verbal working memory processing in children. Neuroimage Clin 2021; 30:102647. [PMID: 33838545 PMCID: PMC8056458 DOI: 10.1016/j.nicl.2021.102647] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2020] [Revised: 03/23/2021] [Accepted: 03/24/2021] [Indexed: 11/18/2022]
Abstract
Children with hearing loss (CHL) exhibit delays in language function relative to children with normal hearing (CNH). However, evidence on whether these delays extend into other cognitive domains such as working memory is mixed, with some studies showing decrements in CHL and others showing CHL performing at the level of CNH. Despite the growing literature investigating the impact of hearing loss on cognitive and language development, studies of the neural dynamics that underlie these cognitive processes are notably absent. This study sought to identify the oscillatory neural responses serving verbal working memory processing in CHL compared to CNH. To this end, participants with and without hearing loss performed a verbal working memory task during magnetoencephalography. Neural oscillatory responses associated with working memory encoding and maintenance were imaged separately, and these responses were statistically evaluated between CHL and CNH. While CHL performed as well on the task as CNH, CHL exhibited significantly elevated alpha-beta activity in the right frontal and precentral cortices during encoding relative to CNH. In contrast, CHL showed elevated alpha maintenance-related activity in the right precentral and parieto-occipital cortices. Crucially, right superior frontal encoding activity and right parieto-occipital maintenance activity correlated with language ability across groups. These data suggest that CHL may utilize compensatory right-hemispheric activity to achieve verbal working memory function at the level of CNH. Neural behavior in these regions may impact language function during crucial developmental ages.
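As a generic illustration (not the study's MEG beamforming pipeline), the sketch below shows one simple way to quantify alpha-band (8-12 Hz) oscillatory power over time from a single time series, using band-pass filtering and a Hilbert envelope; the sampling rate and simulated signal are assumptions.
    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    fs = 1000.0                                   # assumed sampling rate (Hz)
    t = np.arange(0, 2.0, 1 / fs)
    x = np.sin(2 * np.pi * 10 * t) * np.hanning(t.size) + 0.1 * np.random.randn(t.size)  # toy 10-Hz burst

    sos = butter(4, [8, 12], btype="bandpass", fs=fs, output="sos")
    alpha = sosfiltfilt(sos, x)
    alpha_power = np.abs(hilbert(alpha)) ** 2     # instantaneous alpha-band power envelope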
Collapse
Affiliation(s)
- Elizabeth Heinrichs-Graham
- Institute for Human Neuroscience, Boys Town National Research Hospital (BTNRH), Omaha, NE, USA; Center for Magnetoencephalography (MEG), University of Nebraska Medical Center (UNMC), Omaha, NE, USA.
| | - Elizabeth A Walker
- Wendell Johnson Speech and Hearing Center, Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA
| | - Jacob A Eastman
- Institute for Human Neuroscience, Boys Town National Research Hospital (BTNRH), Omaha, NE, USA; Center for Magnetoencephalography (MEG), University of Nebraska Medical Center (UNMC), Omaha, NE, USA
| | - Michaela R Frenzel
- Institute for Human Neuroscience, Boys Town National Research Hospital (BTNRH), Omaha, NE, USA; Center for Magnetoencephalography (MEG), University of Nebraska Medical Center (UNMC), Omaha, NE, USA
| | - Timothy R Joe
- Center for Magnetoencephalography (MEG), University of Nebraska Medical Center (UNMC), Omaha, NE, USA
| | - Ryan W McCreery
- Audibility, Perception, and Cognition Laboratory, BTNRH, Omaha, NE, USA
| |
Collapse
|
50
|
Rönnberg J, Holmer E, Rudner M. Cognitive Hearing Science: Three Memory Systems, Two Approaches, and the Ease of Language Understanding Model. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:359-370. [PMID: 33439747 DOI: 10.1044/2020_jslhr-20-00007] [Citation(s) in RCA: 25] [Impact Index Per Article: 6.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Purpose The purpose of this study was to conceptualize the subtle balancing act between language input and prediction (cognitive priming of future input) to achieve understanding of communicated content. When understanding fails, reconstructive postdiction is initiated. Three memory systems play important roles: working memory (WM), episodic long-term memory (ELTM), and semantic long-term memory (SLTM). The axiom of the Ease of Language Understanding (ELU) model is that explicit WM resources are invoked by a mismatch between language input (in the form of rapid automatic multimodal binding of phonology) and multimodal phonological and lexical representations in SLTM. However, if there is a match between the rapid automatic multimodal binding of phonology output and SLTM/ELTM representations, language processing continues rapidly and implicitly. Method and Results In our first ELU approach, we focused on experimental manipulations of signal processing in hearing aids and background noise to cause a mismatch with LTM representations; both resulted in increased dependence on WM. Our second approach, the main one relevant for this review article, focuses on the relative effects of age-related hearing loss on the three memory systems. According to the ELU, WM is predicted to be frequently occupied with reconstruction of what was actually heard, resulting in a relative disuse of phonological/lexical representations in the ELTM and SLTM systems. The prediction and results do not depend on test modality per se but rather on the particular memory system. This will be further discussed. Conclusions Given the literature on ELTM decline as a precursor of dementia, and the fact that the risk for Alzheimer's disease increases substantially over time due to hearing loss, there is a possibility that lowered ELTM due to hearing loss and disuse may be part of the causal chain linking hearing loss and dementia. Future ELU research will focus on this possibility.
Collapse
Affiliation(s)
- Jerker Rönnberg
- Linnaeus Centre HEAD, Swedish Institute for Disability Research Department of Behavioural Sciences and Learning, Linköping University, Sweden
| | - Emil Holmer
- Linnaeus Centre HEAD, Swedish Institute for Disability Research Department of Behavioural Sciences and Learning, Linköping University, Sweden
| | - Mary Rudner
- Linnaeus Centre HEAD, Swedish Institute for Disability Research Department of Behavioural Sciences and Learning, Linköping University, Sweden
| |
Collapse
|