1
Nagels L, Gaudrain E, Vickers D, Hendriks P, Başkent D. Prelingually Deaf Children With Cochlear Implants Show Better Perception of Voice Cues and Speech in Competing Speech Than Postlingually Deaf Adults With Cochlear Implants. Ear Hear 2024;45:952-968. [PMID: 38616318; PMCID: PMC11175806; DOI: 10.1097/aud.0000000000001489]
Abstract
OBJECTIVES Postlingually deaf adults with cochlear implants (CIs) have difficulties with perceiving differences in speakers' voice characteristics and benefit little from voice differences for the perception of speech in competing speech. However, not much is known yet about the perception and use of voice characteristics in prelingually deaf implanted children with CIs. Unlike CI adults, most CI children became deaf during the acquisition of language. Extensive neuroplastic changes during childhood could make CI children better at using the available acoustic cues than CI adults, or the lack of exposure to a normal acoustic speech signal could make it more difficult for them to learn which acoustic cues they should attend to. This study aimed to examine to what degree CI children can perceive voice cues and benefit from voice differences for perceiving speech in competing speech, comparing their abilities to those of normal-hearing (NH) children and CI adults. DESIGN CI children's voice cue discrimination (experiment 1), voice gender categorization (experiment 2), and benefit from target-masker voice differences for perceiving speech in competing speech (experiment 3) were examined in three experiments. The main focus was on the perception of mean fundamental frequency (F0) and vocal-tract length (VTL), the primary acoustic cues related to speakers' anatomy and perceived voice characteristics, such as voice gender. RESULTS CI children's F0 and VTL discrimination thresholds indicated lower sensitivity to differences compared with their NH-age-equivalent peers, but their mean discrimination thresholds of 5.92 semitones (st) for F0 and 4.10 st for VTL indicated higher sensitivity than postlingually deaf CI adults with mean thresholds of 9.19 st for F0 and 7.19 st for VTL. Furthermore, CI children's perceptual weighting of F0 and VTL cues for voice gender categorization closely resembled that of their NH-age-equivalent peers, in contrast with CI adults. 
Finally, CI children had more difficulty perceiving speech in competing speech than their NH-age-equivalent peers, but they performed better than CI adults. Unlike CI adults, CI children showed a benefit from target-masker voice differences in F0 and VTL, similar to NH children. CONCLUSION Although CI children's F0 and VTL discrimination scores were overall lower than those of NH children, their weighting of F0 and VTL cues for voice gender categorization and their benefit from target-masker differences in F0 and VTL resembled those of NH children. Together, these results suggest that prelingually deaf children with CIs can effectively use spectrotemporally degraded F0 and VTL cues for voice and speech perception, generally outperforming postlingually deaf CI adults on comparable tasks. These findings indicate that F0 and VTL cues are present in the CI signal to a usable degree and suggest that other factors contribute to the perception challenges faced by CI adults.
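For orientation, the semitone thresholds above can be converted to plain frequency ratios: an n-semitone difference corresponds to a factor of 2^(n/12). A minimal sketch (the threshold values are taken from the abstract; the helper names are our own):

```python
import math

def semitones_to_ratio(st: float) -> float:
    """Convert a difference in semitones to a frequency (or VTL scaling) ratio."""
    return 2.0 ** (st / 12.0)

def ratio_to_semitones(ratio: float) -> float:
    """Inverse mapping: express a frequency ratio in semitones."""
    return 12.0 * math.log2(ratio)

# mean F0 discrimination thresholds reported above (in semitones)
child_f0, adult_f0 = 5.92, 9.19
print(round(semitones_to_ratio(child_f0), 2))  # 1.41 -> children need a ~41% F0 difference
print(round(semitones_to_ratio(adult_f0), 2))  # 1.7 -> adults need a ~70% difference
```

An octave is 12 semitones, so both groups' thresholds represent substantial fractions of an octave.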
Affiliation(s)
- Leanne Nagels
- Center for Language and Cognition Groningen (CLCG), University of Groningen, Groningen, The Netherlands
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- Etienne Gaudrain
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- CNRS UMR 5292, Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics, Inserm UMRS 1028, Université Claude Bernard Lyon 1, Université de Lyon, Lyon, France
- Deborah Vickers
- Cambridge Hearing Group, Sound Lab, Clinical Neurosciences Department, University of Cambridge, Cambridge, United Kingdom
- Petra Hendriks
- Center for Language and Cognition Groningen (CLCG), University of Groningen, Groningen, The Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- W.J. Kolff Institute for Biomedical Engineering and Materials Science, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
2
Easwar V, Peng ZE, Boothalingam S, Seeto M. Neural Envelope Processing at Low Frequencies Predicts Speech Understanding of Children With Hearing Loss in Noise and Reverberation. Ear Hear 2024;45:837-849. [PMID: 38768048; PMCID: PMC11175738; DOI: 10.1097/aud.0000000000001481]
Abstract
OBJECTIVE Children with hearing loss experience greater difficulty understanding speech in the presence of noise and reverberation relative to their normal hearing peers despite provision of appropriate amplification. The fidelity of fundamental frequency of voice (f0) encoding, a salient temporal cue for understanding speech in noise, could play a significant role in explaining the variance in abilities among children. However, the nature of deficits in f0 encoding and its relationship with speech understanding are poorly understood. To this end, we evaluated the influence of frequency-specific f0 encoding on the speech perception abilities of children with and without hearing loss in the presence of noise and/or reverberation. METHODS In 14 school-aged children with sensorineural hearing loss fitted with hearing aids and 29 normal hearing peers, envelope following responses (EFRs) were elicited by the vowel /i/, modified to estimate f0 encoding in low (<1.1 kHz) and higher frequencies simultaneously. EFRs to /i/ were elicited in quiet, in speech-shaped noise at a +5 dB signal-to-noise ratio, with a simulated reverberation time of 0.62 sec, and with both noise and reverberation combined. EFRs were recorded using single-channel electroencephalography between the vertex and the nape while children watched a silent movie with captions. Speech discrimination accuracy was measured using the University of Western Ontario Distinctive Features Differences test in each of the four acoustic conditions. Stimuli for EFR recordings and speech discrimination were presented monaurally. RESULTS Both groups of children demonstrated a frequency-dependent dichotomy in the disruption of f0 encoding, as reflected in EFR amplitude and phase coherence: noise caused greater disruption (i.e., lower EFR amplitudes and phase coherence) in EFRs elicited by low frequencies, whereas reverberation caused greater disruption in EFRs elicited by higher frequencies.
Relative to normal hearing peers, children with hearing loss demonstrated: (a) greater disruption of f0 encoding at low frequencies, particularly in the presence of reverberation, and (b) a positive relationship between f0 encoding at low frequencies and speech discrimination in the hardest listening condition (i.e., when both noise and reverberation were present). CONCLUSIONS Together, these results provide new evidence for the persistence of suprathreshold temporal processing deficits related to f0 encoding in children despite the provision of appropriate amplification to compensate for hearing loss. These objectively measurable deficits may underlie the greater difficulty experienced by children with hearing loss.
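The phase-coherence metric in these results is typically computed as the magnitude of the mean unit phasor across trials (inter-trial phase coherence); the exact formulation the authors used is not given here, so the following is a generic sketch:

```python
import cmath
import math

def phase_coherence(phases):
    """Inter-trial phase coherence: magnitude of the mean unit phasor.
    1.0 means identical phase on every trial; values near 0 mean random phase."""
    return abs(sum(cmath.exp(1j * p) for p in phases)) / len(phases)

# identical phase on every trial -> coherence of 1
consistent = [0.3] * 10
# phases spread evenly around the circle -> coherence near 0
spread = [2 * math.pi * k / 8 for k in range(8)]
print(round(phase_coherence(consistent), 3), round(phase_coherence(spread), 3))  # 1.0 0.0
```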
Affiliation(s)
- Vijayalakshmi Easwar
- Waisman Center, University of Wisconsin Madison, Madison, Wisconsin, USA
- Communication Sciences and Disorders, University of Wisconsin Madison, Madison, Wisconsin, USA
- Communication Sciences Department, National Acoustic Laboratories, Sydney, Australia
- Linguistics, Macquarie University, Sydney, Australia
- Z. Ellen Peng
- Waisman Center, University of Wisconsin Madison, Madison, Wisconsin, USA
- Boys Town National Research Hospital, Omaha, Nebraska, USA
- Sriram Boothalingam
- Waisman Center, University of Wisconsin Madison, Madison, Wisconsin, USA
- Communication Sciences and Disorders, University of Wisconsin Madison, Madison, Wisconsin, USA
- Communication Sciences Department, National Acoustic Laboratories, Sydney, Australia
- Linguistics, Macquarie University, Sydney, Australia
3
Carlie J, Sahlén B, Johansson R, Andersson K, Whitling S, Brännström KJ. The Effect of Background Noise, Bilingualism, Socioeconomic Status, and Cognitive Functioning on Primary School Children's Narrative Listening Comprehension. J Speech Lang Hear Res 2024;67:960-973. [PMID: 38363725; DOI: 10.1044/2023_jslhr-22-00637]
Abstract
PURPOSE This study focuses on 7- to 9-year-old children attending primary school in Swedish areas of low socioeconomic status, where most children's school language is their second language. The aim was to better understand what factors influence these children's narrative listening comprehension both in an ideal listening condition (quiet) and in a listening condition typical of primary school classrooms (multitalker babble noise). METHOD A total of 86 typically developing 7- to 9-year-olds performed a narrative listening comprehension test (Lyssna, Förstå och Minnas [LFM]; English translation: Listen, Comprehend, and Remember) in two listening conditions: quiet and multitalker babble noise. They also performed the crosslinguistic nonword repetition test and a digit span backwards (DSB) test. A predictive statistical model including these factors, the children's degree of school language exposure, parental education level, and age was derived. RESULTS Listening condition had the strongest predictive value for LFM performance, followed by school language exposure and nonword repetition accuracy. Parental education level was also a significant predictor. There was a significant three-way interaction effect between listening condition, age, and DSB performance. CONCLUSIONS Multitalker babble noise has a negative effect on children's narrative listening comprehension. The effect of multitalker babble noise could be explained by age differences in the ability to allocate working memory capacity during the narrative listening comprehension task, suggesting that younger children may be more vulnerable to missing information when listening in background noise than their older peers. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.25209248.
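A three-way interaction of this kind is commonly modeled by adding the product of (mean-centered) predictors as a regressor; whether these authors centered their predictors is not stated, so this is only an illustrative sketch with toy data:

```python
from statistics import mean

def interaction_term(cond, age, dsb):
    """Three-way interaction regressor: the elementwise product of
    mean-centered predictors (one common coding for moderation effects)."""
    c, a, d = mean(cond), mean(age), mean(dsb)
    return [(x - c) * (y - a) * (z - d) for x, y, z in zip(cond, age, dsb)]

# toy data: listening condition coded 0 = quiet, 1 = babble noise
cond = [0, 0, 1, 1]
age = [7, 9, 7, 9]      # years
dsb = [3, 5, 4, 6]      # digit span backwards score
print(interaction_term(cond, age, dsb))  # [-0.75, -0.25, 0.25, 0.75]
```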
Affiliation(s)
- Johanna Carlie
- Department of Logopedics, Phoniatrics and Audiology, Clinical Sciences Lund, Lund University, Sweden
- Birgitta Sahlén
- Department of Logopedics, Phoniatrics and Audiology, Clinical Sciences Lund, Lund University, Sweden
- Ketty Andersson
- Department of Logopedics, Phoniatrics and Audiology, Clinical Sciences Lund, Lund University, Sweden
- Susanna Whitling
- Department of Logopedics, Phoniatrics and Audiology, Clinical Sciences Lund, Lund University, Sweden
- Karl Jonas Brännström
- Department of Logopedics, Phoniatrics and Audiology, Clinical Sciences Lund, Lund University, Sweden
4
Lalonde K, Walker EA, Leibold LJ, McCreery RW. Predictors of Susceptibility to Noise and Speech Masking Among School-Age Children With Hearing Loss or Typical Hearing. Ear Hear 2024;45:81-93. [PMID: 37415268; PMCID: PMC10771540; DOI: 10.1097/aud.0000000000001403]
Abstract
OBJECTIVES The purpose of this study was to evaluate effects of masker type and hearing group on the relationship between school-age children's speech recognition and age, vocabulary, working memory, and selective attention. This study also explored effects of masker type and hearing group on the time course of maturation of masked speech recognition. DESIGN Participants included 31 children with normal hearing (CNH) and 41 children with mild to severe bilateral sensorineural hearing loss (CHL), between 6.7 and 13 years of age. Children with hearing aids used their personal hearing aids throughout testing. Audiometric thresholds and standardized measures of vocabulary, working memory, and selective attention were obtained from each child, along with masked sentence recognition thresholds in a steady state, speech-spectrum noise (SSN) and in a two-talker speech masker (TTS). Aided audibility through children's hearing aids was calculated based on the Speech Intelligibility Index (SII) for all children wearing hearing aids. Linear mixed effects models were used to examine the contribution of group, age, vocabulary, working memory, and attention to individual differences in speech recognition thresholds in each masker. Additional models were constructed to examine the role of aided audibility on masked speech recognition in CHL. Finally, to explore the time course of maturation of masked speech perception, linear mixed effects models were used to examine interactions between age, masker type, and hearing group as predictors of masked speech recognition. RESULTS Children's thresholds were higher in TTS than in SSN. There was no interaction of hearing group and masker type. CHL had higher thresholds than CNH in both maskers. In both hearing groups and masker types, children with better vocabularies had lower thresholds. An interaction of hearing group and attention was observed only in the TTS. Among CNH, attention predicted thresholds in TTS. 
Among CHL, vocabulary and aided audibility predicted thresholds in TTS. In both maskers, thresholds decreased as a function of age at a similar rate in CNH and CHL. CONCLUSIONS The factors contributing to individual differences in speech recognition differed as a function of masker type. In TTS, the factors contributing to individual difference in speech recognition further differed as a function of hearing group. Whereas attention predicted variance for CNH in TTS, vocabulary and aided audibility predicted variance in CHL. CHL required a more favorable signal to noise ratio (SNR) to recognize speech in TTS than in SSN (mean = +1 dB in TTS, -3 dB in SSN). We posit that failures in auditory stream segregation limit the extent to which CHL can recognize speech in a speech masker. Larger sample sizes or longitudinal data are needed to characterize the time course of maturation of masked speech perception in CHL.
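The aided audibility measure above, the Speech Intelligibility Index, is at its core an importance-weighted sum of per-band audibility. The band weights below are made-up placeholders, not the ANSI S3.5 tables, and the real procedure includes level corrections omitted here:

```python
def sii_audibility(band_importance, band_audibility):
    """SII-style aided audibility: an importance-weighted sum of per-band
    audibility, both on [0, 1]. The real ANSI S3.5 procedure defines the
    band-importance tables and level corrections; this is only the core idea."""
    assert abs(sum(band_importance) - 1.0) < 1e-6, "importance weights must sum to 1"
    return sum(w * max(0.0, min(1.0, a))
               for w, a in zip(band_importance, band_audibility))

importance = [0.2, 0.3, 0.3, 0.2]   # hypothetical 4-band weights (not the standard's values)
audibility = [1.0, 0.8, 0.5, 0.2]   # fraction of speech cues audible in each band
print(round(sii_audibility(importance, audibility), 2))  # 0.63
```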
Affiliation(s)
- Kaylah Lalonde
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Elizabeth A. Walker
- Department of Communication Sciences and Disorders, The University of Iowa, Iowa City, IA
- Lori J. Leibold
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Ryan W. McCreery
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
5
Flaherty MM, Price R, Murgia S, Manukian E. Can Playing a Game Improve Children's Speech Recognition? A Preliminary Study of Implicit Talker Familiarity Effects. Am J Audiol 2023:1-16. [PMID: 38056473; DOI: 10.1044/2023_aja-23-00156]
Abstract
PURPOSE The goal was to evaluate whether implicit talker familiarization via an interactive computer game, designed for this study, could improve children's word recognition in classroom noise. It was hypothesized that, regardless of age, children would perform better when recognizing words spoken by the talker who was heard during the game they played. METHOD Using a one-group pretest-posttest experimental design, this study examined the impact of short-term implicit voice exposure on children's word recognition in classroom noise. Implicit voice familiarization occurred via an interactive computer game, played at home for 10 min a day for 5 days. In the game, children (8-12 years) heard one voice, intended to become the "familiar talker." Pre- and postfamiliarization, children identified words in prerecorded classroom noise. Four conditions were tested to evaluate talker familiarity and generalization effects. RESULTS Results demonstrated an 11% improvement when recognizing words spoken by the voice heard in the game ("familiar talker"). This was observed only for words that were heard in the game and did not generalize to unfamiliarized words. Before familiarization, younger children had poorer recognition than older children in all conditions; however, after familiarization, there was no effect of age on performance for familiarized stimuli. CONCLUSIONS Implicit short-term exposure to a talker has the potential to improve children's speech recognition. Therefore, leveraging talker familiarity through gameplay shows promise as a viable method for improving children's speech-in-noise recognition. However, given that improvements did not generalize to unfamiliarized words, careful consideration of exposure stimuli is necessary to optimize this approach.
Affiliation(s)
- Mary M Flaherty
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, Champaign
- Rachael Price
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, Champaign
- Department of Audiology, Children's Hospital of Philadelphia, PA
- Silvia Murgia
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, Champaign
- Emma Manukian
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, Champaign
6
Lewis D, Al-Salim S, McDermott T, Dergan A, McCreery RW. Impact of room acoustics and visual cues on speech perception and talker localization by children with mild bilateral or unilateral hearing loss. Front Pediatr 2023;11:1252452. [PMID: 38078311; PMCID: PMC10703386; DOI: 10.3389/fped.2023.1252452]
Abstract
Introduction This study evaluated the ability of children (8-12 years) with mild bilateral or unilateral hearing loss (MBHL/UHL) listening unaided, or normal hearing (NH) to locate and understand talkers in varying auditory/visual acoustic environments. Potential differences across hearing status were examined. Methods Participants heard sentences presented by female talkers from five surrounding locations in varying acoustic environments. A localization-only task included two conditions (auditory only, visually guided auditory) in three acoustic environments (favorable, typical, poor). Participants were asked to locate each talker. A speech perception task included four conditions [auditory-only, visually guided auditory, audiovisual, auditory-only from 0° azimuth (baseline)] in a single acoustic environment. Participants were asked to locate talkers, then repeat what was said. Results In the localization-only task, participants were better able to locate talkers and looking times were shorter with visual guidance to talker location. Correct looking was poorest and looking times longest in the poor acoustic environment. There were no significant effects of hearing status/age. In the speech perception task, performance was highest in the audiovisual condition and was better in the visually guided and auditory-only conditions than in the baseline condition. Although audiovisual performance was best overall, children with MBHL or UHL performed more poorly than peers with NH. Better-ear pure-tone averages for children with MBHL had a greater effect on keyword understanding than did poorer-ear pure-tone averages for children with UHL. Conclusion Although children could locate talkers more easily and quickly with visual information, finding locations alone did not improve speech perception. 
Best speech perception occurred in the audiovisual condition; however, poorer performance by children with MBHL or UHL suggested that being able to see talkers did not overcome reduced auditory access. Children with UHL exhibited better speech perception than children with MBHL, supporting benefits of NH in at least one ear.
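The better-ear and poorer-ear pure-tone averages (PTAs) compared in the results are simple means of audiometric thresholds. A sketch, assuming a common four-frequency convention and hypothetical audiograms (the study's frequency set and data are not reproduced here):

```python
def pure_tone_average(audiogram, freqs=(500, 1000, 2000, 4000)):
    """Pure-tone average (dB HL) over a fixed frequency set. The four-frequency
    set used here is one common clinical convention; studies differ in their choice."""
    return sum(audiogram[f] for f in freqs) / len(freqs)

def better_ear_pta(left, right):
    """Better-ear PTA: the lower (better) of the two per-ear averages."""
    return min(pure_tone_average(left), pure_tone_average(right))

# hypothetical audiograms (dB HL) for a child with unilateral hearing loss
left = {500: 10, 1000: 10, 2000: 15, 4000: 15}    # near-normal ear
right = {500: 45, 1000: 50, 2000: 55, 4000: 60}   # impaired ear
print(better_ear_pta(left, right))   # 12.5
print(pure_tone_average(right))      # 52.5
```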
Affiliation(s)
- Dawna Lewis
- Listening and Learning Laboratory, Boys Town National Research Hospital, Omaha, NE, United States
- Auditory Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE, United States
- Sarah Al-Salim
- Clinical Measurement Program, Boys Town National Research Hospital, Omaha, NE, United States
- Tessa McDermott
- Listening and Learning Laboratory, Boys Town National Research Hospital, Omaha, NE, United States
- Andrew Dergan
- Listening and Learning Laboratory, Boys Town National Research Hospital, Omaha, NE, United States
- Ryan W. McCreery
- Auditory Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE, United States
7
Porto L, Wouters J, van Wieringen A. Speech perception in noise, working memory, and attention in children: A scoping review. Hear Res 2023;439:108883. [PMID: 37722287; DOI: 10.1016/j.heares.2023.108883]
Abstract
PURPOSE Speech perception in noise is an everyday occurrence for adults and children alike. The factors that influence how well individuals cope with noise during spoken communication are not well understood, particularly in the case of children. This article aims to review the available evidence on how working memory and attention play a role in children's speech perception in noise, how characteristics of measures affect results, and how this relationship differs in non-typical populations. METHOD This article is a scoping review of the literature available on PubMed. Forty articles met the inclusion criteria: children as participants, some measure of speech perception in noise, some measure of attention and/or working memory, and some attempt to establish relationships between the measures. Findings were charted and presented with regard to how they relate to the research questions. RESULTS The majority of studies report that attention and especially working memory are involved in children's speech perception in noise. We provide an overview of the impact of certain task characteristics on findings across the literature, as well as how these apply to non-typical populations. CONCLUSION While most of the work reviewed here suggests that working memory and attention are important abilities employed by children in overcoming the difficulties imposed by noise during spoken communication, methodological variability still prevents a clearer picture from emerging.
Affiliation(s)
- Lyan Porto
- Department of Neurosciences, University of Leuven, Research group Experimental Oto-Rino-Laryngologie, O&N II, Herestraat 49, Leuven 3000, Belgium
- Jan Wouters
- Department of Neurosciences, University of Leuven, Research group Experimental Oto-Rino-Laryngologie, O&N II, Herestraat 49, Leuven 3000, Belgium
- Astrid van Wieringen
- Department of Neurosciences, University of Leuven, Research group Experimental Oto-Rino-Laryngologie, O&N II, Herestraat 49, Leuven 3000, Belgium; Department of Special Needs Education, University of Oslo, Norway
8
Chen F, Guo Q, Deng Y, Zhu J, Zhang H. Development of Mandarin Lexical Tone Identification in Noise and Its Relation With Working Memory. J Speech Lang Hear Res 2023;66:4100-4116. [PMID: 37678219; DOI: 10.1044/2023_jslhr-22-00457]
Abstract
PURPOSE This study aimed to examine the developmental trajectory of Mandarin tone identification in quiet and in two noisy conditions: speech-shaped noise (SSN) and multitalker babble noise. In addition, we evaluated the relationship between the development of tone identification and working memory capacity. METHOD Ninety-three typically developing children aged 5-8 years and 23 young adults completed categorical identification of two tonal continua (Tone 1-4 and Tone 2-3) in quiet, SSN, and babble noise. Their working memory was additionally measured using auditory digit span tests. Correlation analyses between digit span scores and boundary widths were performed. RESULTS Six-year-old children had achieved adultlike categorical identification of the Tone 1-4 continuum under both types of noise. Moreover, 6-year-old children could identify the Tone 2-3 continuum as well as adults in SSN. Nonetheless, the child participants, even 8-year-olds, performed worse when tokens from the Tone 2-3 continuum were masked by babble noise. Greater working memory capacity was associated with better tone identification in noise for preschoolers aged 5-6 years; for school-age children aged 7-8 years, however, such a correlation existed only for the Tone 2-3 continuum in SSN. CONCLUSIONS Lexical tone perception might take longer to reach adultlike competence in babble noise than in SSN. Moreover, a significant interaction between masking type and stimulus difficulty was found, with the Tone 2-3 continuum more susceptible to interference from babble noise than Tone 1-4. Furthermore, correlations between working memory capacity and tone perception in noise varied with developmental stage, stimulus difficulty, and masking type.
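The boundary-width measure correlated with digit span here is often defined from a fitted logistic identification function as the continuum distance between the 25% and 75% points; for p(x) = 1/(1 + exp(-k(x - x0))) that distance works out to 2·ln(3)/k. A sketch with an arbitrary slope (the authors' exact fitting procedure is not described here):

```python
import math

def logistic(x, x0, k):
    """Fitted identification function along a tone continuum:
    probability of reporting one category at continuum step x."""
    return 1.0 / (1.0 + math.exp(-k * (x - x0)))

def boundary_width(k):
    """Continuum distance between the 25% and 75% identification points.
    Solving logistic(x) = 0.75 and 0.25 gives x0 +/- ln(3)/k, so the width
    is 2*ln(3)/k: steeper slopes mean narrower, more categorical boundaries."""
    return 2.0 * math.log(3.0) / k

k, x0 = 1.5, 5.0          # hypothetical slope and boundary location
w = boundary_width(k)
# sanity check: the function really is at 75% half a width above the boundary
print(round(logistic(x0 + w / 2, x0, k), 2))  # 0.75
```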
Affiliation(s)
- Fei Chen
- School of Foreign Languages, Hunan University, Changsha, China
- Qingqing Guo
- School of Foreign Languages, Hunan University, Changsha, China
- Yunhua Deng
- Foreign Studies College, Hunan Normal University, Changsha, China
- Jiaqiang Zhu
- Research Centre for Language, Cognition, and Neuroscience, Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University, Hung Hom, Hong Kong SAR, China
- Hao Zhang
- Center for Clinical Neurolinguistics, School of Foreign Languages and Literature, Shandong University, Jinan, China
9
Nittrouer S, Lowenstein JH. Recognition of Sentences With Complex Syntax in Speech Babble by Adolescents With Normal Hearing or Cochlear Implants. J Speech Lang Hear Res 2023;66:1110-1135. [PMID: 36758200; PMCID: PMC10205108; DOI: 10.1044/2022_jslhr-22-00407]
Abstract
PURPOSE General language abilities of children with cochlear implants have been thoroughly investigated, especially at young ages, but far less is known about how well they process language in real-world settings, especially in higher grades. This study addressed this gap in knowledge by examining recognition of sentences with complex syntactic structures in backgrounds of speech babble by adolescents with cochlear implants, and peers with normal hearing. DESIGN Two experiments were conducted. First, new materials were developed using young adults with normal hearing as the normative sample, creating a corpus of sentences with controlled, but complex syntactic structures presented in three kinds of babble that varied in voice gender and number of talkers. Second, recognition by adolescents with normal hearing or cochlear implants was examined for these new materials and for sentence materials used with these adolescents at younger ages. Analyses addressed three objectives: (1) to assess the stability of speech recognition across a multiyear age range, (2) to evaluate speech recognition of sentences with complex syntax in babble, and (3) to explore how bottom-up and top-down mechanisms account for performance under these conditions. RESULTS Results showed: (1) Recognition was stable across the ages of 10-14 years for both groups. (2) Adolescents with normal hearing performed similarly to young adults with normal hearing, showing effects of syntactic complexity and background babble; adolescents with cochlear implants showed poorer recognition overall, and diminished effects of both factors. (3) Top-down language and working memory primarily explained recognition for adolescents with normal hearing, but the bottom-up process of perceptual organization primarily explained recognition for adolescents with cochlear implants. 
CONCLUSIONS Comprehension of language in real-world settings relies on different mechanisms for adolescents with cochlear implants than for adolescents with normal hearing. A novel finding was that perceptual organization is a critical factor. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21965228.
Affiliation(s)
- Susan Nittrouer
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville
- Joanna H. Lowenstein
- Department of Speech, Language, and Hearing Sciences, University of Florida, Gainesville
10
Abstract
OBJECTIVE The purpose of this study was to 1) characterise word recognition in a speech masker for preschoolers tested using closed-set, forced-choice procedures and 2) better understand the stimulus and listener factors affecting performance. DESIGN Speech recognition thresholds (SRTs) in a two-talker masker were evaluated using a picture-pointing response with two sets of disyllabic target words. ChEgSS words were previously developed for children ≥5 years of age, and simple words were developed for preschoolers. Familiarisation ensured accurate identification of target words before testing. STUDY SAMPLE Participants were 3- and 4-year-olds (n = 21) and young adults (n = 10) with normal hearing. RESULTS Preschoolers and adults had significantly lower SRTs for the simple words than the ChEgSS words, and lower SRTs for early-acquired than later-acquired ChEgSS words. For both word sets, SRTs were approximately 11 dB higher for preschoolers than adults, and child age was associated with SRTs. Preschoolers' receptive vocabulary size predicted performance for ChEgSS words but not simple words. CONCLUSIONS Preschoolers were more susceptible to speech-in-speech masking than adults, with a similar child-adult difference for the ChEgSS and simple words. The effect of receptive vocabulary on preschoolers' recognition of ChEgSS words indicates that vocabulary size is an important consideration, even when using closed-set methods.
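The abstract does not spell out the adaptive procedure behind these SRTs, but a generic 1-down/1-up staircase, which converges on the 50%-correct SNR, illustrates the idea. The simulated listener below is an idealized step function, purely for demonstration:

```python
def staircase_srt(true_srt, start_snr=10.0, step=2.0, n_trials=30):
    """1-down/1-up adaptive track, which converges on the 50%-correct SNR.
    The 'listener' is an idealized step function (correct iff SNR >= true_srt);
    the threshold estimate is the mean SNR at the reversal points."""
    snr, last_correct, reversals = start_snr, None, []
    for _ in range(n_trials):
        correct = snr >= true_srt
        if last_correct is not None and correct != last_correct:
            reversals.append(snr)          # direction change: record a reversal
        last_correct = correct
        snr += -step if correct else step  # harder after a hit, easier after a miss
    return sum(reversals) / len(reversals)

print(staircase_srt(true_srt=-1.0))  # -1.0
```

With a real listener the track oscillates stochastically around threshold, so more trials and averaging over later reversals are used in practice.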
Affiliation(s)
- Christina Dubas
- Phoenix Children's Hospital, Audiology Program, Phoenix, AZ, USA
- Heather Porter
- Boys Town National Research Hospital, Center for Hearing Research, Omaha, NE, USA
- Ryan W McCreery
- Boys Town National Research Hospital, Center for Hearing Research, Omaha, NE, USA
- Emily Buss
- Department of Otolaryngology/HNS, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA
- Lori J Leibold
- Boys Town National Research Hospital, Center for Hearing Research, Omaha, NE, USA

11
Sendesen E, Colak H, Korkut Y, Yalcınkaya E, Sennaroglu G. The right ear advantage – a perspective from speech perception in noise test. HEARING, BALANCE AND COMMUNICATION 2023. [DOI: 10.1080/21695717.2023.2181562] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 02/25/2023]
Affiliation(s)
- Eser Sendesen
- Department of Audiology, Hacettepe University, Ankara, Turkey
- Hasan Colak
- Department of Audiology, Hacettepe University, Ankara, Turkey
- Department of Audiology, Baskent University, Ankara, Turkey
- Yagız Korkut
- Department of Audiology, Hacettepe University, Ankara, Turkey
- Eda Yalcınkaya
- Department of Audiology, Hacettepe University, Ankara, Turkey

12
Köse B, Karaman-Demirel A, Çiprut A. Psychoacoustic abilities in pediatric cochlear implant recipients: The relation with short-term memory and working memory capacity. Int J Pediatr Otorhinolaryngol 2022; 162:111307. [PMID: 36116181 DOI: 10.1016/j.ijporl.2022.111307] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/20/2022] [Revised: 08/30/2022] [Accepted: 08/31/2022] [Indexed: 11/25/2022]
Abstract
OBJECTIVE The aim was to investigate school-age children with cochlear implants (CIs) and their typically developing peers in terms of auditory short-term memory (ASTM), auditory working memory (AWM), visuospatial short-term memory (VSTM), visuospatial working memory (VWM), spectral resolution, and monosyllabic word recognition in noise. METHODS Twenty-three prelingually deaf CI users and twenty-three typically developing (TD) peers aged 7-10 years participated. Twelve children with CIs were earlier-implanted (i.e., age at implantation ≤24 months). Children with CIs were compared to typically developing peers, and correlations between cognitive and psychoacoustic abilities were computed separately for the groups. In addition, regression analyses were conducted to develop models that could predict SMRT (spectral-temporally modulated ripple test) and speech recognition scores. RESULTS The AWM scores of the later-implanted group were significantly lower than those of both the earlier-implanted and TD groups. ASTM scores of TD children were significantly higher than those of both earlier-implanted and later-implanted participants. There was no statistically significant difference between groups in terms of VSTM and VWM. AWM performance was positively correlated with ASTM, SMRT scores, and speech recognition under noisy conditions for pediatric CI recipients. AWM was a statistically significant predictor of the SMRT score, and the SMRT score was an indicator of the speech recognition score under the 0 dB SNR condition. CONCLUSION Most children using CIs are at risk for clinically significant deficits in cognitive abilities such as AWM and ASTM. When evaluating cognitive and psychoacoustic abilities in routine clinical practice, it should be kept in mind that these abilities can influence each other.
Affiliation(s)
- Büşra Köse
- Department of Audiology, School of Medicine, Marmara University, Istanbul, Turkey; Koç University Research Center for Translational Medicine (KUTTAM), Istanbul, Turkey.
- Ayşenur Karaman-Demirel
- Department of Audiology, School of Medicine, Marmara University, Istanbul, Turkey; Vocational School of Health Services, Okan University, Istanbul, Turkey
- Ayça Çiprut
- Department of Audiology, School of Medicine, Marmara University, Istanbul, Turkey

13
Ching TY, Cupples L, Zhang VW. Predicting 9-Year Language Ability from Preschool Speech Recognition in Noise in Children Using Cochlear Implants. Trends Hear 2022; 26:23312165221090395. [PMID: 36285469 PMCID: PMC9608021 DOI: 10.1177/23312165221090395] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/07/2022] Open
Abstract
The presence of congenital permanent childhood hearing loss has a negative impact on children’s development and lives. The current literature documents weaknesses in speech perception in noise and language development in many children with hearing loss. However, there is a lack of clear evidence for a longitudinal relationship between early speech perception abilities and later language skills. This study addressed the evidence gap by drawing on data collected as part of the Longitudinal Outcomes of Children with Hearing Impairment (LOCHI) study. Cross-lagged regression analyses were used to examine the influence of speech perception in noise at age 5 years on language ability at age 9 years and vice versa (i.e. the influence of language ability at age 5 years on speech perception in noise at age 9 years). Data from 56 children using cochlear implants were analysed. We found that preschool speech perception in noise was a significant predictor of language ability at school age, after controlling for the effect of early language. The findings lend support to early intervention that targets the improvement of language skills, but also highlight the need for intervention and technology to enhance young children’s auditory capabilities for perceiving speech in noise in early childhood so that outcomes of children with hearing loss in school can be maximized.
Affiliation(s)
- Teresa Y.C. Ching
- Macquarie University, Sydney, Australia
- NextSense Institute, Sydney, Australia
- University of Queensland, Brisbane, Australia
- Vicky W. Zhang
- Macquarie University, Sydney, Australia
- National Acoustic Laboratories, Sydney, Australia

14
Benítez-Barrera CR, Skoe E, Huang J, Tharpe AM. Evidence for a Musician Speech-Perception-in-Noise Advantage in School-Age Children. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2022; 65:3996-4008. [PMID: 36194893 DOI: 10.1044/2022_jslhr-22-00134] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/16/2023]
Abstract
PURPOSE The objective of this study was to evaluate whether child musicians are better at listening to speech in noise (SPIN) than nonmusicians of the same age. In addition, we aimed to explore whether the musician SPIN advantage in children was related to general intelligence (IQ). METHOD Fifty-one children aged 8.2-11.8 years with different levels of music training participated in the study. A between-group design and correlational analyses were used to determine differences in SPIN skills as they relate to music training. IQ was used as a covariate to explore the relationship between intelligence and SPIN ability. RESULTS More years of music training were associated with better SPIN skills than fewer years of music training. Furthermore, this difference in SPIN skills remained even when accounting for IQ. These results were found at the group level and also when years of instrument training were treated as a continuous variable (i.e., correlational analyses). CONCLUSIONS We confirmed results from previous studies in which child musicians outperformed nonmusicians in SPIN skills. We also showed that this effect was not related to differences in IQ between the musicians and nonmusicians in this cohort of children. However, confirmation of this finding with a cohort of children from more diverse socioeconomic statuses and cognitive profiles is warranted.
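"Accounting for IQ" in a correlational analysis like this one is commonly done with a partial correlation. The sketch below is a minimal illustration of that idea, not the authors' analysis: the sample values (51 children, training years, IQ, SPIN thresholds) are entirely hypothetical, and the covariate is regressed out of both variables before correlating the residuals.

```python
import numpy as np

def partial_corr(x, y, covar):
    """Pearson correlation of x and y after regressing the covariate out of both."""
    def residualize(v, c):
        design = np.column_stack([np.ones_like(c), c])   # intercept + covariate
        beta, *_ = np.linalg.lstsq(design, v, rcond=None)
        return v - design @ beta
    return np.corrcoef(residualize(x, covar), residualize(y, covar))[0, 1]

# Hypothetical data: 51 children; SPIN threshold improves (drops) with training years
rng = np.random.default_rng(0)
iq = rng.normal(100, 15, 51)
training_years = rng.uniform(0, 6, 51)
spin_threshold_db = -0.8 * training_years + 0.02 * (iq - 100) + rng.normal(0, 1, 51)

r = partial_corr(training_years, spin_threshold_db, iq)
```

A negative `r` here would mirror the paper's pattern: more training, lower (better) SPIN thresholds, even after IQ is controlled.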
Affiliation(s)
- Anne Marie Tharpe
- Vanderbilt University, Nashville, TN
- Vanderbilt University Medical Center, Nashville, TN
15
van Wieringen A, Wouters J. Lilliput: speech perception in speech-weighted noise and in quiet in young children. Int J Audiol 2022:1-9. [PMID: 35732012 DOI: 10.1080/14992027.2022.2086491] [Citation(s) in RCA: 2] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/05/2022]
Abstract
OBJECTIVE The aim of this study was to develop an open-set word recognition task in speech-weighted noise and in quiet for young children and to examine age effects for open versus closed response formats. DESIGN Dutch monosyllabic words were presented in quiet and in stationary speech-weighted noise to 4- and 5-year-old children as well as to young adults in an open-set response format. Additionally, performance in open and closed context was assessed, as well as in a picture-pointing paradigm. STUDY SAMPLE More than 200 children and 50 adults with normal hearing participated in the various validation phases. RESULTS Average fitted speech reception thresholds (50%) yielded an age effect between 4- and 5-year-olds (and adults), both in speech-weighted noise and in quiet. The closed-set format yielded lower (better) SNRs than the open-set format, and children benefitted to the same extent as adults from phonetically similar words in speech-weighted noise. Additionally, the 4-AFC picture-pointing paradigm can be used to assess word recognition in quiet from 3 years of age. CONCLUSIONS The same materials reveal performance differences between 4 and 5 years of age (and adults), both in quiet and in speech-weighted noise, using an open-set response format. This relatively small yet significant difference in SRT over a gap of only 1 year shows a developmental change for word recognition in speech-weighted noise and in quiet in the first decade of life. The study is part of the protocol registered on ClinicalTrials.gov (ID = NCT04063748).
Affiliation(s)
- Astrid van Wieringen
- Department of Neurosciences, Research Group Experimental Oto-rhino-laryngology, KU Leuven - University of Leuven, Leuven, Belgium
- Jan Wouters
- Department of Neurosciences, Research Group Experimental Oto-rhino-laryngology, KU Leuven - University of Leuven, Leuven, Belgium

16
Lewis D, Spratford M, Stecker GC, McCreery RW. Remote-Microphone Benefit in Noise and Reverberation for Children Who are Hard of Hearing. J Am Acad Audiol 2022; 33:330-341. [PMID: 36577441 PMCID: PMC10300232 DOI: 10.1055/s-0042-1755319] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/30/2022]
Abstract
BACKGROUND Remote-microphone (RM) systems are designed to reduce the impact of poor acoustics on speech understanding. However, there is limited research examining the effects of adding reverberation to noise on speech understanding when using hearing aids (HAs) and RM systems. Given the significant challenges posed by environments with poor acoustics for children who are hard of hearing, we evaluated the ability of a novel RM system to address the effects of noise and reverberation. PURPOSE We assessed the effect of a recently developed RM system on aided speech perception of children who were hard of hearing in noise and reverberation and how their performance compared to peers who are not hard of hearing (i.e., who have hearing thresholds no greater than 15 dB HL). The effect of aided speech audibility on sentence recognition when using an RM system also was assessed. STUDY SAMPLE Twenty-two children with mild to severe hearing loss and 17 children who were not hard of hearing (i.e., with hearing thresholds no greater than 15 dB HL) (7-18 years) participated. DATA COLLECTION AND ANALYSIS An adaptive procedure was used to determine the signal-to-noise ratio for 50 and 95% correct sentence recognition in noise and noise plus reverberation (RT 300 ms). Linear mixed models were used to examine the effect of listening conditions on speech recognition with RMs for both groups of children and the effects of aided audibility on performance across all listening conditions for children who were hard of hearing. RESULTS Children who were hard of hearing had poorer speech recognition for HAs alone than for HAs plus RM. Regardless of hearing status, children had poorer speech recognition in noise plus reverberation than in noise alone. Children who were hard of hearing had poorer speech recognition than peers with thresholds no greater than 15 dB HL when using HAs alone but comparable or better speech recognition with HAs plus RM. 
Children with better aided audibility through their HAs showed better speech recognition both with HAs alone and with HAs plus RM. CONCLUSION Providing HAs that maximize speech audibility and coupling them with RM systems has the potential to improve communication access and outcomes for children who are hard of hearing in environments with noise and reverberation.
Affiliation(s)
- Dawna Lewis
- Audibility, Perception, and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE
- Meredith Spratford
- Audibility, Perception, and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE
- Ryan W. McCreery
- Audibility, Perception, and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE
17
American Cochlear Implant Alliance Task Force Guidelines for Clinical Assessment and Management of Cochlear Implantation in Children With Single-Sided Deafness. Ear Hear 2022; 43:255-267. [PMID: 35213890 PMCID: PMC8862768 DOI: 10.1097/aud.0000000000001204] [Citation(s) in RCA: 19] [Impact Index Per Article: 9.5] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
18
Martin IA, Goupell MJ, Huang YT. Children's syntactic parsing and sentence comprehension with a degraded auditory signal. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2022; 151:699. [PMID: 35232101 PMCID: PMC8816517 DOI: 10.1121/10.0009271] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/02/2020] [Revised: 10/15/2021] [Accepted: 12/16/2021] [Indexed: 06/14/2023]
Abstract
During sentence comprehension, young children anticipate syntactic structures using early-arriving words and have difficulty revising incorrect predictions using late-arriving words. However, nearly all work to date has focused on syntactic parsing in idealized speech environments, and little is known about how children's strategies for predicting and revising meanings are affected by signal degradation. This study compares comprehension of active and passive sentences in natural and vocoded speech. In a word-interpretation task, 5-year-olds inferred the meanings of novel words in sentences that (1) encouraged agent-first predictions (e.g., The blicket is eating the seal implies The blicket is the agent), (2) required revising predictions (e.g., The blicket is eaten by the seal implies The blicket is the theme), or (3) weakened predictions by placing familiar nouns in sentence-initial position (e.g., The seal is eating/eaten by the blicket). When novel words promoted agent-first predictions, children misinterpreted passives as actives, and errors increased with vocoded compared to natural speech. However, when sentence-initial familiar words weakened agent-first predictions, children accurately interpreted passives, with no effect of signal degradation. This demonstrates that signal quality interacts with interpretive processes during sentence comprehension and that the impact of speech degradation is greatest when late-arriving information conflicts with predictions.
Affiliation(s)
- Isabel A Martin
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Yi Ting Huang
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA

19
Evans S, Rosen S. Who is Right? A Word-Identification-in-Noise Test for Young Children Using Minimal Pair Distracters. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2022; 65:159-168. [PMID: 34910569 DOI: 10.1044/2021_jslhr-20-00658] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/14/2023]
Abstract
PURPOSE Many children have difficulties understanding speech. At present, there are few assessments that test for subtle impairments in speech perception with normative data from U.K. children. We present a new test that evaluates children's ability to identify target words in background noise by choosing between minimal pair alternatives that differ by a single articulatory phonetic feature. This task (a) is tailored to testing young children, but also readily applicable to adults; (b) has minimal memory demands; (c) adapts to the child's ability; and (d) does not require reading or verbal output. METHOD We tested 155 children and young adults aged from 5 to 25 years on this new test of single word perception. RESULTS Speech-in-noise abilities in this particular task develop rapidly through childhood until they reach maturity at around 9 years of age. CONCLUSIONS We make this test freely available and provide associated normative data. We hope that it will be useful to researchers and clinicians in the assessment of speech perception abilities in children who are hard of hearing or have developmental language disorder, dyslexia, or auditory processing disorder. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.17155934.
Affiliation(s)
- Samuel Evans
- Department of Psychology, University of Westminster, London, United Kingdom
- Stuart Rosen
- Department of Speech, Hearing and Phonetic Sciences, University College London, United Kingdom

20
Abstract
OBJECTIVES The purpose of the present study was to determine whether age and hearing ability influence selective attention during childhood. Specifically, we hypothesized that immaturity and disrupted auditory experience impede selective attention during childhood. DESIGN Seventy-seven school-age children (5 to 12 years of age) participated in this study: 61 children with normal hearing and 16 children with bilateral hearing loss who use hearing aids and/or cochlear implants. Children performed selective attention-based behavioral change detection tasks comprised of target and distractor streams in the auditory and visual modalities. In the auditory modality, children were presented with two streams of single-syllable words spoken by a male and female talker. In the visual modality, children were presented with two streams of grayscale images. In each task, children were instructed to selectively attend to the target stream, inhibit attention to the distractor stream, and press a key as quickly as possible when they detected a frequency (auditory modality) or color (visual modality) deviant stimulus in the target, but not distractor, stream. Performance on the auditory and visual change detection tasks was quantified by response sensitivity, which reflects children's ability to selectively attend to deviants in the target stream and inhibit attention to those in the distractor stream. Children also completed a standardized measure of attention and inhibitory control. RESULTS Younger children and children with hearing loss demonstrated lower response sensitivity, and therefore poorer selective attention, than older children and children with normal hearing, respectively. The effect of hearing ability on selective attention was observed across the auditory and visual modalities, although the extent of this group difference was greater in the auditory modality than the visual modality due to differences in children's response patterns. 
Additionally, children's performance on a standardized measure of attention and inhibitory control related to their performance during the auditory and visual change detection tasks. CONCLUSIONS Overall, the findings from the present study suggest that age and hearing ability influence children's ability to selectively attend to a target stream in both the auditory and visual modalities. The observed differences in response patterns across modalities, however, reveal a complex interplay between hearing ability, task modality, and selective attention during childhood. While the effect of age on selective attention is expected to reflect the immaturity of cognitive and linguistic processes, the effect of hearing ability may reflect altered development of selective attention due to disrupted auditory experience early in life and/or a differential allocation of attentional resources to meet task demands.
21
Gijbels L, Yeatman JD, Lalonde K, Lee AKC. Audiovisual Speech Processing in Relationship to Phonological and Vocabulary Skills in First Graders. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:5022-5040. [PMID: 34735292 PMCID: PMC9150669 DOI: 10.1044/2021_jslhr-21-00196] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/02/2021] [Revised: 07/06/2021] [Accepted: 08/11/2021] [Indexed: 06/13/2023]
Abstract
PURPOSE It is generally accepted that adults use visual cues to improve speech intelligibility in noisy environments, but findings regarding visual speech benefit in children are mixed. We explored factors that contribute to audiovisual (AV) gain in young children's speech understanding. We examined whether there is an AV benefit to speech-in-noise recognition in first graders and whether the visual salience of phonemes influences their AV benefit. We also explored whether individual differences in AV speech enhancement could be explained by vocabulary knowledge, phonological awareness, or general psychophysical testing performance. METHOD Thirty-seven first graders completed online psychophysical experiments. We used an online single-interval, four-alternative forced-choice picture-pointing task with age-appropriate consonant-vowel-consonant words to measure auditory-only, visual-only, and AV word recognition in noise at -2 and -8 dB SNR. We obtained standard measures of vocabulary and phonological awareness and included a general psychophysical test to examine correlations with AV benefits. RESULTS We observed a significant overall AV gain among first graders. This effect was mainly attributable to the benefit at -8 dB SNR for visually distinct targets. Individual differences were not explained by any of the child variables. Boys showed lower auditory-only performance, leading to significantly larger AV gains. CONCLUSIONS This study shows an AV benefit of distinctive visual cues for word recognition in challenging noise conditions in first graders. The cognitive and linguistic constraints of the task may have minimized the impact of individual differences in vocabulary and phonological awareness on AV benefit. The gender difference should be studied in a larger sample and age range.
Affiliation(s)
- Liesbeth Gijbels
- Department of Speech & Hearing Sciences, University of Washington, Seattle
- Institute for Learning & Brain Sciences, University of Washington, Seattle
- Jason D. Yeatman
- Division of Developmental-Behavioral Pediatrics, School of Medicine, Stanford University, CA
- Graduate School of Education, Stanford University, CA
- Kaylah Lalonde
- Boys Town National Research Hospital, Center for Hearing Research, Omaha, NE
- Adrian K. C. Lee
- Department of Speech & Hearing Sciences, University of Washington, Seattle
- Institute for Learning & Brain Sciences, University of Washington, Seattle

22
Magimairaj BM, Nagaraj NK, Champlin CA, Thibodeau LK, Loeb DF, Gillam RB. Speech Perception in Noise Predicts Oral Narrative Comprehension in Children With Developmental Language Disorder. Front Psychol 2021; 12:735026. [PMID: 34744907 PMCID: PMC8566731 DOI: 10.3389/fpsyg.2021.735026] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2021] [Accepted: 09/17/2021] [Indexed: 11/13/2022] Open
Abstract
We examined the relative contribution of auditory processing abilities (tone perception and speech perception in noise) after controlling for short-term memory capacity and vocabulary, to narrative language comprehension in children with developmental language disorder. Two hundred and sixteen children with developmental language disorder, ages 6 to 9 years (Mean = 7; 6), were administered multiple measures. The dependent variable was children's score on the narrative comprehension scale of the Test of Narrative Language. Predictors were auditory processing abilities, phonological short-term memory capacity, and language (vocabulary) factors, with age, speech perception in quiet, and non-verbal IQ as covariates. Results showed that narrative comprehension was positively correlated with the majority of the predictors. Regression analysis suggested that speech perception in noise contributed uniquely to narrative comprehension in children with developmental language disorder, over and above all other predictors; however, tone perception tasks failed to explain unique variance. The relative importance of speech perception in noise over tone-perception measures for language comprehension reinforces the need for the assessment and management of listening in noise deficits and makes a compelling case for the functional implications of complex listening situations for children with developmental language disorder.
Affiliation(s)
- Beula M Magimairaj
- Communicative Disorders and Deaf Education, Emma Eccles Jones Early Childhood Education and Research Center, Utah State University, Logan, UT, United States
- Naveen K Nagaraj
- Communicative Disorders and Deaf Education, Emma Eccles Jones Early Childhood Education and Research Center, Utah State University, Logan, UT, United States
- Craig A Champlin
- Speech, Language, and Hearing Sciences, The University of Texas at Austin, Austin, TX, United States
- Linda K Thibodeau
- Callier Center for Communication Disorders, The University of Texas at Dallas, Dallas, TX, United States
- Diane F Loeb
- Communication Sciences and Disorders, Baylor University, Waco, TX, United States
- Ronald B Gillam
- Communicative Disorders and Deaf Education, Emma Eccles Jones Early Childhood Education and Research Center, Utah State University, Logan, UT, United States

23
Development of Masked Speech Detection Thresholds in 2- to 15-year-old Children: Speech-Shaped Noise and Two-Talker Speech Maskers. Ear Hear 2021; 42:1712-1726. [PMID: 33928913 DOI: 10.1097/aud.0000000000001062] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVES On the basis of the data from school-aged children, there is consistent evidence that there is a prolonged course of auditory development for perceiving speech embedded in competing background sounds. Furthermore, age-related differences are prolonged and pronounced for a two-talker speech masker compared to a speech-shaped noise masker. However, little is known about the course of development during the toddler and preschool years because it is difficult to collect reliable behavioral data from this age range. The goal of this study was to extend our lower age limit to include toddlers and preschoolers to characterize the developmental trajectory for masked speech detection thresholds across childhood. DESIGN Participants were 2- to 15-year-old children (n = 67) and adults (n = 17), all with normal hearing. Thresholds (71%) were measured for detecting a two-syllable word embedded in one of two maskers: speech-shaped noise or two-talker speech. The masker was presented at 55 dB SPL throughout testing. Stimuli were presented to the left ear via a lightweight headphone. Data were collected using an observer-based testing method in which the participant's behavior was judged by an experimenter using a two-interval, two-alternative testing paradigm. The participant's response to the stimulus was shaped by training him/her to perform a conditioned play-based response to the sound. For children, receptive vocabulary and working memory were measured. Data were fitted with a linear regression model to establish the course of development for each masker condition. Appropriateness of the test method was also evaluated by determining if there were age-related differences in training data, inter-rater reliability, or slope or upper asymptote estimates from pooled psychometric functions across different age groups. 
RESULTS Child and adult speech detection thresholds were poorer in the two-talker masker than in the speech-shaped noise masker, but different developmental trajectories were seen for the two masker conditions. For the speech-shaped noise masker, thresholds improved by about 5 dB across the age span tested, with adult-like performance being reached around 10 years of age. For the two-talker masker condition, thresholds improved by about 7 dB between 2.5 and 15 years. However, the linear fit for this condition failed to reach adult-like performance because of limited data from teenagers. No significant age-related differences were seen in training data, probe hit rate, or inter-rater reliability. Furthermore, slope and upper asymptote estimates from pooled psychometric functions were similar across different child age groups. CONCLUSIONS Different developmental patterns were seen across the two maskers, with more pronounced child-adult differences and prolonged immaturity during childhood for the two-talker masker relative to the speech-shaped noise masker. Our data do not support the idea that there is rapid improvement of masked speech detection thresholds between 2.5 and 5 years of age. This study also highlights that our observer-based method can be used to collect reliable behavioral data from toddlers and preschoolers, an age range about which little is known regarding auditory development.
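The pooled psychometric-function analysis mentioned here (threshold, slope, and upper asymptote) can be sketched with a toy fit. This is a minimal illustration, not the study's actual estimator: the proportions correct are hypothetical, a coarse grid search stands in for whatever fitting routine the authors used, and the 71%-correct threshold is then read off the fitted logistic.

```python
import numpy as np

def psychometric(snr, threshold, slope, upper):
    """Logistic rising from 0.5 (chance in a two-interval task) to an upper asymptote."""
    return 0.5 + (upper - 0.5) / (1.0 + np.exp(-slope * (snr - threshold)))

# Hypothetical pooled proportions correct at each masker SNR (dB)
snr = np.array([-15.0, -12.0, -9.0, -6.0, -3.0, 0.0, 3.0])
p_correct = np.array([0.50, 0.55, 0.62, 0.78, 0.90, 0.95, 0.96])

# Coarse least-squares grid search over threshold, slope, and upper asymptote
best = (np.inf, 0.0, 0.0, 0.0)
for th in np.arange(-12.0, 0.0, 0.1):
    for sl in np.arange(0.2, 2.0, 0.05):
        for up in np.arange(0.90, 1.001, 0.01):
            err = float(np.sum((psychometric(snr, th, sl, up) - p_correct) ** 2))
            if err < best[0]:
                best = (err, th, sl, up)
sse, threshold, slope, upper = best

# The 71%-correct point used in the study, read off the fitted curve
snr_71 = threshold - np.log((upper - 0.5) / (0.71 - 0.5) - 1.0) / slope
```

Comparing `slope` and `upper` across age groups, as the authors did, is one way to check that younger listeners' thresholds are not artifacts of shallower functions or higher lapse rates.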
24
Blankenship CM, Hunter LL, Feeney MP, Cox M, Bittinger L, Garinis AC, Lin L, McPhail G, Clancy JP. Functional Impacts of Aminoglycoside Treatment on Speech Perception and Extended High-Frequency Hearing Loss in a Pediatric Cystic Fibrosis Cohort. Am J Audiol 2021; 30:834-853. [PMID: 33465313 DOI: 10.1044/2020_aja-20-00059] [Citation(s) in RCA: 13] [Impact Index Per Article: 4.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/30/2023] Open
Abstract
Purpose The purpose of this study is to better understand the prevalence of ototoxicity-related hearing loss and its functional impact on communication in a pediatric and young adult cohort with cystic fibrosis (CF) and individuals without CF (controls). Method We did an observational, cross-sectional investigation of hearing function in children, teens, and young adults with CF (n = 57, M = 15.0 years) who received intravenous aminoglycoside antibiotics and age- and gender-matched controls (n = 61, M = 14.6 years). Participants completed standard and extended high-frequency audiometry, middle ear measures, speech perception tests, and a hearing and balance questionnaire. Results Individuals with CF were 3-4 times more likely to report issues with hearing, balance, and tinnitus and performed significantly poorer on speech perception tasks compared to controls. A higher prevalence of hearing loss was observed in individuals with CF (57%) compared to controls (37%). CF and control groups had similar proportions of slight and mild hearing losses; however, individuals with CF were 7.6 times more likely to have moderate and greater degrees of hearing loss. Older participants displayed higher average extended high-frequency thresholds, with no effect of age on average standard frequency thresholds. Although middle ear dysfunction has not previously been reported to be more prevalent in CF, this study showed that 16% had conductive or mixed hearing loss and higher rates of previous otitis media and pressure equalization tube surgeries compared to controls. Conclusions Individuals with CF have a higher prevalence of conductive, mixed, and sensorineural hearing loss; poorer speech-in-noise performance; and higher rates of multiple symptoms associated with otologic disorders (tinnitus, hearing difficulty, dizziness, imbalance, and otitis media) compared to controls. 
Accordingly, children with CF should be asked about these symptoms and should receive a baseline hearing assessment before treatment with potentially ototoxic medications, with monitoring at regular intervals thereafter, so that otologic and audiologic treatment for hearing- and ear-related problems can be provided to improve communication functioning.
Affiliation(s)
- Chelsea M. Blankenship: Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH; Departments of Otolaryngology and Communication Sciences and Disorders, University of Cincinnati, OH
- Lisa L. Hunter: Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH; Departments of Otolaryngology and Communication Sciences and Disorders, University of Cincinnati, OH
- M. Patrick Feeney: Oregon Hearing Research Center, Oregon Health & Science University, Portland; National Center for Rehabilitative Auditory Research, VA Portland Health Care System, OR
- Madison Cox: Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH
- Lindsey Bittinger: Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH
- Angela C. Garinis: Oregon Hearing Research Center, Oregon Health & Science University, Portland; National Center for Rehabilitative Auditory Research, VA Portland Health Care System, OR
- Li Lin: Research in Patient Services, Cincinnati Children's Hospital Medical Center, OH
- Gary McPhail: Division of Pulmonary Medicine, Cincinnati Children's Hospital Medical Center, OH
- John P. Clancy: Division of Pulmonary Medicine, Cincinnati Children's Hospital Medical Center, OH
|
25
|
van der Hoek-Snieders HEM, Stegeman I, Smit AL, Rhebergen KS. Linguistic Complexity of Speech Recognition Test Sentences and Its Influence on Children's Verbal Repetition Accuracy. Ear Hear 2021; 41:1511-1517. [PMID: 33136627 DOI: 10.1097/aud.0000000000000868] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES Speech recognition (SR) tests have been developed for children without considering the linguistic complexity of the sentences used. However, linguistic complexity is hypothesized to influence correct sentence repetition. The aim of this study is to identify lexical and grammatical parameters that influence the verbal repetition accuracy of sentences derived from a Dutch SR test when performed by 6-year-old typically developing children. DESIGN For this observational, cross-sectional study, 40 typically developing children aged 6 years were recruited at four primary schools in the Netherlands. All children performed a sentence repetition task derived from an SR test for adults. Sentence complexity was described beforehand with one lexical parameter, age of acquisition, and four grammatical parameters: sentence length, prepositions, sentence structure, and verb inflection. A multiple logistic regression analysis was performed. RESULTS Sentences with a higher age of acquisition (odds ratio [OR] = 1.59) or greater sentence length (OR = 1.28) had a higher risk of repetition inaccuracy. Sentences including a spatial (OR = 1.25) or other preposition (OR = 1.25) were at increased risk for incorrect repetition, as were complex sentences (OR = 1.69) and sentences in the present perfect (OR = 1.44) or future tense (OR = 2.32). CONCLUSIONS The variation in verbal repetition accuracy in 6-year-old children is significantly influenced by both lexical and grammatical parameters. Linguistic complexity is an important factor to take into account when assessing speech intelligibility in children.
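As an illustration of the odds-ratio arithmetic reported above (the baseline odds below are invented, not taken from the study), an OR from a logistic regression is the exponential of the fitted coefficient, and it multiplies the odds of the outcome per unit increase in the predictor:

```python
import math

def odds_ratio(coefficient):
    """Odds ratio implied by a logistic regression coefficient: exp(beta)."""
    return math.exp(coefficient)

def odds_to_probability(odds):
    """Convert odds to a probability: odds / (1 + odds)."""
    return odds / (1 + odds)

beta = math.log(1.59)       # coefficient implied by the reported OR = 1.59
baseline_odds = 0.25        # hypothetical baseline odds of a repetition error
shifted_odds = baseline_odds * odds_ratio(beta)
print(round(shifted_odds, 4))  # 0.3975
```

That is, under these made-up baseline odds, one unit more of age-of-acquisition complexity would raise the odds of an inaccurate repetition from 0.25 to about 0.40.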
Affiliation(s)
- Hanneke E M van der Hoek-Snieders: Department of Otorhinolaryngology and Head & Neck Surgery, Brain Center Rudolf Magnus, University Medical Center Utrecht, Utrecht, The Netherlands
|
26
|
Leibold LJ, Browning JM, Buss E. Masking Release for Speech-in-Speech Recognition Due to a Target/Masker Sex Mismatch in Children With Hearing Loss. Ear Hear 2021; 41:259-267. [PMID: 31365355 PMCID: PMC7310385 DOI: 10.1097/aud.0000000000000752] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES The goal of the present study was to compare the extent to which children with hearing loss and children with normal hearing benefit from mismatches in target/masker sex in the context of speech-in-speech recognition. It was hypothesized that children with hearing loss experience a smaller target/masker sex mismatch benefit relative to children with normal hearing due to impairments in peripheral encoding, variable access to high-quality auditory input, or both. DESIGN Eighteen school-age children with sensorineural hearing loss (7 to 15 years) and 18 age-matched children with normal hearing participated in this study. Children with hearing loss were bilateral hearing aid users. Severity of hearing loss ranged from mild to severe across participants, but most had mild to moderate hearing loss. Speech recognition thresholds for disyllabic words presented in a two-talker speech masker were estimated in the sound field using an adaptive, forced-choice procedure with a picture-pointing response. Participants were tested in each of four conditions: (1) male target speech/two-male-talker masker; (2) male target speech/two-female-talker masker; (3) female target speech/two-female-talker masker; and (4) female target speech/two-male-talker masker. Children with hearing loss were tested wearing their personal hearing aids at user settings. RESULTS Both groups of children showed a sex-mismatch benefit, requiring a more advantageous signal-to-noise ratio when the target and masker were matched in sex than when they were mismatched. However, the magnitude of the sex-mismatch benefit was significantly reduced for children with hearing loss relative to age-matched children with normal hearing. There was no effect of child age on the magnitude of the sex-mismatch benefit. The sex-mismatch benefit was larger for male target speech than for female target speech.
For children with hearing loss, the magnitude of the sex-mismatch benefit was not associated with degree of hearing loss or aided audibility. CONCLUSIONS The findings from the present study indicate that children with sensorineural hearing loss are able to capitalize on acoustic differences between speech produced by male and female talkers when asked to recognize target words in a competing speech masker. However, children with hearing loss experienced a smaller benefit relative to their peers with normal hearing. No association between the sex-mismatch benefit and measures of unaided thresholds or aided audibility was observed for children with hearing loss, suggesting that reduced peripheral encoding is not the only factor responsible for the smaller sex-mismatch benefit relative to children with normal hearing.
Affiliation(s)
- Lori J. Leibold: Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, USA
- Jenna M. Browning: Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, USA
- Emily Buss: Department of Otolaryngology/Head and Neck Surgery, The University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
|
27
|
Lalonde K, McCreery RW. Audiovisual Enhancement of Speech Perception in Noise by School-Age Children Who Are Hard of Hearing. Ear Hear 2021; 41:705-719. [PMID: 32032226 PMCID: PMC7822589 DOI: 10.1097/aud.0000000000000830] [Citation(s) in RCA: 18] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES The purpose of this study was to examine age- and hearing-related differences in school-age children's benefit from visual speech cues. The study addressed three questions: (1) Do age and hearing loss affect degree of audiovisual (AV) speech enhancement in school-age children? (2) Are there age- and hearing-related differences in the mechanisms underlying AV speech enhancement in school-age children? (3) What cognitive and linguistic variables predict individual differences in AV benefit among school-age children? DESIGN Forty-eight children between 6 and 13 years of age (19 with mild to severe sensorineural hearing loss; 29 with normal hearing) and 14 adults with normal hearing completed measures of auditory and AV syllable detection and/or sentence recognition in a two-talker masker and a spectrally matched noise. Children also completed standardized behavioral measures of receptive vocabulary, visuospatial working memory, and executive attention. Mixed linear modeling was used to examine effects of modality, listener group, and masker on sentence recognition accuracy and syllable detection thresholds. Pearson correlations were used to examine the relationship between individual differences in children's AV enhancement (AV minus auditory-only) and age, vocabulary, working memory, executive attention, and degree of hearing loss. RESULTS Significant AV enhancement was observed across all tasks, masker types, and listener groups. AV enhancement of sentence recognition was similar across maskers, but children with normal hearing exhibited less AV enhancement of sentence recognition than adults with normal hearing and children with hearing loss. AV enhancement of syllable detection was greater in the two-talker masker than the noise masker, but did not vary significantly across listener groups. Degree of hearing loss positively correlated with individual differences in AV benefit on the sentence recognition task in noise, but not on the detection task.
None of the cognitive and linguistic variables correlated with individual differences in AV enhancement of syllable detection or sentence recognition. CONCLUSIONS Although AV benefit to syllable detection results from the use of visual speech to increase temporal expectancy, AV benefit to sentence recognition requires that an observer extracts phonetic information from the visual speech signal. The findings from this study suggest that all listener groups were equally good at using temporal cues in visual speech to detect auditory speech, but that adults with normal hearing and children with hearing loss were better than children with normal hearing at extracting phonetic information from the visual signal and/or using visual speech information to access phonetic/lexical representations in long-term memory. These results suggest that standard, auditory-only clinical speech recognition measures likely underestimate real-world speech recognition skills of children with mild to severe hearing loss.
Affiliation(s)
- Kaylah Lalonde: Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE, USA
- Ryan W. McCreery: Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE, USA
|
28
|
Nagels L, Gaudrain E, Vickers D, Hendriks P, Başkent D. School-age children benefit from voice gender cue differences for the perception of speech in competing speech. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 149:3328. [PMID: 34241121 DOI: 10.1121/10.0004791] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/24/2020] [Accepted: 04/08/2021] [Indexed: 06/13/2023]
Abstract
Differences in speakers' voice characteristics, such as mean fundamental frequency (F0) and vocal-tract length (VTL), which primarily define speakers' so-called perceived voice gender, facilitate the perception of speech in competing speech. Perceiving speech in competing speech is particularly challenging for children, which may relate to their lower sensitivity to differences in voice characteristics compared with adults. This study investigated the development of the benefit from F0 and VTL differences in school-age children (4-12 years) for separating two competing speakers when tasked with comprehending one of them, as well as the relationship between this benefit and children's corresponding voice discrimination thresholds. Children benefited from differences in F0, VTL, or both cues at all ages tested. This benefit remained proportionally the same across age, although overall accuracy continued to differ from that of adults. Additionally, children's benefit from F0 and VTL differences and their overall accuracy were not related to their discrimination thresholds. Hence, although children's voice discrimination thresholds and speech-in-competing-speech perception abilities develop throughout the school-age years, children already show a benefit from voice gender cue differences early on. Factors other than children's discrimination thresholds seem to relate more closely to their developing speech-in-competing-speech perception abilities.
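Voice-cue differences like the F0 and VTL manipulations described in this literature are conventionally expressed in semitones (st), a logarithmic ratio measure. A minimal sketch of that standard conversion, with example frequencies chosen purely for illustration:

```python
import math

def semitones(f_ref_hz, f_hz):
    """Distance between two frequencies in semitones: 12 * log2(f / f_ref)."""
    return 12 * math.log2(f_hz / f_ref_hz)

# A doubling of F0 (e.g. 120 Hz -> 240 Hz, roughly a male-to-child shift)
# corresponds to one octave, i.e. 12 semitones.
print(round(semitones(120.0, 240.0), 2))  # 12.0
```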
Affiliation(s)
- Leanne Nagels: Center for Language and Cognition Groningen (CLCG), University of Groningen, Groningen 9712EK, Netherlands
- Etienne Gaudrain: CNRS UMR 5292, Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics, Inserm UMRS 1028, Université Claude Bernard Lyon 1, Université de Lyon, Lyon, France
- Deborah Vickers: Sound Lab, Cambridge Hearing Group, Clinical Neurosciences Department, University of Cambridge, Cambridge CB2 0SZ, United Kingdom
- Petra Hendriks: Center for Language and Cognition Groningen (CLCG), University of Groningen, Groningen 9712EK, Netherlands
- Deniz Başkent: Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen 9713GZ, Netherlands
|
29
|
Extended high-frequency hearing and head orientation cues benefit children during speech-in-speech recognition. Hear Res 2021; 406:108230. [PMID: 33951577 DOI: 10.1016/j.heares.2021.108230] [Citation(s) in RCA: 17] [Impact Index Per Article: 5.7] [Reference Citation Analysis] [Abstract] [Key Words] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/22/2020] [Revised: 03/03/2021] [Accepted: 03/18/2021] [Indexed: 12/29/2022]
Abstract
While the audible frequency range for humans spans approximately 20 Hz to 20 kHz, children display enhanced sensitivity relative to adults when detecting extended high frequencies (frequencies above 8 kHz; EHFs), as indicated by better pure tone thresholds. The impact that this increased hearing sensitivity to EHFs may have on children's speech recognition has not been established. One context in which EHF hearing may be particularly important for children is when recognizing speech in the presence of competing talkers. In the present study, we examined the extent to which school-age children (ages 5-17 years) with normal hearing were able to benefit from EHF cues when recognizing sentences in a two-talker speech masker. Two filtering conditions were tested: all stimuli were either full band or were low-pass filtered at 8 kHz to remove EHFs. Given that EHF energy emission in speech is highly dependent on head orientation of the talker (i.e., radiation becomes more directional with increasing frequency), two masker head angle conditions were tested: both co-located maskers were facing 45°, or both were facing 60° relative to the listener. The results demonstrated that regardless of age, children performed better when EHFs were present. In addition, a small change in masker head orientation also impacted performance, with better recognition at 60° compared to 45°. These findings suggest that EHF energy in the speech signal above 8 kHz is beneficial for children in complex listening situations. The magnitude of benefit from EHF cues and talker head orientation cues did not differ between children and adults. Therefore, while EHFs were beneficial for children as young as 5 years of age, children's generally better EHF hearing relative to adults did not provide any additional benefit.
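The filtering condition described above (removing all energy above 8 kHz) can be sketched as follows. This is a generic illustration, not the study's actual stimulus pipeline; the filter order and sampling rate are assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

fs = 44100  # assumed sampling rate in Hz
# 8th-order Butterworth low-pass at 8 kHz, as second-order sections
sos = butter(8, 8000, btype="low", fs=fs, output="sos")

def remove_ehf(signal):
    """Zero-phase low-pass filter at 8 kHz, removing extended high frequencies."""
    return sosfiltfilt(sos, signal)

# Demonstration: a 1 kHz component survives, a 12 kHz (EHF) component does not.
t = np.arange(fs) / fs
low = np.sin(2 * np.pi * 1000 * t)    # within the passband
high = np.sin(2 * np.pi * 12000 * t)  # EHF energy to be removed
filtered = remove_ehf(low + high)
```

Zero-phase filtering (`sosfiltfilt`) is used here so the low-pass condition does not introduce a temporal shift relative to the full-band condition.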
|
30
|
Heinrichs-Graham E, Walker EA, Eastman JA, Frenzel MR, Joe TR, McCreery RW. The impact of mild-to-severe hearing loss on the neural dynamics serving verbal working memory processing in children. Neuroimage Clin 2021; 30:102647. [PMID: 33838545 PMCID: PMC8056458 DOI: 10.1016/j.nicl.2021.102647] [Citation(s) in RCA: 8] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/28/2020] [Revised: 03/23/2021] [Accepted: 03/24/2021] [Indexed: 11/18/2022]
Abstract
Children with hearing loss (CHL) exhibit delays in language function relative to children with normal hearing (CNH). However, evidence on whether these delays extend into other cognitive domains such as working memory is mixed, with some studies showing decrements in CHL and others showing CHL performing at the level of CNH. Despite the growing literature investigating the impact of hearing loss on cognitive and language development, studies of the neural dynamics that underlie these cognitive processes are notably absent. This study sought to identify the oscillatory neural responses serving verbal working memory processing in CHL compared to CNH. To this end, participants with and without hearing loss performed a verbal working memory task during magnetoencephalography. Neural oscillatory responses associated with working memory encoding and maintenance were imaged separately, and these responses were statistically evaluated between CHL and CNH. While CHL performed as well on the task as CNH, CHL exhibited significantly elevated alpha-beta activity in the right frontal and precentral cortices during encoding relative to CNH. In contrast, CHL showed elevated alpha maintenance-related activity in the right precentral and parieto-occipital cortices. Crucially, right superior frontal encoding activity and right parieto-occipital maintenance activity correlated with language ability across groups. These data suggest that CHL may utilize compensatory right-hemispheric activity to achieve verbal working memory function at the level of CNH. Neural behavior in these regions may impact language function during crucial developmental ages.
Affiliation(s)
- Elizabeth Heinrichs-Graham: Institute for Human Neuroscience, Boys Town National Research Hospital (BTNRH), Omaha, NE, USA; Center for Magnetoencephalography (MEG), University of Nebraska Medical Center (UNMC), Omaha, NE, USA
- Elizabeth A Walker: Wendell Johnson Speech and Hearing Center, Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA
- Jacob A Eastman: Institute for Human Neuroscience, BTNRH, Omaha, NE, USA; Center for Magnetoencephalography (MEG), UNMC, Omaha, NE, USA
- Michaela R Frenzel: Institute for Human Neuroscience, BTNRH, Omaha, NE, USA; Center for Magnetoencephalography (MEG), UNMC, Omaha, NE, USA
- Timothy R Joe: Center for Magnetoencephalography (MEG), UNMC, Omaha, NE, USA
- Ryan W McCreery: Audibility, Perception, and Cognition Laboratory, BTNRH, Omaha, NE, USA
|
31
|
Wilczyński J, Ślęzak G. Level of Vocabulary Development and Selected Elements Regarding Sensory Integration and Balance in 5-Year-Old Girls and Boys. CHILDREN (BASEL, SWITZERLAND) 2021; 8:200. [PMID: 33800019 PMCID: PMC7999570 DOI: 10.3390/children8030200] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Subscribe] [Scholar Register] [Received: 01/14/2021] [Revised: 03/03/2021] [Accepted: 03/04/2021] [Indexed: 11/16/2022]
Abstract
The aim of this research was to assess relationships between the level of vocabulary and selected elements of sensory integration and balance in 5-year-old girls and boys, showing the differences between them. The study group consisted of 290 5-year-old children (172 boys and 118 girls) with different levels of vocabulary development and selected disturbances in sensory integration and balance processes. To evaluate developmental deficits of speech with regard to vocabulary, the Children's Dictionary Test was used. The Clinical Test of Sensory Integration and Balance (CTSIB) was also employed. In the overall assessment, 118 children (41%) had a low level of vocabulary, while 108 (37%) had an average level and 64 (22%) had a high level. However, the average score of all examined children (3.71 stens) indicates a low level of vocabulary development. Less developed vocabulary skills included the ability to create subordinate words and define concepts. There were no significant differences in the level of vocabulary between girls and boys. We observed disorders concerning selected elements of sensory integration and balance in most of the children, and more often in boys. There were statistically significant relationships between the level of vocabulary and selected disorders of sensory integration and balance; however, they were not unambiguous. Children with the lowest level of vocabulary in the overall assessment performed significantly worst in the CTSIB open-eyes, hard-surface test. However, in the closed-eyes, hard-surface test, the lowest score was obtained by children with a high overall assessment. In turn, in the open-eyes, soft-surface test, the lowest score was noted for children with an average overall assessment. In the complex CTSIB test, the lowest score was achieved by children with a low ability to define concepts.
The relationship between vocabulary level and sensory integration and balance requires further research. The demonstrated significant relationships between some aspects of vocabulary level and selected elements of sensory integration and balance confirm the need to care for the overall psychomotor sphere of a child.
Affiliation(s)
- Jacek Wilczyński: Laboratory of Posturology, Collegium Medicum, Jan Kochanowski University in Kielce, Al. IX Wieków Kielc 19, 25–317 Kielce, Poland
- Grzegorz Ślęzak: Municipal Psychological and Pedagogical Clinic Complex, Kielce, 75–215 Koszalin, Poland
|
32
|
Flaherty MM, Buss E, Leibold LJ. Independent and Combined Effects of Fundamental Frequency and Vocal Tract Length Differences for School-Age Children's Sentence Recognition in a Two-Talker Masker. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:206-217. [PMID: 33375828 PMCID: PMC8610228 DOI: 10.1044/2020_jslhr-20-00327] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/08/2020] [Revised: 09/08/2020] [Accepted: 09/29/2020] [Indexed: 06/12/2023]
Abstract
Purpose The purpose of this study was to examine the independent and combined contributions of fundamental frequency (F0) and vocal tract length (VTL) differences on children's speech-in-speech recognition in the presence of a competing two-talker masker. Method Participants were 64 children (5-17 years old) and 25 adults (18-39 years old). Sentence recognition thresholds were measured in a two-talker masker. Target sentences had either the same mean F0 and VTL of the masker or were digitally altered so that the target and masker differed in F0 (Experiment 1), differed in VTL (Experiment 2), or differed in both F0 and VTL (Experiment 3). To determine the benefit, masking release was computed by subtracting thresholds in each shifted condition from the threshold in the unshifted condition. Results Results demonstrate that children's ability to benefit from either F0 or VTL differences (Experiments 1 and 2) depended on listener age, with younger children showing less improvement in speech reception thresholds compared to older children and adults. Age effects were also evident in the combined-cue conditions (Experiment 3), but children showed greater improvements compared to F0-only or VTL-only manipulations. Conclusions There was a prolonged pattern of development in children's ability to benefit from F0 or VTL differences between target and masker speech. Young children failed to capitalize on F0 and VTL differences to the same extent as older children and adults but did show a robust benefit when the cues were combined, supporting the hypothesis that younger children rely more heavily on redundant cues compared to older children and adults.
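The masking-release computation described above (benefit = threshold in the unshifted condition minus threshold in each shifted condition, so that a positive value indicates benefit because lower speech reception thresholds are better) can be sketched as follows; the threshold values are invented for illustration only:

```python
def masking_release(unshifted_srt_db, shifted_srt_db):
    """Masking release in dB: how much the speech reception threshold (SRT)
    improves (drops) when target and masker differ in F0 and/or VTL."""
    return unshifted_srt_db - shifted_srt_db

# Hypothetical SRTs in dB signal-to-noise ratio (lower = better performance).
srts = {"unshifted": -2.0, "f0_shift": -6.5, "vtl_shift": -5.0, "f0_vtl_shift": -9.0}
release = {cond: masking_release(srts["unshifted"], srt)
           for cond, srt in srts.items() if cond != "unshifted"}
print(release)  # {'f0_shift': 4.5, 'vtl_shift': 3.0, 'f0_vtl_shift': 7.0}
```

In this made-up example, the combined F0+VTL condition yields the largest release, mirroring the pattern the study reports for combined cues.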
Affiliation(s)
- Mary M. Flaherty: Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign
- Emily Buss: Department of Otolaryngology/Head and Neck Surgery, School of Medicine, University of North Carolina at Chapel Hill
- Lori J. Leibold: Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
|
33
|
Calandruccio L, Porter HL, Leibold LJ, Buss E. The Clear-Speech Benefit for School-Age Children: Speech-in-Noise and Speech-in-Speech Recognition. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2020; 63:4265-4276. [PMID: 33151767 PMCID: PMC8608216 DOI: 10.1044/2020_jslhr-20-00353] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/22/2020] [Revised: 08/22/2020] [Accepted: 08/24/2020] [Indexed: 06/11/2023]
Abstract
Purpose Talkers often modify their speech when communicating with individuals who struggle to understand speech, such as listeners with hearing loss. This study evaluated the benefit of clear speech in school-age children and adults with normal hearing for speech-in-noise and speech-in-speech recognition. Method Masked sentence recognition thresholds were estimated for school-age children and adults using an adaptive procedure. In Experiment 1, the target and masker were summed and presented over a loudspeaker located directly in front of the listener. The masker was either speech-shaped noise or two-talker speech, and target sentences were produced using a clear or conversational speaking style. In Experiment 2, stimuli were presented over headphones. The two-talker speech masker was diotic (M0). Clear and conversational target sentences were presented either in-phase (T0) or out-of-phase (Tπ) between the two ears. The M0Tπ condition introduces a segregation cue that was expected to improve performance. Results For speech presented over a single loudspeaker (Experiment 1), the clear-speech benefit was independent of age for the noise masker, but it increased with age for the two-talker masker. Similar age effects for the two-talker speech masker were seen under headphones with diotic presentation (M0T0), but comparable clear-speech benefit as a function of age was observed with a binaural cue to facilitate segregation (M0Tπ). Conclusions Consistent with prior research, children showed a robust clear-speech benefit for speech-in-noise recognition. Immaturity in the ability to segregate target from masker speech may limit young children's ability to benefit from clear-speech modifications for speech-in-speech recognition under some conditions. When provided with a cue that facilitates segregation, children as young as 4-7 years of age derived a clear-speech benefit in a two-talker masker that was similar to the benefit experienced by adults.
Affiliation(s)
- Lauren Calandruccio: Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH
- Heather L. Porter: Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Lori J. Leibold: Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Emily Buss: Department of Otolaryngology/Head and Neck Surgery, The University of North Carolina at Chapel Hill
|
34
|
Magimairaj BM, Nagaraj NK, Sergeev AV, Benafield NJ. Comparison of Auditory, Language, Memory, and Attention Abilities in Children With and Without Listening Difficulties. Am J Audiol 2020; 29:710-727. [PMID: 32810407 DOI: 10.1044/2020_aja-20-00018] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
Objectives School-age children with and without parent-reported listening difficulties (LiD) were compared on auditory processing, language, memory, and attention abilities. The objective was to extend what is known so far in the literature about children with LiD by using multiple measures, including selective novel measures, across the above areas. Design Twenty-six children who were reported by their parents as having LiD and 26 age-matched typically developing children completed clinical tests of auditory processing and multiple measures of language, attention, and memory. All children had normal-range pure-tone hearing thresholds bilaterally. Group differences were examined. Results In addition to significantly poorer speech-perception-in-noise scores, children with LiD had reduced speed and accuracy of word retrieval from long-term memory, poorer short-term memory, and poorer sentence recall and inferencing ability. Statistically significant group differences were of moderate effect size; however, standard test scores of children with LiD were not clinically poor. No statistically significant group differences were observed in attention, working memory capacity, vocabulary, or nonverbal IQ. Conclusions Mild signal-to-noise ratio loss, as reflected by the group mean of children with LiD, supported the children's functional listening problems. In addition, the children's relative weakness in select areas of language performance, short-term memory, and long-term memory lexical retrieval speed and accuracy added to previous research on evidence-based areas that need to be evaluated in children with LiD, who almost always have heterogeneous profiles. Importantly, the functional difficulties faced by children with LiD in relation to their test results indicated, to some extent, that commonly used assessments may not adequately capture the children's listening challenges. Supplemental Material https://doi.org/10.23641/asha.12808607.
Affiliation(s)
- Beula M. Magimairaj: Cognitive Hearing Science Lab, Communicative Disorders and Deaf Education, Utah State University, Logan
- Naveen K. Nagaraj: Cognitive Hearing Science Lab, Communicative Disorders and Deaf Education, Utah State University, Logan
- Natalie J. Benafield: Department of Communication Sciences and Disorders, University of Central Arkansas, Conway
|
35
|
McCreery RW, Miller MK, Buss E, Leibold LJ. Cognitive and Linguistic Contributions to Masked Speech Recognition in Children. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2020; 63:3525-3538. [PMID: 32881629 PMCID: PMC8060059 DOI: 10.1044/2020_jslhr-20-00030] [Citation(s) in RCA: 20] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/24/2020] [Revised: 06/08/2020] [Accepted: 06/28/2020] [Indexed: 05/31/2023]
Abstract
Purpose The goal of this study was to examine the effects of cognitive and linguistic skills on masked speech recognition for children with normal hearing in three different masking conditions: (a) speech-shaped noise (SSN), (b) amplitude-modulated SSN (AMSSN), and (c) two-talker speech (TTS). We hypothesized that children with better working memory and language skills would have better masked speech recognition than peers with poorer skills in these areas. Selective attention was predicted to affect performance in the TTS masker due to increased cognitive demands from informational masking. Method A group of 60 children in two age groups (5- to 6-year-olds and 9- to 10-year-olds) with normal hearing completed sentence recognition in SSN, AMSSN, and TTS masker conditions. Speech recognition thresholds for 50% correct were measured. Children also completed standardized measures of language, memory, and executive function. Results Children's speech recognition was poorer in the TTS relative to the SSN and AMSSN maskers. Older children had lower speech recognition thresholds than younger children for all masker conditions. Greater language abilities were associated with better sentence recognition for the younger children in all masker conditions, but there was no effect of language for older children. Better working memory and selective attention skills were associated with better masked sentence recognition for both age groups, but only in the TTS masker condition. Conclusions The decreasing influence of vocabulary on masked speech recognition for older children supports the idea that this relationship depends on an interaction between the language level of the stimuli and the listener's vocabulary. Increased cognitive demands associated with perceptually isolating the target talker and two competing masker talkers with a TTS masker may result in the recruitment of working memory and selective attention skills, effects that were not observed in SSN or AMSSN maskers. 
Future research should evaluate these effects across a broader range of stimuli or with children who have hearing loss.
Affiliation(s)
- Ryan W. McCreery, Audibility, Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE
- Margaret K. Miller, Human Auditory Development Laboratory, Boys Town National Research Hospital, Omaha, NE
- Emily Buss, Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill
- Lori J. Leibold, Human Auditory Development Laboratory, Boys Town National Research Hospital, Omaha, NE
36
Buss E, Calandruccio L, Oleson J, Leibold LJ. Contribution of Stimulus Variability to Word Recognition in Noise Versus Two-Talker Speech for School-Age Children and Adults. Ear Hear 2020; 42:313-322. [PMID: 32881723 DOI: 10.1097/aud.0000000000000951] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
BACKGROUND Speech-in-speech recognition scores tend to be more variable than the speech-in-noise recognition scores, both within and across listeners. This variability could be due to listener factors, such as individual differences in audibility or susceptibility to informational masking. It could also be due to stimulus variability, with some speech-in-speech samples posing more of a challenge than others. The purpose of this experiment was to test two hypotheses: (1) that stimulus variability affects adults' word recognition in a two-talker speech masker and (2) that stimulus variability plays a smaller role in children's performance due to relatively greater contributions of listener factors. METHODS Listeners were children (5 to 10 years) and adults (18 to 41 years) with normal hearing. Target speech was a corpus of 30 disyllabic words, each associated with an unambiguous illustration. Maskers were 30 samples of either two-talker speech or speech-shaped noise. The task was a four-alternative forced choice. Speech reception thresholds were measured adaptively, and those results were used to determine the signal-to-noise ratio associated with ≈65% correct for each listener and masker. Two 30-word blocks of fixed-level testing were then completed in each of the two conditions: (1) with the target-masker pairs randomly assigned prior to each block and (2) with frozen target-masker pairs. RESULTS Speech reception thresholds were lower for adults than for children, particularly for the two-talker speech masker. Listener responses in fixed-level testing were evaluated for consistency across listeners. Target sample was the best predictor of performance in the speech-shaped noise masker for both the random and frozen conditions. In contrast, both the target and masker samples affected performance in the two-talker masker. 
Results were qualitatively similar for children and adults, and the pattern of performance across stimulus samples was consistent with differences in masked target audibility in both age groups. CONCLUSIONS Although word recognition in speech-shaped noise differed consistently across target words, recognition in a two-talker speech masker depended on both the target and masker samples. These stimulus effects are broadly consistent with a simple model of masked target audibility. Although variability in speech-in-speech recognition is often thought to reflect differences in informational masking, the present results suggest that variability in energetic masking across stimuli can play an important role in performance.
Affiliation(s)
- Emily Buss, Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
- Lauren Calandruccio, Department of Psychological Sciences, Case Western Reserve University, Cleveland, Ohio, USA
- Jacob Oleson, Department of Biostatistics, University of Iowa, Iowa City, Iowa, USA
- Lori J Leibold, Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, USA
37
Liu JS, Yu YF, Tao DD, Li Y, Ye F, Galvin JJ, Gopen Q, Fu QJ. Effects of Monaural Asymmetry and Target-Masker Similarity on Binaural Advantage in Children and Adults With Normal Hearing. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2020; 63:2811-2824. [PMID: 32777196 DOI: 10.1044/2020_jslhr-19-00269] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Purpose For colocated targets and maskers, binaural listening typically offers a small but significant advantage over monaural listening. This study investigated how monaural asymmetry and target-masker similarity may limit binaural advantage in adults and children. Method Ten Mandarin-speaking Chinese adults (aged 22-27 years) and 12 children (aged 7-14 years) with normal hearing participated in the study. Monaural and binaural speech recognition thresholds (SRTs) were adaptively measured for colocated competing speech. The target-masker sex was the same or different. Performance was measured using headphones for three listening conditions: left ear, right ear, and both ears. Binaural advantage was calculated relative to the poorer or better ear. Results Mean SRTs were significantly lower for adults than children. When the target-masker sex was the same, SRTs were significantly lower with the better ear than with the poorer ear or both ears (p < .05). When the target-masker sex was different, SRTs were significantly lower with the better ear or both ears than with the poorer ear (p < .05). Children and adults similarly benefitted from target-masker sex differences. Substantial monaural asymmetry was observed, but the effects of asymmetry on binaural advantage were similar between adults and children. Monaural asymmetry was significantly correlated with binaural advantage relative to the poorer ear (p = .004), but not to the better ear (p = .056). Conclusions Binaural listening may offer little advantage (or even a disadvantage) over monaural listening with the better ear, especially when competing talkers have similar vocal characteristics. Monaural asymmetry appears to limit binaural advantage in listeners with normal hearing, similar to observations in listeners with hearing impairment. While language development may limit perception of competing speech, it does not appear to limit the effects of monaural asymmetry or target-masker sex on binaural advantage.
Affiliation(s)
- Ji-Sheng Liu, Department of Ear, Nose, and Throat, The First Affiliated Hospital of Soochow University, Suzhou, China
- Ya-Feng Yu, Department of Ear, Nose, and Throat, The First Affiliated Hospital of Soochow University, Suzhou, China
- Duo-Duo Tao, Department of Ear, Nose, and Throat, The First Affiliated Hospital of Soochow University, Suzhou, China
- Yi Li, Department of Ear, Nose, and Throat, The First Affiliated Hospital of Soochow University, Suzhou, China
- Fei Ye, Department of Ear, Nose, and Throat, The First Affiliated Hospital of Soochow University, Suzhou, China
- Quinton Gopen, Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA
- Qian-Jie Fu, Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, CA
38
McFayden TC, Baskin P, Stephens JDW, He S. Cortical Auditory Event-Related Potentials and Categorical Perception of Voice Onset Time in Children With an Auditory Neuropathy Spectrum Disorder. Front Hum Neurosci 2020; 14:184. [PMID: 32523521 PMCID: PMC7261872 DOI: 10.3389/fnhum.2020.00184] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/17/2020] [Accepted: 04/27/2020] [Indexed: 11/13/2022] Open
Abstract
Objective: This study evaluated cortical encoding of voice onset time (VOT) in quiet and noise, and their potential associations with the behavioral categorical perception of VOT in children with auditory neuropathy spectrum disorder (ANSD). Design: Subjects were 11 children with ANSD ranging in age between 6.4 and 16.2 years. The stimulus was an /aba/-/apa/ vowel-consonant-vowel continuum comprising eight tokens with VOTs ranging from 0 ms (voiced endpoint) to 88 ms (voiceless endpoint). For speech in noise, speech tokens were mixed with the speech-shaped noise from the Hearing In Noise Test at a signal-to-noise ratio (SNR) of +5 dB. Speech-evoked auditory event-related potentials (ERPs) and behavioral categorical perception of VOT were measured in quiet in all subjects, and at an SNR of +5 dB in seven subjects. The stimuli were presented at 35 dB SL (re: pure tone average) or 115 dB SPL if this limit was less than 35 dB SL. In addition to the onset response, the auditory change complex (ACC) elicited by VOT was recorded in eight subjects. Results: Speech-evoked ERPs recorded in all subjects consisted of a vertex-positive peak (i.e., P1), followed by a trough occurring approximately 100 ms later (i.e., N2). For results measured in quiet, there was no significant difference in categorical boundaries estimated using ERP measures and behavioral procedures. Categorical boundaries estimated in quiet using both ERP and behavioral measures closely correlated with the most recently measured Phonetically Balanced Kindergarten (PBK) scores. Adding a competing background noise did not affect categorical boundaries estimated using either behavioral or ERP procedures in three subjects. For the other four subjects, categorical boundaries estimated in noise using behavioral measures were prolonged. However, adding background noise only increased categorical boundaries measured using ERPs in three out of these four subjects. 
Conclusions: The VCV continuum can be used to evaluate behavioral identification and the neural encoding of VOT in children with ANSD. In quiet, categorical boundaries of VOT estimated using behavioral measures and ERP recordings are closely associated with speech recognition performance in children with ANSD. Underlying mechanisms for excessive speech perception deficits in noise may vary for individual patients with ANSD.
Affiliation(s)
- Tyler C McFayden, Department of Psychology, Virginia Polytechnic Institute and State University, Blacksburg, VA, United States
- Paola Baskin, Department of Anesthesiology, School of Medicine, University of California, San Diego, San Diego, CA, United States
- Joseph D W Stephens, Department of Psychology, North Carolina Agricultural and Technical State University, Greensboro, NC, United States
- Shuman He, Department of Otolaryngology-Head and Neck Surgery, Wexner Medical Center, The Ohio State University, Columbus, OH, United States; Department of Audiology, Nationwide Children's Hospital, Columbus, OH, United States
39
Salanger M, Lewis D, Vallier T, McDermott T, Dergan A. Applying Virtual Reality to Audiovisual Speech Perception Tasks in Children. Am J Audiol 2020; 29:244-258. [PMID: 32250641 DOI: 10.1044/2020_aja-19-00004] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
Purpose The primary purpose of this study was to explore the efficacy of using virtual reality (VR) technology in hearing research with children by comparing speech perception abilities in a typical laboratory environment and a simulated VR classroom environment. Method The study included 48 final participants (40 children and eight young adults). The study design utilized a speech perception task in conjunction with a localization demand in auditory-only (AO) and auditory-visual (AV) conditions. Tasks were completed in simulated classroom acoustics in both a typical laboratory environment and in a virtual classroom environment accessed using an Oculus Rift head-mounted display. Results Speech perception scores were higher for AV conditions over AO conditions across age groups. In addition, interaction effects of environment (i.e., laboratory environment and VR classroom environment) and visual accessibility (i.e., AV vs. AO) indicated that children's performance on the speech perception task in the VR classroom was more similar to their performance in the laboratory environment for AV tasks than it was for AO tasks. AO tasks showed improvement in speech perception scores from the laboratory to the VR classroom environment, whereas AV conditions showed little significant change. Conclusion These results suggest that VR head-mounted displays are a viable research tool in AV tasks for children, increasing flexibility for audiovisual testing in a typical laboratory environment.
Affiliation(s)
- Dawna Lewis, Listening and Learning Laboratory, Boys Town National Research Hospital, Omaha, NE
- Timothy Vallier, Listening and Learning Laboratory, Boys Town National Research Hospital, Omaha, NE
- Tessa McDermott, Listening and Learning Laboratory, Boys Town National Research Hospital, Omaha, NE
- Andrew Dergan, Listening and Learning Laboratory, Boys Town National Research Hospital, Omaha, NE
40
Masked Sentence Recognition in Children, Young Adults, and Older Adults: Age-Dependent Effects of Semantic Context and Masker Type. Ear Hear 2020; 40:1117-1126. [PMID: 30601213 DOI: 10.1097/aud.0000000000000692] [Citation(s) in RCA: 29] [Impact Index Per Article: 7.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES Masked speech recognition in normal-hearing listeners depends in part on masker type and semantic context of the target. Children and older adults are more susceptible to masking than young adults, particularly when the masker is speech. Semantic context has been shown to facilitate noise-masked sentence recognition in all age groups, but it is not known whether age affects a listener's ability to use context with a speech masker. The purpose of the present study was to evaluate the effect of masker type and semantic context of the target as a function of listener age. DESIGN Listeners were children (5 to 16 years), young adults (19 to 30 years), and older adults (67 to 81 years), all with normal or near-normal hearing. Maskers were either speech-shaped noise or two-talker speech, and targets were either semantically correct (high context) sentences or semantically anomalous (low context) sentences. RESULTS As predicted, speech reception thresholds were lower for young adults than either children or older adults. Age effects were larger for the two-talker masker than the speech-shaped noise masker, and the effect of masker type was larger in children than older adults. Performance tended to be better for targets with high than low semantic context, but this benefit depended on age group and masker type. In contrast to adults, children benefitted less from context in the two-talker speech masker than the speech-shaped noise masker. Context effects were small compared with differences across age and masker type. CONCLUSIONS Different effects of masker type and target context are observed at different points across the lifespan. While the two-talker masker is particularly challenging for children and older adults, the speech masker may limit the use of semantic context in children but not adults.
41
Abstract
OBJECTIVES Emotional communication is important in children's social development. Previous studies have shown deficits in voice emotion recognition by children with moderate-to-severe hearing loss or with cochlear implants. Little, however, is known about emotion recognition in children with mild-to-moderate hearing loss. The objective of this study was to compare voice emotion recognition by children with mild-to-moderate hearing loss relative to their peers with normal hearing, under conditions in which the emotional prosody was either more or less exaggerated (child-directed or adult-directed speech, respectively). We hypothesized that the performance of children with mild-to-moderate hearing loss would be comparable to their normally hearing peers when tested with child-directed materials but would show significant deficits in emotion recognition when tested with adult-directed materials, which have reduced prosodic cues. DESIGN Nineteen school-aged children (8 to 14 years of age) with mild-to-moderate hearing loss and 20 children with normal hearing aged 6 to 17 years participated in the study. A group of 11 young, normally hearing adults was also tested. Stimuli comprised sentences spoken in one of five emotions (angry, happy, sad, neutral, and scared), either in a child-directed or in an adult-directed manner. The task was a single-interval, five-alternative forced-choice paradigm, in which the participants heard each sentence in turn and indicated which of the five emotions was associated with that sentence. Reaction time was also recorded as a measure of cognitive load. RESULTS Acoustic analyses confirmed the exaggerated prosodic cues in the child-directed materials relative to the adult-directed materials. Results showed significant effects of age, specific emotion (happy, sad, etc.), and test materials (better performance with child-directed materials) in both groups of children, as well as susceptibility to talker variability. 
Contrary to our hypothesis, no significant differences were observed between the 2 groups of children in either emotion recognition (percent correct or d' values) or in reaction time, with either child- or adult-directed materials. Among children with hearing loss, degree of hearing loss (mild or moderate) did not predict performance. In children with hearing loss, interactions between vocabulary, materials, and age were observed, such that older children with stronger vocabulary showed better performance with child-directed speech. Such interactions were not observed in children with normal hearing. The pattern of results was broadly consistent across the different measures of accuracy, d', and reaction time. CONCLUSIONS Children with mild-to-moderate hearing loss do not have significant deficits in overall voice emotion recognition compared with their normally hearing peers, but mechanisms involved may be different between the 2 groups. The results suggest a stronger role for linguistic ability in emotion recognition by children with normal hearing than by children with hearing loss.
Affiliation(s)
- Shauntelle A. Cannon, Department of Speech and Hearing Sciences, University of North Carolina at Chapel Hill, Chapel Hill, NC, USA; Auditory Prostheses & Perception Lab, Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE, USA
- Monita Chatterjee, Auditory Prostheses & Perception Lab, Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE, USA
42
Miller MK, Calandruccio L, Buss E, McCreery RW, Oleson J, Rodriguez B, Leibold LJ. Masked English Speech Recognition Performance in Younger and Older Spanish-English Bilingual and English Monolingual Children. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2019; 62:4578-4591. [PMID: 31830845 PMCID: PMC7839054 DOI: 10.1044/2019_jslhr-19-00059] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/09/2019] [Accepted: 05/09/2019] [Indexed: 06/01/2023]
Abstract
Purpose The purpose of this study was to compare masked English speech recognition thresholds between Spanish-English bilingual and English monolingual children and to evaluate effects of age, maternal education, and English receptive language abilities on individual differences in masked speech recognition. Method Forty-three Spanish-English bilingual children and 42 English monolingual children completed an English sentence recognition task in 2 masker conditions: (a) speech-shaped noise and (b) 2-talker English speech. Two age groups of children, younger (5-6 years) and older (9-10 years), were tested. The predictors of masked speech recognition performance were evaluated using 2 mixed-effects regression models. In the 1st model, fixed effects were age group (younger children vs. older children), language group (bilingual vs. monolingual), and masker type (speech-shaped noise vs. 2-talker speech). In the 2nd model, the fixed effects of receptive English vocabulary scores and maternal education level were also included. Results Younger children performed more poorly than older children, but no significant difference in masked speech recognition was observed between bilingual and monolingual children for either age group when English proficiency and maternal education were also included in the model. English language abilities fell within age-appropriate norms for both groups, but individual children with larger receptive vocabularies in English tended to show better recognition; this effect was stronger for younger children than for older children. Speech reception thresholds for all children were lower in the speech-shaped noise masker than in the 2-talker speech masker. Conclusions Regardless of age, similar masked speech recognition was observed for Spanish-English bilingual and English monolingual children tested in this study when receptive English language abilities were accounted for. 
Receptive English vocabulary scores were associated with better masked speech recognition performance for both bilinguals and monolinguals, with a stronger relationship observed for younger children than older children. Further investigation involving a Spanish-dominant bilingual sample is warranted given the high English language proficiency of children included in this study.
Affiliation(s)
- Margaret K. Miller, Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Lauren Calandruccio, Department of Psychological Sciences, Case Western Reserve University, Cleveland, OH
- Emily Buss, Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill
- Ryan W. McCreery, Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Jacob Oleson, Department of Biostatistics, University of Iowa, Iowa City
- Barbara Rodriguez, Department of Speech and Hearing Sciences, The University of New Mexico, Albuquerque
- Lori J. Leibold, Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
43
Kirby BJ, Spratford M, Klein KE, McCreery RW. Cognitive Abilities Contribute to Spectro-Temporal Discrimination in Children Who Are Hard of Hearing. Ear Hear 2019; 40:645-650. [PMID: 30130295 DOI: 10.1097/aud.0000000000000645] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVES Spectral ripple discrimination tasks have received considerable interest as potential clinical tools for use with adults and children with hearing loss. Previous results have indicated that performance on ripple tasks is affected by differences in aided audibility [quantified using the Speech Intelligibility Index (SII)] in children who wear hearing aids and that ripple thresholds tend to improve over time in children with and without hearing loss. Although ripple task performance is thought to depend less on language skills than common speech perception tasks, the extent to which spectral ripple discrimination might depend on other general cognitive abilities such as nonverbal intelligence and working memory is unclear. This is an important consideration for children because age-related changes in ripple test results could be due to developing cognitive ability and could obscure the effect of any changes in unaided or aided hearing over time. The purpose of this study was to establish the relationship between spectral ripple discrimination in a group of children who use hearing aids and general cognitive abilities such as nonverbal intelligence, visual and auditory working memory, and executive function. It was hypothesized that, after controlling for listener age, general cognitive ability would be associated with spectral ripple thresholds, and that performance on both auditory and visual cognitive tasks would be associated with spectral ripple thresholds. DESIGN Children who were full-time users of hearing aids for at least 1 year (n = 24, ages 6 to 13 years) participated in this study. Children completed a spectro-temporal modulated ripple discrimination task in the sound field using their personal hearing aids. Threshold was determined from the average of two repetitions of the task. Participants completed standard measurements of executive function, nonverbal intelligence, and visual and verbal working memory. 
Real ear verification measures were completed for each child with their personal hearing aids to determine aided SII. RESULTS Consistent with past findings, spectro-temporal ripple thresholds improved with greater listener age. Surprisingly, aided SII was not significantly correlated with spectro-temporal ripple thresholds potentially because this particular group of listeners had overall better hearing and greater aided SII than participants in previous studies. Partial correlations controlling for listener age revealed that greater nonverbal intelligence and visual working memory were associated with better spectro-temporal ripple discrimination thresholds. Verbal working memory, executive function, and language ability were not significantly correlated with spectro-temporal ripple discrimination thresholds. CONCLUSIONS These results indicate that greater general cognitive abilities are associated with better spectro-temporal ripple discrimination ability, independent of children's age or aided SII. It is possible that these relationships reflect the cognitive demands of the psychophysical task rather than a direct relationship of cognitive ability to spectro-temporal processing in the auditory system. Further work is needed to determine the relationships of cognitive abilities to ripple discrimination in other populations, such as children with cochlear implants or with a wider range of aided SII.
Affiliation(s)
- Benjamin J Kirby, Department of Communication Sciences and Disorders, Illinois State University, Normal, Illinois, USA
- Kelsey E Klein, Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, USA
44
Walker EA, Sapp C, Oleson JJ, McCreery RW. Longitudinal Speech Recognition in Noise in Children: Effects of Hearing Status and Vocabulary. Front Psychol 2019; 10:2421. [PMID: 31708849 PMCID: PMC6824244 DOI: 10.3389/fpsyg.2019.02421] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2019] [Accepted: 10/11/2019] [Indexed: 11/13/2022] Open
Abstract
Objectives: The aims of the current study were: (1) to compare growth trajectories of speech recognition in noise for children with normal hearing (CNH) and children who are hard of hearing (CHH) and (2) to determine the effects of auditory access, vocabulary size, and working memory on growth trajectories of speech recognition in noise in CHH. Design: Participants included 290 children enrolled in a longitudinal study. Children received a comprehensive battery of measures annually, including speech recognition in noise, vocabulary, and working memory. We collected measures of unaided and aided hearing and daily hearing aid (HA) use to quantify aided auditory experience (i.e., HA dosage). We used a longitudinal regression framework to examine the trajectories of speech recognition in noise in CNH and CHH. To determine factors that were associated with growth trajectories for CHH, we used a longitudinal regression model in which the dependent variable was speech recognition in noise scores, and the independent variables were grade, maternal education level, age at confirmation of hearing loss, vocabulary scores, working memory scores, and HA dosage. Results: We found a significant effect of grade and hearing status. Older children and CNH showed stronger speech recognition in noise scores compared to younger children and CHH. The growth trajectories for both groups were parallel over time. For CHH, older age, stronger vocabulary skills, and greater average HA dosage supported speech recognition in noise. Conclusion: The current study is among the first to compare developmental growth rates in speech recognition for CHH and CNH. CHH demonstrated persistent deficits in speech recognition in noise out to age 11, with no evidence of convergence or divergence between groups. These trends highlight the need to provide support for children with all degrees of hearing loss in the academic setting as they transition into secondary grades. 
The results also elucidate factors that influence growth trajectories for speech recognition in noise for children; stronger vocabulary skills and higher HA dosage supported speech recognition in degraded situations. This knowledge helps us to develop a more comprehensive model of spoken word recognition in children.
Affiliation(s)
- Elizabeth A. Walker
- Pediatric Audiology Laboratory, Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, United States
- Caitlin Sapp
- Pediatric Audiology Laboratory, Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, United States
- Jacob J. Oleson
- Department of Biostatistics, University of Iowa, Iowa City, IA, United States
- Ryan W. McCreery
- Center for Hearing Research, Audibility, Perception, and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE, United States

45
McCreery RW, Walker EA, Spratford M, Lewis D, Brennan M. Auditory, Cognitive, and Linguistic Factors Predict Speech Recognition in Adverse Listening Conditions for Children With Hearing Loss. Front Neurosci 2019; 13:1093. [PMID: 31680828 PMCID: PMC6803493 DOI: 10.3389/fnins.2019.01093] [Citation(s) in RCA: 45] [Impact Index Per Article: 9.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2019] [Accepted: 09/30/2019] [Indexed: 11/23/2022] Open
Abstract
Objectives: Children with hearing loss listen and learn in environments with noise and reverberation, but perform more poorly in noise and reverberation than children with normal hearing. Even with amplification, individual differences in speech recognition are observed among children with hearing loss. Few studies have examined the factors that support speech understanding in noise and reverberation for this population. This study applied the theoretical framework of the Ease of Language Understanding (ELU) model to examine the influence of auditory, cognitive, and linguistic factors on speech recognition in noise and reverberation for children with hearing loss. Design: Fifty-six children with hearing loss and 50 age-matched children with normal hearing who were 7–10 years old participated in this study. Aided sentence recognition was measured using an adaptive procedure to determine the signal-to-noise ratio for 50% correct (SNR50) recognition in steady-state speech-shaped noise. SNR50 was also measured with noise plus a simulation of 600 ms reverberation time. Receptive vocabulary, auditory attention, and visuospatial working memory were measured. Aided speech audibility indexed by the Speech Intelligibility Index was measured through the hearing aids of children with hearing loss. Results: Children with hearing loss had poorer aided speech recognition in noise and reverberation than children with normal hearing. Children with higher receptive vocabulary and working memory skills had better speech recognition in noise and noise plus reverberation than peers with poorer skills in these domains. Children with hearing loss with higher aided audibility had better speech recognition in noise and reverberation than peers with poorer audibility. Better audibility was also associated with stronger language skills. Conclusions: Children with hearing loss are at considerable risk for poor speech understanding in noise and in conditions with noise and reverberation. 
Consistent with the predictions of the ELU model, children with stronger vocabulary and working memory abilities performed better than peers with poorer skills in these domains. Better aided speech audibility was associated with better recognition in noise and noise plus reverberation conditions for children with hearing loss. Speech audibility had direct effects on speech recognition in noise and reverberation and cumulative effects on speech recognition in noise through a positive association with language development over time.
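The adaptive SNR50 procedure mentioned above can be sketched generically: a one-down, one-up track converges on the SNR yielding 50% correct. The step sizes, reversal count, and averaging rule below are illustrative assumptions, not the study's published parameters.

```python
def staircase_snr50(respond, start_snr=10.0, step=4.0, small_step=2.0,
                    n_reversals=8, max_trials=200):
    """One-down, one-up adaptive track converging on ~50% correct (SNR50).

    respond(snr) -> True if the (simulated or real) listener is correct.
    The step size is reduced after the first two reversals, and the
    threshold estimate is the mean SNR at the remaining reversals.
    """
    snr, direction, reversals = start_snr, 0, []
    for _ in range(max_trials):
        if len(reversals) >= n_reversals:
            break
        new_dir = -1 if respond(snr) else +1   # correct -> harder (lower SNR)
        if direction != 0 and new_dir != direction:
            reversals.append(snr)              # track changed direction here
        direction = new_dir
        snr += new_dir * (step if len(reversals) < 2 else small_step)
    return sum(reversals[2:]) / max(len(reversals) - 2, 1)
```

With a deterministic listener who is correct whenever the SNR is at or above −5 dB, the track settles around −5 dB.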
Affiliation(s)
- Ryan W McCreery
- The Audibility Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE, United States
- Elizabeth A Walker
- Pediatric Audiology Laboratory, Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, United States
- Meredith Spratford
- The Audibility Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE, United States
- Dawna Lewis
- The Audibility Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE, United States
- Marc Brennan
- Amplification and Perception Laboratory, Department of Special Education and Communication Disorders, University of Nebraska, Lincoln, NE, United States

46
Cabrera L, Varnet L, Buss E, Rosen S, Lorenzi C. Development of temporal auditory processing in childhood: Changes in efficiency rather than temporal-modulation selectivity. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2019; 146:2415. [PMID: 31672005 DOI: 10.1121/1.5128324] [Citation(s) in RCA: 16] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/25/2019] [Accepted: 09/16/2019] [Indexed: 05/22/2023]
Abstract
The ability to detect amplitude modulation (AM) is essential to distinguish the spectro-temporal features of speech from those of a competing masker. Previous work shows that AM sensitivity improves until 10 years of age. This may relate to the development of sensory factors (tuning of AM filters, susceptibility to AM masking) or to changes in processing efficiency (reduction in internal noise, optimization of decision strategies). To disentangle these hypotheses, three groups of children (5-11 years) and one of young adults completed psychophysical tasks measuring thresholds for detecting sinusoidal AM (with a rate of 4, 8, or 32 Hz) applied to carriers whose inherent modulations exerted different amounts of AM masking. Results showed that between 5 and 11 years, AM detection thresholds improved and that susceptibility to AM masking slightly increased. However, the effects of AM rate and carrier were not associated with age, suggesting that sensory factors are mature by 5 years. Subsequent modelling indicated that reducing internal noise by a factor of 10 accounted for the observed developmental trends. Finally, children's consonant identification thresholds in noise related to some extent to AM sensitivity. Increased efficiency in AM detection may support better use of temporal information in speech during childhood.
Affiliation(s)
- Laurianne Cabrera
- Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, WC1N 1PF, London, United Kingdom
- Léo Varnet
- Laboratoire des Systèmes Perceptifs, Ecole Normale Supérieure, Centre National de la Recherche Scientifique, Université Paris Sciences et Lettres, 29 Rue d'Ulm, 75005, Paris, France
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, School of Medicine, University of North Carolina, Chapel Hill, North Carolina 27599, USA
- Stuart Rosen
- Department of Speech, Hearing and Phonetic Sciences, University College London, Chandler House, 2 Wakefield Street, WC1N 1PF, London, United Kingdom
- Christian Lorenzi
- Laboratoire des Systèmes Perceptifs, Ecole Normale Supérieure, Centre National de la Recherche Scientifique, Université Paris Sciences et Lettres, 29 Rue d'Ulm, 75005, Paris, France

47
Leibold LJ, Buss E. Masked Speech Recognition in School-Age Children. Front Psychol 2019; 10:1981. [PMID: 31551862 PMCID: PMC6733920 DOI: 10.3389/fpsyg.2019.01981] [Citation(s) in RCA: 44] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/20/2019] [Accepted: 08/13/2019] [Indexed: 11/13/2022] Open
Abstract
Children who are typically developing often struggle to hear and understand speech in the presence of competing background sounds, particularly when the background sounds are also speech. For example, in many cases, young school-age children require an additional 5- to 10-dB signal-to-noise ratio relative to adults to achieve the same word or sentence recognition performance in the presence of two streams of competing speech. Moreover, adult-like performance is not observed until adolescence. Despite ample converging evidence that children are more susceptible to auditory masking than adults, the field lacks a comprehensive model that accounts for the development of masked speech recognition. This review provides a synthesis of the literature on the typical development of masked speech recognition. Age-related changes in the ability to recognize phonemes, words, or sentences in the presence of competing background sounds will be discussed by considering (1) how masking sounds influence the sensory encoding of target speech; (2) differences in the time course of development for speech-in-noise versus speech-in-speech recognition; and (3) the central auditory and cognitive processes required to separate and attend to target speech when multiple people are speaking at the same time.
Affiliation(s)
- Lori J Leibold
- Human Auditory Development Laboratory, Department of Research, Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE, United States
- Emily Buss
- Psychoacoustics Laboratories, Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States

48
Walker EA, Kessler D, Klein K, Spratford M, Oleson JJ, Welhaven A, McCreery RW. Time-Gated Word Recognition in Children: Effects of Auditory Access, Age, and Semantic Context. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2019; 62:2519-2534. [PMID: 31194921 PMCID: PMC6808355 DOI: 10.1044/2019_jslhr-h-18-0407] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/06/2018] [Revised: 01/07/2019] [Accepted: 02/19/2019] [Indexed: 06/03/2023]
Abstract
Purpose We employed a time-gated word recognition task to investigate how children who are hard of hearing (CHH) and children with normal hearing (CNH) combine cognitive-linguistic abilities and acoustic-phonetic cues to recognize words in sentence-final position. Method The current study included 40 CHH and 30 CNH in 1st or 3rd grade. Participants completed vocabulary and working memory tests and a time-gated word recognition task consisting of 14 high- and 14 low-predictability sentences. A time-to-event model was used to evaluate the effect of the independent variables (age, hearing status, predictability) on word recognition. Mediation models were used to examine the associations between the independent variables (vocabulary size and working memory), aided audibility, and word recognition. Results Gated words were identified significantly earlier for high-predictability than low-predictability sentences. First-grade CHH and CNH showed no significant difference in performance. Third-grade CHH needed more information than CNH to identify final words. Aided audibility was associated with word recognition. This association was fully mediated by vocabulary size but not working memory. Conclusions Both CHH and CNH benefited from the addition of semantic context. Interventions that focus on consistent aided audibility and vocabulary may enhance children's ability to fill in gaps in incoming messages.
Affiliation(s)
- Elizabeth A. Walker
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- David Kessler
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN
- Kelsey Klein
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City
- Anne Welhaven
- Department of Biostatistics, University of Iowa, Iowa City

49
Blomberg R, Danielsson H, Rudner M, Söderlund GBW, Rönnberg J. Speech Processing Difficulties in Attention Deficit Hyperactivity Disorder. Front Psychol 2019; 10:1536. [PMID: 31333549 PMCID: PMC6624822 DOI: 10.3389/fpsyg.2019.01536] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/19/2019] [Accepted: 06/18/2019] [Indexed: 12/20/2022] Open
Abstract
The large body of research that forms the ease of language understanding (ELU) model emphasizes the important contribution of cognitive processes when listening to speech in adverse conditions; however, speech-in-noise (SIN) processing is yet to be thoroughly tested in populations with cognitive deficits. The purpose of the current study was to contribute to the field in this regard by assessing SIN performance in a sample of adolescents with attention deficit hyperactivity disorder (ADHD) and comparing results with age-matched controls. This population was chosen because core symptoms of ADHD include developmental deficits in cognitive control and working memory capacity and because these top-down processes are thought to reach maturity during adolescence in individuals with typical development. The study utilized natural language sentence materials under experimental conditions that manipulated the dependency on cognitive mechanisms in varying degrees. In addition, participants were tested on cognitive capacity measures of complex working memory-span, selective attention, and lexical access. Primary findings were in support of the ELU-model. Age was shown to significantly covary with SIN performance, and after controlling for age, ADHD participants demonstrated greater difficulty than controls with the experimental manipulations. In addition, overall SIN performance was strongly predicted by individual differences in cognitive capacity. Taken together, the results highlight the general disadvantage persons with deficient cognitive capacity have when attending to speech in typically noisy listening environments. Furthermore, the consistently poorer performance observed in the ADHD group suggests that auditory processing tasks designed to tax attention and working memory capacity may prove to be beneficial clinical instruments when diagnosing ADHD.
Affiliation(s)
- Rina Blomberg
- Disability Research Division, Institute for Behavioral Science and Learning, Linköping University, Linköping, Sweden
- Henrik Danielsson
- Disability Research Division, Institute for Behavioral Science and Learning, Linköping University, Linköping, Sweden
- Mary Rudner
- Disability Research Division, Institute for Behavioral Science and Learning, Linköping University, Linköping, Sweden
- Göran B W Söderlund
- Faculty of Teacher Education Arts and Sports, Western Norway University of Applied Sciences, Sogndal, Norway
- Jerker Rönnberg
- Disability Research Division, Institute for Behavioral Science and Learning, Linköping University, Linköping, Sweden

50
Buss E, Lorenzi C, Cabrera L, Leibold LJ, Grose JH. Amplitude modulation detection and modulation masking in school-age children and adults. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2019; 145:2565. [PMID: 31046373 PMCID: PMC6909994 DOI: 10.1121/1.5098950] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 01/10/2019] [Revised: 04/02/2019] [Accepted: 04/03/2019] [Indexed: 05/30/2023]
Abstract
Two experiments were performed to better understand on- and off-frequency modulation masking in normal-hearing school-age children and adults. Experiment 1 estimated thresholds for detecting 16-, 64- or 256-Hz sinusoidal amplitude modulation (AM) imposed on a 4300-Hz pure tone. Thresholds tended to improve with age, with larger developmental effects for 64- and 256-Hz AM than 16-Hz AM. Detection of 16-Hz AM was also measured with a 1000-Hz off-frequency masker tone carrying 16-Hz AM. Off-frequency modulation masking was larger for younger than older children and adults when the masker was gated with the target, but not when the masker was continuous. Experiment 2 measured detection of 16- or 64-Hz sinusoidal AM carried on a bandpass noise with and without additional on-frequency masker AM. Children and adults demonstrated modulation masking with similar tuning to modulation rate. Rate-dependent age effects for AM detection on a pure-tone carrier are consistent with maturation of temporal resolution, an effect that may be obscured by modulation masking for noise carriers. Children were more susceptible than adults to off-frequency modulation masking for gated stimuli, consistent with maturation in the ability to listen selectively in frequency, but the children were not more susceptible to on-frequency modulation masking than adults.
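The stimuli in studies of this kind are sinusoidally amplitude-modulated (SAM) tones: a carrier multiplied by (1 + m·sin(2πf_m·t)), with modulation depth conventionally reported as 20·log10(m) dB. The sketch below generates such a stimulus; the duration, level, and lack of onset/offset ramps are simplifications relative to any real experimental protocol.

```python
import numpy as np

def sam_tone(fc=4300.0, fm=16.0, depth_db=-6.0, dur=0.5, fs=44100):
    """Sinusoidally amplitude-modulated tone: (1 + m*sin(2*pi*fm*t)) * carrier.

    depth_db is 20*log10(m); -6 dB corresponds to m ~ 0.5, and 0 dB
    (m = 1) is full modulation. The defaults echo the conditions named
    in the abstract (4300-Hz carrier, 16-Hz modulation rate).
    """
    m = 10.0 ** (depth_db / 20.0)
    t = np.arange(int(dur * fs)) / fs
    envelope = 1.0 + m * np.sin(2.0 * np.pi * fm * t)
    return envelope * np.sin(2.0 * np.pi * fc * t)
```

An adaptive threshold procedure would vary `depth_db` trial by trial until the modulated and unmodulated stimuli are just distinguishable.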
Affiliation(s)
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, School of Medicine, University of North Carolina, Chapel Hill, North Carolina 27599-7070, USA
- Christian Lorenzi
- Laboratoire des Systèmes Perceptifs, Département d'Études Cognitives, Ecole Normale Supérieure, Université Paris Sciences et Lettres, Centre National de la Recherche Scientifique, Paris, France
- Laurianne Cabrera
- Laboratoire de Psychologie de la Perception, Université Paris Descartes, Centre National de la Recherche Scientifique, Paris, France
- Lori J Leibold
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska 68131, USA
- John H Grose
- Department of Otolaryngology/Head and Neck Surgery, School of Medicine, University of North Carolina, Chapel Hill, North Carolina 27599-7070, USA