1. Lalonde K, Walker EA, Leibold LJ, McCreery RW. Predictors of Susceptibility to Noise and Speech Masking Among School-Age Children With Hearing Loss or Typical Hearing. Ear Hear 2024; 45:81-93. PMID: 37415268; PMCID: PMC10771540; DOI: 10.1097/aud.0000000000001403.
Abstract
OBJECTIVES The purpose of this study was to evaluate effects of masker type and hearing group on the relationship between school-age children's speech recognition and age, vocabulary, working memory, and selective attention. This study also explored effects of masker type and hearing group on the time course of maturation of masked speech recognition. DESIGN Participants included 31 children with normal hearing (CNH) and 41 children with mild to severe bilateral sensorineural hearing loss (CHL), between 6.7 and 13 years of age. Children with hearing aids used their personal hearing aids throughout testing. Audiometric thresholds and standardized measures of vocabulary, working memory, and selective attention were obtained from each child, along with masked sentence recognition thresholds in a steady-state, speech-spectrum noise (SSN) and in a two-talker speech masker (TTS). Aided audibility through children's hearing aids was calculated based on the Speech Intelligibility Index (SII) for all children wearing hearing aids. Linear mixed-effects models were used to examine the contribution of group, age, vocabulary, working memory, and attention to individual differences in speech recognition thresholds in each masker. Additional models were constructed to examine the role of aided audibility in masked speech recognition in CHL. Finally, to explore the time course of maturation of masked speech perception, linear mixed-effects models were used to examine interactions between age, masker type, and hearing group as predictors of masked speech recognition. RESULTS Children's thresholds were higher in TTS than in SSN. There was no interaction of hearing group and masker type. CHL had higher thresholds than CNH in both maskers. In both hearing groups and masker types, children with better vocabularies had lower thresholds. An interaction of hearing group and attention was observed only in the TTS. Among CNH, attention predicted thresholds in TTS. Among CHL, vocabulary and aided audibility predicted thresholds in TTS. In both maskers, thresholds decreased as a function of age at a similar rate in CNH and CHL. CONCLUSIONS The factors contributing to individual differences in speech recognition differed as a function of masker type. In TTS, the factors contributing to individual differences in speech recognition further differed as a function of hearing group. Whereas attention predicted variance for CNH in TTS, vocabulary and aided audibility predicted variance in CHL. CHL required a more favorable signal-to-noise ratio (SNR) to recognize speech in TTS than in SSN (mean = +1 dB in TTS, -3 dB in SSN). We posit that failures in auditory stream segregation limit the extent to which CHL can recognize speech in a speech masker. Larger sample sizes or longitudinal data are needed to characterize the time course of maturation of masked speech perception in CHL.
Affiliation(s)
- Kaylah Lalonde, Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Elizabeth A. Walker, Department of Communication Sciences and Disorders, The University of Iowa, Iowa City, IA
- Lori J. Leibold, Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Ryan W. McCreery, Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
2. Magimairaj BM, Nagaraj NK, Champlin CA, Thibodeau LK, Loeb DF, Gillam RB. Speech Perception in Noise Predicts Oral Narrative Comprehension in Children With Developmental Language Disorder. Front Psychol 2021; 12:735026. PMID: 34744907; PMCID: PMC8566731; DOI: 10.3389/fpsyg.2021.735026.
Abstract
We examined the relative contribution of auditory processing abilities (tone perception and speech perception in noise), after controlling for short-term memory capacity and vocabulary, to narrative language comprehension in children with developmental language disorder. Two hundred sixteen children with developmental language disorder, ages 6 to 9 years (mean age = 7 years, 6 months), were administered multiple measures. The dependent variable was children's score on the narrative comprehension scale of the Test of Narrative Language. Predictors were auditory processing abilities, phonological short-term memory capacity, and language (vocabulary) factors, with age, speech perception in quiet, and non-verbal IQ as covariates. Results showed that narrative comprehension was positively correlated with the majority of the predictors. Regression analysis suggested that speech perception in noise contributed uniquely to narrative comprehension in children with developmental language disorder, over and above all other predictors; however, tone perception tasks failed to explain unique variance. The relative importance of speech perception in noise over tone-perception measures for language comprehension reinforces the need for the assessment and management of listening-in-noise deficits and makes a compelling case for the functional implications of complex listening situations for children with developmental language disorder.
Affiliation(s)
- Beula M Magimairaj, Communicative Disorders and Deaf Education, Emma Eccles Jones Early Childhood Education and Research Center, Utah State University, Logan, UT, United States
- Naveen K Nagaraj, Communicative Disorders and Deaf Education, Emma Eccles Jones Early Childhood Education and Research Center, Utah State University, Logan, UT, United States
- Craig A Champlin, Speech, Language, and Hearing Sciences, The University of Texas at Austin, Austin, TX, United States
- Linda K Thibodeau, Callier Center for Communication Disorders, The University of Texas at Dallas, Dallas, TX, United States
- Diane F Loeb, Communication Sciences and Disorders, Baylor University, Waco, TX, United States
- Ronald B Gillam, Communicative Disorders and Deaf Education, Emma Eccles Jones Early Childhood Education and Research Center, Utah State University, Logan, UT, United States
3. Zarzo Benlloch M, Ygual Fernández A, Cervera Mérida JF. Relaciones entre habilidades de percepción y producción de habla y el desarrollo morfosintáctico en niños con Trastorno Fonológico que hablan español [Relations between speech perception and production skills and morphosyntactic development in Spanish-speaking children with Phonological Disorder]. Revista de Investigación en Logopedia 2021. DOI: 10.5209/rlog.72143.
Abstract
Research on grammatical development and its possible relationship to speech-processing deficits in children with Phonological Disorder (PD) is scarce, especially for Spanish. The objective was to analyze the influence of speech perception and production skills on the morphosyntactic development of children with PD without Language Disorder. Participants were 52 Spanish-speaking children aged 4 to 6 years: 26 with PD and 26 with typical development (TD), matched on chronological age, nonverbal intelligence quotient, and receptive vocabulary level. Morphosyntactic development was assessed with the CELF-Preschool-2-Spanish language test. The children completed a speech perception task, specifically phonological discrimination and recognition, and production was assessed through a phonological analysis of a picture-naming task. Children with PD scored significantly more poorly than children with TD on all variables. A mediation analysis showed a positive effect of speech perception on grammatical development, mediated by speech production. Children with PD show poorer morphosyntactic development than children with TD. They appear to learn language differently because they are less effective at extracting, manipulating, and producing the features of speech. In these children, grammatical development seems to depend on several factors, including speech perception and production and the synergistic effect these two processes have on each other.
4. Nakeva von Mentzer C. Phonemic discrimination and reproduction in 4-5-year-old children: Relations to hearing. Int J Pediatr Otorhinolaryngol 2020; 133:109981. PMID: 32247932; DOI: 10.1016/j.ijporl.2020.109981.
Abstract
OBJECTIVE The long-term objective of this research is to highlight the importance of speech perception assessment in children with developmental language disorder (DLD) and to investigate how hearing contributes to speech and language skills. As a first step toward fulfilling this aim, the present study explored relations between phonemic discrimination and reproduction and sensitive measures of hearing in young healthy children. METHODS The American Listen-Say test was developed and served as the speech perception tool. This test quantifies discrimination of phonemic contrasts in both quiet and noise conditions, along with reproduction scores, all measured within one session. Speech tokens were perceptually homogenized in noise. Forty-one 4- to 5-year-old American children participated. Phonemic discrimination (in quiet and in speech-shaped noise), phonemic reproduction, audiometric thresholds in the conventional (1-8 kHz) and extended high-frequency (EHF; 10-16 kHz) ranges, and distortion product otoacoustic emissions (DPOAEs) were examined. RESULTS All children had normal hearing thresholds within the conventional range (mean PTA bilaterally 8.6 dB HL). Ten (24.3%) of the children had elevated EHF thresholds (>20 dB HL) for one or more frequencies or ears, and six (14.6%) had DPOAE signal-to-noise ratios (SNRs) <6 dB. EHF thresholds and DPOAE SNRs were significantly associated. Children's phonemic discrimination was impaired in noise relative to quiet. There was a moderate, significant correlation between overall phonemic discrimination in noise and EHF audiometric thresholds. CONCLUSIONS Overall, the present study showed that sensitive hearing measures enabled the detection of subtle hearing difficulties in young healthy children. In particular, phonemic discrimination in noise was associated with hearing. Implications of including sensitive hearing measures in children with DLD are discussed.
5. Leibold LJ, Buss E. Masked Speech Recognition in School-Age Children. Front Psychol 2019; 10:1981. PMID: 31551862; PMCID: PMC6733920; DOI: 10.3389/fpsyg.2019.01981.
Abstract
Children who are typically developing often struggle to hear and understand speech in the presence of competing background sounds, particularly when the background sounds are also speech. For example, in many cases, young school-age children require an additional 5- to 10-dB signal-to-noise ratio relative to adults to achieve the same word or sentence recognition performance in the presence of two streams of competing speech. Moreover, adult-like performance is not observed until adolescence. Despite ample converging evidence that children are more susceptible to auditory masking than adults, the field lacks a comprehensive model that accounts for the development of masked speech recognition. This review provides a synthesis of the literature on the typical development of masked speech recognition. Age-related changes in the ability to recognize phonemes, words, or sentences in the presence of competing background sounds will be discussed by considering (1) how masking sounds influence the sensory encoding of target speech; (2) differences in the time course of development for speech-in-noise versus speech-in-speech recognition; and (3) the central auditory and cognitive processes required to separate and attend to target speech when multiple people are speaking at the same time.
Affiliation(s)
- Lori J Leibold, Human Auditory Development Laboratory, Department of Research, Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE, United States
- Emily Buss, Psychoacoustics Laboratories, Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, NC, United States
6. Musacchia G, Ortiz-Mantilla S, Roesler CP, Rajendran S, Morgan-Byrne J, Benasich AA. Effects of noise and age on the infant brainstem response to speech. Clin Neurophysiol 2018; 129:2623-2634. DOI: 10.1016/j.clinph.2018.08.005.
7. Nakeva von Mentzer C, Sundström M, Enqvist K, Hällgren M. Assessing speech perception in Swedish school-aged children: preliminary data on the Listen–Say test. Logoped Phoniatr Vocol 2017; 43:106-119. DOI: 10.1080/14015439.2017.1380076.
Affiliation(s)
- Martina Sundström, Department of Neuroscience, Unit for Speech Language Pathology, Uppsala University, Uppsala, Sweden
- Karin Enqvist, Department of Neuroscience, Unit for Speech Language Pathology, Uppsala University, Uppsala, Sweden
- Mathias Hällgren, Department of Otorhinolaryngology/Section of Audiology, Linköping University Hospital, Linköping, Sweden
8. McCreery RW, Spratford M, Kirby B, Brennan M. Individual differences in language and working memory affect children's speech recognition in noise. Int J Audiol 2017; 56:306-315. PMID: 27981855; PMCID: PMC5634965; DOI: 10.1080/14992027.2016.1266703.
Abstract
OBJECTIVE We examined how cognitive and linguistic skills affect speech recognition in noise for children with normal hearing. Children with better working memory and language abilities were expected to have better speech recognition in noise than peers with poorer skills in these domains. DESIGN As part of a prospective, cross-sectional study, children with normal hearing completed speech recognition in noise for three types of stimuli: (1) monosyllabic words, (2) syntactically correct but semantically anomalous sentences and (3) semantically and syntactically anomalous word sequences. Measures of vocabulary, syntax and working memory were used to predict individual differences in speech recognition in noise. STUDY SAMPLE Ninety-six children with normal hearing, who were between 5 and 12 years of age. RESULTS Higher working memory was associated with better speech recognition in noise for all three stimulus types. Higher vocabulary abilities were associated with better recognition in noise for sentences and word sequences, but not for words. CONCLUSIONS Working memory and language both influence children's speech recognition in noise, but the relationships vary across types of stimuli. These findings suggest that clinical assessment of speech recognition is likely to reflect underlying cognitive and linguistic abilities, in addition to a child's auditory skills, consistent with the Ease of Language Understanding model.
Affiliation(s)
- Ryan W. McCreery, Audibility, Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE, USA
- Meredith Spratford, Audibility, Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE, USA
- Benjamin Kirby, Department of Communication Sciences and Disorders, Illinois State University, Normal, IL, USA
- Marc Brennan, Audibility, Perception and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE, USA
9. Knowland VCP, Evans S, Snell C, Rosen S. Visual Speech Perception in Children With Language Learning Impairments. J Speech Lang Hear Res 2016; 59:1-14. PMID: 26895558; DOI: 10.1044/2015_jslhr-s-14-0269.
Abstract
PURPOSE The purpose of the study was to assess the ability of children with developmental language learning impairments (LLIs) to use visual speech cues from the talking face. METHOD In this cross-sectional study, 41 typically developing children (mean age: 8 years 0 months, range: 4 years 5 months to 11 years 10 months) and 27 children with diagnosed LLI (mean age: 8 years 10 months, range: 5 years 2 months to 11 years 6 months) completed a silent speechreading task and a speech-in-noise task with and without visual support from the talking face. The speech-in-noise task involved the identification of a target word in a carrier sentence with a single competing speaker as a masker. RESULTS Children in the LLI group showed a deficit in speechreading when compared with their typically developing peers. Beyond the single-word level, this deficit became more apparent in older children. On the speech-in-noise task, a substantial benefit of visual cues was found regardless of age or group membership, although the LLI group showed an overall developmental delay in speech perception. CONCLUSION Although children with LLI were less accurate than their peers on the speechreading and speech-in-noise tasks, both groups were able to make equivalent use of visual cues to boost performance accuracy when listening in noise.
10. Affordances and limitations of electronic storybooks for young children's emergent literacy. Developmental Review 2015. DOI: 10.1016/j.dr.2014.12.004.
11. Fine PA, Ginsborg J. Making myself understood: perceived factors affecting the intelligibility of sung text. Front Psychol 2014; 5:809. PMID: 25249987; PMCID: PMC4155173; DOI: 10.3389/fpsyg.2014.00809.
Abstract
Singing is universal, and understanding sung words is thought to be important for many listeners' enjoyment of vocal and choral music. However, this is not a trivial task, and sung text intelligibility is probably affected by many factors. A survey of musicians was undertaken to identify the factors believed to have most impact on intelligibility, and to assess the importance of understanding sung words in familiar and unfamiliar languages. A total of 143 professional and amateur musicians, including singers, singing teachers, and regular listeners to vocal music, provided 394 statements yielding 851 references to one or more of 43 discrete factors in four categories: performer-related, listener-related, environment-related and words/music-related. The factors mentioned most frequently in each of the four categories were, respectively: diction; hearing ability; acoustic; and genre. In more than a third of references, the extent to which sung text is intelligible was attributed to the performer. Over 60% of respondents rated the ability to understand words in familiar languages as "very important," but only 17% when the text was in an unfamiliar language. Professional musicians (47% of the sample) rated the importance of understanding in both familiar and unfamiliar languages significantly higher than amateurs but listed fewer factors overall and fewer listener-related factors. The more important the respondents rated understanding, the more performer-related and environment-related factors they tended to list. There were no significant differences between the responses of those who teach singing and those who do not. Enhancing sung text intelligibility is thus perceived to be within the singer's control, at least to some extent, but there are also many factors outside their control. Empirical research is needed to explore some of these factors in greater depth, and has the potential to inform pedagogy for singers, composers, and choral directors.
Affiliation(s)
- Philip A Fine, Department of Psychology, University of Buckingham, Buckingham, UK
- Jane Ginsborg, Centre for Music Performance Research, Royal Northern College of Music, Manchester, UK
12. Smeets DJH, van Dijken MJ, Bus AG. Using electronic storybooks to support word learning in children with severe language impairments. J Learn Disabil 2014; 47:435-449. PMID: 23213051; DOI: 10.1177/0022219412467069.
Abstract
Novel word learning is reported to be problematic for children with severe language impairments (SLI). In this study, we tested electronic storybooks as a tool to support vocabulary acquisition in SLI children. In Experiment 1, 29 kindergarten SLI children heard four e-books each four times: (a) two stories were presented as video books with motion pictures, music, and sounds, and (b) two stories included only static illustrations without music or sounds. Two other stories served as the control condition. Both static and video books were effective in increasing knowledge of unknown words, but static books were most effective. Experiment 2 was designed to examine which elements in video books interfere with word learning: video images or music or sounds. A total of 23 kindergarten SLI children heard 8 storybooks each four times: (a) two static stories without music or sounds, (b) two static stories with music or sounds, (c) two video stories without music or sounds, and (d) two video books with music or sounds. Video images and static illustrations were equally effective, but the presence of music or sounds moderated word learning. In children with severe SLI, background music interfered with learning. Problems with speech perception in noisy conditions may be an underlying factor of SLI and should be considered in selecting teaching aids and learning environments.
13. Nittrouer S, Caldwell-Tarr A, Tarr E, Lowenstein JH, Rice C, Moberly AC. Improving speech-in-noise recognition for children with hearing loss: potential effects of language abilities, binaural summation, and head shadow. Int J Audiol 2013; 52:513-525. PMID: 23834373; DOI: 10.3109/14992027.2013.792957.
Abstract
OBJECTIVE This study examined speech recognition in noise for children with hearing loss, compared it to recognition for children with normal hearing, and examined mechanisms that might explain variance in children's abilities to recognize speech in noise. DESIGN Word recognition was measured in two levels of noise, both when the speech and noise were co-located in front and when the noise came separately from one side. Four mechanisms were examined as factors possibly explaining variance: vocabulary knowledge, sensitivity to phonological structure, binaural summation, and head shadow. STUDY SAMPLE Participants were 113 eight-year-old children. Forty-eight had normal hearing (NH) and 65 had hearing loss: 18 with hearing aids (HAs), 19 with one cochlear implant (CI), and 28 with two CIs. RESULTS Phonological sensitivity explained a significant amount of between-groups variance in speech-in-noise recognition. Little evidence of binaural summation was found. Head shadow was similar in magnitude for children with NH and with CIs, regardless of whether they wore one or two CIs. Children with HAs showed reduced head shadow effects. CONCLUSION These outcomes suggest that in order to improve speech-in-noise recognition for children with hearing loss, intervention needs to be comprehensive, focusing on both language abilities and auditory mechanisms.
Affiliation(s)
- Susan Nittrouer, Department of Otolaryngology, The Ohio State University, Columbus, OH 43212, USA
14. Smiljanic R, Sladen D. Acoustic and semantic enhancements for children with cochlear implants. J Speech Lang Hear Res 2013; 56:1085-1096. PMID: 23785186; DOI: 10.1044/1092-4388(2012/12-0097).
Abstract
PURPOSE In this study, the authors examined how signal clarity interacts with the use of sentence context information in determining speech-in-noise recognition for children with cochlear implants and children with normal hearing. METHOD One hundred and twenty sentences in which the final word varied in predictability (high vs. low semantic context) were produced in conversational and clear speech. Nine children with cochlear implants and 9 children with normal hearing completed the sentence-in-noise listening tests and a standardized language measure. RESULTS Word recognition in noise improved significantly for both groups of children for high-predictability sentences in clear speech. Children with normal hearing benefited more from each source of information compared with children with cochlear implants. There was a significant correlation between more developed language skills and the ability to use contextual enhancements. The smaller context gain in clear speech for children with cochlear implants is in accord with the effortfulness hypothesis (McCoy et al., 2005) and points to the cumulative effects of noise throughout the processing system. CONCLUSION Modifications of the speech signal and the context of the utterances through changes in the talker output hold substantial promise as a communication enhancement technique for both children with cochlear implants and children with normal hearing.
15. Caldwell A, Nittrouer S. Speech perception in noise by children with cochlear implants. J Speech Lang Hear Res 2013; 56:13-30. PMID: 22744138; PMCID: PMC3810941; DOI: 10.1044/1092-4388(2012/11-0338).
Abstract
PURPOSE Common wisdom suggests that listening in noise poses disproportionately greater difficulty for listeners with cochlear implants (CIs) than for peers with normal hearing (NH). The purpose of this study was to examine phonological, language, and cognitive skills that might help explain speech-in-noise abilities for children with CIs. METHOD Three groups of kindergartners (NH, hearing aid wearers, and CI users) were tested on speech recognition in quiet and noise and on tasks thought to underlie the abilities that fit into the domains of phonological awareness, general language, and cognitive skills. These last measures were used as predictor variables in regression analyses with speech-in-noise scores as dependent variables. RESULTS Compared to children with NH, children with CIs did not perform as well on speech recognition in noise or on most other measures, including recognition in quiet. Two surprising results were that (a) noise effects were consistent across groups and (b) scores on other measures did not explain any group differences in speech recognition. CONCLUSIONS Limitations of implant processing take their primary toll on recognition in quiet and account for poor speech recognition and language/phonological deficits in children with CIs. Implications are that teachers/clinicians need to teach language/phonology directly and maximize signal-to-noise levels in the classroom.