1. Easwar V, Peng ZE, Boothalingam S, Seeto M. Neural Envelope Processing at Low Frequencies Predicts Speech Understanding of Children With Hearing Loss in Noise and Reverberation. Ear Hear 2024;45:837-849. PMID: 38768048; PMCID: PMC11175738; DOI: 10.1097/aud.0000000000001481.
Abstract
Objective: Children with hearing loss experience greater difficulty understanding speech in noise and reverberation than their normal-hearing peers, despite provision of appropriate amplification. The fidelity of fundamental frequency of voice (f0) encoding, a salient temporal cue for understanding speech in noise, could play a significant role in explaining the variance in abilities among children. However, the nature of deficits in f0 encoding and their relationship with speech understanding are poorly understood. To this end, we evaluated the influence of frequency-specific f0 encoding on the speech perception abilities of children with and without hearing loss in the presence of noise and/or reverberation.

Methods: In 14 school-aged children with sensorineural hearing loss fitted with hearing aids and 29 normal-hearing peers, envelope following responses (EFRs) were elicited by the vowel /i/, modified to estimate f0 encoding in low (<1.1 kHz) and higher frequencies simultaneously. EFRs to /i/ were elicited in quiet, in speech-shaped noise at +5 dB signal-to-noise ratio, with a simulated reverberation time of 0.62 sec, and with both noise and reverberation. EFRs were recorded using a single-channel electroencephalogram between the vertex and the nape while children watched a silent movie with captions. Speech discrimination accuracy was measured using the University of Western Ontario Distinctive Features Differences test in each of the four acoustic conditions. Stimuli for EFR recordings and speech discrimination were presented monaurally.

Results: Both groups of children demonstrated a frequency-dependent dichotomy in the disruption of f0 encoding, as reflected in EFR amplitude and phase coherence: noise caused greater disruption (i.e., lower EFR amplitudes and phase coherence) in EFRs elicited by low frequencies, whereas reverberation caused greater disruption in EFRs elicited by higher frequencies. Relative to normal-hearing peers, children with hearing loss demonstrated (a) greater disruption of f0 encoding at low frequencies, particularly in the presence of reverberation, and (b) a positive relationship between f0 encoding at low frequencies and speech discrimination in the hardest listening condition (i.e., when both noise and reverberation were present).

Conclusions: Together, these results provide new evidence for the persistence of suprathreshold temporal processing deficits related to f0 encoding in children despite the provision of appropriate amplification to compensate for hearing loss. These objectively measurable deficits may underlie the greater difficulty experienced by children with hearing loss.
Affiliation(s)
- Vijayalakshmi Easwar
- Waisman Center, University of Wisconsin Madison, Madison, Wisconsin, USA
- Communication Sciences and Disorders, University of Wisconsin Madison, Madison, Wisconsin, USA
- Communication Sciences Department, National Acoustic Laboratories, Sydney, Australia
- Linguistics, Macquarie University, Sydney, Australia
- Z. Ellen Peng
- Waisman Center, University of Wisconsin Madison, Madison, Wisconsin, USA
- Boys Town National Research Hospital, Omaha, Nebraska, USA
- Sriram Boothalingam
- Waisman Center, University of Wisconsin Madison, Madison, Wisconsin, USA
- Communication Sciences and Disorders, University of Wisconsin Madison, Madison, Wisconsin, USA
- Communication Sciences Department, National Acoustic Laboratories, Sydney, Australia
- Linguistics, Macquarie University, Sydney, Australia
2. Lalonde K, Walker EA, Leibold LJ, McCreery RW. Predictors of Susceptibility to Noise and Speech Masking Among School-Age Children With Hearing Loss or Typical Hearing. Ear Hear 2024;45:81-93. PMID: 37415268; PMCID: PMC10771540; DOI: 10.1097/aud.0000000000001403.
Abstract
Objectives: The purpose of this study was to evaluate effects of masker type and hearing group on the relationship between school-age children's speech recognition and age, vocabulary, working memory, and selective attention. The study also explored effects of masker type and hearing group on the time course of maturation of masked speech recognition.

Design: Participants included 31 children with normal hearing (CNH) and 41 children with mild to severe bilateral sensorineural hearing loss (CHL), between 6.7 and 13 years of age. Children with hearing aids used their personal hearing aids throughout testing. Audiometric thresholds and standardized measures of vocabulary, working memory, and selective attention were obtained from each child, along with masked sentence recognition thresholds in a steady-state, speech-spectrum noise (SSN) and in a two-talker speech masker (TTS). Aided audibility through the children's hearing aids was calculated based on the Speech Intelligibility Index (SII) for all children wearing hearing aids. Linear mixed-effects models were used to examine the contribution of group, age, vocabulary, working memory, and attention to individual differences in speech recognition thresholds in each masker. Additional models were constructed to examine the role of aided audibility in masked speech recognition in CHL. Finally, to explore the time course of maturation of masked speech perception, linear mixed-effects models were used to examine interactions between age, masker type, and hearing group as predictors of masked speech recognition.

Results: Children's thresholds were higher in TTS than in SSN, with no interaction of hearing group and masker type. CHL had higher thresholds than CNH in both maskers. In both hearing groups and masker types, children with better vocabularies had lower thresholds. An interaction of hearing group and attention was observed only in TTS: among CNH, attention predicted thresholds in TTS; among CHL, vocabulary and aided audibility predicted thresholds in TTS. In both maskers, thresholds decreased as a function of age at a similar rate in CNH and CHL.

Conclusions: The factors contributing to individual differences in speech recognition differed as a function of masker type, and in TTS they further differed as a function of hearing group: whereas attention predicted variance for CNH, vocabulary and aided audibility predicted variance in CHL. CHL required a more favorable signal-to-noise ratio (SNR) to recognize speech in TTS than in SSN (mean = +1 dB in TTS, -3 dB in SSN). We posit that failures in auditory stream segregation limit the extent to which CHL can recognize speech in a speech masker. Larger sample sizes or longitudinal data are needed to characterize the time course of maturation of masked speech perception in CHL.
Affiliation(s)
- Kaylah Lalonde
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Elizabeth A. Walker
- Department of Communication Sciences and Disorders, The University of Iowa, Iowa City, IA
- Lori J. Leibold
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Ryan W. McCreery
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
3. Shen Y, Langley L. Spectral weighting for sentence recognition in steady-state and amplitude-modulated noise. JASA Express Letters 2023;3:2887651. PMID: 37125871; PMCID: PMC10155216; DOI: 10.1121/10.0017934.
Abstract
Spectral weights in octave-frequency bands from 0.25 to 4 kHz were estimated for speech-in-noise recognition using two sentence materials (i.e., the IEEE and AzBio sentences). The masking noise was either unmodulated or sinusoidally amplitude-modulated at 8 Hz. The estimated spectral weights did not vary significantly across two test sessions and were similar for the two sentence materials. Amplitude-modulating the masker increased the weight at 2 kHz and decreased the weight at 0.25 kHz, which may support an upward shift in spectral weights for temporally fluctuating maskers.
Affiliation(s)
- Yi Shen
- Department of Speech and Hearing Sciences, University of Washington, 1417 Northeast 42nd Street, Seattle, Washington 98105-6246
- Lauren Langley
- Department of Speech and Hearing Sciences, University of Washington, 1417 Northeast 42nd Street, Seattle, Washington 98105-6246
4. Lewis D, Spratford M, Stecker GC, McCreery RW. Remote-Microphone Benefit in Noise and Reverberation for Children Who are Hard of Hearing. J Am Acad Audiol 2022;33:330-341. PMID: 36577441; PMCID: PMC10300232; DOI: 10.1055/s-0042-1755319.
Abstract
Background: Remote-microphone (RM) systems are designed to reduce the impact of poor acoustics on speech understanding. However, there is limited research examining the effects of adding reverberation to noise on speech understanding when using hearing aids (HAs) and RM systems. Given the significant challenges posed by environments with poor acoustics for children who are hard of hearing, we evaluated the ability of a novel RM system to address the effects of noise and reverberation.

Purpose: We assessed the effect of a recently developed RM system on the aided speech perception of children who are hard of hearing in noise and reverberation, and how their performance compared to that of peers who are not hard of hearing (i.e., who have hearing thresholds no greater than 15 dB HL). The effect of aided speech audibility on sentence recognition when using an RM system was also assessed.

Study Sample: Twenty-two children with mild to severe hearing loss and 17 children with hearing thresholds no greater than 15 dB HL (7-18 years) participated.

Data Collection and Analysis: An adaptive procedure was used to determine the signal-to-noise ratios for 50% and 95% correct sentence recognition in noise and in noise plus reverberation (RT 300 ms). Linear mixed models were used to examine the effect of listening condition on speech recognition with RMs for both groups of children, and the effect of aided audibility on performance across all listening conditions for children who are hard of hearing.

Results: Children who are hard of hearing had poorer speech recognition with HAs alone than with HAs plus RM. Regardless of hearing status, children had poorer speech recognition in noise plus reverberation than in noise alone. Children who are hard of hearing had poorer speech recognition than peers with thresholds no greater than 15 dB HL when using HAs alone, but comparable or better speech recognition with HAs plus RM. Children with better aided audibility through the HAs showed better speech recognition both with HAs alone and with HAs plus RM.

Conclusion: Providing HAs that maximize speech audibility and coupling them with RM systems has the potential to improve communication access and outcomes for children who are hard of hearing in environments with noise and reverberation.
Affiliation(s)
- Dawna Lewis
- Audibility, Perception, and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE
- Meredith Spratford
- Audibility, Perception, and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE
- Ryan W. McCreery
- Audibility, Perception, and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE
5. Lalonde K, Buss E, Miller MK, Leibold LJ. Face Masks Impact Auditory and Audiovisual Consonant Recognition in Children With and Without Hearing Loss. Front Psychol 2022;13:874345. PMID: 35645844; PMCID: PMC9137424; DOI: 10.3389/fpsyg.2022.874345.
Abstract
Teachers and students are wearing face masks in many classrooms to limit the spread of the coronavirus. Face masks disrupt speech understanding by concealing lip-reading cues and reducing transmission of high-frequency acoustic speech content. Transparent masks provide greater access to visual speech cues than opaque masks but tend to cause greater acoustic attenuation. This study examined the effects of four types of face masks on auditory-only and audiovisual speech recognition in 18 children with bilateral hearing loss, 16 children with normal hearing, and 38 adults with normal hearing tested in their homes, as well as 15 adults with normal hearing tested in the laboratory. Stimuli simulated the acoustic attenuation and visual obstruction caused by four different face masks: hospital, fabric, and two transparent masks. Participants tested in their homes completed auditory-only and audiovisual consonant recognition tests with speech-spectrum noise at 0 dB SNR. Adults tested in the lab completed the same tests at 0 and/or -10 dB SNR. A subset of participants from each group completed a visual-only consonant recognition test with no mask. Consonant recognition accuracy and transmission of three phonetic features (place of articulation, manner of articulation, and voicing) were analyzed using linear mixed-effects models. Children with hearing loss identified consonants less accurately than children with normal hearing and adults with normal hearing tested at 0 dB SNR. However, all groups were similarly impacted by face masks. Under auditory-only conditions, results were consistent with the pattern of high-frequency acoustic attenuation; hospital masks had the least impact on performance. Under audiovisual conditions, transparent masks had less impact on performance than opaque masks. High-frequency attenuation and visual obstruction had the greatest impact on place perception; the latter finding was consistent with the visual-only feature transmission data. These results suggest that the combination of noise and face masks negatively impacts speech understanding in children. The best mask for promoting speech understanding in noisy environments depends on whether visual cues will be accessible: hospital masks are best under auditory-only conditions, but well-fit transparent masks are best when listeners have a clear, consistent view of the talker's face.
Affiliation(s)
- Kaylah Lalonde
- Audiovisual Speech Processing Laboratory, Boys Town National Research Hospital, Center for Hearing Research, Omaha, NE, United States
- Emily Buss
- Speech Perception and Auditory Research at Carolina Laboratory, Department of Otolaryngology Head and Neck Surgery, University of North Carolina School of Medicine, Chapel Hill, NC, United States
- Margaret K. Miller
- Human Auditory Development Laboratory, Boys Town National Research Hospital, Center for Hearing Research, Omaha, NE, United States
- Lori J. Leibold
- Human Auditory Development Laboratory, Boys Town National Research Hospital, Center for Hearing Research, Omaha, NE, United States
6. Schiller IS, Remacle A, Durieux N, Morsomme D. Effects of Noise and a Speaker's Impaired Voice Quality on Spoken Language Processing in School-Aged Children: A Systematic Review and Meta-Analysis. J Speech Lang Hear Res 2022;65:169-199. PMID: 34902257; DOI: 10.1044/2021_jslhr-21-00183.
Abstract
Purpose: Background noise and voice problems among teachers can degrade listening conditions in classrooms. The aim of this literature review is to understand how these acoustic degradations affect spoken language processing in 6- to 18-year-old children.

Method: In a narrative report and meta-analysis, we systematically review studies that examined the effects of noise and/or impaired voice on children's response accuracy and response time (RT) in listening tasks. We propose the Speech Processing under Acoustic DEgradations (SPADE) framework to classify relevant findings according to three processing dimensions (speech perception, listening comprehension, and auditory working memory) and highlight potential moderators.

Results: Thirty-one studies are included in this systematic review. Our meta-analysis shows that noise can impede children's accuracy in listening tasks across all processing dimensions (Cohen's d between -0.67 and -2.65, depending on signal-to-noise ratio) and that impaired voice lowers children's accuracy in listening comprehension tasks (d = -0.35). A handful of studies assessed RT, but results are inconclusive. The impact of noise and impaired voice can be moderated by listener, task, environmental, and exposure factors. The interaction between noise and impaired voice remains underinvestigated.

Conclusions: Overall, this review suggests that children have more trouble perceiving speech, processing verbal messages, and recalling verbal information when listening to speech in noise or to a speaker with dysphonia. Impoverished speech input could impede pupils' motivation and academic performance at school.

Supplemental Material: https://doi.org/10.23641/asha.17139377
Affiliation(s)
- Isabel S Schiller
- Research Unit for a Life-Course Perspective on Health & Education, Faculty of Psychology, Speech and Language Therapy, and Educational Sciences, University of Liège, Belgium
- Teaching and Research Area Work and Engineering Psychology, Institute of Psychology, RWTH Aachen University, Germany
- Angélique Remacle
- Research Unit for a Life-Course Perspective on Health & Education, Faculty of Psychology, Speech and Language Therapy, and Educational Sciences, University of Liège, Belgium
- Center For Research in Cognition and Neurosciences, Faculty of Psychological Science and Education, Université Libre de Bruxelles, Belgium
- Nancy Durieux
- Research Unit for a Life-Course Perspective on Health & Education, Faculty of Psychology, Speech and Language Therapy, and Educational Sciences, University of Liège, Belgium
- Dominique Morsomme
- Research Unit for a Life-Course Perspective on Health & Education, Faculty of Psychology, Speech and Language Therapy, and Educational Sciences, University of Liège, Belgium
7. Corbin NE, Buss E, Leibold LJ. Spatial Hearing and Functional Auditory Skills in Children With Unilateral Hearing Loss. J Speech Lang Hear Res 2021;64:4495-4512. PMID: 34609204; PMCID: PMC9132156; DOI: 10.1044/2021_jslhr-20-00081.
Abstract
Purpose: The purpose of this study was to characterize the spatial hearing abilities of children with longstanding unilateral hearing loss (UHL). UHL was expected to negatively impact children's sound source localization and masked speech recognition, particularly when the target and masker were separated in space. Spatial release from masking (SRM) in the presence of a two-talker speech masker was expected to predict functional auditory performance as assessed by parent report.

Method: Participants were 5- to 14-year-olds with sensorineural or mixed UHL, age-matched children with normal hearing (NH), and adults with NH. Sound source localization was assessed on the horizontal plane (-90° to 90°), with noise that was either all-pass, low-pass, high-pass, or an unpredictable mixture. Speech recognition thresholds were measured in the sound field for sentences presented in two-talker speech or speech-shaped noise. Target speech was always presented from 0°; the masker was either colocated with the target or spatially separated at ±90°. Parents of children with UHL rated their children's functional auditory performance in everyday environments via questionnaire.

Results: Sound source localization was poorer for children with UHL than for those with NH. Children with UHL also derived less SRM than those with NH, with increased masking for some conditions. Effects of UHL were larger in the two-talker masker than in the noise masker, and SRM in two-talker speech increased with age for both groups of children. Children with UHL whose parents reported greater functional difficulties achieved less SRM when either masker was on the side of the better-hearing ear.

Conclusions: Children with UHL are clearly at a disadvantage compared with children with NH for both sound source localization and masked speech recognition with spatial separation. Parents' reports of their children's real-world communication abilities suggest that spatial hearing plays an important role in outcomes for children with UHL.
Affiliation(s)
- Nicole E. Corbin
- Department of Communication Science and Disorders, University of Pittsburgh, PA
- Emily Buss
- Department of Otolaryngology—Head & Neck Surgery, School of Medicine, University of North Carolina at Chapel Hill
- Lori J. Leibold
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
8. Zhang M, Moncrieff D, Johnston D, Parfitt M, Auld R. A preliminary study on speech recognition in noise training for children with hearing loss. Int J Pediatr Otorhinolaryngol 2021;149:110843. PMID: 34340007; DOI: 10.1016/j.ijporl.2021.110843.
Abstract
Purpose: This preliminary study examined whether children with hearing loss would benefit from speech-recognition-in-noise training.

Methods: Twenty-five children aged 4 to 12 years who wore hearing aids, cochlear implants, or bimodal devices participated in the study (experimental, n = 16; control, n = 9). The experimental group received speech-in-noise training comprising sixteen 15-min sessions spanning 8 to 12 weeks. The task involved recognizing monosyllabic target words and sentence keywords with various contextual cues in a multi-talker babble. The target stimuli were spoken by two females and fixed at 65 dB SPL throughout the training while the masker level varied adaptively. Pre- and post-training tests measured speech recognition thresholds for monosyllabic words and sentences spoken by two males in the babble noise. The test targets were presented at 55, 65, and 80 dB SPL.

Results: The experimental group improved in word and sentence recognition in noise after training (mean difference = 2.4-2.5 dB and 2.7-4.2 dB, respectively). Training benefits were observed at the trained level (65 dB SPL) and at untrained levels (55 and 80 dB SPL). The amount of post-training improvement was comparable between children using hearing aids and cochlear implants.

Conclusions: This preliminary study showed that children with hearing loss can benefit from speech-recognition-in-noise training that may fit into children's school schedules. Training at a conversational level (65 dB SPL) transferred the benefit to levels 10-15 dB softer or louder, and training with female target talkers transferred the benefit to male target talkers. Overall, speech-in-noise training brings practical benefits for school-age children with hearing loss.
Affiliation(s)
- Mengchao Zhang
- Department of Communication Science and Disorders, University of Pittsburgh, 6035 Forbes Tower, Pittsburgh, PA, 15260, USA
- Deborah Moncrieff
- School of Communication Sciences and Disorders, University of Memphis, 4055 N. Park Loop, Memphis, TN, 38152, USA
- Deborrah Johnston
- DePaul School for Hearing and Speech, 6202 Alder St, Pittsburgh, PA, 15206, USA
- Michelle Parfitt
- DePaul School for Hearing and Speech, 6202 Alder St, Pittsburgh, PA, 15206, USA
- Ruth Auld
- DePaul School for Hearing and Speech, 6202 Alder St, Pittsburgh, PA, 15206, USA
9. Tsou YT, Li B, Kret ME, Frijns JHM, Rieffe C. Hearing Status Affects Children's Emotion Understanding in Dynamic Social Situations: An Eye-Tracking Study. Ear Hear 2021;42:1024-1033. PMID: 33369943; PMCID: PMC8221710; DOI: 10.1097/aud.0000000000000994.
Abstract
Objectives: For children to understand the emotional behavior of others, the first two steps, according to the Social Information Processing model, are emotion encoding and emotion interpreting. Access to daily social interactions is a prerequisite for a child to acquire these skills, and barriers to communication such as hearing loss impede this access. Therefore, it can be challenging for children with hearing loss to develop these two skills. The present study aimed to understand the effect of prelingual hearing loss on children's emotion understanding by examining how they encode and interpret nonverbal emotional cues in dynamic social situations.

Design: Sixty deaf or hard-of-hearing (DHH) children and 71 typically hearing (TH) children (3-10 years old, mean age 6.2 years, 54% girls) watched videos of prototypical social interactions between a target person and an interaction partner. At the end of each video, the target person did not face the camera, rendering their facial expressions out of view to participants. Afterward, participants were asked to interpret the emotion they thought the target person felt at the end of the video. As participants watched the videos, their encoding patterns were examined with an eye tracker, which measured the amount of time participants spent looking at the target person's head and body and at the interaction partner's head and body. These regions were preselected for analyses because they had been found to provide cues for interpreting people's emotions and intentions.

Results: When encoding emotional cues, both the DHH and TH children spent more time looking at the head of the target person and at the head of the interaction partner than at the body or actions of either person. Yet, compared with the TH children, the DHH children looked at the target person's head for a shorter time (b = -0.03, p = 0.030), and at the target person's body (b = 0.04, p = 0.006) and the interaction partner's head (b = 0.03, p = 0.048) for a longer time. The DHH children were also less accurate than their TH peers when interpreting emotions (b = -0.13, p = 0.005), and their lower scores were associated with this distinctive encoding pattern.

Conclusions: The findings suggest that children with limited auditory access to the social environment tend to collect visually observable information to compensate for ambiguous emotional cues in social situations. These children may have developed this strategy to support their daily communication. Yet, to fully benefit from such a strategy, they may need extra support for gaining better social-emotional knowledge.
Affiliation(s)
- Yung-Ting Tsou
- Unit of Developmental and Educational Psychology, Institute of Psychology, Leiden University, Leiden, The Netherlands
- Boya Li
- Unit of Developmental and Educational Psychology, Institute of Psychology, Leiden University, Leiden, The Netherlands
- Mariska E. Kret
- Cognitive Psychology Unit, Institute of Psychology, Leiden University, Leiden, The Netherlands
- Leiden Institute for Brain and Cognition, Leiden University, Leiden, The Netherlands
- Johan H. M. Frijns
- Leiden Institute for Brain and Cognition, Leiden University, Leiden, The Netherlands
- Department of Otorhinolaryngology and Head & Neck Surgery, Leiden University Medical Center, Leiden, The Netherlands
- Carolien Rieffe
- Unit of Developmental and Educational Psychology, Institute of Psychology, Leiden University, Leiden, The Netherlands
- Department of Psychology and Human Development, Institute of Education, University College London, London, UK
10. Flaherty MM, Browning J, Buss E, Leibold LJ. Effects of Hearing Loss on School-Aged Children's Ability to Benefit From F0 Differences Between Target and Masker Speech. Ear Hear 2021;42:1084-1096. PMID: 33538428; PMCID: PMC8222052; DOI: 10.1097/aud.0000000000000979.
Abstract
OBJECTIVES The objectives of the study were to (1) evaluate the impact of hearing loss on children's ability to benefit from F0 differences between target/masker speech in the context of aided speech-in-speech recognition and (2) to determine whether compromised F0 discrimination associated with hearing loss predicts F0 benefit in individual children. We hypothesized that children wearing appropriately fitted amplification would benefit from F0 differences, but they would not show the same magnitude of benefit as children with normal hearing. Reduced audibility and poor suprathreshold encoding that degrades frequency discrimination were expected to impair children's ability to segregate talkers based on F0. DESIGN Listeners were 9 to 17 year olds with bilateral, symmetrical, sensorineural hearing loss ranging in degree from mild to severe. A four-alternative, forced-choice procedure was used to estimate thresholds for disyllabic word recognition in a 60-dB-SPL two-talker masker. The same male talker produced target and masker speech. Target words had either the same mean F0 as the masker or were digitally shifted higher than the masker by three, six, or nine semitones. The F0 benefit was defined as the difference in thresholds between the shifted-F0 conditions and the unshifted-F0 condition. Thresholds for discriminating F0 were also measured, using a three-alternative, three-interval forced choice procedure, to determine whether compromised sensitivity to F0 differences due to hearing loss would predict children's ability to benefit from F0. Testing was performed in the sound field, and all children wore their personal hearing aids at user settings. RESULTS Children with hearing loss benefited from an F0 difference of nine semitones between target words and masker speech, with older children generally benefitting more than younger children. Some children benefitted from an F0 difference of six semitones, but this was not consistent across listeners. 
Thresholds for discriminating F0 improved with increasing age and predicted F0 benefit in the nine-semitone condition. An exploratory analysis indicated that F0 benefit was not significantly correlated with the four-frequency pure-tone average (0.5, 1, 2, and 4 kHz), aided audibility, or consistency of daily hearing aid use, although there was a trend for an association with the low-frequency pure-tone average (0.25 and 0.5 kHz). Comparisons of the present data to our previous study of children with normal hearing demonstrated that children with hearing loss benefited less than children with normal hearing for the F0 differences tested. CONCLUSIONS The results demonstrate that children with mild-to-severe hearing loss who wear hearing aids benefit from relatively large F0 differences between target and masker speech during aided speech-in-speech recognition. The size of the benefit increases with increasing age, consistent with previously reported age effects for children with normal hearing. However, hearing loss reduces children's ability to capitalize on F0 differences between talkers. Audibility alone does not appear to be responsible for this effect; aided audibility and degree of loss were not primary predictors of performance. The ability to benefit from F0 differences may be limited by immature central processing or aspects of peripheral encoding that are not characterized in standard clinical assessments.
Affiliation(s)
- Mary M. Flaherty
- Department of Speech and Hearing Science, University of Illinois at Urbana-Champaign, Champaign, Illinois, USA
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, School of Medicine, University of North Carolina, Chapel Hill, North Carolina, USA
- Lori J. Leibold
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, USA
11
The Feasibility and Reliability of a Digits-in-Noise Test in the Clinical Follow-Up of Children With Mild to Profound Hearing Loss. Ear Hear 2021; 42:973-981. [PMID: 33577216 PMCID: PMC8221724 DOI: 10.1097/aud.0000000000000989] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
Abstract
OBJECTIVES Speech perception in noise is an important aspect of the rehabilitation of children with hearing loss. We aimed to evaluate the feasibility and reliability of the Dutch digits-in-noise (DIN) test in the clinical follow-up of children with hearing aids (HAs) and/or cochlear implants (CIs). A second aim of the study was to gain insight into the speech perception in noise performance of children with different degrees of hearing loss. DESIGN We retrospectively analyzed DIN test data of Dutch-speaking children with hearing loss (N = 188; 5 to 18 years old). A free-field version of the DIN test was used. Children with open-set phoneme recognition in quiet of >70% at 65 dB SPL (best aided condition) were included. All were experienced HA or CI users and had used their device(s) for at least 1 year before the measurement in the study. The DIN test was performed in the framework of a clinical rehabilitation program. During testing, children wore their own devices with normal daily programs. RESULTS The average speech reception threshold (SRT) was -3.6 dB (SD 3.6) for the first list and significantly improved to -4.0 dB (SD 3.1) for the second list. HA users had a 4-dB better SRT compared with CI users. The larger the child's hearing loss, the worse the SRT. However, 15% of the children who completed a first list of 24 trials were unable to complete a second list. Mean adaptive staircase trajectories across trials suggested that learning occurred throughout the first list, and that loss of sustained attention contributed to response variability during the second list. CONCLUSION The DIN test can be used to assess speech perception in noise abilities in children with different degrees of hearing loss using HAs or CIs. The children with hearing loss required a higher signal-to-noise ratio (SNR) than did normal-hearing children, and the required SNR increased with the degree of hearing loss. However, the current measurement procedure should be optimized for use in standard pediatric audiological care, as 15% of the children were unable to complete a second list after the first, which would have yielded a more stable SNR estimate.
12
Leibold LJ, Browning JM, Buss E. Masking Release for Speech-in-Speech Recognition Due to a Target/Masker Sex Mismatch in Children With Hearing Loss. Ear Hear 2021; 41:259-267. [PMID: 31365355 PMCID: PMC7310385 DOI: 10.1097/aud.0000000000000752] [Citation(s) in RCA: 9] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES The goal of the present study was to compare the extent to which children with hearing loss and children with normal hearing benefit from mismatches in target/masker sex in the context of speech-in-speech recognition. It was hypothesized that children with hearing loss experience a smaller target/masker sex mismatch benefit relative to children with normal hearing due to impairments in peripheral encoding, variable access to high-quality auditory input, or both. DESIGN Eighteen school-age children with sensorineural hearing loss (7 to 15 years) and 18 age-matched children with normal hearing participated in this study. Children with hearing loss were bilateral hearing aid users. Severity of hearing loss ranged from mild to severe across participants, but most had mild to moderate hearing loss. Speech recognition thresholds for disyllabic words presented in a two-talker speech masker were estimated in the sound field using an adaptive, forced-choice procedure with a picture-pointing response. Participants were tested in each of four conditions: (1) male target speech/two-male-talker masker; (2) male target speech/two-female-talker masker; (3) female target speech/two-female-talker masker; and (4) female target speech/two-male-talker masker. Children with hearing loss were tested wearing their personal hearing aids at user settings. RESULTS Both groups of children showed a sex-mismatch benefit, requiring a more advantageous signal-to-noise ratio when the target and masker were matched in sex than when they were mismatched. However, the magnitude of the sex-mismatch benefit was significantly reduced for children with hearing loss relative to age-matched children with normal hearing. There was no effect of child age on the magnitude of the sex-mismatch benefit. The sex-mismatch benefit was larger for male target speech than for female target speech.
For children with hearing loss, the magnitude of the sex-mismatch benefit was not associated with degree of hearing loss or aided audibility. CONCLUSIONS The findings from the present study indicate that children with sensorineural hearing loss are able to capitalize on acoustic differences between speech produced by male and female talkers when asked to recognize target words in a competing speech masker. However, children with hearing loss experienced a smaller benefit relative to their peers with normal hearing. No association between the sex-mismatch benefit and measures of unaided thresholds or aided audibility was observed for children with hearing loss, suggesting that reduced peripheral encoding is not the only factor responsible for the smaller sex-mismatch benefit relative to children with normal hearing.
Affiliation(s)
- Lori J. Leibold
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, USA
- Jenna M. Browning
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska, USA
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, The University of North Carolina at Chapel Hill, Chapel Hill, North Carolina, USA
13
Pinkl J, Cash EK, Evans TC, Neijman T, Hamilton JW, Ferguson SD, Martinez JL, Rumley J, Hunter LL, Moore DR, Stewart HJ. Short-Term Pediatric Acclimatization to Adaptive Hearing Aid Technology. Am J Audiol 2021; 30:76-92. [PMID: 33351648 DOI: 10.1044/2020_aja-20-00073] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/28/2022] Open
Abstract
Purpose This exploratory study assessed the perceptual, cognitive, and academic learning effects of an adaptive, integrated directionality and noise reduction hearing aid program in pediatric users. Method Fifteen pediatric hearing aid users (6-12 years old) received new bilateral, individually fitted Oticon Opn hearing aids programmed with OpenSound Navigator (OSN) processing. Word recognition in noise, sentence repetition in quiet, nonword repetition, vocabulary learning, selective attention, executive function, memory, and reading and mathematical abilities were measured within 1 week of the initial hearing aid fitting and 2 months post-fit. Caregivers completed questionnaires assessing their child's listening and communication abilities prior to study enrollment and after 2 months of using the study hearing aids. Results Caregiver reporting indicated significant improvements in speech and sound perception, spatial sound awareness, and the ability to participate in conversations. However, there was no positive change in performance in any of the measured skills. Mathematical scores significantly declined after 2 months. Conclusions OSN provided a perceived improvement in functional benefit, compared with the children's previous hearing aids, as reported by caregivers. However, there was no positive change in listening skills, cognition, or academic success after 2 months of using OSN. Findings may have been impacted by reporter bias, limited sample size, and a relatively short trial period. This study took place during the summer when participants were out of school, which may have influenced the decline in mathematical scores. The results support further exploration with age- and audiogram-matched controls, larger sample sizes, and longer test-retest intervals that correspond to the academic school year.
Affiliation(s)
- Joseph Pinkl
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH
- Department of Communication Sciences and Disorders, College of Allied Health Sciences, University of Cincinnati, OH
- Erin K. Cash
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH
- Department of Neuroscience, College of Arts and Sciences, University of Cincinnati, OH
- Tommy C. Evans
- Division of Audiology, Cincinnati Children's Hospital Medical Center, OH
- Timothy Neijman
- Division of Audiology, Cincinnati Children's Hospital Medical Center, OH
- Jean W. Hamilton
- Division of Audiology, Cincinnati Children's Hospital Medical Center, OH
- Sarah D. Ferguson
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH
- Department of Communication Sciences and Disorders, College of Allied Health Sciences, University of Cincinnati, OH
- Jasmin L. Martinez
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH
- Department of Communication Sciences and Disorders, College of Allied Health Sciences, University of Cincinnati, OH
- Johanne Rumley
- Oticon A/S, Kongebakken, Denmark
- Department of Nordic Studies and Linguistics, Faculty of Humanities, University of Copenhagen, Denmark
- Lisa L. Hunter
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH
- Department of Communication Sciences and Disorders, College of Allied Health Sciences, University of Cincinnati, OH
- David R. Moore
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH
- Department of Otolaryngology, College of Medicine, University of Cincinnati, OH
- Manchester Centre for Audiology and Deafness, The University of Manchester, United Kingdom
- Hannah J. Stewart
- Communication Sciences Research Center, Cincinnati Children's Hospital Medical Center, OH
- Division of Psychology and Language Sciences, University College London, United Kingdom
14
Brännström KJ, Lyberg-Åhlander V, Sahlén B. Perceived listening effort in children with hearing loss: listening to a dysphonic voice in quiet and in noise. LOGOP PHONIATR VOCO 2020; 47:1-9. [PMID: 32696707 DOI: 10.1080/14015439.2020.1794030] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/23/2022]
Abstract
AIM The present study investigates the effect of signal degradation on perceived listening effort in children with hearing loss listening in a simulated classroom context. It also examines the associations between perceived listening effort, passage comprehension performance, and executive functioning. METHODS Twenty-four children (aged 06:03-13:00 years) with hearing impairment using cochlear implants (CI) and/or hearing aids (HA) participated. The children made ratings of perceived listening effort after completing an auditory passage comprehension task. All children performed the task in four different listening conditions: listening to a typical (i.e. normal) voice in quiet, to a dysphonic voice in quiet, to a typical voice in background noise, and to a dysphonic voice in background noise. In addition, the children completed a task assessing executive function. RESULTS Both voice quality and background noise increased perceived listening effort in children with CI/HA, but no interaction with executive function was seen. CONCLUSION Since increased listening effort seems to be a consequence of increased cognitive resource spending, it is likely that fewer resources will be available for these children not only to comprehend but also to learn in challenging listening environments such as classrooms.
Affiliation(s)
- K Jonas Brännström
- Department of Clinical Sciences Lund, Logopedics, Phoniatrics and Audiology, Lund University, Lund, Sweden
- Viveka Lyberg-Åhlander
- Department of Clinical Sciences Lund, Logopedics, Phoniatrics and Audiology, Lund University, Lund, Sweden
- Speech Language Pathology, Faculty of Arts, Psychology and Theology, Åbo Akademi University, Turku, Finland
- Birgitta Sahlén
- Department of Clinical Sciences Lund, Logopedics, Phoniatrics and Audiology, Lund University, Lund, Sweden
15
Nelson LH, Anderson K, Whicker J, Barrett T, Muñoz K, White K. Classroom Listening Experiences of Students Who Are Deaf or Hard of Hearing Using Listening Inventory For Education-Revised. Lang Speech Hear Serv Sch 2020; 51:720-733. [PMID: 32392436 DOI: 10.1044/2020_lshss-19-00087] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
Purpose This study examined classroom listening experiences reported by students who are deaf or hard of hearing using the Listening Inventory For Education-Revised (LIFE-R). Method Retrospective electronic survey responses from 3,584 school-age participants were analyzed using descriptive statistics to report student perceptions of listening difficulty in various classroom scenarios, including the strategies students used when they did not hear or understand. Stratified data were used to explore potential differences between grades and across degree of hearing loss or type of hearing technology. Results The average student listening appraisal rating across 15 classroom, school, and social scenarios was 5.7 on a 10-point Likert scale (0 = difficult, 10 = easy), highlighting listening difficulties encountered during the school day. This finding can be considered in context with the average rating of 7.2 reported from a previous study of students with typical hearing using the LIFE-R. The greatest difficulties were reported when trying to listen while other students in the class were making noise and in hearing the comments of other classmates. Average listening difficulty was greater for respondents in Grades 3-6 than those in Grades 7-12. Listening difficulty also generally increased relative to degree of hearing loss. When unable to hear, some students took proactive steps to improve their listening access; some reported they did nothing. Conclusions Students who are deaf or hard of hearing can face challenges in hearing and understanding throughout the school day. A functional tool to evaluate and monitor student experiences, such as the LIFE-R, can provide information to make necessary and effective adjustments to classroom instruction and the listening environment.
Affiliation(s)
- Karen Anderson
- Supporting Success for Children with Hearing Loss, Tampa, FL
16
Walker EA, Sapp C, Oleson JJ, McCreery RW. Longitudinal Speech Recognition in Noise in Children: Effects of Hearing Status and Vocabulary. Front Psychol 2019; 10:2421. [PMID: 31708849 PMCID: PMC6824244 DOI: 10.3389/fpsyg.2019.02421] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2019] [Accepted: 10/11/2019] [Indexed: 11/13/2022] Open
Abstract
Objectives: The aims of the current study were: (1) to compare growth trajectories of speech recognition in noise for children with normal hearing (CNH) and children who are hard of hearing (CHH) and (2) to determine the effects of auditory access, vocabulary size, and working memory on growth trajectories of speech recognition in noise in CHH. Design: Participants included 290 children enrolled in a longitudinal study. Children received a comprehensive battery of measures annually, including speech recognition in noise, vocabulary, and working memory. We collected measures of unaided and aided hearing and daily hearing aid (HA) use to quantify aided auditory experience (i.e., HA dosage). We used a longitudinal regression framework to examine the trajectories of speech recognition in noise in CNH and CHH. To determine factors that were associated with growth trajectories for CHH, we used a longitudinal regression model in which the dependent variable was speech recognition in noise scores, and the independent variables were grade, maternal education level, age at confirmation of hearing loss, vocabulary scores, working memory scores, and HA dosage. Results: We found a significant effect of grade and hearing status. Older children and CNH showed stronger speech recognition in noise scores compared to younger children and CHH. The growth trajectories for both groups were parallel over time. For CHH, older age, stronger vocabulary skills, and greater average HA dosage supported speech recognition in noise. Conclusion: The current study is among the first to compare developmental growth rates in speech recognition for CHH and CNH. CHH demonstrated persistent deficits in speech recognition in noise out to age 11, with no evidence of convergence or divergence between groups. These trends highlight the need to provide support for children with all degrees of hearing loss in the academic setting as they transition into secondary grades. 
The results also elucidate factors that influence growth trajectories for speech recognition in noise for children; stronger vocabulary skills and higher HA dosage supported speech recognition in degraded situations. This knowledge helps us to develop a more comprehensive model of spoken word recognition in children.
Affiliation(s)
- Elizabeth A. Walker
- Pediatric Audiology Laboratory, Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, United States
- Caitlin Sapp
- Pediatric Audiology Laboratory, Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, United States
- Jacob J. Oleson
- Department of Biostatistics, University of Iowa, Iowa City, IA, United States
- Ryan W. McCreery
- Center for Hearing Research, Audibility, Perception, and Cognition Laboratory, Boys Town National Research Hospital, Omaha, NE, United States
17
Mehrkian S, Bayat Z, Javanbakht M, Emamdjomeh H, Bakhshi E. Effect of wireless remote microphone application on speech discrimination in noise in children with cochlear implants. Int J Pediatr Otorhinolaryngol 2019; 125:192-195. [PMID: 31369931 DOI: 10.1016/j.ijporl.2019.07.007] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 04/17/2019] [Revised: 07/09/2019] [Accepted: 07/10/2019] [Indexed: 10/26/2022]
Abstract
OBJECTIVES Although cochlear implantation has significantly contributed to the speech perception of cochlear implant (CI) users, these individuals still have significant difficulty understanding speech, especially in noisy environments, and in keeping track of the target speaker in the presence of other talkers. This study aimed to evaluate the effect of wireless remote microphones (RMs) on speech discrimination scores in noise in child CI users. MATERIALS AND METHODS Twenty children with unilateral cochlear implants (mean ± SD age: 5.8 ± 0.83 years), each with at least one year of CI experience, were enrolled in this study. Speech discrimination scores in noise were assessed using the Words-in-Noise (WIN) test at a constant signal-to-noise ratio (SNR) of 0 dB, in the presence and absence of a wireless RM. Three loudspeakers were placed at a distance of 1 m in front of the child to present the speech and babble noise. The wireless microphone was placed on a stand, at a height equal to that of the middle speech loudspeaker and at a distance of 30 cm from it. FINDINGS The mean speech discrimination score in noise in the absence of the wireless RM was 34% (6.8 of 20 words), with minimum and maximum scores of 15% and 50%. In the presence of the wireless RM, the mean score was 65% (13 of 20 words), with minimum and maximum scores of 35% and 95%. Speech discrimination scores in noise thus improved when the wireless RM was used. CONCLUSION A significant improvement in speech discrimination in noise was observed in all cochlear implanted children when the wireless RM was used, compared with its absence, suggesting the usefulness of this hearing aid accessory for CI users.
Affiliation(s)
- Saeideh Mehrkian
- Department of Audiology, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Zeinab Bayat
- Department of Audiology, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Mohanna Javanbakht
- Department of Audiology, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
- Enayatollah Bakhshi
- Department of Statistics, University of Social Welfare and Rehabilitation Sciences, Tehran, Iran
18
Behavioral Measures of Listening Effort in School-Age Children: Examining the Effects of Signal-to-Noise Ratio, Hearing Loss, and Amplification. Ear Hear 2019; 40:381-392. [PMID: 29905670 DOI: 10.1097/aud.0000000000000623] [Citation(s) in RCA: 48] [Impact Index Per Article: 9.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVES Increased listening effort in school-age children with hearing loss (CHL) could compromise learning and academic achievement. Identifying a sensitive behavioral measure of listening effort for this group could have both clinical and research value. This study examined the effects of signal-to-noise ratio (SNR), hearing loss, and personal amplification on 2 commonly used behavioral measures of listening effort: dual-task visual response times (visual RTs) and verbal response times (verbal RTs). DESIGN A total of 82 children (aged 6-13 years) took part in this study: 37 children with normal hearing (CNH) and 45 CHL. All children performed a dual-task paradigm from which both measures of listening effort (dual-task visual RT and verbal RT) were derived. The primary task was word recognition in multi-talker babble in three individually selected SNR conditions: Easy, Moderate, and Hard. The secondary task was a visual monitoring task. Listening effort during the dual-task was quantified as the change in secondary task RT from baseline (single-task visual RT) to the dual-task condition. Listening effort based on verbal RT was quantified as the time elapsed from the onset of the auditory stimulus to the onset of the verbal response when performing the primary (word recognition) task in isolation. CHL completed the task aided and/or unaided to examine the effect of amplification on listening effort. RESULTS Verbal RTs were generally slower in the more challenging SNR conditions. However, there was no effect of SNR on dual-task visual RT. Overall, verbal RTs were significantly slower in CHL versus CNH. No group difference in dual-task visual RTs was found between CNH and CHL. No effect of amplification was found on either dual-task visual RTs or verbal RTs. CONCLUSIONS This study compared dual-task visual RT and verbal RT measures of listening effort in the child population.
Overall, verbal RTs appear more sensitive than dual-task visual RTs to the negative effects of SNR and hearing loss. The current findings extend the literature on listening effort in the pediatric population by demonstrating that, even for speech that is accurately recognized, school-age CHL show a greater processing speed decrement than their normal-hearing counterparts, a decrement that could have a negative impact on learning and academic achievement in the classroom.
19
Goldsworthy RL, Markle KL. Pediatric Hearing Loss and Speech Recognition in Quiet and in Different Types of Background Noise. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2019; 62:758-767. [PMID: 30950727 PMCID: PMC9907566 DOI: 10.1044/2018_jslhr-h-17-0389] [Citation(s) in RCA: 17] [Impact Index Per Article: 3.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/19/2017] [Revised: 04/23/2018] [Accepted: 10/12/2018] [Indexed: 05/27/2023]
Abstract
Purpose Speech recognition deteriorates with hearing loss, particularly in fluctuating background noise. This study examined how hearing loss affects speech recognition in different types of noise to clarify how characteristics of the noise interact with the benefits listeners receive when listening in fluctuating compared to steady-state noise. Method Speech reception thresholds were measured for a closed set of spondee words in children (ages 5-17 years) in quiet, speech-spectrum noise, 2-talker babble, and instrumental music. Twenty children with normal hearing and 43 children with hearing loss participated; children with hearing loss were subdivided into cochlear implant (18 children) and hearing aid (25 children) groups. A cohort of adults with normal hearing was included for comparison. Results Hearing loss had a large effect on speech recognition for each condition, but the effect of hearing loss was largest in 2-talker babble and smallest in speech-spectrum noise. Children with normal hearing had better speech recognition in 2-talker babble than in speech-spectrum noise, whereas children with hearing loss had worse recognition in 2-talker babble than in speech-spectrum noise. Almost all subjects had better speech recognition in instrumental music compared to speech-spectrum noise, but with less of a difference observed for children with hearing loss. Conclusions Speech recognition is more sensitive to the effects of hearing loss when measured in fluctuating compared to steady-state noise. Speech recognition measured in fluctuating noise depends on an interaction of hearing loss with characteristics of the background noise; specifically, children with hearing loss were able to derive a substantial benefit for listening in fluctuating noise when measured in instrumental music compared to 2-talker babble.
20
Browning JM, Buss E, Flaherty M, Vallier T, Leibold LJ. Effects of Adaptive Hearing Aid Directionality and Noise Reduction on Masked Speech Recognition for Children Who Are Hard of Hearing. Am J Audiol 2019; 28:101-113. [PMID: 30938559 DOI: 10.1044/2018_aja-18-0045] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
Purpose The purpose of this study was to evaluate speech-in-noise and speech-in-speech recognition associated with activation of a fully adaptive directional hearing aid algorithm in children with mild to severe bilateral sensory/neural hearing loss. Method Fourteen children (5-14 years old) who are hard of hearing participated in this study. Participants wore laboratory hearing aids. Open-set word recognition thresholds were measured adaptively for 2 hearing aid settings: (a) omnidirectional (OMNI) and (b) fully adaptive directionality. Each hearing aid setting was evaluated in 3 listening conditions. Fourteen children with normal hearing served as age-matched controls. Results Children who are hard of hearing required a more advantageous signal-to-noise ratio than children with normal hearing to achieve comparable performance in all 3 conditions. For children who are hard of hearing, the average improvement in signal-to-noise ratio when comparing fully adaptive directionality to OMNI was 4.0 dB in noise, regardless of target location. Children performed similarly with fully adaptive directionality and OMNI settings in the presence of the speech maskers. Conclusions Compared to OMNI, fully adaptive directionality improved speech recognition in steady noise for children who are hard of hearing, even when they were not facing the target source. This algorithm did not affect speech recognition when the background noise was speech. Although the use of hearing aids with fully adaptive directionality is not proposed as a substitute for remote microphone systems, it appears to offer several advantages over fixed directionality, because it does not depend on children facing the target talker and provides access to multiple talkers within the environment. Additional experiments are required to further evaluate children's performance under a variety of spatial configurations in the presence of both noise and speech maskers.
Affiliation(s)
- Jenna M. Browning
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, The University of North Carolina at Chapel Hill
- Mary Flaherty
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Tim Vallier
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Lori J. Leibold
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
21
Qi Y, Yu S, Du Z, Qu T, He L, Xiong W, Wei W, Liu K, Gong S. Long-Term Conductive Auditory Deprivation During Early Development Causes Irreversible Hearing Impairment and Cochlear Synaptic Disruption. Neuroscience 2019; 406:345-355. [PMID: 30742960 DOI: 10.1016/j.neuroscience.2019.01.065] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/19/2018] [Revised: 01/30/2019] [Accepted: 01/31/2019] [Indexed: 10/27/2022]
Abstract
Conductive hearing loss is a prevalent condition globally. It remains unclear whether conductive hearing loss that occurs during early development disrupts the auditory periphery. In this study, a mouse model of conductive auditory deprivation (CAD) was achieved using external auditory canal closure on postnatal day 12, which marks the onset of external ear canal opening. Short-term (2 weeks) and long-term (6 weeks) deprivations involving external ear canal closure were conducted. Mice were examined immediately, 4 weeks, and 8 weeks after deprivation. Short-term deprivation induced reversible elevations in auditory brainstem response (ABR) thresholds and prolongations of ABR wave I latencies, whereas long-term deprivation caused irreversible changes in both measures. Complete recovery of ribbon synapses and ABR wave I latencies was observed in the short-term group. In contrast, ABR thresholds, ABR wave I latencies, and ribbon synapse counts did not recover in the long-term deprivation group. Positive 8-hydroxy-2'-deoxyguanosine signals were noted in cochlear hair cells in the long-term group, suggesting that long-term auditory deprivation could disrupt auditory maturation via mitochondrial damage in cochlear hair cells. Conversely, no significant changes in cellular morphology were observed in cochlear hair cells and spiral ganglion cells in either the short- or long-term group. Collectively, our findings suggest that long-term conductive auditory deprivation during early stages of auditory development can cause significant and irreversible disruption that persists into adulthood.
Affiliation(s)
- Yue Qi, Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
- Shukui Yu, Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
- Zhengde Du, Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
- Tengfei Qu, Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
- Lu He, Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
- Wei Xiong, Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
- Wei Wei, Department of Otology, Shengjing Hospital of China Medical University, Shenyang 110004, China
- Ke Liu, Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
- Shusheng Gong, Department of Otolaryngology Head and Neck Surgery, Beijing Friendship Hospital, Capital Medical University, Beijing 100050, China
22
Jonas Brännström K, von Lochow H, Lyberg-Åhlander V, Sahlén B. The influence of voice quality and multi-talker babble noise on sentence processing and recall performance in school children using cochlear implant and/or hearing aids. Logoped Phoniatr Vocol 2018; 44:87-94. [DOI: 10.1080/14015439.2018.1504984]
Affiliation(s)
- K. Jonas Brännström, Logopedics, Phoniatrics and Audiology, Department of Clinical Sciences in Lund, Lund University, Lund, Sweden
- Heike von Lochow, Logopedics, Phoniatrics and Audiology, Department of Clinical Sciences in Lund, Lund University, Lund, Sweden
- Viveka Lyberg-Åhlander, Logopedics, Phoniatrics and Audiology, Department of Clinical Sciences in Lund, Lund University, Lund, Sweden
- Birgitta Sahlén, Logopedics, Phoniatrics and Audiology, Department of Clinical Sciences in Lund, Lund University, Lund, Sweden
23
Ching TYC, Zhang VW, Flynn C, Burns L, Button L, Hou S, McGhie K, Van Buynder P. Factors influencing speech perception in noise for 5-year-old children using hearing aids or cochlear implants. Int J Audiol 2018; 57:S70-S80. [PMID: 28687057] [PMCID: PMC5756692] [DOI: 10.1080/14992027.2017.1346307]
Abstract
OBJECTIVE We investigated the factors influencing speech perception in babble for 5-year-old children with hearing loss who were using hearing aids (HAs) or cochlear implants (CIs). DESIGN Speech reception thresholds (SRTs) for 50% correct identification were measured in two conditions: speech collocated with babble, and speech with spatially separated babble. The difference in SRTs between the two conditions gives a measure of binaural unmasking, commonly known as spatial release from masking (SRM). Multiple linear regression analyses were conducted to examine the influence of a range of demographic factors on outcomes. STUDY SAMPLE Participants were 252 children enrolled in the Longitudinal Outcomes of Children with Hearing Impairment (LOCHI) study. RESULTS Children using HAs or CIs required a better signal-to-noise ratio to achieve the same level of performance as their normal-hearing peers but demonstrated SRM of a similar magnitude. For children using HAs, speech perception was significantly influenced by cognitive and language abilities. For children using CIs, age at CI activation and language ability were significant predictors of speech perception outcomes. CONCLUSIONS Speech perception in children with hearing loss can be enhanced by improving their language abilities. Early age at cochlear implantation was also associated with better outcomes.
Affiliation(s)
- Teresa YC Ching, National Acoustic Laboratories, Sydney, Australia; The Hearing CRC, Melbourne, Australia
- Vicky W Zhang, National Acoustic Laboratories, Sydney, Australia; The Hearing CRC, Melbourne, Australia
- Christopher Flynn, National Acoustic Laboratories, Sydney, Australia; Australian Hearing, Australia
- Lauren Burns, National Acoustic Laboratories, Sydney, Australia; The Hearing CRC, Melbourne, Australia; Australian Hearing, Australia
- Laura Button, National Acoustic Laboratories, Sydney, Australia; The Hearing CRC, Melbourne, Australia
- Sanna Hou, National Acoustic Laboratories, Sydney, Australia; The Hearing CRC, Melbourne, Australia
- Karen McGhie, National Acoustic Laboratories, Sydney, Australia; Australian Hearing, Australia
- Patricia Van Buynder, National Acoustic Laboratories, Sydney, Australia; The Hearing CRC, Melbourne, Australia
24
Gustafson SJ, Key AP, Hornsby BWY, Bess FH. Fatigue Related to Speech Processing in Children With Hearing Loss: Behavioral, Subjective, and Electrophysiological Measures. J Speech Lang Hear Res 2018; 61:1000-1011. [PMID: 29635434] [PMCID: PMC6194945] [DOI: 10.1044/2018_jslhr-h-17-0314]
Abstract
PURPOSE The purpose of this study was to examine fatigue associated with sustained and effortful speech processing in children with mild to moderately severe hearing loss. METHOD We used auditory P300 responses, subjective reports, and behavioral indices (response time, lapses of attention) to measure fatigue resulting from sustained speech-processing demands in 34 children with mild to moderately severe hearing loss (M = 10.03 years, SD = 1.93). RESULTS Compared to baseline values, children with hearing loss showed increased lapses in attention, longer reaction times, reduced P300 amplitudes, and greater reports of fatigue following the completion of the demanding speech-processing tasks. CONCLUSIONS Similar to children with normal hearing, children with hearing loss demonstrate reductions in attentional processing of speech in noise following sustained speech-processing tasks, a finding consistent with the development of fatigue.
Affiliation(s)
- Samantha J Gustafson, Department of Hearing and Speech Sciences, Vanderbilt Bill Wilkerson Center, Nashville, TN
- Alexandra P Key, Department of Hearing and Speech Sciences, Vanderbilt Bill Wilkerson Center, Nashville, TN; Vanderbilt Kennedy Center for Research on Human Development, Vanderbilt University School of Medicine, Nashville, TN
- Benjamin W Y Hornsby, Department of Hearing and Speech Sciences, Vanderbilt Bill Wilkerson Center, Nashville, TN
- Fred H Bess, Department of Hearing and Speech Sciences, Vanderbilt Bill Wilkerson Center, Nashville, TN
25
Leibold LJ. Speech Perception in Complex Acoustic Environments: Developmental Effects. J Speech Lang Hear Res 2017; 60:3001-3008. [PMID: 29049600] [PMCID: PMC5945069] [DOI: 10.1044/2017_jslhr-h-17-0070]
Abstract
PURPOSE The ability to hear and understand speech in complex acoustic environments follows a prolonged time course of development. The purpose of this article is to provide a general overview of the literature describing age effects in susceptibility to auditory masking in the context of speech recognition, including a summary of findings related to the maturation of processes thought to facilitate segregation of target from competing speech. METHOD Data from published and ongoing studies are discussed, with a focus on synthesizing results from studies that address age-related changes in the ability to perceive speech in the presence of a small number of competing talkers. CONCLUSIONS This review provides a summary of the current state of knowledge that is valuable for researchers and clinicians. It highlights the importance of considering listener factors, such as age and hearing status, as well as stimulus factors, such as masker type, when interpreting masked speech recognition data. PRESENTATION VIDEO http://cred.pubs.asha.org/article.aspx?articleid=2601620.
Affiliation(s)
- Lori J. Leibold, Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
26
Lewis D, Schmid K, O'Leary S, Spalding J, Heinrichs-Graham E, High R. Effects of Noise on Speech Recognition and Listening Effort in Children With Normal Hearing and Children With Mild Bilateral or Unilateral Hearing Loss. J Speech Lang Hear Res 2016; 59:1218-1232. [PMID: 27784030] [PMCID: PMC5345560] [DOI: 10.1044/2016_jslhr-h-15-0207]
Abstract
PURPOSE This study examined the effects of stimulus type and hearing status on speech recognition and listening effort in children with normal hearing (NH) and children with mild bilateral hearing loss (MBHL) or unilateral hearing loss (UHL). METHOD Children (5-12 years of age) with NH (Experiment 1) and children (8-12 years of age) with MBHL, UHL, or NH (Experiment 2) performed consonant identification and word and sentence recognition in background noise. Percentage correct performance and verbal response time (VRT) were assessed (onset time, total duration). RESULTS In general, speech recognition improved as signal-to-noise ratio (SNR) increased both for children with NH and children with MBHL or UHL. The groups did not differ on measures of VRT. Onset times were longer for incorrect than for correct responses. For correct responses only, there was a general increase in VRT with decreasing SNR. CONCLUSIONS Findings indicate poorer sentence recognition in children with NH and MBHL or UHL as SNR decreases. VRT results suggest that greater effort was expended when processing stimuli that were incorrectly identified. Increasing VRT with decreasing SNR for correct responses also supports greater effort in poorer acoustic conditions. The absence of significant hearing status differences suggests that VRT was not differentially affected by MBHL, UHL, or NH for children in this study.
Affiliation(s)
- Dawna Lewis, Boys Town National Research Hospital, Omaha, NE
- Kendra Schmid, Boys Town National Research Hospital, Omaha, NE; University of Nebraska Medical Center, Omaha
- Robin High, University of Nebraska Medical Center, Omaha
27
Assessing speech perception in children with hearing loss: what conventional clinical tools may miss. Ear Hear 2016; 36:e57-e60. [PMID: 25329371] [DOI: 10.1097/aud.0000000000000110]
Abstract
OBJECTIVES This study tested the hypothesis that word recognition in a complex, two-talker masker is more closely related to real-world speech perception for children with hearing loss than testing performed in quiet or steady-state noise. DESIGN Sixteen school-age hearing aid users were tested on aided word recognition in noise or two-talker speech. Unaided estimates of speech perception in quiet were retrospectively obtained from the clinical record. Ten parents completed a questionnaire regarding their children's ease of communication and understanding in background noise. RESULTS Unaided performance in quiet was correlated with aided performance in competing noise, but not in two-talker speech. Only results in the two-talker masker were correlated with parental reports of their children's functional hearing abilities. CONCLUSIONS Speech perception testing in a complex background such as two-talker speech may provide a more accurate predictor of the communication challenges of children with hearing loss than testing in steady noise or quiet.
28
Martin K, Johnstone P, Hedrick M. Auditory and visual localization accuracy in young children and adults. Int J Pediatr Otorhinolaryngol 2015; 79:844-851. [PMID: 25841637] [DOI: 10.1016/j.ijporl.2015.03.016]
Abstract
OBJECTIVE This study aimed to measure and compare sound and light source localization ability in young children and adults who have normal hearing and normal/corrected vision, in order to determine the extent to which age, type of stimuli, and stimulus order affect sound localization accuracy. METHODS Two experiments were conducted. The first involved a group of adults only. The second involved a group of 30 children aged 3 to 5 years. Testing occurred in a sound-treated booth containing a semi-circular array of 15 loudspeakers set at 10° intervals from -70° to 70° azimuth. Each loudspeaker had a tiny light bulb and a small picture fastened underneath. Seven of the loudspeakers were used to randomly test sound and light source identification. The sound stimulus was the word "baseball". The light stimulus was a flashing of a light bulb triggered by the digital signal of the word "baseball". Each participant was asked to face 0° azimuth and identify the location of the test stimulus upon presentation. Adults used a computer mouse to click on an icon; children responded by verbally naming or walking toward the picture underneath the corresponding loudspeaker or light. A mixed experimental design using repeated measures was used to determine the effects of age and stimulus type on localization accuracy, and to compare the effects of stimulus order (light first/last) and varying or fixed sound intensity, in children and adults. RESULTS Localization accuracy was significantly better for light stimuli than sound stimuli for both children and adults. Children, compared to adults, showed significantly greater localization errors for auditory stimuli. Three-year-old children had significantly greater sound localization errors than 4- and 5-year-olds. Adults performed better on the sound localization task when the light localization task occurred first.
CONCLUSIONS Young children can understand and attend to localization tasks, but show poorer localization accuracy than adults in sound localization. This may be a reflection of differences in sensory modality development and/or central processes in young children, compared to adults.
Affiliation(s)
- Karen Martin, University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, United States
- Patti Johnstone, University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, United States
- Mark Hedrick, University of Tennessee Health Science Center, Department of Audiology and Speech Pathology, 578 South Stadium Hall, Knoxville, TN 37996, United States
29
Hillock-Dunn A, Buss E, Duncan N, Roush PA, Leibold LJ. Effects of nonlinear frequency compression on speech identification in children with hearing loss. Ear Hear 2015; 35:353-365. [PMID: 24496288] [DOI: 10.1097/aud.0000000000000007]
Abstract
OBJECTIVES This study evaluated effects of nonlinear frequency compression (NLFC) processing in children with hearing loss for consonant identification in quiet and for spondee identification in competing noise or speech. It was predicted that participants would benefit from NLFC for consonant identification in quiet when access to high-frequency information was critical, but that NLFC would be less beneficial, or even detrimental, when identification relied on mid-frequency cues. Further, it was hypothesized that NLFC could result in greater susceptibility to masking in the spondee task. The rationale for these predictions is that improved access to high-frequency information comes at the cost of decreased spectral resolution. DESIGN A repeated-measures design compared speech-perception outcomes in 17 pediatric hearing aid users (9 to 17 years of age) wearing Naida V SP "laboratory" hearing aids with NLFC on and off. Data were also collected in an initial baseline session in which children wore their personal hearing aids. Children with a wide range of audiometric configurations were included, but all participants were full-time users of hearing aids with active NLFC. For each hearing aid condition, speech perception was assessed in the sound field by using a closed-set 12-alternative consonant-vowel identification measure in quiet, and a closed-set four-alternative spondee-identification measure in a speech-shaped noise or in a two-talker speech masker. RESULTS No significant differences in performance were observed between laboratory hearing aid conditions with NLFC activated or deactivated for either speech-perception measure. An unexpected finding was that the majority of participants had no difficulty identifying the high-frequency consonant /s/ even when NLFC was deactivated. 
Investigation into individual differences revealed that subjects with a greater difference in audible bandwidth with NLFC on versus NLFC off were less likely to demonstrate improvements in high-frequency consonant identification in quiet, but were more likely to demonstrate improvements in spondee identification in speech-shaped noise. Group results observed in the initial baseline assessment using personal aids fitted with more aggressive NLFC settings than used in laboratory aids indicated better consonant identification accuracy in quiet. However, spondee identification in the two-talker masker was poorer with personal compared with laboratory hearing aids. Comparisons across personal and laboratory hearing aids are tempered, however, by the potential of an order effect. CONCLUSIONS The observation of comparable performance with NLFC on and NLFC off in the laboratory aids provides evidence that NLFC is neither detrimental nor advantageous when modest in strength. Results with personal hearing aids fitted with stronger compression settings than laboratory aids (NLFC on) highlight the critical need for further research to determine the impact of NLFC processing on speech perception for a wider range of speech-perception measures and compression settings.
Affiliation(s)
- Andrea Hillock-Dunn, Department of Allied Health Sciences, The University of North Carolina at Chapel Hill, School of Medicine, Chapel Hill, North Carolina, USA; Department of Otolaryngology/Head and Neck Surgery, The University of North Carolina at Chapel Hill, School of Medicine, Chapel Hill, North Carolina, USA
30
Calandruccio L, Gomez B, Buss E, Leibold LJ. Development and preliminary evaluation of a pediatric Spanish-English speech perception task. Am J Audiol 2014; 23:158-172. [PMID: 24686915] [DOI: 10.1044/2014_aja-13-0055]
Abstract
PURPOSE The purpose of this study was to develop a task to evaluate children's English and Spanish speech perception abilities in either noise or competing speech maskers. METHOD Eight bilingual Spanish-English and 8 age-matched monolingual English children (ages 4.9-16.4 years) were tested. A forced-choice, picture-pointing paradigm was selected for adaptively estimating masked speech reception thresholds. Speech stimuli were spoken by simultaneous bilingual Spanish-English talkers. The target stimuli were 30 disyllabic English and Spanish words, familiar to 5-year-olds and easily illustrated. Competing stimuli included either 2-talker English or 2-talker Spanish speech (corresponding to target language) and spectrally matched noise. RESULTS For both groups of children, regardless of test language, performance was significantly worse for the 2-talker than for the noise masker condition. No difference in performance was found between bilingual and monolingual children. Bilingual children performed significantly better in English than in Spanish in competing speech. For all listening conditions, performance improved with increasing age. CONCLUSIONS Results indicated that the stimuli and task were appropriate for speech recognition testing in both languages, providing a more conventional measure of speech-in-noise perception as well as a measure of complex listening. Further research is needed to determine performance for Spanish-dominant listeners and to evaluate the feasibility of implementation into routine clinical use.
Affiliation(s)
- Emily Buss, University of North Carolina at Chapel Hill