1. Anshu K, Kristensen K, Godar SP, Zhou X, Hartley SL, Litovsky RY. Speech Recognition and Spatial Hearing in Young Adults With Down Syndrome: Relationships With Hearing Thresholds and Auditory Working Memory. Ear Hear 2024; 45:1568-1584. [PMID: 39090791] [PMCID: PMC11493531] [DOI: 10.1097/aud.0000000000001549]
Abstract
OBJECTIVES Individuals with Down syndrome (DS) have a higher incidence of hearing loss (HL) compared with their peers without developmental disabilities. Little is known about the associations between HL and functional hearing for individuals with DS. This study investigated two aspects of auditory function, "what" (understanding the content of sound) and "where" (localizing the source of sound), in young adults with DS. Speech reception thresholds in quiet and in the presence of interferers provided insight into speech recognition, that is, the "what" aspect of auditory maturation. Insights into the "where" aspect of auditory maturation were gained from evaluating speech reception thresholds in colocated versus separated conditions (quantifying spatial release from masking) as well as right versus left discrimination and sound location identification. Auditory functions in the "where" domain develop during earlier stages of cognitive development, in contrast with the later-developing "what" functions. We hypothesized that young adults with DS would exhibit stronger "where" than "what" auditory functioning, albeit with the potential impact of HL. Considering the importance of auditory working memory and receptive vocabulary for speech recognition, we hypothesized that better speech recognition in young adults with DS, in quiet and with speech interferers, would be associated with better auditory working memory ability and receptive vocabulary. DESIGN Nineteen young adults with DS (aged 19 to 24 years) participated in the study and completed assessments of pure-tone audiometry, right versus left discrimination, sound location identification, and speech recognition in quiet and with speech interferers that were colocated or spatially separated. Results were compared with published data from children and adults without DS and HL, tested using similar protocols and stimuli. Digit Span tests assessed auditory working memory. Receptive vocabulary was examined using the Peabody Picture Vocabulary Test Fifth Edition. RESULTS Seven participants (37%) had HL in at least 1 ear; 4 individuals had mild HL, and 3 had moderate HL or worse. Participants with mild or no HL had ≥75% correct at 5° separation on the discrimination task and sound localization root mean square errors (mean ± SD: 8.73° ± 2.63°) within the range of adults in the comparison group. Speech reception thresholds in young adults with DS were higher than in all comparison groups. However, spatial release from masking did not differ between young adults with DS and comparison groups. Better (lower) speech reception thresholds were associated with better hearing and better auditory working memory ability. Receptive vocabulary did not predict speech recognition. CONCLUSIONS In the absence of HL, young adults with DS exhibited higher accuracy on spatial hearing tasks than on speech recognition tasks. Thus, auditory processes associated with the "where" pathways appear to be a relative strength compared with those associated with the "what" pathways in young adults with DS. Further, both HL and auditory working memory impairments contributed to difficulties in speech recognition in the presence of speech interferers. Future studies with larger samples are needed to replicate and extend these findings.
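The two spatial-hearing metrics reported here, spatial release from masking (SRM) and root mean square (RMS) localization error, are simple derived quantities. Below is a minimal sketch of how they are typically computed; the function names and example values are illustrative only and are not taken from this study.

import numpy as np

def spatial_release_from_masking(srt_colocated_db, srt_separated_db):
    """SRM (dB): improvement in speech reception threshold when the
    masker is spatially separated from the target (positive = benefit)."""
    return srt_colocated_db - srt_separated_db

def rms_localization_error(response_deg, target_deg):
    """RMS error (degrees) between reported and actual source azimuths."""
    response = np.asarray(response_deg, dtype=float)
    target = np.asarray(target_deg, dtype=float)
    return np.sqrt(np.mean((response - target) ** 2))

# Illustrative values only (not data from the study)
print(spatial_release_from_masking(srt_colocated_db=-2.0, srt_separated_db=-6.5))  # 4.5 dB
print(rms_localization_error([-10, 5, 20, -45], [-15, 0, 30, -40]))                # ~6.6 deg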
Affiliation(s)
- Kumari Anshu
- Waisman Center, University of Wisconsin–Madison, Madison, WI, USA
- Kayla Kristensen
- Waisman Center, University of Wisconsin–Madison, Madison, WI, USA
- Shelly P. Godar
- Waisman Center, University of Wisconsin–Madison, Madison, WI, USA
- Xin Zhou
- Waisman Center, University of Wisconsin–Madison, Madison, WI, USA
- Currently at The Chinese University of Hong Kong, Hong Kong
- Sigan L. Hartley
- Waisman Center, University of Wisconsin–Madison, Madison, WI, USA
- School of Human Ecology, University of Wisconsin–Madison, Madison, WI, USA
- Ruth Y. Litovsky
- Waisman Center, University of Wisconsin–Madison, Madison, WI, USA
- Department of Communication Sciences and Disorders, University of Wisconsin–Madison, Madison, WI, USA

2. Li Z, Li B, Tsou YT, Wang L, Liang W, Rieffe C. A longitudinal study on moral emotions and psychosocial functioning among preschool children with and without hearing loss. Dev Psychopathol 2024:1-12. [PMID: 39328179] [DOI: 10.1017/s0954579424001408]
Abstract
Moral emotions such as shame, guilt, and pride are crucial to young children's social-emotional development. Because hearing loss restricts access to the social world, deaf and hard of hearing (DHH) children may encounter extra difficulties in their development of moral emotions. However, little research so far has investigated the developmental trajectory of moral emotions during the preschool years in DHH children. The present study used a longitudinal design to explore the developmental trajectories of shame, guilt, and pride in a sample of 259 Chinese DHH and typically hearing (TH) preschoolers aged 2 to 6 years old. The results indicated that, according to parent reports, DHH children manifested lower levels of guilt and pride compared with their TH peers, yet the manifested levels of shame, guilt, and pride increased throughout the preschool years at a similar pace in all children. Moreover, whilst guilt and pride contributed to increasing levels of psychosocial functioning over the preschool years, shame contributed to lower social competence and more externalizing behaviors in DHH and TH preschoolers. The outcomes imply that early interventions and adjustment to hearing loss could be useful to safeguard the social development of children with severe hearing loss, and that cultural variation should be taken into consideration when studying moral emotions in a Chinese cultural context.
Affiliation(s)
- Zijian Li
- Unit of Developmental and Educational Psychology, Institute of Psychology, Faculty of Social and Behavioral Sciences, Leiden University, Leiden, Netherlands
- Boya Li
- Unit of Developmental and Educational Psychology, Institute of Psychology, Faculty of Social and Behavioral Sciences, Leiden University, Leiden, Netherlands
- Yung-Ting Tsou
- Unit of Developmental and Educational Psychology, Institute of Psychology, Faculty of Social and Behavioral Sciences, Leiden University, Leiden, Netherlands
- Faculty of Social Sciences, Vrije Universiteit Amsterdam, Amsterdam, Netherlands
- Liyan Wang
- China Rehabilitation Research Center for Hearing and Speech Impairment, Beijing, China
- Wei Liang
- China Rehabilitation Research Center for Hearing and Speech Impairment, Beijing, China
- Carolien Rieffe
- Unit of Developmental and Educational Psychology, Institute of Psychology, Faculty of Social and Behavioral Sciences, Leiden University, Leiden, Netherlands
- Department of Human Media Interaction, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Enschede, Netherlands
- Department of Psychology and Human Development, Institute of Education, University College London, London, UK

3. König C, Baumann U, Stöver T, Weissgerber T. Impact of Reverberation on Speech Perception in Noise in Bimodal/Bilateral Cochlear Implant Users with and without Residual Hearing. J Clin Med 2024; 13:5269. [PMID: 39274482] [PMCID: PMC11396047] [DOI: 10.3390/jcm13175269]
Abstract
(1) Background: The aim of the present study was to assess the impact of reverberation on speech perception in noise and spatial release from masking (SRM) in bimodal or bilateral cochlear implant (CI) users and in CI subjects with low-frequency residual hearing using combined electric-acoustic stimulation (EAS). (2) Methods: In total, 10 bimodal CI users, 14 bilateral CI users, 14 EAS users, and 17 normal-hearing (NH) controls took part in the study. Speech reception thresholds (SRTs) in unmodulated noise were assessed in a co-located masker condition (S0N0) and with a spatial separation of speech and noise (S0N60), both in free field and in a loudspeaker-based room simulation for two different reverberation times. (3) Results: There was a significant detrimental effect of reverberation on SRTs and SRM in all subject groups. A significant difference between the NH group and all the CI/EAS groups was found. There was no significant difference in SRTs between any CI and EAS group. Only NH subjects achieved spatial release from masking in reverberation, whereas no beneficial effect of spatial separation of speech and noise was found in any CI/EAS group. (4) Conclusions: The subject group with electric-acoustic stimulation did not yield a superior outcome in terms of speech perception in noise under reverberation when the noise was presented towards the better hearing ear.
Affiliation(s)
- Clara König
- Audiological Acoustics, ENT Department, University Hospital, Goethe University Frankfurt, 60590 Frankfurt am Main, Germany
- Uwe Baumann
- Audiological Acoustics, ENT Department, University Hospital, Goethe University Frankfurt, 60590 Frankfurt am Main, Germany
- Timo Stöver
- ENT Department, University Hospital, Goethe University Frankfurt, 60590 Frankfurt am Main, Germany
- Tobias Weissgerber
- Audiological Acoustics, ENT Department, University Hospital, Goethe University Frankfurt, 60590 Frankfurt am Main, Germany

4. Nagels L, Gaudrain E, Vickers D, Hendriks P, Başkent D. Prelingually Deaf Children With Cochlear Implants Show Better Perception of Voice Cues and Speech in Competing Speech Than Postlingually Deaf Adults With Cochlear Implants. Ear Hear 2024; 45:952-968. [PMID: 38616318] [PMCID: PMC11175806] [DOI: 10.1097/aud.0000000000001489]
Abstract
OBJECTIVES Postlingually deaf adults with cochlear implants (CIs) have difficulties with perceiving differences in speakers' voice characteristics and benefit little from voice differences for the perception of speech in competing speech. However, not much is known yet about the perception and use of voice characteristics in prelingually deaf implanted children with CIs. Unlike CI adults, most CI children became deaf during the acquisition of language. Extensive neuroplastic changes during childhood could make CI children better at using the available acoustic cues than CI adults, or the lack of exposure to a normal acoustic speech signal could make it more difficult for them to learn which acoustic cues they should attend to. This study aimed to examine to what degree CI children can perceive voice cues and benefit from voice differences for perceiving speech in competing speech, comparing their abilities to those of normal-hearing (NH) children and CI adults. DESIGN CI children's voice cue discrimination (experiment 1), voice gender categorization (experiment 2), and benefit from target-masker voice differences for perceiving speech in competing speech (experiment 3) were examined in three experiments. The main focus was on the perception of mean fundamental frequency (F0) and vocal-tract length (VTL), the primary acoustic cues related to speakers' anatomy and perceived voice characteristics, such as voice gender. RESULTS CI children's F0 and VTL discrimination thresholds indicated lower sensitivity to differences compared with their NH-age-equivalent peers, but their mean discrimination thresholds of 5.92 semitones (st) for F0 and 4.10 st for VTL indicated higher sensitivity than postlingually deaf CI adults with mean thresholds of 9.19 st for F0 and 7.19 st for VTL. Furthermore, CI children's perceptual weighting of F0 and VTL cues for voice gender categorization closely resembled that of their NH-age-equivalent peers, in contrast with CI adults. Finally, CI children had more difficulties in perceiving speech in competing speech than their NH-age-equivalent peers, but they performed better than CI adults. Unlike CI adults, CI children showed a benefit from target-masker voice differences in F0 and VTL, similar to NH children. CONCLUSION Although CI children's F0 and VTL voice discrimination scores were overall lower than those of NH children, their weighting of F0 and VTL cues for voice gender categorization and their benefit from target-masker differences in F0 and VTL resembled that of NH children. Together, these results suggest that prelingually deaf implanted CI children can effectively utilize spectrotemporally degraded F0 and VTL cues for voice and speech perception, generally outperforming postlingually deaf CI adults in comparable tasks. These findings underscore the presence of F0 and VTL cues in the CI signal to a certain degree and suggest other factors contributing to the perception challenges faced by CI adults.
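The F0 and VTL discrimination thresholds above are expressed in semitones, a log-frequency unit that makes differences comparable across reference frequencies. As a point of reference, a minimal sketch of the standard conversion is shown below; the values plugged in are illustrative, not data from the study.

import math

def semitone_difference(f_ref_hz, f_cmp_hz):
    """Difference between two frequencies in semitones (12 semitones = 1 octave)."""
    return 12.0 * math.log2(f_cmp_hz / f_ref_hz)

def frequency_at_offset(f_ref_hz, semitones):
    """Frequency lying a given number of semitones above a reference."""
    return f_ref_hz * 2.0 ** (semitones / 12.0)

# Illustrative: a 5.92-semitone step up from a 200-Hz reference F0
print(frequency_at_offset(200.0, 5.92))   # ~281 Hz
print(semitone_difference(200.0, 281.0))  # ~5.9 semitones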
Affiliation(s)
- Leanne Nagels
- Center for Language and Cognition Groningen (CLCG), University of Groningen, Groningen, The Netherlands
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- Etienne Gaudrain
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- CNRS UMR 5292, Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics, Inserm UMRS 1028, Université Claude Bernard Lyon 1, Université de Lyon, Lyon, France
- Deborah Vickers
- Cambridge Hearing Group, Sound Lab, Clinical Neurosciences Department, University of Cambridge, Cambridge, United Kingdom
- Petra Hendriks
- Center for Language and Cognition Groningen (CLCG), University of Groningen, Groningen, The Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- W.J. Kolff Institute for Biomedical Engineering and Materials Science, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands

5. Lalonde K, Peng ZE, Halverson DM, Dwyer GA. Children's use of spatial and visual cues for release from perceptual masking. J Acoust Soc Am 2024; 155:1559-1569. [PMID: 38393738] [PMCID: PMC10890829] [DOI: 10.1121/10.0024766]
Abstract
This study examined the role of visual speech in providing release from perceptual masking in children by comparing visual speech benefit across conditions with and without a spatial separation cue. Auditory-only and audiovisual speech recognition thresholds in a two-talker speech masker were obtained from 21 children with typical hearing (7-9 years of age) using a color-number identification task. The target was presented from a loudspeaker at 0° azimuth. Masker source location varied across conditions. In the spatially collocated condition, the masker was also presented from the loudspeaker at 0° azimuth. In the spatially separated condition, the masker was presented from the loudspeaker at 0° azimuth and a loudspeaker at -90° azimuth, with the signal from the -90° loudspeaker leading the signal from the 0° loudspeaker by 4 ms. The visual stimulus (static image or video of the target talker) was presented at 0° azimuth. Children achieved better thresholds when the spatial cue was provided and when the visual cue was provided. Visual and spatial cue benefit did not differ significantly depending on the presence of the other cue. Additional studies are needed to characterize how children's preferential use of visual and spatial cues varies depending on the strength of each cue.
Affiliation(s)
- Kaylah Lalonde
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska 68131, USA
- Z Ellen Peng
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska 68131, USA
- Destinee M Halverson
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska 68131, USA
- Grace A Dwyer
- Center for Hearing Research, Boys Town National Research Hospital, Omaha, Nebraska 68131, USA

6. Creff G, Lambert C, Coudert P, Pean V, Laurent S, Godey B. Comparison of Tonotopic and Default Frequency Fitting for Speech Understanding in Noise in New Cochlear Implantees: A Prospective, Randomized, Double-Blind, Cross-Over Study. Ear Hear 2024; 45:35-52. [PMID: 37823850] [DOI: 10.1097/aud.0000000000001423]
Abstract
OBJECTIVES While cochlear implants (CIs) have provided benefits for speech recognition in quiet for subjects with severe-to-profound hearing loss, speech recognition in noise remains challenging. A body of evidence suggests that reducing frequency-to-place mismatch may positively affect speech perception. Thus, a fitting method based on a tonotopic map may improve speech perception results in quiet and noise. The aim of our study was to assess the impact of a tonotopic map on speech perception in noise and quiet in new CI users. DESIGN A prospective, randomized, double-blind, two-period cross-over study in 26 new CI users was performed over a 6-month period. New CI users older than 18 years with bilateral severe-to-profound sensorineural hearing loss or complete hearing loss for less than 5 years were selected in the University Hospital Centre of Rennes in France. An anatomical tonotopic map was created using postoperative flat-panel computed tomography and a reconstruction software based on the Greenwood function. Each participant was randomized to receive a conventional map followed by a tonotopic map or vice versa. Each setting was maintained for 6 weeks, at the end of which participants performed speech perception tasks. The primary outcome measure was speech recognition in noise. Participants were allocated to sequences by block randomization of size two with a ratio 1:1 (CONSORT Guidelines). Participants and those assessing the outcomes were blinded to the intervention. RESULTS Thirteen participants were randomized to each sequence. Two of the 26 participants recruited (one in each sequence) had to be excluded due to the COVID-19 pandemic. Twenty-four participants were analyzed. Speech recognition in noise was significantly better with the tonotopic fitting at all signal-to-noise ratio (SNR) levels tested [SNR = +9 dB, p = 0.002, mean effect (ME) = 12.1%, 95% confidence interval (95% CI) = 4.9 to 19.2, standardized effect size (SES) = 0.71; SNR = +6 dB, p < 0.001, ME = 16.3%, 95% CI = 9.8 to 22.7, SES = 1.07; SNR = +3 dB, p < 0.001, ME = 13.8%, 95% CI = 6.9 to 20.6, SES = 0.84; SNR = 0 dB, p = 0.003, ME = 10.8%, 95% CI = 4.1 to 17.6, SES = 0.68]. Neither period nor interaction effects were observed for any signal level. Speech recognition in quiet (p = 0.66) and tonal audiometry (p = 0.203) did not significantly differ between the two settings. Ninety-two percent of the participants kept the tonotopy-based map after the study period. No correlation was found between speech-in-noise perception and age, duration of hearing deprivation, angular insertion depth, or position or width of the frequency filters allocated to the electrodes. CONCLUSION For new CI users, tonotopic fitting appears to be more efficient than the default frequency fitting because it allows for better speech recognition in noise without compromising understanding in quiet.
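The anatomical map described above relies on the Greenwood function, which relates position along the cochlea to characteristic frequency. A minimal sketch of that mapping is given below using the commonly cited human parameters (A = 165.4, a = 2.1, k = 0.88); this is a generic illustration, not the reconstruction software used in the study.

def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
    """Greenwood function for the human cochlea.

    x: relative distance from the apex (0.0 = apex, 1.0 = base).
    Returns the characteristic frequency in Hz at that position.
    """
    return A * (10.0 ** (a * x) - k)

# Illustrative: frequencies at the apex, midpoint, and base
for x in (0.0, 0.5, 1.0):
    print(f"x = {x:.1f} -> {greenwood_frequency(x):8.1f} Hz")
# Roughly 20 Hz at the apex, ~1.7 kHz at the midpoint, ~20.7 kHz at the base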
Affiliation(s)
- Gwenaelle Creff
- Department of Otolaryngology-Head and Neck Surgery (HNS), University Hospital, Rennes, France
- MediCIS, LTSI (Image and Signal Processing Laboratory), INSERM, U1099, Rennes, France
- Cassandre Lambert
- Department of Otolaryngology-Head and Neck Surgery (HNS), University Hospital, Rennes, France
- Paul Coudert
- Department of Otolaryngology-Head and Neck Surgery (HNS), University Hospital, Rennes, France
- Benoit Godey
- Department of Otolaryngology-Head and Neck Surgery (HNS), University Hospital, Rennes, France
- MediCIS, LTSI (Image and Signal Processing Laboratory), INSERM, U1099, Rennes, France
- Hearing Aid Academy, Javene, France

7. Park LR, Dillon MT, Buss E, Brown KD. Two-Year Outcomes of Cochlear Implant Use for Children With Unilateral Hearing Loss: Benefits and Comparison to Children With Normal Hearing. Ear Hear 2023; 44:955-968. [PMID: 36879386] [PMCID: PMC10426784] [DOI: 10.1097/aud.0000000000001353]
Abstract
OBJECTIVES Children with severe-to-profound unilateral hearing loss, including cases of single-sided deafness (SSD), lack access to binaural cues that support spatial hearing, such as recognizing speech in complex multisource environments and sound source localization. Listening in a monaural condition negatively impacts communication, learning, and quality of life for children with SSD. Cochlear implant (CI) use may restore binaural hearing abilities and improve outcomes as compared to alternative treatments or no treatment. This study investigated performance over 24 months of CI use in young children with SSD as compared to the better hearing ear alone and to children with bilateral normal hearing (NH). DESIGN Eighteen children with SSD who received a CI between the ages of 3.5 and 6.5 years as part of a prospective clinical trial completed assessments of word recognition in quiet, masked sentence recognition, and sound source localization at regular intervals out to 24-month postactivation. Eighteen peers with bilateral NH, matched by age at the group level, completed the same test battery. Performance at 24-month postactivation for the SSD group was compared to the performance of the NH group. RESULTS Children with SSD have significantly poorer speech recognition in quiet, masked sentence recognition, and localization both with and without the use of the CI than their peers with NH. The SSD group experienced significant benefits with the CI+NH versus the NH ear alone on measures of isolated word recognition, masked sentence recognition, and localization. These benefits were realized within the first 3 months of use and were maintained through the 24-month postactivation interval. CONCLUSIONS Young children with SSD who use a CI experience significant isolated word recognition and bilateral spatial hearing benefits, although their performance remains poorer than their peers with NH.
Affiliation(s)
- Lisa R. Park
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, North Carolina, USA
- Margaret T. Dillon
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, North Carolina, USA
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, North Carolina, USA
- Kevin D. Brown
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, North Carolina, USA

8. Syeda A, Nisha KV, Jain C. Age differences in binaural and working memory abilities in school-going children. Int J Pediatr Otorhinolaryngol 2023; 171:111652. [PMID: 37467581] [DOI: 10.1016/j.ijporl.2023.111652]
Abstract
OBJECTIVES Binaural hearing is the interplay of acoustic cues (interaural time differences: ITD, interaural level differences: ILD, and spectral cues) and cognitive abilities (e.g., working memory, attention). The current study investigated the effect of developmental age on auditory binaural resolution and working memory and the association between them (if any) in school-going children. METHODS Fifty-seven normal-hearing school-going children aged 6 to 15 years were recruited for the study. The participants were divided into three groups: Group 1 (n = 17; Mage = 7.1 ± 0.72 years), Group 2 (n = 23; Mage = 10.2 ± 0.8 years), and Group 3 (n = 17; Mage = 14.1 ± 1.3 years). Group 4, with normal-hearing young adults (n = 20; Mage = 21.1 ± 3.2 years), was included for comparing the maturational changes in the former groups with adult values. Tests of binaural resolution (ITD and ILD thresholds) and auditory working memory (forward and backward digit span and 2n-back digit) were administered to all the participants. RESULTS Results indicated a main effect of age on spatial resolution and working memory, with the medians of the lower age groups (Group 1 and Group 2) being significantly poorer (p < 0.01) than those of the higher age groups (Group 3 and Group 4). Groups 2, 3, and 4 performed significantly better than Group 1 (p < 0.001) on the forward span and ILD tasks. Groups 3 and 4 had significantly better ITD (p = 0.04), backward span (p = 0.02), and 2n-back scores than Group 2. A significant correlation between scores on working memory tasks and spatial resolution thresholds was also found. On discriminant function analysis, backward span and ITD emerged as sensitive measures for segregating the older groups (Group 3 and Group 4) from the younger groups (Group 1 and Group 2). CONCLUSIONS The present study showed that ILD thresholds and forward digit span mature by nine years. However, the backward digit span score continued to mature beyond 15 years. This finding can be attributed to the influence of auditory attention (a working memory process) on binaural resolution, which is reported to mature until late adolescence.
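ITD and ILD, the two binaural cues probed here, can be estimated from a stereo signal with a cross-correlation peak (ITD) and a level ratio in dB (ILD). A minimal sketch follows; it is a generic illustration and not the psychoacoustic test procedure used in the study.

import numpy as np

def estimate_itd_ild(left, right, fs):
    """Estimate ITD (s, positive = left ear leads) and ILD (dB, positive = left ear louder)."""
    left = np.asarray(left, dtype=float)
    right = np.asarray(right, dtype=float)
    # Cross-correlate right against left: the peak lag is the right ear's delay re the left
    xcorr = np.correlate(right, left, mode="full")
    lag = np.argmax(xcorr) - (len(left) - 1)
    itd = lag / fs
    # ILD: RMS level difference between ears in dB
    rms = lambda x: np.sqrt(np.mean(x ** 2))
    ild = 20.0 * np.log10(rms(left) / rms(right))
    return itd, ild

# Illustrative example: left ear leads by 0.5 ms and is ~6 dB more intense
fs = 48000
rng = np.random.default_rng(0)
src = rng.standard_normal(4800)                       # 100-ms noise burst
delay = 24                                            # 0.5 ms at 48 kHz
left = 2.0 * src
right = np.concatenate([np.zeros(delay), src[:-delay]])
print(estimate_itd_ild(left, right, fs))              # ~(0.0005 s, ~6 dB)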
Affiliation(s)
- Aisha Syeda
- All India Institute of Speech and Hearing, Mysuru, Karnataka, 570006, India.
- Chandni Jain
- All India Institute of Speech and Hearing, Mysuru, Karnataka, 570006, India.

9. Oh Y, Srinivasan NK, Hartling CL, Gallun FJ, Reiss LAJ. Differential Effects of Binaural Pitch Fusion Range on the Benefits of Voice Gender Differences in a "Cocktail Party" Environment for Bimodal and Bilateral Cochlear Implant Users. Ear Hear 2023; 44:318-329. [PMID: 36395512] [PMCID: PMC9957805] [DOI: 10.1097/aud.0000000000001283]
Abstract
OBJECTIVES Some cochlear implant (CI) users are fitted with a CI in each ear ("bilateral"), while others have a CI in one ear and a hearing aid in the other ("bimodal"). Presently, evaluation of the benefits of bilateral or bimodal CI fitting does not take into account the integration of frequency information across the ears. This study tests the hypothesis that CI listeners, especially bimodal CI users, with a more precise integration of frequency information across ears ("sharp binaural pitch fusion") will derive greater benefit from voice gender differences in a multi-talker listening environment. DESIGN Twelve bimodal CI users and twelve bilateral CI users participated. First, binaural pitch fusion ranges were measured using the simultaneous, dichotic presentation of reference and comparison stimuli (electric pulse trains for CI ears and acoustic tones for HA ears) in opposite ears, with reference stimuli fixed and comparison stimuli varied in frequency/electrode to find the range perceived as a single sound. Direct electrical stimulation was used in implanted ears through the research interface, which allowed selective stimulation of one electrode at a time, and acoustic stimulation was used in the non-implanted ears through the headphone. Second, speech-on-speech masking performance was measured to estimate masking release by voice gender difference between target and maskers (VGRM). The VGRM was calculated as the difference in speech recognition thresholds of target sounds in the presence of same-gender or different-gender maskers. RESULTS Voice gender differences between target and masker talkers improved speech recognition performance for the bimodal CI group, but not the bilateral CI group. The bimodal CI users who benefited the most from voice gender differences were those who had the narrowest range of acoustic frequencies that fused into a single sound with stimulation from a single electrode from the CI in the opposite ear. There was no similar voice gender difference benefit of narrow binaural fusion range for the bilateral CI users. CONCLUSIONS The findings suggest that broad binaural fusion reduces the acoustical information available for differentiating individual talkers in bimodal CI users, but not for bilateral CI users. In addition, for bimodal CI users with narrow binaural fusion who benefit from voice gender differences, bilateral implantation could lead to a loss of that benefit and impair their ability to selectively attend to one talker in the presence of multiple competing talkers. The results suggest that binaural pitch fusion, along with an assessment of residual hearing and other factors, could be important for assessing bimodal and bilateral CI users.
Affiliation(s)
- Yonghee Oh
- Department of Otolaryngology - Head and Neck Surgery and Communicative Disorders, University of Louisville, Louisville, Kentucky 40202, USA
- Nirmal Kumar Srinivasan
- Department of Speech-Language Pathology & Audiology, Towson University, Towson, Maryland 21252, USA
- Curtis L. Hartling
- Department of Otolaryngology, Oregon Health and Science University, Portland, Oregon 97239, USA
- Frederick J. Gallun
- National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, Oregon 97239, USA
- Lina A. J. Reiss
- Department of Otolaryngology, Oregon Health and Science University, Portland, Oregon 97239, USA

10. Peng ZE, Garcia A, Godar SP, Holt JR, Lee DJ, Litovsky RY. Hearing Preservation and Spatial Hearing Outcomes After Cochlear Implantation in Children With TMPRSS3 Mutations. Otol Neurotol 2023; 44:21-25. [PMID: 36509434] [PMCID: PMC9764138] [DOI: 10.1097/mao.0000000000003747]
Abstract
OBJECTIVE Investigate hearing preservation and spatial hearing outcomes in children with TMPRSS3 mutations who received bilateral cochlear implantation. STUDY DESIGN AND METHODS Longitudinal case series report. Two siblings (ages 7 and 4 yr) with TMPRSS3 mutations and down-sloping audiograms received sequential bilateral cochlear implantation with hearing preservation, combining low-frequency acoustic amplification with high-frequency electrical stimulation. Spatial hearing, including speech perception and localization, was assessed at three time points: preoperatively and after activation of the first and second cochlear implant (CI). RESULTS Both children showed low-frequency hearing preservation in unaided, acoustic-only audiograms. Both children demonstrated improvements in speech perception in both quiet and noise after CI activation. The emergence of spatial hearing was observed. Each child's overall speech perception and spatial hearing when listening with bilateral CIs were within the range of, or better than, published group data from children with bilateral CIs of other etiologies. CONCLUSION Bilateral cochlear implantation with hearing preservation is a viable option for managing hearing loss in pediatric patients with TMPRSS3 mutations.
Affiliation(s)
- Z. Ellen Peng
- Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
- Alejandro Garcia
- Otolaryngology, Massachusetts Eye and Ear Infirmary, Boston, MA, USA
- Shelly P. Godar
- Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
- Jeffrey R. Holt
- Boston Children’s Hospital & Harvard Medical School, Boston, MA, USA
- Daniel J. Lee
- Otolaryngology, Massachusetts Eye and Ear Infirmary, Boston, MA, USA
- Ruth Y. Litovsky
- Waisman Center, University of Wisconsin-Madison, Madison, WI, USA

11. Nicastri M, Giallini I, Inguscio BMS, Turchetta R, Guerzoni L, Cuda D, Portanova G, Ruoppolo G, Dincer D'Alessandro H, Mancini P. The influence of auditory selective attention on linguistic outcomes in deaf and hard of hearing children with cochlear implants. Eur Arch Otorhinolaryngol 2023; 280:115-124. [PMID: 35831674] [DOI: 10.1007/s00405-022-07463-y]
Abstract
PURPOSE Auditory selective attention (ASA) is crucial for focusing on significant auditory stimuli without being distracted by irrelevant auditory signals, and it plays an important role in language development. The present study aimed to investigate the unique contribution of ASA to the linguistic levels achieved by a group of cochlear-implanted (CI) children. METHODS Thirty-four CI children with a median age of 10.05 years were tested using both the "Batteria per la Valutazione dell'Attenzione Uditiva e della Memoria di Lavoro Fonologica nell'età evolutiva-VAUM-ELF" to assess their ASA skills and two Italian standardized tests to measure lexical and morphosyntactic skills. A regression analysis, including demographic and audiological variables, was conducted to assess the unique contribution of ASA to language skills. RESULTS The percentages of CI children with adequate ASA performance ranged from 29.4% to 50%. Bilateral CI children performed better than their unilateral peers. ASA skills contributed significantly to linguistic skills, alone accounting for 25% of the observed variance. CONCLUSIONS The present findings are clinically relevant as they highlight the importance of assessing ASA skills as early as possible, reflecting their important role in language development. Using simple clinical tools, ASA skills could be studied at early developmental stages. This may provide additional information to outcomes from traditional auditory tests and may allow us to implement specific training programs that could positively contribute to the development of the neural mechanisms of ASA and, consequently, induce improvements in language skills.
Affiliation(s)
- Maria Nicastri
- Department of Sense Organs, Sapienza University, Rome, Italy
- Ilaria Giallini
- Department of Sense Organs, Sapienza University, Rome, Italy
- Letizia Guerzoni
- Department of Otorhinolaryngology, "Guglielmo da Saliceto" Hospital, Piacenza, Italy
- Domenico Cuda
- Department of Otorhinolaryngology, "Guglielmo da Saliceto" Hospital, Piacenza, Italy
- Giovanni Ruoppolo
- I.R.C.C.S. San Raffaele Pisana, Via Nomentana, 401, 00162, Rome, Italy

12. The Impact of Synchronized Cochlear Implant Sampling and Stimulation on Free-Field Spatial Hearing Outcomes: Comparing the ciPDA Research Processor to Clinical Processors. Ear Hear 2022; 43:1262-1272. [PMID: 34882619] [PMCID: PMC9174346] [DOI: 10.1097/aud.0000000000001179]
Abstract
OBJECTIVES Bilateral cochlear implant (BiCI) listeners use independent processors in each ear. This independence and lack of shared hardware prevents control of the timing of sampling and stimulation across ears, which precludes the development of bilaterally-coordinated signal processing strategies. As a result, these devices potentially reduce access to binaural cues and introduce disruptive artifacts. For example, measurements from two clinical processors demonstrate that independently-running processors introduce interaural incoherence. These issues are typically avoided in the laboratory by using research processors with bilaterally-synchronized hardware. However, these research processors do not typically run in real-time and are difficult to take out into the real-world due to their benchtop nature. Hence, the question of whether just applying hardware synchronization to reduce bilateral stimulation artifacts (and thereby potentially improve functional spatial hearing performance) has been difficult to answer. The CI personal digital assistant (ciPDA) research processor, which uses one clock to drive two processors, presented an opportunity to examine whether synchronization of hardware can have an impact on spatial hearing performance. DESIGN Free-field sound localization and spatial release from masking (SRM) were assessed in 10 BiCI listeners using both their clinical processors and the synchronized ciPDA processor. For sound localization, localization accuracy was compared within-subject for the two processor types. For SRM, speech reception thresholds were compared for spatially separated and co-located configurations, and the amount of unmasking was compared for synchronized and unsynchronized hardware. There were no deliberate changes of the sound processing strategy on the ciPDA to restore or improve binaural cues. RESULTS There was no significant difference in localization accuracy between unsynchronized and synchronized hardware (p = 0.62). Speech reception thresholds were higher with the ciPDA. In addition, although five of eight participants demonstrated improved SRM with synchronized hardware, there was no significant difference in the amount of unmasking due to spatial separation between synchronized and unsynchronized hardware (p = 0.21). CONCLUSIONS Using processors with synchronized hardware did not yield an improvement in sound localization or SRM for all individuals, suggesting that mere synchronization of hardware is not sufficient for improving spatial hearing outcomes. Further work is needed to improve sound coding strategies to facilitate access to spatial hearing cues. This study provides a benchmark for spatial hearing performance with real-time, bilaterally-synchronized research processors.
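The "interaural incoherence" introduced by independently running processors can be quantified as the peak of the normalized interaural cross-correlation of the two output streams. The sketch below shows one common way to compute that coherence value; it is a generic illustration, not the measurement procedure used with the clinical processors in this study.

import numpy as np

def interaural_coherence(left, right):
    """Peak of the normalized cross-correlation between the two ear signals.

    Returns a value between 0 (fully incoherent) and 1 (identical up to a
    delay and scale factor)."""
    left = np.asarray(left, dtype=float) - np.mean(left)
    right = np.asarray(right, dtype=float) - np.mean(right)
    xcorr = np.correlate(left, right, mode="full")
    norm = np.sqrt(np.sum(left ** 2) * np.sum(right ** 2))
    return np.max(np.abs(xcorr)) / norm

# Illustrative: identical streams vs. streams corrupted by independent noise
rng = np.random.default_rng(1)
stream = rng.standard_normal(8000)
print(interaural_coherence(stream, stream))                                    # 1.0
print(interaural_coherence(stream, stream + 1.0 * rng.standard_normal(8000)))  # < 1.0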

13. Novel Approaches to Measure Spatial Release From Masking in Children With Bilateral Cochlear Implants. Ear Hear 2022; 43:101-114. [PMID: 34133400] [PMCID: PMC8671563] [DOI: 10.1097/aud.0000000000001080]
Abstract
OBJECTIVES To investigate the role of auditory cues for spatial release from masking (SRM) in children with bilateral cochlear implants (BiCIs) and compare their performance with children with normal hearing (NH). To quantify the contribution to speech intelligibility benefits from individual auditory cues: head shadow, binaural redundancy, and interaural differences; as well as from multiple cues: SRM and binaural squelch. To assess SRM using a novel approach of adaptive target-masker angular separation, which provides a more functionally relevant assessment in realistic complex auditory environments. DESIGN Children fitted with BiCIs (N = 11) and with NH (N = 18) were tested in virtual acoustic space that was simulated using head-related transfer functions measured from individual children with BiCIs behind the ear and from a standard head and torso simulator for all NH children. In experiment I, by comparing speech reception thresholds across 4 test conditions that varied in target-masker spatial separation (colocated versus separated at 180°) and listening conditions (monaural versus binaural/bilateral listening), intelligibility benefits were derived for individual auditory cues for SRM. In experiment II, SRM was quantified using a novel measure to find the minimum angular separation (MAS) between the target and masker to achieve a fixed 20% intelligibility improvement. Target speech was fixed at either +90 or -90° azimuth on the side closer to the better ear (+90° for all NH children) and masker locations were adaptively varied. RESULTS In experiment I, children with BiCIs as a group had smaller intelligibility benefits from head shadow than NH children. No group difference was observed in benefits from binaural redundancy or interaural difference cues. In both groups of children, individuals who gained a larger benefit from interaural differences relied less on monaural head shadow, and vice versa. In experiment II, all children with BiCIs demonstrated measurable MAS thresholds <180° and on average larger than that from NH children. Eight of 11 children with BiCIs and all NH children had a MAS threshold <90°, requiring interaural differences only to gain the target intelligibility benefit; whereas the other 3 children with BiCIs had a MAS between 120° and 137°, requiring monaural head shadow for SRM. CONCLUSIONS When target and maskers were separated at 180° on opposing hemifields, children with BiCIs demonstrated greater intelligibility benefits from head shadow and interaural differences than previous literature showed with a smaller separation. Children with BiCIs demonstrated individual differences in using auditory cues for SRM. From the MAS thresholds, more than half of the children with BiCIs demonstrated robust access to interaural differences without needing additional monaural head shadow for SRM. Both experiments led to the conclusion that individualized fitting strategies in the bilateral devices may be warranted to maximize spatial hearing for children with BiCIs in complex auditory environments.
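Experiment I derives each spatial-hearing benefit as a difference between speech reception thresholds (SRTs) measured in particular listening and spatial configurations. The sketch below spells out the arithmetic behind the commonly used definitions of head shadow, binaural redundancy, binaural squelch, and SRM; the variable names and example SRTs are illustrative and do not reproduce the exact conditions of this study.

def spatial_hearing_benefits(srt):
    """Derive benefits (dB; positive = improvement) from a dict of SRTs keyed by
    (listening, masker_position), where listening is 'monaural' or 'bilateral' and
    masker_position is 'colocated' or 'separated' (masker moved away from the test ear)."""
    return {
        # Head shadow: monaural benefit of moving the masker away from the test ear
        "head_shadow": srt[("monaural", "colocated")] - srt[("monaural", "separated")],
        # Binaural redundancy: benefit of adding the second ear with a colocated masker
        "redundancy": srt[("monaural", "colocated")] - srt[("bilateral", "colocated")],
        # Binaural squelch: benefit of adding the second ear once the masker is separated
        "squelch": srt[("monaural", "separated")] - srt[("bilateral", "separated")],
        # SRM: bilateral benefit of separating the masker from the target
        "srm": srt[("bilateral", "colocated")] - srt[("bilateral", "separated")],
    }

# Illustrative SRTs in dB (lower = better); not data from the study
example = {("monaural", "colocated"): -2.0, ("monaural", "separated"): -6.0,
           ("bilateral", "colocated"): -3.5, ("bilateral", "separated"): -8.0}
print(spatial_hearing_benefits(example))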

14. Oh Y, Hartling CL, Srinivasan NK, Diedesch AC, Gallun FJ, Reiss LAJ. Factors underlying masking release by voice-gender differences and spatial separation cues in multi-talker listening environments in listeners with and without hearing loss. Front Neurosci 2022; 16:1059639. [PMID: 36507363] [PMCID: PMC9726925] [DOI: 10.3389/fnins.2022.1059639]
Abstract
Voice-gender differences and spatial separation are important cues for auditory object segregation. The goal of this study was to investigate the relationship of voice-gender difference benefit to the breadth of binaural pitch fusion, the perceptual integration of dichotic stimuli that evoke different pitches across ears, and the relationship of spatial separation benefit to localization acuity, the ability to identify the direction of a sound source. Twelve bilateral hearing aid (HA) users (age from 30 to 75 years) and eleven normal hearing (NH) listeners (age from 36 to 67 years) were tested in the following three experiments. First, speech-on-speech masking performance was measured as the threshold target-to-masker ratio (TMR) needed to understand a target talker in the presence of either same- or different-gender masker talkers. These target-masker gender combinations were tested with two spatial configurations (maskers co-located or 60° symmetrically spatially separated from the target) in both monaural and binaural listening conditions. Second, binaural pitch fusion range measurements were conducted using harmonic tone complexes around a 200-Hz fundamental frequency. Third, absolute localization acuity was measured using broadband (125-8000 Hz) noise and one-third octave noise bands centered at 500 and 3000 Hz. Voice-gender differences between target and maskers improved TMR thresholds for both listener groups in the binaural condition as well as both monaural (left ear and right ear) conditions, with greater benefit in co-located than spatially separated conditions. Voice-gender difference benefit was correlated with the breadth of binaural pitch fusion in the binaural condition, but not the monaural conditions, ruling out a role of monaural abilities in the relationship between binaural fusion and voice-gender difference benefits. Spatial separation benefit was not significantly correlated with absolute localization acuity. In addition, greater spatial separation benefit was observed in NH listeners than in bilateral HA users, indicating a decreased ability of HA users to benefit from spatial release from masking (SRM). These findings suggest that sharp binaural pitch fusion may be important for maximal speech perception in multi-talker environments for both NH listeners and bilateral HA users.
Affiliation(s)
- Yonghee Oh
- Department of Otolaryngology and Communicative Disorders, University of Louisville, Louisville, KY, United States
- National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, OR, United States
- Curtis L. Hartling
- Department of Otolaryngology, Oregon Health & Science University, Portland, OR, United States
- Nirmal Kumar Srinivasan
- Department of Speech-Language Pathology & Audiology, Towson University, Towson, MD, United States
- Anna C. Diedesch
- Department of Communication Sciences and Disorders, Western Washington University, Bellingham, WA, United States
- Frederick J. Gallun
- National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, OR, United States
- Department of Otolaryngology, Oregon Health & Science University, Portland, OR, United States
- Lina A. J. Reiss
- National Center for Rehabilitative Auditory Research, VA Portland Health Care System, Portland, OR, United States
- Department of Otolaryngology, Oregon Health & Science University, Portland, OR, United States

15. Peng ZE, Pausch F, Fels J. Spatial release from masking in reverberation for school-age children. J Acoust Soc Am 2021; 150:3263. [PMID: 34852617] [PMCID: PMC8730369] [DOI: 10.1121/10.0006752]
Abstract
Understanding speech in noisy environments, such as classrooms, is a challenge for children. When a spatial separation is introduced between the target and masker, as compared to when both are co-located, children demonstrate intelligibility improvement of the target speech. Such intelligibility improvement is known as spatial release from masking (SRM). In most reverberant environments, binaural cues associated with the spatial separation are distorted; the extent to which such distortion affects children's SRM is unknown. Two virtual acoustic environments with reverberation times between 0.4 s and 1.1 s were compared. SRM was measured using a spatial separation with symmetrically displaced maskers to maximize access to binaural cues. The role of informational masking in modulating SRM was investigated through voice similarity between the target and masker. Results showed that, contrary to previous developmental findings on free-field SRM, children's SRM in reverberation has not yet reached maturity in the 7- to 12-year age range. When reducing reverberation, an SRM improvement was seen in adults but not in children. Our findings suggest that, even though school-age children have access to binaural cues that are distorted in reverberation, they demonstrate immature use of such cues for speech-in-noise perception, even in mild reverberation.
Affiliation(s)
- Z Ellen Peng
- Institute for Hearing Technology and Acoustics, RWTH Aachen University, Kopernikusstrasse 5, 52074 Aachen, Germany
- Florian Pausch
- Institute for Hearing Technology and Acoustics, RWTH Aachen University, Kopernikusstrasse 5, 52074 Aachen, Germany
- Janina Fels
- Institute for Hearing Technology and Acoustics, RWTH Aachen University, Kopernikusstrasse 5, 52074 Aachen, Germany

16. Koelewijn T, Gaudrain E, Tamati T, Başkent D. The effects of lexical content, acoustic and linguistic variability, and vocoding on voice cue perception. J Acoust Soc Am 2021; 150:1620. [PMID: 34598602] [DOI: 10.1121/10.0005938]
Abstract
Perceptual differences in voice cues, such as fundamental frequency (F0) and vocal tract length (VTL), can facilitate speech understanding in challenging conditions. Yet, we hypothesized that in the presence of spectrotemporal signal degradations, as imposed by cochlear implants (CIs) and vocoders, acoustic cues that overlap for voice perception and phonemic categorization could be mistaken for one another, leading to a strong interaction between linguistic and indexical (talker-specific) content. Fifteen normal-hearing participants performed an odd-one-out adaptive task measuring just-noticeable differences (JNDs) in F0 and VTL. Items used were words (lexical content) or time-reversed words (no lexical content). The use of lexical content was either promoted (by using variable items across comparison intervals) or not (fixed item). Finally, stimuli were presented without or with vocoding. Results showed that JNDs for both F0 and VTL were significantly smaller (better) for non-vocoded compared with vocoded speech and for fixed compared with variable items. Lexical content (forward vs reversed) affected VTL JNDs in the variable item condition, but F0 JNDs only in the non-vocoded, fixed condition. In conclusion, lexical content had a positive top-down effect on VTL perception when acoustic and linguistic variability was present but not on F0 perception. Lexical advantage persisted in the most degraded conditions and vocoding even enhanced the effect of item variability, suggesting that linguistic content could support compensation for poor voice perception in CI users.
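Vocoding, as used here to simulate the spectrotemporal degradation of CI processing, typically splits the signal into a small number of frequency bands, extracts each band's temporal envelope, and uses those envelopes to modulate band-limited noise carriers. Below is a compact sketch of such a noise vocoder, under the assumption of simple Butterworth analysis filters and Hilbert envelopes; the channel count and cutoff choices are illustrative, not those of the study.

import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
    """Simple n-channel noise vocoder with log-spaced analysis bands."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)                               # analysis band
        envelope = np.abs(hilbert(band))                              # temporal envelope
        carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))  # band-limited noise
        out += envelope * carrier                                     # modulated carrier
    return out / np.max(np.abs(out))                                  # normalize

# Illustrative usage with a synthetic harmonic complex standing in for speech
fs = 16000
t = np.arange(0, 0.5, 1 / fs)
speech_like = sum(np.sin(2 * np.pi * 150.0 * k * t) for k in range(1, 6))
vocoded = noise_vocode(speech_like, fs)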
Affiliation(s)
- Thomas Koelewijn
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Etienne Gaudrain
- CNRS Unité Mixte de Recherche 5292, Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics, Institut National de la Santé et de la Recherche Médicale, UMRS 1028, Université Claude Bernard Lyon 1, Université de Lyon, Lyon, France
- Terrin Tamati
- Department of Otolaryngology-Head & Neck Surgery, The Ohio State University Wexner Medical Center, The Ohio State University, Columbus, Ohio, USA
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands

17. Lopez-Poveda EA, Eustaquio-Martín A, Fumero MJ, Gorospe JM, Polo López R, Gutiérrez Revilla MA, Schatzer R, Nopp P, Stohl JS. Speech-in-Noise Recognition With More Realistic Implementations of a Binaural Cochlear-Implant Sound Coding Strategy Inspired by the Medial Olivocochlear Reflex. Ear Hear 2021; 41:1492-1510. [PMID: 33136626] [PMCID: PMC7722463] [DOI: 10.1097/aud.0000000000000880]
Abstract
OBJECTIVES Cochlear implant (CI) users continue to struggle understanding speech in noisy environments with current clinical devices. We have previously shown that this outcome can be improved by using binaural sound processors inspired by the medial olivocochlear (MOC) reflex, which involve dynamic (contralaterally controlled) rather than fixed compressive acoustic-to-electric maps. The present study aimed at investigating the potential additional benefits of using more realistic implementations of MOC processing. DESIGN Eight users of bilateral CIs and two users of unilateral CIs participated in the study. Speech reception thresholds (SRTs) for sentences in competition with steady state noise were measured in unilateral and bilateral listening modes. Stimuli were processed through two independently functioning sound processors (one per ear) with fixed compression, the current clinical standard (STD); the originally proposed MOC strategy with fast contralateral control of compression (MOC1); a MOC strategy with slower control of compression (MOC2); and a slower MOC strategy with comparatively greater contralateral inhibition in the lower-frequency than in the higher-frequency channels (MOC3). Performance with the four strategies was compared for multiple simulated spatial configurations of the speech and noise sources. Based on a previously published technical evaluation of these strategies, we hypothesized that SRTs would be overall better (lower) with the MOC3 strategy than with any of the other tested strategies. In addition, we hypothesized that the MOC3 strategy would be advantageous over the STD strategy in listening conditions and spatial configurations where the MOC1 strategy was not. RESULTS In unilateral listening and when the implant ear had the worse acoustic signal-to-noise ratio, the mean SRT was 4 dB worse for the MOC1 than for the STD strategy (as expected), but it became equal or better for the MOC2 or MOC3 strategies than for the STD strategy. In bilateral listening, mean SRTs were 1.6 dB better for the MOC3 strategy than for the STD strategy across all spatial configurations tested, including a condition with speech and noise sources colocated at front where the MOC1 strategy was slightly disadvantageous relative to the STD strategy. All strategies produced significantly better SRTs for spatially separated than for colocated speech and noise sources. A statistically significant binaural advantage (i.e., better mean SRTs across spatial configurations and participants in bilateral than in unilateral listening) was found for the MOC2 and MOC3 strategies but not for the STD or MOC1 strategies. CONCLUSIONS Overall, performance was best with the MOC3 strategy, which maintained the benefits of the originally proposed MOC1 strategy over the STD strategy for spatially separated speech and noise sources and extended those benefits to additional spatial configurations. In addition, the MOC3 strategy provided a significant binaural advantage, which did not occur with the STD or the original MOC1 strategies.
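The core idea behind the MOC-inspired strategies is that the compression applied in each ear is not fixed but is inhibited by the output of the contralateral processor, with the speed of that control set by time constants (fast for MOC1, slower for MOC2/MOC3). The sketch below is a deliberately simplified, generic illustration of contralaterally controlled gain with a single time constant; it is not the authors' processing chain, and the constants and mapping are assumptions made purely for illustration.

import numpy as np

def contralateral_gain_control(env_ipsi, env_contra, fs, tau_s=0.05, strength=0.5):
    """Toy illustration of contralaterally controlled compression for one channel.

    env_ipsi, env_contra: channel envelopes (linear amplitude) for the two ears.
    tau_s: smoothing time constant governing how fast the contralateral control acts.
    strength: how strongly contralateral energy reduces ipsilateral gain (0 = fixed map).
    Returns the ipsilateral envelope after applying the dynamically reduced gain."""
    alpha = np.exp(-1.0 / (tau_s * fs))      # one-pole smoother coefficient
    smoothed = np.zeros_like(env_contra, dtype=float)
    state = 0.0
    for n, x in enumerate(env_contra):
        state = alpha * state + (1.0 - alpha) * abs(x)
        smoothed[n] = state
    # More contralateral energy -> lower gain (inhibition), bounded to (0, 1]
    gain = 1.0 / (1.0 + strength * smoothed)
    return env_ipsi * gain

# Illustrative: a steady ipsilateral envelope inhibited by a contralateral burst
fs = 1000
env_ipsi = np.ones(1000)
env_contra = np.concatenate([np.zeros(500), np.ones(500)])
out = contralateral_gain_control(env_ipsi, env_contra, fs)
print(out[:3], out[-3:])   # gain drops after the contralateral burst begins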
Affiliation(s)
- Enrique A. Lopez-Poveda
- Laboratorio de Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain
- Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Salamanca, Spain
- Almudena Eustaquio-Martín
- Laboratorio de Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain
- Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Milagros J. Fumero
- Laboratorio de Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain
- Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- José M. Gorospe
- Laboratorio de Audición Computacional y Psicoacústica, Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Salamanca, Spain
- Grupo de Audiología, Instituto de Investigación Biomédica de Salamanca, Universidad de Salamanca, Salamanca, Spain
- Departamento de Cirugía, Facultad de Medicina, Universidad de Salamanca, Salamanca, Spain
- Unidad de Foniatría, Logopedia y Audiología, Servicio de Otorrinolaringología, Hospital Universitario de Salamanca, Salamanca, Spain
- Rubén Polo López
- Servicio de Otorrinolaringología, Hospital Universitario Ramón y Cajal, Madrid, Spain
- Joshua S. Stohl
- North American Research Laboratory, MED-EL Corporation, Durham, North Carolina, USA
18
Yun D, Jennings TR, Kidd G, Goupell MJ. Benefits of triple acoustic beamforming during speech-on-speech masking and sound localization for bilateral cochlear-implant users. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2021; 149:3052. [PMID: 34241104 PMCID: PMC8102069 DOI: 10.1121/10.0003933] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/22/2020] [Revised: 03/03/2021] [Accepted: 03/06/2021] [Indexed: 05/30/2023]
Abstract
Bilateral cochlear-implant (CI) users struggle to understand speech in noisy environments despite receiving some spatial-hearing benefits. One potential solution is to provide acoustic beamforming. A headphone-based experiment was conducted to compare speech understanding under natural CI listening conditions and for two non-adaptive beamformers, one single beam and one binaural, called "triple beam," which provides an improved signal-to-noise ratio (beamforming benefit) and usable spatial cues by reintroducing interaural level differences. Speech reception thresholds (SRTs) for speech-on-speech masking were measured with target speech presented in front and two maskers in co-located or narrow/wide separations. Numerosity judgments and sound-localization performance also were measured. Natural spatial cues, single-beam, and triple-beam conditions were compared. For CI listeners, there was a negligible change in SRTs when comparing co-located to separated maskers for natural listening conditions. In contrast, there were 4.9- and 16.9-dB improvements in SRTs for the beamformer and 3.5- and 12.3-dB improvements for triple beam (narrow and wide separations). Similar results were found for normal-hearing listeners presented with vocoded stimuli. Single beam improved speech-on-speech masking performance but yielded poor sound localization. Triple beam improved speech-on-speech masking performance, albeit less than the single beam, and sound localization. Thus, triple beam was the most versatile across multiple spatial-hearing domains.
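For context, the simplest fixed (non-adaptive) beamformer is a broadband delay-and-sum design that time-aligns the microphone signals toward a look direction before summing them, which raises the signal-to-noise ratio for sources in that direction. The sketch below is only a generic illustration of that principle; the single-beam and triple-beam processors evaluated in the study are different designs, and the function and parameter names here are not taken from it.

    import numpy as np

    def delay_and_sum(mic_signals, mic_positions, look_direction, fs, c=343.0):
        """Generic broadband delay-and-sum beamformer (not the study's beamformer).

        mic_signals    : array of shape (n_mics, n_samples)
        mic_positions  : array of shape (n_mics, 3), in metres
        look_direction : unit vector pointing from the array toward the target
        """
        n_mics, n_samples = mic_signals.shape
        # A plane wave from the look direction reaches microphones with a larger
        # projection onto that direction earlier, so those channels must be delayed.
        delays = mic_positions @ look_direction / c
        delays -= delays.min()                        # keep all delays non-negative
        freqs = np.fft.rfftfreq(n_samples, 1.0 / fs)
        out = np.zeros(n_samples)
        for m in range(n_mics):
            spectrum = np.fft.rfft(mic_signals[m])
            spectrum *= np.exp(-2j * np.pi * freqs * delays[m])   # fractional-sample delay
            out += np.fft.irfft(spectrum, n=n_samples)
        return out / n_mics

As the abstract notes, the binaural triple beam additionally reintroduces interaural level differences across its two outputs, which is why it preserves sound-localization ability while the single beam does not.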
Affiliation(s)
- David Yun
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Todd R Jennings
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Gerald Kidd
- Department of Speech, Language, and Hearing Sciences, Boston University, Boston, Massachusetts 02215, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
19
Tsou YT, Li B, Wiefferink CH, Frijns JHM, Rieffe C. The Developmental Trajectory of Empathy and Its Association with Early Symptoms of Psychopathology in Children with and without Hearing Loss. Res Child Adolesc Psychopathol 2021; 49:1151-1164. [PMID: 33826005 PMCID: PMC8322017 DOI: 10.1007/s10802-021-00816-x] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/24/2021] [Indexed: 12/21/2022]
Abstract
Empathy enables people to share, understand, and show concern for others’ emotions. However, this capacity may be more difficult to acquire for children with hearing loss, due to limited social access, and the effect of hearing on empathic maturation has been unexplored. This four-wave longitudinal study investigated the development of empathy in children with and without hearing loss, and how this development is associated with early symptoms of psychopathology. Seventy-one children with hearing loss and cochlear implants (CI), and 272 typically-hearing (TH) children, participated (aged 1–5 years at Time 1). Parents rated their children’s empathic skills (affective empathy, attention to others’ emotions, prosocial actions, and emotion acknowledgment) and psychopathological symptoms (internalizing and externalizing behaviors). Children with CI and TH children were rated similarly on most of the empathic skills. Yet, fewer prosocial actions were reported in children with CI than in TH children. In both groups, affective empathy decreased with age, while prosocial actions and emotion acknowledgment increased with age and stabilized when children entered primary schools. Attention to emotions increased with age in children with CI, yet remained stable in TH children. Moreover, higher levels of affective empathy, lower levels of emotion acknowledgment, and a larger increase in attention to emotions over time were associated with more psychopathological symptoms in both groups. These findings highlight the importance of social access from which children with CI can learn to process others’ emotions more adaptively. Notably, interventions for psychopathology that tackle empathic responses may be beneficial for both groups, alike.
Affiliation(s)
- Yung-Ting Tsou
- Unit of Developmental and Educational Psychology, Institute of Psychology, Leiden University, Leiden, The Netherlands.
- Boya Li
- Unit of Developmental and Educational Psychology, Institute of Psychology, Leiden University, Leiden, The Netherlands
- Carin H Wiefferink
- Dutch Foundation for the Deaf and Hard of Hearing Child, Amsterdam, The Netherlands
- Johan H M Frijns
- Department of Otorhinolaryngology and Head & Neck Surgery, Leiden University Medical Center, Leiden, The Netherlands
- Leiden Institute for Brain and Cognition, Leiden University, Leiden, The Netherlands
- Carolien Rieffe
- Unit of Developmental and Educational Psychology, Institute of Psychology, Leiden University, Leiden, The Netherlands
- Department of Human Media Interaction, Faculty of Electrical Engineering, Mathematics and Computer Science, University of Twente, Enschede, The Netherlands
- Department of Psychology and Human Development, Institute of Education, University College London, London, UK
20
Ellen Peng Z, Litovsky RY. The Role of Interaural Differences, Head Shadow, and Binaural Redundancy in Binaural Intelligibility Benefits Among School-Aged Children. Trends Hear 2021; 25:23312165211045313. [PMID: 34609935 PMCID: PMC8642055 DOI: 10.1177/23312165211045313] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/21/2020] [Revised: 08/19/2021] [Accepted: 08/20/2021] [Indexed: 11/16/2022] Open
Abstract
In complex listening environments, children can benefit from auditory spatial cues to understand speech in noise. When a spatial separation is introduced between the target and masker and/or listening with two ears versus one ear, children can gain intelligibility benefits with access to one or more auditory cues for unmasking: monaural head shadow, binaural redundancy, and interaural differences. This study systematically quantified the contribution of individual auditory cues in providing binaural speech intelligibility benefits for children with normal hearing between 6 and 15 years old. In virtual auditory space, target speech was presented from + 90° azimuth (i.e., listener's right), and two-talker babble maskers were either co-located (+ 90° azimuth) or separated by 180° (-90° azimuth, listener's left). Testing was conducted over headphones in monaural (i.e., right ear) or binaural (i.e., both ears) conditions. Results showed continuous improvement of speech reception threshold (SRT) between 6 and 15 years old and immature performance at 15 years of age for both SRTs and intelligibility benefits from more than one auditory cue. With early maturation of head shadow, the prolonged maturation of unmasking was likely driven by children's poorer ability to gain full benefits from interaural difference cues. In addition, children demonstrated a trade-off between the benefits from head shadow versus interaural differences, suggesting an important aspect of individual differences in accessing auditory cues for binaural intelligibility benefits during development.
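The three cues named above are conventionally quantified as differences between speech reception thresholds (SRTs, in dB) measured in different ear and masker configurations. A common decomposition, which may differ in detail from the exact condition labels used in the study, is:

    \[
    \begin{aligned}
    \text{head shadow} &\approx \mathrm{SRT}_{\text{monaural, co-located}} - \mathrm{SRT}_{\text{monaural, separated}},\\
    \text{binaural redundancy} &\approx \mathrm{SRT}_{\text{monaural, co-located}} - \mathrm{SRT}_{\text{binaural, co-located}},\\
    \text{interaural differences} &\approx \mathrm{SRT}_{\text{monaural, separated}} - \mathrm{SRT}_{\text{binaural, separated}}.
    \end{aligned}
    \]

Positive values indicate a benefit, that is, a lower (better) threshold in the second configuration of each difference.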
Affiliation(s)
- Z. Ellen Peng
- Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
- Ruth Y. Litovsky
- Waisman Center, University of Wisconsin-Madison, Madison, WI, USA
21
Wasiuk PA, Lavandier M, Buss E, Oleson J, Calandruccio L. The effect of fundamental frequency contour similarity on multi-talker listening in older and younger adults. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2020; 148:3527. [PMID: 33379934 PMCID: PMC7863686 DOI: 10.1121/10.0002661] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Older adults with hearing loss have greater difficulty recognizing target speech in multi-talker environments than young adults with normal hearing, especially when target and masker speech streams are perceptually similar. A difference in fundamental frequency (f0) contour depth is an effective stream segregation cue for young adults with normal hearing. This study examined whether older adults with varying degrees of sensorineural hearing loss are able to utilize differences in target/masker f0 contour depth to improve speech recognition in multi-talker listening. Speech recognition thresholds (SRTs) were measured for speech mixtures composed of target/masker streams with flat, normal, and exaggerated speaking styles, in which f0 contour depth systematically varied. Computational modeling estimated differences in energetic masking across listening conditions. Young adults had lower SRTs than older adults; a result that was partially explained by differences in audibility predicted by the model. However, audibility differences did not explain why young adults experienced a benefit from mismatched target/masker f0 contour depth, while in most conditions, older adults did not. Reduced ability to use segregation cues (differences in target/masker f0 contour depth), and deficits grouping speech with variable f0 contours likely contribute to difficulties experienced by older adults in challenging acoustic environments.
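As a concrete illustration of the manipulated variable, one simple way to quantify f0 contour depth is the spread of voiced-frame f0 around the utterance's own median, expressed in semitones; flat, normal, and exaggerated speaking styles would then give small, intermediate, and large values. The metric and function below are illustrative assumptions and not the resynthesis or measurement procedure used in the study.

    import numpy as np

    def f0_contour_depth_semitones(f0_hz):
        """Rough f0 contour depth: SD of voiced-frame f0 in semitones re the median f0."""
        f0 = np.asarray(f0_hz, dtype=float)
        f0 = f0[f0 > 0]                                   # keep voiced frames only
        semitones = 12.0 * np.log2(f0 / np.median(f0))    # distance from the median
        return float(np.std(semitones))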
Affiliation(s)
- Peter A Wasiuk
- Department of Psychological Sciences, 11635 Euclid Avenue, Case Western Reserve University, Cleveland, Ohio 44106, USA
- Mathieu Lavandier
- Univ. Lyon, ENTPE, Laboratoire Génie Civil et Bâtiment, Rue M. Audin, Vaulx-en-Velin Cedex, 69518, France
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina, CB#7070, Chapel Hill, North Carolina 27599, USA
- Jacob Oleson
- Department of Biostatistics, N300 CPHB, University of Iowa, 145 North Riverside Drive, Iowa City, Iowa 52242-2007, USA
- Lauren Calandruccio
- Department of Psychological Sciences, 11635 Euclid Avenue, Case Western Reserve University, Cleveland, Ohio 44106, USA
22
D'Onofrio K, Richards V, Gifford R. Spatial Release From Informational and Energetic Masking in Bimodal and Bilateral Cochlear Implant Users. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2020; 63:3816-3833. [PMID: 33049147 PMCID: PMC8582905 DOI: 10.1044/2020_jslhr-20-00044] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/04/2020] [Revised: 04/27/2020] [Accepted: 07/24/2020] [Indexed: 06/11/2023]
Abstract
Purpose Spatially separating speech and background noise improves speech understanding in normal-hearing listeners, an effect referred to as spatial release from masking (SRM). In cochlear implant (CI) users, SRM has often been demonstrated using asymmetric noise configurations, which maximize benefit from head shadow and the potential availability of binaural cues. In contrast, SRM in symmetrical configurations has been minimal to absent in CI users. We examined the interaction between two types of maskers (informational and energetic) and SRM in bimodal and bilateral CI users. We hypothesized that SRM would be absent or "negative" using symmetrically separated noise maskers. Second, we hypothesized that bimodal listeners would exhibit greater release from informational masking due to access to acoustic information in the non-CI ear. Method Participants included 10 bimodal and 10 bilateral CI users. Speech understanding in noise was tested in 24 conditions: 3 spatial configurations (S0N0, S0N45&315, S0N90&270) × 2 masker types (speech, signal-correlated noise) × 2 listening configurations (best-aided, CI-alone) × 2 talker gender conditions (different-gender, same-gender). Results In support of our first hypothesis, both groups exhibited negative SRM with increasing spatial separation. In opposition to our second hypothesis, both groups exhibited similar magnitudes of release from informational masking. The magnitude of release was greater for bimodal listeners, though this difference failed to reach statistical significance. Conclusions Both bimodal and bilateral CI recipients exhibited negative SRM. This finding is consistent with CI signal processing limitations, the audiologic factors associated with SRM, and known effects of behind-the-ear microphone technology. Though release from informational masking was not significantly different across groups, the magnitude of release was greater for bimodal listeners. This suggests that bimodal listeners may be at least marginally more susceptible to informational masking than bilateral CI users, though further research is warranted.
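In this and the other spatial-release studies in this list, SRM is quantified as the change in speech reception threshold (SRT) when the maskers are moved away from the target:

    \[
    \mathrm{SRM} = \mathrm{SRT}_{\text{co-located}} - \mathrm{SRT}_{\text{separated}},
    \]

so a positive value means spatial separation helped, and the negative SRM reported here means that SRTs were higher (worse) with the maskers at the symmetric 45° or 90° positions than with all sources at the front.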
Affiliation(s)
- Kristen D'Onofrio
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- René Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
23
Misurelli SM, Goupell MJ, Burg EA, Jocewicz R, Kan A, Litovsky RY. Auditory Attention and Spatial Unmasking in Children With Cochlear Implants. Trends Hear 2020; 24:2331216520946983. [PMID: 32812515 PMCID: PMC7446264 DOI: 10.1177/2331216520946983] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/02/2022] Open
Abstract
The ability to attend to target speech in background noise is an important skill, particularly for children who spend many hours in noisy environments. Intelligibility improves as a result of spatial or binaural unmasking in the free-field for normal-hearing children; however, children who use bilateral cochlear implants (BiCIs) demonstrate little benefit in similar situations. It was hypothesized that poor auditory attention abilities might explain the lack of unmasking observed in children with BiCIs. Target and interferer speech stimuli were presented to either or both ears of BiCI participants via their clinical processors. Speech reception thresholds remained low when the target and interferer were in opposite ears, but they did not show binaural unmasking when the interferer was presented to both ears and the target only to one ear. These results demonstrate that, in the most extreme cases of stimulus separation, children with BiCIs can ignore an interferer and attend to target speech, but there is weak or absent binaural unmasking. It appears that children with BiCIs mostly experience poor encoding of binaural cues rather than deficits in ability to selectively attend to target speech.
Affiliation(s)
- Sara M Misurelli
- Waisman Center, University of Wisconsin-Madison
- Department of Surgery, Division of Otolaryngology, University of Wisconsin School of Medicine and Public Health
- Alan Kan
- Waisman Center, University of Wisconsin-Madison
- School of Engineering, Macquarie University, Sydney, Australia
- Ruth Y Litovsky
- Waisman Center, University of Wisconsin-Madison
- Department of Surgery, Division of Otolaryngology, University of Wisconsin School of Medicine and Public Health
24
Peng ZE, Kan A, Litovsky RY. Development of Binaural Sensitivity: Eye Gaze as a Measure of Real-time Processing. Front Syst Neurosci 2020; 14:39. [PMID: 32733212 PMCID: PMC7360356 DOI: 10.3389/fnsys.2020.00039] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/03/2020] [Accepted: 05/27/2020] [Indexed: 11/13/2022] Open
Abstract
Children localize sounds using binaural cues when navigating everyday auditory environments. While sensitivity to binaural cues reaches maturity by 8-10 years of age, large individual variability has been observed in the just-noticeable-difference (JND) thresholds for interaural time difference (ITD) among children in this age range. To understand the development of binaural sensitivity beyond JND thresholds, the "looking-while-listening" paradigm was adapted in this study to reveal the real-time decision-making behavior during ITD processing. Children ages 8-14 years with normal hearing (NH) and a group of young NH adults were tested. This novel paradigm combined eye gaze tracking with behavioral psychoacoustics to estimate ITD JNDs in a two-alternative forced-choice discrimination task. Results from simultaneous eye gaze recordings during ITD processing suggested that children had adult-like ITD JNDs, but they demonstrated immature decision-making strategies. While the time course of arriving at the initial fixation and final decision in providing a judgment of the ITD direction was similar, children exhibited more uncertainty than adults during decision-making. Specifically, children made more fixation changes, particularly when tested using small ITD magnitudes, between the target and non-target response options prior to finalizing a judgment. These findings suggest that, while children may exhibit adult-like sensitivity to ITDs, their eye gaze behavior reveals that the processing of this binaural cue is still developing through late childhood.
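For context on how a just-noticeable difference is usually estimated in a two-alternative forced-choice task, the sketch below implements a generic 2-down/1-up adaptive staircase, which converges on roughly 70.7% correct. The tracking rule, step sizes, and stopping criterion are textbook defaults rather than this study's exact procedure, and run_trial is a hypothetical callback that presents one trial at the given ITD and returns whether the response was correct.

    import numpy as np

    def two_down_one_up(run_trial, start_itd_us=400.0, step_factor=1.5, n_reversals=8):
        """Generic 2-down/1-up staircase for an ITD discrimination threshold (JND)."""
        itd = start_itd_us
        correct_streak = 0
        last_direction = 0            # -1 = last step made the task harder, +1 = easier
        reversals = []
        while len(reversals) < n_reversals:
            if run_trial(itd):
                correct_streak += 1
                if correct_streak == 2:                   # two correct in a row -> harder
                    correct_streak = 0
                    if last_direction == +1:
                        reversals.append(itd)             # direction change: up -> down
                    itd /= step_factor
                    last_direction = -1
            else:                                         # any error -> easier
                correct_streak = 0
                if last_direction == -1:
                    reversals.append(itd)                 # direction change: down -> up
                itd *= step_factor
                last_direction = +1
        # JND estimate: geometric mean of the last few reversal points
        return float(np.exp(np.mean(np.log(reversals[-6:]))))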
Affiliation(s)
- Z. Ellen Peng
- Waisman Center, University of Wisconsin-Madison, Madison, WI, United States
- Alan Kan
- Waisman Center, University of Wisconsin-Madison, Madison, WI, United States
- School of Engineering, Macquarie University, Sydney, NSW, Australia
- Ruth Y. Litovsky
- Waisman Center, University of Wisconsin-Madison, Madison, WI, United States
25
Smieja DA, Dunkley BT, Papsin BC, Easwar V, Yamazaki H, Deighton M, Gordon KA. Interhemispheric auditory connectivity requires normal access to sound in both ears during development. Neuroimage 2020; 208:116455. [DOI: 10.1016/j.neuroimage.2019.116455] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/22/2019] [Revised: 11/21/2019] [Accepted: 12/09/2019] [Indexed: 10/25/2022] Open
26
Meister H, Walger M, Lang-Roth R, Müller V. Voice fundamental frequency differences and speech recognition with noise and speech maskers in cochlear implant recipients. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2020; 147:EL19. [PMID: 32007021 DOI: 10.1121/10.0000499] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/18/2019] [Accepted: 12/11/2019] [Indexed: 06/10/2023]
Abstract
Cochlear implant (CI) recipients are limited in their perception of voice cues, such as the fundamental frequency (F0). This has important consequences for speech recognition when several talkers speak simultaneously. This examination compared clear speech and noise-vocoded sentences as maskers. For the speech maskers, good CI performers were able to benefit from F0 differences between target and masker: an F0 difference of 80 Hz significantly reduced target-masker confusions, an effect that was slightly more pronounced in bimodal than in bilateral users.
Affiliation(s)
- Hartmut Meister
- Jean-Uhrmacher-Institute for Clinical ENT-Research, University of Cologne, Geibelstrasse 29-31, D-50931 Cologne, Germany
- Martin Walger
- Department of Otorhinolaryngology, Head and Neck Surgery, Medical Faculty, University of Cologne, Kerpenerstrasse 62, 50937 Cologne, Germany
- Ruth Lang-Roth
- Department of Otorhinolaryngology, Head and Neck Surgery, Medical Faculty, University of Cologne, Kerpenerstrasse 62, 50937 Cologne, Germany
- Verena Müller
- Department of Otorhinolaryngology, Head and Neck Surgery, Medical Faculty, University of Cologne, Kerpenerstrasse 62, 50937 Cologne, Germany
27
Murphy CFB, Hashim E, Dillon H, Bamiou DE. British children's performance on the listening in spatialised noise-sentences test (LISN-S). Int J Audiol 2019; 58:754-760. [PMID: 31195858 DOI: 10.1080/14992027.2019.1627592] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/26/2022]
Abstract
Objective: To investigate whether British children's performance is equivalent to North American norms on the listening in spatialised noise-sentences test (LiSN-S). Design: Prospective study comparing the performance of a single British group of children to North American norms on the LiSN-S (North American version). Study sample: The British group was composed of 46 typically developing children, aged 6 years to 11 years 11 months, from a mainstream primary school in London. Results: No significant difference was observed between the British group's performance and the North American norms for the Low-cue, High-cue, Spatial Advantage, and Total Advantage measures. The British group showed significantly lower performance only on the Talker Advantage measure (z-score: 0.35, 95% confidence interval -0.12 to -0.59). Age was significantly correlated with all unstandardised measures. Conclusion: Our results indicate that, when assessing British children, it would be appropriate to add a corrective factor of 0.35 to the z-score value obtained for the Talker Advantage in order to compare it with the North American norms. This strategy would enable the use of the LiSN-S in the UK to assess auditory stream segregation based on spatial cues.
Affiliation(s)
- C F B Murphy
- The Ear Institute, University College London, London, UK
- E Hashim
- The Ear Institute, University College London, London, UK
- H Dillon
- Department of Linguistics, Macquarie University, Sydney, Australia
- Manchester Centre for Audiology and Deafness, University of Manchester, Manchester, UK
- National Acoustic Laboratories (NAL), Macquarie University, Macquarie Park, Australia
- D E Bamiou
- The Ear Institute, University College London, London, UK
- University College London Hospitals Biomedical Research Centre, National Institute for Health Research, London, UK
28
The Effect of Simulated Interaural Frequency Mismatch on Speech Understanding and Spatial Release From Masking. Ear Hear 2019; 39:895-905. [PMID: 29337763 DOI: 10.1097/aud.0000000000000541] [Citation(s) in RCA: 15] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVE The binaural-hearing system interaurally compares inputs, which underlies the ability to localize sound sources and to better understand speech in complex acoustic environments. Cochlear implants (CIs) are provided in both ears to increase binaural-hearing benefits; however, bilateral CI users continue to struggle with understanding speech in the presence of interfering sounds and do not achieve the same level of spatial release from masking (SRM) as normal-hearing listeners. One reason for diminished SRM in CI users could be that the electrode arrays are inserted at different depths in each ear, which would cause an interaural frequency mismatch. Because interaural frequency mismatch diminishes the salience of interaural differences for relatively simple stimuli, it may also diminish binaural benefits for spectral-temporally complex stimuli like speech. This study evaluated the effect of simulated frequency-to-place mismatch on speech understanding and SRM. DESIGN Eleven normal-hearing listeners were tested on a speech understanding task. There was a female target talker who spoke five-word sentences from a closed set of words. There were two interfering male talkers who spoke unrelated sentences. Nonindividualized head-related transfer functions were used to simulate a virtual auditory space. The target was presented from the front (0°), and the interfering speech was either presented from the front (colocated) or from 90° to the right (spatially separated). Stimuli were then processed by an eight-channel vocoder with tonal carriers to simulate aspects of listening through a CI. Frequency-to-place mismatch ("shift") was introduced by increasing the center frequency of the synthesis filters compared with the corresponding analysis filters. Speech understanding was measured for different shifts (0, 3, 4.5, and 6 mm) and target-to-masker ratios (TMRs: +10 to -10 dB). SRM was calculated as the difference in the percentage of correct words for the colocated and separated conditions. Two types of shifts were tested: (1) bilateral shifts that had the same frequency-to-place mismatch in both ears, but no interaural frequency mismatch, and (2) unilateral shifts that produced an interaural frequency mismatch. RESULTS For the bilateral shift conditions, speech understanding decreased with increasing shift and with decreasing TMR, for both colocated and separate conditions. There was, however, no interaction between shift and spatial configuration; in other words, SRM was not affected by shift. For the unilateral shift conditions, speech understanding decreased with increasing interaural mismatch and with decreasing TMR for both the colocated and spatially separated conditions. Critically, there was a significant interaction between the amount of shift and spatial configuration; in other words, SRM decreased for increasing interaural mismatch. CONCLUSIONS A frequency-to-place mismatch in one or both ears resulted in decreased speech understanding. SRM, however, was only affected in conditions with unilateral shifts and interaural frequency mismatch. Therefore, matching frequency information between the ears provides listeners with larger binaural-hearing benefits, for example, improved speech understanding in the presence of interfering talkers. A clinical procedure to reduce interaural frequency mismatch when programming bilateral CIs may improve benefits in speech segregation that are due to binaural-hearing abilities.
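The frequency-to-place manipulation described above can be pictured with a minimal tone vocoder in which the synthesis carriers are shifted basally (toward higher frequencies) relative to the analysis bands, here expressed in millimetres along Greenwood's human frequency-place map. The channel count matches the abstract, but the filter orders, envelope cutoff, map constants, and function names below are illustrative assumptions rather than the study's exact processing.

    import numpy as np
    from scipy.signal import butter, sosfiltfilt, hilbert

    def shifted_tone_vocoder(x, fs, n_channels=8, f_lo=200.0, f_hi=7000.0, shift_mm=0.0):
        """Toy tone vocoder with a basal (upward) shift of the synthesis carriers."""
        def greenwood_f(place_mm):     # place in mm from the apex -> frequency in Hz
            return 165.4 * (10.0 ** (0.06 * place_mm) - 0.88)

        def greenwood_place(f_hz):     # frequency in Hz -> place in mm from the apex
            return np.log10(f_hz / 165.4 + 0.88) / 0.06

        edges_mm = np.linspace(greenwood_place(f_lo), greenwood_place(f_hi), n_channels + 1)
        t = np.arange(len(x)) / fs
        out = np.zeros(len(x))
        env_lp = butter(2, 50.0, btype="lowpass", fs=fs, output="sos")
        for ch in range(n_channels):
            lo, hi = greenwood_f(edges_mm[ch]), greenwood_f(edges_mm[ch + 1])
            band = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
            env = sosfiltfilt(env_lp, np.abs(hilbert(sosfiltfilt(band, x))))
            # Carrier placed at the (possibly shifted) centre of the channel
            fc = greenwood_f(0.5 * (edges_mm[ch] + edges_mm[ch + 1]) + shift_mm)
            out += np.clip(env, 0.0, None) * np.sin(2.0 * np.pi * fc * t)
        return out

Applying the same shift_mm to both ears mimics the bilateral-shift conditions, while shifting only one ear creates the interaural frequency mismatch that reduced SRM in the results above.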
29
Hess CL, Misurelli SM, Litovsky RY. Spatial Release From Masking in 2-Year-Olds With Normal Hearing and With Bilateral Cochlear Implants. Trends Hear 2019; 22:2331216518775567. [PMID: 29761735 PMCID: PMC5956632 DOI: 10.1177/2331216518775567] [Citation(s) in RCA: 11] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/29/2022] Open
Abstract
This study evaluated spatial release from masking (SRM) in 2- to 3-year-old children who are deaf and were implanted with bilateral cochlear implants (BiCIs), and in age-matched normal-hearing (NH) toddlers. Here, we examined whether early activation of bilateral hearing has the potential to promote SRM that is similar to age-matched NH children. Listeners were 13 NH toddlers and 13 toddlers with BiCIs, ages 27 to 36 months. Speech reception thresholds (SRTs) were measured for target speech in front (0°) and for competitors that were either Colocated in front (0°) or Separated toward the right (+90°). SRM was computed as the difference between SRTs in the front versus in the asymmetrical condition. Results show that SRTs were higher in the BiCI than NH group in all conditions. Both groups had higher SRTs in the Colocated and Separated conditions compared with Quiet, indicating masking. SRM was significant only in the NH group. In the BiCI group, the group effect of SRM was not significant, likely limited by the small sample size; however, all but two children had SRM values within the NH range. This work shows that to some extent, the ability to use spatial cues for source segregation develops by age 2 to 3 in NH children and is attainable in most of the children in the BiCI group. There is potential for the paradigm used here to be used in clinical settings to evaluate outcomes of bilateral hearing in very young children.
30
Bilateral Cochlear Implants Using Two Electrode Lengths in Infants With Profound Deafness. Otol Neurotol 2019; 40:e267-e276. [PMID: 30741906 DOI: 10.1097/mao.0000000000002124] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVE The goal of this investigation was to determine if a short electrode in one ear and standard electrode in the contralateral ear could be an option for infants with congenital profound deafness to theoretically preserve the structures of the inner ear. Similarities in performance between ears and compared with a control group of infants implanted with bilateral standard electrodes was evaluated. STUDY DESIGN Repeated-measure, single-subject experiment. SETTING University of Iowa-Department of Otolaryngology. PARTICIPANTS Nine infants with congenital profound bilateral sensorineural hearing loss. INTERVENTION(S) Short and standard implants. MAIN OUTCOME MEASURE(S) Early speech perception test (ESP), children's vowel, phonetically balanced-kindergarten (PB-K) word test, and preschool language scales-3 (PLS-3). RESULTS ESP scores showed performance reaching a ceiling effect for the individual short and standard ears and bilaterally. The children's vowel and PB-K word results indicated significant (both p < 0.001) differences between the two ears. Bilateral comparisons to age-matched children with standard bilateral electrodes showed no significant differences (p = 0.321) in performance. Global language performance for six children demonstrated standard scores around 1 standard deviation (SD) of the mean. Two children showed scores below the mean, but can be attributed to inconsistent device usage. Averaged total language scores between groups showed no difference in performance (p = 0.293). CONCLUSIONS The combined use of a short electrode and standard electrode might provide an option for implantation with the goal of preserving the cochlear anatomy. However, further studies are needed to understand why some children have or do not have symmetric performance.
31
Lopez-Poveda EA, Eustaquio-Martín A. Objective speech transmission improvements with a binaural cochlear implant sound-coding strategy inspired by the contralateral medial olivocochlear reflex. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2018; 143:2217. [PMID: 29716283 DOI: 10.1121/1.5031028] [Citation(s) in RCA: 7] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
It has been recently shown that cochlear implant users could enjoy better speech reception in noise and enhanced spatial unmasking with binaural audio processing inspired by the inhibitory effects of the contralateral medial olivocochlear (MOC) reflex on compression [Lopez-Poveda, Eustaquio-Martin, Stohl, Wolford, Schatzer, and Wilson (2016). Ear Hear. 37, e138-e148]. The perceptual evidence supporting those benefits, however, is limited to a few target-interferer spatial configurations and to a particular implementation of contralateral MOC inhibition. Here, the short-term objective intelligibility index is used to (1) objectively demonstrate potential benefits over many more spatial configurations, and (2) investigate if the predicted benefits may be enhanced by using more realistic MOC implementations. Results corroborate the advantages and drawbacks of MOC processing indicated by the previously published perceptual tests. The results also suggest that the benefits may be enhanced and the drawbacks overcome by using longer time constants for the activation and deactivation of inhibition and, to a lesser extent, by using a comparatively greater inhibition in the lower than in the higher frequency channels. Compared to using two functionally independent processors, the better MOC processor improved the signal-to-noise ratio in the two ears between 1 and 6 decibels by enhancing head-shadow effects, and was advantageous for all tested target-interferer spatial configurations.
Affiliation(s)
- Enrique A Lopez-Poveda
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, Salamanca 37007, Spain
- Almudena Eustaquio-Martín
- Instituto de Neurociencias de Castilla y León, Universidad de Salamanca, Calle Pintor Fernando Gallego 1, Salamanca 37007, Spain
32
Corbin NE, Buss E, Leibold LJ. Spatial Release From Masking in Children: Effects of Simulated Unilateral Hearing Loss. Ear Hear 2018; 38:223-235. [PMID: 27787392 PMCID: PMC5321780 DOI: 10.1097/aud.0000000000000376] [Citation(s) in RCA: 30] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES The purpose of this study was twofold: (1) to determine the effect of an acute simulated unilateral hearing loss on children's spatial release from masking in two-talker speech and speech-shaped noise, and (2) to develop a procedure to be used in future studies that will assess spatial release from masking in children who have permanent unilateral hearing loss. There were three main predictions. First, spatial release from masking was expected to be larger in two-talker speech than in speech-shaped noise. Second, simulated unilateral hearing loss was expected to worsen performance in all listening conditions, but particularly in the spatially separated two-talker speech masker. Third, spatial release from masking was expected to be smaller for children than for adults in the two-talker masker. DESIGN Participants were 12 children (8.7 to 10.9 years) and 11 adults (18.5 to 30.4 years) with normal bilateral hearing. Thresholds for 50%-correct recognition of Bamford-Kowal-Bench sentences were measured adaptively in continuous two-talker speech or speech-shaped noise. Target sentences were always presented from a loudspeaker at 0° azimuth. The masker stimulus was either co-located with the target or spatially separated to +90° or -90° azimuth. Spatial release from masking was quantified as the difference between thresholds obtained when the target and masker were co-located and thresholds obtained when the masker was presented from +90° or -90° azimuth. Testing was completed both with and without a moderate simulated unilateral hearing loss, created with a foam earplug and supra-aural earmuff. A repeated-measures design was used to compare performance between children and adults, and performance in the no-plug and simulated-unilateral-hearing-loss conditions. RESULTS All listeners benefited from spatial separation of target and masker stimuli on the azimuth plane in the no-plug listening conditions; this benefit was larger in two-talker speech than in speech-shaped noise. In the simulated-unilateral-hearing-loss conditions, a positive spatial release from masking was observed only when the masker was presented ipsilateral to the simulated unilateral hearing loss. In the speech-shaped noise masker, spatial release from masking in the no-plug condition was similar to that obtained when the masker was presented ipsilateral to the simulated unilateral hearing loss. In contrast, in the two-talker speech masker, spatial release from masking in the no-plug condition was much larger than that obtained when the masker was presented ipsilateral to the simulated unilateral hearing loss. When either masker was presented contralateral to the simulated unilateral hearing loss, spatial release from masking was negative. This pattern of results was observed for both children and adults, although children performed more poorly overall. CONCLUSIONS Children and adults with normal bilateral hearing experience greater spatial release from masking for a two-talker speech than a speech-shaped noise masker. Testing in a two-talker speech masker revealed listening difficulties in the presence of disrupted binaural input that were not observed in a speech-shaped noise masker. This procedure offers promise for the assessment of spatial release from masking in children with permanent unilateral hearing loss.
Affiliation(s)
- Nicole E. Corbin
- Department of Allied Health Sciences, Division of Speech and Hearing Sciences, University of North Carolina at Chapel Hill, School of Medicine, Chapel Hill, NC, USA
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, School of Medicine, Chapel Hill, NC, USA
33
Reidy PF, Kristensen K, Winn MB, Litovsky RY, Edwards JR. The Acoustics of Word-Initial Fricatives and Their Effect on Word-Level Intelligibility in Children With Bilateral Cochlear Implants. Ear Hear 2018; 38:42-56. [PMID: 27556521 PMCID: PMC5161607 DOI: 10.1097/aud.0000000000000349] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVES Previous research has found that relative to their peers with normal hearing (NH), children with cochlear implants (CIs) produce the sibilant fricatives /s/ and /∫/ less accurately and with less subphonemic acoustic contrast. The present study sought to further investigate these differences across groups in two ways. First, subphonemic acoustic properties were investigated in terms of dynamic acoustic features that indexed more than just the contrast between /s/ and /∫/. Second, the authors investigated whether such differences in subphonemic acoustic contrast between sibilant fricatives affected the intelligibility of sibilant-initial single word productions by children with CIs and their peers with NH. DESIGN In experiment 1, productions of /s/ and /∫/ in word-initial prevocalic contexts were elicited from 22 children with bilateral CIs (aged 4 to 7 years) who had at least 2 years of CI experience and from 22 chronological age-matched peers with NH. Acoustic features were measured from 17 points across the fricatives: peak frequency was measured to index the place of articulation contrast; spectral variance and amplitude drop were measured to index the degree of sibilance. These acoustic trajectories were fitted with growth-curve models to analyze time-varying spectral change. In experiment 2, phonemically accurate word productions that were elicited in experiment 1 were embedded within four-talker babble and played to 80 adult listeners with NH. Listeners were asked to repeat the words, and their accuracy rate was used as a measure of the intelligibility of the word productions. Regression analyses were run to test which acoustic properties measured in experiment 1 predicted the intelligibility scores from experiment 2. RESULTS The peak frequency trajectories indicated that the children with CIs produced less acoustic contrast between /s/ and /∫/. Group differences were observed in terms of the dynamic aspects (i.e., the trajectory shapes) of the acoustic properties. In the productions by children with CIs, the peak frequency and the amplitude drop trajectories were shallower, and the spectral variance trajectories were more asymmetric, exhibiting greater increases in variance (i.e., reduced sibilance) near the fricative-vowel boundary. The listeners' responses to the word productions indicated that when produced by children with CIs, /∫/-initial words were significantly more intelligible than /s/-initial words. However, when produced by children with NH, /s/-initial words and /∫/-initial words were equally intelligible. Intelligibility was partially predicted from the acoustic properties (Cox & Snell pseudo-R > 0.190), and the significant predictors were predominantly dynamic, rather than static, ones. CONCLUSIONS Productions from children with CIs differed from those produced by age-matched NH controls in terms of their subphonemic acoustic properties. The intelligibility of sibilant-initial single-word productions by children with CIs is sensitive to the place of articulation of the initial consonant (/∫/-initial words were more intelligible than /s/-initial words), but productions by children with NH were equally intelligible across both places of articulation. Therefore, children with CIs still exhibit differential production abilities for sibilant fricatives at an age when their NH peers do not.
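The growth-curve modelling referred to above represents each acoustic trajectory (for example, peak frequency sampled at 17 points across the fricative) with orthogonal polynomial time terms, so that the intercept, linear, and quadratic coefficients summarize the overall height, slope, and curvature of the trajectory. The sketch below builds such a basis and fits a single trajectory by least squares; the published analysis used mixed-effects growth-curve models across children and tokens, so this is only a per-token shape summary with assumed function names.

    import numpy as np

    def orthogonal_time_basis(n_points, degree=2):
        """Orthonormal constant/linear/quadratic time terms via QR of a Vandermonde matrix."""
        t = np.linspace(-1.0, 1.0, n_points)
        q, _ = np.linalg.qr(np.vander(t, degree + 1, increasing=True))
        return q                                  # shape (n_points, degree + 1)

    def fit_trajectory(samples, degree=2):
        """Least-squares growth-curve coefficients for one measured trajectory."""
        basis = orthogonal_time_basis(len(samples), degree)
        coefs, *_ = np.linalg.lstsq(basis, np.asarray(samples, dtype=float), rcond=None)
        return coefs                              # intercept, linear, quadratic terms

In a fit like this, the shallower peak-frequency trajectories reported for the children with CIs would show up as smaller-magnitude linear terms.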
Affiliation(s)
- Patrick F. Reidy
- Waisman Center, University of Wisconsin—Madison, Madison, Wisconsin, USA
- Department of Communication Sciences and Disorders, University of Wisconsin—Madison, Madison, Wisconsin, USA
- Kayla Kristensen
- Waisman Center, University of Wisconsin—Madison, Madison, Wisconsin, USA
- Department of Communication Sciences and Disorders, University of Wisconsin—Madison, Madison, Wisconsin, USA
- Matthew B. Winn
- Waisman Center, University of Wisconsin—Madison, Madison, Wisconsin, USA
- Department of Surgery, University of Wisconsin—Madison, Madison, Wisconsin, USA
- Department of Speech & Hearing Sciences, University of Washington, Seattle, Washington, USA
- Ruth Y. Litovsky
- Waisman Center, University of Wisconsin—Madison, Madison, Wisconsin, USA
- Department of Communication Sciences and Disorders, University of Wisconsin—Madison, Madison, Wisconsin, USA
- Department of Surgery, University of Wisconsin—Madison, Madison, Wisconsin, USA
- Jan R. Edwards
- Waisman Center, University of Wisconsin—Madison, Madison, Wisconsin, USA
- Department of Communication Sciences and Disorders, University of Wisconsin—Madison, Madison, Wisconsin, USA
34
Liu YW, Tao DD, Jiang Y, Galvin JJ III, Fu QJ, Yuan YS, Chen B. Effect of spatial separation and noise type on sentence recognition by Mandarin-speaking cochlear implant users. Acta Otolaryngol 2017; 137:829-836. [PMID: 28296522 DOI: 10.1080/00016489.2017.1292050] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/20/2022]
Abstract
OBJECTIVES To investigate the effects of spatial separation and noise type on sentence recognition by unilateral Mandarin-speaking cochlear implant (CI) users and normal-hearing (NH) listeners. METHOD Twenty-two unilateral Mandarin-speaking CI users and six NH listeners participated in this study. Speech reception thresholds were measured for three noise types (steady state noise, speech babble, and music). Sentences from the Mandarin Speech Perception test were presented directly in front of the listener (0°). Noise was presented from one of the five speaker locations: -90°, -45°, 0°, +45°, and +90°. RESULTS Overall, CI performance was significantly poorer than NH performance for all spatial separation and noise type conditions. NH listeners performed best with music and poorest with steady noise. CI users performed best with steady noise, and poorest with babble. Performance was significantly affected by noise location and noise type. There was no significant difference in head shadow effects among the different noise types for CI users. CONCLUSIONS Performance was much poorer in CI than in NH listeners for all noise types and spatial separations. Noise type differently affected unilateral CI users and NH listeners. The limited spectral resolution in CI users did not appear to affect head shadow.
Affiliation(s)
- Yang-Wenyi Liu
- Department of Otology and Skull Base Surgery, Eye Ear Nose and Throat Hospital, Fudan University, Shanghai, China
- Key Laboratory of Hearing Medicine, Ministry of Health, Eye Ear Nose and Throat Hospital, Fudan University, Shanghai, China
- Duo-Duo Tao
- Department of Ear, Nose, and Throat, The First Affiliated Hospital of Soochow University, Suzhou, China
- Ye Jiang
- Department of Otology and Skull Base Surgery, Eye Ear Nose and Throat Hospital, Fudan University, Shanghai, China
- Key Laboratory of Hearing Medicine, Ministry of Health, Eye Ear Nose and Throat Hospital, Fudan University, Shanghai, China
- John J. Galvin III
- Department of Head and Neck Surgery, David Geffen School of Medicine, UCLA, Los Angeles, CA, USA
- Qian-Jie Fu
- Department of Head and Neck Surgery, David Geffen School of Medicine, UCLA, Los Angeles, CA, USA
- Ya-sheng Yuan
- Department of Otology and Skull Base Surgery, Eye Ear Nose and Throat Hospital, Fudan University, Shanghai, China
- Key Laboratory of Hearing Medicine, Ministry of Health, Eye Ear Nose and Throat Hospital, Fudan University, Shanghai, China
- Bing Chen
- Department of Otology and Skull Base Surgery, Eye Ear Nose and Throat Hospital, Fudan University, Shanghai, China
- Key Laboratory of Hearing Medicine, Ministry of Health, Eye Ear Nose and Throat Hospital, Fudan University, Shanghai, China
35
Goupell MJ, Kan A, Litovsky RY. Spatial attention in bilateral cochlear-implant users. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2016; 140:1652. [PMID: 27914414 PMCID: PMC5848865 DOI: 10.1121/1.4962378] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 09/18/2015] [Revised: 08/24/2016] [Accepted: 08/24/2016] [Indexed: 05/28/2023]
Abstract
Cochlear-implant (CI) users have difficulty understanding speech in the presence of interfering sounds. This study was designed to determine if binaural unmasking of speech is limited by peripheral or central encoding. Speech was presented to bilateral CI listeners using their clinical processors; unprocessed or vocoded speech was presented to normal-hearing (NH) listeners. Performance was worst for all listener groups in conditions where both the target and interferer were presented monaurally or diotically (i.e., no spatial differences). Listeners demonstrated improved performance compared to the monaural and diotic conditions when the target and interferer were presented to opposite ears. However, only some CI listeners demonstrated improved performance if the target was in one ear and the interferer was presented diotically, and there was no change for the group on average. This is unlike the 12-dB benefit observed in the NH group when presented the CI simulation. The results suggest that CI users can direct attention to a target talker if the target and interferer are presented to opposite ears; however, larger binaural benefits are limited for more realistic listening configurations, likely due to the imprecise peripheral encoding of the two sounds.
Affiliation(s)
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Alan Kan
- Waisman Center, University of Wisconsin, 1500 Highland Avenue, Madison, Wisconsin 53705, USA
- Ruth Y Litovsky
- Waisman Center, University of Wisconsin, 1500 Highland Avenue, Madison, Wisconsin 53705, USA
36
Does Bilateral Experience Lead to Improved Spatial Unmasking of Speech in Children Who Use Bilateral Cochlear Implants? Otol Neurotol 2016; 37:e35-42. [PMID: 26756153 DOI: 10.1097/mao.0000000000000905] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
HYPOTHESIS In children with bilateral cochlear implants (BiCIs), experience over a 1 to 3-year period can improve speech understanding and spatial unmasking of speech. BACKGROUND One reason for providing children with BiCIs is to improve spatial hearing abilities. Little is known about changes in performance with added bilateral experience, and the relation between sound localization and spatial unmasking of speech. METHODS Twenty children with BiCIs participated. Testing was conducted typically within a year of bilateral activation, and at 1, 2, or 3 follow-up annual intervals. All testing was done while children listened with both devices activated. Target speech was presented from front (co-located); interfering speech was from front, right (asymmetrical), or right and left (symmetrical). Speech reception thresholds (SRTs) were measured in each condition. Spatial release from masking (SRM) was quantified as the difference in SRTs between conditions with interferers at 0 degrees and 90 degrees. For 11 of the children, data are also compared with sound localization measures obtained on the same visit to the laboratory but published elsewhere. RESULTS Change in SRM with bilateral experience varied; some children showed improvement and others did not. Regression analyses identified relationships between SRTs and SRM. Comparison of the SRM with localization data suggests little evidence for correlations between the two spatial tasks. CONCLUSION In children with BiCIs spatial hearing mechanisms involved in SRM and sound localization may be different. Reasons for reduced SRM include asymmetry between the ears, and individual differences in the ability to inhibit interfering information, switch and/or sustain attention.
37
Litovsky RY, Gordon K. Bilateral cochlear implants in children: Effects of auditory experience and deprivation on auditory perception. Hear Res 2016; 338:76-87. [PMID: 26828740 PMCID: PMC5647834 DOI: 10.1016/j.heares.2016.01.003] [Citation(s) in RCA: 70] [Impact Index Per Article: 8.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/07/2015] [Revised: 01/07/2016] [Accepted: 01/11/2016] [Indexed: 11/29/2022]
Abstract
Spatial hearing skills are essential for children as they grow, learn and play. These skills provide critical cues for determining the locations of sources in the environment, and enable segregation of important sounds, such as speech, from background maskers or interferers. Spatial hearing depends on availability of monaural cues and binaural cues. The latter result from integration of inputs arriving at the two ears from sounds that vary in location. The binaural system has exquisite mechanisms for capturing differences between the ears in both time of arrival and intensity. The major cues that are thus referred to as being vital for binaural hearing are: interaural differences in time (ITDs) and interaural differences in levels (ILDs). In children with normal hearing (NH), spatial hearing abilities are fairly well developed by age 4-5 years. In contrast, most children who are deaf and hear through cochlear implants (CIs) do not have an opportunity to experience normal, binaural acoustic hearing early in life. These children may function by having to utilize auditory cues that are degraded with regard to numerous stimulus features. In recent years there has been a notable increase in the number of children receiving bilateral CIs, and evidence suggests that while having two CIs helps them function better than when listening through a single CI, these children generally perform worse than their NH peers. This paper reviews some of the recent work on bilaterally implanted children. The focus is on measures of spatial hearing, including sound localization, release from masking for speech understanding in noise and binaural sensitivity using research processors. Data from behavioral and electrophysiological studies are included, with a focus on the recent work of the authors and their collaborators. The effects of auditory plasticity and deprivation on the emergence of binaural and spatial hearing are discussed along with evidence for reorganized processing from both behavioral and electrophysiological studies. The consequences of both unilateral and bilateral auditory deprivation during development suggest that the relevant set of issues is highly complex with regard to successes and the limitations experienced by children receiving bilateral cochlear implants. This article is part of a Special Issue entitled .
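For a sense of the magnitudes behind the ITD and ILD cues discussed in this review, Woodworth's spherical-head approximation (a textbook formula, not taken from the review itself) relates ITD to source azimuth θ, in radians:

    \[
    \mathrm{ITD}(\theta) \approx \frac{a}{c}\,(\theta + \sin\theta),
    \qquad a \approx 0.0875~\text{m (head radius)},\quad c \approx 343~\text{m/s},
    \]

which gives roughly 0.66 ms for a source at 90° to the side; ILDs arise from head shadowing and grow with frequency. As noted elsewhere in this list, clinically fit bilateral CI processors are not synchronized across the ears, so much of this fine ITD information is not reliably delivered to the children described here.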
Affiliation(s)
- Ruth Y Litovsky
- University of Wisconsin-Madison, 1500 Highland Ave, Madison, WI, 53705, United States.
38
Buss E, Leibold LJ, Hall JW. Effect of response context and masker type on word recognition in school-age children and adults. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2016; 140:968. [PMID: 27586729 PMCID: PMC5392093 DOI: 10.1121/1.4960587] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 03/11/2016] [Revised: 07/13/2016] [Accepted: 07/26/2016] [Indexed: 05/14/2023]
Abstract
In adults, masked speech recognition improves with the provision of a closed set of response alternatives. The present study evaluated whether school-age children (5-13 years) benefit to the same extent as adults from a forced-choice context, and whether this effect depends on masker type. Experiment 1 compared masked speech reception thresholds for disyllabic words in either an open-set or a four-alternative forced-choice (4AFC) task. Maskers were speech-shaped noise or two-talker speech. Experiment 2 compared masked speech reception thresholds for monosyllabic words in two 4AFC tasks, one in which the target and foils were phonetically similar and one in which they were dissimilar. Maskers were speech-shaped noise, amplitude-modulated noise, or two-talker speech. For both experiments, it was predicted that children would not benefit from the information provided by the 4AFC context to the same degree as adults, particularly when the masker was complex (two-talker) or when audible speech cues were temporally sparse (modulated-noise). Results indicate that young children do benefit from a 4AFC context to the same extent as adults in speech-shaped noise and amplitude-modulated noise, but the benefit of context increases with listener age for the two-talker speech masker.
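One reason closed-set (4AFC) and open-set scores are not directly comparable is guessing: with m response alternatives a listener scores 1/m correct by chance. A standard guessing correction, given here only as background and not necessarily part of the study's analysis, rescales the observed proportion correct:

    \[
    p_{\text{adj}} = \frac{p_{\text{obs}} - 1/m}{1 - 1/m},
    \]

which equals 0 at chance and 1 at perfect performance (m = 4 for the forced-choice tasks described above).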
Affiliation(s)
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599, USA
- Lori J Leibold
- Boys Town National Research Hospital, Omaha, Nebraska 68131, USA
- Joseph W Hall
- Department of Otolaryngology/Head and Neck Surgery, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599, USA
39
Todd AE, Goupell MJ, Litovsky RY. Binaural release from masking with single- and multi-electrode stimulation in children with cochlear implants. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2016; 140:59. [PMID: 27475132 PMCID: PMC5392083 DOI: 10.1121/1.4954717] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/10/2023]
Abstract
Cochlear implants (CIs) provide children with access to speech information from a young age. Despite bilateral cochlear implantation becoming common, use of spatial cues in free field is smaller than in normal-hearing children. Clinically fit CIs are not synchronized across the ears; thus binaural experiments must utilize research processors that can control binaural cues with precision. Research to date has used single pairs of electrodes, which is insufficient for representing speech. Little is known about how children with bilateral CIs process binaural information with multi-electrode stimulation. Toward the goal of improving binaural unmasking of speech, this study evaluated binaural unmasking with multi- and single-electrode stimulation. Results showed that performance with multi-electrode stimulation was similar to the best performance with single-electrode stimulation. This was similar to the pattern of performance shown by normal-hearing adults when presented an acoustic CI simulation. Diotic and dichotic signal detection thresholds of the children with CIs were similar to those of normal-hearing children listening to a CI simulation. The magnitude of binaural unmasking was not related to whether the children with CIs had good interaural time difference sensitivity. Results support the potential for benefits from binaural hearing and speech unmasking in children with bilateral CIs.
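The binaural unmasking measured here follows the classic masking-level-difference logic: a detection threshold in a diotic condition, with the signal and masker identical at the two ears, is compared with a dichotic condition in which the signal (or masker) is made interaurally different. As a generic acoustic definition (the study's electrode-level conditions are more specific):

    \[
    \mathrm{BMLD} = \text{threshold}(N_0 S_0) - \text{threshold}(N_0 S_\pi),
    \]

where N0S0 denotes identical noise and signal at both ears, N0Sπ denotes the signal inverted in one ear, and positive values indicate a binaural release from masking.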
Affiliation(s)
- Ann E Todd
- Waisman Center, University of Wisconsin-Madison, 1500 Highland Avenue, Madison, Wisconsin 53705, USA
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Ruth Y Litovsky
- Waisman Center, University of Wisconsin-Madison, 1500 Highland Avenue, Madison, Wisconsin 53705, USA