1. Yang AW, Pillion EM, Riley CA, Tolisano AM. Differences in music appreciation between bilateral and single-sided cochlear implant recipients. Am J Otolaryngol 2024; 45:104331. [PMID: 38677147] [DOI: 10.1016/j.amjoto.2024.104331]
Abstract
OBJECTIVE To compare changes in music appreciation after cochlear implant (CI) surgery between patients with bilateral deafness and those with single-sided deafness (SSD). METHODS A retrospective cohort study was performed on all adult unilateral or bilateral CI recipients from November 2019 to March 2023. Musical questionnaire subset data from the Cochlear Implant Quality of Life (CIQOL)-35 Profile instrument (maximum raw score of 15) were collected. Functional CI performance was measured with CI-alone speech-in-quiet (SIQ) scores (AzBio and CNC). RESULTS 22 adults underwent CI surgery for SSD and 21 for bilateral deafness (8 sequentially implanted). Every patient group had clinically significant improvements (p < 0.001) in mean SIQ scores in the most recently implanted ear (AzBio % correct; SSD: 14.23 to 68.48, bilateral: 24.54 to 82.23, sequential: 6.25 to 82.57). SSD adults had higher mean music QOL scores at baseline (SSD: 11.05; bilateral: 7.86, p < 0.001). Neither group had a significant increase in raw score at the first post-operative visit (SSD: 11.45, p = 0.86; bilateral: 8.15, p = 0.15). By the most recent post-implantation evaluation (median 12.8 months for SSD, 12.3 months for bilateral), SSD adults had a significant increase in raw score from baseline (11.05 to 12.45, p = 0.03), whereas bilaterally deafened adults had a nonsignificant increase (7.86 to 9.38, p = 0.12). CONCLUSIONS SSD patients demonstrate higher baseline music appreciation than bilaterally deafened individuals, regardless of unilateral or bilateral implantation, and are more likely to demonstrate continued improvement in subjective music appreciation at last follow-up, even when speech perception outcomes are similar.
Affiliation(s)
- Alex W Yang
- Department of Otolaryngology Head and Neck Surgery, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Elicia M Pillion
- Department of Audiology, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Charles A Riley
- Department of Otolaryngology Head and Neck Surgery, Walter Reed National Military Medical Center, Bethesda, MD, USA; Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Anthony M Tolisano
- Department of Otolaryngology Head and Neck Surgery, Walter Reed National Military Medical Center, Bethesda, MD, USA; Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD, USA

2. Yu Q, Li H, Li S, Tang P. Prosodic and Visual Cues Facilitate Irony Comprehension by Mandarin-Speaking Children With Cochlear Implants. J Speech Lang Hear Res 2024:1-19. [PMID: 38820233] [DOI: 10.1044/2024_jslhr-23-00701]
Abstract
PURPOSE This study investigated irony comprehension by Mandarin-speaking children with cochlear implants, focusing on how prosodic and visual cues contribute to their comprehension and whether second-order Theory of Mind is required for using these cues. METHOD We tested 52 Mandarin-speaking children with cochlear implants (aged 3-7 years) and 52 age- and gender-matched children with normal hearing. All children completed a Theory of Mind test and a story comprehension test. Ironic stories were presented in three conditions, each providing different cues: (a) context only, (b) context and prosody, and (c) context, prosody, and visual cues. Accuracy of story understanding was compared across the three conditions to examine the role of prosodic and visual cues. RESULTS Compared to the context-only condition, the additional prosodic and visual cues each improved the accuracy of irony comprehension for children with cochlear implants, as they did for their normal-hearing peers. Furthermore, these improvements were observed for all children, regardless of whether they passed the second-order Theory of Mind test. CONCLUSIONS This study is the first to demonstrate that prosodic and visual cues benefit irony comprehension, without reliance on second-order Theory of Mind, for Mandarin-speaking children with cochlear implants. These findings suggest that prosodic and visual cues could be incorporated into intervention strategies to promote irony comprehension.
Affiliation(s)
- Qianxi Yu
- School of Foreign Studies, Nanjing University of Science and Technology, China
- Honglan Li
- School of Foreign Studies, Nanjing University of Science and Technology, China
- Shanpeng Li
- School of Foreign Studies, Nanjing University of Science and Technology, China
- Ping Tang
- School of Foreign Studies, Nanjing University of Science and Technology, China

3. Camarena A, Goldsworthy RL. Characterizing the relationship between modulation sensitivity and pitch resolution in cochlear implant users. Hear Res 2024; 448:109026. [PMID: 38776706] [DOI: 10.1016/j.heares.2024.109026]
Abstract
Cochlear implants are medical devices that have restored hearing to approximately one million people around the world. Outcomes are impressive and most recipients attain excellent speech comprehension in quiet without relying on lip-reading cues, but pitch resolution is poor compared to normal hearing. Amplitude modulation of electrical stimulation is a primary cue for pitch perception in cochlear implant users. The experiments described in this article focus on the relationship between sensitivity to amplitude modulations and pitch resolution based on changes in the frequency of amplitude modulations. In the first experiment, modulation sensitivity and pitch resolution were measured in adults with no known hearing loss and in cochlear implant users, with sounds presented to and processed by their clinical devices. Stimuli were amplitude-modulated sinusoids and amplitude-modulated narrow-band noises. Modulation detection and modulation frequency discrimination were measured for modulation frequencies centered on 110, 220, and 440 Hz. Pitch resolution based on changes in modulation frequency was measured for modulation depths of 25 %, 50 %, and 100 %, and for a half-wave-rectified modulator. Results revealed a strong linear relationship between modulation sensitivity and pitch resolution for cochlear implant users and peers with no known hearing loss. In the second experiment, cochlear implant users took part in analogous procedures for modulation sensitivity and pitch resolution, but with clinical sound processing bypassed using single-electrode stimulation. Results indicated that modulation sensitivity and pitch resolution were better conveyed by single-electrode stimulation than by clinical processors. Results at 440 Hz were worse overall and likewise poorly conveyed by clinical sound processing, so it remains unclear whether the 300 Hz perceptual limit described in the literature is a technological or a biological limitation. These results highlight modulation depth and sensitivity as critical factors for pitch resolution in cochlear implant users and characterize a relationship that should inform the design of modulation enhancement algorithms for cochlear implants.
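Stimuli of the kind described above can be sketched in a few lines of NumPy. This is an illustrative reconstruction, not the authors' actual stimulus code; the function name, sampling rate, and duration are assumptions.

```python
import numpy as np

def am_tone(carrier_hz, mod_hz, depth, dur=0.5, fs=16000):
    """Sinusoidally amplitude-modulated tone.

    depth is the modulation depth m (e.g. 0.25, 0.5, 1.0, matching the
    25 %, 50 %, and 100 % conditions). The envelope
    (1 + m*sin(2*pi*fm*t)) / (1 + m) keeps the peak amplitude at 1.
    """
    t = np.arange(int(dur * fs)) / fs
    envelope = (1.0 + depth * np.sin(2 * np.pi * mod_hz * t)) / (1.0 + depth)
    return envelope * np.sin(2 * np.pi * carrier_hz * t)

# One of the modulation frequencies used in the study, at 50 % depth:
stimulus = am_tone(carrier_hz=4000, mod_hz=110, depth=0.5)
```

Varying `mod_hz` around 110, 220, or 440 Hz while holding `depth` fixed gives the kind of modulation-frequency discrimination stimuli the experiments compare.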
Affiliation(s)
- Andres Camarena
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States of America
- Raymond L Goldsworthy
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States of America

4. Schulz KV, Gauer J, Martin R, Völter C. [Influence of overtones and undertones on melody recognition with a cochlear implant with SSD]. Laryngorhinootologie 2024; 103:279-288. [PMID: 37748501] [DOI: 10.1055/a-2123-4315]
Abstract
Many cochlear implant (CI) users have difficulties recognising pitches and melodies because pitch transmission is blurred and shifted. This study investigates whether postlingually deafened adult CI users recognise melodies better when overtones are removed or undertones are added. Fifteen unilaterally postlingually deafened CI users (single-sided deafness, SSD) aged 22 to 73 years (mean 52, SD 11.6), with CI hearing experience between 3 and 75 months (mean 33, SD 21.0) and varying MED-EL devices, were included. Three short piano melodies were presented first to the normal-hearing ear, and then, in modified overtone variants, modified undertone variants, and the original variant, to the CI ear. Participants had to identify each variant as one of the three original melodies. In addition, musical experience and ability were assessed with the Munich Music Questionnaire and the Mini-PROMS music test. The CI users showed the best melody recognition with the fundamental-frequency variant. The overtone variant retaining the third overtone yielded melody recognition as good as the original variant with all overtones (p=1). However, the undertone variant with the first undertone was recognised significantly worse than the fundamental version (p=0.032). Furthermore, there was no correlation between musical experience or musical ability and the number of melodies recognised (p>0.1). Since removing overtones did not worsen melody recognition, overtone reduction should be considered in future music processing programs for the CI; this could reduce the energy consumption of the CI.
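To make the overtone and undertone manipulations concrete, the variants can be pictured as different subsets of partials of a harmonic tone. This is a simplified sketch under stated assumptions (equal-amplitude partials, a single note, arbitrary f0), not the study's actual signal processing:

```python
import numpy as np

def harmonic_tone(f0, partials, dur=0.5, fs=16000):
    """Sum of equal-amplitude sinusoidal partials at the given multiples
    of the fundamental f0 (simplified: real piano partials decay in
    amplitude and are slightly inharmonic)."""
    t = np.arange(int(dur * fs)) / fs
    x = sum(np.sin(2 * np.pi * f0 * p * t) for p in partials)
    return x / len(partials)

f0 = 220.0
full = harmonic_tone(f0, [1, 2, 3, 4])      # original: fundamental plus overtones
fundamental = harmonic_tone(f0, [1])        # all overtones removed
third_overtone = harmonic_tone(f0, [1, 4])  # fundamental plus third overtone only
undertone = harmonic_tone(f0, [0.5, 1])     # first undertone (f0/2) added
```

Rendering a melody in each variant and asking listeners to match it to the original corresponds to the identification task described above.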
Affiliation(s)
- Kira Viviane Schulz
- Universitätsklinik für Hals-Nasen-Ohrenheilkunde und Kopf- und Halschirurgie der Ruhr-Universität Bochum, Sankt Elisabeth Hospital, Ruhr-Universität Bochum, Bochum, Deutschland
- Johannes Gauer
- Fakultät für Elektrotechnik und Informationstechnik, Institut für Kommunikationsakustik, Ruhr-Universität Bochum, Bochum, Deutschland
- Rainer Martin
- Fakultät für Elektrotechnik und Informationstechnik, Institut für Kommunikationsakustik, Ruhr-Universität Bochum, Bochum, Deutschland
- Christiane Völter
- Universitätsklinik für Hals-Nasen-Ohrenheilkunde und Kopf- und Halschirurgie der Ruhr-Universität Bochum, Sankt Elisabeth Hospital, Ruhr-Universität Bochum, Bochum, Deutschland

5. Yüksel M, Çiprut A. Reduced Channel Interaction Improves Timbre Recognition Under Vocoder Simulation of Cochlear Implant Processing. Otol Neurotol 2024; 45:e297-e306. [PMID: 38437807] [DOI: 10.1097/mao.0000000000004151]
Abstract
OBJECTIVE This study aimed to investigate the influence of the number of channels and channel interaction on timbre perception in cochlear implant (CI) processing. By utilizing vocoder simulations of CI processing, the effects of different numbers of channels and channel interaction were examined to assess their impact on timbre perception, an essential aspect of music and auditory performance. STUDY DESIGN, SETTING, AND PATIENTS Fourteen CI recipients, with at least 1 year of CI device use, and two groups (N = 16 and N = 19) of normal hearing (NH) participants completed a timbre recognition (TR) task. NH participants were divided into two groups, with each group being tested on different aspects of the study. The first group underwent testing with varying numbers of channels (8, 12, 16, and 20) to determine an ideal number that closely reflected the TR performance of CI recipients. Subsequently, the second group of NH participants participated in the assessment of channel interaction, utilizing the identified ideal number of 20 channels, with three conditions: low interaction (54 dB/octave), medium interaction (24 dB/octave), and high interaction (12 dB/octave). Statistical analyses, including repeated-measures analysis of variance and pairwise comparisons, were conducted to examine the effects. RESULTS The number of channels did not demonstrate a statistically significant effect on TR in NH participants (p > 0.05). However, it was observed that the condition with 20 channels closely resembled the TR performance of CI recipients. In contrast, channel interaction exhibited a significant effect (p < 0.001) on TR. Both the low interaction (54 dB/octave) and high interaction (12 dB/octave) conditions differed significantly from the actual CI recipients' performance. CONCLUSION Timbre perception, a complex ability reliant on highly detailed spectral resolution, was not significantly influenced by the number of channels. However, channel interaction emerged as a significant factor affecting timbre perception. The differences observed under different channel interaction conditions suggest potential mechanisms, including reduced spectro-temporal resolution and degraded spectral cues. These findings highlight the importance of considering channel interaction and optimizing CI processing strategies to enhance music perception and overall auditory performance for CI recipients.
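The dB/octave slopes used to manipulate channel interaction can be read as follows: a component one octave from a channel's centre is attenuated by the slope value, so shallower slopes leak more energy into neighbouring channels. A small illustrative sketch (the function is hypothetical and ignores the filter passband shape, so it is not the study's vocoder):

```python
import numpy as np

def slope_attenuation_db(f, fc, slope_db_per_octave):
    """Attenuation in dB of a component at frequency f for an analysis
    channel centred at fc, given a filter roll-off in dB per octave."""
    return slope_db_per_octave * abs(np.log2(f / fc))

# Leakage of a 2 kHz component into a channel centred at 1 kHz (one octave away):
for slope in (54, 24, 12):  # low, medium, and high interaction conditions
    print(slope, "dB/oct ->", slope_attenuation_db(2000, 1000, slope), "dB down")
```

At 12 dB/octave the neighbouring-channel component is only 12 dB down, versus 54 dB down in the low-interaction condition, which is why the shallow-slope condition simulates strong channel interaction.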
Affiliation(s)
- Mustafa Yüksel
- Department of Audiology, Ankara Medipol University Faculty of Health Sciences, Ankara
- Ayça Çiprut
- Department of Audiology, Marmara University Faculty of Medicine, Istanbul, Turkey

6. Stone TC, Erickson ML. Experienced and Inexperienced Listeners' Perception of Vocal Strain. J Voice 2024:S0892-1997(24)00024-9. [PMID: 38443265] [DOI: 10.1016/j.jvoice.2024.02.002]
Abstract
OBJECTIVE The ability to perceive strain or tension in a voice is critical for both speech-language pathologists and singing teachers. Research on voice quality has focused primarily on the perception of breathiness or roughness; the perception of vocal strain has not been extensively researched and is poorly understood. METHODS/DESIGN This study employs a group and a within-subject design. Synthetic female sung stimuli were constructed on the vowel /ɑ/ at two pitches, A3 and F5, using glottal source slopes that drop in amplitude at constant rates varying from -6 dB/octave to -18 dB/octave. All stimuli were filtered using three vocal tract transfer functions: one derived from a lyric/coloratura soprano, one derived from a mezzo-soprano, and a third with resonance frequencies midway between the two. Two groups of listeners, inexperienced listeners and experienced vocal pedagogues, heard the stimuli over headphones and rated perceived strain on a visual analog scale ranging from "no strain" to "very strained." RESULTS Spectral source slope was strongly related to the perception of strain in both groups of listeners. Experienced listeners' perception of strain was also related to formant pattern, whereas inexperienced listeners' perception of strain was also related to pitch. CONCLUSION This study has shown that spectral source slope can be a powerful cue to the perception of strain. However, inexperienced and experienced listeners differ in how strain is perceived across speaking and singing pitches. These differences may be based on both experience and the goals of the listener.
Affiliation(s)
- Taylor Colton Stone
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, Tennessee
- Molly L Erickson
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, Tennessee

7. Chang YJ, Han JY, Chu WC, Li LPH, Lai YH. Enhancing music recognition using deep learning-powered source separation technology for cochlear implant users. J Acoust Soc Am 2024; 155:1694-1703. [PMID: 38426839] [DOI: 10.1121/10.0025057]
Abstract
The cochlear implant (CI) is currently the vital technological device for helping deaf patients hear sounds and greatly enhances their listening experience. Unfortunately, it performs poorly for music listening because of the insufficient number of electrodes and inaccurate identification of music features. Therefore, this study applied source separation technology with a self-adjustment function to enhance the music listening benefits for CI users. In the objective analysis, the proposed method achieved source-to-distortion, source-to-interference, and source-to-artifact ratios of 4.88, 5.92, and 15.28 dB, respectively, significantly better than the Demucs baseline model. In the subjective analysis, it scored higher than the traditional baseline method VIR6 (vocal-to-instrument ratio, 6 dB) by approximately 28.1 and 26.4 points (out of 100) in the multiple stimulus with hidden reference and anchor (MUSHRA) test. The experimental results showed that the proposed method can help CI users identify music in a live concert, and the personal self-fitting signal separation method outperformed all default baselines (vocal-to-instrument ratio of 6 dB or 0 dB). These findings suggest that the proposed system is a promising method for enhancing the music listening benefits of CI users.
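The source-to-distortion ratio reported above is, in its simplest form, the energy ratio between the part of the estimate that matches the reference and everything else. A minimal sketch of that idea follows; full BSS Eval additionally decomposes the residual into interference and artifact terms, which this simplified version does not:

```python
import numpy as np

def sdr_db(reference, estimate):
    """Simplified signal-to-distortion ratio in dB: project the estimate
    onto the reference (allowing a gain difference) and treat the
    residual as distortion."""
    ref = np.asarray(reference, dtype=float)
    est = np.asarray(estimate, dtype=float)
    target = (np.dot(est, ref) / np.dot(ref, ref)) * ref
    residual = est - target
    return 10.0 * np.log10(np.sum(target ** 2) / np.sum(residual ** 2))
```

For example, an estimate whose residual energy is 1 % of the target energy scores 20 dB, in the same units as the 4.88 dB SDR quoted above.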
Affiliation(s)
- Yuh-Jer Chang
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ji-Yan Han
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Wei-Chung Chu
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Lieber Po-Hung Li
- Faculty of Medicine, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Department of Otolaryngology, Cheng Hsin General Hospital, Taipei, Taiwan
- Department of Medical Research, China Medical University Hospital, China Medical University, Taichung, Taiwan
- Institute of Brain Science, School of Medicine, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Ying-Hui Lai
- Department of Biomedical Engineering, National Yang Ming Chiao Tung University, Taipei, Taiwan
- Medical Device Innovation Translation Center, National Yang Ming Chiao Tung University, Taipei, Taiwan

8. Calvino M, Zuazua A, Sanchez-Cuadrado I, Gavilán J, Mancheño M, Arroyo H, Lassaletta L. Meludia platform as a tool to evaluate music perception in pediatric and adult cochlear implant users. Eur Arch Otorhinolaryngol 2024; 281:629-638. [PMID: 37480418] [PMCID: PMC10796694] [DOI: 10.1007/s00405-023-08121-7]
Abstract
PURPOSE Music perception is one of the greatest challenges for cochlear implant (CI) users. The aims of this study were: (i) to evaluate the music perception of CI users using the online Meludia music training program as a music testing platform, (ii) to compare performance among three age groups, and (iii) to compare CI users with their normal hearing (NH) peers. METHODS 138 individuals participated, divided into children (6-10 y), adolescents (11-16 y), and adults (≥ 17 y). Five music perception tasks were evaluated: Rhythm, Spatialization, Stable/unstable, Melody, and Density. We also administered the music-related quality of life (MuRQoL) questionnaire for adults and a music questionnaire for the pediatric population (6-16 y) (MuQPP). RESULTS A significantly higher percentage of adolescent CI users completed the five tasks compared to the other age groups. Both child and adolescent CI users performed similarly to their NH peers in most categories. On the MuRQoL, adult NH listeners reported more music exposure than CI users (3.8 ± 0.6 vs 3.0 ± 0.6, p < 0.01), but both groups reported similar levels of perceived music importance (3.4 ± 0.7 vs 3.2 ± 1.1, p = 0.340). On the MuQPP, pediatric CI users who scored highly on music perception also had higher questionnaire scores (54.2 ± 12.9 vs 40.9 ± 12.1, p = 0.009). CONCLUSIONS Meludia can be used both to evaluate music perception and to provide music training for CI users of all ages. Adolescents had the highest performance in most musical tasks. Pediatric CI users were more similar to their NH peers, and the importance of music to adult CI users was comparable to that reported by their NH peers.
Affiliation(s)
- Miryam Calvino
- Department of Otorhinolaryngology, Hospital Universitario La Paz, IdiPAZ Research Institute, Paseo de la Castellana 261, 28046 Madrid, Spain
- Biomedical Research Networking Centre on Rare Diseases (CIBERER), Institute of Health Carlos III (CIBERER-U761), Madrid, Spain
- Alejandro Zuazua
- Department of Otorhinolaryngology, Hospital Infanta Leonor, Madrid, Spain
- Isabel Sanchez-Cuadrado
- Department of Otorhinolaryngology, Hospital Universitario La Paz, IdiPAZ Research Institute, Paseo de la Castellana 261, 28046 Madrid, Spain
- Javier Gavilán
- Department of Otorhinolaryngology, Hospital Universitario La Paz, IdiPAZ Research Institute, Paseo de la Castellana 261, 28046 Madrid, Spain
- Marta Mancheño
- Department of Otorhinolaryngology, Hospital Universitario La Paz, IdiPAZ Research Institute, Paseo de la Castellana 261, 28046 Madrid, Spain
- Helena Arroyo
- Department of Otorhinolaryngology, Hospital Universitario La Paz, IdiPAZ Research Institute, Paseo de la Castellana 261, 28046 Madrid, Spain
- Luis Lassaletta
- Department of Otorhinolaryngology, Hospital Universitario La Paz, IdiPAZ Research Institute, Paseo de la Castellana 261, 28046 Madrid, Spain
- Biomedical Research Networking Centre on Rare Diseases (CIBERER), Institute of Health Carlos III (CIBERER-U761), Madrid, Spain

9. Calvino M, Zuazua-González A, Gavilán J, Lassaletta L. Objective and Subjective Assessment of Music Perception and Musical Experiences in Young Cochlear Implant Users. Audiol Res 2024; 14:86-95. [PMID: 38247564] [PMCID: PMC10801469] [DOI: 10.3390/audiolres14010008]
Abstract
For many individuals, music has a significant impact on the quality and enjoyability of life. Cochlear implant (CI) users must cope with the constraints that the CI imposes on music perception. Here, we assessed the musical experiences of young CI users and age-matched controls with normal hearing (NH). CI users and NH peers were divided into subgroups according to age: children and adolescents. Participants were tested on their ability to recognize vocal and instrumental music and instruments. A music questionnaire for pediatric populations (MuQPP) was also used. CI users and NH peers identified a similar percentage of vocal music. CI users were significantly worse at recognizing instruments (p < 0.05) and instrumental music (p < 0.05). CI users scored similarly to NH peers on the MuQPP, except for the musical frequency domain, where CI users in the children subgroup scored higher than their NH peers (p = 0.009). For CI users in the children subgroup, the identification of instrumental music was positively correlated with music importance (p = 0.029). Young CI users have significant deficits in some aspects of music perception (instrumental music and instrument identification) but have similar scores to NH peers in terms of interest in music, frequency of music exposure, and importance of music.
Affiliation(s)
- Miryam Calvino
- Department of Otorhinolaryngology, Hospital La Paz, IdiPAZ Research Institute, 28046 Madrid, Spain
- Biomedical Research Networking Centre on Rare Diseases (CIBERER), Institute of Health Carlos III (CIBERER-U761), 28029 Madrid, Spain
- Javier Gavilán
- Department of Otorhinolaryngology, Hospital La Paz, IdiPAZ Research Institute, 28046 Madrid, Spain
- Luis Lassaletta
- Department of Otorhinolaryngology, Hospital La Paz, IdiPAZ Research Institute, 28046 Madrid, Spain
- Biomedical Research Networking Centre on Rare Diseases (CIBERER), Institute of Health Carlos III (CIBERER-U761), 28029 Madrid, Spain

10. Creff G, Lambert C, Coudert P, Pean V, Laurent S, Godey B. Comparison of Tonotopic and Default Frequency Fitting for Speech Understanding in Noise in New Cochlear Implantees: A Prospective, Randomized, Double-Blind, Cross-Over Study. Ear Hear 2024; 45:35-52. [PMID: 37823850] [DOI: 10.1097/aud.0000000000001423]
Abstract
OBJECTIVES While cochlear implants (CIs) have provided benefits for speech recognition in quiet for subjects with severe-to-profound hearing loss, speech recognition in noise remains challenging. A body of evidence suggests that reducing frequency-to-place mismatch may positively affect speech perception. Thus, a fitting method based on a tonotopic map may improve speech perception in quiet and in noise. The aim of our study was to assess the impact of a tonotopic map on speech perception in noise and in quiet in new CI users. DESIGN A prospective, randomized, double-blind, two-period cross-over study of 26 new CI users was performed over a 6-month period. New CI users older than 18 years with bilateral severe-to-profound sensorineural hearing loss, or with complete hearing loss of less than 5 years' duration, were recruited at the University Hospital Centre of Rennes, France. An anatomical tonotopic map was created using postoperative flat-panel computed tomography and reconstruction software based on the Greenwood function. Each participant was randomized to receive a conventional map followed by a tonotopic map, or vice versa. Each setting was maintained for 6 weeks, at the end of which participants performed speech perception tasks. The primary outcome measure was speech recognition in noise. Participants were allocated to sequences by block randomization of size two with a 1:1 ratio (CONSORT guidelines). Participants and those assessing the outcomes were blinded to the intervention. RESULTS Thirteen participants were randomized to each sequence. Two of the 26 participants recruited (one in each sequence) had to be excluded due to the COVID-19 pandemic. Twenty-four participants were analyzed.
Speech recognition in noise was significantly better with the tonotopic fitting at all signal-to-noise ratio (SNR) levels tested [SNR = +9 dB, p = 0.002, mean effect (ME) = 12.1%, 95% confidence interval (95% CI) = 4.9 to 19.2, standardized effect size (SES) = 0.71; SNR = +6 dB, p < 0.001, ME = 16.3%, 95% CI = 9.8 to 22.7, SES = 1.07; SNR = +3 dB, p < 0.001, ME = 13.8%, 95% CI = 6.9 to 20.6, SES = 0.84; SNR = 0 dB, p = 0.003, ME = 10.8%, 95% CI = 4.1 to 17.6, SES = 0.68]. Neither period nor interaction effects were observed for any signal level. Speech recognition in quiet (p = 0.66) and tonal audiometry (p = 0.203) did not significantly differ between the two settings. Ninety-two percent of the participants kept the tonotopy-based map after the study period. No correlation was found between speech-in-noise perception and age, duration of hearing deprivation, angular insertion depth, or the position or width of the frequency filters allocated to the electrodes. CONCLUSION For new CI users, tonotopic fitting appears to be more efficient than default frequency fitting because it allows better speech recognition in noise without compromising understanding in quiet.
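The Greenwood function mentioned above maps position along the cochlea to characteristic frequency. A sketch with the commonly cited human constants follows; the study's reconstruction software may use different parameter values or express position in millimetres rather than as a proportion:

```python
def greenwood_frequency(x, A=165.4, a=2.1, k=0.88):
    """Greenwood place-to-frequency map for the human cochlea.

    x: relative distance from the apex (0.0 = apex, 1.0 = base).
    Returns the characteristic frequency in Hz. A, a, and k are the
    commonly used human constants.
    """
    return A * (10 ** (a * x) - k)

# The apex maps to roughly 20 Hz and the base to roughly 20 kHz:
low = greenwood_frequency(0.0)
high = greenwood_frequency(1.0)
```

Evaluating this function at each electrode's angular insertion position yields the anatomy-based centre frequencies that a tonotopic fitting assigns, in contrast to a manufacturer's default frequency allocation.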
Affiliation(s)
- Gwenaelle Creff
- Department of Otolaryngology-Head and Neck Surgery (HNS), University Hospital, Rennes, France
- MediCIS, LTSI (Image and Signal Processing Laboratory), INSERM, U1099, Rennes, France
- Cassandre Lambert
- Department of Otolaryngology-Head and Neck Surgery (HNS), University Hospital, Rennes, France
- Paul Coudert
- Department of Otolaryngology-Head and Neck Surgery (HNS), University Hospital, Rennes, France
- Benoit Godey
- Department of Otolaryngology-Head and Neck Surgery (HNS), University Hospital, Rennes, France
- MediCIS, LTSI (Image and Signal Processing Laboratory), INSERM, U1099, Rennes, France
- Hearing Aid Academy, Javene, France

11. Althoff J, Gajecki T, Nogueira W. Remixing Preferences for Western Instrumental Classical Music of Bilateral Cochlear Implant Users. Trends Hear 2024; 28:23312165241245219. [PMID: 38613359] [DOI: 10.1177/23312165241245219]
Abstract
For people with profound hearing loss, a cochlear implant (CI) can provide access to sounds that support speech perception. With current technology, most CI users obtain very good speech understanding in quiet listening environments. However, many CI users still struggle when listening to music. Efforts have been made to preprocess music for CI users and improve their music enjoyment. This work investigates potential modifications of instrumental music to make it more accessible for CI users. For this purpose, we used two datasets of varying complexity containing individual tracks of instrumental music. The first dataset, which contained trios, was newly created and synthesized for this study. The second contained orchestral music with a large number of instruments. Bilateral CI users and normal-hearing listeners were asked to remix the multitracks, grouped into melody, bass, accompaniment, and percussion. Remixes could be performed in the amplitude, spatial, and spectral domains. Results showed that CI users preferred tracks panned toward the right side, especially the percussion component. When CI users were grouped into frequent or occasional music listeners, significant differences in remixing preferences were observed in all domains.
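The amplitude and spatial remixing described above can be pictured as per-track gains plus constant-power panning before summing to stereo. This sketch is a generic illustration of those two domains, not the study's remixing interface, and the function and track names are hypothetical:

```python
import numpy as np

def remix_stereo(tracks, gains_db, pans):
    """Mix mono multitracks to stereo.

    tracks:   dict name -> 1-D mono signal (all the same length)
    gains_db: dict name -> gain in dB (amplitude domain)
    pans:     dict name -> pan position, -1.0 (left) to +1.0 (right),
              applied with a constant-power pan law (spatial domain)
    """
    n = len(next(iter(tracks.values())))
    out = np.zeros((n, 2))
    for name, x in tracks.items():
        gain = 10.0 ** (gains_db[name] / 20.0)
        theta = (pans[name] + 1.0) * np.pi / 4.0  # 0 (hard left) .. pi/2 (hard right)
        out[:, 0] += gain * np.cos(theta) * np.asarray(x, dtype=float)
        out[:, 1] += gain * np.sin(theta) * np.asarray(x, dtype=float)
    return out

# e.g. percussion panned fully right, melody boosted and centred:
mix = remix_stereo(
    {"melody": np.ones(4), "percussion": np.ones(4)},
    gains_db={"melody": 6.0, "percussion": 0.0},
    pans={"melody": 0.0, "percussion": 1.0},
)
```

A spectral-domain remix would add a per-track equalization stage before this mixing step.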
Affiliation(s)
- Jonas Althoff
- Department of Otolaryngology, Hannover Medical School, Hannover, Germany
- Cluster of Excellence Hearing4all, Hannover, Germany
- Tom Gajecki
- Department of Otolaryngology, Hannover Medical School, Hannover, Germany
- Cluster of Excellence Hearing4all, Hannover, Germany
- Waldo Nogueira
- Department of Otolaryngology, Hannover Medical School, Hannover, Germany
- Cluster of Excellence Hearing4all, Hannover, Germany
12
Gfeller K, Mallalieu R. Psychosocial and auditory factors that influence successful music-based auditory training in pediatric cochlear implant recipients. Front Hum Neurosci 2023; 17:1308712. [PMID: 38178994 PMCID: PMC10764544 DOI: 10.3389/fnhum.2023.1308712]
Abstract
Introduction Cochlear implants (CIs), which are designed to support spoken communication in persons with severe to profound hearing loss, can provide improved hearing capability through passive exposure. However, auditory training may optimize perception of spectrally complex sounds such as music or speech. Reviews of music-based training for pediatric CI users have reported modest though variable benefits, as well as problems with attrition. It is presumed that more substantial changes may result from longer, more intensive training; however, developing protocols motivating enough to sustain that intensity is challenging. This article examined the experiences of star pediatric CI users whose years of music training have yielded exceptional auditory benefits. Greater understanding of their experiences and attitudes may suggest best practices for music-based training. Research aims included: (a) characterizing the musical behaviors and perceptual learning processes of music-centric pediatric CI users (here, "music-centric" refers to CI users who engage in sustained and successful music making, such as music lessons, ensembles, and focused music listening, over a period of years, and who derive deep satisfaction from those experiences), and (b) identifying psychosocial and auditory factors that motivated persistence in auditory training. Methods We used qualitative and patient-engaged research methodologies, gathering data through questionnaires with open-ended questions. The participants, six music-centric CI users and five parents, described their experiences and attitudes regarding music training and the factors that supported or undermined those experiences. Data were analyzed using reflexive thematic analysis. Results The codes were consolidated into five themes and organized into a Model of Music-Based Learning for Pediatric Cochlear Implant Recipients. Sustained participation in music training was perceived as a dynamic process involving varied musical stimuli and moderated by intrinsic (attitude, perceived behavioral control) and extrinsic (parents, teachers, peers) influences, hearing status, sound access, and background factors. Discussion These themes highlight motivational factors that pediatric CI users and parents considered important to sustained, intensive, and successful music learning throughout childhood and adolescence. These factors should be considered in the development of music-based training for pediatric CI recipients.
Affiliation(s)
- Kate Gfeller
- Department of Otolaryngology—Head and Neck Surgery, School of Music, Department of Communication Sciences and Disorders, The University of Iowa, Iowa City, IA, United States
- Ruth Mallalieu
- Bodleian Libraries, The University of Oxford, Oxford, United Kingdom
13
Zhang L, Wang H, Xun M, Tang H, Wang J, Lv J, Zhu B, Chen Y, Wang D, Hu S, Gao Z, Liu J, Chen ZY, Chen B, Li H, Shu Y. Preclinical evaluation of the efficacy and safety of AAV1-hOTOF in mice and nonhuman primates. Mol Ther Methods Clin Dev 2023; 31:101154. [PMID: 38027066 PMCID: PMC10679773 DOI: 10.1016/j.omtm.2023.101154]
Abstract
Pathogenic mutations in the OTOF gene cause autosomal recessive hearing loss (DFNB9), one of the most common forms of auditory neuropathy. There is no biological treatment for DFNB9. Here, we designed an OTOF gene therapy agent, AAV1-hOTOF, in which dual adeno-associated virus 1 (AAV1) vectors carry the human OTOF coding sequence with expression driven by the hair cell-specific promoter Myo15. To develop a clinical application of AAV1-hOTOF gene therapy, we evaluated its efficacy and safety in animal models using pharmacodynamics, behavior, and histopathology. AAV1-hOTOF inner-ear delivery significantly improved hearing in Otof-/- mice without affecting normal hearing in wild-type mice. AAV1 was predominantly distributed to the cochlea, although it was detected in other organs such as the CNS and the liver, and no obvious toxic effects of AAV1-hOTOF were observed in mice. To further evaluate the safety of Myo15 promoter-driven AAV1 transgene expression, AAV1-GFP was delivered into the inner ear of Macaca fascicularis via the round window membrane. AAV1-GFP transduced 60%-94% of the inner hair cells along the cochlear turns, was detected in isolated organs, and produced no significant adverse effects. These results suggest that AAV1-hOTOF is well tolerated and effective in animals, providing critical support for its clinical translation.
Affiliation(s)
- Longlong Zhang
- ENT Institute and Otorhinolaryngology Department of Eye & ENT Hospital, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200031, China
- Institutes of Biomedical Science, Fudan University, Shanghai 200032, China
- NHC Key Laboratory of Hearing Medicine, Fudan University, Shanghai 200031, China
| | - Hui Wang
- ENT Institute and Otorhinolaryngology Department of Eye & ENT Hospital, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200031, China
- Institutes of Biomedical Science, Fudan University, Shanghai 200032, China
- NHC Key Laboratory of Hearing Medicine, Fudan University, Shanghai 200031, China
| | - Mengzhao Xun
- ENT Institute and Otorhinolaryngology Department of Eye & ENT Hospital, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200031, China
- Institutes of Biomedical Science, Fudan University, Shanghai 200032, China
- NHC Key Laboratory of Hearing Medicine, Fudan University, Shanghai 200031, China
| | - Honghai Tang
- ENT Institute and Otorhinolaryngology Department of Eye & ENT Hospital, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200031, China
- Institutes of Biomedical Science, Fudan University, Shanghai 200032, China
- NHC Key Laboratory of Hearing Medicine, Fudan University, Shanghai 200031, China
| | - Jinghan Wang
- ENT Institute and Otorhinolaryngology Department of Eye & ENT Hospital, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200031, China
- Institutes of Biomedical Science, Fudan University, Shanghai 200032, China
- NHC Key Laboratory of Hearing Medicine, Fudan University, Shanghai 200031, China
| | - Jun Lv
- ENT Institute and Otorhinolaryngology Department of Eye & ENT Hospital, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200031, China
- Institutes of Biomedical Science, Fudan University, Shanghai 200032, China
- NHC Key Laboratory of Hearing Medicine, Fudan University, Shanghai 200031, China
| | - Biyun Zhu
- ENT Institute and Otorhinolaryngology Department of Eye & ENT Hospital, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200031, China
- Institutes of Biomedical Science, Fudan University, Shanghai 200032, China
- NHC Key Laboratory of Hearing Medicine, Fudan University, Shanghai 200031, China
| | - Yuxin Chen
- ENT Institute and Otorhinolaryngology Department of Eye & ENT Hospital, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200031, China
- Institutes of Biomedical Science, Fudan University, Shanghai 200032, China
- NHC Key Laboratory of Hearing Medicine, Fudan University, Shanghai 200031, China
| | - Daqi Wang
- ENT Institute and Otorhinolaryngology Department of Eye & ENT Hospital, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200031, China
- Institutes of Biomedical Science, Fudan University, Shanghai 200032, China
- NHC Key Laboratory of Hearing Medicine, Fudan University, Shanghai 200031, China
| | - Shaowei Hu
- ENT Institute and Otorhinolaryngology Department of Eye & ENT Hospital, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200031, China
- Institutes of Biomedical Science, Fudan University, Shanghai 200032, China
- NHC Key Laboratory of Hearing Medicine, Fudan University, Shanghai 200031, China
| | - Ziwen Gao
- ENT Institute and Otorhinolaryngology Department of Eye & ENT Hospital, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200031, China
- Institutes of Biomedical Science, Fudan University, Shanghai 200032, China
- NHC Key Laboratory of Hearing Medicine, Fudan University, Shanghai 200031, China
| | - Jianping Liu
- ENT Institute and Otorhinolaryngology Department of Eye & ENT Hospital, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200031, China
- Institutes of Biomedical Science, Fudan University, Shanghai 200032, China
- NHC Key Laboratory of Hearing Medicine, Fudan University, Shanghai 200031, China
| | - Zheng-Yi Chen
- Department of Otolaryngology-Head and Neck Surgery, Graduate Program in Speech and Hearing Bioscience and Technology and Program in Neuroscience, Harvard Medical School, Boston, MA 02115, USA
- Eaton-Peabody Laboratory, Massachusetts Eye and Ear, 243 Charles Street, Boston, MA 02114, USA
| | - Bing Chen
- ENT Institute and Otorhinolaryngology Department of Eye & ENT Hospital, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200031, China
- Institutes of Biomedical Science, Fudan University, Shanghai 200032, China
- NHC Key Laboratory of Hearing Medicine, Fudan University, Shanghai 200031, China
| | - Huawei Li
- ENT Institute and Otorhinolaryngology Department of Eye & ENT Hospital, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200031, China
- Institutes of Biomedical Science, Fudan University, Shanghai 200032, China
- NHC Key Laboratory of Hearing Medicine, Fudan University, Shanghai 200031, China
| | - Yilai Shu
- ENT Institute and Otorhinolaryngology Department of Eye & ENT Hospital, State Key Laboratory of Medical Neurobiology and MOE Frontiers Center for Brain Science, Fudan University, Shanghai 200031, China
- Institutes of Biomedical Science, Fudan University, Shanghai 200032, China
- NHC Key Laboratory of Hearing Medicine, Fudan University, Shanghai 200031, China
14
Sendesen İ, Sendesen E, Yücel E. Evaluation of musical emotion perception and language development in children with cochlear implants. Int J Pediatr Otorhinolaryngol 2023; 175:111753. [PMID: 37839291 DOI: 10.1016/j.ijporl.2023.111753]
Abstract
OBJECTIVES While the primary purpose of cochlear implant (CI) fitting is to improve individuals' receptive and expressive skills, musical emotion perception (MEP) is generally ignored. This study assesses the MEP and language skills (LS) of children using CIs. METHODS 26 CI users and 26 matched healthy controls between the ages of 6 and 9 were included in the study. The Test of Language Development (TOLD) was applied to evaluate the LS of the participants, and the Montreal Emotion Identification Test (MEI) was applied to evaluate MEP. RESULTS MEI test scores and all subtests of TOLD were statistically significantly lower in the CI group. There was also a statistically significant, moderate correlation between the listening subtest of TOLD and the MEI test. CONCLUSIONS MEP and language skills are poor in children with CIs. Although language skills are primarily targeted in CI performance, improving MEP should also be included in rehabilitation programs. The relationship between music and the TOLD listening subtest may provide evidence that listening skills can be improved by attending to MEP, which is frequently ignored in rehabilitation programs.
Affiliation(s)
- İrem Sendesen
- Department of Audiology, Gazi University, Ankara, Turkey; Ankara University, Faculty of Medicine, Otolaryngology Department, Audiology, Speech, Balance Disorders Diagnosis and Rehabilitation Unit, Ankara, Turkey.
- Eser Sendesen
- Department of Audiology, Hacettepe University, Ankara, Turkey.
- Esra Yücel
- Department of Audiology, Hacettepe University, Ankara, Turkey.
15
Abdulbaki H, Mo J, Limb CJ, Jiam NT. The Impact of Musical Rehabilitation on Complex Sound Perception in Cochlear Implant Users: A Systematic Review. Otol Neurotol 2023; 44:965-977. [PMID: 37758325 DOI: 10.1097/mao.0000000000004025]
Abstract
OBJECTIVE Musical rehabilitation has been used in clinical and nonclinical contexts to improve postimplantation auditory processing in implanted individuals. This systematic review aimed to evaluate the efficacy of music rehabilitation in controlled experimental and quasi-experimental studies on cochlear implant (CI) user speech and music perception. DATABASES REVIEWED PubMed/MEDLINE, EMBASE, Web of Science, PsycARTICLES, and PsycINFO databases through July 2022. METHODS Controlled experimental trials and prospective studies were included if they compared pretest and posttest data; hearing aid-only users were excluded. Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) guidelines were then used to extract data from 11 included studies with a total of 206 pediatric and adult participants. Interventions included group music therapy, melodic contour identification training, auditory-motor instruction, or structured digital music training. Studies used heterogeneous outcome measures evaluating speech and music perception. Risk of bias was assessed using the National Heart, Lung, and Blood Institute Quality Assessment Tool. RESULTS A total of 735 studies were screened, and 11 met the inclusion criteria. Six trials reported both speech and music outcomes, whereas five reported only music perception outcomes after the intervention relative to control. For music perception outcomes, significant findings included improvements in melodic contour identification (five studies, p < 0.05), timbre recognition (three studies, p < 0.05), and song appraisal (three studies, p < 0.05) in their respective trials. For speech prosody outcomes, only vocal emotion identification demonstrated significant improvements (two studies, p < 0.05). CONCLUSION Music rehabilitation improves performance on multiple measures of music perception, as well as tone-based characteristics of speech (i.e., emotional prosody), suggesting that rehabilitation may facilitate improvements in the discrimination of spectrally complex signals.
Affiliation(s)
- Hasan Abdulbaki
- University of California San Francisco School of Medicine, San Francisco
- Jonathan Mo
- University of California Davis School of Medicine, Sacramento
- Charles J Limb
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, San Francisco, California
- Nicole T Jiam
- Department of Otolaryngology-Head and Neck Surgery, Massachusetts Eye and Ear, Harvard Medical School, Boston, Massachusetts
16
Cychosz M, Xu K, Fu QJ. Effects of spectral smearing on speech understanding and masking release in simulated bilateral cochlear implants. PLoS One 2023; 18:e0287728. [PMID: 37917727 PMCID: PMC10621938 DOI: 10.1371/journal.pone.0287728]
Abstract
Differences in spectro-temporal degradation may explain some variability in cochlear implant users' speech outcomes. The present study employs vocoder simulations on listeners with typical hearing to evaluate how differences in degree of channel interaction across ears affects spatial speech recognition. Speech recognition thresholds and spatial release from masking were measured in 16 normal-hearing subjects listening to simulated bilateral cochlear implants. 16-channel sine-vocoded speech simulated limited, broad, or mixed channel interaction, in dichotic and diotic target-masker conditions, across ears. Thresholds were highest with broad channel interaction in both ears but improved when interaction decreased in one ear and again in both ears. Masking release was apparent across conditions. Results from this simulation study on listeners with typical hearing show that channel interaction may impact speech recognition more than masking release, and may have implications for the effects of channel interaction on cochlear implant users' speech recognition outcomes.
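The channel-interaction manipulation can be illustrated with a minimal sine-carrier vocoder, in which a lower bandpass filter order produces the shallower band slopes associated with broader simulated channel interaction. The parameters below (16 channels, 200-7000 Hz analysis range, 50 Hz envelope cutoff) are plausible assumptions, not the study's exact vocoder:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def sine_vocoder(x, sr, n_ch=16, f_lo=200.0, f_hi=7000.0, order=4):
    """Sine-carrier vocoder. Lower `order` gives shallower band-edge
    slopes, i.e. broader simulated channel interaction."""
    edges = np.geomspace(f_lo, f_hi, n_ch + 1)            # log-spaced bands
    env_sos = butter(2, 50.0, btype="low", fs=sr, output="sos")
    t = np.arange(len(x)) / sr
    out = np.zeros_like(x, dtype=float)
    for i in range(n_ch):
        sos = butter(order, [edges[i], edges[i + 1]],
                     btype="band", fs=sr, output="sos")
        band = sosfilt(sos, x)                             # analysis band
        env = sosfilt(env_sos, np.abs(band))               # temporal envelope
        fc = np.sqrt(edges[i] * edges[i + 1])              # geometric center
        out += np.clip(env, 0.0, None) * np.sin(2 * np.pi * fc * t)
    return out

sr = 16000
t = np.arange(sr) / sr
x = np.sin(2 * np.pi * 500 * t)
limited = sine_vocoder(x, sr, order=4)   # steeper slopes: limited interaction
broad = sine_vocoder(x, sr, order=1)     # shallow slopes: broad interaction
```

With order=1, energy from the 500 Hz tone leaks into neighboring carriers, mimicking current spread across electrode channels.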
Affiliation(s)
- Margaret Cychosz
- Department of Linguistics, University of California, Los Angeles, Los Angeles, CA, United States of America
- Kevin Xu
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States of America
- Qian-Jie Fu
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States of America
17
Yüksel M, Sarlik E, Çiprut A. Emotions and Psychological Mechanisms of Listening to Music in Cochlear Implant Recipients. Ear Hear 2023; 44:1451-1463. [PMID: 37280743 DOI: 10.1097/aud.0000000000001388]
Abstract
OBJECTIVES Music is a multidimensional phenomenon and is classified by its arousal properties, emotional quality, and structural characteristics. Although structural features of music (i.e., pitch, timbre, and tempo) and music emotion recognition in cochlear implant (CI) recipients are popular research topics, music-evoked emotions and the related psychological mechanisms that reflect both the individual and social context of music are largely ignored. Understanding the music-evoked emotions (the "what") and related mechanisms (the "why") can help professionals and CI recipients better comprehend the impact of music on CI recipients' daily lives. Therefore, the purpose of this study is to evaluate these aspects in CI recipients and compare their findings to those of normal-hearing (NH) controls. DESIGN This study included 50 CI recipients with diverse auditory experiences who were prelingually deafened (deafened at or before 6 years of age) and early implanted (N = 21), prelingually deafened and late implanted (implanted at or after 12 years of age; N = 13), or postlingually deafened (N = 16), as well as 50 age-matched NH controls. All participants completed the same survey, which included 28 emotions and 10 mechanisms (brainstem reflex, rhythmic entrainment, evaluative conditioning, contagion, visual imagery, episodic memory, musical expectancy, aesthetic judgment, cognitive appraisal, and lyrics). Data were presented in detail for the CI groups and compared between CI groups and between CI and NH groups. RESULTS The principal component analysis showed five emotion factors, explaining 63.4% of the total variance in the CI group: anxiety and anger, happiness and pride, sadness and pain, sympathy and tenderness, and serenity and satisfaction. Positive emotions such as happiness, tranquility, love, joy, and trust ranked as most often experienced in all groups, whereas negative and complex emotions such as guilt, fear, anger, and anxiety ranked lowest. The CI group ranked lyrics and rhythmic entrainment highest among the emotion mechanisms, and there was a statistically significant group difference in the episodic memory mechanism, on which the prelingually deafened, early implanted group scored lowest. CONCLUSION Our findings indicate that music can evoke similar emotions in CI recipients with diverse auditory experiences as it does in NH individuals. However, prelingually deafened and early implanted individuals lack autobiographical memories associated with music, which affects the feelings evoked by music. In addition, the preference for rhythmic entrainment and lyrics as mechanisms of music-elicited emotions suggests that rehabilitation programs should pay particular attention to these cues.
Affiliation(s)
- Mustafa Yüksel
- Ankara Medipol University School of Health Sciences, Department of Speech and Language Therapy, Ankara, Turkey
- Esra Sarlik
- Marmara University Institute of Health Sciences, Audiology and Speech Disorders Program, Istanbul, Turkey
- Ayça Çiprut
- Marmara University Faculty of Medicine, Department of Audiology, Istanbul, Turkey
18
Khurana L, Harczos T, Moser T, Jablonski L. En route to sound coding strategies for optical cochlear implants. iScience 2023; 26:107725. [PMID: 37720089 PMCID: PMC10502376 DOI: 10.1016/j.isci.2023.107725]
Abstract
Hearing loss is the most common human sensory deficit. Severe-to-complete sensorineural hearing loss is often treated with electrical cochlear implants (eCIs), which bypass dysfunctional or lost hair cells by direct stimulation of the auditory nerve. The wide current spread from each intracochlear electrode array contact activates large sets of tonotopically organized neurons, limiting the spectral selectivity of sound coding. Despite many efforts, an increase in the number of independent eCI stimulation channels seems impossible to achieve. Light, which can be confined in space better than electric current, may help optical cochlear implants (oCIs) overcome these eCI shortcomings. In this review, we present the current state of optogenetic sound encoding. We highlight the development of optical sound coding strategies, an emerging research area in which optical stimulation demands fine-grained, fast, and power-efficient real-time sound processing to control dozens of microscale optical emitters.
Affiliation(s)
- Lakshay Khurana
- Institute for Auditory Neuroscience, University Medical Center Göttingen, Göttingen, Germany
- Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, Göttingen, Germany
- Auditory Neuroscience and Synaptic Nanophysiology Group, Max-Planck-Institute for Multidisciplinary Sciences, Göttingen, Germany
- Junior Research Group “Computational Neuroscience and Neuroengineering”, Göttingen, Germany
- The Doctoral Program “Sensory and Motor Neuroscience”, Göttingen Graduate Center for Neurosciences, Biophysics, and Molecular Biosciences (GGNB), Göttingen, Germany
- InnerEarLab, University Medical Center Göttingen, Göttingen, Germany
- Tamas Harczos
- Institute for Auditory Neuroscience, University Medical Center Göttingen, Göttingen, Germany
- Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, Göttingen, Germany
- Tobias Moser
- Institute for Auditory Neuroscience, University Medical Center Göttingen, Göttingen, Germany
- Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, Göttingen, Germany
- Auditory Neuroscience and Synaptic Nanophysiology Group, Max-Planck-Institute for Multidisciplinary Sciences, Göttingen, Germany
- InnerEarLab, University Medical Center Göttingen, Göttingen, Germany
- Cluster of Excellence “Multiscale Bioimaging: from Molecular Machines to Networks of Excitable Cells” (MBExC), University of Göttingen, Göttingen, Germany
- Lukasz Jablonski
- Institute for Auditory Neuroscience, University Medical Center Göttingen, Göttingen, Germany
- Auditory Neuroscience and Optogenetics Laboratory, German Primate Center, Göttingen, Germany
- Junior Research Group “Computational Neuroscience and Neuroengineering”, Göttingen, Germany
- InnerEarLab, University Medical Center Göttingen, Göttingen, Germany
19
Cartocci G, Inguscio BMS, Giorgi A, Vozzi A, Leone CA, Grassia R, Di Nardo W, Di Cesare T, Fetoni AR, Freni F, Ciodaro F, Galletti F, Albera R, Canale A, Piccioni LO, Babiloni F. Music in noise recognition: An EEG study of listening effort in cochlear implant users and normal hearing controls. PLoS One 2023; 18:e0288461. [PMID: 37561758 PMCID: PMC10414671 DOI: 10.1371/journal.pone.0288461]
Abstract
Despite the plethora of studies investigating listening effort and the amount of research concerning music perception by cochlear implant (CI) users, the influence of background noise on music processing has never been investigated. Given that listening effort is typically assessed with a speech-in-noise recognition task, the aim of the present study was to investigate listening effort during an emotional categorization task on musical pieces with different levels of background noise. Listening effort was investigated, in addition to participants' ratings and performances, using EEG features known to be involved in this phenomenon, namely alpha activity in parietal areas and in the left inferior frontal gyrus (IFG), which includes Broca's area. Results showed that CI users performed worse than normal-hearing (NH) controls in recognizing the emotional content of the stimuli. Furthermore, when the alpha activity during the signal-to-noise ratio (SNR) 5 and SNR 10 conditions was referenced to activity during the Quiet condition (ideally removing the emotional content of the music and isolating the difficulty level due to the SNRs), CI users showed higher levels of parietal alpha activity, and activity in the right-hemisphere homologue of the left IFG (EEG channel F8), than NH controls. Finally, the results offer a novel suggestion of a particular sensitivity of F8 to SNR-related listening effort in music.
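Constructing the SNR 5 and SNR 10 stimuli amounts to scaling background noise so the mixture reaches a target signal-to-noise ratio. A minimal sketch using toy signals (the sinusoid and Gaussian noise stand in for the study's music and background noise, which are assumptions here):

```python
import numpy as np

def mix_at_snr(signal, noise, snr_db):
    """Return signal + noise, with noise rescaled so the mixture
    has the requested signal-to-noise ratio in dB."""
    p_sig = np.mean(signal ** 2)                    # signal power
    p_noise = np.mean(noise ** 2)                   # current noise power
    target_p_noise = p_sig / (10.0 ** (snr_db / 10.0))
    return signal + noise * np.sqrt(target_p_noise / p_noise)

rng = np.random.default_rng(0)
sr = 16000
music = np.sin(2 * np.pi * 440 * np.arange(sr) / sr)  # toy "music" signal
noise = rng.standard_normal(sr)                       # toy background noise
snr5 = mix_at_snr(music, noise, 5.0)
snr10 = mix_at_snr(music, noise, 10.0)
```

Subtracting out the original signal and recomputing the power ratio recovers the requested SNR, which is a quick sanity check for stimulus-generation code.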
Affiliation(s)
- Giulia Cartocci
- Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- BrainSigns ltd, Rome, Italy
- Andrea Giorgi
- Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- BrainSigns ltd, Rome, Italy
- Carlo Antonio Leone
- Department of Otolaryngology Head-Neck Surgery, Monaldi Hospital, Naples, Italy
- Rosa Grassia
- Department of Otolaryngology Head-Neck Surgery, Monaldi Hospital, Naples, Italy
- Walter Di Nardo
- Institute of Otorhinolaryngology, Catholic University of Sacred Heart, Fondazione Policlinico "A. Gemelli" IRCCS, Rome, Italy
- Tiziana Di Cesare
- Institute of Otorhinolaryngology, Catholic University of Sacred Heart, Fondazione Policlinico "A. Gemelli" IRCCS, Rome, Italy
- Anna Rita Fetoni
- Institute of Otorhinolaryngology, Catholic University of Sacred Heart, Fondazione Policlinico "A. Gemelli" IRCCS, Rome, Italy
- Francesco Freni
- Department of Otorhinolaryngology, University of Messina, Messina, Italy
- Francesco Ciodaro
- Department of Otorhinolaryngology, University of Messina, Messina, Italy
- Francesco Galletti
- Department of Otorhinolaryngology, University of Messina, Messina, Italy
- Roberto Albera
- Department of Surgical Sciences, University of Turin, Turin, Italy
- Andrea Canale
- Department of Surgical Sciences, University of Turin, Turin, Italy
- Lucia Oriella Piccioni
- Department of Otolaryngology-Head and Neck Surgery, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Fabio Babiloni
- Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- BrainSigns ltd, Rome, Italy
20
Limb CJ, Mo J, Jiradejvong P, Jiam NT. The Impact of Vocal Boost Manipulations on Musical Sound Quality for Cochlear Implant Users. Laryngoscope 2023; 133:938-947. [PMID: 35906889 DOI: 10.1002/lary.30324]
Abstract
OBJECTIVE To evaluate the impact of vocal boost manipulations on cochlear implant (CI) musical sound quality appraisals. METHODS An anonymous, online study was distributed to 33 CI users. Participants listened to auditory tokens and assessed the musical quality of acoustic stimuli with vocal boosting and attenuation using a validated sound quality rating scale. Four versions of real-world musical stimuli were created: a version with +9 dB vocal boost, a version with -9 dB vocal attenuation, a composite stimulus containing a 1,000 Hz low-pass filter and white noise ("anchor"), and an unaltered version ("hidden reference"). Subjects listened to all four versions and provided ratings on a 100-point scale reflecting the perceived sound quality difference of the music clip relative to the reference excerpt. RESULTS Vocal boost increased musical sound quality ratings relative to the reference clip (11.7; 95% CI, 1.62-21.8, p = 0.016) and vocal attenuation decreased musical sound quality ratings relative to the reference clip (28.5; 95% CI, 18.64-38.44, p < 0.001). When comparing the non-musical training group and musical training group, there was a significant difference in musical sound quality rating scores for the vocal boost condition (21.2; 95% CI: 1.76-40.7, p = 0.028). CONCLUSIONS CI-mediated musical sound quality appraisals are impacted by vocal boost and attenuation. Musically trained CI users reported greater musical sound quality enhancement with vocal boost than CI users with no musical training background. Implementation of front-end vocal boost manipulations in music may improve sound quality and music appreciation among CI users. LEVEL OF EVIDENCE 2 (Individual cohort study) Laryngoscope, 133:938-947, 2023.
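The ±9 dB vocal manipulation and the low-pass-plus-noise anchor are simple gain and filtering operations. A sketch under assumed parameters (the filter order, noise level, and toy sinusoidal stems are illustrative, not the study's actual stimulus chain):

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt

def db_to_lin(db):
    """Convert a dB gain to a linear amplitude factor."""
    return 10.0 ** (db / 20.0)

def vocal_remix(vocals, accompaniment, vocal_gain_db):
    """Boost (+dB) or attenuate (-dB) the vocal stem before summing."""
    return db_to_lin(vocal_gain_db) * vocals + accompaniment

def make_anchor(mix, sr, cutoff=1000.0, noise_db=-20.0, seed=0):
    """Degraded 'anchor' stimulus: 1,000 Hz low-pass plus white noise."""
    sos = butter(4, cutoff, btype="low", fs=sr, output="sos")
    lowpassed = sosfiltfilt(sos, mix)
    noise = db_to_lin(noise_db) * np.random.default_rng(seed).standard_normal(len(mix))
    return lowpassed + noise

sr = 16000
t = np.arange(sr) / sr
vocals = np.sin(2 * np.pi * 220 * t)          # toy vocal stem
accomp = 0.5 * np.sin(2 * np.pi * 330 * t)    # toy accompaniment stem
boosted = vocal_remix(vocals, accomp, +9.0)     # +9 dB vocal boost
attenuated = vocal_remix(vocals, accomp, -9.0)  # -9 dB vocal attenuation
anchor = make_anchor(vocals + accomp, sr)
```

A +9 dB boost multiplies the vocal amplitude by about 2.82, while -9 dB scales it to about 0.355, so the boosted mixture has a visibly larger peak level than the attenuated one.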
Affiliation(s)
- Charles J Limb
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco School of Medicine, San Francisco, California, USA
- Jonathan Mo
- Department of Computer and Information Science, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Patpong Jiradejvong
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco School of Medicine, San Francisco, California, USA
- Nicole T Jiam
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco School of Medicine, San Francisco, California, USA
21
Gauer J, Nagathil A, Lentz B, Völter C, Martin R. A subjective evaluation of different music preprocessing approaches in cochlear implant listeners. J Acoust Soc Am 2023; 153:1307. [PMID: 36859137] [DOI: 10.1121/10.0017249]
Abstract
Cochlear implants (CIs) can partially restore speech perception to relatively high levels in listeners with moderate to profound hearing loss. However, for most CI listeners, the perception and enjoyment of music remains notably poor. Since a number of technical and physiological restrictions of current implant designs cannot be easily overcome, several preprocessing methods for music signals have been proposed recently. They aim to emphasize the leading voice and rhythmic elements and to reduce spectral complexity. In this study, CI listeners evaluated five remixing approaches in comparison to unprocessed signals. To identify potential explanatory factors for the CI preference ratings, different signal quality criteria of the processed signals were additionally assessed by normal-hearing listeners, and further factors were investigated based on instrumental signal-level features. For three preprocessing methods, a significant improvement over the unprocessed reference was found. In particular, two deep neural network-based remix strategies proved to enhance music perception in CI listeners. These strategies provide remixes of the respective harmonic and percussive signal components of the four source stems "vocals," "bass," "drums," and "other accompaniment." Moreover, the results demonstrate that CI listeners prefer an attenuation of sustained components of drum source signals.
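The remixing idea described above, re-weighting separated stems such as "vocals," "bass," "drums," and "other" before summing them back together, can be sketched as follows. This is an illustrative toy, not the authors' DNN-based pipeline, and the preset gains are hypothetical:

```python
def remix_stems(stems, preset):
    """Remix separated source stems with per-stem linear gains.

    stems:  dict mapping stem name -> list of samples (equal lengths)
    preset: dict mapping stem name -> linear gain (missing names default to 1.0)
    """
    names = list(stems)
    n = len(stems[names[0]])
    mix = [0.0] * n
    for name in names:
        g = preset.get(name, 1.0)
        for i, s in enumerate(stems[name]):
            mix[i] += g * s
    return mix

# Hypothetical preset: keep vocals, tone down bass, drums, and accompaniment
preset = {"vocals": 1.0, "bass": 0.8, "drums": 0.6, "other": 0.5}
stems = {
    "vocals": [0.2, -0.2, 0.2, -0.2],
    "bass":   [0.1,  0.1, 0.1,  0.1],
    "drums":  [0.5,  0.0, 0.0,  0.0],
    "other":  [0.1,  0.1, -0.1, -0.1],
}
mix = remix_stems(stems, preset)
```

A preset is just a point in the mixing space the study's listeners rated; attenuating the "drums" stem parallels the reported preference for attenuated sustained drum components.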
Affiliation(s)
- Johannes Gauer
- Institute of Communication Acoustics, Ruhr-Universität Bochum, Bochum, Germany
- Anil Nagathil
- Institute of Communication Acoustics, Ruhr-Universität Bochum, Bochum, Germany
- Benjamin Lentz
- Institute of Communication Acoustics, Ruhr-Universität Bochum, Bochum, Germany
- Christiane Völter
- Department of Otorhinolaryngology, Head and Neck Surgery, St. Elisabeth-Hospital, Ruhr-Universität Bochum, Bochum, Germany
- Rainer Martin
- Institute of Communication Acoustics, Ruhr-Universität Bochum, Bochum, Germany
22
Seeberg AB, Haumann NT, Højlund A, Andersen ASF, Faulkner KF, Brattico E, Vuust P, Petersen B. Adapting to the Sound of Music - Development of Music Discrimination Skills in Recently Implanted CI Users. Trends Hear 2023; 27:23312165221148035. [PMID: 36597692] [PMCID: PMC9830578] [DOI: 10.1177/23312165221148035]
Abstract
Cochlear implants (CIs) are optimized for speech perception but poor in conveying musical sound features such as pitch, melody, and timbre. Here, we investigated the early development of discrimination of musical sound features after cochlear implantation. Nine recently implanted CI users (CIre) were tested shortly after switch-on (T1) and approximately 3 months later (T2), using a musical multifeature mismatch negativity (MMN) paradigm, presenting four deviant features (intensity, pitch, timbre, and rhythm), and a three-alternative forced-choice behavioral test. For reference, groups of experienced CI users (CIex; n = 13) and normally hearing (NH) controls (n = 14) underwent the same tests once. We found significant improvement in CIre's neural discrimination of pitch and timbre as marked by increased MMN amplitudes. This was not reflected in the behavioral results. Behaviorally, CIre scored well above chance level at both time points for all features except intensity, but significantly below NH controls for all features except rhythm. Both CI groups scored significantly below NH in behavioral pitch discrimination. No significant difference was found in MMN amplitude between CIex and NH. The results indicate that development of musical discrimination can be detected neurophysiologically early after switch-on. However, to fully take advantage of the sparse information from the implant, a prolonged adaptation period may be required. Behavioral discrimination accuracy was notably high already shortly after implant switch-on, although well below that of NH listeners. This study provides new insight into the early development of music-discrimination abilities in CI users and may have clinical and therapeutic relevance.
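The MMN measure used in studies like this one is, at its core, a difference wave: the averaged ERP to deviant stimuli minus the averaged ERP to standard stimuli, with amplitude read off in a post-stimulus window. A toy sketch with made-up epoch data; real analyses additionally involve filtering, artifact rejection, and baseline correction, omitted here:

```python
def average_erp(epochs):
    """Average time-locked epochs (lists of equal length) into one ERP."""
    n = len(epochs)
    return [sum(e[i] for e in epochs) / n for i in range(len(epochs[0]))]

def mmn_amplitude(standard_epochs, deviant_epochs, window):
    """Deviant-minus-standard difference wave; return its most negative value
    within the given sample window (the MMN is a negative deflection)."""
    std = average_erp(standard_epochs)
    dev = average_erp(deviant_epochs)
    diff = [d - s for d, s in zip(dev, std)]
    lo, hi = window
    return min(diff[lo:hi])

# Toy epochs: deviants carry an extra negative deflection at samples 3-5
standard = [[0.0] * 8 for _ in range(4)]
deviant = [[0.0, 0.0, 0.0, -1.0, -2.0, -1.0, 0.0, 0.0] for _ in range(4)]
amp = mmn_amplitude(standard, deviant, window=(2, 7))  # -2.0
```

An "increased MMN amplitude," as reported for the recently implanted group, corresponds to this value becoming more negative between test sessions.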
Affiliation(s)
- Alberte B. Seeberg
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Niels T. Haumann
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Andreas Højlund
- Center of Functionally Integrative Neuroscience, Department of Clinical Medicine, Aarhus University, Aarhus, Denmark; Department of Linguistics, Cognitive Science and Semiotics, Aarhus University, Denmark
- Anne S. F. Andersen
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Elvira Brattico
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Peter Vuust
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
- Bjørn Petersen
- Center for Music in the Brain, Department of Clinical Medicine, Aarhus University & The Royal Academy of Music Aarhus/Aalborg, Aarhus, Denmark
23
Dasdar S, Nasresfahani A, Kianfar N, Zarandi MM, Mobedshahi F, Dabiri S, Kouhi A. Perception of timbre in adult cochlear implant users: comparison of Iranian and Western musical instruments. Cochlear Implants Int 2023; 24:27-34. [PMID: 36495227] [DOI: 10.1080/14670100.2022.2137909]
Abstract
OBJECTIVES Cochlear implants (CI) have dramatically improved speech perception for patients with sensorineural hearing impairment. However, listening to music remains a great challenge for them. This study examined the perception and appraisal of Iranian musical instruments compared with similar Western instruments. METHODS Eleven adult CI users and 25 normal hearing (NH) individuals participated in this study. Musical stimuli of three commonly heard instrument pairs were prepared. Participants were asked to identify the instruments and rate their appraisal on a ten-point Likert scale (0 = dislike very much, 10 = like very much). RESULTS The instrument recognition rate was 40.6% among the CI users, and the mean appraisal score was 5.2 ± 2.7. NH listeners had nonsignificantly higher scores on both tasks, with a recognition rate of 50.0% and a mean appraisal score of 6.9 ± 1.5. Iranian instruments were better recognized in both groups. Regarding appraisal, the mean score for both instrument types was almost equal in the NH group, while CI users rated Iranian instruments more highly. CONCLUSION Iranian instruments were both better recognized and more highly appraised in the CI group. Iranian instruments provide suitable musical pieces for CI recipients that can be considered in rehabilitation programs.
Affiliation(s)
- Shayan Dasdar
- Department of Cochlear Implant Center and Otorhinolaryngology, Tehran University of Medical Sciences, Tehran, Iran
- Azam Nasresfahani
- Department of Cochlear Implant Center and Otorhinolaryngology, Tehran University of Medical Sciences, Tehran, Iran
- Nika Kianfar
- Department of Cochlear Implant Center and Otorhinolaryngology, Tehran University of Medical Sciences, Tehran, Iran
- Masoud Motesadi Zarandi
- Department of Cochlear Implant Center and Otorhinolaryngology, Tehran University of Medical Sciences, Tehran, Iran
- Farzad Mobedshahi
- Department of Cochlear Implant Center and Otorhinolaryngology, Tehran University of Medical Sciences, Tehran, Iran
- Sasan Dabiri
- Department of Cochlear Implant Center and Otorhinolaryngology, Tehran University of Medical Sciences, Tehran, Iran
- Ali Kouhi
- Department of Cochlear Implant Center and Otorhinolaryngology, Tehran University of Medical Sciences, Tehran, Iran
24
Lee Y, Jeong SW, Jeong SH. School adjustment of adolescents with sequential bilateral cochlear implants in mainstream school. Int J Pediatr Otorhinolaryngol 2022; 163:111338. [PMID: 36274325] [DOI: 10.1016/j.ijporl.2022.111338]
Abstract
OBJECTIVES Little is known about the school adjustment of adolescents with sequential bilateral cochlear implants (CIs) in mainstream educational settings. This study aims to investigate the school adjustment of adolescents with sequential bilateral CIs in comparison to age-matched adolescents with typical hearing (TH), to explore the relationships between individual variables and school adjustment in the bilateral CI group, and to assess the factors leading to strong school adjustment in the bilateral CI group. METHODS Twenty-five adolescents with sequential bilateral CIs and 30 adolescents with TH, aged 13-19 years, participated in this study. The adolescents completed the school adjustment scale (SAS). RESULTS The two groups were not significantly different on overall SAS scores. However, the TH group scored higher on the SAS than the sequential bilateral CI group with regard to communication skills and relationships with peers. In the bilateral CI group, SAS scores significantly correlated with open-set sentence and receptive vocabulary scores. Receptive vocabulary scores were a significant predictive factor for the level of school adjustment in the bilateral CI group. CONCLUSION Adolescents who received sequential bilateral CIs adapted well to mainstream schools. However, they experienced barriers to communicating and making friends in mainstream schools, and their level of school adjustment was affected by their receptive vocabulary skills.
Affiliation(s)
- Youngmee Lee
- Department of Communication Disorders, Ewha Womans University, 11-1 Daehyun-dong, Seodaemoon-gu, Seoul, 120-750, South Korea.
- Sung-Wook Jeong
- Department of Otolaryngology-Head and Neck Surgery, College of Medicine, Dong-A University, Busan, South Korea.
25
Torppa R, Kuuluvainen S, Lipsanen J. The development of cortical processing of speech differs between children with cochlear implants and normal hearing and changes with parental singing. Front Neurosci 2022; 16:976767. [PMID: 36507354] [PMCID: PMC9731313] [DOI: 10.3389/fnins.2022.976767]
Abstract
Objective The aim of the present study was to investigate the development of speech processing in children with normal hearing (NH) and children with cochlear implants (CIs) using a multifeature event-related potential (ERP) paradigm. Singing is associated with enhanced attention and speech perception; therefore, its connection to ERPs was investigated in the CI group. Methods The paradigm included five change types in a pseudoword: two that are easy to detect with CIs (duration, gap) and three that are difficult to detect (vowel, pitch, intensity). The positive mismatch responses (pMMR), mismatch negativity (MMN), P3a and late differentiating negativity (LDN) responses of preschoolers (below 6 years 9 months) and schoolchildren (above 6 years 9 months) with NH or CIs at two time points (T1, T2) were investigated with Linear Mixed Modeling (LMM). For the CI group, the association between singing at home and ERP development was modeled with LMM. Results Overall, responses elicited by the easy- and difficult-to-detect changes differed between the CI and NH groups. Compared to the NH group, the CI group had smaller MMNs to vowel duration changes and gaps, larger P3a responses to gaps, and larger pMMRs and smaller LDNs to vowel identity changes. Preschoolers had smaller P3a responses and larger LDNs to gaps, and larger pMMRs to vowel identity changes than schoolchildren. In addition, the pMMRs to gaps increased from T1 to T2 in preschoolers. More parental singing in the CI group was associated with increasing pMMR, and less parental singing with decreasing P3a amplitudes, from T1 to T2. Conclusion The multifeature paradigm is suitable for assessing cortical speech processing development in children. In children with CIs, cortical discrimination is often reflected in pMMR and P3a responses, and in MMN and LDN responses in children with NH. Moreover, cortical speech discrimination in children with CIs develops late, and their processing of speech sound changes evolves over time and age, as does that of children with NH. Importantly, multisensory activities such as parental singing can lead to improvement in discrimination of, and attention shifting toward, speech changes in children with CIs. These novel results should be taken into account in future research and rehabilitation.
Affiliation(s)
- Ritva Torppa
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Centre of Excellence in Music, Mind, Body and Brain, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Soila Kuuluvainen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Department of Digital Humanities, Faculty of Arts, University of Helsinki, Helsinki, Finland
- Jari Lipsanen
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
26
Gfeller K, Veltman J, Mandara R, Napoli MB, Smith S, Choi Y, McCormick G, McKenzie T, Nastase A. Technological and Rehabilitative Concerns: Perspectives of Cochlear Implant Recipients Who Are Musicians. Trends Hear 2022; 26:23312165221122605. [PMID: 36203400] [PMCID: PMC9549092] [DOI: 10.1177/23312165221122605]
Abstract
In these perspectives, we share the experiences of eight cochlear implant (CI) recipients who are musicians, and their efforts within and outside of audiological appointments to achieve satisfying music experiences. Their experiences were previously shared in a panel discussion as part of the 3rd Music and Cochlear Implant Symposium hosted at The University of Cambridge, United Kingdom. Following the symposium, the panel members and moderator developed and completed a follow-up questionnaire to facilitate a formal analysis of the following questions: (a) What forms of support for optimizing music exist within clinical CI appointments, including counseling, mapping, assessment, and rehabilitation? (b) What forms of support do CI users who are interested in music desire? (c) What self-initiated approaches can be used to improve music perception, enjoyment, and participation? Using qualitative methodology, the questionnaire data were coded, aggregated into themes, and then into core categories. The primary themes that emerged from the data were (a) limited levels of support for optimizing music outcomes within normal clinical appointments, (b) difficulties in current mapping and assessment in relation to music perception, and (c) limited availability of clinically sponsored training/rehabilitation for music. These CI recipients then recommended clinical protocol changes and described self-initiated rehabilitation. These findings were examined in relation to literature on clinical practices for CI users, auditory rehabilitation, and patient-centered care, emphasizing best practices and barriers to audiological care. The data as related to healthcare trends were conceptualized and developed into a proposed Reciprocal Model for Music Rehabilitation (RMMR).
Affiliation(s)
- Kate Gfeller
- Department of Otolaryngology—Head and Neck Surgery, The University of Iowa, Iowa City, IA, USA
- Kate Gfeller, Department of Otolaryngology—Head and Neck Surgery, The University of Iowa Hospitals and Clinics, 200 Hawkins Drive, Iowa City, IA 52242, USA.
- Joke Veltman
- Behavioural Science Institute, Radboud University, Nijmegen, The Netherlands
- Sarah Smith
- Auditory Implant Service, University of Southampton, Southampton, UK
- Yoon Choi
- Independent Scholars, Brooklyn, NY, USA
- Gaelen McCormick
- Eastman School of Music, University of Rochester, Rochester, NY, USA
27
Image-Guided Cochlear Implant Programming: A Systematic Review and Meta-analysis. Otol Neurotol 2022; 43:e924-e935. [PMID: 35973035] [DOI: 10.1097/mao.0000000000003653]
Abstract
OBJECTIVE To review studies evaluating clinically implemented image-guided cochlear implant programing (IGCIP) and to determine its effect on cochlear implant (CI) performance. DATA SOURCES PubMed, EMBASE, and Google Scholar were searched for English language publications from inception to August 1, 2021. STUDY SELECTION Included studies prospectively compared intraindividual CI performance between an image-guided experimental map and a patient's preferred traditional map. Non-English studies, cadaveric studies, and studies where imaging did not directly inform programming were excluded. DATA EXTRACTION Seven studies were identified for review, and five reported comparable components of audiological testing and follow-up times appropriate for meta-analysis. Demographic, speech, spectral modulation, pitch accuracy, and quality-of-life survey data were collected. Aggregate data were used when individual data were unavailable. DATA SYNTHESIS Audiological test outcomes were evaluated as standardized mean change (95% confidence interval) using random-effects meta-analysis with raw score standardization. Improvements in speech and quality-of-life measures using the IGCIP map demonstrated nominal effect sizes: consonant-nucleus-consonant words, 0.15 (-0.12 to 0.42); AzBio quiet, 0.09 (-0.05 to 0.22); AzBio +10 dB signal-noise ratio, 0.14 (-0.01 to 0.30); Bamford-Kowel-Bench sentence in noise, -0.11 (-0.35 to 0.12); Abbreviated Profile of Hearing Aid Benefit, -0.14 (-0.28 to 0.00); and Speech Spatial and Qualities of Hearing Scale, 0.13 (-0.02 to 0.28). Nevertheless, 79% of patients allowed to keep their IGCIP map opted for continued use after the investigational period. CONCLUSION IGCIP has potential to precisely guide CI programming. Nominal effect sizes for objective outcome measures fail to reflect subjective benefits fully given discordance with the percentage of patients who prefer to maintain their IGCIP map.
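The "standardized mean change with raw score standardization" named above is conventionally computed from the pre/post means, the baseline SD, and the pre-post correlation. A generic sketch using a Becker-style formulation; the exact variance estimator the authors used is not stated in the abstract, and the example numbers are hypothetical:

```python
import math

def standardized_mean_change(m_pre, m_post, sd_pre, n, r):
    """Standardized mean change with raw-score standardization:
    d = (m_post - m_pre) / sd_pre, with an approximate sampling variance
    var(d) ~ 2*(1 - r)/n + d**2/(2*n), where r is the pre-post correlation.
    Returns (d, ci_lower, ci_upper) for a 95% confidence interval."""
    d = (m_post - m_pre) / sd_pre
    var = 2.0 * (1.0 - r) / n + d * d / (2.0 * n)
    half = 1.96 * math.sqrt(var)
    return d, d - half, d + half

# Hypothetical example: +5-point gain on a speech test (baseline SD 20, n = 30, r = 0.7)
d, lo, hi = standardized_mean_change(50.0, 55.0, 20.0, 30, 0.7)
```

Here d = 0.25 with a 95% CI that crosses zero, the same pattern as the nominal effect sizes reported above.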
28
Normative Cochlear Implant Quality of Life (CIQOL)-35 Profile and CIQOL-10 Global Scores for Experienced Cochlear Implant Users from a Multi-Institutional Study. Otol Neurotol 2022; 43:797-802. [PMID: 35878634] [PMCID: PMC9335896] [DOI: 10.1097/mao.0000000000003596]
Abstract
OBJECTIVE Although adult cochlear implant (CI) outcomes have primarily focused on speech recognition scores, the rigorous development of a CI-specific patient-reported outcome measure provides an opportunity for a more comprehensive and ecologically valid approach to measure the real-world functional abilities of adult CI users. Here, we report for the first time normative Cochlear Implant Quality of Life (CIQOL)-35 Profile and global scores and variance for a large, multi-institutional sample of adult CI users. STUDY DESIGN Cross-sectional study design. SETTING CI centers in the United States. PATIENTS Seven hundred five adults with bilateral moderate to profound hearing loss with at least 1 year of CI use. INTERVENTIONS Cochlear implantation. MAIN OUTCOME MEASURES CIQOL-35 Profile and CIQOL-10 Global scores. RESULTS During the development of the CIQOL instruments, 1,000 CI users from all regions of the United States were invited to participate in studies. Of these, 705 (70.5%) completed all portions of the study, and their data are reported here. Mean CIQOL domain scores were highest (indicating better function) for the emotional and social domains and lowest for listening effort. The entertainment and social domains demonstrated the widest distribution of scores and largest standard deviations, indicating greatest variability in function. Overall, there were minimal ceiling and floor effects for all domains. CONCLUSION Normative scores from a large sample of experienced adult CI users are consistent with clinical observations, showing large differences in functional abilities and large variability. Normative CIQOL data for adult CI users have the potential to enhance preoperative discussions with CI candidates, improve post-CI activation monitoring, and establish standards for CI centers.
29
Bissmeyer SRS, Ortiz JR, Gan H, Goldsworthy RL. Computer-based musical interval training program for cochlear implant users and listeners with no known hearing loss. Front Neurosci 2022; 16:903924. [PMID: 35968373] [PMCID: PMC9363605] [DOI: 10.3389/fnins.2022.903924]
Abstract
A musical interval is the difference in pitch between two sounds. The way that musical intervals are used in melodies relative to the tonal center of a key can strongly affect the emotion conveyed by the melody. The present study examines musical interval identification in people with no known hearing loss and in cochlear implant users. Pitch resolution varies widely among cochlear implant users with average resolution an order of magnitude worse than in normal hearing. The present study considers the effect of training on musical interval identification and tests for correlations between low-level psychophysics and higher-level musical abilities. The overarching hypothesis is that cochlear implant users are limited in their ability to identify musical intervals both by low-level access to frequency cues for pitch as well as higher-level mapping of the novel encoding of pitch that implants provide. Participants completed a 2-week, online interval identification training. The benchmark tests considered before and after interval identification training were pure tone detection thresholds, pure tone frequency discrimination, fundamental frequency discrimination, tonal and rhythm comparisons, and interval identification. The results indicate strong correlations between measures of pitch resolution with interval identification; however, only a small effect of training on interval identification was observed for the cochlear implant users. Discussion focuses on improving access to pitch cues for cochlear implant users and on improving auditory training for musical intervals.
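A musical interval, as defined in the opening sentence above, maps directly onto a frequency ratio: in twelve-tone equal temperament the distance in semitones is 12 · log2(f2/f1). A small sketch of the identification target; the interval labels are standard music-theory names, not the study's software:

```python
import math

INTERVAL_NAMES = {
    0: "unison", 1: "minor second", 2: "major second", 3: "minor third",
    4: "major third", 5: "perfect fourth", 6: "tritone", 7: "perfect fifth",
    8: "minor sixth", 9: "major sixth", 10: "minor seventh",
    11: "major seventh", 12: "octave",
}

def interval_semitones(f1, f2):
    """Signed interval between two frequencies in semitones (12-TET)."""
    return 12.0 * math.log2(f2 / f1)

def name_interval(f1, f2):
    """Round to the nearest semitone and name the interval."""
    st = abs(round(interval_semitones(f1, f2)))
    if st > 12:
        st %= 12  # fold compound intervals into one octave
    return INTERVAL_NAMES[st]
```

For example, `name_interval(440.0, 659.26)` (A4 up to E5) yields "perfect fifth"; the pitch-resolution limits discussed above determine how reliably a CI user can hear that 7-semitone ratio at all.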
Affiliation(s)
- Susan Rebekah Subrahmanyam Bissmeyer
- Caruso Department of Otolaryngology, Auditory Research Center, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States
- Jacqueline Rose Ortiz
- Caruso Department of Otolaryngology, Auditory Research Center, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Helena Gan
- Caruso Department of Otolaryngology, Auditory Research Center, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
- Raymond Lee Goldsworthy
- Caruso Department of Otolaryngology, Auditory Research Center, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States
30
Abstract
Cochlear implants have been the most successful neural prosthesis, with one million users globally. Researchers used the source-filter model and speech vocoder to design the modern multi-channel implants, allowing implantees to achieve 70%-80% correct sentence recognition in quiet, on average. Researchers have also used the cochlear implant to help understand basic mechanisms underlying loudness, pitch, and cortical plasticity. While front-end processing advances have improved speech recognition in noise, speech recognition in quiet with a unilateral implant has plateaued since the early 1990s. This lack of progress calls for a redesign of the cochlear stimulating interface and for collaboration with the general neurotechnology community.
Affiliation(s)
- Fan-Gang Zeng
- Departments of Anatomy and Neurobiology, Biomedical Engineering, Cognitive Sciences, Otolaryngology-Head and Neck Surgery and Center for Hearing Research, University of California, 110 Medical Sciences E, Irvine, California 92697, USA
31
Parameter-Specific Morphing Reveals Contributions of Timbre to the Perception of Vocal Emotions in Cochlear Implant Users. Ear Hear 2022; 43:1178-1188. [PMID: 34999594] [PMCID: PMC9197138] [DOI: 10.1097/aud.0000000000001181]
Abstract
Objectives: Research on cochlear implants (CIs) has focused on speech comprehension, with little research on perception of vocal emotions. We compared emotion perception in CI users and normal-hearing (NH) individuals, using parameter-specific voice morphing. Design: Twenty-five CI users and 25 NH individuals (matched for age and gender) performed fearful-angry discriminations on bisyllabic pseudoword stimuli from morph continua across all acoustic parameters (Full), or across selected parameters (F0, Timbre, or Time information), with other parameters set to a noninformative intermediate level. Results: Unsurprisingly, CI users as a group showed lower performance in vocal emotion perception overall. Importantly, while NH individuals used timbre and fundamental frequency (F0) information to equivalent degrees, CI users were far more efficient in using timbre (compared to F0) information for this task. Thus, under the conditions of this task, CIs were inefficient in conveying emotion based on F0 alone. There was enormous variability between CI users, with low performers responding close to guessing level. Echoing previous research, we found that better vocal emotion perception was associated with better quality of life ratings. Conclusions: Some CI users can utilize timbre cues remarkably well when perceiving vocal emotions.
32
Boyer J, Stohl J. MELUDIA – Online music training for cochlear implant users. Cochlear Implants Int 2022; 23:257-269. [DOI: 10.1080/14670100.2022.2069313]
Affiliation(s)
- Johanna Boyer
- MED-EL North American Research Laboratory in the Research Triangle Park, Durham, NC, USA
- Josh Stohl
- MED-EL North American Research Laboratory in the Research Triangle Park, Durham, NC, USA
33
Gauer J, Nagathil A, Eckel K, Belomestny D, Martin R. A versatile deep-neural-network-based music preprocessing and remixing scheme for cochlear implant listeners. J Acoust Soc Am 2022; 151:2975. [PMID: 35649910] [DOI: 10.1121/10.0010371]
Abstract
While cochlear implants (CIs) have proven to restore speech perception to a remarkable extent, access to music remains difficult for most CI users. In this work, a methodology for the design of deep learning-based signal preprocessing strategies that simplify music signals and emphasize rhythmic information is proposed. It combines harmonic/percussive source separation and deep neural network (DNN) based source separation in a versatile source mixture model. Two different neural network architectures were assessed with regard to their applicability for this task. The method was evaluated with instrumental measures and in two listening experiments for both network architectures and six mixing presets. Normal-hearing subjects rated the signal quality of the processed signals compared to the original both with and without a vocoder which provides an approximation of the auditory perception in CI listeners. Four combinations of remix models and DNNs have been selected for an evaluation with vocoded signals and were all rated significantly better in comparison to the unprocessed signal. In particular, the two best-performing remix networks are promising candidates for further evaluation in CI listeners.
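The harmonic/percussive source separation this scheme builds on is classically done by median filtering a magnitude spectrogram in two directions: along time (harmonic components persist across frames) and along frequency (percussive components are broadband within a frame). The sketch below shows that classical median-filtering variant on a toy spectrogram; it is not the authors' DNN model:

```python
from statistics import median

def median_filter_1d(xs, k):
    """Sliding median of odd length k with edge replication."""
    h = k // 2
    padded = [xs[0]] * h + list(xs) + [xs[-1]] * h
    return [median(padded[i:i + k]) for i in range(len(xs))]

def hpss_masks(spec, k=3):
    """Harmonic/percussive masks for a magnitude spectrogram
    (rows = frequency bins, cols = time frames): median filtering along time
    gives a harmonic estimate, along frequency a percussive estimate.
    Returns binary masks (harmonic_mask, percussive_mask); ties go harmonic."""
    n_bins, n_frames = len(spec), len(spec[0])
    harm = [median_filter_1d(row, k) for row in spec]                      # along time
    perc_cols = [median_filter_1d([spec[b][t] for b in range(n_bins)], k)
                 for t in range(n_frames)]                                 # along frequency
    perc = [[perc_cols[t][b] for t in range(n_frames)] for b in range(n_bins)]
    h_mask = [[1.0 if harm[b][t] >= perc[b][t] else 0.0 for t in range(n_frames)]
              for b in range(n_bins)]
    p_mask = [[1.0 - h_mask[b][t] for t in range(n_frames)] for b in range(n_bins)]
    return h_mask, p_mask

# Toy spectrogram: a sustained tone (horizontal line) plus a click (vertical line)
tone_plus_click = [[1.0 if b == 2 or t == 2 else 0.0 for t in range(5)] for b in range(5)]
h, p = hpss_masks(tone_plus_click)
```

Applying the masks to the spectrogram and resynthesizing yields the harmonic and percussive components that a remix scheme can then re-weight.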
Affiliation(s)
- Johannes Gauer
- Institute of Communication Acoustics, Ruhr-Universität Bochum, Bochum, Germany
- Anil Nagathil
- Institute of Communication Acoustics, Ruhr-Universität Bochum, Bochum, Germany
- Kai Eckel
- Institute of Communication Acoustics, Ruhr-Universität Bochum, Bochum, Germany
- Denis Belomestny
- Faculty of Mathematics, Universität Duisburg-Essen, Essen, Germany
- Rainer Martin
- Institute of Communication Acoustics, Ruhr-Universität Bochum, Bochum, Germany
Collapse
|
34
|
Conversations in Cochlear Implantation: The Inner Ear Therapy of Today. Biomolecules 2022; 12:649. [PMID: 35625577 PMCID: PMC9138212 DOI: 10.3390/biom12050649]
Abstract
As biomolecular approaches for hearing restoration in profound sensorineural hearing loss evolve, they will be applied in conjunction with or instead of cochlear implants. An understanding of the current state-of-the-art of this technology, including its advantages, disadvantages, and its potential for delivering and interacting with biomolecular hearing restoration approaches, is helpful for designing modern hearing-restoration strategies. Cochlear implants (CI) have evolved over the last four decades to restore hearing more effectively, in more people, with diverse indications. This evolution has been driven by advances in technology, surgery, and healthcare delivery. Here, we offer a practical treatise on the state of cochlear implantation directed towards developing the next generation of inner ear therapeutics. We aim to capture and distill conversations ongoing in CI research, development, and clinical management. In this review, we discuss successes and physiological constraints of hearing with an implant, common surgical approaches and electrode arrays, new indications and outcome measures for implantation, and barriers to CI utilization. Additionally, we compare cochlear implantation with biomolecular and pharmacological approaches, consider strategies to combine these approaches, and identify unmet medical needs with cochlear implants. The strengths and weaknesses of modern implantation highlighted here can mark opportunities for continued progress or improvement in the design and delivery of the next generation of inner ear therapeutics.
35
Inguscio BMS, Mancini P, Greco A, Nicastri M, Giallini I, Leone CA, Grassia R, Di Nardo W, Di Cesare T, Rossi F, Canale A, Albera A, Giorgi A, Malerba P, Babiloni F, Cartocci G. ‘Musical effort’ and ‘musical pleasantness’: a pilot study on the neurophysiological correlates of classical music listening in normal hearing adults and unilateral cochlear implant users. Hear Balance Commun 2022. [DOI: 10.1080/21695717.2022.2079325]
Affiliation(s)
- Patrizia Mancini
- Department of Sense Organs, Sapienza University of Rome, Rome, Italy
- Antonio Greco
- Department of Sense Organs, Sapienza University of Rome, Rome, Italy
- Maria Nicastri
- Department of Sense Organs, Sapienza University of Rome, Rome, Italy
- Ilaria Giallini
- Department of Sense Organs, Sapienza University of Rome, Rome, Italy
- Carlo Antonio Leone
- Department of Otolaryngology/Head and Neck Surgery, Monaldi Hospital, Naples, Italy
- Rosa Grassia
- Department of Otolaryngology/Head and Neck Surgery, Monaldi Hospital, Naples, Italy
- Walter Di Nardo
- Otorhinolaryngology and Physiology, Catholic University of Rome, Rome, Italy
- Tiziana Di Cesare
- Otorhinolaryngology and Physiology, Catholic University of Rome, Rome, Italy
- Federica Rossi
- Otorhinolaryngology and Physiology, Catholic University of Rome, Rome, Italy
- Andrea Canale
- Division of Otorhinolaryngology, Department of Surgical Sciences, University of Turin, Italy
- Andrea Albera
- Division of Otorhinolaryngology, Department of Surgical Sciences, University of Turin, Italy
- Fabio Babiloni
- BrainSigns Srl, Rome, Italy
- Department of Computer Science, Hangzhou Dianzi University, Hangzhou, China
- Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- Giulia Cartocci
- BrainSigns Srl, Rome, Italy
- Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
36
Moore BCJ. Listening to Music Through Hearing Aids: Potential Lessons for Cochlear Implants. Trends Hear 2022; 26:23312165211072969. [PMID: 35179052 PMCID: PMC8859663 DOI: 10.1177/23312165211072969]
Abstract
Some of the problems experienced by users of hearing aids (HAs) when listening to music are relevant to cochlear implants (CIs). One problem is related to the high peak levels (up to 120 dB SPL) that occur in live music. Some HAs and CIs overload at such levels, because of the limited dynamic range of the microphones and analogue-to-digital converters (ADCs), leading to perceived distortion. Potential solutions are to use 24-bit ADCs or to include an adjustable gain between the microphones and the ADCs. A related problem is how to squeeze the wide dynamic range of music into the limited dynamic range of the user, which can be only 6-20 dB for CI users. In HAs, this is usually done via multi-channel amplitude compression (automatic gain control, AGC). In CIs, a single-channel front-end AGC is applied to the broadband input signal or a control signal derived from a running average of the broadband signal level is used to control the mapping of the channel envelope magnitude to an electrical signal. This introduces several problems: (1) an intense narrowband signal (e.g. a strong bass sound) reduces the level for all frequency components, making some parts of the music harder to hear; (2) the AGC introduces cross-modulation effects that can make a steady sound (e.g. sustained strings or a sung note) appear to fluctuate in level. Potential solutions are to use several frequency channels to create slowly varying gain-control signals and to use slow-acting (or dual time-constant) AGC rather than fast-acting AGC.
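The cross-modulation effect described above is easy to reproduce in simulation: a single-channel AGC whose gain tracks the broadband level ducks a steady tone whenever an intense bass enters. A minimal sketch, with all parameters (sample rate, target level, time constant) invented for illustration rather than taken from any HA or CI product:

```python
import numpy as np

fs = 16000
t = np.arange(int(0.5 * fs)) / fs
steady = 0.1 * np.sin(2 * np.pi * 1000 * t)          # sustained "string" note
burst_on = np.sin(2 * np.pi * 4 * t) > 0             # bass bursts at 4 Hz
bass = 0.5 * burst_on * np.sin(2 * np.pi * 100 * t)  # intense narrowband bass

def broadband_agc(x, fs, target=0.1, tau=0.005):
    """Fast single-channel AGC: one gain for the whole spectrum, driven by
    a running RMS estimate of the broadband input."""
    alpha = np.exp(-1.0 / (tau * fs))
    gain = np.empty_like(x)
    power = 0.0
    for i, s in enumerate(x):
        power = alpha * power + (1 - alpha) * s * s   # running power estimate
        gain[i] = min(1.0, target / (np.sqrt(power) + 1e-9))
    return x * gain, gain

y, gain = broadband_agc(steady + bass, fs)
# The gain drops only while the bass is present, so the steady 1 kHz tone
# fluctuates at the bass burst rate: the cross-modulation artifact.
```

A multi-channel or slow-acting AGC, as suggested in the abstract, would avoid this by deriving gains per frequency band or by smoothing the control signal over a much longer window.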
Affiliation(s)
- Brian C J Moore
- Cambridge Hearing Group, Department of Psychology, University of Cambridge, Cambridge, England
37
Mo J, Jiam NT, Deroche MLD, Jiradejvong P, Limb CJ. Effect of Frequency Response Manipulations on Musical Sound Quality for Cochlear Implant Users. Trends Hear 2022; 26:23312165221120017. [PMID: 35983700 PMCID: PMC9393940 DOI: 10.1177/23312165221120017]
Abstract
Cochlear implant (CI) users commonly report degraded musical sound quality. To improve CI-mediated music perception and enjoyment, we must understand factors that affect sound quality. In the present study, we utilize frequency response manipulation (FRM), a process that adjusts the energies of frequency bands within an audio signal, to determine its impact on CI-user sound quality assessments of musical stimuli. Thirty-three adult CI users completed an online study and listened to FRM-altered clips derived from the top songs in Billboard magazine. Participants assessed sound quality using the MUltiple Stimulus with Hidden Reference and Anchor for CI users (CI-MUSHRA) rating scale. FRM affected sound quality ratings (SQR). Specifically, increasing the gain for low and mid-range frequencies led to higher quality ratings than reducing them. In contrast, manipulating the gain for high frequencies (those above 2 kHz) had no impact. Participants with musical training were more sensitive to FRM than non-musically trained participants and demonstrated a preference for gain increases over reductions. These findings suggest that, even among CI users, past musical training attunes listeners to subtleties in musical appraisal, even though their hearing is now mediated electrically and bears little resemblance to their musical experience prior to implantation. Increased gain below 2 kHz may lead to higher sound quality than equivalent reductions, perhaps because it offers greater access to lyrics in songs or because it provides more salient beat sensations.
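The kind of frequency response manipulation studied here can be sketched as a two-band gain applied in the frequency domain. The 2 kHz split follows the abstract; the test signal, sample rate, and gain values are invented for illustration:

```python
import numpy as np

def frm(x, fs, low_gain_db, high_gain_db, split_hz=2000.0):
    """Boost or cut everything below vs. above `split_hz`: a crude two-band
    version of frequency response manipulation."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(len(x), 1.0 / fs)
    gains = np.where(f < split_hz,
                     10 ** (low_gain_db / 20.0),
                     10 ** (high_gain_db / 20.0))
    return np.fft.irfft(X * gains, n=len(x))

fs = 16000
t = np.arange(fs) / fs                                  # 1 s of audio
x = np.sin(2 * np.pi * 440 * t) + np.sin(2 * np.pi * 4000 * t)
y = frm(x, fs, low_gain_db=6.0, high_gain_db=0.0)       # +6 dB below 2 kHz
```

After this manipulation the 440 Hz component is roughly twice the amplitude of the 4 kHz component (6 dB is a factor of about 2 in amplitude), which is the direction of change the study found listeners preferred.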
Affiliation(s)
- Jonathan Mo
- Davis School of Medicine, University of California, Sacramento, CA, USA
- Nicole T Jiam
- Department of Otolaryngology-Head and Neck Surgery, San Francisco School of Medicine, University of California, San Francisco, CA, USA
- Patpong Jiradejvong
- Department of Otolaryngology-Head and Neck Surgery, San Francisco School of Medicine, University of California, San Francisco, CA, USA
- Charles J Limb
- Department of Otolaryngology-Head and Neck Surgery, San Francisco School of Medicine, University of California, San Francisco, CA, USA
38
Kozma-Spytek L, Vogler C. Factors Affecting the Accessibility of Voice Telephony for People with Hearing Loss: Audio Encoding, Network Impairments, Video and Environmental Noise. ACM Trans Access Comput 2021. [DOI: 10.1145/3479160]
Abstract
This paper describes four studies with a total of 114 individuals with hearing loss and 12 hearing controls that investigate the impact of audio quality parameters on voice telecommunications. These studies were first informed by a survey of 439 individuals with hearing loss on their voice telecommunications experiences. While voice telephony was very important, with high usage of wireless mobile phones, respondents reported relatively low satisfaction with their hearing devices' performance for telephone listening, noting that improved telephone audio quality was a significant need. The studies cover three categories of audio quality parameters: (1) narrowband (NB) versus wideband (WB) audio; (2) encoding audio at varying bit rates, from typical rates used in today's mobile networks to the highest quality supported by these audio codecs; and (3) absence of packet loss to worst-case packet loss in both mobile and VoIP networks. Additionally, NB versus WB audio was tested in auditory-only and audiovisual presentation modes and in quiet and noisy environments. With WB audio in a quiet environment, individuals with hearing loss exhibited better speech recognition, expended less perceived mental effort, and rated speech quality higher than with NB audio. WB audio provided a greater benefit when listening alone than when the visual channel also was available. The noisy environment significantly degraded performance for both presentation modes, but particularly for listening alone. Bit rate affected speech recognition for NB audio, and speech quality ratings for both NB and WB audio. Packet loss affected speech recognition, mental effort, and speech quality ratings alike. WB versus NB audio also affected hearing individuals, especially under packet loss. These results are discussed in terms of the practical steps they suggest for the implementation of telecommunications systems and related technical standards and policy considerations to improve the accessibility of voice telephony for people with hearing loss.
39
Camarena A, Manchala G, Papadopoulos J, O’Connell SR, Goldsworthy RL. Pleasantness Ratings of Musical Dyads in Cochlear Implant Users. Brain Sci 2021; 12:33. [PMID: 35053777 PMCID: PMC8773901 DOI: 10.3390/brainsci12010033]
Abstract
Cochlear implants have been used to restore hearing to more than half a million people around the world. The restored hearing allows most recipients to understand spoken speech without relying on visual cues. While speech comprehension in quiet is generally high for recipients, many complain about the sound of music. The present study examines consonance and dissonance perception in nine cochlear implant users and eight people with no known hearing loss. Participants completed web-based assessments to characterize low-level psychophysical sensitivities to modulation and pitch, as well as higher-level measures of musical pleasantness and speech comprehension in background noise. The underlying hypothesis is that sensitivity to modulation and pitch, in addition to higher levels of musical sophistication, relates to higher-level measures of music and speech perception. This hypothesis held true, with strong correlations observed between measures of modulation and pitch and measures of consonance ratings and speech recognition. Additionally, the cochlear implant users who were the most sensitive to modulation and pitch, and who had higher musical sophistication scores, had pleasantness ratings similar to those of listeners with no known hearing loss. The implication is that better coding of, and focused rehabilitation for, modulation and pitch sensitivity will broadly improve perception of music and speech for cochlear implant users.
Affiliation(s)
- Andres Camarena
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
- Grace Manchala
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
- Julianne Papadopoulos
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
- Thornton School of Music, University of Southern California, Los Angeles, CA 90089, USA
- Samantha R. O’Connell
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
- Raymond L. Goldsworthy
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA 90033, USA
40
Ziatabar Ahmadi Z, Mahmoudian S, Ashayeri H. P-MMR and LDN beside MMN as Speech-evoked Neural Markers in Children with Cochlear Implants: A Review. Dev Neuropsychol 2021; 47:1-16. [PMID: 34927493 DOI: 10.1080/87565641.2021.2004601]
Abstract
This review mainly explores less-reported neural markers evoked by speech contrasts in children with cochlear implants (CI). Databases and electronic journals were searched with the keywords "mismatch responses" AND "positive mismatch response" (p-MMR) AND "late discriminative negativity" (LDN). The p-MMR likely serves as a measure of brain immaturity in CI children, while the developmental trajectory of the LDN remains unexplained in older CI children. In CI children, there is a developmental p-MMR-MMN-LDN sequence in response to speech stimuli. Although these neural responses track developmental changes in CI groups, the cutoff age for the disappearance of the p-MMR and LDN remains uncertain.
Affiliation(s)
- Zohreh Ziatabar Ahmadi
- Department of Speech Therapy, School of Rehabilitation, Babol University of Medical Sciences, Babol, Iran
- Saied Mahmoudian
- ENT and Head & Neck Research Center and Department, The Five Senses Health Institute, Iran University of Medical Sciences, Tehran, Iran; Department of Otolaryngology, Medical University of Hannover (MHH), Hannover, Germany
- Hassan Ashayeri
- Department of Basic Sciences in Rehabilitation, School of Rehabilitation Sciences, Iran University of Medical Sciences, Tehran, Iran
41
Abstract
OBJECTIVES Variations in loudness are a fundamental component of the music listening experience. Cochlear implant (CI) processing, including amplitude compression, and a degraded auditory system may further degrade these loudness cues and decrease the enjoyment of music listening. This study aimed to identify optimal CI sound processor compression settings to improve music sound quality for CI users. DESIGN Fourteen adult MED-EL CI recipients participated in the study (Experiment No. 1: n = 17 ears; Experiment No. 2: n = 11 ears). A software application using a modified comparison category rating (CCR) test method allowed participants to compare and rate the sound quality of various CI compression settings while listening to 25 real-world music clips. The two compression settings studied were (1) Maplaw, which informs audibility and compression of soft-level sounds, and (2) automatic gain control (AGC), which applies compression to loud sounds. For each experiment, one compression setting (Maplaw or AGC) was held at the default, while the other was varied according to the values available in the clinical CI programming software. Experiment No. 1 compared Maplaw settings of 500, 1000 (default), and 2000. Experiment No. 2 compared AGC settings of 2.5:1, 3:1 (default), and 3.5:1. RESULTS In Experiment No. 1, the group preferred a higher Maplaw setting of 2000 over the default Maplaw setting of 1000 (p = 0.003) for music listening. There was no significant difference in music sound quality between the Maplaw setting of 500 and the default setting (p = 0.278). In Experiment No. 2, a main effect of AGC setting was found; however, no significant difference in sound quality ratings was found for pairwise comparisons between the experimental settings and the default setting (2.5:1 versus 3:1 at p = 0.546; 3.5:1 versus 3:1 at p = 0.059). CONCLUSIONS CI users reported improvements in music sound quality with higher than default Maplaw or AGC settings. Thus, participants preferred slightly higher compression for music listening, a result with clinical implications for improving music perception in CI users.
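The AGC ratios compared in Experiment No. 2 describe a static input-output curve above the compression knee. A one-function sketch of what such a ratio means (the knee level and input levels below are invented for illustration, not MED-EL's actual map parameters):

```python
def compressed_level(level_db, knee_db=65.0, ratio=3.0):
    """Static AGC input-output curve: below the knee the system is linear;
    above it, every 1 dB of input yields only 1/ratio dB of output."""
    if level_db <= knee_db:
        return level_db
    return knee_db + (level_db - knee_db) / ratio

# An 85 dB passage under the three ratios from the experiment:
out_25 = compressed_level(85.0, ratio=2.5)  # 65 + 20/2.5 = 73.0 dB
out_30 = compressed_level(85.0, ratio=3.0)  # 65 + 20/3.0 ~ 71.7 dB
out_35 = compressed_level(85.0, ratio=3.5)  # 65 + 20/3.5 ~ 70.7 dB
```

Raising the ratio from 2.5:1 to 3.5:1 thus squeezes loud passages by a further ~2 dB, which is the direction of change the participants tended to prefer for music.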
42
Kranti Bhavana, Sangam, Shamshad, Chandan Kumar. An evaluation of music perception, appreciation, and overall music enjoyment in prelingual paediatric cochlear implant users utilizing simplified techniques: An Indian study. Int J Pediatr Otorhinolaryngol 2021; 150:110898. [PMID: 34450545 DOI: 10.1016/j.ijporl.2021.110898]
Abstract
INTRODUCTION This study aimed to evaluate the speech abilities, music habits, and ability to perceive and enjoy music in prelingual paediatric cochlear implantees aged 18-84 months. Testing paediatric CI recipients for their music habits is challenging; this study offers some unique yet simplified tools to test musical parameters in paediatric CI recipients. MATERIALS AND METHODS Twenty-seven paediatric CI recipients who had received at least one year of auditory verbal therapy post-implantation were selected. They were tested for their speech abilities using the CAP (Category of Auditory Performance) and SIR (Speech Intelligibility Rating) scores. Music habits (Musicality Rating Scale/MRS), music perception (pitch, timbre, melody), and music enjoyment (Subjective Assessment of Music Enjoyment/SAME) were assessed using various tools. All these parameters were compared with age- and sex-matched controls who had normal hearing. RESULTS Simple pitch discrimination, timbre recognition, and melody identification were observed in 29.60%, 37.03%, and 37.03% of implantees, respectively, compared to 88.88%, 81.48%, and 88.88% in normal-hearing children. The mean scores of CAP, SIR, and MRS in cochlear implant users who perceived pitch, timbre, and melody differed significantly from those who did not. The mean SAME score of the normal-hearing group [4.37 ± 0.74] differed significantly from that of the paediatric cochlear implant user group [2.59 ± 1.47] (p < .001). CONCLUSION This study offers some novel, simplified tools to assess music habits in paediatric cochlear implantees. These can be utilized in low-resource settings and can be helpful for rehabilitationists training these children.
Affiliation(s)
- Kranti Bhavana
- Department of Otorhinolaryngology, All India Institute of Medical Sciences, Patna, India
- Sangam
- All India Institute of Medical Sciences, Patna, India
- Shamshad
- Department of Community and Family Medicine, All India Institute of Medical Sciences, Patna, India
- Chandan Kumar
- Clinical Director, Speech and Hearing Care Pvt. Ltd., India
43
Wagner L, Altindal R, Plontke SK, Rahne T. Pure tone discrimination with cochlear implants and filter-band spread. Sci Rep 2021; 11:20236. [PMID: 34642437 PMCID: PMC8511217 DOI: 10.1038/s41598-021-99799-4]
Abstract
For many cochlear implant (CI) users, frequency discrimination is still challenging. We studied the effect of frequency differences relative to the electrode frequency bands on pure tone discrimination. A single-center, prospective, controlled, psychoacoustic exploratory study was conducted in a tertiary university referral center. Thirty-four patients with Cochlear Ltd. and MED-EL CIs and 19 age-matched normal-hearing control subjects were included. Two sinusoidal tones were presented with varying frequency differences. The reference tone frequency was chosen according to the center frequency of basal or apical electrodes. Discrimination abilities were psychophysically measured in a three-interval, two-alternative, forced-choice procedure (3I-2AFC) for various CI electrodes. Hit rates were measured, particularly with respect to discrimination abilities at the corner frequency of the electrode frequency bands. The mean rate of correct decisions concerning pitch difference was about 60% for CI users and about 90% for the normal-hearing control group. In CI users, the difference limen was two semitones, while normal-hearing participants detected a difference of one semitone. No influence of the corner frequency of the CI electrodes was found. In CI users, pure tone discrimination thus seems to be independent of tone position relative to the corner frequency of the electrode frequency band. Differences of two semitones can be distinguished within one electrode.
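The semitone spacings reported above correspond to frequency ratios of 2^(1/12) per semitone on the equal-tempered scale. A worked example around a 250 Hz reference (an illustrative apical-range value, not a frequency from the study's maps):

```python
def semitones_above(f0_hz, n):
    """Frequency n semitones above f0 on the equal-tempered scale."""
    return f0_hz * 2.0 ** (n / 12.0)

f0 = 250.0
one_st = semitones_above(f0, 1)   # ~264.9 Hz: the normal-hearing difference limen
two_st = semitones_above(f0, 2)   # ~280.6 Hz: the CI users' difference limen
```

So at this reference the CI users' two-semitone limen corresponds to a frequency difference of roughly 30 Hz, versus roughly 15 Hz for normal-hearing listeners.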
Affiliation(s)
- Luise Wagner
- Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Halle (Saale), Martin-Luther-University Halle-Wittenberg, Halle (Saale), Germany; Universitätsklinikum Halle (Saale), HNO-Klinik, Ernst-Grube-Str. 40, 06120 Halle (Saale), Germany
- Reyhan Altindal
- Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Halle (Saale), Martin-Luther-University Halle-Wittenberg, Halle (Saale), Germany
- Stefan K Plontke
- Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Halle (Saale), Martin-Luther-University Halle-Wittenberg, Halle (Saale), Germany
- Torsten Rahne
- Department of Otorhinolaryngology, Head and Neck Surgery, University Hospital Halle (Saale), Martin-Luther-University Halle-Wittenberg, Halle (Saale), Germany
44
Wang J, Liu J, Lai K, Zhang Q, Zheng Y, Wang S, Liang M. Mirror Mechanism Behind Visual-Auditory Interaction: Evidence From Event-Related Potentials in Children With Cochlear Implants. Front Neurosci 2021; 15:692520. [PMID: 34504413 PMCID: PMC8421565 DOI: 10.3389/fnins.2021.692520]
Abstract
The mechanism underlying visually induced auditory interaction is still under discussion. Here, we provide evidence that the mirror mechanism underlies visual–auditory interactions. In this study, visual stimuli were divided into two major groups: mirror stimuli that were able to activate mirror neurons and non-mirror stimuli that were not able to activate mirror neurons. The two groups were further divided into six subgroups as follows: visual speech-related mirror stimuli, visual speech-irrelevant mirror stimuli, and non-mirror stimuli with four different luminance levels. Participants were 25 children with cochlear implants (CIs) who underwent an event-related potential (ERP) and speech recognition task. The main results were as follows: (1) there were significant differences in P1, N1, and P2 ERPs between mirror stimuli and non-mirror stimuli; (2) these ERP differences between mirror and non-mirror stimuli were partly driven by Brodmann areas 41 and 42 in the superior temporal gyrus; (3) ERP component differences between visual speech-related mirror and non-mirror stimuli were partly driven by Brodmann area 39 (visual speech area), which was not observed when comparing the visual speech-irrelevant stimulus and non-mirror groups; and (4) ERPs evoked by visual speech-related mirror stimuli had more components correlated with speech recognition than ERPs evoked by non-mirror stimuli, while ERPs evoked by speech-irrelevant mirror stimuli were not significantly different from those induced by the non-mirror stimuli. These results indicate the following: (1) mirror and non-mirror stimuli differ in their associated neural activation; (2) the visual–auditory interaction possibly led to ERP differences, as Brodmann areas 41 and 42 constitute the primary auditory cortex; (3) mirror neurons could be responsible for the ERP differences, considering that Brodmann area 39 is associated with processing information about speech-related mirror stimuli; and (4) ERPs evoked by visual speech-related mirror stimuli could better reflect speech recognition ability. These results support the hypothesis that a mirror mechanism underlies visual–auditory interactions.
Affiliation(s)
- Junbo Wang
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Guangzhou, China
- Jiahao Liu
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Guangzhou, China
- Kaiyin Lai
- South China Normal University, Guangzhou, China
- Qi Zhang
- School of Foreign Languages, Shenzhen University, Shenzhen, China
- Yiqing Zheng
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Guangzhou, China
- Maojin Liang
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Guangzhou, China
45
Fuller C, Free R, Maat B, Başkent D. Self-reported music perception is related to quality of life and self-reported hearing abilities in cochlear implant users. Cochlear Implants Int 2021; 23:1-10. [PMID: 34470590 DOI: 10.1080/14670100.2021.1948716]
Abstract
OBJECTIVES To investigate the relationship between self-reported music perception and appreciation and (1) quality of life (QoL) and (2) self-assessed hearing ability in 98 post-lingually deafened cochlear implant (CI) users with a wide age range. METHODS Participants completed three questionnaires: (1) the Dutch Musical Background Questionnaire (DMBQ), which measures music listening habits, the quality of the sound of music, and the self-assessed perception of elements of music; (2) the Nijmegen Cochlear Implant Questionnaire (NCIQ), which measures health-related QoL; and (3) the Speech, Spatial and Qualities (SSQ) of hearing scale, which measures self-assessed hearing ability. Additionally, speech perception was behaviorally measured with a phoneme-in-word identification task. RESULTS A decline in music listening habits and a low rating of the quality of music after implantation were reported in the DMBQ. A significant relationship was found between the music measures and the NCIQ and SSQ; no significant relationships were observed between the DMBQ and speech perception scores. CONCLUSIONS The findings suggest some relationship between CI users' self-reported music perception ability and QoL and self-reported hearing ability. While the causal relationship was not evaluated, the findings may imply that music training programs and/or device improvements that improve music perception may also improve QoL and hearing ability.
Affiliation(s)
- Christina Fuller
- Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands; Treant Zorggroep, Emmen, Netherlands
- Rolien Free
- Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands
- Bert Maat
- Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands
46
Lee Y. Benefit of Bilateral Cochlear Implantation on Phonological Processing Skills in Deaf Children. Otol Neurotol 2021; 42:e1001-e1007. [PMID: 34398108 DOI: 10.1097/mao.0000000000003136]
Abstract
HYPOTHESIS Children with bilateral cochlear implants (CIs) would have better phonological processing skills than children with unilateral CIs because those with bilateral CIs have better speech perception abilities in noisy environments and higher levels of central auditory system development than those with unilateral CIs. BACKGROUND Previous studies have focused on the performance of children with bilateral CIs on standardized clinical assessments. However, these tests are not sufficiently sensitive to explain better speech and language outcomes in children with bilateral CIs than children with unilateral CIs. Thus, this study focused on phonological processing skills at more central levels of analysis that reflect the operation of cognitive processes. METHOD Twenty children with bilateral CIs and 20 children with unilateral CIs, aged 4 to 6 years, participated in this study. The children completed the experience-dependent tasks and phonological processing tasks. The experience-dependent tasks involved the monosyllabic word, articulation, and receptive vocabulary tests. The phonological processing tasks involved the phonological awareness, phonological memory, and rapid automatic naming tasks. Task performance was compared between the unilateral and bilateral CI groups. RESULTS Children with unilateral CIs performed similarly to children with bilateral CIs on all three experience-dependent tasks. However, children with bilateral CIs significantly outperformed children with unilateral CIs on all three phonological processing tasks. Among the phonological processing tasks, the rapid automatic naming task scores differentiated children with unilateral CIs from children with bilateral CIs. CONCLUSIONS Bilateral cochlear implantation may positively impact the phonological processing skills of deaf children.
Affiliation(s)
- Youngmee Lee
- Department of Communication Disorders, Ewha Womans University, Seoul, Korea
47
Lehmann A, Limb CJ, Marozeau J. Editorial: Music and Cochlear Implants: Recent Developments and Continued Challenges. Front Neurosci 2021; 15:736772. [PMID: 34456682 PMCID: PMC8387628 DOI: 10.3389/fnins.2021.736772]
Affiliation(s)
- Alexandre Lehmann
- BRAMS-CRBLM, McGill University Faculty of Medicine, Montreal, QC, Canada
- Charles J Limb
- School of Medicine, University of California, San Francisco, San Francisco, CA, United States
48
Goldsworthy RL, Camarena A, Bissmeyer SRS. Pitch perception is more robust to interference and better resolved when provided by pulse rate than by modulation frequency of cochlear implant stimulation. Hear Res 2021; 409:108319. [PMID: 34340020 DOI: 10.1016/j.heares.2021.108319]
Abstract
Cochlear implants are medical devices that have been used to restore hearing to more than half a million people worldwide. Most recipients achieve high levels of speech comprehension through these devices, but speech comprehension in background noise and music appreciation in general are markedly poor compared to normal hearing. A key aspect of hearing that is notably diminished in cochlear implant outcomes is the sense of pitch provided by these devices. Pitch perception is an important factor affecting speech comprehension in background noise and is critical for music perception. The present article summarizes two experiments that examine the robustness and resolution of pitch perception as provided by cochlear implant stimulation timing. The driving hypothesis is that pitch conveyed by stimulation timing cues is more robust and better resolved when provided by variable pulse rates than by modulation frequency of constant-rate stimulation. Experiment 1 examines the robustness for hearing a large, one-octave, pitch difference in the presence of interfering electrical stimulation. With robustness to interference characterized for an otherwise easily discernible pitch difference, Experiment 2 examines the resolution of discrimination thresholds in the presence of interference as conveyed by modulation frequency or by pulse rate. These experiments test for an advantage of stimulation with precise temporal cues. The results indicate that pitch provided by pulse rate is both more robust to interference and is better resolved compared to when provided by modulation frequency. These results should inform the development of new sound processing strategies for cochlear implants designed to encode fundamental frequency of sounds into precise temporal stimulation.
Affiliation(s)
- Raymond L Goldsworthy
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States.
- Andres Camarena
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States; Neuroscience Graduate Program, University of Southern California, Los Angeles, CA, United States
- Susan R S Bissmeyer
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States; Department of Biomedical Engineering, Viterbi School of Engineering, University of Southern California, Los Angeles, CA, United States
49
Three-Dimensional Modeling and Measurement of the Human Cochlear Hook Region: Considerations for Tonotopic Mapping. Otol Neurotol 2021; 42:e658-e665. [PMID: 34111048 DOI: 10.1097/mao.0000000000003065]
Abstract
HYPOTHESIS Measuring the length of the basilar membrane (BM) in the cochlear hook region will result in improved accuracy of cochlear duct length (CDL) measurements. BACKGROUND Cochlear implant pitch mapping is generally performed in a patient-independent approach, which has been shown to result in place-pitch mismatches. In order to customize cochlear implant pitch maps, accurate CDL measurements must be obtained. CDL measurements generally begin at the center of the round window (RW) and ignore the basal-most portion of the BM in the hook region. Measuring the size and morphology of the BM in the hook region can improve CDL measurements and our understanding of cochlear tonotopy. METHODS Ten cadaveric human cochleae underwent synchrotron radiation phase-contrast imaging. The length of the BM through the hook region and the CDL were measured. Two different CDL measurements were obtained for each sample, with starting points at the center of the RW (CDLRW) and the basal-most tip of the BM (CDLHR). Regression analysis was performed to relate CDLRW to CDLHR. A three-dimensional polynomial model was determined to describe the average BM hook region morphology. RESULTS The mean CDLRW value was 33.03 ± 1.62 mm, and the mean CDLHR value was 34.68 ± 1.72 mm. The following relationship was determined between CDLRW and CDLHR: CDLHR = 1.06(CDLRW) - 0.26 (R2 = 0.99). CONCLUSION The length and morphology of the hook region were determined. Current measurements underestimate CDL in the hook region and can be corrected using the results herein.
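The reported regression can be applied directly to round-window-referenced measurements; a minimal sketch, assuming only the coefficients given in the abstract (the function name is illustrative):

```python
def cdl_hook_corrected(cdl_rw_mm: float) -> float:
    """Estimate CDL measured from the basal tip of the basilar membrane
    (CDLHR, in mm) from the conventional round-window-referenced
    measurement (CDLRW, in mm), using the reported regression
    CDLHR = 1.06(CDLRW) - 0.26 (R2 = 0.99)."""
    return 1.06 * cdl_rw_mm - 0.26

# The mean CDLRW of 33.03 mm maps to about 34.75 mm, close to the
# reported mean CDLHR of 34.68 mm.
print(round(cdl_hook_corrected(33.03), 2))
```

This only corrects the starting point of the measurement; it does not model the three-dimensional hook morphology described in the study.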
50
Perception of Child-Directed Versus Adult-Directed Emotional Speech in Pediatric Cochlear Implant Users. Ear Hear 2021; 41:1372-1382. [PMID: 32149924 DOI: 10.1097/aud.0000000000000862]
Abstract
OBJECTIVES Cochlear implants (CIs) are remarkable in allowing individuals with severe to profound hearing loss to perceive speech. Despite these gains in speech understanding, however, CI users often struggle to perceive elements such as vocal emotion and prosody, as CIs are unable to transmit the spectro-temporal detail needed to decode affective cues. This issue becomes particularly important for children with CIs, but little is known about their emotional development. In a previous study, pediatric CI users showed deficits in voice emotion recognition with child-directed stimuli featuring exaggerated prosody. However, the large intersubject variability and differential developmental trajectory known in this population incited us to question the extent to which exaggerated prosody would facilitate performance in this task. Thus, the authors revisited the question with both adult-directed and child-directed stimuli. DESIGN Vocal emotion recognition was measured using both child-directed (CDS) and adult-directed (ADS) speech conditions. Pediatric CI users, aged 7 to 19 years, with no cognitive or visual impairments and who communicated orally with English as the primary language participated in the experiment (n = 27). Stimuli comprised 12 sentences selected from the HINT database. The sentences were spoken by male and female talkers in a CDS or ADS manner, in each of the five target emotions (happy, sad, neutral, scared, and angry). The chosen sentences were semantically emotion-neutral. Percent correct emotion recognition scores were analyzed for each participant in each condition (CDS vs. ADS). Children also completed cognitive tests of nonverbal IQ and receptive vocabulary, while parents completed questionnaires on CI and hearing history. It was predicted that the reduced prosodic variation found in the ADS condition would result in lower vocal emotion recognition scores compared with the CDS condition. Moreover, it was hypothesized that cognitive factors, perceptual sensitivity to complex pitch changes, and elements of each child's hearing history may serve as predictors of performance on vocal emotion recognition. RESULTS Consistent with our hypothesis, pediatric CI users scored higher on CDS than on ADS speech stimuli, suggesting that speaking with exaggerated prosody, akin to "motherese," may be a viable way to convey emotional content. Significant talker effects were also observed, in that higher scores were found for the female talker in both conditions. Multiple regression analysis showed that nonverbal IQ was a significant predictor of CDS emotion recognition scores, while years of CI use was a significant predictor of ADS scores. Confusion matrix analyses revealed a dependence of results on specific emotions: for the CDS condition's female talker, participants had high sensitivity (d' scores) to the happy sentences and low sensitivity to the neutral sentences, while for the ADS condition, low sensitivity was found for the scared sentences. CONCLUSIONS In general, participants had higher vocal emotion recognition in the CDS condition, which had more variability in pitch and intensity, and thus more exaggerated prosody, than the ADS condition. Results suggest that pediatric CI users struggle with vocal emotion perception in general, particularly for adult-directed speech. The authors believe these results have broad implications for understanding how CI users perceive emotions, both from an auditory communication standpoint and from a socio-developmental perspective.
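The d' scores mentioned above are the standard signal-detection sensitivity index, computed per emotion from a confusion matrix as the difference between the z-transformed hit and false-alarm rates. A minimal sketch (the rates shown are illustrative, not the study's data):

```python
from statistics import NormalDist

def d_prime(hit_rate: float, false_alarm_rate: float) -> float:
    """Sensitivity index d' = z(hit rate) - z(false-alarm rate),
    where z is the inverse of the standard normal CDF."""
    z = NormalDist().inv_cdf
    return z(hit_rate) - z(false_alarm_rate)

# A listener who labels 90% of "happy" trials correctly but also calls
# 20% of other-emotion trials "happy" has d' of about 2.12.
print(round(d_prime(0.90, 0.20), 2))
```

In practice, hit and false-alarm rates of exactly 0 or 1 are adjusted (e.g. by a small correction) before the z-transform, since the inverse normal CDF is undefined at those extremes.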