1. Camarena A, Goldsworthy RL. Characterizing the relationship between modulation sensitivity and pitch resolution in cochlear implant users. Hear Res 2024; 448:109026. [PMID: 38776706] [DOI: 10.1016/j.heares.2024.109026]
Abstract
Cochlear implants are medical devices that have restored hearing to approximately one million people around the world. Outcomes are impressive, and most recipients attain excellent speech comprehension in quiet without relying on lip-reading cues, but pitch resolution is poor compared to normal hearing. Amplitude modulation of electrical stimulation is a primary cue for pitch perception in cochlear implant users. The experiments described in this article focus on the relationship between sensitivity to amplitude modulations and pitch resolution based on changes in the frequency of amplitude modulations. In the first experiment, modulation sensitivity and pitch resolution were measured in adults with no known hearing loss and in cochlear implant users, with sounds presented to and processed by their clinical devices. Stimuli were amplitude-modulated sinusoids and amplitude-modulated narrow-band noises. Modulation detection and modulation frequency discrimination were measured for modulation frequencies centered on 110, 220, and 440 Hz. Pitch resolution based on changes in modulation frequency was measured for modulation depths of 25 %, 50 %, and 100 %, and for a half-wave rectified modulator. Results revealed a strong linear relationship between modulation sensitivity and pitch resolution for cochlear implant users and peers with no known hearing loss. In the second experiment, cochlear implant users took part in analogous procedures for modulation sensitivity and pitch resolution, but with clinical sound processing bypassed using single-electrode stimulation. Results indicated that modulation sensitivity and pitch resolution were better conveyed by single-electrode stimulation than by clinical processors. Results at 440 Hz were worse overall, and also not well conveyed by clinical sound processing, so it remains unclear whether the 300 Hz perceptual limit described in the literature is a technological or biological limitation. These results highlight modulation depth and sensitivity as critical factors for pitch resolution in cochlear implant users and characterize a relationship that should inform the design of modulation enhancement algorithms for cochlear implants.
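To illustrate the stimuli described above, here is a minimal numpy sketch of an amplitude-modulated sinusoid with a selectable modulation depth or a half-wave rectified modulator. The carrier frequency, duration, and sample rate are assumptions for illustration; the abstract does not state them.

```python
import numpy as np

def am_tone(mod_hz, depth=1.0, half_wave=False,
            dur=0.4, carrier_hz=1000.0, fs=44100):
    """Amplitude-modulated sinusoid of the kind described in the abstract.

    mod_hz: modulation frequency (e.g. 110, 220, or 440 Hz).
    depth: modulation depth (e.g. 0.25, 0.5, 1.0).
    half_wave: if True, use a half-wave rectified sinusoidal modulator
    instead of the standard depth-scaled AM envelope.
    """
    t = np.arange(int(dur * fs)) / fs
    mod = np.sin(2 * np.pi * mod_hz * t)
    if half_wave:
        env = np.maximum(mod, 0.0)       # half-wave rectified modulator
    else:
        env = 1.0 + depth * mod          # standard AM envelope
    return env * np.sin(2 * np.pi * carrier_hz * t)

# e.g. a 110 Hz modulation at 50 % depth
x = am_tone(110, depth=0.5)
```

Sweeping `mod_hz` around a base frequency while holding `depth` fixed gives the modulation-frequency discrimination stimuli; varying `depth` toward zero gives the modulation detection stimuli.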
Affiliation(s)
- Andres Camarena
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States of America
- Raymond L Goldsworthy
- Auditory Research Center, Caruso Department of Otolaryngology, Keck School of Medicine, University of Southern California, Los Angeles, CA, United States of America.
2. Schulz KV, Gauer J, Martin R, Völter C. [Influence of overtones and undertones on melody recognition with a cochlear implant with SSD]. Laryngorhinootologie 2024; 103:279-288. [PMID: 37748501] [DOI: 10.1055/a-2123-4315]
Abstract
Many cochlear implant (CI) users have difficulty recognising pitches and melodies because pitch transmission is blurred and shifted. This study investigates whether postlingually deafened adult CI users recognize melodies better when overtones are removed or undertones are added. Fifteen unilaterally postlingually deafened CI users (single-sided deafness, SSD) aged 22 to 73 years (mean 52, SD 11.6), with CI hearing experience between 3 and 75 months (mean 33, SD 21.0) and various MED-EL devices, were included. Three short piano melodies were presented first to the normal-hearing ear and then, in modified overtone or undertone variants and in the original variant, to the CI ear. Participants had to identify each variant as one of the three original melodies. In addition, musical experience and ability were assessed with the Munich Music Questionnaire and the MiniPROMS music tests. The CI users showed the best melody recognition in the fundamental frequency variant. For melody recognition, the overtone variant with the third overtone was as good as the original variant with all overtones (p=1). However, the undertone variant with the first undertone was recognised significantly worse than the fundamental version (p=0.032). Furthermore, there was no correlation between musical experience or musical ability and the number of melodies recognised (p>0.1). Since a reduction of overtones did not worsen melody recognition, overtone reduction should be considered in future music processing programs for the CI. This could reduce the energy consumption of the CI.
Affiliation(s)
- Kira Viviane Schulz
- Universitätsklinik für Hals-Nasen-Ohrenheilkunde und Kopf- und Halschirurgie der Ruhr-Universität Bochum, Sankt Elisabeth Hospital, Ruhr-Universität Bochum, Bochum, Deutschland
- Johannes Gauer
- Fakultät für Elektrotechnik und Informationstechnik, Institut für Kommunikationsakustik, Ruhr-Universität Bochum, Bochum, Deutschland
- Rainer Martin
- Fakultät für Elektrotechnik und Informationstechnik, Institut für Kommunikationsakustik, Ruhr-Universität Bochum, Bochum, Deutschland
- Christiane Völter
- Universitätsklinik für Hals-Nasen-Ohrenheilkunde und Kopf- und Halschirurgie der Ruhr-Universität Bochum, Sankt Elisabeth Hospital, Ruhr-Universität Bochum, Bochum, Deutschland
3. Kim EY, Seol HY. Comparison of Speech Perception Performance According to Prosody Change Between People With Normal Hearing and Cochlear Implant Users. J Audiol Otol 2024; 28:119-125. [PMID: 38052522] [PMCID: PMC11065548] [DOI: 10.7874/jao.2023.00234]
Abstract
BACKGROUND AND OBJECTIVES Cochlear implants (CIs) are well known to improve audibility and speech recognition in individuals with hearing loss, but some individuals still struggle with many aspects of communication, such as prosody. This study explores how prosodic elements are perceived by those with normal hearing (NH) and CIs. SUBJECTS AND METHODS Thirteen individuals with NH and thirteen CI users participated in this study and completed speech perception, speech prosody perception, speech prosody production, pitch difference discrimination, and melodic contour perception testing. RESULTS NH listeners performed significantly better than CI users on speech perception, speech prosody perception (except for words with neutral meaning and a negative prosody change, and when words were repeated twice), pitch difference discrimination, and melodic contour perception testing. No statistically significant difference between the two groups was observed for speech prosody production. CONCLUSIONS Compared to NH listeners, CI users had limited ability to recognize prosodic elements. The study findings highlight the need for assessment tools and signal processing algorithms for CIs that specifically target prosodic elements in clinical settings.
Affiliation(s)
- Eun Yeon Kim
- Department of Speech Language Pathology, Graduate School of Interdisciplinary Therapy, Myongji University, Seoul, Korea
- Hye Yoon Seol
- Department of Communication Disorders, Ewha Womans University, Seoul, Korea
4. Limb CJ, Mo J, Jiradejvong P, Jiam NT. The Impact of Vocal Boost Manipulations on Musical Sound Quality for Cochlear Implant Users. Laryngoscope 2023; 133:938-947. [PMID: 35906889] [DOI: 10.1002/lary.30324]
Abstract
OBJECTIVE To evaluate the impact of vocal boost manipulations on cochlear implant (CI) musical sound quality appraisals. METHODS An anonymous, online study was distributed to 33 CI users. Participants listened to auditory tokens and assessed the musical quality of acoustic stimuli with vocal boosting and attenuation using a validated sound quality rating scale. Four versions of real-world musical stimuli were created: a version with +9 dB vocal boost, a version with -9 dB vocal attenuation, a composite stimulus containing a 1,000 Hz low-pass filter and white noise ("anchor"), and an unaltered version ("hidden reference"). Subjects listened to all four versions and provided ratings based on a 100-point scale that reflected the perceived sound quality difference of the music clip relative to the reference excerpt. RESULTS Vocal boost increased musical sound quality ratings relative to the reference clip (11.7; 95% CI, 1.62-21.8, p = 0.016) and vocal attenuation decreased musical sound quality ratings relative to the reference clip (28.5; 95% CI, 18.64-38.44, p < 0.001). When comparing the non-musical training group and musical training group, there was a significant difference in musical sound quality rating scores for the vocal boost condition (21.2; 95% CI: 1.76-40.7, p = 0.028). CONCLUSIONS CI-mediated musical sound quality appraisals are impacted by vocal boost and attenuation. Musically trained CI users reported greater musical sound quality enhancement with a vocal boost than CI users with no musical training background. Implementation of front-end vocal boost manipulations in music may improve sound quality and music appreciation among CI users. LEVEL OF EVIDENCE 2 (Individual cohort study) Laryngoscope, 133:938-947, 2023.
Affiliation(s)
- Charles J Limb
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco School of Medicine, San Francisco, California, USA
- Jonathan Mo
- Department of Computer and Information Science, University of Pennsylvania, Philadelphia, Pennsylvania, USA
- Patpong Jiradejvong
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco School of Medicine, San Francisco, California, USA
- Nicole T Jiam
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco School of Medicine, San Francisco, California, USA
5. Musical Mistuning Perception and Appraisal in Cochlear Implant Recipients. Otol Neurotol 2023; 44:e281-e286. [PMID: 36922018] [DOI: 10.1097/mao.0000000000003860]
Abstract
OBJECTIVE Music is a crucial art form that can evoke emotions, and the harmonious presence of the human voice in music is an impactful part of this process. As a result, vocals have had significant effects on contemporary music. The mechanisms by which cochlear implant (CI) recipients perceive different aspects of music are clear; however, how well they perceive vocal tuning within music is not well known. Hence, this study evaluated the mistuning perception of CI recipients and compared their performance with that of normal-hearing (NH) listeners. STUDY DESIGN, SETTING, AND PATIENTS A total of 16 CI users (7 cisgender men, 9 cisgender women) and 16 sex-matched NH controls with an average age of 30.2 (±10.9; range, 19-53) years and 23.5 (±6.1; range, 20-37) years, respectively, were enrolled in this study. We evaluated mistuning ability using the mistuning perception test (MPT) and assessed self-perceived music perception and engagement using the music-related quality-of-life questionnaire. Test performance was measured and reported on the item-response theory metric with a z score ranging from -4 to +4. RESULTS A significant difference in MPT scores was found between NH and CI recipients, whereas a significant correlation was noted between the music-related quality-of-life questionnaire frequency subscale and MPT scores. No significant correlations were found between MPT performance and age, CI age, or duration of CI use. CONCLUSIONS This study revealed that musical mistuning perception is a limitation for CI recipients, similar to previously evaluated aspects of music perception. Hence, it is important to consider this aspect in the assessment of music perception, enjoyment, and music-based auditory interventions in CI recipients, as vocals are paramount in music perception and recreation. The MPT is a convenient and accessible tool for mistuning assessment in CI and hearing-aid users.
6. An overview of factors affecting bimodal and electric-acoustic stimulation (EAS) speech understanding outcomes. Hear Res 2023; 431:108736. [PMID: 36931019] [DOI: 10.1016/j.heares.2023.108736]
Abstract
Improvements in device technology, surgical technique, and patient outcomes have resulted in a broadening of cochlear implantation criteria to consider those with increasing levels of useful low-to-mid frequency residual acoustic hearing. Residual acoustic hearing allows for the addition of a hearing aid (HA) to complement the cochlear implant (CI) and has demonstrated enhanced listening outcomes. However, wide inter-subject outcome variability exists, and thus identification of contributing factors would be of clinical interest and may aid with pre-operative patient counselling. The optimal fitting procedure and frequency assignments for the two hearing devices used in combination to enhance listening outcomes also remain unclear. Understanding how acoustic and electric speech information is fundamentally combined and utilised by the listener may allow for the optimisation of device fittings and frequency allocations to provide the best bimodal and electric-acoustic stimulation (EAS) patient outcomes. This article will provide an overview of contributing factors to bimodal and EAS listening outcomes, explore areas of contention, and discuss common study limitations.
7. Yoon YS, Jaisinghani P, Goldsworthy R. Effect of Realistic Test Conditions on Perception of Speech, Music, and Binaural Cues in Normal-Hearing Listeners. Am J Audiol 2023; 32:170-181. [PMID: 36580493] [PMCID: PMC10166190] [DOI: 10.1044/2022_aja-22-00143]
Abstract
PURPOSE The purpose of this study was to determine the feasibility of online testing in a quiet room for three auditory perception experiments in normal-hearing listeners: speech, music, and binaural cues. METHOD In Experiment 1, sentence perception was measured using fixed signal-to-noise ratios (SNRs: +10 dB, 0 dB, and -10 dB) and using adaptive speech reception threshold (SRT) procedures. The correct scores were compared between quiet room and soundproof booth listening environments. Experiment 2 was designed to compare melodic contour identification between the two listening environments. Melodic contour identification was assessed with 1, 2, and 4 semitone spacings. In Experiment 3, interaural level differences (ILDs) and interaural time differences (ITDs) were measured as a function of carrier frequency. For both measures, two modulated tones (400-ms duration and 100-Hz modulation rate) were sequentially presented through headphones to both ears, and subjects were asked to indicate whether the sound moved to the left or right ear. The measured ITDs and ILDs were then compared between the two listening environments. RESULTS There were no significant differences in any outcome measures (SNR- and SRT-based speech perception, melodic contour identification, and ITD/ILD) between the two listening environments. CONCLUSIONS These results suggest that normal-hearing listeners may not require a controlled listening environment in any of the three auditory assessments. As comparable data can be obtained via the online testing tool, using the online auditory experiments is recommended.
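A minimal sketch of the binaural stimuli described above: a 100 Hz modulated tone with an ITD (implemented here as a whole-waveform sample delay) and an ILD applied to one ear. The carrier frequency and the exact ITD implementation are assumptions, not details from the abstract.

```python
import numpy as np

def binaural_pair(carrier_hz, itd_s=0.0, ild_db=0.0,
                  dur=0.4, mod_hz=100.0, fs=44100):
    """Left/right signals for a modulated tone with an ITD and ILD.

    Positive itd_s delays the left ear and positive ild_db attenuates it,
    so both cues lateralize the image toward the right ear.
    """
    t = np.arange(int(dur * fs)) / fs
    tone = (1.0 + np.sin(2 * np.pi * mod_hz * t)) * np.sin(2 * np.pi * carrier_hz * t)
    shift = int(round(itd_s * fs))           # ITD as an integer sample shift
    left = np.roll(tone, shift)              # note: np.roll wraps at the edges
    left = left * 10.0 ** (-ild_db / 20.0)   # ILD as a level difference
    return left, tone

left, right = binaural_pair(500.0, itd_s=0.0005, ild_db=6.0)
```

In a real trial, the listener would compare this pair against a diotic (zero-ITD, zero-ILD) interval and report the direction of lateralization.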
Affiliation(s)
- Yang-Soo Yoon
- Department of Communication Sciences and Disorders, Baylor University, Waco, TX
- Raymond Goldsworthy
- Department of Otolaryngology – Head and Neck Surgery, Keck School of Medicine, University of Southern California, Los Angeles
8. Tahmasebi S, Segovia-Martinez M, Nogueira W. Optimization of Sound Coding Strategies to Make Singing Music More Accessible for Cochlear Implant Users. Trends Hear 2023; 27:23312165221148022. [PMID: 36628453] [PMCID: PMC9837293] [DOI: 10.1177/23312165221148022]
Abstract
Cochlear implants (CIs) are implantable medical devices that can partially restore hearing to people suffering from profound sensorineural hearing loss. While these devices provide good speech understanding in quiet, many CI users face difficulties when listening to music. Reasons include poor spatial specificity of electric stimulation, limited transmission of spectral and temporal fine structure of acoustic signals, and restrictions in the dynamic range that can be conveyed via electric stimulation of the auditory nerve. The coding strategies currently used in CIs are typically designed for speech rather than music. This work investigates the optimization of CI coding strategies to make singing music more accessible to CI users. The aim is to reduce the spectral complexity of music by selecting fewer bands for stimulation, attenuating the background instruments by strengthening a noise reduction algorithm, and optimizing the electric dynamic range through a back-end compressor. The optimizations were evaluated through both objective and perceptual measures of speech understanding and melody identification of singing voice with and without background instruments, as well as music appreciation questionnaires. Consistent with the objective measures, results gathered from the perceptual evaluations indicated that reducing the number of selected bands and optimizing the electric dynamic range significantly improved speech understanding in music. Moreover, results obtained from questionnaires show that the new music back-end compressor significantly improved music enjoyment. These results have potential as a new CI program for improved singing music perception.
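The abstract mentions a back-end compressor that optimizes the electric dynamic range but does not give its transfer function; a generic static compressor is sketched below, with the threshold and ratio values as purely illustrative assumptions.

```python
import numpy as np

def backend_compress(env, threshold=0.3, ratio=4.0):
    """Static back-end compression of channel envelope values.

    env: envelope values normalized to [0, 1] before mapping to the
    electric dynamic range. Values above `threshold` grow at 1/ratio,
    squeezing loud passages into the limited electric range.
    """
    env = np.asarray(env, dtype=float)
    over = np.maximum(env - threshold, 0.0)   # amount above the knee
    return np.minimum(env, threshold) + over / ratio

compressed = backend_compress([0.1, 0.3, 0.7, 1.0])
```

With these settings an input of 1.0 maps to 0.3 + 0.7/4 = 0.475, so the portion of the envelope above the knee is reduced to a quarter of its original span.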
Affiliation(s)
- Sina Tahmasebi
- Department of Otolaryngology, Hannover Medical School, Hannover, Germany
- Cluster of Excellence Hearing4all, Hannover, Germany
- Waldo Nogueira
- Department of Otolaryngology, Hannover Medical School, Hannover, Germany
- Cluster of Excellence Hearing4all, Hannover, Germany
9. Lee JH, Shim H, Gantz B, Choi I. Strength of Attentional Modulation on Cortical Auditory Evoked Responses Correlates with Speech-in-Noise Performance in Bimodal Cochlear Implant Users. Trends Hear 2022; 26:23312165221141143. [PMID: 36464791] [PMCID: PMC9726851] [DOI: 10.1177/23312165221141143]
Abstract
Auditory selective attention is a crucial top-down cognitive mechanism for understanding speech in noise. Cochlear implant (CI) users display great variability in speech-in-noise performance that is not easily explained by peripheral auditory profile or demographic factors. Thus, it is imperative to understand whether auditory cognitive processes such as selective attention explain such variability. The present study directly addressed this question by quantifying attentional modulation of cortical auditory responses during an attention task and comparing its individual differences with speech-in-noise performance. In our attention experiment, participants with CIs were given a pre-stimulus visual cue that directed their attention to either of two speech streams and were asked to select a deviant syllable in the target stream. The two speech streams consisted of a female voice saying "Up" five times every 800 ms and a male voice saying "Down" four times every 1 s. The onset of each syllable elicited distinct event-related potentials (ERPs). At each syllable onset, the difference in the amplitudes of ERPs between the two attentional conditions (attended - ignored) was computed. This ERP amplitude difference served as a proxy for attentional modulation strength. Our group-level analysis showed that the amplitude of ERPs was greater when the syllable was attended than when it was ignored, showing that attention modulated cortical auditory responses. Moreover, the strength of attentional modulation showed a significant correlation with speech-in-noise performance. These results suggest that the attentional modulation of cortical auditory responses may provide a neural marker for predicting CI users' success in clinical tests of speech-in-noise listening.
Affiliation(s)
- Jae-Hee Lee
- Dept. Communication Sciences and Disorders, University of Iowa, Iowa City, IA, 52242, USA
- Dept. Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Hwan Shim
- Dept. Electrical and Computer Engineering Technology, Rochester Institute of Technology, Rochester, NY, 14623, USA
- Bruce Gantz
- Dept. Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
- Inyong Choi
- Dept. Communication Sciences and Disorders, University of Iowa, Iowa City, IA, 52242, USA
- Dept. Otolaryngology – Head and Neck Surgery, University of Iowa Hospitals and Clinics, Iowa City, IA, 52242, USA
10. Gauer J, Nagathil A, Eckel K, Belomestny D, Martin R. A versatile deep-neural-network-based music preprocessing and remixing scheme for cochlear implant listeners. J Acoust Soc Am 2022; 151:2975. [PMID: 35649910] [DOI: 10.1121/10.0010371]
Abstract
While cochlear implants (CIs) have proven to restore speech perception to a remarkable extent, access to music remains difficult for most CI users. In this work, a methodology for the design of deep learning-based signal preprocessing strategies that simplify music signals and emphasize rhythmic information is proposed. It combines harmonic/percussive source separation and deep neural network (DNN) based source separation in a versatile source mixture model. Two different neural network architectures were assessed with regard to their applicability for this task. The method was evaluated with instrumental measures and in two listening experiments for both network architectures and six mixing presets. Normal-hearing subjects rated the signal quality of the processed signals compared to the original both with and without a vocoder which provides an approximation of the auditory perception in CI listeners. Four combinations of remix models and DNNs have been selected for an evaluation with vocoded signals and were all rated significantly better in comparison to the unprocessed signal. In particular, the two best-performing remix networks are promising candidates for further evaluation in CI listeners.
Affiliation(s)
- Johannes Gauer
- Institute of Communication Acoustics, Ruhr-Universität Bochum, Bochum, Germany
- Anil Nagathil
- Institute of Communication Acoustics, Ruhr-Universität Bochum, Bochum, Germany
- Kai Eckel
- Institute of Communication Acoustics, Ruhr-Universität Bochum, Bochum, Germany
- Denis Belomestny
- Faculty of Mathematics, Universität Duisburg-Essen, Essen, Germany
- Rainer Martin
- Institute of Communication Acoustics, Ruhr-Universität Bochum, Bochum, Germany
11. Huang W, Wong LLN, Chen F. Just-Noticeable Differences of Fundamental Frequency Change in Mandarin-Speaking Children with Cochlear Implants. Brain Sci 2022; 12:443. [PMID: 35447975] [PMCID: PMC9031813] [DOI: 10.3390/brainsci12040443]
Abstract
Fundamental frequency (F0) provides the primary acoustic cue for lexical tone perception in tonal languages but remains poorly represented in cochlear implant (CI) systems. Currently, there is still a lack of understanding of sensitivity to F0 change in CI users who speak tonal languages. In the present study, just-noticeable differences (JNDs) of F0 contour and F0 level changes in Mandarin-speaking children with CIs were measured and compared with those in their age-matched normal-hearing (NH) peers. Results showed that children with CIs demonstrated significantly larger JNDs of F0 contour (JND-C) change and F0 level (JND-L) change compared to NH children. Further within-group comparison revealed that the JND-C change was significantly smaller than the JND-L change among children with CIs, whereas the opposite pattern was observed among NH children. No significant correlations were seen between JND-C change/JND-L change and age at implantation/duration of CI use. The contrast between children with CIs and NH children in sensitivity to F0 contour and F0 level change suggests different mechanisms of F0 processing in these two groups as a result of different hearing experiences.
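The abstract does not specify how the JNDs were tracked; a common choice for such measurements is a 2-down/1-up adaptive staircase, which converges near the 70.7 %-correct point. The sketch below is one illustrative update rule (the function name and step factor are assumptions, not details from the study).

```python
def staircase_step(delta, correct, n_correct, step_factor=1.25):
    """One update of a 2-down/1-up adaptive track for an F0-difference JND.

    delta: current F0 difference presented to the listener.
    correct: whether the listener's response was correct.
    n_correct: consecutive-correct counter carried between trials.
    Returns the next (delta, n_correct).
    """
    if correct:
        n_correct += 1
        if n_correct == 2:                    # two in a row -> make it harder
            return delta / step_factor, 0
        return delta, n_correct
    return delta * step_factor, 0             # miss -> make it easier
```

The JND is then typically estimated as the mean of `delta` over the last several reversals of the track.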
Affiliation(s)
- Wanting Huang
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen 518055, China
- Unit of Human Communication, Development, and Information Sciences, Faculty of Education, The University of Hong Kong, Hong Kong 999077, China
- Lena L. N. Wong
- Unit of Human Communication, Development, and Information Sciences, Faculty of Education, The University of Hong Kong, Hong Kong 999077, China
- Fei Chen
- Department of Electrical and Electronic Engineering, Southern University of Science and Technology, Shenzhen 518055, China
12. Fletcher MD. Can Haptic Stimulation Enhance Music Perception in Hearing-Impaired Listeners? Front Neurosci 2021; 15:723877. [PMID: 34531717] [PMCID: PMC8439542] [DOI: 10.3389/fnins.2021.723877]
Abstract
Cochlear implants (CIs) have been remarkably successful at restoring hearing in severely-to-profoundly hearing-impaired individuals. However, users often struggle to deconstruct complex auditory scenes with multiple simultaneous sounds, which can result in reduced music enjoyment and impaired speech understanding in background noise. Hearing aid users often have similar issues, though these are typically less acute. Several recent studies have shown that haptic stimulation can enhance CI listening by giving access to sound features that are poorly transmitted through the electrical CI signal. This “electro-haptic stimulation” improves melody recognition and pitch discrimination, as well as speech-in-noise performance and sound localization. The success of this approach suggests it could also enhance auditory perception in hearing-aid users and other hearing-impaired listeners. This review focuses on the use of haptic stimulation to enhance music perception in hearing-impaired listeners. Music is prevalent throughout everyday life, being critical to media such as film and video games, and often being central to events such as weddings and funerals. It represents the biggest challenge for signal processing, as it is typically an extremely complex acoustic signal, containing multiple simultaneous harmonic and inharmonic sounds. Signal-processing approaches developed for enhancing music perception could therefore have significant utility for other key issues faced by hearing-impaired listeners, such as understanding speech in noisy environments. This review first discusses the limits of music perception in hearing-impaired listeners and the limits of the tactile system. It then discusses the evidence around integration of audio and haptic stimulation in the brain. Next, the features, suitability, and success of current haptic devices for enhancing music perception are reviewed, as well as the signal-processing approaches that could be deployed in future haptic devices. Finally, the cutting-edge technologies that could be exploited for enhancing music perception with haptics are discussed. These include the latest micro motor and driver technology, low-power wireless technology, machine learning, big data, and cloud computing. New approaches for enhancing music perception in hearing-impaired listeners could substantially improve quality of life. Furthermore, effective haptic techniques for providing complex sound information could offer a non-invasive, affordable means for enhancing listening more broadly in hearing-impaired individuals.
Affiliation(s)
- Mark D Fletcher
- University of Southampton Auditory Implant Service, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
- Institute of Sound and Vibration Research, Faculty of Engineering and Physical Sciences, University of Southampton, Southampton, United Kingdom
13. Fuller C, Free R, Maat B, Başkent D. Self-reported music perception is related to quality of life and self-reported hearing abilities in cochlear implant users. Cochlear Implants Int 2021; 23:1-10. [PMID: 34470590] [DOI: 10.1080/14670100.2021.1948716]
Abstract
OBJECTIVES To investigate the relationship between self-reported music perception and appreciation and (1) quality of life (QoL) and (2) self-assessed hearing ability in 98 post-lingually deafened cochlear implant (CI) users with a wide age range. METHODS Participants completed three questionnaires: (1) the Dutch Musical Background Questionnaire (DMBQ), which measures music listening habits, the quality of the sound of music, and the self-assessed perception of elements of music; (2) the Nijmegen Cochlear Implant Questionnaire (NCIQ), which measures health-related QoL; and (3) the Speech, Spatial and Qualities (SSQ) of hearing scale, which measures self-assessed hearing ability. Additionally, speech perception was measured behaviorally with a phoneme-in-word identification task. RESULTS In the DMBQ, participants reported a decline in music listening habits and a low rating of the quality of music after implantation. A significant relationship was found between the music measures and the NCIQ and SSQ; no significant relationships were observed between the DMBQ and speech perception scores. CONCLUSIONS The findings suggest some relationship between CI users' self-reported music perception ability and QoL and self-reported hearing ability. While the causal relationship was not evaluated here, the findings may imply that music training programs and/or device improvements that improve music perception may also improve QoL and hearing ability.
Affiliation(s)
- Christina Fuller
- Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands; Treant Zorggroep, Emmen, Netherlands
- Rolien Free
- Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands
- Bert Maat
- Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands
14
Niu Y, Liu Y, Wu X, Chen J. Categorical perception of lexical tones based on acoustic-electric stimulation. JASA Express Lett 2021; 1:084405. [PMID: 36154241 DOI: 10.1121/10.0005807]
Abstract
The effect of low-frequency acoustic input on the categorical perception of lexical tones was investigated with simulated electric-acoustic hearing. A synthesized T1-T2 (flat-rising) tone continuum of the Mandarin monosyllable /i/ was used, manipulated into five conditions: unprocessed, low-frequency acoustic-only, electric-only, electric-acoustic stimulation, and bimodal stimulation. Results showed that performance in the electric-only condition was significantly the lowest, and that the differences in the other pairwise comparisons between conditions were quite small. These findings suggest that low-frequency acoustic input can shape categorical perception, and that combining acoustic and electric hearing within or across ears has no significant effect.
Affiliation(s)
- Yadong Niu
- Department of Machine Intelligence, Speech and Hearing Research Center, and Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, China
- Yuhe Liu
- Department of Otolaryngology, Head and Neck Surgery, Peking University First Hospital, Beijing, China
- Xihong Wu
- Department of Machine Intelligence, Speech and Hearing Research Center, and Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, China
- Jing Chen
- Department of Machine Intelligence, Speech and Hearing Research Center, and Key Laboratory of Machine Perception (Ministry of Education), Peking University, Beijing, China
15
Rapid Assessment of Non-Verbal Auditory Perception in Normal-Hearing Participants and Cochlear Implant Users. J Clin Med 2021; 10:jcm10102093. [PMID: 34068067 PMCID: PMC8152499 DOI: 10.3390/jcm10102093]
Abstract
In the case of hearing loss, cochlear implants (CIs) allow for the restoration of hearing. Despite the advantages of CIs for speech perception, CI users still complain about their poor perception of the auditory environment. Aiming to assess non-verbal auditory perception in CI users, we developed five listening tests. These tests measure pitch change detection, pitch direction identification, pitch short-term memory, auditory stream segregation, and emotional prosody recognition, along with perceived intensity ratings. To test the potential benefit of visual cues for pitch processing, the three pitch tests included visual indications for performing the task on half of the trials. We tested 10 normal-hearing (NH) participants, with material presented as original and vocoded sounds, and 10 post-lingually deaf CI users. With the vocoded sounds, the NH participants had reduced scores for the detection of small pitch differences, and reduced emotion recognition and streaming abilities, compared to the original sounds. Similarly, the CI users had deficits in detecting small differences in the pitch change detection task and in emotion recognition, as well as a decreased streaming capacity. Overall, this assessment allows for the rapid detection of specific patterns of non-verbal auditory perception deficits. The current findings also open new perspectives on how to enhance pitch perception capacities using visual cues.
16
Impact of Auditory-Motor Musical Training on Melodic Pattern Recognition in Cochlear Implant Users. Otol Neurotol 2021; 41:e422-e431. [PMID: 32176126 DOI: 10.1097/mao.0000000000002525]
Abstract
OBJECTIVE Cochlear implant (CI) users struggle with tasks of pitch-based prosody perception. Pitch pattern recognition is vital for both music comprehension and understanding the prosody of speech, which signals emotion and intent. Research in normal-hearing individuals shows that auditory-motor training, in which participants produce the auditory pattern they are learning, is more effective than passive auditory training. We investigated whether auditory-motor training of CI users improves complex sound perception, such as vocal emotion recognition and pitch pattern recognition, compared with purely auditory training. STUDY DESIGN Prospective cohort study. SETTING Tertiary academic center. PATIENTS Fifteen postlingually deafened adults with CIs. INTERVENTION(S) Participants were divided into 3 one-month training groups: auditory-motor (intervention), auditory-only (active control), and no training (control). Auditory-motor training was conducted with the "Contours" software program and auditory-only training was completed with the "AngelSound" software program. MAIN OUTCOME MEASURE Pre- and posttest examinations included tests of speech perception (consonant-nucleus-consonant, hearing-in-noise test sentence recognition), speech prosody perception, pitch discrimination, and melodic contour identification. RESULTS Participants in the auditory-motor training group performed better than those in the auditory-only and no-training groups (p < 0.05) on the melodic contour identification task. No significant training effect was noted on tasks of speech perception, speech prosody perception, or pitch discrimination. CONCLUSIONS These data suggest that short-term auditory-motor music training of CI users improves pitch pattern recognition. This study offers approaches for enriching the world of complex sound for the CI user.
17
Translation and validation of the music-related quality of life questionnaire for adults with cochlear implant in Turkish language. Eur Arch Otorhinolaryngol 2021; 279:685-693. [PMID: 33599840 DOI: 10.1007/s00405-021-06693-w]
Abstract
PURPOSE It is important to assess the impact of music on cochlear implant (CI) users' quality of life. The aim of this study was to adapt and validate the music-related quality of life questionnaire in the Turkish language for adult CI users. METHODS 161 CI users and 162 normal-hearing adults were included in the study. The final Turkish version of the questionnaire was prepared and evaluated for validity and reliability. The internal consistency of the questionnaire and test-retest reliability were evaluated with Cronbach's α and the ICC index. Factor analysis and the 'known-group' method were used to determine construct validity. RESULTS Sampling adequacy for factor analysis was confirmed by the Kaiser-Meyer-Olkin measure (= 0.91) and Bartlett's test (p < 0.05). Two factors were identified for each scale in exploratory factor analysis. Confirmatory factor analysis confirmed that the questionnaire met the criterion standards for adequacy of fit. The reliability coefficient was at least 0.80. Correlations between items indicated excellent (> 0.80) internal consistency. CONCLUSION The Turkish version of the questionnaire has good validity and reliability; it can be used to investigate the relationship between music and quality of life, as a diagnostic tool for identifying individuals who need music support, and to guide and evaluate music rehabilitation.
18
Erickson ML, Faulkner K, Johnstone PM, Hedrick MS, Stone T. Multidimensional Timbre Spaces of Cochlear Implant Vocoded and Non-vocoded Synthetic Female Singing Voices. Front Neurosci 2020; 14:307. [PMID: 32372904 PMCID: PMC7179674 DOI: 10.3389/fnins.2020.00307]
Abstract
Many post-lingually deafened cochlear implant (CI) users report that they no longer enjoy listening to music, which could possibly contribute to a perceived reduction in quality of life. One aspect of music perception, vocal timbre perception, may be difficult for CI users because they may not be able to use the same timbral cues available to normal hearing listeners. Vocal tract resonance frequencies have been shown to provide perceptual cues to voice categories such as baritone, tenor, mezzo-soprano, and soprano, while changes in glottal source spectral slope are believed to be related to perception of vocal quality dimensions such as fluty vs. brassy. As a first step toward understanding vocal timbre perception in CI users, we employed an 8-channel noise-band vocoder to test how vocoding can alter the timbral perception of female synthetic sung vowels across pitches. Non-vocoded and vocoded stimuli were synthesized with vibrato using 3 excitation source spectral slopes and 3 vocal tract transfer functions (mezzo-soprano, intermediate, soprano) at the pitches C4, B4, and F5. Six multi-dimensional scaling experiments were conducted: C4 not vocoded, C4 vocoded, B4 not vocoded, B4 vocoded, F5 not vocoded, and F5 vocoded. At the pitch C4, for both non-vocoded and vocoded conditions, dimension 1 grouped stimuli according to voice category and was most strongly predicted by spectral centroid from 0 to 2 kHz. While dimension 2 grouped stimuli according to excitation source spectral slope, it was organized slightly differently and predicted by different acoustic parameters in the non-vocoded and vocoded conditions. For pitches B4 and F5 spectral centroid from 0 to 2 kHz most strongly predicted dimension 1. However, while dimension 1 separated all 3 voice categories in the vocoded condition, dimension 1 only separated the soprano stimuli from the intermediate and mezzo-soprano stimuli in the non-vocoded condition. 
While it is unclear how these results predict timbre perception in CI listeners, they suggest that some aspects of vocal timbre perception may be preserved.
Affiliation(s)
- Molly L. Erickson
- Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, TN, United States
19
Firestone GM, McGuire K, Liang C, Zhang N, Blankenship CM, Xiang J, Zhang F. A Preliminary Study of the Effects of Attentive Music Listening on Cochlear Implant Users' Speech Perception, Quality of Life, and Behavioral and Objective Measures of Frequency Change Detection. Front Hum Neurosci 2020; 14:110. [PMID: 32296318 PMCID: PMC7136537 DOI: 10.3389/fnhum.2020.00110]
Abstract
Introduction Most cochlear implant (CI) users have difficulty in listening tasks that rely strongly on perception of frequency changes (e.g., speech perception in noise, musical melody perception). Some previous studies using behavioral or subjective assessments have shown that short-term music training can benefit CI users' perception of music and speech. Electroencephalographic (EEG) recordings may reveal the neural basis for music training benefits in CI users. Objective To examine the effects of short-term music training on CI hearing outcomes using a comprehensive test battery of subjective evaluation, behavioral tests, and EEG measures. Design Twelve adult CI users were recruited for a home-based music training program that focused on attentive listening to music genres and materials with an emphasis on melody. The participants used a music streaming program (i.e., Pandora) downloaded onto personal electronic devices for training, and listened attentively to music through a direct audio cable or through Bluetooth streaming. The training schedule was 40 min/session/day, 5 days/week, for either 4 or 8 weeks. The pre-training and post-training tests included: hearing thresholds, the Speech, Spatial and Qualities of Hearing Scale (SSQ12) questionnaire, psychoacoustic tests of frequency change detection threshold (FCDT), speech recognition tests (CNC words, AzBio sentences, and QuickSIN), and EEG responses to tones that contained different magnitudes of frequency changes. Results All participants except one finished the 4- or 8-week training, resulting in a dropout rate of 8.33%. Eleven participants performed all tests except for two who did not participate in the EEG tests. Results showed a significant improvement in FCDTs as well as in performance on CNC and QuickSIN after training (p < 0.05), but no significant improvement in SSQ scores (p > 0.05).
Results of the EEG tests showed larger post-training cortical auditory evoked potentials (CAEPs) in seven of the nine participants, suggesting a better cortical processing of both stimulus onset and within-stimulus frequency changes. Conclusion These preliminary data suggest that extensive, focused music listening can improve frequency perception and speech perception in CI users. Further studies that include a larger sample size and control groups are warranted to determine the efficacy of short-term music training in CI users.
Affiliation(s)
- Gabrielle M Firestone
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Kelli McGuire
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Chun Liang
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Nanhua Zhang
- Division of Biostatistics and Epidemiology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States; Department of Pediatrics, University of Cincinnati College of Medicine, Cincinnati, OH, United States
- Chelsea M Blankenship
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
- Jing Xiang
- Department of Pediatrics and Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, United States
- Fawen Zhang
- Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, United States
20
Sorrentino F, Gheller F, Favaretto N, Franz L, Stocco E, Brotto D, Bovo R. Music perception in adult patients with cochlear implant. Hearing Balance and Communication 2020. [DOI: 10.1080/21695717.2020.1719787]
Affiliation(s)
- Flavia Sorrentino
- Department of Neurosciences, ENT Clinic, Padova University Hospital, Padua, Italy
- Flavia Gheller
- Department of Neurosciences, ENT Clinic, Padova University Hospital, Padua, Italy
- Niccolò Favaretto
- Department of Neurosciences, ENT Clinic, Padova University Hospital, Padua, Italy
- Leonardo Franz
- Department of Neurosciences, ENT Clinic, Padova University Hospital, Padua, Italy
- Elisabetta Stocco
- Department of Neurosciences, ENT Clinic, Padova University Hospital, Padua, Italy
- Davide Brotto
- Department of Neurosciences, ENT Clinic, Padova University Hospital, Padua, Italy
- Roberto Bovo
- Department of Neurosciences, ENT Clinic, Padova University Hospital, Padua, Italy
21
Gauer J, Nagathil A, Martin R, Thomas JP, Völter C. Interactive Evaluation of a Music Preprocessing Scheme for Cochlear Implants Based on Spectral Complexity Reduction. Front Neurosci 2019; 13:1206. [PMID: 31803001 PMCID: PMC6872501 DOI: 10.3389/fnins.2019.01206]
Abstract
Music is difficult to access for the majority of CI users, as the reduced dynamic range and poor spectral resolution of cochlear implants (CIs), among other constraints, severely impair their auditory perception. The reduction of spectral complexity is therefore a promising means to facilitate music enjoyment for CI listeners. We evaluate a spectral complexity reduction method for music signals based on principal component analysis that enforces spectral sparsity, emphasizes the melody contour, and attenuates interfering accompanying voices. To cover a wide range of spectral complexity reduction levels, a new experimental design for listening experiments was introduced. It allows CI users to select the preferred level of spectral complexity reduction interactively and in real time. Ten adult CI recipients with post-lingual bilateral profound sensorineural hearing loss and CI experience of at least 6 months were enrolled in the study. In eight consecutive sessions over a period of 4 weeks, they were asked to choose their preferred version out of 10 different complexity settings for a total of 16 recordings of classical western chamber music. Because the experiments were performed in consecutive sessions, we also studied a potential long-term effect: we investigated the hypothesis that repeated engagement with music signals of reduced spectral complexity leads to a habituation effect that allows CI users to deal with music signals of increasing complexity. Questionnaires and tests about music listening habits and musical abilities complemented these experiments. The participants significantly preferred signals with high spectral complexity reduction levels over the unprocessed versions. While the results of earlier studies comprising only two preselected complexity levels were generally confirmed, this study revealed a tendency toward selection of even higher spectral complexity reduction levels. Spectral complexity reduction for music signals is therefore a useful strategy to enhance music enjoyment for CI users. Although there is evidence for a habituation effect in some subjects, this effect was not significant overall.
Affiliation(s)
- Johannes Gauer
- Department of Electrical Engineering and Information Technology, Institute of Communication Acoustics, Ruhr-Universität Bochum, Bochum, Germany
- Anil Nagathil
- Department of Electrical Engineering and Information Technology, Institute of Communication Acoustics, Ruhr-Universität Bochum, Bochum, Germany
- Rainer Martin
- Department of Electrical Engineering and Information Technology, Institute of Communication Acoustics, Ruhr-Universität Bochum, Bochum, Germany
- Jan Peter Thomas
- Department of Otorhinolaryngology, Head and Neck Surgery, St. Elisabeth-Hospital, Ruhr-Universität Bochum, Bochum, Germany
- Christiane Völter
- Department of Otorhinolaryngology, Head and Neck Surgery, St. Elisabeth-Hospital, Ruhr-Universität Bochum, Bochum, Germany
22
Tillmann B, Poulin-Charronnat B, Gaudrain E, Akhoun I, Delbé C, Truy E, Collet L. Implicit Processing of Pitch in Postlingually Deafened Cochlear Implant Users. Front Psychol 2019; 10:1990. [PMID: 31572253 PMCID: PMC6749036 DOI: 10.3389/fpsyg.2019.01990]
Abstract
Cochlear implant (CI) users can only access limited pitch information through their device, which hinders music appreciation. Poor music perception may not only be due to CI technical limitations; lack of training or negative attitudes toward the electric sound might also contribute to it. Our study used an implicit (indirect) method to investigate whether poorly transmitted pitch information, presented as musical chords, can activate listeners' knowledge about musical structures acquired prior to deafness. Seven postlingually deafened adult CI users participated in a musical priming paradigm investigating pitch processing without explicit judgments. Sequences made of eight sung chords that ended on either a musically related (expected) target chord or a less-related (less-expected) target chord were presented. The use of a priming task based on linguistic features allowed CI patients to perform fast judgments on target chords in the sung music. If listeners' musical knowledge is activated and allows for tonal expectations (as in normal-hearing listeners), faster response times would be expected for related targets than for less-related targets. However, if the pitch percept is too different and does not activate musical knowledge acquired prior to deafness, storing pitch information in a short-term memory buffer predicts the opposite pattern. If the transmitted pitch information is too poor, no difference in response times should be observed. Results showed that CI patients were able to perform the linguistic task on the sung chords, but correct response times indicated sensory priming, with faster response times observed for the less-related targets: CI patients processed at least some of the pitch information of the musical sequences, which was stored in auditory short-term memory and influenced chord processing.
This finding suggests that the signal transmitted via electric hearing led to a pitch percept that was too different from that based on acoustic hearing, so that it did not automatically activate listeners’ previously acquired musical structure knowledge. However, the transmitted signal seems sufficiently informative to lead to sensory priming. These findings are encouraging for the development of pitch-related training programs for CI patients, despite the current technological limitations of the CI coding.
Affiliation(s)
- Barbara Tillmann
- CNRS UMR5292, INSERM U1028, Auditory Cognition and Psychoacoustics Team, Lyon Neuroscience Research Center, Lyon, France; University of Lyon, Lyon, France; Université Claude Bernard Lyon 1, Villeurbanne, France
- Bénédicte Poulin-Charronnat
- CNRS UMR5292, INSERM U1028, Auditory Cognition and Psychoacoustics Team, Lyon Neuroscience Research Center, Lyon, France; University of Lyon, Lyon, France; Université Claude Bernard Lyon 1, Villeurbanne, France; LEAD-CNRS, UMR5022, Université Bourgogne Franche-Comté, Dijon, France
- Etienne Gaudrain
- CNRS UMR5292, INSERM U1028, Auditory Cognition and Psychoacoustics Team, Lyon Neuroscience Research Center, Lyon, France; University of Lyon, Lyon, France; Université Claude Bernard Lyon 1, Villeurbanne, France; University Medical Center Groningen, University of Groningen, Groningen, Netherlands
- Idrick Akhoun
- School of Psychological Sciences, The University of Manchester, Manchester, United Kingdom
- Charles Delbé
- CNRS UMR5292, INSERM U1028, Auditory Cognition and Psychoacoustics Team, Lyon Neuroscience Research Center, Lyon, France; University of Lyon, Lyon, France; Université Claude Bernard Lyon 1, Villeurbanne, France; LEAD-CNRS, UMR5022, Université Bourgogne Franche-Comté, Dijon, France
- Eric Truy
- University of Lyon, Lyon, France; Université Claude Bernard Lyon 1, Villeurbanne, France; CNRS UMR5292, INSERM U1028, Brain Dynamics and Cognition Team, Lyon Neuroscience Research Center, Lyon, France
- Lionel Collet
- University of Lyon, Lyon, France; Université Claude Bernard Lyon 1, Villeurbanne, France
23
Spitzer ER, Landsberger DM, Friedmann DR, Galvin JJ. Pleasantness Ratings for Harmonic Intervals With Acoustic and Electric Hearing in Unilaterally Deaf Cochlear Implant Patients. Front Neurosci 2019; 13:922. [PMID: 31551686 PMCID: PMC6733976 DOI: 10.3389/fnins.2019.00922]
Abstract
Background Harmony is an important part of tonal music that conveys context, form and emotion. Two notes sounded simultaneously form a harmonic interval. In normal-hearing (NH) listeners, some harmonic intervals (e.g., minor 2nd, tritone, major 7th) typically sound more dissonant than others (e.g., octave, major 3rd, 4th). Because of the limited spectro-temporal resolution afforded by cochlear implants (CIs), music perception is generally poor. However, CI users may still be sensitive to relative dissonance across intervals. In this study, dissonance ratings for harmonic intervals were measured in 11 unilaterally deaf CI patients, in whom ratings from the CI could be compared to those from the normal ear. Methods Stimuli consisted of pairs of equal amplitude MIDI piano tones. Intervals spanned a range of two octaves relative to two root notes (F3 or C4). Dissonance was assessed in terms of subjective pleasantness ratings for intervals presented to the NH ear alone, the CI ear alone, and both ears together (NH + CI). Ratings were collected for both root notes for within- and across-octave intervals (1–12 and 13–24 semitones). Participants rated the pleasantness of each interval by clicking on a line anchored with “least pleasant” and “most pleasant.” A follow-up experiment repeated the task with a smaller stimulus set. Results With NH-only listening, within-octave intervals minor 2nd, major 2nd, and major 7th were rated least pleasant; major 3rd, 5th, and octave were rated most pleasant. Across-octave counterparts were similarly rated. With CI-only listening, ratings were consistently lower and showed a reduced range. Mean ratings were highly correlated between NH-only and CI-only listening (r = 0.845, p < 0.001). Ratings were similar between NH-only and NH + CI listening, with no significant binaural enhancement/interference. The follow-up tests showed that ratings were reliable for the least and most pleasant intervals. 
Discussion Although pleasantness ratings were less differentiated for the CI ear than the NH ear, there were similarities between the two listening modes. Given the lack of spectro-temporal detail needed for harmonicity-based distinctions, temporal envelope interactions (within and across channels) associated with a perception of roughness may contribute to dissonance perception for harmonic intervals with CI-only listening.
Affiliation(s)
- Emily R Spitzer
- Department of Otolaryngology-Head and Neck Surgery, New York University School of Medicine, New York, NY, United States
- David M Landsberger
- Department of Otolaryngology-Head and Neck Surgery, New York University School of Medicine, New York, NY, United States
- David R Friedmann
- Department of Otolaryngology-Head and Neck Surgery, New York University School of Medicine, New York, NY, United States
24
Zhang F, Roland C, Rasul D, Cahn S, Liang C, Valencia G. Comparing musicians and non-musicians in signal-in-noise perception. Int J Audiol 2019; 58:717-723. [DOI: 10.1080/14992027.2019.1623424]
Affiliation(s)
- Fawen Zhang
- Department of Communication Sciences and Disorders, College of Allied Health Sciences, University of Cincinnati, Cincinnati, OH, USA
- Claire Roland
- Department of Communication Sciences and Disorders, College of Allied Health Sciences, University of Cincinnati, Cincinnati, OH, USA
- Deema Rasul
- Department of Communication Sciences and Disorders, College of Allied Health Sciences, University of Cincinnati, Cincinnati, OH, USA
- Steven Cahn
- Department of Music Theory, College-Conservatory of Music, University of Cincinnati, Cincinnati, OH, USA
- Chun Liang
- Department of Communication Sciences and Disorders, College of Allied Health Sciences, University of Cincinnati, Cincinnati, OH, USA
- Shenzhen Maternity & Child Healthcare Hospital, Shenzhen, China
- Gloria Valencia
- Department of Communication Sciences and Disorders, College of Allied Health Sciences, University of Cincinnati, Cincinnati, OH, USA
25
Rayes H, Al-Malky G, Vickers D. Systematic Review of Auditory Training in Pediatric Cochlear Implant Recipients. J Speech Lang Hear Res 2019; 62:1574-1593. [PMID: 31039327 DOI: 10.1044/2019_jslhr-h-18-0252]
Abstract
Objective The purpose of this systematic review is to evaluate the published research on auditory training (AT) for pediatric cochlear implant (CI) recipients. This review investigates whether AT in children with CIs leads to improvements in speech and language development, cognition, and/or quality of life, and whether improvements, if any, remain over time after the AT intervention. Method A systematic search of 7 databases identified 96 articles published up until January 2017, 9 of which met the inclusion criteria. Data were extracted and independently assessed for risk of bias and quality of study against a PICOS (participants, intervention, control, outcomes, and study) framework. Results All studies reported improvements in trained AT tasks, including speech discrimination/identification and working memory. Retention of improvements over time was found whenever it was assessed. Transfer of learning was measured in 4 of 6 studies, which assessed generalization. Quality of life was not assessed. Overall, the evidence in the included studies was deemed to be of low quality. Conclusion Benefits of AT were illustrated through improvement in trained tasks, observed in all reviewed studies. Transfer of improvement to other domains and retention of benefits post AT were evident when assessed, although this was rarely done. However, higher quality evidence is needed to further examine outcomes of AT in pediatric CI recipients.
Affiliation(s)
- Hanin Rayes
- Department of Speech Hearing and Phonetic Sciences, Faculty of Brain Sciences, University College London, United Kingdom
- Ghada Al-Malky
- Ear Institute, Faculty of Brain Sciences, University College London, United Kingdom
- Deborah Vickers
- Department of Speech Hearing and Phonetic Sciences, Faculty of Brain Sciences, University College London, United Kingdom
- Department of Clinical Neurosciences, Clinical School, University of Cambridge, United Kingdom
26
Zimmer V, Verhey JL, Ziese M, Böckmann-Barthel M. Harmony Perception in Prelingually Deaf, Juvenile Cochlear Implant Users. Front Neurosci 2019; 13:466. [PMID: 31139046] [PMCID: PMC6518352] [DOI: 10.3389/fnins.2019.00466]
Abstract
Prelingually deaf children listening through cochlear implants (CIs) face severe limitations on their experience of music, since the hearing device degrades relevant details of the acoustic input. An important parameter of music is harmony, which conveys emotional as well as syntactic information. The present study addresses musical harmony in three psychoacoustic experiments in young, prelingually deaf CI listeners and normal-hearing (NH) peers. The discrimination and preference of typical musical chords were studied, as well as cadence sequences conveying musical syntax. The ability to discriminate chords depended on the hearing age of the CI listeners, and was less accurate than for the NH peers. The groups did not differ with respect to the preference of certain chord types. NH listeners were able to categorize cadences, and performance improved with age at testing. In contrast, CI listeners were largely unable to categorize cadences. This dissociation is in accordance with data found in postlingually deafened adults. Consequently, while musical harmony is available to a limited degree to CI listeners, they are unable to use harmony to interpret musical syntax.
Affiliation(s)
- Victoria Zimmer
- Department of Experimental Audiology, Otto von Guericke University of Magdeburg, Magdeburg, Germany
- Jesko L Verhey
- Department of Experimental Audiology, Otto von Guericke University of Magdeburg, Magdeburg, Germany
- Michael Ziese
- Department of Experimental Audiology, Otto von Guericke University of Magdeburg, Magdeburg, Germany
- Martin Böckmann-Barthel
- Department of Experimental Audiology, Otto von Guericke University of Magdeburg, Magdeburg, Germany
27
Cheng X, Liu Y, Shu Y, Tao DD, Wang B, Yuan Y, Galvin JJ, Fu QJ, Chen B. Music Training Can Improve Music and Speech Perception in Pediatric Mandarin-Speaking Cochlear Implant Users. Trends Hear 2018; 22:2331216518759214. [PMID: 29484971] [PMCID: PMC5833165] [DOI: 10.1177/2331216518759214]
Abstract
Due to limited spectral resolution, cochlear implants (CIs) do not convey pitch information very well. Pitch cues are important for perception of music and tonal language; it is possible that music training may improve performance in both listening tasks. In this study, we investigated music training outcomes in terms of perception of music, lexical tones, and sentences in 22 young (4.8 to 9.3 years old), prelingually deaf Mandarin-speaking CI users. Music perception was measured using a melodic contour identification (MCI) task. Speech perception was measured for lexical tones and sentences presented in quiet. Subjects received 8 weeks of MCI training using pitch ranges not used for testing. Music and speech perception were measured at 2, 4, and 8 weeks after training was begun; follow-up measures were made 4 weeks after training was stopped. Mean baseline performance was 33.2%, 76.9%, and 45.8% correct for MCI, lexical tone recognition, and sentence recognition, respectively. After 8 weeks of MCI training, mean performance significantly improved by 22.9, 14.4, and 14.5 percentage points for MCI, lexical tone recognition, and sentence recognition, respectively (p < .05 in all cases). Four weeks after training was stopped, there was no significant change in posttraining music and speech performance. The results suggest that music training can significantly improve pediatric Mandarin-speaking CI users’ music and speech perception.
Affiliation(s)
- Xiaoting Cheng
- Department of Otology and Skull Base Surgery, Eye and Ear, Nose, and Throat Hospital, Fudan University, Shanghai, China
- Key Laboratory of Hearing Medicine, National Health and Family Planning Commission, Shanghai, China
- Yangwenyi Liu
- Department of Otology and Skull Base Surgery, Eye and Ear, Nose, and Throat Hospital, Fudan University, Shanghai, China
- Key Laboratory of Hearing Medicine, National Health and Family Planning Commission, Shanghai, China
- Yilai Shu
- Department of Otology and Skull Base Surgery, Eye and Ear, Nose, and Throat Hospital, Fudan University, Shanghai, China
- Key Laboratory of Hearing Medicine, National Health and Family Planning Commission, Shanghai, China
- Duo-Duo Tao
- Department of Ear, Nose and Throat, The First Affiliated Hospital of Soochow University, Suzhou, China
- Bing Wang
- Department of Otology and Skull Base Surgery, Eye and Ear, Nose, and Throat Hospital, Fudan University, Shanghai, China
- Key Laboratory of Hearing Medicine, National Health and Family Planning Commission, Shanghai, China
- Yasheng Yuan
- Department of Otology and Skull Base Surgery, Eye and Ear, Nose, and Throat Hospital, Fudan University, Shanghai, China
- Key Laboratory of Hearing Medicine, National Health and Family Planning Commission, Shanghai, China
- Qian-Jie Fu
- Department of Head and Neck Surgery, David Geffen School of Medicine, UCLA, Los Angeles, CA, USA
- Bing Chen
- Department of Otology and Skull Base Surgery, Eye and Ear, Nose, and Throat Hospital, Fudan University, Shanghai, China
- Key Laboratory of Hearing Medicine, National Health and Family Planning Commission, Shanghai, China
28
Fuller CD, Galvin JJ, Maat B, Başkent D, Free RH. Comparison of Two Music Training Approaches on Music and Speech Perception in Cochlear Implant Users. Trends Hear 2018; 22:2331216518765379. [PMID: 29621947] [PMCID: PMC5894911] [DOI: 10.1177/2331216518765379]
Abstract
In normal-hearing (NH) adults, long-term music training may benefit music and speech perception, even when listening to spectro-temporally degraded signals such as those experienced by cochlear implant (CI) users. Because it remains unclear which approach to music training might be best, in this study we compared two different music training approaches in CI users and their effects on speech and music perception. The approaches differed in terms of music exercises and social interaction. For the pitch/timbre group, melodic contour identification (MCI) training was performed using computer software. For the music therapy group, training involved face-to-face group exercises (rhythm perception, musical speech perception, music perception, singing, vocal emotion identification, and music improvisation). For the control group, training involved group nonmusic activities (e.g., writing, cooking, and woodworking). Training consisted of weekly 2-hr sessions over a 6-week period. Speech intelligibility in quiet and noise, vocal emotion identification, MCI, and quality of life (QoL) were measured before and after training. The different training approaches appeared to offer different benefits for music and speech perception. Training effects were observed within-domain (better MCI performance for the pitch/timbre group), with little cross-domain transfer of music training (although emotion identification significantly improved for the music therapy group). While training had no significant effect on QoL, the music therapy group reported better perceptual skills across training sessions. These results suggest that more extensive and intensive training approaches that combine pitch training with the social aspects of music therapy may further benefit CI users.
Affiliation(s)
- Christina D Fuller
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Graduate School of Medical Sciences, University of Groningen, the Netherlands
- Research School of Behavioral and Cognitive Neurosciences, University of Groningen, the Netherlands
- John J Galvin
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Graduate School of Medical Sciences, University of Groningen, the Netherlands
- Research School of Behavioral and Cognitive Neurosciences, University of Groningen, the Netherlands
- House Ear Institute, Los Angeles, CA, USA
- Department of Head and Neck Surgery, David Geffen School of Medicine, UCLA, CA, USA
- Bert Maat
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Graduate School of Medical Sciences, University of Groningen, the Netherlands
- Research School of Behavioral and Cognitive Neurosciences, University of Groningen, the Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Graduate School of Medical Sciences, University of Groningen, the Netherlands
- Research School of Behavioral and Cognitive Neurosciences, University of Groningen, the Netherlands
- Rolien H Free
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Graduate School of Medical Sciences, University of Groningen, the Netherlands
- Research School of Behavioral and Cognitive Neurosciences, University of Groningen, the Netherlands
29
A Randomized Controlled Crossover Study of the Impact of Online Music Training on Pitch and Timbre Perception in Cochlear Implant Users. J Assoc Res Otolaryngol 2019; 20:247-262. [PMID: 30815761] [DOI: 10.1007/s10162-018-00704-0]
Abstract
Cochlear implant (CI) biomechanical constraints result in impoverished spectral cues and poor frequency resolution, making it difficult for users to perceive pitch and timbre. There is emerging evidence that music training may improve CI-mediated music perception; however, many of the existing studies involve time-intensive and less readily accessible in-person music training paradigms, without rigorous experimental controls. Online resources for auditory rehabilitation remain an untapped potential resource for CI users. Furthermore, establishing immediate value from an acute music training program may encourage CI users to adhere to post-implantation rehabilitation exercises. In this study, we evaluated the impact of an acute online music training program on pitch discrimination and timbre identification. Via a randomized controlled crossover study design, 20 CI users and 21 normal-hearing (NH) adults were assigned to one of two arms. Arm A underwent 1 month of online self-paced music training (intervention) followed by 1 month of audiobook listening (control). Arm B underwent 1 month of audiobook listening followed by 1 month of music training. Pitch and timbre sensitivity scores were taken across three visits: (1) baseline, (2) after 1 month of intervention, and (3) after 1 month of control. We found that pitch discrimination performance improved among CI users and NH listeners with both online music training and audiobook listening. Music training, however, provided slightly greater benefit for instrument identification than audiobook listening. For both tasks, this improvement appears to be related to both fast stimulus learning and procedural learning. In conclusion, auditory training (with either acute participation in an online music training program or audiobook listening) may improve performance on untrained tasks of pitch discrimination and timbre identification. These findings demonstrate a potential role for music training in perceptual auditory appraisal of complex stimuli. Furthermore, this study highlights the importance of, and the need for, more tightly controlled training studies in order to accurately evaluate the impact of rehabilitation training protocols on auditory processing.
30
Yoon YS, Shin YR, Kim JM, Coltisor A, Chun YM. Optimizing maps for electric acoustic stimulation users. Cochlear Implants Int 2019; 20:106-115. [DOI: 10.1080/14670100.2019.1572939]
Affiliation(s)
- Yang-Soo Yoon
- Department of Communication Sciences and Disorders, Baylor University, Waco, TX, USA
31
Gajęcki T, Nogueira W. Deep learning models to remix music for cochlear implant users. J Acoust Soc Am 2018; 143:3602. [PMID: 29960485] [DOI: 10.1121/1.5042056]
Abstract
The severe hearing loss that some people suffer can be treated by providing them with a surgically implanted electrical device called a cochlear implant (CI). CI users struggle to perceive complex audio signals such as music; however, previous studies show that CI recipients find music more enjoyable when the vocals are enhanced with respect to the background music. In this manuscript, source separation (SS) algorithms are used to remix pop songs by applying gain to the lead singing voice. This work uses deep convolutional auto-encoders, a deep recurrent neural network, a multilayer perceptron (MLP), and non-negative matrix factorization, which are evaluated objectively and subjectively through two different perceptual experiments involving normal-hearing subjects and CI recipients. The evaluation assesses the relevance of the artifacts introduced by the SS algorithms considering their computation time, as this study aims at proposing one of the algorithms for real-time implementation. Results show that the MLP performs in a robust way throughout the tested data while producing levels of distortion and artifacts that are not perceived by CI users. Thus, the MLP is proposed for real-time monaural audio SS to remix music for CI users.
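The core remixing operation described in this abstract, boosting the separated lead vocals before summing the stems back together, is simple once the separation stage has produced its estimates. The sketch below assumes the separation has already yielded `vocals` and `accompaniment` arrays; the function name and the particular gain value are illustrative, not taken from the paper:

```python
import numpy as np

def remix(vocals: np.ndarray, accompaniment: np.ndarray,
          vocal_gain_db: float = 6.0) -> np.ndarray:
    """Re-mix separated stems, applying gain to the lead vocals.

    vocals, accompaniment: mono float arrays of equal length, as
    estimated by a source separation front end (e.g., an MLP).
    """
    gain = 10.0 ** (vocal_gain_db / 20.0)  # dB -> linear amplitude
    mix = gain * vocals + accompaniment
    peak = np.max(np.abs(mix))
    # Normalize only if the boost pushed the mix past full scale.
    return mix / peak if peak > 1.0 else mix
```

In a real-time system this per-sample mixing cost is negligible; the computational budget is dominated by the separation algorithm itself, which is why the study weighs separation quality against computation time.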
Affiliation(s)
- Tom Gajęcki
- Department of Otolaryngology, Medical University Hannover and Cluster of Excellence Hearing4all, Hannover, Germany
- Waldo Nogueira
- Department of Otolaryngology, Medical University Hannover and Cluster of Excellence Hearing4all, Hannover, Germany
32
Cheng X, Liu Y, Wang B, Yuan Y, Galvin JJ, Fu QJ, Shu Y, Chen B. The Benefits of Residual Hair Cell Function for Speech and Music Perception in Pediatric Bimodal Cochlear Implant Listeners. Neural Plast 2018; 2018:4610592. [PMID: 29849556] [PMCID: PMC5925034] [DOI: 10.1155/2018/4610592]
Abstract
Objective The aim of this study was to investigate the benefits of residual hair cell function for speech and music perception in bimodal pediatric Mandarin-speaking cochlear implant (CI) listeners. Design Speech and music performance was measured in 35 Mandarin-speaking pediatric CI users for unilateral (CI-only) and bimodal listening. Mandarin speech perception was measured for vowels, consonants, lexical tones, and sentences in quiet. Music perception was measured for melodic contour identification (MCI). Results Combined electric and acoustic hearing significantly improved MCI and Mandarin tone recognition performance, relative to CI-only performance. For MCI, performance was significantly better with bimodal listening for all semitone spacing conditions (p < 0.05 in all cases). For tone recognition, bimodal performance was significantly better only for tone 2 (rising; p < 0.05). There were no significant differences between CI-only and CI + HA for vowel, consonant, or sentence recognition. Conclusions The results suggest that combined electric and acoustic hearing can significantly improve perception of music and Mandarin tones in pediatric Mandarin-speaking CI patients. Music and lexical tone perception depends strongly on pitch perception, and the contralateral acoustic hearing coming from residual hair cell function provided pitch cues that are generally not well preserved in electric hearing.
Affiliation(s)
- Xiaoting Cheng
- Department of Otology and Skull Base Surgery, Eye and Ear, Nose, Throat Hospital of Fudan University, Shanghai, China
- Key Laboratory of Hearing Medicine, National Health and Family Planning Commission, Shanghai, China
- Yangwenyi Liu
- Department of Otology and Skull Base Surgery, Eye and Ear, Nose, Throat Hospital of Fudan University, Shanghai, China
- Key Laboratory of Hearing Medicine, National Health and Family Planning Commission, Shanghai, China
- Bing Wang
- Department of Otology and Skull Base Surgery, Eye and Ear, Nose, Throat Hospital of Fudan University, Shanghai, China
- Key Laboratory of Hearing Medicine, National Health and Family Planning Commission, Shanghai, China
- Yasheng Yuan
- Department of Otology and Skull Base Surgery, Eye and Ear, Nose, Throat Hospital of Fudan University, Shanghai, China
- Key Laboratory of Hearing Medicine, National Health and Family Planning Commission, Shanghai, China
- Qian-Jie Fu
- Department of Head and Neck Surgery, David Geffen School of Medicine, UCLA, Los Angeles, CA, USA
- Yilai Shu
- Department of Otology and Skull Base Surgery, Eye and Ear, Nose, Throat Hospital of Fudan University, Shanghai, China
- Key Laboratory of Hearing Medicine, National Health and Family Planning Commission, Shanghai, China
- Bing Chen
- Department of Otology and Skull Base Surgery, Eye and Ear, Nose, Throat Hospital of Fudan University, Shanghai, China
- Key Laboratory of Hearing Medicine, National Health and Family Planning Commission, Shanghai, China
33
Nie Y, Galvin JJ, Morikawa M, André V, Wheeler H, Fu QJ. Music and Speech Perception in Children Using Sung Speech. Trends Hear 2018; 22:2331216518766810. [PMID: 29609496] [PMCID: PMC5888806] [DOI: 10.1177/2331216518766810]
Abstract
This study examined music and speech perception in normal-hearing children with some or no musical training. Thirty children (mean age = 11.3 years), 15 with and 15 without formal music training, participated in the study. Music perception was measured using a melodic contour identification (MCI) task; stimuli were a piano sample or sung speech with a fixed timbre (same word for each note) or a mixed timbre (different words for each note). Speech perception was measured in quiet and in steady noise using a matrix-styled sentence recognition task; stimuli were naturally intonated speech or sung speech with a fixed pitch (same note for each word) or a mixed pitch (different notes for each word). Significant musician advantages were observed for MCI and speech in noise but not for speech in quiet. MCI performance was significantly poorer with the mixed-timbre stimuli. Speech performance in noise was significantly poorer with the fixed- or mixed-pitch stimuli than with spoken speech. Across all subjects, age at testing and MCI performance were significantly correlated with speech performance in noise. MCI and speech performance in quiet were significantly poorer for children than for adults from a related study using the same stimuli and tasks; speech performance in noise was significantly poorer for young than for older children. Long-term music training appeared to benefit melodic pitch perception and speech understanding in noise in these pediatric listeners.
Affiliation(s)
- Yingjiu Nie
- Department of Communication Sciences and Disorders, James Madison University, Harrisonburg, VA, USA
- Michael Morikawa
- Department of Communication Sciences and Disorders, James Madison University, Harrisonburg, VA, USA
- Victoria André
- Department of Communication Sciences and Disorders, James Madison University, Harrisonburg, VA, USA
- Harley Wheeler
- Department of Communication Sciences and Disorders, James Madison University, Harrisonburg, VA, USA
- Qian-Jie Fu
- Department of Head and Neck Surgery, University of California-Los Angeles, CA, USA
34
Integration of acoustic and electric hearing is better in the same ear than across ears. Sci Rep 2017; 7:12500. [PMID: 28970567] [PMCID: PMC5624923] [DOI: 10.1038/s41598-017-12298-3]
Abstract
Advances in cochlear implant (CI) technology allow for acoustic and electric hearing to be combined within the same ear (electric-acoustic stimulation, or EAS) and/or across ears (bimodal listening). Integration efficiency (IE; the ratio between observed and predicted performance for acoustic-electric hearing) can be used to estimate how well acoustic and electric hearing are combined. The goal of this study was to evaluate factors that affect IE in EAS and bimodal listening. Vowel recognition was measured in normal-hearing subjects listening to simulations of unimodal, EAS, and bimodal listening. The input/output frequency range for acoustic hearing was 0.1–0.6 kHz. For CI simulations, the output frequency range was 1.2–8.0 kHz to simulate a shallow insertion depth and the input frequency range was varied to provide increasing amounts of speech information and tonotopic mismatch. Performance was best when acoustic and electric hearing was combined in the same ear. IE was significantly better for EAS than for bimodal listening; IE was sensitive to tonotopic mismatch for EAS, but not for bimodal listening. These simulation results suggest acoustic and electric hearing may be more effectively and efficiently combined within rather than across ears, and that tonotopic mismatch should be minimized to maximize the benefit of acoustic-electric hearing, especially for EAS.
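The integration efficiency (IE) measure named in this abstract is a ratio: observed combined-hearing performance divided by the performance predicted from the two unimodal scores. The abstract does not spell out the prediction model, so the sketch below substitutes a common probability-summation assumption (independent errors across the acoustic and electric inputs) purely for illustration:

```python
def predicted_combined(p_acoustic: float, p_electric: float) -> float:
    """Predicted acoustic+electric score under an assumed
    probability-summation model: a trial is wrong only if both
    unimodal inputs would be wrong independently."""
    return 1.0 - (1.0 - p_acoustic) * (1.0 - p_electric)

def integration_efficiency(observed: float, p_acoustic: float,
                           p_electric: float) -> float:
    """IE = observed / predicted combined performance.
    IE > 1 suggests better-than-predicted integration; IE < 1
    suggests inefficient integration."""
    return observed / predicted_combined(p_acoustic, p_electric)
```

For example, unimodal scores of 0.5 and 0.5 predict 0.75 correct under this model; an observed combined score of 0.60 would then give IE = 0.8.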
35
36
Ping L, Wang N, Tang G, Lu T, Yin L, Tu W, Fu QJ. Implementation and preliminary evaluation of 'C-tone': A novel algorithm to improve lexical tone recognition in Mandarin-speaking cochlear implant users. Cochlear Implants Int 2017. [PMID: 28629258] [DOI: 10.1080/14670100.2017.1339492]
Affiliation(s)
- Ningyuan Wang
- Zhejiang Nurotron Biotechnology Co., Ltd, Zhejiang, PR China
- Guofang Tang
- Zhejiang Nurotron Biotechnology Co., Ltd, Zhejiang, PR China
- Thomas Lu
- Nurotron Biotechnology, Inc., Irvine, CA, USA
- Li Yin
- Zhejiang Nurotron Biotechnology Co., Ltd, Zhejiang, PR China
- Wenhe Tu
- Zhejiang Nurotron Biotechnology Co., Ltd, Zhejiang, PR China
- Qian-Jie Fu
- Department of Head and Neck Surgery, David Geffen School of Medicine, UCLA, Los Angeles, CA, USA
37
Polonenko MJ, Giannantonio S, Papsin BC, Marsella P, Gordon KA. Music perception improves in children with bilateral cochlear implants or bimodal devices. J Acoust Soc Am 2017; 141:4494. [PMID: 28679263] [DOI: 10.1121/1.4985123]
Abstract
The objectives of this study were to determine whether music perception by pediatric cochlear implant users is improved by (1) access to bilateral hearing, through either two cochlear implants or a cochlear implant and a contralateral hearing aid (bimodal users), and (2) any history of music training. The Montreal Battery of Evaluation of Musical Ability test was presented via soundfield to 26 bilateral cochlear implant users, 8 bimodal users, and 16 children with normal hearing. Response accuracy and reaction time were recorded via an iPad application. Bilateral cochlear implant and bimodal users perceived musical characteristics less accurately and more slowly than children with normal hearing. Children who had music training were faster and more accurate, regardless of their hearing status. Reaction time on specific subtests decreased with age, years of musical training, and, for implant users, better residual hearing. Despite the effects of these factors on reaction time, bimodal and bilateral cochlear implant users' responses were less accurate than those of their normal-hearing peers. This means that children using bilateral cochlear implants and bimodal devices continue to experience challenges in perceiving music during development, related to hearing impairment and/or device limitations.
Affiliation(s)
- Melissa J Polonenko
- Archie's Cochlear Implant Laboratory, Department of Otolaryngology, The Hospital for Sick Children, Toronto, Canada
- Sara Giannantonio
- Audiology and Otosurgery Unit, Bambino Gesù Pediatric Hospital, Rome, Italy
- Blake C Papsin
- Archie's Cochlear Implant Laboratory, Department of Otolaryngology, The Hospital for Sick Children, Toronto, Canada
- Pasquale Marsella
- Audiology and Otosurgery Unit, Bambino Gesù Pediatric Hospital, Rome, Italy
- Karen A Gordon
- Archie's Cochlear Implant Laboratory, Department of Otolaryngology, The Hospital for Sick Children, Toronto, Canada
38
van de Velde DJ, Schiller NO, van Heuven VJ, Levelt CC, van Ginkel J, Beers M, Briaire JJ, Frijns JHM. The perception of emotion and focus prosody with varying acoustic cues in cochlear implant simulations with varying filter slopes. J Acoust Soc Am 2017; 141:3349. [PMID: 28599540] [PMCID: PMC5436976] [DOI: 10.1121/1.4982198]
Abstract
This study aimed to find the optimal filter slope for cochlear implant simulations (vocoding) by testing the effect of a wide range of slopes on the discrimination of emotional and linguistic (focus) prosody, with varying availability of F0 and duration cues. Forty normally hearing participants judged whether (non-)vocoded sentences were pronounced with happy or sad emotion, or with adjectival or nominal focus. Sentences were recorded as natural stimuli and manipulated to contain only emotion- or focus-relevant segmental duration or F0 information or both, and then noise-vocoded with 5, 20, 80, 120, and 160 dB/octave filter slopes. Performance increased with steeper slopes, but only up to 120 dB/octave, with larger effects for emotion than for focus perception. For emotion, results with both cues most closely resembled results with F0 alone, while for focus, results with both cues most closely resembled those with duration alone, showing that emotion perception relies primarily on F0 and focus perception on duration. This suggests that filter slopes affect focus perception less than emotion perception because, for emotion, F0 is both more informative and more affected. The performance increase up to extreme filter slope values suggests that much improvement in prosody perception is still to be gained for CI users.
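For orientation, the filter-slope values above can be mapped onto filter order through the standard Butterworth rule of thumb: an order-n Butterworth filter rolls off asymptotically at 20n dB/decade, i.e., about 6n dB/octave. The helper below does only that back-of-the-envelope conversion; it is not the study's vocoder implementation, which the abstract does not specify beyond the slope values:

```python
import math

def butterworth_order_for_slope(slope_db_per_octave: float) -> int:
    """Smallest Butterworth order whose asymptotic rolloff
    (~6 dB/octave per order, i.e., 20 dB/decade per order)
    meets the requested slope."""
    return math.ceil(slope_db_per_octave / 6.0)

# Slopes tested in the study (dB/octave) -> approximate filter order
orders = {s: butterworth_order_for_slope(s) for s in (5, 20, 80, 120, 160)}
```

On this rule of thumb, the 5 dB/octave condition is shallower than even a 1st-order filter, while 160 dB/octave corresponds to roughly a 27th-order rolloff, at which point adjacent analysis bands overlap very little, consistent with the finding that performance gains saturate near 120 dB/octave.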
Affiliation(s)
- Daan J van de Velde
- Leiden University Centre for Linguistics, Leiden University, Leiden, the Netherlands
- Niels O Schiller
- Leiden University Centre for Linguistics, Leiden University, Leiden, the Netherlands
- Vincent J van Heuven
- Department of Applied Linguistics, Pannon Egyetem, Veszprém, Hungary
- Claartje C Levelt
- Leiden University Centre for Linguistics, Leiden University, Leiden, the Netherlands
- Joost van Ginkel
- Leiden University Centre for Child and Family Studies, Leiden, the Netherlands
- Mieke Beers
- Ears, Nose, and Throat Department, Leiden University Medical Center, Leiden, the Netherlands
- Jeroen J Briaire
- Ears, Nose, and Throat Department, Leiden University Medical Center, Leiden, the Netherlands
- Johan H M Frijns
- Ears, Nose, and Throat Department, Leiden University Medical Center, Leiden, the Netherlands
39
Kong YY, Jesse A. Low-frequency fine-structure cues allow for the online use of lexical stress during spoken-word recognition in spectrally degraded speech. J Acoust Soc Am 2017; 141:373. [PMID: 28147573] [PMCID: PMC5848870] [DOI: 10.1121/1.4972569]
Abstract
English listeners use suprasegmental cues to lexical stress during spoken-word recognition. Prosodic cues are, however, less salient in spectrally degraded speech, as provided by cochlear implants. The present study examined how spectral degradation with and without low-frequency fine-structure information affects normal-hearing listeners' ability to benefit from suprasegmental cues to lexical stress in online spoken-word recognition. To simulate electric hearing, an eight-channel vocoder spectrally degraded the stimuli while preserving temporal envelope information. Additional lowpass-filtered speech was presented to the opposite ear to simulate bimodal hearing. Using a visual world paradigm, listeners' eye fixations to four printed words (target, competitor, two distractors) were tracked, while hearing a word. The target and competitor overlapped segmentally in their first two syllables but mismatched suprasegmentally in their first syllables, as the initial syllable received primary stress in one word and secondary stress in the other (e.g., "ˈadmiral," "ˌadmiˈration"). In the vocoder-only condition, listeners were unable to use lexical stress to recognize targets before segmental information disambiguated them from competitors. With additional lowpass-filtered speech, however, listeners efficiently processed prosodic information to speed up online word recognition. Low-frequency fine-structure cues in simulated bimodal hearing allowed listeners to benefit from suprasegmental cues to lexical stress during word recognition.
Affiliation(s)
- Ying-Yee Kong, Department of Communication Sciences & Disorders, Northeastern University, 226 Forsyth Building, 360 Huntington Avenue, Boston, Massachusetts 02115, USA
- Alexandra Jesse, Department of Psychological and Brain Sciences, University of Massachusetts, 135 Hicks Way, Amherst, Massachusetts 01003, USA
40
Pons J, Janer J, Rode T, Nogueira W. Remixing music using source separation algorithms to improve the musical experience of cochlear implant users. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2016; 140:4338. [PMID: 28040023 DOI: 10.1121/1.4971424] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Music perception remains rather poor for many Cochlear Implant (CI) users due to their deficient pitch perception. However, comprehensible vocals and simple musical structures are well perceived by many CI users. In previous studies, researchers re-mixed songs to make music more enjoyable for CI users, favoring the preferred musical elements (vocals or beat) while attenuating the others. However, mixing music requires the individually recorded tracks (multitracks), which are usually not accessible. To overcome this limitation, Source Separation (SS) techniques are proposed to estimate the multitracks. These estimated multitracks are then re-mixed to create more pleasant music for CI users. However, SS may introduce undesirable audible distortions and artifacts. Experiments conducted with CI users (N = 9) and normal-hearing listeners (N = 9) show that CI users can have different mixing preferences from normal-hearing listeners. Moreover, CI users' mixing preferences are shown to be user dependent. It is also shown that SS methods can be successfully used to create preferred re-mixes even though distortions and artifacts are present. Finally, CI users' preferences are used to propose a benchmark that defines the maximum acceptable levels of SS distortion and artifacts for two different mixes proposed by CI users.
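Once the stems have been estimated by a source-separation algorithm, the re-mixing step the authors describe (boosting preferred elements, attenuating others) reduces to applying per-stem gains and summing. A minimal sketch, assuming the separated stems are already available as arrays; the stem names and gain values are illustrative, not the paper's settings:

```python
import numpy as np

def remix(stems, gains_db):
    """Re-mix separated stems with per-stem gains given in dB.
    `stems`: dict name -> 1-D float array (all same length)."""
    n = len(next(iter(stems.values())))
    mix = np.zeros(n)
    for name, track in stems.items():
        gain = 10.0 ** (gains_db.get(name, 0.0) / 20.0)  # dB -> linear amplitude
        mix += gain * track
    peak = np.max(np.abs(mix))
    return mix / peak if peak > 1.0 else mix  # normalize only if clipping

rng = np.random.default_rng(1)
stems = {"vocals": rng.standard_normal(1000),
         "drums":  rng.standard_normal(1000),
         "other":  rng.standard_normal(1000)}
# Emphasize vocals and attenuate accompaniment, as some CI users preferred.
ci_mix = remix(stems, {"vocals": +6.0, "other": -6.0})
```

In practice the quality of such a re-mix is bounded by the separation algorithm's distortion and artifacts, which is exactly what the proposed benchmark quantifies.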
Affiliation(s)
- Jordi Pons, Department of Otolaryngology, Medical University Hannover and Cluster of Excellence Hearing4all, Karl-Wiechert-Allee 3, 30625 Hannover, Germany
- Jordi Janer, Music Technology Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Roc Boronat 138, 55.310, 08018 Barcelona, Spain
- Thilo Rode, HoerSys GmbH, Karl-Wiechert-Allee 3, 30625 Hannover, Germany
- Waldo Nogueira, Department of Otolaryngology, Medical University Hannover and Cluster of Excellence Hearing4all, Karl-Wiechert-Allee 3, 30625 Hannover, Germany
41
Abstract
Combined use of a hearing aid (HA) and cochlear implant (CI) has been shown to improve CI users' speech and music performance. However, different hearing devices, test stimuli, and listening tasks may interact and obscure bimodal benefits. In this study, speech and music perception were measured in bimodal listeners for CI-only, HA-only, and CI + HA conditions, using the Sung Speech Corpus, a database of monosyllabic words produced at different fundamental frequencies. Sentence recognition was measured using sung speech in which pitch was held constant or varied across words, as well as for spoken speech. Melodic contour identification (MCI) was measured using sung speech in which the words were held constant or varied across notes. Results showed that sentence recognition was poorer with sung speech relative to spoken speech, with little difference between constant-pitch and variable-pitch sung speech; mean performance was better with CI-only relative to HA-only, and best with CI + HA. MCI performance was better with constant words than with variable words; mean performance was better with HA-only than with CI-only, and best with CI + HA. Relative to CI-only, a strong bimodal benefit was observed for speech and music perception. Relative to the better ear, bimodal benefits remained strong for sentence recognition but were marginal for MCI. While variations in pitch and timbre may negatively affect CI users' speech and music perception, bimodal listening may partially compensate for these deficits.
Affiliation(s)
- Joseph D Crew, University of Southern California, Los Angeles, CA, USA
- Qian-Jie Fu, University of California-Los Angeles, CA, USA
42
Liang C, Earl B, Thompson I, Whitaker K, Cahn S, Xiang J, Fu QJ, Zhang F. Musicians Are Better than Non-musicians in Frequency Change Detection: Behavioral and Electrophysiological Evidence. Front Neurosci 2016; 10:464. [PMID: 27826221 PMCID: PMC5078501 DOI: 10.3389/fnins.2016.00464] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/15/2016] [Accepted: 09/27/2016] [Indexed: 11/13/2022] Open
Abstract
Objective: The objectives of this study were: (1) to determine whether musicians have a better ability to detect frequency changes under quiet and noisy conditions; (2) to use the acoustic change complex (ACC), a type of electroencephalographic (EEG) response, to understand the neural substrates of the musician vs. non-musician difference in frequency change detection abilities. Methods: Twenty-four young normal-hearing listeners (12 musicians and 12 non-musicians) participated. All participants underwent psychoacoustic frequency change detection tests with three types of stimuli: tones (base frequency at 160 Hz) containing frequency changes (Stim 1), tones containing frequency changes masked by low-level noise (Stim 2), and tones containing frequency changes masked by high-level noise (Stim 3). The EEG data were recorded using tones (base frequencies at 160 and 1200 Hz, respectively) containing different magnitudes of frequency change (0, 5, and 50%, respectively). The late-latency evoked potential elicited by the onset of the tones (onset LAEP, or N1-P2 complex) and that elicited by the frequency change within the tone (the acoustic change complex, or ACC; N1′-P2′ complex) were analyzed. Results: Musicians significantly outperformed non-musicians in all stimulus conditions. The ACC and onset LAEP showed similarities and differences. Increasing the magnitude of frequency change resulted in increased ACC amplitudes. ACC measures differed significantly between musicians (larger P2′ amplitude) and non-musicians for the base frequency of 160 Hz but not 1200 Hz. Although the peak amplitude of the onset LAEP appeared to be larger, and its latency shorter, in musicians than in non-musicians, the difference did not reach statistical significance. The amplitude of the onset LAEP was significantly correlated with that of the ACC for the base frequency of 160 Hz.
Conclusion: The present study demonstrated that musicians do perform better than non-musicians in detecting frequency changes in quiet and noisy conditions. The ACC and onset LAEP may involve different but overlapping neural mechanisms. Significance: This is the first study using the ACC to examine music-training effects. The ACC measures provide an objective tool for documenting musical training effects on frequency detection.
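Both the onset LAEP and the ACC reported above come from the same basic analysis: averaging EEG epochs time-locked to an event (tone onset, or the frequency change within the tone) and reading peak amplitudes within latency windows. A minimal sketch of that pipeline with synthetic data; the latency windows and sampling rate are illustrative assumptions, not the authors' settings:

```python
import numpy as np

def average_epochs(eeg, onsets, fs, tmin=-0.1, tmax=0.5):
    """Average EEG epochs time-locked to event onsets, baseline-corrected
    over the pre-stimulus interval."""
    pre, post = int(-tmin * fs), int(tmax * fs)
    epochs = []
    for s in onsets:
        ep = eeg[s - pre : s + post].copy()
        ep -= ep[:pre].mean()          # baseline correction
        epochs.append(ep)
    return np.mean(epochs, axis=0)     # averaging suppresses non-phase-locked noise

def peak_amplitude(erp, fs, t_lo, t_hi, pre=0.1, polarity=+1):
    """Largest positive (e.g., P2) or negative (e.g., N1) deflection
    within a post-event latency window."""
    i0, i1 = int((pre + t_lo) * fs), int((pre + t_hi) * fs)
    seg = polarity * erp[i0:i1]
    return polarity * seg.max()

fs = 500
rng = np.random.default_rng(2)
eeg = rng.standard_normal(fs * 60) * 0.5       # synthetic single-channel recording
onsets = np.arange(fs, fs * 55, fs)            # one "event" per second
erp = average_epochs(eeg, onsets, fs)
p2 = peak_amplitude(erp, fs, 0.15, 0.25, polarity=+1)
n1 = peak_amplitude(erp, fs, 0.08, 0.15, polarity=-1)
```

For the ACC, the same procedure is applied with epochs time-locked to the frequency change rather than the tone onset.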
Affiliation(s)
- Chun Liang, Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, USA
- Brian Earl, Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, USA
- Ivy Thompson, Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, USA
- Kayla Whitaker, Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, USA
- Steven Cahn, Department of Composition, Musicology, and Theory, College-Conservatory of Music, University of Cincinnati, Cincinnati, OH, USA
- Jing Xiang, Department of Pediatrics and Neurology, Cincinnati Children's Hospital Medical Center, Cincinnati, OH, USA
- Qian-Jie Fu, Department of Head and Neck Surgery, University of California, Los Angeles, Los Angeles, CA, USA
- Fawen Zhang, Department of Communication Sciences and Disorders, University of Cincinnati, Cincinnati, OH, USA
43
Gfeller K, Guthe E, Driscoll V, Brown CJ. A preliminary report of music-based training for adult cochlear implant users: Rationales and development. Cochlear Implants Int 2016; 16 Suppl 3:S22-31. [PMID: 26561884 DOI: 10.1179/1467010015z.000000000269] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
Abstract
OBJECTIVE This paper provides a preliminary report of a music-based training program for adult cochlear implant (CI) recipients. Included in this report are descriptions of the rationale for music-based training, factors influencing program development, and the resulting program components. METHODS Prior studies describing experience-based plasticity in response to music training, auditory training for persons with hearing impairment, and music training for CI recipients were reviewed. These sources revealed rationales for using music to enhance speech, factors associated with successful auditory training, relevant aspects of electric hearing and music perception, and extant evidence regarding the limitations and advantages associated with parameters for music training with CI users. This informed the development of a computer-based music training program designed specifically for adult CI users. RESULTS Principles and parameters for perceptual training of music, such as stimulus choice, rehabilitation approach, and motivational concerns, were developed in relation to the unique auditory characteristics of adults with electric hearing. An outline of the resulting program components and the outcome measures for evaluating program effectiveness are presented. CONCLUSIONS Music training can enhance the perceptual accuracy of music, but it is also hypothesized to enhance several features of speech with processing requirements similar to those of music (e.g., pitch and timbre). However, additional evaluation of specific training parameters and of the impact of music-based training on the speech perception of CI users is required.
44
Abstract
Direct stimulation of the auditory nerve via a Cochlear Implant (CI) enables profoundly hearing-impaired people to perceive sounds. Many CI users find language comprehension satisfactory, but music perception is generally considered difficult. However, music contains different dimensions, which might be accessible in different ways. We aimed to highlight three main dimensions of music processing in CI users that rely on different processing mechanisms: (1) musical discrimination abilities, (2) access to meaning in music, and (3) subjective music appreciation. All three dimensions were investigated in two CI user groups (post- and prelingually deafened CI users, all implanted as adults) and a matched normal-hearing control group. The processing of musical meaning was studied using event-related potentials (with the N400 component as marker) during a music-word priming task, while music appreciation was assessed by questionnaire. The results reveal a double dissociation between the three dimensions of music processing. Despite impaired discrimination abilities in both CI user groups compared with the control group, appreciation was reduced only in postlingual CI users. While the processing of musical meaning was restorable in postlingual CI users, as shown by an N400 effect, the data from prelingual CI users lacked the N400 effect, indicating dysfunctional concept building prior to implantation.
45
46
Melodic pitch perception and lexical tone perception in Mandarin-speaking cochlear implant users. Ear Hear 2015; 36:102-10. [PMID: 25099401 DOI: 10.1097/aud.0000000000000086] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES To examine the relationship between lexical tone perception and melodic pitch perception in Mandarin-speaking cochlear implant (CI) users and to investigate the influence of previous acoustic hearing on CI users' speech and music perception. DESIGN Lexical tone perception and melodic contour identification (MCI) were measured in 21 prelingual and 11 postlingual young (aged 6-26 years) Mandarin-speaking CI users. Lexical tone recognition was measured for four tonal patterns: tone 1 (flat F0), tone 2 (rising F0), tone 3 (falling-rising F0), and tone 4 (falling F0). MCI was measured using nine five-note melodic patterns that contained changes in pitch contour, as well as different semitone spacing between notes. RESULTS Lexical tone recognition was generally good (overall mean = 81% correct), and there was no significant difference between subject groups. MCI performance was generally poor (mean = 23% correct). MCI performance was significantly better for postlingual (mean = 32% correct) than for prelingual CI participants (mean = 18% correct). After correcting for outliers, there was no significant correlation between lexical tone recognition and MCI performance for prelingual or postlingual CI participants. Age at deafness was significantly correlated with MCI performance only for postlingual participants. CI experience was significantly correlated with MCI performance for both prelingual and postlingual participants. Duration of deafness was significantly correlated with tone recognition only for prelingual participants. CONCLUSIONS Despite the prevalence of pitch cues in Mandarin, the present CI participants had great difficulty perceiving melodic pitch. The availability of amplitude and duration cues in lexical tones most likely compensated for the poor pitch perception observed in these CI listeners. Previous acoustic hearing experience seemed to benefit postlingual CI users' melodic pitch perception. Longer CI experience was associated with better MCI performance for both subject groups, suggesting that CI users' music perception may improve as they gain experience with their device.
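The MCI task described above uses five-note melodic patterns that vary in pitch contour and in the semitone spacing between notes. A sketch of how such stimuli can be generated with equal-tempered spacing; the nine contour shapes, base frequency, and note duration are typical of the MCI literature rather than this study's exact stimuli:

```python
import numpy as np

CONTOURS = {  # step direction from each note to the next: +1 up, -1 down, 0 flat
    "rising":         [+1, +1, +1, +1], "falling":        [-1, -1, -1, -1],
    "flat":           [0, 0, 0, 0],     "rising-flat":    [+1, +1, 0, 0],
    "falling-flat":   [-1, -1, 0, 0],   "flat-rising":    [0, 0, +1, +1],
    "flat-falling":   [0, 0, -1, -1],   "rising-falling": [+1, +1, -1, -1],
    "falling-rising": [-1, -1, +1, +1],
}

def contour_freqs(name, base_hz=220.0, semitones=2):
    """Five note frequencies for one melodic contour (equal-tempered spacing)."""
    steps = np.concatenate([[0], np.cumsum(CONTOURS[name])])
    return base_hz * 2.0 ** (steps * semitones / 12.0)

def synth_contour(name, fs=16000, note_dur=0.25, **kw):
    """Concatenate sine-tone notes to form the contour stimulus."""
    t = np.arange(int(fs * note_dur)) / fs
    return np.concatenate([np.sin(2 * np.pi * f * t) for f in contour_freqs(name, **kw)])

tones = synth_contour("rising-falling")
```

Widening the semitone spacing makes the contours easier to discriminate, which is how task difficulty is typically manipulated in MCI studies.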
47
Shannon RV. Auditory implant research at the House Ear Institute 1989-2013. Hear Res 2015; 322:57-66. [PMID: 25449009 PMCID: PMC4380593 DOI: 10.1016/j.heares.2014.11.003] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/11/2014] [Revised: 11/04/2014] [Accepted: 11/07/2014] [Indexed: 11/29/2022]
Abstract
The House Ear Institute (HEI) had a long and distinguished history of auditory implant innovation and development. Early clinical innovations include being one of the first cochlear implant (CI) centers, being the first center in the US to implant a child with a cochlear implant, developing the auditory brainstem implant (ABI), and developing multiple surgical approaches and tools for otology. This paper reviews the second stage of auditory implant research at House: in-depth basic research on perceptual capabilities and signal processing for both cochlear implants and auditory brainstem implants. Psychophysical studies characterized the loudness and temporal perceptual properties of electrical stimulation as a function of electrical parameters. Speech studies with the noise-band vocoder showed that only four bands of tonotopically arrayed information were sufficient for speech recognition, and that most implant users were receiving the equivalent of 8-10 bands of information. The noise-band vocoder allowed us to evaluate the effects of manipulating the number of bands, the alignment of the bands with the original tonotopic map, and distortions in the tonotopic mapping, including holes in the neural representation. Stimulation pulse rate was shown to have only a small effect on speech recognition. Electric fields were manipulated in position and sharpness, showing the potential benefit of improved tonotopic selectivity. Auditory training shows great promise for improving speech recognition for all patients. Finally, the auditory brainstem implant was developed and improved, and its application was expanded to new populations. Overall, the last 25 years of research at HEI helped increase the basic scientific understanding of electrical stimulation of hearing and contributed to improved outcomes for patients with CI and ABI devices. This article is part of a Special Issue.
Affiliation(s)
- Robert V Shannon, Department of Otolaryngology, University of Southern California, Keck School of Medicine of USC, 806 W. Adams Blvd, Los Angeles, CA 90007-2505, USA
48
Crew JD, Galvin III JJ, Landsberger DM, Fu QJ. Contributions of electric and acoustic hearing to bimodal speech and music perception. PLoS One 2015; 10:e0120279. [PMID: 25790349 PMCID: PMC4366155 DOI: 10.1371/journal.pone.0120279] [Citation(s) in RCA: 46] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/21/2014] [Accepted: 01/26/2015] [Indexed: 11/18/2022] Open
Abstract
Cochlear implant (CI) users have difficulty understanding speech in noisy listening conditions and perceiving music. Aided residual acoustic hearing in the contralateral ear can mitigate these limitations. The present study examined the contributions of electric and acoustic hearing to speech understanding in noise and melodic pitch perception. Data were collected with the CI only, the hearing aid (HA) only, and both devices together (CI+HA). Speech reception thresholds (SRTs) were adaptively measured for simple sentences in speech babble. Melodic contour identification (MCI) was measured with and without a masker instrument; the fundamental frequency of the masker was varied to be overlapping or non-overlapping with the target contour. Results showed that the CI contributes primarily to bimodal speech perception and that the HA contributes primarily to bimodal melodic pitch perception. In general, CI+HA performance was slightly improved relative to the better ear alone (CI-only) for SRTs but not for MCI, with some subjects experiencing a decrease in bimodal MCI performance relative to the better ear alone (HA-only). Individual performance was highly variable, and the contribution of either device to bimodal perception was both subject- and task-dependent. The results suggest that individualized mapping of CIs and HAs may further improve bimodal speech and music perception.
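Adaptively measured SRTs of the kind mentioned above are commonly obtained with a 1-down/1-up staircase, which converges on the signal-to-noise ratio yielding roughly 50% correct. A minimal sketch with a simulated listener; the step size, reversal count, and toy psychometric function are illustrative assumptions, not the study's procedure:

```python
import random

def measure_srt(trial_correct, start_snr=10.0, step=2.0, n_reversals=8):
    """1-down/1-up adaptive track: SNR decreases after a correct trial and
    increases after an incorrect one, converging on ~50% correct.
    `trial_correct(snr)` runs one trial and returns True/False."""
    snr, direction, reversals, rev_snrs = start_snr, None, 0, []
    while reversals < n_reversals:
        new_dir = -1 if trial_correct(snr) else +1  # harder after correct
        if direction is not None and new_dir != direction:
            reversals += 1
            rev_snrs.append(snr)                    # record SNR at each reversal
        direction = new_dir
        snr += new_dir * step
    return sum(rev_snrs[-4:]) / 4                   # SRT = mean of last reversals

random.seed(0)
# Toy psychometric function: 50% point at +2 dB SNR.
simulated_listener = lambda snr: random.random() < 1 / (1 + 10 ** (-(snr - 2.0) / 4))
srt = measure_srt(simulated_listener)
```

In a real experiment `trial_correct` would present a sentence in babble at the given SNR and score the listener's response; the simulated listener here only stands in for that loop.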
Affiliation(s)
- Joseph D. Crew, Department of Biomedical Engineering, University of Southern California, Los Angeles, California, United States of America
- John J. Galvin III, Department of Head and Neck Surgery, University of California-Los Angeles, Los Angeles, California, United States of America
- David M. Landsberger, Department of Otolaryngology, New York University School of Medicine, New York, New York, United States of America
- Qian-Jie Fu, Department of Head and Neck Surgery, University of California-Los Angeles, Los Angeles, California, United States of America
49
Fu QJ, Galvin JJ, Wang X, Wu JL. Benefits of music training in mandarin-speaking pediatric cochlear implant users. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2015; 58:163-169. [PMID: 25321148 PMCID: PMC4712852 DOI: 10.1044/2014_jslhr-h-14-0127] [Citation(s) in RCA: 24] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/09/2014] [Revised: 07/22/2014] [Accepted: 09/11/2014] [Indexed: 06/04/2023]
Abstract
PURPOSE The aims of this study were to assess young (5- to 10-year-old) Mandarin-speaking cochlear implant (CI) users' musical pitch perception and to assess the benefits of computer-based home training on performance. METHOD Melodic contour identification (MCI) was used to assess musical pitch perception in 14 Mandarin-speaking pediatric CI users; the instrument timbre and the contour length were varied as experimental parameters. Six subjects received subsequent MCI training on their home computer in which auditory and visual feedback were provided. RESULTS MCI performance was generally poor (grand mean=33.3% correct) and highly variable, with scores ranging from 9.3% to 98.1% correct; there was no significant effect of instrument timbre or contour length on performance (p>.05). After 4 weeks of training, performance sharply improved. Follow-up measures that were conducted 8 weeks after training was stopped showed no significant decline in MCI performance. For the 6 trained subjects, there was a significant effect of contour length for the training and follow-up measures. CONCLUSION These preliminary data suggest that although baseline MCI performance initially may be poor, training may greatly improve Mandarin-speaking pediatric CI users' melodic pitch perception.
Affiliation(s)
- Qian-Jie Fu, Signal Processing and Auditory Research Laboratory, David Geffen School of Medicine, University of California Los Angeles
- John J. Galvin, Signal Processing and Auditory Research Laboratory, David Geffen School of Medicine, University of California Los Angeles
- Xiaosong Wang, Signal Processing and Auditory Research Laboratory, David Geffen School of Medicine, University of California Los Angeles
50
Torppa R, Huotilainen M, Leminen M, Lipsanen J, Tervaniemi M. Interplay between singing and cortical processing of music: a longitudinal study in children with cochlear implants. Front Psychol 2014; 5:1389. [PMID: 25540628 PMCID: PMC4261723 DOI: 10.3389/fpsyg.2014.01389] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2014] [Accepted: 11/13/2014] [Indexed: 11/30/2022] Open
Abstract
Informal music activities such as singing may lead to augmented auditory perception and attention. In order to study the accuracy and development of music-related sound change detection in children with cochlear implants (CIs) and normal hearing (NH) aged 4–13 years, we recorded their auditory event-related potentials twice (at T1 and T2, 14–17 months apart). We compared their MMN (preattentive discrimination) and P3a (attention toward salient sounds) responses to changes in piano tone pitch, timbre, duration, and gaps. Of particular interest was to determine whether singing can facilitate auditory perception and attention in CI children. Compared to the NH group, the CI group had a smaller and later timbre P3a and a later pitch P3a, implying degraded discrimination and attention shifting. Duration MMN became larger from T1 to T2 only in the NH group. The development of response patterns for duration and gap changes was not similar in the CI and NH groups. Importantly, CI singers had enhanced or rapidly developing P3a or P3a-like responses across all change types. In contrast, CI non-singers showed a rapidly enlarging pitch MMN without enlargement of the P3a, and their timbre P3a became smaller and later over time. These novel results show an interplay between MMN, P3a, brain development, cochlear implantation, and singing. They imply augmented development of neural networks for attention, and more accurate neural discrimination, associated with singing. In future studies, the differential development of the P3a between CI and NH children should be taken into account when comparing these groups. Moreover, further studies are needed to assess whether singing enhances the auditory perception and attention of children with CIs.
Affiliation(s)
- Ritva Torppa, Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland; Finnish Centre for Interdisciplinary Music Research, University of Helsinki, Helsinki, Finland
- Minna Huotilainen, Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland; Finnish Centre for Interdisciplinary Music Research, University of Helsinki, Helsinki, Finland; Brain Work Research Centre, Finnish Institute of Occupational Health, Helsinki, Finland
- Miika Leminen, Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland; Finnish Centre for Interdisciplinary Music Research, University of Helsinki, Helsinki, Finland; MINDLab, Center of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark
- Jari Lipsanen, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Mari Tervaniemi, Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland; Finnish Centre for Interdisciplinary Music Research, University of Helsinki, Helsinki, Finland