1. Pastore MT, Pulling KR, Chen C, Yost WA, Dorman MF. Synchronizing Automatic Gain Control in Bilateral Cochlear Implants Mitigates Dynamic Localization Deficits Introduced by Independent Bilateral Compression. Ear Hear 2024;45:969-984. PMID: 38472134. DOI: 10.1097/aud.0000000000001492.
Abstract
OBJECTIVES The independence of left and right automatic gain controls (AGCs) used in cochlear implants can distort interaural level differences and thereby compromise dynamic sound source localization. We assessed the degree to which synchronizing left and right AGCs mitigates those difficulties as indicated by listeners' ability to use the changes in interaural level differences that come with head movements to avoid front-back reversals (FBRs). DESIGN Broadband noise stimuli were presented from one of six equally spaced loudspeakers surrounding the listener. Sound source identification was tested for stimuli presented at 70 dBA (above AGC threshold) for 10 bilateral cochlear implant patients, under conditions where (1) patients remained stationary and (2) free head movements within ±30° were encouraged. These conditions were repeated for both synchronized and independent AGCs. The same conditions were run at 50 dBA, below the AGC threshold, to assess listeners' baseline performance when AGCs were not engaged. In this way, the expected high variability in listener performance could be separated from effects of independent AGCs to reveal the degree to which synchronizing AGCs could restore localization performance to what it was without AGC compression. RESULTS The mean rate of FBRs was higher for sound stimuli presented at 70 dBA with independent AGCs, both with and without head movements, than at 50 dBA, suggesting that when AGCs were independently engaged they contributed to poorer front-back localization. When listeners remained stationary, synchronizing AGCs did not significantly reduce the rate of FBRs. When AGCs were independent at 70 dBA, head movements did not have a significant effect on the rate of FBRs. Head movements did have a significant group effect on the rate of FBRs at 50 dBA when AGCs were not engaged and at 70 dBA when AGCs were synchronized. 
Synchronization of AGCs, together with head movements, reduced the rate of FBRs to approximately that of the 50-dBA baseline condition. Synchronizing AGCs also had a significant group effect on listeners' overall percent-correct localization. CONCLUSIONS Synchronizing AGCs allowed listeners to mitigate the front-back confusions introduced by unsynchronized AGCs when head motion was permitted, returning individual listener performance to roughly what it was in the 50-dBA baseline condition when AGCs were not engaged. Synchronization of AGCs did not overcome localization deficiencies that were observed when AGCs were not engaged and that are therefore unrelated to AGC compression.
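The core mechanism here, independent compression distorting interaural level differences (ILDs), can be illustrated with a toy static-gain model (an illustrative sketch only, not the authors' processing; the threshold and compression-ratio values are arbitrary):

```python
def agc_gain_db(level_db, threshold_db=60.0, ratio=3.0):
    """Static compressive gain: above threshold, output level grows by
    only 1/ratio dB per input dB."""
    if level_db <= threshold_db:
        return 0.0
    return -(level_db - threshold_db) * (1.0 - 1.0 / ratio)

def ild_after_agc(left_db, right_db, synchronized):
    """Interaural level difference (ILD) after applying the AGC to each ear."""
    if synchronized:
        # Synchronized AGCs: one gain, driven by the louder ear, applied
        # to both sides, so the ILD is preserved.
        g_left = g_right = agc_gain_db(max(left_db, right_db))
    else:
        # Independent AGCs: the louder ear is turned down more, which
        # shrinks and distorts the ILD.
        g_left = agc_gain_db(left_db)
        g_right = agc_gain_db(right_db)
    return (left_db + g_left) - (right_db + g_right)

# Source to the left: 75 dB at the left ear, 65 dB at the right (true ILD = 10 dB).
print(ild_after_agc(75.0, 65.0, synchronized=True))   # ILD preserved (~10 dB)
print(ild_after_agc(75.0, 65.0, synchronized=False))  # ILD compressed (~3.3 dB)
```

With a shared gain the 10-dB ILD survives compression; with per-ear gains the louder ear is attenuated more and the ILD shrinks toward zero, which is the distortion that synchronizing the AGCs is designed to avoid.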
Affiliation(s)
- M Torben Pastore: College of Health Solutions, Arizona State University, Tempe, Arizona, USA
- Kathryn R Pulling: College of Health Solutions, Arizona State University, Tempe, Arizona, USA
- Chen Chen: Advanced Bionics, Valencia, California, USA
- William A Yost: College of Health Solutions, Arizona State University, Tempe, Arizona, USA
- Michael F Dorman: College of Health Solutions, Arizona State University, Tempe, Arizona, USA
2. Levin M, Zaltz Y. Voice Discrimination in Quiet and in Background Noise by Simulated and Real Cochlear Implant Users. J Speech Lang Hear Res 2023;66:5169-5186. PMID: 37992412. DOI: 10.1044/2023_jslhr-23-00019.
Abstract
PURPOSE Cochlear implant (CI) users demonstrate poor voice discrimination (VD) in quiet conditions based on the speaker's fundamental frequency (fo) and formant frequencies (i.e., vocal-tract length [VTL]). Our purpose was to examine the effect of background noise at levels that allow good speech recognition thresholds (SRTs) on VD via acoustic CI simulations and CI hearing. METHOD Forty-eight normal-hearing (NH) listeners who listened via noise-excited (n = 20) or sinewave (n = 28) vocoders and 10 prelingually deaf CI users (i.e., whose hearing loss began before language acquisition) participated in the study. First, the signal-to-noise ratio (SNR) that yields 70.7% correct SRT was assessed using an adaptive sentence-in-noise test. Next, the CI simulation listeners performed 12 adaptive VDs: six in quiet conditions, two with each cue (fo, VTL, fo + VTL), and six amid speech-shaped noise. The CI participants performed six VDs: one with each cue, in quiet and amid noise. SNR at VD testing was 5 dB higher than the individual's SRT in noise (SRTn +5 dB). RESULTS Results showed the following: (a) Better VD was achieved via the noise-excited than the sinewave vocoder, with the noise-excited vocoder better mimicking CI VD; (b) background noise had a limited negative effect on VD, only for the CI simulation listeners; and (c) there was a significant association between SNR at testing and VTL VD only for the CI simulation listeners. CONCLUSIONS For NH listeners who listen to CI simulations, noise that allows good SRT can nevertheless impede VD, probably because VD depends more on bottom-up sensory processing. Conversely, for prelingually deaf CI users, noise that allows good SRT hardly affects VD, suggesting that they rely strongly on bottom-up processing for both VD and speech recognition.
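The 70.7%-correct point targeted by the adaptive SRT procedure is the convergence point of Levitt's two-down/one-up rule. A schematic staircase is sketched below (not the authors' exact test; the psychometric function is a hypothetical listener):

```python
import math
import random

def two_down_one_up(p_correct_at, start_snr=10.0, step_db=2.0,
                    n_reversals=8, seed=1):
    """Levitt (1971) two-down/one-up staircase: the SNR drops after two
    consecutive correct responses and rises after every error, so the
    track converges on the SNR giving 70.7% correct."""
    rng = random.Random(seed)
    snr, streak, direction, reversals = start_snr, 0, None, []
    while len(reversals) < n_reversals:
        if rng.random() < p_correct_at(snr):      # simulated trial
            streak += 1
            if streak == 2:                       # two in a row -> harder
                streak = 0
                if direction == "up":
                    reversals.append(snr)
                direction = "down"
                snr -= step_db
        else:                                     # any error -> easier
            streak = 0
            if direction == "down":
                reversals.append(snr)
            direction = "up"
            snr += step_db
    return sum(reversals[2:]) / len(reversals[2:])  # mean of later reversals

# Hypothetical listener: logistic psychometric function centred at 0 dB SNR.
psychometric = lambda snr: 1.0 / (1.0 + math.exp(-snr / 2.0))
print(two_down_one_up(psychometric))  # a threshold estimate in dB
```

Averaging the later reversal points is the usual way to read the threshold off such a track; the first reversals are discarded because they still reflect the starting level.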
Affiliation(s)
- Michal Levin: Department of Communication Disorders, The Stanley Steyer School of Health Professions, Faculty of Medicine, Tel Aviv University, Israel
- Yael Zaltz: Department of Communication Disorders, The Stanley Steyer School of Health Professions, Faculty of Medicine, Tel Aviv University, Israel; Sagol School of Neuroscience, Tel Aviv University, Israel
3. Musical Mistuning Perception and Appraisal in Cochlear Implant Recipients. Otol Neurotol 2023;44:e281-e286. PMID: 36922018. DOI: 10.1097/mao.0000000000003860.
Abstract
OBJECTIVE Music is an art form that can evoke powerful emotions, and the harmonious presence of the human voice in music is an impactful part of this process; as a result, vocals have had a significant influence on contemporary music. How cochlear implant (CI) recipients perceive various aspects of music has been described; however, how well they perceive vocal tuning within music is not well known. Hence, this study evaluated the mistuning perception of CI recipients and compared their performance with that of normal-hearing (NH) listeners. STUDY DESIGN, SETTING, AND PATIENTS A total of 16 CI users (7 cisgender men, 9 cisgender women) and 16 sex-matched NH controls, with average ages of 30.2 (±10.9; range, 19-53) years and 23.5 (±6.1; range, 20-37) years, respectively, were enrolled in this study. Mistuning ability was evaluated using the mistuning perception test (MPT), and self-perceived music perception and engagement were assessed using the music-related quality-of-life questionnaire. Test performance was measured and reported on the item-response-theory metric, with z scores ranging from -4 to +4. RESULTS A significant difference in MPT scores was found between NH listeners and CI recipients, and a significant correlation was noted between the frequency subscale of the music-related quality-of-life questionnaire and MPT scores. No significant correlations were found between MPT performance and age, CI age, or CI usage duration. CONCLUSIONS This study revealed that musical mistuning perception is a limitation for CI recipients, similar to previously evaluated aspects of music perception. It is therefore important to consider this aspect in the assessment of music perception and enjoyment and in music-based auditory interventions for CI recipients, as vocals are paramount in music perception and recreation. The MPT is a convenient and accessible tool for mistuning assessment in CI and hearing-aid users.
4. Alvarez F, Kipping D, Nogueira W. A computational model to simulate spectral modulation and speech perception experiments of cochlear implant users. Front Neuroinform 2023;17:934472. PMID: 37006637. PMCID: PMC10061543. DOI: 10.3389/fninf.2023.934472.
Abstract
Speech understanding in cochlear implant (CI) users presents large intersubject variability that may be related to different aspects of the peripheral auditory system, such as the electrode-nerve interface and neural health. This variability makes it challenging to prove differences in performance between CI sound coding strategies in regular clinical studies; computational models, however, can be helpful for assessing the speech performance of CI users in an environment where these physiological aspects can be controlled. In this study, differences in performance between three variants of the HiRes Fidelity 120 (F120) sound coding strategy were studied with a computational model. The model consists of (i) a processing stage with the sound coding strategy, (ii) a three-dimensional electrode-nerve interface that accounts for auditory nerve fiber (ANF) degeneration, (iii) a population of phenomenological ANF models, and (iv) a feature extractor that obtains the internal representation (IR) of the neural activity. As the back-end, the simulation framework for auditory discrimination experiments (FADE) was chosen. Two experiments relevant to speech understanding were performed: one on spectral modulation thresholds (SMTs) and one on speech reception thresholds (SRTs). These experiments covered three neural health conditions (healthy ANFs, and moderate and severe ANF degeneration). The F120 was configured to use sequential stimulation (F120-S) and simultaneous stimulation with two (F120-P) or three (F120-T) simultaneously active channels. Simultaneous stimulation causes electric interaction that smears the spectrotemporal information transmitted to the ANFs, and it has been hypothesized to lead to even worse information transmission under poor neural health conditions. In general, worse neural health led to worse predicted performance; the detriment, however, was small compared with clinical data. Results of the SRT experiments indicated that performance with simultaneous stimulation, especially F120-T, was more affected by neural degeneration than performance with sequential stimulation. Results of the SMT experiments showed no significant differences in performance. Although the proposed model in its current state can perform SMT and SRT experiments, it cannot yet reliably predict real CI users' performance. Improvements related to the ANF model, feature extraction, and the predictor algorithm are discussed.
Affiliation(s)
- Franklin Alvarez: Medizinische Hochschule Hannover, Hannover, Germany; Cluster of Excellence “Hearing4All”, Hannover, Germany
- Daniel Kipping: Medizinische Hochschule Hannover, Hannover, Germany; Cluster of Excellence “Hearing4All”, Hannover, Germany
- Waldo Nogueira (corresponding author): Medizinische Hochschule Hannover, Hannover, Germany; Cluster of Excellence “Hearing4All”, Hannover, Germany
5. Torppa R, Kuuluvainen S, Lipsanen J. The development of cortical processing of speech differs between children with cochlear implants and normal hearing and changes with parental singing. Front Neurosci 2022;16:976767. PMID: 36507354. PMCID: PMC9731313. DOI: 10.3389/fnins.2022.976767.
Abstract
Objective The aim of the present study was to investigate the development of speech processing in children with normal hearing (NH) and children with cochlear implants (CIs) using a multifeature event-related potential (ERP) paradigm. Because singing is associated with enhanced attention and speech perception, its connection to ERPs was also investigated in the CI group. Methods The paradigm included five change types in a pseudoword: two that are easy to detect with CIs (duration, gap) and three that are difficult (vowel identity, pitch, intensity). The positive mismatch responses (pMMR), mismatch negativity (MMN), P3a, and late differentiating negativity (LDN) responses of preschoolers (below 6 years 9 months) and schoolchildren (above 6 years 9 months) with NH or CIs were investigated at two time points (T1, T2) with linear mixed modeling (LMM). For the CI group, the association between singing at home and ERP development was also modeled with LMM. Results Overall, responses elicited by the easy- and difficult-to-detect changes differed between the CI and NH groups. Compared with the NH group, the CI group had smaller MMNs to vowel duration changes and gaps, larger P3a responses to gaps, and larger pMMRs and smaller LDNs to vowel identity changes. Preschoolers had smaller P3a responses and larger LDNs to gaps, and larger pMMRs to vowel identity changes, than schoolchildren. In addition, the pMMRs to gaps increased from T1 to T2 in preschoolers. In the CI group, more parental singing was associated with increasing pMMR amplitudes, and less parental singing with decreasing P3a amplitudes, from T1 to T2. Conclusion The multifeature paradigm is suitable for assessing the development of cortical speech processing in children. In children with CIs, cortical discrimination is often reflected in pMMR and P3a responses, whereas in children with NH it is reflected in MMN and LDN responses. Moreover, the cortical speech discrimination of children with CIs develops late; over time and age their processing of speech sound changes develops, as does that of children with NH. Importantly, multisensory activities such as parental singing can improve discrimination of, and attention shifting toward, speech changes in children with CIs. These novel results should be taken into account in future research and rehabilitation.
Affiliation(s)
- Ritva Torppa: Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Centre of Excellence in Music, Mind, Body and Brain, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Soila Kuuluvainen: Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Department of Digital Humanities, Faculty of Arts, University of Helsinki, Helsinki, Finland
- Jari Lipsanen: Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
6. Leterme G, Guigou C, Guenser G, Bigand E, Bozorg Grayeli A. Effect of Sound Coding Strategies on Music Perception with a Cochlear Implant. J Clin Med 2022;11:4425. PMID: 35956042. PMCID: PMC9369156. DOI: 10.3390/jcm11154425.
Abstract
The goal of this study was to evaluate the music perception of cochlear implantees with two different sound processing strategies. Methods: Twenty-one patients with unilateral or bilateral cochlear implants (Oticon Medical®) were included. A music trial evaluated emotions (sad versus happy based on tempo and/or minor versus major modes) with three tests of increasing difficulty. This was followed by a test evaluating the perception of musical dissonances (marked out of 10). A novel sound processing strategy reducing spectral distortions (CrystalisXDP, Oticon Medical) was compared to the standard strategy (main peak interleaved sampling). Each strategy was used one week before the music trial. Results: Total music score was higher with CrystalisXDP than with the standard strategy. Nine patients (21%) categorized music above the random level (>5) on test 3 only based on mode with either of the strategies. In this group, CrystalisXDP improved the performances. For dissonance detection, 17 patients (40%) scored above random level with either of the strategies. In this group, CrystalisXDP did not improve the performances. Conclusions: CrystalisXDP, which enhances spectral cues, seemed to improve the categorization of happy versus sad music. Spectral cues could participate in musical emotions in cochlear implantees and improve the quality of musical perception.
Affiliation(s)
- Gaëlle Leterme: Otolaryngology, Head and Neck Surgery Department, Dijon University Hospital, 21000 Dijon, France; ImVia Research Laboratory, Bourgogne-Franche-Comté University, 21000 Dijon, France
- Caroline Guigou (corresponding author; Tel.: +33-615718531): Otolaryngology, Head and Neck Surgery Department, Dijon University Hospital, 21000 Dijon, France; ImVia Research Laboratory, Bourgogne-Franche-Comté University, 21000 Dijon, France
- Geoffrey Guenser: Otolaryngology, Head and Neck Surgery Department, Dijon University Hospital, 21000 Dijon, France
- Emmanuel Bigand: LEAD Research Laboratory, CNRS UMR 5022, Bourgogne-Franche-Comté University, 21000 Dijon, France
- Alexis Bozorg Grayeli: Otolaryngology, Head and Neck Surgery Department, Dijon University Hospital, 21000 Dijon, France; ImVia Research Laboratory, Bourgogne-Franche-Comté University, 21000 Dijon, France
7. The burst gap is a peripheral temporal code for pitch perception that is shared across audition and touch. Sci Rep 2022;12:11014. PMID: 35773321. PMCID: PMC9246943. DOI: 10.1038/s41598-022-15269-5.
Abstract
When tactile afferents were manipulated to fire in periodic bursts of spikes, we discovered that the perceived pitch corresponded to the inter-burst interval (burst gap) in a spike train, rather than the spike rate or burst periodicity as previously thought. Given that tactile frequency mechanisms have many analogies to audition, and indications that temporal frequency channels are linked across the two modalities, we investigated whether there is burst gap temporal encoding in the auditory system. To link this putative neural code to perception, human subjects (n = 13, 6 females) assessed pitch elicited by trains of temporally-structured acoustic pulses in psychophysical experiments. Each pulse was designed to excite a fixed population of cochlear neurons, precluding place of excitation cues, and to elicit desired temporal spike trains in activated afferents. We tested periodicities up to 150 Hz using a variety of burst patterns and found striking deviations from periodicity-predicted pitch. Like the tactile system, the duration of the silent gap between successive bursts of neural activity best predicted perceived pitch, emphasising the role of peripheral temporal coding in shaping pitch. This suggests that temporal patterning of stimulus pulses in cochlear implant users might improve pitch perception.
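The burst-gap prediction itself is simple to state: perceived pitch follows the reciprocal of the silent interval between bursts rather than the burst repetition rate. A small sketch (illustrative; the stimulus parameters below are made up, not taken from the study):

```python
def predicted_pitch_hz(burst_period_s, pulses_per_burst, intra_burst_interval_s):
    """Pitch predicted by the burst-gap code: the reciprocal of the silent
    interval between the last pulse of one burst and the first pulse of
    the next, rather than the burst repetition rate."""
    burst_duration = (pulses_per_burst - 1) * intra_burst_interval_s
    gap = burst_period_s - burst_duration
    return 1.0 / gap

# 50 Hz burst rate (20 ms period) with 3 pulses per burst spaced 2 ms apart:
# periodicity predicts 50 Hz, but the silent gap is 20 - 4 = 16 ms.
print(predicted_pitch_hz(0.020, 3, 0.002))  # ~62.5 Hz
# With single pulses the gap equals the period and the two codes agree:
print(predicted_pitch_hz(0.020, 1, 0.002))  # ~50 Hz
```

The divergence between the two predictions only appears for multi-pulse bursts, which is why temporally structured pulse trains were needed to separate them psychophysically.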
8. Soleimanifar S, Staisloff HE, Aronoff JM. The effect of simulated insertion depth differences on the vocal pitches of cochlear implant users. JASA Express Lett 2022;2:044401. PMID: 36154233. DOI: 10.1121/10.0010243.
Abstract
Cochlear implant (CI) users often produce different vocal pitches when using their left versus right CI. One possible explanation for this is that insertion depth differs across the two CIs. The goal of this study was to investigate the role of electrode insertion depth in the production of vocal pitch. Eleven individuals with bilateral CIs used maps simulating differences in insertion depth. Participants produced a sustained vowel and sang Happy Birthday. Approximately half the participants significantly shifted the pitch of their voice in response to different simulated insertion depths. The results suggest insertion depth differences can alter produced vocal pitch.
Affiliation(s)
- Simin Soleimanifar: Speech and Hearing Science Department, University of Illinois at Urbana-Champaign, 901 South 6th Street, Champaign, Illinois 61801, USA
- Hannah E Staisloff: Speech and Hearing Science Department, University of Illinois at Urbana-Champaign, 901 South 6th Street, Champaign, Illinois 61801, USA
- Justin M Aronoff: Speech and Hearing Science Department, University of Illinois at Urbana-Champaign, 901 South 6th Street, Champaign, Illinois 61801, USA
9. Moore BCJ. Listening to Music Through Hearing Aids: Potential Lessons for Cochlear Implants. Trends Hear 2022;26:23312165211072969. PMID: 35179052. PMCID: PMC8859663. DOI: 10.1177/23312165211072969.
Abstract
Some of the problems experienced by users of hearing aids (HAs) when listening to music are relevant to cochlear implants (CIs). One problem is related to the high peak levels (up to 120 dB SPL) that occur in live music. Some HAs and CIs overload at such levels, because of the limited dynamic range of the microphones and analogue-to-digital converters (ADCs), leading to perceived distortion. Potential solutions are to use 24-bit ADCs or to include an adjustable gain between the microphones and the ADCs. A related problem is how to squeeze the wide dynamic range of music into the limited dynamic range of the user, which can be only 6-20 dB for CI users. In HAs, this is usually done via multi-channel amplitude compression (automatic gain control, AGC). In CIs, a single-channel front-end AGC is applied to the broadband input signal or a control signal derived from a running average of the broadband signal level is used to control the mapping of the channel envelope magnitude to an electrical signal. This introduces several problems: (1) an intense narrowband signal (e.g. a strong bass sound) reduces the level for all frequency components, making some parts of the music harder to hear; (2) the AGC introduces cross-modulation effects that can make a steady sound (e.g. sustained strings or a sung note) appear to fluctuate in level. Potential solutions are to use several frequency channels to create slowly varying gain-control signals and to use slow-acting (or dual time-constant) AGC rather than fast-acting AGC.
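The dual time-constant idea Moore proposes, a fast attack to catch peaks and a slow release to avoid pumping, can be sketched as a simple one-pole level tracker (an illustrative sketch with arbitrary parameter values, not a description of any specific device):

```python
import math

def dual_time_constant_agc(levels_db, threshold_db=65.0, ratio=3.0,
                           attack_s=0.005, release_s=1.0, fs=1000.0):
    """Broadband AGC with a fast attack (quickly catches peaks that would
    overload) and a slow release (avoids pumping and cross-modulation on
    steady musical sounds)."""
    a_attack = math.exp(-1.0 / (attack_s * fs))
    a_release = math.exp(-1.0 / (release_s * fs))
    smoothed, out = levels_db[0], []
    for level in levels_db:
        a = a_attack if level > smoothed else a_release  # fast up, slow down
        smoothed = a * smoothed + (1.0 - a) * level
        # Compressive gain above threshold, never any positive gain here.
        gain = min(0.0, -(smoothed - threshold_db) * (1.0 - 1.0 / ratio))
        out.append(level + gain)
    return out

# A 100 dB peak in otherwise 60 dB material: the peak is attenuated almost
# immediately, and the gain recovers slowly after the peak ends.
levels = [60.0] * 100 + [100.0] * 50 + [60.0] * 200
processed = dual_time_constant_agc(levels)
```

The slow release is a trade-off: it prevents the gain from tracking (and flattening) musical amplitude fluctuations, at the cost of a brief "hole" in level after an intense peak.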
Affiliation(s)
- Brian C J Moore: Cambridge Hearing Group, Department of Psychology, University of Cambridge, Cambridge, England
10. Saadoun A, Schein A, Péan V, Legrand P, Aho Glélé LS, Bozorg Grayeli A. Frequency Fitting Optimization Using Evolutionary Algorithm in Cochlear Implant Users with Bimodal Binaural Hearing. Brain Sci 2022;12:253. PMID: 35204015. PMCID: PMC8870060. DOI: 10.3390/brainsci12020253.
Abstract
Optimizing hearing in patients with a unilateral cochlear implant (CI) and contralateral acoustic hearing is a challenge. Evolutionary algorithms (EAs) can explore a large set of potential solutions in a stochastic manner to approach the optimum of a minimization problem. The objective of this study was to develop and evaluate an EA-based protocol to modify the default frequency settings of the CI MAP (fMAP) in patients with bimodal hearing. Methods: This monocentric prospective study included 27 adult CI users with post-lingual deafness and contralateral functional hearing. A fitting program based on an EA was developed to approach the best fMAP. Generated fMAPs were tested by speech recognition (word recognition score, WRS) in noise and free-field-like conditions. By combining these first fMAPs and adding some random changes, a total of 13 fMAPs over 3 generations were produced. Participants were evaluated before and 45 to 60 days after the fitting by WRS in noise and by questionnaires on global sound quality and music perception in bimodal binaural conditions. Results: WRS in noise improved with the EA-based fitting relative to the default fMAP (default: 41.67 ± 9.70%; EA-based: 64.63 ± 16.34%; p = 0.0001, signed-rank test). Global sound quality and music perception also improved, as judged by ratings on questionnaires and scales. Finally, most patients chose to keep the new fitting definitively. Conclusions: By modifying the default fMAPs, the EA improved speech discrimination in noise and sound quality in bimodal binaural conditions.
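The EA loop described here, generate variants of the current best fMAP, test each, keep improvements, can be sketched generically. In the study the fitness evaluation was a word-recognition test with the listener in the loop; the stand-in fitness function and band-edge values below are hypothetical:

```python
import random

def evolve(fitness, base_map, n_generations=3, children_per_gen=4,
           sigma_hz=50.0, seed=0):
    """(1+lambda)-style evolutionary search: mutate the current best fMAP
    with Gaussian perturbations, evaluate each child, and keep a child
    only if it scores at least as well as the current best."""
    rng = random.Random(seed)
    best, best_score = list(base_map), fitness(base_map)
    for _ in range(n_generations):
        for _ in range(children_per_gen):
            child = [f + rng.gauss(0.0, sigma_hz) for f in best]
            score = fitness(child)
            if score >= best_score:
                best, best_score = child, score
    return best, best_score

# Stand-in fitness: distance to hypothetical ideal band edges for a listener
# (in the study, this evaluation was a word-recognition test in noise).
ideal = [250.0, 500.0, 1000.0, 2000.0, 4000.0]
fitness = lambda fmap: -sum((f - i) ** 2 for f, i in zip(fmap, ideal))
default_map = [300.0, 600.0, 1200.0, 2400.0, 4800.0]
best_map, best_score = evolve(fitness, default_map)
```

Because each fitness evaluation costs a full listening test, the budget is tiny (13 fMAPs over 3 generations in the study), which is exactly the regime where greedy elitist EAs of this shape are a sensible choice.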
Affiliation(s)
- Alexis Saadoun: Department of Otolaryngology—Head and Neck Surgery, Dijon University Hospital, 21000 Dijon, France
- Antoine Schein: Department of Otolaryngology—Head and Neck Surgery, Dijon University Hospital, 21000 Dijon, France
- Vincent Péan: Clinical Support Department, MED-EL, 75012 Paris, France
- Pierrick Legrand: Institute of Mathematics of Bordeaux, UMR CNRS 5251, ASTRAL Team, Inria Bordeaux Sud-Ouest, University of Bordeaux, 33405 Talence, France
- Ludwig Serge Aho Glélé: Department of Hospital Epidemiology and Infection Control, Dijon University Hospital, 21000 Dijon, France
- Alexis Bozorg Grayeli (corresponding author): Department of Otolaryngology—Head and Neck Surgery, Dijon University Hospital, 21000 Dijon, France; ImVia Research Laboratory, Bourgogne-Franche Comté University, 21000 Dijon, France
11. Joly CA, Reynard P, Hermann R, Seldran F, Gallego S, Idriss S, Thai-Van H. Intra-Cochlear Current Spread Correlates with Speech Perception in Experienced Adult Cochlear Implant Users. J Clin Med 2021;10:5819. PMID: 34945115. PMCID: PMC8709369. DOI: 10.3390/jcm10245819.
Abstract
Broader intra-cochlear current spread (ICCS) implies greater cochlear implant (CI) channel interaction. This study aimed to investigate the relationship between ICCS and speech intelligibility in experienced CI users. Using the voltage matrices collected for impedance measurements, an individual exponential spread coefficient (ESC) was computed. Speech audiometry was performed to determine intelligibility at 40 dB Sound Pressure Level (SPL) and the 50% speech reception threshold (I40 and SRT50, respectively). Correlations between the ESC and both I40 and SRT50 were assessed. A total of 36 adults (mean age: 50 years) with more than 11 months (mean: 34 months) of CI experience were included. In the 21 subjects for whom all electrodes were active, the ESC was moderately correlated with both I40 (r = −0.557, p = 0.009) and SRT50 (r = 0.569, p = 0.007). The results indicate that speech perception performance is negatively affected by ICCS. Estimates of current spread in the close vicinity of the CI electrodes, obtained prior to any activation of auditory neurons, are indispensable for better characterizing the relationship between CI stimulation and auditory perception in cochlear implantees.
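The abstract does not spell out the ESC computation, but a common way to obtain an exponential spread coefficient from a voltage-versus-distance profile is a log-linear least-squares fit, V(d) ≈ V0·exp(−k·d). The sketch below uses that assumed formulation with synthetic data:

```python
import math

def exponential_spread_coefficient(distances_mm, voltages):
    """Least-squares fit of ln V = ln V0 - k*d; returns the decay
    coefficient k (a larger k means a steeper voltage decay away from
    the stimulating electrode, i.e. less current spread)."""
    ys = [math.log(v) for v in voltages]
    n = len(distances_mm)
    mx = sum(distances_mm) / n
    my = sum(ys) / n
    slope = (sum((x - mx) * (y - my) for x, y in zip(distances_mm, ys))
             / sum((x - mx) ** 2 for x in distances_mm))
    return -slope

# Synthetic voltage profile decaying with distance from the stimulating
# electrode, roughly exp(-0.5 * d):
d_mm = [0.0, 1.0, 2.0, 3.0, 4.0]
v = [1.00, 0.61, 0.37, 0.22, 0.14]
print(exponential_spread_coefficient(d_mm, v))  # close to 0.5
```

Fitting in the log domain turns the exponential into a straight line, so a single slope summarizes each electrode's spread; the study's reported correlations relate such a coefficient to I40 and SRT50.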
Affiliation(s)
- Charles-Alexandre Joly: Institut de l’Audition, Institut Pasteur, Université de Paris, INSERM, 75012 Paris, France; Université Claude Bernard Lyon 1, 69100 Villeurbanne, France; Service d’Audiologie et d’Explorations Otoneurologiques, Hôpital Edouard Herriot, Hospices Civils de Lyon, 69003 Lyon, France
- Pierre Reynard: Institut de l’Audition, Institut Pasteur, Université de Paris, INSERM, 75012 Paris, France; Université Claude Bernard Lyon 1, 69100 Villeurbanne, France; Service d’Audiologie et d’Explorations Otoneurologiques, Hôpital Edouard Herriot, Hospices Civils de Lyon, 69003 Lyon, France
- Ruben Hermann: Université Claude Bernard Lyon 1, 69100 Villeurbanne, France; Integrative, Multisensory, Perception, Action and Cognition Team (IMPACT), Inserm U1028, CNRS UMR5292, Lyon Neuroscience Research Center, 69675 Bron, France; Service d’ORL, Chirurgie Cervico-Faciale et d’Audiophonologie, Hospices Civils de Lyon, Hôpital Edouard Herriot, 69003 Lyon, France
- Stéphane Gallego: Université Claude Bernard Lyon 1, 69100 Villeurbanne, France; Neuronal Dynamics and Audition Team (DNA), Laboratory of Cognitive Neuroscience, CNRS UMR7291, Aix-Marseille University, CEDEX 3, 13331 Marseille, France
- Samar Idriss: Service d’Audiologie et d’Explorations Otoneurologiques, Hôpital Edouard Herriot, Hospices Civils de Lyon, 69003 Lyon, France
- Hung Thai-Van (corresponding author): Institut de l’Audition, Institut Pasteur, Université de Paris, INSERM, 75012 Paris, France; Université Claude Bernard Lyon 1, 69100 Villeurbanne, France; Service d’Audiologie et d’Explorations Otoneurologiques, Hôpital Edouard Herriot, Hospices Civils de Lyon, 69003 Lyon, France
12. Huang EHH, Wu CM, Lin HC. Combination and Comparison of Sound Coding Strategies Using Cochlear Implant Simulation With Mandarin Speech. IEEE Trans Neural Syst Rehabil Eng 2021;29:2407-2416. PMID: 34767509. DOI: 10.1109/tnsre.2021.3128064.
Abstract
Three cochlear implant (CI) sound coding strategies were combined in the same signal-processing path and compared for speech intelligibility with vocoded Mandarin sentences. The three strategies, the biologically-inspired hearing aid algorithm (BioAid), envelope enhancement (EE), and fundamental frequency modulation (F0mod), were combined with the advanced combination encoder (ACE) strategy; hence, four singular and four combinational coding strategies were derived. Mandarin sentences with speech-shaped noise were processed using these coding strategies, and speech understanding of the vocoded sentences was evaluated using the short-time objective intelligibility (STOI) measure and subjective sentence-recognition tests with normal-hearing listeners. For signal-to-noise ratios of 5 dB or above, the EE strategy had slightly higher average scores than ACE in both the STOI and listening tests. The addition of EE to BioAid slightly increased the mean scores for BioAid+EE, which was the combination strategy with the highest scores in both objective and subjective speech intelligibility. The benefits of BioAid, F0mod, and the four combinational coding strategies were not observed in CI simulation. The findings of this study may be useful for the future design of coding strategies and for related studies with Mandarin.
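The vocoder processing behind CI simulations like this one extracts band envelopes and re-imposes them on new carriers, discarding temporal fine structure. A minimal single-band sketch (illustrative only; the sampling rate, envelope cutoff, and carrier values are arbitrary):

```python
import math

def envelope(x, fs, cutoff_hz=50.0):
    """Envelope extraction as in channel vocoders: full-wave rectification
    followed by a one-pole low-pass filter, keeping the slow amplitude
    contour and discarding temporal fine structure."""
    a = math.exp(-2.0 * math.pi * cutoff_hz / fs)
    env, y = [], 0.0
    for s in x:
        y = a * y + (1.0 - a) * abs(s)
        env.append(y)
    return env

def vocode_band(x, fs, carrier_hz):
    """Replace one band's fine structure with a sine carrier modulated by
    the extracted envelope (sinewave-vocoder style)."""
    return [e * math.sin(2.0 * math.pi * carrier_hz * n / fs)
            for n, e in enumerate(envelope(x, fs))]

# A 1 kHz tone with a slowly rising amplitude: the vocoded band keeps the
# amplitude contour but carries it on the channel-centre frequency instead.
fs = 16000
x = [(n / fs) * math.sin(2.0 * math.pi * 1000.0 * n / fs) for n in range(1600)]
y = vocode_band(x, fs, carrier_hz=1500.0)
```

A full simulation repeats this per analysis band and sums the bands; because lexical tone in Mandarin rides partly on fine structure that this processing removes, vocoded Mandarin is a particularly demanding test for coding strategies.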
13
Arjmandi M, Houston D, Wang Y, Dilley L. Estimating the reduced benefit of infant-directed speech in cochlear implant-related speech processing. Neurosci Res 2021; 171:49-61. PMID: 33484749; PMCID: PMC8289972; DOI: 10.1016/j.neures.2021.01.007.
Abstract
Caregivers modify their speech when talking to infants, a speaking style known as infant-directed speech (IDS). Compared with adult-directed speech (ADS), IDS facilitates language learning in infants with normal hearing (NH). While infants with NH and those with cochlear implants (CIs) prefer listening to IDS over ADS, it is as yet unknown how CI processing affects the acoustic distinctiveness between ADS and IDS, or the intelligibility of each. This study analyzed the speech of seven female adult talkers to model the effects of simulated CI processing on (1) the acoustic distinctiveness between ADS and IDS, (2) estimates of the intelligibility of caregivers' speech in ADS and IDS, and (3) individual differences in caregivers' ADS-to-IDS modification and estimated speech intelligibility. Results suggest that CI processing substantially degrades both the acoustic distinctiveness between ADS and IDS and the intelligibility benefit derived from ADS-to-IDS modifications. Moreover, the variability across individual talkers in the acoustic implementation of ADS-to-IDS modification and in estimated speech intelligibility was significantly reduced by CI processing. The findings are discussed in the context of the link between IDS and language learning in infants with CIs.
Affiliation(s)
- Meisam Arjmandi
- Department of Communicative Sciences and Disorders, Michigan State University, 1026 Red Cedar Road, East Lansing, MI 48824, USA.
- Derek Houston
- Department of Otolaryngology - Head and Neck Surgery, The Ohio State University, 915 Olentangy River Road, Columbus, OH 43212, USA
- Yuanyuan Wang
- Department of Otolaryngology - Head and Neck Surgery, The Ohio State University, 915 Olentangy River Road, Columbus, OH 43212, USA
- Laura Dilley
- Department of Communicative Sciences and Disorders, Michigan State University, 1026 Red Cedar Road, East Lansing, MI 48824, USA
14
Pastore MT, Pulling KR, Chen C, Yost WA, Dorman MF. Effects of Bilateral Automatic Gain Control Synchronization in Cochlear Implants With and Without Head Movements: Sound Source Localization in the Frontal Hemifield. J Speech Lang Hear Res 2021; 64:2811-2824. PMID: 34100627; PMCID: PMC8632503; DOI: 10.1044/2021_jslhr-20-00493.
Abstract
Purpose For bilaterally implanted patients, the automatic gain control (AGC) in both left and right cochlear implant (CI) processors is usually neither linked nor synchronized. At high AGC compression ratios, this lack of coordination between the two processors can distort interaural level differences, the only useful interaural difference cue available to CI patients. This study assessed the improvement, if any, in the utility of interaural level differences for sound source localization in the frontal hemifield when AGCs were synchronized versus independent and when listeners were stationary versus allowed to move their heads. Method Sound source identification of broadband noise stimuli was tested for seven bilateral CI patients using 13 loudspeakers in the frontal hemifield, under conditions where AGCs were linked and unlinked. For half the conditions, patients remained stationary; in the other half, they were encouraged to rotate or reorient their heads within a range of approximately ± 30° during sound presentation. Results In general, those listeners who already localized reasonably well with independent AGCs gained the least from AGC synchronization, perhaps because there was less room for improvement. Those listeners who performed worst with independent AGCs gained the most from synchronization. All listeners performed as well or better with synchronization than without; however, intersubject variability was high. Head movements had little impact on the effectiveness of synchronization of AGCs. Conclusion Synchronization of AGCs offers one promising strategy for improving localization performance in the frontal hemifield for bilaterally implanted CI patients. Supplemental Material https://doi.org/10.23641/asha.14681412.
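The ILD distortion at the heart of this study can be shown with a toy static compressor. This is an illustration only, not the clinical AGC used in the processors; the 60 dB threshold and 3:1 compression ratio are assumed values:

```python
def compress_db(level_db, threshold_db=60.0, ratio=3.0):
    """Toy static compressor: above threshold, output grows at 1/ratio."""
    if level_db <= threshold_db:
        return level_db
    return threshold_db + (level_db - threshold_db) / ratio

def ild_after_agc(left_db, right_db, synchronized):
    """Interaural level difference (left minus right) after AGC."""
    if synchronized:
        # Shared gain driven by the louder ear: the ILD is preserved.
        ref = max(left_db, right_db)
        gain = compress_db(ref) - ref
        return (left_db + gain) - (right_db + gain)
    # Independent AGCs compress each ear separately, shrinking the ILD.
    return compress_db(left_db) - compress_db(right_db)

# Source off to the left: 75 dB at the left ear, 65 dB at the right (10 dB ILD).
print(ild_after_agc(75, 65, synchronized=True))             # → 10.0
print(round(ild_after_agc(75, 65, synchronized=False), 1))  # → 3.3
```

With independent AGCs the louder (nearer-ear) signal is compressed more, so the 10 dB ILD shrinks to about 3.3 dB under these assumed parameters, while a shared gain leaves it intact; below threshold (as in the studies' quieter baseline conditions) neither scheme alters the ILD.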
15
Meta-Analysis on the Identification of Linguistic and Emotional Prosody in Cochlear Implant Users and Vocoder Simulations. Ear Hear 2021; 41:1092-1102. PMID: 32251011; DOI: 10.1097/aud.0000000000000863.
Abstract
OBJECTIVES This study quantitatively assesses how cochlear implants (CIs) and vocoder simulations of CIs influence the identification of linguistic and emotional prosody in nontonal languages. By means of meta-analysis, it was explored how accurately CI users and normal-hearing (NH) listeners of vocoder simulations (henceforth: simulation listeners) identify prosody compared with NH listeners of unprocessed speech (henceforth: NH listeners), whether this effect of electric hearing differs between CI users and simulation listeners, and whether the effect of electric hearing is influenced by the type of prosody that listeners identify or by the availability of specific cues in the speech signal. DESIGN Records were found by searching the PubMed Central, Web of Science, Scopus, Science Direct, and PsycINFO databases (January 2018) using the search terms "cochlear implant prosody" and "vocoder prosody." Records (published in English) were included that reported results of experimental studies comparing CI users' and/or simulation listeners' identification of linguistic and/or emotional prosody in nontonal languages to that of NH listeners (all ages included). Studies that met the inclusion criteria were subjected to a multilevel random-effects meta-analysis. RESULTS Sixty-four studies reported in 28 records were included in the meta-analysis. The analysis indicated that CI users and simulation listeners were less accurate in correctly identifying linguistic and emotional prosody compared with NH listeners, that the identification of emotional prosody was more strongly compromised by the electric hearing speech signal than linguistic prosody was, and that the low quality of transmission of fundamental frequency (f0) through the electric hearing speech signal was the main cause of compromised prosody identification in CI users and simulation listeners. 
Moreover, results indicated that the accuracy with which CI users and simulation listeners identified linguistic and emotional prosody was comparable, suggesting that vocoder simulations with carefully selected parameters can provide a good estimate of how prosody may be identified by CI users. CONCLUSIONS The meta-analysis revealed a robust negative effect of electric hearing, where CIs and vocoder simulations had a similar negative influence on the identification of linguistic and emotional prosody, which seemed mainly due to inadequate transmission of f0 cues through the degraded electric hearing speech signal of CIs and vocoder simulations.
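The pooling step in a random-effects meta-analysis can be sketched with the classic DerSimonian-Laird estimator. This is a single-level simplification for illustration, not the multilevel model fitted in the study above:

```python
import numpy as np

def dersimonian_laird(effects, variances):
    """Pool effect sizes under a DerSimonian-Laird random-effects model.
    Returns (pooled effect, its standard error, between-study variance)."""
    y = np.asarray(effects, dtype=float)
    v = np.asarray(variances, dtype=float)
    w = 1.0 / v                               # fixed-effect weights
    y_fixed = np.sum(w * y) / np.sum(w)
    q = np.sum(w * (y - y_fixed) ** 2)        # Cochran's Q heterogeneity statistic
    df = len(y) - 1
    c = np.sum(w) - np.sum(w ** 2) / np.sum(w)
    tau2 = max(0.0, (q - df) / c)             # between-study variance estimate
    w_star = 1.0 / (v + tau2)                 # random-effects weights
    pooled = np.sum(w_star * y) / np.sum(w_star)
    se = np.sqrt(1.0 / np.sum(w_star))
    return pooled, se, tau2
```

Each study contributes an effect size and its sampling variance; when heterogeneity (Q) exceeds its degrees of freedom, tau-squared grows and the weighting moves toward equal weights across studies.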
16
Tak S, Yathiraj A. Comparison of Relative Loudness Judgment in Children using Listening Devices with Typically Developing Children. Int Arch Otorhinolaryngol 2021; 25:e54-e63. PMID: 33542752; PMCID: PMC7850889; DOI: 10.1055/s-0040-1702971.
Abstract
Introduction
Loudness perception is considered important for the perception of emotions, relative distance, and stress patterns. However, certain digital hearing devices worn by those with hearing impairment may alter loudness perception. This can happen in devices with compression circuits that attenuate loud sounds and amplify soft ones. Such devices could hinder children from gaining knowledge about the loudness of acoustic signals.
Objective
To compare relative loudness judgment of children using listening devices with age-matched typically developing children.
Methods
The relative loudness judgment of sounds created by day-to-day objects was evaluated in 60 children (20 normal-hearing, 20 hearing aid users, and 20 cochlear implant users), utilizing a standard group comparison design. Using a two-alternative forced-choice technique, the children were required to select the pictured sound source that was louder.
Results
The majority of the participants obtained good scores; poorer scores were mainly obtained by children using cochlear implants. The cochlear implant users obtained significantly lower scores than the normal-hearing participants. However, scores did not differ significantly between the normal-hearing children and the hearing aid users, nor between the two groups with hearing impairment.
Conclusion
Thus, despite loudness being altered by listening devices, children using non-linear hearing aids or cochlear implants are able to develop relative loudness judgment for acoustic stimuli. However, loudness growth for electrical stimuli needs to be studied.
Affiliation(s)
- Shubha Tak
- Department of Audiology, All India Institute of Speech and Hearing, Mysuru, Karnataka, India
- Asha Yathiraj
- Department of Audiology, All India Institute of Speech and Hearing, Mysuru, Karnataka, India
17
Abstract
INTRODUCTION Cochlear implants (CIs) are biomedical devices that restore sound perception for people with severe-to-profound sensorineural hearing loss. Most postlingually deafened CI users are able to achieve excellent speech recognition in quiet environments. However, current CI sound processors remain limited in their ability to deliver fine spectrotemporal information, making it difficult for CI users to perceive complex sounds. Limited access to complex acoustic cues such as music, environmental sounds, lexical tones, and voice emotion may have significant ramifications for quality of life, social development, and community interactions. AREAS COVERED The purpose of this review article is to summarize the literature on CIs and music perception, with an emphasis on music training in pediatric CI recipients. The findings have implications for our understanding of noninvasive, accessible methods for improving auditory processing and may help advance our ability to improve sound quality and performance for implantees. EXPERT OPINION Music training, particularly in the pediatric population, may be able to continue to enhance auditory processing even after performance plateaus. The effects of these training programs appear generalizable to non-trained musical tasks, speech prosody, and emotion perception. Future studies should employ rigorous control groups involving a non-musical acoustic intervention, standardized auditory stimuli, and the provision of feedback.
Affiliation(s)
- Nicole T Jiam
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco School of Medicine, San Francisco, CA, USA
- Charles Limb
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco School of Medicine, San Francisco, CA, USA
18
Grandon B, Vilain A. Development of fricative production in French-speaking school-aged children using cochlear implants and children with normal hearing. J Commun Disord 2020; 86:105996. PMID: 32485648; DOI: 10.1016/j.jcomdis.2020.105996.
Abstract
In the course of productive phonological development, fricatives are among the last speech sounds to emerge and to be mastered by children, probably because of the high degree of articulatory precision they require or because of difficulties with their perception. Children with cochlear implants (CI) face additional difficulties with fricative perception, since high spectral frequency components are shown to be especially difficult to perceive with a cochlear implant. Studying fricative production in children with CIs makes it possible to examine how the partial transmission of speech sounds by cochlear implants influences children's speech production, and therefore to explore how perceptual abilities influence the late stages of phonological development. This acoustic study focuses on fricative production at three places of articulation (i.e., /f/, /s/ and /ʃ/), comparing productions by two groups of children (20 children with normal hearing (NH) vs. 13 children with CIs, all aged 5;7 to 10;7 years), and taking into account their consistency in coarticulation and the stability of their production across two different tasks (word-repetition and picture-naming). Statistical analyses were carried out by means of linear mixed-effect models. The results show that while both groups produce /ʃ/ with similar acoustic characteristics, between-group differences are found for /f/ and /s/. Furthermore, effects of consonant-vowel coarticulation are found for children with NH, and are absent for children with CIs. Effects of chronological age are only found for children with CIs (production in older children with CIs nearing that of children with NH). Our study shows that the development of fricative production of five- to 11-year-old children with CIs is affected by the children's hearing abilities and late access to auditory information.
These limitations, however, do not prevent the children from eventually reaching a consistency similar to that of children with NH, as suggested by the fact that their production is still evolving during that age span. The results also show that the acquisition of coarticulation strategies can be impeded by degraded or delayed access to auditory input.
Affiliation(s)
- Bénédicte Grandon
- Université Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France.
- Anne Vilain
- Université Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France; Institut Universitaire de France, France
19
Kirchner A, Loucks TM, Abbs E, Shi K, Yu JW, Aronoff JM. Influence of bilateral cochlear implants on vocal control. J Acoust Soc Am 2020; 147:2423. PMID: 32359322; PMCID: PMC7173977; DOI: 10.1121/10.0001099.
Abstract
Receiving a cochlear implant (CI) can improve fundamental frequency (F0) control for deaf individuals, resulting in increased vocal pitch control. However, it is unclear whether using bilateral CIs, which often result in mismatched pitch perception between ears, will counter this benefit. To investigate this, 23 bilateral CI users were asked to produce a sustained vocalization using one CI, the other CI, both CIs, or neither. Additionally, a set of eight normal hearing participants completed the sustained vocalization task as a control group. The results indicated that F0 control is worse with both CIs compared to using the ear that yields the lowest vocal variability. The results also indicated that there was a large range of F0 variability even for the relatively stable portion of the vocalization, spanning from 6 to 46 cents. These results suggest that bilateral CIs can detrimentally affect vocal control.
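The 6-to-46-cent variability range reported above uses the standard log-frequency unit for pitch: the interval between two frequencies in cents is 1200·log2(f/f_ref), with 100 cents per equal-tempered semitone. The frequency values below are illustrative, not data from the study:

```python
import math

def cents(f, f_ref):
    """Interval between frequency f and reference f_ref, in cents
    (1200 cents per octave, 100 cents per semitone)."""
    return 1200.0 * math.log2(f / f_ref)

# An octave is exactly 1200 cents; a ~1% F0 wobble around a 220 Hz
# target is ~17 cents, well within the 6-46 cent range reported above.
print(cents(440.0, 220.0))            # → 1200.0
print(round(cents(222.2, 220.0), 1))  # → 17.2
```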
Affiliation(s)
- Abbigail Kirchner
- Department of Speech and Hearing Science, The University of Illinois at Urbana Champaign, 901 South 6th Street, Champaign, Illinois 61820, USA
- Torrey M. Loucks
- Department of Communication Sciences and Disorders, University of Alberta, 116 St. and 85 Avenue, Edmonton, Alberta T6G 2R3, Canada
- Elizabeth Abbs
- Department of Speech and Hearing Science, The University of Illinois at Urbana Champaign, 901 South 6th Street, Champaign, Illinois 61820, USA
- Kevin Shi
- Department of Otolaryngology, The University of Illinois at Chicago, 1740 West Taylor Street, Chicago, Illinois 60612, USA
- Jeff W. Yu
- Department of Otolaryngology, The University of Illinois at Chicago, 1740 West Taylor Street, Chicago, Illinois 60612, USA
- Justin M. Aronoff
- Department of Speech and Hearing Science, The University of Illinois at Urbana Champaign, 901 South 6th Street, Champaign, Illinois 61820, USA
20
D'Onofrio KL, Caldwell M, Limb C, Smith S, Kessler DM, Gifford RH. Musical Emotion Perception in Bimodal Patients: Relative Weighting of Musical Mode and Tempo Cues. Front Neurosci 2020; 14:114. PMID: 32174809; PMCID: PMC7054459; DOI: 10.3389/fnins.2020.00114.
Abstract
Several cues are used to convey musical emotion, the two primary being musical mode and musical tempo. Specifically, major and minor modes tend to be associated with positive and negative valence, respectively, and songs at fast tempi have been associated with more positive valence compared to songs at slow tempi (Balkwill and Thompson, 1999; Webster and Weir, 2005). In Experiment I, we examined the relative weighting of musical tempo and musical mode among adult cochlear implant (CI) users combining electric and contralateral acoustic stimulation, or "bimodal" hearing. Our primary hypothesis was that bimodal listeners would utilize both tempo and mode cues in their musical emotion judgments in a manner similar to normal-hearing listeners. Our secondary hypothesis was that low-frequency (LF) spectral resolution in the non-implanted ear, as quantified via psychophysical tuning curves (PTCs) at 262 and 440 Hz, would be significantly correlated with degree of bimodal benefit for musical emotion perception. In Experiment II, we investigated across-channel spectral resolution using a spectral modulation detection (SMD) task and neural representation of temporal fine structure via the frequency following response (FFR) for a 170-ms /da/ stimulus. Results indicate that CI-alone performance was driven almost exclusively by tempo cues, whereas bimodal listening demonstrated use of both tempo and mode. Additionally, bimodal benefit for musical emotion perception may be correlated with spectral resolution in the non-implanted ear via SMD, as well as neural representation of F0 amplitude via FFR - though further study with a larger sample size is warranted. Thus, contralateral acoustic hearing can offer significant benefit for musical emotion perception, and the degree of benefit may be dependent upon spectral resolution of the non-implanted ear.
Affiliation(s)
- Kristen L D'Onofrio
- Cochlear Implant Research Laboratory, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
- Charles Limb
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, United States
- Spencer Smith
- Department of Communication Sciences and Disorders, The University of Texas at Austin, Austin, TX, United States
- David M Kessler
- Cochlear Implant Research Laboratory, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
- René H Gifford
- Cochlear Implant Research Laboratory, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
21
Steel MM, Polonenko MJ, Giannantonio S, Hopyan T, Papsin BC, Gordon KA. Music Perception Testing Reveals Advantages and Continued Challenges for Children Using Bilateral Cochlear Implants. Front Psychol 2020; 10:3015. PMID: 32038391; PMCID: PMC6985588; DOI: 10.3389/fpsyg.2019.03015.
Abstract
A modified version of the child’s Montreal Battery of Evaluation of Amusia (cMBEA) was used to assess music perception in children using bilateral cochlear implants. Our overall aim was to promote better performance by children with CIs on the cMBEA by modifying the complement of instruments used in the test and adding pieces transposed in frequency. The 10 test trials played by piano were removed, and two high-frequency and two low-frequency trials were added to each of the five subtests (20 additional trials). The modified cMBEA was completed by 14 children using bilateral cochlear implants and 23 peers with normal hearing. Results were compared with performance on the original version of the cMBEA previously reported in groups of similar aged children: 2 groups with normal hearing (n = 23: Hopyan et al., 2012; n = 16: Polonenko et al., 2017), 1 group using bilateral cochlear implants (CIs) (n = 26: Polonenko et al., 2017), 1 group using bimodal (hearing aid and CI) devices (n = 8: Polonenko et al., 2017), and 1 group using a unilateral CI (n = 23: Hopyan et al., 2012). Children with normal hearing had high scores on the modified version of the cMBEA, with no significant score differences from children with normal hearing who completed the original cMBEA. Children with CIs showed no significant improvement in scores on the modified cMBEA compared to peers with CIs who completed the original version of the test. The group with bilateral CIs who completed the modified cMBEA showed a trend toward better abilities to remember music compared to children listening through a unilateral CI, but effects were smaller than in previous cohorts of children with bilateral CIs and bimodal devices who completed the original cMBEA. Results confirmed that music perception changes with the type of instrument and is better for music transposed to higher rather than lower frequencies for children with normal hearing but not for children using bilateral CIs.
Overall, the modified version of the cMBEA revealed that modifications to music do not overcome the limitations of the CI to improve music perception for children.
Affiliation(s)
- Morrison M Steel
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada
- Melissa J Polonenko
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada
- Sara Giannantonio
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada
- Talar Hopyan
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada
- Blake C Papsin
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada; Department of Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, ON, Canada
- Karen A Gordon
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada; Department of Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, ON, Canada
22
Dillon MT, Buss E, Rooth MA, King ER, Pillsbury HC, Brown KD. Low-Frequency Pitch Perception in Cochlear Implant Recipients With Normal Hearing in the Contralateral Ear. J Speech Lang Hear Res 2019; 62:2860-2871. PMID: 31306588; DOI: 10.1044/2019_jslhr-h-18-0409.
Abstract
Purpose Three experiments were carried out to evaluate the low-frequency pitch perception of adults with unilateral hearing loss who received a cochlear implant (CI). Method Participants were recruited from a cohort of CI users with unilateral hearing loss and normal hearing in the contralateral ear. First, low-frequency pitch perception was assessed for the 5 most apical electrodes at 1, 3, 6, and 12 months after CI activation using an adaptive pitch-matching task. Participants listened with a coding strategy that presents low-frequency temporal fine structure (TFS) and compared the pitch to that of an acoustic target presented to the normal hearing ear. Next, participants listened with an envelope-only, continuous interleaved sampling strategy. Pitch perception was compared between coding strategies to assess the influence of TFS cues on low-frequency pitch perception. Finally, participants completed a vocal pitch-matching task to corroborate the results obtained with the adaptive pitch-matching task. Results Pitch matches roughly corresponded to electrode center frequencies (CFs) in the CI map. Adaptive pitch matches exceeded the CF for the most apical electrode, an effect that was larger for continuous interleaved sampling than TFS. Vocal pitch matches were variable but correlated with the CF of the 3 most apical electrodes. There was no evidence that pitch matches changed between the 1- and 12-month intervals. Conclusions Relatively accurate and asymptotic pitch perception was observed at the 1-month interval, indicating either very rapid acclimatization or the provision of familiar place and rate cues. Early availability of appropriate pitch cues could have played a role in the early improvements in localization and masked speech recognition previously observed in this cohort. Supplemental Material https://doi.org/10.23641/asha.8862389.
Affiliation(s)
- Margaret T Dillon
- Department of Otolaryngology/Head & Neck Surgery, University of North Carolina at Chapel Hill
- Emily Buss
- Department of Otolaryngology/Head & Neck Surgery, University of North Carolina at Chapel Hill
- Meredith A Rooth
- Department of Otolaryngology/Head & Neck Surgery, University of North Carolina at Chapel Hill
- English R King
- Department of Audiology, UNC Healthcare, Chapel Hill, NC
- Harold C Pillsbury
- Department of Otolaryngology/Head & Neck Surgery, University of North Carolina at Chapel Hill
- Kevin D Brown
- Department of Otolaryngology/Head & Neck Surgery, University of North Carolina at Chapel Hill
23
Pak CL, Katz WF. Recognition of emotional prosody by Mandarin-speaking adults with cochlear implants. J Acoust Soc Am 2019; 146:EL165. PMID: 31472572; DOI: 10.1121/1.5122192.
Abstract
To understand how cochlear implant processing affects emotional prosody recognition in tonal languages, this study compared how normal-hearing (NH) and cochlear-implanted (CI) adults identify four emotions ("angry," "happy," "sad," and "neutral") in short, semantically neutral Mandarin sentences. Depending on hearing status (CI, NH), adults heard natural speech and/or noise-vocoded speech conditions (4-, 8-, and 16-spectral channels). Results suggest that Mandarin-speaking adults with CIs recognize emotions with accuracy similar to that of NH listeners attending to spectrally degraded (4-channel) vocoded speech. The accuracy noted for Mandarin appears to be lower than that described in previous studies of English.
Affiliation(s)
- Cecilia L Pak
- Department of Communication Sciences and Disorders, The University of Texas at Dallas, 800 West Campbell Road, Richardson, Texas 75080
- William F Katz
- Department of Communication Sciences and Disorders, The University of Texas at Dallas, 800 West Campbell Road, Richardson, Texas 75080
24
De Clerck I, Verhoeven J, Gillis S, Pettinato M, Gillis S. Listeners' perception of lexical stress in the first words of infants with cochlear implants and normally hearing infants. J Commun Disord 2019; 80:52-65. PMID: 31078023; DOI: 10.1016/j.jcomdis.2019.03.008.
Abstract
Normally hearing (NH) infants are able to produce lexical stress in their first words, but congenitally hearing-impaired children with cochlear implants (CI) may find this more challenging, given the limited transmission of spectro-temporal information by the implant. Acoustic research has shown that the acoustic cues to stress in the first words of Dutch-acquiring CI infants are less pronounced (Pettinato, De Clerck, Verhoeven, & Gillis, 2017). The present study investigates how listeners perceive lexical stress in the first words of CI and NH infants. Two research questions are addressed: (1) How successful are CI and NH children in implementing the prosodic cues to prominence? (2) Is the degree of stress in CI and NH words perceived to be similar? The stimuli used in this study are disyllabic words (n = 1089) produced by 9 infants with CI and 9 NH infants acquiring Dutch. The words were presented to adult listeners in a listening experiment, in which they assessed the stress pattern on a continuous visual analogue scale (VAS) expressing the extent to which syllables are perceived as stressed. The results show that listeners perceive typical word stress production in the first words of infants with CI. The words of CI and NH infants were rated in agreement with the target stress pattern equally often, and trochaic words were rated more frequently as such than iambic words. Listeners more frequently perceive unstressed syllables in the first words of infants with CI. However, for the words that are perceived to be clearly stressed, the degree of word stress is comparable in the two groups, and both infant groups are perceived to produce more contrast between stressed and unstressed syllables in trochees than in iambs. It is concluded that acoustic differences between CI and NH infants' stress production are not necessarily perceptually salient.
Affiliation(s)
- Ilke De Clerck
- Department of Linguistics, CLiPS Computational Linguistics and Psycholinguistics Research Centre, University of Antwerp, Prinstraat 13, Antwerp, Belgium.
| | - Jo Verhoeven
- Department of Linguistics, CLiPS Computational Linguistics and Psycholinguistics Research Centre, University of Antwerp, Prinstraat 13, Antwerp, Belgium; Division of Language and Communication Science, City University London, Northampton Square, London, UK.
| | - San Gillis
- Hasselt University, Department of Physics.
| | - Michèle Pettinato
- Department of Linguistics, CLiPS Computational Linguistics and Psycholinguistics Research Centre, University of Antwerp, Prinstraat 13, Antwerp, Belgium.
25
van de Velde DJ, Frijns JHM, Beers M, van Heuven VJ, Levelt CC, Briaire J, Schiller NO. Basic Measures of Prosody in Spontaneous Speech of Children With Early and Late Cochlear Implantation. J Speech Lang Hear Res 2018; 61:3075-3094. [PMID: 30515513 DOI: 10.1044/2018_jslhr-h-17-0233] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 06/16/2017] [Accepted: 07/05/2018] [Indexed: 06/09/2023]
Abstract
PURPOSE Relative to normally hearing (NH) peers, the speech of children with cochlear implants (CIs) has been found to have deviations such as a high fundamental frequency, elevated jitter and shimmer, and inadequate intonation. However, two important dimensions of prosody (temporal and spectral) have not been systematically investigated. Given that, in general, the resolution in CI hearing is best for the temporal dimension and worst for the spectral dimension, we expected this hierarchy to be reflected in the amount of CI speech's deviation from NH speech. Deviations, however, were expected to diminish with increasing device experience. METHOD Spontaneous speech of 9 Dutch early- and late-implanted children (divided at 2 years of age) and 12 hearing-age-matched NH controls was recorded at 18, 24, and 30 months after implantation (CI) or birth (NH). Six spectral and temporal outcome measures were compared between groups, sessions, and genders. RESULTS On most measures, interactions of Group and/or Gender with Session were significant. For CI recipients as compared with controls, performance on temporal measures was not in general more deviant than on spectral measures, although differences were found for individual measures. The late-implanted group tended to be closer to the NH group than the early-implanted group. Groups converged over time. CONCLUSIONS Results did not support the phonetic dimension hierarchy hypothesis, suggesting that the appropriateness of the production of basic prosodic measures does not depend on auditory resolution. Rather, it seems to depend on the amount of control necessary for speech production.
Affiliation(s)
- Daan J van de Velde
- Leiden University Centre for Linguistics, the Netherlands
- Leiden Institute for Brain and Cognition, the Netherlands
- Johan H M Frijns
- Leiden Institute for Brain and Cognition, the Netherlands
- Leiden University Medical Center, the Netherlands
- Mieke Beers
- Leiden University Medical Center, the Netherlands
- Vincent J van Heuven
- Department of Hungarian and Applied Linguistics, Pannon Egyetem, Veszprém, Hungary
- Claartje C Levelt
- Leiden University Centre for Linguistics, the Netherlands
- Leiden Institute for Brain and Cognition, the Netherlands
- Niels O Schiller
- Leiden University Centre for Linguistics, the Netherlands
- Leiden Institute for Brain and Cognition, the Netherlands
26
Pulse-rate discrimination deficit in cochlear implant users: is the upper limit of pitch peripheral or central? Hear Res 2018; 371:1-10. [PMID: 30423498 DOI: 10.1016/j.heares.2018.10.018] [Citation(s) in RCA: 13] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 08/17/2018] [Revised: 10/04/2018] [Accepted: 10/31/2018] [Indexed: 11/20/2022]
Abstract
Cochlear implant (CI) users do not reliably associate an increase in pulse rate above 300 pulses per second (pps) with an increase in pitch. The locus of this upper limit of pitch remains unknown. The present study tested the hypothesis that this deficit resides, at least initially, at the auditory nerve. The hypothesis was tested by comparing pulse rate discrimination across different neural excitation patterns, in which a large versus small population of auditory nerve fibers was activated. If poorer pulse rate discrimination were found under conditions where narrower spread of neural excitation (SOE) was anticipated, that is, where a relatively small neural population was activated, this would support the hypothesis that the rate processing deficit found in CI users is related to peripheral neural degeneration. Nine listeners (12 ears) implanted with Cochlear Americas Nucleus® devices participated in the study. Different SOE conditions were created by (1) selecting electrodes that showed narrow versus broad forward-masked psychophysical spatial tuning curves, and (2) measuring these electrodes in monopolar (MP) and narrow bipolar (BP0) electrode configurations. The rate discrimination difference limen (DL) was measured at the selected electrodes in both electrode configurations at three base rates (200, 300, and 500 pps). Consistent with the prediction, the group mean DL was better (1) at stimulation sites with broader tuning and (2) in MP relative to BP stimulation. These effects were more salient at the more challenging base rates. There was a weak relationship between rate discrimination at levels above threshold and the effect of rate on detection thresholds. Finally, rate discrimination at rates above the known upper limit (i.e., 500 pps) was correlated with duration of deafness and strongly predicted the subjects' speech recognition performance in noise.
These findings support the view that pulse rate discrimination depends, at least partially, on neural conditions at the auditory periphery, and that this peripheral limit predicts speech recognition outcomes with a CI.
27
Mathew R, Vickers D, Boyle P, Shaida A, Selvadurai D, Jiang D, Undurraga J. Development of electrophysiological and behavioural measures of electrode discrimination in adult cochlear implant users. Hear Res 2018; 367:74-87. [DOI: 10.1016/j.heares.2018.07.002] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/06/2018] [Revised: 06/20/2018] [Accepted: 07/02/2018] [Indexed: 10/28/2022]
28
Sundström S, Löfkvist U, Lyxell B, Samuelsson C. Prosodic and segmental aspects of nonword repetition in 4- to 6-year-old children who are deaf and hard of hearing compared to controls with normal hearing. Clin Linguist Phon 2018; 32:950-971. [PMID: 29723069 DOI: 10.1080/02699206.2018.1469671] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/08/2023]
Abstract
Children who are deaf or hard of hearing (DHH) are at an increased risk of speech and language deficits. Nonword repetition (NWR) is a potential predictor of problems with phonology, grammar and lexicon in DHH children. The aim of the present study was to examine repetition of prosodic features and segments in nonwords by DHH children compared to children with normal hearing (NH) and to relate NWR performance to measures of language ability and background variables. In this cross-sectional study, 14 Swedish-speaking children with mild-profound sensorineural hearing loss, aged 4-6 years, and 29 age-matched controls with NH and typical language development participated. The DHH children used cochlear implants (CI), hearing aids or a combination of both. The assessment materials included a prosodically controlled NWR task, as well as tests of phonological production, expressive grammar and receptive vocabulary. The DHH children performed below the children with NH on the repetition of tonal word accents, stress patterns, vowels and consonants, with consonants being hardest, and tonal word accents easiest, to repeat. NWR performance in the DHH children was also correlated with language ability and with hearing level. Both prosodic and segmental features of nonwords are problematic for Swedish-speaking DHH children compared to children with NH, but performance on tonal word accent repetition is comparably high. NWR may have potential as a clinically useful tool for identification of children who are in need of speech and language intervention.
Affiliation(s)
- Simon Sundström
- Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
- Ulrika Löfkvist
- Department of Special Needs Education, University of Oslo, Oslo, Norway
- Department of Clinical Science, Intervention and Technology, Karolinska Institute, Stockholm, Sweden
- Björn Lyxell
- Department of Behavioural Sciences and Learning and the Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Christina Samuelsson
- Department of Clinical and Experimental Medicine, Linköping University, Linköping, Sweden
29
Koning R, Bruce IC, Denys S, Wouters J. Perceptual and Model-Based Evaluation of Ideal Time-Frequency Noise Reduction in Hearing-Impaired Listeners. IEEE Trans Neural Syst Rehabil Eng 2018. [PMID: 29522412 DOI: 10.1109/tnsre.2018.2794557] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/06/2022]
Abstract
State-of-the-art hearing aids (HAs) try to overcome poor speech intelligibility (SI) in noisy listening environments using digital noise reduction (NR) techniques. The application of time-frequency masks to the noisy sound input is a common NR technique to increase SI. The binary mask, with its binary weights, and the Wiener filter, with continuous weights, are representatives of a hard- and a soft-decision approach to time-frequency masking. In normal-hearing listeners, the ideal Wiener filter (IWF) outperforms the ideal binary mask (IBM) in terms of SI and speech quality, yielding perfect SI even at very low signal-to-noise ratios. In this paper, both approaches were investigated for hearing-impaired (HI) listeners. Perceptual and auditory model-based measures were used for the evaluation. The IWF outperformed the IBM in terms of SI. Quality-wise, no overall difference between the NR algorithms was perceived. Additionally, the processed signals were evaluated with an auditory nerve model using the neurogram similarity metric (NSIM). The mean NSIM values were significantly different for intelligible and unintelligible sentences. The results suggest that a soft mask is promising for application in HAs.
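The hard- versus soft-decision distinction can be made concrete with a small sketch. In the "ideal" setting of the abstract, clean speech and noise are known separately; the IBM then keeps or discards each time-frequency unit, while the IWF applies a continuous gain in [0, 1]. Function name and STFT parameters below are illustrative choices, not from the paper.

```python
import numpy as np
from scipy.signal import stft, istft

def ideal_masks(clean, noise, fs=16000, nperseg=512):
    """Apply the ideal binary mask (IBM, hard decision) and the ideal
    Wiener filter (IWF, soft decision) to the noisy mixture, using the
    separately known clean and noise signals to compute the gains."""
    _, _, S = stft(clean, fs=fs, nperseg=nperseg)  # clean-speech STFT
    _, _, N = stft(noise, fs=fs, nperseg=nperseg)  # noise STFT
    _, _, Y = stft(clean + noise, fs=fs, nperseg=nperseg)  # mixture STFT

    ps, pn = np.abs(S) ** 2, np.abs(N) ** 2
    ibm = (ps > pn).astype(float)   # keep a T-F unit only if speech dominates
    iwf = ps / (ps + pn + 1e-12)    # continuous gain between 0 and 1

    _, y_ibm = istft(ibm * Y, fs=fs, nperseg=nperseg)
    _, y_iwf = istft(iwf * Y, fs=fs, nperseg=nperseg)
    return y_ibm, y_iwf
```

With a tonal "speech" signal in broadband noise, both masks recover a signal much closer to the clean input than the unprocessed mixture is, with the IWF avoiding the IBM's hard on/off switching between units.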
30
McCreery D, Yadev K, Han M. Responses of neurons in the feline inferior colliculus to modulated electrical stimuli applied on and within the ventral cochlear nucleus; Implications for an advanced auditory brainstem implant. Hear Res 2018; 363:85-97. [PMID: 29573880 DOI: 10.1016/j.heares.2018.03.009] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/29/2017] [Revised: 03/01/2018] [Accepted: 03/06/2018] [Indexed: 11/25/2022]
Abstract
Auditory brainstem implants (ABIs) can restore useful hearing to persons with deafness who cannot benefit from cochlear implants. However, the quality of hearing restored by ABIs is rarely comparable to that provided by cochlear implants in persons for whom those are appropriate. In an animal model, we evaluated elements of a prototype of an ABI in which the functions of macroelectrodes on the surface of the dorsal cochlear nucleus would be integrated with the function of multiple penetrating microelectrodes implanted into the ventral cochlear nucleus. The surface electrodes would convey most of the range of loudness percepts while the intranuclear microelectrodes would sharpen and focus pitch percepts. In the present study, stimulating electrodes were implanted chronically on the surface of each animal's dorsal cochlear nucleus (DCN) and also within its ventral cochlear nucleus (VCN). Recording microelectrodes were implanted into the central nucleus of the inferior colliculus (ICC). The electrical stimuli were sinusoidally modulated stimulus pulse trains applied on the DCN and within the VCN. Temporal encoding of neuronal responses was quantified as vector strength (VS) and as full-cycle rate of neuronal activity in the ICC. VS and full-cycle AP rate were measured for 4 stimulation modes: continuous and transient amplitude modulation of the stimulus pulse trains, each delivered via the macroelectrode on the surface of the DCN and then by the intranuclear penetrating microelectrodes. In the proposed clinical device, the functions of the surface and intranuclear microelectrodes could best be integrated if there is minimal variation in the neuronal responses across the range of modulation depths, modulation frequencies, and the four stimulation modes. In this study, VS varied by as much as 34% across modulation frequency and modulation depth within a stimulation mode, and by up to 40% between modulation modes.
However, these intra- and inter-mode variances differed for different stimulation rates; at 500 Hz, the inter-mode differences in VS across the range of modulation frequencies and modulation depths were ≤24%, and the intra-mode differences were ≤15%. The findings were generally similar for rate encoding of modulation depth, although the depth of transient amplitude modulation delivered by the surface electrode was weakly encoded as full-cycle rate. Overall, our findings support the concept of a clinical ABI that employs surface stimulation and intranuclear microstimulation in an integrated manner.
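Vector strength is a standard measure of phase locking (Goldberg & Brown, 1969), not specific to this paper's stimuli: each spike is mapped to a phase of the modulation cycle, and VS is the length of the mean resultant vector, so VS = 1 means perfect phase locking and VS = 0 means none. A minimal sketch:

```python
import numpy as np

def vector_strength(spike_times, mod_freq):
    """Vector strength of spike times (s) relative to a modulation
    frequency (Hz): the length of the mean resultant vector of the
    spikes' phases within the modulation cycle."""
    phases = 2 * np.pi * mod_freq * np.asarray(spike_times)  # spike phase (rad)
    return np.hypot(np.mean(np.cos(phases)), np.mean(np.sin(phases)))
```

Spikes locked to a fixed phase of a 100 Hz modulator give VS near 1; spike times drawn uniformly at random give VS near 0.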
Affiliation(s)
- Douglas McCreery
- Neural Engineering Program at Huntington Medical Research Institutes, 734 Fairmount Ave, Pasadena, CA 91105, USA.
- Kamal Yadev
- Rigetti Computing, 775 Heinz Avenue, Berkeley, CA 94710, USA.
- Martin Han
- Biomedical Engineering Department, School of Engineering & Institute of Material Sciences, The University of Connecticut at Storrs, 260 Glenbrook Rd, Unit 3247, Storrs, Connecticut 06269-3247, USA.
31
Real-Time Robust Voice Activity Detection Using the Upper Envelope Weighted Entropy Measure and the Dual-Rate Adaptive Nonlinear Filter. Entropy 2017. [DOI: 10.3390/e19110487] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/16/2022]
32
Li YL, Lin YH, Yang HM, Chen YJ, Wu JL. Tone production and perception and intelligibility of produced speech in Mandarin-speaking cochlear implanted children. Int J Audiol 2017; 57:135-142. [DOI: 10.1080/14992027.2017.1374566] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/18/2022]
Affiliation(s)
- Yi-Lu Li
- Department of Otolaryngology, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
- Yi-Hui Lin
- Department of Otolaryngology, Tainan Municipal Hospital, Tainan, Taiwan
- Hui-Mei Yang
- Department of Otolaryngology, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
- Yeou-Jiunn Chen
- Department of Electrical Engineering, Southern Taiwan University of Science and Technology, Tainan, Taiwan
- Jiunn-Liang Wu
- Department of Otolaryngology, National Cheng Kung University Hospital, College of Medicine, National Cheng Kung University, Tainan, Taiwan
33
Tracking Down Nonresponsive Cortical Neurons in Cochlear Implant Stimulation. eNeuro 2017; 4:eN-COM-0095-17. [PMID: 28660249 PMCID: PMC5485376 DOI: 10.1523/eneuro.0095-17.2017] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2017] [Revised: 06/13/2017] [Accepted: 06/15/2017] [Indexed: 11/21/2022] Open
34
Expansion of Prosodic Abilities at the Transition From Babble to Words: A Comparison Between Children With Cochlear Implants and Normally Hearing Children. Ear Hear 2017; 38:475-486. [DOI: 10.1097/aud.0000000000000406] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/27/2022]
35
Abstract
Describing the human brain in mathematical terms is an important ambition of neuroscience research, yet the challenges remain considerable. It was Alan Turing, writing in 1950, who first sought to demonstrate how time-consuming such an undertaking would be. Through analogy to the computer program, Turing argued that arriving at a complete mathematical description of the mind would take well over a thousand years. In this opinion piece, we argue that — despite seventy years of progress in the field — his arguments remain both prescient and persuasive.
36
Vanwalleghem G, Heap LA, Scott EK. A profile of auditory-responsive neurons in the larval zebrafish brain. J Comp Neurol 2017; 525:3031-3043. [DOI: 10.1002/cne.24258] [Citation(s) in RCA: 33] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/01/2017] [Revised: 05/26/2017] [Accepted: 05/29/2017] [Indexed: 12/19/2022]
Affiliation(s)
- Gilles Vanwalleghem
- School of Biomedical Sciences, The University of Queensland, St Lucia, QLD, Australia
- Lucy A. Heap
- School of Biomedical Sciences, The University of Queensland, St Lucia, QLD, Australia
- Ethan K. Scott
- School of Biomedical Sciences, The University of Queensland, St Lucia, QLD, Australia
- The Queensland Brain Institute, The University of Queensland, St Lucia, QLD, Australia
37
Johnson LA, Della Santina CC, Wang X. Selective Neuronal Activation by Cochlear Implant Stimulation in Auditory Cortex of Awake Primate. J Neurosci 2016; 36:12468-12484. [PMID: 27927962 PMCID: PMC5148231 DOI: 10.1523/jneurosci.1699-16.2016] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/27/2016] [Revised: 10/05/2016] [Accepted: 10/10/2016] [Indexed: 11/21/2022] Open
Abstract
Despite the success of cochlear implants (CIs) in human populations, most users perform poorly in noisy environments and in music and tonal language perception. How CI devices engage the brain at the single neuron level has remained largely unknown, in particular in the primate brain. By comparing neuronal responses with acoustic and CI stimulation in marmoset monkeys unilaterally implanted with a CI electrode array, we discovered that CI stimulation was surprisingly ineffective at activating many neurons in auditory cortex, particularly in the hemisphere ipsilateral to the CI. Further analyses revealed that the CI-nonresponsive neurons were narrowly tuned to frequency and sound level when probed with acoustic stimuli; such neurons likely play a role in perceptual behaviors requiring fine frequency and level discrimination, tasks that CI users find especially challenging. These findings suggest potential deficits in central auditory processing of CI stimulation and provide important insights into factors responsible for poor CI user performance in a wide range of perceptual tasks. SIGNIFICANCE STATEMENT The cochlear implant (CI) is the most successful neural prosthetic device to date and has restored hearing in hundreds of thousands of deaf individuals worldwide. However, despite its huge successes, CI users still face many perceptual limitations, and the brain mechanisms involved in hearing through CI devices remain poorly understood. By directly comparing single-neuron responses to acoustic and CI stimulation in auditory cortex of awake marmoset monkeys, we discovered that neurons unresponsive to CI stimulation were sharply tuned to frequency and sound level. Our results point out a major deficit in central auditory processing of CI stimulation and provide important insights into mechanisms underlying the poor CI user performance in a wide range of perceptual tasks.
Affiliation(s)
- Charles C Della Santina
- Departments of Biomedical Engineering and Otolaryngology-Head and Neck Surgery, Johns Hopkins University School of Medicine, Baltimore, Maryland 21205
38
Pons J, Janer J, Rode T, Nogueira W. Remixing music using source separation algorithms to improve the musical experience of cochlear implant users. J Acoust Soc Am 2016; 140:4338. [PMID: 28040023 DOI: 10.1121/1.4971424] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Music perception remains rather poor for many cochlear implant (CI) users due to their deficient pitch perception. However, comprehensible vocals and simple music structures are well perceived by many CI users. In previous studies, researchers re-mixed songs to make music more enjoyable for CI users, favoring the preferred music elements (vocals or beat) while attenuating the others. However, mixing music requires the individually recorded tracks (multitracks), which are usually not accessible. To overcome this limitation, source separation (SS) techniques are proposed to estimate the multitracks, which are then re-mixed to create more pleasant music for CI users. However, SS may introduce undesirable audible distortions and artifacts. Experiments conducted with CI users (N = 9) and normal hearing listeners (N = 9) show that CI users can have different mixing preferences than normal hearing listeners. Moreover, CI users' mixing preferences are shown to be user dependent. It is also shown that SS methods can be successfully used to create preferred re-mixes although distortions and artifacts are present. Finally, CI users' preferences are used to propose a benchmark that defines the maximum acceptable levels of SS distortion and artifacts for two different mixes proposed by CI users.
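Once multitracks have been estimated, re-mixing reduces to applying a per-stem gain and summing. A minimal sketch (the stem names and gain values below are illustrative, not the study's mixes):

```python
import numpy as np

def remix(stems, gains_db):
    """Re-mix stems (name -> equal-length sample array) with per-stem
    gains in dB; stems missing from `gains_db` are left at 0 dB."""
    out = np.zeros_like(next(iter(stems.values())), dtype=float)
    for name, samples in stems.items():
        out += 10.0 ** (gains_db.get(name, 0.0) / 20.0) * samples  # dB -> linear
    return out
```

For example, boosting vocals by +6 dB while attenuating the accompaniment by 6 dB emphasizes the elements the abstract reports many CI users prefer.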
Affiliation(s)
- Jordi Pons
- Department of Otolaryngology, Medical University Hannover and Cluster of Excellence Hearing4all, Karl-Wiechert-Allee 3, 30625 Hannover, Germany
- Jordi Janer
- Music Technology Group, Department of Information and Communication Technologies, Universitat Pompeu Fabra, Roc Boronat 138, 55.310, 08018 Barcelona, Spain
- Thilo Rode
- HoerSys GmbH, Karl-Wiechert-Allee 3, 30625 Hannover, Germany
- Waldo Nogueira
- Department of Otolaryngology, Medical University Hannover and Cluster of Excellence Hearing4all, Karl-Wiechert-Allee 3, 30625 Hannover, Germany
39
Zarei E, Sadjedi H. A new approach for speech synthesis in cochlear implant systems based on electrophysiological factors. Technol Health Care 2016; 25:221-235. [PMID: 27689564 DOI: 10.3233/thc-161265] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022]
Abstract
BACKGROUND Speech synthesis models have been considered viable tools for performance evaluation of cochlear stimulation algorithms, given the difficulties of clinical tests. OBJECTIVE The present study developed a tool that can be used ahead of any audio signal reconstruction algorithm and that conforms more closely to the patient's electrophysiological parameters when evaluating cochlear implant stimulation algorithms. METHODS In this method, excitable nerve fiber characteristics such as stimulation threshold and effective refractory period are considered in the signal pre-reconstruction process. The algorithm incorporates the user's biological parameters (e.g., the distribution of the remaining intact nerve fibers) as well as the stimulation signal parameters (e.g., stimulation rate, pulse width, stimulation amplitude, and the distance between the stimulation electrode and the fibers) in the signal pre-reconstruction. RESULTS The effect of changes in these parameters can be observed in the number of excited fibers, which is directly related to the signal intensity and pitch frequency perceived by the user. The results obtained from simulations are in accordance with previous clinical findings. The validity of the proposed tool is also supported by the correspondence between the results obtained from the model and the amplitude growth functions of cochlear implant users. CONCLUSIONS This paper has introduced a tool for signal reconstruction from electrical stimulation, so that a more comprehensive criterion for examination of stimulation algorithms in cochlear implants can be achieved.
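As a rough illustration of this kind of pre-reconstruction model (not the authors' implementation), one can count the fibers excited by a pulse train from a threshold that grows with electrode-fiber distance, gated by a refractory period. All parameter names, values, and the quadratic distance attenuation below are assumptions for the sketch:

```python
import numpy as np

def excited_fiber_count(pulse_times_ms, pulse_amp_uA, fiber_dist_mm,
                        base_threshold_uA=100.0, refractory_ms=1.0):
    """Toy fiber-excitation count: a fiber fires on a pulse when the
    stimulation amplitude exceeds its distance-dependent threshold and
    the fiber is past its effective refractory period."""
    fiber_dist_mm = np.asarray(fiber_dist_mm, dtype=float)
    thresholds = base_threshold_uA * fiber_dist_mm ** 2  # farther fibers need more current
    last_spike = np.full(len(fiber_dist_mm), -np.inf)
    total = 0
    for t in pulse_times_ms:
        ready = (t - last_spike) >= refractory_ms       # refractory gate
        fired = ready & (pulse_amp_uA >= thresholds)
        last_spike[fired] = t
        total += int(fired.sum())
    return total
```

Raising the pulse amplitude recruits more distant fibers, and raising the pulse rate above the refractory limit stops adding spikes per fiber, mirroring the intensity- and rate-dependent behavior the abstract describes.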
Affiliation(s)
- Elham Zarei
- Department of Biomedical Engineering, Science and Research Branch, Islamic Azad University, Tehran, Iran
- Hamed Sadjedi
- Engineering Faculty, Shahed University, Tehran, Iran
40
Monaghan JJM, Seeber BU. A method to enhance the use of interaural time differences for cochlear implants in reverberant environments. J Acoust Soc Am 2016; 140:1116. [PMID: 27586742 PMCID: PMC5708523 DOI: 10.1121/1.4960572] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.1] [Reference Citation Analysis] [Abstract] [Grants] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
The ability of normal-hearing (NH) listeners to exploit interaural time difference (ITD) cues conveyed in the modulated envelopes of high-frequency sounds is poor compared to ITD cues transmitted in the temporal fine structure at low frequencies. Sensitivity to envelope ITDs is further degraded when envelopes become less steep, when modulation depth is reduced, and when envelopes become less similar between the ears, common factors when listening in reverberant environments. The vulnerability of envelope ITDs is particularly problematic for cochlear implant (CI) users, as they rely on information conveyed by slowly varying amplitude envelopes. Here, an approach to improve access to envelope ITDs for CIs is described in which, rather than attempting to reduce reverberation, the perceptual saliency of cues relating to the source is increased by selectively sharpening peaks in the amplitude envelope judged to contain reliable ITDs. Performance of the algorithm with room reverberation was assessed through simulating listening with bilateral CIs in headphone experiments with NH listeners. Relative to simulated standard CI processing, stimuli processed with the algorithm generated lower ITD discrimination thresholds and increased extents of laterality. Depending on parameterization, intelligibility was unchanged or somewhat reduced. The algorithm has the potential to improve spatial listening with CIs.
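One simple way to sharpen envelope peaks, a crude stand-in for the authors' selective, ITD-reliability-driven rule rather than their algorithm, is to extract the Hilbert envelope, apply a power-law expansion (an exponent above 1 deepens troughs relative to peaks, steepening onsets), and re-impose the modified envelope on the temporal fine structure:

```python
import numpy as np
from scipy.signal import hilbert

def sharpen_envelope(x, expo=2.0):
    """Expand the amplitude envelope of x: envelope values below the peak
    are pushed down by the power law, increasing modulation depth while
    the peak level is preserved."""
    env = np.abs(hilbert(x))                 # amplitude envelope
    peak = env.max() + 1e-12
    sharp = peak * (env / peak) ** expo      # expansive remapping, same peak
    carrier = x / np.maximum(env, 1e-12)     # unit-envelope fine structure
    return sharp * carrier
```

Applied to an amplitude-modulated tone, the output has a larger modulation depth than the input, which is the property the abstract associates with more salient envelope ITD cues.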
Affiliation(s)
- Jessica J M Monaghan
- Medical Research Council Institute of Hearing Research, Nottingham, United Kingdom
- Bernhard U Seeber
- Medical Research Council Institute of Hearing Research, Nottingham, United Kingdom
41
Nguyen TAK, DiGiovanna J, Cavuscens S, Ranieri M, Guinand N, van de Berg R, Carpaneto J, Kingma H, Guyot JP, Micera S, Fornos AP. Characterization of pulse amplitude and pulse rate modulation for a human vestibular implant during acute electrical stimulation. J Neural Eng 2016; 13:046023. [DOI: 10.1088/1741-2560/13/4/046023] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/11/2022]
42
The Use of Prosodic Cues in Sentence Processing by Prelingually Deaf Users of Cochlear Implants. Ear Hear 2016; 37:e256-62. [DOI: 10.1097/aud.0000000000000253] [Citation(s) in RCA: 19] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
43
O'Brien GE, Rubinstein JT. The development of biophysical models of the electrically stimulated auditory nerve: Single-node and cable models. Network 2016; 27:135-156. [PMID: 27070730 DOI: 10.3109/0954898x.2016.1162338] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/05/2023]
Abstract
In the last few decades, biophysical models have emerged as a prominent tool in the study and improvement of cochlear implants, a neural prosthesis that restores a degree of sound perception to the profoundly deaf. Owing to the spatial phenomena associated with extracellular stimulation, these models have evolved to a relatively high degree of morphological and physiological detail: single-node models in the tradition of Hodgkin-Huxley are paired with cable descriptions of the auditory nerve fiber. No single model has emerged as a frontrunner in the field; rather, parameter sets deriving from the channel kinetics and morphologies of numerous organisms (mammalian and otherwise) are combined and tuned to foster strong agreement with response properties observed in vivo, such as refractoriness, summation, and strength-duration relationships. Recently, biophysical models of the electrically stimulated auditory nerve have begun to incorporate adaptation and stochastic mechanisms, in order to better realize the goal of predicting realistic neural responses to a wide array of stimuli.
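A single Hodgkin-Huxley node of the kind these single-node models build on can be simulated in a few lines. The parameters below are the classic squid-axon values (6.3 °C) with forward-Euler integration, i.e., textbook defaults rather than any specific auditory-nerve parameterization from the review:

```python
import numpy as np

def hh_node(i_stim_uA_cm2, dt=0.01, t_ms=20.0):
    """Forward-Euler simulation of a Hodgkin-Huxley membrane patch under
    a constant stimulating current density; returns the voltage trace (mV)."""
    v, m, h, n = -65.0, 0.053, 0.596, 0.318   # approximate resting state
    gna, gk, gl = 120.0, 36.0, 0.3            # max conductances (mS/cm^2)
    ena, ek, el, cm = 50.0, -77.0, -54.4, 1.0  # reversal potentials (mV), capacitance
    n_steps = int(t_ms / dt)
    trace = np.empty(n_steps)
    for i in range(n_steps):
        # voltage-dependent rate constants (1/ms), shifted-rest formulation
        am = 0.1 * (v + 40.0) / (1.0 - np.exp(-(v + 40.0) / 10.0))
        bm = 4.0 * np.exp(-(v + 65.0) / 18.0)
        ah = 0.07 * np.exp(-(v + 65.0) / 20.0)
        bh = 1.0 / (1.0 + np.exp(-(v + 35.0) / 10.0))
        an = 0.01 * (v + 55.0) / (1.0 - np.exp(-(v + 55.0) / 10.0))
        bn = 0.125 * np.exp(-(v + 65.0) / 80.0)
        m += dt * (am * (1.0 - m) - bm * m)
        h += dt * (ah * (1.0 - h) - bh * h)
        n += dt * (an * (1.0 - n) - bn * n)
        i_ion = (gna * m**3 * h * (v - ena)
                 + gk * n**4 * (v - ek)
                 + gl * (v - el))
        v += dt * (i_stim_uA_cm2 - i_ion) / cm
        trace[i] = v
    return trace
```

A sustained suprathreshold current elicits action potentials (the trace crosses 0 mV), while a small current stays subthreshold; refractoriness and strength-duration behavior of the kind the review discusses emerge from these same equations under pulsed stimulation.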
Affiliation(s)
- Gabrielle E O'Brien
- Department of Otolaryngology, V.M. Bloedel Hearing Research Center, University of Washington, Seattle, Washington, USA
- Jay T Rubinstein
- Department of Otolaryngology, V.M. Bloedel Hearing Research Center, University of Washington, Seattle, Washington, USA
44
Morris DJ, Christiansen L, Uglebjerg C, Brännström KJ, Falkenberg ES. Parental comparison of the prosodic and paralinguistic ability of children with cochlear implants and their normal hearing siblings. Clin Linguist Phon 2015; 29:840-851. [PMID: 26338285 PMCID: PMC4673563 DOI: 10.3109/02699206.2015.1055803] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Figures] [Subscribe] [Scholar Register] [Received: 12/04/2014] [Revised: 05/21/2015] [Accepted: 05/22/2015] [Indexed: 06/05/2023]
Abstract
The everyday communication of children is commonly observed by their parents. This paper examines the responses of parents (n=18) who had both a cochlear implant (CI) and a normal hearing (NH) child. Through an online questionnaire, parents rated the ability of their children on a gamut of speech communication competencies encountered in everyday settings. Comparative parental ratings of the CI children were significantly poorer than those of their NH siblings in speaker recognition, identification of happy and sad emotions, and question-versus-statement identification. Parents also reported that they changed the vocal effort and the enunciation of their speech when they addressed their CI child, and that their CI child consistently responded when their name was called in normal, but not in noisy, backgrounds. Demographic factors were not found to be linked to the parental impressions.
Affiliation(s)
- David J. Morris
- Department of Nordic Studies and Linguistics, University of Copenhagen, Copenhagen S, Denmark
- Lærke Christiansen
- Department of Nordic Studies and Linguistics, University of Copenhagen, Copenhagen S, Denmark
- Cathrine Uglebjerg
- Department of Nordic Studies and Linguistics, University of Copenhagen, Copenhagen S, Denmark
- K. Jonas Brännström
- Department of Logopedics, Phoniatrics and Audiology, Clinical Sciences in Lund, Lund University, Lund, Sweden
- Eva-Signe Falkenberg
- Department of Special Needs Education, Faculty of Educational Sciences, University of Oslo, Oslo, Norway
| |
45
Tavartkiladze GA. [The current state and prospects of the development of cochlear implantation]. Vestn Otorinolaringol 2015; 80:4-9. [PMID: 26331167 DOI: 10.17116/otorino20158034-9] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
This paper reviews the literature on recent achievements in the rehabilitation, by means of cochlear implantation, of patients with deafness and severe hearing impairment. Much attention is given to the limitations of modern signal-processing strategies and to the prospects for further research in this area. Special emphasis is placed on recent progress in audiology, including binaural cochlear implant technology and electroacoustic stimulation, which have significantly improved rehabilitation outcomes. Also discussed are the prospects for new cochlear implant systems, novel algorithms for information processing, and original therapeutic modalities designed to stimulate the growth of axonal processes of the spiral ganglion toward the electrode array.
Collapse
Affiliation(s)
- G A Tavartkiladze
- National Research Centre for Audiology and Hearing Rehabilitation, Russian Federal Medico-Biological Agency, Moscow, Russia, 117513; Russian Medical Academy of Post-Graduate Education, Russian Ministry of Health, Moscow, Russia, 123995
46
Boothalingam S, Allan C, Allen P, Purcell D. Cochlear Delay and Medial Olivocochlear Functioning in Children with Suspected Auditory Processing Disorder. PLoS One 2015; 10:e0136906. [PMID: 26317850 PMCID: PMC4552631 DOI: 10.1371/journal.pone.0136906] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/14/2015] [Accepted: 08/09/2015] [Indexed: 11/18/2022] Open
Abstract
Behavioral manifestations of the processing deficits associated with auditory processing disorder (APD) have been well documented. However, little is known about their anatomical underpinnings, especially at the level of cochlear processing. Cochlear delay, a proxy for cochlear tuning measured using stimulus frequency otoacoustic emission (SFOAE) group delay, and the influence of medial olivocochlear (MOC) system activation at the auditory periphery were studied in 23 children suspected of APD (sAPD) and 22 typically developing (TD) children. Results suggest that children suspected of APD have longer SFOAE group delays (possibly due to sharper cochlear tuning) and reduced MOC function compared with TD children. Other differences between the groups include a correlation between MOC function and SFOAE delay in quiet in the TD group, and the lack thereof in the sAPD group. MOC-mediated changes in SFOAE delay were in opposite directions in the two groups: an increase in delay in the TD group versus a reduction in the sAPD group. Longer SFOAE group delays in the sAPD group may reflect longer cochlear filter ringing and a potential increase in forward masking. These results indicate differences in cochlear and MOC function between the sAPD and TD groups. Further studies are warranted to explore the cochlea as a potential site of processing deficits in APD.
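As a hypothetical illustration of the group-delay measure (not the authors' analysis code): SFOAE group delay is the negative slope of emission phase versus frequency, τ = −(1/2π)·dφ/df when phase is in radians and frequency in Hz. A minimal sketch, assuming phase measurements are available across a frequency sweep:

```python
import numpy as np

def sfoae_group_delay(freqs_hz, phase_rad):
    """Estimate group delay (seconds) as the negative slope of
    unwrapped emission phase (radians) versus frequency (Hz)."""
    phase = np.unwrap(np.asarray(phase_rad, dtype=float))
    slope = np.polyfit(freqs_hz, phase, 1)[0]  # radians per Hz
    return -slope / (2 * np.pi)

# Synthetic check: a component delayed by 8 ms has phase -2*pi*f*tau,
# so the estimator should recover tau = 0.008 s.
f = np.linspace(1000.0, 2000.0, 50)
tau_est = sfoae_group_delay(f, -2 * np.pi * f * 0.008)
print(round(tau_est, 6))  # ≈ 0.008
```

In practice SFOAE phase is measured from the emission component extracted by suppression or vector-subtraction paradigms; this sketch only shows the phase-slope arithmetic.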
Collapse
Affiliation(s)
- Sriram Boothalingam
- National Center for Audiology, Western University, London, ON, Canada
- Chris Allan
- National Center for Audiology, Western University, London, ON, Canada
- Prudence Allen
- National Center for Audiology, Western University, London, ON, Canada
- School of Communication Sciences and Disorders, Western University, London, ON, Canada
- David Purcell
- National Center for Audiology, Western University, London, ON, Canada
- School of Communication Sciences and Disorders, Western University, London, ON, Canada
47
Theelen-van den Hoek FL, Boymans M, Dreschler WA. Spectral loudness summation for electrical stimulation in cochlear implant users. Int J Audiol 2015; 54:818-827. [DOI: 10.3109/14992027.2015.1046090] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
48
Kalathottukaren RT, Purdy SC, Ballard E. Prosody perception and musical pitch discrimination in adults using cochlear implants. Int J Audiol 2015; 54:444-452. [DOI: 10.3109/14992027.2014.997314] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
49
Torppa R, Huotilainen M, Leminen M, Lipsanen J, Tervaniemi M. Interplay between singing and cortical processing of music: a longitudinal study in children with cochlear implants. Front Psychol 2014; 5:1389. [PMID: 25540628 PMCID: PMC4261723 DOI: 10.3389/fpsyg.2014.01389] [Citation(s) in RCA: 28] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/26/2014] [Accepted: 11/13/2014] [Indexed: 11/30/2022] Open
Abstract
Informal music activities such as singing may lead to augmented auditory perception and attention. In order to study the accuracy and development of music-related sound-change detection in children with cochlear implants (CIs) and with normal hearing (NH), aged 4–13 years, we recorded their auditory event-related potentials twice (at T1 and T2, 14–17 months apart). We compared their MMN (preattentive discrimination) and P3a (attention toward salient sounds) responses to changes in piano tone pitch, timbre, duration, and gaps. Of particular interest was whether singing can facilitate the auditory perception and attention of CI children. It was found that, compared to the NH group, the CI group had smaller and later timbre P3a and later pitch P3a, implying degraded discrimination and attention shift. Duration MMN became larger from T1 to T2 only in the NH group. The development of response patterns for duration and gap changes was not similar in the CI and NH groups. Importantly, CI singers had enhanced or rapidly developing P3a or P3a-like responses over all change types. In contrast, CI non-singers had rapidly enlarging pitch MMN without enlargement of P3a, and their timbre P3a became smaller and later over time. These novel results show interplay between MMN, P3a, brain development, cochlear implantation, and singing. They imply an augmented development of neural networks for attention and more accurate neural discrimination associated with singing. In future studies, the differential development of P3a between CI and NH children should be taken into account in comparisons of these groups. Moreover, further studies are needed to assess whether singing enhances the auditory perception and attention of children with CIs.
Affiliation(s)
- Ritva Torppa
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Finnish Centre for Interdisciplinary Music Research, University of Helsinki, Helsinki, Finland
- Minna Huotilainen
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Finnish Centre for Interdisciplinary Music Research, University of Helsinki, Helsinki, Finland
- Brain Work Research Centre, Finnish Institute of Occupational Health, Helsinki, Finland
- Miika Leminen
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Finnish Centre for Interdisciplinary Music Research, University of Helsinki, Helsinki, Finland
- MINDLab, Center of Functionally Integrative Neuroscience, Aarhus University, Aarhus, Denmark
- Jari Lipsanen
- Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Mari Tervaniemi
- Cognitive Brain Research Unit, Cognitive Science, Institute of Behavioural Sciences, University of Helsinki, Helsinki, Finland
- Finnish Centre for Interdisciplinary Music Research, University of Helsinki, Helsinki, Finland
50
Effects of age on melody and timbre perception in simulations of electro-acoustic and cochlear-implant hearing. Ear Hear 2014; 35:195-202. [PMID: 24441739 DOI: 10.1097/aud.0b013e3182a69a5c] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES Recent evidence suggests that age might affect the ability of listeners to process fundamental frequency cues in speech, and that this difficulty might impact the ability of older listeners to use and combine envelope and fine structure cues available in simulations of electro-acoustic and cochlear-implant hearing. The purpose of this article is to examine whether this difficulty extends to music. Specifically, this study focuses on whether older listeners have a decreased ability to use and combine different types of cues in the perception of melody and timbre. DESIGN A group of older listeners with normal to near-normal hearing and a group of younger listeners with normal hearing participated in the melody and timbre recognition tasks of the University of Washington Clinical Assessment of Music Perception test. The recognition tasks were completed for five processing conditions: (1) an unprocessed condition; (2) an eight-channel vocoding condition that simulated a traditional cochlear implant and contained temporal envelope cues; (3) a simulation of electro-acoustic stimulation (sEAS) that included a low-pass acoustic component and a high-pass vocoded portion, providing both fine structure and envelope cues; (4) a condition that included only the low-pass acoustic portion of the sEAS stimulus; and (5) a condition that included only the high-frequency vocoded portion of the sEAS stimulus. RESULTS Melody recognition was excellent for both younger and older listeners in the conditions containing the unprocessed stimuli, the full sEAS stimuli, and the low-pass sEAS stimuli. Melody recognition was significantly worse in the cochlear-implant simulation condition, especially for the older group of listeners. Performance on the timbre task was highest for the unprocessed condition, and progressively decreased for the sEAS and cochlear-implant simulation conditions.
Compared with younger listeners, older listeners had significantly poorer timbre recognition for all processing conditions. For melody recognition, the unprocessed low-frequency portion of the sEAS stimulus was the primary factor determining improved performance in the sEAS condition compared with the cochlear-implant simulation. For timbre recognition, both the unprocessed low-frequency and high-frequency vocoded portions of the sEAS stimulus contributed to sEAS improvement in the younger group. In contrast, most listeners in the older group were not able to take advantage of the high-frequency vocoded portion of the sEAS stimulus for timbre recognition. CONCLUSIONS The results of this simulation study support the idea that older listeners will have diminished timbre and melody perception in traditional cochlear-implant listening due to degraded envelope processing. The findings also suggest that music perception by older listeners with cochlear implants will be improved with the addition of low-frequency residual hearing. However, these improvements might not be comparable for all dimensions of music perception. That is, more improvement might be evident for tasks that rely primarily on the low-frequency portion of the electro-acoustic stimulus (e.g., melody recognition), and less improvement might be evident in situations that require across-frequency integration of cues (e.g., timbre perception).
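The eight-channel vocoding condition described above can be sketched roughly as follows. This is a generic noise-vocoder illustration, not the processing used in the study; the channel spacing (log-spaced 100–8000 Hz), the moving-average envelope smoothing, and the noise carriers are all assumptions:

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=8, lo=100.0, hi=8000.0):
    """Toy noise vocoder: split `signal` into log-spaced bands,
    extract each band's temporal envelope, and use it to modulate
    band-limited noise. Temporal fine structure is discarded, as in
    a basic cochlear-implant simulation."""
    edges = np.geomspace(lo, hi, n_channels + 1)
    freqs = np.fft.rfftfreq(len(signal), 1.0 / fs)
    spec_sig = np.fft.rfft(signal)
    rng = np.random.default_rng(0)
    spec_noise = np.fft.rfft(rng.standard_normal(len(signal)))
    # crude envelope lowpass: ~16 ms moving average
    k = max(1, int(0.016 * fs))
    kernel = np.ones(k) / k
    out = np.zeros(len(signal))
    for b in range(n_channels):
        mask = (freqs >= edges[b]) & (freqs < edges[b + 1])
        band = np.fft.irfft(spec_sig * mask, len(signal))
        env = np.convolve(np.abs(band), kernel, mode="same")
        carrier = np.fft.irfft(spec_noise * mask, len(signal))
        out += env * carrier
    return out

# Example: vocode a 1 kHz tone sampled at 16 kHz
fs = 16000
t = np.arange(int(0.2 * fs)) / fs
vocoded = noise_vocode(np.sin(2 * np.pi * 1000 * t), fs)
```

A faithful CI or sEAS simulation would instead use calibrated analysis filters and envelope cutoffs matched to the processor being modeled, and an sEAS condition would pass the low-frequency band through unprocessed.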