1
Yang AW, Pillion EM, Riley CA, Tolisano AM. Differences in music appreciation between bilateral and single-sided cochlear implant recipients. Am J Otolaryngol 2024; 45:104331. PMID: 38677147. DOI: 10.1016/j.amjoto.2024.104331.
Abstract
OBJECTIVE To compare changes in music appreciation after cochlear implant (CI) surgery for patients with bilateral and single-sided deafness (SSD). METHODS A retrospective cohort study was performed on all adult unilateral or bilateral CI recipients from November 2019 to March 2023. Musical questionnaire subset data from the Cochlear Implant Quality of Life (CIQOL)-35 Profile instrument (maximum raw score of 15) were collected. Functional CI assessment was measured with CI-alone speech-in-quiet (SIQ) scores (AzBio and CNC). RESULTS 22 adults underwent CI surgery for SSD and 21 adults for bilateral deafness (8 sequentially implanted). Every patient group had clinically significant improvements (p < 0.001) in mean SIQ scores in the most recently implanted ear (AzBio (% correct) SSD: 14.23 to 68.48; bilateral: 24.54 to 82.23; sequential: 6.25 to 82.57). SSD adults on average had higher music QOL scores at baseline (SSD: 11.05; bilateral: 7.86, p < 0.001). No group had a significant increase in raw score at the first post-operative visit (SSD: 11.45, p = 0.86; bilateral: 8.15, p = 0.15). By the most recent post-implantation evaluation (median 12.8 months for SSD, 12.3 months for bilateral), SSD adults had a significant increase in raw score from baseline (11.05 to 12.45, p = 0.03), whereas bilaterally deafened adults had a nonsignificant increase (7.86 to 9.38, p = 0.12). CONCLUSIONS SSD patients demonstrate higher baseline music appreciation than bilaterally deafened individuals regardless of unilateral or bilateral implantation, and are more likely to demonstrate continued improvement in subjective music appreciation at last follow-up even when speech perception outcomes are similar.
Affiliation(s)
- Alex W Yang
- Department of Otolaryngology Head and Neck Surgery, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Elicia M Pillion
- Department of Audiology, Walter Reed National Military Medical Center, Bethesda, MD, USA
- Charles A Riley
- Department of Otolaryngology Head and Neck Surgery, Walter Reed National Military Medical Center, Bethesda, MD, USA; Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
- Anthony M Tolisano
- Department of Otolaryngology Head and Neck Surgery, Walter Reed National Military Medical Center, Bethesda, MD, USA; Department of Surgery, Uniformed Services University of the Health Sciences, Bethesda, MD, USA
2
Khayr R, Khnifes R, Shpak T, Banai K. Task-Specific Rapid Auditory Perceptual Learning in Adult Cochlear Implant Recipients: What Could It Mean for Speech Recognition. Ear Hear 2024:00003446-990000000-00285. PMID: 38829780. DOI: 10.1097/aud.0000000000001523.
Abstract
OBJECTIVES Speech recognition in cochlear implant (CI) recipients is quite variable, particularly in challenging listening conditions. Demographic, audiological, and cognitive factors explain some, but not all, of this variance. The literature suggests that rapid auditory perceptual learning explains unique variance in speech recognition in listeners with normal hearing and those with hearing loss. The present study focuses on the early adaptation phase of task-specific rapid auditory perceptual learning. It investigates whether adult CI recipients exhibit this learning and, if so, whether it accounts for portions of the variance in their recognition of fast speech and speech in noise. DESIGN Thirty-six adult CI recipients (ages = 35 to 77, M = 55) completed a battery of general speech recognition tests (sentences in speech-shaped noise, four-talker babble noise, and natural-fast speech), cognitive measures (vocabulary, working memory, attention, and verbal processing speed), and a rapid auditory perceptual learning task with time-compressed speech. Accuracy in the general speech recognition tasks was modeled with a series of generalized mixed models that accounted for demographic, audiological, and cognitive factors before accounting for the contribution of task-specific rapid auditory perceptual learning of time-compressed speech. RESULTS Most CI recipients exhibited early task-specific rapid auditory perceptual learning of time-compressed speech within the course of the first 20 sentences. This early task-specific rapid auditory perceptual learning made a unique contribution to the recognition of natural-fast speech in quiet and speech in noise, although the contribution to natural-fast speech may reflect the rapid learning that occurred in this task.
When accounting for demographic and cognitive characteristics, an increase of 1 SD in the early task-specific rapid auditory perceptual learning rate was associated with a ~52% increase in the odds of correctly recognizing natural-fast speech in quiet, and a ~19% to 28% increase in the odds of correctly recognizing the different types of speech in noise. Age, vocabulary, attention, and verbal processing speed also had unique contributions to general speech recognition, although their contributions varied between the different general speech recognition tests. CONCLUSIONS Consistent with previous findings in other populations, early task-specific rapid auditory perceptual learning in CI recipients also accounts for some of the individual differences in the recognition of speech in noise and natural-fast speech in quiet. Thus, across populations, the early rapid adaptation phase of task-specific rapid auditory perceptual learning might serve as a skill that supports speech recognition in various adverse conditions. In CI users, the ability to rapidly adapt to ongoing acoustical challenges may be one of the factors associated with good CI outcomes. Overall, CI recipients with higher cognitive resources and faster rapid learning rates had better speech recognition.
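The odds-based effect sizes reported above follow from the standard logistic-regression identity OR = exp(β). A minimal sketch (Python); note the coefficient value here is back-calculated from the reported ~52% figure purely for illustration and is not taken from the study's models:

```python
import math

def odds_ratio(beta: float) -> float:
    """Odds ratio implied by a logistic-regression coefficient
    for a one-unit (here: 1 SD) increase in the predictor."""
    return math.exp(beta)

def pct_change_in_odds(beta: float) -> float:
    """Percent change in the odds implied by the coefficient."""
    return (math.exp(beta) - 1.0) * 100.0

# Illustrative only: a coefficient of ln(1.52) ~ 0.42 per SD of
# learning rate would reproduce the reported ~52% increase in the
# odds of correctly recognizing natural-fast speech in quiet.
beta_learning = math.log(1.52)
```

A coefficient of 0 maps to an odds ratio of 1 (no effect), which is the reference point for all of the percentages quoted in the abstract.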
Affiliation(s)
- Ranin Khayr
- Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Studies, University of Haifa, Haifa, Israel
- Department of Otolaryngology-Head and Neck Surgery, Bnai-Zion Medical Center, Technion-Bruce Rappaport Faculty of Medicine, Haifa, Israel
- Riyad Khnifes
- Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Studies, University of Haifa, Haifa, Israel
- Department of Otolaryngology-Head and Neck Surgery, Bnai-Zion Medical Center, Technion-Bruce Rappaport Faculty of Medicine, Haifa, Israel
- Talma Shpak
- Department of Otolaryngology-Head and Neck Surgery, Bnai-Zion Medical Center, Technion-Bruce Rappaport Faculty of Medicine, Haifa, Israel
- Karen Banai
- Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Studies, University of Haifa, Haifa, Israel
3
Caprini F, Zhao S, Chait M, Agus T, Pomper U, Tierney A, Dick F. Generalization of auditory expertise in audio engineers and instrumental musicians. Cognition 2024; 244:105696. PMID: 38160651. DOI: 10.1016/j.cognition.2023.105696.
Abstract
From auditory perception to general cognition, the ability to play a musical instrument has been associated with skills both related and unrelated to music. However, it is unclear if these effects are bound to the specific characteristics of musical instrument training, as little attention has been paid to other populations such as audio engineers and designers, whose auditory expertise may match or surpass that of musicians in specific auditory tasks or more naturalistic acoustic scenarios. We explored this possibility by comparing students of audio engineering (n = 20) to matched conservatory-trained instrumentalists (n = 24) and to naive controls (n = 20) on measures of auditory discrimination, auditory scene analysis, and speech-in-noise perception. We found that audio engineers and performing musicians had generally lower psychophysical thresholds than controls, with pitch perception showing the largest effect size. Compared to controls, audio engineers could better memorise and recall auditory scenes composed of non-musical sounds, whereas instrumental musicians performed best in a sustained selective attention task with two competing streams of tones. Finally, in a diotic speech-in-babble task, musicians showed lower signal-to-noise ratio thresholds than both controls and engineers; however, a follow-up online study did not replicate this musician advantage. We also observed differences in personality that might account for group-based self-selection biases. Overall, we showed that investigating a wider range of forms of auditory expertise can help us corroborate (or challenge) the specificity of the advantages previously associated with musical instrument training.
Affiliation(s)
- Francesco Caprini
- Department of Psychological Sciences, Birkbeck, University of London, UK.
- Sijia Zhao
- Department of Experimental Psychology, University of Oxford, UK
- Maria Chait
- University College London (UCL) Ear Institute, UK
- Trevor Agus
- School of Arts, English and Languages, Queen's University Belfast, UK
- Ulrich Pomper
- Department of Cognition, Emotion, and Methods in Psychology, Universität Wien, Austria
- Adam Tierney
- Department of Psychological Sciences, Birkbeck, University of London, UK
- Fred Dick
- Department of Experimental Psychology, University College London (UCL), UK
4
Drouin JR, Flores S. Effects of training length on adaptation to noise-vocoded speech. J Acoust Soc Am 2024; 155:2114-2127. PMID: 38488452. DOI: 10.1121/10.0025273.
Abstract
Listeners show rapid perceptual learning of acoustically degraded speech, though the amount of exposure required to maximize speech adaptation is unspecified. The current work used a single-session design to examine the length of auditory training on perceptual learning for normal hearing listeners exposed to eight-channel noise-vocoded speech. Participants completed short, medium, or long training using a two-alternative forced choice sentence identification task with feedback. To assess learning and generalization, a 40-trial pre-test and post-test transcription task was administered using trained and novel sentences. Training results showed all groups performed near ceiling with no reliable differences. For test data, we evaluated changes in transcription accuracy using separate linear mixed models for trained or novel sentences. In both models, we observed a significant improvement in transcription at post-test relative to pre-test. Critically, the three training groups did not differ in the magnitude of improvement following training. Subsequent Bayes factors analysis evaluating the test by group interaction provided strong evidence in support of the null hypothesis. For these stimuli and procedure, results suggest increased training does not necessarily maximize learning outcomes; both passive and trained experience likely supported adaptation. Findings may contribute to rehabilitation recommendations for listeners adapting to degraded speech signals.
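The noise-vocoding manipulation above depends on how the speech spectrum is partitioned into channels before envelope extraction. A minimal sketch of an eight-channel filter-bank layout (Python); the 200-7000 Hz range and the logarithmic spacing are illustrative assumptions, as the study's actual corner frequencies are not given here:

```python
import math

def band_edges(f_lo: float, f_hi: float, n_channels: int) -> list:
    """Corner frequencies for an n-channel vocoder filter bank,
    spaced evenly on a logarithmic frequency axis. Each adjacent
    pair of edges delimits one analysis band whose temporal
    envelope would modulate a noise carrier in a noise vocoder."""
    lo, hi = math.log10(f_lo), math.log10(f_hi)
    step = (hi - lo) / n_channels
    return [10.0 ** (lo + i * step) for i in range(n_channels + 1)]

edges = band_edges(200.0, 7000.0, 8)  # 9 edges delimit 8 channels
```

Fewer channels (wider bands) degrade the spectral detail more; eight channels, as used in the study, preserves enough structure for sentence recognition to improve with exposure.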
Affiliation(s)
- Julia R Drouin
- Division of Speech and Hearing Sciences, University of North Carolina at Chapel Hill, Chapel Hill, North Carolina 27599, USA
- Stephany Flores
- Department of Communication Sciences and Disorders, California State University Fullerton, Fullerton, California 92831, USA
5
Walia A, Shew MA, Lefler SM, Ortmann AJ, Durakovic N, Wick CC, Herzog JA, Buchman CA. Factors Affecting Performance in Adults With Cochlear Implants: A Role for Cognition and Residual Cochlear Function. Otol Neurotol 2023; 44:988-996. PMID: 37733968. PMCID: PMC10840600. DOI: 10.1097/mao.0000000000004015.
Abstract
OBJECTIVE To evaluate the impact of preoperative and perioperative factors on postlinguistic adult cochlear implant (CI) performance and design a multivariate prediction model. STUDY DESIGN Prospective cohort study. SETTING Tertiary referral center. PATIENTS AND INTERVENTIONS Two hundred thirty-nine postlinguistic adult CI recipients. MAIN OUTCOME MEASURES Speech-perception testing (consonant-nucleus-consonant [CNC], AzBio in noise +10-dB signal-to-noise ratio) at 3, 6, and 12 months postoperatively; electrocochleography-total response (ECochG-TR) at the round window before electrode insertion. RESULTS ECochG-TR strongly correlated with CNC word score at 6 months (r = 0.71, p < 0.0001). A multivariable linear regression model including age, duration of hearing loss, angular insertion depth, and ECochG-TR did not perform significantly better than ECochG-TR alone in explaining the variability in CNC. AzBio in noise at 6 months had moderate linear correlations with the Montreal Cognitive Assessment (MoCA; r = 0.38, p < 0.0001) and ECochG-TR (r = 0.42, p < 0.0001). ECochG-TR, MoCA, and their interaction explained 45.1% of the variability in AzBio in noise scores. CONCLUSIONS This study uses the most comprehensive data set to date to validate ECochG-TR as a measure of cochlear health as it relates to suitability for CI stimulation, and it further underscores the importance of the cochlear neural substrate as the main driver of speech perception performance. Performance in noise is more complex and requires both good residual cochlear function (ECochG-TR) and cognition (MoCA). Other demographic, audiologic, and surgical variables are poorly correlated with CI performance, suggesting that they are poor surrogates for the integrity of the auditory substrate.
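The correlations reported above translate directly into proportions of variance explained (r squared), which is how the single-predictor result compares with the 45.1% figure from the interaction model. A quick sketch (Python):

```python
def variance_explained(r: float) -> float:
    """Proportion of outcome variance explained by a single
    predictor, given its Pearson correlation with the outcome."""
    return r * r

# ECochG-TR vs. 6-month CNC: r = 0.71 -> ~50% of the variance,
# which is why adding demographic and surgical covariates in the
# multivariable model gained little over ECochG-TR alone.
r2_cnc = variance_explained(0.71)
```

By the same arithmetic, the weaker correlations with AzBio in noise (r = 0.38 and r = 0.42) each explain well under 20% of variance on their own, consistent with the abstract's point that performance in noise needs both cochlear function and cognition.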
Affiliation(s)
- Amit Walia
- Department of Otolaryngology-Head and Neck Surgery, Washington University School of Medicine in St. Louis, St. Louis, Missouri
6
Cychosz M, Xu K, Fu QJ. Effects of spectral smearing on speech understanding and masking release in simulated bilateral cochlear implants. PLoS One 2023; 18:e0287728. PMID: 37917727. PMCID: PMC10621938. DOI: 10.1371/journal.pone.0287728.
Abstract
Differences in spectro-temporal degradation may explain some variability in cochlear implant users' speech outcomes. The present study employs vocoder simulations in listeners with typical hearing to evaluate how differences in the degree of channel interaction across ears affect spatial speech recognition. Speech recognition thresholds and spatial release from masking were measured in 16 normal-hearing subjects listening to simulated bilateral cochlear implants. Sixteen-channel sine-vocoded speech simulated limited, broad, or mixed channel interaction across ears, in dichotic and diotic target-masker conditions. Thresholds were highest with broad channel interaction in both ears but improved when interaction decreased in one ear, and further when it decreased in both ears. Masking release was apparent across conditions. Results from this simulation study in listeners with typical hearing show that channel interaction may impact speech recognition more than masking release, and may have implications for the effects of channel interaction on cochlear implant users' speech recognition outcomes.
Affiliation(s)
- Margaret Cychosz
- Department of Linguistics, University of California, Los Angeles, Los Angeles, CA, United States of America
- Kevin Xu
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States of America
- Qian-Jie Fu
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States of America
7
Chu TSM, Chan J. The 100 Most-Cited Manuscripts in Hearing Implants: A Bibliometrics Analysis. Cureus 2023; 15:e33711. PMID: 36793822. PMCID: PMC9925031. DOI: 10.7759/cureus.33711.
Abstract
The aim of the study was to characterise the most frequently cited articles on the topic of hearing implants. A systematic search was carried out using the Thomson Reuters Web of Science Core Collection database. Eligibility criteria restricted the results to primary studies and reviews published from 1970 to 2022 in English dealing primarily with hearing implants. Data including the authors, year of publication, journal, country of origin, number of citations and average number of citations per year were extracted, as well as the impact factors and five-year impact factor of journals publishing the articles. The top 100 papers were published across 23 journals and were cited 23,139 times. The most-cited and influential article describes the first use of the continuous interleaved sampling (CIS) strategy utilised in all modern cochlear implants. More than half of the studies on the list were produced by authors from the United States, and the Ear and Hearing journal had both the greatest number of articles and the greatest number of total citations. To conclude, this research serves as a guide to the most influential articles on the topic of hearing implants, although bibliometric analyses mainly focus on citations. The most-cited article was an influential description of CIS.
8
Image-Guided Cochlear Implant Programming: A Systematic Review and Meta-analysis. Otol Neurotol 2022; 43:e924-e935. PMID: 35973035. DOI: 10.1097/mao.0000000000003653.
Abstract
OBJECTIVE To review studies evaluating clinically implemented image-guided cochlear implant programming (IGCIP) and to determine its effect on cochlear implant (CI) performance. DATA SOURCES PubMed, EMBASE, and Google Scholar were searched for English-language publications from inception to August 1, 2021. STUDY SELECTION Included studies prospectively compared intraindividual CI performance between an image-guided experimental map and a patient's preferred traditional map. Non-English studies, cadaveric studies, and studies where imaging did not directly inform programming were excluded. DATA EXTRACTION Seven studies were identified for review, and five reported comparable components of audiological testing and follow-up times appropriate for meta-analysis. Demographic, speech, spectral modulation, pitch accuracy, and quality-of-life survey data were collected. Aggregate data were used when individual data were unavailable. DATA SYNTHESIS Audiological test outcomes were evaluated as standardized mean change (95% confidence interval) using random-effects meta-analysis with raw score standardization. Improvements in speech and quality-of-life measures using the IGCIP map demonstrated nominal effect sizes: consonant-nucleus-consonant words, 0.15 (-0.12 to 0.42); AzBio quiet, 0.09 (-0.05 to 0.22); AzBio +10 dB signal-to-noise ratio, 0.14 (-0.01 to 0.30); Bamford-Kowal-Bench sentences in noise, -0.11 (-0.35 to 0.12); Abbreviated Profile of Hearing Aid Benefit, -0.14 (-0.28 to 0.00); and Speech, Spatial and Qualities of Hearing Scale, 0.13 (-0.02 to 0.28). Nevertheless, 79% of patients allowed to keep their IGCIP map opted for continued use after the investigational period. CONCLUSION IGCIP has the potential to precisely guide CI programming. Nominal effect sizes for objective outcome measures fail to fully reflect subjective benefits, given their discordance with the percentage of patients who prefer to maintain their IGCIP map.
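The standardized mean changes above are combined by weighting each study inversely to its variance, and the standard errors are recoverable from the reported 95% confidence intervals. A simplified fixed-effect sketch (Python); the review itself used a random-effects model, which additionally estimates between-study variance, so this is an illustration of the weighting principle rather than the paper's exact computation:

```python
def se_from_ci95(lo: float, hi: float) -> float:
    """Standard error recovered from a symmetric 95% CI
    (half-width divided by the 1.96 normal quantile)."""
    return (hi - lo) / (2.0 * 1.96)

def pooled_effect(studies: list) -> float:
    """Inverse-variance pooled estimate from (estimate, ci_lo, ci_hi)
    triples; a fixed-effect simplification of random-effects pooling."""
    num = den = 0.0
    for est, lo, hi in studies:
        w = 1.0 / se_from_ci95(lo, hi) ** 2
        num += w * est
        den += w
    return num / den

# e.g. the CNC effect 0.15 (-0.12 to 0.42) implies SE ~ 0.138,
# and every interval above spans zero, hence the "nominal" sizes.
```

With equal-width intervals the pooled estimate reduces to a plain average; precision differences between studies are what shift it.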
9
Effect of Serious Gaming on Speech-in-Noise Intelligibility in Adult Cochlear Implantees: A Randomized Controlled Study. J Clin Med 2022; 11:2880. PMID: 35629004. PMCID: PMC9145632. DOI: 10.3390/jcm11102880.
Abstract
Listening in noise remains challenging for adults with cochlear implants (CI) even after prolonged experience. Personalized auditory training (AT) programs can be proposed to improve specific auditory skills in adults with CI. The objective of this study was to assess serious gaming as a rehabilitation tool to improve speech-in-noise intelligibility in adult CI users. Thirty subjects with bilateral profound hearing loss and at least 9 months of CI experience were randomized to participate in a 5-week serious game-based AT program (n = 15) or a control group (n = 15). All participants were tested at enrolment and at 5 weeks using the sentence recognition-in-noise matrix test to measure the signal-to-noise ratio (SNR) allowing 70% speech-in-noise understanding (70% speech reception threshold, SRT70). Thirteen subjects completed the AT program and nine of them were re-tested 5 weeks later. The mean SRT70 improved from 15.5 dB to 11.5 dB SNR after 5 weeks of AT (p < 0.001). No significant change in SRT70 was observed in the control group. In the study group, the magnitude of SRT70 improvement was not correlated with the total number of AT hours. Large inter-patient variability was observed for speech-in-noise intelligibility measured once the AT program was completed and at re-test. The results suggest that serious game-based AT may improve speech-in-noise intelligibility in adult CI users. Potential sources of inter-patient variability are discussed. Serious gaming may be considered a complementary training approach for improving CI outcomes in adults.
10
Sladen DP, Zeitler DM. Speech perception abilities of adult cochlear implant listeners with single-sided deafness vs. bilateral hearing loss. Cochlear Implants Int 2022; 23:225-231. PMID: 35506493. DOI: 10.1080/14670100.2022.2054098.
Abstract
OBJECTIVES The purpose of this study was to compare the speech perception abilities of adult cochlear implant recipients implanted for bilateral sensorineural hearing loss (BSNHL) with those implanted for single-sided deafness (SSD). DESIGN A total of 12 adults with BSNHL and 12 adults with SSD participated. Each participant completed a battery of speech perception measures including monosyllabic words, sentences, and consonant recognition. RESULTS Cochlear implant users with BSNHL scored higher on word and sentence recognition. Consonant recognition scores showed better performance for CI listeners with BSNHL for voicing and manner, but not for place of articulation. CONCLUSIONS Results of this study suggest that adults with SSD may have poorer speech perception abilities with their cochlear implant when compared to adults implanted for BSNHL.
11
The benefit of hearing aids in adults with hearing loss during the COVID-19 pandemic. J Surg Med 2022. DOI: 10.28982/josam.997222.
12
Lawrence BJ, Eikelboom RH, Jayakody DMP. Auditory-cognitive training for adult cochlear implant recipients: a study protocol for a randomised controlled trial. Trials 2021; 22:793. PMID: 34772432. PMCID: PMC8588651. DOI: 10.1186/s13063-021-05714-7.
Abstract
Background There is an urgent need to develop new therapies to improve cognitive function in adults following cochlear implant surgery. This study aims to determine if completing at-home computer-based brain training activities improve memory and thinking skills in adults following their first cochlear implant. Methods This study will be conducted as a single-blind, head-to-head, randomised controlled trial (RCT). It will determine whether auditory training combined with adaptive computerised cognitive training will elicit greater improvement in cognition, sound and speech perception, mood, and quality of life outcomes in adult cochlear implant recipients, when compared to auditory training combined with non-adaptive (i.e. placebo) computerised cognitive training. Participants 18 years or older who meet the clinical criteria for a cochlear implant will be recruited into the study. Results The results of this trial will clarify whether the auditory training combined with cognitive training will improve cognition, sound and speech perception, mood, and quality of life outcomes in adult cochlear implant recipients. Discussion We anticipate that our findings will have implications for clinical practice in the treatment of adult cochlear implant recipients. Trial registration Australian New Zealand Clinical Trials Registry ACTRN12619000609156. Registered on April 23 2019.
Affiliation(s)
- Blake J Lawrence
- School of Population Health, Curtin University, Bentley, WA, Australia
- Robert H Eikelboom
- Ear Science Institute Australia, 1 Salvado Road, Subiaco, WA, 6008, Australia; Ear Sciences Centre, Medical School, The University of Western Australia, Crawley, WA, Australia; Department of Speech-Language Pathology and Audiology, University of Pretoria, Pretoria, South Africa
- Dona M P Jayakody
- Ear Science Institute Australia, 1 Salvado Road, Subiaco, WA, 6008, Australia; Ear Sciences Centre, Medical School, The University of Western Australia, Crawley, WA, Australia
13
Xu K, Willis S, Gopen Q, Fu QJ. Effects of Spectral Resolution and Frequency Mismatch on Speech Understanding and Spatial Release From Masking in Simulated Bilateral Cochlear Implants. Ear Hear 2021; 41:1362-1371. PMID: 32132377. DOI: 10.1097/aud.0000000000000865.
Abstract
OBJECTIVES Due to interaural frequency mismatch, bilateral cochlear-implant (CI) users may be less able to take advantage of binaural cues that normal-hearing (NH) listeners use for spatial hearing, such as interaural time differences and interaural level differences. As such, bilateral CI users have difficulty segregating competing speech even when the target and competing talkers are spatially separated. The goal of this study was to evaluate the effects of spectral resolution, tonotopic mismatch (the mismatch between the acoustic center frequency assigned to a CI electrode within an implanted ear and the expected spiral ganglion characteristic frequency), and interaural mismatch (differences in the degree of tonotopic mismatch in each ear) on speech understanding and spatial release from masking (SRM) in the presence of competing talkers in NH subjects listening to bilateral vocoder simulations. DESIGN During testing, both target and masker speech were presented in five-word sentences that had the same syntax but were not necessarily meaningful. The sentences were composed of five categories in fixed order (Name, Verb, Number, Color, and Clothes), each of which had 10 items, such that multiple sentences could be generated by randomly selecting a word from each category. Speech reception thresholds (SRTs) for the target sentence presented in competing speech maskers were measured. The target speech was delivered to both ears and the two speech maskers were delivered to (1) both ears (diotic masker), or (2) different ears (dichotic masker: one delivered to the left ear and the other delivered to the right ear). Stimuli included unprocessed speech and four 16-channel sine-vocoder simulations with different degrees of interaural mismatch (0, 1, and 2 mm). SRM was calculated as the difference between the diotic and dichotic listening conditions. RESULTS With unprocessed speech, SRTs were 0.3 and -18.0 dB for the diotic and dichotic maskers, respectively.
For the spectrally degraded speech with mild tonotopic mismatch and no interaural mismatch, SRTs were 5.6 and -2.0 dB for the diotic and dichotic maskers, respectively. When the tonotopic mismatch increased in both ears, SRTs worsened to 8.9 and 2.4 dB for the diotic and dichotic maskers, respectively. When the two ears had different tonotopic mismatch (i.e., there was interaural mismatch), the drop in SRTs was much larger for the dichotic than for the diotic masker. The largest SRM was observed with unprocessed speech (18.3 dB). With the CI simulations, SRM was significantly reduced to 7.6 dB even with mild tonotopic mismatch but no interaural mismatch; SRM was further reduced with increasing interaural mismatch. CONCLUSIONS The results demonstrate that frequency resolution, tonotopic mismatch, and interaural mismatch have differential effects on speech understanding and SRM in simulations of bilateral CIs. Minimizing interaural mismatch may be critical to optimize binaural benefits and improve CI performance for competing speech, a typical listening environment. SRM (the difference in SRTs between diotic and dichotic maskers) may be a useful clinical tool to assess interaural frequency mismatch in bilateral CI users and to evaluate the benefits of optimization methods that minimize interaural mismatch.
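SRM as defined above is simply the threshold difference between the co-located (diotic) and separated (dichotic) masker conditions; a lower SRT is better, so a positive difference is a spatial benefit. A sketch reproducing the abstract's numbers (Python):

```python
def spatial_release_from_masking(srt_diotic_db: float,
                                 srt_dichotic_db: float) -> float:
    """Spatial release from masking in dB: improvement in speech
    reception threshold when the maskers move from diotic to
    dichotic presentation. Lower SRT is better, hence diotic
    minus dichotic."""
    return srt_diotic_db - srt_dichotic_db

# Values from the abstract:
srm_unprocessed = spatial_release_from_masking(0.3, -18.0)  # 18.3 dB
srm_mild_mismatch = spatial_release_from_masking(5.6, -2.0)  # 7.6 dB
```

Even this mild-mismatch vocoder condition recovers less than half of the 18.3 dB release seen with unprocessed speech, which is the paper's central point about spectral degradation.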
Affiliation(s)
- Kevin Xu
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, California, USA
14
Individual Variability in Recalibrating to Spectrally Shifted Speech: Implications for Cochlear Implants. Ear Hear 2021; 42:1412-1427. PMID: 33795617. DOI: 10.1097/aud.0000000000001043.
Abstract
OBJECTIVES Cochlear implant (CI) recipients are at a severe disadvantage compared with normal-hearing listeners in distinguishing consonants that differ by place of articulation, because the key spectral differences are degraded by the implant. One component of that degradation is the upward shifting of spectral energy that occurs with a shallow insertion depth of a CI. The present study aimed to systematically measure the effects of spectral shifting on word recognition and phoneme categorization by specifically controlling the amount of shifting and using stimuli whose identification depends on perceiving frequency cues. We hypothesized that listeners would be biased toward perceiving phonemes that contain higher-frequency components because of the upward frequency shift, and that intelligibility would decrease as spectral shifting increased. DESIGN Normal-hearing listeners (n = 15) heard sine wave-vocoded speech with simulated upward frequency shifts of 0, 2, 4, and 6 mm of cochlear space to simulate shallow CI insertion depth. Stimuli included monosyllabic words and /b/-/d/ and /ʃ/-/s/ continua that varied systematically by formant frequency transitions or frication noise spectral peaks, respectively. Recalibration to spectral shifting was operationally defined as shifting the perceptual acoustic-phonetic mapping commensurate with the spectral shift; in other words, adjusting frequency expectations for both phonemes upward so that the perceptual distinction is preserved, rather than hearing all upward-shifted phonemes as the higher-frequency member of the pair. RESULTS For moderate amounts of spectral shifting, group data suggested a general "halfway" recalibration to spectral shifting, but individual data suggested a notably different conclusion: half of the listeners were able to recalibrate fully, while the other half were utterly unable to categorize shifted speech with any reliability.
There were no participants who demonstrated a pattern intermediate to these two extremes. Intelligibility of words decreased with greater amounts of spectral shifting, also showing loose clusters of better- and poorer-performing listeners. Phonetic analysis of word errors revealed that certain cues (place and manner of articulation) were more susceptible to being compromised by a frequency shift, while voicing was robust to spectral shifting. CONCLUSIONS Shifting the frequency spectrum of speech has systematic effects that are in line with known properties of speech acoustics, but the ensuing difficulties cannot be predicted on the basis of tonotopic mismatch alone. Difficulties are subject to substantial individual differences in the capacity to adjust acoustic-phonetic mapping. These results help to explain why speech recognition in CI listeners cannot be fully predicted by peripheral factors such as electrode placement and spectral resolution; even among listeners with functionally equivalent auditory input, there is the additional factor of simply being able or unable to flexibly adjust acoustic-phonetic mapping. This individual variability could motivate precise treatment approaches guided by an individual's relative reliance on wideband frequency representation (even if it is mismatched) or limited frequency coverage whose tonotopy is preserved.
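Shifts expressed in "mm of cochlear space," as in this study, are conventionally converted to frequency with the Greenwood (1990) place-frequency function. The abstract does not state the constants used, so the commonly cited human parameter values below are assumptions, for illustration only:

```python
import math

# Greenwood (1990) place-to-frequency map for the human cochlea: the standard
# way "mm of cochlear space" is converted to frequency in vocoder studies.
# A, a, k are the commonly cited human values; individual studies may use
# slightly different constants.
A, a, k = 165.4, 0.06, 0.88  # Hz, 1/mm, integration constant

def place_to_freq(x_mm: float) -> float:
    """Characteristic frequency (Hz) at a place x_mm millimetres from the apex."""
    return A * (10 ** (a * x_mm) - k)

def freq_to_place(f_hz: float) -> float:
    """Inverse map: distance from the apex (mm) for a given frequency."""
    return math.log10(f_hz / A + k) / a

# Example: apply a 4-mm basal shift to the place normally tuned to 1 kHz.
x = freq_to_place(1000.0)           # about 14 mm from the apex
shifted = place_to_freq(x + 4.0)    # frequency heard 4 mm more basally
print(f"1 kHz shifted 4 mm toward the base maps to roughly {shifted:.0f} Hz")
```

Because the map is exponential in place, a fixed shift in mm compresses low frequencies proportionally more than high ones, which is one reason shifted /b/-/d/ formant cues and /ʃ/-/s/ frication peaks are affected differently.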
15
Chavant M, Hervais-Adelman A, Macherey O. Perceptual Learning of Vocoded Speech With and Without Contralateral Hearing: Implications for Cochlear Implant Rehabilitation. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:196-205. [PMID: 33267729 DOI: 10.1044/2020_jslhr-20-00385] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/12/2023]
Abstract
Purpose An increasing number of individuals with residual or even normal contralateral hearing are being considered for cochlear implantation. It remains unknown whether the presence of contralateral hearing is beneficial or detrimental to their perceptual learning of cochlear implant (CI)-processed speech. The aim of this experiment was to provide a first insight into this question using acoustic simulations of CI processing. Method Sixty normal-hearing listeners took part in an auditory perceptual learning experiment. Each subject was randomly assigned to one of three groups of 20 referred to as NORMAL, LOWPASS, and NOTHING. The experiment consisted of two test phases separated by a training phase. In the test phases, all subjects were tested on recognition of monosyllabic words passed through a six-channel "PSHC" vocoder presented to a single ear. In the training phase, which consisted of listening to a 25-min audio book, all subjects were also presented with the same vocoded speech in one ear but the signal they received in their other ear differed across groups. The NORMAL group was presented with the unprocessed speech signal, the LOWPASS group with a low-pass filtered version of the speech signal, and the NOTHING group with no sound at all. Results The improvement in speech scores following training was significantly smaller for the NORMAL than for the LOWPASS and NOTHING groups. Conclusions This study suggests that the presentation of normal speech in the contralateral ear reduces or slows down perceptual learning of vocoded speech but that an unintelligible low-pass filtered contralateral signal does not have this effect. Potential implications for the rehabilitation of CI patients with partial or full contralateral hearing are discussed.
Affiliation(s)
- Martin Chavant
- Aix-Marseille University, Centre National de la Recherche Scientifique, Centrale Marseille, Laboratoire de Mécanique et d'Acoustique, France
- Olivier Macherey
- Aix-Marseille University, Centre National de la Recherche Scientifique, Centrale Marseille, Laboratoire de Mécanique et d'Acoustique, France
16
Goupell MJ, Draves GT, Litovsky RY. Recognition of vocoded words and sentences in quiet and multi-talker babble with children and adults. PLoS One 2020; 15:e0244632. [PMID: 33373427 PMCID: PMC7771688 DOI: 10.1371/journal.pone.0244632] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/14/2020] [Accepted: 12/14/2020] [Indexed: 11/18/2022] Open
Abstract
A vocoder is used to simulate cochlear-implant sound processing in normal-hearing listeners. Typically, there is rapid improvement in vocoded speech recognition, but it is unclear whether the rate of improvement differs across age groups and speech materials. Children (8–10 years) and young adults (18–26 years) were trained and tested over 2 days (4 hours) on recognition of eight-channel noise-vocoded words and sentences, in quiet and in the presence of multi-talker babble at signal-to-noise ratios of 0, +5, and +10 dB. Children achieved poorer performance than adults in all conditions, for both word and sentence recognition. With training, improvement rates in vocoded speech recognition did not differ significantly between children and adults, suggesting that learning to process speech cues degraded by vocoding shows no developmental differences across these age groups or types of speech materials. Furthermore, this result confirms that the acutely measured age difference in vocoded speech recognition persists after extended training.
Affiliation(s)
- Matthew J. Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, MD, United States of America
- Garrison T. Draves
- Waisman Center, University of Wisconsin, Madison, WI, United States of America
- Ruth Y. Litovsky
- Waisman Center, University of Wisconsin, Madison, WI, United States of America
- Department of Communication Sciences and Disorders, University of Wisconsin, Madison, WI, United States of America
17
Effects of noise on integration of acoustic and electric hearing within and across ears. PLoS One 2020; 15:e0240752. [PMID: 33057396 PMCID: PMC7561114 DOI: 10.1371/journal.pone.0240752] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/09/2020] [Accepted: 10/01/2020] [Indexed: 11/19/2022] Open
Abstract
In bimodal listening, cochlear implant (CI) users combine electric hearing (EH) in one ear and acoustic hearing (AH) in the other ear. In electric-acoustic stimulation (EAS), CI users combine EH and AH in the same ear. In quiet, integration of EH and AH has been shown to be better with EAS, but with greater sensitivity to tonotopic mismatch in EH. The goal of the present study was to evaluate how external noise might affect integration of AH and EH within or across ears. Recognition of monosyllabic words was measured for normal-hearing subjects listening to simulations of unimodal (AH or EH alone), EAS, and bimodal listening in quiet and in speech-shaped steady noise (10 and 0 dB signal-to-noise ratios). The input/output frequency range for AH was 0.1–0.6 kHz. EH was simulated using an 8-channel noise vocoder. The output frequency range was 1.2–8.0 kHz to simulate a shallow insertion depth. The input frequency range was either matched (1.2–8.0 kHz) or mismatched (0.6–8.0 kHz) to the output frequency range; the mismatched input range maximized the amount of speech information, while the matched input resulted in some speech information loss. In quiet, tonotopic mismatch differently affected EAS and bimodal performance. In noise, EAS and bimodal performance was similarly affected by tonotopic mismatch. The data suggest that tonotopic mismatch may differently affect integration of EH and AH in quiet and in noise.
18
A Neurophysiological Study of Musical Pitch Identification in Mandarin-Speaking Cochlear Implant Users. Neural Plast 2020; 2020:4576729. [PMID: 32774355 PMCID: PMC7396015 DOI: 10.1155/2020/4576729] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/02/2020] [Revised: 05/26/2020] [Accepted: 06/24/2020] [Indexed: 02/06/2023] Open
Abstract
Music perception in cochlear implant (CI) users is far from satisfactory, not only because of the technological limitations of current CI devices but also because of the neurophysiological alterations that generally accompany deafness. Early behavioral studies revealed that similar mechanisms underlie musical and lexical pitch perception in CI-based electric hearing. Although neurophysiological studies of the musical pitch perception of English-speaking CI users are actively ongoing, little such research has been conducted with Mandarin-speaking CI users; as Mandarin is a tonal language, these individuals require pitch information to understand speech. The aim of this work was to study the neurophysiological mechanisms accounting for the musical pitch identification abilities of Mandarin-speaking CI users and normal-hearing (NH) listeners. Behavioral and mismatch negativity (MMN) data were analyzed to examine musical pitch processing performance. Moreover, neurophysiological results from CI users with good and poor pitch discrimination performance (according to just-noticeable difference (JND) and pitch-direction discrimination (PDD) tasks) were compared to identify cortical responses associated with musical pitch perception differences. The MMN experiment was conducted using a passive oddball paradigm, with musical tone C4 (262 Hz) presented as the standard and tones D4 (294 Hz), E4 (330 Hz), G#4 (415 Hz), and C5 (523 Hz) presented as deviants. CI users demonstrated worse musical pitch discrimination ability than did NH listeners, as reflected by larger JND and PDD thresholds for pitch identification, and significantly increased latencies and reduced amplitudes of MMN responses. Good CI performers had better MMN results than did poor performers. Consistent with findings for English-speaking CI users, the results of this work suggest that MMN is a viable marker of cortical pitch perception in Mandarin-speaking CI users.
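The passive oddball paradigm described above interleaves a frequent standard (C4, 262 Hz) with rare deviants (D4, E4, G#4, C5). A minimal sequence-generation sketch; the 20% deviant probability and the no-consecutive-deviants rule are common conventions assumed here, since the abstract does not state them:

```python
# Sketch of a passive-oddball stimulus sequence: frequent standard tone,
# rare deviants, no two deviants in a row (an assumed convention).
import random

STANDARD = 262.0                          # C4
DEVIANTS = [294.0, 330.0, 415.0, 523.0]   # D4, E4, G#4, C5

def oddball_sequence(n_trials: int, p_deviant: float = 0.2, seed: int = 1):
    rng = random.Random(seed)
    seq, prev_was_deviant = [], True      # force the first trial to be a standard
    for _ in range(n_trials):
        if not prev_was_deviant and rng.random() < p_deviant:
            seq.append(rng.choice(DEVIANTS))
            prev_was_deviant = True
        else:
            seq.append(STANDARD)
            prev_was_deviant = False
    return seq

seq = oddball_sequence(1000)
print(f"deviant rate: {sum(t != STANDARD for t in seq) / len(seq):.2f}")
```

The MMN is then computed offline as the difference between the averaged EEG responses to deviants and standards.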
19
Karoui C, James C, Barone P, Bakhos D, Marx M, Macherey O. Searching for the Sound of a Cochlear Implant: Evaluation of Different Vocoder Parameters by Cochlear Implant Users With Single-Sided Deafness. Trends Hear 2020; 23:2331216519866029. [PMID: 31533581 PMCID: PMC6753516 DOI: 10.1177/2331216519866029] [Citation(s) in RCA: 9] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/16/2022] Open
Abstract
Cochlear implantation in subjects with single-sided deafness (SSD) offers a unique opportunity to directly compare the percepts evoked by a cochlear implant (CI) with those evoked acoustically. Here, nine SSD-CI users performed a forced-choice task evaluating the similarity of speech processed by their CI with speech processed by several vocoders presented to their healthy ear. In each trial, subjects heard two intervals: their CI followed by a certain vocoder in Interval 1, and their CI followed by a different vocoder in Interval 2. The vocoders differed either (i) in carrier type (sinusoidal [SINE], bandfiltered noise [NOISE], or pulse-spreading harmonic complex [PSHC]) or (ii) in the frequency mismatch between the analysis and synthesis frequency ranges (no mismatch, or mismatches of 2 and 4 equivalent rectangular bandwidths [ERBs]). Subjects had to state in which of the two intervals the CI and vocoder sounds were more similar. Despite a large intersubject variability, the PSHC vocoder was judged significantly more similar to the CI than the SINE or NOISE vocoders. Furthermore, the no-mismatch and 2-ERB mismatch vocoders were judged significantly more similar to the CI than the 4-ERB mismatch vocoder. The mismatch data were also interpreted by comparing spiral ganglion characteristic frequencies with electrode contact positions determined from postoperative computed tomography scans. Only one subject demonstrated a pattern of preference consistent with adaptation to the CI sound processor's frequency-to-electrode allocation table, and two subjects showed possible partial adaptation. Those subjects with adaptation patterns had overall small and consistent frequency mismatches across their electrode arrays.
Affiliation(s)
- Chadlia Karoui
- Centre de Recherche Cerveau et Cognition, Toulouse, France; Cochlear France SAS, Toulouse, France
- Chris James
- Cochlear France SAS, Toulouse, France; Department of Otology-Neurotology and Skull Base Surgery, Purpan University Hospital, Toulouse, France
- Pascal Barone
- Centre de Recherche Cerveau et Cognition, Toulouse, France
- David Bakhos
- Université François-Rabelais de Tours, CHRU de Tours, France; Ear, Nose and Throat Department, CHRU de Tours, Tours, France
- Mathieu Marx
- Centre de Recherche Cerveau et Cognition, Toulouse, France; Department of Otology-Neurotology and Skull Base Surgery, Purpan University Hospital, Toulouse, France
- Olivier Macherey
- Aix Marseille University, CNRS, Centrale Marseille, LMA, Marseille, France
20
Casaponsa A, Sohoglu E, Moore DR, Füllgrabe C, Molloy K, Amitay S. Does training with amplitude modulated tones affect tone-vocoded speech perception? PLoS One 2019; 14:e0226288. [PMID: 31881550 PMCID: PMC6934405 DOI: 10.1371/journal.pone.0226288] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/04/2019] [Accepted: 11/22/2019] [Indexed: 11/17/2022] Open
Abstract
Temporal-envelope cues are essential for successful speech perception. We asked here whether training on stimuli containing temporal-envelope cues without speech content can improve the perception of spectrally degraded (vocoded) speech, in which the temporal envelope (but not the temporal fine structure) is mainly preserved. Two groups of listeners were trained on different amplitude-modulation (AM) based tasks, either AM detection or AM-rate discrimination (21 blocks of 60 trials over two days, 1260 trials in total; AM rates: 4, 8, and 16 Hz), while an additional control group did not undertake any training. Consonant identification in vocoded vowel-consonant-vowel stimuli was tested before and after training on the AM tasks (or at an equivalent time interval for the control group). Following training, only the trained groups showed a significant improvement in the perception of vocoded speech, but the improvement did not differ significantly from that observed for controls. Thus, we do not find convincing evidence that this amount of training with temporal-envelope cues without speech content provides a significant benefit for vocoded speech intelligibility. Alternative training regimens using vocoded speech along the linguistic hierarchy should be explored.
Affiliation(s)
- Aina Casaponsa
- Medical Research Council Institute of Hearing Research, Nottingham, England, United Kingdom
- Department of Linguistics and English Language, Lancaster University, Lancaster, England, United Kingdom
- Ediz Sohoglu
- Medical Research Council Institute of Hearing Research, Nottingham, England, United Kingdom
- David R. Moore
- Medical Research Council Institute of Hearing Research, Nottingham, England, United Kingdom
- Christian Füllgrabe
- Medical Research Council Institute of Hearing Research, Nottingham, England, United Kingdom
- Katharine Molloy
- Medical Research Council Institute of Hearing Research, Nottingham, England, United Kingdom
- Sygal Amitay
- Medical Research Council Institute of Hearing Research, Nottingham, England, United Kingdom
21
Assessment of Temporal Fine Structure Processing Among Older Adults With Cochlear Implants. Otol Neurotol 2019; 41:327-333. [PMID: 31860474 DOI: 10.1097/mao.0000000000002533] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES The purpose of this study was to determine if older adults with cochlear implants are able to take advantage of coding schemes that preserve temporal fine structure (TFS) cues. DESIGN A total of 19 older adults with cochlear implants participated in a prospective, repeated measures, A to B design. Participants entered the study using TFS. The participants used strategy A (high definition continuous interleaved sampling [HDCIS]) for 3 months and strategy B (TFS) for 3 months. Endpoint testing was administered at the end of each 3-month period. Testing included consonant recognition, speech understanding in noise, temporal modulation thresholds, and self-perceived benefit. RESULTS Older adults were able to use TFS successfully. Speech perception performance was improved using TFS compared with HDCIS for voicing, but not manner or place of articulation. There were no differences between the two strategies for speech understanding in noise, temporal modulation detection, or self-perceived benefit. At the end of the study, 13 out of 19 (68%) of participants chose to continue using TFS processing. CONCLUSIONS Advanced age does not prevent adults with cochlear implants from using TFS coding strategies. Performance outcomes using TFS and HDCIS were similar, with the exception of voicing which was improved when using TFS. The data support the idea of using various sound processing strategies with older adults.
22
Han JH, Lee HJ, Kang H, Oh SH, Lee DS. Brain Plasticity Can Predict the Cochlear Implant Outcome in Adult-Onset Deafness. Front Hum Neurosci 2019; 13:38. [PMID: 30837852 PMCID: PMC6389609 DOI: 10.3389/fnhum.2019.00038] [Citation(s) in RCA: 22] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/16/2018] [Accepted: 01/24/2019] [Indexed: 01/30/2023] Open
Abstract
Sensory plasticity, which is associated with deafness, has not been as thoroughly investigated in the adult brain as it has in the developing brain. In this study, we examined the brain reorganization induced by auditory deprivation in people with adult-onset deafness and its clinical relevance by measuring glucose metabolism before cochlear implant (CI) surgery. F-18 fluorodeoxyglucose positron emission tomography (18F-FDG-PET) scans were performed in 37 postlingually deafened patients during the preoperative workup period, and in 39 normal-hearing (NH) controls. Behavioral CI outcomes were measured at 1 year after implantation using a phoneme identification test with auditory cueing only. In the deaf individuals, areas involved in the auditory pathway such as the inferior colliculus and bilateral superior temporal gyri were hypometabolic compared to the NH controls. The hypometabolism observed in the deaf auditory cortices gradually returned to levels similar to the controls as the duration of deafness increased. However, contrary to our previous findings in congenitally deaf children, this metabolic recovery failed to have a significant prognostic value for the recovery of the speech perception ability in adult CI patients. In a broad occipital area centered on the primary visual cortices, glucose metabolism was higher in the deaf patients than the controls, suggesting that the area had become visually hyperactive for sensory compensation immediately after the onset of deafness. In addition, a negative correlation between the metabolic activity and behavioral speech perception outcomes was observed in the visual association areas. In the medial frontal cortices, cortical metabolism in most patients decreased, but patients who had preserved metabolic activities showed better speech performance. 
These results suggest that the auditory cortex in people with adult-onset deafness is relatively resistant to cross-modal plasticity, and instead, individual traits in late-stage visual processing and cognitive control seem to be more reliable prognostic markers for adult-onset deafness.
Affiliation(s)
- Ji-Hye Han
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Chuncheon, South Korea
- Hyo-Jeong Lee
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Chuncheon, South Korea; Department of Otorhinolaryngology-Head and Neck Surgery, Hallym University College of Medicine, Chuncheon, South Korea
- Hyejin Kang
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, South Korea; BK21 Plus Global Translational Research on Molecular Medicine and Biopharmaceutical Sciences, Seoul National University, Seoul, South Korea
- Seung-Ha Oh
- Department of Otorhinolaryngology-Head and Neck Surgery, Seoul National University College of Medicine, Seoul, South Korea; Sensory Organ Research Institute, Seoul National University College of Medicine, Seoul, South Korea
- Dong Soo Lee
- Department of Nuclear Medicine, Seoul National University College of Medicine, Seoul, South Korea; Department of Molecular Medicine and Biopharmaceutical Sciences, Graduate School of Convergence Science and Technology, Seoul National University, Seoul, South Korea
23
Mathew R, Vickers D, Boyle P, Shaida A, Selvadurai D, Jiang D, Undurraga J. Development of electrophysiological and behavioural measures of electrode discrimination in adult cochlear implant users. Hear Res 2018; 367:74-87. [DOI: 10.1016/j.heares.2018.07.002] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 01/06/2018] [Revised: 06/20/2018] [Accepted: 07/02/2018] [Indexed: 10/28/2022]
24
Yu F, Li H, Zhou X, Tang X, Galvin III JJ, Fu QJ, Yuan W. Effects of Training on Lateralization for Simulations of Cochlear Implants and Single-Sided Deafness. Front Hum Neurosci 2018; 12:287. [PMID: 30065641 PMCID: PMC6056606 DOI: 10.3389/fnhum.2018.00287] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/09/2018] [Accepted: 06/27/2018] [Indexed: 11/13/2022] Open
Abstract
While cochlear implantation has benefitted many patients with single-sided deafness (SSD), there is great variability in cochlear implant (CI) outcomes, and binaural performance remains poorer than that of normal-hearing (NH) listeners. Differences in sound quality across ears (temporal fine structure [TFS] information with acoustic hearing vs. coarse spectro-temporal envelope information with electric hearing) may limit integration of acoustic and electric patterns. Binaural performance may also be limited by inter-aural mismatch between the acoustic input frequency and the place of stimulation in the cochlea. SSD CI patients must learn to accommodate these differences between acoustic and electric stimulation to maximize binaural performance. It is possible that training may increase and/or accelerate accommodation and further improve binaural performance. In this study, we evaluated lateralization training in NH subjects listening to broad simulations of SSD CI signal processing. A 16-channel vocoder was used to simulate the coarse spectro-temporal cues available with electric hearing; the degree of inter-aural mismatch was varied by adjusting the simulated insertion depth (SID) to be 25 mm (SID25), 22 mm (SID22), or 19 mm (SID19) from the base of the cochlea. Lateralization was measured using headphones and head-related transfer functions (HRTFs). Baseline lateralization was measured for unprocessed speech (UN) delivered to the left ear to simulate SSD and for binaural listening with the acoustic ear combined with each of the 16-channel vocoders (UN+SID25, UN+SID22, and UN+SID19). After completing baseline measurements, subjects completed six lateralization training exercises with the UN+SID22 condition, after which performance was re-measured for all baseline conditions. Post-training performance was significantly better than baseline for all conditions (p < 0.05 in all cases), with no significant difference in training benefits among conditions.
Given that there was no significant difference between the SSD and the SSD CI conditions before or after training, the results suggest that NH listeners were unable to integrate TFS and coarse spectro-temporal cues across ears for lateralization, and that inter-aural mismatch played a secondary role at best. While lateralization training may benefit SSD CI patients, the training may largely improve spectral analysis with the acoustic ear alone, rather than improve integration of acoustic and electric hearing.
Affiliation(s)
- Fei Yu
- Department of Otolaryngology, Southwest Hospital, Third Military Medical University, Chongqing, China
- Hai Li
- Department of Otolaryngology, Southwest Hospital, Third Military Medical University, Chongqing, China
- Xiaoqing Zhou
- Department of Otolaryngology, Southwest Hospital, Third Military Medical University, Chongqing, China
- XiaoLin Tang
- Department of Otolaryngology, Southwest Hospital, Third Military Medical University, Chongqing, China
- Qian-Jie Fu
- Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
- Wei Yuan
- Department of Otolaryngology, Southwest Hospital, Third Military Medical University, Chongqing, China
25
Casserly ED, Wang Y, Celestin N, Talesnick L, Pisoni DB. Supra-Segmental Changes in Speech Production as a Result of Spectral Feedback Degradation: Comparison with Lombard Speech. LANGUAGE AND SPEECH 2018; 61:227-245. [PMID: 28653556 PMCID: PMC6205159 DOI: 10.1177/0023830917713775] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
Perturbations to acoustic speech feedback have typically either been localized to specific phonetic characteristics, for example, fundamental frequency (F0) or the first two formants (F1/F2), or have affected all aspects of the speech signal equally, for example, via the addition of background noise. This paper examines the consequences of a more selective global perturbation: real-time cochlear implant (CI) simulation of acoustic speech feedback. Specifically, we examine the potential similarity between speakers' responses to noise vocoding and the characteristics of Lombard speech. An acoustic analysis of supra-segmental characteristics (speaking rate, F0 production, and voice amplitude) revealed changes that paralleled the Lombard effect in some domains but not others. Two studies of speech intelligibility complemented the acoustic analysis, finding that intelligibility significantly decreased as a result of CI simulation of speaker feedback. Together, the results point to differences in speakers' responses to these two superficially similar feedback manipulations. In both cases we see complex, multi-faceted behavior on the part of talkers. We argue that more instances of global perturbation and broader response assessment are needed to determine whether such complexity is present in other feedback manipulations or whether it represents a relatively rare exception to the typical compensatory feedback response.
26
Ahmed DG, Paquette S, Zeitouni A, Lehmann A. Neural Processing of Musical and Vocal Emotions Through Cochlear Implants Simulation. Clin EEG Neurosci 2018; 49:143-151. [PMID: 28958161 DOI: 10.1177/1550059417733386] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Indexed: 12/16/2022]
Abstract
Cochlear implants (CIs) partially restore the sense of hearing in the deaf. However, the ability to recognize emotions in speech and music is reduced, due to the implant's electrical signal limitations and the patient's altered neural pathways. The electrophysiological correlates of these limitations are not yet well established. Here we aimed to characterize the effect of CIs on auditory emotion processing and, for the first time, to directly compare vocal and musical emotion processing through a CI simulator. We recorded the electroencephalographic activity of 16 normal-hearing participants while they listened to vocal and musical emotional bursts in their original form and in a degraded (CI-simulated) condition. We found prolonged P50 latency and reduced N100-P200 complex amplitude in the CI-simulated condition, pointing to a limitation in encoding sound signals processed through CI simulation. When comparing the processing of vocal and musical bursts, we found delayed latencies for musical bursts relative to vocal bursts in both conditions (original and CI-simulated). This suggests that despite the cochlear implant's limitations, the auditory cortex can distinguish between vocal and musical stimuli, and it adds to the literature supporting the complexity of musical emotion. Replicating this study with actual CI users might help characterize emotional processing in CI users and could ultimately help develop optimal rehabilitation programs or device processing strategies to improve CI users' quality of life.
Affiliation(s)
- Duha G Ahmed
- International Laboratory for Brain Music and Sound Research, Center for Research on Brain, Language and Music, Department of Psychology, University of Montreal, Montreal, Quebec, Canada; Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Quebec, Canada; Department of Otolaryngology, Head and Neck Surgery, King Abdulaziz University, Rabigh Medical College, Jeddah, Saudi Arabia
- Sebastian Paquette
- International Laboratory for Brain Music and Sound Research, Center for Research on Brain, Language and Music, Department of Psychology, University of Montreal, Montreal, Quebec, Canada; Neurology Department, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
- Anthony Zeitouni
- Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Quebec, Canada
- Alexandre Lehmann
- International Laboratory for Brain Music and Sound Research, Center for Research on Brain, Language and Music, Department of Psychology, University of Montreal, Montreal, Quebec, Canada; Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Quebec, Canada
27
Nanjundaswamy M, Prabhu P, Rajanna RK, Ningegowda RG, Sharma M. Computer-Based Auditory Training Programs for Children with Hearing Impairment - A Scoping Review. Int Arch Otorhinolaryngol 2018; 22:88-93. [PMID: 29371904 PMCID: PMC5783687 DOI: 10.1055/s-0037-1602797] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/03/2017] [Accepted: 02/16/2017] [Indexed: 11/09/2022] Open
Abstract
Introduction
Communication breakdown, a consequence of hearing impairment (HI), has been addressed by fitting amplification devices and providing auditory training since the inception of audiology. Advances in both audiology and rehabilitation programs have led to the advent of computer-based auditory training programs (CBATPs).
Objective
To review the existing literature documenting evidence-based CBATPs for children with HI. Since only one such article was found, we also reviewed the commercially available CBATPs for children with HI. The strengths and weaknesses of the existing literature were assessed to inform future research.
Data Synthesis
Google Scholar and PubMed databases were searched using various combinations of keywords. The participant, intervention, control, outcome and study design (PICOS) criteria were used for the inclusion of articles. Out of 124 article abstracts reviewed, 5 studies were shortlisted for detailed reading. One among them satisfied all the criteria, and was taken for review. The commercially available programs were chosen based on an extensive search in Google. The reviewed article was well-structured, with appropriate outcomes. The commercially available programs cover many aspects of the auditory training through a wide range of stimuli and activities.
Conclusions
Extensive research is needed to establish the efficacy of CBATPs and to support their adoption as evidence-based practice.
Affiliation(s)
- Manohar Nanjundaswamy
- Department of Electronics, All India Institute of Speech and Hearing, Mysore, Karnataka, India
- Prashanth Prabhu
- Department of Audiology, All India Institute of Speech and Hearing, Mysore, Karnataka, India
- Revathi Kittur Rajanna
- Department of Audiology, All India Institute of Speech and Hearing, Mysore, Karnataka, India
- Madhuri Sharma
- Department of Electronics, All India Institute of Speech and Hearing, Mysore, Karnataka, India
28
Benefits to Speech Perception in Noise From the Binaural Integration of Electric and Acoustic Signals in Simulated Unilateral Deafness. Ear Hear 2018; 37:248-59. [PMID: 27116049 PMCID: PMC4847646 DOI: 10.1097/aud.0000000000000252] [Citation(s) in RCA: 19] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES This study used vocoder simulations with normal-hearing (NH) listeners to (1) measure their ability to integrate speech information from an NH ear and a simulated cochlear implant (CI), and (2) investigate whether binaural integration is disrupted by a mismatch in the delivery of spectral information between the ears arising from a misalignment in the mapping of frequency to place. DESIGN Eight NH volunteers participated in the study and listened to sentences embedded in background noise via headphones. Stimuli presented to the left ear were unprocessed. Stimuli presented to the right ear (referred to as the CI-simulation ear) were processed using an eight-channel noise vocoder with one of the three processing strategies. An Ideal strategy simulated a frequency-to-place map across all channels that matched the delivery of spectral information between the ears. A Realistic strategy created a misalignment in the mapping of frequency to place in the CI-simulation ear where the size of the mismatch between the ears varied across channels. Finally, a Shifted strategy imposed a similar degree of misalignment in all channels, resulting in consistent mismatch between the ears across frequency. The ability to report key words in sentences was assessed under monaural and binaural listening conditions and at signal to noise ratios (SNRs) established by estimating speech-reception thresholds in each ear alone. The SNRs ensured that the monaural performance of the left ear never exceeded that of the CI-simulation ear. The advantages of binaural integration were calculated by comparing binaural performance with monaural performance using the CI-simulation ear alone. Thus, these advantages reflected the additional use of the experimentally constrained left ear and were not attributable to better-ear listening. RESULTS Binaural performance was as accurate as, or more accurate than, monaural performance with the CI-simulation ear alone. 
When both ears supported a similar level of monaural performance (50%), binaural integration advantages were found regardless of whether a mismatch was simulated or not. When the CI-simulation ear supported a superior level of monaural performance (71%), evidence of binaural integration was absent when a mismatch was simulated using both the Realistic and the Ideal processing strategies. This absence of integration could not be accounted for by ceiling effects or by changes in SNR. CONCLUSIONS If generalizable to unilaterally deaf CI users, the results of the current simulation study would suggest that benefits to speech perception in noise can be obtained by integrating information from an implanted ear and an NH ear. A mismatch in the delivery of spectral information between the ears due to a misalignment in the mapping of frequency to place may disrupt binaural integration in situations where both ears cannot support a similar level of monaural speech understanding. Previous studies that have measured the speech perception of unilaterally deaf individuals after CI but with nonindividualized frequency-to-electrode allocations may therefore have underestimated the potential benefits of providing binaural hearing. However, it remains unclear whether the size and nature of the potential incremental benefits from individualized allocations are sufficient to justify the time and resources required to derive them based on cochlear imaging or pitch-matching tasks.
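The noise vocoders used in these simulations discard the fine structure in each analysis band and retain only the band's temporal envelope, which then modulates band-limited noise. A minimal eight-channel sketch is below; the log-spaced band edges, filter order, and Hilbert-envelope extraction are illustrative choices (published simulations often use Greenwood-spaced bands and lowpass-filtered envelopes).

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_channels=8, lo=100.0, hi=6000.0):
    """Noise-band vocoder sketch: band envelopes modulate band-limited noise."""
    edges = np.geomspace(lo, hi, n_channels + 1)  # log-spaced band edges
    noise = np.random.randn(len(x))
    out = np.zeros(len(x))
    for k in range(n_channels):
        sos = butter(4, [edges[k], edges[k + 1]], btype="band", fs=fs,
                     output="sos")
        band = sosfiltfilt(sos, x)
        env = np.abs(hilbert(band))        # temporal envelope of this band
        carrier = sosfiltfilt(sos, noise)  # noise carrier in the same band
        out += env * carrier
    return out
```

Shifting the synthesis band edges relative to the analysis edges is the standard way such studies simulate a frequency-to-place mismatch, as in the Realistic and Shifted strategies described above.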
29
Integration of acoustic and electric hearing is better in the same ear than across ears. Sci Rep 2017; 7:12500. [PMID: 28970567 PMCID: PMC5624923 DOI: 10.1038/s41598-017-12298-3] [Citation(s) in RCA: 25] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/07/2017] [Accepted: 09/06/2017] [Indexed: 11/26/2022] Open
Abstract
Advances in cochlear implant (CI) technology allow for acoustic and electric hearing to be combined within the same ear (electric-acoustic stimulation, or EAS) and/or across ears (bimodal listening). Integration efficiency (IE; the ratio between observed and predicted performance for acoustic-electric hearing) can be used to estimate how well acoustic and electric hearing are combined. The goal of this study was to evaluate factors that affect IE in EAS and bimodal listening. Vowel recognition was measured in normal-hearing subjects listening to simulations of unimodal, EAS, and bimodal listening. The input/output frequency range for acoustic hearing was 0.1–0.6 kHz. For CI simulations, the output frequency range was 1.2–8.0 kHz to simulate a shallow insertion depth and the input frequency range was varied to provide increasing amounts of speech information and tonotopic mismatch. Performance was best when acoustic and electric hearing was combined in the same ear. IE was significantly better for EAS than for bimodal listening; IE was sensitive to tonotopic mismatch for EAS, but not for bimodal listening. These simulation results suggest acoustic and electric hearing may be more effectively and efficiently combined within rather than across ears, and that tonotopic mismatch should be minimized to maximize the benefit of acoustic-electric hearing, especially for EAS.
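Integration efficiency is defined above as the ratio of observed to predicted acoustic-electric performance. A minimal sketch follows, using independent-channels probability summation as the predictor; the study's exact prediction model may differ, so treat the formula as an assumption.

```python
def integration_efficiency(p_acoustic, p_electric, p_combined):
    """IE = observed combined score / predicted combined score.

    The prediction assumes the acoustic and electric inputs act as
    independent channels: p_pred = 1 - (1 - p_a) * (1 - p_e).
    """
    p_pred = 1.0 - (1.0 - p_acoustic) * (1.0 - p_electric)
    return p_combined / p_pred
```

IE above 1 indicates better-than-predicted (super-additive) integration; IE below 1 means the combined condition falls short of the independent-channels prediction.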
30
Kramer B, Tropitzsch A, Müller M, Löwenheim H. Myelin-induced inhibition in a spiral ganglion organ culture - Approaching a natural environment in vitro. Neuroscience 2017; 357:75-83. [PMID: 28596120 DOI: 10.1016/j.neuroscience.2017.05.053] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 12/04/2016] [Revised: 05/12/2017] [Accepted: 05/30/2017] [Indexed: 12/23/2022]
Abstract
The performance of a cochlear implant depends on the defined interaction between afferent neurons of the spiral ganglion and the inserted electrode. Neurite outgrowth can be induced by neurotrophins such as brain-derived neurotrophic factor (BDNF) via tropomyosin kinase receptor B (TrkB). However, neurotrophin signaling through the p75 neurotrophin receptor (p75) inhibits neurite outgrowth in the presence of myelin. Organotypic cultures derived from postnatal (P3-5) mice were used to study myelin-induced inhibition in the cochlear spiral ganglion. Neurite outgrowth was analyzed and quantified utilizing an adapted Sholl analysis. Stimulation of neurite outgrowth was quantified after application of BDNF, the selective TrkB agonist 7,8-dihydroxyflavone (7,8-DHF) and a selective inhibitor of the Rho-associated kinase (Y27632), which inhibits the p75 pathway. Myelin-induced inhibition was assessed by application of myelin-associated glycoprotein (MAG-Fc) to stimulate the inhibitory p75 pathway. Inhibition of neurite outgrowth was achieved by the selective TrkB inhibitor K252a. Stimulation of neurite outgrowth was observed after treatment with BDNF, 7,8-DHF and a combination of BDNF and Y27632. The 7,8-DHF-induced growth effects could be inhibited by K252a. Furthermore, inhibition of neurite outgrowth was observed after supplementation with MAG-Fc. Myelin-induced inhibition could be overcome by 7,8-DHF and the combination of BDNF and Y27632. In this study, myelin-induced inhibition of neurite outgrowth was established in a spiral ganglion model. We reveal that 7,8-DHF is a viable novel compound for the stimulation of neurite outgrowth in a myelin-induced inhibitory environment. The combination of TrkB stimulation and ROCK inhibition can be used to overcome myelin inhibition.
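Neurite outgrowth was quantified with an adapted Sholl analysis, which counts how many times traced neurites cross concentric circles of increasing radius around the explant. A minimal 2-D sketch over traced line segments is below; the real analysis operates on microscopy tracings, and the paper's adaptation details are not reproduced here.

```python
import numpy as np

def sholl_counts(segments, radii, center=(0.0, 0.0)):
    """Count neurite crossings of concentric circles (Sholl analysis sketch).

    segments: iterable of ((x1, y1), (x2, y2)) line segments tracing neurites.
    radii: circle radii around `center` at which to count crossings.
    """
    cx, cy = center
    counts = []
    for r in radii:
        n = 0
        for (x1, y1), (x2, y2) in segments:
            d1 = np.hypot(x1 - cx, y1 - cy)
            d2 = np.hypot(x2 - cx, y2 - cy)
            if (d1 - r) * (d2 - r) < 0:  # endpoints straddle the circle
                n += 1
        counts.append(n)
    return counts
```

A treatment that promotes outgrowth shifts the crossing counts toward larger radii relative to control cultures.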
Affiliation(s)
- Benedikt Kramer
- Department of Otorhinolaryngology - Head and Neck Surgery, Hearing Research Centre Tübingen (THRC), University of Tübingen, Germany
- Anke Tropitzsch
- Department of Otorhinolaryngology - Head and Neck Surgery, Hearing Research Centre Tübingen (THRC), University of Tübingen, Germany
- Marcus Müller
- Department of Otorhinolaryngology - Head and Neck Surgery, Hearing Research Centre Tübingen (THRC), University of Tübingen, Germany
- Hubert Löwenheim
- Department of Otorhinolaryngology - Head and Neck Surgery, Hearing Research Centre Tübingen (THRC), University of Tübingen, Germany
31
Ihler F, Blum J, Steinmetz G, Weiss B, Zirn S, Canis M. Development of a home-based auditory training to improve speech recognition on the telephone for patients with cochlear implants: A randomised trial. Clin Otolaryngol 2017; 42:1303-1310. [DOI: 10.1111/coa.12871] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/14/2017] [Indexed: 11/28/2022]
Affiliation(s)
- F. Ihler
- Department of Otorhinolaryngology, University Medical Center Göttingen, Georg-August-University Göttingen, Göttingen, Germany
- J. Blum
- Department of Otorhinolaryngology, University Medical Center Göttingen, Georg-August-University Göttingen, Göttingen, Germany
- G. Steinmetz
- Department of Otorhinolaryngology, University Medical Center Göttingen, Georg-August-University Göttingen, Göttingen, Germany; Division of Plastic Surgery, Department of Trauma Surgery, Orthopaedics and Plastic Surgery, University Medical Center Göttingen, Georg-August-University Göttingen, Göttingen, Germany
- B.G. Weiss
- Department of Otorhinolaryngology, University Medical Center Göttingen, Georg-August-University Göttingen, Göttingen, Germany
- S. Zirn
- Electrical Engineering and Information Engineering, University of Applied Science Offenburg, Offenburg, Germany
- M. Canis
- Department of Otorhinolaryngology, University Medical Center Göttingen, Georg-August-University Göttingen, Göttingen, Germany
32
Sullivan JR, Assmann PF, Hossain S, Schafer EC. Voice gender and the segregation of competing talkers: Perceptual learning in cochlear implant simulations. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2017; 141:1643. [PMID: 28372046 PMCID: PMC5346103 DOI: 10.1121/1.4976002] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/13/2014] [Revised: 12/26/2016] [Accepted: 01/27/2017] [Indexed: 06/07/2023]
Abstract
Two experiments explored the role of differences in voice gender in the recognition of speech masked by a competing talker in cochlear implant simulations. Experiment 1 confirmed that listeners with normal hearing receive little benefit from differences in voice gender between a target and masker sentence in four- and eight-channel simulations, consistent with previous findings that cochlear implants deliver an impoverished representation of the cues for voice gender. However, gender differences led to small but significant improvements in word recognition with 16 and 32 channels. Experiment 2 assessed the benefits of perceptual training on the use of voice gender cues in an eight-channel simulation. Listeners were assigned to one of four groups: (1) word recognition training with target and masker differing in gender; (2) word recognition training with same-gender target and masker; (3) gender recognition training; or (4) control with no training. Significant improvements in word recognition were observed from pre- to post-test sessions for all three training groups compared to the control group. These improvements were maintained at the late session (one week following the last training session) for all three groups. There was an overall improvement in masked word recognition performance provided by gender mismatch following training, but the amount of benefit did not differ as a function of the type of training. The training effects observed here are consistent with a form of rapid perceptual learning that contributes to the segregation of competing voices but does not specifically enhance the benefits provided by voice gender cues.
Affiliation(s)
- Jessica R Sullivan
- Department of Communication Sciences & Professional Counseling, University of West Georgia, Carrollton, Georgia 30118, USA
- Peter F Assmann
- School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, Texas 75083, USA
- Shaikat Hossain
- School of Behavioral and Brain Sciences, University of Texas at Dallas, Richardson, Texas 75083, USA
- Erin C Schafer
- College of Health and Public Service, University of North Texas, Denton, Texas 76203, USA
33
Casserly ED, Barney EC. Auditory Training With Multiple Talkers and Passage-Based Semantic Cohesion. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2017; 60:159-171. [PMID: 28002542 DOI: 10.1044/2016_jslhr-h-15-0357] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 10/13/2015] [Accepted: 06/13/2016] [Indexed: 06/06/2023]
Abstract
PURPOSE Current auditory training methods typically result in improvements to speech recognition abilities in quiet, but learner gains may not extend to other domains in speech (e.g., recognition in noise) or self-assessed benefit. This study examined the potential of training involving multiple talkers and training emphasizing discourse-level top-down processing to produce more generalized learning. METHOD Normal-hearing participants (N = 64) were randomly assigned to 1 of 4 auditory training protocols using noise-vocoded speech simulating the processing of an 8-channel cochlear implant: sentence-based single-talker training, training with 24 different talkers, passage-based transcription training, and a control (transcribing unvocoded sentence materials). In all cases, participants completed 2 pretests under cochlear implant simulation, 1 hr of training, and 5 posttests to assess perceptual learning and cross-context generalization. RESULTS Performance above the control was seen in all 3 experimental groups for sentence recognition in quiet. In addition, the multitalker training method generalized to a context word-recognition task, and the passage training method caused gains in sentence recognition in noise. CONCLUSION The gains of the multitalker and passage training groups over the control suggest that, with relatively small modifications, improvements to the generalized outcomes of auditory training protocols may be possible.
Affiliation(s)
- Erin C Barney
- Department of Psychology, Trinity College, Hartford, CT
34
Abstract
OBJECTIVE Considerable unexplained variability and large individual differences exist in speech recognition outcomes for postlingually deaf adults who use cochlear implants (CIs), and a sizeable fraction of CI users can be considered "poor performers." This article summarizes our current knowledge of poor CI performance, and provides suggestions to clinicians managing these patients. METHOD Studies are reviewed pertaining to speech recognition variability in adults with hearing loss. Findings are augmented by recent studies in our laboratories examining outcomes in postlingually deaf adults with CIs. RESULTS In addition to conventional clinical predictors of CI performance (e.g., amount of residual hearing, duration of deafness), factors pertaining to both "bottom-up" auditory sensitivity to the spectro-temporal details of speech, and "top-down" linguistic knowledge and neurocognitive functions contribute to CI outcomes. CONCLUSIONS The broad array of factors that contribute to speech recognition performance in adult CI users suggests the potential both for novel diagnostic assessment batteries to explain poor performance, and also new rehabilitation strategies for patients who exhibit poor outcomes. Moreover, this broad array of factors determining outcome performance suggests the need to treat individual CI patients using a personalized rehabilitation approach.
Affiliation(s)
- Aaron C. Moberly
- Department of Otolaryngology, The Ohio State University Wexner Medical Center
- Chelsea Bates
- Department of Otolaryngology, The Ohio State University Wexner Medical Center
- Michael S. Harris
- Department of Otolaryngology, The Ohio State University Wexner Medical Center
- David B. Pisoni
- Psychological and Brain Sciences Department, Indiana University
35
Patro C, Mendel LL. Role of contextual cues on the perception of spectrally reduced interrupted speech. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2016; 140:1336. [PMID: 27586760 DOI: 10.1121/1.4961450] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/06/2023]
Abstract
Understanding speech within an auditory scene is constantly challenged by interfering noise in suboptimal listening environments when noise hinders the continuity of the speech stream. In such instances, a typical auditory-cognitive system perceptually integrates available speech information and "fills in" missing information in the light of semantic context. However, individuals with cochlear implants (CIs) find it difficult and effortful to understand interrupted speech compared to their normal-hearing counterparts. This inefficiency in perceptual integration of speech could be attributed to further degradations in the spectral-temporal domain imposed by CIs, making it difficult to utilize contextual evidence effectively. To address these issues, 20 normal-hearing adults listened to speech that was either spectrally reduced or both spectrally reduced and interrupted, in a manner similar to CI processing. The Revised Speech Perception in Noise test, which includes contextually rich and contextually poor sentences, was used to evaluate the influence of semantic context on speech perception. Results indicated that listeners benefited more from semantic context when they listened to spectrally reduced speech alone. For the spectrally reduced interrupted speech, contextual information was not as helpful under significant spectral reductions, but became beneficial as the spectral resolution improved. These results suggest that top-down processing facilitates speech perception up to a point, but fails to facilitate speech understanding when the speech signals are significantly degraded.
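Interrupted speech of the kind described above is typically produced by gating the waveform on and off with a periodic square wave. A minimal sketch follows; the interruption rate and duty cycle are illustrative defaults, not the study's parameters.

```python
import numpy as np

def interrupt(x, fs, rate_hz=2.0, duty=0.5):
    """Square-wave gating: silence the signal for (1 - duty) of each cycle."""
    t = np.arange(len(x)) / fs
    gate = (t * rate_hz) % 1.0 < duty  # True during the "on" part of each cycle
    return x * gate
```

Combining this gating with a noise vocoder yields the "spectrally reduced interrupted" condition; applying the vocoder alone yields the "spectrally reduced" condition.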
Affiliation(s)
- Chhayakanta Patro
- School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, Tennessee 38152, USA
- Lisa Lucks Mendel
- School of Communication Sciences and Disorders, University of Memphis, 4055 North Park Loop, Memphis, Tennessee 38152, USA
37
Isaiah A, Hartley DEH. Can training extend current guidelines for cochlear implant candidacy? Neural Regen Res 2015; 10:718-20. [PMID: 26109944 PMCID: PMC4468761 DOI: 10.4103/1673-5374.156964] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Accepted: 03/19/2015] [Indexed: 02/04/2023] Open
Affiliation(s)
- Amal Isaiah
- Department of Otorhinolaryngology-Head and Neck Surgery, University of Maryland School of Medicine, Baltimore, MD 21201, USA
- Douglas E H Hartley
- National Institute for Health Research (NIHR), Nottingham Hearing Biomedical Research Unit, Nottingham, NG1 5DU, UK; Otology and Hearing Group, Division of Clinical Neuroscience, School of Medicine, University of Nottingham, NG7 2RD, UK; Medical Research Council (MRC) Institute of Hearing Research, Nottingham NG7 2UH, UK
38
Casserly ED, Pisoni DB. Auditory Learning Using a Portable Real-Time Vocoder: Preliminary Findings. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2015; 58:1001-16. [PMID: 25674884 PMCID: PMC4490076 DOI: 10.1044/2015_jslhr-h-13-0216] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/12/2013] [Accepted: 01/15/2015] [Indexed: 05/04/2023]
Abstract
PURPOSE Although traditional study of auditory training has been in controlled laboratory settings, interest has been increasing in more interactive options. The authors examine whether such interactive training can result in short-term perceptual learning, and the range of perceptual skills it impacts. METHOD Experiments 1 (N = 37) and 2 (N = 21) used pre- and posttest measures of speech and nonspeech recognition to find evidence of learning (within subject) and to compare the effects of 3 kinds of training (between subject) on the perceptual abilities of adults with normal hearing listening to simulations of cochlear implant processing. Subjects were given interactive, standard lab-based, or control training experience for 1 hr between the pre- and posttest tasks (unique sets across Experiments 1 & 2). RESULTS Subjects receiving interactive training showed significant learning on sentence recognition in quiet task (Experiment 1), outperforming controls but not lab-trained subjects following training. Training groups did not differ significantly on any other task, even those directly involved in the interactive training experience. CONCLUSIONS Interactive training has the potential to produce learning in 1 domain (sentence recognition in quiet), but the particulars of the present training method (short duration, high complexity) may have limited benefits to this single criterion task.
39
Casserly ED. Effects of real-time cochlear implant simulation on speech production. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2015; 137:2791-2800. [PMID: 25994707 PMCID: PMC4441710 DOI: 10.1121/1.4916965] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/15/2014] [Revised: 03/17/2015] [Accepted: 03/24/2015] [Indexed: 05/30/2023]
Abstract
Investigations using normal-hearing subjects listening to simulations of cochlear implant (CI) acoustic processing have provided substantial information about the impact of these distorted listening conditions on the accuracy of auditory perception, but extensions of this method to the domain of speech production have been limited. In the present study, a portable, real-time vocoder was used to simulate conditions of CI auditory feedback during speech production in NH subjects. Acoustic-phonetic characteristics of sibilant fricatives, aspirated stops, and F1/F2 vowel qualities were analyzed for changes as a result of CI simulation of acoustic speech feedback. Significant changes specific to F1 were observed; speakers reduced their phonological vowel height contrast, typically via talker-specific raising of the low vowels [æ] and [ɑ] or lowering of high vowels [i] and [u]. Comparisons to the results of both localized feedback perturbation procedures and investigations of speech production in deaf adults with CIs are discussed.
Affiliation(s)
- Elizabeth D Casserly
- Department of Psychological and Brain Sciences, Speech Research Laboratory, Indiana University, 1101 East 10th Street, Bloomington, Indiana 47405
40
Frequency-place map for electrical stimulation in cochlear implants: Change over time. Hear Res 2015; 326:8-14. [PMID: 25840373 DOI: 10.1016/j.heares.2015.03.011] [Citation(s) in RCA: 44] [Impact Index Per Article: 4.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/12/2014] [Revised: 03/19/2015] [Accepted: 03/23/2015] [Indexed: 11/21/2022]
Abstract
The relationship between the place of electrical stimulation from a cochlear implant and the corresponding perceived pitch remains uncertain. Previous studies have estimated what the pitch corresponding to a particular location should be. However, perceptual verification is difficult because a subject needs both a cochlear implant and sufficient residual hearing to reliably compare electric and acoustic pitches. Additional complications can arise from the possibility that the pitch corresponding to an electrode may change as the auditory system adapts to a sound processor. In the following experiment, five subjects with normal or near-to-normal hearing in one ear and a cochlear implant with a long electrode array in the other ear were studied. Pitch matches were made between single electrode pulse trains and acoustic tones before activation of the speech processor to gain an estimate of the pitch provided by electrical stimulation at a given insertion angle without the influence of exposure to a sound processor. The pitch matches were repeated after 1, 3, 6, and 12 months of experience with the sound processor to evaluate the effect of adaptation over time. Pre-activation pitch matches were lower than would be estimated by a spiral ganglion pitch map. Deviations were largest for stimulation below 240° and smallest above 480°. With experience, pitch matches shifted towards the frequency-to-electrode allocation. However, no statistically significant pitch shifts were observed over time. The likely explanation for the lack of pitch change is that the frequency-to-electrode allocations for the long electrode arrays were already similar to the pre-activation pitch matches. Minimal place pitch shifts over time suggest a minimal amount of perceptual remapping needed for the integration of electric and acoustic stimuli, which may contribute to shorter times to asymptotic performance.
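Pitch estimates for a given cochlear location are commonly derived from Greenwood's place-frequency function. The sketch below uses the standard human constants; the linear angle-to-place conversion is a rough illustrative assumption, and the spiral ganglion map referenced above differs from this organ-of-Corti map.

```python
def greenwood_frequency(x):
    """Greenwood place-to-frequency map for the human cochlea.

    x: fractional distance along the basilar membrane, apex (x = 0)
    to base (x = 1). Constants A, a, k are Greenwood's human values.
    """
    A, a, k = 165.4, 2.1, 0.88
    return A * (10 ** (a * x) - k)

def frequency_at_angle(angle_deg, total_angle_deg=990.0):
    """Rough frequency estimate at an electrode insertion angle.

    Assumes a linear angle-to-place mapping over ~2.75 cochlear turns,
    which is a simplification; deeper insertion means a more apical,
    lower-frequency place.
    """
    x = 1.0 - angle_deg / total_angle_deg
    return greenwood_frequency(max(0.0, min(1.0, x)))
```

Comparing such map-based estimates with measured pitch matches is what reveals the pre-activation deviations described above.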
41
Shannon RV. Auditory implant research at the House Ear Institute 1989-2013. Hear Res 2015; 322:57-66. [PMID: 25449009 PMCID: PMC4380593 DOI: 10.1016/j.heares.2014.11.003] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/11/2014] [Revised: 11/04/2014] [Accepted: 11/07/2014] [Indexed: 11/29/2022]
Abstract
The House Ear Institute (HEI) had a long and distinguished history of auditory implant innovation and development. Early clinical innovations include being one of the first cochlear implant (CI) centers, being the first center in the US to implant a child with a cochlear implant, developing the auditory brainstem implant, and developing multiple surgical approaches and tools for otology. This paper reviews the second stage of auditory implant research at House: in-depth basic research on perceptual capabilities and signal processing for both cochlear implants and auditory brainstem implants. Psychophysical studies characterized the loudness and temporal perceptual properties of electrical stimulation as a function of electrical parameters. Speech studies with the noise-band vocoder showed that only four bands of tonotopically arrayed information were sufficient for speech recognition, and that most implant users were receiving the equivalent of 8-10 bands of information. The noise-band vocoder allowed us to evaluate the effects of manipulating the number of bands, the alignment of the bands with the original tonotopic map, and distortions in the tonotopic mapping, including holes in the neural representation. Stimulation pulse rate was shown to have only a small effect on speech recognition. Electric fields were manipulated in position and sharpness, showing the potential benefit of improved tonotopic selectivity. Auditory training shows great promise for improving speech recognition for all patients. The auditory brainstem implant (ABI) was developed and improved, and its application expanded to new populations. Overall, the last 25 years of research at HEI helped increase the basic scientific understanding of electrical stimulation of hearing and contributed to improved outcomes for patients with CI and ABI devices. This article is part of a Special Issue.
Affiliation(s)
- Robert V Shannon
- Department of Otolaryngology, University of Southern California, Keck School of Medicine of USC, 806 W. Adams Blvd, Los Angeles, CA 90007-2505, USA.
42
Nogueira W, Litvak LM, Saoji AA, Büchner A. Design and evaluation of a cochlear implant strategy based on a "Phantom" channel. PLoS One 2015; 10:e0120148. [PMID: 25806818 PMCID: PMC4373925 DOI: 10.1371/journal.pone.0120148] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/08/2014] [Accepted: 01/19/2015] [Indexed: 11/30/2022] Open
Abstract
Unbalanced bipolar stimulation, delivered using charge-balanced pulses, was used to produce "Phantom stimulation": stimulation beyond the most apical contact of a cochlear implant's electrode array. The Phantom channel was allocated audio frequencies below 300 Hz in a speech coding strategy, conveying energy some two octaves lower than the clinical strategy and hence delivering the fundamental frequency of speech and of many musical tones. A group of 12 Advanced Bionics cochlear implant recipients took part in a chronic study investigating the fitting of the Phantom strategy and speech and music perception when using Phantom. Speech in noise was evaluated immediately after fitting Phantom for the first time (Session 1) and after one month of take-home experience (Session 2). A repeated-measures analysis of variance (ANOVA) with within-subject factors strategy (Clinical, Phantom) and session (Session 1, Session 2) revealed a significant strategy-by-session interaction: Phantom yielded a significant improvement in speech intelligibility after one month of use. Furthermore, a trend towards better performance with Phantom (48%) relative to F120 (37%) after one month of use failed to reach significance after type 1 error correction. Questionnaire results showed a preference for Phantom when listening to music, likely driven by an improved balance between high and low frequencies.
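The "phantom" construction the abstract describes can be illustrated with a toy pulse pair: a cathodic-first biphasic pulse on the apical contact and a partially compensating pulse of opposite polarity on its neighbour, each channel charge-balanced so its two phases integrate to zero. The `sigma` fraction, amplitudes, and sample counts below are hypothetical illustration, not the authors' fitting parameters.

```python
import numpy as np

def unbalanced_bipolar_pulse(amp, sigma, phase_samples):
    """Biphasic current pulses on the two most apical contacts.
    sigma in [0, 1) sets the fraction of return current carried by the
    neighbour; the remainder returns through the far ground, pushing the
    excitation field beyond the apical contact ("phantom" stimulation).
    Each channel is charge-balanced: its two phases sum to zero."""
    n = phase_samples
    apical = np.concatenate([np.full(n, -amp), np.full(n, amp)])              # cathodic-first
    neighbour = np.concatenate([np.full(n, sigma * amp), np.full(n, -sigma * amp)])
    return apical, neighbour

ap, nb = unbalanced_bipolar_pulse(amp=1.0, sigma=0.75, phase_samples=8)
charge_ap = float(ap.sum())   # net charge on the apical contact
charge_nb = float(nb.sum())   # net charge on the neighbour
```

With `sigma = 0` this reduces to monopolar stimulation; as `sigma` grows, the field is pushed further apical, which is how the strategy conveys frequencies below the most apical contact's place.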
Affiliation(s)
- Waldo Nogueira
- Department of Otolaryngology, Medical University Hannover, Cluster of Excellence “Hearing4all”, Hannover, Germany
- Leonid M. Litvak
- Research and Technology Group, Advanced Bionics LLC, Valencia CA, USA
- Aniket A. Saoji
- Research and Technology Group, Advanced Bionics LLC, Valencia CA, USA
- Andreas Büchner
- Department of Otolaryngology, Medical University Hannover, Cluster of Excellence “Hearing4all”, Hannover, Germany
43
Sousa AFD, Carvalho ACMD, Couto MIV, Tsuji RK, Goffi-Gomez MVS, Bento RF, Matas CG, Befi-Lopes DM. Telephone Usage and Cochlear Implant: Auditory Training Benefits. Int Arch Otorhinolaryngol 2014; 19:269-72. [PMID: 26157504 PMCID: PMC4490929 DOI: 10.1055/s-0034-1390301] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/03/2014] [Accepted: 08/21/2014] [Indexed: 11/23/2022] Open
Abstract
Introduction Difficulties with telephone use by adult users of cochlear implants (CIs) are reported as a limitation in daily life. Studies to improve the speech understanding of CI users on the telephone are scarce in the Brazilian scientific literature. Objective To develop and evaluate the effectiveness of a training program of auditory abilities on the telephone for an adult CI user. Resumed Report The subject was a 55-year-old woman with a degree in accounting who used a CI for 24 months. The program consisted of three stages: pretraining evaluation, eight sessions of advanced auditory abilities training, and post-training evaluation. Auditory abilities with CI were evaluated before and after training in three conditions: sound field, telephone with the speech processor in the microphone function, and telephone with the speech processor in the telecoil function. Speech recognition was assessed by three different lists: one with monosyllabic and dissyllabic words, another with nonsense syllables, and another one with sentences. The Client Oriented Scale of Improvement (COSI) was used to assess whether the needs established by the CI user in everyday telephone use situations improved after training. The auditory abilities training resulted in a relevant improvement in the percentage of correct answers in speech tests both in the telephone use conditions and in the sound field condition. Conclusion The results obtained with the COSI inventory indicated a performance improvement in all situations presented at the beginning of the program.
Affiliation(s)
- Aline Faria de Sousa
- Department of Physiotherapy, Speech-Language Pathology & Audiology and Occupational Therapy, Universidade de São Paulo, São Paulo, Brazil
- Ana Claudia Martinho de Carvalho
- Department of Physiotherapy, Speech-Language Pathology & Audiology and Occupational Therapy, Universidade de São Paulo, São Paulo, Brazil
- Maria Ines Vieira Couto
- Department of Physiotherapy, Speech-Language Pathology & Audiology and Occupational Therapy, Universidade de São Paulo, São Paulo, Brazil
- Carla Gentile Matas
- Department of Physiotherapy, Speech-Language Pathology & Audiology and Occupational Therapy, Universidade de São Paulo, São Paulo, Brazil
- Debora Maria Befi-Lopes
- Department of Physiotherapy, Speech-Language Pathology & Audiology and Occupational Therapy, Universidade de São Paulo, São Paulo, Brazil
44
Svirsky MA, Talavage TM, Sinha S, Neuburger H, Azadpour M. Gradual adaptation to auditory frequency mismatch. Hear Res 2014; 322:163-70. [PMID: 25445816 DOI: 10.1016/j.heares.2014.10.008] [Citation(s) in RCA: 38] [Impact Index Per Article: 3.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/13/2014] [Revised: 10/13/2014] [Accepted: 10/16/2014] [Indexed: 12/01/2022]
Abstract
What is the best way to help humans adapt to a distorted sensory input? Interest in this question is more than academic. The answer may help facilitate auditory learning by people who became deaf after learning language and later received a cochlear implant (a neural prosthesis that restores hearing through direct electrical stimulation of the auditory nerve). There is evidence that some cochlear implants (which provide information that is spectrally degraded to begin with) stimulate neurons with higher characteristic frequency than the acoustic frequency of the original stimulus. In other words, the stimulus is shifted in frequency with respect to what the listener expects to hear. This frequency misalignment may have a negative influence on speech perception by CI users. However, a perfect frequency-place alignment may result in the loss of important low frequency speech information. A trade-off may involve a gradual approach: start with correct frequency-place alignment to allow listeners to adapt to the spectrally degraded signal first, and then gradually increase the frequency shift to allow them to adapt to it over time. We used an acoustic model of a cochlear implant to measure adaptation to a frequency-shifted signal, using either the gradual approach or the "standard" approach (sudden imposition of the frequency shift). Listeners in both groups showed substantial auditory learning, as measured by increases in speech perception scores over the course of fifteen one-hour training sessions. However, the learning process was faster for listeners who were exposed to the gradual approach. These results suggest that gradual rather than sudden exposure may facilitate perceptual learning in the face of a spectrally degraded, frequency-shifted input. This article is part of a Special Issue entitled <Lasker Award>.
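The "gradual" condition amounts to a schedule that starts with no frequency shift and ramps linearly to the full mismatch across training sessions, versus the "standard" condition that imposes the full shift from session one. A minimal sketch, with the total shift in octaves and the session count as assumed parameters:

```python
def shift_schedule(total_shift_oct, n_sessions):
    """Frequency shift (in octaves) applied at each training session:
    start tonotopically aligned (zero shift) and increase linearly to the
    full mismatch by the final session -- the 'gradual' condition."""
    if n_sessions < 2:
        return [total_shift_oct]
    step = total_shift_oct / (n_sessions - 1)
    return [round(i * step, 6) for i in range(n_sessions)]

# e.g. a half-octave total mismatch spread over fifteen sessions,
# matching the session count (but not necessarily the shift) in the study
sched = shift_schedule(total_shift_oct=0.5, n_sessions=15)
```

The "standard" condition would instead be `[0.5] * 15`: the full shift at every session.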
Affiliation(s)
- Mario A Svirsky
- Dept. of Otolaryngology-HNS, New York University School of Medicine, New York, NY, USA; Center of Neural Science, New York University, New York, NY, USA.
- Thomas M Talavage
- ECE, Purdue University, West Lafayette, IN, USA; BME Depts., Purdue University, West Lafayette, IN, USA
- Heidi Neuburger
- Dept. of Otolaryngology-HNS, Indiana University School of Medicine, Indianapolis, IN, USA
- Mahan Azadpour
- Dept. of Otolaryngology-HNS, New York University School of Medicine, New York, NY, USA
45
van de Velde DJ, Dritsakis G, Frijns JHM, van Heuven VJ, Schiller NO. The effect of spectral smearing on the identification of pure F0 intonation contours in vocoder simulations of cochlear implants. Cochlear Implants Int 2014; 16:77-87. [DOI: 10.1179/1754762814y.0000000086] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
46
Dickinson AM, Baker R, Siciliano C, Munro KJ. Adaptation to nonlinear frequency compression in normal-hearing adults: a comparison of training approaches. Int J Audiol 2014; 53:719-29. [PMID: 24975233 DOI: 10.3109/14992027.2014.921338] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
OBJECTIVE To identify which training approach, if any, is most effective for improving perception of frequency-compressed speech. DESIGN A between-subject design using repeated measures. STUDY SAMPLE Forty young adults with normal hearing were randomly allocated to one of four groups: a training group (sentence or consonant) or a control group (passive exposure or test-only). Test and training material differed in terms of material and speaker. RESULTS On average, sentence training and passive exposure led to significantly improved sentence recognition (11.0% and 11.7%, respectively) compared with the consonant training group (2.5%) and test-only group (0.4%), whilst consonant training led to significantly improved consonant recognition (8.8%) compared with the sentence training group (1.9%), passive exposure group (2.8%), and test-only group (0.8%). CONCLUSIONS Sentence training led to improved sentence recognition, whilst consonant training led to improved consonant recognition. This suggests that learning transferred between speakers and material but not between stimulus types. Passive exposure to sentence material led to an improvement in sentence recognition that was equivalent to gains from active training. This suggests that it may be possible to adapt passively to frequency-compressed speech.
47
Abdala C, Dhar S, Ahmadi M, Luo P. Aging of the medial olivocochlear reflex and associations with speech perception. J Acoust Soc Am 2014; 135:754-65. [PMID: 25234884 PMCID: PMC3985974 DOI: 10.1121/1.4861841] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/23/2013] [Revised: 12/19/2013] [Accepted: 12/30/2013] [Indexed: 05/24/2023]
Abstract
The medial olivocochlear reflex (MOCR) modulates cochlear amplifier gain and is thought to facilitate the detection of signals in noise. High-resolution distortion product otoacoustic emissions (DPOAEs) were recorded in teens, young, middle-aged, and elderly adults at moderate levels using primary tones swept from 0.5 to 4 kHz with and without a contralateral acoustic stimulus (CAS) to elicit medial efferent activation. Aging effects on magnitude and phase of the 2f1-f2 DPOAE and on its components were examined, as was the link between speech-in-noise performance and MOCR strength. Results revealed a mild aging effect on the MOCR through middle age for frequencies below 1.5 kHz. Additionally, positive correlations were observed between strength of the MOCR and performance on select measures of speech perception parsed into features. The elderly group showed unexpected results including relatively large effects of CAS on DPOAE, and CAS-induced increases in DPOAE fine structure as well as increases in the amplitude and phase accumulation of DPOAE reflection components. Contamination of MOCR estimates by middle ear muscle contractions cannot be ruled out in the oldest subjects. The findings reiterate that DPOAE components should be unmixed when measuring medial efferent effects to better consider and understand these potential confounds.
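The distortion product tracked in such studies sits at the cubic difference frequency 2f1-f2 of the two swept primaries. A minimal sketch; the f2/f1 ratio of 1.22 is a common convention in DPOAE work, assumed here rather than taken from this paper:

```python
def dpoae_frequency(f1, f2):
    """Cubic difference tone frequency for primary tones f1 < f2."""
    return 2 * f1 - f2

# example: f1 = 1 kHz with a fixed f2/f1 ratio of 1.22 (assumed convention)
ratio = 1.22
f1 = 1000.0
f2 = ratio * f1
fdp = dpoae_frequency(f1, f2)   # emission near 780 Hz, below both primaries
```

Because the emission lies below both primaries, it can be separated from stimulus energy in the recorded ear-canal signal; the study's swept-primary paradigm tracks this component from 0.5 to 4 kHz with and without contralateral noise.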
Affiliation(s)
- Carolina Abdala
- House Research Institute, Division of Communication and Auditory Neuroscience, 2100 West Third Street, Los Angeles, California 90057
- Sumitrajit Dhar
- Knowles Hearing Center, Roxelyn and Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois 60208
- Mahnaz Ahmadi
- House Research Institute, Division of Communication and Auditory Neuroscience, 2100 West Third Street, Los Angeles, California 90057
- Ping Luo
- House Research Institute, Division of Communication and Auditory Neuroscience, 2100 West Third Street, Los Angeles, California 90057
48
Reiss LAJ, Turner CW, Karsten SA, Gantz BJ. Plasticity in human pitch perception induced by tonotopically mismatched electro-acoustic stimulation. Neuroscience 2014; 256:43-52. [PMID: 24157931 PMCID: PMC3893921 DOI: 10.1016/j.neuroscience.2013.10.024] [Citation(s) in RCA: 76] [Impact Index Per Article: 7.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/08/2013] [Revised: 10/10/2013] [Accepted: 10/11/2013] [Indexed: 10/26/2022]
Abstract
Under normal conditions, the acoustic pitch percept of a pure tone is determined mainly by the tonotopic place of the stimulation along the cochlea. Unlike acoustic stimulation, electric stimulation of a cochlear implant (CI) allows for the direct manipulation of the place of stimulation in human subjects. CI sound processors analyze the range of frequencies needed for speech perception and allocate portions of this range to the small number of electrodes distributed in the cochlea. Because the allocation is assigned independently of the original resonant frequency of the basilar membrane associated with the location of each electrode, CI users who have access to residual hearing in either or both ears often have tonotopic mismatches between the acoustic and electric stimulation. Here we demonstrate plasticity of place pitch representations of up to three octaves in Hybrid CI users after experience with combined electro-acoustic stimulation. The pitch percept evoked by single CI electrodes, measured relative to acoustic tones presented to the non-implanted ear, changed over time in directions that reduced the electro-acoustic pitch mismatch introduced by the CI programming. This trend was particularly apparent when the allocations of stimulus frequencies to electrodes were changed over time, with pitch changes even reversing direction in some subjects. These findings show that pitch plasticity can occur more rapidly and on a greater scale in the mature auditory system than previously thought possible. Overall, the results suggest that the adult auditory system can impose perceptual order on disordered arrays of inputs.
Affiliation(s)
- L A J Reiss
- Department of Otolaryngology, Oregon Health and Science University, Portland, OR, USA; Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA.
- C W Turner
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA; Department of Otolaryngology, University of Iowa, Iowa City, IA, USA
- S A Karsten
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA
- B J Gantz
- Department of Otolaryngology, University of Iowa, Iowa City, IA, USA
49
Farris-Trimble A, McMurray B, Cigrand N, Tomblin JB. The process of spoken word recognition in the face of signal degradation. J Exp Psychol Hum Percept Perform 2013; 40:308-27. [PMID: 24041330 DOI: 10.1037/a0034353] [Citation(s) in RCA: 36] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/08/2022]
Abstract
Though much is known about how words are recognized, little research has focused on how a degraded signal affects the fine-grained temporal aspects of real-time word recognition. The perception of degraded speech was examined in two populations with the goal of describing the time course of word recognition and lexical competition. Thirty-three postlingually deafened cochlear implant (CI) users and 57 normal-hearing (NH) adults (16 in a CI-simulation condition) participated in a visual world paradigm eye-tracking task in which their fixations to a set of phonologically related items were monitored as they heard one item being named. Each degraded-speech group was compared with a set of age-matched NH participants listening to unfiltered speech. CI users and the simulation group showed a delay in activation relative to the NH listeners, and there was weak evidence that the CI users showed differences in the degree of peak and late competitor activation. In general, though, the degraded-speech groups behaved statistically similarly with respect to activation levels.
Affiliation(s)
- J Bruce Tomblin
- Department of Communication Sciences and Disorders, Delta Center, University of Iowa
50
Green T, Rosen S, Faulkner A, Paterson R. Adaptation to spectrally-rotated speech. J Acoust Soc Am 2013; 134:1369-1377. [PMID: 23927133 DOI: 10.1121/1.4812759] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/02/2023]
Abstract
Much recent interest surrounds listeners' abilities to adapt to various transformations that distort speech. An extreme example is spectral rotation, in which the spectrum of low-pass filtered speech is inverted around a center frequency (2 kHz here). Spectral shape and its dynamics are completely altered, rendering speech virtually unintelligible initially. However, intonation, rhythm, and contrasts in periodicity and aperiodicity are largely unaffected. Four normal hearing adults underwent 6 h of training with spectrally-rotated speech using Continuous Discourse Tracking. They and an untrained control group completed pre- and post-training speech perception tests, for which talkers differed from the training talker. Significantly improved recognition of spectrally-rotated sentences was observed for trained, but not untrained, participants. However, there were no significant improvements in the identification of medial vowels in /bVd/ syllables or intervocalic consonants. Additional tests were performed with speech materials manipulated so as to isolate the contribution of various speech features. These showed that preserving intonational contrasts did not contribute to the comprehension of spectrally-rotated speech after training, and suggested that improvements involved adaptation to altered spectral shape and dynamics, rather than just learning to focus on speech features relatively unaffected by the transformation.
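Spectral rotation of the kind described can be sketched as low-pass filtering followed by amplitude modulation with a carrier at twice the rotation centre, which mirrors the 0-4 kHz band about 2 kHz, plus a second low-pass to remove the upper modulation image. Sampling rate, filter order, and the test tone below are illustrative assumptions:

```python
import numpy as np
from scipy.signal import butter, sosfilt

def spectrally_rotate(x, fs, rotation_center=2000.0):
    """Invert the spectrum of low-pass speech around rotation_center.
    Multiplying by a carrier at 2*rotation_center mirrors the band
    0..2*rotation_center; the final low-pass removes the upper image."""
    f_max = 2 * rotation_center                       # 4 kHz band edge
    sos = butter(8, f_max, btype="lowpass", fs=fs, output="sos")
    x_lp = sosfilt(sos, x)                            # restrict to 0..4 kHz
    t = np.arange(len(x)) / fs
    rotated = x_lp * np.cos(2 * np.pi * f_max * t)    # mirrors the spectrum
    return sosfilt(sos, rotated)                      # drop the upper image

fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 500 * t)    # a 500 Hz tone should land near 3500 Hz
y = spectrally_rotate(tone, fs)
spec = np.abs(np.fft.rfft(y))
peak_hz = float(np.argmax(spec) * fs / len(y))
```

Low frequencies map high and vice versa, which is why spectral shape is destroyed while intonation, rhythm, and periodicity contrasts, carried by the temporal structure, survive the transformation.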
Affiliation(s)
- Tim Green
- Speech, Hearing, and Phonetic Sciences, UCL, Chandler House, 2, Wakefield Street, London, WC1N 1PF, United Kingdom