1
Jeon EK, Driscoll V, Mussoi BS, Scheperle R, Guthe E, Gfeller K, Abbas PJ, Brown CJ. Evaluating Changes in Adult Cochlear Implant Users' Brain and Behavior Following Auditory Training. Ear Hear 2024. PMID: 39044323. DOI: 10.1097/aud.0000000000001569.
Abstract
OBJECTIVES: To describe the effects of two types of auditory training on both behavioral and physiological measures of auditory function in cochlear implant (CI) users, and to examine whether a relationship exists between the behavioral and objective outcome measures.

DESIGN: This study involved two experiments, both of which used a within-subject design. Outcome measures included behavioral and cortical electrophysiological measures of auditory processing. In Experiment I, 8 CI users participated in music-based auditory training. The program combined short training sessions completed in the laboratory with a set of 12 sessions that participants completed at home over the course of a month. During training, participants listened to a range of musical stimuli and were asked to discriminate stimuli that differed in pitch or timbre and to identify melodic changes. Performance was assessed before training and at three intervals during and after training. In Experiment II, 20 CI users completed a more focused auditory training task: detection of spectral ripple modulation depth. Training consisted of a single 40-minute laboratory session supervised by the investigators. Behavioral and physiologic measures of spectral ripple modulation depth detection were obtained immediately pre- and post-training. Data from both experiments were analyzed using mixed linear regressions, paired t tests, correlations, and descriptive statistics.

RESULTS: In Experiment I, behavioral measures of pitch discrimination improved significantly after participants completed the laboratory and home-based training sessions. Training had no significant effect on electrophysiologic measures of the auditory N1-P2 onset response or the acoustic change complex (ACC), and there were no significant relationships between electrophysiologic measures and behavioral outcomes after the month-long training. In Experiment II, training had no significant effect on the ACC, although behavioral spectral ripple modulation depth thresholds showed a small but significant improvement after the short-term training.

CONCLUSIONS: Auditory training improved spectral cue perception in CI users: perceptual gains were significant even though cortical electrophysiological responses such as the ACC did not reliably track training benefit in either the short- or long-term intervention. Future research should explore individual factors that predict greater benefit from auditory training, optimize training protocols and outcome measures, and test the generalizability of these findings.
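The onset response and ACC discussed above are typically quantified as the N1-P2 peak-to-peak amplitude of the trial-averaged cortical waveform, measured in a latency window after stimulus onset or after the acoustic change. A minimal Python sketch on synthetic data; the function name, search windows, and toy waveform are illustrative assumptions, not this study's analysis pipeline:

```python
import numpy as np

def n1_p2_amplitude(epochs, fs, change_time, n1_win=(0.08, 0.16), p2_win=(0.14, 0.28)):
    """Peak-to-peak N1-P2 amplitude of an averaged cortical response.

    epochs: (n_trials, n_samples) EEG epochs time-locked to stimulus onset.
    change_time: time (s) of the acoustic change, so the same routine serves
    the onset response (change_time=0) and the ACC. Search windows are
    typical adult latency ranges, used here only for illustration.
    """
    avg = epochs.mean(axis=0)                       # average across trials

    def peak(win, sign):
        i0 = int((change_time + win[0]) * fs)
        i1 = int((change_time + win[1]) * fs)
        seg = avg[i0:i1]
        return seg.min() if sign < 0 else seg.max()

    n1 = peak(n1_win, -1)                           # negative deflection
    p2 = peak(p2_win, +1)                           # positive deflection
    return p2 - n1

# Toy epochs: an N1-like trough 100 ms and a P2-like peak 200 ms after a
# change at t = 0.5 s, buried in trial-to-trial noise.
rng = np.random.default_rng(0)
fs, change = 1000, 0.5
t = np.arange(int(fs * 1.0)) / fs
template = (-2.0 * np.exp(-((t - change - 0.10) / 0.02) ** 2)
            + 1.5 * np.exp(-((t - change - 0.20) / 0.03) ** 2))
epochs = template + 3.0 * rng.standard_normal((100, t.size))
acc = n1_p2_amplitude(epochs, fs, change)
```

Averaging across 100 trials suppresses the single-trial noise enough for the 3.5 µV-scale template deflections to dominate the peak picks.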
Affiliation(s)
- Eun Kyung Jeon: Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, USA
- Virginia Driscoll: Department of Music Education and Therapy, East Carolina University, Greenville, North Carolina, USA
- Bruna S Mussoi: Department of Audiology and Speech Pathology, University of Tennessee Health Science Center, Knoxville, Tennessee, USA
- Rachel Scheperle: Department of Otolaryngology, University of Iowa, Iowa City, Iowa, USA
- Emily Guthe: Department of Music Therapy, Cleveland State University, Cleveland, Ohio, USA
- Kate Gfeller: Department of Otolaryngology, University of Iowa, Iowa City, Iowa, USA
- Paul J Abbas: Department of Communication Sciences and Disorders and Department of Otolaryngology, University of Iowa, Iowa City, Iowa, USA
- Carolyn J Brown: Department of Communication Sciences and Disorders and Department of Otolaryngology, University of Iowa, Iowa City, Iowa, USA
2
Ching TYC, Zhang VW, Ibrahim R, Bardy F, Rance G, Van Dun B, Sharma M, Chisari D, Dillon H. Acoustic change complex for assessing speech discrimination in normal-hearing and hearing-impaired infants. Clin Neurophysiol 2023;149:121-132. PMID: 36963143. DOI: 10.1016/j.clinph.2023.02.172.
Abstract
OBJECTIVE: This study examined (1) the utility of a clinical system for recording the acoustic change complex (ACC, an event-related potential recorded by electroencephalography) to assess speech discrimination in infants, and (2) the relationship between the ACC and functional performance in real life.

METHODS: Participants were 115 infants (43 normal-hearing, 72 hearing-impaired), aged 3-12 months. ACCs were recorded using [szs], [uiu], and a spectrally rippled noise high-pass filtered at 2 kHz as stimuli. Assessments were conducted at ages 3-6 months and 7-12 months. Functional performance was evaluated using a parent-report questionnaire, and correlations with the ACC were examined.

RESULTS: Rates of onset and ACC responses in normal-hearing infants were not significantly different from those of aided infants with mild or moderate hearing loss, but were significantly higher than those of infants with severe loss. On average, response rates measured at 3-6 months did not differ significantly from those at 7-12 months. Higher rates of ACC responses were significantly associated with better functional performance.

CONCLUSIONS: ACCs demonstrated auditory discrimination capacity in infants by 3-6 months, and this capacity was positively related to real-life functional performance.

SIGNIFICANCE: ACCs can be used to evaluate the effectiveness of amplification and to monitor development in aided hearing-impaired infants.
Affiliation(s)
- Teresa Y C Ching: National Acoustic Laboratories, Australia; Macquarie School of Education, Macquarie University, Australia; NextSense Institute, Australia; School of Health and Rehabilitation Sciences, University of Queensland, Australia
- Vicky W Zhang: National Acoustic Laboratories, Australia; Department of Linguistics, Macquarie University, Australia
- Ronny Ibrahim: National Acoustic Laboratories, Australia; Department of Linguistics, Macquarie University, Australia
- Fabrice Bardy: National Acoustic Laboratories, Australia; School of Psychology, University of Auckland, New Zealand
- Gary Rance: Department of Audiology and Speech Pathology, The University of Melbourne, Australia
- Mridula Sharma: Department of Linguistics, Macquarie University, Australia
- Donella Chisari: Department of Audiology and Speech Pathology, The University of Melbourne, Australia
- Harvey Dillon: National Acoustic Laboratories, Australia; Department of Linguistics, Macquarie University, Australia; Department of Hearing, University of Manchester, United Kingdom
3
Jeon EK, Mussoi BS, Brown CJ, Abbas PJ. Acoustic Change Complex Recorded in Hybrid Cochlear Implant Users. Audiol Neurootol 2022;28:151-157. PMID: 36450234. PMCID: PMC10227181. DOI: 10.1159/000527671.
Abstract
INTRODUCTION: Expanding cochlear implant (CI) candidacy criteria and advances in electrode arrays and soft surgical techniques have increased the number of CI recipients with residual low-frequency hearing. Objective measures such as obligatory cortical auditory-evoked potentials (CAEPs) may help clinicians make more tailored recommendations regarding the optimal listening mode. As a step toward this goal, this study investigated how CAEPs measured in hybrid CI users differ between two listening modes: acoustic alone (A-alone) versus acoustic plus electric (A + E).

METHODS: Eight successful hybrid CI users participated. Two CAEPs, the P1-N1-P2 and the acoustic change complex (ACC), were measured simultaneously in response to the onset and change of a series of different, spectrally complex acoustic signals in each listening mode (A-alone and A + E). We examined the effects of listening mode and stimulus type on onset and ACC N1-P2 amplitudes and peak latencies.

RESULTS: ACC amplitudes in hybrid CI users differed significantly as a function of listening mode and stimulus type. ACC responses in the A + E mode were larger than in the A-alone mode, most evidently for stimuli involving a change from low to high frequency.

CONCLUSIONS: The ACC varies as a function of listening mode and stimulus type. This finding suggests that the ACC can serve as a physiologic, objective measure of the benefit of hybrid CIs, potentially supporting clinicians in making individualized listening-mode recommendations or in documenting subjective preference for a given listening mode. Further research on this potential clinical application in a wider range of hybrid recipients and/or long-electrode users with residual low-frequency hearing is warranted.
Affiliation(s)
- Eun Kyung Jeon: Department of Communication Sciences and Disorders, University of Iowa, Iowa City, Iowa, United States
- Carolyn J. Brown: Department of Communication Sciences and Disorders and Department of Otolaryngology – Head and Neck Surgery, University of Iowa, Iowa City, Iowa, United States
- Paul J. Abbas: Department of Communication Sciences and Disorders and Department of Otolaryngology – Head and Neck Surgery, University of Iowa, Iowa City, Iowa, United States
4
Saraç Kaya E, Türkyılmaz MD, Yaralı M. The evaluation of cochlear implant users' acoustic change detection ability. Hearing, Balance and Communication 2022. DOI: 10.1080/21695717.2022.2142390.
Affiliation(s)
- Eylem Saraç Kaya: Department of Audiology, Faculty of Health Sciences, Lokman Hekim University, Ankara, Turkey
- Meral Didem Türkyılmaz: Department of Audiology, Faculty of Health Sciences, Hacettepe University, Ankara, Turkey
- Mehmet Yaralı: Department of Audiology, Faculty of Health Sciences, Hacettepe University, Ankara, Turkey
5
The Acoustic Change Complex Compared to Hearing Performance in Unilaterally and Bilaterally Deaf Cochlear Implant Users. Ear Hear 2022;43:1783-1799. PMID: 35696186. PMCID: PMC9592183. DOI: 10.1097/aud.0000000000001248.
Abstract
OBJECTIVES: Clinical measures of hearing performance in cochlear implant (CI) users depend on attention and linguistic skills, which limits the evaluation of auditory perception in some patients. The acoustic change complex (ACC), a cortical auditory evoked potential to a sound change, might yield useful objective measures of hearing performance and could provide insight into cortical auditory processing. The aim of this study was to examine the ACC in response to frequency changes as an objective measure of hearing performance in CI users.

DESIGN: Thirteen bilaterally deaf and six single-sided deaf subjects were included, all of whom had used a unilateral CI for at least 1 year. Speech perception was tested with a consonant-vowel-consonant test (+10 dB signal-to-noise ratio) and a digits-in-noise test. Frequency discrimination thresholds were measured at two reference frequencies using a 3-interval, 2-alternative forced-choice, adaptive staircase procedure. The two reference frequencies were selected from each participant's frequency allocation table and were centered in the frequency band of the electrode that included 500 or 2000 Hz (the apical and middle electrodes, respectively). The ACC was evoked with pure tones at the same two reference frequencies with varying frequency increases: within the frequency band of the middle or apical electrode (+0.25 electrode step), and steps to the center frequency of the first (+1), second (+2), and third (+3) adjacent electrodes.

RESULTS: Reproducible ACCs were recorded in 17 of 19 subjects. Recordings were most often successful with the largest frequency change (+3 electrode step). Larger frequency changes resulted in shorter N1 latencies and larger N1-P2 amplitudes. In both unilaterally and bilaterally deaf subjects, the N1 latency and N1-P2 amplitude of the CI ears correlated with speech perception as well as frequency discrimination; that is, short latencies and large amplitudes were indicative of better speech perception and better frequency discrimination. No significant differences in ACC latencies or amplitudes were found between the CI ears of the unilaterally and bilaterally deaf subjects, but the CI ears of the unilaterally deaf subjects showed substantially longer latencies and smaller amplitudes than their contralateral normal-hearing ears.

CONCLUSIONS: ACC latency and amplitude evoked by tone frequency changes correlate well with the frequency discrimination and speech perception capabilities of CI users. For patients unable to perform behavioral tasks reliably, the ACC could be of added value in assessing hearing performance.
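The adaptive staircase used here for frequency discrimination can be sketched with a generic 2-down/1-up rule, which converges on the ~70.7%-correct point of the psychometric function. A Python sketch with a hypothetical listener model; the step size, starting value, stopping criterion, and listener are illustrative assumptions, since the abstract does not give the study's exact parameters:

```python
import random

def staircase_threshold(listener, start_delta, step, n_reversals=8):
    """Generic 2-down/1-up adaptive staircase.

    listener(delta) -> True/False models one trial at frequency increment
    `delta` (e.g. Hz above the reference tone). Two correct responses in a
    row make the task harder (smaller delta); one error makes it easier.
    The threshold estimate is the mean of the last six reversal values.
    """
    delta, correct_run, direction = start_delta, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        if listener(delta):
            correct_run += 1
            if correct_run == 2:                 # two correct -> step down
                correct_run = 0
                if direction == +1:              # descending after ascending
                    reversals.append(delta)      #   = a reversal
                direction = -1
                delta = max(step, delta - step)  # floor at one step
        else:                                    # one error -> step up
            correct_run = 0
            if direction == -1:
                reversals.append(delta)
            direction = +1
            delta += step
    tail = reversals[-6:]
    return sum(tail) / len(tail)

# Hypothetical listener: reliably detects increments above ~20 Hz,
# guesses (50% correct) below that, with occasional lapses.
rng = random.Random(1)
listener = lambda d: rng.random() < (0.99 if d > 20 else 0.5)
threshold = staircase_threshold(listener, start_delta=100, step=5)
```

With this listener the track descends quickly from the easy starting value and then oscillates around the 20 Hz detection boundary, so the reversal mean lands near it.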
6
Fan ZT, Zhao ZH, Sharma M, Valderrama JT, Fu QJ, Liu JX, Fu X, Li H, Zhao XL, Guo XY, Fu LY, Wang NY, Zhang J. Acoustic Change Complex Evoked by Horizontal Sound Location Change in Young Adults With Normal Hearing. Front Neurosci 2022;16:908989. PMID: 35733932. PMCID: PMC9207405. DOI: 10.3389/fnins.2022.908989.
Abstract
The acoustic change complex (ACC) is a cortical auditory evoked potential elicited by a change within a continuous sound stimulus. This study aimed to explore: (1) whether a change of horizontal sound location can elicit the ACC; (2) the relationship between the size of the location change and the amplitude and latency of the ACC; and (3) the relationship between the behavioral measure of localization, the minimum audible angle (MAA), and the ACC. A total of 36 normal-hearing adults participated. A 180° horizontal arc-shaped bracket with a 1.2 m radius was set up in a sound field, with participants seated at the center. MAA was measured in a two-alternative forced-choice setting, and electroencephalographic recording of the ACC was conducted with the location changed at four sets of positions: ±45°, ±15°, ±5°, and ±2°. The test stimulus was a 125–6,000 Hz broadband noise of 1 s at 60 ± 2 dB SPL with a 2 s interval. The N1′–P2′ amplitudes, N1′ latencies, and P2′ latencies of the ACC at the four positions were evaluated, and the influence of electrode site and of the direction of the sound position change on the ACC waveform was analyzed with analysis of variance. Results suggested that: (1) the ACC can be elicited successfully by changing the horizontal sound location, with the elicitation rate increasing as the location change grows; (2) N1′–P2′ amplitude increased and N1′ and P2′ latencies decreased as the location change increased, with statistically significant effects of test angle on N1′–P2′ amplitude [F(1.91,238.1) = 97.172, p < 0.001], N1′ latency [F(1.78,221.90) = 96.96, p < 0.001], and P2′ latency [F(1.87,233.11) = 79.97, p < 0.001]; (3) the direction of the location change had no significant effect on any ACC peak amplitude or latency; and (4) the sound location discrimination threshold by the ACC test (97.0% elicitation rate at ±5°) was higher than the MAA threshold (2.08 ± 0.5°). Although ACC thresholds are higher than behavioral thresholds on the MAA task, the ACC can be used as an objective method to evaluate sound localization ability. The article discusses the implications of these findings for clinical practice and for the evaluation of localization skills, especially in children.
Affiliation(s)
- Zhi-Tong Fan: Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Zi-Hui Zhao: Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Mridula Sharma: Department of Linguistics, Faculty of Human Sciences, Macquarie University, Sydney, NSW, Australia
- Joaquin T. Valderrama: Department of Linguistics, Faculty of Human Sciences, Macquarie University, Sydney, NSW, Australia; National Acoustic Laboratories, Sydney, NSW, Australia
- Qian-Jie Fu: Department of Head and Neck Surgery, David Geffen School of Medicine, University of California, Los Angeles, Los Angeles, CA, United States
- Jia-Xing Liu: Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Xin Fu: Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Huan Li: Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Xue-Lei Zhao: Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Xin-Yu Guo: Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Luo-Yi Fu: Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Ning-Yu Wang: Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
- Juan Zhang (corresponding author): Department of Otolaryngology Head and Neck Surgery, Beijing Chaoyang Hospital, Capital Medical University, Beijing, China
7
Clinard CG, Piker EG, Romero DJ. Inter-trial coherence as a measure of synchrony in cervical vestibular evoked myogenic potentials. J Neurosci Methods 2022;377:109628. PMID: 35618165. DOI: 10.1016/j.jneumeth.2022.109628.
Abstract
BACKGROUND: Cervical vestibular evoked myogenic potentials (cVEMPs) are surface-recorded responses that reflect saccular function. Analysis of cVEMPs has focused almost exclusively on time-domain waveform measurements such as the amplitude and latency of response peaks; synchrony-based measures have not previously been reported.

NEW METHOD: Time-frequency analyses were used to apply an objective response-detection algorithm and to quantify response synchrony. These methods, adapted from previous auditory research, are new to the VEMP literature. Air-conducted cVEMPs were elicited using a 500 Hz tone burst in twenty young, healthy participants.

RESULTS: Time-frequency characteristics of cVEMPs and time-frequency boundaries for response energy were established. Inter-trial coherence analysis revealed highly synchronous responses, with representative inter-trial coherence values of approximately 0.7.

COMPARISON WITH EXISTING METHODS: Inter-trial coherence measures were highly correlated with conventional amplitude measures in this group of young, healthy adults (R2 = 0.91-0.94), although the frequencies at which these measures had their largest magnitude were unrelated (R2 = 0.02). Conventional measures of peak-to-peak amplitude and latency were consistent with previous literature. Interaural asymmetry ratios were comparable between amplitude- and synchrony-based measures.

CONCLUSIONS: Synchrony-based time-frequency analyses were successfully applied to cVEMP data; this type of analysis may help differentiate synchrony from amplitude in populations with disrupted neural synchrony.
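Inter-trial coherence is the magnitude of the across-trials mean of unit-length phasors in each spectral bin: 1 means perfectly phase-locked across trials, while values near 1/sqrt(n_trials) indicate no phase locking. A minimal Python sketch on synthetic data; this is an FFT-per-bin simplification of the paper's time-frequency analysis, and the variable names and stimulus parameters are illustrative:

```python
import numpy as np

def inter_trial_coherence(trials, fs):
    """Inter-trial coherence (ITC) per frequency bin.

    trials: (n_trials, n_samples) array of single-trial responses.
    Returns the frequency axis and the ITC: the magnitude of the mean
    unit phasor across trials (amplitude discarded, phase kept).
    """
    spectra = np.fft.rfft(trials, axis=1)
    phasors = spectra / np.abs(spectra)        # unit phasors: phase only
    itc = np.abs(phasors.mean(axis=0))         # |mean phasor| in [0, 1]
    freqs = np.fft.rfftfreq(trials.shape[1], 1.0 / fs)
    return freqs, itc

# Synthetic check: 50 trials of a 500 Hz component with small phase jitter
# (phase-locked) plus incoherent broadband noise elsewhere.
rng = np.random.default_rng(0)
fs, n = 2000, 512
t = np.arange(n) / fs
trials = np.array([np.sin(2 * np.pi * 500 * t + rng.normal(0, 0.2))
                   + 0.5 * rng.standard_normal(n) for _ in range(50)])
freqs, itc = inter_trial_coherence(trials, fs)
itc_500 = itc[np.argmin(np.abs(freqs - 500))]  # near 1: phase-locked bin
```

Because the amplitude is discarded before averaging, ITC separates response synchrony from response size, which is exactly why it is a candidate measure for populations with disrupted neural synchrony.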
Affiliation(s)
- Christopher G Clinard: Department of Communication Sciences and Disorders, James Madison University, 235 MLK Jr. Way, MSC 4304, HBS 1024, Harrisonburg, VA 22807, USA
- Erin G Piker: Department of Communication Sciences and Disorders, James Madison University, 235 MLK Jr. Way, MSC 4304, HBS 1024, Harrisonburg, VA 22807, USA
- Daniel J Romero: Department of Hearing and Speech Sciences, Vanderbilt University, 1215 21st Avenue South, Medical Center East, Nashville, TN 37232, USA
8
Horn D, Walter M, Rubinstein J, Lau BK. Electrophysiological responses to spectral ripple envelope phase inversion in typical hearing 2- to 4-month-olds. Proc Meet Acoust 2021;45:050003. PMID: 35891886. PMCID: PMC9311477. DOI: 10.1121/2.0001558.
Affiliation(s)
- David Horn: Department of Otolaryngology-Head & Neck Surgery, University of Washington
- Max Walter: Department of Otolaryngology-Head & Neck Surgery, University of Washington
- Jay Rubinstein: Department of Otolaryngology-Head & Neck Surgery, University of Washington
- Bonnie K. Lau: Department of Otolaryngology-Head & Neck Surgery, University of Washington
9
Relationship between objective measures of hearing discrimination elicited by non-linguistic stimuli and speech perception in adults. Sci Rep 2021;11:19554. PMID: 34599244. PMCID: PMC8486784. DOI: 10.1038/s41598-021-98950-5.
Abstract
Some people who use hearing aids have difficulty discriminating between sounds even when the sounds are audible; for them, cochlear implants may provide greater benefit for speech perception. One method to identify people with auditory discrimination deficits is to measure discrimination thresholds using spectral ripple noise (SRN). Previous studies have shown that behavioral discrimination of SRN is associated with speech perception, and that behavioral discrimination is also related to cortical acoustic change complex (ACC) responses. We hypothesized that cortical ACCs could be directly related to speech perception. In this study, we investigated the relationship between subjective speech perception and objective ACC responses measured using SRNs in 13 normal-hearing adults and 10 hearing-impaired adults who use hearing aids. Behavioral SRN discrimination was correlated with speech perception in quiet and in noise. Furthermore, cortical ACC responses to phase changes in the SRN were significantly correlated with speech perception. Audibility was a major predictor of discrimination and speech perception, but direct measures of auditory discrimination can contribute information about a listener's sensitivity to the acoustic cues that underpin speech perception. The findings support the potential application of ACC responses to SRNs for identifying people who may benefit from cochlear implants.
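Spectral ripple noise of the kind described above is broadband noise whose spectral envelope is sinusoidal on a log-frequency axis; inverting the ripple phase swaps spectral peaks and valleys, which is the discrimination (and ACC-eliciting) cue. A minimal generation sketch in Python; all parameter names and defaults are illustrative assumptions, not the stimulus specification of this study:

```python
import numpy as np

def spectral_ripple_noise(fs=16000, dur=0.5, ripples_per_octave=1.0,
                          depth_db=20.0, phase=0.0, f_lo=100.0, f_hi=8000.0,
                          seed=0):
    """Broadband noise with a sinusoidal log-frequency spectral envelope.

    Shifting `phase` by pi inverts the ripple (peaks become valleys),
    the standard cue in spectral-ripple discrimination tests.
    """
    rng = np.random.default_rng(seed)
    n = int(fs * dur)
    spec = np.fft.rfft(rng.standard_normal(n))      # flat-spectrum noise
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    gain_db = np.full_like(freqs, -120.0)           # heavy out-of-band cut
    band = (freqs >= f_lo) & (freqs <= f_hi)
    octaves = np.log2(freqs[band] / f_lo)           # log-frequency axis
    gain_db[band] = 0.5 * depth_db * np.sin(
        2 * np.pi * ripples_per_octave * octaves + phase)
    spec *= 10.0 ** (gain_db / 20.0)                # impose the envelope
    x = np.fft.irfft(spec, n)
    return x / np.max(np.abs(x))                    # peak-normalize

standard = spectral_ripple_noise(phase=0.0)
inverted = spectral_ripple_noise(phase=np.pi)       # phase-inverted ripple
```

Raising `ripples_per_octave` packs the peaks closer together in log frequency, which is what makes the standard/inverted pair harder to tell apart at high ripple densities.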
10
Yoon YS, Mills I, Toliver B, Park C, Whitaker G, Drew C. Comparisons in Frequency Difference Limens Between Sequential and Simultaneous Listening Conditions in Normal-Hearing Listeners. Am J Audiol 2021;30:266-274. PMID: 33769845. DOI: 10.1044/2021_aja-20-00134.
Abstract
Purpose: We compared frequency difference limens (FDLs) in normal-hearing listeners under two listening conditions: sequential and simultaneous.

Method: Eighteen adult listeners participated in three experiments. FDL was measured using a method of limits on the comparison frequency. In the sequential condition, the tones were presented with a half-second interval between them; in the simultaneous condition, the tones were presented at the same time. In the first experiment, one of four reference tones (125, 250, 500, or 750 Hz), presented to the left ear, was paired with one of four starting comparison tones (250, 500, 750, or 1000 Hz), presented to the right ear. The second and third experiments used the same testing conditions except that the comparison stimuli were two- and three-tone complexes. Subjects were asked whether the tones sounded the same or different. When a subject chose "different," the comparison frequency decreased by 10% of the frequency difference between the reference and comparison tones; the FDL was determined when the subject chose "same" three times in a row.

Results: FDLs were significantly broader (worse) with simultaneous listening than with sequential listening for the two- and three-tone complex conditions, but not for the single-tone condition. FDLs were narrowest (best) with the three-tone complex under both listening conditions. FDLs broadened as testing frequency increased for the single tone and the two-tone complex, but did not broaden at frequencies above 250 Hz for the three-tone complex.

Conclusion: The results suggest that sequential and simultaneous frequency discrimination are mediated by different processes at different stages of the auditory pathway for complex tones, but not for pure tones.
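The descending procedure described in the Method section maps directly onto code: each "different" response shrinks the comparison-reference gap by 10%, and the track stops after three consecutive "same" responses. A Python sketch with a hypothetical deterministic listener; the 4% "same" criterion is an illustrative assumption, and the 10% step is read here as 10% of the current difference, an interpretive assumption:

```python
def fdl_method_of_limits(respond_same, reference, comparison):
    """Descending method of limits for a frequency difference limen (FDL).

    respond_same(ref, comp) -> True/False models one same/different trial.
    Each 'different' response moves the comparison tone 10% of the current
    gap closer to the reference; the FDL is the gap at which the listener
    reports 'same' three times in a row.
    """
    same_run = 0
    while same_run < 3:
        if respond_same(reference, comparison):
            same_run += 1                                # consecutive 'same'
        else:
            same_run = 0                                 # reset the run
            comparison -= 0.10 * (comparison - reference)  # shrink the gap
    return comparison - reference

# Hypothetical listener: reports 'same' once the gap falls below 4%
# of the reference frequency.
listener = lambda ref, comp: (comp - ref) < 0.04 * ref
fdl = fdl_method_of_limits(listener, reference=500.0, comparison=1000.0)
```

With a 500 Hz reference and a 1000 Hz starting comparison, the gap decays geometrically (500, 450, 405, ...) until it first drops below the listener's 20 Hz criterion, so the returned FDL sits just under 20 Hz.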
Affiliation(s)
- Yang-Soo Yoon: Department of Communication Sciences and Disorders, Baylor University, Waco, TX
- Ivy Mills: Department of Communication Sciences and Disorders, Baylor University, Waco, TX
- BaileyAnn Toliver: Department of Communication Sciences and Disorders, Baylor University, Waco, TX
- Christine Park: Department of Communication Sciences and Disorders, Baylor University, Waco, TX
- George Whitaker: Division of Otolaryngology, Baylor Scott & White Medical Center, Temple, TX
- Carrie Drew: Department of Communication Sciences and Disorders, Baylor University, Waco, TX
11
Speech Perception with Noise Vocoding and Background Noise: An EEG and Behavioral Study. J Assoc Res Otolaryngol 2021;22:349-363. PMID: 33851289. DOI: 10.1007/s10162-021-00787-2.
Abstract
This study explored the physiological response of the human brain to degraded speech syllables, with degradation introduced by noise vocoding and/or background noise. The goal was to identify features of auditory-evoked potentials (AEPs) that may explain speech intelligibility. Ten human subjects with normal hearing performed syllable-detection tasks while their AEPs were recorded with 32-channel electroencephalography. Subjects were presented with six syllables in consonant-vowel-consonant or vowel-consonant-vowel form, noise-vocoded with 22 or 4 frequency channels. Examining the AEP peak heights (P1, N1, and P2), vocoding alone showed no consistent effect; background noise did not consistently reduce P1, sometimes reduced N1, and almost always strongly reduced P2. Two other physiological metrics were examined: (1) classification accuracy of the syllables based on AEPs, which indicated whether AEPs were distinguishable for different syllables, and (2) the cross-condition correlation of AEPs (rcc) between clean and degraded speech, which indexed the brain's ability to extract speech-related features and suppress the response to noise. Both metrics decreased with degraded speech quality. We further tested whether the two metrics could explain cross-subject variation in behavioral performance. A significant correlation existed for rcc, as well as for classification based on early AEPs, in fronto-central areas. Because rcc indicates similarity between responses to clean and degraded speech, this finding suggests that high speech intelligibility may result from the brain's ability to ignore noise in the sound carrier and/or background.
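The cross-condition correlation metric rcc is, in essence, a Pearson correlation between the averaged AEP waveform for clean speech and the waveform for the same syllable degraded. A minimal Python sketch on toy waveforms; it illustrates the metric only, not the study's preprocessing, epoching, or channel selection:

```python
import numpy as np

def cross_condition_correlation(clean_aep, degraded_aep):
    """Pearson correlation between two 1-D AEP waveforms on the same
    time base: the response to clean speech vs. the response to the
    same syllable degraded (vocoded and/or in noise)."""
    c = clean_aep - clean_aep.mean()
    d = degraded_aep - degraded_aep.mean()
    return float((c @ d) / np.sqrt((c @ c) * (d @ d)))

# Toy check: a degraded response that preserves the clean waveform's
# shape, but attenuated and noisier, still correlates highly.
rng = np.random.default_rng(0)
t = np.linspace(0, 0.5, 500)
clean = (np.exp(-((t - 0.1) / 0.02) ** 2)           # P1/N1-like bump
         - 0.6 * np.exp(-((t - 0.2) / 0.03) ** 2))  # later deflection
degraded = 0.5 * clean + 0.05 * rng.standard_normal(t.size)
rcc = cross_condition_correlation(clean, degraded)
```

Because the correlation is scale-invariant, rcc isolates how well the response *shape* survives degradation, independent of the overall amplitude reductions (e.g., in P2) that noise produces.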
12
Yoon YS, Boren CM, Diaz B. Effect of Realistic Test Conditions on Spectral and Temporal Processing in Normal-Hearing Listeners. Am J Audiol 2021;30:160-169. PMID: 33621127. DOI: 10.1044/2020_aja-20-00120.
Abstract
Purpose: To measure the effects of testing environment (soundproof booth vs. quiet room), test order, and number of test sessions on spectral and temporal processing in normal-hearing (NH) listeners.

Method: Thirty-two adult NH listeners participated in three experiments. For all experiments, stimuli were presented to the left ear at the subject's most comfortable level through headphones, and all tests used an adaptive three-alternative forced-choice paradigm. Experiment 1 compared the soundproof-booth and quiet-room conditions on amplitude modulation detection thresholds and modulation frequency discrimination thresholds at each of five modulation frequencies. Experiment 2 compared two test orders on frequency discrimination thresholds in the quiet room: thresholds were first measured with the four pure tones in ascending and then descending order, and then with the order counterbalanced. Experiment 3 assessed the amplitude discrimination threshold in the quiet room three times to determine the effect of the number of test sessions, and thresholds were compared across sessions.

Results: There was no significant effect of test environment. Test order was an important variable for frequency discrimination, particularly between piano tones and pure tones. There was also no significant difference across test sessions.

Conclusions: These results suggest that a controlled test environment may not be required for spectral and temporal assessment in NH listeners. In a quiet test environment, a single outcome measure is sufficient, but test orders should be counterbalanced.
Affiliation(s)
- Yang-Soo Yoon: Department of Communication Sciences and Disorders, Baylor University, Waco, TX
- Brianna Diaz: Department of Speech, Language and Hearing Sciences, Texas Tech University Health Sciences Center, Lubbock
Collapse
|
13
|
Lee J, Han JH, Lee HJ. Long-Term Musical Training Alters Auditory Cortical Activity to the Frequency Change. Front Hum Neurosci 2020; 14:329. [PMID: 32973478 PMCID: PMC7471721 DOI: 10.3389/fnhum.2020.00329] [Citation(s) in RCA: 4] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/29/2020] [Accepted: 07/24/2020] [Indexed: 11/13/2022] Open
Abstract
Objective: The ability to detect frequency variation is a fundamental skill necessary for speech perception. It is known that musical expertise is associated with a range of auditory perceptual skills, including discriminating frequency change, which suggests the neural encoding of spectral features can be enhanced by musical training. In this study, we measured auditory cortical responses to frequency change in musicians to examine the relationships between N1/P2 responses and behavioral performance/musical training. Methods: Behavioral and electrophysiological data were obtained from professional musicians and age-matched non-musician participants. Behavioral data included frequency discrimination thresholds measured with no threshold-equalizing noise (TEN) and at +5, 0, and -5 signal-to-noise ratio settings. Auditory-evoked responses were measured using a 64-channel electroencephalogram (EEG) system in response to frequency changes in ongoing pure tones with base frequencies of 250 and 4,000 Hz; the magnitudes of frequency change were 10%, 25%, or 50% of the base frequency. N1 and P2 amplitudes and latencies as well as dipole source activation in the left and right hemispheres were measured for each condition. Results: Compared to the non-musician group, behavioral thresholds in the musician group were lower for frequency discrimination in quiet conditions only. The scalp-recorded N1 amplitudes were modulated as a function of frequency change. P2 amplitudes in the musician group were larger than in the non-musician group. Dipole source analysis showed that P2 dipole activity to frequency changes was lateralized to the right hemisphere, with greater activity in the musician group in both hemispheres. Additionally, N1 amplitudes to frequency changes were positively related to behavioral thresholds for frequency discrimination, while enhanced P2 amplitudes were associated with a longer duration of musical training. Conclusions: Our results demonstrate that auditory cortical potentials evoked by frequency change are related to behavioral thresholds for frequency discrimination in musicians. Larger P2 amplitudes in musicians compared to non-musicians reflect musical training-induced neural plasticity.
Affiliation(s)
- Jihyun Lee
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, South Korea
- Ji-Hye Han
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, South Korea
- Hyo-Jeong Lee
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, South Korea; Department of Otorhinolaryngology, College of Medicine, Hallym University, Anyang, South Korea

14
Narne VK, Jain S, Sharma C, Baer T, Moore BCJ. Narrow-band ripple glide direction discrimination and its relationship to frequency selectivity estimated using psychophysical tuning curves. Hear Res 2020; 389:107910. [PMID: 32086020 DOI: 10.1016/j.heares.2020.107910] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/08/2019] [Revised: 01/29/2020] [Accepted: 02/06/2020] [Indexed: 10/25/2022]
Abstract
The highest spectral ripple density at which the discrimination of ripple glide direction was possible (STRtdir task) was assessed for one-octave wide (narrowband) stimuli with center frequencies of 500, 1000, 2000, and 4000 Hz and for a broadband stimulus. A pink noise lowpass filtered at the lower edge frequency of the rippled-noise stimuli was used to mask possible combination ripples. The relationship between thresholds measured using the STRtdir task and estimates of the sharpness of tuning (Q10) derived from fast psychophysical tuning curves was assessed for subjects with normal hearing (NH) and cochlear hearing loss (CHL). The STRtdir thresholds for the narrowband stimuli were highly correlated with Q10 values for the same center frequency, supporting the idea that STRtdir thresholds for the narrowband stimuli provide a good measure of frequency resolution. Both the STRtdir thresholds and the Q10 values were lower (worse) for the subjects with CHL than for the subjects with NH. For both the NH and CHL subjects, mean STRtdir thresholds for the broadband stimulus were not significantly higher (better) than for the narrowband stimuli, suggesting little or no ability to combine information across center frequencies.
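Rippled-noise stimuli of the kind used in the STRtdir task are typically synthesized by summing many log-spaced sinusoids whose levels follow a sinusoidal envelope along a per-octave frequency axis. A minimal sketch with illustrative parameter values, not the authors' exact stimulus specification:

```python
import math, random

def ripple_noise(density=2.0, depth_db=30.0, phase=0.0,
                 f_lo=500.0, f_hi=1000.0, n_comp=120,
                 fs=16000, dur=0.1, seed=0):
    """One-octave rippled noise: log-spaced sinusoids whose levels (in dB)
    vary sinusoidally along a log2-frequency axis, with `density` ripples
    per octave. Component phases are randomized, as in a noise."""
    rng = random.Random(seed)
    comps = []
    for i in range(n_comp):
        f = f_lo * (f_hi / f_lo) ** (i / (n_comp - 1))      # log spacing
        octaves = math.log2(f / f_lo)
        amp_db = (depth_db / 2) * math.sin(2 * math.pi * density * octaves + phase)
        comps.append((f, 10 ** (amp_db / 20), rng.uniform(0, 2 * math.pi)))
    n = int(fs * dur)
    return [sum(a * math.sin(2 * math.pi * f * t / fs + ph)
                for f, a, ph in comps) for t in range(n)]

stim = ripple_noise()                      # standard ripple
shifted = ripple_noise(phase=math.pi / 2)  # 90-degree ripple-phase shift
```

The temporal glide of the ripples and the low-pass pink-noise masker for combination ripples described above are omitted; only the spectral envelope is sketched.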
Affiliation(s)
- Vijaya Kumar Narne
- Department of Audiology, JSS Institute of Speech and Hearing, Mysore, India
- Saransh Jain
- Department of Audiology, JSS Institute of Speech and Hearing, Mysore, India
- Chitkala Sharma
- Department of Audiology, JSS Institute of Speech and Hearing, Mysore, India
- Thomas Baer
- Department of Experimental Psychology, University of Cambridge, Cambridge, UK
- Brian C J Moore
- Department of Experimental Psychology, University of Cambridge, Cambridge, UK

15
Shim HJ, Go G, Lee H, Choi SW, Won JH. Influence of Visual Deprivation on Auditory Spectral Resolution, Temporal Resolution, and Speech Perception. Front Neurosci 2019; 13:1200. [PMID: 31780886 PMCID: PMC6851016 DOI: 10.3389/fnins.2019.01200] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/11/2019] [Accepted: 10/23/2019] [Indexed: 11/23/2022] Open
Abstract
We evaluated whether blind subjects have advantages in auditory spectral resolution, temporal resolution, and speech perception in noise compared with sighted subjects. We also compared psychoacoustic performance between early blind (EB) subjects and late blind (LB) subjects. Nineteen EB subjects, 16 LB subjects, and 20 sighted individuals were enrolled. All subjects were right-handed with normal and symmetric hearing thresholds and without cognitive impairments. Three psychoacoustic measurements of the subjects’ right ears were performed via an inserted earphone to determine spectral-ripple discrimination (SRD), temporal modulation detection (TMD), and speech recognition threshold (SRT) in noisy conditions. Acoustic change complex (ACC) responses were recorded during passive listening to standard ripple-inverted ripple stimuli. EB subjects exhibited better SRD than did LB (p = 0.020) and sighted (p = 0.003) subjects. TMD was better in EB (p < 0.001) and LB (p = 0.007) subjects compared with sighted subjects. SRD was positively correlated with the duration of blindness (r = 0.386, p = 0.024). Acoustic change complex data for ripple noise change at the Cz and Fz electrodes showed trends toward significant correlations with the behavioral results. In conclusion, compared with sighted subjects, EB subjects showed advantages in terms of auditory spectral and temporal resolution, while LB subjects showed an advantage in temporal resolution exclusively. These findings suggest that it might take longer for auditory spectral resolution to functionally enhance following visual deprivation compared to temporal resolution. Alternatively, a critical period of very young age may be required for auditory spectral resolution to improve following visual deprivation.
Affiliation(s)
- Hyun Joon Shim
- Department of Otorhinolaryngology-Head and Neck Surgery, Eulji Medical Center, Eulji University School of Medicine, Seoul, South Korea
- Geurim Go
- Department of Psychology, Duksung Women's University, Seoul, South Korea
- Heirim Lee
- Department of Psychology, Duksung Women's University, Seoul, South Korea
- Sung Won Choi
- Department of Psychology, Duksung Women's University, Seoul, South Korea
- Jong Ho Won
- Division of ENT, Sleep Disordered Breathing, Respiratory, and Anesthesia, Office of Product Evaluation and Quality, Center for Devices and Radiological Health, U.S. Food and Drug Administration, Silver Spring, MD, United States

16
Mattingly JK, Castellanos I, Moberly AC. Nonverbal Reasoning as a Contributor to Sentence Recognition Outcomes in Adults With Cochlear Implants. Otol Neurotol 2019; 39:e956-e963. [PMID: 30444843 DOI: 10.1097/mao.0000000000001998] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
HYPOTHESIS Significant variability in speech recognition persists among postlingually deafened adults with cochlear implants (CIs). We hypothesize that scores on a test of nonverbal reasoning predict sentence recognition in adult CI users. BACKGROUND Cognitive functions contribute to speech recognition outcomes in adults with hearing loss. These functions may be particularly important for CI users who must interpret highly degraded speech signals through their devices. This study used a visual measure of reasoning (the ability to solve novel problems), the Raven's Progressive Matrices (RPM), to predict sentence recognition in CI users. METHODS Participants were 39 postlingually deafened adults with CIs and 43 age-matched normal-hearing (NH) controls. CI users were assessed for recognition of words in sentences in quiet, and NH controls listened to eight-channel vocoded versions to simulate the degraded signal delivered by a CI. A computerized visual task of the RPM, requiring participants to identify the correct missing piece in a 3×3 matrix of geometric designs, was also performed. Particular items from the RPM were examined for their associations with sentence recognition abilities, and a subset of items on the RPM was tested for the ability to predict degraded sentence recognition in the NH controls. RESULTS The overall number of items answered correctly on the 48-item RPM significantly correlated with sentence recognition in CI users (r = 0.35-0.47) and NH controls (r = 0.36-0.57). An abbreviated 12-item version of the RPM was created and performance also correlated with sentence recognition in CI users (r = 0.40-0.48) and NH controls (r = 0.49-0.56). CONCLUSIONS Nonverbal reasoning skills correlated with sentence recognition in both CI and NH subjects. Our findings provide further converging evidence that cognitive factors contribute to speech processing by adult CI users and can help explain variability in outcomes. Our abbreviated version of the RPM may serve as a clinically meaningful assessment for predicting sentence recognition outcomes in CI users.
Affiliation(s)
- Jameson K Mattingly
- Department of Otolaryngology-Head and Neck Surgery, The Ohio State University, Columbus, Ohio

17
Moberly AC, Mattingly JK, Castellanos I. How Does Nonverbal Reasoning Affect Sentence Recognition in Adults with Cochlear Implants and Normal-Hearing Peers? Audiol Neurootol 2019; 24:127-138. [PMID: 31266013 DOI: 10.1159/000500699] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/31/2018] [Accepted: 04/30/2019] [Indexed: 11/19/2022] Open
Abstract
BACKGROUND Previous research has demonstrated an association of scores on a visual test of nonverbal reasoning, Raven's Progressive Matrices (RPM), with scores on open-set sentence recognition in quiet for adult cochlear implant (CI) users as well as for adults with normal hearing (NH) listening to noise-vocoded sentence materials. Moreover, in that study, CI users demonstrated poorer nonverbal reasoning when compared with NH peers. However, it remains unclear what underlying neurocognitive processes contributed to the association of nonverbal reasoning scores with sentence recognition, and to the poorer scores demonstrated by CI users. OBJECTIVES Three hypotheses were tested: (1) nonverbal reasoning abilities of adult CI users and normal-hearing (NH) age-matched peers would be predicted by performance on more basic neurocognitive measures of working memory capacity, information-processing speed, inhibitory control, and concentration; (2) nonverbal reasoning would mediate the effects of more basic neurocognitive functions on sentence recognition in both groups; and (3) group differences in more basic neurocognitive functions would explain the group differences previously demonstrated in nonverbal reasoning. METHOD Eighty-three participants (40 CI and 43 NH) underwent testing of sentence recognition using two sets of sentence materials: sentences produced by a single male talker (Harvard sentences) and high-variability sentences produced by multiple talkers (Perceptually Robust English Sentence Test Open-set, PRESTO). Participants also completed testing of nonverbal reasoning using a visual computerized RPM test, and additional neurocognitive assessments were collected using a visual Digit Span test and a Stroop Color-Word task. Multivariate regression analyses were performed to test our hypotheses while treating age as a covariate. RESULTS In the CI group, information-processing speed on the Stroop task predicted RPM performance, and RPM scores mediated the effects of information-processing speed on sentence recognition abilities for both Harvard and PRESTO sentences. In contrast, for the NH group, Stroop inhibitory control predicted RPM performance, and a trend was seen towards RPM scores mediating the effects of inhibitory control on sentence recognition, but only for PRESTO sentences. Poorer RPM performance in CI users than NH controls could be partially attributed to slower information-processing speed. CONCLUSIONS Neurocognitive functions contributed differentially to nonverbal reasoning performance in CI users as compared with NH peers, and nonverbal reasoning appeared to partially mediate the effects of these different neurocognitive functions on sentence recognition in both groups, at least for PRESTO sentences. Slower information-processing speed accounted for poorer nonverbal reasoning scores in CI users. Thus, it may be that prolonged auditory deprivation contributes to cognitive decline through slower information processing.
Affiliation(s)
- Aaron C Moberly
- Department of Otolaryngology-Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
- Jameson K Mattingly
- Department of Otolaryngology-Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
- Irina Castellanos
- Department of Otolaryngology-Head and Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA

18
O'Neill ER, Kreft HA, Oxenham AJ. Cognitive factors contribute to speech perception in cochlear-implant users and age-matched normal-hearing listeners under vocoded conditions. J Acoust Soc Am 2019; 146:195. [PMID: 31370651 PMCID: PMC6637026 DOI: 10.1121/1.5116009] [Citation(s) in RCA: 37] [Impact Index Per Article: 7.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
This study examined the contribution of perceptual and cognitive factors to speech-perception abilities in cochlear-implant (CI) users. Thirty CI users were tested on word intelligibility in sentences with and without semantic context, presented in quiet and in noise. Performance was compared with measures of spectral-ripple detection and discrimination, thought to reflect peripheral processing, as well as with cognitive measures of working memory and non-verbal intelligence. Thirty age-matched and thirty younger normal-hearing (NH) adults also participated, listening via tone-excited vocoders, adjusted to produce mean performance for speech in noise comparable to that of the CI group. Results suggest that CI users may rely more heavily on semantic context than younger or older NH listeners, and that non-auditory working memory explains significant variance in the CI and age-matched NH groups. Between-subject variability in spectral-ripple detection thresholds was similar across groups, despite the spectral resolution for all NH listeners being limited by the same vocoder, whereas speech perception scores were more variable between CI users than between NH listeners. The results highlight the potential importance of central factors in explaining individual differences in CI users and question the extent to which standard measures of spectral resolution in CIs reflect purely peripheral processing.
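The tone-excited vocoders used here to simulate CI listening replace each analysis band of the input with a pure tone modulated by that band's temporal envelope. A minimal sketch, assuming RBJ-cookbook band-pass filters, a rectify-and-smooth envelope detector, and an illustrative three-channel bank — the study's actual channel count and filter parameters are not reproduced:

```python
import math

def biquad_bandpass(x, f0, q, fs):
    """RBJ-cookbook band-pass biquad (0 dB peak gain), direct form I."""
    w0 = 2 * math.pi * f0 / fs
    alpha = math.sin(w0) / (2 * q)
    b0, b2 = alpha, -alpha                      # b1 = 0 for this form
    a0, a1, a2 = 1 + alpha, -2 * math.cos(w0), 1 - alpha
    y, x1, x2, y1, y2 = [], 0.0, 0.0, 0.0, 0.0
    for xn in x:
        yn = (b0 * xn + b2 * x2 - a1 * y1 - a2 * y2) / a0
        x2, x1, y2, y1 = x1, xn, y1, yn
        y.append(yn)
    return y

def tone_vocode(x, centers, fs, q=4.0, env_ms=8.0):
    """Per channel: band-pass filter, extract the envelope (rectify and
    smooth with a moving average), then amplitude-modulate a pure tone
    at the channel centre frequency; sum the channels."""
    win = max(1, int(fs * env_ms / 1000))
    out = [0.0] * len(x)
    for f0 in centers:
        band = biquad_bandpass(x, f0, q, fs)
        env, acc, buf = [], 0.0, []
        for s in band:                           # moving-average envelope
            buf.append(abs(s)); acc += abs(s)
            if len(buf) > win:
                acc -= buf.pop(0)
            env.append(acc / len(buf))
        for t in range(len(x)):
            out[t] += env[t] * math.sin(2 * math.pi * f0 * t / fs)
    return out

fs = 16000
speech = [math.sin(2 * math.pi * 500 * t / fs) for t in range(800)]  # stand-in input
voc = tone_vocode(speech, [500, 1000, 2000], fs)
```

Varying the number of channels (and, in other studies, the filter slopes) is what lets such simulations match NH listeners' speech-in-noise scores to those of the CI group.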
Affiliation(s)
- Erin R O'Neill
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Heather A Kreft
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA
- Andrew J Oxenham
- Department of Psychology, University of Minnesota, Elliott Hall, 75 East River Parkway, Minneapolis, Minnesota 55455, USA

19
DiNino M, Arenberg JG. Age-Related Performance on Vowel Identification and the Spectral-temporally Modulated Ripple Test in Children With Normal Hearing and With Cochlear Implants. Trends Hear 2019; 22:2331216518770959. [PMID: 29708065 PMCID: PMC5949928 DOI: 10.1177/2331216518770959] [Citation(s) in RCA: 18] [Impact Index Per Article: 3.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/21/2022] Open
Abstract
Children’s performance on psychoacoustic tasks improves with age, but inadequate auditory input may delay this maturation. Cochlear implant (CI) users receive a degraded auditory signal with reduced frequency resolution compared with normal, acoustic hearing; thus, immature auditory abilities may contribute to the variation among pediatric CI users’ speech recognition scores. This study investigated relationships between age-related variables, spectral resolution, and vowel identification scores in prelingually deafened, early-implanted children with CIs compared with normal hearing (NH) children. All participants performed vowel identification and the Spectral-temporally Modulated Ripple Test (SMRT). Vowel stimuli for NH children were vocoded to simulate the reduced spectral resolution of CI hearing. Age positively predicted NH children’s vocoded vowel identification scores, but time with the CI was a stronger predictor of vowel recognition and SMRT performance of children with CIs. For both groups, SMRT thresholds were related to vowel identification performance, analogous to previous findings in adults. Sequential information analysis of vowel feature perception indicated greater transmission of duration-related information compared with formant features in both groups of children. In addition, the amount of F2 information transmitted predicted SMRT thresholds in children with NH and with CIs. Comparisons between the two CIs of bilaterally implanted children revealed disparate task performance levels and information transmission values within the same child. These findings indicate that adequate auditory experience contributes to auditory perceptual abilities of pediatric CI users. Further, factors related to individual CIs may be more relevant to psychoacoustic task performance than are the overall capabilities of the child.
Affiliation(s)
- Mishaela DiNino
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- Julie G Arenberg
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA

20
Croghan NBH, Smith ZM. Speech Understanding With Various Maskers in Cochlear-Implant and Simulated Cochlear-Implant Hearing: Effects of Spectral Resolution and Implications for Masking Release. Trends Hear 2019; 22:2331216518787276. [PMID: 30022730 PMCID: PMC6053854 DOI: 10.1177/2331216518787276] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/18/2022] Open
Abstract
The purpose of this study was to investigate the relationship between psychophysical spectral resolution and sentence reception in various types of interfering backgrounds for listeners with cochlear implants and normal-hearing subjects listening to vocoded speech. Spectral resolution was measured with a spectral modulation detection (SMD) task. For speech testing, maskers included stationary speech-shaped noise (SSN), four-talker babble, multitone noise, and a competing talker. To explore the possible trade-offs between spectral resolution and susceptibility to different types of maskers, the degree of simulated current spread was varied within the vocoder group, achieving a range of performance for SMD and speech tasks. Greater simulated current spread was detrimental to both spectral resolution and speech recognition, suggesting that interventions that decrease current spread may improve performance for both tasks. Better SMD sensitivity was significantly correlated with improved sentence reception. In addition, differences in sentence reception across the four maskers were significantly associated with SMD across the combined group of cochlear-implant and vocoder subjects. Masking release (MR) was quantified as the signal-to-noise ratio difference in speech reception threshold between the SSN and competing talker. Several individual cochlear-implant subjects demonstrated substantial MR, in contrast to previous studies, and the degree of MR increased with better SMD thresholds across subjects. The results of this study suggest that alternative masker types, particularly competing talkers, are more sensitive than stationary SSN to differences in spectral resolution in the cochlear-implant population.
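Masking release (MR), as defined in this study, is a simple difference of speech reception thresholds; the values below are illustrative, not study data:

```python
def masking_release(srt_stationary_db, srt_talker_db):
    """MR = SRT in stationary speech-shaped noise minus SRT with a
    competing talker (both in dB SNR). A positive value means the
    listener does better against the fluctuating masker."""
    return srt_stationary_db - srt_talker_db

print(masking_release(-2.0, -8.0))  # prints 6.0
```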
Affiliation(s)
- Naomi B H Croghan
- Denver Research & Technology Labs, Cochlear Ltd., Centennial, CO, USA; Department of Speech, Language, and Hearing Sciences, University of Colorado, Boulder, CO, USA
- Zachary M Smith
- Denver Research & Technology Labs, Cochlear Ltd., Centennial, CO, USA; Department of Physiology and Biophysics, School of Medicine, University of Colorado, Aurora, CO, USA

21
Speech Perception with Spectrally Non-overlapping Maskers as Measure of Spectral Resolution in Cochlear Implant Users. J Assoc Res Otolaryngol 2018; 20:151-167. [PMID: 30456730 DOI: 10.1007/s10162-018-00702-2] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/24/2018] [Accepted: 10/07/2018] [Indexed: 10/27/2022] Open
Abstract
Poor spectral resolution contributes to the difficulties experienced by cochlear implant (CI) users when listening to speech in noise. However, correlations between measures of spectral resolution and speech perception in noise have not always been found to be robust. It may be that the relationship between spectral resolution and speech perception in noise becomes clearer in conditions where the speech and noise are not spectrally matched, so that improved spectral resolution can assist in separating the speech from the masker. To test this prediction, speech intelligibility was measured with noise or tone maskers that were presented either in the same spectral channels as the speech or in interleaved spectral channels. Spectral resolution was estimated via a spectral ripple discrimination task. Results from vocoder simulations in normal-hearing listeners showed increasing differences in speech intelligibility between spectrally overlapped and interleaved maskers as well as improved spectral ripple discrimination with increasing spectral resolution. However, no clear differences were observed in CI users between performance with spectrally interleaved and overlapped maskers, or between tone and noise maskers. The results suggest that spectral resolution in current CIs is too poor to take advantage of the spectral separation produced by spectrally interleaved speech and maskers. Overall, the spectrally interleaved and tonal maskers produce a much larger difference in performance between normal-hearing listeners and CI users than do traditional speech-in-noise measures, and thus provide a more sensitive test of speech perception abilities for current and future implantable devices.

22
Souza P, Hoover E. The Physiologic and Psychophysical Consequences of Severe-to-Profound Hearing Loss. Semin Hear 2018; 39:349-363. [PMID: 30443103 DOI: 10.1055/s-0038-1670698] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/28/2022] Open
Abstract
Substantial loss of cochlear function is required to elevate pure-tone thresholds to the severe hearing loss range; yet, individuals with severe or profound hearing loss continue to rely on hearing for communication. Despite the impairment, sufficient information is encoded at the periphery to make acoustic hearing a viable option. However, the probability of significant cochlear and/or neural damage associated with the loss has consequences for sound perception and speech recognition. These consequences include degraded frequency selectivity, which can be assessed with tests including psychoacoustic tuning curves and broadband rippled stimuli. Because speech recognition depends on the ability to resolve frequency detail, a listener with severe hearing loss is likely to have impaired communication in both quiet and noisy environments. However, the extent of the impairment varies widely among individuals. A better understanding of the fundamental abilities of listeners with severe and profound hearing loss and the consequences of those abilities for communication can support directed treatment options in this population.
Affiliation(s)
- Pamela Souza
- Department of Communication Sciences and Disorders, Northwestern University, Evanston, Illinois
- Eric Hoover
- Department of Hearing and Speech Sciences, University of Maryland, Baltimore, Maryland

23
Carlyon RP, Cosentino S, Deeks JM, Parkinson W, Arenberg JG. Effect of Stimulus Polarity on Detection Thresholds in Cochlear Implant Users: Relationships with Average Threshold, Gap Detection, and Rate Discrimination. J Assoc Res Otolaryngol 2018; 19:559-567. [PMID: 29881937 PMCID: PMC6226408 DOI: 10.1007/s10162-018-0677-5] [Citation(s) in RCA: 27] [Impact Index Per Article: 4.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/01/2017] [Accepted: 05/18/2018] [Indexed: 12/03/2022] Open
Abstract
Previous psychophysical and modeling studies suggest that cathodic stimulation by a cochlear implant (CI) may preferentially activate the peripheral processes of the auditory nerve, whereas anodic stimulation may preferentially activate the central axons. Because neural degeneration typically starts with loss of the peripheral processes, lower thresholds for cathodic than for anodic stimulation may indicate good local neural survival. We measured thresholds for 99-pulse-per-second trains of triphasic (TP) pulses where the central high-amplitude phase was either anodic (TP-A) or cathodic (TP-C). Thresholds were obtained in monopolar mode from four or five electrodes and a total of eight ears from subjects implanted with the Advanced Bionics CI. When between-subject differences were removed, there was a modest but significant correlation between the polarity effect (TP-C threshold minus TP-A threshold) and the average of TP-C and TP-A thresholds, consistent with the hypothesis that a large polarity effect corresponds to good neural survival. When data were averaged across electrodes for each subject, relatively low thresholds for TP-C correlated with a high "upper limit" (the pulse rate up to which pitch continues to increase) from a previous study (Cosentino et al., J Assoc Res Otolaryngol 17:371-382). Overall, the results provide modest indirect support for the hypothesis that the polarity effect provides an estimate of local neural survival.
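The polarity effect and its relation to average threshold reduce to per-electrode arithmetic plus a correlation. A sketch with hypothetical thresholds (the study additionally removed between-subject differences before correlating, which is omitted here):

```python
import math

def pearson_r(xs, ys):
    """Plain Pearson correlation coefficient."""
    n = len(xs)
    mx, my = sum(xs) / n, sum(ys) / n
    sxy = sum((x - mx) * (y - my) for x, y in zip(xs, ys))
    sxx = sum((x - mx) ** 2 for x in xs)
    syy = sum((y - my) ** 2 for y in ys)
    return sxy / math.sqrt(sxx * syy)

# Hypothetical per-electrode detection thresholds (dB) -- not study data.
tp_c = [41.0, 43.5, 47.0, 40.0, 45.5]   # cathodic-centred triphasic (TP-C)
tp_a = [44.0, 44.5, 46.5, 43.5, 46.0]   # anodic-centred triphasic (TP-A)

polarity_effect = [c - a for c, a in zip(tp_c, tp_a)]   # TP-C minus TP-A
average_thr = [(c + a) / 2 for c, a in zip(tp_c, tp_a)]
r = pearson_r(polarity_effect, average_thr)
```

In these made-up numbers, electrodes with low average thresholds show a more negative polarity effect (TP-C well below TP-A), so `r` comes out positive, the direction the hypothesis predicts.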
Affiliation(s)
- Robert P Carlyon
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge, CB2 7EF, UK
- Stefano Cosentino
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge, CB2 7EF, UK
- John M Deeks
- Medical Research Council Cognition and Brain Sciences Unit, University of Cambridge, 15 Chaucer Road, Cambridge, CB2 7EF, UK
- Wendy Parkinson
- Department of Speech and Hearing Sciences, University of Washington, 1417 NE 42nd St., Seattle, WA, 98105, USA
- Julie G Arenberg
- Department of Speech and Hearing Sciences, University of Washington, 1417 NE 42nd St., Seattle, WA, 98105, USA

24
Kosaner J, Van Dun B, Yigit O, Gultekin M, Bayguzina S. Clinically recorded cortical auditory evoked potentials from paediatric cochlear implant users fitted with electrically elicited stapedius reflex thresholds. Int J Pediatr Otorhinolaryngol 2018; 108:100-112. [PMID: 29605337 DOI: 10.1016/j.ijporl.2018.02.033] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 12/14/2017] [Revised: 02/07/2018] [Accepted: 02/20/2018] [Indexed: 10/18/2022]
Abstract
OBJECTIVES This study aimed to objectively evaluate access to soft sounds (55 dB SPL) in paediatric CI users, all wearing MED-EL (Innsbruck, Austria) devices who were fitted with the objective electrically elicited stapedius reflex threshold (eSRT) fitting method, to track their cortical auditory evoked potential (CAEP) presence and latency, and to compare their CAEPs to those of normal-hearing peers. METHODS Forty-five unilaterally implanted, pre-lingually deafened MED-EL CI users, aged 12-48 months, underwent CAEP testing in the clinic at regular monthly intervals post switch-on. CAEPs were recorded in response to short speech tokens /m/, /g/ and /t/ presented in the free field at 55 dB SPL. Twenty children with normal hearing (NH), similarly aged, underwent CAEP testing once. RESULTS The proportion of present CAEPs increased and CAEP P1 latencies reduced significantly with post-implantation duration. CAEPs were scored based on their presence and age-appropriate P1 latency. These CAEP scores increased significantly with post-implantation duration. CAEP scores were significantly worse for the /m/ speech token compared to the other two tokens. Compared to the NH group, CAEP scores were significantly smaller for all post-implantation test intervals. CONCLUSIONS This study provides clinicians with a first step towards typical ranges of CAEP presence, latency, and derived CAEP score over the first months of MED-EL CI use. CAEPs within these typical ranges could validate intervention whereas less than optimum CAEPs could prompt clinicians to seek solutions in a timely manner. CAEPs could clinically validate whether a CI provides adequate access to soft sounds. This approach could form an alternative to behavioural soft sound access verification.
Affiliation(s)
- Julie Kosaner
- Meders Speech and Hearing Clinic, Meders İşitme ve Konuşma Merkezi, Söğütlüçeşme Caddesi: No 102, Kadıköy, İstanbul 34714, Turkey.
- Bram Van Dun
- National Acoustic Laboratories, Australian Hearing Hub, Level 5, 16 University Avenue, Macquarie University, NSW 2109, Australia; The HEARing CRC, 550 Swanston St, Carlton, VIC 3053, Australia.
- Ozgur Yigit
- Istanbul Training and Research Hospital, SBÜ, İstanbul Eğitim ve Araştırma Hastanesi, Kasap İlyas Mah., Org. Abdurrahman Nafiz Gürman Cd., 34098 Fatih/İstanbul, Turkey.
- Muammer Gultekin
- Meders Speech and Hearing Clinic, Meders İşitme ve Konuşma Merkezi, Söğütlüçeşme Caddesi: No 102, Kadıköy, İstanbul 34714, Turkey.
- Svetlana Bayguzina
- Meders Speech and Hearing Clinic, Meders İşitme ve Konuşma Merkezi, Söğütlüçeşme Caddesi: No 102, Kadıköy, İstanbul 34714, Turkey.
25
Abstract
OBJECTIVES Spectral resolution is a correlate of open-set speech understanding in postlingually deaf adults and prelingually deaf children who use cochlear implants (CIs). To apply measures of spectral resolution to assess device efficacy in younger CI users, it is necessary to understand how spectral resolution develops in normal-hearing children. In this study, spectral ripple discrimination (SRD) was used to measure listeners' sensitivity to a shift in phase of the spectral envelope of a broadband noise. Both resolution of peak-to-peak location (frequency resolution) and peak-to-trough intensity (across-channel intensity resolution) are required for SRD. DESIGN SRD was measured as the highest ripple density (in ripples per octave) for which a listener could discriminate a 90° shift in phase of the sinusoidally modulated amplitude spectrum. A 2 × 3 between-subjects design was used to assess the effects of age (7-month-old infants versus adults) and ripple peak/trough "depth" (10, 13, and 20 dB) on SRD in normal-hearing listeners (experiment 1). In experiment 2, SRD thresholds in the same age groups were compared using a task in which ripple starting phases were randomized across trials to obscure within-channel intensity cues. In experiment 3, the randomized starting phase method was used to measure SRD as a function of age (3-month-old infants, 7-month-old infants, and young adults) and ripple depth (10 and 20 dB, in a repeated-measures design). RESULTS In experiment 1, there was a significant interaction between age and ripple depth. The infant SRDs were significantly poorer than the adult SRDs at 10 and 13 dB ripple depths but adult-like at 20 dB depth. This result is consistent with immature across-channel intensity resolution. In contrast, the trajectory of SRD as a function of depth was steeper for infants than adults, suggesting that frequency resolution was better in infants than adults.
However, in experiment 2, infant performance was significantly poorer than that of adults at 20 dB depth, suggesting that variability in infants' use of within-channel intensity cues, rather than better frequency resolution, explained the results of experiment 1. In experiment 3, age effects were seen with both groups of infants showing poorer SRD than adults but, unlike experiment 1, no significant interaction between age and depth was seen. CONCLUSIONS Measurement of SRD thresholds in individual 3- to 7-month-old infants is feasible. Performance of normal-hearing infants on SRD may be limited by across-channel intensity resolution despite mature frequency resolution. These findings have significant implications for design and stimulus choice for applying SRD for testing infants with CIs. The high degree of variability in infant SRD can be somewhat reduced by obscuring within-channel cues.
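The spectral ripple paradigm used in this abstract can be sketched in code. The sketch below is not the authors' stimulus-generation procedure; the passband, sampling rate, and default ripple parameters are assumptions for illustration. It builds a broadband noise whose dB spectral envelope is a sinusoid on a log2-frequency axis, then generates a standard version and the 90°-phase-shifted version a listener must discriminate:

```python
import numpy as np

def ripple_envelope(freqs_hz, density_rpo, depth_db, phase_rad, f0=200.0):
    """Sinusoidal spectral envelope in dB on a log2-frequency axis.

    density_rpo : ripples per octave
    depth_db    : peak-to-trough depth in dB
    phase_rad   : ripple starting phase (a 90 deg shift moves the peaks
                  toward the former positions halfway between peak and trough)
    """
    octaves = np.log2(freqs_hz / f0)
    return (depth_db / 2.0) * np.sin(2 * np.pi * density_rpo * octaves + phase_rad)

def rippled_noise(fs=22050, dur=0.5, density_rpo=2.0, depth_db=13.0, phase_rad=0.0):
    """Broadband noise whose magnitude spectrum follows the ripple envelope."""
    n = int(fs * dur)
    freqs = np.fft.rfftfreq(n, 1.0 / fs)
    mag = np.zeros_like(freqs)
    band = (freqs >= 200.0) & (freqs <= 8000.0)   # passband (assumed)
    mag[band] = 10 ** (ripple_envelope(freqs[band], density_rpo,
                                       depth_db, phase_rad) / 20.0)
    rng = np.random.default_rng(0)
    phases = np.exp(1j * rng.uniform(0, 2 * np.pi, freqs.size))
    x = np.fft.irfft(mag * phases, n)             # synthesize in the spectrum domain
    return x / np.max(np.abs(x))                  # normalize peak amplitude

standard = rippled_noise(phase_rad=0.0)
shifted = rippled_noise(phase_rad=np.pi / 2)      # the 90 deg discrimination target
```

In a discrimination trial, increasing `density_rpo` packs more ripples per octave, making the phase shift harder to resolve; the threshold is the highest density still discriminated.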
Affiliation(s)
- David L Horn
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology-Head and Neck Surgery, University of Washington, Seattle, Washington, USA; Division of Otolaryngology, Seattle Children's Hospital, Seattle, Washington, USA; and Department of Speech and Hearing Sciences, University of Washington, Seattle, Washington
26
Abstract
OBJECTIVE Evidence suggests that musicians, as a group, have superior frequency resolution abilities when compared with nonmusicians. It is possible to assess auditory discrimination using either behavioral or electrophysiologic methods. The purpose of this study was to determine if the acoustic change complex (ACC) is sensitive enough to reflect the differences in spectral processing exhibited by musicians and nonmusicians. DESIGN Twenty individuals (10 musicians and 10 nonmusicians) participated in this study. Pitch and spectral ripple discrimination were assessed using both behavioral and electrophysiologic methods. Behavioral measures were obtained using a standard three-interval, forced-choice procedure. The ACC was recorded and used as an objective (i.e., nonbehavioral) measure of discrimination between two auditory signals. The same stimuli were used for both psychophysical and electrophysiologic testing. RESULTS As a group, musicians were able to detect smaller changes in pitch than nonmusicians. They also were able to detect a shift in the position of the peaks and valleys in a ripple noise stimulus at higher ripple densities than nonmusicians. ACC responses recorded from musicians were larger than those recorded from nonmusicians when the amplitude of the ACC response was normalized to the amplitude of the onset response in each stimulus pair. Visual detection thresholds derived from the evoked potential data were better for musicians than nonmusicians regardless of whether the task was discrimination of musical pitch or detection of a change in the frequency spectrum of the ripple noise stimuli. Behavioral measures of discrimination were generally more sensitive than the electrophysiologic measures; however, the two metrics were correlated. CONCLUSIONS Perhaps as a result of extensive training, musicians are better able to discriminate spectrally complex acoustic signals than nonmusicians.
Those differences are evident not only in perceptual/behavioral tests but also in electrophysiologic measures of neural response at the level of the auditory cortex. While these results are based on observations made from normal-hearing listeners, they suggest that the ACC may provide a nonbehavioral method of assessing auditory discrimination and, as a result, might prove useful in future studies that explore the efficacy of participation in a musically based auditory training program, perhaps geared toward pediatric or hearing-impaired listeners.
27
Clinard CG, Hodgson SL, Scherer ME. Neural Correlates of the Binaural Masking Level Difference in Human Frequency-Following Responses. J Assoc Res Otolaryngol 2017; 18:355-369. [PMID: 27896486 PMCID: PMC5352611 DOI: 10.1007/s10162-016-0603-7] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/10/2016] [Accepted: 11/02/2016] [Indexed: 11/30/2022] Open
Abstract
The binaural masking level difference (BMLD) is an auditory phenomenon where binaural tone-in-noise detection is improved when the phase of either signal or noise is inverted in one of the ears (SπNo or SoNπ, respectively), relative to detection when signal and noise are in identical phase at each ear (SoNo). Processing related to BMLDs and interaural time differences has been confirmed in the auditory brainstem of non-human mammals; in the human auditory brainstem, phase-locked neural responses elicited by BMLD stimuli have not been systematically examined across signal-to-noise ratios. Behavioral and physiological testing was performed in three binaural stimulus conditions: SoNo, SπNo, and SoNπ. BMLDs at 500 Hz were obtained from 14 young, normal-hearing adults (ages 21-26). Physiological BMLDs used the frequency-following response (FFR), a scalp-recorded auditory evoked potential dependent on sustained phase-locked neural activity; FFR tone-in-noise detection thresholds were used to calculate physiological BMLDs. FFR BMLDs were significantly smaller (poorer) than behavioral BMLDs, and FFR BMLDs did not reflect a physiological release from masking, on average. Raw FFR amplitude showed substantial reductions in the SπNo condition relative to SoNo and SoNπ conditions, consistent with negative effects of phase summation from left and right ear FFRs. FFR amplitude differences between stimulus conditions (e.g., SoNo amplitude minus SπNo amplitude) were significantly predictive of behavioral SπNo BMLDs; individuals with larger amplitude differences had larger (better) behavioral BMLDs and individuals with smaller amplitude differences had smaller (poorer) behavioral BMLDs. These data indicate a role for sustained phase-locked neural activity in BMLDs of humans and are the first to show predictive relationships between behavioral BMLDs and human brainstem responses.
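The three dichotic conditions in this abstract can be illustrated as signal construction. This is only a sketch; the duration, sampling rate, and SNR below are assumed for the example, not taken from the study. Inverting the 500 Hz tone in one ear (SπNo) leaves the masker diotic while the signal becomes antiphasic, which is what the binaural system exploits:

```python
import numpy as np

def bmld_conditions(fs=16000, dur=0.3, f_tone=500.0, snr_db=0.0, seed=1):
    """Return (left, right) stimulus pairs for the three BMLD conditions.

    SoNo : tone and noise identical at the two ears
    SpiNo: tone phase-inverted in one ear, noise diotic
    SoNpi: noise phase-inverted in one ear, tone diotic
    """
    t = np.arange(int(fs * dur)) / fs
    tone = 10 ** (snr_db / 20.0) * np.sin(2 * np.pi * f_tone * t)
    noise = np.random.default_rng(seed).standard_normal(t.size)
    noise /= np.sqrt(np.mean(noise ** 2))         # unit-RMS masker
    return {
        "SoNo":  (tone + noise,  tone + noise),
        "SpiNo": (tone + noise, -tone + noise),
        "SoNpi": (tone + noise,  tone - noise),
    }

conds = bmld_conditions()
left, right = conds["SpiNo"]
# In SpiNo the interaural difference cancels the noise and isolates the tone:
residual = left - right                           # equals 2 * tone
```

The behavioral BMLD is then the detection-threshold improvement (in dB) for SπNo or SoNπ relative to SoNo.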
Affiliation(s)
- Christopher G. Clinard
- Department of Communication Sciences and Disorders, James Madison University, 235 Martin Luther King Jr. Way, MSC 4304, Harrisonburg, VA 22807 USA
- Sarah L. Hodgson
- Department of Communication Sciences and Disorders, James Madison University, 235 Martin Luther King Jr. Way, MSC 4304, Harrisonburg, VA 22807 USA
- Mary Ellen Scherer
- Department of Communication Sciences and Disorders, James Madison University, 235 Martin Luther King Jr. Way, MSC 4304, Harrisonburg, VA 22807 USA
28
Abstract
OBJECTIVE Considerable unexplained variability and large individual differences exist in speech recognition outcomes for postlingually deaf adults who use cochlear implants (CIs), and a sizeable fraction of CI users can be considered "poor performers." This article summarizes our current knowledge of poor CI performance, and provides suggestions to clinicians managing these patients. METHOD Studies are reviewed pertaining to speech recognition variability in adults with hearing loss. Findings are augmented by recent studies in our laboratories examining outcomes in postlingually deaf adults with CIs. RESULTS In addition to conventional clinical predictors of CI performance (e.g., amount of residual hearing, duration of deafness), factors pertaining to both "bottom-up" auditory sensitivity to the spectro-temporal details of speech, and "top-down" linguistic knowledge and neurocognitive functions contribute to CI outcomes. CONCLUSIONS The broad array of factors that contribute to speech recognition performance in adult CI users suggests the potential both for novel diagnostic assessment batteries to explain poor performance, and also new rehabilitation strategies for patients who exhibit poor outcomes. Moreover, this broad array of factors determining outcome performance suggests the need to treat individual CI patients using a personalized rehabilitation approach.
Affiliation(s)
- Aaron C. Moberly
- Department of Otolaryngology, The Ohio State University Wexner Medical Center
- Chelsea Bates
- Department of Otolaryngology, The Ohio State University Wexner Medical Center
- Michael S. Harris
- Department of Otolaryngology, The Ohio State University Wexner Medical Center
- David B. Pisoni
- Psychological and Brain Sciences Department, Indiana University
29
Van Dun B, Kania A, Dillon H. Cortical Auditory Evoked Potentials in (Un)aided Normal-Hearing and Hearing-Impaired Adults. Semin Hear 2016; 37:9-24. [PMID: 27587919 DOI: 10.1055/s-0035-1570333] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/22/2022] Open
Abstract
Cortical auditory evoked potentials (CAEPs) are influenced by the characteristics of the stimulus, including level and hearing aid gain. Previous studies have measured CAEPs aided and unaided in individuals with normal hearing. There is a significant difference between providing amplification to a person with normal hearing and a person with hearing loss. This study investigated this difference and the effects of stimulus signal-to-noise ratio (SNR) and audibility on the CAEP amplitude in a population with hearing loss. Twelve normal-hearing participants and 12 participants with hearing loss took part in this study. Three speech sounds, /m/, /g/, and /t/, were presented in the free field. Unaided stimuli were presented at 55, 65, and 75 dB sound pressure level (SPL) and aided stimuli at 55 dB SPL with three different gains in steps of 10 dB. CAEPs were recorded and their amplitudes analyzed. Stimulus SNRs and audibility were determined. No significant effect of stimulus level or hearing aid gain was found in normal-hearing listeners. Conversely, a significant effect was found in hearing-impaired individuals. Audibility of the signal, which in some cases is determined by the signal level relative to threshold and in other cases by the SNR, is the dominant factor explaining changes in CAEP amplitude. CAEPs can potentially be used to assess the effects of hearing aid gain in hearing-impaired users.
Affiliation(s)
- Bram Van Dun
- The HEARing CRC, Sydney, Australia; National Acoustic Laboratories, Sydney, Australia
- Harvey Dillon
- The HEARing CRC, Sydney, Australia; National Acoustic Laboratories, Sydney, Australia
30
Abstract
OBJECTIVES Nucleus Hybrid Cochlear Implant (CI) users hear low-frequency sounds via acoustic stimulation and high-frequency sounds via electrical stimulation. This within-subject study compares three different methods of coordinating programming of the acoustic and electrical components of the Hybrid device. Speech perception and cortical auditory evoked potentials (CAEP) were used to assess differences in outcome. The goals of this study were to determine whether (1) the evoked potential measures could predict which programming strategy resulted in better outcome on the speech perception task or was preferred by the listener, and (2) CAEPs could be used to predict which subjects benefitted most from having access to the electrical signal provided by the Hybrid implant. DESIGN CAEPs were recorded from 10 Nucleus Hybrid CI users. Study participants were tested using three different experimental processor programs (MAPs) that differed in terms of how much overlap there was between the range of frequencies processed by the acoustic component of the Hybrid device and range of frequencies processed by the electrical component. The study design included allowing participants to acclimatize for a period of up to 4 weeks with each experimental program prior to speech perception and evoked potential testing. Performance using the experimental MAPs was assessed using both a closed-set consonant recognition task and an adaptive test that measured the signal-to-noise ratio that resulted in 50% correct identification of a set of 12 spondees presented in background noise. Long-duration, synthetic vowels were used to record both the cortical P1-N1-P2 "onset" response and the auditory "change" response (also known as the auditory change complex [ACC]). Correlations between the evoked potential measures and performance on the speech perception tasks are reported. RESULTS Differences in performance using the three programming strategies were not large. 
Peak-to-peak amplitude of the ACC was not found to be sensitive enough to accurately predict the programming strategy that resulted in the best performance on either measure of speech perception. All 10 Hybrid CI users had residual low-frequency acoustic hearing. For all 10 subjects, allowing them to use both the acoustic and electrical signals provided by the implant improved performance on the consonant recognition task. For most subjects, it also resulted in slightly larger cortical change responses. However, the impact that listening mode had on the cortical change responses was small, and again, the correlation between the evoked potential and speech perception results was not significant. CONCLUSIONS CAEPs can be successfully measured from Hybrid CI users. The responses that are recorded are similar to those recorded from normal-hearing listeners. The goal of this study was to see if CAEPs might play a role either in identifying the experimental program that resulted in best performance on a consonant recognition task or in documenting benefit from the use of the electrical signal provided by the Hybrid CI. At least for the stimuli and specific methods used in this study, no such predictive relationship was found.
31
Abstract
OBJECTIVES Nonlinear frequency compression is a signal processing technique used to increase the audibility of high-frequency speech sounds for hearing aid users with sloping, high-frequency hearing loss. However, excessive compression ratios may reduce spectral contrast between sounds and negatively impact speech perception. This is of particular concern in infants and young children who may not be able to provide feedback about frequency compression settings. This study explores the use of an objective cortical auditory evoked potential that is sensitive to changes in spectral contrast, the acoustic change complex (ACC), in the verification of frequency compression parameters. DESIGN ACC responses were recorded from adult listeners to a spectral ripple contrast stimulus that was processed using a range of frequency compression ratios (1:1, 1.5:1, 2:1, 3:1, and 4:1). Vowel identification, consonant identification, speech recognition in noise (QuickSIN), and behavioral ripple discrimination thresholds were also measured under identical frequency compression conditions. In Experiment 1, these tasks were completed in 10 adults with normal hearing. In Experiment 2, these same tasks were repeated in 10 adults with sloping, high-frequency hearing loss. RESULTS Repeated measures analysis of variance was completed for each task and each group with frequency compression ratio as the within-subjects factor. Increasing the compression ratio did not affect vowel identification for the normal hearing group but did cause a significant decrease in vowel identification for the hearing-impaired listeners. Increases in compression ratio were associated with significant decrements in ACC amplitudes, consonant identification scores, ripple discrimination thresholds, and speech perception in noise scores for both groups of listeners. CONCLUSIONS The ACC response, like speech and nonspeech perceptual measures, is sensitive to frequency compression ratio. 
Additional study is needed to establish optimal stimulus and recording parameters for the clinical application of this measure in the verification of hearing aid frequency compression settings.
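Nonlinear frequency compression, as studied above, remaps input frequencies above a cutoff into a narrower output range. The exact algorithms are proprietary to hearing-aid manufacturers; the following is only a schematic log-frequency-domain mapping to illustrate what compression ratios such as 2:1 or 4:1 mean, with the cutoff and ratio values chosen purely for the example:

```python
import numpy as np

def compress_frequency(f_in_hz, cutoff_hz=2000.0, ratio=2.0):
    """Schematic nonlinear frequency compression (log-frequency domain).

    Frequencies at or below the cutoff pass unchanged; above the cutoff,
    the distance from the cutoff in octaves is divided by the ratio, so
    larger ratios squeeze the high frequencies into a narrower band
    (reducing spectral contrast between sounds).
    """
    f_in_hz = np.asarray(f_in_hz, dtype=float)
    octaves_above = np.log2(np.maximum(f_in_hz, cutoff_hz) / cutoff_hz)
    return np.where(f_in_hz <= cutoff_hz,
                    f_in_hz,
                    cutoff_hz * 2.0 ** (octaves_above / ratio))

# A 4 kHz component sits 1 octave above a 2 kHz cutoff; under 2:1
# compression it maps to 0.5 octave above the cutoff (about 2828 Hz).
out = compress_frequency([1000.0, 4000.0], cutoff_hz=2000.0, ratio=2.0)
```

The squeeze illustrates why excessive ratios can reduce the peak-to-trough contrast that ripple-based measures such as the ACC are sensitive to.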
32
Kim JR. Acoustic Change Complex: Clinical Implications. J Audiol Otol 2015; 19:120-4. [PMID: 26771009 PMCID: PMC4704548 DOI: 10.7874/jao.2015.19.3.120] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/12/2015] [Revised: 11/16/2015] [Accepted: 11/18/2015] [Indexed: 11/22/2022] Open
Abstract
The acoustic change complex (ACC) is a cortical auditory evoked potential elicited in response to a change in an ongoing sound. The characteristics and potential clinical implications of the ACC are reviewed in this article. The P1-N1-P2 recorded from the auditory cortex following presentation of an acoustic stimulus is believed to reflect the neural encoding of a sound signal, but this provides no information regarding sound discrimination. However, the neural processing underlying behavioral discrimination capacity can be measured by modifying the traditional methodology for recording the P1-N1-P2. When obtained in response to an acoustic change within an ongoing sound, the resulting waveform is referred to as the ACC. When elicited, the ACC indicates that the brain has detected changes within a sound and the patient has the neural capacity to discriminate the sounds. In fact, results of several studies have shown that the ACC amplitude increases with increasing magnitude of acoustic changes in intensity, spectrum, and gap duration. In addition, the ACC can be reliably recorded with good test-retest reliability not only from listeners with normal hearing but also from individuals with hearing loss, hearing aids, and cochlear implants. The ACC can be obtained even in the absence of attention, and requires relatively few stimulus presentations to record a response with a good signal-to-noise ratio. Most importantly, the ACC shows reasonable agreement with behavioral measures. Therefore, these findings suggest that the ACC might represent a promising tool for the objective clinical evaluation of auditory discrimination and/or speech perception capacity.
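The stimulus logic described in this review, a change embedded within an ongoing sound, can be sketched minimally. The parameters below (a phase-continuous 1000 to 1100 Hz frequency change) are illustrative assumptions, not a stimulus from any of the cited studies:

```python
import numpy as np

def acc_stimulus(fs=16000, seg_dur=0.4, f1=1000.0, f2=1100.0):
    """Ongoing tone with a mid-stimulus frequency change (phase-continuous).

    The stimulus onset evokes the P1-N1-P2; the f1 -> f2 change at
    t = seg_dur is the event that evokes the acoustic change complex (ACC).
    """
    n = int(fs * seg_dur)
    t = np.arange(n) / fs
    seg1 = np.sin(2 * np.pi * f1 * t)
    # Carry the phase across the boundary so only frequency changes,
    # avoiding a transient click at the change point.
    phase0 = 2 * np.pi * f1 * n / fs
    seg2 = np.sin(phase0 + 2 * np.pi * f2 * t)
    return np.concatenate([seg1, seg2])

stim = acc_stimulus()
change_sample = int(16000 * 0.4)   # index where the acoustic change occurs
```

Averaging EEG epochs time-locked to `change_sample` (rather than to stimulus onset) is what isolates the ACC from the onset response.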
Affiliation(s)
- Jae-Ryong Kim
- Department of Otolaryngology-Head and Neck Surgery, Busan Paik Hospital, Inje University College of Medicine, Busan, Korea
33
Jeon EK, Turner CW, Karsten SA, Henry BA, Gantz BJ. Cochlear implant users' spectral ripple resolution. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2015; 138:2350-8. [PMID: 26520316 PMCID: PMC4617737 DOI: 10.1121/1.4932020] [Citation(s) in RCA: 26] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/19/2014] [Revised: 09/10/2015] [Accepted: 09/15/2015] [Indexed: 05/26/2023]
Abstract
This study revisits the issue of the spectral ripple resolution abilities of cochlear implant (CI) users. The spectral ripple resolution of recently implanted CI recipients (implanted during the last 10 years) was compared to that of CI recipients implanted 15 to 20 years ago, as well as to that of normal-hearing and hearing-impaired listeners from previously published data from Henry, Turner, and Behrens [J. Acoust. Soc. Am. 118, 1111-1121 (2005)]. The more recently implanted CI recipients showed significantly better spectral ripple resolution. There was no significant difference in spectral ripple resolution for these recently implanted subjects compared to hearing-impaired (acoustic) listeners. The more recently implanted CI users also had significantly better pre-operative speech perception than previously reported CI users. These better pre-operative speech perception scores in CI users from the current study may be related to better performance on the spectral ripple discrimination task; however, other possible factors such as improvements in internal and external devices cannot be excluded.
Affiliation(s)
- Eun Kyung Jeon
- Department of Communication Sciences and Disorders, University of Iowa, 227 SHC, 200 Hawkins Drive, Iowa City, Iowa 52242, USA
- Christopher W Turner
- Department of Communication Sciences and Disorders, University of Iowa, 121B SHC, 200 Hawkins Drive, Iowa City, Iowa 52242, USA
- Sue A Karsten
- Department of Otolaryngology - Head and Neck Surgery, University of Iowa, 21016 PFP, 200 Hawkins Drive, Iowa City, Iowa 52242, USA
- Belinda A Henry
- School of Health and Rehabilitation Sciences, The University of Queensland, Brisbane St Lucia, Queensland 4072, Australia
- Bruce J Gantz
- Department of Otolaryngology - Head and Neck Surgery, University of Iowa, 21158 PFP, 200 Hawkins Drive, Iowa City, Iowa 52242, USA
34
Abstract
OBJECTIVE To determine if unaided, non-linguistic psychoacoustic measures can be effective in evaluating cochlear implant (CI) candidacy. STUDY DESIGN Prospective split-cohort study including predictor development subgroup and independent predictor validation subgroup. SETTING Tertiary referral center. SUBJECTS Fifteen subjects (28 ears) with hearing loss were recruited from patients visiting the University of Washington Medical Center for CI evaluation. METHODS Spectral-ripple discrimination (using a 13-dB modulation depth) and temporal modulation detection using 10- and 100-Hz modulation frequencies were assessed with stimuli presented through insert earphones. Correlations between performance for psychoacoustic tasks and speech perception tasks were assessed. Receiver operating characteristic curve analysis was performed to estimate the optimal psychoacoustic score for CI candidacy evaluation in the development subgroup and then tested in an independent sample. RESULTS Strong correlations were observed between spectral-ripple thresholds and both aided sentence recognition and unaided word recognition. Weaker relationships were found between temporal modulation detection and speech tests. Receiver operating characteristic curve analysis demonstrated that the unaided spectral-ripple discrimination shows a good sensitivity, specificity, positive predictive value, and negative predictive value compared to the current gold standard, aided sentence recognition. CONCLUSION Results demonstrated that the unaided spectral-ripple discrimination test could be a promising tool for evaluating CI candidacy.
35
Davies-Venn E, Nelson P, Souza P. Comparing auditory filter bandwidths, spectral ripple modulation detection, spectral ripple discrimination, and speech recognition: Normal and impaired hearing. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2015; 138:492-503. [PMID: 26233047 PMCID: PMC4514721 DOI: 10.1121/1.4922700] [Citation(s) in RCA: 45] [Impact Index Per Article: 5.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Some listeners with hearing loss show poor speech recognition scores in spite of using amplification that optimizes audibility. Beyond audibility, studies have suggested that suprathreshold abilities such as spectral and temporal processing may explain differences in amplified speech recognition scores. A variety of different methods have been used to measure spectral processing. However, the relationship between spectral processing and speech recognition is still inconclusive. This study evaluated the relationship between spectral processing and speech recognition in listeners with normal hearing and with hearing loss. Narrowband spectral resolution was assessed using auditory filter bandwidths estimated from simultaneous notched-noise masking. Broadband spectral processing was measured using the spectral ripple discrimination (SRD) task and the spectral ripple modulation depth detection (SMD) task. Three different measures were used to assess unamplified and amplified speech recognition in quiet and noise. Stepwise multiple linear regression revealed that SMD at 2.0 cycles per octave (cpo) significantly predicted speech scores for amplified and unamplified speech in quiet and noise. Commonality analyses revealed that SMD at 2.0 cpo combined with SRD and equivalent rectangular bandwidth measures to explain most of the variance captured by the regression model. Results suggest that SMD and SRD may be promising clinical tools for diagnostic evaluation and predicting amplification outcomes.
Affiliation(s)
- Evelyn Davies-Venn
- Department of Speech-Language-Hearing Sciences, University of Minnesota, 164 Pillsbury Drive Southeast, Minneapolis, Minnesota 55455, USA
- Peggy Nelson
- Department of Speech-Language-Hearing Sciences, University of Minnesota, 164 Pillsbury Drive Southeast, Minneapolis, Minnesota 55455, USA
- Pamela Souza
- Department of Communication Sciences and Disorders and Knowles Hearing Center, Northwestern University, 2240 Campus Drive, Evanston, Illinois 60208, USA
36
Scheperle RA, Abbas PJ. Relationships Among Peripheral and Central Electrophysiological Measures of Spatial and Spectral Selectivity and Speech Perception in Cochlear Implant Users. Ear Hear 2015; 36:441-53. [PMID: 25658746 PMCID: PMC4478147 DOI: 10.1097/aud.0000000000000144] [Citation(s) in RCA: 28] [Impact Index Per Article: 3.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVES The ability to perceive speech is related to the listener's ability to differentiate among frequencies (i.e., spectral resolution). Cochlear implant (CI) users exhibit variable speech-perception and spectral-resolution abilities, which can be attributed in part to the extent of electrode interactions at the periphery (i.e., spatial selectivity). However, electrophysiological measures of peripheral spatial selectivity have not been found to correlate with speech perception. The purpose of this study was to evaluate auditory processing at the periphery and cortex using both simple and spectrally complex stimuli to better understand the stages of neural processing underlying speech perception. The hypotheses were that (1) by more completely characterizing peripheral excitation patterns than in previous studies, significant correlations with measures of spectral selectivity and speech perception would be observed, (2) adding information about processing at a level central to the auditory nerve would account for additional variability in speech perception, and (3) responses elicited with spectrally complex stimuli would be more strongly correlated with speech perception than responses elicited with spectrally simple stimuli. DESIGN Eleven adult CI users participated. Three experimental processor programs (MAPs) were created to vary the likelihood of electrode interactions within each participant. For each MAP, a subset of 7 of 22 intracochlear electrodes was activated: adjacent (MAP 1), every other (MAP 2), or every third (MAP 3). Peripheral spatial selectivity was assessed using the electrically evoked compound action potential (ECAP) to obtain channel-interaction functions for all activated electrodes (13 functions total). Central processing was assessed by eliciting the auditory change complex with both spatial (electrode pairs) and spectral (rippled noise) stimulus changes. 
Speech-perception measures included vowel discrimination and the Bamford-Kowal-Bench Speech-in-Noise test. Spatial and spectral selectivity and speech perception were expected to be poorest with MAP 1 (closest electrode spacing) and best with MAP 3 (widest electrode spacing). Relationships among the electrophysiological and speech-perception measures were evaluated using mixed-model and simple linear regression analyses. RESULTS All electrophysiological measures were significantly correlated with each other and with speech scores for the mixed-model analysis, which takes into account multiple measures per person (i.e., experimental MAPs). The ECAP measures were the best predictor. In the simple linear regression analysis on MAP 3 data, only the cortical measures were significantly correlated with speech scores; spectral auditory change complex amplitude was the strongest predictor. CONCLUSIONS The results suggest that both peripheral and central electrophysiological measures of spatial and spectral selectivity provide valuable information about speech perception. Clinically, it is often desirable to optimize performance for individual CI users. These results suggest that ECAP measures may be most useful for within-subject applications when multiple measures are performed to make decisions about processor options. They also suggest that if the goal is to compare performance across individuals based on a single measure, then processing central to the auditory nerve (specifically, cortical measures of discriminability) should be considered.
Affiliation(s)
- Rachel A. Scheperle
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA
- Paul J. Abbas
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA
- Otolaryngology-Head and Neck Surgery, University of Iowa, Iowa City, IA, USA
|
37
|
Abstract
OBJECTIVES The primary goal of this study was to describe relationships between peripheral and central electrophysiologic measures of auditory processing within individual cochlear implant (CI) users. The distinctiveness of neural excitation patterns resulting from the stimulation of different electrodes, referred to as 'spatial selectivity,' was evaluated. The hypothesis was that if central representations of spatial interactions differed across participants semi-independently of peripheral input, then the within-subject relationships between peripheral and central electrophysiologic measures of spatial selectivity would reflect those differences. Cross-subject differences attributable to processing central to the auditory nerve may help explain why peripheral electrophysiologic measures of spatial selectivity have not been found to correlate with speech perception. DESIGN Eleven adults participated in this and a companion study. All were peri- or post-lingually deafened with more than 1 year of CI experience. Peripheral spatial selectivity was evaluated at 13 cochlear locations using 13 electrodes as probes to elicit electrically evoked compound action potentials (ECAPs). Masker electrodes were varied across the array for each probe electrode to derive channel-interaction functions. The same 13 electrodes were used to evaluate spatial selectivity represented at a cortical level. Electrode pairs were stimulated sequentially to elicit the auditory change complex (ACC), an obligatory cortical potential suggestive of discrimination. For each participant, the relationship between ECAP channel-interaction functions (quantified as channel-separation indices) and ACC N1-P2 amplitudes was modeled using the saturating exponential function y = a(1 - e^(-bx)). Both a and b coefficients were varied using a least-squares approach to optimize the fits.
RESULTS Electrophysiologic measures of spatial selectivity assessed at peripheral (ECAP) and central (ACC) levels varied across participants. The results indicate that differences in ACC amplitudes observed across participants for the same stimulus conditions were not solely the result of differences in peripheral excitation patterns. This finding supports the view that processing at multiple points along the auditory neural pathway from the periphery to the cortex may vary across individuals with different etiologies and auditory experiences. CONCLUSIONS The distinctiveness of neural excitation resulting from electrical stimulation varies across CI recipients, and this variability was observed in both peripheral and cortical electrophysiologic measures. The ACC amplitude differences observed across participants were partially independent from differences in peripheral neural spatial selectivity. These findings are clinically relevant because they imply that there may be limits (1) to the predictive ability of peripheral measures and (2) in the extent to which improving the selectivity of electrical stimulation via programming options (e.g., current focusing/steering) will result in more specific central neural excitation patterns or will improve speech perception.
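The curve fitting described in this abstract's DESIGN (ACC amplitude as a saturating exponential function of the ECAP channel-separation index, y = a(1 - e^(-bx))) can be sketched as a least-squares optimization. The grid search, variable names, value ranges, and synthetic data below are illustrative assumptions, not the study's actual fitting procedure or data:

```python
import math

def saturating_exp(x, a, b):
    """Saturating exponential: y = a * (1 - exp(-b * x))."""
    return a * (1.0 - math.exp(-b * x))

def fit_saturating_exp(xs, ys, a_grid, b_grid):
    """Crude grid-search least-squares fit of a and b (a stand-in for a
    proper iterative optimizer such as scipy.optimize.curve_fit)."""
    best = None
    for a in a_grid:
        for b in b_grid:
            sse = sum((saturating_exp(x, a, b) - y) ** 2
                      for x, y in zip(xs, ys))
            if best is None or sse < best[0]:
                best = (sse, a, b)
    return best[1], best[2]

# Synthetic example: hypothetical ACC amplitudes (uV) vs.
# ECAP channel-separation indices; noise-free for illustration.
xs = [0.1, 0.3, 0.6, 1.0, 1.5]
true_a, true_b = 4.0, 2.0
ys = [saturating_exp(x, true_a, true_b) for x in xs]

a_grid = [i * 0.5 for i in range(1, 17)]   # 0.5 .. 8.0
b_grid = [i * 0.5 for i in range(1, 17)]
a_hat, b_hat = fit_saturating_exp(xs, ys, a_grid, b_grid)
print(a_hat, b_hat)
```

On noise-free data the grid search recovers the generating coefficients exactly; with measured amplitudes, a continuous optimizer and goodness-of-fit statistics would replace the grid.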
Affiliation(s)
- Rachel A. Scheperle
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA
- Paul J. Abbas
- Department of Communication Sciences and Disorders, University of Iowa, Iowa City, IA, USA
- Otolaryngology-Head and Neck Surgery, University of Iowa, Iowa City, IA, USA

38
Lopez-Valdes A, Mc Laughlin M, Viani L, Walshe P, Smith J, Zeng FG, Reilly RB. Auditory mismatch negativity in cochlear implant users: a window to spectral discrimination. Annu Int Conf IEEE Eng Med Biol Soc 2015; 2013:3555-8. [PMID: 24110497 DOI: 10.1109/embc.2013.6610310] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 11/10/2022]
Abstract
A cochlear implant (CI) can partially restore hearing in patients with severe to profound sensorineural hearing loss. However, the large outcome variability in CI users prompts the need for more objective measures of speech perception performance. Electrophysiological metrics of CI performance may be an important tool for audiologists in the assessment of hearing rehabilitation. Utilizing electroencephalography (EEG), it may be possible to evaluate speech perception correlates such as spectral discrimination. The mismatch negativity (MMN) of 10 CI subjects was recorded for stimuli containing different spectral densities. The neural spectral discrimination threshold, estimated from the MMN responses, showed a significant correlation with the behavioral spectral discrimination threshold measured in each subject. Results suggest that the MMN can potentially be used to obtain an objective estimate of spectral discrimination abilities in CI users.
39
Won JH, Jones GL, Moon IJ, Rubinstein JT. Spectral and temporal analysis of simulated dead regions in cochlear implants. J Assoc Res Otolaryngol 2015; 16:285-307. [PMID: 25740402 DOI: 10.1007/s10162-014-0502-8] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/05/2014] [Accepted: 12/23/2014] [Indexed: 11/29/2022] Open
Abstract
A cochlear implant (CI) electrode in a "cochlear dead region" will excite neighboring neural populations. In previous research that simulated such dead regions, stimulus information in the simulated dead region was either added to the immediately adjacent frequency regions or dropped entirely. There was little difference in speech perception ability between the two conditions. This may imply that there is little benefit to ensuring that stimulus information on an electrode in a suspected cochlear dead region is transmitted. Alternatively, performance might be enhanced by a broader frequency redistribution rather than by adding stimuli from the dead region to the edges. In the current experiments, cochlear dead regions were introduced by excluding selected CI electrodes or vocoder noise-bands. Participants were assessed for speech understanding as well as spectral and temporal sensitivities as a function of the size of simulated dead regions. In one set of tests, the normal input frequency range of the sound processor was distributed among the active electrodes in bands with approximately logarithmic spacing ("redistributed" maps); in the remaining tests, information in simulated dead regions was dropped ("dropped" maps). Word recognition and Schroeder-phase discrimination performance, which require both spectral and temporal sensitivities, decreased as the size of simulated dead regions increased, but the redistributed and dropped remappings showed similar performance in these two tasks. Psychoacoustic experiments showed that the near match in word scores may reflect a tradeoff between spectral and temporal sensitivity: spectral-ripple discrimination was substantially degraded in the redistributed condition relative to the dropped condition, while performance in a temporal modulation detection task degraded in the dropped condition but remained constant in the redistributed condition.
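The "redistributed" maps described above spread the processor's full input range over the remaining active electrodes in approximately logarithmically spaced bands. A minimal sketch of that band-edge computation follows; the 188-7938 Hz analysis range and 16-electrode count are hypothetical placeholders, not parameters from this study:

```python
def log_band_edges(f_low, f_high, n_bands):
    """Split [f_low, f_high] into n_bands of equal width on a log
    scale, as when one input range is redistributed across however
    many electrodes remain active."""
    ratio = (f_high / f_low) ** (1.0 / n_bands)  # constant frequency ratio per band
    return [f_low * ratio ** i for i in range(n_bands + 1)]

# Hypothetical 188-7938 Hz analysis range shared by 16 active electrodes
edges = log_band_edges(188.0, 7938.0, 16)
print([round(e) for e in edges])
```

Dropping electrodes while keeping the same overall range widens every band by the same ratio, which is one way to think about the spectral-resolution cost of the redistributed condition.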
Affiliation(s)
- Jong Ho Won
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology-Head and Neck Surgery, University of Washington, Seattle, WA, 98195, USA

40
Abstract
OBJECTIVES Nonspeech psychophysical tests of spectral resolution, such as the spectral-ripple discrimination task, have been shown to correlate with speech-recognition performance in cochlear implant (CI) users. However, these tests are best suited for use in the research laboratory setting and are impractical for clinical use. A test of spectral resolution that is quicker and could more easily be implemented in the clinical setting has been developed. The objectives of this study were (1) To determine whether this new clinical ripple test would yield individual results equivalent to the longer, adaptive version of the ripple-discrimination test; (2) To evaluate test-retest reliability for the clinical ripple measure; and (3) To examine the relationship between clinical ripple performance and monosyllabic word recognition in quiet for a group of CI listeners. DESIGN Twenty-eight CI recipients participated in the study. Each subject was tested on both the adaptive and the clinical versions of spectral ripple discrimination, as well as consonant-nucleus-consonant word recognition in quiet. The adaptive version of spectral ripple used a two-up, one-down procedure for determining spectral ripple discrimination threshold. The clinical ripple test used a method of constant stimuli, with trials for each of 12 fixed ripple densities occurring six times in random order. Results from the clinical ripple test (proportion correct) were then compared with ripple-discrimination thresholds (in ripples per octave) from the adaptive test. RESULTS The clinical ripple test showed strong concurrent validity, evidenced by a good correlation between clinical ripple and adaptive ripple results (r = 0.79), as well as a correlation with word recognition (r = 0.7). Excellent test-retest reliability was also demonstrated with a high test-retest correlation (r = 0.9). 
CONCLUSIONS The clinical ripple test is a reliable nonlinguistic measure of spectral resolution, optimized for use with CI users in a clinical setting. The test might be useful as a diagnostic tool or as a possible surrogate outcome measure for evaluating treatment effects in hearing.
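The two procedures contrasted above schedule trials differently: the adaptive test tracks a threshold with a two-up, one-down rule, while the clinical test uses fixed ripple densities. The toy simulation below sketches only the adaptive track; the listener model, step size, bounds, and reversal rule are all illustrative assumptions rather than the published protocol:

```python
import random

def two_up_one_down(p_correct, start=1.0, step=0.5,
                    floor=0.125, ceiling=20.0, n_reversals=8, seed=1):
    """Sketch of a two-up/one-down adaptive track on ripple density
    (ripples per octave): density increases (harder) after two
    consecutive correct responses and decreases after one error.
    `p_correct(density)` is a fabricated listener model."""
    rng = random.Random(seed)
    density, streak, last_dir = start, 0, 0
    reversals = []
    while len(reversals) < n_reversals:
        correct = rng.random() < p_correct(density)
        if correct:
            streak += 1
            if streak < 2:
                continue            # need two in a row before stepping up
            streak, direction = 0, +1
            density = min(ceiling, density + step)
        else:
            streak, direction = 0, -1
            density = max(floor, density - step)
        if last_dir and direction != last_dir:
            reversals.append(density)   # track direction changes
        last_dir = direction
    # Threshold estimate: mean of the last few reversal densities
    tail = reversals[-6:]
    return sum(tail) / len(tail)

# Simulated listener who discriminates well below ~3 ripples/octave
threshold = two_up_one_down(lambda d: 0.95 if d < 3.0 else 0.3)
print(threshold)
```

The clinical version would instead loop over the 12 fixed densities, present each six times in random order, and report proportion correct, trading a threshold estimate for speed and simplicity.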
41
Won JH, Humphrey EL, Yeager KR, Martinez AA, Robinson CH, Mills KE, Johnstone PM, Moon IJ, Woo J. Relationship among the physiologic channel interactions, spectral-ripple discrimination, and vowel identification in cochlear implant users. J Acoust Soc Am 2014; 136:2714-25. [PMID: 25373971 DOI: 10.1121/1.4895702] [Citation(s) in RCA: 18] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/26/2023]
Abstract
The hypothesis of this study was that broader patterns of physiological channel interactions in the local region of the cochlea are associated with poorer spectral resolution in the same region. Electrically evoked compound action potentials (ECAPs) were measured for three to six probe electrodes per subject to examine the channel interactions in different regions across the electrode array. To evaluate spectral resolution at a confined location within the cochlea, spectral-ripple discrimination (SRD) was measured using narrowband ripple stimuli with the bandwidth spanning five electrodes: two electrodes apical and two electrodes basal to the ECAP probe electrode. The relationship between the physiological channel interactions, spectral resolution in the local cochlear region, and vowel identification was evaluated. Results showed that (1) there was within- and across-subject variability in the widths of ECAP channel interaction functions and in narrowband SRD performance, (2) significant correlations were found between the widths of the ECAP functions and narrowband SRD thresholds, and between mean bandwidths of ECAP functions averaged across multiple probe electrodes and broadband SRD performance across subjects, and (3) the global spectral resolution reflecting the entire electrode array, not the local region, predicted vowel identification.
Affiliation(s)
- Jong Ho Won
- University of Tennessee Health Science Center, Knoxville, Tennessee 37996
- Kelly R Yeager
- University of Tennessee Health Science Center, Knoxville, Tennessee 37996
- Alexis A Martinez
- University of Tennessee Health Science Center, Knoxville, Tennessee 37996
- Camryn H Robinson
- University of Tennessee Health Science Center, Knoxville, Tennessee 37996
- Kristen E Mills
- University of Tennessee Health Science Center, Knoxville, Tennessee 37996
- Patti M Johnstone
- University of Tennessee Health Science Center, Knoxville, Tennessee 37996
- Il Joon Moon
- Department of Otorhinolaryngology-Head and Neck Surgery, Samsung Medical Center, School of Medicine, Sungkyunkwan University, Seoul, 135-710, South Korea
- Jihwan Woo
- Department of Biomedical Engineering, University of Ulsan, Ulsan, 680-749, South Korea

42
Lopez Valdes A, Mc Laughlin M, Viani L, Walshe P, Smith J, Zeng FG, Reilly RB. Objective assessment of spectral ripple discrimination in cochlear implant listeners using cortical evoked responses to an oddball paradigm. PLoS One 2014; 9:e90044. [PMID: 24599314 PMCID: PMC3943794 DOI: 10.1371/journal.pone.0090044] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/23/2013] [Accepted: 01/28/2014] [Indexed: 11/19/2022] Open
Abstract
Cochlear implants (CIs) can partially restore functional hearing in deaf individuals. However, multiple factors affect CI listeners' speech perception, resulting in large performance differences. Non-speech-based tests, such as spectral ripple discrimination, measure acoustic processing capabilities that are highly correlated with speech perception. Currently, spectral ripple discrimination is measured using standard psychoacoustic methods, which require attentive listening and an active response that can be difficult or even impossible for special patient populations. Here, a completely objective cortical evoked potential-based method is developed and validated to assess spectral ripple discrimination in CI listeners. In 19 CI listeners, using an oddball paradigm, cortical evoked potential responses to standard and inverted spectrally rippled stimuli were measured. In the same subjects, psychoacoustic spectral ripple discrimination thresholds were also measured. A neural discrimination threshold was determined by systematically increasing the number of ripples per octave and determining the point at which there was no longer a significant difference between the evoked potential responses to the standard and inverted stimuli. A correlation was found between the neural and the psychoacoustic discrimination thresholds (R2 = 0.60, p<0.01). This method can objectively assess CI spectral resolution performance, providing a potential tool for the evaluation and follow-up of CI listeners who have difficulty performing psychoacoustic tests, such as pediatric or new users.
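The stopping rule described above (raise ripple density until the responses to standard and inverted stimuli no longer differ reliably) can be sketched with a paired t statistic. The response amplitudes, densities, and critical value below are invented for illustration; the study's statistics were of course computed on measured evoked potentials:

```python
import math

def paired_t(xs, ys):
    """Paired t statistic for matched samples (e.g., per-trial evoked
    response amplitudes to standard vs. inverted ripple stimuli)."""
    diffs = [x - y for x, y in zip(xs, ys)]
    n = len(diffs)
    mean = sum(diffs) / n
    var = sum((d - mean) ** 2 for d in diffs) / (n - 1)
    return mean / math.sqrt(var / n)

def neural_threshold(responses, t_crit=2.09):
    """Step through increasing ripple densities; return the last density
    at which |t| exceeds the critical value. `responses` maps
    density -> (standard_amplitudes, inverted_amplitudes)."""
    threshold = None
    for density in sorted(responses):
        std, inv = responses[density]
        if abs(paired_t(std, inv)) > t_crit:
            threshold = density      # responses still discriminable
        else:
            break                    # no longer significantly different
    return threshold

# Fabricated amplitudes (uV): clearly separable at 1 and 2 ripples/octave,
# indistinguishable at 4 ripples/octave.
responses = {
    1.0: ([5.1, 4.9, 5.0, 5.2, 4.8], [2.0, 2.1, 1.9, 2.2, 2.0]),
    2.0: ([4.0, 4.2, 3.9, 4.1, 4.0], [2.5, 2.6, 2.4, 2.5, 2.6]),
    4.0: ([3.0, 3.1, 2.9, 3.2, 3.0], [3.0, 2.9, 3.1, 3.0, 3.1]),
}
print(neural_threshold(responses))
```

In practice the comparison would use many more trials per density and an appropriate correction for multiple comparisons, but the search structure is the same.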
Affiliation(s)
- Myles Mc Laughlin
- Trinity Centre for Bioengineering, Trinity College, Dublin, Ireland
- Hearing and Speech Laboratory, University of California Irvine, Irvine, California, United States of America
- Laura Viani
- National Cochlear Implant Programme, Beaumont Hospital, Dublin, Ireland
- Peter Walshe
- National Cochlear Implant Programme, Beaumont Hospital, Dublin, Ireland
- Jaclyn Smith
- National Cochlear Implant Programme, Beaumont Hospital, Dublin, Ireland
- Fan-Gang Zeng
- Hearing and Speech Laboratory, University of California Irvine, Irvine, California, United States of America

43
Gifford RH, Hedley-Williams A, Spahr AJ. Clinical assessment of spectral modulation detection for adult cochlear implant recipients: a non-language based measure of performance outcomes. Int J Audiol 2014; 53:159-64. [PMID: 24456178 PMCID: PMC4067975 DOI: 10.3109/14992027.2013.851800] [Citation(s) in RCA: 47] [Impact Index Per Article: 4.7] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/13/2022]
Abstract
OBJECTIVE Spectral modulation detection (SMD) provides a psychoacoustic estimate of spectral resolution. The SMD threshold for an implanted ear is highly correlated with speech understanding and is thus a non-linguistic, psychoacoustic index of speech understanding. This measure, however, is time and equipment intensive and thus not practical for clinical use. The purpose of the current study was therefore to investigate the efficacy of a quick SMD task, with three study aims: (1) to investigate the correlation between the long psychoacoustic and quick SMD tasks, (2) to determine the test-retest variability of the quick SMD task, and (3) to evaluate the relationship between the quick SMD task and speech understanding. DESIGN This study used a within-subjects, repeated-measures design. STUDY SAMPLE Seventy-six adult cochlear implant recipients participated. RESULTS The results were as follows: (1) there was a significant correlation between the long psychoacoustic and quick SMD tasks, (2) test-retest measures of the quick SMD task were significantly correlated, and (3) there was a significant positive correlation between the quick SMD task and monosyllabic word recognition. CONCLUSIONS The results of this study represent the direct clinical translation of a research-proven task of SMD into a quick, clinically feasible format.
Affiliation(s)
- René H. Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Anthony J. Spahr
- Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, USA
- Advanced Bionics, Valencia, CA, USA

44
Abstract
OBJECTIVES Patients with a cochlear implant (CI) in one ear and a hearing aid in the other ear commonly achieve the highest speech-understanding scores when they have access to both electrically and acoustically stimulated information. At issue in this study was whether a measure of auditory function in the hearing aided ear would predict the benefit to speech understanding when the information from the aided ear was added to the information from the CI. DESIGN The subjects were 22 bimodal listeners with a CI in one ear and low-frequency acoustic hearing in the nonimplanted ear. The subjects were divided into two groups: one with mild-to-moderate low-frequency loss and one with severe-to-profound loss. Measures of auditory function included (1) audiometric thresholds at 750 Hz or lower, (2) speech-understanding scores (words in quiet and sentences in noise), and (3) spectral-modulation detection (SMD) thresholds. In the SMD task, one stimulus was a flat spectrum noise and the other was a noise with sinusoidal modulations at 1.0 peak/octave. RESULTS Significant correlations were found among all three measures of auditory function and the benefit to speech understanding when the acoustic and electric stimulation were combined. Benefit was significantly correlated with audiometric thresholds (r = -0.814), acoustic speech understanding (r = 0.635), and SMD thresholds (r = -0.895) in the hearing aided ear. However, only the SMD threshold was significantly correlated with benefit within the group with mild-to-moderate loss (r = -0.828) and within the group with severe-to-profound loss (r = -0.896). CONCLUSIONS The SMD threshold at 1 cycle/octave has the potential to provide clinicians with information relevant to the question of whether an ear with low-frequency hearing is likely to add to the intelligibility of speech provided by a CI.
45
Won JH, Jones GL, Drennan WR, Jameyson EM, Rubinstein JT. Evidence of across-channel processing for spectral-ripple discrimination in cochlear implant listeners. J Acoust Soc Am 2011; 130:2088-97. [PMID: 21973363 PMCID: PMC3206911 DOI: 10.1121/1.3624820] [Citation(s) in RCA: 20] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/08/2023]
Abstract
Spectral-ripple discrimination has been used widely for psychoacoustical studies in normal-hearing, hearing-impaired, and cochlear implant listeners. The present study investigated the perceptual mechanism for spectral-ripple discrimination in cochlear implant listeners. The main goal of this study was to determine whether cochlear implant listeners use a local intensity cue or global spectral shape for spectral-ripple discrimination. The effect of electrode separation on spectral-ripple discrimination was also evaluated. Results showed that it is highly unlikely that cochlear implant listeners depend on a local intensity cue for spectral-ripple discrimination. A phenomenological model of spectral-ripple discrimination, as an "ideal observer," showed that a perceptual mechanism based on discrimination of a single intensity difference cannot account for performance of cochlear implant listeners. Spectral modulation depth and electrode separation were found to significantly affect spectral-ripple discrimination. The evidence supports the hypothesis that spectral-ripple discrimination involves integrating information from multiple channels.
Affiliation(s)
- Jong Ho Won
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology-Head and Neck Surgery, University of Washington, Seattle, Washington 98195, USA