Wei C, Cao K, Jin X, Chen X. Psychophysical performance and Mandarin tone recognition in noise by cochlear implant users. Ear Hear 2007;28:62S-65S. [PMID: 17496650; PMCID: PMC2674760; DOI: 10.1097/aud.0b013e318031512c]
Abstract
OBJECTIVE
The present study aimed to examine the relationship between psychophysical performance in temporal and spectral resolution and Mandarin tone recognition in noise by cochlear-implant (CI) listeners.
DESIGN
Seventeen Nucleus-24 implant users, 10 postlingually deafened and 7 prelingually deafened, participated in the experiments. A 3-interval, forced-choice procedure was used to measure gap detection and pure-tone frequency discrimination from 250 to 4,000 Hz in octave steps. A 4-alternative forced-choice procedure was used to measure Mandarin tone recognition in quiet and in noise. Signal-to-noise ratios (SNRs) varied from +10 to -10 dB. All stimuli were delivered to the clinical processor via a loudspeaker in the free field. The obtained data were compared with data collected from normal-hearing control subjects, as well as from cochlear-implant users who performed similar tasks using single-electrode stimulation via a research interface.
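The abstract does not state how the noise level was set for each SNR condition; as a rough illustration only, the sketch below shows one common way to mix a speech token with noise at a target SNR by RMS scaling. The function name mix_at_snr and the scaling scheme are assumptions for illustration, not the authors' procedure.

    import numpy as np

    def mix_at_snr(signal, noise, snr_db):
        """Scale the noise so that the mix has the requested SNR (in dB), then add it to the signal."""
        # RMS levels of the clean signal and the noise
        rms_signal = np.sqrt(np.mean(signal ** 2))
        rms_noise = np.sqrt(np.mean(noise ** 2))
        # SNR_dB = 20 * log10(rms_signal / rms_noise), so solve for the target noise RMS
        target_rms_noise = rms_signal / (10 ** (snr_db / 20))
        scaled_noise = noise * (target_rms_noise / rms_noise)
        return signal + scaled_noise

    # Example: the SNR conditions reported in the study ranged from +10 to -10 dB
    snr_conditions_db = [10, 5, 0, -5, -10]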
RESULTS
Postlingually deafened CI subjects generally performed better than prelingually deafened subjects. The average gap detection threshold was 30 ms, with a range from 4 to 128 ms. The average frequency difference limen was 100 Hz, with a range from 12 to 192 Hz, regardless of the standard frequency. Average tone recognition was 80% correct in quiet, dropping to 55% correct at +10 dB SNR and to essentially chance performance at -5 dB SNR. In comparison, the normal-hearing control subjects maintained essentially perfect performance over this SNR range. Only frequency discrimination at 1,000 Hz was significantly correlated with tone recognition in quiet, whereas all psychophysical measures were correlated with tone recognition in noise.
CONCLUSIONS
The present results suggest that CI users can rely on either temporal or spectral cues for tone recognition in quiet, but need both cues for tone recognition in noise. Future CI processors need to extract and encode these acoustic cues to achieve better performance in tone perception and production.