1. Differential weighting of temporal envelope cues from the low-frequency region for Mandarin sentence recognition in noise. BMC Neurosci 2022; 23:35. PMID: 35698039; PMCID: PMC9190152; DOI: 10.1186/s12868-022-00721-z

Abstract
BACKGROUND Temporal envelope cues are conveyed by cochlear implants (CIs) to patients with hearing loss to restore hearing. Although CIs enable users to communicate in clear listening environments, noisy environments still pose a problem. To improve the speech-processing strategies used in Chinese CIs, we explored the relative contributions of the temporal envelope in various frequency regions to Mandarin sentence recognition in noise. METHODS Original speech material from the Mandarin version of the Hearing in Noise Test (MHINT) was mixed with speech-shaped noise (SSN), sinusoidally amplitude-modulated speech-shaped noise (SAM SSN), or sinusoidally amplitude-modulated (SAM) white noise (4 Hz) at a +5 dB signal-to-noise ratio. Envelope information of the noise-corrupted speech material was extracted from 30 contiguous bands allocated to five frequency regions. The intelligibility of the noise-corrupted speech material (with temporal cues from one or two regions removed) was measured to estimate the relative weights of the temporal envelope cues from the five frequency regions. RESULTS In SSN, the mean weights of Regions 1-5 were 0.34, 0.19, 0.20, 0.16, and 0.11, respectively; in SAM SSN, 0.34, 0.17, 0.24, 0.14, and 0.11; and in SAM white noise, 0.46, 0.24, 0.22, 0.06, and 0.02. CONCLUSIONS The results suggest that, for all three types of noise, the temporal envelope in the low-frequency region transmits the greatest amount of information for Mandarin sentence recognition, which differs from the perception strategy employed in clear listening environments.
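The mixing step in the METHODS above (4-Hz SAM masker added to speech at +5 dB SNR) can be sketched in a few lines. This is a hypothetical illustration, not the authors' code: the sampling rate, modulation depth, and the use of Gaussian noise as a stand-in for the MHINT sentences and speech-shaped noise are all assumptions.

```python
import numpy as np

def sam_modulate(noise, fs, rate_hz=4.0, depth=1.0):
    """Apply sinusoidal amplitude modulation (here 4 Hz) to a noise carrier."""
    t = np.arange(len(noise)) / fs
    return noise * (1.0 + depth * np.sin(2.0 * np.pi * rate_hz * t))

def mix_at_snr(speech, masker, snr_db=5.0):
    """Scale the masker so the speech-to-masker power ratio equals snr_db, then mix."""
    gain = np.sqrt(np.mean(speech**2) / (np.mean(masker**2) * 10.0**(snr_db / 10.0)))
    return speech + gain * masker

fs = 16000
rng = np.random.default_rng(0)
speech = rng.standard_normal(fs)    # stand-in for an MHINT sentence
carrier = rng.standard_normal(fs)   # stand-in for speech-shaped noise
mixture = mix_at_snr(speech, sam_modulate(carrier, fs), snr_db=5.0)
```

Because the masker is rescaled after modulation, the long-term SNR of the mixture is exactly the requested +5 dB regardless of the modulation applied.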
2. Listening to speech with a guinea pig-to-human brain-to-brain interface. Sci Rep 2021; 11:12231. PMID: 34112826; PMCID: PMC8192924; DOI: 10.1038/s41598-021-90823-1

Abstract
Nicolelis wrote in his 2003 review on brain-machine interfaces (BMIs) that the design of a successful BMI relies on general physiological principles describing how neuronal signals are encoded. Our study explored whether neural information can be exchanged between the brains of different species, similar to the information exchange between computers. We show for the first time that single words processed by the guinea pig auditory system are intelligible to humans who receive the processed information via a cochlear implant. We recorded the neural response patterns to single spoken words with multi-channel electrodes from the guinea pig inferior colliculus. The recordings served as a blueprint for trains of biphasic, charge-balanced electrical pulses, which a cochlear implant delivered to the cochlear implant user's ear. Study participants completed a four-word forced-choice test and identified the correct word in 34.8% of trials. The participants' recognition, defined as the ability to choose the same word twice, whether right or wrong, was 53.6%. The participants received no training and no feedback in any session. The results show that lexical information can be transmitted from an animal to a human auditory system. In the discussion, we consider how learning from animals might help in developing novel coding strategies.
3. Tao DD, Liu JS, Yang ZD, Wilson BS, Zhou N. Bilaterally Combined Electric and Acoustic Hearing in Mandarin-Speaking Listeners: The Population With Poor Residual Hearing. Trends Hear 2018; 22:2331216518757892. PMID: 29451107; PMCID: PMC5818091; DOI: 10.1177/2331216518757892

Abstract
The hearing loss criterion for cochlear implant candidacy in mainland China is extremely stringent (bilateral severe to profound hearing loss), resulting in few patients with substantial residual hearing in the nonimplanted ear. The main objective of the current study was to examine the benefit of bimodal hearing in typical Mandarin-speaking implant users, who have poorer residual hearing in the nonimplanted ear than the participants of English-language studies. Seventeen Mandarin-speaking bimodal users with pure-tone averages of ∼80 dB HL participated in the study. Sentence recognition in quiet and in noise, as well as tone and word recognition in quiet, was measured in monaural and bilateral conditions. There was no significant bimodal effect for word and sentence recognition in quiet. Small bimodal effects were observed for sentence recognition in noise (6%) and tone recognition (4%). The magnitude of both effects was correlated with unaided thresholds at frequencies near the voice fundamental frequency (F0). A weak correlation between the bimodal effect for word recognition and unaided thresholds at frequencies above the F0 region was identified. These results are consistent with previous findings of more robust bimodal benefits for speech recognition tasks that require higher spectral resolution than speech recognition in quiet. The significant but small F0-related bimodal benefit is also consistent with the limited acoustic hearing in the nonimplanted ear of the current sample, which is representative of bimodal users in mainland China. These results advocate for a more relaxed implant candidacy criterion in mainland China.
Affiliation(s)
- Duo-Duo Tao, Department of Ear, Nose, and Throat, The First Affiliated Hospital of Soochow University, Suzhou, China
- Ji-Sheng Liu, Department of Ear, Nose, and Throat, The First Affiliated Hospital of Soochow University, Suzhou, China
- Zhen-Dong Yang, Department of Ear, Nose, and Throat, The First Affiliated Hospital of Soochow University, Suzhou, China
- Blake S Wilson, Departments of Surgery, Biomedical Engineering, and Electrical and Computer Engineering, Duke University, Durham, NC, USA
- Ning Zhou, Department of Communication Sciences and Disorders, East Carolina University, Greenville, NC, USA
4. Melody recognition in dichotic listening with or without frequency-place mismatch. Ear Hear 2013; 35:379-82. PMID: 24351609; DOI: 10.1097/aud.0000000000000013

Abstract
OBJECTIVES The purpose of the study was to examine recognition of degraded melodic stimuli in dichotic listening with or without frequency-place mismatch. DESIGN Melodic stimuli were noise vocoded under various numbers of channels in a dichotic and a monaural processor. In the dichotic zipper processor, the odd-indexed channels were tonotopically matched and presented to the left ear, while the even-indexed channels were either tonotopically matched or upwardly shifted in frequency and presented to the right ear. In the monaural processor, all channels, whether unshifted or shifted, were presented to the left ear alone. Familiar melody recognition was measured in 16 normal-hearing adult listeners. RESULTS Performance for dichotically presented melodic stimuli did not differ from that for monaurally presented stimuli, even at low spectral resolution (8 channels). With spectral shift introduced in one ear, melody recognition decreased with increasing shift in a nonmonotonic fashion. Under spectral shift, melody recognition in dichotic listening was either no different from, or in a few cases superior to, the monaural condition. CONCLUSIONS With no spectral shift, cohesive fusion of dichotically presented melodic stimuli did not seem to depend on spectral resolution. In spectrally shifted conditions, listeners may have suppressed the partially shifted channels in the right ear and selectively attended only to the unshifted ones, resulting in dichotic advantages for melody recognition in some cases.
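The zipper allocation described in the DESIGN above is simple to state in code. A minimal sketch, with the function name and the representation of channels as a plain list being illustrative assumptions:

```python
def zipper_split(channels):
    """Split vocoder channel outputs between ears as in a dichotic 'zipper'
    processor: odd-indexed channels (1, 3, 5, ... in 1-based counting) go to
    the left ear, even-indexed channels (2, 4, 6, ...) to the right ear."""
    left = channels[0::2]   # 1-based odd channels
    right = channels[1::2]  # 1-based even channels
    return left, right

left, right = zipper_split(list(range(1, 9)))  # an 8-channel condition
```

In the monaural control condition, all channels would instead be summed to one ear; the frequency shift studied here is applied to the right-ear (even) channels before resynthesis.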
5. Acoustic properties of vocal singing in prelingually-deafened children with cochlear implants or hearing aids. Int J Pediatr Otorhinolaryngol 2013; 77:1833-40. PMID: 24035642; DOI: 10.1016/j.ijporl.2013.08.022

Abstract
OBJECTIVE The purpose of the present study was to investigate the vocal singing performance of hearing-impaired children with cochlear implants (CIs) or hearing aids (HAs), and to evaluate the relationship between demographic factors of these children and their singing ability. METHODS Thirty-seven prelingually-deafened children with CIs, 31 prelingually-deafened children with HAs, and 37 normal-hearing (NH) children participated in the study. The fundamental frequency (F0) of each note in the recorded songs was extracted, and the duration of each sung note was measured. Five metrics were used to evaluate the pitch-related and rhythm-based aspects of singing accuracy. RESULTS Children with CIs and HAs performed significantly more poorly than the NH children on the pitch-based assessments and the rhythm-based measure. No significant differences were seen between the CI and HA groups on any of these measures except the mean deviation of the pitch intervals. For both hearing-impaired groups, length of device use was significantly correlated with singing accuracy. CONCLUSIONS There is a marked deficit in pitch or rhythm accuracy of vocal singing in the majority of prelingually-deafened children who have received CIs or been fitted with HAs. Although a longer period of device use may facilitate singing performance to some extent, the chance that hearing-impaired children fitted with either HAs or CIs will reach high proficiency in singing is quite slim.
6. Relationship between tone perception and production in prelingually deafened children with cochlear implants. Otol Neurotol 2013; 34:499-506. PMID: 23442566; DOI: 10.1097/mao.0b013e318287ca86

Abstract
HYPOTHESIS Performance in tone perception and production is correlated across individual prelingually deafened pediatric cochlear implant (CI) users, and demographic variables, such as age at implantation, contribute to the performance variability. BACKGROUND Poor representation of pitch information in CI devices hinders pitch perception and affects perception of lexical tones in CI users who speak tonal languages. METHODS One hundred ten Mandarin-speaking, prelingually deafened CI subjects and 125 typically developing, normal-hearing subjects were recruited from Beijing, China. Lexical tone perception was measured using a computerized tone contrast test. Tone production was judged by native Mandarin-speaking adult listeners and was also analyzed acoustically and with an artificial neural network. A general linear model analysis was performed to determine the factors that accounted for performance variability. RESULTS CI subjects scored ≈67% correct on the lexical tone perception task. Acoustic analysis revealed that the degree of differentiation of tones produced by the CI group was significantly lower than that of the control group. Tone production performance assessed by the neural network was highly correlated with that evaluated by human listeners. There was a moderate correlation between overall tone perception and production performance across CI subjects. Duration of implant use and age at implantation jointly explained ≈29% of the variance in tone perception performance. Age at implantation was the only significant predictor of tone production performance in the CI subjects. CONCLUSION Tone production performance in pediatric CI users depends on accurate perception. Early implantation predicts a better outcome in lexical tone perception and production.
7. Chen F, Wong LLN, Qiu J, Liu Y, Azimi B, Hu Y. The contribution of matched envelope dynamic range to the binaural benefits in simulated bilateral electric hearing. J Speech Lang Hear Res 2013; 56:1166-1174. PMID: 23926330; DOI: 10.1044/1092-4388(2012/12-0255)

Abstract
PURPOSE This study examined the effects of envelope dynamic-range mismatch on the intelligibility of Mandarin speech in noise in simulated bilateral electric hearing. METHOD Noise-vocoded Mandarin speech, corrupted by speech-shaped noise at 5 and 0 dB signal-to-noise ratios, was presented unilaterally or bilaterally to 10 normal-hearing listeners for recognition. In the unilateral conditions, the right ear was presented with 8-channel noise-vocoded stimuli generated using a 15-dB envelope dynamic range (DR). To simulate envelope DR mismatch between the two ears, the left ear was presented with 8-channel noise-vocoded stimuli generated using a 5-, 10-, or 15-dB envelope DR. RESULTS Significant binaural summation benefits for Mandarin speech recognition were observed only with matched envelope DR between the two ears. With reduced DR, tone identification in the steady-state speech-shaped noise remained more consistent than sentence recognition. CONCLUSIONS Consistent with previous findings, the present results suggest that Mandarin speech-perception performance in bilateral electric listening in noise is affected by the difference in envelope DR between the two implanted ears, and that binaural summation benefits are maximized when the DR mismatch between ears is minimized.
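One common way to simulate a reduced envelope dynamic range, as in the 5- and 10-dB conditions above, is to rescale the envelope linearly in the dB domain while preserving its peak. The sketch below illustrates that general technique under stated assumptions; it is not the authors' exact processing:

```python
import numpy as np

def compress_envelope_dr(env, target_dr_db):
    """Rescale an amplitude envelope so its dB range equals target_dr_db,
    keeping the peak amplitude unchanged."""
    env = np.maximum(np.asarray(env, dtype=float), 1e-12)  # avoid log of zero
    env_db = 20.0 * np.log10(env)
    peak_db = env_db.max()
    input_dr = peak_db - env_db.min()
    # Shrink each sample's distance below the peak by the ratio of DRs
    out_db = peak_db - (peak_db - env_db) * (target_dr_db / input_dr)
    return 10.0 ** (out_db / 20.0)

# A 60-dB input envelope squeezed into a 15-dB range
env = np.array([0.001, 0.01, 0.1, 1.0])
compressed = compress_envelope_dr(env, 15.0)
```

In a vocoder, each channel's compressed envelope would then modulate that channel's noise carrier before band-pass filtering and summation.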
Affiliation(s)
- Fei Chen, Prince Philip Dental Hospital, The University of Hong Kong, Hong Kong SAR, PR China
8. Landwehr M, Fürstenberg D, Walger M, von Wedel H, Meister H. Effects of various electrode configurations on music perception, intonation and speaker gender identification. Cochlear Implants Int 2013; 15:27-35. PMID: 23684531; DOI: 10.1179/1754762813y.0000000037

Abstract
Advances in speech coding strategies and electrode array designs for cochlear implants (CIs) predominantly aim at improving speech perception. Current efforts are also directed at transmitting appropriate cues of the fundamental frequency (F0) to the auditory nerve with respect to speech quality, prosody, and music perception. The aim of this study was to examine the effects of various electrode configurations and coding strategies on speech intonation identification, speaker gender identification, and music quality rating. In six MED-EL CI users, electrodes were selectively deactivated to simulate different insertion depths and inter-electrode distances under the high-definition continuous interleaved sampling (HDCIS) and fine structure processing (FSP) speech coding strategies. Identification of intonation and speaker gender was determined, and music quality rating was assessed. For intonation identification, HDCIS was robust against the different electrode configurations, whereas FSP showed significantly worse results when a shallow insertion depth was simulated. In contrast, speaker gender recognition was not affected by electrode configuration or speech coding strategy. Music quality rating was sensitive to electrode configuration. In conclusion, the three experiments revealed different outcomes, even though all addressed the reception of F0 cues. Rapid changes in F0, as in intonation, were the most sensitive to electrode configurations and coding strategies. In contrast, electrode configurations and coding strategies showed little effect when F0 information was available over a longer time period, as with speaker gender. Music quality relies on spectral cues beyond F0 and was poorest when a shallow insertion was simulated.
9. Liu C, Azimi B, Tahmina Q, Hu Y. Effects of low harmonics on tone identification in natural and vocoded speech. J Acoust Soc Am 2012; 132:EL378-EL384. PMID: 23145698; DOI: 10.1121/1.4757729

Abstract
This study investigated the contribution of low-frequency harmonics to the identification of Mandarin tones in natural and vocoded speech in quiet and noisy conditions. Results showed that the low-frequency harmonics of natural speech supported highly accurate tone identification; for vocoded speech, however, low-frequency harmonics yielded lower tone identification than stimuli with full harmonics, except for tone 4. Analysis of the correlation between tone accuracy and the amplitude-F0 correlation index suggested that "more" speech content (i.e., more harmonics) did not necessarily yield better tone recognition for vocoded speech, especially when the amplitude contour of the signals did not co-vary with the F0 contour.
Affiliation(s)
- Chang Liu, Department of Communication Sciences and Disorders, University of Texas at Austin, 1 University Station A1100, Austin, Texas 78712, USA
10. Chen F, Wong LLN, Tahmina Q, Azimi B, Hu Y. The effects of binaural spectral resolution mismatch on Mandarin speech perception in simulated electric hearing. J Acoust Soc Am 2012; 132:EL142-EL148. PMID: 22894313; DOI: 10.1121/1.4737595

Abstract
This study assessed the effects of binaural spectral resolution mismatch on the intelligibility of Mandarin speech in noise using bilateral cochlear implant simulations. Noise-vocoded Mandarin speech, corrupted by speech-shaped noise at 0 and 5 dB signal-to-noise ratios, was presented unilaterally or bilaterally to normal-hearing listeners with mismatched spectral resolution between ears. Significant binaural benefits for Mandarin speech recognition were observed only with matched spectral resolution between ears. In addition, tone identification was more robust to noise than sentence recognition, suggesting that factors other than tone identification may account for more of the degraded sentence recognition in noise.
Affiliation(s)
- Fei Chen, Division of Speech and Hearing Sciences, The University of Hong Kong, Prince Philip Dental Hospital, 34 Hospital Road, Hong Kong
11. Contribution of spectral cues to Mandarin lexical tone recognition in normal-hearing and hearing-impaired Mandarin Chinese speakers. Ear Hear 2011; 32:97-103. PMID: 20625301; DOI: 10.1097/aud.0b013e3181ec5c28

Abstract
OBJECTIVE The purpose of this study was to investigate the contribution of spectral fine structure and spectral envelope cues to the recognition of Mandarin lexical tones in normal-hearing and sensorineural hearing-impaired Mandarin-speaking listeners. DESIGN Four groups of subjects participated in the study: 20 normal-hearing, 20 moderately hearing-impaired, 20 moderately to severely hearing-impaired, and 8 severely hearing-impaired listeners. The original speech materials consisted of 16 sets of Mandarin monosyllables spoken by a male and a female. Each monosyllable had four tonal patterns, resulting in a total of 64 combinations of consonants, vowels, and tones. A linear predictive coding (LPC) algorithm was used to create two sets of synthesized materials: 128 tokens with the original spectral fine structure mixed with the spectral envelope from a different tone, and 128 tokens with noise fine structure and the original spectral envelope. All subjects completed tone recognition tests using the two sets of chimeric tone tokens. Oral responses to tones were recorded and scored as percent correct. RESULTS Hearing-impaired listeners could take advantage of spectral fine structure in the recognition of lexical tones, but their tone recognition worsened with increasing hearing loss, especially for severely hearing-impaired listeners. Hearing-impaired listeners showed significant differences in tone recognition between the male and female voices. Tone 3 was the easiest tone to perceive, followed by tone 2, whereas tones 1 and 4 were hard for all subjects, particularly when only the spectral envelope cue was available. When using spectral envelope cues, hearing-impaired listeners showed significantly poorer lexical tone recognition than normal-hearing listeners.
CONCLUSIONS These results demonstrate that the spectral fine structure cue dominates lexical tone recognition for all subjects. Listeners with sensorineural hearing impairment showed reduced ability to recognize lexical tones using both spectral fine structure and spectral envelope cues, which may result from their impaired auditory spectral resolution.
13. Qi B, Liu B, Krenmayr A, Liu S, Gong S, Liu H, Zhang N, Han D. The contribution of apical stimulation to Mandarin speech perception in users of the MED-EL COMBI 40+ cochlear implant. Acta Otolaryngol 2011; 131:52-8. PMID: 20863152; DOI: 10.3109/00016489.2010.506652

Abstract
CONCLUSION Not stimulating the apical cochlear region in tonal-language-speaking cochlear implant users significantly reduces discrimination of Mandarin vowels. The data presented here suggest that electrode arrays allowing complete cochlear coverage with stimulation pulses are preferable to shorter arrays for cochlear implant (CI) indications. OBJECTIVE To assess the contribution of electrical stimulation beyond the first cochlear turn to tonal-language speech perception. METHODS Twelve Mandarin-speaking users of the MED-EL COMBI 40+ cochlear implant with complete insertion of the standard COMBI 40+ electrode array participated in the study. Acute speech tests were performed in seven electrode configurations, with stimulation either distributed over the whole length of the cochlea or restricted to the apical, middle, or basal regions. The test battery comprised tone, consonant, and vowel identification in quiet, as well as a sentence recognition task in quiet and in noise. RESULTS While neither tone nor consonant identification depended crucially on the placement of the active electrodes, vowel identification and sentence recognition decreased significantly when the four apical electrodes were not stimulated.
Affiliation(s)
- Beier Qi, Beijing Tong Ren Hospital, Capital Medical University, Beijing Institute of Otolaryngology, Ministry of Education, China
14.

Abstract
OBJECTIVE The purpose of the present study was to test the hypothesis that cochlear implant (CI) users' music perception is correlated with their lexical tone perception, and that the two types of perception share similar mechanisms in electric hearing. DESIGN A lexical tone perception test and a pitch interval discrimination test were administered to a group of CI users and a group of normal-hearing (NH) listeners. STUDY SAMPLE Nineteen adult CI users and 10 NH listeners, all native Mandarin Chinese speakers, participated in the study. RESULTS Tone-perception performance of the CI group was, on average, 58.3% correct (±19.78%), whereas performance of the NH group was near perfect. The CI group had a mean pitch discrimination threshold of 5.66 semitones (±5.57 semitones), compared with 0.44 semitone for the NH group. There was a strong correlation between the CI users' tone-perception performance and their pitch discrimination thresholds (r = -0.75, p < 0.001). CONCLUSION Musical and lexical pitch perception are strongly correlated and may share similar mechanisms in electric hearing.
Affiliation(s)
- Wuqing Wang, Eye, Ear, Nose, and Throat Hospital, Fudan University, Shanghai, China
15. Zhou N, Xu L, Lee CY. The effects of frequency-place shift on consonant confusion in cochlear implant simulations. J Acoust Soc Am 2010; 128:401-9. PMID: 20649234; PMCID: PMC2921437; DOI: 10.1121/1.3436558

Abstract
The effects of frequency-place shift on consonant recognition and confusion matrices were examined. Frequency-place shift was manipulated using a noise-excited vocoder with 4 to 16 channels. In the vocoder processing, the location of the most apical carrier band varied from the matched condition (i.e., 28 mm from the base of the cochlea) to a basal shift (i.e., 22 mm from the base) in steps of 1 mm. Ten normal-hearing subjects participated in the 20-alternative forced-choice test, in which the consonants were presented in a /Ca/ context. Shifts of 3 mm or more caused consonant recognition scores to decrease significantly, and the effects of spectral resolution disappeared once the shift reached 3 mm or more. Information transmitted for voicing and place of articulation varied with spectral shift and spectral resolution, while information transmitted for manner was affected only by spectral shift, not spectral resolution. Spectral shift showed specific effects on the consonant confusion patterns: the direction of errors reversed as spectral shift increased, and the patterns of reversal were consistent across channel conditions. Overall, transmission of the consonant features can be accounted for by the acoustic features of the speech signal.
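The carrier-band locations above (28 mm vs. 22 mm from the base) can be converted to characteristic frequencies with the Greenwood frequency-position function, which noise-vocoder frequency-place studies commonly use. The constants below are the standard human-map values with a 35-mm cochlear length; treating them as exact, and as the values used in this study, is an assumption:

```python
# Greenwood (1990) human cochlear map: f = A * (10**(a * x) - k),
# with x the distance from the apex in mm (assumed constants).
A, a, k, COCHLEA_MM = 165.4, 0.06, 0.88, 35.0

def place_to_freq(mm_from_base):
    """Characteristic frequency (Hz) at a given distance from the cochlear base."""
    x = COCHLEA_MM - mm_from_base  # Greenwood measures x from the apex
    return A * (10.0 ** (a * x) - k)

f_matched = place_to_freq(28.0)  # matched condition, roughly 290 Hz
f_shifted = place_to_freq(22.0)  # 6-mm basal shift, roughly 850 Hz
```

This makes the perceptual cost of a basal shift concrete: moving the most apical band 6 mm toward the base nearly triples its place-matched frequency.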
Affiliation(s)
- Ning Zhou, School of Hearing, Speech and Language Sciences, Ohio University, Athens, Ohio 45701, USA
16. Li X, Ning Z, Brashears R, Rife K. Relative Contributions of Spectral and Temporal Cues for Speech Recognition in Patients with Sensorineural Hearing Loss. J Otol 2008. DOI: 10.1016/s1672-2930(08)50019-5