1. Ashjaei S, Behroozmand R, Fozdar S, Farrar R, Arjmandi M. Vocal control and speech production in cochlear implant listeners: A review within auditory-motor processing framework. Hear Res 2024;453:109132. PMID: 39447319. DOI: 10.1016/j.heares.2024.109132.
Abstract
A comprehensive literature review is conducted to summarize and discuss prior findings on how cochlear implants (CI) affect the users' abilities to produce and control vocal and articulatory movements within the auditory-motor integration framework of speech. Patterns of speech production pre- versus post-implantation, post-implantation adjustments, deviations from the typical ranges of speakers with normal hearing (NH), the effects of switching the CI on and off, as well as the impact of altered auditory feedback on vocal and articulatory speech control are discussed. Overall, findings indicate that CIs enhance the vocal and articulatory control aspects of speech production at both segmental and suprasegmental levels. While many CI users achieve speech quality comparable to NH individuals, some features still deviate in a group of CI users even years post-implantation. More specifically, contracted vowel space, increased vocal jitter and shimmer, longer phoneme and utterance durations, shorter voice onset time, decreased contrast in fricative production, limited prosodic patterns, and reduced intelligibility have been reported in subgroups of CI users compared to NH individuals. Significant individual variations among CI users have been observed in both the pace of speech production adjustments and long-term speech outcomes. Few controlled studies have explored how the implantation age and the duration of CI use influence speech features, leaving substantial gaps in our understanding about the effects of spectral resolution, auditory rehabilitation, and individual auditory-motor processing abilities on vocal and articulatory speech outcomes in CI users. Future studies under the auditory-motor integration framework are warranted to determine how suboptimal CI auditory feedback impacts auditory-motor processing and precise vocal and articulatory control in CI users.
Affiliation(s)
- Samin Ashjaei
- Translational Auditory Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, 1705 College Street, Columbia, SC 29208, USA
- Roozbeh Behroozmand
- Speech Neuroscience Lab, Department of Speech, Language, and Hearing, Callier Center for Communication Disorders, School of Behavioral and Brain Sciences, The University of Texas at Dallas, 2811 North Floyd Road, Richardson, TX 75080, USA
- Shaivee Fozdar
- Translational Auditory Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, 1705 College Street, Columbia, SC 29208, USA
- Reed Farrar
- Translational Auditory Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, 1705 College Street, Columbia, SC 29208, USA
- Meisam Arjmandi
- Translational Auditory Neuroscience Lab, Department of Communication Sciences and Disorders, Arnold School of Public Health, University of South Carolina, 1705 College Street, Columbia, SC 29208, USA; Institute for Mind and Brain, University of South Carolina, Barnwell Street, Columbia, SC 29208, USA.
2. Chiossi JSC, Patou F, Ng EHN, Faulkner KF, Lyxell B. Phonological discrimination and contrast detection in pupillometry. Front Psychol 2023;14:1232262. PMID: 38023001; PMCID: PMC10646334. DOI: 10.3389/fpsyg.2023.1232262.
Abstract
Introduction The perception of phonemes is guided by both low-level acoustic cues and high-level linguistic context. However, differentiating between these two types of processing can be challenging. In this study, we explore the utility of pupillometry as a tool to investigate both low- and high-level processing of phonological stimuli, with a particular focus on its ability to capture novelty detection and cognitive processing during speech perception. Methods Pupillometric traces were recorded from a sample of 22 Danish-speaking adults, with self-reported normal hearing, while performing two phonological-contrast perception tasks: a nonword discrimination task, which included minimal-pair combinations specific to the Danish language, and a nonword detection task involving the detection of phonologically modified words within sentences. The study explored the perception of contrasts in both unprocessed speech and degraded speech input, processed with a vocoder. Results No difference in peak pupil dilation was observed when the contrast occurred between two isolated nonwords in the nonword discrimination task. For unprocessed speech, higher peak pupil dilations were measured when phonologically modified words were detected within a sentence compared to sentences without the nonwords. For vocoded speech, higher peak pupil dilation was observed for sentence stimuli, but not for the isolated nonwords, although performance decreased similarly for both tasks. Conclusion Our findings demonstrate the complexity of pupil dynamics in the presence of acoustic and phonological manipulation. Pupil responses seemed to reflect higher-level cognitive and lexical processing related to phonological perception rather than low-level perception of acoustic cues. However, the incorporation of multiple talkers in the stimuli, coupled with the relatively low task complexity, may have affected the pupil dilation.
Affiliation(s)
- Julia S. C. Chiossi
- Oticon A/S, Smørum, Denmark
- Department of Special Needs Education, University of Oslo, Oslo, Norway
- Elaine Hoi Ning Ng
- Oticon A/S, Smørum, Denmark
- Department of Behavioural Sciences and Learning, Linnaeus Centre HEAD, Swedish Institute for Disability Research, Linköping University, Linköping, Sweden
- Björn Lyxell
- Department of Special Needs Education, University of Oslo, Oslo, Norway
3. Yang J, Xu L. Acoustic characteristics of sibilant fricatives and affricates in Mandarin-speaking children with cochlear implants. J Acoust Soc Am 2023;153:3501-3512. PMID: 37378672. DOI: 10.1121/10.0019803.
Abstract
The purpose of the study was to examine the acoustic features of sibilant fricatives and affricates produced by prelingually deafened Mandarin-speaking children with cochlear implants (CIs) in comparison to their age-matched normal-hearing (NH) peers. The speakers included 21 children with NH aged between 3.25 and 10 years old and 35 children with CIs aged between 3.77 and 15 years old who were assigned into chronological-age-matched and hearing-age-matched subgroups. All speakers were recorded producing Mandarin words containing nine sibilant fricatives and affricates (/s, ɕ, ʂ, ts, tsʰ, tɕ, tɕʰ, tʂ, tʂʰ/) located at the word-initial position. Acoustic analysis was conducted to examine consonant duration, normalized amplitude, rise time, and spectral peak. The results revealed that the CI children, regardless of whether chronological-age-matched or hearing-age-matched, approximated the NH peers in the features of duration, amplitude, and rise time. However, the spectral peaks of the alveolar and alveolopalatal sounds in the CI children were significantly lower than in the NH children. The lower spectral peaks of the alveolar and alveolopalatal sounds resulted in less distinctive place contrast with the retroflex sounds in the CI children than in the NH peers, which might partially account for the lower intelligibility of high-frequency consonants in children with CIs.
Affiliation(s)
- Jing Yang
- Program of Communication Sciences and Disorders, University of Wisconsin-Milwaukee, Milwaukee, Wisconsin 53201, USA
- Li Xu
- Hearing, Speech & Language Sciences, Ohio University, Athens, Ohio 45701, USA
4. Individual Variability in Recalibrating to Spectrally Shifted Speech: Implications for Cochlear Implants. Ear Hear 2021;42:1412-1427. PMID: 33795617. DOI: 10.1097/aud.0000000000001043.
Abstract
OBJECTIVES Cochlear implant (CI) recipients are at a severe disadvantage compared with normal-hearing listeners in distinguishing consonants that differ by place of articulation because the key relevant spectral differences are degraded by the implant. One component of that degradation is the upward shifting of spectral energy that occurs with a shallow insertion depth of a CI. The present study aimed to systematically measure the effects of spectral shifting on word recognition and phoneme categorization by specifically controlling the amount of shifting and using stimuli whose identification specifically depends on perceiving frequency cues. We hypothesized that listeners would be biased toward perceiving phonemes that contain higher-frequency components because of the upward frequency shift and that intelligibility would decrease as spectral shifting increased. DESIGN Normal-hearing listeners (n = 15) heard sine wave-vocoded speech with simulated upward frequency shifts of 0, 2, 4, and 6 mm of cochlear space to simulate shallow CI insertion depth. Stimuli included monosyllabic words and /b/-/d/ and /∫/-/s/ continua that varied systematically by formant frequency transitions or frication noise spectral peaks, respectively. Recalibration to spectral shifting was operationally defined as shifting perceptual acoustic-phonetic mapping commensurate with the spectral shift; that is, adjusting frequency expectations for both phonemes upward so that a perceptual distinction is preserved, rather than hearing all upward-shifted phonemes as the higher-frequency member of the pair. RESULTS For moderate amounts of spectral shifting, group data suggested a general "halfway" recalibration to spectral shifting, but individual data suggested a notably different conclusion: half of the listeners were able to recalibrate fully, while the other half were unable to categorize shifted speech with any reliability. No participant demonstrated a pattern intermediate to these two extremes. Intelligibility of words decreased with greater amounts of spectral shifting, also showing loose clusters of better- and poorer-performing listeners. Phonetic analysis of word errors revealed that certain cues (place and manner of articulation) were more susceptible to being compromised by a frequency shift, while voicing was robust to spectral shifting. CONCLUSIONS Shifting the frequency spectrum of speech has systematic effects that are in line with known properties of speech acoustics, but the ensuing difficulties cannot be predicted from tonotopic mismatch alone. Difficulties are subject to substantial individual differences in the capacity to adjust acoustic-phonetic mapping. These results help to explain why speech recognition in CI listeners cannot be fully predicted by peripheral factors like electrode placement and spectral resolution; even among listeners with functionally equivalent auditory input, there is an additional factor of simply being able or unable to flexibly adjust acoustic-phonetic mapping. This individual variability could motivate precise treatment approaches guided by an individual's relative reliance on wideband frequency representation (even if it is mismatched) or limited frequency coverage whose tonotopy is preserved.
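The shift conditions above are stated in millimeters of cochlear space, which map to frequency via the standard Greenwood place-frequency function. A minimal sketch of that conversion, assuming the commonly used human parameters (A = 165.4 Hz, a = 2.1 per proportion of a ~35 mm cochlea, k = 0.88); the function names are illustrative, not the study's code:

```python
import math

# Greenwood (1990) human place-frequency fit; illustrative values, not from the study.
A, a, k = 165.4, 2.1, 0.88
LENGTH_MM = 35.0  # approximate human cochlear duct length

def place_to_freq(mm_from_apex: float) -> float:
    """Characteristic frequency (Hz) at a given distance from the apex."""
    x = mm_from_apex / LENGTH_MM  # proportion of cochlear length
    return A * (10 ** (a * x) - k)

def freq_to_place(freq_hz: float) -> float:
    """Inverse map: distance from apex (mm) for a given frequency."""
    return LENGTH_MM * math.log10(freq_hz / A + k) / a

def shifted_freq(freq_hz: float, shift_mm: float) -> float:
    """Frequency heard when energy lands shift_mm more basal than its nominal place."""
    return place_to_freq(freq_to_place(freq_hz) + shift_mm)

if __name__ == "__main__":
    for shift in (0, 2, 4, 6):  # mm, as in the study's conditions
        print(shift, round(shifted_freq(1000.0, shift)))
```

A 0 mm shift leaves frequencies unchanged, while larger basal shifts push the same input energy toward progressively higher perceived frequencies, which is the bias the study hypothesized.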
5. Winn MB. Accommodation of gender-related phonetic differences by listeners with cochlear implants and in a variety of vocoder simulations. J Acoust Soc Am 2020;147:174. PMID: 32006986; PMCID: PMC7341679. DOI: 10.1121/10.0000566.
Abstract
Speech perception requires accommodation of a wide range of acoustic variability across talkers. A classic example is the perception of "sh" and "s" fricative sounds, which are categorized according to spectral details of the consonant itself, and also by the context of the voice producing it. Because women's and men's voices occupy different frequency ranges, a listener is required to make a corresponding adjustment of acoustic-phonetic category space for these phonemes when hearing different talkers. This pattern is commonplace in everyday speech communication, and yet might not be captured in accuracy scores for whole words, especially when word lists are spoken by a single talker. Phonetic accommodation for fricatives "s" and "sh" was measured in 20 cochlear implant (CI) users and in a variety of vocoder simulations, including those with noise carriers with and without peak picking, simulated spread of excitation, and pulsatile carriers. CI listeners showed strong phonetic accommodation as a group. Each vocoder produced phonetic accommodation except the 8-channel noise vocoder, despite its historically good match with CI users in word intelligibility. Phonetic accommodation is largely independent of linguistic factors and thus might offer information complementary to speech intelligibility tests which are partially affected by language processing.
Affiliation(s)
- Matthew B Winn
- Department of Speech & Hearing Sciences, University of Minnesota, 164 Pillsbury Drive Southeast, Minneapolis, Minnesota 55455, USA
6. Grasmeder ML, Verschuur CA, van Besouw RM, Wheatley AMH, Newman TA. Measurement of pitch perception as a function of cochlear implant electrode and its effect on speech perception with different frequency allocations. Int J Audiol 2018;58:158-166. PMID: 30370800. DOI: 10.1080/14992027.2018.1516048.
Abstract
OBJECTIVE An experiment was conducted to investigate the possibility that speech perception could be improved for some cochlear implant (CI) users by adjustment of the frequency allocation to the electrodes, following assessment of pitch perception along the electrode array. STUDY SAMPLE Thirteen adult CI users with MED-EL devices participated in the study. DESIGN Pitch perception was assessed for individual CI electrode pairs using the Pitch Contour Test (PCT), giving information on pitch discrimination and pitch ranking for adjacent electrodes. Sentence perception in noise was also assessed with ten different frequency allocations, including the default. RESULTS Pitch perception was found to be poorer for both discrimination and ranking scores at either end of the electrode array. A significant effect of frequency allocation was found for sentence scores [F(4.24,38.2) = 7.14, p < 0.001] and a significant interaction between sentence score and PCT ranking score for basal electrodes was found [F(4.24,38.2) = 2.95, p = 0.03]. Participants with poorer pitch perception at the basal end had poorer scores for some allocations with greater basal shift. CONCLUSIONS The results suggest that speech perception could be improved for CI users by assessment of pitch perception using the PCT and subsequent adjustment of pitch-related stimulation parameters.
Affiliation(s)
- M L Grasmeder
- Auditory Implant Service, University of Southampton, Southampton, UK
- C A Verschuur
- Auditory Implant Service, University of Southampton, Southampton, UK
- R M van Besouw
- Institute of Sound and Vibration Research, University of Southampton, UK
- A M H Wheatley
- Institute of Sound and Vibration Research, University of Southampton, UK
- T A Newman
- Southampton Neuroscience Group, University of Southampton, UK
7. Anis FN, Umat C, Ahmad K, Hamid BA. Patterns of recognition of Arabic consonants by non-native children with cochlear implants and normal hearing. Cochlear Implants Int 2018;20:12-22. PMID: 30293522. DOI: 10.1080/14670100.2018.1530420.
Abstract
OBJECTIVE This study examined the patterns of recognition of Arabic consonants, via information transmission analysis of phonological features, in a group of Malay children with normal hearing (NH) and cochlear implants (CI). METHOD A total of 336 and 616 acoustic tokens were collected from six CI and 11 NH Malay children, respectively. The groups were matched for hearing age and duration of exposure to Arabic sounds. All 28 Arabic consonants, in the form of consonant-vowel /a/, were presented randomly twice via a loudspeaker at approximately 65 dB SPL. The participants were asked to repeat verbally the stimulus heard in each presentation. RESULTS Within the native Malay perceptual space, the two groups responded differently to the Arabic consonants. Dispersed uncategorized assimilation in the CI group was distinct in the confusion matrix (CM), as compared to the NH children. Consonants /ħ/, /tˁ/, /sˁ/ and /ʁ/ were difficult for the CI children, while the most accurately identified item was /k/ (84%). The CI group transmitted significantly less information than the NH group, especially for the place feature (p < 0.001). Significant place-by-hearing-status and manner-by-hearing-status interactions were also obtained, suggesting differences in the pattern of consonant recognition between the study groups. CONCLUSION CI and NH Malay children may be using different acoustic cues to recognize Arabic sounds, which contributes to the different patterns of assimilation categories within the Malay perceptual space.
Affiliation(s)
- Farheen Naz Anis
- Centre for Rehabilitation and Special Needs, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Jalan Raja Muda Abdul Aziz 50300, Kuala Lumpur, Malaysia
- Cila Umat
- Centre for Rehabilitation and Special Needs, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Jalan Raja Muda Abdul Aziz 50300, Kuala Lumpur, Malaysia
- Institute of Ear, Hearing & Speech, Universiti Kebangsaan Malaysia, Kuala Lumpur, Malaysia
- Kartini Ahmad
- Centre for Rehabilitation and Special Needs, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Jalan Raja Muda Abdul Aziz 50300, Kuala Lumpur, Malaysia
- Badrulzaman Abdul Hamid
- Centre for Rehabilitation and Special Needs, Faculty of Health Sciences, Universiti Kebangsaan Malaysia, Jalan Raja Muda Abdul Aziz 50300, Kuala Lumpur, Malaysia
8. Integration of acoustic and electric hearing is better in the same ear than across ears. Sci Rep 2017;7:12500. PMID: 28970567; PMCID: PMC5624923. DOI: 10.1038/s41598-017-12298-3.
Abstract
Advances in cochlear implant (CI) technology allow for acoustic and electric hearing to be combined within the same ear (electric-acoustic stimulation, or EAS) and/or across ears (bimodal listening). Integration efficiency (IE; the ratio between observed and predicted performance for acoustic-electric hearing) can be used to estimate how well acoustic and electric hearing are combined. The goal of this study was to evaluate factors that affect IE in EAS and bimodal listening. Vowel recognition was measured in normal-hearing subjects listening to simulations of unimodal, EAS, and bimodal listening. The input/output frequency range for acoustic hearing was 0.1–0.6 kHz. For CI simulations, the output frequency range was 1.2–8.0 kHz to simulate a shallow insertion depth and the input frequency range was varied to provide increasing amounts of speech information and tonotopic mismatch. Performance was best when acoustic and electric hearing was combined in the same ear. IE was significantly better for EAS than for bimodal listening; IE was sensitive to tonotopic mismatch for EAS, but not for bimodal listening. These simulation results suggest acoustic and electric hearing may be more effectively and efficiently combined within rather than across ears, and that tonotopic mismatch should be minimized to maximize the benefit of acoustic-electric hearing, especially for EAS.
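Integration efficiency as defined above is simply observed divided by predicted acoustic-electric performance. One common way to form the prediction from the two unimodal scores is probabilistic summation; that summation model is my illustration here, not necessarily the prediction method this study used:

```python
def predicted_combined(p_acoustic: float, p_electric: float) -> float:
    """Probability-summation prediction for combined acoustic+electric scores.

    Assumes errors in the two listening modes are independent; this model is an
    illustrative assumption, not confirmed as the study's method.
    """
    return 1.0 - (1.0 - p_acoustic) * (1.0 - p_electric)

def integration_efficiency(p_observed: float, p_acoustic: float,
                           p_electric: float) -> float:
    """IE = observed / predicted; IE > 1 suggests super-additive integration."""
    return p_observed / predicted_combined(p_acoustic, p_electric)

if __name__ == "__main__":
    # hypothetical scores: 40% correct acoustic-only, 50% electric-only, 80% combined
    print(round(integration_efficiency(0.80, 0.40, 0.50), 3))
```

Under this model an observed combined score above the independence prediction (here 0.70) yields IE > 1, which is how "well-combined" acoustic and electric hearing would register.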
9. Grieco-Calub TM, Simeon KM, Snyder HE, Lew-Williams C. Word segmentation from noise-band vocoded speech. Lang Cogn Neurosci 2017;32:1344-1356. PMID: 29977950; PMCID: PMC6028043. DOI: 10.1080/23273798.2017.1354129.
Abstract
Spectral degradation reduces access to the acoustics of spoken language and compromises how learners break into its structure. We hypothesised that spectral degradation disrupts word segmentation, but that listeners can exploit other cues to restore detection of words. Normal-hearing adults were familiarised to artificial speech that was unprocessed or spectrally degraded by noise-band vocoding into 16 or 8 spectral channels. The monotonic speech stream was pause-free (Experiment 1), interspersed with isolated words (Experiment 2), or slowed by 33% (Experiment 3). Participants were tested on segmentation of familiar vs. novel syllable sequences and on recognition of individual syllables. As expected, vocoding hindered both word segmentation and syllable recognition. The addition of isolated words, but not slowed speech, improved segmentation. We conclude that syllable recognition is necessary but not sufficient for successful word segmentation, and that isolated words can facilitate listeners' access to the structure of acoustically degraded speech.
Affiliation(s)
- Tina M. Grieco-Calub
- The Roxelyn & Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Katherine M. Simeon
- The Roxelyn & Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Hillary E. Snyder
- The Roxelyn & Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
10. Jaekel BN, Newman RS, Goupell MJ. Speech rate normalization and phonemic boundary perception in cochlear-implant users. J Speech Lang Hear Res 2017;60:1398-1416. PMID: 28395319; PMCID: PMC5580678. DOI: 10.1044/2016_JSLHR-H-15-0427.
Abstract
PURPOSE Normal-hearing (NH) listeners rate normalize, temporarily remapping phonemic category boundaries to account for a talker's speech rate. It is unknown if adults who use auditory prostheses called cochlear implants (CI) can rate normalize, as CIs transmit degraded speech signals to the auditory nerve. Ineffective adjustment to rate information could explain some of the variability in this population's speech perception outcomes. METHOD Phonemes with manipulated voice-onset-time (VOT) durations were embedded in sentences with different speech rates. Twenty-three CI and 29 NH participants performed a phoneme identification task. NH participants heard the same unprocessed stimuli as the CI participants or stimuli degraded by a sine vocoder, simulating aspects of CI processing. RESULTS CI participants showed larger rate normalization effects (6.6 ms) than the NH participants (3.7 ms) and had shallower (less reliable) category boundary slopes. NH participants showed similarly shallow slopes when presented acoustically degraded vocoded signals, but an equal or smaller rate effect in response to reductions in available spectral and temporal information. CONCLUSION CI participants can rate normalize, despite their degraded speech input, and show a larger rate effect compared to NH participants. CI participants may particularly rely on rate normalization to better maintain perceptual constancy of the speech signal.
Affiliation(s)
- Brittany N. Jaekel
- Department of Hearing and Speech Sciences, University of Maryland, College Park
- Rochelle S. Newman
- Department of Hearing and Speech Sciences, University of Maryland, College Park
- Matthew J. Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park
11. Yang J, Vadlamudi J, Yin Z, Lee CY, Xu L. Production of word-initial fricatives of Mandarin Chinese in prelingually deafened children with cochlear implants. Int J Speech Lang Pathol 2017;19:153-164. PMID: 27063694. DOI: 10.3109/17549507.2016.1143972.
Abstract
PURPOSE This study examined the production of fricatives by prelingually deafened Mandarin-speaking children with cochlear implants (CIs). METHOD Fourteen CI children (2.9-8.3 years old) and 60 age-matched normal-hearing (NH) children were recorded producing a list of 13 Mandarin words containing four fricatives, /f, s, ɕ, ʂ/, at the syllable-initial position, elicited with a picture-naming task. Two phonetically trained native Mandarin speakers transcribed the fricative productions. Acoustic analysis was conducted to examine duration, normalised amplitude, spectral peak location and four spectral moments. RESULTS The CI children showed much lower accuracy rates and more diverse error patterns on all four fricatives than their NH peers. Among the four fricatives, both CI and NH children most often mispronounced /s/. The acoustic results showed that the speech of the CI children differed from that of the NH children in spectral peak location, normalised amplitude, spectral mean and spectral skewness. In addition, the fricatives produced by the CI children showed less distinctive patterns of acoustic measures than those of the NH children. CONCLUSION Overall, these results indicate that the CI children have not established distinct categories for the Mandarin fricatives in terms of place of articulation.
Affiliation(s)
- Jing Yang
- Communication Sciences and Disorders, Speech Language and Hearing Center, University of Central Arkansas, Conway, AR, USA
- Jessica Vadlamudi
- Communication Sciences and Disorders, Ohio University, Athens, OH, USA
- Zhigang Yin
- Institute of Linguistics, Chinese Academy of Social Sciences, Beijing, PR China
- Chao-Yang Lee
- Communication Sciences and Disorders, Ohio University, Athens, OH, USA
- Li Xu
- Communication Sciences and Disorders, Ohio University, Athens, OH, USA
13. Devocht EM, Dees G, Arts RA, Smits JJ, George EL, van Hoof M, Stokroos RJ. Revisiting place-pitch match in CI recipients using 3D imaging analysis. Ann Otol Rhinol Laryngol 2015;125:378-384. DOI: 10.1177/0003489415616130.
Abstract
Objective: To improve the estimation of perceived pitch in a single-sided deaf (SSD) cochlear implant (CI) listener by using accurate 3-dimensional (3D) image analysis of the cochlear electrode positions together with the predicted tonotopic function for humans. Methods: An SSD CI user underwent a cone-beam computed tomography (CBCT) scan. Electrode contacts were marked in 3D space in relation to the nearest point on the cochlear lateral wall. Distance to the base of the lateral wall was calculated and plotted against the place-pitch function for humans. An adaptive procedure was used to elicit the perceived pitch of electrically evoked stimulation by matching it with a contralateral acoustic pitch. Results: The electrically evoked pitch percept matched well with the calculated frequency. The median mismatch was 0.12 octaves for our method, compared to 0.69 octaves using the conventional Stenvers view. Conclusion: A method of improved image analysis is described that can be used to predict the pitch percept at corresponding cochlear electrode positions. This method shows the potential of 3D imaging in CI fitting optimization.
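The mismatch reported in octaves is the base-2 logarithm of the ratio between the acoustically matched pitch and the frequency calculated from electrode position. A minimal sketch of that metric (the example frequencies are invented for illustration, not taken from the study):

```python
import math

def octave_mismatch(perceived_hz: float, predicted_hz: float) -> float:
    """Absolute pitch-match error in octaves between the matched acoustic
    pitch and the frequency predicted from the electrode's cochlear place."""
    return abs(math.log2(perceived_hz / predicted_hz))

if __name__ == "__main__":
    # hypothetical electrode: predicted at 1000 Hz from imaging, matched at 1090 Hz
    print(round(octave_mismatch(1090.0, 1000.0), 3))
```

On this scale the study's 0.12-octave median (3D imaging) versus 0.69 octaves (Stenvers view) corresponds to roughly a 9% versus 61% frequency deviation.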
Affiliation(s)
- Elke M.J. Devocht
- Department of ENT/Audiology, School for Mental Health and Neuroscience (MHENS), Maastricht University Medical Center, Maastricht, The Netherlands
- Guido Dees
- Department of ENT/Audiology, School for Mental Health and Neuroscience (MHENS), Maastricht University Medical Center, Maastricht, The Netherlands
- Remo A.G.J. Arts
- Department of ENT/Audiology, School for Mental Health and Neuroscience (MHENS), Maastricht University Medical Center, Maastricht, The Netherlands
- Jeroen J. Smits
- Department of ENT/Audiology, School for Mental Health and Neuroscience (MHENS), Maastricht University Medical Center, Maastricht, The Netherlands
- Erwin L.J. George
- Department of ENT/Audiology, School for Mental Health and Neuroscience (MHENS), Maastricht University Medical Center, Maastricht, The Netherlands
- Marc van Hoof
- Department of ENT/Audiology, School for Mental Health and Neuroscience (MHENS), Maastricht University Medical Center, Maastricht, The Netherlands
- Robert J. Stokroos
- Department of ENT/Audiology, School for Mental Health and Neuroscience (MHENS), Maastricht University Medical Center, Maastricht, The Netherlands
14. Venail F, Mathiolon C, Menjot de Champfleur S, Piron JP, Sicard M, Villemus F, Vessigaud MA, Sterkers-Artieres F, Mondain M, Uziel A. Effects of electrode array length on frequency-place mismatch and speech perception with cochlear implants. Audiol Neurootol 2015;20:102-111. DOI: 10.1159/000369333.
Abstract
Frequency-place mismatch often occurs after cochlear implantation, yet its effect on speech perception outcome remains unclear. In this article, we propose a method, based on cochlear imaging, to determine the cochlear place-frequency map. We evaluated the effect of frequency-place mismatch on speech perception outcome in subjects implanted with 3 different lengths of electrode arrays. A deeper insertion produced a larger frequency-place mismatch and a smaller, delayed improvement in speech perception compared with a shallower insertion, for which a similar but slighter effect was observed. Our results support the notion that selecting an electrode array length adapted to each individual's cochlear anatomy may reduce frequency-place mismatch and thus improve speech perception outcome.
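The cochlear place-frequency map that such imaging-based methods build on is conventionally Greenwood's function; a minimal sketch, assuming the standard human constants (the study's own map may be refined from imaging and need not use exactly these values):

```python
# Greenwood place-frequency function for the human cochlea,
# f(x) = A * (10**(a*x) - k), with x the relative distance from the
# apex (0 = apex, 1 = base) and the usual human constants
# A = 165.4, a = 2.1, k = 0.88 (assumed here, not from this study).
def greenwood_hz(x: float, A: float = 165.4, a: float = 2.1, k: float = 0.88) -> float:
    """Characteristic frequency (Hz) at relative cochlear position x."""
    return A * (10 ** (a * x) - k)

apex_hz = greenwood_hz(0.0)   # ~20 Hz at the apex
base_hz = greenwood_hz(1.0)   # ~20 kHz at the base
```

An electrode inserted less deeply sits at a larger x, so its place frequency is higher than the analysis band it carries, which is the frequency-place mismatch discussed above.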
15
Zhou N, Pfingst BE. Relationship between multipulse integration and speech recognition with cochlear implants. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2014; 136:1257. [PMID: 25190399 PMCID: PMC4165232 DOI: 10.1121/1.4890640] [Citation(s) in RCA: 25] [Impact Index Per Article: 2.5] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/12/2023]
Abstract
Comparisons between cochlear implant performance and postmortem cochlear conditions in humans have shown mixed results. The limitations of those studies favor the use of within-subject designs and non-invasive measures to estimate cochlear conditions. One non-invasive correlate of cochlear health is multipulse integration, established in an animal model. The present study used this measure to relate neural health in human cochlear implant users to their speech recognition performance. The multipulse-integration slopes were derived from psychophysical detection thresholds measured at two pulse rates (80 and 640 pulses per second). A within-subject design was used in eight subjects with bilateral implants, in which the direction and magnitude of ear differences in the multipulse-integration slopes were compared with those of the speech-recognition results. The speech measures included speech reception threshold for sentences and phoneme recognition in noise. The magnitude of ear difference in the integration slopes was significantly correlated with the magnitude of ear difference in speech reception thresholds, consonant recognition in noise, and transmission of place of articulation of consonants. These results suggest that multipulse integration predicts speech recognition in noise and perception of features that rely on dynamic spectral cues.
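The integration slope described above can be sketched as the change in detection threshold per log unit of pulse rate between the two tested rates (80 and 640 pps); the study's exact formulation may differ, so treat this as an illustrative assumption:

```python
import math

def mpi_slope(thr_low_db: float, thr_high_db: float,
              rate_low: float = 80.0, rate_high: float = 640.0) -> float:
    """Multipulse-integration slope: change in detection threshold (dB)
    per log10 unit of pulse rate. Thresholds typically fall as rate
    rises, so steeper (more negative) slopes mean stronger integration."""
    return (thr_high_db - thr_low_db) / (math.log10(rate_high) - math.log10(rate_low))

# hypothetical thresholds: 40 dB at 80 pps, 31 dB at 640 pps
slope = mpi_slope(40.0, 31.0)  # negative: threshold drops with rate
```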
Affiliation(s)
- Ning Zhou
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, North Carolina 27834
- Bryan E Pfingst
- Kresge Hearing Research Institute, Department of Otolaryngology, University of Michigan, Ann Arbor, Michigan 48109-5616
16
Melody recognition in dichotic listening with or without frequency-place mismatch. Ear Hear 2013; 35:379-82. [PMID: 24351609 DOI: 10.1097/aud.0000000000000013] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVES The purpose of the study was to examine recognition of degraded melodic stimuli in dichotic listening with or without frequency-place mismatch. DESIGN Melodic stimuli were noise vocoded under various numbers of channels in a dichotic and a monaural processor. In the dichotic zipper processor, the odd-indexed channels were tonotopically matched and presented to the left ear, while the even-indexed channels were either tonotopically matched or upward shifted in frequency and presented to the right ear. In the monaural processor, all channels, either unshifted or shifted, were presented to the left ear alone. Familiar melody recognition was measured in 16 normal-hearing adult listeners. RESULTS Performance for dichotically presented melodic stimuli did not differ from that for monaurally presented stimuli, even with low spectral resolution (8 channels). With spectral shift introduced in one ear, melody recognition decreased with increasing spectral shift in a nonmonotonic fashion. With spectral shift, melody recognition in dichotic listening was either no different from, or in a few cases superior to, the monaural condition. CONCLUSIONS With no spectral shift, cohesive fusion of dichotically presented melodic stimuli did not seem to depend on spectral resolution. In spectrally shifted conditions, listeners may have suppressed the partially shifted channels in the right ear and selectively attended only to the unshifted ones, resulting in dichotic advantages for melody recognition in some cases.
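The zipper processor's channel allocation (odd-indexed analysis channels to the left ear, even-indexed to the right) can be sketched as a simple interleaved split; this is an illustrative reconstruction, not the authors' code:

```python
def zipper_split(channels):
    """Interleave vocoder analysis channels across ears: odd-indexed
    channels (1st, 3rd, ...) to the left ear, even-indexed (2nd, 4th,
    ...) to the right, where they may be carried on shifted bands."""
    left = channels[0::2]   # 1-based odd-indexed -> left ear
    right = channels[1::2]  # 1-based even-indexed -> right ear
    return left, right

# 8-channel condition: channels numbered 1..8
left, right = zipper_split(list(range(1, 9)))
```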
17
Mandarin consonant contrast recognition among children with cochlear implants or hearing aids and normal-hearing children. Otol Neurotol 2013; 34:471-6. [PMID: 23486352 DOI: 10.1097/mao.0b013e318286836b] [Citation(s) in RCA: 8] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
HYPOTHESIS The purpose of the present study was to investigate the consonant recognition of Mandarin-speaking children with cochlear implants (CIs) and hearing aids (HAs) and to determine whether they reach a level of consonant recognition similar to that of normal-hearing (NH) children. BACKGROUND Little information is available in the literature regarding the consonant perception abilities of prelingually deafened young children with either CIs or HAs. No studies have compared Mandarin-Chinese consonant contrast recognition in CI and HA children. METHODS Forty-one prelingually deafened children with CIs, 26 prelingually deafened children with HAs, and 30 NH children participated in this study. The 3 groups were matched for chronologic age (3-5 yr). The hearing-impaired groups were matched for age at fitting of the devices, duration of device use, and aided hearing threshold. All subjects completed a computerized Mandarin consonant phonetic contrast perception test. RESULTS CI and HA children scored, on average, approximately 8 percentage points below the mean NH group performance on consonant contrast recognition. Approximately 40% of the CI and HA children had not reached the performance level of the NH group. No significant differences in consonant recognition scores were found between the CI and HA groups. Age at implantation was correlated with consonant contrast recognition in the CI group. CONCLUSION When age at fitting of the devices, duration of device use, and aided thresholds are matched at the group level, consonant recognition is similar between CI and HA children after 2 years of device use. Early implantation tends to yield better consonant contrast recognition in young children with CIs. However, a large amount of variance in performance was not accounted for by the demographic variables studied.
18