1
Rødvik AK, Torkildsen JVK, Wie OB, Tvete O, Skaug I, Silvola JT. Consonant and vowel confusions in well-performing adult cochlear implant users, measured with a nonsense syllable repetition test. Int J Audiol 2024; 63:260-268. [PMID: 36853200] [DOI: 10.1080/14992027.2023.2177893]
Abstract
OBJECTIVE The study's objective was to identify consonant and vowel confusions in cochlear implant (CI) users, using a nonsense syllable repetition test. DESIGN In this cross-sectional study, participants repeated recorded mono- and bisyllabic nonsense words and real-word monosyllables in an open-set design. STUDY SAMPLE Twenty-eight Norwegian-speaking, well-performing adult CI users (13 unilateral and 15 bilateral), using implants from Cochlear, Med-El and Advanced Bionics, and a reference group of 20 listeners with normal hearing participated. RESULTS For the CI users, consonants were confused more often than vowels (58% versus 71% correct). Voiced consonants were confused more often than unvoiced (54% versus 64% correct). Voiced stops were often repeated as unvoiced, whereas unvoiced stops were never repeated as voiced. The nasals were repeated correctly in one third of the cases and confused with other nasals in one third of the cases. The real-word monosyllable score was significantly higher than the nonsense syllable score (76% versus 63% correct). CONCLUSIONS The study revealed a general devoicing bias for the stops and a high confusion rate of nasals with other nasals, which suggests that the low-frequency coding in CIs is insufficient. Furthermore, the nonsense syllable test exposed more perception errors than the real word test.
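Confusion analyses like the one above start from tallies of (stimulus, response) pairs collected in the repetition task. A minimal sketch in Python, using a handful of invented consonant trials rather than the study's Norwegian data:

```python
from collections import Counter

# Hypothetical (stimulus, response) pairs from a nonsense-syllable
# repetition task; real data would come from transcribed repetitions.
trials = [
    ("b", "p"), ("b", "b"), ("d", "t"), ("d", "d"),
    ("p", "p"), ("t", "t"), ("m", "n"), ("m", "m"),
]

def confusion_counts(pairs):
    """Tally responses per stimulus into a nested {stim: Counter} dict."""
    matrix = {}
    for stim, resp in pairs:
        matrix.setdefault(stim, Counter())[resp] += 1
    return matrix

def percent_correct(pairs):
    """Share of trials where the response matched the stimulus."""
    correct = sum(1 for s, r in pairs if s == r)
    return 100.0 * correct / len(pairs)

matrix = confusion_counts(trials)
print(matrix["b"]["p"])          # count of /b/ heard as /p/
print(percent_correct(trials))   # 62.5
```

From such a matrix one can read off the devoicing bias the study reports: counts in the voiced-stimulus rows landing in unvoiced-response columns, with the reverse cells near zero.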
Collapse
Affiliation(s)
- Arne K Rødvik
- Department of Special Needs Education, University of Oslo, Oslo, Norway
- Ear, Nose and Throat Department, Oslo University Hospital, Oslo, Norway
- Ona B Wie
- Department of Special Needs Education, University of Oslo, Oslo, Norway
- Ear, Nose and Throat Department, Oslo University Hospital, Oslo, Norway
- Ole Tvete
- Ear, Nose and Throat Department, Oslo University Hospital, Oslo, Norway
- Juha T Silvola
- Ear, Nose and Throat Department, Oslo University Hospital, Oslo, Norway
- Akershus University Hospital, Lørenskog, Norway
- Department of Clinical Medicine, University of Oslo, Oslo, Norway
2
Torppa R, Kuuluvainen S, Lipsanen J. The development of cortical processing of speech differs between children with cochlear implants and normal hearing and changes with parental singing. Front Neurosci 2022; 16:976767. [PMID: 36507354] [PMCID: PMC9731313] [DOI: 10.3389/fnins.2022.976767]
Abstract
Objective The aim of the present study was to investigate the development of speech processing in children with normal hearing (NH) and children with cochlear implants (CIs) using a multifeature event-related potential (ERP) paradigm. Singing is associated with enhanced attention and speech perception, so its connection to ERPs was investigated in the CI group. Methods The paradigm included five change types in a pseudoword: two that are easy to detect with CIs (duration, gap) and three that are difficult (vowel, pitch, intensity). The positive mismatch responses (pMMR), mismatch negativity (MMN), P3a and late differentiating negativity (LDN) responses of preschoolers (below 6 years 9 months) and schoolchildren (above 6 years 9 months) with NH or CIs at two time points (T1, T2) were investigated with linear mixed modeling (LMM). For the CI group, the association between singing at home and ERP development was modeled with LMM. Results Overall, responses elicited by the easy- and difficult-to-detect changes differed between the CI and NH groups. Compared to the NH group, the CI group had smaller MMNs to vowel duration changes and gaps, larger P3a responses to gaps, and larger pMMRs and smaller LDNs to vowel identity changes. Preschoolers had smaller P3a responses and larger LDNs to gaps, and larger pMMRs to vowel identity changes, than schoolchildren. In addition, the pMMRs to gaps increased from T1 to T2 in preschoolers. In the CI group, more parental singing was associated with increasing pMMR amplitudes, and less parental singing with decreasing P3a amplitudes, from T1 to T2. Conclusion The multifeature paradigm is suitable for assessing the development of cortical speech processing in children. In children with CIs, cortical discrimination is often reflected in pMMR and P3a responses, whereas in children with NH it is reflected in MMN and LDN responses. Moreover, the cortical speech discrimination of children with CIs develops late, but their processing of speech-sound changes does develop over time and age, as does that of children with NH. Importantly, multisensory activities such as parental singing can improve discrimination of, and attention shifting toward, speech changes in children with CIs. These novel results should be taken into account in future research and rehabilitation.
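Mismatch responses of the kind analyzed here are conventionally quantified as deviant-minus-standard difference waves, with amplitude averaged over an analysis window. A minimal sketch with simulated epochs; the sampling rate, window, and effect size are illustrative assumptions, not the study's parameters:

```python
import numpy as np

rng = np.random.default_rng(0)
fs = 500                             # sampling rate in Hz (assumed)
t = np.arange(-0.1, 0.5, 1 / fs)     # epoch from -100 to 500 ms

# Simulated single-trial epochs (trials x samples); real data would be
# baseline-corrected EEG segments time-locked to standard/deviant sounds.
standard = rng.normal(0.0, 1.0, (100, t.size))
deviant = rng.normal(0.0, 1.0, (100, t.size))
# Add a negative deflection around 150-250 ms to the deviant trials.
deviant += -2.0 * np.exp(-((t - 0.2) ** 2) / (2 * 0.03 ** 2))

def difference_wave(dev_epochs, std_epochs):
    """Deviant-minus-standard difference of the trial-averaged ERPs."""
    return dev_epochs.mean(axis=0) - std_epochs.mean(axis=0)

def mean_amplitude(wave, times, start, stop):
    """Mean of the difference wave inside an analysis window (seconds)."""
    mask = (times >= start) & (times < stop)
    return wave[mask].mean()

diff = difference_wave(deviant, standard)
mmn = mean_amplitude(diff, t, 0.15, 0.25)
print(mmn)  # negative: the simulated MMN-like deflection
```

A positive value in the same window would correspond to the pMMR polarity discussed above; the study's actual statistics come from linear mixed models over such per-condition amplitudes.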
Affiliation(s)
- Ritva Torppa
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Centre of Excellence in Music, Mind, Body and Brain, Faculty of Medicine, University of Helsinki, Helsinki, Finland
- Soila Kuuluvainen
- Cognitive Brain Research Unit, Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland; Department of Digital Humanities, Faculty of Arts, University of Helsinki, Helsinki, Finland
- Jari Lipsanen
- Department of Psychology and Logopedics, Faculty of Medicine, University of Helsinki, Helsinki, Finland
3
van Wieringen A, Magits S, Francart T, Wouters J. Home-Based Speech Perception Monitoring for Clinical Use With Cochlear Implant Users. Front Neurosci 2021; 15:773427. [PMID: 34916902] [PMCID: PMC8669965] [DOI: 10.3389/fnins.2021.773427]
Abstract
Speech-perception testing is essential for monitoring outcomes with a hearing aid or cochlear implant (CI). However, clinical care is time-consuming and often challenging with an increasing number of clients. A potential approach to alleviating some clinical care, and possibly making room for other outcome measures, is to employ technologies that assess performance in the home environment. In this study, we investigated three speech perception indices in the same 40 CI users: phoneme identification (vowels and consonants), digits in noise (DiN) and sentence recognition in noise (SiN). The first two tasks were implemented on a tablet and performed multiple times by each client in their home environment, while the sentence task was administered at the clinic. The outcomes showed that DiN assessed at home can serve as an alternative to SiN assessed at the clinic: DiN scores track the SiN scores, offset by 3–4 dB, and are useful for monitoring performance at regular intervals and detecting changes in auditory performance. Phoneme identification in quiet also explains a significant part of speech perception in noise, and provides additional information on the detectability and discriminability of speech cues. The added benefit of the phoneme identification task, which also proved easy to administer at home, is the information transmission analysis that accompanies the summary score. Performance changes on the different indices can be interpreted against measurement error and help to target personalized rehabilitation. Altogether, home-based speech testing is reliable and proves powerful to complement care in the clinic for CI users.
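The information transmission analysis mentioned above is typically computed in the Miller-Nicely style: the mutual information between stimulus and response categories, often per phonetic feature. A sketch with a hypothetical voicing-feature confusion matrix (the counts are invented for illustration):

```python
import math

def transmitted_info(confusions):
    """Mutual information (bits) from a confusion-count matrix
    given as {stimulus: {response: count}}."""
    n = sum(sum(row.values()) for row in confusions.values())
    # Marginal probabilities of stimuli and responses.
    p_stim = {s: sum(row.values()) / n for s, row in confusions.items()}
    p_resp = {}
    for row in confusions.values():
        for resp, c in row.items():
            p_resp[resp] = p_resp.get(resp, 0.0) + c / n
    t = 0.0
    for s, row in confusions.items():
        for resp, c in row.items():
            p = c / n
            if p > 0:
                t += p * math.log2(p / (p_stim[s] * p_resp[resp]))
    return t

# Hypothetical voicing confusions pooled over consonants; perfect
# transmission of this binary feature would give 1 bit.
voicing = {"voiced": {"voiced": 40, "unvoiced": 10},
           "unvoiced": {"voiced": 0, "unvoiced": 50}}
print(round(transmitted_info(voicing), 2))  # 0.61
```

Dividing by the stimulus entropy gives the relative transmission score often reported per feature (voicing, place, manner).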
Affiliation(s)
- Sara Magits
- Experimental ORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Tom Francart
- Experimental ORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
- Jan Wouters
- Experimental ORL, Department of Neurosciences, KU Leuven, Leuven, Belgium
4
Geller J, Holmes A, Schwalje A, Berger JI, Gander PE, Choi I, McMurray B. Validation of the Iowa Test of Consonant Perception. J Acoust Soc Am 2021; 150:2131. [PMID: 34598595] [PMCID: PMC8637717] [DOI: 10.1121/10.0006246]
Abstract
Speech perception (especially in background noise) is a critical problem for hearing-impaired listeners and an important issue for cognitive hearing science. Despite a plethora of standardized measures, few single-word closed-set tests uniformly sample the most frequently used phonemes and use response choices that equally sample phonetic features like place and voicing. The Iowa Test of Consonant Perception (ITCP) attempts to solve this. It is a proportionally balanced phonemic word recognition task designed to assess perception of the initial consonant of monosyllabic consonant-vowel-consonant (CVC) words. The ITCP consists of 120 sampled CVC words. Words were recorded from four different talkers (two female) and uniformly sampled from all four quadrants of the vowel space to control for coarticulation. Response choices on each trial are balanced to equate difficulty and sample a single phonetic feature. This study evaluated the psychometric properties of the ITCP by examining reliability (test-retest) and validity in a sample of online normal-hearing participants. Ninety-eight participants completed two sessions of the ITCP along with standardized tests of word and sentence recognition in noise (CNC words and AzBio sentences). The ITCP showed good test-retest reliability and convergent validity with two popular tests presented in noise. All the materials needed to use the ITCP or to construct your own version of the ITCP are freely available [Geller, McMurray, Holmes, and Choi (2020). https://osf.io/hycdu/].
Affiliation(s)
- Jason Geller
- Department of Psychological and Brain Sciences, University of Iowa, G60 Psychological and Brain Sciences Building, Iowa City, Iowa 52242, USA
- Ann Holmes
- Department of Psychological and Brain Sciences, University of Iowa, G60 Psychological and Brain Sciences Building, Iowa City, Iowa 52242, USA
- Adam Schwalje
- Department of Otolaryngology-Head and Neck Surgery, University of Iowa, 200 Hawkins Drive, 21151 Pomerantz Family Pavilion, Iowa City, Iowa 52242, USA
- Joel I Berger
- Department of Neurosurgery, University of Iowa, 200 Hawkins Drive, 1800 John Pappajohn Pavilion, Iowa City, Iowa 52242, USA
- Phillip E Gander
- Department of Neurosurgery, University of Iowa, 200 Hawkins Drive, 1800 John Pappajohn Pavilion, Iowa City, Iowa 52242, USA
- Inyong Choi
- Department of Communication Sciences and Disorders, University of Iowa, Wendell Johnson Speech and Hearing Center, Iowa City, Iowa 52242, USA
- Bob McMurray
- Department of Psychological and Brain Sciences, University of Iowa, G60 Psychological and Brain Sciences Building, Iowa City, Iowa 52242, USA
5
Links of Prosodic Stress Perception and Musical Activities to Language Skills of Children With Cochlear Implants and Normal Hearing. Ear Hear 2021; 41:395-410. [PMID: 31397704] [DOI: 10.1097/aud.0000000000000763]
Abstract
OBJECTIVES A major issue in the rehabilitation of children with cochlear implants (CIs) is unexplained variance in their language skills, where many of them lag behind children with normal hearing (NH). Here, we assess links between generative language skills and the perception of prosodic stress, and with musical and parental activities, in children with CIs and NH. Understanding these links is expected to guide future research toward supporting language development in children with a CI. DESIGN Twenty-one unilaterally and early-implanted children and 31 children with NH, aged 5 to 13, were classified as musically active or nonactive by a questionnaire recording regularity of musical activities, in particular singing, and of reading and other activities shared with parents. Perception of word and sentence stress, performance in word finding, verbal intelligence (VIQ; Wechsler Intelligence Scale for Children (WISC) vocabulary), and phonological awareness (production of rhymes) were measured in all children. Comparisons between children with a CI and NH were made against a subset of 21 of the children with NH who were matched to children with CIs by age, gender, socioeconomic background, and musical activity. Regression analyses, run separately for children with CIs and NH, assessed how much variance in each language task was shared with perception of prosodic stress, the child's own music activity, and activities with parents, including singing and reading. All statistical analyses were conducted both with and without control for age and maternal education. RESULTS Musically active children with CIs performed similarly to NH controls in all language tasks, while those who were not musically active performed more poorly. Only musically nonactive children with CIs made more phonological and semantic errors in word finding than NH controls, and word finding correlated with other language skills. Regression results for word finding and VIQ were similar for children with CIs and NH: these language skills shared considerable variance with the perception of prosodic stress and with musical activities. When age and maternal education were controlled for, strong links remained between perception of prosodic stress and VIQ (shared variance: CI, 32%; NH, 16%) and between musical activities and word finding (shared variance: CI, 53%; NH, 20%). Links were always stronger for children with CIs, for whom better phonological awareness was also linked to improved stress perception and more musical activity, and parental activities altogether shared significant variance with word finding and VIQ. CONCLUSIONS For children with CIs and NH, better perception of prosodic stress and musical activities with singing are associated with improved generative language skills. In addition, for children with CIs, parental singing has a stronger positive association with word finding and VIQ than parental reading. These results cannot address causality, but they suggest that good perception of prosodic stress, musical activities involving singing, and parental singing and reading may all be beneficial for word finding and other generative language skills in implanted children.
6
Abstract
INTRODUCTION Cochlear implants (CIs) are biomedical devices that restore sound perception for people with severe-to-profound sensorineural hearing loss. Most postlingually deafened CI users are able to achieve excellent speech recognition in quiet environments. However, current CI sound processors remain limited in their ability to deliver fine spectrotemporal information, making it difficult for CI users to perceive complex sounds. Limited access to complex acoustic cues such as music, environmental sounds, lexical tones, and voice emotion may have significant ramifications for quality of life, social development, and community interactions. AREAS COVERED The purpose of this review article is to summarize the literature on CIs and music perception, with an emphasis on music training in pediatric CI recipients. The findings have implications for our understanding of noninvasive, accessible methods for improving auditory processing and may help advance our ability to improve sound quality and performance for implantees. EXPERT OPINION Music training, particularly in the pediatric population, may continue to enhance auditory processing even after performance plateaus. The effects of these training programs appear generalizable to non-trained musical tasks, speech prosody, and emotion perception. Future studies should employ rigorous control groups involving a non-musical acoustic intervention, standardized auditory stimuli, and the provision of feedback.
Affiliation(s)
- Nicole T Jiam
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco School of Medicine, San Francisco, CA, USA
- Charles Limb
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco School of Medicine, San Francisco, CA, USA
7
Rødvik AK, Tvete O, Torkildsen JVK, Wie OB, Skaug I, Silvola JT. Consonant and Vowel Confusions in Well-Performing Children and Adolescents With Cochlear Implants, Measured by a Nonsense Syllable Repetition Test. Front Psychol 2019; 10:1813. [PMID: 31474900] [PMCID: PMC6702790] [DOI: 10.3389/fpsyg.2019.01813]
Abstract
Although the majority of early implanted, profoundly deaf children with cochlear implants (CIs), will develop correct pronunciation if they receive adequate oral language stimulation, many of them have difficulties with perceiving minute details of speech. The main aim of this study is to measure the confusion of consonants and vowels in well-performing children and adolescents with CIs. The study also aims to investigate how age at onset of severe to profound deafness influences perception. The participants are 36 children and adolescents with CIs (18 girls), with a mean (SD) age of 11.6 (3.0) years (range: 5.9-16.0 years). Twenty-nine of them are prelingually deaf and seven are postlingually deaf. Two reference groups of normal-hearing (NH) 6- and 13-year-olds are included. Consonant and vowel perception is measured by repetition of 16 bisyllabic vowel-consonant-vowel nonsense words and nine monosyllabic consonant-vowel-consonant nonsense words in an open-set design. For the participants with CIs, consonants were mostly confused with consonants with the same voicing and manner, and the mean (SD) voiced consonant repetition score, 63.9 (10.6)%, was considerably lower than the mean (SD) unvoiced consonant score, 76.9 (9.3)%. There was a devoicing bias for the stops; unvoiced stops were confused with other unvoiced stops and not with voiced stops, and voiced stops were confused with both unvoiced stops and other voiced stops. The mean (SD) vowel repetition score was 85.2 (10.6)% and there was a bias in the confusions of [i:] and [y:]; [y:] was perceived as [i:] twice as often as [y:] was repeated correctly. Subgroup analyses showed no statistically significant differences between the consonant scores for pre- and postlingually deaf participants. For the NH participants, the consonant repetition scores were substantially higher and the difference between voiced and unvoiced consonant repetition scores considerably lower than for the participants with CIs. 
The participants with CIs obtained scores close to ceiling on vowels and real-word monosyllables, but their perception was substantially lower for voiced consonants. This may partly be related to limitations in the CI technology for the transmission of low-frequency sounds, such as insertion depth of the electrode and ability to convey temporal information.
Affiliation(s)
- Arne Kirkhorn Rødvik
- Department of Special Needs Education, Institute of Educational Sciences, University of Oslo, Oslo, Norway; Cochlear Implant Unit, Department of Otorhinolaryngology, Division of Surgery and Clinical Neuroscience, Oslo University Hospital, Oslo, Norway
- Ole Tvete
- Cochlear Implant Unit, Department of Otorhinolaryngology, Division of Surgery and Clinical Neuroscience, Oslo University Hospital, Oslo, Norway
- Janne von Koss Torkildsen
- Department of Special Needs Education, Institute of Educational Sciences, University of Oslo, Oslo, Norway
- Ona Bø Wie
- Department of Special Needs Education, Institute of Educational Sciences, University of Oslo, Oslo, Norway; Cochlear Implant Unit, Department of Otorhinolaryngology, Division of Surgery and Clinical Neuroscience, Oslo University Hospital, Oslo, Norway
- Juha Tapio Silvola
- Department of Special Needs Education, Institute of Educational Sciences, University of Oslo, Oslo, Norway; Cochlear Implant Unit, Department of Otorhinolaryngology, Division of Surgery and Clinical Neuroscience, Oslo University Hospital, Oslo, Norway; Ear, Nose, and Throat Department, Division of Surgery, Akershus University Hospital, Lørenskog, Norway
8
Rødvik AK, von Koss Torkildsen J, Wie OB, Storaker MA, Silvola JT. Consonant and Vowel Identification in Cochlear Implant Users Measured by Nonsense Words: A Systematic Review and Meta-Analysis. J Speech Lang Hear Res 2018; 61:1023-1050. [PMID: 29623340] [DOI: 10.1044/2018_jslhr-h-16-0463]
Abstract
PURPOSE The purpose of this systematic review and meta-analysis was to establish a baseline of the vowel and consonant identification scores in prelingually and postlingually deaf users of multichannel cochlear implants (CIs) tested with consonant-vowel-consonant and vowel-consonant-vowel nonsense syllables. METHOD Six electronic databases were searched for peer-reviewed articles reporting consonant and vowel identification scores in CI users measured by nonsense words. Relevant studies were independently assessed and screened by 2 reviewers. Consonant and vowel identification scores were presented in forest plots and compared between studies in a meta-analysis. RESULTS Forty-seven articles with 50 studies, including 647 participants, of whom 581 were postlingually deaf and 66 prelingually deaf, met the inclusion criteria of this study. The mean performance on vowel identification tasks for the postlingually deaf CI users was 76.8% (N = 5), which was higher than the mean performance for the prelingually deaf CI users (67.7%; N = 1). The mean performance on consonant identification tasks for the postlingually deaf CI users was higher (58.4%; N = 44) than for the prelingually deaf CI users (46.7%; N = 6). The most common consonant confusions were found between those with same manner of articulation (/k/ as /t/, /m/ as /n/, and /p/ as /t/). CONCLUSIONS A baseline of the mean performance on consonant identification tasks for prelingually and postlingually deaf CI users was established. There were no statistically significant differences between the scores for prelingually and postlingually deaf CI users. The consonants that were incorrectly identified were typically confused with other consonants sharing the same acoustic properties, namely, voicing, duration, nasality, and silent gaps. A univariate metaregression model, although not statistically significant, indicated that duration of implant use in postlingually deaf adults predicts a substantial portion of their consonant identification ability. As there is no ceiling effect, a nonsense syllable identification test may be a useful addition to the standard test battery in audiology clinics when assessing the speech perception of CI users.
Affiliation(s)
- Arne Kirkhorn Rødvik
- Department of Special Needs Education, Faculty of Educational Sciences, University of Oslo, Norway
- Ona Bø Wie
- Department of Special Needs Education, Faculty of Educational Sciences, University of Oslo, Norway
- Oslo University Hospital, Norway
- Marit Aarvaag Storaker
- Institute of Basic Medical Sciences, Faculty of Medicine, University of Oslo, Norway
- Lillehammer Hospital, Norway
- Juha Tapio Silvola
- Oslo University Hospital, Norway
- Institute of Basic Medical Sciences, Faculty of Medicine, University of Oslo, Norway
- Akershus University Hospital, Lørenskog, Norway
9
McMurray B, Farris-Trimble A, Rigler H. Waiting for lexical access: Cochlear implants or severely degraded input lead listeners to process speech less incrementally. Cognition 2017; 169:147-164. [PMID: 28917133] [DOI: 10.1016/j.cognition.2017.08.013]
Abstract
Spoken language unfolds over time. Consequently, there are brief periods of ambiguity, when incomplete input can match many possible words. Typical listeners solve this problem by immediately activating multiple candidates which compete for recognition. In two experiments using the visual world paradigm, we examined real-time lexical competition in prelingually deaf cochlear implant (CI) users, and in normal hearing (NH) adults listening to severely degraded speech. In Experiment 1, adolescent CI users and NH controls matched spoken words to arrays of pictures including pictures of the target word and phonological competitors. Eye-movements to each referent were monitored as a measure of how strongly that candidate was considered over time. Relative to NH controls, CI users showed a large delay in fixating any object, less competition from onset competitors (e.g., sandwich after hearing sandal), and increased competition from rhyme competitors (e.g., candle after hearing sandal). Experiment 2 observed the same pattern with NH listeners hearing highly degraded speech. These studies suggest that, in contrast to all prior studies of word recognition in typical listeners, listeners recognizing words in severely degraded conditions can exhibit a substantively different pattern of dynamics, waiting to begin lexical access until substantial information has accumulated.
Affiliation(s)
- Bob McMurray
- Dept. of Psychological and Brain Sciences, University of Iowa, United States; Dept. of Communication Sciences and Disorders, University of Iowa, United States; Dept. of Otolaryngology, University of Iowa, United States; DeLTA Center, University of Iowa, United States
- Hannah Rigler
- Dept. of Psychological and Brain Sciences, University of Iowa, United States
10
The MMN as a viable and objective marker of auditory development in CI users. Hear Res 2017; 353:57-75. [DOI: 10.1016/j.heares.2017.07.007]
11
Grieco-Calub TM, Simeon KM, Snyder HE, Lew-Williams C. Word segmentation from noise-band vocoded speech. Lang Cogn Neurosci 2017; 32:1344-1356. [PMID: 29977950] [PMCID: PMC6028043] [DOI: 10.1080/23273798.2017.1354129]
Abstract
Spectral degradation reduces access to the acoustics of spoken language and compromises how learners break into its structure. We hypothesised that spectral degradation disrupts word segmentation, but that listeners can exploit other cues to restore detection of words. Normal-hearing adults were familiarised to artificial speech that was unprocessed or spectrally degraded by noise-band vocoding into 16 or 8 spectral channels. The monotonic speech stream was pause-free (Experiment 1), interspersed with isolated words (Experiment 2), or slowed by 33% (Experiment 3). Participants were tested on segmentation of familiar vs. novel syllable sequences and on recognition of individual syllables. As expected, vocoding hindered both word segmentation and syllable recognition. The addition of isolated words, but not slowed speech, improved segmentation. We conclude that syllable recognition is necessary but not sufficient for successful word segmentation, and that isolated words can facilitate listeners' access to the structure of acoustically degraded speech.
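Noise-band vocoding as used here replaces each frequency band's fine structure with noise while preserving the band's temporal envelope; fewer channels mean coarser spectral detail. A rough FFT-based sketch; the brick-wall band filters, 50 Hz envelope cutoff, and log-spaced band edges are simplifying assumptions, and published vocoders typically use Butterworth filters with half-wave rectification:

```python
import numpy as np

def noise_vocode(signal, fs, n_channels=8, fmin=100.0, fmax=8000.0):
    """Minimal noise-band vocoder sketch: split the signal into
    log-spaced bands, extract each band's envelope, and use it to
    modulate band-limited noise."""
    n = signal.size
    freqs = np.fft.rfftfreq(n, 1 / fs)
    edges = np.geomspace(fmin, fmax, n_channels + 1)
    spec = np.fft.rfft(signal)
    rng = np.random.default_rng(0)
    noise_spec = np.fft.rfft(rng.normal(0.0, 1.0, n))
    out = np.zeros(n)
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = (freqs >= lo) & (freqs < hi)
        # Band-limit the signal, take a crude envelope (rectify + smooth).
        band_sig = np.fft.irfft(np.where(band, spec, 0), n)
        env = np.abs(band_sig)
        env_spec = np.fft.rfft(env)
        env = np.fft.irfft(np.where(freqs < 50.0, env_spec, 0), n)
        env = np.clip(env, 0.0, None)
        # Band-limited noise carrier modulated by the envelope.
        carrier = np.fft.irfft(np.where(band, noise_spec, 0), n)
        out += env * carrier
    return out

fs = 16000
t = np.arange(0, 0.5, 1 / fs)
tone = np.sin(2 * np.pi * 440 * t)   # stand-in for a speech signal
vocoded = noise_vocode(tone, fs, n_channels=8)
```

Setting `n_channels` to 16 versus 8 mirrors the two degradation conditions in the study.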
Affiliation(s)
- Tina M. Grieco-Calub
- The Roxelyn & Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Katherine M. Simeon
- The Roxelyn & Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
- Hillary E. Snyder
- The Roxelyn & Richard Pepper Department of Communication Sciences and Disorders, Northwestern University, Evanston, IL, USA
12
Han W, Chun H, Kim G, Jin IK. Substitution Patterns of Phoneme Errors in Hearing Aid and Cochlear Implant Users. J Audiol Otol 2017; 21:28-32. [PMID: 28417105] [PMCID: PMC5392003] [DOI: 10.7874/jao.2017.21.1.28]
Abstract
Background and Objectives It is well established that speech perception errors increase with noise in listeners with sensorineural hearing loss, but detailed information about their error patterns is lacking. The purpose of the present study was to analyze substitution patterns of phoneme errors in Korean hearing aid (HA) and cochlear implant (CI) users who are postlingually deafened adults. Subjects and Methods In quiet and under two noise conditions, the phoneme errors of twenty HA and fourteen CI users were measured using monosyllabic words, and substitution patterns were analyzed in terms of manner of articulation. Results The results showed that both groups had a high percentage of nasal and plosive substitutions regardless of background condition. Conclusions These findings provide vital information for understanding the speech perception of hearing-impaired listeners and, when applied to auditory training, for improving their ability to communicate.
Collapse
Affiliation(s)
- Woojae Han, Division of Speech Pathology and Audiology, Research Institute of Audiology and Speech Pathology, College of Natural Science, Hallym University, Chuncheon, Korea
- Hyungi Chun, Department of Speech Pathology and Audiology, Hallym University Graduate School, Chuncheon, Korea
- Gibbeum Kim, Department of Speech Pathology and Audiology, Hallym University Graduate School, Chuncheon, Korea
- In-Ki Jin, Division of Speech Pathology and Audiology, Research Institute of Audiology and Speech Pathology, College of Natural Science, Hallym University, Chuncheon, Korea
13
Chun H, Ma S, Han W, Chun Y. Error Patterns Analysis of Hearing Aid and Cochlear Implant Users as a Function of Noise. J Audiol Otol 2015; 19:144-53. [PMID: 26771013] [PMCID: PMC4704547] [DOI: 10.7874/jao.2015.19.3.144]
Abstract
Background and Objectives Hearing-impaired listeners with similar pure-tone thresholds and audiometric configurations do not necessarily have the same speech perception ability. For this reason, the present study analyzed error patterns in hearing-impaired listeners compared to normal-hearing (NH) listeners as a function of signal-to-noise ratio (SNR). Subjects and Methods Forty-four adults participated: 10 listeners with NH, 20 hearing aid (HA) users, and 14 cochlear implant (CI) users. Korean standardized monosyllables were presented as stimuli in quiet and at three different SNRs. Errors were classified as substitution, omission, addition, fail, or no response, and visualized with stacked bar plots. Results The total error percentage for all three groups increased significantly as the SNR decreased. The NH group showed predominantly substitution errors regardless of SNR. In both the HA and CI groups, substitution errors declined and no-response errors emerged as the noise level increased. The CI group was characterized by fewer substitution and more fail errors than the HA group. Substitutions of initial and final phonemes in the HA and CI groups mainly involved place of articulation. However, the HA group tended to miss consonant place cues such as formant transitions and stop consonant bursts, whereas the CI group's confusions were mostly limited to nasal consonants with low-frequency characteristics. Interestingly, all three groups showed /k/ additions in the final phoneme, a trend that magnified as noise increased. Conclusions The HA and CI groups showed distinct error patterns even though their aided thresholds were similar. We expect these results to help auditory training for hearing-impaired listeners target the most frequent error patterns, thereby reducing those errors and improving speech perception.
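The substitution/omission/addition taxonomy used in this study can be sketched by aligning a target phoneme string against the listener's response. The following is an illustrative reconstruction, not the authors' scoring procedure: phonemes are represented as single characters and the alignment comes from Python's `difflib`.

```python
from difflib import SequenceMatcher

def classify_errors(target, response):
    """Label a repeated phoneme string with error types from the study's
    taxonomy. Illustrative only: phonemes are single characters and the
    alignment is difflib's, not the authors' scoring rules."""
    if response == target:
        return []                      # correct repetition: no errors
    if not response:
        return ["no response"]
    labels = []
    for op, *_ in SequenceMatcher(None, target, response).get_opcodes():
        if op == "replace":
            labels.append("substitution")
        elif op == "delete":
            labels.append("omission")
        elif op == "insert":
            labels.append("addition")
    return labels
```

For example, a final /k/ addition of the kind all three groups showed ("pa" repeated as "pak") is labeled "addition", while "pan" repeated as "pam" is a substitution.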
Affiliation(s)
- Hyungi Chun, Department of Speech Pathology and Audiology, Graduate School, Hallym University, Chuncheon, Korea
- Sunmi Ma, Department of Speech Pathology and Audiology, Graduate School, Hallym University, Chuncheon, Korea
- Woojae Han, Division of Speech Pathology and Audiology, Research Institute of Audiology and Speech Pathology, College of Natural Science, Hallym University, Chuncheon, Korea
14
Azadpour M, McKay CM, Smith RL. Estimating confidence intervals for information transfer analysis of confusion matrices. J Acoust Soc Am 2014; 135:EL140-EL146. [PMID: 24606307] [DOI: 10.1121/1.4865840]
Abstract
A non-parametric bootstrapping statistical method is introduced and investigated for estimating confidence intervals resulting from information transfer (IT) analysis of confusion matrices. Confidence intervals can be used to statistically compare ITs from two or more confusion matrices obtained in an experiment. Information transfer is a nonlinear analysis and does not satisfy many of the assumptions of a parametric method. The bootstrapping method accurately estimated IT confidence intervals as long as the confusion matrices contained a sufficiently large number of presentations per stimulus category, which is also a condition for reduced bias in IT analysis.
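A minimal sketch of the non-parametric bootstrap the abstract describes might look as follows, assuming a stimulus-by-response confusion matrix of raw counts. Function names and the percentile-interval choice are illustrative, not taken from the paper:

```python
import numpy as np

def info_transfer(counts):
    """Relative information transfer (Miller & Nicely, 1955): mutual
    information between stimulus and response divided by stimulus
    entropy, computed from a stimulus-by-response count matrix."""
    p = np.array(counts, dtype=float)          # copy, don't mutate input
    p /= p.sum()
    ps = p.sum(axis=1, keepdims=True)          # stimulus marginals
    pr = p.sum(axis=0, keepdims=True)          # response marginals
    joint = ps * pr                            # independence baseline
    mask = p > 0
    mi = (p[mask] * np.log2(p[mask] / joint[mask])).sum()
    hs = -(ps[ps > 0] * np.log2(ps[ps > 0])).sum()
    return mi / hs

def bootstrap_ci(counts, n_boot=2000, alpha=0.05, seed=None):
    """Percentile bootstrap CI for IT: resample each stimulus row as a
    multinomial draw from that row's observed response proportions."""
    rng = np.random.default_rng(seed)
    counts = np.asarray(counts, dtype=float)
    its = np.empty(n_boot)
    for b in range(n_boot):
        rows = [rng.multinomial(int(r.sum()), r / r.sum()) for r in counts]
        its[b] = info_transfer(np.vstack(rows))
    return tuple(np.quantile(its, [alpha / 2, 1 - alpha / 2]))
```

A perfectly diagonal matrix yields an IT of 1 and a uniform matrix yields 0; non-overlapping intervals from two conditions would indicate a statistically reliable IT difference, which is the comparison the paper targets.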
Affiliation(s)
- Mahan Azadpour, Institute for Sensory Research, Syracuse University, 621 Skytop Road, Syracuse, New York 13244
- Colette M McKay, Bionics Institute, 384 Albert Street, East Melbourne, Victoria 3002, Australia
- Robert L Smith, Institute for Sensory Research, Syracuse University, 621 Skytop Road, Syracuse, New York 13244
15
Torppa R, Faulkner A, Huotilainen M, Järvikivi J, Lipsanen J, Laasonen M, Vainio M. The perception of prosody and associated auditory cues in early-implanted children: The role of auditory working memory and musical activities. Int J Audiol 2014; 53:182-91. [DOI: 10.3109/14992027.2013.872302]
16
Pottackal Mathai J, Yathiraj A. Effect of temporal modification and vowel context on speech perception in individuals with auditory neuropathy spectrum disorder (ANSD). Hear Balance Commun 2013. [DOI: 10.3109/21695717.2013.817064]
17
Shafiro V, Levy ES, Khamis-Dakwar R, Kharkhurin A. Perceptual confusions of American-English vowels and consonants by native Arabic bilinguals. Lang Speech 2013; 56:145-161. [PMID: 23905278] [DOI: 10.1177/0023830912442925]
Abstract
This study investigated the perception of American-English (AE) vowels and consonants by young adults who were either (a) early Arabic-English bilinguals whose native language was Arabic or (b) native speakers of the English dialects spoken in the United Arab Emirates (UAE), where both groups were studying. In a closed-set format, participants were asked to identify 12 AE vowels presented in /hVd/ context and 20 AE consonants (C) in three vocalic contexts: /aCa/, /iCi/, and /uCu/. Both native Arabic and native English groups demonstrated high accuracy in identification of vowels (70 and 80% correct, respectively) and consonants (94 and 95% correct, respectively). For both groups, the least accurately identified vowels were /o/, /(see text)/, and /æ/, while most consonant errors were found for /(see text)/, which was most frequently confused with /v/. However, for both groups, identification of /(see text)/ was vocalic-context dependent, with most errors occurring in /iCi/ context and fewest errors occurring in /uCu/ context. The lack of significant group differences suggests that speech sound identification patterns, including phonetic context effects for /(see text)/, were influenced more by the local English dialects than by listeners' Arabic language background. The findings also demonstrate consistent perceptual error patterns among listeners despite considerable variation in their native and second language dialectal backgrounds.
18
Todd AE, Edwards JR, Litovsky RY. Production of contrast between sibilant fricatives by children with cochlear implants. J Acoust Soc Am 2011; 130:3969-3979. [PMID: 22225051] [PMCID: PMC3253598] [DOI: 10.1121/1.3652852]
Abstract
Speech production by children with cochlear implants (CIs) is generally less intelligible and less accurate on a phonemic level than that of normally hearing children. Research has reported that children with CIs produce less acoustic contrast between phonemes than normally hearing children, but these studies have included correct and incorrect productions. The present study compared the extent of contrast between correct productions of /s/ and /∫/ by children with CIs and two comparison groups: (1) normally hearing children of the same chronological age as the children with CIs and (2) normally hearing children with the same duration of auditory experience. Spectral peaks and means were calculated from the frication noise of productions of /s/ and /∫/. Results showed that the children with CIs produced less contrast between /s/ and /∫/ than normally hearing children of the same chronological age and normally hearing children with the same duration of auditory experience due to production of /s/ with spectral peaks and means at lower frequencies. The results indicate that there may be differences between the speech sounds produced by children with CIs and their normally hearing peers even for sounds that adults judge as correct.
Affiliation(s)
- Ann E Todd, University of Wisconsin Waisman Center, 1500 Highland Avenue, Madison, Wisconsin 53705, USA
19
Within-subjects comparison of the HiRes and Fidelity120 speech processing strategies: speech perception and its relation to place-pitch sensitivity. Ear Hear 2011; 32:238-50. [PMID: 21084987] [DOI: 10.1097/aud.0b013e3181fb8390]
Abstract
OBJECTIVES Previous studies have confirmed that current steering can increase the number of discriminable pitches available to many cochlear implant (CI) users; however, the ability to perceive additional pitches has not been linked to improved speech perception. The primary goals of this study were to determine (1) whether adult CI users can achieve higher levels of spectral cue transmission with a speech processing strategy that implements current steering (Fidelity120) than with a predecessor strategy (HiRes) and, if so, (2) whether the magnitude of improvement can be predicted from individual differences in place-pitch sensitivity. A secondary goal was to determine whether Fidelity120 supports higher levels of speech recognition in noise than HiRes. DESIGN A within-subjects repeated measures design evaluated speech perception performance with Fidelity120 relative to HiRes in 10 adult CI users. Subjects used the novel strategy (either HiRes or Fidelity120) for 8 wks during the main study; a subset of five subjects used Fidelity120 for three additional months after the main study. Speech perception was assessed for the spectral cues related to vowel F1 frequency, vowel F2 frequency, and consonant place of articulation; overall transmitted information for vowels and consonants; and sentence recognition in noise. Place-pitch sensitivity was measured for electrode pairs in the apical, middle, and basal regions of the implanted array using a psychophysical pitch-ranking task. RESULTS With one exception, there was no effect of strategy (HiRes versus Fidelity120) on the speech measures tested, either during the main study (N = 10) or after extended use of Fidelity120 (N = 5). The exception was a small but significant advantage for HiRes over Fidelity120 for consonant perception during the main study. 
Examination of individual subjects' data revealed that 3 of 10 subjects demonstrated improved perception of one or more spectral cues with Fidelity120 relative to HiRes after 8 wks or longer experience with Fidelity120. Another three subjects exhibited initial decrements in spectral cue perception with Fidelity120 at the 8-wk time point; however, evidence from one subject suggested that such decrements may resolve with additional experience. Place-pitch thresholds were inversely related to improvements in vowel F2 frequency perception with Fidelity120 relative to HiRes. However, no relationship was observed between place-pitch thresholds and the other spectral measures (vowel F1 frequency or consonant place of articulation). CONCLUSIONS Findings suggest that Fidelity120 supports small improvements in the perception of spectral speech cues in some Advanced Bionics CI users; however, many users show no clear benefit. Benefits are more likely to occur for vowel spectral cues (related to F1 and F2 frequency) than for consonant spectral cues (related to place of articulation). There was an inconsistent relationship between place-pitch sensitivity and improvements in spectral cue perception with Fidelity120 relative to HiRes. This may partly reflect the small number of sites at which place-pitch thresholds were measured. Contrary to some previous reports, there was no clear evidence that Fidelity120 supports improved sentence recognition in noise.
20
Svirsky MA, Sagi E, Meyer TA, Kaiser AR, Teoh SW. A mathematical model of medial consonant identification by cochlear implant users. J Acoust Soc Am 2011; 129:2191-2200. [PMID: 21476674] [PMCID: PMC3087396] [DOI: 10.1121/1.3531806]
Abstract
The multidimensional phoneme identification model is applied to consonant confusion matrices obtained from 28 postlingually deafened cochlear implant users. This model predicts consonant matrices based on these subjects' ability to discriminate a set of postulated spectral, temporal, and amplitude speech cues as presented to them by their device. The model produced confusion matrices that matched many aspects of individual subjects' consonant matrices, including information transfer for the voicing, manner, and place features, despite individual differences in age at implantation, implant experience, device and stimulation strategy used, as well as overall consonant identification level. The model was able to match the general pattern of errors between consonants, but not the full complexity of all consonant errors made by each individual. The present study represents an important first step in developing a model that can be used to test specific hypotheses about the mechanisms cochlear implant users employ to understand speech.
Affiliation(s)
- Mario A Svirsky, Department of Otolaryngology, New York University School of Medicine, New York, New York 10016, USA
21
Leybaert J, LaSasso CJ. Cued speech for enhancing speech perception and first language development of children with cochlear implants. Trends Amplif 2010; 14:96-112. [PMID: 20724357] [PMCID: PMC4111351] [DOI: 10.1177/1084713810375567]
Abstract
Nearly 300 million people worldwide have moderate to profound hearing loss. Hearing impairment, if not adequately managed, has strong socioeconomic and affective impact on individuals. Cochlear implants have become the most effective vehicle for helping profoundly deaf children and adults to understand spoken language, to be sensitive to environmental sounds, and, to some extent, to listen to music. The auditory information delivered by the cochlear implant remains non-optimal for speech perception because it delivers a spectrally degraded signal and lacks some of the fine temporal acoustic structure. In this article, we discuss research revealing the multimodal nature of speech perception in normally-hearing individuals, with important inter-subject variability in the weighting of auditory or visual information. We also discuss how audio-visual training, via Cued Speech, can improve speech perception in cochlear implantees, particularly in noisy contexts. Cued Speech is a system that makes use of visual information from speechreading combined with hand shapes positioned in different places around the face in order to deliver completely unambiguous information about the syllables and the phonemes of spoken language. We support our view that exposure to Cued Speech before or after the implantation could be important in the aural rehabilitation process of cochlear implantees. We describe five lines of research that are converging to support the view that Cued Speech can enhance speech perception in individuals with cochlear implants.
22
Nishi K, Lewis DE, Hoover BM, Choi S, Stelmachowicz PG. Children's recognition of American English consonants in noise. J Acoust Soc Am 2010; 127:3177-88. [PMID: 21117766] [PMCID: PMC2882671] [DOI: 10.1121/1.3377080]
Abstract
In contrast to the many consonant confusion studies with adults, to date no investigators have compared children's consonant confusion patterns in noise to those of adults within a single study. To examine whether children's error patterns are similar to those of adults, three groups of children (24 each aged 4-5, 6-7, and 8-9 years) and 24 adult native speakers of American English (AE) performed a recognition task for 15 AE consonants in /ɑ/-consonant-/ɑ/ nonsense syllables presented in a background of speech-shaped noise. Three signal-to-noise ratios (SNR: 0, +5, and +10 dB) were used. Although performance improved with age, overall consonant recognition accuracy as a function of SNR improved at a similar rate for all groups. Detailed analyses using phonetic features (manner, place, and voicing) revealed that stop consonants were the most problematic for all groups. In addition, for the younger children, front consonants presented in the 0 dB SNR condition were more error-prone than others. These results suggest that children's use of phonetic cues does not develop at the same rate for all phonetic features.
Affiliation(s)
- Kanae Nishi, Boys Town National Research Hospital, 555 North 30th Street, Omaha, Nebraska 68131, USA
23
Kwon BJ. Effects of electrode separation between speech and noise signals on consonant identification in cochlear implants. J Acoust Soc Am 2009; 126:3258-3267. [PMID: 20000939] [PMCID: PMC2803724] [DOI: 10.1121/1.3257200]
Abstract
The aim of the present study was to examine cochlear implant (CI) users' perceptual segregation of speech from background noise with differing degrees of electrode separation between speech and noise. Eleven users of the Nucleus CI system were tested on consonant identification using an experimental processing scheme called "multi-stream processing," in which speech and noise stimuli were processed separately and interleaved. Speech was presented to either ten electrodes (every other electrode) or six electrodes (every fourth electrode). Noise was routed either to the same electrodes (the "overlapped" condition) or to a different set (the "interlaced" condition), where speech and noise electrodes were separated by one- and two-electrode spacings for the ten- and six-electrode presentations, respectively. Results indicated a small but significant improvement in consonant recognition (5%-10%) in the interlaced condition with a two-electrode spacing (approximately 1.1 mm) in two subjects. The results appear to have been influenced by peripheral channel interactions, which partially account for individual variability. Although the overall effect was small and observed in a small number of subjects, the present study demonstrated that CI users' performance in segregating a target from the background might improve if these sounds are presented with sufficient peripheral separation.
Affiliation(s)
- Bom Jun Kwon, Department of Communication Sciences and Disorders, University of Utah, 390 S 1530 E, Salt Lake City, Utah 84112, USA
24
Sagi E, Svirsky MA. Information transfer analysis: a first look at estimation bias. J Acoust Soc Am 2008; 123:2848-2857. [PMID: 18529200] [PMCID: PMC2677320] [DOI: 10.1121/1.2897914]
Abstract
Information transfer analysis [G. A. Miller and P. E. Nicely, J. Acoust. Soc. Am. 27, 338-352 (1955)] is a tool used to measure the extent to which speech features are transmitted to a listener, e.g., duration or formant frequencies for vowels; voicing, place and manner of articulation for consonants. An information transfer of 100% occurs when no confusions arise between phonemes belonging to different feature categories, e.g., between voiced and voiceless consonants. Conversely, an information transfer of 0% occurs when performance is purely random. As asserted by Miller and Nicely, the maximum-likelihood estimate for information transfer is biased to overestimate its true value when the number of stimulus presentations is small. This small-sample bias is examined here for three cases: a model of random performance with pseudorandom data, a data set drawn from Miller and Nicely, and reported data from three studies of speech perception by hearing impaired listeners. The amount of overestimation can be substantial, depending on the number of samples, the size of the confusion matrix analyzed, as well as the manner in which data are partitioned therein.
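The small-sample overestimation the abstract describes is easy to reproduce by simulation. The sketch below (parameter choices are illustrative, not the paper's) scores a purely random listener, whose true information transfer is 0, and shows the maximum-likelihood estimate staying above 0 and shrinking as the number of presentations per stimulus grows:

```python
import numpy as np

def info_transfer(counts):
    """Relative IT (mutual information / stimulus entropy) from a
    stimulus-by-response count matrix, as in Miller & Nicely (1955)."""
    p = np.array(counts, dtype=float)   # copy, don't mutate input
    p /= p.sum()
    ps = p.sum(axis=1, keepdims=True)   # stimulus marginals
    pr = p.sum(axis=0, keepdims=True)   # response marginals
    mask = p > 0
    mi = (p[mask] * np.log2(p[mask] / (ps * pr)[mask])).sum()
    hs = -(ps[ps > 0] * np.log2(ps[ps > 0])).sum()
    return mi / hs

def mean_it_random_listener(n_per_stim, k=16, n_sim=300, seed=0):
    """Average estimated IT for a listener guessing uniformly among
    k responses (true IT = 0), with n_per_stim trials per stimulus."""
    rng = np.random.default_rng(seed)
    est = [info_transfer(rng.multinomial(n_per_stim, np.full(k, 1 / k), size=k))
           for _ in range(n_sim)]
    return float(np.mean(est))

bias_small = mean_it_random_listener(10)    # few trials: large upward bias
bias_large = mean_it_random_listener(200)   # many trials: bias shrinks
```

With 16 stimuli and only 10 presentations each, the random listener's estimated IT sits well above its true value of 0, which is why the bootstrap paper above conditions its accuracy claim on a sufficiently large number of presentations per stimulus category.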
Affiliation(s)
- Elad Sagi, Department of Otolaryngology, New York University School of Medicine, 550 First Avenue, NBV-5E5, New York, New York 10016, USA