1
Borjigin A, Bakst S, Anderson K, Litovsky RY, Niziolek CA. Discrimination and sensorimotor adaptation of self-produced vowels in cochlear implant users. J Acoust Soc Am 2024;155:1895-1908. PMID: 38456732. DOI: 10.1121/10.0025063.
Abstract
Humans rely on auditory feedback to monitor and adjust their speech for clarity. Cochlear implants (CIs) have restored access to auditory feedback for over a million people, which significantly improves speech production; however, outcomes vary substantially. This study investigates the extent to which CI users can use their auditory feedback to detect self-produced sensory errors and adjust their speech, given the coarse spectral resolution provided by their implants. First, we used an auditory discrimination task to assess the sensitivity of CI users to small differences in the formant frequencies of their self-produced vowels. Then, CI users produced words with altered auditory feedback in order to assess sensorimotor adaptation to auditory error. Almost half of the CI users tested could detect small, within-channel differences in their self-produced vowels, and they could use this auditory feedback to adapt their speech. An acoustic-hearing control group showed better sensitivity to the vowel shifts, even in CI-simulated speech, and exhibited more robust speech adaptation than the CI users. Nevertheless, this study confirms that CI users can compensate for sensory errors in their speech and supports the idea that sensitivity to these errors may relate to variability in production.
Affiliation(s)
- Agudemu Borjigin
  - Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
- Sarah Bakst
  - Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
- Katla Anderson
  - Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
- Ruth Y Litovsky
  - Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
  - Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
- Caroline A Niziolek
  - Waisman Center, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
  - Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, Wisconsin 53705, USA
2
Hartman J, Saffran J, Litovsky R. Word Learning in Deaf Adults Who Use Cochlear Implants: The Role of Talker Variability and Attention to the Mouth. Ear Hear 2024;45:337-350. PMID: 37695563. PMCID: PMC10920394. DOI: 10.1097/aud.0000000000001432.
Abstract
OBJECTIVES Although cochlear implants (CIs) facilitate spoken language acquisition, many CI listeners experience difficulty learning new words. Studies have shown that highly variable stimulus input and audiovisual cues improve speech perception in CI listeners. However, less is known about whether these two factors improve perception in a word-learning context. Furthermore, few studies have examined how CI listeners direct their gaze to efficiently capture the visual information available on a talker's face. The purpose of this study was two-fold: (1) to examine whether talker variability could improve word learning in CI listeners and (2) to examine how CI listeners direct their gaze while viewing a talker speak. DESIGN Eighteen adults with CIs and 10 adults with normal hearing (NH) learned eight novel word-object pairs spoken by a single talker or by six different talkers (multiple talkers). The word-learning task consisted of nonsense words following the phonotactic rules of English. Learning was probed using a novel talker in a two-alternative forced-choice eye-gaze task. Learners' eye movements to the mouth and the target object (accuracy) were tracked over time. RESULTS Both groups performed near ceiling during the test phase, regardless of whether they learned from a single talker or multiple talkers. However, compared to listeners with NH, CI listeners directed their gaze significantly more to the talker's mouth while learning the words. CONCLUSIONS Unlike NH listeners, who can successfully learn words without focusing on the talker's mouth, CI listeners tended to direct their gaze to the talker's mouth, which may facilitate learning. This finding is consistent with the hypothesis that CI listeners use a visual processing strategy that efficiently captures the redundant audiovisual speech cues available at the mouth. Due to ceiling effects, however, it is unclear whether talker variability facilitated word learning for adult CI listeners, an issue that should be addressed in future work using more difficult listening conditions.
Affiliation(s)
- Jasenia Hartman
  - Department of Psychology and Neuroscience, Duke University, Durham, NC 27708
  - Neuroscience Training Program, University of Wisconsin-Madison, Madison, WI 53706
- Jenny Saffran
  - Department of Psychology, University of Wisconsin-Madison, Madison, WI 53706
- Ruth Litovsky
  - Neuroscience Training Program, University of Wisconsin-Madison, Madison, WI 53706
  - Department of Communication Sciences and Disorders, University of Wisconsin-Madison, Madison, WI 53706
3
Cuadros J, Z-Rivera L, Castro C, Whitaker G, Otero M, Weinstein A, Martínez-Montes E, Prado P, Zañartu M. DIVA Meets EEG: Model Validation Using Formant-Shift Reflex. Appl Sci (Basel) 2023;13:7512. PMID: 38435340. PMCID: PMC10906992. DOI: 10.3390/app13137512.
Abstract
The neurocomputational model 'Directions into Velocities of Articulators' (DIVA) was developed to account for various aspects of normal and disordered speech production and acquisition. The neural substrates of DIVA were established through functional magnetic resonance imaging (fMRI), providing physiological validation of the model. This study introduces DIVA_EEG, an extension of DIVA that uses electroencephalography (EEG) to leverage the high temporal resolution and broader availability of EEG relative to fMRI. For the development of DIVA_EEG, EEG-like signals were derived from the original equations describing the activity of the different DIVA maps. Synthetic EEG associated with the utterance of syllables was generated while simulating both unperturbed and perturbed (first-formant-shifted) auditory feedback. The cortical activation maps derived from the synthetic EEG closely resembled those of the original DIVA model. To validate DIVA_EEG, EEG was acquired from individuals with typical voices (N = 30) during an altered auditory feedback paradigm. The resulting empirical brain activity maps significantly overlapped with those predicted by DIVA_EEG. In conjunction with other recent model extensions, DIVA_EEG lays the foundation for a complete neurocomputational framework for tackling vocal and speech disorders, which could guide model-driven personalized interventions.
Affiliation(s)
- Jhosmary Cuadros
  - Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
  - Advanced Center for Electrical and Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
  - Grupo de Bioingeniería, Decanato de Investigación, Universidad Nacional Experimental del Táchira, San Cristóbal 5001, Venezuela
- Lucía Z-Rivera
  - Advanced Center for Electrical and Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
  - Escuela de Ingeniería Civil Biomédica, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso 2350026, Chile
- Christian Castro
  - Advanced Center for Electrical and Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
  - Escuela de Ingeniería Civil Biomédica, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso 2350026, Chile
- Grace Whitaker
  - Advanced Center for Electrical and Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
- Mónica Otero
  - Facultad de Ingeniería, Arquitectura y Diseño, Universidad San Sebastián, Santiago 8420524, Chile
  - Centro Basal Ciencia & Vida, Universidad San Sebastián, Santiago 8580000, Chile
- Alejandro Weinstein
  - Advanced Center for Electrical and Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
  - Escuela de Ingeniería Civil Biomédica, Facultad de Ingeniería, Universidad de Valparaíso, Valparaíso 2350026, Chile
- Pavel Prado
  - Escuela de Fonoaudiología, Facultad de Odontología y Ciencias de la Rehabilitación, Universidad San Sebastián, Santiago 7510602, Chile
- Matías Zañartu
  - Department of Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
  - Advanced Center for Electrical and Electronic Engineering, Universidad Técnica Federico Santa María, Valparaíso 2390123, Chile
4
Bochner J, Samar V, Prud'hommeaux E, Huenerfauth M. Phoneme Categorization in Prelingually Deaf Adult Cochlear Implant Users. J Speech Lang Hear Res 2022;65:4429-4453. PMID: 36279201. DOI: 10.1044/2022_jslhr-22-00038.
Abstract
PURPOSE Phoneme categorization (PC) for voice onset time and second formant transition was studied in adult cochlear implant (CI) users with early-onset deafness and hearing controls. METHOD Identification and discrimination tasks were administered to 30 participants implanted before 4 years of age, 21 participants implanted after 7 years of age, and 21 hearing individuals. RESULTS Distinctive identification and discrimination functions confirmed PC within all groups. Compared to hearing participants, the CI groups generally displayed longer/higher category boundaries, shallower identification function slopes, reduced identification consistency, and reduced discrimination performance. A principal component analysis revealed that identification consistency, discrimination accuracy, and identification function slope, but not boundary location, loaded on a single factor, reflecting general PC performance. Earlier implantation was associated with better PC performance within the early CI group, but not the late CI group. Within the early CI group, earlier implantation age but not PC performance was associated with better speech recognition. Conversely, within the late CI group, better PC performance but not earlier implantation age was associated with better speech recognition. CONCLUSIONS Results suggest that implantation timing within the sensitive period before 4 years of age partly determines the level of PC performance. They also suggest that early implantation may promote development of higher level processes that can compensate for relatively poor PC performance, as can occur in challenging listening conditions.
Affiliation(s)
- Joseph Bochner
  - National Technical Institute for the Deaf, Rochester Institute of Technology, NY
- Vincent Samar
  - National Technical Institute for the Deaf, Rochester Institute of Technology, NY
- Matt Huenerfauth
  - Golisano College of Computing and Information Sciences, Rochester Institute of Technology, NY
5
Wang Y, Sibaii F, Lee K, Gill MJ, Hatch JL. Meta-Analytic Findings on Reading in Children With Cochlear Implants. J Deaf Stud Deaf Educ 2021;26:336-350. PMID: 33993237. PMCID: PMC8208105. DOI: 10.1093/deafed/enab010.
Abstract
This meta-analysis quantifies group differences in reading skills between children with cochlear implants and their hearing peers, and between children with cochlear implants and children with hearing aids (aged 3 to 18 years). Of the 5,642 articles screened, 47 (published between 2002 and 2019) met the predetermined inclusion criteria. Meta-analysis models based on robust variance estimation were used to synthesize all effect sizes. Children with cochlear implants scored significantly lower than their hearing peers in phonological awareness (g = -1.62, p < 0.001), vocabulary (g = -1.50, p < 0.001), decoding (g = -1.24, p < 0.001), and reading comprehension (g = -1.39, p < 0.001), but not in fluency (g = -0.67, p = 0.054). Compared to children with hearing aids, children with cochlear implants scored significantly lower in phonological awareness (g = -0.30, p = 0.028). The percentage of participants with unilateral cochlear implants negatively affected the group difference between children with cochlear implants and their hearing peers. Findings from this study confirm a positive shift in reading outcomes for profoundly deaf children due to cochlear implantation, although some children with cochlear implants may need additional support in educational settings.
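The g values reported above are Hedges' g effect sizes. As a pointer to how such values are computed, here is a minimal sketch of Hedges' g from two group summaries; the numbers below are hypothetical illustrations, not data from this meta-analysis:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges' g: standardized mean difference with small-sample bias correction."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp                # Cohen's d
    j = 1 - 3 / (4 * (n1 + n2) - 9)   # Hedges' small-sample correction factor
    return j * d

# Hypothetical group summaries (NOT data from the study):
# CI group mean 80 (SD 15, n 30) vs. hearing peers mean 100 (SD 12, n 30)
print(round(hedges_g(80.0, 15.0, 30, 100.0, 12.0, 30), 2))  # -> -1.45
```

A negative g, as throughout the abstract, indicates that the CI group scored below the comparison group in units of pooled standard deviation.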
Affiliation(s)
- Yingying Wang
  - Neuroimaging for Language, Literacy and Learning Lab, Department of Special Education and Communication Disorders, University of Nebraska, Lincoln, NE 68583, USA
  - Center for Brain, Biology and Behavior, University of Nebraska, Lincoln, NE 68588, USA
  - Nebraska Center for Research on Children, Youth, Families and Schools, University of Nebraska, Lincoln, NE 68583, USA
- Fatima Sibaii
  - Neuroimaging for Language, Literacy and Learning Lab, Department of Special Education and Communication Disorders, University of Nebraska, Lincoln, NE 68583, USA
  - Center for Brain, Biology and Behavior, University of Nebraska, Lincoln, NE 68588, USA
- Kejin Lee
  - Nebraska Center for Research on Children, Youth, Families and Schools, University of Nebraska, Lincoln, NE 68583, USA
- Makayla J Gill
  - Neuroimaging for Language, Literacy and Learning Lab, Department of Special Education and Communication Disorders, University of Nebraska, Lincoln, NE 68583, USA
  - Center for Brain, Biology and Behavior, University of Nebraska, Lincoln, NE 68588, USA
- Jonathan L Hatch
  - Department of Otolaryngology-Head & Neck Surgery, University of Nebraska Medical Center, Omaha, NE 68198, USA
6
Individual Variability in Recalibrating to Spectrally Shifted Speech: Implications for Cochlear Implants. Ear Hear 2021;42:1412-1427. PMID: 33795617. DOI: 10.1097/aud.0000000000001043.
Abstract
OBJECTIVES Cochlear implant (CI) recipients are at a severe disadvantage compared with normal-hearing listeners in distinguishing consonants that differ by place of articulation, because the key spectral differences are degraded by the implant. One component of that degradation is the upward shift of spectral energy that occurs with a shallow CI insertion depth. The present study systematically measured the effects of spectral shifting on word recognition and phoneme categorization by controlling the amount of shifting and using stimuli whose identification depends specifically on perceiving frequency cues. We hypothesized that listeners would be biased toward perceiving phonemes that contain higher-frequency components because of the upward frequency shift, and that intelligibility would decrease as spectral shifting increased. DESIGN Normal-hearing listeners (n = 15) heard sine wave-vocoded speech with simulated upward frequency shifts of 0, 2, 4, and 6 mm of cochlear space to simulate shallow CI insertion depth. Stimuli included monosyllabic words and /b/-/d/ and /∫/-/s/ continua that varied systematically in formant frequency transitions or frication noise spectral peaks, respectively. Recalibration to spectral shifting was operationally defined as shifting perceptual acoustic-phonetic mapping commensurate with the spectral shift: adjusting frequency expectations for both phonemes upward so that a perceptual distinction remains, rather than hearing all upward-shifted phonemes as the higher-frequency member of the pair. RESULTS For moderate amounts of spectral shifting, group data suggested a general "halfway" recalibration to spectral shifting, but individual data suggested a notably different conclusion: half of the listeners recalibrated fully, while the other half were essentially unable to categorize shifted speech reliably. No participant showed a pattern intermediate to these two extremes. Word intelligibility decreased with greater amounts of spectral shifting, also showing loose clusters of better- and poorer-performing listeners. Phonetic analysis of word errors revealed that certain cues (place and manner of articulation) were more susceptible to being compromised by a frequency shift, while voicing was robust to spectral shifting. CONCLUSIONS Shifting the frequency spectrum of speech has systematic effects that are in line with known properties of speech acoustics, but the ensuing difficulties cannot be predicted from tonotopic mismatch alone. Difficulties are subject to substantial individual differences in the capacity to adjust acoustic-phonetic mapping. These results help to explain why speech recognition in CI listeners cannot be fully predicted by peripheral factors like electrode placement and spectral resolution; even among listeners with functionally equivalent auditory input, there is an additional factor of simply being able or unable to flexibly adjust acoustic-phonetic mapping. This individual variability could motivate precise treatment approaches guided by an individual's relative reliance on wideband frequency representation (even if mismatched) or on limited frequency coverage whose tonotopy is preserved.
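The shifts expressed in "mm of cochlear space" can be made concrete with Greenwood's (1990) place-frequency map. The sketch below uses the standard human Greenwood constants (A = 165.4, a = 0.06/mm, k = 0.88), which are an assumption on my part rather than parameters reported in this study, to show roughly how far a 1 kHz component moves under each simulated basal shift:

```python
import math

# Greenwood (1990) place-to-frequency map for the human cochlea:
#   f(x) = A * (10**(a*x) - k), with x = distance from the apex in mm.
A, a, k = 165.4, 0.06, 0.88  # standard human constants (assumed, not from the study)

def greenwood_freq(x_mm):
    """Characteristic frequency (Hz) at cochlear place x_mm from the apex."""
    return A * (10 ** (a * x_mm) - k)

def place_mm(f_hz):
    """Inverse map: cochlear place (mm from apex) for frequency f_hz."""
    return math.log10(f_hz / A + k) / a

def shifted_freq(f_hz, shift_mm):
    """Frequency at the place reached after a basal (upward) shift of shift_mm."""
    return greenwood_freq(place_mm(f_hz) + shift_mm)

# How far a 1 kHz component moves under the simulated 0/2/4/6 mm shifts:
for shift in (0, 2, 4, 6):
    print(f"{shift} mm -> {shifted_freq(1000.0, shift):.0f} Hz")
```

Even a 2 mm basal shift moves a 1 kHz component by several hundred hertz, which illustrates why the larger shifts in the study place such heavy demands on acoustic-phonetic recalibration.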
7
Grandon B, Vilain A. Development of fricative production in French-speaking school-aged children using cochlear implants and children with normal hearing. J Commun Disord 2020;86:105996. PMID: 32485648. DOI: 10.1016/j.jcomdis.2020.105996.
Abstract
In the course of productive phonological development, fricatives are among the last speech sounds to emerge and be mastered by children, probably because of the high degree of articulatory precision they require or because of difficulties with their perception. Children with cochlear implants (CIs) face additional difficulties with fricative perception, since high-frequency spectral components are especially difficult to perceive with a cochlear implant. Studying fricative production in children with CIs makes it possible to examine how the partial transmission of speech sounds by cochlear implants influences children's speech production, and therefore how perceptual abilities influence the late stages of phonological development. This acoustic study focuses on fricative production at three places of articulation (/f/, /s/, and /ʃ/), comparing productions by two groups of children (20 children with normal hearing (NH) vs. 13 children with CIs, all aged 5;7 to 10;7 years) and taking into account their consistency in coarticulation and the stability of their production across two different tasks (word repetition and picture naming). Statistical analyses were carried out with linear mixed-effects models. The results show that while both groups produce /ʃ/ with similar acoustic characteristics, between-group differences are found for /f/ and /s/. Furthermore, effects of consonant-vowel coarticulation are found for children with NH but are absent for children with CIs. Effects of chronological age are found only for children with CIs (production in older children with CIs nearing that of children with NH). Our study shows that the development of fricative production in five- to 11-year-old children with CIs is affected by the children's hearing abilities and late access to auditory information. These limitations, however, do not prevent the children from eventually reaching a consistency similar to that of children with NH, as suggested by the fact that their production is still evolving during that age span. The results also show that the acquisition of coarticulation strategies can be impeded by degraded or delayed access to auditory input.
Affiliation(s)
- Bénédicte Grandon
  - Université Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
- Anne Vilain
  - Université Grenoble Alpes, CNRS, Grenoble INP, GIPSA-lab, Grenoble, France
  - Institut Universitaire de France, France
8
Balkenhol T, Wallhäusser-Franke E, Rotter N, Servais JJ. Changes in Speech-Related Brain Activity During Adaptation to Electro-Acoustic Hearing. Front Neurol 2020;11:161. PMID: 32300327. PMCID: PMC7145411. DOI: 10.3389/fneur.2020.00161.
Abstract
Objectives: Hearing improves significantly with bimodal provision, i.e., a cochlear implant (CI) at one ear and a hearing aid (HA) at the other, but performance is highly variable, resulting in substantial uncertainty about the outcome an individual CI user can expect. The objective of this study was to explore how the auditory event-related potentials (AERPs) of bimodal listeners in response to spoken words approximate the electrophysiological response of normal-hearing (NH) listeners. Study Design: Exploratory prospective analysis during the first 6 months of bimodal listening using a within-subject repeated-measures design. Setting: Academic tertiary care center. Participants: Twenty-seven adults with bilateral sensorineural hearing loss who received a HiRes 90K CI and continued use of a HA at the non-implanted ear. Age-matched NH listeners served as controls. Intervention: Cochlear implantation. Main Outcome Measures: Obligatory auditory evoked potentials N1 and P2, and the event-related N2 potential, in response to monosyllabic words and their reversed sound traces before, as well as 3 and 6 months after, implantation. The task required word/non-word classification. Stimuli were presented in speech-modulated noise. Loudness of word/non-word signals was adjusted individually to achieve the same intelligibility across groups and assessments. Results: Intelligibility improved significantly with bimodal hearing, and the N1-P2 response approximated the morphology seen in NH listeners, with enhanced and earlier responses to words compared to their reversals. For bimodal listeners, a prominent negative deflection (N2) was present between 370 and 570 ms post stimulus onset, irrespective of stimulus type. This was absent in NH controls; hence, this response did not approximate the NH response during the study interval. N2 source localization showed extended activation of general cognitive regions in frontal and prefrontal cortex in the CI group. Conclusions: Prolonged and spatially extended processing in bimodal CI users suggests the recruitment of additional auditory-cognitive mechanisms during speech processing. This did not diminish within 6 months of bimodal experience and may be a correlate of the enhanced listening effort reported by CI listeners.
9
Ohashi H, Ito T. Recalibration of auditory perception of speech due to orofacial somatosensory inputs during speech motor adaptation. J Neurophysiol 2019;122:2076-2084. PMID: 31509469. DOI: 10.1152/jn.00028.2019.
Abstract
Speech motor control and learning rely on both somatosensory and auditory inputs. Somatosensory inputs associated with speech production can also affect auditory perception of speech, and this somatosensory-auditory interaction may play a fundamental role in speech perception. In this report, we show that the somatosensory system contributes to perceptual recalibration, separate from its role in motor function. Subjects underwent speech motor adaptation to altered auditory feedback. Auditory perception of speech was assessed in phonemic identification tests before and after adaptation. To investigate the role of the somatosensory system in motor adaptation and the subsequent perceptual change, we applied orofacial skin stretch in either a backward or forward direction during the auditory feedback alteration as a somatosensory modulation. We found that the somatosensory modulation did not affect the amount of adaptation at the end of training, although it changed the rate of adaptation. However, perception following speech adaptation was altered depending on the direction of the somatosensory modulation. Somatosensory inflow, rather than motor outflow, thus drives changes to auditory perception of speech following speech adaptation, suggesting that somatosensory inputs play an important role in tuning the perceptual system. NEW & NOTEWORTHY This article reports that the somatosensory system contributes not equally with the motor system, but predominantly, to the calibration of auditory perception of speech by speech production.
Affiliation(s)
- Hiroki Ohashi
  - Department of Psychology, McGill University, Montreal, Quebec, Canada
  - Haskins Laboratories, New Haven, Connecticut
- Takayuki Ito
  - Haskins Laboratories, New Haven, Connecticut
  - Centre National de la Recherche Scientifique, GIPSA-Lab, Grenoble Institute of Technology, University of Grenoble-Alpes, Saint Martin d'Heres, France
10
Ruff S, Bocklet T, Nöth E, Müller J, Hoster E, Schuster M. Speech Production Quality of Cochlear Implant Users with Respect to Duration and Onset of Hearing Loss. ORL J Otorhinolaryngol Relat Spec 2017;79:282-294. PMID: 29131113. DOI: 10.1159/000479819.
Abstract
PURPOSE To assess whether postlingual onset and shorter duration of deafness before cochlear implant (CI) provision predict higher speech intelligibility in CI users. METHODS For an objective judgement of speech intelligibility, we used an automatic speech recognition system to compute the word recognition rate (WR) of 50 adult CI users and 50 age-matched control individuals, all recorded reading a standardized text. Subjects were divided into three groups: pre- or perilingual deafness, both >2 years before implantation (A); postlingual deafness <2 years before implantation (B); or postlingual deafness >2 years before implantation (C). RESULTS CI users with a short duration of postlingual deafness (B) had a significantly higher WR (median 74%) than CI users with a long duration of postlingual deafness (C; 68%, p < 0.001) or with pre-/perilingual onset (A; 56%, p < 0.001). Compared to their control groups, only CI users with a short duration of postlingual deafness reached similar WRs; the other groups showed significantly lower WRs. Other factors, such as onset of hearing loss, duration of CI use, or duration of amplified hearing, showed no consistent influence on speech quality. CONCLUSIONS The speech production quality of adult CI users depends on the onset and duration of deafness. These factors need to be considered when planning rehabilitation.
Affiliation(s)
- Suzan Ruff
  - ORL Clinic Frankfurt/Oder, Frankfurt/Oder, Germany
11
Neural Correlates of Phonetic Learning in Postlingually Deafened Cochlear Implant Listeners. Ear Hear 2016;37:514-528. DOI: 10.1097/aud.0000000000000287.
12
McMurray B, Farris-Trimble A, Seedorff M, Rigler H. The Effect of Residual Acoustic Hearing and Adaptation to Uncertainty on Speech Perception in Cochlear Implant Users: Evidence From Eye-Tracking. Ear Hear 2016;37:e37-e51. PMID: 26317298. PMCID: PMC4717908. DOI: 10.1097/aud.0000000000000207.
Abstract
OBJECTIVES While outcomes with cochlear implants (CIs) are generally good, performance can be fragile. The authors examined two factors that are crucial for good CI performance. First, while there is a clear benefit of adding residual acoustic hearing to CI stimulation (typically in low frequencies), it is unclear whether this contributes directly to phonetic categorization. Thus, the authors examined perception of voicing (which uses low-frequency acoustic cues) and fricative place of articulation (s/∫, which does not) in CI users with and without residual acoustic hearing. Second, in speech categorization experiments, CI users typically show shallower identification functions, which are usually interpreted as deriving from noisy encoding of the signal. However, psycholinguistic work suggests shallow slopes may also be a useful way to adapt to uncertainty. The authors thus employed an eye-tracking paradigm to examine this in CI users. DESIGN Participants were 30 CI users (with a variety of configurations) and 22 age-matched normal-hearing (NH) controls. Participants heard tokens from six b/p and six s/∫ continua (eight steps) spanning real words (e.g., beach/peach, sip/ship). Participants selected the picture corresponding to the word they heard from a screen containing four items (a b-, p-, s-, and ∫-initial item). Eye movements to each object were monitored as a measure of how strongly participants were considering each interpretation in the moments leading up to their final percept. RESULTS Mouse-click results (analogous to phoneme identification) for voicing showed a shallower slope for CI users than NH listeners, but no differences between CI users with and without residual acoustic hearing. For fricatives, CI users also showed a shallower slope, but unexpectedly, acoustic + electric listeners showed an even shallower slope. Eye movements showed a gradient response to fine-grained acoustic differences for all listeners. Even considering only trials in which a participant clicked "b" (for example), and accounting for variation in the category boundary, participants made more looks to the competitor ("p") as the voice onset time neared the boundary. CI users showed a similar pattern but looked to the competitor more than NH listeners, and this did not differ across continuum steps. CONCLUSION Residual acoustic hearing did not improve voicing categorization, suggesting it may not help identify these phonetic cues. The fact that acoustic + electric users showed poorer performance on fricatives was unexpected, as they usually show a benefit on standardized perception measures and as sibilants contain little energy in the low-frequency (acoustic) range. The authors hypothesize that these listeners may overweight acoustic input and have problems when it is unavailable (as in fricatives). Thus, the benefit (or cost) of acoustic hearing for phonetic categorization may be complex. Eye movements suggest that in both CI and NH listeners, phoneme categorization is not a process of mapping continuous cues to discrete categories. Rather, listeners preserve gradiency as a way to deal with uncertainty. CI listeners appear to adapt to their implant (in part) by amplifying competitor activation to preserve their flexibility in the face of potential misperceptions.
Affiliation(s)
- Bob McMurray
- Departments of Psychological and Brain Sciences, Communication Sciences and Disorders, and Linguistics, University of Iowa, Iowa City, Iowa, USA
| | - Ashley Farris-Trimble
- Department of Linguistics, Simon Fraser University, Burnaby, British Columbia, Canada
| | - Michael Seedorff
- Department of Biostatistics, University of Iowa, Iowa City, Iowa, USA
| | - Hannah Rigler
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, Iowa, USA
13
Nittrouer S, Caldwell-Tarr A, Moberly AC, Lowenstein JH. Perceptual weighting strategies of children with cochlear implants and normal hearing. JOURNAL OF COMMUNICATION DISORDERS 2014; 52:111-133. [PMID: 25307477 PMCID: PMC4250394 DOI: 10.1016/j.jcomdis.2014.09.003] [Citation(s) in RCA: 15] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 11/25/2013] [Revised: 08/12/2014] [Accepted: 09/01/2014] [Indexed: 05/30/2023]
Abstract
PURPOSE This study compared perceptual weighting strategies of children with cochlear implants (CIs) and children with normal hearing (NH), and asked if strategies are explained solely by degraded spectral representations, or if diminished language experience accounts for some of the effect. Relationships between weighting strategies and other language skills were examined. METHOD One hundred 8-year-olds (49 with NH and 51 with CIs) were tested on four measures: (1) labeling of cop-cob and sa-sha stimuli; (2) discrimination of the acoustic cues to the cop-cob decision; (3) phonemic awareness; and (4) word recognition. RESULTS No differences in weighting of cues to the cop-cob decision were observed between children with CIs and NH, suggesting that language experience was sufficient for the children with CIs. Differences in weighting of cues to the sa-sha decision were found, but were not entirely explained by auditory sensitivity. Weighting strategies were related to phonemic awareness and word recognition. CONCLUSIONS More salient cues facilitate stronger weighting of those cues. Nonetheless, individuals differ in how salient cues need to be to capture perceptual attention. Familiarity with stimuli also affects how reliably children attend to acoustic cues. Training should help children with CIs learn to categorize speech sounds with less-salient cues. LEARNING OUTCOMES After reading this article, the learner should be able to: (1) recognize methods and motivations for studying perceptual weighting strategies in speech perception; (2) explain how signal quality and language experience affect the development of weighting strategies for children with cochlear implants and children with normal hearing; and (3) summarize the importance of perceptual weighting strategies for other aspects of language functioning.
14
Abstract
OBJECTIVE A key ingredient of academic success is being able to read. Deaf individuals have historically failed to develop literacy skills comparable with those of their normal-hearing (NH) peers, but early identification and cochlear implants (CIs) have improved prospects such that these children can learn to read at the levels of their peers. The goal of this study was to examine early, or emergent, literacy in these children. METHOD Twenty-seven deaf children with CIs who had just completed kindergarten were tested on emergent literacy and on the cognitive and linguistic skills that support it, specifically phonological awareness, executive functioning, and oral language. Seventeen kindergartners with NH and eight with hearing loss who used hearing aids served as controls. Outcomes were compared for these three groups of children, regression analyses were performed to see whether predictor variables for emergent literacy differed for children with NH and those with CIs, and factors related to the early treatment of hearing loss and prosthesis configuration were examined for children with CIs. RESULTS The performance of children with CIs was roughly 1 SD or more below the mean performance of children with NH on all tasks except syllable counting, reading fluency, and rapid serial naming. Oral language skills explained more variance in emergent literacy for children with CIs than for children with NH. Age at first implant explained moderate amounts of variance for several measures. Having one or two CIs had no effect, but children who had some amount of bimodal experience outperformed children who had none on several measures. CONCLUSIONS Even deaf children who have benefited from early identification, intervention, and implantation remain at risk for problems with emergent literacy that could affect their academic success. This finding means that intensive language support needs to continue through at least the early elementary grades. A period of bimodal stimulation during the preschool years can also help boost emergent literacy skills to some extent.
15
Perkell JS. Movement goals and feedback and feedforward control mechanisms in speech production. JOURNAL OF NEUROLINGUISTICS 2012; 25:382-407. [PMID: 22661828 PMCID: PMC3361736 DOI: 10.1016/j.jneuroling.2010.02.011] [Citation(s) in RCA: 61] [Impact Index Per Article: 5.1] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/04/2023]
Abstract
Studies of speech motor control are described that support a theoretical framework in which the fundamental control variables for phonemic movements are multi-dimensional regions in auditory and somatosensory spaces. Auditory feedback is used to acquire and maintain auditory goals and supports the development and function of feedback and feedforward control mechanisms. Several lines of evidence support the idea that speakers with more acute sensory discrimination acquire more distinct goal regions and therefore produce speech sounds with greater contrast. Feedback-modification findings indicate that fluently produced sound sequences are encoded as feedforward commands, with feedback control serving to correct mismatches between expected and produced sensory consequences.
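The feedforward-plus-feedback architecture summarized here can be sketched as a minimal discrete-time control loop. This is an illustrative toy under stated assumptions, not Perkell's model: the scalar formant state, the target value, the feedback gain, and the perfect "plant" (output equals command) are all invented for the sketch.

```python
# Toy sketch of combined feedforward/feedback control of one formant (Hz).
# Target, gain, and plant are illustrative assumptions, not parameters
# from the studies described above.

def control_step(target, feedforward_cmd, heard, fb_gain=0.3):
    """One control cycle: issue the feedforward command, then correct any
    mismatch between the auditory goal and the heard output."""
    error = target - heard          # sensory mismatch
    correction = fb_gain * error    # feedback controller
    return feedforward_cmd + correction

# The feedforward command starts mistuned; feedback gradually repairs it,
# mimicking how fluent sequences come to be encoded as feedforward commands.
target = 500.0      # assumed auditory goal (e.g., the F1 of a vowel)
cmd = 450.0         # initial (mistuned) feedforward command
for _ in range(20):
    heard = cmd     # idealized plant: acoustic output equals the command
    cmd = control_step(target, cmd, heard)

print(round(cmd, 1))  # command has converged to the 500.0 Hz goal
```

The design point the sketch captures is that once the command converges, the error term is near zero, so the feedback pathway contributes almost nothing and production runs essentially feedforward.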
Affiliation(s)
- Joseph S Perkell
- Speech Communication Group, Massachusetts Institute of Technology, Research Laboratory of Electronics, Room 36-591, 50 Vassar St., Cambridge, MA 02139-4307, United States
16
Todd AE, Edwards JR, Litovsky RY. Production of contrast between sibilant fricatives by children with cochlear implants. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2011; 130:3969-3979. [PMID: 22225051 PMCID: PMC3253598 DOI: 10.1121/1.3652852] [Citation(s) in RCA: 23] [Impact Index Per Article: 1.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/03/2010] [Revised: 09/13/2011] [Accepted: 09/16/2011] [Indexed: 05/28/2023]
Abstract
Speech production by children with cochlear implants (CIs) is generally less intelligible and less accurate on a phonemic level than that of normally hearing children. Research has reported that children with CIs produce less acoustic contrast between phonemes than normally hearing children, but these studies have included correct and incorrect productions. The present study compared the extent of contrast between correct productions of /s/ and /ʃ/ by children with CIs and two comparison groups: (1) normally hearing children of the same chronological age as the children with CIs and (2) normally hearing children with the same duration of auditory experience. Spectral peaks and means were calculated from the frication noise of productions of /s/ and /ʃ/. Results showed that the children with CIs produced less contrast between /s/ and /ʃ/ than normally hearing children of the same chronological age and normally hearing children with the same duration of auditory experience due to production of /s/ with spectral peaks and means at lower frequencies. The results indicate that there may be differences between the speech sounds produced by children with CIs and their normally hearing peers even for sounds that adults judge as correct.
Affiliation(s)
- Ann E Todd
- University of Wisconsin Waisman Center, 1500 Highland Avenue, Madison, Wisconsin 53705, USA
17
Tourville JA, Guenther FH. The DIVA model: A neural theory of speech acquisition and production. LANGUAGE AND COGNITIVE PROCESSES 2011; 26:952-981. [PMID: 23667281 DOI: 10.1080/01690960903498424] [Citation(s) in RCA: 369] [Impact Index Per Article: 28.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/08/2023]
Abstract
The DIVA model of speech production provides a computationally and neuroanatomically explicit account of the network of brain regions involved in speech acquisition and production. An overview of the model is provided along with descriptions of the computations performed in the different brain regions represented in the model. The latest version of the model, which contains a new right-lateralized feedback control map in ventral premotor cortex, will be described, and experimental results that motivated this new model component will be discussed. Application of the model to the study and treatment of communication disorders will also be briefly described.
Affiliation(s)
- Jason A Tourville
- Department of Cognitive and Neural Systems, Boston University, 677 Beacon Street, Boston, MA 02215, USA
18
Arweiler-Harbeck D, Janeschik S, Lang S, Bagus H. Suitability of Auditory Speech Sound Evaluation (A§E®) in German cochlear implant patients. Eur Arch Otorhinolaryngol 2011; 268:1259-66. [DOI: 10.1007/s00405-011-1505-2] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.2] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/08/2010] [Accepted: 01/20/2011] [Indexed: 11/30/2022]
19
Giezen MR, Escudero P, Baker A. Use of acoustic cues by children with cochlear implants. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2010; 53:1440-1457. [PMID: 20689031 DOI: 10.1044/1092-4388(2010/09-0252)] [Citation(s) in RCA: 24] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/29/2023]
Abstract
PURPOSE This study examined the use of different acoustic cues in auditory perception of consonant and vowel contrasts by profoundly deaf children with a cochlear implant (CI) in comparison to age-matched children and young adults with normal hearing. METHOD A speech sound categorization task in an XAB format was administered to 15 children ages 5-6 with a CI (mean age at implant: 1;8 [years;months]), 20 normal-hearing age-matched children, and 21 normal-hearing adults. Four contrasts were examined: //-/a/, /i/-/i/, /bu/-/pu/, and /fu/-/su/. Measures included phoneme endpoint identification, individual cue reliance, cue weighting, and classification slope. RESULTS The children with a CI used the spectral cues in the /fu/-/su/ contrast less effectively than the children with normal hearing, resulting in poorer phoneme endpoint identification and a shallower classification slope. Performance on the other 3 contrasts did not differ significantly. Adults consistently showed steeper classification slopes than the children, but similar cue-weighting patterns were observed in all 3 groups. CONCLUSIONS Despite their different auditory input, children with a CI appear to be able to use many acoustic cues effectively in speech perception. Most importantly, children with a CI and normal-hearing children were observed to use similar cue-weighting patterns.
Affiliation(s)
- Marcel R Giezen
- Amsterdam Center for Language and Communication, University of Amsterdam, the Netherlands.
20
Shiller DM, Sato M, Gracco VL, Baum SR. Perceptual recalibration of speech sounds following speech motor learning. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2009; 125:1103-13. [PMID: 19206885 DOI: 10.1121/1.3058638] [Citation(s) in RCA: 69] [Impact Index Per Article: 4.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/14/2023]
Abstract
The functional sensorimotor nature of speech production has been demonstrated in studies examining speech adaptation to auditory and/or somatosensory feedback manipulations. These studies have focused primarily on flexible motor processes to explain their findings, without considering modifications to sensory representations resulting from the adaptation process. The present study explores whether the perceptual representation of the /s/-/ʃ/ contrast may be adjusted following the alteration of auditory feedback during the production of /s/-initial words. Consistent with prior studies of speech adaptation, talkers exposed to the feedback manipulation were found to adapt their motor plans for /s/ production in order to compensate for the effects of the sensory perturbation. In addition, a shift in the /s/-/ʃ/ category boundary was observed that reduced the functional impact of the auditory feedback manipulation by increasing the perceptual "distance" between the category boundary and subjects' altered /s/ stimuli, a pattern of perceptual adaptation that was not observed in two separate control groups. These results suggest that speech adaptation to altered auditory feedback is not limited to the motor domain, but rather involves changes in both motor output and auditory representations of speech sounds that together act to reduce the impact of the perturbation.
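The reported boundary shift can be made concrete with a small worked example. All numbers below (centroid frequencies in Hz, shift size) are invented for illustration; only the direction of the effect, the boundary moving away from the talker's altered productions, reflects the finding described above.

```python
# Toy numerical illustration of perceptual recalibration: shifting the
# fricative category boundary away from a talker's perturbed /s/ stimuli
# increases the perceptual "distance" between boundary and stimulus.
# All values are hypothetical, not measurements from the study.

baseline_boundary = 5000.0   # hypothetical pre-adaptation category boundary
altered_s = 5400.0           # hypothetical centroid of the altered /s/ stimuli

# Perceptual adaptation: the boundary recalibrates away from the stimuli.
shifted_boundary = baseline_boundary - 300.0

dist_before = altered_s - baseline_boundary   # distance pre-adaptation
dist_after = altered_s - shifted_boundary     # distance post-adaptation
print(dist_before, dist_after)                # the gap widens: 400.0 -> 700.0
```

The widened gap is what keeps the altered productions securely on the /s/ side of the boundary, reducing the functional impact of the perturbation without any further motor change.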
Affiliation(s)
- Douglas M Shiller
- School of Communication Sciences and Disorders, McGill University, Montreal, Quebec, Canada.