1
Taitelbaum-Swead R, Ben-David BM. The Role of Early Intact Auditory Experience on the Perception of Spoken Emotions, Comparing Prelingual to Postlingual Cochlear Implant Users. Ear Hear 2024:00003446-990000000-00312. [PMID: 39004788 DOI: 10.1097/aud.0000000000001550]
Abstract
OBJECTIVES Cochlear implants (CI) are remarkably effective, but have limitations regarding the transformation of the spectro-temporal fine structures of speech. This may impair processing of spoken emotions, which involves the identification and integration of semantic and prosodic cues. Our previous study found spoken-emotions-processing differences between CI users with postlingual deafness (postlingual CI) and normal hearing (NH) matched controls (age range, 19 to 65 years). Postlingual CI users over-relied on semantic information in incongruent trials (prosody and semantics present different emotions), but rated congruent trials (same emotion) similarly to controls. Postlingual CI users' intact early auditory experience may explain this pattern of results. The present study examined whether CI users without intact early auditory experience (prelingual CI) would generally perform worse on spoken emotion processing than NH and postlingual CI users, and whether CI use would affect prosodic processing in both CI groups. First, we compared prelingual CI users with their NH controls. Second, we compared the results of the present study to our previous study (Taitelbaum-Swead et al. 2022; postlingual CI). DESIGN Fifteen prelingual CI users and 15 NH controls (age range, 18 to 31 years) listened to spoken sentences composed of different combinations (congruent and incongruent) of three discrete emotions (anger, happiness, sadness) and neutrality (performance baseline), presented in prosodic and semantic channels (Test for Rating of Emotions in Speech paradigm). Listeners were asked to rate (six-point scale) the extent to which each of the predefined emotions was conveyed by the sentence as a whole (integration of prosody and semantics), or to focus only on one channel (rating the target emotion [RTE]) and ignore the other (selective attention). In addition, all participants performed standard tests of speech perception.
Performance on the Test for Rating of Emotions in Speech was compared with the previous study (postlingual CI). RESULTS When asked to focus on one channel, semantics or prosody, both CI groups showed a decrease in prosodic RTE (compared with controls), but only the prelingual CI group showed a decrease in semantic RTE. When the task called for channel integration, both groups of CI users used semantic emotional information to a greater extent than their NH controls. Both groups of CI users rated sentences that did not present the target emotion higher than their NH controls, indicating some degree of confusion. However, only the prelingual CI group rated congruent sentences lower than their NH controls, suggesting reduced accumulation of information across channels. For prelingual CI users, individual differences in identification of monosyllabic words were significantly related to semantic identification and semantic-prosodic integration. CONCLUSIONS Taken together with our previous study, we found that the degradation of acoustic information by the CI impairs the processing of prosodic emotions, in both CI user groups. This distortion appears to lead CI users to over-rely on the semantic information when asked to integrate across channels. Early intact auditory exposure among CI users was found to be necessary for the effective identification of semantic emotions, as well as the accumulation of emotional information across the two channels. Results suggest that interventions for spoken-emotion processing should not ignore the onset of hearing loss.
Affiliation(s)
- Riki Taitelbaum-Swead
- Department of Communication Disorders, Speech Perception and Listening Effort Lab in the name of Prof. Mordechai Himelfarb, Ariel University, Israel
- Meuhedet Health Services, Tel Aviv, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- KITE Research Institute, Toronto Rehabilitation Institute-University Health Network, Toronto, Ontario, Canada
2
Nagels L, Gaudrain E, Vickers D, Hendriks P, Başkent D. Prelingually Deaf Children With Cochlear Implants Show Better Perception of Voice Cues and Speech in Competing Speech Than Postlingually Deaf Adults With Cochlear Implants. Ear Hear 2024; 45:952-968. [PMID: 38616318 PMCID: PMC11175806 DOI: 10.1097/aud.0000000000001489]
Abstract
OBJECTIVES Postlingually deaf adults with cochlear implants (CIs) have difficulties with perceiving differences in speakers' voice characteristics and benefit little from voice differences for the perception of speech in competing speech. However, not much is known yet about the perception and use of voice characteristics in prelingually deaf implanted children with CIs. Unlike CI adults, most CI children became deaf during the acquisition of language. Extensive neuroplastic changes during childhood could make CI children better at using the available acoustic cues than CI adults, or the lack of exposure to a normal acoustic speech signal could make it more difficult for them to learn which acoustic cues they should attend to. This study aimed to examine to what degree CI children can perceive voice cues and benefit from voice differences for perceiving speech in competing speech, comparing their abilities to those of normal-hearing (NH) children and CI adults. DESIGN Three experiments examined CI children's voice cue discrimination (experiment 1), voice gender categorization (experiment 2), and benefit from target-masker voice differences for perceiving speech in competing speech (experiment 3). The main focus was on the perception of mean fundamental frequency (F0) and vocal-tract length (VTL), the primary acoustic cues related to speakers' anatomy and perceived voice characteristics, such as voice gender. RESULTS CI children's F0 and VTL discrimination thresholds indicated lower sensitivity to differences compared with their NH-age-equivalent peers, but their mean discrimination thresholds of 5.92 semitones (st) for F0 and 4.10 st for VTL indicated higher sensitivity than postlingually deaf CI adults with mean thresholds of 9.19 st for F0 and 7.19 st for VTL. Furthermore, CI children's perceptual weighting of F0 and VTL cues for voice gender categorization closely resembled that of their NH-age-equivalent peers, in contrast with CI adults.
Finally, CI children had more difficulties in perceiving speech in competing speech than their NH-age-equivalent peers, but they performed better than CI adults. Unlike CI adults, CI children showed a benefit from target-masker voice differences in F0 and VTL, similar to NH children. CONCLUSION Although CI children's F0 and VTL voice discrimination scores were overall lower than those of NH children, their weighting of F0 and VTL cues for voice gender categorization and their benefit from target-masker differences in F0 and VTL resembled that of NH children. Together, these results suggest that prelingually deaf implanted CI children can effectively utilize spectrotemporally degraded F0 and VTL cues for voice and speech perception, generally outperforming postlingually deaf CI adults in comparable tasks. These findings underscore the presence of F0 and VTL cues in the CI signal to a certain degree and suggest other factors contributing to the perception challenges faced by CI adults.
Affiliation(s)
- Leanne Nagels
- Center for Language and Cognition Groningen (CLCG), University of Groningen, Groningen, The Netherlands
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- Etienne Gaudrain
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- CNRS UMR 5292, Lyon Neuroscience Research Center, Auditory Cognition and Psychoacoustics, Inserm UMRS 1028, Université Claude Bernard Lyon 1, Université de Lyon, Lyon, France
- Deborah Vickers
- Cambridge Hearing Group, Sound Lab, Clinical Neurosciences Department, University of Cambridge, Cambridge, United Kingdom
- Petra Hendriks
- Center for Language and Cognition Groningen (CLCG), University of Groningen, Groningen, The Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- W.J. Kolff Institute for Biomedical Engineering and Materials Science, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
3
Buss E, Richter ME, Sweeney VN, Davis AG, Dillon MT, Park LR. Effect of Age and Unaided Acoustic Hearing on Pediatric Cochlear Implant Users' Ability to Distinguish Yes/No Statements and Questions. J Speech Lang Hear Res 2024; 67:1932-1944. [PMID: 38748909 DOI: 10.1044/2024_jslhr-23-00631]
Abstract
PURPOSE The purpose of this study was to evaluate the ability to discriminate yes/no questions from statements in three groups of children: bilateral cochlear implant (CI) users, nontraditional CI users with aidable hearing preoperatively in the ear to be implanted, and controls with normal hearing. Half of the nontraditional CI users had sufficient postoperative acoustic hearing in the implanted ear to use electric-acoustic stimulation, and half used a CI alone. METHOD Participants heard recorded sentences that were produced either as yes/no questions or as statements by three male and three female talkers. Three raters scored each participant response as either a question or a statement. Bilateral CI users (n = 40, 4-12 years old) and normal-hearing controls (n = 10, 4-12 years old) were tested binaurally in the free field. Nontraditional CI recipients (n = 22, 6-17 years old) were tested with direct audio input to the study ear. RESULTS For the bilateral CI users, performance was predicted by age but not by 125-Hz acoustic thresholds; just under half (n = 17) of the participants in this group had measurable 125-Hz thresholds in their better ear. For nontraditional CI recipients, better performance was predicted by lower 125-Hz acoustic thresholds in the test ear, and there was no association with participant age. Performance approached that of the normal-hearing controls for some participants in each group. CONCLUSIONS Results suggest that acoustic hearing at 125 Hz supports discrimination of yes/no questions and statements in pediatric CI users. Bilateral CI users with little or no acoustic hearing at 125 Hz develop the ability to perform this task, but that ability emerges later than for children with better acoustic hearing. These results underscore the importance of preserving acoustic hearing for pediatric CI users when possible.
Affiliation(s)
- Emily Buss
- Department of Otolaryngology/Head and Neck Surgery, The University of North Carolina at Chapel Hill
- Margaret E Richter
- Department of Otolaryngology/Head and Neck Surgery, The University of North Carolina at Chapel Hill
- Victoria N Sweeney
- Department of Otolaryngology/Head and Neck Surgery, The University of North Carolina at Chapel Hill
- Center for Hearing Research, Boys Town National Research Hospitals, Omaha, NE
- Amanda G Davis
- Department of Otolaryngology/Head and Neck Surgery, The University of North Carolina at Chapel Hill
- Margaret T Dillon
- Department of Otolaryngology/Head and Neck Surgery, The University of North Carolina at Chapel Hill
- Lisa R Park
- Department of Otolaryngology/Head and Neck Surgery, The University of North Carolina at Chapel Hill
4
Everhardt MK, Sarampalis A, Coler M, Bașkent D, Lowie W. Lexical Stress Identification in Cochlear Implant-Simulated Speech by Non-Native Listeners. Lang Speech 2024:238309231222207. [PMID: 38282517 DOI: 10.1177/00238309231222207]
Abstract
This study investigates whether a presumed difference in the perceptibility of cues to lexical stress in spectro-temporally degraded simulated cochlear implant (CI) speech affects how listeners weight these cues during a lexical stress identification task, specifically in their non-native language. Previous research suggests that in English, listeners predominantly rely on a reduction in vowel quality as a cue to lexical stress. In Dutch, changes in the fundamental frequency (F0) contour seem to have a greater functional weight than the vowel quality contrast. Generally, non-native listeners use the cue-weighting strategies from their native language in the non-native language. Moreover, a few studies have suggested that these cues to lexical stress are differently perceptible in spectro-temporally degraded electric hearing, as CI users appear to make more effective use of changes in vowel quality than of changes in the F0 contour as cues to linguistic phenomena. In this study, native Dutch learners of English identified stressed syllables in CI-simulated and non-CI-simulated Dutch and English words that contained changes in the F0 contour and vowel quality as cues to lexical stress. The results indicate that neither the cue-weighting strategies in the native language nor those in the non-native language are influenced by the perceptibility of cues in the spectro-temporally degraded speech signal. These results are in contrast to our expectations based on previous research and support the idea that cue weighting is a flexible and transferable process.
Affiliation(s)
- Marita K Everhardt
- Center for Language and Cognition Groningen, University of Groningen, The Netherlands; Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, The Netherlands; Research School of Behavioural and Cognitive Neurosciences, University of Groningen, The Netherlands
- Anastasios Sarampalis
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, The Netherlands; Department of Psychology, University of Groningen, The Netherlands
- Matt Coler
- Research School of Behavioural and Cognitive Neurosciences, University of Groningen, The Netherlands; Campus Fryslân, University of Groningen, The Netherlands
- Deniz Bașkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, The Netherlands; Research School of Behavioural and Cognitive Neurosciences, University of Groningen, The Netherlands; W. J. Kolff Institute for Biomedical Engineering and Materials Science, University Medical Center Groningen, University of Groningen, The Netherlands
- Wander Lowie
- Center for Language and Cognition Groningen, University of Groningen, The Netherlands; Research School of Behavioural and Cognitive Neurosciences, University of Groningen, The Netherlands
5
Lu HP, Lin CS, Wu CM, Peng SC, Feng IJ, Lin YS. The effect of lexical tone experience on English intonation perception in Mandarin-speaking cochlear-implanted children. Medicine (Baltimore) 2022; 101:e29567. [PMID: 35839064 PMCID: PMC11132337 DOI: 10.1097/md.0000000000029567]
Abstract
To examine the effect of lexical tone experience on English intonation perception in Mandarin-speaking cochlear-implanted children during second language acquisition in Taiwan. This was a retrospective cohort study conducted at a tertiary referral center. Fourteen children with cochlear implants (CIs) were enrolled in the experimental group, and 9 normal-hearing children were enrolled in the control group. Interventions were cochlear implantation and hearing rehabilitation. Two speech recognition accuracies were examined: (1) lexical tone recognition (4-alternative forced choice, AFC) and (2) English sentence intonation (2AFC). The overall accuracies for tone perception were 61.13% (standard deviation, SD = 10.84%) for the CI group and 93.82% (SD = 1.80%) for the normal-hearing group. Tone 4 and Tone 1 were more easily recognized than Tone 2 and Tone 3 in the pediatric CI recipient (cCI) group. In English intonation perception, the overall accuracies were 61.82% (SD = 16.85%) for the CI group and 97.59% (SD = 4.73%) for the normal-hearing group. A significant, high correlation (R = .919, P ≦ .000) between lexical tone perception and English intonation perception was noted. There was no significant difference in English intonation perception accuracy between Mandarin-speaking cCI (61.82%) and English-speaking cCI (70.13%, P = .11). Mandarin-speaking cochlear-implanted children showed significant deficits in perception of lexical tone and English intonation relative to normal-hearing children. There was no tonal-language benefit in Mandarin-speaking cochlear-implanted children's English intonation perception compared with their English-speaking cochlear-implanted peers. For cochlear-implanted children, better lexical tone perception comes with better English intonation perception. Enhancing Mandarin prosodic perception for cochlear-implanted children may benefit their command of intonation in English.
Affiliation(s)
- Hui-Ping Lu
- Center of Speech and Hearing, Department of Otolaryngology, Chi Mei Medical Center, Tainan, Taiwan
- Chih-Shin Lin
- Center of Speech and Hearing, Department of Otolaryngology, Chi Mei Medical Center, Tainan, Taiwan
- Department of Speech and Language Therapy, Chung Hwa University of Medical Technology, Tainan, Taiwan
- Che-Ming Wu
- Department of Otorhinolaryngology, New Taipei municipal TuCheng Hospital (built and operated by Chang Gung Medical Foundation), TuCheng, New Taipei City, Taiwan
- Department of Otorhinolaryngology, Chang Gung Memorial Hospital, Linkou, School of Medicine, Chang Gung University, Taoyuan, Taiwan
- Shu-Chen Peng
- Center for Devices and Radiological Health, United States Food and Drug Administration, Silver Spring, MD
- I. Jung Feng
- Institute of Precision Medicine, National Sun Yat-sen University, Kaohsiung, Taiwan
- Yung-Song Lin
- Center of Speech and Hearing, Department of Otolaryngology, Chi Mei Medical Center, Tainan, Taiwan
- Department of Otolaryngology, School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan
6
Chen Y, Luo Q, Liang M, Gao L, Yang J, Feng R, Liu J, Qiu G, Li Y, Zheng Y, Lu S. Children's Neural Sensitivity to Prosodic Features of Natural Speech and Its Significance to Speech Development in Cochlear Implanted Children. Front Neurosci 2022; 16:892894. [PMID: 35903806 PMCID: PMC9315047 DOI: 10.3389/fnins.2022.892894]
Abstract
Catchy utterances, such as proverbs, verses, and nursery rhymes (e.g., "No pain, no gain" in English), contain strong-prosodic (SP) features and are child-friendly in repeating and memorizing; yet the way those prosodic features are encoded by neural activity and their influence on speech development in children are still largely unknown. Using functional near-infrared spectroscopy (fNIRS), this study investigated the cortical responses to the perception of natural speech sentences with strong/weak-prosodic (SP/WP) features and evaluated the speech communication ability of 21 pre-lingually deaf children with cochlear implantation (CI) and 25 normal hearing (NH) children. A comprehensive evaluation of speech communication ability was conducted on all the participants to explore the potential correlations between neural activities and children's speech development. The SP information evoked right-lateralized cortical responses across a broad brain network in NH children and facilitated the early integration of linguistic information, highlighting children's neural sensitivity to natural SP sentences. In contrast, children with CI showed significantly weaker cortical activation and characteristic deficits in speech perception with SP features, suggesting that hearing loss early in life causes significantly impaired sensitivity to prosodic features of sentences. Importantly, the level of neural sensitivity to SP sentences was significantly related to the speech behaviors of all child participants. These findings demonstrate the significance of speech prosodic features in children's speech development.
Affiliation(s)
- Yuebo Chen
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Qinqin Luo
- Department of Chinese Language and Literature, The Chinese University of Hong Kong, Hong Kong, Hong Kong SAR, China
- School of Foreign Languages, Shenzhen University, Shenzhen, China
- Maojin Liang
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Leyan Gao
- Neurolinguistics Teaching Laboratory, Department of Chinese Language and Literature, Sun Yat-sen University, Guangzhou, China
- Jingwen Yang
- Department of Neurology, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Department of Clinical Neurolinguistics Research, Mental and Neurological Diseases Research Center, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Ruiyan Feng
- Neurolinguistics Teaching Laboratory, Department of Chinese Language and Literature, Sun Yat-sen University, Guangzhou, China
- Jiahao Liu
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Hearing and Speech Science Department, Guangzhou Xinhua University, Guangzhou, China
- Guoxin Qiu
- Department of Clinical Neurolinguistics Research, Mental and Neurological Diseases Research Center, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Yi Li
- School of Foreign Languages, Shenzhen University, Shenzhen, China
- Yiqing Zheng
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Hearing and Speech Science Department, Guangzhou Xinhua University, Guangzhou, China
- Shuo Lu
- School of Foreign Languages, Shenzhen University, Shenzhen, China
- Department of Clinical Neurolinguistics Research, Mental and Neurological Diseases Research Center, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
7
Rothermich K, Dixon S, Weiner M, Capps M, Dong L, Paquette S, Zhou N. Perception of speaker sincerity in complex social interactions by cochlear implant users. PLoS One 2022; 17:e0269652. [PMID: 35675356 PMCID: PMC9176755 DOI: 10.1371/journal.pone.0269652]
Abstract
Understanding insincere language (sarcasm and teasing) is a fundamental part of communication and crucial for maintaining social relationships. This can be a challenging task for cochlear implant (CI) users, who receive degraded suprasegmental information important for perceiving a speaker's attitude. We measured the perception of speaker sincerity (literal positive, literal negative, sarcasm, and teasing) in 16 adults with CIs using an established video inventory. Participants were presented with audio-only and audio-visual social interactions between two people with and without supporting verbal context. They were instructed to describe the content of the conversation and answer whether the speakers meant what they said. Results showed that subjects could not always identify speaker sincerity, even when the content of the conversation was perfectly understood. This deficit was greater for perceiving insincere relative to sincere utterances. Performance improved when additional visual cues or verbal context cues were provided. Subjects who were better at perceiving the content of the interactions in the audio-only condition benefited more from having additional visual cues for judging the speaker's sincerity, suggesting that the two modalities compete for cognitive resources. Perception of content also did not correlate with perception of speaker sincerity, suggesting that what was said vs. how it was said were perceived using unrelated segmental versus suprasegmental cues. Our results further showed that subjects who had access to lower-order resolved harmonic information provided by hearing aids in the contralateral ear identified speaker sincerity better than those who used implants alone. These results suggest that measuring speech recognition alone in CI users does not fully describe the outcome. Our findings stress the importance of measuring social communication functions in people with CIs.
Affiliation(s)
- Kathrin Rothermich
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, NC, United States of America
- Susannah Dixon
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, NC, United States of America
- Marti Weiner
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, NC, United States of America
- Madison Capps
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, NC, United States of America
- Lixue Dong
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, NC, United States of America
- Ning Zhou
- Department of Communication Sciences and Disorders, East Carolina University, Greenville, NC, United States of America
8
Kawar K, Kishon-Rabin L, Segal O. Identification and Comprehension of Narrow Focus by Arabic-Speaking Adolescents With Moderate-to-Profound Hearing Loss. J Speech Lang Hear Res 2022; 65:2029-2046. [PMID: 35472256 DOI: 10.1044/2022_jslhr-21-00296]
Abstract
PURPOSE Processing narrow focus (NF), the stressed word in the sentence, includes both the perceptual ability to identify the stressed word in the sentence and the pragmatic-semantic ability to comprehend the nonexplicit linguistic message. NF and its underlying meaning can be conveyed only via the auditory modality. Therefore, NF can be considered a measure for assessing the efficacy of hearing aids (HAs) and cochlear implants (CIs) for acquiring nonexplicit language skills. The purpose of this study was to assess identification and comprehension of NF by HA and CI users who are native speakers of Arabic and to associate NF outcomes with speech perception and cognitive and linguistic abilities. METHOD A total of 46 adolescents (age range: 11;2-18;8) participated: 18 with moderate-to-severe hearing loss who used HAs, 10 with severe-to-profound hearing loss who used CIs, and 18 with typical hearing (TH). Test materials included the Arabic Narrow Focus Test (ANFT), which includes three subtests assessing identification of NF (ANFT1), comprehension of NF in simple four-word sentences (ANFT2), and comprehension of NF in longer sentences with a construction list at the clause or noun phrase level (ANFT3). In addition, speech perception, vocabulary, and working memory were assessed. RESULTS All the participants successfully identified the word carrying NF, with no significant difference between the groups. Comprehension of NF in ANFT2 and ANFT3 was reduced for HA and CI users compared with TH peers, and speech perception, hearing status, and memory for digits predicted the variability in the overall results of ANFT1, ANFT2, and ANFT3, respectively. CONCLUSIONS Arabic speakers who used HAs or CIs were able to identify NF successfully, suggesting that the acoustic cues were perceptually available to them. However, HA and CI users had considerable difficulty in understanding NF.
Different factors may contribute to this difficulty, including the memory load during the task as well as pragmatic-linguistic knowledge on the possible meanings of NF.
Affiliation(s)
- Khaloob Kawar
- Department of Special Education, Beit Berl College, Kfar Saba, Israel
- Department of Communication Disorders, Steyer School of Health Professions, Sackler Faculty of Medicine, Tel Aviv University, Israel
- Liat Kishon-Rabin
- Department of Communication Disorders, Steyer School of Health Professions, Sackler Faculty of Medicine, Tel Aviv University, Israel
- Osnat Segal
- Department of Communication Disorders, Steyer School of Health Professions, Sackler Faculty of Medicine, Tel Aviv University, Israel
9
Age-Related Changes in Voice Emotion Recognition by Postlingually Deafened Listeners With Cochlear Implants. Ear Hear 2022; 43:323-334. [PMID: 34406157 PMCID: PMC8847542 DOI: 10.1097/aud.0000000000001095]
Abstract
OBJECTIVES Identification of emotional prosody in speech declines with age in normally hearing (NH) adults. Cochlear implant (CI) users have deficits in the perception of prosody, but the effects of age on vocal emotion recognition by adult postlingually deaf CI users are not known. The objective of the present study was to examine age-related changes in CI users' and NH listeners' emotion recognition. DESIGN Participants included 18 CI users (29.6 to 74.5 years) and 43 NH adults (25.8 to 74.8 years). Participants listened to emotion-neutral sentences spoken by a male and a female talker in five emotions (happy, sad, scared, angry, neutral). NH adults heard them in four conditions: unprocessed (full spectrum) speech, 16-channel, 8-channel, and 4-channel noise-band vocoded speech. The adult CI users only listened to unprocessed (full spectrum) speech. Sensitivity (d') to emotions and Reaction Times were obtained using a single-interval, five-alternative, forced-choice paradigm. RESULTS For NH participants, results indicated age-related declines in Accuracy and d', and age-related increases in Reaction Time in all conditions. Results indicated an overall deficit, as well as age-related declines in overall d' for CI users, but Reaction Times were elevated compared with NH listeners and did not show age-related changes. Analysis of Accuracy scores (hit rates) was generally consistent with the d' data. CONCLUSIONS Both CI users and NH listeners showed age-related deficits in emotion identification. The CI users' overall deficit in emotion perception, and their slower response times, suggest impaired social communication, which may in turn impact overall well-being, particularly so for older CI users, as lower vocal emotion recognition scores have been associated with poorer subjective quality of life in CI patients.
|
10
|
More Than Words: The Relative Roles of Prosody and Semantics in the Perception of Emotions in Spoken Language by Postlingual Cochlear Implant Users. Ear Hear 2022; 43:1378-1389. [PMID: 35030551 DOI: 10.1097/aud.0000000000001199]
Abstract
OBJECTIVES The processing of emotional speech calls for the perception and integration of semantic and prosodic cues. Although cochlear implants allow for significant auditory improvements, they are limited in the transmission of spectro-temporal fine-structure information, which may not support the processing of voice pitch cues. The goal of the current study was to compare the performance of postlingual cochlear implant (CI) users and a matched control group on perception, selective attention, and integration of emotional semantics and prosody. DESIGN Fifteen CI users and 15 normal hearing (NH) peers (age range, 18-65 years) listened to spoken sentences composed of different combinations of four discrete emotions (anger, happiness, sadness, and neutrality) presented in prosodic and semantic channels (T-RES: Test for Rating Emotions in Speech). In three separate tasks, listeners were asked to attend to the sentence as a whole, thus integrating both speech channels (integration), or to focus on one channel only (rating of target emotion) and ignore the other (selective attention). Their task was to rate how much they agreed that the sentence conveyed each of the predefined emotions. In addition, all participants performed standard tests of speech perception. RESULTS When asked to focus on one channel, semantics or prosody, both groups rated emotions similarly, with comparable levels of selective attention. When the task called for channel integration, group differences were found. CI users appeared to use semantic emotional information more than did their NH peers. CI users assigned higher ratings than did their NH peers to sentences that did not present the target emotion, indicating some degree of confusion. In addition, for CI users, individual differences in speech comprehension over the phone and identification of intonation were significantly related to emotional semantic and prosodic ratings, respectively.
CONCLUSIONS CI users and NH controls did not differ in perception of prosodic and semantic emotions or in auditory selective attention. However, when the task called for integration of prosody and semantics, CI users overused the semantic information (as compared with NH peers). We suggest that as CI users adopt diverse cue-weighting strategies with device experience, their weighting of prosody and semantics differs from that of NH listeners. Finally, CI users may benefit from rehabilitation strategies that strengthen perception of prosodic information to better understand emotional speech.
|
11
|
Amichetti NM, Neukam J, Kinney AJ, Capach N, March SU, Svirsky MA, Wingfield A. Adults with cochlear implants can use prosody to determine the clausal structure of spoken sentences. J Acoust Soc Am 2021; 150:4315. [PMID: 34972310 PMCID: PMC8674009 DOI: 10.1121/10.0008899]
Abstract
Speech prosody, including pitch contour, word stress, pauses, and vowel lengthening, can aid the detection of the clausal structure of a multi-clause sentence, and this, in turn, can help listeners determine the meaning. However, for cochlear implant (CI) users, the reduced acoustic richness of the signal raises the question of whether CI users may have difficulty using sentence prosody to detect syntactic clause boundaries within sentences, or whether this ability is rescued by the redundancy of the prosodic features that normally co-occur at clause boundaries. Twenty-two CI users, ranging in age from 19 to 77 years old, recalled three types of sentences: sentences in which the prosodic pattern was appropriate to the location of a clause boundary within the sentence (congruent prosody), sentences with reduced prosodic information, and sentences in which the location of the clause boundary and the prosodic marking of a clause boundary were placed in conflict. The results showed the presence of congruent prosody to be associated with superior sentence recall and reduced processing effort as indexed by pupil dilation. Individual differences in a standard test of word recognition (consonant-nucleus-consonant score) were related to recall accuracy as well as processing effort. The outcomes are discussed in terms of the redundancy of the prosodic features that normally accompany a clause boundary, and in terms of processing effort.
Affiliation(s)
- Nicole M Amichetti: Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
- Jonathan Neukam: Department of Otolaryngology, New York University (NYU) Langone Medical Center, New York, New York 10016, USA
- Alexander J Kinney: Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
- Nicole Capach: Department of Otolaryngology, New York University (NYU) Langone Medical Center, New York, New York 10016, USA
- Samantha U March: Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
- Mario A Svirsky: Department of Otolaryngology, New York University (NYU) Langone Medical Center, New York, New York 10016, USA
- Arthur Wingfield: Department of Psychology, Brandeis University, Waltham, Massachusetts 02453, USA
|
12
|
Heffner CC, Jaekel BN, Newman RS, Goupell MJ. Accuracy and cue use in word segmentation for cochlear-implant listeners and normal-hearing listeners presented vocoded speech. J Acoust Soc Am 2021; 150:2936. [PMID: 34717484 PMCID: PMC8528550 DOI: 10.1121/10.0006448]
Abstract
Cochlear-implant (CI) listeners experience signal degradation, which leads to poorer speech perception than normal-hearing (NH) listeners. In the present study, difficulty with word segmentation, the process of perceptually parsing the speech stream into separate words, is considered as a possible contributor to this decrease in performance. CI listeners were compared to a group of NH listeners (presented with unprocessed speech and eight-channel noise-vocoded speech) in their ability to segment phrases with word segmentation ambiguities (e.g., "an iceman" vs "a nice man"). The results showed that CI listeners and NH listeners were worse at segmenting words when hearing processed speech than NH listeners were when presented with unprocessed speech. When viewed at a broad level, all of the groups used cues to word segmentation in similar ways. Detailed analyses, however, indicated that the two processed speech groups weighted top-down knowledge cues to word boundaries more and weighted acoustic cues to word boundaries less relative to NH listeners presented with unprocessed speech.
Affiliation(s)
- Christopher C Heffner: Program in Neuroscience and Cognitive Science, University of Maryland, College Park, Maryland 20742, USA
- Brittany N Jaekel: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Rochelle S Newman: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
- Matthew J Goupell: Department of Hearing and Speech Sciences, University of Maryland, College Park, Maryland 20742, USA
|
13
|
Kim S, Chou HH, Luo X. Mandarin tone recognition training with cochlear implant simulation: Amplitude envelope enhancement and cue weighting. J Acoust Soc Am 2021; 150:1218. [PMID: 34470277 DOI: 10.1121/10.0005878]
Abstract
With limited fundamental frequency (F0) cues, cochlear implant (CI) users recognize Mandarin tones using the amplitude envelope. This study investigated whether tone recognition training with amplitude envelope enhancement may improve tone recognition and cue weighting with CIs. Three groups of CI-simulation listeners received training using vowels with the amplitude envelope modified to resemble the F0 contour (enhanced-amplitude-envelope training), training using natural vowels (natural-amplitude-envelope training), or exposure to natural vowels without training, respectively. Tone recognition with natural and enhanced amplitude envelope cues and cue weighting of amplitude envelope and F0 contour were measured in pre-, post-, and retention-tests. With similar pre-test performance, both training groups had better tone recognition than the no-training group after training. Only enhanced-amplitude-envelope training increased the benefits of amplitude envelope enhancement in the post- and retention-tests relative to the pre-test. Neither training paradigm increased the cue weighting of amplitude envelope and F0 contour more than stimulus exposure did. Listeners attending more to the amplitude envelope in the pre-test tended to have better tone recognition with enhanced amplitude envelope cues before training and to improve more in tone recognition after enhanced-amplitude-envelope training. The results suggest that auditory training and speech enhancement may bring maximum benefits to CI users when combined.
Affiliation(s)
- Seeon Kim: Program of Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe, Arizona 85287, USA
- Hsiao-Hsiuan Chou: Program of Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe, Arizona 85287, USA
- Xin Luo: Program of Speech and Hearing Science, College of Health Solutions, Arizona State University, Tempe, Arizona 85287, USA
|
14
|
Meta-Analysis on the Identification of Linguistic and Emotional Prosody in Cochlear Implant Users and Vocoder Simulations. Ear Hear 2021; 41:1092-1102. [PMID: 32251011 DOI: 10.1097/aud.0000000000000863]
Abstract
OBJECTIVES This study quantitatively assesses how cochlear implants (CIs) and vocoder simulations of CIs influence the identification of linguistic and emotional prosody in nontonal languages. By means of meta-analysis, it was explored how accurately CI users and normal-hearing (NH) listeners of vocoder simulations (henceforth: simulation listeners) identify prosody compared with NH listeners of unprocessed speech (henceforth: NH listeners), whether this effect of electric hearing differs between CI users and simulation listeners, and whether the effect of electric hearing is influenced by the type of prosody that listeners identify or by the availability of specific cues in the speech signal. DESIGN Records were found by searching the PubMed Central, Web of Science, Scopus, Science Direct, and PsycINFO databases (January 2018) using the search terms "cochlear implant prosody" and "vocoder prosody." Records (published in English) were included that reported results of experimental studies comparing CI users' and/or simulation listeners' identification of linguistic and/or emotional prosody in nontonal languages to that of NH listeners (all ages included). Studies that met the inclusion criteria were subjected to a multilevel random-effects meta-analysis. RESULTS Sixty-four studies reported in 28 records were included in the meta-analysis. The analysis indicated that CI users and simulation listeners were less accurate in correctly identifying linguistic and emotional prosody compared with NH listeners, that the identification of emotional prosody was more strongly compromised by the electric hearing speech signal than linguistic prosody was, and that the low quality of transmission of fundamental frequency (f0) through the electric hearing speech signal was the main cause of compromised prosody identification in CI users and simulation listeners. 
Moreover, results indicated that the accuracy with which CI users and simulation listeners identified linguistic and emotional prosody was comparable, suggesting that vocoder simulations with carefully selected parameters can provide a good estimate of how prosody may be identified by CI users. CONCLUSIONS The meta-analysis revealed a robust negative effect of electric hearing, where CIs and vocoder simulations had a similar negative influence on the identification of linguistic and emotional prosody, which seemed mainly due to inadequate transmission of f0 cues through the degraded electric hearing speech signal of CIs and vocoder simulations.
|
15
|
Hendrickson K, Spinelli J, Walker E. Cognitive processes underlying spoken word recognition during soft speech. Cognition 2020; 198:104196. [PMID: 32004934 DOI: 10.1016/j.cognition.2020.104196]
Abstract
In two eye-tracking experiments using the Visual World Paradigm, we examined how listeners recognize words when faced with speech at lower intensities (40, 50, and 65 dBA). After hearing the target word, participants (n = 32) clicked the corresponding picture from a display of four images - a target (e.g., money), a cohort competitor (e.g., mother), a rhyme competitor (e.g., honey) and an unrelated item (e.g., whistle) - while their eye-movements were tracked. For slightly soft speech (50 dBA), listeners demonstrated an increase in cohort activation, whereas for rhyme competitors, activation started later and was sustained longer in processing. For very soft speech (40 dBA), listeners waited until later in processing to activate potential words, as illustrated by a decrease in activation for cohorts, and an increase in activation for rhymes. Further, the extent to which words were considered depended on word length (mono- vs. bi-syllabic words), and speech-extrinsic factors such as the surrounding listening environment. These results advance current theories of spoken word recognition by considering a range of speech levels more typical of everyday listening environments. From an applied perspective, these results motivate models of how individuals who are hard of hearing approach the task of recognizing spoken words.
Affiliation(s)
- Kristi Hendrickson: Department of Communication Sciences & Disorders, University of Iowa, 250 Hawkins Drive, Iowa City, IA 52242, United States of America; Department of Psychological & Brain Sciences, University of Iowa, 250 Hawkins Drive, Iowa City, IA 52242, United States of America
- Jessica Spinelli: Department of Communication Sciences & Disorders, University of Iowa, 250 Hawkins Drive, Iowa City, IA 52242, United States of America
- Elizabeth Walker: Department of Communication Sciences & Disorders, University of Iowa, 250 Hawkins Drive, Iowa City, IA 52242, United States of America
|
16
|
Nagels L, Bastiaanse R, Başkent D, Wagner A. Individual Differences in Lexical Access Among Cochlear Implant Users. J Speech Lang Hear Res 2020; 63:286-304. [PMID: 31855606 DOI: 10.1044/2019_jslhr-19-00192]
Abstract
Purpose The current study investigates how individual differences in cochlear implant (CI) users' sensitivity to word-nonword differences, reflecting lexical uncertainty, relate to their reliance on sentential context for lexical access in processing continuous speech. Method Fifteen CI users and 14 normal-hearing (NH) controls participated in an auditory lexical decision task (Experiment 1) and a visual-world paradigm task (Experiment 2). Experiment 1 tested participants' reliance on lexical statistics, and Experiment 2 studied how sentential context affects the time course and patterns of lexical competition leading to lexical access. Results In Experiment 1, CI users had lower accuracy scores and longer reaction times than NH listeners, particularly for nonwords. In Experiment 2, CI users' lexical competition patterns were, on average, similar to those of NH listeners, but the patterns of individual CI users varied greatly. Individual CI users' word-nonword sensitivity (Experiment 1) explained differences in the reliance on sentential context to resolve lexical competition, whereas clinical speech perception scores explained competition with phonologically related words. Conclusions The general analysis of CI users' lexical competition patterns showed merely quantitative differences with NH listeners in the time course of lexical competition, but our additional analysis revealed more qualitative differences in CI users' strategies to process speech. Individuals' word-nonword sensitivity explained different parts of individual variability than clinical speech perception scores. These results stress, particularly for heterogeneous clinical populations such as CI users, the importance of investigating individual differences in addition to group averages, as they can be informative for clinical rehabilitation. Supplemental Material https://doi.org/10.23641/asha.11368106.
Affiliation(s)
- Leanne Nagels: Department of Otorhinolaryngology-Head & Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands; Center for Language and Cognition Groningen, University of Groningen, the Netherlands
- Roelien Bastiaanse: Center for Language and Cognition Groningen, University of Groningen, the Netherlands; National Research University Higher School of Economics, Moscow, Russia
- Deniz Başkent: Department of Otorhinolaryngology-Head & Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands; Research School of Behavioural and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, the Netherlands
- Anita Wagner: Department of Otorhinolaryngology-Head & Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands; Research School of Behavioural and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, the Netherlands
|
17
|
Davidson LS, Geers AE, Uchanski RM, Firszt JB. Effects of Early Acoustic Hearing on Speech Perception and Language for Pediatric Cochlear Implant Recipients. J Speech Lang Hear Res 2019; 62:3620-3637. [PMID: 31518517 PMCID: PMC6808345 DOI: 10.1044/2019_jslhr-h-18-0255]
Abstract
Purpose The overall goal of the current study was to identify an optimal level and duration of acoustic experience that facilitates language development for pediatric cochlear implant (CI) recipients; specifically, to determine whether there is an optimal duration of hearing aid (HA) use and unaided threshold levels that should be considered before proceeding to bilateral CIs. Method A total of 117 pediatric CI recipients (ages 5-9 years) were given speech perception and standardized tests of receptive vocabulary and language. The speech perception battery included tests of segmental perception (e.g., word recognition in quiet and noise, and vowels and consonants in quiet) and of suprasegmental perception (e.g., talker and stress discrimination, and emotion identification). Hierarchical regression analyses were used to determine the effects of speech perception on language scores, and the effects of residual hearing level (unaided pure-tone average [PTA]) and duration of HA use on speech perception. Results A continuum of residual hearing levels and lengths of HA use was represented by calculating the unaided PTA of the ear with the longest duration of HA use for each child. All children wore 2 devices: Some wore bimodal devices, while others received their 2nd CI either simultaneously or sequentially, representing a wide range of HA use (0.03-9.05 years). Regression analyses indicate that suprasegmental perception contributes unique variance to receptive language scores and that segmental and suprasegmental skills each contribute independently to receptive vocabulary scores. Also, analyses revealed an optimal duration of HA use for each of 3 ranges of hearing loss severity (with mean PTAs of 73, 92, and 111 dB HL) that maximizes suprasegmental perception. Conclusions For children with the most profound losses, early bilateral CIs provide the greatest opportunity for developing good spoken language skills.
For those with moderate-to-severe losses, however, a prescribed period of bimodal use may be more advantageous for developing good spoken language skills.
Affiliation(s)
- Jill B. Firszt: Washington University School of Medicine in St. Louis, MO
|
18
|
Nie K, Hannaford S, Director HM, Nishigaki MA, Drennan WR, Rubinstein JT. Mandarin tone recognition in English speakers with normal hearing and with cochlear implants. Int J Audiol 2019; 58:913-922. [DOI: 10.1080/14992027.2019.1632498]
Affiliation(s)
- Kaibao Nie: University of Washington-Seattle, Seattle, WA, USA; University of Washington-Bothell, Bothell, WA, USA
|
19
|
Deroche MLD, Lu HP, Lin YS, Chatterjee M, Peng SC. Processing of Acoustic Information in Lexical Tone Production and Perception by Pediatric Cochlear Implant Recipients. Front Neurosci 2019; 13:639. [PMID: 31281237 PMCID: PMC6596315 DOI: 10.3389/fnins.2019.00639]
Abstract
Purpose: This study examined the utilization of multiple types of acoustic information in lexical tone production and perception by pediatric cochlear implant (CI) recipients who are native speakers of Mandarin Chinese. Methods: Lexical tones were recorded from CI recipients and their peers with normal hearing (NH). Each participant was asked to produce a disyllabic word, yan jing, in which the first syllable was pronounced as Tone 3 (a low dipping tone) while the second syllable was pronounced as Tone 1 (a high level tone, meaning "eyes") or as Tone 4 (a high falling tone, meaning "eyeglasses"). In addition, a parametric manipulation of the fundamental frequency (F0) and duration of Tones 1 and 4, used in a lexical tone recognition task in Peng et al. (2017), was adopted to evaluate the perceptual reliance on each dimension. Results: Mixed-effect analyses of duration, intensity, and F0 cues revealed that NH children focused exclusively on marking distinct F0 contours, while CI participants shortened Tone 4 or prolonged Tone 1 to enhance their contrast. In line with these production strategies, NH children relied primarily on F0 cues to identify the two tones, whereas CI children showed greater reliance on duration cues. Moreover, CI participants who placed greater perceptual weight on duration cues also tended to exhibit smaller changes in their F0 production. Conclusion: Pediatric CI recipients appear to contrast the secondary acoustic dimension (duration) in addition to F0 contours for both lexical tone production and perception. These findings suggest that perception and production strategies for lexical tones are well coupled in this pediatric CI population.
Affiliation(s)
- Yung-Song Lin: Chi-Mei Medical Center, Tainan, Taiwan; Taipei Medical University, Taipei, Taiwan
- Shu-Chen Peng: United States Food and Drug Administration, Silver Spring, MD, United States
|
20
|
Evaluation of the Optimized Pitch and Language Strategy in Cochlear Implant Recipients. Ear Hear 2019; 40:555-567. [DOI: 10.1097/aud.0000000000000627]
|
21
|
Children's Recognition of Emotional Prosody in Spectrally Degraded Speech Is Predicted by Their Age and Cognitive Status. Ear Hear 2019; 39:874-880. [PMID: 29337761 DOI: 10.1097/aud.0000000000000546]
Abstract
OBJECTIVES It is known that school-aged children with cochlear implants show deficits in voice emotion recognition relative to normal-hearing peers. Little, however, is known about normal-hearing children's processing of emotional cues in cochlear implant-simulated, spectrally degraded speech. The objective of this study was to investigate school-aged, normal-hearing children's recognition of voice emotion, and the degree to which their performance could be predicted by their age, vocabulary, and cognitive factors such as nonverbal intelligence and executive function. DESIGN Normal-hearing children (6-19 years old) and young adults were tested on a voice emotion recognition task under three different conditions of spectral degradation using cochlear implant simulations (full-spectrum, 16-channel, and 8-channel noise-vocoded speech). Measures of vocabulary, nonverbal intelligence, and executive function were obtained as well. RESULTS Adults outperformed children on all tasks, and a strong developmental effect was observed. The children's age, the degree of spectral resolution, and nonverbal intelligence were predictors of performance, but vocabulary and executive functions were not, and no interactions were observed between age and spectral resolution. CONCLUSIONS These results indicate that cognitive function and age play important roles in children's ability to process emotional prosody in spectrally degraded speech. The lack of an interaction between the degree of spectral resolution and children's age further suggests that younger and older children may benefit similarly from improvements in spectral resolution. The findings imply that younger and older children with cochlear implants may benefit similarly from technical advances that improve spectral resolution.
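The noise-vocoded conditions used in CI simulations like the one above are built by splitting speech into a small number of spectral channels, extracting each channel's amplitude envelope, and using it to modulate band-limited noise. A minimal sketch, assuming FFT brick-wall bands, log-spaced band edges, and Hilbert-envelope extraction; the channel count and band limits are illustrative, not the parameters of any cited study:

```python
import numpy as np

def bandpass(x, fs, f1, f2):
    """Brick-wall bandpass via FFT masking (crude but dependency-free)."""
    X = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    X[(freqs < f1) | (freqs >= f2)] = 0.0
    return np.fft.irfft(X, n=len(x))

def envelope(x):
    """Amplitude envelope as the magnitude of the analytic signal."""
    N = len(x)
    X = np.fft.fft(x)
    h = np.zeros(N)            # Hilbert-transform weighting
    h[0] = 1.0
    h[1:(N + 1) // 2] = 2.0
    if N % 2 == 0:
        h[N // 2] = 1.0
    return np.abs(np.fft.ifft(X * h))

def noise_vocode(x, fs, n_channels=8, lo=100.0, hi=7000.0):
    """Replace each band's fine structure with envelope-modulated noise."""
    edges = np.logspace(np.log10(lo), np.log10(hi), n_channels + 1)
    noise = np.random.default_rng(0).standard_normal(len(x))
    out = np.zeros(len(x))
    for f1, f2 in zip(edges[:-1], edges[1:]):
        env = envelope(bandpass(x, fs, f1, f2))
        out += env * bandpass(noise, fs, f1, f2)
    return out
```

Research-grade vocoders typically use smoother analysis filters (e.g., Butterworth or gammatone banks) and envelope low-pass cutoffs matched to implant processing; this sketch omits such details.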
|
22
|
Reybrouck M, Podlipniak P. Preconceptual Spectral and Temporal Cues as a Source of Meaning in Speech and Music. Brain Sci 2019; 9:E53. [PMID: 30832292 PMCID: PMC6468545 DOI: 10.3390/brainsci9030053]
Abstract
This paper explores the importance of preconceptual meaning in speech and music, stressing the role of affective vocalizations as a common ancestral instrument in communicative interactions. Speech and music are sensory-rich stimuli, both at the level of production and perception, which involve different body channels, mainly the face and the voice. However, this bimodal approach has been challenged as being too restrictive. A broader conception argues for an action-oriented embodied approach that stresses the reciprocity between multisensory processing and articulatory-motor routines. There is, however, a distinction between language and music, with the latter being largely unable to function referentially. Contrary to the centrifugal tendency of language to direct the attention of the receiver away from the text or speech proper, music is centripetal in directing the listener's attention to the auditory material itself. Sound, therefore, can be considered the meeting point between speech and music, and the question can be raised as to the components shared between the interpretation of sound in the domains of speech and music. In order to answer these questions, this paper elaborates on the following topics: (i) the relationship between speech and music, with a special focus on early vocalizations in humans and non-human primates; (ii) the transition from sound to meaning in speech and music; (iii) the role of emotion and affect in early sound processing; (iv) vocalizations and nonverbal affect bursts in communicative sound comprehension; and (v) the acoustic features of affective sound, with a special emphasis on temporal and spectrographic cues as parts of speech prosody and musical expressiveness.
Affiliation(s)
- Mark Reybrouck: Musicology Research Group, KU Leuven-University of Leuven, 3000 Leuven, Belgium; IPEM-Department of Musicology, Ghent University, 9000 Ghent, Belgium
- Piotr Podlipniak: Institute of Musicology, Adam Mickiewicz University in Poznań, ul. Umultowska 89D, 61-614 Poznań, Poland
|
23
|
Lehnert-LeHouillier H, Spencer LJ, Machmer EL, Burchell KL. The Production of Question Intonation by Young Adult Cochlear Implant Users: Does Age at Implantation Matter? J Speech Lang Hear Res 2019; 62:257-271. [PMID: 30950697 PMCID: PMC6436888 DOI: 10.1044/2018_jslhr-s-17-0468]
Abstract
Purpose The purpose of this observational study was to investigate the properties of sentence-final prosody in yes/no questions produced by cochlear implant (CI) users, in order to determine whether and how age at implantation impacts CI users' production of question intonation later in life. Method We acoustically analyzed recordings of 46 young adult CI users and 10 young adults with normal hearing who read yes/no questions. Of the 46 CI users, 20 had received their CI before the age of 4.0 years (early implantation group), 15 between ages 4.0 and 8.11 years (midimplantation group), and 11 at the age of 9.0 years or later (late implantation group). We assessed the prosodic properties of the produced questions for each implantation group and the normal hearing comparison group (a) by measuring the sentence-final rise in fundamental frequency, (b) by labeling the question-final intonation contour using the Tones and Breaks Index (Beckman & Ayers, 1994; Silverman, Beckman, et al., 1992; Veilleux, Shattuck-Hufnagel, & Brugos, 2006), and (c) by assessing phrase-final lengthening. Results The fundamental frequency rises produced by all CI users exhibited a smaller magnitude than those produced by the normal hearing comparison group, although the difference between early implanted CI users and the normal hearing group did not reach statistical significance. Early implanted CI users were more comparable in their use of question-final intonation contours to the individuals with normal hearing than to those implanted later in life. All CI users exhibited significantly less phrase-final lengthening than the normal hearing comparison group, regardless of age at implantation.
Conclusion The results of this investigation of question intonation produced by CI users suggest that those implanted earlier in life produce yes/no question intonation in a manner more similar to, albeit not identical with, that of individuals with normal hearing than do those implanted after 4.0 years of age.
Affiliation(s)
- Linda J. Spencer
- Department of Speech-Language Pathology, Rocky Mountain University of Health Professions, Provo, UT
- Elizabeth L. Machmer
- Department of Communication Studies and Services, Rochester Institute of Technology/National Technical Institute for the Deaf, NY
- Kristy L. Burchell
- Department of Communication Disorders, New Mexico State University, Las Cruces
24
A tonal-language benefit for pitch in normally-hearing and cochlear-implanted children. Sci Rep 2019; 9:109. [PMID: 30643156 PMCID: PMC6331606 DOI: 10.1038/s41598-018-36393-1] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/18/2018] [Accepted: 11/21/2018] [Indexed: 11/08/2022] Open
Abstract
In tonal languages, voice pitch inflections change the meaning of words, such that the brain processes pitch not merely as an acoustic characterization of sound but as semantic information. In normally-hearing (NH) adults, this linguistic pressure on pitch appears to sharpen its neural encoding and can lead to perceptual benefits, depending on the task relevance, potentially generalizing outside of the speech domain. In children, however, linguistic systems are still malleable, meaning that their encoding of voice pitch information might not receive as much neural specialization but might generalize more easily to ecologically irrelevant pitch contours. This would seem particularly true for early-deafened children wearing a cochlear implant (CI), who must exhibit great adaptability to unfamiliar sounds as their sense of pitch is severely degraded. Here, we provide the first demonstration of a tonal language benefit in dynamic pitch sensitivity among NH children (using both a sweep discrimination and labelling task) which extends partially to children with CI (i.e., in the labelling task only). Strong age effects suggest that sensitivity to pitch contours reaches adult-like levels early in tonal language speakers (possibly before 6 years of age) but continues to develop in non-tonal language speakers well into the teenage years. Overall, we conclude that language-dependent neuroplasticity can enhance behavioral sensitivity to dynamic pitch, even in extreme cases of auditory degradation, but it is most easily observable early in life.
25
Müller V, Klünter H, Fürstenberg D, Meister H, Walger M, Lang-Roth R. Examination of Prosody and Timbre Perception in Adults With Cochlear Implants Comparing Different Fine Structure Coding Strategies. Am J Audiol 2018. [PMID: 29536106 DOI: 10.1044/2017_aja-17-0046] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/09/2022] Open
Abstract
PURPOSE This study aimed to investigate whether adults with cochlear implants benefit from a change of fine structure (FS) coding strategies with respect to the discrimination of prosodic speech cues and timbre cues and the identification of natural instruments. The FS processing (FSP) coding strategy was compared with 2 settings of the FS4 strategy. METHOD A longitudinal, crossover, double-blinded study was conducted in 2 parts, with 14 participants in the first part and 12 in the second. Each part lasted 3 months, during which participants were alternately fitted with either the established FSP strategy or 1 of the 2 newly developed FS4 settings. Participants completed an intonation identification test; a timbre discrimination test in which 1 of 2 isolated cues was changed, either the spectral centroid or the spectral irregularity; and an instrument identification test. RESULTS A significant effect was seen in the discrimination of spectral irregularity with 1 of the 2 FS4 settings, namely the setting in which the upper envelope channels had a low stimulation rate. This improvement was not seen with the FS4 setting that had a higher stimulation rate on the envelope channels. CONCLUSIONS In general, the FSP strategy and the 2 settings of the FS4 strategy provided similar levels of prosody and timbre cue perception, as well as of instrument identification.
Affiliation(s)
- Verena Müller
- Clinic of Otorhinolaryngology, Head and Neck Surgery and Cochlear Implant Centre, University of Cologne, Germany
- Heinz Klünter
- Clinic of Otorhinolaryngology, Head and Neck Surgery and Cochlear Implant Centre, University of Cologne, Germany
- Dirk Fürstenberg
- Clinic of Otorhinolaryngology, Head and Neck Surgery and Cochlear Implant Centre, University of Cologne, Germany
- Hartmut Meister
- Jean Uhrmacher Institute for Clinical ENT-Research, University of Cologne, Germany
- Martin Walger
- Clinic of Otorhinolaryngology, Head and Neck Surgery and Cochlear Implant Centre, University of Cologne, Germany
- Jean Uhrmacher Institute for Clinical ENT-Research, University of Cologne, Germany
- Ruth Lang-Roth
- Clinic of Otorhinolaryngology, Head and Neck Surgery and Cochlear Implant Centre, University of Cologne, Germany
26
Clarke J, Kazanoğlu D, Başkent D, Gaudrain E. Effect of F0 contours on top-down repair of interrupted speech. J Acoust Soc Am 2017; 142:EL7. [PMID: 28764445 DOI: 10.1121/1.4990398] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/13/2023]
Abstract
Top-down repair of interrupted speech can be influenced by bottom-up acoustic cues such as voice pitch (F0). This study aims to investigate the role of the dynamic information of pitch, i.e., F0 contours, in the top-down repair of speech. Intelligibility of sentences interrupted with silence or noise was measured in five F0 contour conditions (inverted, flat, original, and exaggerated by factors of 1.5 and 1.75). The main hypothesis was that manipulating F0 contours would impair the linking of successive segments of interrupted speech and thus negatively affect top-down repair. Intelligibility of interrupted speech was impaired only by misleading dynamic information (inverted F0 contours). The top-down repair of interrupted speech was not affected by any of the F0 contour manipulations.
Affiliation(s)
- Jeanne Clarke
- Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, P.O. Box 30.001, BB21, 9700 RB Groningen, The Netherlands
- Deniz Kazanoğlu
- Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, P.O. Box 30.001, BB21, 9700 RB Groningen, The Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, P.O. Box 30.001, BB21, 9700 RB Groningen, The Netherlands
- Etienne Gaudrain
- Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, P.O. Box 30.001, BB21, 9700 RB Groningen, The Netherlands
27
Huang YT, Newman RS, Catalano A, Goupell MJ. Using prosody to infer discourse prominence in cochlear-implant users and normal-hearing listeners. Cognition 2017; 166:184-200. [PMID: 28578222 DOI: 10.1016/j.cognition.2017.05.029] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 09/09/2016] [Revised: 05/09/2017] [Accepted: 05/19/2017] [Indexed: 11/16/2022]
Abstract
Cochlear implants (CIs) provide speech perception to adults with severe-to-profound hearing loss, but the acoustic signal remains severely degraded. Limited access to pitch cues is thought to decrease sensitivity to prosody in CI users, but co-occurring changes in intensity and duration may provide redundant cues. The current study investigates how listeners use these cues to infer discourse prominence. CI users and normal-hearing (NH) listeners were presented with sentences varying in prosody (accented vs. unaccented words) while their eye movements to referents varying in discourse status (given vs. new categories) were measured. In Experiment 1, all listeners inferred prominence when prosody on nouns distinguished categories ("SANDWICH"→not sandals). In Experiment 2, CI users and NH listeners presented with natural speech inferred prominence when prosody on adjectives implied contrast across both categories and properties ("PINK horse"→not the orange horse). In contrast, NH listeners presented with simulated CI (vocoded) speech were sensitive to acoustic differences in prosody but did not use these cues to infer discourse status. Together, these results suggest that the exploitation of redundant cues for comprehension varies with the demands of language processing and prior experience with the degraded signal.
Affiliation(s)
- Yi Ting Huang
- University of Maryland, College Park, United States
28
Caldwell MT, Jiam NT, Limb CJ. Assessment and improvement of sound quality in cochlear implant users. Laryngoscope Investig Otolaryngol 2017; 2:119-124. [PMID: 28894831 PMCID: PMC5527361 DOI: 10.1002/lio2.71] [Citation(s) in RCA: 31] [Impact Index Per Article: 4.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 10/05/2016] [Revised: 01/19/2017] [Accepted: 01/21/2017] [Indexed: 11/29/2022] Open
Abstract
Objectives Cochlear implants (CIs) have successfully provided speech perception to individuals with sensorineural hearing loss. Recent research has focused on more challenging acoustic stimuli such as music and voice emotion. The purpose of this review is to evaluate and describe sound quality in CI users, summarizing novel findings and crucial information about how CI users experience complex sounds. Data Sources We review the existing literature on PubMed and Scopus to present what is known about perceptual sound quality in CI users, discuss existing measures of sound quality, explore how sound quality may be effectively studied, and examine potential strategies for improving sound quality in the CI population. Results Sound quality, defined here as the perceived richness of an auditory stimulus, is an attribute of implant-mediated listening that remains poorly studied. Sound quality is distinct from appraisal, which is generally defined as the subjective likability or pleasantness of a sound. Existing studies suggest that sound quality perception in the CI population is limited by a range of factors, most notably pitch distortion and dynamic range compression. Although there are currently very few objective measures of sound quality, the CI-MUSHRA has been used as a means of evaluating sound quality. A number of promising strategies exist to improve sound quality perception in the CI population, including apical cochlear stimulation, pitch tuning, and noise reduction processing strategies. Conclusions In the published literature, sound quality perception is severely limited among CI users. Future research should focus on developing systematic, objective, and quantitative sound quality metrics and on designing therapies to mitigate poor sound quality perception in CI users. Level of Evidence NA
Affiliation(s)
- Meredith T Caldwell
- Department of Otolaryngology-Head & Neck Surgery, University of California San Francisco, California
- Nicole T Jiam
- Department of Otolaryngology-Head & Neck Surgery, University of California San Francisco, California; Johns Hopkins University School of Medicine, Baltimore, Maryland
- Charles J Limb
- Department of Otolaryngology-Head & Neck Surgery, University of California San Francisco, California
29
Peng SC, Lu HP, Lu N, Lin YS, Deroche MLD, Chatterjee M. Processing of Acoustic Cues in Lexical-Tone Identification by Pediatric Cochlear-Implant Recipients. J Speech Lang Hear Res 2017; 60:1223-1235. [PMID: 28388709 PMCID: PMC5755546 DOI: 10.1044/2016_jslhr-s-16-0048] [Citation(s) in RCA: 21] [Impact Index Per Article: 3.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/05/2016] [Revised: 07/19/2016] [Accepted: 10/27/2016] [Indexed: 05/23/2023]
Abstract
PURPOSE The objective was to investigate acoustic cue processing in lexical-tone recognition by pediatric cochlear-implant (CI) recipients who are native Mandarin speakers. METHOD Lexical-tone recognition was assessed in pediatric CI recipients and listeners with normal hearing (NH) in 2 tasks. In Task 1, participants identified naturally uttered words that were contrastive in lexical tones. For Task 2, a disyllabic word (yanjing) was manipulated orthogonally, varying in fundamental-frequency (F0) contours and duration patterns. Participants identified each token with the second syllable jing pronounced with Tone 1 (a high level tone) as eyes or with Tone 4 (a high falling tone) as eyeglasses. RESULTS CI participants' recognition accuracy was significantly lower than NH listeners' in Task 1. In Task 2, CI participants' reliance on F0 contours was significantly less than that of NH listeners; their reliance on duration patterns, however, was significantly higher than that of NH listeners. Both CI and NH listeners' performance in Task 1 was significantly correlated with their reliance on F0 contours in Task 2. CONCLUSION For pediatric CI recipients, lexical-tone recognition using naturally uttered words is primarily related to their reliance on F0 contours, although duration patterns may be used as an additional cue.
Affiliation(s)
- Shu-Chen Peng
- Center for Devices and Radiological Health, United States Food and Drug Administration, Silver Spring, MD
- Nelson Lu
- Center for Devices and Radiological Health, United States Food and Drug Administration, Silver Spring, MD
- Yung-Song Lin
- Chi-Mei Medical Center, Tainan, Taiwan
- Taipei Medical University, Taiwan
30
Jiam NT, Caldwell M, Deroche ML, Chatterjee M, Limb CJ. Voice emotion perception and production in cochlear implant users. Hear Res 2017; 352:30-39. [PMID: 28088500 DOI: 10.1016/j.heares.2017.01.006] [Citation(s) in RCA: 46] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/05/2016] [Revised: 12/14/2016] [Accepted: 01/06/2017] [Indexed: 10/20/2022]
Abstract
Voice emotion is a fundamental component of human social interaction and social development. Unfortunately, cochlear implant (CI) users are often forced to interface with highly degraded prosodic cues as a result of device constraints in extraction, processing, and transmission. As such, individuals with cochlear implants frequently demonstrate significant difficulty in recognizing voice emotions in comparison to their normal hearing counterparts. CI-mediated perception and production of voice emotion is an important but relatively understudied area of research, yet a rich understanding of voice emotion auditory processing offers opportunities to improve CI design and to develop training programs that benefit CI performance. In this review, we address the issues, the current literature, and future directions for improved voice emotion processing in cochlear implant users.
Affiliation(s)
- N T Jiam
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, School of Medicine, San Francisco, CA, USA
- M Caldwell
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, School of Medicine, San Francisco, CA, USA
- M L Deroche
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- M Chatterjee
- Auditory Prostheses and Perception Laboratory, Boys Town National Research Hospital, Omaha, NE, USA
- C J Limb
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, School of Medicine, San Francisco, CA, USA
31
Abstract
OBJECTIVE To determine whether exaggerating the variations in fundamental frequency (F0) contours modeled after Mandarin tones could improve tone identification by cochlear implant (CI) users. METHODS Twelve normal-hearing (NH) listeners and 11 CI users were tested on their ability to recognize F0 contours modeled after Mandarin tones in 4- or 5-alternative forced-choice paradigms. Two types of stimuli were used: computer-generated complex tones and voice recordings. Four contours were tested with voice recordings: flat, rise, fall, and dip. A fifth contour, peak, was added for complex tones. The F0 range of each contour was varied in an adaptive manner. A maximum-likelihood technique was used to fit a psychometric function to the performance data and to extract the threshold at 70% accuracy. RESULTS As F0 range increased, performance in tone identification improved but did not reach 100% for some CI users, suggesting that confusions between contours could still occur even with extremely exaggerated contours. Compared with NH participants, CI users required substantially larger F0 ranges to identify tones, on the order of 9.3 versus 0.4 semitones. CI users achieved better performance with complex tones than with voice recordings, whereas the reverse was true for NH participants. Confusion matrices showed that the "flat" tone was often the default response when the presented F0 range was too narrow for participants to identify the contour correctly. CONCLUSION These results demonstrate a markedly impaired ability of CI users to identify tonal contours, but suggest that the use of exaggerated pitch contours may be helpful for tonal language perception.
32
Word Recognition Variability With Cochlear Implants: "Perceptual Attention" Versus "Auditory Sensitivity". Ear Hear 2016; 37:14-26. [PMID: 26301844 DOI: 10.1097/aud.0000000000000204] [Citation(s) in RCA: 26] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/21/2022]
Abstract
OBJECTIVES Cochlear implantation does not automatically result in robust spoken language understanding for postlingually deafened adults. Enormous outcome variability exists, related to the complexity of understanding spoken language through cochlear implants (CIs), which deliver degraded speech representations. This investigation examined variability in word recognition as explained by "perceptual attention" and "auditory sensitivity" to acoustic cues underlying speech perception. DESIGN Thirty postlingually deafened adults with CIs and 20 age-matched controls with normal hearing (NH) were tested. Participants underwent assessment of word recognition in quiet and perceptual attention (cue-weighting strategies) based on labeling tasks for two phonemic contrasts: (1) "cop"-"cob," based on a duration cue (easily accessible through CIs) or a dynamic spectral cue (less accessible through CIs), and (2) "sa"-"sha," based on static or dynamic spectral cues (both potentially poorly accessible through CIs). Participants were also assessed for auditory sensitivity to the speech cues underlying those labeling decisions. RESULTS Word recognition varied widely among CI users (20 to 96%), but it was generally poorer than for NH participants. Implant users and NH controls showed similar perceptual attention and auditory sensitivity to the duration cue, while CI users showed poorer attention and sensitivity to all spectral cues. Both attention and sensitivity to spectral cues predicted variability in word recognition. CONCLUSIONS For CI users, both perceptual attention and auditory sensitivity are important in word recognition. Efforts should be made to better represent spectral cues through implants, while also facilitating attention to these cues through auditory training.
33
Barone P, Chambaudie L, Strelnikov K, Fraysse B, Marx M, Belin P, Deguine O. Crossmodal interactions during non-linguistic auditory processing in cochlear-implanted deaf patients. Cortex 2016; 83:259-70. [PMID: 27622640 DOI: 10.1016/j.cortex.2016.08.005] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.9] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/26/2016] [Revised: 05/17/2016] [Accepted: 08/15/2016] [Indexed: 12/13/2022]
Abstract
Due to signal distortion, speech comprehension in cochlear-implanted (CI) patients relies strongly on visual information, a compensatory strategy supported by important cortical crossmodal reorganisations. Though crossmodal interactions are evident for speech processing, it is unclear whether a visual influence is observed in CI patients during non-linguistic visual-auditory processing, such as face-voice interactions, which are important in social communication. We analysed and compared visual-auditory interactions in CI patients and normal-hearing subjects (NHS) at equivalent auditory performance levels. Proficient CI patients and NHS performed a voice-gender categorisation in the visual-auditory modality on a morphing-generated voice continuum between male and female speakers, while ignoring the presentation of a male or female visual face. Our data show that during the face-voice interaction, deaf CI patients are strongly influenced by visual information when performing an auditory gender categorisation task, in spite of maximum recovery of auditory speech. No such effect was observed in NHS, even under CI simulation. Our hypothesis is that the functional crossmodal reorganisation that occurs in deafness could influence nonverbal processing, such as face-voice interaction, which is important for the patient's internal supramodal representation.
Affiliation(s)
- Pascal Barone
- Université Toulouse, CerCo, Université Paul Sabatier, France; CNRS, UMR 5549, Toulouse, France
- Laure Chambaudie
- Université Toulouse, CerCo, Université Paul Sabatier, France; CNRS, UMR 5549, Toulouse, France
- Kuzma Strelnikov
- Université Toulouse, CerCo, Université Paul Sabatier, France; CNRS, UMR 5549, Toulouse, France
- Bernard Fraysse
- Service Oto-Rhino-Laryngologie et Oto-Neurologie, Hopital Purpan, Toulouse, France
- Mathieu Marx
- Service Oto-Rhino-Laryngologie et Oto-Neurologie, Hopital Purpan, Toulouse, France
- Pascal Belin
- Voice Neurocognition Laboratory, Institute of Neuroscience and Psychology, University of Glasgow, Glasgow, UK; Institut de Neurosciences de la Timone, CNRS UMR 7289 et Aix-Marseille Université, Marseille, France
- Olivier Deguine
- Université Toulouse, CerCo, Université Paul Sabatier, France; CNRS, UMR 5549, Toulouse, France; Service Oto-Rhino-Laryngologie et Oto-Neurologie, Hopital Purpan, Toulouse, France
34
Kong YY, Winn MB, Poellmann K, Donaldson GS. Discriminability and Perceptual Saliency of Temporal and Spectral Cues for Final Fricative Consonant Voicing in Simulated Cochlear-Implant and Bimodal Hearing. Trends Hear 2016; 20:2331216516652145. [PMID: 27317666 PMCID: PMC5562340 DOI: 10.1177/2331216516652145] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Multiple redundant acoustic cues can contribute to the perception of a single phonemic contrast. This study investigated the effect of spectral degradation on the discriminability and perceptual saliency of acoustic cues for identification of word-final fricative voicing in "loss" versus "laws", and possible changes that occurred when low-frequency acoustic cues were restored. Three acoustic cues that contribute to the word-final /s/-/z/ contrast (first formant frequency [F1] offset, vowel-consonant duration ratio, and consonant voicing duration) were systematically varied in synthesized words. A discrimination task measured listeners' ability to discriminate differences among stimuli within a single cue dimension. A categorization task examined the extent to which listeners make use of a given cue to label a syllable as "loss" versus "laws" when multiple cues are available. Normal-hearing listeners were presented with stimuli that were either unprocessed, processed with an eight-channel noise-band vocoder to approximate spectral degradation in cochlear implants, or low-pass filtered. Listeners were tested in four listening conditions: unprocessed, vocoder, low-pass, and a combined vocoder + low-pass condition that simulated bimodal hearing. Results showed a negative impact of spectral degradation on F1 cue discrimination and a trading relation between spectral and temporal cues in which listeners relied more heavily on the temporal cues for "loss-laws" identification when spectral cues were degraded. Furthermore, the addition of low-frequency fine-structure cues in simulated bimodal hearing increased the perceptual saliency of the F1 cue for "loss-laws" identification compared with vocoded speech. Findings suggest an interplay between the quality of sensory input and cue importance.
Affiliation(s)
- Ying-Yee Kong
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, USA
- Matthew B Winn
- Department of Speech and Hearing Sciences, University of Washington, Seattle, WA, USA
- Katja Poellmann
- Department of Communication Sciences and Disorders, Northeastern University, Boston, MA, USA
- Gail S Donaldson
- Department of Communication Sciences and Disorders, University of South Florida, Tampa, FL, USA
35
Deroche MLD, Kulkarni AM, Christensen JA, Limb CJ, Chatterjee M. Deficits in the Sensitivity to Pitch Sweeps by School-Aged Children Wearing Cochlear Implants. Front Neurosci 2016; 10:73. [PMID: 26973451 PMCID: PMC4776214 DOI: 10.3389/fnins.2016.00073] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2016] [Accepted: 02/17/2016] [Indexed: 11/13/2022] Open
Abstract
Sensitivity to static changes in pitch has been shown to be poorer in school-aged children wearing cochlear implants (CIs) than in children with normal hearing (NH), but it is unclear whether this is also the case for dynamic changes in pitch. Yet dynamically changing pitch has considerable ecological relevance for natural speech, particularly for aspects such as intonation, emotion, or lexical tone information. Twenty-one children with NH and 23 children wearing a CI participated in this study, along with 18 NH adults and 6 CI adults for comparison. Listeners with CIs used their clinically assigned settings with envelope-based coding strategies. Percent correct was measured in one- or three-interval two-alternative forced-choice tasks for the direction identification or discrimination of harmonic complexes with a linearly rising or falling fundamental frequency. Sweep rates were adjusted per subject, on a logarithmic scale, so as to cover the full extent of the psychometric function. Data for up- and down-sweeps were fitted separately using a maximum-likelihood technique. Fits were similar for up- and down-sweeps in the discrimination task but diverged in the direction task, because psychometric functions for down-sweeps were very shallow. Hits and false alarms were then converted into d′ and beta values, from which a threshold was extracted at a d′ of 0.77. Thresholds were very consistent between the two tasks and considerably higher (worse) for CI listeners than for their NH peers. Thresholds were also higher for children than for adults. Factors such as age at implantation, age at profound hearing loss, and duration of CI experience did not play any major role in this sensitivity. Thresholds for dynamic pitch sensitivity (in either task) also correlated with thresholds for static pitch sensitivity and with performance in tasks related to speech prosody.
Affiliation(s)
- Mickael L D Deroche
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- Aditya M Kulkarni
- Auditory Prostheses and Perception Laboratory, Boys Town National Research Hospital, Omaha, NE, USA
- Julie A Christensen
- Auditory Prostheses and Perception Laboratory, Boys Town National Research Hospital, Omaha, NE, USA
- Charles J Limb
- Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco School of Medicine, San Francisco, CA, USA
- Monita Chatterjee
- Auditory Prostheses and Perception Laboratory, Boys Town National Research Hospital, Omaha, NE, USA
36
Clarke J, Başkent D, Gaudrain E. Pitch and spectral resolution: A systematic comparison of bottom-up cues for top-down repair of degraded speech. J Acoust Soc Am 2016; 139:395-405. [PMID: 26827034 DOI: 10.1121/1.4939962] [Citation(s) in RCA: 13] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 05/28/2023]
Abstract
The brain is capable of restoring missing parts of speech, a top-down repair mechanism that enhances speech understanding in noisy environments. This enhancement can be quantified using the phonemic restoration paradigm, i.e., the improvement in intelligibility when silent interruptions of interrupted speech are filled with noise. The benefit from top-down repair of speech differs between cochlear implant (CI) users and normal-hearing (NH) listeners. This difference could be due to the poorer spectral resolution and/or weaker pitch cues inherent to CI-transmitted speech. In CIs, these two degradations cannot be teased apart, because spectral degradation leads to weaker pitch representation. A vocoding method was developed to evaluate independently the roles of pitch and spectral resolution in restoration by NH individuals. Sentences were resynthesized at different spectral resolutions, either retaining the original pitch cues or discarding them entirely. The addition of pitch significantly improved restoration only at the six-band spectral resolution. However, overall intelligibility of interrupted speech improved both with the addition of pitch and with increases in spectral resolution. This improvement may be due to better discrimination of speech segments from the filler noise, better grouping of speech segments together, and/or better bottom-up cues available in the speech segments.
Affiliation(s)
- Jeanne Clarke
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, P.O. Box 30.001, BB21, 9700 RB Groningen, The Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, P.O. Box 30.001, BB21, 9700 RB Groningen, The Netherlands
- Etienne Gaudrain
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, P.O. Box 30.001, BB21, 9700 RB Groningen, The Netherlands
|
37
|
McMurray B, Farris-Trimble A, Seedorff M, Rigler H. The Effect of Residual Acoustic Hearing and Adaptation to Uncertainty on Speech Perception in Cochlear Implant Users: Evidence From Eye-Tracking. Ear Hear 2016; 37:e37-51. [PMID: 26317298 PMCID: PMC4717908 DOI: 10.1097/aud.0000000000000207] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/25/2022]
Abstract
OBJECTIVES While outcomes with cochlear implants (CIs) are generally good, performance can be fragile. The authors examined two factors that are crucial for good CI performance. First, while there is a clear benefit for adding residual acoustic hearing to CI stimulation (typically in low frequencies), it is unclear whether this contributes directly to phonetic categorization. Thus, the authors examined perception of voicing (which uses low-frequency acoustic cues) and fricative place of articulation (s/∫, which does not) in CI users with and without residual acoustic hearing. Second, in speech categorization experiments, CI users typically show shallower identification functions. These are typically interpreted as deriving from noisy encoding of the signal. However, psycholinguistic work suggests shallow slopes may also be a useful way to adapt to uncertainty. The authors thus employed an eye-tracking paradigm to examine this in CI users. DESIGN Participants were 30 CI users (with a variety of configurations) and 22 age-matched normal hearing (NH) controls. Participants heard tokens from six b/p and six s/∫ continua (eight steps) spanning real words (e.g., beach/peach, sip/ship). Participants selected the picture corresponding to the word they heard from a screen containing four items (a b-, p-, s- and ∫-initial item). Eye movements to each object were monitored as a measure of how strongly they were considering each interpretation in the moments leading up to their final percept. RESULTS Mouse-click results (analogous to phoneme identification) for voicing showed a shallower slope for CI users than NH listeners, but no differences between CI users with and without residual acoustic hearing. For fricatives, CI users also showed a shallower slope, but unexpectedly, acoustic + electric listeners showed an even shallower slope. Eye movements showed a gradient response to fine-grained acoustic differences for all listeners. 
Even considering only trials in which a participant clicked "b" (for example), and accounting for variation in the category boundary, participants made more looks to the competitor ("p") as the voice onset time neared the boundary. CI users showed a similar pattern, but looked to the competitor more than NH listeners did, and this excess did not vary across continuum steps. CONCLUSION Residual acoustic hearing did not improve voicing categorization, suggesting it may not help identify these phonetic cues. The fact that acoustic + electric users showed poorer performance on fricatives was unexpected, as they usually show a benefit in standardized perception measures, and as sibilants contain little energy in the low-frequency (acoustic) range. The authors hypothesize that these listeners may overweight acoustic input and have problems when it is not available (as in fricatives). Thus, the benefit (or cost) of acoustic hearing for phonetic categorization may be complex. Eye movements suggest that in both CI and NH listeners, phoneme categorization is not a process of mapping continuous cues to discrete categories. Rather, listeners preserve gradiency as a way to deal with uncertainty. CI listeners appear to adapt to their implant (in part) by amplifying competitor activation to preserve flexibility in the face of potential misperceptions.
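The "shallower identification function" contrast at the heart of this study can be illustrated by fitting a two-parameter logistic to response proportions across the continuum. The data below are invented for illustration only (not from the study); the slope parameter `k` is what differs between groups.

```python
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, x0, k):
    # x0 = category boundary; k = slope of the identification function
    return 1.0 / (1.0 + np.exp(-k * (x - x0)))

steps = np.arange(1, 9)  # 8-step b/p (voice onset time) continuum
# Hypothetical proportions of "p" responses, chosen for illustration:
nh = np.array([0.02, 0.03, 0.05, 0.20, 0.80, 0.95, 0.97, 0.98])  # steep, NH-like
ci = np.array([0.10, 0.15, 0.25, 0.40, 0.60, 0.75, 0.85, 0.90])  # shallow, CI-like

(x0_nh, k_nh), _ = curve_fit(logistic, steps, nh, p0=[4.5, 1.0])
(x0_ci, k_ci), _ = curve_fit(logistic, steps, ci, p0=[4.5, 1.0])
# A smaller fitted k means a shallower identification function,
# i.e., more gradient responding near the category boundary.
```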
Affiliation(s)
- Bob McMurray
- Departments of Psychological and Brain Sciences, Communication Sciences and Disorders, and Linguistics, University of Iowa, Iowa City, Iowa, USA
- Ashley Farris-Trimble
- Department of Linguistics, Simon Fraser University, Burnaby, British Columbia, Canada
- Michael Seedorff
- Department of Biostatistics, University of Iowa, Iowa City, Iowa, USA
- Hannah Rigler
- Department of Psychological and Brain Sciences, University of Iowa, Iowa City, Iowa, USA

38
Melodic pitch perception and lexical tone perception in Mandarin-speaking cochlear implant users. Ear Hear 2015; 36:102-10. [PMID: 25099401 DOI: 10.1097/aud.0000000000000086] [Citation(s) in RCA: 29] [Impact Index Per Article: 3.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022]
Abstract
OBJECTIVES To examine the relationship between lexical tone perception and melodic pitch perception in Mandarin-speaking cochlear implant (CI) users and to investigate the influence of previous acoustic hearing on CI users' speech and music perception. DESIGN Lexical tone perception and melodic contour identification (MCI) were measured in 21 prelingual and 11 postlingual young (aged 6-26 years) Mandarin-speaking CI users. Lexical tone recognition was measured for four tonal patterns: tone 1 (flat F0), tone 2 (rising F0), tone 3 (falling-rising F0), and tone 4 (falling F0). MCI was measured using nine five-note melodic patterns that contained changes in pitch contour, as well as different semitone spacing between notes. RESULTS Lexical tone recognition was generally good (overall mean = 81% correct), and there was no significant difference between subject groups. MCI performance was generally poor (mean = 23% correct). MCI performance was significantly better for postlingual (mean = 32% correct) than for prelingual CI participants (mean = 18% correct). After correcting for outliers, there was no significant correlation between lexical tone recognition and MCI performance for prelingual or postlingual CI participants. Age at deafness was significantly correlated with MCI performance only for postlingual participants. CI experience was significantly correlated with MCI performance for both prelingual and postlingual participants. Duration of deafness was significantly correlated with tone recognition only for prelingual participants. CONCLUSIONS Despite the prevalence of pitch cues in Mandarin, the present CI participants had great difficulty perceiving melodic pitch. The availability of amplitude and duration cues in lexical tones most likely compensated for the poor pitch perception observed with these CI listeners. Previous acoustic hearing experience seemed to benefit postlingual CI users' melodic pitch perception. 
Longer CI experience was associated with better MCI performance for both subject groups, suggesting that CI users' music perception may improve as they gain experience with their device.
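The melodic contour identification (MCI) stimuli described above (nine five-note patterns differing in contour and semitone spacing) can be sketched roughly as follows. The exact contour inventory and parameter values here are assumptions for illustration, not the study's stimulus specification.

```python
import numpy as np

# One plausible set of nine 5-note contours, expressed as multiples of
# the semitone spacing (assumed, not taken from the study):
CONTOURS = {
    "rising":         [0, 1, 2, 3, 4],
    "falling":        [4, 3, 2, 1, 0],
    "flat":           [0, 0, 0, 0, 0],
    "rising-flat":    [0, 1, 2, 2, 2],
    "falling-flat":   [4, 3, 2, 2, 2],
    "rising-falling": [0, 1, 2, 1, 0],
    "falling-rising": [2, 1, 0, 1, 2],
    "flat-rising":    [0, 0, 0, 1, 2],
    "flat-falling":   [2, 2, 2, 1, 0],
}

def contour_freqs(name, base_hz=220.0, semitone_spacing=2):
    """Note frequencies (Hz) for one contour: each unit step moves
    `semitone_spacing` semitones from the base note."""
    steps = np.array(CONTOURS[name]) * semitone_spacing
    return base_hz * 2.0 ** (steps / 12.0)
```

Narrowing `semitone_spacing` makes the contours harder to discriminate, which is how the task probes the limits of CI pitch resolution.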
39
Chatterjee M, Zion DJ, Deroche ML, Burianek BA, Limb CJ, Goren AP, Kulkarni AM, Christensen JA. Voice emotion recognition by cochlear-implanted children and their normally-hearing peers. Hear Res 2015; 322:151-62. [PMID: 25448167 PMCID: PMC4615700 DOI: 10.1016/j.heares.2014.10.003] [Citation(s) in RCA: 88] [Impact Index Per Article: 9.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/17/2014] [Revised: 08/27/2014] [Accepted: 10/06/2014] [Indexed: 10/24/2022]
Abstract
Despite their remarkable success in bringing spoken language to hearing-impaired listeners, the signal transmitted through cochlear implants (CIs) remains impoverished in spectro-temporal fine structure. As a consequence, pitch-dominant information, such as voice emotion, is diminished. For young children, the ability to correctly identify the mood/intent of the speaker (which may not always be visible in their facial expression) is an important aspect of social and linguistic development. Previous work in the field has shown that children with cochlear implants (cCI) have significant deficits in voice emotion recognition relative to their normally hearing peers (cNH). Here, we report on voice emotion recognition by a cohort of 36 school-aged cCI. Additionally, we provide, for the first time, a comparison of their performance to that of cNH and NH adults (aNH) listening to CI simulations of the same stimuli. We also provide comparisons to the performance of adult listeners with CIs (aCI), most of whom learned language primarily through normal acoustic hearing. Results indicate that, despite strong variability, on average, cCI perform similarly to their adult counterparts; that both groups' mean performance is similar to aNHs' performance with 8-channel noise-vocoded speech; and that cNH achieve excellent scores in voice emotion recognition with full-spectrum speech but, on average, show significantly poorer scores than aNH with 8-channel noise-vocoded speech. A strong developmental effect was observed in the cNH with noise-vocoded speech in this task. These results point to the considerable benefit obtained by cochlear-implanted children from their devices, but also underscore the need for further research and development in this important and neglected area. This article is part of a Special Issue.
Affiliation(s)
- Monita Chatterjee
- Auditory Prostheses & Perception Lab., Boys Town National Research Hospital, 555 N 30th St, Omaha, NE 68131, USA
- Danielle J Zion
- Department of Hearing & Speech Sciences, University of Maryland, 0100 LeFrak Hall, College Park, MD 20742, USA
- Mickael L Deroche
- Department of Otolaryngology, Johns Hopkins University School of Medicine, 818 Ross Research Building, 720 Rutland Avenue, Baltimore, MD, USA
- Brooke A Burianek
- Auditory Prostheses & Perception Lab., Boys Town National Research Hospital, 555 N 30th St, Omaha, NE 68131, USA
- Charles J Limb
- Department of Otolaryngology, Johns Hopkins University School of Medicine, 818 Ross Research Building, 720 Rutland Avenue, Baltimore, MD, USA
- Alison P Goren
- Auditory Prostheses & Perception Lab., Boys Town National Research Hospital, 555 N 30th St, Omaha, NE 68131, USA; Department of Hearing & Speech Sciences, University of Maryland, 0100 LeFrak Hall, College Park, MD 20742, USA
- Aditya M Kulkarni
- Auditory Prostheses & Perception Lab., Boys Town National Research Hospital, 555 N 30th St, Omaha, NE 68131, USA
- Julie A Christensen
- Auditory Prostheses & Perception Lab., Boys Town National Research Hospital, 555 N 30th St, Omaha, NE 68131, USA

40
Chatterjee M, Kulkarni AM. Sensitivity to pulse phase duration in cochlear implant listeners: effects of stimulation mode. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2014; 136:829-40. [PMID: 25096116 PMCID: PMC4144184 DOI: 10.1121/1.4884773] [Citation(s) in RCA: 11] [Impact Index Per Article: 1.1] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 08/31/2013] [Revised: 06/09/2014] [Accepted: 06/11/2014] [Indexed: 05/23/2023]
Abstract
The objective of this study was to investigate charge integration at threshold by cochlear implant listeners using pulse train stimuli in different stimulation modes (monopolar, bipolar, tripolar). The results partially confirmed and extended the findings of previous studies conducted in animal models showing that charge integration depends on the stimulation mode. The primary overall finding was that threshold vs pulse phase duration functions had steeper slopes in monopolar mode and shallower slopes in more spatially restricted modes. While the result was clear-cut in the eight users of the Cochlear Corporation(TM) device, the findings with the six users of the Advanced Bionics(TM) device who participated were less consistent. It is likely that different stimulation modes excite different neuronal populations and/or sites of excitation on the same neuron (e.g., peripheral process vs central axon). These differences may influence not only charge integration but possibly also temporal dynamics at suprathreshold levels and with more speech-relevant stimuli. Given the present interest in focused stimulation modes, these results have implications for cochlear implant speech processor design and for the protocols used to map acoustic amplitude to electric stimulation parameters.
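The slope comparison reported above (threshold as a function of pulse phase duration, per stimulation mode) can be illustrated with a minimal sketch. All numbers below are hypothetical and chosen only to mimic the qualitative pattern of steeper monopolar slopes; they are not the study's data.

```python
import numpy as np

def threshold_slope(phase_dur_us, threshold_db):
    """Slope of threshold (dB) regressed on log10(phase duration).
    A steeper (more negative) slope means thresholds fall faster as the
    pulse phase lengthens, i.e., stronger charge integration."""
    return np.polyfit(np.log10(phase_dur_us), np.asarray(threshold_db, float), 1)[0]

# Hypothetical threshold data (dB, arbitrary reference) -- not from the study:
durs = np.array([25.0, 50.0, 100.0, 200.0, 400.0])  # microseconds per phase
mp = np.array([50.0, 45.0, 40.0, 35.0, 30.0])       # monopolar: steeper function
tp = np.array([60.0, 57.0, 54.0, 51.0, 48.0])       # tripolar: shallower function
```

With these numbers, `threshold_slope(durs, mp)` is more negative than `threshold_slope(durs, tp)`, matching the reported mode effect.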
Affiliation(s)
- Monita Chatterjee
- Boys Town National Research Hospital, 555 N 30th Street, Omaha, Nebraska 68131
- Aditya M Kulkarni
- Boys Town National Research Hospital, 555 N 30th Street, Omaha, Nebraska 68131