1
Yu Q, Li H, Li S, Tang P. Prosodic and Visual Cues Facilitate Irony Comprehension by Mandarin-Speaking Children With Cochlear Implants. J Speech Lang Hear Res 2024;67:2172-2190. [PMID: 38820233] [DOI: 10.1044/2024_jslhr-23-00701]
Abstract
PURPOSE This study investigated irony comprehension by Mandarin-speaking children with cochlear implants, focusing on how prosodic and visual cues contribute to comprehension and whether second-order Theory of Mind is required to use these cues. METHOD We tested 52 Mandarin-speaking children with cochlear implants (aged 3-7 years) and 52 age- and gender-matched children with normal hearing. All children completed a Theory of Mind test and a story comprehension test. Ironic stories were presented in three conditions, each providing different cues: (a) context only, (b) context and prosody, and (c) context, prosody, and visual cues. Accuracy of story understanding was compared across the three conditions to examine the role of prosodic and visual cues. RESULTS Compared to the context-only condition, the additional prosodic and visual cues each improved the accuracy of irony comprehension for children with cochlear implants, as they did for their normal-hearing peers. Furthermore, these improvements were observed for all children, regardless of whether they passed the second-order Theory of Mind test. CONCLUSIONS This study is the first to demonstrate that prosodic and visual cues benefit irony comprehension, without reliance on second-order Theory of Mind, in Mandarin-speaking children with cochlear implants. These findings suggest that prosodic and visual cues could be incorporated into intervention strategies to promote irony comprehension.
Affiliation(s)
- Qianxi Yu
- School of Foreign Studies, Nanjing University of Science and Technology, China
- Honglan Li
- School of Foreign Studies, Nanjing University of Science and Technology, China
- Shanpeng Li
- School of Foreign Studies, Nanjing University of Science and Technology, China
- Ping Tang
- School of Foreign Studies, Nanjing University of Science and Technology, China
2
Yüksel M, Sarlik E, Çiprut A. Emotions and Psychological Mechanisms of Listening to Music in Cochlear Implant Recipients. Ear Hear 2023;44:1451-1463. [PMID: 37280743] [DOI: 10.1097/aud.0000000000001388]
Abstract
OBJECTIVES Music is a multidimensional phenomenon and is classified by its arousal properties, emotional quality, and structural characteristics. Although structural features of music (i.e., pitch, timbre, and tempo) and music emotion recognition in cochlear implant (CI) recipients are popular research topics, music-evoked emotions and the related psychological mechanisms, which reflect both the individual and social context of music, are largely ignored. Understanding music-evoked emotions (the "what") and the related mechanisms (the "why") can help professionals and CI recipients better comprehend the impact of music on CI recipients' daily lives. Therefore, the purpose of this study was to evaluate these aspects in CI recipients and compare the findings to those of normal-hearing (NH) controls. DESIGN This study included 50 CI recipients with diverse auditory experiences: prelingually deafened (deafened at or before 6 years of age) and early implanted (N = 21), prelingually deafened and late implanted (implanted at or after 12 years of age; N = 13), and postlingually deafened (N = 16), as well as 50 age-matched NH controls. All participants completed the same survey, which included 28 emotions and 10 mechanisms (brainstem reflex, rhythmic entrainment, evaluative conditioning, contagion, visual imagery, episodic memory, musical expectancy, aesthetic judgment, cognitive appraisal, and lyrics). Data were presented in detail for the CI groups and compared between CI groups and between CI and NH groups. RESULTS In the CI group, principal component analysis yielded five emotion factors explaining 63.4% of the total variance: anxiety and anger, happiness and pride, sadness and pain, sympathy and tenderness, and serenity and satisfaction. Positive emotions such as happiness, tranquility, love, joy, and trust were ranked as most often experienced in all groups, whereas negative and complex emotions such as guilt, fear, anger, and anxiety ranked lowest. The CI group ranked lyrics and rhythmic entrainment highest among the emotion mechanisms, and there was a statistically significant group difference in the episodic memory mechanism, on which the prelingually deafened, early implanted group scored lowest. CONCLUSION Our findings indicate that music can evoke similar emotions in CI recipients with diverse auditory experiences as it does in NH individuals. However, prelingually deafened, early implanted individuals lack autobiographical memories associated with music, which affects the feelings music evokes. In addition, the prominence of rhythmic entrainment and lyrics as mechanisms of music-elicited emotion suggests that rehabilitation programs should pay particular attention to these cues.
Affiliation(s)
- Mustafa Yüksel
- Ankara Medipol University School of Health Sciences, Department of Speech and Language Therapy, Ankara, Turkey
- Esra Sarlik
- Marmara University Institute of Health Sciences, Audiology and Speech Disorders Program, Istanbul, Turkey
- Ayça Çiprut
- Marmara University Faculty of Medicine, Department of Audiology, Istanbul, Turkey
3
Cartocci G, Inguscio BMS, Giorgi A, Vozzi A, Leone CA, Grassia R, Di Nardo W, Di Cesare T, Fetoni AR, Freni F, Ciodaro F, Galletti F, Albera R, Canale A, Piccioni LO, Babiloni F. Music in noise recognition: An EEG study of listening effort in cochlear implant users and normal hearing controls. PLoS One 2023;18:e0288461. [PMID: 37561758] [PMCID: PMC10414671] [DOI: 10.1371/journal.pone.0288461]
Abstract
Despite the plethora of studies investigating listening effort and the substantial research on music perception by cochlear implant (CI) users, the influence of background noise on music processing has not previously been investigated. Whereas listening effort is typically assessed with speech-in-noise recognition tasks, the aim of the present study was to investigate listening effort during an emotional categorization task on musical pieces presented with different levels of background noise. In addition to participants' ratings and performance, listening effort was examined using EEG features known to be involved in this phenomenon, namely alpha activity in parietal areas and in the left inferior frontal gyrus (IFG), which includes Broca's area. Results showed that CI users performed worse than normal-hearing (NH) controls in recognizing the emotional content of the stimuli. Furthermore, when the alpha activity in the signal-to-noise ratio (SNR) 5 and SNR 10 conditions was baseline-corrected by subtracting the activity in the Quiet condition (ideally removing the emotional content of the music and isolating the difficulty due to the SNRs), CI users showed higher levels of parietal alpha activity, and of activity in the right-hemisphere homologue of the left IFG (EEG channel F8), than NH controls. Finally, these results provide novel evidence of a particular sensitivity of F8 to SNR-related listening effort in music.
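The baseline-correction contrast described in this abstract (alpha-band power in an SNR condition minus alpha-band power in the Quiet condition) can be sketched numerically. This is a toy illustration, not the study's EEG pipeline: the sampling rate, epoch length, channel, and signal amplitudes below are hypothetical, and a real analysis would involve artifact rejection and proper spectral estimation.

```python
import numpy as np

def alpha_power(epoch, fs, band=(8.0, 12.0)):
    """Mean spectral power of a single-channel EEG epoch in the alpha band."""
    power = np.abs(np.fft.rfft(epoch)) ** 2 / len(epoch)
    freqs = np.fft.rfftfreq(len(epoch), 1 / fs)
    in_band = (freqs >= band[0]) & (freqs <= band[1])
    return power[in_band].mean()

# hypothetical 4 s epochs from one channel (e.g. F8), fs = 250 Hz:
# white noise plus a 10 Hz alpha component that is stronger under noise
fs = 250
n = 4 * fs
rng = np.random.default_rng(1)
t = np.arange(n) / fs
quiet = rng.normal(size=n) + 0.5 * np.sin(2 * np.pi * 10 * t)
snr5 = rng.normal(size=n) + 0.9 * np.sin(2 * np.pi * 10 * t)

# the contrast described in the abstract: condition minus Quiet,
# isolating SNR-related difficulty from the music's emotional content
effort_index = alpha_power(snr5, fs) - alpha_power(quiet, fs)
```

A positive `effort_index` would correspond to the elevated listening-effort alpha the study reports for CI users.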
Affiliation(s)
- Giulia Cartocci
- Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- BrainSigns ltd, Rome, Italy
- Andrea Giorgi
- Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- BrainSigns ltd, Rome, Italy
- Carlo Antonio Leone
- Department of Otolaryngology Head-Neck Surgery, Monaldi Hospital, Naples, Italy
- Rosa Grassia
- Department of Otolaryngology Head-Neck Surgery, Monaldi Hospital, Naples, Italy
- Walter Di Nardo
- Institute of Otorhinolaryngology, Catholic University of Sacred Heart, Fondazione Policlinico "A. Gemelli" IRCCS, Rome, Italy
- Tiziana Di Cesare
- Institute of Otorhinolaryngology, Catholic University of Sacred Heart, Fondazione Policlinico "A. Gemelli" IRCCS, Rome, Italy
- Anna Rita Fetoni
- Institute of Otorhinolaryngology, Catholic University of Sacred Heart, Fondazione Policlinico "A. Gemelli" IRCCS, Rome, Italy
- Francesco Freni
- Department of Otorhinolaryngology, University of Messina, Messina, Italy
- Francesco Ciodaro
- Department of Otorhinolaryngology, University of Messina, Messina, Italy
- Francesco Galletti
- Department of Otorhinolaryngology, University of Messina, Messina, Italy
- Roberto Albera
- Department of Surgical Sciences, University of Turin, Turin, Italy
- Andrea Canale
- Department of Surgical Sciences, University of Turin, Turin, Italy
- Lucia Oriella Piccioni
- Department of Otolaryngology-Head and Neck Surgery, IRCCS San Raffaele Scientific Institute, Milan, Italy
- Fabio Babiloni
- Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- BrainSigns ltd, Rome, Italy
4
Lee J, Han JH, Lee HJ. Development of Novel Musical Stimuli to Investigate the Perception of Musical Emotions in Individuals With Hearing Loss. J Korean Med Sci 2023;38:e82. [PMID: 36974396] [PMCID: PMC10042730] [DOI: 10.3346/jkms.2023.38.e82]
Abstract
BACKGROUND Many studies have examined the perception of musical emotion using excerpts of familiar music with strongly expressed emotions to classify emotional choices. However, using familiar music to study musical emotions in people with acquired hearing loss can produce ambiguous results, as it is unclear whether the perceived emotion arises from previous experience or from the current musical stimuli. To overcome this limitation, we developed new musical stimuli to study emotional perception without the effects of episodic memory. METHODS A musician was instructed to compose five melodies with pitches evenly distributed around 1 kHz. The melodies were created to express the emotions happy, sad, angry, tender, and neutral. To evaluate whether these melodies expressed the intended emotions, two methods were applied. First, we classified the expressed emotions of the melodies using a genetic algorithm-based k-nearest neighbors classifier with features selected from 60 musical features. Second, forty-four people with normal hearing (NH) participated in an online survey on the emotional perception of music, based on dimensional and discrete approaches, to evaluate the stimulus set. RESULTS Twenty-four selected musical features classified the intended emotions with an accuracy of 76%. The online survey in the NH group showed that the intended emotions were selected significantly more often than the others. K-means clustering analysis revealed that the arousal and valence ratings of the melodies corresponded to the representative quadrants of interest. Additionally, the applicability of the stimuli was tested in four individuals with high-frequency hearing loss. CONCLUSION In individuals with NH, the musical stimuli classified the expressed emotions with high accuracy. These results confirm that the stimulus set can be used to study perceived emotion in music and demonstrate its validity independent of innate musical biases such as those due to episodic memory. Furthermore, the stimuli could be helpful for further study of perceived musical emotion in people with hearing loss because the pitch is controlled for each emotion.
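The classification step this abstract describes, genetic-algorithm feature selection feeding a k-nearest-neighbors classifier scored by cross-validation, can be sketched as follows. This is a minimal illustration on synthetic stand-in data, not the authors' implementation: the 60 musical features are replaced by 10 hypothetical ones, and the GA parameters and data are arbitrary.

```python
import numpy as np

rng = np.random.default_rng(0)

def knn_predict(X_train, y_train, x, k=3):
    # Euclidean distance from x to every training melody, majority vote of k nearest
    d = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(d)[:k]]
    vals, counts = np.unique(nearest, return_counts=True)
    return vals[np.argmax(counts)]

def loo_accuracy(X, y, mask, k=3):
    # leave-one-out accuracy of k-NN on the selected feature subset
    cols = np.flatnonzero(mask)
    if cols.size == 0:
        return 0.0
    Xs = X[:, cols]
    hits = sum(
        knn_predict(np.delete(Xs, i, axis=0), np.delete(y, i), Xs[i], k) == y[i]
        for i in range(len(y))
    )
    return hits / len(y)

def ga_select(X, y, pop=16, gens=10, p_mut=0.05):
    # genetic algorithm over boolean feature masks; fitness = LOO accuracy
    n_feat = X.shape[1]
    population = rng.random((pop, n_feat)) < 0.5
    for _ in range(gens):
        fitness = np.array([loo_accuracy(X, y, m) for m in population])
        parents = population[np.argsort(fitness)[::-1][: pop // 2]]
        children = []
        for _ in range(pop - len(parents)):
            a, b = parents[rng.integers(len(parents), size=2)]
            cut = int(rng.integers(1, n_feat))       # one-point crossover
            child = np.concatenate([a[:cut], b[cut:]])
            child ^= rng.random(n_feat) < p_mut      # bit-flip mutation
            children.append(child)
        population = np.vstack([parents, children])
    fitness = np.array([loo_accuracy(X, y, m) for m in population])
    best = int(np.argmax(fitness))
    return population[best], float(fitness[best])

# synthetic stand-in data: 40 "melodies", 4 emotion classes, 10 candidate
# features of which only feature 0 is made informative
y = np.repeat(np.arange(4), 10)
X = rng.normal(size=(40, 10))
X[:, 0] += 3 * y
mask, acc = ga_select(X, y)
```

On this toy data the GA should converge on a mask that includes the informative feature and yields a well-above-chance leave-one-out accuracy, mirroring the role the 24 selected features play in the study.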
Affiliation(s)
- Jihyun Lee
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Korea
- Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, Korea
- Ji-Hye Han
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Korea
- Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, Korea
- Hyo-Jeong Lee
- Laboratory of Brain & Cognitive Sciences for Convergence Medicine, Hallym University College of Medicine, Anyang, Korea
- Ear and Interaction Center, Doheun Institute for Digital Innovation in Medicine (D.I.D.I.M.), Hallym University Medical Center, Anyang, Korea
- Department of Otorhinolaryngology, Hallym University College of Medicine, Chuncheon, Korea
5
Harding EE, Gaudrain E, Hrycyk IJ, Harris RL, Tillmann B, Maat B, Free RH, Başkent D. Musical Emotion Categorization with Vocoders of Varying Temporal and Spectral Content. Trends Hear 2023;27:23312165221141142. [PMID: 36628512] [PMCID: PMC9837297] [DOI: 10.1177/23312165221141142]
Abstract
While previous research investigating music emotion perception by cochlear implant (CI) users observed that temporal cues informing tempo largely convey emotional arousal (relaxing/stimulating), it remains unclear how other properties of the temporal content may contribute to the transmission of arousal features. Moreover, while detailed spectral information related to pitch and harmony in music (often not well perceived by CI users) reportedly conveys emotional valence (positive/negative), it remains unclear how the quality of the spectral content contributes to valence perception. Therefore, the current study used vocoders to vary the temporal and spectral content of music and tested music emotion categorization (joy, fear, serenity, sadness) in 23 normal-hearing participants. Vocoders were varied with two carriers (sinewave or noise, primarily modulating temporal information) and two filter orders (low or high, primarily modulating spectral information). Results indicated that emotion categorization was above chance in vocoded excerpts but poorer than in a non-vocoded control condition. Among the vocoded conditions, better temporal content (sinewave carriers) improved emotion categorization with a large effect, while better spectral content (high filter order) improved it with a small effect. Arousal features were transmitted comparably in non-vocoded and vocoded conditions, indicating that even degraded temporal content successfully conveyed emotional arousal. Valence feature transmission declined steeply in vocoded conditions, revealing that valence perception was difficult with both lower and higher spectral content. The reliance on arousal information for emotion categorization of vocoded music suggests that efforts to refine temporal cues in the CI signal may immediately benefit music emotion perception in CI users.
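The vocoder manipulation described above can be illustrated with a minimal channel vocoder: split the signal into frequency bands, extract each band's temporal envelope, and re-impose it on a sine or noise carrier. This is a simplified sketch under stated assumptions, not the study's implementation; it uses brick-wall FFT filters in place of the graded filter orders the study varied, and a broadband (rather than band-limited) noise carrier.

```python
import numpy as np

def vocode(signal, fs, n_bands=8, carrier="sine", lo=100.0, hi=8000.0):
    """Minimal channel vocoder: log-spaced bands, envelope extraction,
    envelope re-imposed on a sine or noise carrier per band."""
    n = len(signal)
    spectrum = np.fft.rfft(signal)
    freqs = np.fft.rfftfreq(n, 1 / fs)
    edges = np.geomspace(lo, hi, n_bands + 1)   # log-spaced band edges
    rng = np.random.default_rng(0)
    out = np.zeros(n)
    for b in range(n_bands):
        # brick-wall band-pass via FFT masking (a simplification; the study's
        # low vs. high filter orders would shape these band slopes instead)
        band_mask = (freqs >= edges[b]) & (freqs < edges[b + 1])
        band = np.fft.irfft(spectrum * band_mask, n)
        # envelope: rectify, then smooth with a ~16 ms moving average
        win = max(1, int(0.016 * fs))
        env = np.convolve(np.abs(band), np.ones(win) / win, mode="same")
        fc = np.sqrt(edges[b] * edges[b + 1])   # geometric band center
        if carrier == "sine":
            carr = np.sin(2 * np.pi * fc * np.arange(n) / fs)
        else:
            # broadband noise carrier (real vocoders band-limit this per band)
            carr = rng.uniform(-1.0, 1.0, n)
        out += env * carr
    return out / np.max(np.abs(out))

# demo: vocode a 440 Hz tone with 8 bands and a sinewave carrier
fs = 16000
t = np.arange(fs) / fs
tone = np.sin(2 * np.pi * 440 * t)
v = vocode(tone, fs, carrier="sine")
```

Swapping `carrier="sine"` for `carrier="noise"` reproduces, in spirit, the study's temporal-content contrast between the two carrier types.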
Affiliation(s)
- Eleanor E. Harding
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Graduate School of Medical Sciences, Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- Prins Claus Conservatoire, Hanze University of Applied Sciences, Groningen, The Netherlands
- Correspondence: Eleanor E. Harding, Department of Otorhinolaryngology, University Medical Center Groningen, Hanzeplein 1, 9713 GZ Groningen, The Netherlands
- Etienne Gaudrain
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Lyon Neuroscience Research Center, CNRS UMR5292, Inserm U1028, Université Lyon 1, Université de Saint-Etienne, Lyon, France
- Imke J. Hrycyk
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Graduate School of Medical Sciences, Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- Robert L. Harris
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Prins Claus Conservatoire, Hanze University of Applied Sciences, Groningen, The Netherlands
- Barbara Tillmann
- Lyon Neuroscience Research Center, CNRS UMR5292, Inserm U1028, Université Lyon 1, Université de Saint-Etienne, Lyon, France
- Bert Maat
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Graduate School of Medical Sciences, Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- Cochlear Implant Center Northern Netherlands, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Rolien H. Free
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Graduate School of Medical Sciences, Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
- Cochlear Implant Center Northern Netherlands, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Graduate School of Medical Sciences, Research School of Behavioural and Cognitive Neurosciences, University of Groningen, Groningen, The Netherlands
6
Leterme G, Guigou C, Guenser G, Bigand E, Bozorg Grayeli A. Effect of Sound Coding Strategies on Music Perception with a Cochlear Implant. J Clin Med 2022;11:4425. [PMID: 35956042] [PMCID: PMC9369156] [DOI: 10.3390/jcm11154425]
Abstract
The goal of this study was to evaluate music perception in cochlear implantees with two different sound processing strategies. Methods: Twenty-one patients with unilateral or bilateral cochlear implants (Oticon Medical®) were included. A music trial evaluated emotions (sad versus happy, based on tempo and/or minor versus major mode) with three tests of increasing difficulty, followed by a test evaluating the perception of musical dissonances (scored out of 10). A novel sound processing strategy reducing spectral distortions (CrystalisXDP, Oticon Medical) was compared to the standard strategy (main peak interleaved sampling). Each strategy was used for one week before the music trial. Results: The total music score was higher with CrystalisXDP than with the standard strategy. Nine patients (21%) categorized music above chance level (>5) on test 3, based only on mode, with either strategy; in this group, CrystalisXDP improved performance. For dissonance detection, 17 patients (40%) scored above chance level with either strategy; in this group, CrystalisXDP did not improve performance. Conclusions: CrystalisXDP, which enhances spectral cues, seemed to improve the categorization of happy versus sad music. Spectral cues could contribute to musical emotion perception in cochlear implantees and improve the quality of music perception.
Affiliation(s)
- Gaëlle Leterme
- Otolaryngology, Head and Neck Surgery Department, Dijon University Hospital, 21000 Dijon, France
- ImVia Research Laboratory, Bourgogne-Franche-Comté University, 21000 Dijon, France
- Caroline Guigou
- Otolaryngology, Head and Neck Surgery Department, Dijon University Hospital, 21000 Dijon, France
- ImVia Research Laboratory, Bourgogne-Franche-Comté University, 21000 Dijon, France
- Correspondence: Tel.: +33-615718531
- Geoffrey Guenser
- Otolaryngology, Head and Neck Surgery Department, Dijon University Hospital, 21000 Dijon, France
- Emmanuel Bigand
- LEAD Research Laboratory, CNRS UMR 5022, Bourgogne-Franche-Comté University, 21000 Dijon, France
- Alexis Bozorg Grayeli
- Otolaryngology, Head and Neck Surgery Department, Dijon University Hospital, 21000 Dijon, France
- ImVia Research Laboratory, Bourgogne-Franche-Comté University, 21000 Dijon, France
7
Lin Y, Wu C, Limb CJ, Lu H, Feng IJ, Peng S, Deroche MLD, Chatterjee M. Voice emotion recognition by Mandarin-speaking pediatric cochlear implant users in Taiwan. Laryngoscope Investig Otolaryngol 2022;7:250-258. [PMID: 35155805] [PMCID: PMC8823186] [DOI: 10.1002/lio2.732]
Abstract
OBJECTIVES To explore the effects of obligatory lexical tone learning on speech emotion recognition, and cross-cultural differences between the United States and Taiwan in speech emotion understanding by children with cochlear implants. METHODS This cohort study enrolled 60 Mandarin-speaking, school-aged children with cochlear implants (cCI) who underwent cochlear implantation before 5 years of age and 53 normal-hearing children (cNH) in Taiwan. Emotion recognition and sensitivity to fundamental frequency (F0) changes were examined in these school-aged cNH and cCI (6-17 years old) at a tertiary referral center. RESULTS The mean emotion recognition score of the cNH group was significantly better than that of the cCI group. Female speakers' vocal emotions were more easily recognized than male speakers' emotions. There was a significant effect of age at test on voice emotion recognition performance. The average score of cCI with full-spectrum speech was close to that of cNH with eight-channel narrowband vocoder speech. The average voice emotion recognition performance across speakers for cCI could be predicted by their sensitivity to changes in F0. CONCLUSIONS Better pitch discrimination ability was associated with better voice emotion recognition in Mandarin-speaking cCI. Beyond F0 cues, cCI likely adapt their voice emotion recognition by relying more on secondary cues such as intensity and duration. Although cross-cultural differences exist in the acoustic features of vocal emotion, Mandarin-speaking cCI and their English-speaking cCI peers both showed a positive effect of age at test on emotion recognition, suggesting a learning effect and brain plasticity. Therefore, further device/processor development to improve the presentation of pitch information, and more rehabilitative efforts, are needed to improve the transmission and perception of voice emotion in Mandarin. LEVEL OF EVIDENCE 3.
Affiliation(s)
- Yung-Song Lin
- Department of Otolaryngology, Chi Mei Medical Center, Tainan, Taiwan
- Department of Otolaryngology, School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan
- Che-Ming Wu
- Department of Otorhinolaryngology, New Taipei Municipal TuCheng Hospital (built and operated by Chang Gung Medical Foundation), New Taipei City, Taiwan
- Department of Otorhinolaryngology, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- School of Medicine, Chang Gung University, Taoyuan, Taiwan
- Charles J. Limb
- School of Medicine, University of California San Francisco, San Francisco, California, USA
- Hui-Ping Lu
- Center of Speech and Hearing, Department of Otolaryngology, Chi Mei Medical Center, Tainan, Taiwan
- I. Jung Feng
- Institute of Precision Medicine, National Sun Yat-sen University, Kaohsiung, Taiwan
- Shu-Chen Peng
- Center for Devices and Radiological Health, United States Food and Drug Administration, Silver Spring, Maryland, USA
8
Tawdrous MM, D'Onofrio KL, Gifford R, Picou EM. Emotional Responses to Non-Speech Sounds for Hearing-aid and Bimodal Cochlear-Implant Listeners. Trends Hear 2022;26:23312165221083091. [PMID: 35435773] [PMCID: PMC9019384] [DOI: 10.1177/23312165221083091]
Abstract
The purpose of this project was to evaluate differences in emotional responses to non-speech sounds across listener groups and device configurations. Three groups of adults participated: 1) listeners with normal hearing and no history of device use; 2) hearing aid candidates with or without hearing aid experience; and 3) bimodal cochlear-implant listeners with at least 6 months of implant use. Participants (n = 18 per group) rated the valence and arousal of pleasant, neutral, and unpleasant non-speech sounds. Listeners with normal hearing rated sounds without hearing devices. Hearing aid candidates rated sounds while using one or two hearing aids. Bimodal cochlear-implant listeners rated sounds while using a hearing aid alone, a cochlear implant alone, or both simultaneously. Analysis revealed significant group differences in ratings of pleasant and unpleasant stimuli; ratings from hearing aid candidates and bimodal cochlear-implant listeners were less extreme (less pleasant and less unpleasant) than ratings from listeners with normal hearing. Hearing aid candidates' ratings were similar with one and two hearing aids. Bimodal cochlear-implant listeners' valence ratings were higher (more pleasant) in the configuration without a hearing aid (implant only) than in the two configurations with a hearing aid (alone or with an implant). These data support further investigation into hearing device optimization to improve emotional responses to non-speech sounds in adults with hearing loss.
Affiliation(s)
- Marina M. Tawdrous
- School of Communication Sciences and Disorders, Western University, 1151 Richmond St, London, ON, N6A 3K7
- Kristen L. D'Onofrio
- Department of Hearing and Speech Sciences, Graduate School, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- Department of Hearing and Speech Sciences, School of Medicine, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- René Gifford
- Department of Hearing and Speech Sciences, Graduate School, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- Department of Hearing and Speech Sciences, School of Medicine, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- Erin M. Picou
- Department of Hearing and Speech Sciences, Graduate School, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- Department of Hearing and Speech Sciences, School of Medicine, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
9
Sparreboom M, Ausili S, Agterberg MJH, Mylanus EAM. Bimodal Fitting and Bilateral Cochlear Implants in Children With Significant Residual Hearing: The Impact of Asymmetry in Spatial Release of Masking on Localization. J Speech Lang Hear Res 2021;64:4030-4043. [PMID: 34525311] [DOI: 10.1044/2021_jslhr-20-00720]
Abstract
Purpose This study aimed to gain more insight into the primary auditory abilities of children with significant residual hearing, in order to improve decision making when choosing between bimodal fitting and sequential bilateral cochlear implantation. Method Sound localization abilities, spatial release of masking, and fundamental frequency perception were tested. Nine children with bimodal fitting and seven children with sequential bilateral cochlear implants were included in the study. As a reference, 15 children with normal hearing and two children with simultaneous bilateral cochlear implants were included. Results On all outcome measures, the implanted children performed worse than the normal-hearing children. For high-frequency localization, children with sequential bilateral cochlear implants performed significantly better than children with bimodal fitting. The left-right asymmetry in spatial release of masking was significant compared to children with normal hearing. When the implant ear was hindered by noise, bimodally fitted children obtained significantly lower spatial release of masking than when the hearing aid ear was hindered by noise. Overall, the larger the left-right asymmetry in spatial release of masking, the poorer the localization skills. No significant differences in fundamental frequency perception were found between the implant groups. Conclusions The data hint at an advantage of bilateral implantation over bimodal fitting. The extent of asymmetry in spatial release of masking is a promising tool for deciding whether to continue with the hearing aid or to provide a second cochlear implant in children with significant residual hearing.
Affiliation(s)
- Marloes Sparreboom
- Department of Otorhinolaryngology-Head and Neck Surgery, Hearing and Implants, and Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Centre, Nijmegen, the Netherlands
- Martijn J H Agterberg
- Department of Otorhinolaryngology-Head and Neck Surgery, Hearing and Implants, and Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Centre, Nijmegen, the Netherlands
- Department of Biophysics and Donders Institute for Brain, Cognition and Behaviour, Radboud University, Nijmegen, the Netherlands
- Emmanuel A M Mylanus
- Department of Otorhinolaryngology-Head and Neck Surgery, Hearing and Implants, and Donders Institute for Brain, Cognition and Behaviour, Radboud University Medical Centre, Nijmegen, the Netherlands
10
Rapid Assessment of Non-Verbal Auditory Perception in Normal-Hearing Participants and Cochlear Implant Users. J Clin Med 2021;10:2093. [PMID: 34068067] [PMCID: PMC8152499] [DOI: 10.3390/jcm10102093]
Abstract
In the case of hearing loss, cochlear implants (CIs) allow for the restoration of hearing. Despite the advantages of CIs for speech perception, CI users still complain about their poor perception of their auditory environment. Aiming to assess non-verbal auditory perception in CI users, we developed five listening tests. These tests measure pitch change detection, pitch direction identification, pitch short-term memory, auditory stream segregation, and emotional prosody recognition, along with perceived intensity ratings. To test the potential benefit of visual cues for pitch processing, half of the trials in the three pitch tests included visual indications for performing the task. We tested 10 normal-hearing (NH) participants, with material presented as original and vocoded sounds, and 10 post-lingually deaf CI users. With the vocoded sounds, the NH participants had reduced scores for the detection of small pitch differences, and reduced emotion recognition and streaming abilities compared to the original sounds. Similarly, the CI users had deficits for small differences in the pitch change detection task and in emotion recognition, as well as a decreased streaming capacity. Overall, this assessment allows for the rapid detection of specific patterns of non-verbal auditory perception deficits. The current findings also open new perspectives on how to enhance pitch perception capacities using visual cues.
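The "vocoded sounds" presented to the normal-hearing listeners above are a standard CI simulation: the signal's slowly varying band envelopes are kept while its fine structure is replaced by noise. A minimal sketch of such a noise vocoder is below; the channel count, band edges, and envelope smoothing window are illustrative assumptions, not the parameters used in the study.

```python
import numpy as np

def bandpass_fft(x, fs, lo, hi):
    """Zero-phase FFT band-pass: keep only bins with lo <= f < hi."""
    spec = np.fft.rfft(x)
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    spec[(freqs < lo) | (freqs >= hi)] = 0.0
    return np.fft.irfft(spec, n=len(x))

def noise_vocode(x, fs, n_channels=8, f_lo=100.0, f_hi=6000.0, env_ms=20.0):
    """Crude noise vocoder: split the signal into log-spaced bands,
    extract each band's envelope, and use it to modulate band-limited
    noise. Temporal fine structure is discarded, as in a CI simulation."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    win = np.ones(max(1, int(fs * env_ms / 1000.0)))
    win /= win.sum()
    rng = np.random.default_rng(0)
    out = np.zeros(len(x))
    for lo, hi in zip(edges[:-1], edges[1:]):
        band = bandpass_fft(x, fs, lo, hi)
        env = np.convolve(np.abs(band), win, mode="same")  # rectify + smooth
        carrier = bandpass_fft(rng.standard_normal(len(x)), fs, lo, hi)
        out += env * carrier
    return out
```

Applied to a pure tone, the output preserves the tone's amplitude contour but not its fine structure, which is one reason small pitch differences become hard to detect under vocoding.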
11
D'Onofrio KL, Gifford RH. Bimodal Benefit for Music Perception: Effect of Acoustic Bandwidth. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2021; 64:1341-1353. [PMID: 33784471 PMCID: PMC8608177 DOI: 10.1044/2020_jslhr-20-00390] [Citation(s) in RCA: 5] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 07/06/2020] [Revised: 10/15/2020] [Accepted: 12/04/2020] [Indexed: 05/29/2023]
Abstract
Purpose The challenges associated with cochlear implant (CI)-mediated listening are well documented; however, they can be mitigated through the provision of aided acoustic hearing in the contralateral ear-a configuration termed bimodal hearing. This study extends previous literature to examine the effect of acoustic bandwidth in the non-CI ear for music perception. The primary aim was to determine the minimum and optimum acoustic bandwidth necessary to obtain bimodal benefit for music perception and speech perception. Method Participants included 12 adult bimodal listeners and 12 adult control listeners with normal hearing. Music perception was assessed via measures of timbre perception and subjective sound quality of real-world music samples. Speech perception was assessed via monosyllabic word recognition in quiet. Acoustic stimuli were presented to the non-CI ear in the following filter conditions: < 125, < 250, < 500, and < 750 Hz, and wideband (full bandwidth). Results Generally, performance for all stimuli improved with increasing acoustic bandwidth; however, the bandwidth that is both minimally and optimally beneficial may be dependent upon stimulus type. On average, music sound quality required wideband amplification, whereas speech recognition with a male talker in quiet required a narrower acoustic bandwidth (< 250 Hz) for significant benefit. Still, average speech recognition performance continued to improve with increasing bandwidth. Conclusion Further research is warranted to examine optimal acoustic bandwidth for additional stimulus types; however, these findings indicate that wideband amplification is most appropriate for speech and music perception in individuals with bimodal hearing.
Affiliation(s)
- Kristen L D'Onofrio
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
- René H Gifford
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, TN
12
D'Onofrio KL, Caldwell M, Limb C, Smith S, Kessler DM, Gifford RH. Musical Emotion Perception in Bimodal Patients: Relative Weighting of Musical Mode and Tempo Cues. Front Neurosci 2020; 14:114. [PMID: 32174809 PMCID: PMC7054459 DOI: 10.3389/fnins.2020.00114] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2019] [Accepted: 01/29/2020] [Indexed: 11/13/2022] Open
Abstract
Several cues are used to convey musical emotion, the two primary being musical mode and musical tempo. Specifically, major and minor modes tend to be associated with positive and negative valence, respectively, and songs at fast tempi have been associated with more positive valence compared to songs at slow tempi (Balkwill and Thompson, 1999; Webster and Weir, 2005). In Experiment I, we examined the relative weighting of musical tempo and musical mode among adult cochlear implant (CI) users combining electric and contralateral acoustic stimulation, or "bimodal" hearing. Our primary hypothesis was that bimodal listeners would utilize both tempo and mode cues in their musical emotion judgments in a manner similar to normal-hearing listeners. Our secondary hypothesis was that low-frequency (LF) spectral resolution in the non-implanted ear, as quantified via psychophysical tuning curves (PTCs) at 262 and 440 Hz, would be significantly correlated with degree of bimodal benefit for musical emotion perception. In Experiment II, we investigated across-channel spectral resolution using a spectral modulation detection (SMD) task and neural representation of temporal fine structure via the frequency following response (FFR) for a 170-ms /da/ stimulus. Results indicate that CI-alone performance was driven almost exclusively by tempo cues, whereas bimodal listening demonstrated use of both tempo and mode. Additionally, bimodal benefit for musical emotion perception may be correlated with spectral resolution in the non-implanted ear via SMD, as well as neural representation of F0 amplitude via FFR - though further study with a larger sample size is warranted. Thus, contralateral acoustic hearing can offer significant benefit for musical emotion perception, and the degree of benefit may be dependent upon spectral resolution of the non-implanted ear.
Affiliation(s)
- Kristen L D'Onofrio
- Cochlear Implant Research Laboratory, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
- Charles Limb
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, United States
- Spencer Smith
- Department of Communication Sciences and Disorders, The University of Texas at Austin, Austin, TX, United States
- David M Kessler
- Cochlear Implant Research Laboratory, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
- René H Gifford
- Cochlear Implant Research Laboratory, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
13
Steel MM, Polonenko MJ, Giannantonio S, Hopyan T, Papsin BC, Gordon KA. Music Perception Testing Reveals Advantages and Continued Challenges for Children Using Bilateral Cochlear Implants. Front Psychol 2020; 10:3015. [PMID: 32038391 PMCID: PMC6985588 DOI: 10.3389/fpsyg.2019.03015] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 06/28/2019] [Accepted: 12/19/2019] [Indexed: 11/25/2022] Open
Abstract
A modified version of the child’s Montreal Battery of Evaluation of Amusia (cMBEA) was used to assess music perception in children using bilateral cochlear implants (CIs). Our overall aim was to promote better performance by children with CIs on the cMBEA by modifying the complement of instruments used in the test and adding pieces transposed in frequency. The 10 test trials played by piano were removed, and two high-frequency and two low-frequency trials were added to each of five subtests (20 additional trials). The modified cMBEA was completed by 14 children using bilateral CIs and 23 peers with normal hearing. Results were compared with performance on the original version of the cMBEA previously reported in groups of similarly aged children: two groups with normal hearing (n = 23: Hopyan et al., 2012; n = 16: Polonenko et al., 2017), one group using bilateral CIs (n = 26: Polonenko et al., 2017), one group using bimodal (hearing aid and CI) devices (n = 8: Polonenko et al., 2017), and one group using a unilateral CI (n = 23: Hopyan et al., 2012). Children with normal hearing had high scores on the modified version of the cMBEA, and their scores did not differ significantly from those of children with normal hearing who completed the original cMBEA. Children with CIs showed no significant improvement in scores on the modified cMBEA compared to peers with CIs who completed the original version of the test. The group with bilateral CIs who completed the modified cMBEA showed a trend toward better abilities to remember music compared to children listening through a unilateral CI, but effects were smaller than in previous cohorts of children with bilateral CIs and bimodal devices who completed the original cMBEA. Results confirmed that music perception changes with the type of instrument and is better for music transposed to higher rather than lower frequencies for children with normal hearing, but not for children using bilateral CIs. Overall, the modified version of the cMBEA revealed that modifications to music do not overcome the limitations of the CI to improve music perception for children.
Affiliation(s)
- Morrison M Steel
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada
- Melissa J Polonenko
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada
- Sara Giannantonio
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada
- Talar Hopyan
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada
- Blake C Papsin
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada; Department of Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, ON, Canada
- Karen A Gordon
- Archie's Cochlear Implant Laboratory, The Hospital for Sick Children, Toronto, ON, Canada; Department of Otolaryngology-Head and Neck Surgery, University of Toronto, Toronto, ON, Canada
14
Midya V, Valla J, Balasubramanian H, Mathur A, Singh NC. Cultural differences in the use of acoustic cues for musical emotion experience. PLoS One 2019; 14:e0222380. [PMID: 31518379 PMCID: PMC6743780 DOI: 10.1371/journal.pone.0222380] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/05/2019] [Accepted: 08/29/2019] [Indexed: 01/21/2023] Open
Abstract
Does music penetrate cultural differences with its ability to evoke emotion? The ragas of Hindustani music are specific sequences of notes that elicit various emotions: happy, romantic, devotion, calm, angry, longing, tension and sad. They can be presented in two modes, alaap and gat, which differ in rhythm, but match in tonality. Participants from Indian and Non-Indian cultures (N = 144 and 112, respectively) rated twenty-four pieces of Hindustani ragas on eight dimensions of emotion, in a free response task. Of the 192 between-group comparisons, ratings differed in only 9% of the instances, showing universality across multiple musical emotions. Robust regression analyses and machine learning methods revealed tonality best explained emotion ratings for Indian participants whereas rhythm was the primary predictor in Non-Indian listeners. Our results provide compelling evidence for universality in emotions in the auditory domain in the realm of musical emotion, driven by distinct acoustic features that depend on listeners’ cultural backgrounds.
Affiliation(s)
- Vishal Midya
- Language, Literacy, and Music Laboratory, National Brain Research Centre, Manesar, Haryana, India; Division of Biostatistics and Bioinformatics, Department of Public Health, Penn State College of Medicine, Pennsylvania State University, Hershey, Pennsylvania, United States of America
- Jeffrey Valla
- Language, Literacy, and Music Laboratory, National Brain Research Centre, Manesar, Haryana, India
- Avantika Mathur
- Language, Literacy, and Music Laboratory, National Brain Research Centre, Manesar, Haryana, India
- Nandini Chatterjee Singh
- Language, Literacy, and Music Laboratory, National Brain Research Centre, Manesar, Haryana, India
15
Yüksel M, Meredith MA, Rubinstein JT. Effects of Low Frequency Residual Hearing on Music Perception and Psychoacoustic Abilities in Pediatric Cochlear Implant Recipients. Front Neurosci 2019; 13:924. [PMID: 31551687 PMCID: PMC6733978 DOI: 10.3389/fnins.2019.00924] [Citation(s) in RCA: 20] [Impact Index Per Article: 4.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/24/2019] [Accepted: 08/19/2019] [Indexed: 12/02/2022] Open
Abstract
Studies have demonstrated the benefits of low frequency residual hearing in music perception and for psychoacoustic abilities of adult cochlear implant (CI) users, but less is known about these effects in the pediatric group. Understanding the contribution of combined electric and acoustic stimulation in this group can help to gain a better perspective on decisions regarding bilateral implantation. We evaluated the performance of six unilaterally implanted children between 9 and 13 years of age with contralateral residual hearing using the Clinical Assessment of Music Perception (CAMP), spectral ripple discrimination (SRD), and temporal modulation transfer function (TMTF) tests and compared findings with previous research. Our study sample performed similarly to normal hearing subjects in pitch direction discrimination (0.81 semitones) and performed well above typical CI users in melody recognition (43.37%). The performance difference was less in timbre recognition (48.61%), SRD (1.47 ripple/octave), and TMTF for four modulation frequencies. These findings suggest that the combination of low frequency acoustic hearing with the broader frequency range of electric hearing can help to increase clinical CI benefit in pediatric users and decisions regarding second-side implantation should consider these factors.
Affiliation(s)
- Mustafa Yüksel
- Audiology and Speech Disorders Program, Institute of Health Sciences, Marmara University, Istanbul, Turkey
- Margaret A Meredith
- Childhood Communication Center, Seattle Children's Hospital, Seattle, WA, United States
- Jay T Rubinstein
- Virginia Merrill Bloedel Hearing Research Center, Department of Otolaryngology - Head and Neck Surgery, University of Washington, Seattle, WA, United States
16
Gordon K, Kral A. Animal and human studies on developmental monaural hearing loss. Hear Res 2019; 380:60-74. [DOI: 10.1016/j.heares.2019.05.011] [Citation(s) in RCA: 27] [Impact Index Per Article: 5.4] [Reference Citation Analysis] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 11/14/2018] [Revised: 05/29/2019] [Accepted: 05/30/2019] [Indexed: 11/26/2022]
17
Polonenko MJ, Papsin BC, Gordon KA. Limiting asymmetric hearing improves benefits of bilateral hearing in children using cochlear implants. Sci Rep 2018; 8:13201. [PMID: 30181590 PMCID: PMC6123397 DOI: 10.1038/s41598-018-31546-8] [Citation(s) in RCA: 36] [Impact Index Per Article: 6.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/10/2018] [Accepted: 08/17/2018] [Indexed: 11/08/2022] Open
Abstract
Neurodevelopmental changes occur with asymmetric hearing loss, limiting binaural/spatial hearing and putting children at risk for social and educational challenges. These deficits may be mitigated by providing bilateral hearing in children through auditory prostheses. Effects on speech perception and spatial hearing were measured in a large cohort of >450 children who were deaf and used bilateral cochlear implants or bimodal devices (one cochlear implant and a contralateral hearing aid). Results revealed an advantage of bilateral over unilateral device use but this advantage decreased as hearing in the two ears became increasingly asymmetric. Delayed implantation of an ear with severe to profound deafness allowed asymmetric hearing, creating aural preference for the better hearing ear. These findings indicate that bilateral input with the most appropriate device for each ear should be provided early and without delay during development.
Affiliation(s)
- Melissa Jane Polonenko
- Institute of Medical Science, The University of Toronto, Toronto, ON, M5S 1A8, Canada
- Neurosciences and Mental Health, The Hospital for Sick Children, Toronto, ON, M5G 1X8, Canada
- Blake Croll Papsin
- Institute of Medical Science, The University of Toronto, Toronto, ON, M5S 1A8, Canada
- Department of Otolaryngology - Head & Neck Surgery, The University of Toronto, Toronto, ON, M5G 2N2, Canada
- Otolaryngology - Head & Neck Surgery, The Hospital for Sick Children, Toronto, ON, M5G 1X8, Canada
- Karen Ann Gordon
- Institute of Medical Science, The University of Toronto, Toronto, ON, M5S 1A8, Canada
- Neurosciences and Mental Health, The Hospital for Sick Children, Toronto, ON, M5G 1X8, Canada
- Department of Otolaryngology - Head & Neck Surgery, The University of Toronto, Toronto, ON, M5G 2N2, Canada
- Otolaryngology - Head & Neck Surgery, The Hospital for Sick Children, Toronto, ON, M5G 1X8, Canada
18
Waaramaa T, Kukkonen T, Mykkänen S, Geneid A. Vocal Emotion Identification by Children Using Cochlear Implants, Relations to Voice Quality, and Musical Interests. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2018; 61:973-985. [PMID: 29587304 DOI: 10.1044/2017_jslhr-h-17-0054] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.7] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/09/2017] [Accepted: 12/11/2017] [Indexed: 06/08/2023]
Abstract
PURPOSE Listening tests for emotion identification were conducted with 8- to 17-year-old children with hearing impairment (HI; N = 25) using cochlear implants, and their 12-year-old peers with normal hearing (N = 18). The study examined the impact of musical interests and the acoustics of the stimuli on correct emotion identification. METHOD The children completed a questionnaire providing their background information and noting their musical interests. They then listened to vocal stimuli produced by actors (N = 5), consisting of nonsense sentences and prolonged vowels ([a:], [i:], and [u:]; N = 32) expressing excitement, anger, contentment, and fear. The children's task was to identify the emotions they heard in the sample by choosing from the provided options. The acoustics of the samples were studied using Praat software, and statistics were examined using SPSS 24 software. RESULTS The children with HI identified the emotions with 57% accuracy and the normal-hearing children with 75% accuracy. Female listeners were more accurate than male listeners in both groups. Those who were implanted before the age of 3 years identified emotions more accurately than the others (p < .05). No connection between the child's audiogram and correct identification was observed. Musical interests and voice quality parameters were found to be related to correct identification. CONCLUSIONS Implantation age, musical interests, and voice quality tended to have an impact on correct emotion identification. Thus, in developing cochlear implants, it may be worth paying attention to the acoustic structures of vocal emotional expressions, especially the third formant frequency (F3). Supporting the musical interests of children with HI may help their emotional development and improve their social lives.
Affiliation(s)
- Teija Waaramaa
- Tampere Research Centre for Journalism, Media and Communication (COMET), Faculty of Communication Sciences, University of Tampere, Finland
- Tarja Kukkonen
- Faculty of Social Sciences/Logopedics, University of Tampere, Finland
- Sari Mykkänen
- Hearing Centre, Tampere University Hospital, Finland
- Ahmed Geneid
- Department of Otorhinolaryngology and Phoniatrics-Head and Neck Surgery, University of Helsinki and Helsinki University Hospital, Finland
19
Yamazaki H, Easwar V, Polonenko MJ, Jiwani S, Wong DDE, Papsin BC, Gordon KA. Cortical hemispheric asymmetries are present at young ages and further develop into adolescence. Hum Brain Mapp 2017; 39:941-954. [PMID: 29134751 DOI: 10.1002/hbm.23893] [Citation(s) in RCA: 17] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 11/20/2016] [Revised: 10/07/2017] [Accepted: 11/08/2017] [Indexed: 02/01/2023] Open
Abstract
Specialization of the auditory cortices for pure tone listening may develop with age. In adults, the right hemisphere dominates when listening to pure tones and music; we thus hypothesized that (a) asymmetric function between auditory cortices increases with age and (b) this development is specific to tonal rather than broadband/non-tonal stimuli. Cortical responses to tone-bursts and broadband click-trains were recorded by multichannel electroencephalography in young children (5.1 ± 0.8 years old) and adolescents (15.2 ± 1.7 years old) with normal hearing. Peak dipole moments indicating activity strength in right and left auditory cortices were calculated using the Time Restricted, Artefact and Coherence source Suppression (TRACS) beamformer. Monaural click-trains and tone-bursts in young children evoked a dominant response in the contralateral right cortex by left ear stimulation and, similarly, a contralateral left cortex response to click-trains in the right ear. Responses to tone-bursts in the right ear were more bilateral. In adolescents, peak activity dominated in the right cortex in most conditions (tone-bursts from either ear and to clicks from the left ear). Bilateral activity was evoked by right ear click stimulation. Thus, right hemispheric specialization for monaural tonal stimuli begins in children as young as 5 years of age and becomes more prominent by adolescence. These changes were marked by consistent dipole moments in the right auditory cortex with age in contrast to decreases in dipole activity in all other stimulus conditions. Together, the findings reveal increasingly asymmetric function for the two auditory cortices, potentially to support greater cortical specialization with development into adolescence.
Affiliation(s)
- Hiroshi Yamazaki
- Archie's Cochlear Implant Laboratory, Department of Otolaryngology, The Hospital for Sick Children, Toronto, Ontario, Canada
- Vijayalakshmi Easwar
- Archie's Cochlear Implant Laboratory, Department of Otolaryngology, The Hospital for Sick Children, Toronto, Ontario, Canada
- Melissa Jane Polonenko
- Archie's Cochlear Implant Laboratory, Department of Otolaryngology, The Hospital for Sick Children, Toronto, Ontario, Canada; Institute of Medical Sciences, University of Toronto, Toronto, Ontario, Canada
- Salima Jiwani
- Archie's Cochlear Implant Laboratory, Department of Otolaryngology, The Hospital for Sick Children, Toronto, Ontario, Canada
- Daniel D E Wong
- Archie's Cochlear Implant Laboratory, Department of Otolaryngology, The Hospital for Sick Children, Toronto, Ontario, Canada
- Blake Croll Papsin
- Archie's Cochlear Implant Laboratory, Department of Otolaryngology, The Hospital for Sick Children, Toronto, Ontario, Canada; Department of Otolaryngology, University of Toronto, Toronto, Ontario, Canada
- Karen Ann Gordon
- Archie's Cochlear Implant Laboratory, Department of Otolaryngology, The Hospital for Sick Children, Toronto, Ontario, Canada; Department of Otolaryngology, University of Toronto, Toronto, Ontario, Canada; Institute of Medical Sciences, University of Toronto, Toronto, Ontario, Canada
20
Polonenko MJ, Papsin BC, Gordon KA. Delayed access to bilateral input alters cortical organization in children with asymmetric hearing. NEUROIMAGE-CLINICAL 2017; 17:415-425. [PMID: 29159054 PMCID: PMC5683809 DOI: 10.1016/j.nicl.2017.10.036] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/06/2017] [Revised: 10/25/2017] [Accepted: 10/31/2017] [Indexed: 11/19/2022]
Abstract
Bilateral hearing in early development protects auditory cortices from reorganizing to prefer the better ear. Yet, such protection could be disrupted by mismatched bilateral input in children with asymmetric hearing who require electric stimulation of the auditory nerve from a cochlear implant in their deaf ear and amplified acoustic sound from a hearing aid in their better ear (bimodal hearing). Cortical responses to bimodal stimulation were measured by electroencephalography in 34 bimodal users and 16 age-matched peers with normal hearing, and compared with the same measures previously reported for 28 age-matched bilateral implant users. Both auditory cortices increasingly favoured the better ear with delay to implanting the deaf ear; the time course mirrored that occurring with delay to bilateral implantation in unilateral implant users. Preference for the implanted ear tended to occur with ongoing implant use when hearing was poor in the non-implanted ear. Speech perception deteriorated with longer deprivation and poorer access to high-frequencies. Thus, cortical preference develops in children with asymmetric hearing but can be avoided by early provision of balanced bimodal stimulation. Although electric and acoustic stimulation differ, these inputs can work sympathetically when used bilaterally given sufficient hearing in the non-implanted ear.
Affiliation(s)
- Melissa Jane Polonenko
- Institute of Medical Sciences, University of Toronto, Toronto, ON M5S 1A8, Canada; Neurosciences & Mental Health, Hospital for Sick Children, Toronto, ON M5G 1X8, Canada
- Blake Croll Papsin
- Department of Otolaryngology - Head & Neck Surgery, University of Toronto, Toronto, ON M5G 2N2, Canada; Otolaryngology - Head & Neck Surgery, Hospital for Sick Children, Toronto, ON M5G 1X8, Canada
- Karen Ann Gordon
- Institute of Medical Sciences, University of Toronto, Toronto, ON M5S 1A8, Canada; Neurosciences & Mental Health, Hospital for Sick Children, Toronto, ON M5G 1X8, Canada; Department of Otolaryngology - Head & Neck Surgery, University of Toronto, Toronto, ON M5G 2N2, Canada; Otolaryngology - Head & Neck Surgery, Hospital for Sick Children, Toronto, ON M5G 1X8, Canada
21
Polonenko MJ, Giannantonio S, Papsin BC, Marsella P, Gordon KA. Music perception improves in children with bilateral cochlear implants or bimodal devices. THE JOURNAL OF THE ACOUSTICAL SOCIETY OF AMERICA 2017; 141:4494. [PMID: 28679263 DOI: 10.1121/1.4985123] [Citation(s) in RCA: 23] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/07/2023]
Abstract
The objectives of this study were to determine if music perception by pediatric cochlear implant users can be improved by (1) providing access to bilateral hearing through two cochlear implants or a cochlear implant and a contralateral hearing aid (bimodal users) and (2) any history of music training. The Montreal Battery of Evaluation of Musical Ability test was presented via soundfield to 26 bilateral cochlear implant users, 8 bimodal users and 16 children with normal hearing. Response accuracy and reaction time were recorded via an iPad application. Bilateral cochlear implant and bimodal users perceived musical characteristics less accurately and more slowly than children with normal hearing. Children who had music training were faster and more accurate, regardless of their hearing status. Reaction time on specific subtests decreased with age, years of musical training and, for implant users, better residual hearing. Despite effects of these factors on reaction time, bimodal and bilateral cochlear implant users' responses were less accurate than those of their normal hearing peers. This means children using bilateral cochlear implants and bimodal devices continue to experience challenges perceiving music that are related to hearing impairment and/or device limitations during development.
Affiliation(s)
- Melissa J Polonenko
- Archie's Cochlear Implant Laboratory, Department of Otolaryngology, The Hospital for Sick Children, Room 6D08, Toronto M5G 1X8, Canada
- Sara Giannantonio
- Audiology and Otosurgery Unit, Bambino Gesù Pediatric Hospital, Piazza di Sant'Onofrio 4, 00165, Rome, Italy
- Blake C Papsin
- Archie's Cochlear Implant Laboratory, Department of Otolaryngology, The Hospital for Sick Children, Room 6D08, Toronto M5G 1X8, Canada
- Pasquale Marsella
- Audiology and Otosurgery Unit, Bambino Gesù Pediatric Hospital, Piazza di Sant'Onofrio 4, 00165, Rome, Italy
- Karen A Gordon
- Archie's Cochlear Implant Laboratory, Department of Otolaryngology, The Hospital for Sick Children, Room 6D08, Toronto M5G 1X8, Canada
22
Jiam NT, Caldwell M, Deroche ML, Chatterjee M, Limb CJ. Voice emotion perception and production in cochlear implant users. Hear Res 2017; 352:30-39. [PMID: 28088500 DOI: 10.1016/j.heares.2017.01.006] [Citation(s) in RCA: 46] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/05/2016] [Revised: 12/14/2016] [Accepted: 01/06/2017] [Indexed: 10/20/2022]
Abstract
Voice emotion is a fundamental component of human social interaction and social development. Unfortunately, cochlear implant users are often forced to interface with highly degraded prosodic cues as a result of device constraints in extraction, processing, and transmission. As such, individuals with cochlear implants frequently demonstrate significant difficulty in recognizing voice emotions in comparison to their normal-hearing counterparts. Cochlear implant-mediated perception and production of voice emotion is an important but relatively understudied area of research. However, a rich understanding of voice emotion auditory processing offers opportunities to improve upon CI biomedical design and to develop training programs that benefit CI performance. In this review, we address the issues, current literature, and future directions for improved voice emotion processing in cochlear implant users.
Collapse
Affiliation(s)
- N T Jiam
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, School of Medicine, San Francisco, CA, USA
- M Caldwell
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, School of Medicine, San Francisco, CA, USA
- M L Deroche
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- M Chatterjee
- Auditory Prostheses and Perception Laboratory, Boys Town National Research Hospital, Omaha, NE, USA
- C J Limb
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, School of Medicine, San Francisco, CA, USA
23
Litovsky RY, Gordon K. Bilateral cochlear implants in children: Effects of auditory experience and deprivation on auditory perception. Hear Res 2016; 338:76-87. [PMID: 26828740 PMCID: PMC5647834 DOI: 10.1016/j.heares.2016.01.003] [Citation(s) in RCA: 70] [Impact Index Per Article: 8.8] [Received: 10/07/2015] [Revised: 01/07/2016] [Accepted: 01/11/2016] [Indexed: 11/29/2022]
Abstract
Spatial hearing skills are essential for children as they grow, learn, and play. These skills provide critical cues for determining the locations of sound sources in the environment and enable segregation of important sounds, such as speech, from background maskers or interferers. Spatial hearing depends on the availability of monaural and binaural cues; the latter result from integration of the inputs arriving at the two ears from sounds that vary in location. The binaural system has exquisite mechanisms for capturing differences between the ears in both time of arrival and intensity; the major cues vital for binaural hearing are thus interaural time differences (ITDs) and interaural level differences (ILDs). In children with normal hearing (NH), spatial hearing abilities are fairly well developed by age 4-5 years. In contrast, most children who are deaf and hear through cochlear implants (CIs) do not have the opportunity to experience normal, binaural acoustic hearing early in life; these children must function using auditory cues that are degraded in numerous stimulus features. In recent years, the number of children receiving bilateral CIs has increased notably, and evidence suggests that while two CIs help these children function better than a single CI, they generally perform worse than their NH peers. This paper reviews recent work on bilaterally implanted children, focusing on measures of spatial hearing, including sound localization, release from masking for speech understanding in noise, and binaural sensitivity using research processors. Data from behavioral and electrophysiological studies are included, with a focus on the recent work of the authors and their collaborators. The effects of auditory plasticity and deprivation on the emergence of binaural and spatial hearing are discussed, along with behavioral and electrophysiological evidence for reorganized processing. The consequences of both unilateral and bilateral auditory deprivation during development suggest that the issues surrounding the successes and limitations experienced by children receiving bilateral cochlear implants are highly complex. This article is part of a Special Issue.
Affiliation(s)
- Ruth Y Litovsky
- University of Wisconsin-Madison, 1500 Highland Ave, Madison, WI, 53705, United States.