1. Moore BCJ. The perception of emotion in music by people with hearing loss and people with cochlear implants. Philos Trans R Soc Lond B Biol Sci 2024; 379:20230258. PMID: 39005027. DOI: 10.1098/rstb.2023.0258. Open access.
Abstract
Music is an important part of life for many people. It can evoke a wide range of emotions, including sadness, happiness, anger, tension, relief and excitement. People with hearing loss and people with cochlear implants have reduced abilities to discriminate some of the features of musical sounds that may be involved in evoking emotions. This paper reviews these changes in perceptual abilities and describes how they affect the perception of emotion in music. For people with acquired partial hearing loss, it appears that the perception of emotion in music is almost normal, whereas congenital partial hearing loss is associated with impaired perception of music emotion. For people with cochlear implants, the ability to discriminate changes in fundamental frequency (associated with perceived pitch) is much worse than normal and musical harmony is hardly perceived. As a result, people with cochlear implants appear to judge emotion in music primarily using tempo and rhythm cues, and this limits the range of emotions that can be judged. This article is part of the theme issue 'Sensing and feeling: an integrative approach to sensory processing and emotional experience'.
Affiliation(s)
- Brian C J Moore
- Cambridge Hearing Group, Department of Psychology, University of Cambridge, Downing Street, Cambridge CB2 3EB, UK
2. de Jong TJ, van der Schroeff MP, Hakkesteegt M, Vroegop JL. Emotional prosodic expression of children with hearing aids or cochlear implants, rated by adults and peers. Int J Audiol 2024:1-8. PMID: 39126382. DOI: 10.1080/14992027.2024.2380098.
Abstract
OBJECTIVE The emotional prosodic expression potential of children with cochlear implants is poorer than that of normal-hearing peers; however, little is known about children with hearing aids. DESIGN This study was set up to generate a better understanding of hearing aid users' prosodic identifiability compared with cochlear implant users and peers without hearing loss. STUDY SAMPLE Emotional utterances of 75 Dutch-speaking children (7-12 years; 26 with hearing aids [CHA], 23 with cochlear implants [CCI], 26 with normal hearing [CNH]) were gathered. Utterances were evaluated blindly for resemblance to three emotions (happiness, sadness, anger) by normal-hearing Dutch listeners: 22 children and 9 adults (17-24 years). RESULTS Emotions were recognised more accurately by adults than by children. Both children and adults correctly judged happiness significantly less often in CCI than in CNH. In addition, adult listeners confused happiness with sadness more often in both CHA and CCI than in CNH. CONCLUSIONS Children and adults are able to evaluate the emotions expressed through speech by children with varying degrees of hearing loss, ranging from mild to profound, nearly as well as they can with typically hearing children. These favourable outcomes emphasise the resilience of children with hearing loss in developing effective emotional communication skills.
Affiliation(s)
- Tjeerd J de Jong
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus University Medical Center Rotterdam, Rotterdam, The Netherlands
- Marc P van der Schroeff
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus University Medical Center Rotterdam, Rotterdam, The Netherlands
- Marieke Hakkesteegt
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus University Medical Center Rotterdam, Rotterdam, The Netherlands
- Jantien L Vroegop
- Department of Otorhinolaryngology and Head and Neck Surgery, Erasmus University Medical Center Rotterdam, Rotterdam, The Netherlands
3. Taitelbaum-Swead R, Ben-David BM. The Role of Early Intact Auditory Experience on the Perception of Spoken Emotions, Comparing Prelingual to Postlingual Cochlear Implant Users. Ear Hear 2024:00003446-990000000-00312. PMID: 39004788. DOI: 10.1097/aud.0000000000001550.
Abstract
OBJECTIVES Cochlear implants (CIs) are remarkably effective, but have limitations regarding the transmission of the spectro-temporal fine structure of speech. This may impair the processing of spoken emotions, which involves the identification and integration of semantic and prosodic cues. Our previous study found differences in spoken-emotion processing between CI users with postlingual deafness (postlingual CI) and normal-hearing (NH) matched controls (age range, 19 to 65 years). Postlingual CI users over-relied on semantic information in incongruent trials (prosody and semantics present different emotions), but rated congruent trials (same emotion) similarly to controls. Postlingual CI users' intact early auditory experience may explain this pattern of results. The present study examined whether CI users without intact early auditory experience (prelingual CI) would generally perform worse on spoken-emotion processing than NH and postlingual CI users, and whether CI use would affect prosodic processing in both CI groups. First, we compared prelingual CI users with their NH controls. Second, we compared the results of the present study to those of our previous study (Taitelbaum-Swead et al. 2022; postlingual CI). DESIGN Fifteen prelingual CI users and 15 NH controls (age range, 18 to 31 years) listened to spoken sentences composed of different combinations (congruent and incongruent) of three discrete emotions (anger, happiness, sadness) and neutrality (performance baseline), presented in prosodic and semantic channels (Test for Rating of Emotions in Speech paradigm). Listeners were asked to rate (six-point scale) the extent to which each of the predefined emotions was conveyed by the sentence as a whole (integration of prosody and semantics), or to focus on only one channel (rating the target emotion [RTE]) and ignore the other (selective attention). In addition, all participants performed standard tests of speech perception.
Performance on the Test for Rating of Emotions in Speech was compared with the previous study (postlingual CI). RESULTS When asked to focus on one channel, semantics or prosody, both CI groups showed a decrease in prosodic RTE (compared with controls), but only the prelingual CI group showed a decrease in semantic RTE. When the task called for channel integration, both groups of CI users used semantic emotional information to a greater extent than their NH controls. Both groups of CI users rated sentences that did not present the target emotion higher than their NH controls, indicating some degree of confusion. However, only the prelingual CI group rated congruent sentences lower than their NH controls, suggesting reduced accumulation of information across channels. For prelingual CI users, individual differences in identification of monosyllabic words were significantly related to semantic identification and semantic-prosodic integration. CONCLUSIONS Taken together with our previous study, we found that the degradation of acoustic information by the CI impairs the processing of prosodic emotions, in both CI user groups. This distortion appears to lead CI users to over-rely on the semantic information when asked to integrate across channels. Early intact auditory exposure among CI users was found to be necessary for the effective identification of semantic emotions, as well as the accumulation of emotional information across the two channels. Results suggest that interventions for spoken-emotion processing should not ignore the onset of hearing loss.
Affiliation(s)
- Riki Taitelbaum-Swead
- Department of Communication Disorders, Speech Perception and Listening Effort Lab in the name of Prof. Mordechai Himelfarb, Ariel University, Israel
- Meuhedet Health Services, Tel Aviv, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- KITE Research Institute, Toronto Rehabilitation Institute-University Health Network, Toronto, Ontario, Canada
4. Valentin O, Lehmann A, Nguyen D, Paquette S. Integrating Emotion Perception in Rehabilitation Programs for Cochlear Implant Users: A Call for a More Comprehensive Approach. J Speech Lang Hear Res 2024; 67:1635-1642. PMID: 38619441. DOI: 10.1044/2024_jslhr-23-00660.
Abstract
PURPOSE Postoperative rehabilitation programs for cochlear implant (CI) recipients primarily emphasize enhancing speech perception. However, effective communication in everyday social interactions necessitates consideration of diverse verbal social cues to facilitate language comprehension. Failure to discern emotional expressions may lead to maladjusted social behavior, underscoring the importance of integrating the perception of social cues into rehabilitation initiatives to enhance CI users' well-being. After conventional rehabilitation, CI users demonstrate varying levels of emotion perception ability. This disparity notably impacts young CI users, whose emotion perception deficits can extend to social functioning, encompassing coping strategies and social competence, even when they rely on nonauditory cues such as facial expressions. Because emotion perception abilities generally decrease with age, acknowledging emotion perception impairments in aging CI users is also crucial, especially since a direct correlation between quality-of-life scores and vocal emotion recognition abilities has been observed in adult CI users. After briefly reviewing the scope of CI rehabilitation programs and summarizing the mounting evidence on CI users' emotion perception deficits and their impact, we present our recommendations for embedding emotional training in enriched and standardized evaluation/rehabilitation programs that can improve CI users' social integration and quality of life. CONCLUSIONS Evaluating all aspects of communication, including emotion perception, in CI rehabilitation programs is crucial because it ensures a comprehensive approach that enhances both speech comprehension and the emotional dimension of communication, potentially improving CI users' social interaction and overall well-being. The development of emotion perception training holds promise for CI users and for individuals grappling with other forms of hearing loss and sensory deficits. Ultimately, adopting such a comprehensive approach has the potential to significantly elevate the overall quality of life for a broad spectrum of patients.
Affiliation(s)
- Olivier Valentin
- International Laboratory for Brain, Music and Sound Research and Centre for Research on Brain, Language and Music (BRAMS and CRBLM), Montréal, Québec, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montréal, Québec, Canada
- Department of Otolaryngology-Head and Neck Surgery, Faculty of Medicine and Health Sciences, McGill University, Montréal, Québec, Canada
- Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Alexandre Lehmann
- International Laboratory for Brain, Music and Sound Research and Centre for Research on Brain, Language and Music (BRAMS and CRBLM), Montréal, Québec, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montréal, Québec, Canada
- Department of Otolaryngology-Head and Neck Surgery, Faculty of Medicine and Health Sciences, McGill University, Montréal, Québec, Canada
- Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Don Nguyen
- International Laboratory for Brain, Music and Sound Research and Centre for Research on Brain, Language and Music (BRAMS and CRBLM), Montréal, Québec, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montréal, Québec, Canada
- Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Sébastien Paquette
- International Laboratory for Brain, Music and Sound Research and Centre for Research on Brain, Language and Music (BRAMS and CRBLM), Montréal, Québec, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montréal, Québec, Canada
- Department of Psychology, Faculty of Arts and Science, Trent University, Peterborough, Ontario, Canada
5. Paquette S, Gouin S, Lehmann A. Improving emotion perception in cochlear implant users: insights from machine learning analysis of EEG signals. BMC Neurol 2024; 24:115. PMID: 38589815. PMCID: PMC11000345. DOI: 10.1186/s12883-024-03616-0. Open access.
Abstract
BACKGROUND Although cochlear implants can restore auditory input to deafferented auditory cortices, the quality of the sound signal transmitted to the brain is severely degraded, limiting functional outcomes in terms of speech perception and emotion perception. The latter deficit negatively impacts cochlear implant users' social integration and quality of life; however, emotion perception is not currently part of rehabilitation. Developing rehabilitation programs that incorporate emotional cognition requires a deeper understanding of cochlear implant users' residual emotion perception abilities. METHODS To identify the neural underpinnings of these residual abilities, we investigated whether machine learning techniques could identify emotion-specific patterns of neural activity in cochlear implant users. Using existing electroencephalography data from 22 cochlear implant users, we employed a random forest classifier to establish whether the auditory emotions (vocal and musical) presented to participants could be modeled and subsequently predicted from their brain responses. RESULTS Our findings suggest that consistent emotion-specific biomarkers exist in cochlear implant users, which could be used to develop effective rehabilitation programs incorporating emotion perception training. CONCLUSIONS This study highlights the potential of machine learning techniques to improve outcomes for cochlear implant users, particularly in terms of emotion perception.
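The classification approach described in the METHODS can be sketched as follows. This is a minimal illustration only: the synthetic feature matrix stands in for the study's actual EEG features, and the feature dimensions, label coding, and cross-validation setup are assumptions, not details from the paper.

```python
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

# Synthetic stand-in for per-trial EEG features (hypothetical layout:
# 22 participants x 30 trials = 660 trials, 64 features per trial).
rng = np.random.default_rng(0)
X = rng.normal(size=(660, 64))
y = rng.integers(0, 3, size=660)  # hypothetical coding: 0=happy, 1=sad, 2=fear
X += y[:, None] * 0.25            # inject a weak emotion-specific signal

# Random forest classifier evaluated with 5-fold cross-validation:
# above-chance accuracy would indicate emotion-specific neural patterns.
clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)
print(f"mean cross-validated accuracy: {scores.mean():.2f}")
```

In practice, the folds would need to respect participant boundaries (e.g., scikit-learn's `GroupKFold`) so that subject-specific signal does not leak into the test folds.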
Affiliation(s)
- Sebastien Paquette
- Psychology Department, Faculty of Arts and Science, Trent University, Peterborough, ON, Canada.
- Research Institute of the McGill University Health Centre (RI-MUHC), Montreal, QC, Canada.
- Centre for Research On Brain, Language, and Music (CRBLM), International Laboratory for Brain, Music & Sound Research (BRAMS), Psychology Department, University of Montreal, Montreal, QC, Canada.
- Samir Gouin
- Centre for Research On Brain, Language, and Music (CRBLM), International Laboratory for Brain, Music & Sound Research (BRAMS), Psychology Department, University of Montreal, Montreal, QC, Canada
- Faculty of Medicine and Health Sciences, Department of Otolaryngology-Head and Neck Surgery, McGill University, Montreal, QC, Canada
- Alexandre Lehmann
- Research Institute of the McGill University Health Centre (RI-MUHC), Montreal, QC, Canada
- Centre for Research On Brain, Language, and Music (CRBLM), International Laboratory for Brain, Music & Sound Research (BRAMS), Psychology Department, University of Montreal, Montreal, QC, Canada
- Faculty of Medicine and Health Sciences, Department of Otolaryngology-Head and Neck Surgery, McGill University, Montreal, QC, Canada
6. Chatterjee M, Gajre S, Kulkarni AM, Barrett KC, Limb CJ. Predictors of Emotional Prosody Identification by School-Age Children With Cochlear Implants and Their Peers With Normal Hearing. Ear Hear 2024; 45:411-424. PMID: 37811966. PMCID: PMC10922148. DOI: 10.1097/aud.0000000000001436.
Abstract
OBJECTIVES Children with cochlear implants (CIs) vary widely in their ability to identify emotions in speech. The causes of this variability are unknown, but this knowledge will be crucial if we are to design improvements in technological or rehabilitative interventions that are effective for individual patients. The objective of this study was to investigate how well factors such as age at implantation, duration of device experience (hearing age), nonverbal cognition, vocabulary, and socioeconomic status predict prosody-based emotion identification in children with CIs, and how the key predictors in this population compare to children with normal hearing who are listening to either normal emotional speech or to degraded speech. DESIGN We measured vocal emotion identification in 47 school-age CI recipients aged 7 to 19 years in a single-interval, 5-alternative forced-choice task. None of the participants had usable residual hearing based on parent/caregiver report. Stimuli consisted of a set of semantically emotion-neutral sentences that were recorded by 4 talkers in child-directed and adult-directed prosody corresponding to five emotions: neutral, angry, happy, sad, and scared. Twenty-one children with normal hearing were also tested in the same tasks; they listened to both original speech and to versions that had been noise-vocoded to simulate CI information processing. RESULTS Group comparison confirmed the expected deficit in CI participants' emotion identification relative to participants with normal hearing. Within the CI group, increasing hearing age (correlated with developmental age) and nonverbal cognition outcomes predicted emotion recognition scores. Stimulus-related factors such as talker and emotional category also influenced performance and were involved in interactions with hearing age and cognition. Age at implantation was not predictive of emotion identification. 
Unlike the CI participants, neither cognitive status nor vocabulary predicted outcomes in participants with normal hearing, whether listening to original speech or CI-simulated speech. Age-related improvements in outcomes were similar in the two groups. Participants with normal hearing listening to original speech showed the greatest differences in their scores for different talkers and emotions. Participants with normal hearing listening to CI-simulated speech showed significant deficits compared with their performance with original speech materials, and their scores also showed the least effect of talker- and emotion-based variability. CI participants showed more variation in their scores with different talkers and emotions than participants with normal hearing listening to CI-simulated speech, but less so than participants with normal hearing listening to original speech. CONCLUSIONS Taken together, these results confirm previous findings that pediatric CI recipients have deficits in emotion identification based on prosodic cues, but they improve with age and experience at a rate that is similar to peers with normal hearing. Unlike participants with normal hearing, nonverbal cognition played a significant role in CI listeners' emotion identification. Specifically, nonverbal cognition predicted the extent to which individual CI users could benefit from some talkers being more expressive of emotions than others, and this effect was greater in CI users who had less experience with their device (or were younger) than CI users who had more experience with their device (or were older). Thus, in young prelingually deaf children with CIs performing an emotional prosody identification task, cognitive resources may be harnessed to a greater degree than in older prelingually deaf children with CIs or than children with normal hearing.
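The noise-vocoding manipulation used for the normal-hearing listeners (degrading speech to approximate CI information processing) can be sketched roughly as follows. The channel count, band edges, and filter orders here are illustrative assumptions; the study's actual vocoder parameters are not given in the abstract.

```python
import numpy as np
from scipy.signal import butter, sosfilt, hilbert

def noise_vocode(x, fs, n_channels=8, f_lo=100.0, f_hi=7000.0):
    """Crude channel vocoder: split the signal into log-spaced bands, extract
    each band's amplitude envelope, and use it to modulate band-limited noise."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(x)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfilt(sos, x)                                # analysis band
        env = np.abs(hilbert(band))                           # amplitude envelope
        carrier = sosfilt(sos, rng.standard_normal(len(x)))   # noise carrier
        out += env * carrier                                  # envelope-modulated noise
    return out / (np.max(np.abs(out)) + 1e-12)                # peak-normalize

fs = 16000
t = np.arange(fs) / fs
signal = np.sin(2 * np.pi * 440.0 * t)  # stand-in for a recorded sentence
vocoded = noise_vocode(signal, fs)
```

The result preserves each band's temporal envelope while discarding fine spectral structure, which is why such stimuli strip out most pitch (F0) cues to emotion while leaving rhythm and intensity cues largely intact.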
Affiliation(s)
- Monita Chatterjee
- Auditory Prostheses & Perception Laboratory, Center for Hearing Research, Boys Town National Research Hospital, 555 N 30 St., Omaha, NE 68131, USA
- Shivani Gajre
- Auditory Prostheses & Perception Laboratory, Center for Hearing Research, Boys Town National Research Hospital, 555 N 30 St., Omaha, NE 68131, USA
- Aditya M Kulkarni
- Auditory Prostheses & Perception Laboratory, Center for Hearing Research, Boys Town National Research Hospital, 555 N 30 St., Omaha, NE 68131, USA
- Karen C Barrett
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, San Francisco, California, USA
- Charles J Limb
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, San Francisco, California, USA
7. Sendesen İ, Sendesen E, Yücel E. Evaluation of musical emotion perception and language development in children with cochlear implants. Int J Pediatr Otorhinolaryngol 2023; 175:111753. PMID: 37839291. DOI: 10.1016/j.ijporl.2023.111753.
Abstract
OBJECTIVES While the primary purpose of cochlear implant (CI) fitting is to improve individuals' receptive and expressive skills, musical emotion perception (MEP) is generally ignored. This study assesses the MEP and language skills (LS) of children using CIs. METHODS Twenty-six CI users and 26 matched healthy controls between the ages of 6 and 9 were included in the study. The Test of Language Development (TOLD) was administered to evaluate the participants' LS, and the Montreal Emotion Identification Test (MEI) was administered to evaluate their MEP. RESULTS MEI scores and all subtest scores of the TOLD were statistically significantly lower in the CI group. There was also a statistically significant, moderate correlation between the listening subtest of the TOLD and the MEI. CONCLUSIONS MEP and language skills are poorer in children with CIs. Although language skills are the primary target of CI rehabilitation, improving MEP should also be included in rehabilitation programs. The relationship between the MEI and the TOLD's listening subtest suggests that listening skills may be improved by attending to MEP, which is frequently ignored in rehabilitation programs.
Affiliation(s)
- İrem Sendesen
- Department of Audiology, Gazi University, Ankara, Turkey; Ankara University, Faculty of Medicine, Otolaryngology Department, Audiology, Speech, Balance Disorders Diagnosis and Rehabilitation Unit, Ankara, Turkey.
- Eser Sendesen
- Department of Audiology, Hacettepe University, Ankara, Turkey.
- Esra Yücel
- Department of Audiology, Hacettepe University, Ankara, Turkey.
8. Yüksel M, Sarlik E, Çiprut A. Emotions and Psychological Mechanisms of Listening to Music in Cochlear Implant Recipients. Ear Hear 2023; 44:1451-1463. PMID: 37280743. DOI: 10.1097/aud.0000000000001388.
Abstract
OBJECTIVES Music is a multidimensional phenomenon and is classified by its arousal properties, emotional quality, and structural characteristics. Although structural features of music (i.e., pitch, timbre, and tempo) and music emotion recognition in cochlear implant (CI) recipients are popular research topics, music-evoked emotions and the related psychological mechanisms that reflect both the individual and social context of music are largely ignored. Understanding the music-evoked emotions (the "what") and related mechanisms (the "why") can help professionals and CI recipients better comprehend the impact of music on CI recipients' daily lives. Therefore, the purpose of this study was to evaluate these aspects in CI recipients and compare the findings to those of normal-hearing (NH) controls. DESIGN This study included 50 CI recipients with diverse auditory experiences, who were prelingually deafened and early implanted (deafened at or before 6 years of age; N = 21), prelingually deafened and late implanted (implanted at or after 12 years of age; N = 13), or postlingually deafened (N = 16), as well as 50 age-matched NH controls. All participants completed the same survey, which included 28 emotions and 10 mechanisms (brainstem reflex, rhythmic entrainment, evaluative conditioning, contagion, visual imagery, episodic memory, musical expectancy, aesthetic judgment, cognitive appraisal, and lyrics). Data were presented in detail for the CI groups and compared between CI groups and between the CI and NH groups. RESULTS In the CI group, principal component analysis yielded five emotion factors that together explained 63.4% of the total variance: anxiety and anger, happiness and pride, sadness and pain, sympathy and tenderness, and serenity and satisfaction. Positive emotions such as happiness, tranquility, love, joy, and trust ranked as most often experienced in all groups, whereas negative and complex emotions such as guilt, fear, anger, and anxiety ranked lowest. The CI group ranked lyrics and rhythmic entrainment highest among the emotion mechanisms, and there was a statistically significant group difference in the episodic memory mechanism, on which the prelingually deafened, early implanted group scored lowest. CONCLUSION Our findings indicate that music can evoke similar emotions in CI recipients with diverse auditory experiences as it does in NH individuals. However, prelingually deafened and early implanted individuals lack autobiographical memories associated with music, which affects the feelings evoked by music. In addition, the prominence of rhythmic entrainment and lyrics as mechanisms of music-elicited emotions suggests that rehabilitation programs should pay particular attention to these cues.
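The factor structure reported in the RESULTS (five emotion factors explaining 63.4% of the variance) can be illustrated with a principal component analysis sketch. The ratings matrix below is synthetic: the participant count and 28-item survey match the abstract, but the actual data and any rotation or scaling steps the authors applied are unknown.

```python
import numpy as np
from sklearn.decomposition import PCA

# Hypothetical survey data: 50 CI recipients rating 28 emotion items (1-5 scale).
rng = np.random.default_rng(1)
ratings = rng.integers(1, 6, size=(50, 28)).astype(float)

# Extract five components; their summed explained-variance ratio is the
# analogue of the 63.4% figure reported for the real data.
pca = PCA(n_components=5)
loadings = pca.fit(ratings).components_          # 5 x 28 item loadings
explained = pca.explained_variance_ratio_.sum()  # fraction of total variance
print(f"variance explained by 5 components: {explained:.1%}")
```

Inspecting which items load together on each component is what yields interpretable factor labels such as "anxiety and anger" or "serenity and satisfaction".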
Affiliation(s)
- Mustafa Yüksel
- Ankara Medipol University School of Health Sciences, Department of Speech and Language Therapy, Ankara, Turkey
- Esra Sarlik
- Marmara University Institute of Health Sciences, Audiology and Speech Disorders Program, Istanbul, Turkey
- Ayça Çiprut
- Marmara University Faculty of Medicine, Department of Audiology, Istanbul, Turkey
9. Paquette S, Deroche MLD, Goffi-Gomez MV, Hoshino ACH, Lehmann A. Predicting emotion perception abilities for cochlear implant users. Int J Audiol 2023; 62:946-954. PMID: 36047767. DOI: 10.1080/14992027.2022.2111611.
Abstract
OBJECTIVE In daily life, failure to perceive emotional expressions can result in maladjusted behaviour. For cochlear implant users, perceiving emotional cues in sounds remains challenging, and the factors explaining the variability in patients' sensitivity to emotions are currently poorly understood. Understanding how these factors relate to auditory proficiency is a major challenge of cochlear implant research and is critical in addressing patients' limitations. DESIGN To fill this gap, we evaluated different aspects of auditory perception in implant users (pitch discrimination, music processing, and speech intelligibility) and correlated them with performance in an emotion recognition task. STUDY SAMPLE Eighty-four adults (18-76 years old) participated in our investigation: 42 cochlear implant users and 42 controls. RESULTS Cochlear implant users performed worse than controls on all tasks, and their emotion perception abilities were correlated with age and with clinical outcome as measured by the speech intelligibility task. As previously observed, emotion perception abilities declined with age (here by about 2-3% per decade). Interestingly, even when the emotional stimuli were musical, CI users' skills relied more on the processes underlying speech intelligibility. CONCLUSIONS These results suggest that speech processing remains a clinical priority even when one is interested in affective skills.
Affiliation(s)
- S Paquette
- International Laboratory for Brain Music and Sound Research, Department of Psychology, University of Montréal, Montreal, Canada
- Centre for Research on Brain, Language and Music, McGill University, Montreal, Canada
- Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Canada
- M L D Deroche
- International Laboratory for Brain Music and Sound Research, Department of Psychology, University of Montréal, Montreal, Canada
- Centre for Research on Brain, Language and Music, McGill University, Montreal, Canada
- Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Canada
- Laboratory for Hearing and Cognition, Psychology Department, Concordia University, Montreal, Canada
- M V Goffi-Gomez
- Cochlear Implant Group, School of Medicine, Hospital das Clínicas, Universidade de São Paulo, São Paulo, Brazil
- A C H Hoshino
- Cochlear Implant Group, School of Medicine, Hospital das Clínicas, Universidade de São Paulo, São Paulo, Brazil
- A Lehmann
- International Laboratory for Brain Music and Sound Research, Department of Psychology, University of Montréal, Montreal, Canada
- Centre for Research on Brain, Language and Music, McGill University, Montreal, Canada
- Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Canada
10. Parameter-Specific Morphing Reveals Contributions of Timbre to the Perception of Vocal Emotions in Cochlear Implant Users. Ear Hear 2022; 43:1178-1188. PMID: 34999594. PMCID: PMC9197138. DOI: 10.1097/aud.0000000000001181. Open access.
Abstract
OBJECTIVES Research on cochlear implants (CIs) has focused on speech comprehension, with little research on the perception of vocal emotions. We compared emotion perception in CI users and normal-hearing (NH) individuals, using parameter-specific voice morphing. DESIGN Twenty-five CI users and 25 NH individuals (matched for age and gender) performed fearful-angry discriminations on bisyllabic pseudoword stimuli from morph continua across all acoustic parameters (Full), or across selected parameters (F0, Timbre, or Time information), with the other parameters set to a noninformative intermediate level. RESULTS Unsurprisingly, CI users as a group showed lower overall performance in vocal emotion perception. Importantly, while NH individuals used timbre and fundamental frequency (F0) information to equivalent degrees, CI users were far more efficient in using timbre (compared with F0) information for this task. Thus, under the conditions of this task, CIs were inefficient in conveying emotion based on F0 alone. There was enormous variability between CI users, with low performers responding close to guessing level. Echoing previous research, we found that better vocal emotion perception was associated with better quality-of-life ratings. CONCLUSIONS Some CI users can utilize timbre cues remarkably well when perceiving vocal emotions.
11
Lin Y, Wu C, Limb CJ, Lu H, Feng IJ, Peng S, Deroche MLD, Chatterjee M. Voice emotion recognition by Mandarin-speaking pediatric cochlear implant users in Taiwan. Laryngoscope Investig Otolaryngol 2022; 7:250-258. [PMID: 35155805] [PMCID: PMC8823186] [DOI: 10.1002/lio2.732]
Abstract
OBJECTIVES To explore the effects of obligatory lexical tone learning on speech emotion recognition, and the cross-cultural differences between the United States and Taiwan in speech emotion understanding, in children with cochlear implants. METHODS This cohort study enrolled 60 Mandarin-speaking, school-aged children with cochlear implants (cCI) who underwent cochlear implantation before 5 years of age and 53 normal-hearing children (cNH) in Taiwan. Emotion recognition and sensitivity to fundamental frequency (F0) changes were examined for school-aged cNH and cCI (6-17 years old) in a tertiary referral center. RESULTS The mean emotion recognition score of the cNH group was significantly better than that of the cCI group. Female speakers' vocal emotions were more easily recognized than male speakers'. There was a significant effect of age at test on voice emotion recognition performance. The average score of cCI with full-spectrum speech was close to the average score of cNH with eight-channel narrowband vocoder speech. The average performance of voice emotion recognition across speakers for cCI could be predicted by their sensitivity to changes in F0. CONCLUSIONS Better pitch discrimination ability comes with better voice emotion recognition for Mandarin-speaking cCI. Beyond F0 cues, cCI are likely to adapt their voice emotion recognition by relying more on secondary cues such as intensity and duration. Although cross-cultural differences exist in the acoustic features of voice emotion, both Mandarin-speaking cCI and their English-speaking cCI peers showed a positive effect of age at test on emotion recognition, suggesting a learning effect and brain plasticity. Therefore, further device/processor development to improve the presentation of pitch information, and more rehabilitative efforts, are needed to improve the transmission and perception of voice emotion in Mandarin. LEVEL OF EVIDENCE 3.
Affiliation(s)
- Yung‐Song Lin
- Department of Otolaryngology, Chi Mei Medical Center, Tainan, Taiwan
- Department of Otolaryngology, School of Medicine, College of Medicine, Taipei Medical University, Taipei, Taiwan
- Che‐Ming Wu
- Department of Otorhinolaryngology, New Taipei Municipal TuCheng Hospital (built and operated by Chang Gung Medical Foundation), New Taipei City, Taiwan
- Department of Otorhinolaryngology, Chang Gung Memorial Hospital, Taoyuan, Taiwan
- School of Medicine, Chang Gung University, Taoyuan, Taiwan
- Charles J. Limb
- School of Medicine, University of California San Francisco, San Francisco, California, USA
- Hui‐Ping Lu
- Center of Speech and Hearing, Department of Otolaryngology, Chi Mei Medical Center, Tainan, Taiwan
- I. Jung Feng
- Institute of Precision Medicine, National Sun Yat‐sen University, Kaohsiung, Taiwan
- Shu‐Chen Peng
- Center for Devices and Radiological Health, United States Food and Drug Administration, Silver Spring, Maryland, USA
12
Panzeri F, Cavicchiolo S, Giustolisi B, Di Berardino F, Ajmone PF, Vizziello P, Donnini V, Zanetti D. Irony Comprehension in Children With Cochlear Implants: The Role of Language Competence, Theory of Mind, and Prosody Recognition. J Speech Lang Hear Res 2021; 64:3212-3229. [PMID: 34284611] [DOI: 10.1044/2021_jslhr-20-00671]
Abstract
Purpose Aims of this research were (a) to investigate higher order linguistic and cognitive skills of Italian children with cochlear implants (CIs); (b) to correlate them with the comprehension of irony, which has never been systematically studied in this population; and (c) to identify the factors that facilitate the development of this competence. Method We tested 28 Italian children with CI (mean chronological age = 101 [SD = 25.60] months, age range: 60-144 months), and two control groups of normal-hearing (NH) peers matched for chronological age and for hearing age, on a series of tests assessing their cognitive abilities (nonverbal intelligence and theory of mind), linguistic skills (morphosyntax and prosody recognition), and irony comprehension. Results Despite having grammatical abilities in line with the group of NH children matched for hearing age, children with CI lag behind both groups of NH peers on the recognition of emotions through prosody and on the comprehension of ironic stories, even if these two abilities were not related. Conclusions This is the first study that targeted irony comprehension in children with CI, and we found that this competence, which is crucial for maintaining good social relationships with peers, is impaired in this population. In line with other studies, we found a correlation between this ability and advanced theory of mind skills, but at the same time, a deeper investigation is needed, to account for the high variability of performance in children with CI.
Affiliation(s)
- Sara Cavicchiolo
- Audiology Unit, Department of Specialist Surgical Sciences, Fondazione IRCCS Ca' Granda, Ospedale Maggiore Policlinico, Milan, Italy
- Department of Clinical Sciences and Community Health, University of Milan, Italy
- Federica Di Berardino
- Audiology Unit, Department of Specialist Surgical Sciences, Fondazione IRCCS Ca' Granda, Ospedale Maggiore Policlinico, Milan, Italy
- Department of Clinical Sciences and Community Health, University of Milan, Italy
- Paola Francesca Ajmone
- Child and Adolescent Neuropsychiatric Service (UONPIA), Fondazione IRCCS Ca' Granda, Ospedale Maggiore Policlinico, Milan, Italy
- Paola Vizziello
- Child and Adolescent Neuropsychiatric Service (UONPIA), Fondazione IRCCS Ca' Granda, Ospedale Maggiore Policlinico, Milan, Italy
- Veronica Donnini
- Child and Adolescent Neuropsychiatric Service (UONPIA), Fondazione IRCCS Ca' Granda, Ospedale Maggiore Policlinico, Milan, Italy
- Diego Zanetti
- Audiology Unit, Department of Specialist Surgical Sciences, Fondazione IRCCS Ca' Granda, Ospedale Maggiore Policlinico, Milan, Italy
- Department of Clinical Sciences and Community Health, University of Milan, Italy
13
Perception of Child-Directed Versus Adult-Directed Emotional Speech in Pediatric Cochlear Implant Users. Ear Hear 2021; 41:1372-1382. [PMID: 32149924] [DOI: 10.1097/aud.0000000000000862]
Abstract
OBJECTIVES Cochlear implants (CIs) are remarkable in allowing individuals with severe to profound hearing loss to perceive speech. Despite these gains in speech understanding, however, CI users often struggle to perceive elements such as vocal emotion and prosody, as CIs are unable to transmit the spectro-temporal detail needed to decode affective cues. This issue becomes particularly important for children with CIs, but little is known about their emotional development. In a previous study, pediatric CI users showed deficits in voice emotion recognition with child-directed stimuli featuring exaggerated prosody. However, the large intersubject variability and differential developmental trajectory known in this population incited us to question the extent to which exaggerated prosody would facilitate performance in this task. Thus, the authors revisited the question with both adult-directed and child-directed stimuli. DESIGN Vocal emotion recognition was measured using both child-directed (CDS) and adult-directed (ADS) speech conditions. Pediatric CI users aged 7 to 19 years, with no cognitive or visual impairments and who used oral communication with English as the primary language, participated in the experiment (n = 27). Stimuli comprised 12 sentences selected from the HINT database. The sentences were spoken by male and female talkers in a CDS or ADS manner, in each of the five target emotions (happy, sad, neutral, scared, and angry). The chosen sentences were semantically emotion-neutral. Percent correct emotion recognition scores were analyzed for each participant in each condition (CDS vs. ADS). Children also completed cognitive tests of nonverbal IQ and receptive vocabulary, while parents completed questionnaires of CI and hearing history. It was predicted that the reduced prosodic variations found in the ADS condition would result in lower vocal emotion recognition scores compared with the CDS condition. Moreover, it was hypothesized that cognitive factors, perceptual sensitivity to complex pitch changes, and elements of each child's hearing history may serve as predictors of performance on vocal emotion recognition. RESULTS Consistent with our hypothesis, pediatric CI users scored higher on CDS compared with ADS speech stimuli, suggesting that speaking with an exaggerated prosody, akin to "motherese", may be a viable way to convey emotional content. Significant talker effects were also observed in that higher scores were found for the female talker in both conditions. Multiple regression analysis showed that nonverbal IQ was a significant predictor of CDS emotion recognition scores, while years of CI use was a significant predictor of ADS scores. Confusion matrix analyses revealed a dependence of results on specific emotions; for the CDS condition's female talker, participants had high sensitivity (d' scores) to happy and low sensitivity to the neutral sentences, while for the ADS condition low sensitivity was found for the scared sentences. CONCLUSIONS In general, participants showed higher vocal emotion recognition in the CDS condition, which also had more variability in pitch and intensity and thus more exaggerated prosody, in comparison to the ADS condition. Results suggest that pediatric CI users struggle with vocal emotion perception in general, particularly for adult-directed speech. The authors believe these results have broad implications for understanding how CI users perceive emotions, both from an auditory communication standpoint and from a socio-developmental perspective.
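The d' (sensitivity) scores mentioned above are conventionally computed from hit and false-alarm rates via inverse-normal transforms. The sketch below is a generic illustration of that standard formula, not the authors' analysis code:

```python
from statistics import NormalDist

def d_prime(hit_rate: float, fa_rate: float) -> float:
    """Signal-detection sensitivity: d' = z(hit rate) - z(false-alarm rate).

    Rates must lie strictly between 0 and 1; in practice, rates of exactly
    0 or 1 are usually nudged (e.g., with a 1/(2N) correction) beforehand.
    """
    z = NormalDist().inv_cdf  # inverse of the standard normal CDF
    return z(hit_rate) - z(fa_rate)

# Example: 84% hits vs. 16% false alarms gives d' of roughly 1.99.
sensitivity = d_prime(0.84, 0.16)
```

Higher d' indicates better discrimination of a target emotion from the alternatives, independent of response bias.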
14
Cartocci G, Giorgi A, Inguscio BMS, Scorpecci A, Giannantonio S, De Lucia A, Garofalo S, Grassia R, Leone CA, Longo P, Freni F, Malerba P, Babiloni F. Higher Right Hemisphere Gamma Band Lateralization and Suggestion of a Sensitive Period for Vocal Auditory Emotional Stimuli Recognition in Unilateral Cochlear Implant Children: An EEG Study. Front Neurosci 2021; 15:608156. [PMID: 33767607] [PMCID: PMC7985439] [DOI: 10.3389/fnins.2021.608156]
Abstract
In deaf children, great emphasis has been placed on language; however, the decoding and production of emotional cues appear to be of pivotal importance for communication. Concerning the neurophysiological correlates of emotional processing, gamma band activity appears to be a useful tool for emotion classification and is related to the conscious elaboration of emotions. Starting from these considerations, the following questions were investigated: (i) whether emotional auditory stimuli processing differs between normal-hearing (NH) children and children using a cochlear implant (CI), given the non-physiological development of the auditory system in the latter group; (ii) whether the age at CI surgery influences emotion recognition capabilities; and (iii) in light of the right hemisphere hypothesis for emotional processing, whether the CI side influences the processing of emotional cues in unilateral CI (UCI) children. To address these questions, 9 UCI (9.47 ± 2.33 years old) and 10 NH (10.95 ± 2.11 years old) children were asked to recognize nonverbal vocalizations belonging to three emotional states: positive (achievement, amusement, contentment, relief), negative (anger, disgust, fear, sadness), and neutral (neutral, surprise). Results showed better performance in emotional state recognition by NH than by UCI children. The UCI group showed an increased gamma activity lateralization index (LI) (relatively higher right hemisphere activity) in comparison to the NH group in response to emotional auditory cues. Moreover, LI gamma values were negatively correlated with the percentage of correct responses in emotion recognition. Such observations could be explained by a deficit in UCI children in engaging the left hemisphere for more demanding emotional tasks, or alternatively by a higher conscious elaboration in UCI than in NH children. Additionally, for the UCI group there was no difference in gamma activity between the CI side and the contralateral side, but higher gamma activity was found in the right than in the left hemisphere. Therefore, the CI side did not appear to influence the physiological hemispheric lateralization of emotional processing. Finally, a negative correlation was found between age at CI surgery and the percentage of correct responses in emotion recognition, suggesting a sensitive period for CI surgery for the best development of emotion recognition skills.
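The lateralization index contrasts right- and left-hemisphere activity. A common normalized formulation is (R − L)/(R + L); this is an assumption for illustration, since the abstract does not give the study's exact definition:

```python
import numpy as np

def lateralization_index(right_power, left_power):
    """Normalized hemispheric asymmetry: (R - L) / (R + L).

    Positive values indicate relatively higher right-hemisphere activity,
    matching the direction reported for the UCI group.
    """
    r = np.asarray(right_power, dtype=float)
    l = np.asarray(left_power, dtype=float)
    return (r - l) / (r + l)

# Example: gamma-band power averaged over right vs. left electrode clusters.
li = lateralization_index([2.0, 3.0], [1.0, 3.0])  # values: 1/3 and 0.0
```

The normalization bounds the index between -1 and 1, making it comparable across participants with different overall power levels.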
Affiliation(s)
- Giulia Cartocci
- Laboratory of Industrial Neuroscience, Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy; BrainSigns Srl, Rome, Italy
- Andrea Giorgi
- Laboratory of Industrial Neuroscience, Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy; BrainSigns Srl, Rome, Italy
- Bianca M S Inguscio
- BrainSigns Srl, Rome, Italy; Cochlear Implant Unit, Department of Sensory Organs, Sapienza University of Rome, Rome, Italy
- Alessandro Scorpecci
- Audiology and Otosurgery Unit, "Bambino Gesù" Pediatric Hospital and Research Institute, Rome, Italy
- Sara Giannantonio
- Audiology and Otosurgery Unit, "Bambino Gesù" Pediatric Hospital and Research Institute, Rome, Italy
- Antonietta De Lucia
- Otology and Cochlear Implant Unit, Regional Referral Centre Children's Hospital "Santobono-Pausilipon", Naples, Italy
- Sabina Garofalo
- Otology and Cochlear Implant Unit, Regional Referral Centre Children's Hospital "Santobono-Pausilipon", Naples, Italy
- Rosa Grassia
- Department of Otolaryngology/Head and Neck Surgery, Monaldi Hospital, Naples, Italy
- Carlo Antonio Leone
- Department of Otolaryngology/Head and Neck Surgery, Monaldi Hospital, Naples, Italy
- Patrizia Longo
- Department of Otorhinolaryngology, University of Messina, Messina, Italy
- Francesco Freni
- Department of Otorhinolaryngology, University of Messina, Messina, Italy
- Fabio Babiloni
- Laboratory of Industrial Neuroscience, Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy; BrainSigns Srl, Rome, Italy; Department of Computer Science and Technology, Hangzhou Dianzi University, Xiasha Higher Education Zone, Hangzhou, China
15
Bahadori M, Barumerli R, Geronazzo M, Cesari P. Action planning and affective states within the auditory peripersonal space in normal hearing and cochlear-implanted listeners. Neuropsychologia 2021; 155:107790. [PMID: 33636155] [DOI: 10.1016/j.neuropsychologia.2021.107790]
Abstract
Fast reaction to approaching stimuli is vital for survival. When sounds enter the auditory peripersonal space (PPS), sounds perceived as being nearer elicit higher motor cortex activation. There is a close relationship between motor preparation and the perceptual components of sounds, particularly of highly arousing sounds. Here we compared the ability to recognize, evaluate, and react to affective stimuli entering the PPS between 20 normal-hearing (NH, 7 women) and 10 cochlear-implanted (CI, 3 women) subjects. The subjects were asked to quickly flex their arm in reaction to positive (P), negative (N), and neutral (Nu) affective sounds ending virtually at five distances from their body. Pre-motor reaction time (pm-RT) was detected via electromyography from the postural muscles to measure action anticipation at the sound-stopping distance; the sounds were also evaluated for their perceived level of valence and arousal. While both groups were able to localize sound distance, only the NH group modulated their pm-RT based on the perceived sound distance. Furthermore, when the sound carried no affective components, the pm-RT to the Nu sounds was shorter compared to the P and the N sounds for both groups. Only the NH group perceived the closer sounds as more arousing than the distant sounds, whereas both groups perceived sound valence similarly. Our findings underline the role of emotional states in action preparation and describe the perceptual components essential for prompt reaction to sounds approaching the peripersonal space.
Affiliation(s)
- Mehrdad Bahadori
- Department of Neurosciences, Biomedicine & Movement Sciences, University of Verona, 37131, Verona, Italy.
- Roberto Barumerli
- Department of Information Engineering, University of Padova, 35131, Padova, Italy
- Michele Geronazzo
- Dyson School of Design Engineering, Imperial College London, London, SW7 2AZ, United Kingdom
- Paola Cesari
- Department of Neurosciences, Biomedicine & Movement Sciences, University of Verona, 37131, Verona, Italy
16
Lo CY, Looi V, Thompson WF, McMahon CM. Music Training for Children With Sensorineural Hearing Loss Improves Speech-in-Noise Perception. J Speech Lang Hear Res 2020; 63:1990-2015. [PMID: 32543961] [DOI: 10.1044/2020_jslhr-19-00391]
Abstract
Purpose A growing body of evidence suggests that long-term music training provides benefits to auditory abilities for typical-hearing adults and children. The purpose of this study was to evaluate how music training may provide perceptual benefits (such as speech-in-noise perception, spectral resolution, and prosody) for children with hearing loss. Method Fourteen children aged 6-9 years with prelingual sensorineural hearing loss using bilateral cochlear implants, bilateral hearing aids, or bimodal configuration participated in a 12-week music training program, with nine participants completing the full testing requirements of the music training. Activities included weekly group-based music therapy and take-home music apps three times a week. The design was a pseudorandomized, longitudinal study (half the cohort was wait-listed, initially serving as a passive control group prior to music training). The test battery consisted of tasks related to music perception, music appreciation, and speech perception. As a comparison, 16 age-matched children with typical hearing also completed this test battery, but without participation in the music training. Results There were no changes for any outcomes for the passive control group. After music training, perception of speech-in-noise, question/statement prosody, musical timbre, and spectral resolution improved significantly, as did measures of music appreciation. There were no benefits for emotional prosody or pitch perception. Conclusion The findings suggest that even a modest amount of music training has benefits for music and speech outcomes. These preliminary results provide further evidence that music training is a suitable complementary means of habilitation to improve the outcomes for children with hearing loss.
Affiliation(s)
- Chi Yhun Lo
- Department of Linguistics, Macquarie University, Sydney, New South Wales, Australia
- The HEARing CRC, Melbourne, Victoria, Australia
- ARC Centre of Excellence in Cognition and its Disorders, Sydney, New South Wales, Australia
- Valerie Looi
- SCIC Cochlear Implant Program-An RIDBC Service, Sydney, New South Wales, Australia
- William Forde Thompson
- ARC Centre of Excellence in Cognition and its Disorders, Sydney, New South Wales, Australia
- Department of Psychology, Macquarie University, Sydney, New South Wales, Australia
- Catherine M McMahon
- Department of Linguistics, Macquarie University, Sydney, New South Wales, Australia
- The HEARing CRC, Melbourne, Victoria, Australia
17
Neurophysiological Differences in Emotional Processing by Cochlear Implant Users, Extending Beyond the Realm of Speech. Ear Hear 2020; 40:1197-1209. [PMID: 30762600] [DOI: 10.1097/aud.0000000000000701]
Abstract
OBJECTIVE Cochlear implants (CIs) restore a sense of hearing in deaf individuals. However, they do not transmit the acoustic signal with sufficient fidelity, leading to difficulties in recognizing emotions in voice and in music. The study aimed to explore the neurophysiological bases of these limitations. DESIGN Twenty-two adults (18 to 70 years old) with CIs and 22 age-matched controls with normal hearing participated. Event-related potentials (ERPs) were recorded in response to emotional bursts (happy, sad, or neutral) produced in each modality (voice or music) that were for the most part correctly identified behaviorally. RESULTS Compared to controls, the N1 and P2 components were attenuated and prolonged in CI users. To a smaller degree, N1 and P2 were also attenuated and prolonged in music compared to voice, in both populations. The N1-P2 complex was emotion-dependent (e.g., reduced and prolonged response to sadness), but this was also true in both populations. In contrast, the later portion of the response, between 600 and 850 ms, differentiated happy and sad from neutral stimuli in normal hearing but not in CI listeners. CONCLUSIONS The early portion of the ERP waveform reflected primarily the general reduction in sensory encoding by CI users (largely due to CI processing itself), whereas altered emotional processing (by CI users) could be found in the later portion of the ERP and extended beyond the realm of speech.
18
Ritter C, Vongpaisal T. Multimodal and Spectral Degradation Effects on Speech and Emotion Recognition in Adult Listeners. Trends Hear 2018; 22:2331216518804966. [PMID: 30378469] [PMCID: PMC6236866] [DOI: 10.1177/2331216518804966]
Abstract
For cochlear implant (CI) users, degraded spectral input hampers the understanding of prosodic vocal emotion, especially in difficult listening conditions. Using a vocoder simulation of CI hearing, we examined the extent to which informative multimodal cues in a talker's spoken expressions improve normal hearing (NH) adults' speech and emotion perception under different levels of spectral degradation (two, three, four, and eight spectral bands). Participants repeated the words verbatim and identified emotions (among four alternative options: happy, sad, angry, and neutral) in meaningful sentences that were semantically congruent with the expression of the intended emotion. Sentences were presented in their natural speech form and in speech sampled through a noise-band vocoder in sound (auditory-only) and video (auditory-visual) recordings of a female talker. Visual information had a more pronounced benefit in enhancing speech recognition in the lower spectral band conditions. Spectral degradation, however, did not interfere with emotion recognition performance when dynamic visual cues in a talker's expression were provided, as participants scored at ceiling levels across all spectral band conditions. Our use of familiar sentences that contained congruent semantic and prosodic information has high ecological validity, which likely optimized listener performance under simulated CI hearing and may better predict CI users' outcomes in everyday listening contexts.
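Noise-band vocoding of the kind used to simulate CI hearing splits speech into a few contiguous frequency bands, extracts each band's amplitude envelope, and uses the envelopes to modulate band-limited noise carriers. The sketch below is a minimal generic implementation; the band edges, filter order, and envelope method are illustrative assumptions, not the study's exact parameters:

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocoder(signal, fs, n_bands=4, f_lo=100.0, f_hi=7000.0):
    """Simulate CI-like spectral degradation with an n-band noise vocoder."""
    # Band edges spaced logarithmically between f_lo and f_hi (assumed).
    edges = np.logspace(np.log10(f_lo), np.log10(f_hi), n_bands + 1)
    rng = np.random.default_rng(0)
    out = np.zeros_like(signal, dtype=float)
    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, signal)       # analysis band
        env = np.abs(hilbert(band))           # amplitude envelope
        # Noise carrier limited to the same band, modulated by the envelope.
        carrier = sosfiltfilt(sos, rng.standard_normal(len(signal)))
        out += env * carrier
    return out
```

Fewer bands discard more spectral detail, which is why the two-band condition is the hardest listening condition in the study's design.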
Affiliation(s)
- Chantel Ritter
- Department of Psychology, MacEwan University, Alberta, Canada
- Tara Vongpaisal
- Department of Psychology, MacEwan University, Alberta, Canada
19
van de Velde DJ, Schiller NO, Levelt CC, van Heuven VJ, Beers M, Briaire JJ, Frijns JHM. Prosody perception and production by children with cochlear implants. J Child Lang 2019; 46:111-141. [PMID: 30334510] [DOI: 10.1017/s0305000918000387]
Abstract
The perception and production of emotional and linguistic (focus) prosody were compared in children with cochlear implants (CI) and normally hearing (NH) peers. Thirteen CI and thirteen hearing-age-matched school-aged NH children were tested, at baseline, on non-verbal emotion understanding, non-word repetition, and stimulus identification and naming. Main tests were verbal emotion discrimination, verbal focus position discrimination, acted emotion production, and focus production. Productions were evaluated by NH adult Dutch listeners. All scores between groups were comparable, except a lower score for the CI group for non-word repetition. Emotional prosody perception and production scores correlated weakly for CI children but were uncorrelated for NH children. In general, hearing age weakly predicted emotion production but not perception. Non-verbal emotional (but not linguistic) understanding predicted CI children's (but not controls') emotion perception and production. In conclusion, increasing time in sound might facilitate vocal emotional expression, possibly requiring independently maturing emotion perception skills.
Affiliation(s)
- Daan J van de Velde
- Leiden University Centre for Linguistics, Leiden University, Van Wijkplaats 3, 2311 BX, Leiden
- Niels O Schiller
- Leiden University Centre for Linguistics, Leiden University, Van Wijkplaats 3, 2311 BX, Leiden
- Claartje C Levelt
- Leiden University Centre for Linguistics, Leiden University, Van Wijkplaats 3, 2311 BX, Leiden
- Vincent J van Heuven
- Department of Hungarian and Applied Linguistics, Pannon Egyetem, 10 Egyetem Ut., 8200 Veszprém, Hungary
- Mieke Beers
- Leiden University Medical Center, ENT Department, Postbus 9600, 2300 RC, Leiden
- Jeroen J Briaire
- Leiden University Medical Center, ENT Department, Postbus 9600, 2300 RC, Leiden
- Johan H M Frijns
- Leiden Institute for Brain and Cognition, Postbus 9600, 2300 RC, Leiden
20
Ahmed DG, Paquette S, Zeitouni A, Lehmann A. Neural Processing of Musical and Vocal Emotions Through Cochlear Implants Simulation. Clin EEG Neurosci 2018; 49:143-151. [PMID: 28958161] [DOI: 10.1177/1550059417733386]
Abstract
Cochlear implants (CIs) partially restore the sense of hearing in the deaf. However, the ability to recognize emotions in speech and music is reduced due to the implant's electrical signal limitations and the patient's altered neural pathways. The electrophysiological correlates of these limitations are not yet well established. Here we aimed to characterize the effect of CIs on auditory emotion processing and, for the first time, directly compare vocal and musical emotion processing through a CI simulator. We recorded 16 normal-hearing participants' electroencephalographic activity while listening to vocal and musical emotional bursts in their original form and in a degraded (CI-simulated) condition. We found prolonged P50 latency and reduced N100-P200 complex amplitude in the CI-simulated condition. This points to a limitation in encoding sound signals processed through CI simulation. When comparing the processing of vocal and musical bursts, we found a delay in latency with the musical bursts compared to the vocal bursts in both conditions (original and CI-simulated). This suggests that despite the cochlear implants' limitations, the auditory cortex can distinguish between vocal and musical stimuli. In addition, it adds to the literature supporting the complexity of musical emotion. Replicating this study with actual CI users might lead to characterizing emotional processing in CI users and could ultimately help develop optimal rehabilitation programs or device processing strategies to improve CI users' quality of life.
Affiliation(s)
- Duha G Ahmed
- International Laboratory for Brain Music and Sound Research, Center for Research on Brain, Language and Music, Department of Psychology, University of Montreal, Montreal, Quebec, Canada; Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Quebec, Canada; Department of Otolaryngology, Head and Neck Surgery, King Abdulaziz University, Rabigh Medical College, Jeddah, Saudi Arabia
- Sebastian Paquette
- International Laboratory for Brain Music and Sound Research, Center for Research on Brain, Language and Music, Department of Psychology, University of Montreal, Montreal, Quebec, Canada; Neurology Department, Beth Israel Deaconess Medical Center, Harvard Medical School, Boston, MA, USA
- Anthony Zeitouni
- Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Quebec, Canada
- Alexandre Lehmann
- International Laboratory for Brain Music and Sound Research, Center for Research on Brain, Language and Music, Department of Psychology, University of Montreal, Montreal, Quebec, Canada; Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Quebec, Canada
21
Wang Y, Zhou W, Cheng Y, Bian X. Gaze Patterns in Auditory-Visual Perception of Emotion by Children with Hearing Aids and Hearing Children. Front Psychol 2017; 8:2281. [PMID: 29312104] [PMCID: PMC5743909] [DOI: 10.3389/fpsyg.2017.02281]
Abstract
This study investigated eye-movement patterns during emotion perception in children with hearing aids and hearing children. Seventy-eight participants aged 3 to 7 years were asked to watch videos with a facial expression followed by an oral statement, and these two cues were either congruent or incongruent in emotional valence. Results showed that while hearing children paid more attention to the upper part of the face, children with hearing aids paid more attention to the lower part of the face after the oral statement was presented, especially in the neutral facial expression/neutral oral statement condition. These results suggest that children with hearing aids have an altered eye-contact pattern with others and difficulty matching visual and voice cues in emotion perception. The negative consequences of these gaze patterns should be addressed in early rehabilitation for hearing-impaired children with assistive devices.
Affiliation(s)
- Yifang Wang
- School of Psychology, Capital Normal University, Beijing, China
- Wei Zhou
- School of Psychology, Capital Normal University, Beijing, China
- Xiaoying Bian
- School of Psychology, Capital Normal University, Beijing, China
22
Polonenko MJ, Papsin BC, Gordon KA. Delayed access to bilateral input alters cortical organization in children with asymmetric hearing. NEUROIMAGE-CLINICAL 2017; 17:415-425. [PMID: 29159054 PMCID: PMC5683809 DOI: 10.1016/j.nicl.2017.10.036] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Download PDF] [Figures] [Subscribe] [Scholar Register] [Received: 07/06/2017] [Revised: 10/25/2017] [Accepted: 10/31/2017] [Indexed: 11/19/2022]
Abstract
Bilateral hearing in early development protects auditory cortices from reorganizing to prefer the better ear. Yet, such protection could be disrupted by mismatched bilateral input in children with asymmetric hearing who require electric stimulation of the auditory nerve from a cochlear implant in their deaf ear and amplified acoustic sound from a hearing aid in their better ear (bimodal hearing). Cortical responses to bimodal stimulation were measured by electroencephalography in 34 bimodal users and 16 age-matched peers with normal hearing, and compared with the same measures previously reported for 28 age-matched bilateral implant users. Both auditory cortices increasingly favoured the better ear with delay to implanting the deaf ear; the time course mirrored that occurring with delay to bilateral implantation in unilateral implant users. Preference for the implanted ear tended to occur with ongoing implant use when hearing was poor in the non-implanted ear. Speech perception deteriorated with longer deprivation and poorer access to high-frequencies. Thus, cortical preference develops in children with asymmetric hearing but can be avoided by early provision of balanced bimodal stimulation. Although electric and acoustic stimulation differ, these inputs can work sympathetically when used bilaterally given sufficient hearing in the non-implanted ear.
Affiliation(s)
- Melissa Jane Polonenko
- Institute of Medical Sciences, University of Toronto, Toronto, ON M5S 1A8, Canada; Neurosciences & Mental Health, Hospital for Sick Children, Toronto, ON M5G 1X8, Canada.
- Blake Croll Papsin
- Department of Otolaryngology - Head & Neck Surgery, University of Toronto, Toronto, ON M5G 2N2, Canada; Otolaryngology - Head & Neck Surgery, Hospital for Sick Children, Toronto, ON M5G 1X8, Canada
- Karen Ann Gordon
- Institute of Medical Sciences, University of Toronto, Toronto, ON M5S 1A8, Canada; Neurosciences & Mental Health, Hospital for Sick Children, Toronto, ON M5G 1X8, Canada; Department of Otolaryngology - Head & Neck Surgery, University of Toronto, Toronto, ON M5G 2N2, Canada; Otolaryngology - Head & Neck Surgery, Hospital for Sick Children, Toronto, ON M5G 1X8, Canada
23

24
Jiam NT, Caldwell M, Deroche ML, Chatterjee M, Limb CJ. Voice emotion perception and production in cochlear implant users. Hear Res 2017; 352:30-39. [PMID: 28088500 DOI: 10.1016/j.heares.2017.01.006] [Citation(s) in RCA: 46] [Impact Index Per Article: 6.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 10/05/2016] [Revised: 12/14/2016] [Accepted: 01/06/2017] [Indexed: 10/20/2022]
Abstract
Voice emotion is a fundamental component of human social interaction and social development. Unfortunately, cochlear implant users are often forced to interface with highly degraded prosodic cues as a result of device constraints in extraction, processing, and transmission. As such, individuals with cochlear implants frequently demonstrate significant difficulty in recognizing voice emotions in comparison to their normal-hearing counterparts. Cochlear implant (CI)-mediated perception and production of voice emotion is an important but relatively understudied area of research. However, a rich understanding of the auditory processing of voice emotion offers opportunities to improve CI biomedical design and to develop training programs that benefit CI performance. In this review, we address the issues, current literature, and future directions for improved voice emotion processing in cochlear implant users.
Affiliation(s)
- N T Jiam
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, School of Medicine, San Francisco, CA, USA
- M Caldwell
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, School of Medicine, San Francisco, CA, USA
- M L Deroche
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- M Chatterjee
- Auditory Prostheses and Perception Laboratory, Boys Town National Research Hospital, Omaha, NE, USA
- C J Limb
- Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco, School of Medicine, San Francisco, CA, USA.
25
Deroche MLD, Kulkarni AM, Christensen JA, Limb CJ, Chatterjee M. Deficits in the Sensitivity to Pitch Sweeps by School-Aged Children Wearing Cochlear Implants. Front Neurosci 2016; 10:73. [PMID: 26973451 PMCID: PMC4776214 DOI: 10.3389/fnins.2016.00073] [Citation(s) in RCA: 21] [Impact Index Per Article: 2.6] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 01/15/2016] [Accepted: 02/17/2016] [Indexed: 11/13/2022] Open
Abstract
Sensitivity to static changes in pitch has been shown to be poorer in school-aged children wearing cochlear implants (CIs) than children with normal hearing (NH), but it is unclear whether this is also the case for dynamic changes in pitch. Yet, dynamically changing pitch has considerable ecological relevance in terms of natural speech, particularly aspects such as intonation, emotion, or lexical tone information. Twenty-one children with NH and 23 children wearing a CI participated in this study, along with 18 NH adults and 6 CI adults for comparison. Listeners with CIs used their clinically assigned settings with envelope-based coding strategies. Percent correct was measured in one- or three-interval two-alternative forced choice tasks, for the direction or discrimination of harmonic complexes based on a linearly rising or falling fundamental frequency. Sweep rates were adjusted per subject, on a logarithmic scale, so as to cover the full extent of the psychometric function. Data for up- and down-sweeps were fitted separately, using a maximum-likelihood technique. Fits were similar for up- and down-sweeps in the discrimination task, but diverged in the direction task because psychometric functions for down-sweeps were very shallow. Hits and false alarms were then converted into d′ and beta values, from which a threshold was extracted at a d′ of 0.77. Thresholds were very consistent between the two tasks and considerably higher (worse) for CI listeners than for their NH peers. Thresholds were also higher for children than adults. Factors such as age at implantation, age at profound hearing loss, and duration of CI experience did not play any major role in this sensitivity. Thresholds of dynamic pitch sensitivity (in either task) also correlated with thresholds for static pitch sensitivity and with performance in tasks related to speech prosody.
Affiliation(s)
- Mickael L D Deroche
- Centre for Research on Brain, Language and Music, McGill University, Montreal, QC, Canada
- Aditya M Kulkarni
- Auditory Prostheses and Perception Laboratory, Boys Town National Research Hospital, Omaha, NE, USA
- Julie A Christensen
- Auditory Prostheses and Perception Laboratory, Boys Town National Research Hospital, Omaha, NE, USA
- Charles J Limb
- Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco School of Medicine, San Francisco, CA, USA
- Monita Chatterjee
- Auditory Prostheses and Perception Laboratory, Boys Town National Research Hospital, Omaha, NE, USA
26
Lehmann A, Paquette S. Cross-domain processing of musical and vocal emotions in cochlear implant users. Front Neurosci 2015; 9:343. [PMID: 26441512 PMCID: PMC4585154 DOI: 10.3389/fnins.2015.00343] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.3] [Reference Citation Analysis] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 07/01/2015] [Accepted: 09/10/2015] [Indexed: 01/08/2023] Open
Affiliation(s)
- Alexandre Lehmann
- Department of Otolaryngology Head and Neck Surgery, McGill University, Montreal, QC, Canada; International Laboratory for Brain, Music and Sound Research, Center for Research on Brain, Language and Music, Montreal, QC, Canada; Department of Psychology, University of Montreal, Montreal, QC, Canada
- Sébastien Paquette
- International Laboratory for Brain, Music and Sound Research, Center for Research on Brain, Language and Music, Montreal, QC, Canada; Department of Psychology, University of Montreal, Montreal, QC, Canada
27
Vannson N, Innes-Brown H, Marozeau J. Dichotic Listening Can Improve Perceived Clarity of Music in Cochlear Implant Users. Trends Hear 2015; 19:2331216515598971. [PMID: 26316123 PMCID: PMC4593516 DOI: 10.1177/2331216515598971] [Citation(s) in RCA: 9] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Musical enjoyment for cochlear implant (CI) recipients is often reported to be unsatisfactory. Our goal was to determine whether the musical experience of postlingually deafened adult CI recipients could be enriched by presenting the bass and treble clef parts of short polyphonic piano pieces separately to each ear (dichotic). Dichotic presentation should artificially enhance the lateralization cues of each part and help the listeners to better segregate them and thus provide greater clarity. We also hypothesized that perception of the intended emotion of the pieces and their overall enjoyment would be enhanced in the dichotic mode compared with the monophonic mode (both parts in the same ear) and the diotic mode (both parts in both ears). Twenty-eight piano pieces specifically composed to induce sad or happy emotions were selected. The tempo of the pieces, which ranged from lento to presto, covaried with the intended emotion (from sad to happy). Thirty participants (11 normal-hearing listeners, 11 bimodal CI and hearing-aid users, and 8 bilaterally implanted CI users) participated in this study. Participants were asked to rate the perceived clarity, the intended emotion, and their preference of each piece in different listening modes. Results indicated that dichotic presentation produced small significant improvements in subjective ratings based on perceived clarity. We also found that preference and clarity ratings were significantly higher for pieces with fast tempi compared with slow tempi. However, no significant differences between diotic and dichotic presentation were found for the participants’ preference ratings, or their judgments of intended emotion.
Affiliation(s)
- Nicolas Vannson
- Centre de Recherche Cerveau et Cognition, Université de Toulouse, UPS, France; CerCo, CNRS, France; Cochlear France S.A.S, France
- Jeremy Marozeau
- Hearing Systems Group, Department of Electrical Engineering, Technical University of Denmark, Lyngby, Denmark
28
Kondaurova MV, Bergeson TR, Xu H, Kitamura C. Affective Properties of Mothers' Speech to Infants With Hearing Impairment and Cochlear Implants. J Speech Lang Hear Res 2015; 58:590-600. [PMID: 25679195 PMCID: PMC4610283 DOI: 10.1044/2015_jslhr-s-14-0095] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 04/02/2014] [Revised: 10/01/2014] [Accepted: 01/21/2015] [Indexed: 05/08/2023]
Abstract
PURPOSE The affective properties of infant-directed speech influence the attention of infants with normal hearing to speech sounds. This study explored the affective quality of maternal speech to infants with hearing impairment (HI) during the 1st year after cochlear implantation as compared to speech to infants with normal hearing. METHOD Mothers of infants with HI and mothers of infants with normal hearing matched by age (NH-AM) or hearing experience (NH-EM) were recorded playing with their infants during 3 sessions over a 12-month period. Speech samples of 25 s were low-pass filtered, leaving intonation but not speech information intact. Sixty adults rated the stimuli along 5 scales: positive/negative affect and intention to express affection, to encourage attention, to comfort/soothe, and to direct behavior. RESULTS Low-pass filtered speech to the HI and NH-EM groups was rated as more positive, affective, and comforting compared with such speech to the NH-AM group. Speech to infants with HI and with NH-AM was rated as more directive than speech to the NH-EM group. Mothers decreased affective qualities in speech to all infants but increased directive qualities in speech to infants with NH-EM over time. CONCLUSIONS Mothers fine-tune communicative intent in speech to their infant's developmental stage. They adjust affective qualities to infants' hearing experience rather than to chronological age but adjust directive qualities of speech to the chronological age of their infants.
Affiliation(s)
- Huiping Xu
- Indiana University–Purdue University Indianapolis
29
Chatterjee M, Zion DJ, Deroche ML, Burianek BA, Limb CJ, Goren AP, Kulkarni AM, Christensen JA. Voice emotion recognition by cochlear-implanted children and their normally-hearing peers. Hear Res 2015; 322:151-62. [PMID: 25448167 PMCID: PMC4615700 DOI: 10.1016/j.heares.2014.10.003] [Citation(s) in RCA: 97] [Impact Index Per Article: 10.8] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 05/17/2014] [Revised: 08/27/2014] [Accepted: 10/06/2014] [Indexed: 10/24/2022]
Abstract
Despite their remarkable success in bringing spoken language to hearing impaired listeners, the signal transmitted through cochlear implants (CIs) remains impoverished in spectro-temporal fine structure. As a consequence, pitch-dominant information, such as voice emotion, is diminished. For young children, the ability to correctly identify the mood/intent of the speaker (which may not always be visible in their facial expression) is an important aspect of social and linguistic development. Previous work in the field has shown that children with cochlear implants (cCI) have significant deficits in voice emotion recognition relative to their normally hearing peers (cNH). Here, we report on voice emotion recognition by a cohort of 36 school-aged cCI. Additionally, we provide, for the first time, a comparison of their performance to that of cNH and NH adults (aNH) listening to CI simulations of the same stimuli. We also provide comparisons to the performance of adult listeners with CIs (aCI), most of whom learned language primarily through normal acoustic hearing. Results indicate that, despite strong variability, on average, cCI perform similarly to their adult counterparts; that both groups' mean performance is similar to aNHs' performance with 8-channel noise-vocoded speech; that cNH achieve excellent scores in voice emotion recognition with full-spectrum speech, but on average, show significantly poorer scores than aNH with 8-channel noise-vocoded speech. A strong developmental effect was observed in the cNH with noise-vocoded speech in this task. These results point to the considerable benefit obtained by cochlear-implanted children from their devices, but also underscore the need for further research and development in this important and neglected area. This article is part of a Special Issue.
Affiliation(s)
- Monita Chatterjee
- Auditory Prostheses & Perception Lab., Boys Town National Research Hospital, 555 N 30th St, Omaha, NE 68131, USA.
- Danielle J Zion
- Department of Hearing & Speech Sciences, University of Maryland, 0100 LeFrak Hall, College Park, MD 20742, USA
- Mickael L Deroche
- Department of Otolaryngology, Johns Hopkins University School of Medicine, 818 Ross Research Building, 720 Rutland Avenue, Baltimore, MD, USA
- Brooke A Burianek
- Auditory Prostheses & Perception Lab., Boys Town National Research Hospital, 555 N 30th St, Omaha, NE 68131, USA
- Charles J Limb
- Department of Otolaryngology, Johns Hopkins University School of Medicine, 818 Ross Research Building, 720 Rutland Avenue, Baltimore, MD, USA
- Alison P Goren
- Auditory Prostheses & Perception Lab., Boys Town National Research Hospital, 555 N 30th St, Omaha, NE 68131, USA; Department of Hearing & Speech Sciences, University of Maryland, 0100 LeFrak Hall, College Park, MD 20742, USA
- Aditya M Kulkarni
- Auditory Prostheses & Perception Lab., Boys Town National Research Hospital, 555 N 30th St, Omaha, NE 68131, USA
- Julie A Christensen
- Auditory Prostheses & Perception Lab., Boys Town National Research Hospital, 555 N 30th St, Omaha, NE 68131, USA
30
Phillips-Silver J, Toiviainen P, Gosselin N, Turgeon C, Lepore F, Peretz I. Cochlear implant users move in time to the beat of drum music. Hear Res 2015; 321:25-34. [PMID: 25575604 DOI: 10.1016/j.heares.2014.12.007] [Citation(s) in RCA: 18] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 06/26/2014] [Revised: 12/18/2014] [Accepted: 12/22/2014] [Indexed: 11/28/2022]
Abstract
Cochlear implant users show a profile of residual, yet poorly understood, musical abilities. An ability that has received little to no attention in this population is entrainment to a musical beat. We show for the first time that a heterogeneous group of cochlear implant users is able to find the beat and move their bodies in time to Latin Merengue music, especially when the music is presented in unpitched drum tones. These findings not only reveal a hidden capacity for feeling musical rhythm through the body in the deaf and hearing-impaired population, but also illuminate promising avenues for designing early childhood musical training that can engage implanted children in social musical activities, with benefits potentially extending to non-musical domains.
Affiliation(s)
- Jessica Phillips-Silver
- International Laboratory for Brain, Music and Sound Research (BRAMS), Pavillon 1420 boul. Mont Royal, University of Montreal, Case Postale 6128, Station Centre-Ville, Montreal, Québec H3C 3J7, Canada; Department of Psychology, University of Montreal, C.P. 6128, Succursale Centre-Ville, Montréal, Québec H3C 3J7, Canada.
- Petri Toiviainen
- Finnish Centre of Excellence in Interdisciplinary Music Research, Department of Music, P.O. Box 35, FI-40014, University of Jyväskylä, Finland.
- Nathalie Gosselin
- International Laboratory for Brain, Music and Sound Research (BRAMS), Pavillon 1420 boul. Mont Royal, University of Montreal, Case Postale 6128, Station Centre-Ville, Montreal, Québec H3C 3J7, Canada.
- Christine Turgeon
- Department of Psychology, University of Montreal, C.P. 6128, Succursale Centre-Ville, Montréal, Québec H3C 3J7, Canada.
- Franco Lepore
- Department of Psychology, University of Montreal, C.P. 6128, Succursale Centre-Ville, Montréal, Québec H3C 3J7, Canada.
- Isabelle Peretz
- International Laboratory for Brain, Music and Sound Research (BRAMS), Pavillon 1420 boul. Mont Royal, University of Montreal, Case Postale 6128, Station Centre-Ville, Montreal, Québec H3C 3J7, Canada; Department of Psychology, University of Montreal, C.P. 6128, Succursale Centre-Ville, Montréal, Québec H3C 3J7, Canada.
31
Volkova A, Trehub SE, Schellenberg EG, Papsin BC, Gordon KA. Children's identification of familiar songs from pitch and timing cues. Front Psychol 2014; 5:863. [PMID: 25147537 PMCID: PMC4123732 DOI: 10.3389/fpsyg.2014.00863] [Citation(s) in RCA: 7] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/25/2014] [Accepted: 07/20/2014] [Indexed: 11/13/2022] Open
Abstract
The goal of the present study was to ascertain whether children with normal hearing and prelingually deaf children with cochlear implants could use pitch or timing cues alone or in combination to identify familiar songs. Children 4–7 years of age were required to identify the theme songs of familiar TV shows in a simple task with excerpts that preserved (1) the relative pitch and timing cues of the melody but not the original instrumentation, (2) the timing cues only (rhythm, meter, and tempo), and (3) the relative pitch cues only (pitch contour and intervals). Children with normal hearing performed at high levels and comparably across the three conditions. The performance of child implant users was well above chance levels when both pitch and timing cues were available, marginally above chance with timing cues only, and at chance with pitch cues only. This is the first demonstration that children can identify familiar songs from monotonic versions—timing cues but no pitch cues—and from isochronous versions—pitch cues but no timing cues. The study also indicates that, in the context of a very simple task, young implant users readily identify songs from melodic versions that preserve pitch and timing cues.
Affiliation(s)
- Anna Volkova
- Department of Psychology, University of Toronto Mississauga, Mississauga, ON, Canada
- Sandra E Trehub
- Department of Psychology, University of Toronto Mississauga, Mississauga, ON, Canada
- E Glenn Schellenberg
- Department of Psychology, University of Toronto Mississauga, Mississauga, ON, Canada
- Blake C Papsin
- Department of Otolaryngology, University of Toronto, Toronto, ON, Canada
- Karen A Gordon
- Department of Otolaryngology, University of Toronto, Toronto, ON, Canada
32
Mildner V, Koska T. Recognition and production of emotions in children with cochlear implants. Clin Linguist Phon 2014; 28:543-554. [PMID: 25000377 DOI: 10.3109/02699206.2014.927000] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.6] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/03/2023]
Abstract
The aim of this study was to examine auditory recognition and vocal production of emotions in three prelingually bilaterally profoundly deaf children aged 6-7 who received cochlear implants before age 2, and to compare them with age-matched normally hearing children. No consistent advantage was found for the normally hearing participants. In both groups, sadness was recognized best and disgust was the most difficult. Confusion matrices among the other emotions (anger, happiness, and fear) showed that children with and without hearing impairment may rely on different cues. In both groups, perception was superior to production. Normally hearing children were more successful in the production of sadness, happiness, and fear, but not anger or disgust. The data set is too small to draw any definite conclusions, but it seems that a combination of early implantation and regular auditory-oral-based therapy enables children with cochlear implants to process and produce emotional content comparable with that of children with normal hearing.
Affiliation(s)
- Vesna Mildner
- Faculty of Humanities and Social Sciences, Department of Phonetics, University of Zagreb, Zagreb, Croatia
33
Marsella P, Scorpecci A, Vecchiato G, Colosimo A, Maglione AG, Babiloni F. Neuroelectrical imaging study of music perception by children with unilateral and bilateral cochlear implants. Cochlear Implants Int 2014; 15 Suppl 1:S68-71. [PMID: 24869449 DOI: 10.1179/1467010014z.000000000171] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.1] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 10/31/2022]
Abstract
OBJECTIVE To investigate, by means of non-invasive neuroelectrical imaging, the differences in the perceived pleasantness of music between children with cochlear implants (CI) and normal-hearing (NH) children. METHODS Five NH children and five children who had received a sequential bilateral CI were assessed by means of high-resolution EEG with source reconstruction as they watched a musical cartoon. Implanted children were tested before and after the second implant. For each subject, the scalp power spectral density was calculated in order to investigate EEG alpha asymmetry. RESULTS The scalp topographic distribution of the EEG power spectrum in the alpha band differed between children using one CI and NH children. With two CIs, the cortical activation pattern changed significantly, becoming more similar to the one observed in NH children. CONCLUSIONS The findings support the hypothesis that bilateral CI users have a closer-to-normal perception of the pleasantness of music than unilaterally implanted children.
34
Schellenberg EG, Corrigall KA, Ladinig O, Huron D. Changing the Tune: Listeners Like Music that Expresses a Contrasting Emotion. Front Psychol 2012; 3:574. [PMID: 23269918 PMCID: PMC3529308 DOI: 10.3389/fpsyg.2012.00574] [Citation(s) in RCA: 12] [Impact Index Per Article: 1.0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 08/26/2012] [Accepted: 12/05/2012] [Indexed: 11/24/2022] Open
Abstract
Theories of esthetic appreciation propose that (1) a stimulus is liked because it is expected or familiar, (2) a stimulus is liked most when it is neither too familiar nor too novel, or (3) a novel stimulus is liked because it elicits an intensified emotional response. We tested the third hypothesis by examining liking for music as a function of whether the emotion it expressed contrasted with the emotion expressed by music heard previously. Stimuli were 30-s happy- or sad-sounding excerpts from recordings of classical piano music. On each trial, listeners heard a different excerpt and made liking and emotion-intensity ratings. The emotional character of consecutive excerpts was repeated with varying frequencies, followed by an excerpt that expressed a contrasting emotion. As the number of presentations of the background emotion increased, liking and intensity ratings became lower compared to those for the contrasting emotion. Consequently, when the emotional character of the music was relatively novel, listeners’ responses intensified and their appreciation increased.
Affiliation(s)
- E Glenn Schellenberg
- Department of Psychology, University of Toronto Mississauga, Mississauga, ON, Canada