1
Lukaschyk J, Illg A. Subjective Voice Handicap and Vocal Tract Discomfort in Patients With Cochlear Implant. J Voice 2025;39:287.e11-287.e18. [PMID: 35945098] [DOI: 10.1016/j.jvoice.2022.07.005]
Abstract
OBJECTIVES Changes in the auditory system, for example due to hearing impairment, can cause changes in breathing, phonation, and articulation. The aim of this study was to provide first data on subjective Voice Handicap and Vocal Tract Discomfort in subjects with hearing impairment and cochlear implant (CI) after initial fitting. STUDY DESIGN Prospective cross-sectional study. METHODS A total of 111 participants (57 female and 54 male) between 20 and 85 years of age (mean = 58.21, SD = 14.96) were recruited between October 2019 and March 2020 from the Clinic of Otorhinolaryngology at Medical University of Hannover. Participants were tested after initial CI fitting, six weeks after implantation, using the German versions of the Vocal Tract Discomfort (VTD) Scale and the nine-item Voice Handicap Index (VHI-9i), as well as speech comprehension tests and a purpose-developed questionnaire on voice usage and other influencing factors. Statistics included descriptive analysis, group comparisons (t test), and Pearson correlations between the VTD Scale, the VHI-9i, and hearing status. RESULTS Patients with CI showed low scores on the VTD Scale and the VHI-9i (VTD mean = 7.85 [SD = 10.4]; VHI-9i mean = 4.04 [SD = 5.77]). None of the speech comprehension tests correlated with either the VTD Scale or the VHI-9i. Furthermore, neither subjective Voice Handicap nor Vocal Tract Discomfort correlated with age or type of treatment. CONCLUSION Patients included in this study did not show more subjective Voice Handicap or Vocal Tract Discomfort than normal-hearing peers. VTD Scale and VHI-9i scores did not depend on duration of hearing loss, speech comprehension, type of treatment, or age.
Affiliation(s)
- Julia Lukaschyk
- ENT, Phoniatrics and Pedaudiology - Klosterstern, Eppendorfer Baum 3, Hamburg 20249, Germany.
- Angelika Illg
- Department of Otolaryngology, Hannover Medical University, Hannover, Germany
2
Taitelbaum-Swead R, Ben-David BM. The Role of Early Intact Auditory Experience on the Perception of Spoken Emotions, Comparing Prelingual to Postlingual Cochlear Implant Users. Ear Hear 2024;45:1585-1599. [PMID: 39004788] [DOI: 10.1097/aud.0000000000001550]
Abstract
OBJECTIVES Cochlear implants (CI) are remarkably effective, but have limitations regarding the transformation of the spectro-temporal fine structures of speech. This may impair processing of spoken emotions, which involves the identification and integration of semantic and prosodic cues. Our previous study found spoken-emotion-processing differences between CI users with postlingual deafness (postlingual CI) and normal-hearing (NH) matched controls (age range, 19 to 65 years). Postlingual CI users over-relied on semantic information in incongruent trials (prosody and semantics present different emotions), but rated congruent trials (same emotion) similarly to controls. Postlingual CI users' intact early auditory experience may explain this pattern of results. The present study examined whether CI users without intact early auditory experience (prelingual CI) would generally perform worse on spoken emotion processing than NH and postlingual CI users, and whether CI use would affect prosodic processing in both CI groups. First, we compared prelingual CI users with their NH controls. Second, we compared the results of the present study to our previous study (Taitelbaum-Swead et al. 2022; postlingual CI). DESIGN Fifteen prelingual CI users and 15 NH controls (age range, 18 to 31 years) listened to spoken sentences composed of different combinations (congruent and incongruent) of three discrete emotions (anger, happiness, sadness) and neutrality (performance baseline), presented in prosodic and semantic channels (Test for Rating of Emotions in Speech paradigm). Listeners were asked to rate (six-point scale) the extent to which each of the predefined emotions was conveyed by the sentence as a whole (integration of prosody and semantics), or to focus on only one channel (rating the target emotion [RTE]) and ignore the other (selective attention). In addition, all participants performed standard tests of speech perception. Performance on the Test for Rating of Emotions in Speech was compared with the previous study (postlingual CI). RESULTS When asked to focus on one channel, semantics or prosody, both CI groups showed a decrease in prosodic RTE (compared with controls), but only the prelingual CI group showed a decrease in semantic RTE. When the task called for channel integration, both groups of CI users used semantic emotional information to a greater extent than their NH controls. Both groups of CI users rated sentences that did not present the target emotion higher than their NH controls, indicating some degree of confusion. However, only the prelingual CI group rated congruent sentences lower than their NH controls, suggesting reduced accumulation of information across channels. For prelingual CI users, individual differences in identification of monosyllabic words were significantly related to semantic identification and semantic-prosodic integration. CONCLUSIONS Taken together with our previous study, we found that the degradation of acoustic information by the CI impairs the processing of prosodic emotions in both CI user groups. This distortion appears to lead CI users to over-rely on semantic information when asked to integrate across channels. Early intact auditory exposure among CI users was found to be necessary for the effective identification of semantic emotions, as well as for the accumulation of emotional information across the two channels. Results suggest that interventions for spoken-emotion processing should not ignore the onset of hearing loss.
Affiliation(s)
- Riki Taitelbaum-Swead
- Department of Communication Disorders, Speech Perception and Listening Effort Lab in the name of Prof. Mordechai Himelfarb, Ariel University, Israel
- Meuhedet Health Services, Tel Aviv, Israel
- Boaz M Ben-David
- Baruch Ivcher School of Psychology, Reichman University (IDC), Herzliya, Israel
- Department of Speech-Language Pathology, University of Toronto, Toronto, ON, Canada
- KITE Research Institute, Toronto Rehabilitation Institute-University Health Network, Toronto, Ontario, Canada
3
Sahana P, Manjula P. Vocal Emotion Perception in Children Using Cochlear Implant. J Int Adv Otol 2024;20:383-389. [PMID: 39388519] [PMCID: PMC11562208] [DOI: 10.5152/iao.2024.241480]
Abstract
The significance of emotional prosody in social communication is well established, yet research on emotion perception among cochlear implant (CI) users is less extensive. This study aims to explore vocal emotion perception in children using CI and bimodal hearing devices and to compare them with their normal-hearing (NH) peers. The study involved children aged 4-10 years with a unilateral CI and a contralateral hearing aid (HA), matched with NH peers by gender and listening age. Children were selected using snowball sampling for the CI group and purposive sampling for the NH group. Vocal emotion perception was assessed for semantically neutral sentences in "happy," "sad," and "angry" emotions using a three-alternative forced-choice test. The NH group demonstrated significantly superior emotion perception (P=.002) compared to the CI group. Both groups accurately identified the "happy" emotion. However, the NH group had higher scores for the "angry" emotion than for the "sad" emotion, while the CI group showed better scores for "sad" than for "angry." Bimodal hearing devices improved recognition of "sad" and "angry" emotions, with a decrease in confusion percentages. The unbiased hit rate (Hu) provided more substantial insight than the raw hit score. Bimodal hearing devices enhance the perception of "sad" and "angry" vocal emotions compared to using a CI alone, likely because the HA provides temporal fine structure cues and thereby better represents fundamental frequency variations. Children with a unilateral CI benefit significantly in the perception of emotions from using an HA in the contralateral ear, aiding better socio-emotional development.
Affiliation(s)
- Puttaraju Sahana
- Center of Excellence (C-PEC), All India Institute of Speech and Hearing, Mysuru, India
- Puttabasappa Manjula
- Department of Audiology, All India Institute of Speech and Hearing, Mysuru, India
4
Cartocci G, Inguscio BMS, Giorgi A, Rossi D, Di Nardo W, Di Cesare T, Leone CA, Grassia R, Galletti F, Ciodaro F, Galletti C, Albera R, Canale A, Babiloni F. Investigation of Deficits in Auditory Emotional Content Recognition by Adult Cochlear Implant Users through the Study of Electroencephalographic Gamma and Alpha Asymmetry and Alexithymia Assessment. Brain Sci 2024;14:927. [PMID: 39335422] [PMCID: PMC11430703] [DOI: 10.3390/brainsci14090927]
Abstract
BACKGROUND/OBJECTIVES Given the importance of emotion recognition for communication purposes, and the impairment of this skill in CI users despite impressive language performance, the aim of the present study was to investigate the neural correlates of emotion recognition skills, apart from language, in adult unilateral CI (UCI) users during a music-in-noise (happy/sad) recognition task. Furthermore, hemispheric asymmetry was investigated through electroencephalographic (EEG) rhythms, given the traditional concept of hemispheric lateralization for emotional processing and the intrinsic asymmetry of the clinical UCI condition. METHODS Twenty adult UCI users and eight normal-hearing (NH) controls were recruited. EEG gamma and alpha band power was assessed, as there is evidence of a relationship between gamma and emotional response and between alpha asymmetry and the tendency to approach or withdraw from stimuli. The TAS-20 questionnaire (alexithymia) was completed by the participants. RESULTS The results showed no effect of background noise, while supporting that gamma activity related to emotion processing is altered in the UCI group compared to the NH group, and that these alterations are also modulated by the etiology of deafness. In particular, relatively higher gamma activity on the CI side corresponds to positive processes, correlated with higher emotion recognition abilities, whereas gamma activity on the non-CI side may be related to positive processes inversely correlated with alexithymia and also inversely correlated with age; a correlation between TAS-20 scores and age was found only in the NH group. CONCLUSIONS EEG gamma activity appears to be fundamental to the processing of the emotional aspect of music and also to the psychocognitive emotion-related component in adults with CI.
Affiliation(s)
- Giulia Cartocci
- Department of Molecular Medicine, Sapienza University of Rome, Viale Regina Elena 291, 00161 Rome, Italy
- BrainSigns Ltd., Via Tirso 14, 00198 Rome, Italy
- Bianca Maria Serena Inguscio
- BrainSigns Ltd., Via Tirso 14, 00198 Rome, Italy
- Department of Computer, Control, and Management Engineering "Antonio Ruberti", Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Andrea Giorgi
- BrainSigns Ltd., Via Tirso 14, 00198 Rome, Italy
- Department of Anatomical, Histological, Forensic & Orthopedic Sciences, Sapienza University of Rome, Piazzale Aldo Moro 5, 00185 Rome, Italy
- Dario Rossi
- Department of Molecular Medicine, Sapienza University of Rome, Viale Regina Elena 291, 00161 Rome, Italy
- BrainSigns Ltd., Via Tirso 14, 00198 Rome, Italy
- Walter Di Nardo
- Institute of Otorhinolaryngology, Catholic University of Sacred Heart, Fondazione Policlinico "A. Gemelli" IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Tiziana Di Cesare
- Institute of Otorhinolaryngology, Catholic University of Sacred Heart, Fondazione Policlinico "A. Gemelli" IRCCS, Largo Agostino Gemelli 8, 00168 Rome, Italy
- Carlo Antonio Leone
- Department of Otolaryngology Head-Neck Surgery, Monaldi Hospital, Via Leonardo Bianchi, 80131 Naples, Italy
- Rosa Grassia
- Department of Otolaryngology Head-Neck Surgery, Monaldi Hospital, Via Leonardo Bianchi, 80131 Naples, Italy
- Francesco Galletti
- Department of Otorhinolaryngology, University of Messina, Piazza Pugliatti 1, 98122 Messina, Italy
- Francesco Ciodaro
- Department of Otorhinolaryngology, University of Messina, Piazza Pugliatti 1, 98122 Messina, Italy
- Cosimo Galletti
- Department of Otorhinolaryngology, University of Messina, Piazza Pugliatti 1, 98122 Messina, Italy
- Roberto Albera
- Department of Surgical Sciences, University of Turin, Via Genova 3, 10126 Turin, Italy
- Andrea Canale
- Department of Surgical Sciences, University of Turin, Via Genova 3, 10126 Turin, Italy
- Fabio Babiloni
- Department of Molecular Medicine, Sapienza University of Rome, Viale Regina Elena 291, 00161 Rome, Italy
- BrainSigns Ltd., Via Tirso 14, 00198 Rome, Italy
- Department of Computer Science, Hangzhou Dianzi University, Hangzhou 310018, China
5
Jahn KN, Wiegand-Shahani BM, Moturi V, Kashiwagura ST, Doak KR. Cochlear-implant simulated spectral degradation attenuates emotional responses to environmental sounds. Int J Audiol 2024:1-7. [PMID: 39146030] [DOI: 10.1080/14992027.2024.2385552]
Abstract
OBJECTIVE Cochlear implants (CIs) provide users with a spectrally degraded acoustic signal that could impact their auditory emotional experiences. This study evaluated the effects of CI-simulated spectral degradation on emotional valence and arousal elicited by environmental sounds. DESIGN Thirty emotionally evocative sounds were filtered through a noise-band vocoder. Participants rated the perceived valence and arousal elicited by each of the full-spectrum and vocoded stimuli. These ratings were compared across acoustic conditions (full-spectrum, vocoded) and as a function of stimulus type (unpleasant, neutral, pleasant). STUDY SAMPLE Twenty-five young adults (age 19 to 34 years) with normal hearing. RESULTS Emotional responses were less extreme for spectrally degraded (i.e., vocoded) sounds than for full-spectrum sounds. Specifically, spectrally degraded stimuli were perceived as more negative and less arousing than full-spectrum stimuli. CONCLUSION Because this paradigm meticulously replicated CI spectral degradation while controlling for variables that are confounded within CI users, these findings indicate that CI spectral degradation can compress the range of sound-induced emotion independent of hearing loss and other idiosyncratic device- or person-level variables. Future work will characterize emotional reactions to sound in CI users via objective, psychoacoustic, and subjective measures.
Affiliation(s)
- Kelly N Jahn
- Department of Speech, Language, and Hearing, The University of Texas at Dallas, Richardson, TX, USA
- Callier Center for Communication Disorders, The University of Texas at Dallas, Dallas, TX, USA
- Braden M Wiegand-Shahani
- Department of Speech, Language, and Hearing, The University of Texas at Dallas, Richardson, TX, USA
- Callier Center for Communication Disorders, The University of Texas at Dallas, Dallas, TX, USA
- Vaishnavi Moturi
- Department of Speech, Language, and Hearing, The University of Texas at Dallas, Richardson, TX, USA
- Sean Takamoto Kashiwagura
- Department of Speech, Language, and Hearing, The University of Texas at Dallas, Richardson, TX, USA
- Callier Center for Communication Disorders, The University of Texas at Dallas, Dallas, TX, USA
- Karlee R Doak
- Department of Speech, Language, and Hearing, The University of Texas at Dallas, Richardson, TX, USA
- Callier Center for Communication Disorders, The University of Texas at Dallas, Dallas, TX, USA
6
von Eiff CI, Kauk J, Schweinberger SR. The Jena Audiovisual Stimuli of Morphed Emotional Pseudospeech (JAVMEPS): A database for emotional auditory-only, visual-only, and congruent and incongruent audiovisual voice and dynamic face stimuli with varying voice intensities. Behav Res Methods 2024;56:5103-5115. [PMID: 37821750] [PMCID: PMC11289065] [DOI: 10.3758/s13428-023-02249-4]
Abstract
We describe JAVMEPS, an audiovisual (AV) database for emotional voice and dynamic face stimuli, with voices varying in emotional intensity. JAVMEPS includes 2256 stimulus files comprising (A) recordings of 12 speakers, speaking four bisyllabic pseudowords with six naturalistic induced basic emotions plus neutral, in auditory-only, visual-only, and congruent AV conditions. It furthermore comprises (B) caricatures (140%), original voices (100%), and anti-caricatures (60%) for happy, fearful, angry, sad, disgusted, and surprised voices for eight speakers and two pseudowords. Crucially, JAVMEPS contains (C) precisely time-synchronized congruent and incongruent AV (and corresponding auditory-only) stimuli with two emotions (anger, surprise), (C1) with original intensity (ten speakers, four pseudowords) and (C2) with graded AV congruence (implemented via five voice morph levels, from caricatures to anti-caricatures; eight speakers, two pseudowords). We collected classification data for Stimulus Set A from 22 normal-hearing listeners and four cochlear implant (CI) users, for two pseudowords, in auditory-only, visual-only, and AV conditions. Normal-hearing individuals showed good classification performance (McorrAV = .59 to .92), with classification rates in the auditory-only condition ≥ .38 correct (surprise: .67, anger: .51). Despite compromised vocal emotion perception, CI users performed above the chance level of .14 for auditory-only stimuli, with best rates for surprise (.31) and anger (.30). We anticipate JAVMEPS will become a useful open resource for researchers studying auditory emotion perception, especially when adaptive testing or calibration of task difficulty is desirable. With its time-synchronized congruent and incongruent stimuli, JAVMEPS can also help fill a gap in research on dynamic audiovisual integration in emotion perception via behavioral or neurophysiological recordings.
Affiliation(s)
- Celina I von Eiff
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, Am Steiger 3, 07743 Jena, Germany
- Voice Research Unit, Institute of Psychology, Friedrich Schiller University Jena, Leutragraben 1, 07743 Jena, Germany
- DFG SPP 2392 Visual Communication (ViCom), Frankfurt am Main, Germany
- Jena University Hospital, 07747 Jena, Germany
- Julian Kauk
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, Am Steiger 3, 07743 Jena, Germany
- Stefan R Schweinberger
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, Am Steiger 3, 07743 Jena, Germany
- Voice Research Unit, Institute of Psychology, Friedrich Schiller University Jena, Leutragraben 1, 07743 Jena, Germany
- DFG SPP 2392 Visual Communication (ViCom), Frankfurt am Main, Germany
- Jena University Hospital, 07747 Jena, Germany
7
Valentin O, Lehmann A, Nguyen D, Paquette S. Integrating Emotion Perception in Rehabilitation Programs for Cochlear Implant Users: A Call for a More Comprehensive Approach. J Speech Lang Hear Res 2024;67:1635-1642. [PMID: 38619441] [DOI: 10.1044/2024_jslhr-23-00660]
Abstract
PURPOSE Postoperative rehabilitation programs for cochlear implant (CI) recipients primarily emphasize enhancing speech perception. However, effective communication in everyday social interactions necessitates consideration of diverse verbal social cues to facilitate language comprehension. Failure to discern emotional expressions may lead to maladjusted social behavior, underscoring the importance of integrating social cue perception into rehabilitation initiatives to enhance CI users' well-being. After conventional rehabilitation, CI users demonstrate varying levels of emotion perception abilities. This disparity notably impacts young CI users, whose emotion perception deficits can extend to social functioning, encompassing coping strategies and social competence, even when relying on nonauditory cues such as facial expressions. Knowing that emotion perception abilities generally decrease with age, acknowledging emotion perception impairments in aging CI users is crucial, especially since a direct correlation between quality-of-life scores and vocal emotion recognition abilities has been observed in adult CI users. After briefly reviewing the scope of CI rehabilitation programs and summarizing the mounting evidence on CI users' emotion perception deficits and their impact, we present our recommendations for embedding emotional training in enriched and standardized evaluation/rehabilitation programs that can improve CI users' social integration and quality of life. CONCLUSIONS Evaluating all aspects of communication, including emotion perception, in CI rehabilitation programs is crucial because it ensures a comprehensive approach that enhances both speech comprehension and the emotional dimension of communication, potentially improving CI users' social interaction and overall well-being. The development of emotion perception training holds promise for CI users and for individuals grappling with various forms of hearing loss and sensory deficits. Ultimately, adopting such a comprehensive approach has the potential to significantly elevate the overall quality of life for a broad spectrum of patients.
Affiliation(s)
- Olivier Valentin
- International Laboratory for Brain, Music and Sound Research and Centre for Research on Brain, Language and Music (BRAMS and CRBLM), Montréal, Québec, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montréal, Québec, Canada
- Department of Otolaryngology-Head and Neck Surgery, Faculty of Medicine and Health Sciences, McGill University, Montréal, Québec, Canada
- Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Alexandre Lehmann
- International Laboratory for Brain, Music and Sound Research and Centre for Research on Brain, Language and Music (BRAMS and CRBLM), Montréal, Québec, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montréal, Québec, Canada
- Department of Otolaryngology-Head and Neck Surgery, Faculty of Medicine and Health Sciences, McGill University, Montréal, Québec, Canada
- Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Don Nguyen
- International Laboratory for Brain, Music and Sound Research and Centre for Research on Brain, Language and Music (BRAMS and CRBLM), Montréal, Québec, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montréal, Québec, Canada
- Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Sébastien Paquette
- International Laboratory for Brain, Music and Sound Research and Centre for Research on Brain, Language and Music (BRAMS and CRBLM), Montréal, Québec, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montréal, Québec, Canada
- Department of Psychology, Faculty of Arts and Science, Trent University, Peterborough, Ontario, Canada
8
Mahrous MM, Abdelgoad AA, Said NM, Telmesani LM, Alrusayyis DF. Voice acoustic characteristics of children with late-onset cochlear implantation: Correlation to auditory performance. Cochlear Implants Int 2024;25:1-10. [PMID: 38171933] [DOI: 10.1080/14670100.2023.2295159]
Abstract
OBJECTIVES To study the voice acoustic parameters of congenitally deaf children with delayed access to sound due to late-onset cochlear implantation, and to correlate their voice characteristics with their auditory performance. METHODS The study included 84 children: a control group of 50 children with normal hearing and normal speech development, and a study group of 34 paediatric cochlear implant (CI) recipients who had had profound hearing loss since birth. According to speech recognition scores and pure-tone thresholds, the study group was further subdivided into two subgroups: 24 children with excellent auditory performance and 10 children with fair auditory performance. The mean age at implantation was 3.6 years for the excellent auditory performance subgroup and 3.2 years for the fair auditory performance subgroup. Voice acoustic analysis was conducted on all study participants. RESULTS Analysis of voice acoustic parameters revealed a statistically significant delay in both study subgroups in comparison to the control group. However, there was no statistically significant difference between the two study subgroups. DISCUSSION Interestingly, in both the excellent and fair performance subgroups, the gap relative to normal-hearing children was still present. While late-implanted children performed better on segmental perception (e.g. word recognition), suprasegmental perception (as demonstrated by objective acoustic voice analysis) did not progress to the same extent. CONCLUSION Objective acoustic voice measurements demonstrated a significant delay in the suprasegmental speech performance of children with late-onset CI, even those with excellent auditory performance.
Affiliation(s)
- Mahmoud M Mahrous
- Audio-Vestibular Medicine Unit, Otorhinolaryngology Department, King Fahad Hospital of University, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Audio-Vestibular Medicine Unit, Otorhinolaryngology Department, Faculty of Medicine, Ain Shams University, Cairo, Egypt
- Ahmed A Abdelgoad
- Phoniatrics Unit, Otorhinolaryngology Department, Faculty of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Nithreen M Said
- Audio-Vestibular Medicine Unit, Otorhinolaryngology Department, Faculty of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Laila M Telmesani
- Otorhinolaryngology-Head and Neck Surgery Department, Faculty of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
- Danah F Alrusayyis
- Faculty of Medicine, Imam Abdulrahman Bin Faisal University, Dammam, Saudi Arabia
9
Lukovenko T, Sikinbayev B, Shterts O, Mironova E. Parental Competence as a Teacher in the Auditory Development of Children with Cochlear Implants. J Psycholinguist Res 2023;52:2119-2133. [PMID: 37480449] [DOI: 10.1007/s10936-023-09995-8]
Abstract
The number of children with partial or total hearing loss is increasing every day, and most of them undergo cochlear implant surgery. This paper aims to assess the teaching competence of parents of children with cochlear implants. The study was conducted over one year (2022) in one kindergarten and one specialized school in Almaty, Kazakhstan. It included 24 parents of children of kindergarten age (mean age 3.5 ± 0.5 years) and 20 parents of children of primary school age (10.0 ± 0.5 years), whose children had undergone surgery at the ages of 1-2 and 6-9 years, respectively. A minimal number of parents had a high level of competence; twice as many showed sufficient competence; however, most of the parents had insufficient competence. The children's indicators were as follows: 3 children had a high level of listening perception, twice as many had a sufficient level, and the same number had an insufficient level; children with a low level were three times as numerous as those with a high level. A high level of pedagogical competence of parents correlated with a high level of children's auditory verbal abilities (on the scale of auditory ability integration). There was also a direct relationship with the level of speech development (on the scale of speech use) for children who had the surgery a year earlier. The obtained data can be applied to the educational process for children with cochlear implants to improve their auditory and speech skills as quickly as possible. The involvement of parents in the education and rehabilitation of children with cochlear implants is crucial for the successful adaptation and development of the child. Parents can become irreplaceable partners of specialists and educational institutions, providing their children with optimal support and assistance on their way to developing auditory and communication skills. To enhance parental competence in the auditory development of children with cochlear implants, participation in specialized educational programs designed for parents, offered by professionals and organizations, is recommended. Additionally, actively engaging with educational resources, online materials, and informational communities is beneficial for acquiring up-to-date knowledge and receiving support from other parents, specialists, and experts.
Affiliation(s)
- Tatiana Lukovenko
- Department of Theory and Methodology of Pedagogical and Defectological Education, Pacific State University, Khabarovsk, Russia
- Bauyrzhan Sikinbayev
- Department of Special Pedagogy, Kazakh National Women's Teacher Training University, Almaty, Kazakhstan
- Olga Shterts
- Department of Psychology, Kazan Federal University, Elabuga, Russia
- Ekaterina Mironova
- Department of Polyclinic Therapy, Institute of Clinical Medicine named after N.V. Sklifosovsky, I.M. Sechenov First Moscow State Medical University (Sechenov University), Moscow, Russia
10
Gedik Toker Ö, Hüsam H, Behmen MB, Bal N, Gültekin M, Toker K. Validity and Reliability of the Turkish Version of the Emotional Communication in Hearing Questionnaire. Am J Audiol 2023:1-13. [PMID: 37956697] [DOI: 10.1044/2023_aja-23-00093]
Abstract
PURPOSE The Emotional Communication in Hearing Questionnaire (EMO-CHeQ) is designed to evaluate awareness of vocal emotion information and perception of emotion. This study sought to translate the EMO-CHeQ into Turkish in accordance with international standards and to ascertain its validity and reliability statistically by administering it to native Turkish-speaking subjects. METHOD This empirical study involved collecting data from participants using a scale. A total of 460 individuals, comprising 158 women and 302 men (mean age = 33.43 ± 13.14 years), participated. The sample encompassed 295 subjects with normal hearing, 101 hearing aid users, and 64 cochlear implant users. Exploratory factor analysis, followed by confirmatory factor analysis, was employed to ensure construct validity. Internal consistency was assessed with Cronbach's alpha reliability analysis, and content validity analysis examined how effectively the Turkish version of the scale fulfilled its intended purpose. RESULTS The total Cronbach's alpha internal consistency coefficient of the scale was .949, and the explained variance was 74.385%. The Turkish version of the EMO-CHeQ demonstrated high construct validity, internal consistency, and explanatory efficacy. The scale revealed significant differences (p < .05) in emotional communication among the normal-hearing group, hearing aid users, and cochlear implant users. CONCLUSIONS The Turkish adaptation of the EMO-CHeQ is a credible and robust tool for evaluating how individuals perceive emotion in speech. Emotion perception was found to be suboptimal among hearing aid users compared to cochlear implant users, although it was most proficient in those with normal hearing. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.24520624.
Affiliation(s)
- Özge Gedik Toker
- Department of Audiology, Faculty of Health Sciences, Bezmialem Vakıf University, Istanbul, Turkey
- Hilal Hüsam
- Department of Audiology, Faculty of Health Sciences, Bezmialem Vakıf University, Istanbul, Turkey
- Meliha Başöz Behmen
- Department of Audiology, Faculty of Health Sciences, Bezmialem Vakıf University, Istanbul, Turkey
- Nilüfer Bal
- Department of Audiology, Faculty of Health Sciences, Bezmialem Vakıf University, Istanbul, Turkey
- Department of Audiology, Faculty of Medicine, Marmara University, Istanbul, Turkey
- Kerem Toker
- Department of Health Management, Faculty of Health Sciences, Bezmialem Vakıf University, Istanbul, Turkey
11
Koelewijn T, Gaudrain E, Shehab T, Treczoks T, Başkent D. The Role of Word Content, Sentence Information, and Vocoding for Voice Cue Perception. J Speech Lang Hear Res 2023;66:3665-3676. [PMID: 37556819] [DOI: 10.1044/2023_jslhr-22-00491]
Abstract
PURPOSE For voice perception, two voice cues, the fundamental frequency (fo) and/or vocal tract length (VTL), seem to largely contribute to the identification of voices and speaker characteristics. Acoustic content related to these voice cues is altered in cochlear implant transmitted speech, rendering voice perception difficult for the implant user. In everyday listening, there could be some facilitation from top-down compensatory mechanisms, such as the use of linguistic content. Recently, we have shown a lexical content benefit on just-noticeable differences (JNDs) in VTL perception, which was not affected by vocoding. This study investigated whether the observed benefit relates to lexicality or phonemic content, and whether additional sentence information can affect voice cue perception as well. METHOD This study examined the lexical benefit on VTL perception by comparing words, time-reversed words, and nonwords, to investigate the contribution of lexical (words vs. nonwords) and phonetic (nonwords vs. reversed words) information. In addition, we investigated the effect of the amount of speech (auditory) information on fo and VTL voice cue perception by comparing words to sentences. In both experiments, nonvocoded and vocoded auditory stimuli were presented. RESULTS The outcomes replicated the detrimental effect of reversed words on VTL perception. Smaller JNDs were shown for stimuli containing lexical and/or phonemic information. Experiment 2 showed a benefit in processing full sentences compared to single words in both fo and VTL perception. In both experiments, there was an effect of vocoding, which interacted with sentence information only for fo. CONCLUSIONS In addition to previous findings suggesting a lexical benefit, the current results show, more specifically, that lexical and phonemic information improves VTL perception. fo and VTL perception benefits from more sentence information compared to words. These results indicate that cochlear implant users may be able to partially compensate for voice cue perception difficulties by relying on the linguistic content and rich acoustic cues of everyday speech. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.23796405.
Affiliation(s)
- Thomas Koelewijn
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Research School of Behavioural and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, the Netherlands
- Etienne Gaudrain
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Research School of Behavioural and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, the Netherlands
- Lyon Neuroscience Research Center, CNRS UMR5292, Inserm U1028, UCBL, UJM, Lyon, France
- Thawab Shehab
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Neurolinguistics, Faculty of Arts, University of Groningen, the Netherlands
- Tobias Treczoks
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Medical Physics and Cluster of Excellence "Hearing4all," Department of Medical Physics and Acoustics, Faculty VI Medicine and Health Sciences, Carl von Ossietzky Universität Oldenburg, Germany
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Research School of Behavioural and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, the Netherlands
12
Karimi-Boroujeni M, Dajani HR, Giguère C. Perception of Prosody in Hearing-Impaired Individuals and Users of Hearing Assistive Devices: An Overview of Recent Advances. J Speech Lang Hear Res 2023;66:775-789. [PMID: 36652704] [DOI: 10.1044/2022_jslhr-22-00125]
Abstract
PURPOSE Prosody perception is an essential component of speech communication and social interaction through which both linguistic and emotional information are conveyed. Considering the importance of the auditory system in processing prosody-related acoustic features, the aim of this review article is to review the effects of hearing impairment on prosody perception in children and adults. It also assesses the performance of hearing assistive devices in restoring prosodic perception. METHOD Following a comprehensive online database search, two lines of inquiry were targeted. The first summarizes recent attempts toward determining the effects of hearing loss and interacting factors such as age and cognitive resources on prosody perception. The second analyzes studies reporting beneficial or detrimental impacts of hearing aids, cochlear implants, and bimodal stimulation on prosodic abilities in people with hearing loss. RESULTS The reviewed studies indicate that hearing-impaired individuals vary widely in perceiving affective and linguistic prosody, depending on factors such as hearing loss severity, chronological age, and cognitive status. In addition, most of the emerging information points to limitations of hearing assistive devices in processing and transmitting the acoustic features of prosody. CONCLUSIONS The existing literature is incomplete in several respects, including the lack of a consensus on how and to what extent hearing prostheses affect prosody perception, especially the linguistic function of prosody, and a gap in assessing prosody under challenging listening situations such as noise. This review article proposes directions that future research could follow to provide a better understanding of prosody processing in those with hearing impairment, which may help health care professionals and designers of assistive technology to develop innovative diagnostic and rehabilitation tools. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21809772.
Collapse
Affiliation(s)
- Hilmi R Dajani
- School of Electrical Engineering and Computer Science, University of Ottawa, Ontario, Canada
- Christian Giguère
- School of Rehabilitation Sciences, University of Ottawa, Ontario, Canada
13
von Eiff CI, Frühholz S, Korth D, Guntinas-Lichius O, Schweinberger SR. Crossmodal benefits to vocal emotion perception in cochlear implant users. iScience 2022;25:105711. [PMID: 36578321] [PMCID: PMC9791346] [DOI: 10.1016/j.isci.2022.105711]
Abstract
Speech comprehension counts as a benchmark outcome of cochlear implants (CIs), disregarding the communicative importance of efficient integration of audiovisual (AV) socio-emotional information. We investigated effects of time-synchronized facial information on vocal emotion recognition (VER). In Experiment 1, 26 CI users and normal-hearing (NH) individuals classified emotions for auditory-only, AV congruent, or AV incongruent utterances. In Experiment 2, we compared crossmodal effects between groups with adaptive testing, calibrating auditory difficulty via voice morphs from emotional caricatures to anti-caricatures. CI users performed lower than NH individuals, and VER was correlated with quality of life. Importantly, CI users showed larger benefits to VER from congruent facial emotional information even at equal auditory-only performance levels, suggesting that their larger crossmodal benefits result from deafness-related compensation rather than degraded acoustic representations. Crucially, vocal caricatures enhanced CI users' VER. The findings advocate the use of AV stimuli during CI rehabilitation and suggest perspectives for caricaturing in both perceptual training and sound processor technology.
Affiliation(s)
- Celina Isabelle von Eiff
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany
- Voice Research Unit, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany
- DFG SPP 2392 Visual Communication (ViCom), Frankfurt am Main, Germany
- Sascha Frühholz
- Department of Psychology (Cognitive and Affective Neuroscience), Faculty of Arts and Social Sciences, University of Zurich, 8050 Zurich, Switzerland
- Department of Psychology, University of Oslo, 0373 Oslo, Norway
- Daniela Korth
- Department of Otorhinolaryngology, Jena University Hospital, 07747 Jena, Germany
- Stefan Robert Schweinberger
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany
- Voice Research Unit, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany
- DFG SPP 2392 Visual Communication (ViCom), Frankfurt am Main, Germany
14
Grantham H, Davidson LS, Geers AE, Uchanski RM. Effects of Segmental and Suprasegmental Speech Perception on Reading in Pediatric Cochlear Implant Recipients. J Speech Lang Hear Res 2022;65:3583-3594. [PMID: 36001864] [PMCID: PMC9913132] [DOI: 10.1044/2022_jslhr-22-00035]
Abstract
PURPOSE The aim of this study was to determine whether suprasegmental speech perception contributes unique variance to predictions of reading decoding and comprehension for prelingually deaf children using two devices, at least one of which is a cochlear implant (CI). METHOD A total of 104 CI recipients, aged 5 to 9 years, completed tests of segmental perception (e.g., word recognition in quiet and noise, recognition of vowels and consonants in quiet), suprasegmental perception (e.g., talker and stress discrimination, nonword stress repetition, and emotion identification), and nonverbal intelligence. Two years later, participants completed standardized tests of reading decoding and comprehension. Using regression analyses, the unique contribution of suprasegmental perception to reading skills was determined after controlling for demographic characteristics and segmental perception performance. RESULTS Standardized reading scores of the CI recipients increased with nonverbal intelligence for both decoding and comprehension. Female gender was associated with higher comprehension scores. After controlling for gender and nonverbal intelligence, segmental perception accounted for approximately 4% and 2% of the variance in decoding and comprehension, respectively. After controlling for nonverbal intelligence, gender, and segmental perception, suprasegmental perception accounted for an extra 4% and 7% of unique variance in reading decoding and reading comprehension, respectively. CONCLUSIONS Suprasegmental perception operates independently from segmental perception to facilitate good reading outcomes for these children with CIs. Clinicians and educators should be mindful that early perceptual skills may have long-term benefits for literacy. Research on how to optimize suprasegmental perception, perhaps through hearing-device programming and/or training strategies, is needed.
Affiliation(s)
- Heather Grantham
- Central Institute for the Deaf, St. Louis, MO
- Washington University School of Medicine in St. Louis, MO
15
Fleming JT, Winn MB. Strategic perceptual weighting of acoustic cues for word stress in listeners with cochlear implants, acoustic hearing, or simulated bimodal hearing. J Acoust Soc Am 2022;152:1300. [PMID: 36182279] [PMCID: PMC9439712] [DOI: 10.1121/10.0013890]
Abstract
Perception of word stress is an important aspect of recognizing speech, guiding the listener toward candidate words based on the perceived stress pattern. Cochlear implant (CI) signal processing is likely to disrupt some of the available cues for word stress, particularly vowel quality and pitch contour changes. In this study, we used a cue weighting paradigm to investigate differences in stress cue weighting patterns between participants listening with CIs and those with normal hearing (NH). We found that participants with CIs gave less weight to frequency-based pitch and vowel quality cues than NH listeners but compensated by upweighting vowel duration and intensity cues. Nonetheless, CI listeners' stress judgments were also significantly influenced by vowel quality and pitch, and they modulated their usage of these cues depending on the specific word pair in a manner similar to NH participants. In a series of separate online experiments with NH listeners, we simulated aspects of bimodal hearing by combining low-pass filtered speech with a vocoded signal. In these conditions, participants upweighted pitch and vowel quality cues relative to a fully vocoded control condition, suggesting that bimodal listening holds promise for restoring the stress cue weighting patterns exhibited by listeners with NH.
Affiliation(s)
- Justin T Fleming
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, Minnesota 55455, USA
- Matthew B Winn
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, Minnesota 55455, USA
16
Schweinberger SR, von Eiff CI. Enhancing socio-emotional communication and quality of life in young cochlear implant recipients: Perspectives from parameter-specific morphing and caricaturing. Front Neurosci 2022;16:956917. [PMID: 36090287] [PMCID: PMC9453832] [DOI: 10.3389/fnins.2022.956917]
Abstract
The use of digitally modified stimuli with enhanced diagnostic information to improve verbal communication in children with sensory or central handicaps was pioneered by Tallal and colleagues in 1996, who targeted speech comprehension in language-learning impaired children. Today, researchers are aware that successful communication cannot be reduced to linguistic information: it depends strongly on the quality of communication, including non-verbal socio-emotional communication. In children with cochlear implants (CIs), quality of life (QoL) is affected, but this can be related to the ability to recognize emotions in a voice rather than to speech comprehension alone. In this manuscript, we describe a family of new methods, termed parameter-specific facial and vocal morphing. We propose that these provide novel perspectives for assessing sensory determinants of human communication, but also for enhancing socio-emotional communication and QoL in the context of sensory handicaps, via training with digitally enhanced, caricatured stimuli. Based on promising initial results with various target groups, including people with age-related macular degeneration, people with low abilities to recognize faces, older people, and adult CI users, we discuss opportunities and challenges for perceptual training interventions for young CI users based on enhanced auditory stimuli, as well as perspectives for CI sound processing technology.
Affiliation(s)
- Stefan R. Schweinberger
- Voice Research Unit, Friedrich Schiller University Jena, Jena, Germany
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, Jena, Germany
- Deutsche Forschungsgemeinschaft (DFG) Research Unit Person Perception, Friedrich Schiller University Jena, Jena, Germany
- Celina I. von Eiff
- Voice Research Unit, Friedrich Schiller University Jena, Jena, Germany
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, Jena, Germany
17
Chen Y, Luo Q, Liang M, Gao L, Yang J, Feng R, Liu J, Qiu G, Li Y, Zheng Y, Lu S. Children's Neural Sensitivity to Prosodic Features of Natural Speech and Its Significance to Speech Development in Cochlear Implanted Children. Front Neurosci 2022;16:892894. [PMID: 35903806] [PMCID: PMC9315047] [DOI: 10.3389/fnins.2022.892894]
Abstract
Catchy utterances, such as proverbs, verses, and nursery rhymes (e.g., "No pain, no gain" in English), contain strong-prosodic (SP) features and are child-friendly to repeat and memorize; yet how those prosodic features are encoded by neural activity, and how they influence speech development in children, remain largely unknown. Using functional near-infrared spectroscopy (fNIRS), this study investigated the cortical responses to the perception of natural speech sentences with strong/weak-prosodic (SP/WP) features and evaluated speech communication ability in 21 pre-lingually deaf children with cochlear implantation (CI) and 25 normal-hearing (NH) children. A comprehensive evaluation of speech communication ability was conducted on all participants to explore potential correlations between neural activities and children's speech development. SP information evoked right-lateralized cortical responses across a broad brain network in NH children and facilitated the early integration of linguistic information, highlighting children's neural sensitivity to natural SP sentences. In contrast, children with CI showed significantly weaker cortical activation and characteristic deficits in speech perception with SP features, suggesting that hearing loss early in life significantly impairs sensitivity to the prosodic features of sentences. Importantly, the level of neural sensitivity to SP sentences was significantly related to the speech behaviors of all child participants. These findings demonstrate the significance of speech prosodic features in children's speech development.
Affiliation(s)
- Yuebo Chen
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Qinqin Luo
- Department of Chinese Language and Literature, The Chinese University of Hong Kong, Hong Kong SAR, China
- School of Foreign Languages, Shenzhen University, Shenzhen, China
- Maojin Liang
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Leyan Gao
- Neurolinguistics Teaching Laboratory, Department of Chinese Language and Literature, Sun Yat-sen University, Guangzhou, China
- Jingwen Yang
- Department of Neurology, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Department of Clinical Neurolinguistics Research, Mental and Neurological Diseases Research Center, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Ruiyan Feng
- Neurolinguistics Teaching Laboratory, Department of Chinese Language and Literature, Sun Yat-sen University, Guangzhou, China
- Jiahao Liu
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Hearing and Speech Science Department, Guangzhou Xinhua University, Guangzhou, China
- Guoxin Qiu
- Department of Clinical Neurolinguistics Research, Mental and Neurological Diseases Research Center, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
- Yi Li
- School of Foreign Languages, Shenzhen University, Shenzhen, China
- Yiqing Zheng
- Department of Otolaryngology, Sun Yat-sen Memorial Hospital, Sun Yat-sen University, Guangzhou, China
- Hearing and Speech Science Department, Guangzhou Xinhua University, Guangzhou, China
- Shuo Lu
- School of Foreign Languages, Shenzhen University, Shenzhen, China
- Department of Clinical Neurolinguistics Research, Mental and Neurological Diseases Research Center, The Third Affiliated Hospital of Sun Yat-sen University, Guangzhou, China
18
Parameter-Specific Morphing Reveals Contributions of Timbre to the Perception of Vocal Emotions in Cochlear Implant Users. Ear Hear 2022;43:1178-1188. [PMID: 34999594] [PMCID: PMC9197138] [DOI: 10.1097/aud.0000000000001181]
Abstract
Objectives: Research on cochlear implants (CIs) has focused on speech comprehension, with little research on perception of vocal emotions. We compared emotion perception in CI users and normal-hearing (NH) individuals, using parameter-specific voice morphing. Design: Twenty-five CI users and 25 NH individuals (matched for age and gender) performed fearful-angry discriminations on bisyllabic pseudoword stimuli from morph continua across all acoustic parameters (Full), or across selected parameters (F0, Timbre, or Time information), with other parameters set to a noninformative intermediate level. Results: Unsurprisingly, CI users as a group showed lower performance in vocal emotion perception overall. Importantly, while NH individuals used timbre and fundamental frequency (F0) information to equivalent degrees, CI users were far more efficient in using timbre (compared to F0) information for this task. Thus, under the conditions of this task, CIs were inefficient in conveying emotion based on F0 alone. There was enormous variability between CI users, with low performers responding close to guessing level. Echoing previous research, we found that better vocal emotion perception was associated with better quality of life ratings. Conclusions: Some CI users can utilize timbre cues remarkably well when perceiving vocal emotions.
19
Inguscio BMS, Mancini P, Greco A, Nicastri M, Giallini I, Leone CA, Grassia R, Di Nardo W, Di Cesare T, Rossi F, Canale A, Albera A, Giorgi A, Malerba P, Babiloni F, Cartocci G. ‘Musical effort’ and ‘musical pleasantness’: a pilot study on the neurophysiological correlates of classical music listening in adults normal hearing and unilateral cochlear implant users. Hearing, Balance and Communication 2022. [DOI: 10.1080/21695717.2022.2079325]
Affiliation(s)
- Patrizia Mancini: Department of Sense Organs, Sapienza University of Rome, Rome, Italy
- Antonio Greco: Department of Sense Organs, Sapienza University of Rome, Rome, Italy
- Maria Nicastri: Department of Sense Organs, Sapienza University of Rome, Rome, Italy
- Ilaria Giallini: Department of Sense Organs, Sapienza University of Rome, Rome, Italy
- Carlo Antonio Leone: Department of Otolaryngology/Head and Neck Surgery, Monaldi Hospital, Naples, Italy
- Rosa Grassia: Department of Otolaryngology/Head and Neck Surgery, Monaldi Hospital, Naples, Italy
- Walter Di Nardo: Otorhinolaryngology and Physiology, Catholic University of Rome, Rome, Italy
- Tiziana Di Cesare: Otorhinolaryngology and Physiology, Catholic University of Rome, Rome, Italy
- Federica Rossi: Otorhinolaryngology and Physiology, Catholic University of Rome, Rome, Italy
- Andrea Canale: Division of Otorhinolaryngology, Department of Surgical Sciences, University of Turin, Italy
- Andrea Albera: Division of Otorhinolaryngology, Department of Surgical Sciences, University of Turin, Italy
- Fabio Babiloni: BrainSigns Srl, Rome, Italy; Department of Computer Science, Hangzhou Dianzi University, Hangzhou, China; Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
- Giulia Cartocci: BrainSigns Srl, Rome, Italy; Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy
20
Kao C, Sera MD, Zhang Y. Emotional Speech Processing in 3- to 12-Month-Old Infants: Influences of Emotion Categories and Acoustic Parameters. J Speech Lang Hear Res 2022; 65:487-500. [PMID: 35015972; DOI: 10.1044/2021_jslhr-21-00234]
Abstract
PURPOSE The aim of this study was to investigate infants' listening preference for emotional prosodies in spoken words and identify their acoustic correlates. METHOD Forty-six 3- to 12-month-old infants (M age = 7.6 months) completed a central fixation (or look-to-listen) paradigm in which four emotional prosodies (happy, sad, angry, and neutral) were presented. Infants' looking time to the string of words was recorded as a proxy of their listening attention. Five acoustic variables (mean fundamental frequency [F0], word duration, intensity variation, harmonics-to-noise ratio [HNR], and spectral centroid) were also analyzed to account for infants' attentiveness to each emotion. RESULTS Infants generally preferred affective over neutral prosody, with more listening attention to the happy and sad voices. Happy sounds with breathy voice quality (low HNR) and less brightness (low spectral centroid) maintained infants' attention more. Sad speech with shorter word duration (i.e., faster speech rate), less breathiness, and more brightness gained infants' attention more than happy speech did. Infants listened less to angry than to happy and sad prosodies, and none of the acoustic variables were associated with infants' listening interests in angry voices. Neutral words with a lower F0 attracted infants' attention more than those with a higher F0. Neither age nor sex effects were observed. CONCLUSIONS This study provides evidence for infants' sensitivity to the prosodic patterns of the basic emotion categories in spoken words and shows how the acoustic properties of emotional speech may guide their attention. The results point to the need to study the interplay between early socioaffective and language development.
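For readers who want to extract comparable acoustic measures from their own recordings, the sketch below computes four of the five variables named above with the librosa library; it is a minimal illustration under assumed settings, not the authors' analysis pipeline, and the input file name is hypothetical. HNR is conventionally obtained from Praat (e.g., via the parselmouth package) and is only noted in a comment here.

```python
# Minimal sketch of the acoustic measures described above (not the authors' code).
# Requires: pip install librosa numpy; "word.wav" is a hypothetical input file.
import librosa
import numpy as np

y, sr = librosa.load("word.wav", sr=None)

# Word duration in seconds.
duration = librosa.get_duration(y=y, sr=sr)

# Mean fundamental frequency (F0) via the probabilistic YIN tracker.
f0, voiced_flag, voiced_prob = librosa.pyin(
    y, fmin=librosa.note_to_hz("C2"), fmax=librosa.note_to_hz("C7"), sr=sr
)
mean_f0 = np.nanmean(f0)  # unvoiced frames are NaN and are ignored

# Intensity variation: standard deviation of frame-level RMS energy in dB.
rms = librosa.feature.rms(y=y)[0]
intensity_sd = np.std(librosa.amplitude_to_db(rms))

# Spectral centroid ("brightness"), averaged over frames.
centroid = np.mean(librosa.feature.spectral_centroid(y=y, sr=sr))

# HNR is usually taken from Praat (e.g., parselmouth's Sound.to_harmonicity())
# and is omitted from this librosa-only sketch.
print(f"dur={duration:.2f}s  F0={mean_f0:.1f}Hz  "
      f"int_sd={intensity_sd:.2f}dB  centroid={centroid:.0f}Hz")
```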
Affiliation(s)
- Chieh Kao: Department of Speech-Language-Hearing Sciences, University of Minnesota, Twin Cities, Minneapolis
- Maria D Sera: Institute of Child Development, University of Minnesota, Twin Cities, Minneapolis
- Yang Zhang: Department of Speech-Language-Hearing Sciences, University of Minnesota, Twin Cities, Minneapolis; Center for Neurobehavioral Development, University of Minnesota, Twin Cities, Minneapolis
21
Ratnanather JT, Wang LC, Bae SH, O'Neill ER, Sagi E, Tward DJ. Visualization of Speech Perception Analysis via Phoneme Alignment: A Pilot Study. Front Neurol 2022; 12:724800. [PMID: 35087462; PMCID: PMC8787339; DOI: 10.3389/fneur.2021.724800]
Abstract
Objective: Speech tests assess the ability of people with hearing loss to comprehend speech with a hearing aid or cochlear implant. The tests are usually at the word or sentence level. However, few tests analyze errors at the phoneme level. There is thus a need for an automated program to visualize in real time the accuracy of phonemes in these tests. Method: The program reads in stimulus-response pairs and obtains their phonemic representations from an open-source digital pronouncing dictionary. The stimulus phonemes are aligned with the response phonemes via a modification of the Levenshtein Minimum Edit Distance algorithm. Alignment is achieved via dynamic programming with modified costs based on phonological features for insertions, deletions, and substitutions. The accuracy for each phoneme is based on the F1-score. Accuracy is visualized with respect to place and manner (consonants) or height (vowels). Confusion matrices for the phonemes are used in an information transfer analysis of ten phonological features. A histogram of the information transfer for the features over a frequency-like range is presented as a phonemegram. Results: The program was applied to two datasets. One consisted of test data at the sentence and word levels. Stimulus-response sentence pairs from six volunteers with different degrees of hearing loss and modes of amplification were analyzed. Four volunteers listened to sentences from a mobile auditory training app while two listened to sentences from a clinical speech test. Stimulus-response word pairs from three lists were also analyzed. The other dataset consisted of published stimulus-response pairs from experiments with 31 participants with cochlear implants listening to 400 Basic English Lexicon sentences via different talkers at four different SNR levels. In all cases, visualization was obtained in real time. Analysis of 12,400 actual and random pairs showed that the program was robust to the nature of the pairs. Conclusion: It is possible to automate the alignment of phonemes extracted from stimulus-response pairs from speech tests in real time. The alignment then makes it possible to visualize the accuracy of responses via phonological features in two ways. Such visualization of phoneme alignment and accuracy could aid clinicians and scientists.
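The alignment step described in the Method is a weighted minimum-edit-distance computation solved by dynamic programming. The sketch below illustrates the core recurrence with a toy uniform substitution cost; the actual program weights insertions, deletions, and substitutions by phonological features, which this example does not attempt to reproduce.

```python
# Toy weighted Levenshtein alignment of stimulus vs. response phonemes
# (illustrative only; the study's costs are based on phonological features).
def align(stim, resp, sub_cost=lambda a, b: 0.0 if a == b else 1.0,
          ins_cost=1.0, del_cost=1.0):
    m, n = len(stim), len(resp)
    # dp[i][j] = minimal cost to align stim[:i] with resp[:j]
    dp = [[0.0] * (n + 1) for _ in range(m + 1)]
    for i in range(1, m + 1):
        dp[i][0] = dp[i - 1][0] + del_cost
    for j in range(1, n + 1):
        dp[0][j] = dp[0][j - 1] + ins_cost
    for i in range(1, m + 1):
        for j in range(1, n + 1):
            dp[i][j] = min(
                dp[i - 1][j - 1] + sub_cost(stim[i - 1], resp[j - 1]),  # sub/match
                dp[i - 1][j] + del_cost,   # phoneme deleted in the response
                dp[i][j - 1] + ins_cost,   # phoneme inserted in the response
            )
    return dp[m][n]

# Example: "cat" /K AE T/ heard as "cap" /K AE P/ -> one substitution, cost 1.0.
print(align(["K", "AE", "T"], ["K", "AE", "P"]))
```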
Affiliation(s)
- J Tilak Ratnanather: Center for Imaging Science and Institute for Computational Medicine, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Lydia C Wang: Center for Imaging Science and Institute for Computational Medicine, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Seung-Ho Bae: Center for Imaging Science and Institute for Computational Medicine, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States
- Erin R O'Neill: Center for Applied and Translational Sensory Sciences, University of Minnesota, Minneapolis, MN, United States
- Elad Sagi: Department of Otolaryngology, New York University School of Medicine, New York, NY, United States
- Daniel J Tward: Center for Imaging Science and Institute for Computational Medicine, Department of Biomedical Engineering, Johns Hopkins University, Baltimore, MD, United States; Departments of Computational Medicine and Neurology, University of California, Los Angeles, Los Angeles, CA, United States
22
Tawdrous MM, D'Onofrio KL, Gifford R, Picou EM. Emotional Responses to Non-Speech Sounds for Hearing-aid and Bimodal Cochlear-Implant Listeners. Trends Hear 2022; 26:23312165221083091. [PMID: 35435773; PMCID: PMC9019384; DOI: 10.1177/23312165221083091]
Abstract
The purpose of this project was to evaluate differences between groups and device configurations for emotional responses to non-speech sounds. Three groups of adults participated: 1) listeners with normal hearing with no history of device use, 2) hearing aid candidates with or without hearing aid experience, and 3) bimodal cochlear-implant listeners with at least 6 months of implant use. Participants (n = 18 in each group) rated valence and arousal of pleasant, neutral, and unpleasant non-speech sounds. Listeners with normal hearing rated sounds without hearing devices. Hearing aid candidates rated sounds while using one or two hearing aids. Bimodal cochlear-implant listeners rated sounds while using a hearing aid alone, a cochlear implant alone, or the hearing aid and cochlear implant simultaneously. Analysis revealed significant differences between groups in ratings of pleasant and unpleasant stimuli; ratings from hearing aid candidates and bimodal cochlear-implant listeners were less extreme (less pleasant and less unpleasant) than were ratings from listeners with normal hearing. Hearing aid candidates' ratings were similar with one and two hearing aids. Bimodal cochlear-implant listeners' ratings of valence were higher (more pleasant) in the configuration without a hearing aid (implant only) than in the two configurations with a hearing aid (alone or with an implant). These data support the need for further investigation into hearing device optimization to improve emotional responses to non-speech sounds for adults with hearing loss.
Affiliation(s)
- Marina M. Tawdrous: School of Communication Sciences and Disorders, Western University, 1151 Richmond St, London, ON, N6A 3K7, Canada
- Kristen L. D'Onofrio: Department of Hearing and Speech Sciences, Graduate School, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN, 37232; Department of Hearing and Speech Sciences, School of Medicine, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- René Gifford: Department of Hearing and Speech Sciences, Graduate School, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN, 37232; Department of Hearing and Speech Sciences, School of Medicine, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- Erin M. Picou: Department of Hearing and Speech Sciences, Graduate School, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN, 37232; Department of Hearing and Speech Sciences, School of Medicine, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
23
Lin Y, Ding H, Zhang Y. Unisensory and Multisensory Stroop Effects Modulate Gender Differences in Verbal and Nonverbal Emotion Perception. J Speech Lang Hear Res 2021; 64:4439-4457. [PMID: 34469179; DOI: 10.1044/2021_jslhr-20-00338]
Abstract
Purpose This study aimed to examine the Stroop effects of verbal and nonverbal cues and their relative impacts on gender differences in unisensory and multisensory emotion perception. Method Experiment 1 investigated how well 88 normal Chinese adults (43 women and 45 men) could identify emotions conveyed through face, prosody and semantics as three independent channels. Experiments 2 and 3 further explored gender differences during multisensory integration of emotion through a cross-channel (prosody-semantics) and a cross-modal (face-prosody-semantics) Stroop task, respectively, in which 78 participants (41 women and 37 men) were asked to selectively attend to one of the two or three communication channels. Results The integration of accuracy and reaction time data indicated that paralinguistic cues (i.e., face and prosody) of emotions were consistently more salient than linguistic ones (i.e., semantics) throughout the study. Additionally, women demonstrated advantages in processing all three types of emotional signals in the unisensory task, but only preserved their strengths in paralinguistic processing and showed greater Stroop effects of nonverbal cues on verbal ones during multisensory perception. Conclusions These findings demonstrate clear gender differences in verbal and nonverbal emotion perception that are modulated by sensory channels, which have important theoretical and practical implications. Supplemental Material https://doi.org/10.23641/asha.16435599.
Affiliation(s)
- Yi Lin: Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Hongwei Ding: Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Yang Zhang: Department of Speech-Language-Hearing Sciences & Center for Neurobehavioral Development, University of Minnesota, Minneapolis
24
Tamati TN, Moberly AC. Talker Adaptation and Lexical Difficulty Impact Word Recognition in Adults with Cochlear Implants. Audiol Neurootol 2021; 27:260-270. [PMID: 34535583; DOI: 10.1159/000518643]
Abstract
INTRODUCTION Talker-specific adaptation facilitates speech recognition in normal-hearing listeners. This study examined talker adaptation in adult cochlear implant (CI) users. Three hypotheses were tested: (1) high-performing adult CI users show improved word recognition following exposure to a talker ("talker adaptation"), particularly for lexically hard words, (2) individual performance is determined by auditory sensitivity and neurocognitive skills, and (3) individual performance relates to real-world functioning. METHODS Fifteen high-performing, post-lingually deaf adult CI users completed a word recognition task consisting of 6 single-talker blocks (3 female/3 male native English speakers); words were lexically "easy" and "hard." Recognition accuracy was assessed "early" and "late" (first vs. last 10 trials); adaptation was assessed as the difference between late and early accuracy. Participants also completed measures of spectral-temporal processing and neurocognitive skills, as well as real-world measures of multiple-talker sentence recognition and quality of life (QoL). RESULTS CI users showed limited talker adaptation overall, but performance improved for lexically hard words. Stronger spectral-temporal processing and neurocognitive skills were weakly to moderately associated with more accurate word recognition and greater talker adaptation for hard words. Finally, word recognition accuracy for hard words was moderately related to multiple-talker sentence recognition and QoL. CONCLUSION Findings demonstrate a limited talker adaptation benefit for recognition of hard words in adult CI users. Both auditory sensitivity and neurocognitive skills contribute to performance, suggesting additional benefit from adaptation for individuals with stronger skills. Finally, processing differences related to talker adaptation and lexical difficulty may be relevant to real-world functioning.
Affiliation(s)
- Terrin N Tamati: Department of Otolaryngology, Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA; Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Aaron C Moberly: Department of Otolaryngology, Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
25
Panzeri F, Cavicchiolo S, Giustolisi B, Di Berardino F, Ajmone PF, Vizziello P, Donnini V, Zanetti D. Irony Comprehension in Children With Cochlear Implants: The Role of Language Competence, Theory of Mind, and Prosody Recognition. J Speech Lang Hear Res 2021; 64:3212-3229. [PMID: 34284611; DOI: 10.1044/2021_jslhr-20-00671]
Abstract
Purpose Aims of this research were (a) to investigate higher order linguistic and cognitive skills of Italian children with cochlear implants (CIs); (b) to correlate them with the comprehension of irony, which has never been systematically studied in this population; and (c) to identify the factors that facilitate the development of this competence. Method We tested 28 Italian children with CI (mean chronological age = 101 [SD = 25.60] months, age range: 60-144 months), and two control groups of normal-hearing (NH) peers matched for chronological age and for hearing age, on a series of tests assessing their cognitive abilities (nonverbal intelligence and theory of mind), linguistic skills (morphosyntax and prosody recognition), and irony comprehension. Results Despite having grammatical abilities in line with the group of NH children matched for hearing age, children with CI lagged behind both groups of NH peers on the recognition of emotions through prosody and on the comprehension of ironic stories, even though these two abilities were not related. Conclusions This is the first study to target irony comprehension in children with CI, and we found that this competence, which is crucial for maintaining good social relationships with peers, is impaired in this population. In line with other studies, we found a correlation between this ability and advanced theory of mind skills, but at the same time, a deeper investigation is needed to account for the high variability of performance in children with CI.
Affiliation(s)
- Sara Cavicchiolo: Audiology Unit, Department of Specialist Surgical Sciences, Fondazione IRCCS Ca' Granda, Ospedale Maggiore Policlinico, Milan, Italy; Department of Clinical Sciences and Community Health, University of Milan, Italy
- Federica Di Berardino: Audiology Unit, Department of Specialist Surgical Sciences, Fondazione IRCCS Ca' Granda, Ospedale Maggiore Policlinico, Milan, Italy; Department of Clinical Sciences and Community Health, University of Milan, Italy
- Paola Francesca Ajmone: Child and Adolescent Neuropsychiatric Service (UONPIA), Fondazione IRCCS Ca' Granda, Ospedale Maggiore Policlinico, Milan, Italy
- Paola Vizziello: Child and Adolescent Neuropsychiatric Service (UONPIA), Fondazione IRCCS Ca' Granda, Ospedale Maggiore Policlinico, Milan, Italy
- Veronica Donnini: Child and Adolescent Neuropsychiatric Service (UONPIA), Fondazione IRCCS Ca' Granda, Ospedale Maggiore Policlinico, Milan, Italy
- Diego Zanetti: Audiology Unit, Department of Specialist Surgical Sciences, Fondazione IRCCS Ca' Granda, Ospedale Maggiore Policlinico, Milan, Italy; Department of Clinical Sciences and Community Health, University of Milan, Italy
26
Wang Y, Liu L, Zhang Y, Wei C, Xin T, He Q, Hou X, Liu Y. The Neural Processing of Vocal Emotion After Hearing Reconstruction in Prelingual Deaf Children: A Functional Near-Infrared Spectroscopy Brain Imaging Study. Front Neurosci 2021; 15:705741. [PMID: 34393716; PMCID: PMC8355545; DOI: 10.3389/fnins.2021.705741]
Abstract
As prior research has elucidated, children with hearing loss have impaired vocal emotion recognition compared with their normal-hearing peers. Cochlear implants (CIs) have achieved significant success in facilitating hearing and speech abilities for people with severe-to-profound sensorineural hearing loss. However, due to current limitations in neuroimaging tools, existing research has been unable to detail the neural processing of the perception and recognition of vocal emotions during early-stage CI use in infant and toddler CI users (ITCIs). In the present study, functional near-infrared spectroscopy (fNIRS) imaging was employed during preoperative and postoperative tests to describe the early neural processing of perception in prelingually deaf ITCIs and their recognition of four vocal emotions (fear, anger, happiness, and neutral). The results revealed that the cortical responses elicited by vocal emotional stimulation in the left pre-motor and supplementary motor area (pre-SMA), right middle temporal gyrus (MTG), and right superior temporal gyrus (STG) differed significantly between preoperative and postoperative tests. These findings indicate differences between the preoperative and postoperative neural processing associated with vocal emotional stimulation. Further results revealed that the recognition of vocal emotional stimuli appeared in the right supramarginal gyrus (SMG) after CI implantation, and the response elicited by fear was significantly greater than the response elicited by anger, indicating a negative bias. These findings indicate that the development of emotional bias and the development of emotional perception and recognition capabilities in ITCIs occur on a different timeline and involve different neural processing from those in normal-hearing peers. To assess speech perception and production abilities, the Infant-Toddler Meaningful Auditory Integration Scale (IT-MAIS) and Speech Intelligibility Rating (SIR) were used; the results revealed no significant differences between preoperative and postoperative tests. Finally, the correlates of the neurobehavioral results were investigated: the preoperative response of the right SMG to anger stimuli was significantly and positively correlated with the evaluation of postoperative behavioral outcomes, whereas the postoperative response of the right SMG to anger stimuli was significantly and negatively correlated with that evaluation.
Affiliation(s)
- Yuyang Wang: Department of Otolaryngology, Head and Neck Surgery, Peking University First Hospital, Beijing, China
- Lili Liu: Department of Pediatrics, Peking University First Hospital, Beijing, China
- Ying Zhang: Department of Otolaryngology, Head and Neck Surgery, The Second Hospital of Hebei Medical University, Shijiazhuang, China
- Chaogang Wei: Department of Otolaryngology, Head and Neck Surgery, Peking University First Hospital, Beijing, China
- Tianyu Xin: Department of Otolaryngology, Head and Neck Surgery, Peking University First Hospital, Beijing, China
- Qiang He: Department of Otolaryngology, Head and Neck Surgery, The Second Hospital of Hebei Medical University, Shijiazhuang, China
- Xinlin Hou: Department of Pediatrics, Peking University First Hospital, Beijing, China
- Yuhe Liu: Department of Otolaryngology, Head and Neck Surgery, Peking University First Hospital, Beijing, China
27
Abstract
OBJECTIVES Individuals with cochlear implants (CIs) show reduced word and auditory emotion recognition abilities relative to their peers with normal hearing. Modern CI processing strategies are designed to preserve acoustic cues requisite for word recognition rather than those cues required for accessing other signal information (e.g., talker gender or emotional state). While word recognition is undoubtedly important for communication, the inaccessibility of this additional signal information in speech may lead to negative social experiences and outcomes for individuals with hearing loss. This study aimed to evaluate whether the emphasis on word recognition preservation in CI processing has unintended consequences on the perception of other talker information, such as emotional state. DESIGN Twenty-four young adult listeners with normal hearing listened to sentences and either reported a target word in each sentence (word recognition task) or selected the emotion of the talker (emotion recognition task) from a list of options (Angry, Calm, Happy, and Sad). Sentences were blocked by task type (emotion recognition versus word recognition) and processing condition (unprocessed versus 8-channel noise vocoder) and presented randomly within the block at three signal-to-noise ratios (SNRs) in a background of speech-shaped noise. Confusion matrices showed the number of errors in emotion recognition by listeners. RESULTS Listeners demonstrated better emotion recognition performance than word recognition performance at the same SNR. Unprocessed speech resulted in higher recognition rates than vocoded stimuli. Recognition performance (for both words and emotions) decreased with worsening SNR. Vocoding speech resulted in a greater negative impact on emotion recognition than it did for word recognition. CONCLUSIONS These data confirm prior work that suggests that in background noise, emotional prosodic information in speech is easier to recognize than word information, even after simulated CI processing. However, emotion recognition may be more negatively impacted by background noise and CI processing than word recognition. Future work could explore CI processing strategies that better encode prosodic information and investigate this effect in individuals with CIs as opposed to vocoded simulation. This study emphasized the need for clinicians to consider not only word recognition but also other aspects of speech that are critical to successful social communication.
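An 8-channel noise vocoder of the kind used in this study can be sketched in a few lines: band-pass filter the input into channels, extract each channel's temporal envelope, and use the envelopes to modulate band-limited noise carriers. The version below is a minimal illustration with assumed filter orders and logarithmically spaced corner frequencies, not the study's exact processing chain.

```python
# Minimal noise-vocoder sketch (assumed parameters, not the study's exact chain).
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(x, fs, n_channels=8, lo=100.0, hi=7000.0, env_cut=50.0):
    # Corner frequencies spaced logarithmically between lo and hi (an assumption).
    edges = np.geomspace(lo, hi, n_channels + 1)
    env_sos = butter(2, env_cut, btype="low", fs=fs, output="sos")
    out = np.zeros_like(x)
    for k in range(n_channels):
        band_sos = butter(4, [edges[k], edges[k + 1]], btype="band",
                          fs=fs, output="sos")
        band = sosfiltfilt(band_sos, x)            # analysis band
        env = np.abs(hilbert(band))                # Hilbert envelope
        env = sosfiltfilt(env_sos, env)            # smooth the envelope
        carrier = np.random.randn(len(x))
        carrier = sosfiltfilt(band_sos, carrier)   # band-limited noise carrier
        out += env * carrier
    return out / np.max(np.abs(out))               # normalize to +/-1

fs = 16000
t = np.arange(fs) / fs
demo = np.sin(2 * np.pi * 220 * t)                 # stand-in for a speech signal
vocoded = noise_vocode(demo, fs)
```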
28
Yue T, Chen Y, Zheng Q, Xu Z, Wang W, Ni G. Screening Tools and Assessment Methods of Cognitive Decline Associated With Age-Related Hearing Loss: A Review. Front Aging Neurosci 2021; 13:677090. [PMID: 34335227; PMCID: PMC8316923; DOI: 10.3389/fnagi.2021.677090]
Abstract
Strong links between hearing and cognitive function have been confirmed by a growing number of cross-sectional and longitudinal studies. Seniors with age-related hearing loss (ARHL) have a significantly higher incidence of cognitive impairment than those with normal hearing. The mechanism linking ARHL and cognitive decline has not been fully elucidated to date. However, auditory intervention for patients with ARHL may reduce the risk of cognitive decline, as early cognitive screening may improve related treatment strategies. Currently, clinical audiology examinations rarely include cognitive screening tests, partly due to the lack of objective quantitative indicators with high sensitivity and specificity. Questionnaires are currently widely used as a cognitive screening tool, but the subject's performance may be negatively affected by hearing loss. Numerous electroencephalogram (EEG) and magnetic resonance imaging (MRI) studies have analyzed changes in brain structure and function in patients with ARHL. These objective electrophysiological tools can be employed to reveal the mechanism of association between auditory and cognitive functions, and they may also yield biological markers that can be applied more extensively in assessing progression toward cognitive decline and in observing the effects of rehabilitation training for patients with ARHL. In this study, we reviewed the clinical manifestations, pathological changes, and causes of ARHL and discussed their effects on cognitive function. Specifically, we focused on current cognitive screening tools and assessment methods and analyzed their limitations and potential integration.
Affiliation(s)
- Tao Yue: Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, China; Tianjin International Engineering Institute, Tianjin University, Tianjin, China
- Yu Chen: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China; Department of Otorhinolaryngology Head and Neck Surgery, Tianjin First Central Hospital, Tianjin, China
- Qi Zheng: Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, China
- Zihao Xu: Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
- Wei Wang: Department of Otorhinolaryngology Head and Neck Surgery, Tianjin First Central Hospital, Tianjin, China
- Guangjian Ni: Department of Biomedical Engineering, College of Precision Instruments and Optoelectronics Engineering, Tianjin University, Tianjin, China; Academy of Medical Engineering and Translational Medicine, Tianjin University, Tianjin, China
29
Voice Emotion Recognition by Mandarin-Speaking Children with Cochlear Implants. Ear Hear 2021; 43:165-180. [PMID: 34288631; DOI: 10.1097/aud.0000000000001085]
Abstract
Objectives Emotional expressions are very important in social interactions. Children with cochlear implants can have voice emotion recognition deficits due to device limitations. Mandarin-speaking children with cochlear implants may face greater challenges than those speaking nontonal languages; pitch information is not well preserved in cochlear implants, and such children could benefit from child-directed speech, which carries more exaggerated, distinctive acoustic cues for different emotions. This study investigated voice emotion recognition, using both adult-directed and child-directed materials, in Mandarin-speaking children with cochlear implants compared with normal-hearing peers. The authors hypothesized that both the children with cochlear implants and those with normal hearing would perform better with child-directed materials than with adult-directed materials. Design Thirty children (7.17-17 years of age) with cochlear implants and 27 children with normal hearing (6.92-17.08 years of age) were recruited in this study. Participants completed a nonverbal reasoning test, speech recognition tests, and a voice emotion recognition task. Children with cochlear implants over the age of 10 years also completed the Chinese version of the Nijmegen Cochlear Implant Questionnaire to evaluate health-related quality of life. The voice emotion recognition task was a five-alternative, forced-choice paradigm containing sentences spoken with five emotions (happy, angry, sad, scared, and neutral) in a child-directed or adult-directed manner. Results Acoustic analyses showed substantial variations across emotions in all materials, mainly on measures of mean fundamental frequency and fundamental frequency range. Mandarin-speaking children with cochlear implants displayed significantly poorer performance than normal-hearing peers in voice emotion perception tasks, regardless of whether performance was measured in accuracy scores, Hu values, or reaction time. Children with cochlear implants and children with normal hearing were mainly affected by the mean fundamental frequency in speech emotion recognition tasks. Chronological age had a significant effect on speech emotion recognition in children with normal hearing; however, there was no significant correlation between chronological age and accuracy scores in speech emotion recognition in children with implants. Significant effects of specific emotion and test materials (better performance with child-directed materials) were observed in both groups of children. Among the children with cochlear implants, age at implantation, percentage scores on the nonverbal intelligence quotient test, and sentence recognition threshold in quiet could predict recognition performance in both accuracy scores and Hu values. Duration of cochlear implant use could predict reaction time in emotion perception tasks among children with cochlear implants. No correlation was observed between the accuracy score in voice emotion perception and the self-reported scores of health-related quality of life; however, the latter were significantly correlated with speech recognition skills among Mandarin-speaking children with cochlear implants. Conclusions Mandarin-speaking children with cochlear implants can have significant deficits in voice emotion recognition tasks compared with their normal-hearing peers and can benefit from the exaggerated prosody of child-directed speech. Age at cochlear implantation, speech and language development, and cognition could play an important role in voice emotion perception by Mandarin-speaking children with cochlear implants.
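The Hu values reported here refer to the unbiased hit rate (Wagner, 1993), which corrects raw accuracy for response bias by jointly considering how often an emotion is correctly identified and how often that response category is used. A minimal sketch from a confusion matrix, with made-up counts:

```python
# Unbiased hit rate (Hu; Wagner, 1993) from a confusion matrix,
# rows = presented emotion, columns = response (made-up counts).
import numpy as np

conf = np.array([[18,  1,  1],   # e.g., happy trials
                 [ 2, 15,  3],   # sad trials
                 [ 3,  4, 13]])  # angry trials

hits = np.diag(conf).astype(float)
# Hu = hits^2 / (stimuli of that category * responses of that category)
hu = hits**2 / (conf.sum(axis=1) * conf.sum(axis=0))
print(hu)  # one Hu value per emotion category, in [0, 1]
```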
30
Perception of Child-Directed Versus Adult-Directed Emotional Speech in Pediatric Cochlear Implant Users. Ear Hear 2021; 41:1372-1382. [PMID: 32149924; DOI: 10.1097/aud.0000000000000862]
Abstract
OBJECTIVES Cochlear implants (CIs) are remarkable in allowing individuals with severe to profound hearing loss to perceive speech. Despite these gains in speech understanding, however, CI users often struggle to perceive elements such as vocal emotion and prosody, as CIs are unable to transmit the spectro-temporal detail needed to decode affective cues. This issue becomes particularly important for children with CIs, but little is known about their emotional development. In a previous study, pediatric CI users showed deficits in voice emotion recognition with child-directed stimuli featuring exaggerated prosody. However, the large intersubject variability and differential developmental trajectory known in this population led us to question the extent to which exaggerated prosody would facilitate performance in this task. Thus, the authors revisited the question with both adult-directed and child-directed stimuli. DESIGN Vocal emotion recognition was measured using both child-directed (CDS) and adult-directed (ADS) speech conditions. Pediatric CI users, aged 7-19 years, with no cognitive or visual impairments, who communicated orally with English as the primary language, participated in the experiment (n = 27). Stimuli comprised 12 sentences selected from the HINT database. The sentences were spoken by male and female talkers in a CDS or ADS manner, in each of the five target emotions (happy, sad, neutral, scared, and angry). The chosen sentences were semantically emotion-neutral. Percent correct emotion recognition scores were analyzed for each participant in each condition (CDS vs. ADS). Children also completed cognitive tests of nonverbal IQ and receptive vocabulary, while parents completed questionnaires of CI and hearing history. It was predicted that the reduced prosodic variations found in the ADS condition would result in lower vocal emotion recognition scores compared with the CDS condition. Moreover, it was hypothesized that cognitive factors, perceptual sensitivity to complex pitch changes, and elements of each child's hearing history may serve as predictors of performance on vocal emotion recognition. RESULTS Consistent with our hypothesis, pediatric CI users scored higher on CDS than on ADS speech stimuli, suggesting that speaking with an exaggerated prosody, akin to "motherese," may be a viable way to convey emotional content. Significant talker effects were also observed, in that higher scores were found for the female talker in both conditions. Multiple regression analysis showed that nonverbal IQ was a significant predictor of CDS emotion recognition scores, while years of CI use was a significant predictor of ADS scores. Confusion matrix analyses revealed a dependence of results on specific emotions; for the CDS condition's female talker, participants had high sensitivity (d' scores) to happy sentences and low sensitivity to neutral sentences, while for the ADS condition, low sensitivity was found for the scared sentences. CONCLUSIONS In general, participants showed better vocal emotion recognition in the CDS condition, which had more variability in pitch and intensity, and thus more exaggerated prosody, than the ADS condition. Results suggest that pediatric CI users struggle with vocal emotion perception in general, particularly with adult-directed speech. The authors believe these results have broad implications for understanding how CI users perceive emotions, both from an auditory communication standpoint and from a socio-developmental perspective.
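The d' scores mentioned above are the standard signal-detection sensitivity index, the difference between the z-transformed hit and false-alarm rates. A minimal sketch (the exact correction applied to rates of 0 or 1 varies across studies):

```python
# d' (sensitivity) from hit and false-alarm rates, per signal detection theory.
from scipy.stats import norm

def d_prime(hit_rate, fa_rate):
    # Rates of exactly 0 or 1 are typically nudged before the z-transform
    # (a common convention; the specific correction varies across studies).
    return norm.ppf(hit_rate) - norm.ppf(fa_rate)

print(d_prime(0.85, 0.10))  # ~2.32
```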
31
Cartocci G, Giorgi A, Inguscio BMS, Scorpecci A, Giannantonio S, De Lucia A, Garofalo S, Grassia R, Leone CA, Longo P, Freni F, Malerba P, Babiloni F. Higher Right Hemisphere Gamma Band Lateralization and Suggestion of a Sensitive Period for Vocal Auditory Emotional Stimuli Recognition in Unilateral Cochlear Implant Children: An EEG Study. Front Neurosci 2021; 15:608156. [PMID: 33767607; PMCID: PMC7985439; DOI: 10.3389/fnins.2021.608156]
Abstract
In deaf children, huge emphasis has been given to language; however, the decoding and production of emotional cues are of pivotal importance for communication capabilities. Concerning the neurophysiological correlates of emotional processing, gamma band activity appears to be a useful tool for emotion classification and is related to the conscious elaboration of emotions. Starting from these considerations, the following items were investigated: (i) whether emotional auditory stimuli processing differs between normal-hearing (NH) children and children using a cochlear implant (CI), given the non-physiological development of the auditory system in the latter group; (ii) whether the age at CI surgery influences emotion recognition capabilities; and (iii) in light of the right hemisphere hypothesis for emotional processing, whether the CI side influences the processing of emotional cues in unilateral CI (UCI) children. To answer these questions, 9 UCI (9.47 ± 2.33 years old) and 10 NH (10.95 ± 2.11 years old) children were asked to recognize nonverbal vocalizations belonging to three emotional states: positive (achievement, amusement, contentment, relief), negative (anger, disgust, fear, sadness), and neutral (neutral, surprise). Results showed better performance in NH than in UCI children in recognizing emotional states. The UCI group showed a higher gamma activity lateralization index (LI) (relatively higher right-hemisphere activity) than the NH group in response to emotional auditory cues. Moreover, LI gamma values were negatively correlated with the percentage of correct responses in emotion recognition. Such observations could be explained by a deficit in UCI children in engaging the left hemisphere for more demanding emotional tasks, or alternatively by a higher conscious elaboration in UCI than in NH children. Additionally, for the UCI group, there was no difference in gamma activity between the CI side and the contralateral side, but gamma activity was higher in the right than in the left hemisphere. Therefore, the CI side did not appear to influence the physiological hemispheric lateralization of emotional processing. Finally, a negative correlation was shown between age at CI surgery and the percentage of correct responses in emotion recognition, suggesting the occurrence of a sensitive period for CI surgery for the best development of emotion recognition skills.
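A lateralization index of this kind is commonly computed as the normalized right-left difference in band power, so that positive values indicate relatively higher right-hemisphere activity; the study's exact formula may differ from this generic sketch.

```python
# Generic lateralization index: positive values = relatively higher
# right-hemisphere activity (the paper's exact definition may differ).
def lateralization_index(right_power, left_power):
    return (right_power - left_power) / (right_power + left_power)

print(lateralization_index(right_power=4.2, left_power=3.1))  # ~0.15
```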
Affiliation(s)
- Giulia Cartocci: Laboratory of Industrial Neuroscience, Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy; BrainSigns Srl, Rome, Italy
- Andrea Giorgi: Laboratory of Industrial Neuroscience, Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy; BrainSigns Srl, Rome, Italy
- Bianca M S Inguscio: BrainSigns Srl, Rome, Italy; Cochlear Implant Unit, Department of Sensory Organs, Sapienza University of Rome, Rome, Italy
- Alessandro Scorpecci: Audiology and Otosurgery Unit, "Bambino Gesù" Pediatric Hospital and Research Institute, Rome, Italy
- Sara Giannantonio: Audiology and Otosurgery Unit, "Bambino Gesù" Pediatric Hospital and Research Institute, Rome, Italy
- Antonietta De Lucia: Otology and Cochlear Implant Unit, Regional Referral Centre Children's Hospital "Santobono-Pausilipon", Naples, Italy
- Sabina Garofalo: Otology and Cochlear Implant Unit, Regional Referral Centre Children's Hospital "Santobono-Pausilipon", Naples, Italy
- Rosa Grassia: Department of Otolaryngology/Head and Neck Surgery, Monaldi Hospital, Naples, Italy
- Carlo Antonio Leone: Department of Otolaryngology/Head and Neck Surgery, Monaldi Hospital, Naples, Italy
- Patrizia Longo: Department of Otorhinolaryngology, University of Messina, Messina, Italy
- Francesco Freni: Department of Otorhinolaryngology, University of Messina, Messina, Italy
- Fabio Babiloni: Laboratory of Industrial Neuroscience, Department of Molecular Medicine, Sapienza University of Rome, Rome, Italy; BrainSigns Srl, Rome, Italy; Department of Computer Science and Technology, Hangzhou Dianzi University, Xiasha Higher Education Zone, Hangzhou, China
32
Zhang H, Zhang J, Peng G, Ding H, Zhang Y. Bimodal Benefits Revealed by Categorical Perception of Lexical Tones in Mandarin-Speaking Kindergarteners With a Cochlear Implant and a Contralateral Hearing Aid. J Speech Lang Hear Res 2020; 63:4238-4251. [PMID: 33186505; DOI: 10.1044/2020_jslhr-20-00224]
Abstract
Purpose Pitch reception poses challenges for individuals with cochlear implants (CIs), and adding a hearing aid (HA) in the nonimplanted ear is potentially beneficial. The current study used fine-scale synthetic speech stimuli to investigate the bimodal benefit for lexical tone categorization in Mandarin-speaking kindergarteners using a CI and an HA in opposite ears. Method The data were collected from 16 participants who were required to complete two classical speech categorical perception (CP) tasks in the CI + HA and the CI alone device conditions. Linear mixed-effects models were constructed to evaluate the identification and discrimination scores across the device conditions. Results The bimodal kindergarteners showed CP for the continuum varying from Mandarin Tone 1 to Tone 2. Moreover, the additional acoustic information from the contralateral HA contributed to improved lexical tone categorization, with a steeper identification slope, a higher discrimination score for between-category stimulus pairs, and an improved peakedness score (i.e., an increased benefit magnitude for discrimination of between-category over within-category pairs) in the CI + HA condition relative to the CI alone condition. Bimodal kindergarteners with better residual hearing thresholds at 250 Hz in the nonimplanted ear could perceive lexical tones more categorically. Conclusion The enhanced CP results with bimodal listening provide clear evidence supporting the clinical practice of fitting a contralateral HA in the nonimplanted ear of kindergarteners with unilateral CIs, who benefit directly from the low-frequency acoustic hearing.
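The identification slope and related CP measures reported here are standardly obtained by fitting a logistic psychometric function to each listener's identification proportions along the continuum. The sketch below shows such a fit with scipy on made-up data for an assumed 7-step continuum; it illustrates the analysis idea, not the authors' mixed-effects pipeline.

```python
# Fit a logistic psychometric function to identification data (illustrative).
import numpy as np
from scipy.optimize import curve_fit

def logistic(x, boundary, slope):
    # Proportion of "Tone 2" responses along a Tone 1 -> Tone 2 continuum.
    return 1.0 / (1.0 + np.exp(-slope * (x - boundary)))

steps = np.arange(1, 8)                       # 7-step continuum (assumed)
p_tone2 = np.array([0.02, 0.05, 0.15, 0.50, 0.85, 0.95, 0.98])  # made-up data

(boundary, slope), _ = curve_fit(logistic, steps, p_tone2, p0=[4.0, 1.0])
print(f"boundary at step {boundary:.2f}, slope {slope:.2f}")
```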
Affiliation(s)
- Hao Zhang: Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China; Research Centre for Language, Cognition, and Neuroscience, Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University
- Jing Zhang: Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Gang Peng: Research Centre for Language, Cognition, and Neuroscience, Department of Chinese and Bilingual Studies, The Hong Kong Polytechnic University
- Hongwei Ding: Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Yang Zhang: Department of Speech-Language-Hearing Sciences and Center for Neurobehavioral Development, University of Minnesota, Minneapolis
33
Buono GH, Crukley J, Hornsby BWY, Picou EM. Loss of high- or low-frequency audibility can partially explain effects of hearing loss on emotional responses to non-speech sounds. Hear Res 2020; 401:108153. [PMID: 33360158; DOI: 10.1016/j.heares.2020.108153]
Abstract
Hearing loss can disrupt emotional responses to sound. However, the impact of stimulus modality (multisensory versus unisensory) on this disruption, and the underlying mechanisms responsible, are unclear. The purposes of this project were to evaluate the effects of stimulus modality and filtering on emotional responses to non-speech stimuli. It was hypothesized that low- and high-pass filtering would result in less extreme ratings, but only for unisensory stimuli. Twenty-four adults (22-34 years old; 12 male) with normal hearing participated. Participants made ratings of valence and arousal in response to pleasant, neutral, and unpleasant non-speech sounds and/or pictures. Each participant completed ratings of five stimulus modalities: auditory-only, visual-only, auditory-visual, filtered auditory-only, and filtered auditory-visual. Half of the participants rated low-pass filtered stimuli (800 Hz cutoff), and half of the participants rated high-pass filtered stimuli (2000 Hz cutoff). Combining auditory and visual modalities resulted in more extreme (more pleasant and more unpleasant) ratings of valence in response to pleasant and unpleasant stimuli. In addition, low- and high-pass filtering of sounds resulted in less extreme ratings of valence (less pleasant and less unpleasant) and arousal (less exciting) in response to both auditory-only and auditory-visual stimuli. These results suggest that changes in audible spectral information are partially responsible for the noted changes in emotional responses to sound that accompany hearing loss. The findings also suggest the effects of hearing loss will generalize to multisensory stimuli if the stimuli include sound, although further work is warranted to confirm this in listeners with hearing loss.
Affiliation(s)
- Gabrielle H Buono: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN 37232, United States
- Jeffery Crukley: Department of Speech-Language Pathology, University of Toronto, Canada
- Benjamin W Y Hornsby: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN 37232, United States
- Erin M Picou: Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN 37232, United States
34
Abstract
INTRODUCTION Cochlear implants (CIs) are biomedical devices that restore sound perception for people with severe-to-profound sensorineural hearing loss. Most postlingually deafened CI users are able to achieve excellent speech recognition in quiet environments. However, current CI sound processors remain limited in their ability to deliver fine spectrotemporal information, making it difficult for CI users to perceive complex sounds. Limited access to complex acoustic cues such as music, environmental sounds, lexical tones, and voice emotion may have significant ramifications for quality of life, social development, and community interactions. AREAS COVERED The purpose of this review article is to summarize the literature on CIs and music perception, with an emphasis on music training in pediatric CI recipients. The findings have implications for our understanding of noninvasive, accessible methods for improving auditory processing and may help advance our ability to improve sound quality and performance for implantees. EXPERT OPINION Music training, particularly in the pediatric population, may be able to continue to enhance auditory processing even after performance plateaus. The effects of these training programs appear generalizable to non-trained musical tasks, speech prosody, and emotion perception. Future studies should employ rigorous control groups involving a non-musical acoustic intervention, standardized auditory stimuli, and the provision of feedback.
Affiliation(s)
- Nicole T Jiam: Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco School of Medicine, San Francisco, CA, USA
- Charles Limb: Department of Otolaryngology-Head and Neck Surgery, University of California San Francisco School of Medicine, San Francisco, CA, USA
35
Skuk VG, Kirchen L, Oberhoffner T, Guntinas-Lichius O, Dobel C, Schweinberger SR. Parameter-Specific Morphing Reveals Contributions of Timbre and Fundamental Frequency Cues to the Perception of Voice Gender and Age in Cochlear Implant Users. J Speech Lang Hear Res 2020; 63:3155-3175. [PMID: 32881631; DOI: 10.1044/2020_jslhr-20-00026]
Abstract
Purpose Using naturalistic synthesized speech, we determined the relative importance of acoustic cues in voice gender and age perception in cochlear implant (CI) users. Method We investigated 28 CI users' abilities to utilize fundamental frequency (F0) and timbre in perceiving voice gender (Experiment 1) and vocal age (Experiment 2). Parameter-specific voice morphing was used to selectively control acoustic cues (F0; time; timbre, i.e., formant frequencies, spectral-level information, and aperiodicity, as defined in TANDEM-STRAIGHT) in voice stimuli. Individual differences in CI users' performance were quantified via deviations from the mean performance of 19 normal-hearing (NH) listeners. Results CI users' gender perception seemed exclusively based on F0, whereas NH listeners efficiently used timbre. For age perception, timbre was more informative than F0 for both groups, with minor contributions of temporal cues. While a few CI users performed comparable to NH listeners overall, others were at chance. Separate analyses confirmed that even high-performing CI users classified gender almost exclusively based on F0. While high performers could discriminate age in male and female voices, low performers were close to chance overall but used F0 as a misleading cue to age (classifying female voices as young and male voices as old). Satisfaction with CI generally correlated with performance in age perception. Conclusions We confirmed that CI users' gender classification is mainly based on F0. However, high performers could make reasonable usage of timbre cues in age perception. Overall, parameter-specific morphing can serve to objectively assess individual profiles of CI users' abilities to perceive nonverbal social-communicative vocal signals.
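Parameter-specific morphing, as implemented with TANDEM-STRAIGHT in this line of work, decomposes each voice into separate parameter tracks (F0, timbre-related spectral parameters, timing, aperiodicity) and interpolates only the track of interest between two reference voices. The toy sketch below shows the interpolation logic for an F0 contour alone, with made-up values; real morphing operates on full analysis-resynthesis parameter sets.

```python
# Toy illustration of parameter-specific morphing: interpolate only the F0
# track between two voices, leaving all other parameters at one voice's values.
import numpy as np

def morph_f0(f0_a, f0_b, weight):
    """weight = 0 -> voice A's F0; 1 -> voice B's F0; 0.5 -> ambiguous."""
    # F0 is usually interpolated on a log scale (perceptually motivated).
    return np.exp((1 - weight) * np.log(f0_a) + weight * np.log(f0_b))

f0_female = np.array([220.0, 230.0, 210.0])   # made-up contours (Hz)
f0_male = np.array([120.0, 125.0, 115.0])
print(morph_f0(f0_female, f0_male, 0.5))      # gender-ambiguous F0 contour
```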
Affiliation(s)
- Verena G Skuk: DFG Research Unit Person Perception, Friedrich Schiller University of Jena, Germany; Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University of Jena, Germany; Department of Otorhinolaryngology, Institute of Phoniatry and Pedaudiology, Jena University Hospital, Germany
- Louisa Kirchen: Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University of Jena, Germany; Social-Pediatric Centre and Centre for Adults With Special Needs, Trier, Germany
- Tobias Oberhoffner: Department of Otorhinolaryngology, Institute of Phoniatry and Pedaudiology, Jena University Hospital, Germany; Department of Otorhinolaryngology, Head and Neck Surgery, "Otto Körner," University Medical Center Rostock, Germany
- Orlando Guntinas-Lichius: Department of Otorhinolaryngology, Institute of Phoniatry and Pedaudiology, Jena University Hospital, Germany
- Christian Dobel: Department of Otorhinolaryngology, Institute of Phoniatry and Pedaudiology, Jena University Hospital, Germany
- Stefan R Schweinberger: DFG Research Unit Person Perception, Friedrich Schiller University of Jena, Germany; Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University of Jena, Germany; Swiss Center for Affective Science, Geneva, Switzerland
36
Abstract
OBJECTIVES Children with hearing loss (HL), in spite of early cochlear implantation, often struggle considerably with language acquisition. Previous research has shown a benefit of rhythmic training on linguistic skills in children with HL, suggesting that improving rhythmic capacities could help attenuate language difficulties. However, little is known about the general rhythmic skills of children with HL and how they relate to speech perception. The aim of this study is twofold: (1) to assess the abilities of children with HL in different rhythmic sensorimotor synchronization tasks compared to a normal-hearing control group and (2) to investigate a possible relation between sensorimotor synchronization abilities and speech perception abilities in children with HL. DESIGN A battery of sensorimotor synchronization tests with stimuli of varying acoustic and temporal complexity was used: a metronome, different musical excerpts, and complex rhythmic patterns. Synchronization abilities were assessed in 32 children (aged from 5 to 10 years) with severe to profound HL, mainly fitted with one or two cochlear implants (n = 28) or with hearing aids (n = 4). Working memory and sentence repetition abilities were also assessed. Performance was compared to an age-matched control group of 24 children with normal hearing. The comparison took into account variability in working memory capacities. For children with HL only, we computed linear regressions on speech, sensorimotor synchronization, and working memory abilities, including device-related variables such as onset of device use, type of device, and duration of use. RESULTS Compared to the normal-hearing group, children with HL performed poorly in all sensorimotor synchronization tasks, but the effect size was greater for complex as compared to simple stimuli. Group differences in working memory did not explain this result. Linear regression analysis revealed that working memory, performance in synchronization to complex rhythms, age, and duration of device use predicted the number of correct syllables produced in a sentence repetition task. CONCLUSION Despite early cochlear implantation or hearing aid use, hearing impairment affects the quality of temporal processing of acoustic stimuli in congenitally deaf children. This deficit seems to be more severe with stimuli of increasing rhythmic complexity, highlighting a difficulty in structuring sounds according to a temporal hierarchy.
Collapse
|
37
|
Weed E, Fusaroli R. Acoustic Measures of Prosody in Right-Hemisphere Damage: A Systematic Review and Meta-Analysis. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2020; 63:1762-1775. [PMID: 32432947 DOI: 10.1044/2020_jslhr-19-00241] [Citation(s) in RCA: 10] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Indexed: 06/11/2023]
Abstract
Purpose The aim of the study was to use systematic review and meta-analysis to quantitatively assess the currently available acoustic evidence for prosodic production impairments as a result of right-hemisphere damage (RHD), as well as to develop methodological recommendations for future studies. Method We systematically reviewed papers reporting acoustic features of prosodic production in RHD in order to identify shortcomings in the literature and make recommendations for future studies. We estimated the meta-analytic effect size of the acoustic features: we extracted standardized mean differences from 16 papers and estimated aggregated effect sizes using hierarchical Bayesian regression models. Results Speakers with RHD did show reduced fundamental frequency variation, but this trait was shared with left-hemisphere damage. RHD also showed evidence of increased pause duration. No meta-analytic evidence for an effect of prosody type (emotional vs. linguistic) was found. Conclusions Taken together, the currently available acoustic data show only a weak specific effect of RHD on prosody production. However, the results are not definitive, as more reliable analyses are hindered by small sample sizes, lack of detail on lesion location, and divergent measuring techniques. To overcome these issues, we recommend cumulative science practices (e.g., open data and code sharing), more nuanced speech signal processing techniques, and the integration of acoustic measures with perceptual judgments, so that prosody in RHD can be investigated more effectively.
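The effect-size extraction step described above can be illustrated with a short sketch. The snippet computes a standardized mean difference with the small-sample (Hedges' g) correction from group summary statistics; the numbers are invented, and the paper's actual aggregation used hierarchical Bayesian regression, which this sketch does not reproduce.

```python
import numpy as np

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Standardized mean difference (Hedges' g) from summary statistics."""
    # Pooled standard deviation across the two groups.
    s_pooled = np.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / s_pooled                           # Cohen's d
    correction = 1.0 - 3.0 / (4.0 * (n1 + n2) - 9.0)   # small-sample bias correction
    return correction * d

# Invented example: F0 variation (semitones) in an RHD group vs. controls.
print(round(hedges_g(m1=2.1, sd1=0.8, n1=12, m2=2.9, sd2=0.9, n2=14), 2))
```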
Collapse
Affiliation(s)
- Ethan Weed
- School of Communication and Culture, Aarhus University, Denmark
| | | |
Collapse
|
38
|
Zhang H, Zhang J, Ding H, Zhang Y. Bimodal Benefits for Lexical Tone Recognition: An Investigation on Mandarin-speaking Preschoolers with a Cochlear Implant and a Contralateral Hearing Aid. Brain Sci 2020; 10:brainsci10040238. [PMID: 32316466 PMCID: PMC7226140 DOI: 10.3390/brainsci10040238] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 03/22/2020] [Revised: 04/08/2020] [Accepted: 04/15/2020] [Indexed: 11/16/2022] Open
Abstract
Pitch perception is known to be difficult for individuals with a cochlear implant (CI), and adding a hearing aid (HA) in the non-implanted ear is potentially beneficial. The current study aimed to investigate the bimodal benefit for lexical tone recognition in Mandarin-speaking preschoolers using a CI and an HA in opposite ears. The child participants were required to complete tone identification in quiet and in noise with CI + HA in comparison with CI alone. The bimodal listeners showed confusion between Tone 2 and Tone 3, but the additional acoustic information from the contralateral HA alleviated this confusion in quiet. Moreover, significant improvement was demonstrated in the CI + HA condition over the CI-alone condition in noise. The bimodal benefit for individual subjects could be predicted by the low-frequency hearing threshold of the non-implanted ear and the duration of bimodal use. The findings support the clinical practice of fitting a contralateral HA in the non-implanted ear for a potential benefit in Mandarin tone recognition in CI children. The limitations call for further studies on auditory plasticity on an individual basis to gain insight into the factors contributing to the bimodal benefit or its absence.
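The predictive relationship reported above (bimodal benefit as a function of low-frequency threshold and duration of bimodal use) has the form of a multiple linear regression. The sketch below fits such a model on made-up numbers purely to illustrate the shape of the analysis; none of the values come from the study.

```python
import numpy as np

# Made-up data: low-frequency threshold of the non-implanted ear (dB HL),
# duration of bimodal use (months), and bimodal benefit (percentage points).
threshold = np.array([35.0, 50.0, 65.0, 40.0, 80.0, 55.0])
months_use = np.array([24.0, 12.0, 6.0, 30.0, 3.0, 18.0])
benefit = np.array([18.0, 12.0, 5.0, 20.0, 2.0, 10.0])

# Design matrix with an intercept column; ordinary least squares fit.
X = np.column_stack([np.ones_like(threshold), threshold, months_use])
coef, *_ = np.linalg.lstsq(X, benefit, rcond=None)
intercept, b_threshold, b_months = coef
print(f"benefit ~ {intercept:.1f} + {b_threshold:.2f}*threshold + {b_months:.2f}*months")
```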
Collapse
Affiliation(s)
- Hao Zhang
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China; (H.Z.); (J.Z.)
| | - Jing Zhang
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China; (H.Z.); (J.Z.)
| | - Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, Shanghai 200240, China; (H.Z.); (J.Z.)
- Correspondence: (H.D.); (Y.Z.); Tel.: +1-612-624-7878 (Y.Z.)
| | - Yang Zhang
- Department of Speech-Language-Hearing Sciences, University of Minnesota, Minneapolis, MN 55455, USA
- Correspondence: (H.D.); (Y.Z.); Tel.: +1-612-624-7878 (Y.Z.)
| |
Collapse
|
39
|
Figueroa M, Darbra S, Silvestre N. Reading and Theory of Mind in Adolescents with Cochlear Implant. JOURNAL OF DEAF STUDIES AND DEAF EDUCATION 2020; 25:212-223. [PMID: 32091587 DOI: 10.1093/deafed/enz046] [Citation(s) in RCA: 4] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 05/25/2019] [Revised: 10/02/2019] [Accepted: 10/20/2019] [Indexed: 06/10/2023]
Abstract
Previous research has shown a possible link between reading comprehension and theory of mind (ToM), but these findings are unclear in adolescents with cochlear implants (CI). In the present study, reading comprehension and ToM were assessed in adolescents with CI, and the relation between the two skills was also studied. Two sessions were conducted with two groups of adolescents aged 12 to 16 years (36 adolescents with CI and 54 participants with typical hearing, TH). They were evaluated by means of a standardized reading battery, a false belief task, and Faux Pas stories. The results indicated that reading and cognitive ToM were more developed in the TH group than in adolescents with CI. However, the performance of the early-implanted and binaural groups was close to that of the TH group in narrative and expository comprehension and in cognitive ToM. The results also indicated that cognitive ToM and reading comprehension appear to be related in deaf adolescents.
Collapse
Affiliation(s)
- Mario Figueroa
- Department of Basic, Developmental and Educational Psychology, Autonomous University of Barcelona
| | - Sònia Darbra
- Department of Psychobiology and Methodology of Health Sciences, Neurosciences Institute, Autonomous University of Barcelona
| | - Núria Silvestre
- Department of Basic, Developmental and Educational Psychology, Autonomous University of Barcelona
| |
Collapse
|
40
|
Neurophysiological Differences in Emotional Processing by Cochlear Implant Users, Extending Beyond the Realm of Speech. Ear Hear 2020; 40:1197-1209. [PMID: 30762600 DOI: 10.1097/aud.0000000000000701] [Citation(s) in RCA: 8] [Impact Index Per Article: 1.6] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 01/25/2023]
Abstract
OBJECTIVE Cochlear implants (CIs) restore a sense of hearing in deaf individuals. However, they do not transmit the acoustic signal with sufficient fidelity, leading to difficulties in recognizing emotions in voice and in music. The study aimed to explore the neurophysiological bases of these limitations. DESIGN Twenty-two adults (18 to 70 years old) with CIs and 22 age-matched controls with normal hearing participated. Event-related potentials (ERPs) were recorded in response to emotional bursts (happy, sad, or neutral) produced in each modality (voice or music); the stimuli were for the most part correctly identified behaviorally. RESULTS Compared with controls, the N1 and P2 components were attenuated and prolonged in CI users. To a smaller degree, N1 and P2 were also attenuated and prolonged in music compared with voice, in both populations. The N1-P2 complex was emotion-dependent (e.g., reduced and prolonged response to sadness), but this was also true in both populations. In contrast, the later portion of the response, between 600 and 850 ms, differentiated happy and sad from neutral stimuli in normal-hearing but not in CI listeners. CONCLUSIONS The early portion of the ERP waveform reflected primarily the general reduction in sensory encoding by CI users (largely due to CI processing itself), whereas altered emotional processing by CI users was found in the later portion of the ERP and extended beyond the realm of speech.
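The N1/P2 measurements discussed here are typically taken from trial-averaged waveforms as peak amplitudes and latencies within fixed windows. The sketch below shows that generic procedure on simulated data; the sampling rate, window boundaries, and waveform shape are assumptions for illustration, not the study's parameters.

```python
import numpy as np

def n1_p2_measures(epochs, times):
    """Peak amplitude (uV) and latency (ms) of N1 and P2 from ERP epochs.

    epochs: (n_trials, n_samples) array in microvolts.
    times:  (n_samples,) array in seconds, 0 = stimulus onset.
    """
    erp = epochs.mean(axis=0)                       # average across trials
    n1_win = (times >= 0.080) & (times <= 0.150)    # typical N1 window
    p2_win = (times >= 0.150) & (times <= 0.250)    # typical P2 window
    i_n1 = np.argmin(erp[n1_win])                   # N1 = most negative peak
    i_p2 = np.argmax(erp[p2_win])                   # P2 = most positive peak
    return {"N1_uV": erp[n1_win][i_n1], "N1_ms": 1000 * times[n1_win][i_n1],
            "P2_uV": erp[p2_win][i_p2], "P2_ms": 1000 * times[p2_win][i_p2]}

# Toy usage: 40 simulated epochs at 500 Hz with an N1-P2 shape plus noise.
times = np.arange(-0.1, 0.6, 0.002)
wave = -3 * np.exp(-((times - 0.10) / 0.02) ** 2) + 4 * np.exp(-((times - 0.20) / 0.03) ** 2)
rng = np.random.default_rng(0)
epochs = wave + rng.normal(0.0, 1.0, size=(40, times.size))
print(n1_p2_measures(epochs, times))
```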
Collapse
|
41
|
D'Onofrio KL, Caldwell M, Limb C, Smith S, Kessler DM, Gifford RH. Musical Emotion Perception in Bimodal Patients: Relative Weighting of Musical Mode and Tempo Cues. Front Neurosci 2020; 14:114. [PMID: 32174809 PMCID: PMC7054459 DOI: 10.3389/fnins.2020.00114] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/30/2019] [Accepted: 01/29/2020] [Indexed: 11/13/2022] Open
Abstract
Several cues are used to convey musical emotion, the two primary being musical mode and musical tempo. Specifically, major and minor modes tend to be associated with positive and negative valence, respectively, and songs at fast tempi have been associated with more positive valence compared to songs at slow tempi (Balkwill and Thompson, 1999; Webster and Weir, 2005). In Experiment I, we examined the relative weighting of musical tempo and musical mode among adult cochlear implant (CI) users combining electric and contralateral acoustic stimulation, or "bimodal" hearing. Our primary hypothesis was that bimodal listeners would utilize both tempo and mode cues in their musical emotion judgments in a manner similar to normal-hearing listeners. Our secondary hypothesis was that low-frequency (LF) spectral resolution in the non-implanted ear, as quantified via psychophysical tuning curves (PTCs) at 262 and 440 Hz, would be significantly correlated with degree of bimodal benefit for musical emotion perception. In Experiment II, we investigated across-channel spectral resolution using a spectral modulation detection (SMD) task and neural representation of temporal fine structure via the frequency following response (FFR) for a 170-ms /da/ stimulus. Results indicate that CI-alone performance was driven almost exclusively by tempo cues, whereas bimodal listening demonstrated use of both tempo and mode. Additionally, bimodal benefit for musical emotion perception may be correlated with spectral resolution in the non-implanted ear via SMD, as well as neural representation of F0 amplitude via FFR - though further study with a larger sample size is warranted. Thus, contralateral acoustic hearing can offer significant benefit for musical emotion perception, and the degree of benefit may be dependent upon spectral resolution of the non-implanted ear.
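Of the measures above, the FFR-based index ("neural representation of F0 amplitude") can be approximated as the spectral magnitude of the averaged response in a narrow band around the stimulus F0. The sketch below shows that generic computation on a simulated response; the sampling rate, F0, and bandwidth are assumed values, not the authors' settings.

```python
import numpy as np

def ffr_f0_amplitude(response, fs, f0, half_bw=5.0):
    """Spectral amplitude of an averaged FFR near the stimulus F0.

    response: 1-D averaged frequency-following response; fs: sampling rate (Hz).
    Returns the mean FFT magnitude within f0 +/- half_bw Hz.
    """
    windowed = response * np.hanning(response.size)   # reduce spectral leakage
    spectrum = np.abs(np.fft.rfft(windowed))
    freqs = np.fft.rfftfreq(response.size, d=1.0 / fs)
    band = (freqs >= f0 - half_bw) & (freqs <= f0 + half_bw)
    return spectrum[band].mean()

# Toy usage: a simulated 170-ms response with energy at a 100-Hz F0.
fs = 10000
t = np.arange(0.0, 0.170, 1.0 / fs)
rng = np.random.default_rng(2)
resp = np.sin(2 * np.pi * 100.0 * t) + 0.5 * rng.standard_normal(t.size)
print(f"F0 amplitude ~ {ffr_f0_amplitude(resp, fs, f0=100.0):.1f} (arbitrary units)")
```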
Collapse
Affiliation(s)
- Kristen L D'Onofrio
- Cochlear Implant Research Laboratory, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
| | | | - Charles Limb
- Department of Otolaryngology-Head and Neck Surgery, University of California, San Francisco, San Francisco, CA, United States
| | - Spencer Smith
- Department of Communication Sciences and Disorders, The University of Texas at Austin, Austin, TX, United States
| | - David M Kessler
- Cochlear Implant Research Laboratory, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
| | - René H Gifford
- Cochlear Implant Research Laboratory, Department of Hearing and Speech Sciences, Vanderbilt University, Nashville, TN, United States
| |
Collapse
|
42
|
Chatterjee M, Kulkarni AM, Siddiqui RM, Christensen JA, Hozan M, Sis JL, Damm SA. Acoustics of Emotional Prosody Produced by Prelingually Deaf Children With Cochlear Implants. Front Psychol 2019; 10:2190. [PMID: 31632320 PMCID: PMC6779094 DOI: 10.3389/fpsyg.2019.02190] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.0] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2019] [Accepted: 09/11/2019] [Indexed: 11/27/2022] Open
Abstract
Purpose: Cochlear implants (CIs) provide reasonable levels of speech recognition in quiet, but voice pitch perception is severely impaired in CI users. The central question addressed here is how access to acoustic input pre-implantation influences vocal emotion production by individuals with CIs. The objective of this study was to compare acoustic characteristics of vocal emotions produced by prelingually deaf school-aged children with cochlear implants (CCI) who were implanted at the age of 2 and had no usable hearing before implantation with those produced by children with normal hearing (CNH), adults with normal hearing (ANH), and postlingually deaf adults with cochlear implants (ACI) who developed with good access to acoustic information prior to losing their hearing and receiving a CI. Method: A set of 20 sentences without lexically based emotional information was recorded by 13 CCI, 9 CNH, 9 ANH, and 10 ACI, each with a happy emotion and a sad emotion, without training or guidance. The sentences were analyzed for primary acoustic characteristics of the productions. Results: Significant effects of Emotion were observed in all acoustic features analyzed (mean voice pitch, standard deviation of voice pitch, intensity, duration, and spectral centroid). ACI and ANH did not differ in any of the analyses. Of the four groups, CCI produced the smallest acoustic contrasts between the emotions in voice pitch and in its standard deviation. Effects of developmental age (highly correlated with the duration of device experience) and age at implantation (moderately correlated with duration of device experience) were observed, as were interactions with the children's sex. Conclusion: Although prelingually deaf CCI and postlingually deaf ACI are listening to similarly degraded speech and show similar deficits in vocal emotion perception, these groups are distinct in their productions of contrastive vocal emotions. The results underscore the importance of access to acoustic hearing in early childhood for the production of speech prosody and also suggest the need for a greater role of speech therapy in this area.
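The five acoustic features analyzed (mean F0, F0 standard deviation, intensity, duration, spectral centroid) can all be extracted with standard tools. A minimal sketch using the librosa library on a hypothetical recording follows; the file path and analysis ranges are assumptions, and the authors' exact extraction settings may differ.

```python
import numpy as np
import librosa

def emotion_acoustics(wav_path):
    """Extract the prosodic feature set from one recording."""
    y, sr = librosa.load(wav_path, sr=None)

    # F0 track via the pYIN algorithm; unvoiced frames come back as NaN.
    f0, _, _ = librosa.pyin(y, fmin=65.0, fmax=600.0, sr=sr)
    voiced = f0[~np.isnan(f0)]

    rms = librosa.feature.rms(y=y)                          # intensity proxy
    centroid = librosa.feature.spectral_centroid(y=y, sr=sr)

    return {"mean_f0_hz": float(np.mean(voiced)),
            "sd_f0_hz": float(np.std(voiced)),
            "mean_rms": float(np.mean(rms)),
            "duration_s": float(len(y) / sr),
            "spectral_centroid_hz": float(np.mean(centroid))}

# Hypothetical usage (the path is illustrative):
# print(emotion_acoustics("happy_sentence_01.wav"))
```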
Collapse
Affiliation(s)
- Monita Chatterjee
- Auditory Prostheses and Perception Laboratory, Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE, United States
| | | | | | | | | | | | | |
Collapse
|
43
|
Jiam NT, Limb CJ. Rhythm processing in cochlear implant-mediated music perception. Ann N Y Acad Sci 2019; 1453:22-28. [PMID: 31168793 DOI: 10.1111/nyas.14130] [Citation(s) in RCA: 16] [Impact Index Per Article: 2.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/25/2019] [Revised: 04/24/2019] [Accepted: 05/03/2019] [Indexed: 11/29/2022]
Abstract
Cochlear implants (CIs) are biomedical devices that provide sound to people with severe-to-profound hearing loss by direct electrical stimulation of auditory neurons in the cochlea. Despite the remarkable achievements with respect to speech perception in quiet environments, music perception with CIs remains generally poor due to the degradation of auditory input. Prior studies have shown that both pitch perception and timbre discrimination are poor in CI users, whereas performance on rhythmic tasks is nearly equivalent to that of normal-hearing participants. There are several caveats, however, to this generalization regarding rhythm processing for CI users. The purpose of this article is to summarize the literature on rhythmic perception for CI users while highlighting important limitations within these studies. We also identify areas for future research and development of CI-mediated music processing. It is likely that rhythm processing will continue to advance as our understanding of electrical current delivery to the auditory nerve improves.
Collapse
Affiliation(s)
- Nicole T Jiam
- Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco School of Medicine, San Francisco, California
| | - Charles J Limb
- Department of Otolaryngology - Head and Neck Surgery, University of California San Francisco School of Medicine, San Francisco, California
| |
Collapse
|
44
|
Ritter C, Vongpaisal T. Multimodal and Spectral Degradation Effects on Speech and Emotion Recognition in Adult Listeners. Trends Hear 2019; 22:2331216518804966. [PMID: 30378469 PMCID: PMC6236866 DOI: 10.1177/2331216518804966] [Citation(s) in RCA: 3] [Impact Index Per Article: 0.5] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 12/22/2022] Open
Abstract
For cochlear implant (CI) users, degraded spectral input hampers the understanding of prosodic vocal emotion, especially in difficult listening conditions. Using a vocoder simulation of CI hearing, we examined the extent to which informative multimodal cues in a talker's spoken expressions improve normal hearing (NH) adults' speech and emotion perception under different levels of spectral degradation (two, three, four, and eight spectral bands). Participants repeated the words verbatim and identified emotions (among four alternative options: happy, sad, angry, and neutral) in meaningful sentences that were semantically congruent with the expression of the intended emotion. Sentences were presented in their natural speech form and in speech sampled through a noise-band vocoder, in sound (auditory-only) and video (auditory-visual) recordings of a female talker. Visual information had a more pronounced benefit in enhancing speech recognition in the lower spectral band conditions. Spectral degradation, however, did not interfere with emotion recognition performance when dynamic visual cues in a talker's expression were provided, as participants scored at ceiling levels across all spectral band conditions. Our use of familiar sentences that contained congruent semantic and prosodic information has high ecological validity, which likely optimized listener performance under simulated CI hearing and may better predict CI users' outcomes in everyday listening contexts.
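A noise-band vocoder of the kind used to simulate CI hearing can be built from a bank of bandpass filters plus envelope extraction. The sketch below is a minimal generic implementation with assumed corner frequencies and envelope cutoff, not the study's exact processing chain.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocoder(y, fs, n_bands=4, f_lo=100.0, f_hi=8000.0):
    """Replace each band's temporal fine structure with band-limited noise."""
    edges = np.geomspace(f_lo, f_hi, n_bands + 1)    # log-spaced band edges
    rng = np.random.default_rng(3)
    noise = rng.standard_normal(y.size)
    out = np.zeros_like(y)

    for lo, hi in zip(edges[:-1], edges[1:]):
        sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
        band = sosfiltfilt(sos, y)
        env = np.abs(hilbert(band))                   # Hilbert envelope
        smooth = butter(2, 50.0, btype="low", fs=fs, output="sos")
        env = sosfiltfilt(smooth, env)                # keep slow modulations
        out += env * sosfiltfilt(sos, noise)          # noise carrier per band

    return out / (np.max(np.abs(out)) + 1e-12)        # normalize peak level

# Toy usage: vocode one second of a synthetic harmonic complex.
fs = 16000
t = np.arange(0.0, 1.0, 1.0 / fs)
y = sum(np.sin(2 * np.pi * f * t) for f in (220.0, 440.0, 660.0))
vocoded = noise_vocoder(np.asarray(y), fs, n_bands=4)
```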
Collapse
Affiliation(s)
- Chantel Ritter
- Department of Psychology, MacEwan University, Alberta, Canada
| | - Tara Vongpaisal
- Department of Psychology, MacEwan University, Alberta, Canada
| |
Collapse
|
45
|
Fuller CD, Galvin JJ, Maat B, Başkent D, Free RH. Comparison of Two Music Training Approaches on Music and Speech Perception in Cochlear Implant Users. Trends Hear 2019; 22:2331216518765379. [PMID: 29621947 PMCID: PMC5894911 DOI: 10.1177/2331216518765379] [Citation(s) in RCA: 20] [Impact Index Per Article: 3.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
In normal-hearing (NH) adults, long-term music training may benefit music and speech perception, even when listening to spectro-temporally degraded signals as experienced by cochlear implant (CI) users. In this study, we compared two different music training approaches in CI users and their effects on speech and music perception, as it remains unclear which approach to music training might be best. The approaches differed in terms of music exercises and social interaction. For the pitch/timbre group, melodic contour identification (MCI) training was performed using computer software. For the music therapy group, training involved face-to-face group exercises (rhythm perception, musical speech perception, music perception, singing, vocal emotion identification, and music improvisation). For the control group, training involved group nonmusic activities (e.g., writing, cooking, and woodworking). Training consisted of weekly 2-hr sessions over a 6-week period. Speech intelligibility in quiet and noise, vocal emotion identification, MCI, and quality of life (QoL) were measured before and after training. The different training approaches appeared to offer different benefits for music and speech perception. Training effects were observed within-domain (better MCI performance for the pitch/timbre group), with little cross-domain transfer of music training (emotion identification significantly improved for the music therapy group). While training had no significant effect on QoL, the music therapy group reported better perceptual skills across training sessions. These results suggest that more extensive and intensive training approaches that combine pitch training with the social aspects of music therapy may further benefit CI users.
Collapse
Affiliation(s)
- Christina D Fuller
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Graduate School of Medical Sciences, University of Groningen, the Netherlands
- Research School of Behavioral and Cognitive Neurosciences, University of Groningen, the Netherlands
| | - John J Galvin
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Graduate School of Medical Sciences, University of Groningen, the Netherlands
- Research School of Behavioral and Cognitive Neurosciences, University of Groningen, the Netherlands
- House Ear Institute, Los Angeles, CA, USA
- Department of Head and Neck Surgery, David Geffen School of Medicine, UCLA, CA, USA
| | - Bert Maat
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Graduate School of Medical Sciences, University of Groningen, the Netherlands
- Research School of Behavioral and Cognitive Neurosciences, University of Groningen, the Netherlands
| | - Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Graduate School of Medical Sciences, University of Groningen, the Netherlands
- Research School of Behavioral and Cognitive Neurosciences, University of Groningen, the Netherlands
| | - Rolien H Free
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, the Netherlands
- Graduate School of Medical Sciences, University of Groningen, the Netherlands
- Research School of Behavioral and Cognitive Neurosciences, University of Groningen, the Netherlands
| |
Collapse
|
46
|
Abstract
OBJECTIVES Cochlear implant (CI) users suffer from a range of speech impairments, such as stuttering and impaired vocal control of pitch and intensity. Though little research has focused on the role of auditory feedback in the speech of CI users, these speech impairments could be due in part to limited access to low-frequency cues inherent in CI-mediated listening. Phantom electrode stimulation (PES) represents a novel application of current steering that extends access to low frequencies for CI recipients: PES transmits frequencies below 300 Hz, whereas Baseline does not. The objective of this study was to explore the effects of PES on multiple frequency-related characteristics of voice production. DESIGN Eight postlingually deafened, adult Advanced Bionics CI users underwent a series of vocal production tests including Tone Repetition, Vowel Sound Production, Passage Reading, and Picture Description. Participants completed all of these tests twice: once with PES and once with the program they use for everyday listening (Baseline). An additional test, Automatic Modulation, was included to measure acute effects of PES and was completed only once. This test involved switching between PES and Baseline at specific time intervals in real time as participants read a series of short sentences. Finally, a subjective Vocal Effort measurement was also included. RESULTS In Tone Repetition, the fundamental frequencies (F0) of tones produced using PES and the size of musical intervals produced using PES were significantly more accurate (closer to the target) compared with Baseline in specific gender, target tone range, and target tone type testing conditions. In the Vowel Sound Production task, vowel formant profiles produced using PES were closer to those of the general population compared with those produced using Baseline. The Passage Reading and Picture Description task results suggest that PES reduces measures of pitch variability (F0 standard deviation and range) in natural speech production. No significant differences between PES and Baseline were found in either the Automatic Modulation task or the Vocal Effort task. CONCLUSIONS The findings of this study suggest that use of PES increases the accuracy of pitch matching in repeated sung tones and frequency intervals, possibly due to more accurate F0 representation. The results also suggest that PES partially normalizes the vowel formant profiles of select vowel sounds. PES seems to decrease pitch variability of natural speech and appears to have limited acute effects on natural speech production, though this finding may be due in part to paradigm limitations. On average, subjective ratings of vocal effort were unaffected by the use of PES versus Baseline.
Collapse
|
47
|
Goy H, Pichora-Fuller MK, Singh G, Russo FA. Hearing Aids Benefit Recognition of Words in Emotional Speech but Not Emotion Identification. Trends Hear 2018; 22:2331216518801736. [PMID: 30249171 PMCID: PMC6156210 DOI: 10.1177/2331216518801736] [Citation(s) in RCA: 5] [Impact Index Per Article: 0.7] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/15/2022] Open
Abstract
Vocal emotion perception is an important part of speech communication and social interaction. Although older adults with normal audiograms are known to be less accurate at identifying vocal emotion compared to younger adults, little is known about how older adults with hearing loss perceive vocal emotion or whether hearing aids improve the perception of emotional speech. In the main experiment, older hearing aid users were presented with sentences spoken in seven emotion conditions, with and without their own hearing aids. Listeners reported the words that they heard as well as the emotion portrayed in each sentence. The use of hearing aids improved word-recognition accuracy in quiet from 38.1% (unaided) to 65.1% (aided) but did not significantly change emotion-identification accuracy (36.0% unaided, 41.8% aided). In a follow-up experiment, normal-hearing young listeners were tested on the same stimuli. Normal-hearing younger listeners and older listeners with hearing loss showed similar patterns in how emotion affected word-recognition performance but different patterns in how emotion affected emotion-identification performance. In contrast to the present findings, previous studies did not find age-related differences between younger and older normal-hearing listeners in how emotion affected emotion-identification performance. These findings suggest that there are changes to emotion identification caused by hearing loss that are beyond those that can be attributed to normal aging, and that hearing aids do not compensate for these changes.
Collapse
Affiliation(s)
- Huiwen Goy
- Ryerson University, Toronto, Ontario, Canada
| | | | - Gurjit Singh
- Ryerson University, Toronto, Ontario, Canada
- Phonak AG, Stäfa, Switzerland
- Department of Speech-Language Pathology, University of Toronto, Toronto, Ontario, Canada
- Toronto Rehabilitation Institute, University Health Network, Toronto, Ontario, Canada
| | - Frank A Russo
- Ryerson University, Toronto, Ontario, Canada
- Toronto Rehabilitation Institute, University Health Network, Toronto, Ontario, Canada
| |
Collapse
|
48
|
Waaramaa T, Kukkonen T, Mykkänen S, Geneid A. Vocal Emotion Identification by Children Using Cochlear Implants, Relations to Voice Quality, and Musical Interests. JOURNAL OF SPEECH, LANGUAGE, AND HEARING RESEARCH : JSLHR 2018; 61:973-985. [PMID: 29587304 DOI: 10.1044/2017_jslhr-h-17-0054] [Citation(s) in RCA: 10] [Impact Index Per Article: 1.4] [Reference Citation Analysis] [Abstract] [MESH Headings] [Track Full Text] [Subscribe] [Scholar Register] [Received: 02/09/2017] [Accepted: 12/11/2017] [Indexed: 06/08/2023]
Abstract
PURPOSE Listening tests for emotion identification were conducted with 8- to 17-year-old children with hearing impairment (HI; N = 25) using cochlear implants and their 12-year-old peers with normal hearing (N = 18). The study examined the impact of musical interests and of the acoustics of the stimuli on correct emotion identification. METHOD The children completed a questionnaire covering their background information, including musical interests. They then listened to vocal stimuli produced by actors (N = 5), consisting of nonsense sentences and prolonged vowels ([a:], [i:], and [u:]; N = 32) expressing excitement, anger, contentment, and fear. The children's task was to identify the emotions they heard in each sample by choosing from the provided options. Acoustics of the samples were studied using Praat software, and statistics were examined using SPSS 24 software. RESULTS The children with HI identified the emotions with 57% accuracy and the children with normal hearing with 75% accuracy. Female listeners were more accurate than male listeners in both groups. Those who were implanted before the age of 3 years identified emotions more accurately than the others (p < .05). No connection between the child's audiogram and correct identification was observed. Musical interests and voice quality parameters were found to be related to correct identification. CONCLUSIONS Implantation age, musical interests, and voice quality tended to have an impact on correct emotion identification. Thus, in developing cochlear implants, it may be worth paying attention to the acoustic structures of vocal emotional expressions, especially the third formant frequency (F3). Supporting the musical interests of children with HI may help their emotional development and improve their social lives.
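The Praat-based acoustic analysis mentioned above can be scripted from Python through the parselmouth library, which wraps Praat's algorithms. The snippet below pulls mean F0 and the third formant (F3), the parameter the authors single out, from a hypothetical vowel recording; treat it as a sketch of the approach rather than the study's actual settings.

```python
import numpy as np
import parselmouth  # Python interface to Praat

def f0_and_f3(wav_path):
    """Mean F0 and mid-vowel F3 (Hz) from one recording, via Praat algorithms."""
    snd = parselmouth.Sound(wav_path)

    pitch = snd.to_pitch()
    f0 = pitch.selected_array["frequency"]
    f0 = f0[f0 > 0]                          # unvoiced frames are coded as 0

    formants = snd.to_formant_burg()         # Burg-method formant tracking
    f3 = formants.get_value_at_time(3, snd.duration / 2)  # F3 at the midpoint

    return float(np.mean(f0)), float(f3)

# Hypothetical usage (the path is illustrative):
# mean_f0, f3 = f0_and_f3("vowel_a_excited.wav")
```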
Collapse
Affiliation(s)
- Teija Waaramaa
- Tampere Research Centre for Journalism, Media and Communication (COMET), Faculty of Communication Sciences, University of Tampere, Finland
| | - Tarja Kukkonen
- Faculty of Social Sciences/Logopedics, University of Tampere, Finland
| | - Sari Mykkänen
- Hearing Centre, Tampere University Hospital, Finland
| | - Ahmed Geneid
- Department of Otorhinolaryngology and Phoniatrics-Head and Neck Surgery, University of Helsinki and Helsinki University Hospital, Finland
| |
Collapse
|
49
|
Picou EM, Singh G, Goy H, Russo F, Hickson L, Oxenham AJ, Buono GH, Ricketts TA, Launer S. Hearing, Emotion, Amplification, Research, and Training Workshop: Current Understanding of Hearing Loss and Emotion Perception and Priorities for Future Research. Trends Hear 2018; 22:2331216518803215. [PMID: 30270810 PMCID: PMC6168729 DOI: 10.1177/2331216518803215] [Citation(s) in RCA: 20] [Impact Index Per Article: 2.9] [Reference Citation Analysis] [Abstract] [Key Words] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/13/2018] [Revised: 08/18/2018] [Accepted: 09/03/2018] [Indexed: 12/19/2022] Open
Abstract
The question of how hearing loss and hearing rehabilitation affect patients' momentary emotional experiences is one that has received little attention but has considerable potential to affect patients' psychosocial function. This article is a product from the Hearing, Emotion, Amplification, Research, and Training workshop, which was convened to develop a consensus document describing research on emotion perception relevant for hearing research. This article outlines conceptual frameworks for the investigation of emotion in hearing research; available subjective, objective, neurophysiologic, and peripheral physiologic data acquisition research methods; the effects of age and hearing loss on emotion perception; potential rehabilitation strategies; priorities for future research; and implications for clinical audiologic rehabilitation. More broadly, this article aims to increase awareness about emotion perception research in audiology and to stimulate additional research on the topic.
Collapse
Affiliation(s)
- Erin M. Picou
- Vanderbilt University School of Medicine, Nashville, TN, USA
| | - Gurjit Singh
- Phonak Canada, Mississauga, ON, Canada
- Department of Speech-Language Pathology, University of Toronto, ON, Canada
- Department of Psychology, Ryerson University, Toronto, ON, Canada
| | - Huiwen Goy
- Department of Psychology, Ryerson University, Toronto, ON, Canada
| | - Frank Russo
- Department of Psychology, Ryerson University, Toronto, ON, Canada
| | - Louise Hickson
- School of Health and Rehabilitation Sciences, University of Queensland, Brisbane, Australia
| | | | | | | | | |
Collapse
|
50
|
Fengler I, Nava E, Villwock AK, Büchner A, Lenarz T, Röder B. Multisensory emotion perception in congenitally, early, and late deaf CI users. PLoS One 2017; 12:e0185821. [PMID: 29023525 PMCID: PMC5638301 DOI: 10.1371/journal.pone.0185821] [Citation(s) in RCA: 6] [Impact Index Per Article: 0.8] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 04/05/2017] [Accepted: 09/20/2017] [Indexed: 11/20/2022] Open
Abstract
Emotions are commonly recognized by combining auditory and visual signals (i.e., vocal and facial expressions). Yet it is unknown whether the ability to link emotional signals across modalities depends on early experience with audio-visual stimuli. In the present study, we investigated the role of auditory experience at different stages of development for auditory, visual, and multisensory emotion recognition abilities in three groups of adolescent and adult cochlear implant (CI) users. The CI groups differed in deafness onset and were compared to three groups of age- and gender-matched hearing control participants. We hypothesized that congenitally deaf (CD) but not early deaf (ED) and late deaf (LD) CI users would show reduced multisensory interactions and a higher visual dominance in emotion perception than their hearing controls. The CD (n = 7), ED (deafness onset: <3 years of age; n = 7), and LD (deafness onset: >3 years; n = 13) CI users and the control participants performed an emotion recognition task with auditory, visual, and audio-visual emotionally congruent and incongruent nonsense speech stimuli. In different blocks, participants judged either the vocal (Voice task) or the facial expressions (Face task). In the Voice task, all three CI groups performed less efficiently overall than their respective controls and experienced higher interference from incongruent facial information. Furthermore, the ED CI users benefitted more than their controls from congruent faces, and the CD CI users showed an analogous trend. In the Face task, recognition efficiency of the CI users and controls did not differ. Our results suggest that CI users acquire multisensory interactions to some degree, even after congenital deafness. When judging affective prosody, they appear impaired and more strongly biased by concurrent facial information than typically hearing individuals. We speculate that limitations inherent to the CI contribute to these group differences.
Collapse
Affiliation(s)
- Ineke Fengler
- Biological Psychology and Neuropsychology, Institute for Psychology, Faculty of Psychology and Human Movement Science, University of Hamburg, Hamburg, Germany
| | - Elena Nava
- Biological Psychology and Neuropsychology, Institute for Psychology, Faculty of Psychology and Human Movement Science, University of Hamburg, Hamburg, Germany
| | - Agnes K. Villwock
- Biological Psychology and Neuropsychology, Institute for Psychology, Faculty of Psychology and Human Movement Science, University of Hamburg, Hamburg, Germany
| | - Andreas Büchner
- German Hearing Centre, Department of Otorhinolaryngology, Medical University of Hannover, Hannover, Germany
| | - Thomas Lenarz
- German Hearing Centre, Department of Otorhinolaryngology, Medical University of Hannover, Hannover, Germany
| | - Brigitte Röder
- Biological Psychology and Neuropsychology, Institute for Psychology, Faculty of Psychology and Human Movement Science, University of Hamburg, Hamburg, Germany
| |
Collapse
|