1
Arslan NO, Luo X. Effects of pulse shape on pitch sensitivity of cochlear implant users. Hear Res 2024; 450:109075. [PMID: 38986164] [DOI: 10.1016/j.heares.2024.109075]
Abstract
Contemporary cochlear implants (CIs) use cathodic-leading symmetric biphasic (C-BP) pulses for electrical stimulation. It remains unclear whether asymmetric pulses emphasizing the anodic or cathodic phase may improve spectral and temporal coding with CIs. This study tested place- and temporal-pitch sensitivity with C-BP, anodic-centered triphasic (A-TP), and cathodic-centered triphasic (C-TP) pulse trains on apical, middle, and basal electrodes in 10 implanted ears. Virtual channel ranking (VCR) thresholds (for place-pitch sensitivity) were measured at a low pulse rate of 99 pulses per second (pps; Experiment 1) and a high pulse rate of 1000 pps (Experiment 2), and amplitude modulation frequency ranking (AMFR) thresholds (for temporal-pitch sensitivity) were measured at a 1000-pps pulse rate in Experiment 3. All stimuli were presented in monopolar mode. Across all experiments, detection thresholds, most comfortable levels (MCLs), VCR thresholds, and AMFR thresholds were higher on more basal electrodes. C-BP pulses had a longer active phase duration and thus lower detection thresholds and MCLs than A-TP and C-TP pulses. Compared to C-TP pulses, A-TP pulses had lower detection thresholds at the 99-pps but not the 1000-pps pulse rate, and lower MCLs at both pulse rates. At the 1000-pps pulse rate, A-TP pulses led to lower VCR thresholds than C-BP pulses, which in turn led to lower VCR thresholds than C-TP pulses. However, pulse shape did not affect VCR thresholds at the 99-pps pulse rate (possibly due to the fixed temporal pitch) or AMFR thresholds at the 1000-pps pulse rate (where the overall high performance may have reduced the changes with different pulse shapes). Notably, a stronger polarity effect on VCR thresholds (i.e., more improvement in VCR with A-TP than with C-TP pulses) at the 1000-pps pulse rate was associated with a stronger polarity effect on detection thresholds at the 99-pps pulse rate (consistent with greater degeneration of auditory nerve peripheral processes). The results suggest that A-TP pulses may improve place-pitch sensitivity, and thus spectral coding, for CI users, especially in cases of peripheral process degeneration.
Affiliation(s)
- Niyazi O Arslan
- Program of Speech and Hearing Science, College of Health Solutions, Arizona State University, 975 S. Myrtle Av., Tempe, AZ 85287, USA
- Xin Luo
- Program of Speech and Hearing Science, College of Health Solutions, Arizona State University, 975 S. Myrtle Av., Tempe, AZ 85287, USA.
2
Valentin O, Lehmann A, Nguyen D, Paquette S. Integrating Emotion Perception in Rehabilitation Programs for Cochlear Implant Users: A Call for a More Comprehensive Approach. J Speech Lang Hear Res 2024; 67:1635-1642. [PMID: 38619441] [DOI: 10.1044/2024_jslhr-23-00660]
Abstract
PURPOSE Postoperative rehabilitation programs for cochlear implant (CI) recipients primarily emphasize enhancing speech perception. However, effective communication in everyday social interactions requires attending to diverse verbal social cues that facilitate language comprehension. Failure to discern emotional expressions may lead to maladjusted social behavior, underscoring the importance of integrating the perception of social cues into rehabilitation initiatives to enhance CI users' well-being. After conventional rehabilitation, CI users demonstrate varying levels of emotion perception abilities. This disparity notably impacts young CI users, whose emotion perception deficits can extend to social functioning, encompassing coping strategies and social competence, even when they rely on nonauditory cues such as facial expressions. Because emotion perception abilities generally decrease with age, acknowledging emotion perception impairments in aging CI users is also crucial, especially since a direct correlation between quality-of-life scores and vocal emotion recognition abilities has been observed in adult CI users. After briefly reviewing the scope of CI rehabilitation programs and summarizing the mounting evidence on CI users' emotion perception deficits and their impact, we present our recommendations for embedding emotional training in enriched and standardized evaluation/rehabilitation programs that can improve CI users' social integration and quality of life. CONCLUSIONS Evaluating all aspects of communication, including emotion perception, in CI rehabilitation programs is crucial because it ensures a comprehensive approach that enhances both speech comprehension and the emotional dimension of communication, potentially improving CI users' social interaction and overall well-being. The development of emotion perception training holds promise for CI users and for individuals grappling with other forms of hearing loss and sensory deficits. Ultimately, adopting such a comprehensive approach has the potential to significantly elevate the overall quality of life for a broad spectrum of patients.
Affiliation(s)
- Olivier Valentin
- International Laboratory for Brain, Music and Sound Research and Centre for Research on Brain, Language and Music (BRAMS and CRBLM), Montréal, Québec, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montréal, Québec, Canada
- Department of Otolaryngology-Head and Neck Surgery, Faculty of Medicine and Health Sciences, McGill University, Montréal, Québec, Canada
- Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Alexandre Lehmann
- International Laboratory for Brain, Music and Sound Research and Centre for Research on Brain, Language and Music (BRAMS and CRBLM), Montréal, Québec, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montréal, Québec, Canada
- Department of Otolaryngology-Head and Neck Surgery, Faculty of Medicine and Health Sciences, McGill University, Montréal, Québec, Canada
- Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Don Nguyen
- International Laboratory for Brain, Music and Sound Research and Centre for Research on Brain, Language and Music (BRAMS and CRBLM), Montréal, Québec, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montréal, Québec, Canada
- Research Institute of the McGill University Health Centre, Montréal, Québec, Canada
- Sébastien Paquette
- International Laboratory for Brain, Music and Sound Research and Centre for Research on Brain, Language and Music (BRAMS and CRBLM), Montréal, Québec, Canada
- Centre for Interdisciplinary Research in Music Media and Technology, Montréal, Québec, Canada
- Department of Psychology, Faculty of Arts and Science, Trent University, Peterborough, Ontario, Canada
3
Paquette S, Gouin S, Lehmann A. Improving emotion perception in cochlear implant users: insights from machine learning analysis of EEG signals. BMC Neurol 2024; 24:115. [PMID: 38589815] [PMCID: PMC11000345] [DOI: 10.1186/s12883-024-03616-0]
Abstract
BACKGROUND Although cochlear implants can restore auditory inputs to deafferented auditory cortices, the quality of the sound signal transmitted to the brain is severely degraded, limiting functional outcomes in terms of speech perception and emotion perception. The latter deficit negatively impacts cochlear implant users' social integration and quality of life; however, emotion perception is not currently part of rehabilitation. Developing rehabilitation programs incorporating emotional cognition requires a deeper understanding of cochlear implant users' residual emotion perception abilities. METHODS To identify the neural underpinnings of these residual abilities, we investigated whether machine learning techniques could be used to identify emotion-specific patterns of neural activity in cochlear implant users. Using existing electroencephalography data from 22 cochlear implant users, we employed a random forest classifier to establish if we could model and subsequently predict from participants' brain responses the auditory emotions (vocal and musical) presented to them. RESULTS Our findings suggest that consistent emotion-specific biomarkers exist in cochlear implant users, which could be used to develop effective rehabilitation programs incorporating emotion perception training. CONCLUSIONS This study highlights the potential of machine learning techniques to improve outcomes for cochlear implant users, particularly in terms of emotion perception.
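The decoding approach this abstract describes (predicting the emotion category of a stimulus from trial-level EEG features with a random forest classifier) can be sketched as follows. This is an illustrative reconstruction, not the authors' code: the feature layout, the number of emotion classes, and the classifier settings are all placeholder assumptions, and the data are synthetic.

```python
# Sketch of emotion decoding from EEG-like features with a random forest.
# Synthetic placeholder data stand in for real single-trial EEG features.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)

n_trials, n_features = 120, 64          # e.g., trials x (channels * time bins)
X = rng.normal(size=(n_trials, n_features))
y = rng.integers(0, 3, size=n_trials)   # 3 hypothetical emotion classes

# Inject a weak class-dependent shift into one feature so that
# cross-validated decoding lands above chance (1/3 for 3 classes).
X[:, 0] += y * 0.8

clf = RandomForestClassifier(n_estimators=200, random_state=0)
scores = cross_val_score(clf, X, y, cv=5)  # 5-fold cross-validated accuracy
print(round(scores.mean(), 2))
```

With real data, above-chance cross-validated accuracy would be the evidence for the "consistent emotion-specific biomarkers" the authors report; permutation testing would typically be added to confirm significance.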
Affiliation(s)
- Sebastien Paquette
- Psychology Department, Faculty of Arts and Science, Trent University, Peterborough, ON, Canada.
- Research Institute of the McGill University Health Centre (RI-MUHC), Montreal, QC, Canada.
- Centre for Research On Brain, Language, and Music (CRBLM), International Laboratory for Brain, Music & Sound Research (BRAMS), Psychology Department, University of Montreal, Montreal, QC, Canada.
- Samir Gouin
- Centre for Research On Brain, Language, and Music (CRBLM), International Laboratory for Brain, Music & Sound Research (BRAMS), Psychology Department, University of Montreal, Montreal, QC, Canada
- Faculty of Medicine and Health Sciences, Department of Otolaryngology-Head and Neck Surgery, McGill University, Montreal, QC, Canada
- Alexandre Lehmann
- Research Institute of the McGill University Health Centre (RI-MUHC), Montreal, QC, Canada
- Centre for Research On Brain, Language, and Music (CRBLM), International Laboratory for Brain, Music & Sound Research (BRAMS), Psychology Department, University of Montreal, Montreal, QC, Canada
- Faculty of Medicine and Health Sciences, Department of Otolaryngology-Head and Neck Surgery, McGill University, Montreal, QC, Canada
4
Paquette S, Deroche MLD, Goffi-Gomez MV, Hoshino ACH, Lehmann A. Predicting emotion perception abilities for cochlear implant users. Int J Audiol 2023; 62:946-954. [PMID: 36047767] [DOI: 10.1080/14992027.2022.2111611]
Abstract
OBJECTIVE In daily life, failure to perceive emotional expressions can result in maladjusted behaviour. For cochlear implant users, perceiving emotional cues in sounds remains challenging, and the factors explaining the variability in patients' sensitivity to emotions are currently poorly understood. Understanding how these factors relate to auditory proficiency is a major challenge of cochlear implant research and is critical in addressing patients' limitations. DESIGN To fill this gap, we evaluated different aspects of auditory perception in implant users (pitch discrimination, music processing and speech intelligibility) and correlated them with performance in an emotion recognition task. STUDY SAMPLE Eighty-four adults (18-76 years old) participated in our investigation: 42 cochlear implant users and 42 controls. RESULTS Cochlear implant users performed worse than controls on all tasks, and their emotion perception abilities were correlated with age and with clinical outcome as measured in the speech intelligibility task. As previously observed, emotion perception abilities declined with age (here by about 2-3% per decade). Interestingly, even when the emotional stimuli were musical, CI users' skills relied more on processes underlying speech intelligibility. CONCLUSIONS These results suggest that speech processing remains a clinical priority even when one is interested in affective skills.
Affiliation(s)
- S Paquette
- International Laboratory for Brain Music and Sound Research, Department of Psychology, University of Montréal, Montreal, Canada
- Centre for Research on Brain, Language and Music, McGill University, Montreal, Canada
- Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Canada
- M L D Deroche
- International Laboratory for Brain Music and Sound Research, Department of Psychology, University of Montréal, Montreal, Canada
- Centre for Research on Brain, Language and Music, McGill University, Montreal, Canada
- Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Canada
- Laboratory for Hearing and Cognition, Psychology Department, Concordia University, Montreal, Canada
- M V Goffi-Gomez
- Cochlear Implant Group, School of Medicine, Hospital das Clínicas, Universidade de São Paulo, São Paulo, Brazil
- A C H Hoshino
- Cochlear Implant Group, School of Medicine, Hospital das Clínicas, Universidade de São Paulo, São Paulo, Brazil
- A Lehmann
- International Laboratory for Brain Music and Sound Research, Department of Psychology, University of Montréal, Montreal, Canada
- Centre for Research on Brain, Language and Music, McGill University, Montreal, Canada
- Department of Otolaryngology, Head and Neck Surgery, McGill University, Montreal, Canada
5
Ross P, Williams E, Herbert G, Manning L, Lee B. Turn that music down! Affective musical bursts cause an auditory dominance in children recognizing bodily emotions. J Exp Child Psychol 2023; 230:105632. [PMID: 36731279] [DOI: 10.1016/j.jecp.2023.105632]
Abstract
Previous work has shown that different sensory channels are prioritized across the life course, with children preferentially responding to auditory information. The aim of the current study was to investigate whether the mechanism that drives this auditory dominance in children occurs at the level of encoding (overshadowing) or when the information is integrated to form a response (response competition). Given that response competition depends on a modality integration attempt, a combination of stimuli that could not be integrated was used, so that if children's auditory dominance persisted, this would provide evidence for the overshadowing mechanism over the response-competition mechanism. Younger children (≤7 years), older children (8-11 years), and adults (18+ years) were asked to recognize the emotion (happy or fearful) in either nonvocal auditory musical emotional bursts or human visual bodily expressions of emotion in three conditions: unimodal, congruent bimodal, and incongruent bimodal. We found that children performed significantly worse at recognizing emotional bodies when they heard (and were told to ignore) musical emotional bursts. This provides the first evidence for auditory dominance in both younger and older children presented with modally incongruent emotional stimuli. The continued presence of auditory dominance, despite the lack of modality integration, was taken as supportive evidence for the overshadowing explanation. These findings are discussed in relation to educational considerations, and future sensory dominance investigations and models are proposed.
Affiliation(s)
- Paddy Ross
- Department of Psychology, Durham University, Durham DH1 3LE, UK.
- Ella Williams
- Department of Psychology, Durham University, Durham DH1 3LE, UK
- Oxford Neuroscience, University of Oxford, Oxford OX3 9DU, UK
- Gemma Herbert
- Department of Psychology, Durham University, Durham DH1 3LE, UK
- Laura Manning
- Department of Psychology, Durham University, Durham DH1 3LE, UK
- Becca Lee
- Department of Psychology, Durham University, Durham DH1 3LE, UK
6
von Eiff CI, Frühholz S, Korth D, Guntinas-Lichius O, Schweinberger SR. Crossmodal benefits to vocal emotion perception in cochlear implant users. iScience 2022; 25:105711. [PMID: 36578321] [PMCID: PMC9791346] [DOI: 10.1016/j.isci.2022.105711]
Abstract
Speech comprehension counts as a benchmark outcome of cochlear implants (CIs), disregarding the communicative importance of efficient integration of audiovisual (AV) socio-emotional information. We investigated effects of time-synchronized facial information on vocal emotion recognition (VER). In Experiment 1, 26 CI users and normal-hearing (NH) individuals classified emotions for auditory-only, AV congruent, or AV incongruent utterances. In Experiment 2, we compared crossmodal effects between groups with adaptive testing, calibrating auditory difficulty via voice morphs ranging from emotional caricatures to anti-caricatures. CI users performed worse than NH individuals, and their VER was correlated with quality of life. Importantly, CI users showed larger benefits to VER from congruent facial emotional information even at equal auditory-only performance levels, suggesting that their larger crossmodal benefits result from deafness-related compensation rather than degraded acoustic representations. Crucially, vocal caricatures enhanced CI users' VER. The findings advocate the use of AV stimuli during CI rehabilitation and suggest caricaturing as a perspective for both perceptual training and sound processor technology.
Affiliation(s)
- Celina Isabelle von Eiff
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany
- Voice Research Unit, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany
- DFG SPP 2392 Visual Communication (ViCom), Frankfurt am Main, Germany
- Sascha Frühholz
- Department of Psychology (Cognitive and Affective Neuroscience), Faculty of Arts and Social Sciences, University of Zurich, 8050 Zurich, Switzerland
- Department of Psychology, University of Oslo, 0373 Oslo, Norway
| | - Daniela Korth
- Department of Otorhinolaryngology, Jena University Hospital, 07747 Jena, Germany
- Stefan Robert Schweinberger
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany
- Voice Research Unit, Institute of Psychology, Friedrich Schiller University Jena, 07743 Jena, Germany
- DFG SPP 2392 Visual Communication (ViCom), Frankfurt am Main, Germany
7
Arslan NO, Luo X. Assessing the Relationship Between Pitch Perception and Neural Health in Cochlear Implant Users. J Assoc Res Otolaryngol 2022; 23:875-887. [PMID: 36329369] [PMCID: PMC9789247] [DOI: 10.1007/s10162-022-00876-w]
Abstract
Various neural health estimates have been shown to indicate the density of spiral ganglion neurons in animal and modeling studies of cochlear implants (CIs). However, when applied to human CI users, these neural health estimates based on psychophysical and electrophysiological measures are not consistently correlated with each other or with speech recognition performance. This study investigated whether the neural health estimates have stronger correlations with temporal and place pitch sensitivity than with speech recognition performance. On five electrodes in 12 tested ears of eight adult CI users, the polarity effect (PE), multipulse integration (MPI), and interphase gap (IPG) effect on the amplitude growth function (AGF) of the electrically evoked compound action potential (ECAP) were measured to estimate neural health, while thresholds of amplitude modulation frequency ranking (AMFR) and virtual channel ranking (VCR) were measured to indicate temporal and place pitch sensitivity. AzBio sentence recognition in noise was measured using the clinical CI processor for each ear. The results showed significantly poorer AMFR and VCR thresholds on the basal electrodes than on the apical and middle electrodes. Across ears and electrodes, only the IPG offset effect on ECAP AGF had a nearly significant negative correlation with the VCR threshold after removing the outliers. No significant across-ear correlations were found between the mean neural health estimates, mean pitch-ranking thresholds, and AzBio sentence recognition scores. This study suggests that the central axon demyelination reflected by the IPG offset effect may be important for the place pitch sensitivity of CI users and that the IPG offset effect may be used to predict the perceptual resolution of virtual channels for CI programming.
Affiliation(s)
- Niyazi O. Arslan
- Program of Speech and Hearing Science, College of Health Solutions, Arizona State University, 975 S. Myrtle Av., Tempe, AZ 85287 USA
- Xin Luo
- Program of Speech and Hearing Science, College of Health Solutions, Arizona State University, 975 S. Myrtle Av., Tempe, AZ 85287 USA
8
Schweinberger SR, von Eiff CI. Enhancing socio-emotional communication and quality of life in young cochlear implant recipients: Perspectives from parameter-specific morphing and caricaturing. Front Neurosci 2022; 16:956917. [PMID: 36090287] [PMCID: PMC9453832] [DOI: 10.3389/fnins.2022.956917]
Abstract
The use of digitally modified stimuli with enhanced diagnostic information to improve verbal communication in children with sensory or central handicaps was pioneered by Tallal and colleagues in 1996, who targeted speech comprehension in language-learning impaired children. Today, researchers are aware that successful communication cannot be reduced to linguistic information; it depends strongly on the quality of communication, including non-verbal socio-emotional communication. In children with cochlear implants (CIs), quality of life (QoL) is affected, but this may be related to the ability to recognize emotions in a voice rather than to speech comprehension alone. In this manuscript, we describe a family of new methods, termed parameter-specific facial and vocal morphing. We propose that these provide novel perspectives for assessing sensory determinants of human communication, but also for enhancing socio-emotional communication and QoL in the context of sensory handicaps, via training with digitally enhanced, caricatured stimuli. Based on promising initial results with various target groups, including people with age-related macular degeneration, people with low abilities to recognize faces, older people, and adult CI users, we discuss opportunities and challenges for perceptual training interventions for young CI users based on enhanced auditory stimuli, as well as perspectives for CI sound processing technology.
Affiliation(s)
- Stefan R. Schweinberger
- Voice Research Unit, Friedrich Schiller University Jena, Jena, Germany
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, Jena, Germany
- Deutsche Forschungsgemeinschaft (DFG) Research Unit Person Perception, Friedrich Schiller University Jena, Jena, Germany
- Celina I. von Eiff
- Voice Research Unit, Friedrich Schiller University Jena, Jena, Germany
- Department for General Psychology and Cognitive Neuroscience, Institute of Psychology, Friedrich Schiller University Jena, Jena, Germany
9
Parameter-Specific Morphing Reveals Contributions of Timbre to the Perception of Vocal Emotions in Cochlear Implant Users. Ear Hear 2022; 43:1178-1188. [PMID: 34999594] [PMCID: PMC9197138] [DOI: 10.1097/aud.0000000000001181]
Abstract
Objectives: Research on cochlear implants (CIs) has focused on speech comprehension, with little research on perception of vocal emotions. We compared emotion perception in CI users and normal-hearing (NH) individuals, using parameter-specific voice morphing. Design: Twenty-five CI users and 25 NH individuals (matched for age and gender) performed fearful-angry discriminations on bisyllabic pseudoword stimuli from morph continua across all acoustic parameters (Full), or across selected parameters (F0, Timbre, or Time information), with other parameters set to a noninformative intermediate level. Results: Unsurprisingly, CI users as a group showed lower performance in vocal emotion perception overall. Importantly, while NH individuals used timbre and fundamental frequency (F0) information to equivalent degrees, CI users were far more efficient in using timbre (compared to F0) information for this task. Thus, under the conditions of this task, CIs were inefficient in conveying emotion based on F0 alone. There was enormous variability between CI users, with low performers responding close to guessing level. Echoing previous research, we found that better vocal emotion perception was associated with better quality of life ratings. Conclusions: Some CI users can utilize timbre cues remarkably well when perceiving vocal emotions.
10
Age-Related Changes in Voice Emotion Recognition by Postlingually Deafened Listeners With Cochlear Implants. Ear Hear 2022; 43:323-334. [PMID: 34406157] [PMCID: PMC8847542] [DOI: 10.1097/aud.0000000000001095]
Abstract
OBJECTIVES Identification of emotional prosody in speech declines with age in normally hearing (NH) adults. Cochlear implant (CI) users have deficits in the perception of prosody, but the effects of age on vocal emotion recognition by adult postlingually deaf CI users are not known. The objective of the present study was to examine age-related changes in CI users' and NH listeners' emotion recognition. DESIGN Participants included 18 CI users (29.6 to 74.5 years) and 43 NH adults (25.8 to 74.8 years). Participants listened to emotion-neutral sentences spoken by a male and a female talker in five emotions (happy, sad, scared, angry, neutral). NH adults heard them in four conditions: unprocessed (full spectrum) speech and 16-channel, 8-channel, and 4-channel noise-band vocoded speech. The adult CI users listened only to unprocessed (full spectrum) speech. Sensitivity (d') to emotions and Reaction Times were obtained using a single-interval, five-alternative, forced-choice paradigm. RESULTS For NH participants, results indicated age-related declines in Accuracy and d', and age-related increases in Reaction Time, in all conditions. For CI users, results indicated an overall deficit, as well as age-related declines, in overall d'; their Reaction Times were elevated compared with those of NH listeners and did not show age-related changes. Analysis of Accuracy scores (hit rates) was generally consistent with the d' data. CONCLUSIONS Both CI users and NH listeners showed age-related deficits in emotion identification. The CI users' overall deficit in emotion perception, and their slower response times, suggest impaired social communication, which may in turn impact overall well-being, particularly for older CI users, as lower vocal emotion recognition scores have been associated with poorer subjective quality of life in CI patients.
11
Picou EM, Rakita L, Buono GH, Moore TM. Effects of Increasing the Overall Level or Fitting Hearing Aids on Emotional Responses to Sounds. Trends Hear 2021; 25:23312165211049938. [PMID: 34866509] [PMCID: PMC8825634] [DOI: 10.1177/23312165211049938]
Abstract
Adults with hearing loss demonstrate a reduced range of emotional responses to nonspeech sounds compared to their peers with normal hearing. The purpose of this study was to evaluate two possible strategies for addressing the effects of hearing loss on emotional responses: (a) increasing overall level and (b) hearing aid use (with and without nonlinear frequency compression, NFC). Twenty-three adults (mean age = 65.5 years) with mild-to-severe sensorineural hearing loss and 17 adults (mean age = 56.2 years) with normal hearing participated. All adults provided ratings of valence and arousal without hearing aids in response to nonspeech sounds presented at a moderate and at a high level. Adults with hearing loss also provided ratings while using individually fitted study hearing aids with two settings (NFC-OFF or NFC-ON). Hearing loss and hearing aid use impacted ratings of valence but not arousal. Listeners with hearing loss rated pleasant sounds as less pleasant than their peers, confirming findings in the extant literature. For both groups, increasing the overall level resulted in lower ratings of valence. For listeners with hearing loss, the use of hearing aids (NFC-OFF) also resulted in lower ratings of valence but to a lesser extent than increasing the overall level. Activating NFC resulted in ratings that were similar to ratings without hearing aids (with a moderate presentation level) but did not improve ratings to match those from the listeners with normal hearing. These findings suggest that current interventions do not ameliorate the effects of hearing loss on emotional responses to sound.
Affiliation(s)
- Erin M Picou
- Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, TN, USA
- Lori Rakita
- Department of Otolaryngology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
- Gabrielle H Buono
- Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, TN, USA
12
Tamati TN, Moberly AC. Talker Adaptation and Lexical Difficulty Impact Word Recognition in Adults with Cochlear Implants. Audiol Neurootol 2021; 27:260-270. [PMID: 34535583] [DOI: 10.1159/000518643]
Abstract
INTRODUCTION Talker-specific adaptation facilitates speech recognition in normal-hearing listeners. This study examined talker adaptation in adult cochlear implant (CI) users. Three hypotheses were tested: (1) high-performing adult CI users show improved word recognition following exposure to a talker ("talker adaptation"), particularly for lexically hard words, (2) individual performance is determined by auditory sensitivity and neurocognitive skills, and (3) individual performance relates to real-world functioning. METHODS Fifteen high-performing, post-lingually deaf adult CI users completed a word recognition task consisting of 6 single-talker blocks (3 female/3 male native English speakers); words were lexically "easy" and "hard." Recognition accuracy was assessed "early" and "late" (first vs. last 10 trials); adaptation was assessed as the difference between late and early accuracy. Participants also completed measures of spectral-temporal processing and neurocognitive skills, as well as real-world measures of multiple-talker sentence recognition and quality of life (QoL). RESULTS CI users showed limited talker adaptation overall, but performance improved for lexically hard words. Stronger spectral-temporal processing and neurocognitive skills were weakly to moderately associated with more accurate word recognition and greater talker adaptation for hard words. Finally, word recognition accuracy for hard words was moderately related to multiple-talker sentence recognition and QoL. CONCLUSION Findings demonstrate a limited talker adaptation benefit for recognition of hard words in adult CI users. Both auditory sensitivity and neurocognitive skills contribute to performance, suggesting additional benefit from adaptation for individuals with stronger skills. Finally, processing differences related to talker adaptation and lexical difficulty may be relevant to real-world functioning.
Affiliation(s)
- Terrin N Tamati
- Department of Otolaryngology, Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA; Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Aaron C Moberly
- Department of Otolaryngology, Head & Neck Surgery, The Ohio State University Wexner Medical Center, Columbus, Ohio, USA
13
Fuller C, Free R, Maat B, Başkent D. Self-reported music perception is related to quality of life and self-reported hearing abilities in cochlear implant users. Cochlear Implants Int 2021; 23:1-10. PMID: 34470590. DOI: 10.1080/14670100.2021.1948716.
Abstract
OBJECTIVES To investigate the relationship between self-reported music perception and appreciation and (1) quality of life (QoL) and (2) self-assessed hearing ability in 98 post-lingually deafened cochlear implant (CI) users with a wide age range. METHODS Participants completed three questionnaires: (1) the Dutch Musical Background Questionnaire (DMBQ), which measures music listening habits, the quality of the sound of music, and the self-assessed perception of elements of music; (2) the Nijmegen Cochlear Implant Questionnaire (NCIQ), which measures health-related QoL; and (3) the Speech, Spatial and Qualities of Hearing Scale (SSQ), which measures self-assessed hearing ability. Additionally, speech perception was behaviorally measured with a phoneme-in-word identification task. RESULTS A decline in music listening habits and low ratings of the quality of music after implantation were reported on the DMBQ. A significant relationship was found between the music measures and the NCIQ and SSQ; no significant relationships were observed between the DMBQ and speech perception scores. CONCLUSIONS The findings suggest some relationship between CI users' self-reported music perception ability and both QoL and self-reported hearing ability. While the causal relationship was not evaluated here, the findings may imply that music training programs and/or device improvements that improve music perception may also improve QoL and hearing ability.
Affiliation(s)
- Christina Fuller
- Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands; Treant Zorggroep, Emmen, Netherlands
- Rolien Free
- Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands
- Bert Maat
- Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University of Groningen, University Medical Center Groningen, Groningen, Netherlands; Research School of Behavioral and Cognitive Neurosciences, Graduate School of Medical Sciences, University of Groningen, Groningen, Netherlands
14
Abstract
OBJECTIVES Individuals with cochlear implants (CIs) show reduced word and auditory emotion recognition abilities relative to their peers with normal hearing. Modern CI processing strategies are designed to preserve acoustic cues requisite for word recognition rather than those cues required for accessing other signal information (e.g., talker gender or emotional state). While word recognition is undoubtedly important for communication, the inaccessibility of this additional signal information in speech may lead to negative social experiences and outcomes for individuals with hearing loss. This study aimed to evaluate whether the emphasis on word recognition preservation in CI processing has unintended consequences on the perception of other talker information, such as emotional state. DESIGN Twenty-four young adult listeners with normal hearing listened to sentences and either reported a target word in each sentence (word recognition task) or selected the emotion of the talker (emotion recognition task) from a list of options (Angry, Calm, Happy, and Sad). Sentences were blocked by task type (emotion recognition versus word recognition) and processing condition (unprocessed versus 8-channel noise vocoder) and presented randomly within the block at three signal-to-noise ratios (SNRs) in a background of speech-shaped noise. Confusion matrices showed the number of errors in emotion recognition by listeners. RESULTS Listeners demonstrated better emotion recognition performance than word recognition performance at the same SNR. Unprocessed speech resulted in higher recognition rates than vocoded stimuli. Recognition performance (for both words and emotions) decreased with worsening SNR. Vocoding speech resulted in a greater negative impact on emotion recognition than it did for word recognition. 
CONCLUSIONS These data confirm prior work that suggests that in background noise, emotional prosodic information in speech is easier to recognize than word information, even after simulated CI processing. However, emotion recognition may be more negatively impacted by background noise and CI processing than word recognition. Future work could explore CI processing strategies that better encode prosodic information and investigate this effect in individuals with CIs as opposed to vocoded simulation. This study emphasized the need for clinicians to consider not only word recognition but also other aspects of speech that are critical to successful social communication.
15
Voice Emotion Recognition by Mandarin-Speaking Children with Cochlear Implants. Ear Hear 2021; 43:165-180. PMID: 34288631. DOI: 10.1097/aud.0000000000001085.
Abstract
Objectives Emotional expressions are very important in social interactions. Children with cochlear implants can have voice emotion recognition deficits due to device limitations. Mandarin-speaking children with cochlear implants may face greater challenges than those speaking nontonal languages because pitch information is not well preserved in cochlear implants; such children could benefit from child-directed speech, which carries more exaggerated and distinctive acoustic cues for different emotions. This study investigated voice emotion recognition, using both adult-directed and child-directed materials, in Mandarin-speaking children with cochlear implants compared with normal-hearing peers. The authors hypothesized that both the children with cochlear implants and those with normal hearing would perform better with child-directed materials than with adult-directed materials. Design Thirty children (7.17-17 years of age) with cochlear implants and 27 children with normal hearing (6.92-17.08 years of age) were recruited for this study. Participants completed a nonverbal reasoning test, speech recognition tests, and a voice emotion recognition task. Children with cochlear implants over the age of 10 years also completed the Chinese version of the Nijmegen Cochlear Implant Questionnaire to evaluate health-related quality of life. The voice emotion recognition task was a five-alternative, forced-choice paradigm containing sentences spoken with five emotions (happy, angry, sad, scared, and neutral) in a child-directed or adult-directed manner. Results Acoustic analyses showed substantial variations across emotions in all materials, mainly on measures of mean fundamental frequency and fundamental frequency range. Mandarin-speaking children with cochlear implants displayed significantly poorer performance than normal-hearing peers in voice emotion perception tasks, regardless of whether performance was measured in accuracy scores, Hu values, or reaction time.
Children with cochlear implants and children with normal hearing were mainly affected by the mean fundamental frequency in speech emotion recognition tasks. Chronological age had a significant effect on speech emotion recognition in children with normal hearing; however, there was no significant correlation between chronological age and accuracy scores in speech emotion recognition in children with implants. Significant effects of specific emotion and test materials (better performance with child-directed materials) in both groups of children were observed. Among the children with cochlear implants, age at implantation, percentage scores of nonverbal intelligence quotient test, and sentence recognition threshold in quiet could predict recognition performance in both accuracy scores and Hu values. Time wearing cochlear implant could predict reaction time in emotion perception tasks among children with cochlear implants. No correlation was observed between the accuracy score in voice emotion perception and the self-reported scores of health-related quality of life; however, the latter were significantly correlated with speech recognition skills among Mandarin-speaking children with cochlear implants. Conclusions Mandarin-speaking children with cochlear implants could have significant deficits in voice emotion recognition tasks compared with their normally hearing peers and can benefit from the exaggerated prosody of child-directed speech. The effects of age at cochlear implantation, speech and language development, and cognition could play an important role in voice emotion perception by Mandarin-speaking children with cochlear implants.
16
Weighting of Prosodic and Lexical-Semantic Cues for Emotion Identification in Spectrally Degraded Speech and With Cochlear Implants. Ear Hear 2021; 42:1727-1740. PMID: 34294630. PMCID: PMC8545870. DOI: 10.1097/aud.0000000000001057.
Abstract
OBJECTIVES Normally-hearing (NH) listeners rely more on prosodic cues than on lexical-semantic cues for emotion perception in speech. In everyday spoken communication, the ability to decipher conflicting information between prosodic and lexical-semantic cues to emotion can be important: for example, in identifying sarcasm or irony. Speech degradation in cochlear implants (CIs) can be sufficiently overcome to identify lexical-semantic cues, but the distortion of voice pitch cues makes it particularly challenging to hear prosody with CIs. The purpose of this study was to examine changes in relative reliance on prosodic and lexical-semantic cues in NH adults listening to spectrally degraded speech and adult CI users. We hypothesized that, compared with NH counterparts, CI users would show increased reliance on lexical-semantic cues and reduced reliance on prosodic cues for emotion perception. We predicted that NH listeners would show a similar pattern when listening to CI-simulated versions of emotional speech. DESIGN Sixteen NH adults and 8 postlingually deafened adult CI users participated in the study. Sentences were created to convey five lexical-semantic emotions (angry, happy, neutral, sad, and scared), with five sentences expressing each category of emotion. Each of these 25 sentences was then recorded with the 5 (angry, happy, neutral, sad, and scared) prosodic emotions by 2 adult female talkers. The resulting stimulus set included 125 recordings (25 Sentences × 5 Prosodic Emotions) per talker, of which 25 were congruent (consistent lexical-semantic and prosodic cues to emotion) and the remaining 100 were incongruent (conflicting lexical-semantic and prosodic cues to emotion). The recordings were processed to create three levels of spectral degradation: full-spectrum, and CI-simulated (noise-vocoded) with either 8 or 16 channels of spectral information.
Twenty-five recordings (one sentence per lexical-semantic emotion recorded in all five prosodies) were used for a practice run in the full-spectrum condition. The remaining 100 recordings were used as test stimuli. For each talker and condition of spectral degradation, listeners indicated the emotion associated with each recording in a single-interval, five-alternative forced-choice task. The responses were scored as proportion correct, where "correct" responses corresponded to the lexical-semantic emotion. CI users heard only the full-spectrum condition. RESULTS The results showed a significant interaction between hearing status (NH, CI) and congruency in identifying the lexical-semantic emotion associated with the stimuli. This interaction was as predicted: CI users showed increased reliance on lexical-semantic cues in the incongruent conditions, while NH listeners showed increased reliance on the prosodic cues in the incongruent conditions. As predicted, NH listeners showed increased reliance on lexical-semantic cues to emotion when the stimuli were spectrally degraded. CONCLUSIONS The present study confirmed previous findings of prosodic dominance for emotion perception by NH listeners in the full-spectrum condition. Further, novel findings with CI patients and with NH listeners in the CI-simulated conditions showed reduced reliance on prosodic cues and increased reliance on lexical-semantic cues to emotion. These results have implications for CI listeners' ability to perceive conflicts between prosodic and lexical-semantic cues, with repercussions for their identification of sarcasm and humor. Understanding instances of sarcasm or humor can impact a person's ability to develop relationships, follow conversations, understand a speaker's vocal emotion and intended message, follow jokes, and communicate effectively in everyday life.
17
de Boer MJ, Jürgens T, Cornelissen FW, Başkent D. Degraded visual and auditory input individually impair audiovisual emotion recognition from speech-like stimuli, but no evidence for an exacerbated effect from combined degradation. Vision Res 2020; 180:51-62. PMID: 33360918. DOI: 10.1016/j.visres.2020.12.002.
Abstract
Emotion recognition requires optimal integration of the multisensory signals from vision and hearing. A sensory loss in either or both modalities can lead to changes in integration and related perceptual strategies. To investigate potential acute effects of combined impairments due to sensory information loss only, we degraded the visual and auditory information in audiovisual video-recordings, and presented these to a group of healthy young volunteers. These degradations were intended to approximate some aspects of vision and hearing impairment in simulation. Other aspects, related to advanced age, potential health issues, but also long-term adaptation and cognitive compensation strategies, were not included in the simulations. Besides accuracy of emotion recognition, eye movements were recorded to capture perceptual strategies. Our data show that emotion recognition performance decreases when degraded visual and auditory information are presented in isolation, but simultaneously degrading both modalities does not exacerbate these isolated effects. Moreover, degrading the visual information strongly impacts both recognition performance and viewing behavior. In contrast, degrading auditory information alongside normal or degraded video had little (additional) effect on performance or gaze. Nevertheless, our results hold promise for visually impaired individuals, because the addition of any audio to any video greatly facilitates performance, even though adding audio does not completely compensate for the negative effects of video degradation. Additionally, observers modified their viewing behavior to degraded video in order to maximize their performance. Therefore, optimizing the hearing of visually impaired individuals and teaching them such optimized viewing behavior could be worthwhile endeavors for improving emotion recognition.
Affiliation(s)
- Minke J de Boer
- Research School of Behavioural and Cognitive Neuroscience (BCN), University of Groningen, Groningen, The Netherlands; Laboratory of Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands; Department of Otorhinolaryngology - Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Tim Jürgens
- Institute of Acoustics, Technische Hochschule Lübeck, Lübeck, Germany
- Frans W Cornelissen
- Research School of Behavioural and Cognitive Neuroscience (BCN), University of Groningen, Groningen, The Netherlands; Laboratory of Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Deniz Başkent
- Research School of Behavioural and Cognitive Neuroscience (BCN), University of Groningen, Groningen, The Netherlands; Department of Otorhinolaryngology - Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
18
Shafiro V, Hebb M, Walker C, Oh J, Hsiao Y, Brown K, Sheft S, Li Y, Vasil K, Moberly AC. Development of the Basic Auditory Skills Evaluation Battery for Online Testing of Cochlear Implant Listeners. Am J Audiol 2020; 29:577-590. PMID: 32946250. DOI: 10.1044/2020_aja-19-00083.
Abstract
Purpose Cochlear implant (CI) performance varies considerably across individuals and across domains of auditory function, but clinical testing is typically restricted to speech intelligibility. The goals of this study were (a) to develop a basic auditory skills evaluation battery of tests for comprehensive assessment of ecologically relevant aspects of auditory perception and (b) to compare CI listeners' performance on the battery when tested in the laboratory by an audiologist or independently at home. Method The battery included 17 tests to evaluate (a) basic spectrotemporal processing, (b) processing of music and environmental sounds, and (c) speech perception in both quiet and background noise. The battery was administered online to three groups of adult listeners: two groups of postlingual CI listeners and a group of older normal-hearing (ONH) listeners of similar age. The ONH group and one CI group were tested in a laboratory by an audiologist, whereas the other CI group self-tested independently at home following online instructions. Results Results indicated a wide range in the performance of CI but not ONH listeners. Significant differences were not found between the two CI groups on any test, whereas on all but two tests, CI listeners' performance was lower than that of the ONH participants. Principal component analysis revealed that four components accounted for 82% of the variance in measured results, with component loading indicating that the test battery successfully captures differences across dimensions of auditory perception. Conclusions These results provide initial support for the use of the basic auditory skills evaluation battery for comprehensive online assessment of auditory skills in adult CI listeners.
Affiliation(s)
- Valeriy Shafiro
- Department of Communication Disorders & Sciences, Rush University Medical Center, Chicago, IL
- Megan Hebb
- Department of Communication Disorders & Sciences, Rush University Medical Center, Chicago, IL
- Chad Walker
- Department of Communication Disorders & Sciences, Rush University Medical Center, Chicago, IL
- Jasper Oh
- Department of Communication Disorders & Sciences, Rush University Medical Center, Chicago, IL
- Ying Hsiao
- Department of Communication Disorders & Sciences, Rush University Medical Center, Chicago, IL
- Kelly Brown
- Department of Communication Disorders & Sciences, Rush University Medical Center, Chicago, IL
- Stanley Sheft
- Department of Communication Disorders & Sciences, Rush University Medical Center, Chicago, IL
- Yan Li
- Department of Communication Disorders & Sciences, Rush University Medical Center, Chicago, IL
- Kara Vasil
- Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus
- Aaron C. Moberly
- Department of Otolaryngology, The Ohio State University Wexner Medical Center, Columbus
19
Skidmore JA, Vasil KJ, He S, Moberly AC. Explaining Speech Recognition and Quality of Life Outcomes in Adult Cochlear Implant Users: Complementary Contributions of Demographic, Sensory, and Cognitive Factors. Otol Neurotol 2020; 41:e795-e803. PMID: 32558759. PMCID: PMC7875311. DOI: 10.1097/mao.0000000000002682.
Abstract
HYPOTHESES Adult cochlear implant (CI) outcomes depend on demographic, sensory, and cognitive factors. However, these factors have not been examined together comprehensively for relations to different outcome types, such as speech recognition versus quality of life (QOL). Three hypotheses were tested: 1) speech recognition will be explained most strongly by sensory factors, whereas QOL will be explained more strongly by cognitive factors. 2) Different speech recognition outcome domains (sentences versus words) and different QOL domains (physical versus social versus psychological functioning) will be explained differentially by demographic, sensory, and cognitive factors. 3) Including cognitive factors as predictors will provide more power to explain outcomes than demographic and sensory predictors alone. BACKGROUND A better understanding of the contributors to CI outcomes is needed to prognosticate outcomes before surgery, explain outcomes after surgery, and tailor rehabilitation efforts. METHODS Forty-one adult postlingual experienced CI users were assessed for sentence and word recognition, as well as hearing-related QOL, along with a broad collection of predictors. Partial least squares regression was used to identify factors that were most predictive of outcome measures. RESULTS Supporting our hypotheses, speech recognition abilities were most strongly dependent on sensory skills, while QOL outcomes required a combination of cognitive, sensory, and demographic predictors. The inclusion of cognitive measures increased the ability to explain outcomes, mainly for QOL. CONCLUSIONS Explaining variability in adult CI outcomes requires a broad assessment approach. Identifying the most important predictors depends on the particular outcome domain and even the particular measure of interest.
Affiliation(s)
- Jeffrey A Skidmore
- The Ohio State University Wexner Medical Center, Department of Otolaryngology-Head & Neck Surgery, Columbus, Ohio
20
Luo X, Kolberg C, Pulling KR, Azuma T. Psychoacoustic and Demographic Factors for Speech Recognition of Older Adult Cochlear Implant Users. J Speech Lang Hear Res 2020; 63:1712-1725. PMID: 32501736. DOI: 10.1044/2020_jslhr-19-00225.
Abstract
Purpose This study aimed to evaluate the effects of aging and cochlear implant (CI) use on psychoacoustic and speech recognition abilities and to assess the relative contributions of psychoacoustic and demographic factors to speech recognition of older CI (OCI) users. Method Twelve OCI users, 12 older acoustic-hearing (OAH) listeners age-matched to the OCI users, and 12 younger normal-hearing (YNH) listeners underwent tests of temporal amplitude modulation detection, temporal gap detection in noise, and spectral-temporal modulated ripple discrimination. Speech reception thresholds were measured for sentence recognition in multitalker, speech-babble noise. Results Statistical analyses showed that, for the small sample of OAH listeners, the degree of hearing loss did not significantly affect any outcome measure. Temporal resolution, spectral resolution, and speech recognition all significantly degraded with both age and the use of a CI (i.e., YNH better than OAH and OAH better than OCI performance). Although both duration of CI use and ripple discrimination performance were significantly correlated with OCI users' speech recognition, the duration of CI use no longer had a significant effect on speech recognition once spectral-temporal ripple discrimination performance was taken into account. For OAH listeners, the only significant predictor of speech recognition was temporal gap detection performance. Conclusion The preliminary results suggest that speech recognition of OCI users may improve with longer duration of CI use, mainly due to higher perceptual acuity to spectral-temporal modulated ripples in acoustic stimuli.
Affiliation(s)
- Xin Luo
- College of Health Solutions, Arizona State University, Tempe
- Tamiko Azuma
- College of Health Solutions, Arizona State University, Tempe
21
Nagels L, Gaudrain E, Vickers D, Matos Lopes M, Hendriks P, Başkent D. Development of vocal emotion recognition in school-age children: The EmoHI test for hearing-impaired populations. PeerJ 2020; 8:e8773. PMID: 32274264. PMCID: PMC7130108. DOI: 10.7717/peerj.8773.
Abstract
Traditionally, emotion recognition research has primarily used pictures and videos, while audio test materials are not always readily available or are not of good quality, which may be particularly important for studies with hearing-impaired listeners. Here we present a vocal emotion recognition test with pseudospeech productions from multiple speakers expressing three core emotions (happy, angry, and sad): the EmoHI test. The high sound quality recordings make the test suitable for use with populations of children and adults with normal or impaired hearing. Here we present normative data for vocal emotion recognition development in normal-hearing (NH) school-age children using the EmoHI test. Furthermore, we investigated cross-language effects by testing NH Dutch and English children, and the suitability of the EmoHI test for hearing-impaired populations, specifically for prelingually deaf Dutch children with cochlear implants (CIs). Our results show that NH children's performance improved significantly with age from the youngest age group onwards (4-6 years: 48.9%, on average). However, NH children's performance did not reach adult-like values (adults: 94.1%) even for the oldest age group tested (10-12 years: 81.1%). Additionally, the effect of age on NH children's development did not differ across languages. All except one CI child performed at or above chance level, showing the suitability of the EmoHI test. In addition, seven out of 14 CI children performed within the NH age-appropriate range, and nine out of 14 CI children did so when performance was adjusted for hearing age, measured from their age at implantation. However, CI children showed great variability in their performance, ranging from ceiling (97.2%) to below chance-level performance (27.8%), which could not be explained by chronological age alone.
The strong and consistent development in performance with age, the lack of significant differences across the tested languages for NH children, and the above-chance performance of most CI children affirm the usability and versatility of the EmoHI test.
Affiliation(s)
- Leanne Nagels
- Center for Language and Cognition Groningen (CLCG), University of Groningen, Groningen, The Netherlands; Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, Groningen, The Netherlands
- Etienne Gaudrain
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, Groningen, The Netherlands; CNRS, Lyon Neuroscience Research Center, Université de Lyon, Lyon, France
- Deborah Vickers
- Cambridge Hearing Group, Clinical Neurosciences Department, University of Cambridge, Cambridge, United Kingdom
- Marta Matos Lopes
- Hearbase Ltd, The Hearing Specialists, Kent, United Kingdom; The Ear Institute, University College London, London, United Kingdom
- Petra Hendriks
- Center for Language and Cognition Groningen (CLCG), University of Groningen, Groningen, The Netherlands
- Deniz Başkent
- Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, Groningen, The Netherlands
22
Suneel D, Davidson LS, Lieu J. Self-reported hearing quality of life measures in pediatric cochlear implant recipients with bilateral input. Cochlear Implants Int 2020; 21:83-91. PMID: 31590628. PMCID: PMC7002198. DOI: 10.1080/14670100.2019.1670486.
Abstract
Objective: Self-reported hearing quality of life (QoL) for pediatric cochlear implant (CI) recipients was examined, asking whether (1) children with CIs have similar QoL to those with less severe hearing loss (HL); (2) children with different bilateral CI (BCI) device configurations report different QoL; and (3) audiological, demographic, and spoken language factors affect hearing QoL. Design: One hundred four children (ages 7-11 years) using bimodal devices or BCIs participated. The Hearing Environments and Reflection of Quality of Life (HEAR-QL) questionnaire, receptive language tests, and speech perception tests were administered. HEAR-QL scores of CI recipients were compared to scores of age-mates with normal hearing and with mild to profound HL. Results: HEAR-QL scores for CI participants were similar to those of children with less severe HL and did not differ with device configuration. Emotion identification and word recognition in noise correlated significantly with HEAR-QL scores. Discussion: CI recipients reported that HL hinders social participation. Better understanding of speech in noise and of emotional content was associated with fewer hearing-related difficulties on the HEAR-QL. Conclusions: Noisy situations encountered in educational settings should be addressed for children with HL. The link between perception of emotion and hearing-related QoL for CI recipients should be further examined.
Affiliation(s)
- Lisa S. Davidson
- Department of Otolaryngology, Washington University School of Medicine, St. Louis, MO, USA
- Program in Audiology and Communication Sciences, Washington University School of Medicine, St. Louis, MO, USA
- Judith Lieu
- Department of Otolaryngology, Washington University School of Medicine, St. Louis, MO, USA
23
Damm SA, Sis JL, Kulkarni AM, Chatterjee M. How Vocal Emotions Produced by Children With Cochlear Implants Are Perceived by Their Hearing Peers. J Speech Lang Hear Res 2019; 62:3728-3740. [PMID: 31589545 PMCID: PMC7201339 DOI: 10.1044/2019_jslhr-s-18-0497] [Citation(s) in RCA: 6] [Impact Index Per Article: 1.2] [Reference Citation Analysis] [Abstract] [MESH Headings] [Grants] [Track Full Text] [Subscribe] [Scholar Register] [Received: 12/15/2018] [Revised: 06/13/2019] [Accepted: 07/16/2019] [Indexed: 06/10/2023]
Abstract
Purpose Cochlear implants (CIs) transmit a degraded version of the acoustic input to the listener. This impacts the perception of harmonic pitch, resulting in deficits in the perception of voice features critical to speech prosody. Such deficits may relate to changes in how children with CIs (CCIs) learn to produce vocal emotions. The purpose of this study was to investigate happy and sad emotional speech productions by school-age CCIs, compared to productions by children with normal hearing (NH), postlingually deaf adults with CIs, and adults with NH. Method All individuals recorded the same emotion-neutral sentences in a happy manner and a sad manner. These recordings were then used as stimuli in an emotion recognition task performed by child and adult listeners with NH. Their performance was taken as a measure of how well the 4 groups of talkers communicated the 2 emotions. Results Results showed high variability in the identifiability of emotions produced by CCIs, relative to other groups. Some CCIs produced highly identifiable emotions, while others showed deficits. The postlingually deaf adults with CIs produced highly identifiable emotions and relatively small intersubject variability. Age at implantation was found to be a significant predictor of performance by CCIs. In addition, the NH listeners' age predicted how well they could identify the emotions produced by CCIs. Thus, older NH child listeners were better able to identify the CCIs' intended emotions than younger NH child listeners. In contrast to the deficits in their emotion productions, CCIs produced highly intelligible words in the sentences carrying the emotions. Conclusions These results confirm previous findings showing deficits in CCIs' productions of prosodic cues and indicate that early auditory experience plays an important role in vocal emotion productions by individuals with CIs.
Affiliation(s)
- Sara A. Damm
- Auditory Prostheses and Perception Laboratory, Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Jenni L. Sis
- Auditory Prostheses and Perception Laboratory, Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Department of Special Education and Communication Disorders, Barkley Memorial Center, University of Nebraska–Lincoln
- Aditya M. Kulkarni
- Auditory Prostheses and Perception Laboratory, Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
- Monita Chatterjee
- Auditory Prostheses and Perception Laboratory, Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE
24
Vasil KJ, Lewis J, Tamati T, Ray C, Moberly AC. How Does Quality of Life Relate to Auditory Abilities? A Subitem Analysis of the Nijmegen Cochlear Implant Questionnaire. J Am Acad Audiol 2019; 31:292-301. [PMID: 31580803 DOI: 10.3766/jaaa.19047] [Citation(s) in RCA: 14] [Impact Index Per Article: 2.8] [Reference Citation Analysis] [Abstract] [Track Full Text] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/17/2022]
Abstract
BACKGROUND Objective speech recognition tasks are widely used to measure performance of adult cochlear implant (CI) users; however, the relationship of these measures with patient-reported quality of life (QOL) remains unclear. A comprehensive QOL measure, the Nijmegen Cochlear Implant Questionnaire (NCIQ), has historically shown a weak association with speech recognition performance, but closer examination may indicate stronger relations between QOL and objective auditory performance, particularly when examining a broad range of auditory skills. PURPOSE The aim of the present study was to assess the NCIQ for relations to speech and environmental sound recognition measures. Identifying associations with certain QOL domains, subdomains, and subitems would provide evidence that speech and environmental sound recognition measures are relevant to QOL. A lack of relations among QOL and various auditory abilities would suggest potential areas of patient-reported difficulty that could be better measured or targeted. RESEARCH DESIGN A cross-sectional study was performed in adult CI users to examine relations among subjective QOL ratings on NCIQ domains, subdomains, and subitems with auditory outcome measures. STUDY SAMPLE Participants were 44 adult experienced CI users. All participants were postlingually deafened and had met candidacy requirements for traditional cochlear implantation. DATA COLLECTION AND ANALYSIS Participants completed the NCIQ as well as several speech and environmental sound recognition tasks: monosyllabic word recognition, standard and high-variability sentence recognition, audiovisual sentence recognition, and environmental sound identification. Bivariate correlation analyses were performed to investigate relations among patient-reported NCIQ scores and the functional auditory measures. RESULTS The total NCIQ score was not strongly correlated with any objective auditory outcome measures. 
The physical domain and the advanced sound perception subdomain related to several measures, in particular monosyllabic word recognition and AzBio sentence recognition. Fourteen of the 60 subitems on the NCIQ were correlated with at least one auditory measure. CONCLUSIONS Several subitems demonstrated moderate-to-strong correlations with auditory measures, indicating that these auditory measures are relevant to QOL. A lack of relations with other subitems suggests a need for the development of objective measures that better capture patients' hearing-related obstacles. Clinicians may use information obtained through the NCIQ to better estimate real-world performance, which may support improved counseling and recommendations for CI patients.
Affiliation(s)
- Kara J Vasil
- Department of Otolaryngology-Head & Neck Surgery, The Ohio State University, Columbus, OH
- Jessica Lewis
- Department of Otolaryngology-Head & Neck Surgery, The Ohio State University, Columbus, OH
- Terrin Tamati
- Department of Otolaryngology-Head & Neck Surgery, The Ohio State University, Columbus, OH
- Christin Ray
- Department of Otolaryngology-Head & Neck Surgery, The Ohio State University, Columbus, OH
- Aaron C Moberly
- Department of Otolaryngology-Head & Neck Surgery, The Ohio State University, Columbus, OH
25
Chatterjee M, Kulkarni AM, Siddiqui RM, Christensen JA, Hozan M, Sis JL, Damm SA. Acoustics of Emotional Prosody Produced by Prelingually Deaf Children With Cochlear Implants. Front Psychol 2019; 10:2190. [PMID: 31632320 PMCID: PMC6779094 DOI: 10.3389/fpsyg.2019.02190] [Citation(s) in RCA: 12] [Impact Index Per Article: 2.4] [Reference Citation Analysis] [Abstract] [Key Words] [Grants] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 05/31/2019] [Accepted: 09/11/2019] [Indexed: 11/27/2022] Open
Abstract
Purpose: Cochlear implants (CIs) provide reasonable levels of speech recognition in quiet, but voice pitch perception is severely impaired in CI users. The central question addressed here is how access to acoustic input before implantation influences vocal emotion production by individuals with CIs. The objective of this study was to compare the acoustic characteristics of vocal emotions produced by prelingually deaf school-aged children with cochlear implants (CCI), who were implanted at the age of 2 and had no usable hearing before implantation, with those produced by children with normal hearing (CNH), adults with normal hearing (ANH), and postlingually deaf adults with cochlear implants (ACI), who developed with good access to acoustic information before losing their hearing and receiving a CI. Method: A set of 20 sentences without lexically based emotional information was recorded by 13 CCI, 9 CNH, 9 ANH, and 10 ACI, each in a happy manner and a sad manner, without training or guidance. The recordings were analyzed for the primary acoustic characteristics of the productions. Results: Significant effects of emotion were observed in all acoustic features analyzed (mean voice pitch, standard deviation of voice pitch, intensity, duration, and spectral centroid). ACI and ANH did not differ in any of the analyses. Of the four groups, CCI produced the smallest acoustic contrasts between the two emotions in voice pitch and in its standard deviation. Effects of developmental age (highly correlated with duration of device experience) and age at implantation (moderately correlated with duration of device experience) were observed, as were interactions with the children's sex. Conclusion: Although prelingually deaf CCI and postlingually deaf ACI listen to similarly degraded speech and show similar deficits in vocal emotion perception, these groups are distinct in their productions of contrastive vocal emotions.
The results underscore the importance of access to acoustic hearing in early childhood for the production of speech prosody and also suggest the need for a greater role of speech therapy in this area.
Affiliation(s)
- Monita Chatterjee
- Auditory Prostheses and Perception Laboratory, Center for Hearing Research, Boys Town National Research Hospital, Omaha, NE, United States