1
Schinkel-Bielefeld N, Burke L, Holube I, Iankilevitch M, Jenstad LM, Lelic D, Naylor G, Singh G, Smeds K, von Gablenz P, Wolters F, Wu YH. Implementing Ecological Momentary Assessment in Audiological Research: Opportunities and Challenges. Am J Audiol 2024; 33:648-673. PMID: 38950171; PMCID: PMC11427935; DOI: 10.1044/2024_aja-23-00249.
Abstract
Ecological momentary assessment (EMA) is a way to evaluate experiences in everyday life. It is a powerful research tool but can be complex and challenging for beginners. Applying EMA in audiological research brings opportunities and challenges that differ from those in other research disciplines. This tutorial discusses important considerations when conducting EMA studies in hearing care. While more research is needed to develop specific guidelines for the various potential applications of EMA in hearing research, we hope this article can alert hearing researchers new to EMA to common pitfalls and help strengthen their study designs. The article elaborates on study design details, such as the choice of participants, the representativeness of the study period for participants' lives, and balancing participant burden against data requirements. Mobile devices and sensors for collecting objective data on the acoustic situation are reviewed, alongside different possibilities for EMA setups ranging from online questionnaires paired with a timer to proprietary apps that also have access to parameters of a hearing device. In addition to considerations for survey design, a list of questionnaire items from previous studies is provided; for each item, an example and a list of references are given. EMA typically yields data sets that are rich but also challenging: they are noisy, and the amount of data is often unequal across participants. After recommendations on how to check the data for compliance, reactivity, and careless responses, methods for statistical analysis at the individual and group levels are discussed, including special methods for direct comparison of hearing device programs.
Affiliation(s)
- Louise Burke
  - School of Medicine, University of Nottingham, United Kingdom
- Inga Holube
  - Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Oldenburg, Germany
  - Cluster of Excellence Hearing4All, Oldenburg, Germany
- Maria Iankilevitch
  - Department of Psychology, University of Victoria, British Columbia, Canada
- Lorienne M Jenstad
  - School of Audiology and Speech Sciences, The University of British Columbia, Vancouver, Canada
- Graham Naylor
  - School of Medicine, University of Nottingham, United Kingdom
  - National Institute for Health and Care Research, Nottingham Biomedical Research Centre, United Kingdom
- Gurjit Singh
  - Sonova Canada, Kitchener, Ontario, Canada
  - Department of Speech-Language Pathology, University of Toronto, Ontario, Canada
  - Department of Psychology, Toronto Metropolitan University, Ontario, Canada
- Petra von Gablenz
  - Institute of Hearing Technology and Audiology, Jade University of Applied Sciences, Oldenburg, Germany
  - Cluster of Excellence Hearing4All, Oldenburg, Germany
- Yu-Hsiang Wu
  - Department of Communication Sciences and Disorders, The University of Iowa, Iowa City
2
Marcrum SC, Rakita L, Picou EM. Effect of Sound Genre on Emotional Responses for Adults With and Without Hearing Loss. Ear Hear 2024:00003446-990000000-00328. PMID: 39129128; DOI: 10.1097/aud.0000000000001561.
Abstract
OBJECTIVES Adults with permanent hearing loss exhibit a reduced range of valence ratings in response to nonspeech sounds; however, the degree to which sound genre might affect such ratings is unclear. The purpose of this study was to determine whether ratings of valence covary with sound genre (e.g., social communication, technology, music) or only with expected valence (pleasant, neutral, unpleasant). DESIGN As part of larger study protocols, participants rated valence and arousal in response to nonspeech sounds. For this study, data were reanalyzed by assigning sounds to unidimensional genres and evaluating the relationships of hearing loss, age, and gender with ratings of valence. In total, results from 120 adults with normal hearing (M = 46.3 years, SD = 17.7; 33 males, 87 females) and 74 adults with hearing loss (M = 66.1 years, SD = 6.1; 46 males, 28 females) were included. RESULTS Principal component analysis confirmed that valence ratings loaded onto eight unidimensional factors: positive and negative social communication, positive and negative technology, music, animal, activities, and human body noises. Regression analysis revealed that listeners with hearing loss rated some genres as less extreme (less pleasant/less unpleasant) than peers with better hearing, and the relationship between hearing loss and valence ratings was similar across genres within an expected valence category. In terms of demographic factors, female gender was associated with less pleasant ratings of negative social communication, positive and negative technology, activities, and human body noises, while increasing age was related to a subtle rise in valence ratings across all genres. CONCLUSIONS Taken together, these results confirm and extend previous findings that hearing loss is related to a reduced range of valence ratings and suggest that this effect is mediated by expected sound valence rather than by sound genre.
Affiliation(s)
- Steven C Marcrum
  - Department of Otolaryngology, University Hospital Regensburg, Regensburg, Germany
- Lori Rakita
  - Meta Platforms, Inc., Menlo Park, California, USA
- Erin M Picou
  - Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, Nashville, Tennessee, USA
3
Irino T, Hanatani Y, Kishida K, Naito S, Kawahara H. Effects of age and hearing loss on speech emotion discrimination. Sci Rep 2024; 14:18328. PMID: 39112612; PMCID: PMC11306396; DOI: 10.1038/s41598-024-69216-7.
Abstract
Better communication with older people requires not only improving speech intelligibility but also understanding how well emotions can be conveyed and how age and hearing loss (HL) affect emotion perception. In this paper, emotion discrimination experiments were conducted using a vocal morphing method and an HL simulator with young normal-hearing (YNH) and older participants. Speech sounds were morphed to represent intermediate emotions between all combinations of happiness, sadness, and anger. Discrimination performance was compared when the YNH listened to normal sounds, when the same YNH listened to HL-simulated sounds, and when older people listened to the same normal sounds. The results showed no significant difference between discrimination with and without HL simulation, suggesting that peripheral HL may not affect emotion perception. The discrimination performance of the older participants was significantly worse only for the anger-happiness pair, both relative to the other emotion pairs and relative to the YNH. It was also found that the difficulty increases with age, not just with hearing level.
Affiliation(s)
- Toshio Irino
  - Faculty of Systems Engineering, Wakayama University, Wakayama, 640-8510, Japan
  - Graduate School of Systems Engineering, Wakayama University, Wakayama, 640-8510, Japan
- Yukiho Hanatani
  - Graduate School of Systems Engineering, Wakayama University, Wakayama, 640-8510, Japan
- Kazuma Kishida
  - Faculty of Systems Engineering, Wakayama University, Wakayama, 640-8510, Japan
- Shuri Naito
  - Faculty of Systems Engineering, Wakayama University, Wakayama, 640-8510, Japan
- Hideki Kawahara
  - Center for Innovative and Joint Research, Wakayama University, Wakayama, 640-8510, Japan
4
Tang E, Gong J, Zhang J, Zhang J, Fang R, Guan J, Ding H. Chinese Emotional Speech Audiometry Project (CESAP): Establishment and Validation of a New Material Set With Emotionally Neutral Disyllabic Words. J Speech Lang Hear Res 2024; 67:1945-1963. PMID: 38749011; DOI: 10.1044/2024_jslhr-23-00625.
Abstract
PURPOSE The Chinese Emotional Speech Audiometry Project (CESAP) aims to establish a new material set for Chinese speech audiometry tests that can be used in both neutral and emotional prosody settings. As the first endeavor of CESAP, this study describes the development of the material foundation and reports its validation in neutral prosody. METHOD In the development step, 40 phonetically balanced word lists, each consisting of 30 Chinese disyllabic words with neutral valence, were first generated. In a subsequent affective rating experiment, 35 word lists qualified for validation based on familiarity and valence ratings from 30 normal-hearing (NH) participants. For validation, performance-intensity functions of each word list were fitted with responses from 60 NH subjects at six presentation levels (-1, 3, 5, 7, 11, and 20 dB HL). The final material set was determined by the intelligibility scores at each decibel level and the mean slopes. RESULTS First, 35 lists satisfied the criteria of phonetic balance, limited repetitions, high familiarity, and neutral valence and were selected for validation. Second, 15 lists were compiled into the final material set based on the pairwise differences in intelligibility scores and the fitted 20%-80% slopes. The established material set had high reliability and validity and was sensitive to intelligibility changes (50% slope: 6.20%/dB; 20%-80% slope: 5.45%/dB), with small coefficients of variation for thresholds (15%), the 50% slope (12%), and the 20%-80% slope (12%). CONCLUSION Our final material set of 15 word lists is a first step toward controlling the emotional aspect of audiometry tests; it enriches the available Mandarin speech recognition materials and warrants future assessments of emotional prosody in populations with hearing impairment. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.25742814.
Affiliation(s)
- Enze Tang
  - Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
  - Department of Psychology and Behavioral Sciences, Zhejiang University, Hangzhou, China
- Jie Gong
  - Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
  - National Research Centre for Language and Well-being, Shanghai, China
- Jing Zhang
  - SONOVA Innovation Center, Shanghai, China
- Jiaqi Zhang
  - Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
  - National Research Centre for Language and Well-being, Shanghai, China
- Ruomei Fang
  - Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
  - National Research Centre for Language and Well-being, Shanghai, China
- Hongwei Ding
  - Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
  - National Research Centre for Language and Well-being, Shanghai, China
5
Hood KE, Hurley LM. Listening to your partner: serotonin increases male responsiveness to female vocal signals in mice. Front Hum Neurosci 2024; 17:1304653. PMID: 38328678; PMCID: PMC10847236; DOI: 10.3389/fnhum.2023.1304653.
Abstract
The context surrounding vocal communication can have a strong influence on how vocal signals are perceived. The serotonergic system is well-positioned for modulating the perception of communication signals according to context, because serotonergic neurons are responsive to social context, influence social behavior, and innervate auditory regions. Animals like lab mice can be excellent models for exploring how serotonin affects the primary neural systems involved in vocal perception, including within central auditory regions like the inferior colliculus (IC). Within the IC, serotonergic activity reflects not only the presence of a conspecific, but also the valence of a given social interaction. To assess whether serotonin can influence the perception of vocal signals in male mice, we manipulated serotonin systemically with an injection of its precursor 5-HTP, and locally in the IC with an infusion of fenfluramine, a serotonin reuptake blocker. Mice then participated in a behavioral assay in which males suppress their ultrasonic vocalizations (USVs) in response to the playback of female broadband vocalizations (BBVs), used in defensive aggression by females when interacting with males. Both 5-HTP and fenfluramine increased the suppression of USVs during BBV playback relative to controls. 5-HTP additionally decreased the baseline production of a specific type of USV and male investigation, but neither drug treatment strongly affected male digging or grooming. These findings show that serotonin modifies behavioral responses to vocal signals in mice, in part by acting in auditory brain regions, and suggest that mouse vocal behavior can serve as a useful model for exploring the mechanisms of context in human communication.
Affiliation(s)
- Kayleigh E. Hood
  - Hurley Lab, Department of Biology, Indiana University, Bloomington, IN, United States
  - Center for the Integrative Study of Animal Behavior, Indiana University, Bloomington, IN, United States
- Laura M. Hurley
  - Hurley Lab, Department of Biology, Indiana University, Bloomington, IN, United States
  - Center for the Integrative Study of Animal Behavior, Indiana University, Bloomington, IN, United States
6
Chatterjee M, Feller A, Kulkarni AM, Galvin JJ. Emotional prosody perception and production are linked in prelingually deaf children with cochlear implants. JASA Express Lett 2023; 3:120001. PMID: 38117231; PMCID: PMC10759799; DOI: 10.1121/10.0023996.
Abstract
Links between perception and production of emotional prosody by children with cochlear implants (CIs) have not been extensively explored. In this study, production and perception of emotional prosody were measured in 20 prelingually deaf school-age children with CIs. All were implanted by the age of 3, and most by 18 months. Emotion identification was well-predicted by prosody productions in terms of voice pitch modulation and duration. This finding supports the idea that in prelingually deaf children with CIs, production of emotional prosody is associated with access to auditory cues that support the perception of emotional prosody.
Affiliation(s)
- Monita Chatterjee
  - Auditory Prostheses and Perception Laboratory, Center for Hearing Research, Boys Town National Research Hospital, 555 North 30th Street, Omaha, Nebraska 68131, USA
- Ava Feller
  - Auditory Prostheses and Perception Laboratory, Center for Hearing Research, Boys Town National Research Hospital, 555 North 30th Street, Omaha, Nebraska 68131, USA
- Aditya M Kulkarni
  - Auditory Prostheses and Perception Laboratory, Center for Hearing Research, Boys Town National Research Hospital, 555 North 30th Street, Omaha, Nebraska 68131, USA
- John J Galvin
  - House Institute Foundation, 1127 Wilshire Boulevard, Los Angeles, California 90017, USA
7
Gedik Toker Ö, Hüsam H, Behmen MB, Bal N, Gültekin M, Toker K. Validity and Reliability of the Turkish Version of the Emotional Communication in Hearing Questionnaire. Am J Audiol 2023:1-13. PMID: 37956697; DOI: 10.1044/2023_aja-23-00093.
Abstract
PURPOSE The Emotional Communication in Hearing Questionnaire (EMO-CHeQ) is designed to evaluate awareness of vocal emotion information and perception of emotion. This study sought to translate the EMO-CHeQ into Turkish in accordance with international standards and to establish its validity and reliability statistically by administering it to native Turkish-speaking subjects. METHOD This empirical study involved collecting data from participants using a scale. A total of 460 individuals, comprising 158 women and 302 men (Mage = 33.43 ± 13.14 years), participated. The sample comprised 295 subjects with normal hearing, 101 hearing aid users, and 64 cochlear implant users. Exploratory factor analysis, followed by confirmatory factor analysis, was employed to ensure construct validity. Internal consistency was assessed with Cronbach's alpha reliability analysis, and content validity analysis examined how effectively the Turkish version of the scale fulfilled its intended purpose. RESULTS The total Cronbach's alpha internal consistency coefficient of the scale was .949, and the explained variance was 74.385%. The Turkish version of the EMO-CHeQ demonstrated high construct validity, internal consistency, and explanatory efficacy. The scale revealed significant differences (p < .05) in emotional communication among the normal-hearing group, hearing aid users, and cochlear implant users. CONCLUSIONS The Turkish adaptation of the EMO-CHeQ is a credible and robust tool for evaluating how individuals perceive emotion in speech. Emotion perception was poorer among hearing aid users than among cochlear implant users, and best in those with normal hearing. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.24520624.
Affiliation(s)
- Özge Gedik Toker
  - Department of Audiology, Faculty of Health Sciences, Bezmialem Vakıf University, Istanbul, Turkey
- Hilal Hüsam
  - Department of Audiology, Faculty of Health Sciences, Bezmialem Vakıf University, Istanbul, Turkey
- Meliha Başöz Behmen
  - Department of Audiology, Faculty of Health Sciences, Bezmialem Vakıf University, Istanbul, Turkey
- Nilüfer Bal
  - Department of Audiology, Faculty of Health Sciences, Bezmialem Vakıf University, Istanbul, Turkey
  - Department of Audiology, Faculty of Medicine, Marmara University, Istanbul, Turkey
- Kerem Toker
  - Department of Health Management, Faculty of Health Sciences, Bezmialem Vakıf University, Istanbul, Turkey
8
Patel B, Zhang Z, McGettigan C, Belyk M. Speech With Pauses Sounds Deceptive to Listeners With and Without Hearing Impairment. J Speech Lang Hear Res 2023; 66:3735-3744. PMID: 37672786; DOI: 10.1044/2023_jslhr-22-00618.
Abstract
PURPOSE Communication is as much persuasion as it is the transfer of information. This creates a tension between the interests of the speaker and those of the listener, as dishonest speakers naturally attempt to hide deceptive speech and listeners are faced with the challenge of sorting truths from lies. Listeners with hearing impairment in particular may have differing levels of access to the acoustical cues that give away deceptive speech. A greater tendency toward speech pauses has been hypothesized to result from the cognitive demands of lying convincingly. Higher vocal pitch has also been hypothesized to mark the increased anxiety of a dishonest speaker. METHOD Listeners with or without hearing impairments heard short utterances from natural conversations, some of which had been digitally manipulated to contain either increased pausing or raised vocal pitch. Listeners were asked to guess whether each statement was a lie in a two-alternative forced-choice task. Participants were also asked explicitly which cues they believed had influenced their decisions. RESULTS Statements were more likely to be perceived as a lie when they contained pauses, but not when vocal pitch was raised. This pattern held regardless of hearing ability. In contrast, both groups of listeners self-reported using vocal pitch cues to identify deceptive statements, though at lower rates than pauses. CONCLUSIONS Listeners may have only partial awareness of the cues that influence their impression of dishonesty. Listeners with hearing impairment may place greater weight on acoustical cues according to the differing degrees of access provided by hearing aids. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.24052446.
Affiliation(s)
- Bindiya Patel
  - Department of Audiological Sciences, University College London, United Kingdom
- Ziyun Zhang
  - Department of Speech Hearing and Phonetic Sciences, University College London, United Kingdom
- Carolyn McGettigan
  - Department of Speech Hearing and Phonetic Sciences, University College London, United Kingdom
- Michel Belyk
  - Department of Psychology, Edge Hill University, Ormskirk, United Kingdom
9
Holman JA, Ali YHK, Naylor G. A qualitative investigation of the hearing and hearing-aid related emotional states experienced by adults with hearing loss. Int J Audiol 2023; 62:973-982. PMID: 36036164; DOI: 10.1080/14992027.2022.2111373.
Abstract
OBJECTIVE Despite previous research into the psychosocial impact of hearing loss, little detail is known about the hearing- and hearing-aid-related emotional states experienced by adults with hearing loss in everyday life, or how they occur. DESIGN Individual remote semi-structured interviews were audio-recorded, transcribed verbatim, and qualitatively analysed with reflexive and inductive thematic analysis. STUDY SAMPLE Seventeen participants (9 female) with hearing loss (age range 44-74 years) participated. Ten used bilateral hearing aids, four unilateral, and three used no hearing aids at the time of the interviews. RESULTS The four main themes that emerged from the data were: identity and self-image; autonomy and control; personality and dominant emotional states; and situational cost/benefit analysis with respect to the use of hearing aids. CONCLUSIONS This study goes beyond previous literature by providing more detailed insight into emotions related to hearing and hearing aids in adults. Hearing loss causes a multitude of negative emotions, while hearing aids generally reduce negative emotions and allow for more positive ones. However, factors such as lifestyle, personality, situational control, the relationship with conversation partners, and the attribution of blame are key to individual emotional experience. Clinical implications include the important role of social relationships in assessment and counselling.
Affiliation(s)
- Jack A Holman
  - Hearing Sciences (Scottish Section), Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Glasgow, UK
- Yasmin H K Ali
  - Hearing Sciences, Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Nottingham, UK
- Graham Naylor
  - Hearing Sciences (Scottish Section), Mental Health and Clinical Neurosciences, School of Medicine, University of Nottingham, Glasgow, UK
10
Moffat R, Başkent D, Luke R, McAlpine D, Van Yper L. Cortical haemodynamic responses predict individual ability to recognise vocal emotions with uninformative pitch cues but do not distinguish different emotions. Hum Brain Mapp 2023; 44:3684-3705. PMID: 37162212; PMCID: PMC10203806; DOI: 10.1002/hbm.26305.
Abstract
We investigated the cortical representation of emotional prosody in normal-hearing listeners using functional near-infrared spectroscopy (fNIRS) and behavioural assessments. Consistent with previous reports, listeners relied most heavily on F0 cues when recognizing emotions; performance was relatively poor, and highly variable between listeners, when only intensity and speech-rate cues were available. Using fNIRS to image cortical activity evoked by speech utterances containing natural and reduced prosodic cues, we found the right superior temporal gyrus (STG) to be most sensitive to emotional prosody, but observed no emotion-specific cortical activations. This suggests that while fNIRS might be suited to investigating cortical mechanisms supporting speech processing, it is less suited to investigating cortical haemodynamic responses to individual vocal emotions. When emotional speech was manipulated to render F0 cues less informative, the amplitude of the haemodynamic response in right STG was significantly correlated with listeners' ability to recognise vocal emotions from uninformative F0 cues. Specifically, listeners better able to assign emotions to speech with degraded F0 cues showed lower haemodynamic responses to these degraded signals. This suggests a potential objective measure of behavioural sensitivity to vocal emotions that might benefit neurodiverse populations less sensitive to emotional prosody, as well as hearing-impaired listeners, many of whom rely on listening technologies such as hearing aids and cochlear implants, neither of which restores, and both of which often further degrade, the F0 cues essential to parsing the emotional prosody conveyed in speech.
Affiliation(s)
- Ryssa Moffat
  - School of Psychological Sciences, Macquarie University, Sydney, New South Wales, Australia
  - International Doctorate of Experimental Approaches to Language and Brain (IDEALAB), Universities of Potsdam, Germany; Groningen, Netherlands; Newcastle University, UK; and Macquarie University, Australia
  - Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
- Deniz Başkent
  - Department of Otorhinolaryngology/Head and Neck Surgery, University Medical Center Groningen, University of Groningen, Groningen, The Netherlands
  - Research School of Behavioral and Cognitive Neuroscience, Graduate School of Medical Sciences, University of Groningen, Groningen, The Netherlands
- Robert Luke
  - Macquarie University Hearing, and Department of Linguistics, Macquarie University, Sydney, New South Wales, Australia
  - Bionics Institute, East Melbourne, Victoria, Australia
- David McAlpine
  - Macquarie University Hearing, and Department of Linguistics, Macquarie University, Sydney, New South Wales, Australia
- Lindsey Van Yper
  - Macquarie University Hearing, and Department of Linguistics, Macquarie University, Sydney, New South Wales, Australia
  - Institute of Clinical Research, University of Southern Denmark, Odense, Denmark
11
Zhu M, Jin H, Bai Z, Li Z, Song Y. Image-Evoked Emotion Recognition for Hearing-Impaired Subjects with EEG Signals. Sensors (Basel) 2023; 23:5461. PMID: 37420628; DOI: 10.3390/s23125461.
Abstract
In recent years, there has been growing interest in the study of emotion recognition through electroencephalogram (EEG) signals. One group of particular interest is individuals with hearing impairments, who may have a bias towards certain types of information when communicating with those in their environment. To address this, our study collected EEG signals from both hearing-impaired and non-hearing-impaired subjects while they viewed pictures of emotional faces for emotion recognition. Four kinds of feature matrices were constructed to extract spatial-domain information: the symmetry difference and the symmetry quotient, each computed from the original signal and from differential entropy (DE). A multi-axis self-attention classification model was proposed, consisting of local attention and global attention, which combines the attention model with convolution through a novel architectural element for feature classification. Three-class (positive, neutral, negative) and five-class (happy, neutral, sad, angry, fearful) emotion recognition tasks were carried out. The experimental results show that the proposed method is superior to the original feature method and that multi-feature fusion performed well for both hearing-impaired and non-hearing-impaired subjects. The average classification accuracy was 70.2% (three-class) and 50.15% (five-class) for hearing-impaired subjects, and 72.05% (three-class) and 51.53% (five-class) for non-hearing-impaired subjects. In addition, by exploring the brain topography of different emotions, we found that the discriminative brain regions of the hearing-impaired subjects were also distributed in the parietal lobe, unlike those of the non-hearing-impaired subjects.
Affiliation(s)
- Mu Zhu
  - Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, School of Electrical Engineering and Automation, Tianjin University of Technology, Tianjin 300384, China
- Haonan Jin
  - Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, School of Electrical Engineering and Automation, Tianjin University of Technology, Tianjin 300384, China
- Zhongli Bai
  - Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, School of Electrical Engineering and Automation, Tianjin University of Technology, Tianjin 300384, China
- Zhiwei Li
  - Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, School of Electrical Engineering and Automation, Tianjin University of Technology, Tianjin 300384, China
- Yu Song
  - Tianjin Key Laboratory for Control Theory and Applications in Complicated Systems, School of Electrical Engineering and Automation, Tianjin University of Technology, Tianjin 300384, China
12
Karimi-Boroujeni M, Dajani HR, Giguère C. Perception of Prosody in Hearing-Impaired Individuals and Users of Hearing Assistive Devices: An Overview of Recent Advances. J Speech Lang Hear Res 2023; 66:775-789. PMID: 36652704; DOI: 10.1044/2022_jslhr-22-00125.
Abstract
PURPOSE Prosody perception is an essential component of speech communication and social interaction through which both linguistic and emotional information are conveyed. Considering the importance of the auditory system in processing prosody-related acoustic features, the aim of this review article is to review the effects of hearing impairment on prosody perception in children and adults. It also assesses the performance of hearing assistive devices in restoring prosodic perception. METHOD Following a comprehensive online database search, two lines of inquiry were targeted. The first summarizes recent attempts toward determining the effects of hearing loss and interacting factors such as age and cognitive resources on prosody perception. The second analyzes studies reporting beneficial or detrimental impacts of hearing aids, cochlear implants, and bimodal stimulation on prosodic abilities in people with hearing loss. RESULTS The reviewed studies indicate that hearing-impaired individuals vary widely in perceiving affective and linguistic prosody, depending on factors such as hearing loss severity, chronological age, and cognitive status. In addition, most of the emerging information points to limitations of hearing assistive devices in processing and transmitting the acoustic features of prosody. CONCLUSIONS The existing literature is incomplete in several respects, including the lack of a consensus on how and to what extent hearing prostheses affect prosody perception, especially the linguistic function of prosody, and a gap in assessing prosody under challenging listening situations such as noise. This review article proposes directions that future research could follow to provide a better understanding of prosody processing in those with hearing impairment, which may help health care professionals and designers of assistive technology to develop innovative diagnostic and rehabilitation tools. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21809772.
Affiliation(s)
- Hilmi R Dajani
- School of Electrical Engineering and Computer Science, University of Ottawa, Ontario, Canada
- Christian Giguère
- School of Rehabilitation Sciences, University of Ottawa, Ontario, Canada
13
Henry JD, Grainger SA, von Hippel W. Determinants of Social Cognitive Aging: Predicting Resilience and Risk. Annu Rev Psychol 2023; 74:167-192. [PMID: 35973407 DOI: 10.1146/annurev-psych-033020-121832] [Citation(s) in RCA: 8] [Impact Index Per Article: 8.0] [Indexed: 01/21/2023]
Abstract
This review focuses on conceptual and empirical research on determinants of social cognitive aging. We present an integrated model [the social cognitive resource (SCoRe) framework] to organize the literature and describe how social cognitive resilience is determined jointly by capacity and motivational resources. We discuss how neurobiological aging, driven by genetic and environmental influences, is associated with broader sensory, neural, and physiological changes that are direct determinants of capacity as well as indirect determinants of motivation via their influence on expectation of loss versus reward and cognitive effort valuation. Research is reviewed that shows how contextual factors, such as relationship status, familiarity, and practice, are fundamental to understanding the availability of both types of resource. We conclude with a discussion of the implications of social cognitive change in late adulthood for everyday social functioning and with recommendations for future research.
Affiliation(s)
- Julie D Henry
- School of Psychology, The University of Queensland, St Lucia, Australia
- Sarah A Grainger
- School of Psychology, The University of Queensland, St Lucia, Australia
- William von Hippel
- School of Psychology, The University of Queensland, St Lucia, Australia
14
Liu Y, Xu Y, Yang X, Miao G, Wu Y, Yang S. Sensory impairment and cognitive function among older adults in China: The mediating roles of anxiety and depressive symptoms. Int J Geriatr Psychiatry 2023; 38:e5866. [PMID: 36639927 DOI: 10.1002/gps.5866] [Citation(s) in RCA: 1] [Impact Index Per Article: 1.0] [Received: 09/16/2022] [Accepted: 01/06/2023] [Indexed: 01/10/2023]
Abstract
OBJECTIVES Through a cross-sectional study, we explored the association between sensory impairment and cognitive function in Chinese older adults, and tested the mediating roles of anxiety and depressive symptoms in this relationship. METHODS Based on the 2018 Chinese Longitudinal Healthy Longevity Survey, a total of 10,895 older adults aged 65 and above were selected for analysis. Anxiety, depressive symptoms and cognitive function were evaluated by the Generalized Anxiety Disorder scale, the Center for Epidemiologic Studies Depression scale (CES-D10) and the Chinese version of the modified Mini-Mental State Examination, respectively. Sensory impairment was assessed from self-reported vision and hearing functions. Multiple linear regression and the SPSS PROCESS macro were used for statistical analysis. RESULTS Compared with no sensory impairment, vision impairment (B = -1.012, 95%CI: -1.206, -0.818), hearing impairment (B = -2.683, 95%CI: -2.980, -2.386) and dual sensory impairment (B = -6.302, 95%CI: -6.585, -6.020) were each significantly associated with cognitive function in older adults. Anxiety and depressive symptoms not only acted as independent mediators but also exerted sequential mediating effects on the relationship between sensory impairment and cognitive function. CONCLUSIONS Greater attention should be paid to anxiety and depressive symptoms of older adults with sensory impairment, which might be beneficial for maintaining cognitive function.
Affiliation(s)
- Yixuan Liu
- Department of Social Medicine and Health Management, School of Public Health, Jilin University, Changchun, Jilin, China
- Yanling Xu
- Department of Social Medicine and Health Management, School of Public Health, Jilin University, Changchun, Jilin, China
- Xinyan Yang
- Department of Social Medicine and Health Management, School of Public Health, Jilin University, Changchun, Jilin, China
- Guomei Miao
- Department of Social Medicine and Health Management, School of Public Health, Jilin University, Changchun, Jilin, China
- Yinghui Wu
- Department of Social Medicine and Health Management, School of Public Health, Jilin University, Changchun, Jilin, China
- Shujuan Yang
- Department of Social Medicine and Health Management, School of Public Health, Jilin University, Changchun, Jilin, China
15
Balan JR, Jaisinghani P. Effect of Sensory Modality on Reaction Time in Individuals with Auditory Neuropathy Spectrum Disorder. J Commun Disord 2022; 100:106278. [PMID: 36343389 DOI: 10.1016/j.jcomdis.2022.106278] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 08/30/2021] [Revised: 10/27/2022] [Accepted: 10/31/2022] [Indexed: 06/16/2023]
Abstract
PURPOSE To investigate and compare the reaction time of individuals with auditory neuropathy in three modalities: auditory, visual, and audio-visual. The reaction time of individuals with auditory neuropathy was also compared with that of individuals with normal hearing. The relationship between reaction time across modalities and the duration of hearing loss in auditory neuropathy was also investigated. METHODS AND MATERIALS The reaction time of adults with auditory neuropathy and those with normal hearing was measured in the three modalities using a choice reaction time task. RESULTS The auditory neuropathy group had significantly longer reaction times than the normal-hearing group in all modalities. The trend of the mean reaction time differed across groups. Further, a significant difference in the reaction time of the auditory neuropathy group was noted between the auditory and visual modes and between the auditory and audio-visual modes. However, no significant difference in reaction time was noted between the visual and audio-visual modalities. CONCLUSION The significantly longer reaction time in auditory neuropathy is presumed to result from neural conduction delay and impaired processing. The auditory neuropathy group can utilize visual cues for faster processing, and the study recommends an audio-visual mode for their management. In addition, the duration of hearing loss in auditory neuropathy had no relationship with reaction time in any modality.
Affiliation(s)
- Jithin Raj Balan
- The University of Texas at Austin, Moody College of Communication, Austin, Texas
- Priyanka Jaisinghani
- Department of Communication Sciences and Disorders, Baylor University, Waco, Texas
16
Cui Q, Chen N, Wen C, Xi J, Huang L. Research trends and hotspot analysis of age-related hearing loss from a bibliographic perspective. Front Psychol 2022; 13:921117. [PMID: 36211873 PMCID: PMC9536176 DOI: 10.3389/fpsyg.2022.921117] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Received: 05/10/2022] [Accepted: 08/19/2022] [Indexed: 11/13/2022] Open
Abstract
BACKGROUND Up-to-date information about trends in age-related hearing loss (ARHL) research and how these vary between countries is essential to plan for an adequate health-system response. Therefore, this study aimed to assess the research hotspots and trends in ARHL and to provide a basis and direction for future research. MATERIALS AND METHODS The Web of Science Core Collection database was searched and screened according to the inclusion criteria for the period 2002–2021. Bibliometric analyses were conducted with CiteSpace (Chaomei Chen, Drexel University, Philadelphia, PA, United States) and VOSviewer (Center for Science and Technology Studies, Leiden University, Leiden, The Netherlands). RESULTS The query identified 1,496 publications, reflecting the growth of this field. These publications came from 62 countries; the United States showed a tremendous impact on this field in publication outputs, total citations, and international collaborations, with China second. Hearing Research was the most productive journal. Weijia Kong published the most papers, and the most productive institution was Washington University. The keyword "presbycusis" ranked first in research frontiers and appeared earlier, while the keywords "age-related hearing loss," "risk," "dementia," "auditory cortex," "association," and "decline" began to appear in recent years. CONCLUSION The annual number of publications has grown rapidly in the past two decades and will continue to grow. Epidemiological investigation and laboratory research are lasting hot spots, and future research will focus on the association between ARHL and cognitive decline, dementia, and Alzheimer's disease.
Affiliation(s)
- Qingjia Cui
- Rehabilitation Centre of Otolaryngology-Head and Neck, Beijing Rehabilitation Hospital, Capital Medical University, Beijing, China
- Na Chen
- Rehabilitation Centre of Otolaryngology-Head and Neck, Beijing Rehabilitation Hospital, Capital Medical University, Beijing, China
- Cheng Wen
- Department of Otolaryngology-Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Key Laboratory of Otolaryngology-Head and Neck Surgery, Ministry of Education, Beijing Institute of Otolaryngology, Beijing, China
- Jianing Xi
- Beijing Rehabilitation Hospital, Capital Medical University, Beijing, China
- Lihui Huang
- Department of Otolaryngology-Head and Neck Surgery, Beijing Tongren Hospital, Capital Medical University, Beijing, China
- Key Laboratory of Otolaryngology-Head and Neck Surgery, Ministry of Education, Beijing Institute of Otolaryngology, Beijing, China
- Correspondence: Lihui Huang
17
Age-Related Changes in Voice Emotion Recognition by Postlingually Deafened Listeners With Cochlear Implants. Ear Hear 2022; 43:323-334. [PMID: 34406157 PMCID: PMC8847542 DOI: 10.1097/aud.0000000000001095] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.5] [Indexed: 11/27/2022]
Abstract
OBJECTIVES Identification of emotional prosody in speech declines with age in normally hearing (NH) adults. Cochlear implant (CI) users have deficits in the perception of prosody, but the effects of age on vocal emotion recognition by adult postlingually deaf CI users are not known. The objective of the present study was to examine age-related changes in CI users' and NH listeners' emotion recognition. DESIGN Participants included 18 CI users (29.6 to 74.5 years) and 43 NH adults (25.8 to 74.8 years). Participants listened to emotion-neutral sentences spoken by a male and a female talker in five emotions (happy, sad, scared, angry, neutral). NH adults heard them in four conditions: unprocessed (full-spectrum) speech, 16-channel, 8-channel, and 4-channel noise-band vocoded speech. The adult CI users listened only to unprocessed (full-spectrum) speech. Sensitivity (d') to emotions and Reaction Times were obtained using a single-interval, five-alternative, forced-choice paradigm. RESULTS For NH participants, results indicated age-related declines in Accuracy and d', and age-related increases in Reaction Time in all conditions. Results indicated an overall deficit, as well as age-related declines in overall d', for CI users, but their Reaction Times were elevated compared with NH listeners and did not show age-related changes. Analysis of Accuracy scores (hit rates) was generally consistent with the d' data. CONCLUSIONS Both CI users and NH listeners showed age-related deficits in emotion identification. The CI users' overall deficit in emotion perception, and their slower response times, suggest impaired social communication, which may in turn impact overall well-being, particularly for older CI users, as lower vocal emotion recognition scores have been associated with poorer subjective quality of life in CI patients.
18
Effects of mild-to-moderate sensorineural hearing loss and signal amplification on vocal emotion recognition in middle-aged–older individuals. PLoS One 2022; 17:e0261354. [PMID: 34995305 PMCID: PMC8740977 DOI: 10.1371/journal.pone.0261354] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Received: 04/20/2021] [Accepted: 11/29/2021] [Indexed: 11/19/2022] Open
Abstract
Previous research has shown deficits in vocal emotion recognition in sub-populations of individuals with hearing loss, making this a high-priority research topic. However, previous research has only examined vocal emotion recognition using verbal material, in which emotions are expressed through emotional prosody. There is evidence that older individuals with hearing loss suffer from deficits in general prosody recognition, not specific to emotional prosody. No study has examined the recognition of non-verbal vocalizations, which constitute another important source for the vocal communication of emotions. It might be the case that individuals with hearing loss have specific difficulties in recognizing emotions expressed through prosody in speech, but not through non-verbal vocalizations. We aim to examine whether vocal emotion recognition difficulties in middle-aged to older individuals with sensorineural mild-moderate hearing loss are better explained by deficits in vocal emotion recognition specifically, or by deficits in prosody recognition generally, by including both sentences and non-verbal expressions. Furthermore, some of the studies that have concluded that individuals with mild-moderate hearing loss have deficits in vocal emotion recognition ability have also found that the use of hearing aids does not improve recognition accuracy in this group. We aim to examine the effects of linear amplification and audibility on the recognition of different emotions expressed both verbally and non-verbally. Besides examining accuracy for different emotions, we will also look at patterns of confusion (which specific emotions are mistaken for which other specific emotions, and at what rates) during both amplified and non-amplified listening, and we will analyze all material acoustically and relate the acoustic content to performance. Together these analyses will provide clues to the effects of amplification on the perception of different emotions.
For these purposes, a total of 70 middle-aged to older individuals, half with mild-moderate hearing loss and half with normal hearing, will perform a computerized forced-choice vocal emotion recognition task with and without amplification.
19
Tawdrous MM, D'Onofrio KL, Gifford R, Picou EM. Emotional Responses to Non-Speech Sounds for Hearing-aid and Bimodal Cochlear-Implant Listeners. Trends Hear 2022; 26:23312165221083091. [PMID: 35435773 PMCID: PMC9019384 DOI: 10.1177/23312165221083091] [Citation(s) in RCA: 1] [Impact Index Per Article: 0.5] [Received: 04/15/2021] [Revised: 12/19/2021] [Accepted: 02/06/2022] [Indexed: 02/03/2023] Open
Abstract
The purpose of this project was to evaluate differences between groups and device configurations for emotional responses to non-speech sounds. Three groups of adults participated: 1) listeners with normal hearing with no history of device use, 2) hearing aid candidates with or without hearing aid experience, and 3) bimodal cochlear-implant listeners with at least 6 months of implant use. Participants (n = 18 in each group) rated valence and arousal of pleasant, neutral, and unpleasant non-speech sounds. Listeners with normal hearing rated sounds without hearing devices. Hearing aid candidates rated sounds while using one or two hearing aids. Bimodal cochlear-implant listeners rated sounds while using a hearing aid alone, a cochlear implant alone, or the hearing aid and cochlear implant simultaneously. Analysis revealed significant differences between groups in ratings of pleasant and unpleasant stimuli; ratings from hearing aid candidates and bimodal cochlear-implant listeners were less extreme (less pleasant and less unpleasant) than were ratings from listeners with normal hearing. Hearing aid candidates' ratings were similar with one and two hearing aids. Bimodal cochlear-implant listeners' ratings of valence were higher (more pleasant) in the configuration without a hearing aid (implant only) than in the two configurations with a hearing aid (alone or with an implant). These data support the need for further investigation into hearing device optimization to improve emotional responses to non-speech sounds for adults with hearing loss.
Affiliation(s)
- Marina M. Tawdrous
- School of Communication Sciences and Disorders, Western University, 1151 Richmond St, London, ON, N6A 3K7
- Kristen L. D'Onofrio
- Department of Hearing and Speech Sciences, Graduate School, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- Department of Hearing and Speech Sciences, School of Medicine, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- René Gifford
- Department of Hearing and Speech Sciences, Graduate School, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- Department of Hearing and Speech Sciences, School of Medicine, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- Erin M. Picou
- Department of Hearing and Speech Sciences, Graduate School, Vanderbilt University, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
- Department of Hearing and Speech Sciences, School of Medicine, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN, 37232
20
Zhao X, Zhou Y, Wei K, Bai X, Zhang J, Zhou M, Sun X. Associations of sensory impairment and cognitive function in middle-aged and older Chinese population: The China Health and Retirement Longitudinal Study. J Glob Health 2021; 11:08008. [PMID: 34956639 PMCID: PMC8684796 DOI: 10.7189/jogh.11.08008] [Citation(s) in RCA: 26] [Impact Index Per Article: 8.7] [Indexed: 12/21/2022] Open
Abstract
Background Little is known about the associations between vision impairment, hearing impairment, and cognitive function. The aim of this study was to examine whether vision and hearing impairment were associated with a high risk for cognitive impairment in middle-aged and older Chinese adults. Methods A total of 13 914 Chinese adults from the China Health and Retirement Longitudinal Study (CHARLS) baseline were selected for analysis. Sensory impairment was assessed from a single self-report question, and we categorized sensory impairment into four groups: no sensory impairment, vision impairment, hearing impairment, and dual sensory impairment. Cognitive assessment covered memory, mental state, and cognition, and the data was obtained through a questionnaire. Results Memory was negatively associated with hearing impairment (β = -0.043, 95% confidence interval (CI) = -0.076, -0.043) and dual sensory impairment (β = -0.033, 95% CI = -0.049, -0.017); mental status was negatively associated with vision impairment (β = -0.034, 95% CI = -0.049, -0.018), hearing impairment (β = -0.070, 95% CI = -0.086, -0.055), and dual sensory impairment (β = -0.054, 95% CI = -0.070, -0.039); and cognition was negatively associated with vision impairment (β = -0.028, 95% CI = -0.044, -0.013), hearing impairment (β = -0.074, 95% CI = -0.090, -0.059), and dual sensory impairment (β = -0.052, 95% CI = -0.067, -0.036), even after adjusting for demographics, social economic factors, and lifestyle behavior. Conclusions Vision and hearing impairment are negatively associated with memory, mental status, and cognition for middle-aged and elderly Chinese adults. There were stronger negative associations between sensory impairment and cognitive-related indicators in the elderly compared to the middle-aged.
Affiliation(s)
- Xiaohuan Zhao
- Department of Ophthalmology, Shanghai General Hospital (Shanghai First People's Hospital), Shanghai Jiao Tong University School of Medicine, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Key Laboratory of Fundus Diseases, Shanghai, China
- Yifan Zhou
- Putuo People's Hospital, Tongji University, Shanghai 200060, China
- Kunchen Wei
- Department of Plastic and Reconstructive Surgery, Shanghai Ninth People's Hospital, Shanghai Jiaotong University School of Medicine, Shanghai, China
- Xinyue Bai
- Department of Ophthalmology, Shanghai General Hospital (Shanghai First People's Hospital), Shanghai Jiao Tong University School of Medicine, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Key Laboratory of Fundus Diseases, Shanghai, China
- Jingfa Zhang
- Department of Ophthalmology, Shanghai General Hospital (Shanghai First People's Hospital), Shanghai Jiao Tong University School of Medicine, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Key Laboratory of Fundus Diseases, Shanghai, China
- Minwen Zhou
- Department of Ophthalmology, Shanghai General Hospital (Shanghai First People's Hospital), Shanghai Jiao Tong University School of Medicine, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Key Laboratory of Fundus Diseases, Shanghai, China
- Xiaodong Sun
- Department of Ophthalmology, Shanghai General Hospital (Shanghai First People's Hospital), Shanghai Jiao Tong University School of Medicine, Shanghai, China
- National Clinical Research Center for Eye Diseases, Shanghai, China
- Shanghai Key Laboratory of Fundus Diseases, Shanghai, China
21
Picou EM, Rakita L, Buono GH, Moore TM. Effects of Increasing the Overall Level or Fitting Hearing Aids on Emotional Responses to Sounds. Trends Hear 2021; 25:23312165211049938. [PMID: 34866509 PMCID: PMC8825634 DOI: 10.1177/23312165211049938] [Citation(s) in RCA: 3] [Impact Index Per Article: 1.0] [Indexed: 01/21/2023] Open
Abstract
Adults with hearing loss demonstrate a reduced range of emotional responses to nonspeech sounds compared to their peers with normal hearing. The purpose of this study was to evaluate two possible strategies for addressing the effects of hearing loss on emotional responses: (a) increasing overall level and (b) hearing aid use (with and without nonlinear frequency compression, NFC). Twenty-three adults (mean age = 65.5 years) with mild-to-severe sensorineural hearing loss and 17 adults (mean age = 56.2 years) with normal hearing participated. All adults provided ratings of valence and arousal without hearing aids in response to nonspeech sounds presented at a moderate and at a high level. Adults with hearing loss also provided ratings while using individually fitted study hearing aids with two settings (NFC-OFF or NFC-ON). Hearing loss and hearing aid use impacted ratings of valence but not arousal. Listeners with hearing loss rated pleasant sounds as less pleasant than their peers, confirming findings in the extant literature. For both groups, increasing the overall level resulted in lower ratings of valence. For listeners with hearing loss, the use of hearing aids (NFC-OFF) also resulted in lower ratings of valence, but to a lesser extent than increasing the overall level. Activating NFC resulted in ratings that were similar to ratings without hearing aids (with a moderate presentation level) but did not improve ratings to match those from the listeners with normal hearing. These findings suggest that current interventions do not ameliorate the effects of hearing loss on emotional responses to sound.
Affiliation(s)
- Erin M Picou
- Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, TN, USA
- Lori Rakita
- Department of Otolaryngology, Massachusetts Eye and Ear Infirmary, Harvard Medical School, Boston, MA, USA
- Gabrielle H Buono
- Department of Hearing and Speech Sciences, Vanderbilt University School of Medicine, Nashville, TN, USA
22
de Boer MJ, Jürgens T, Başkent D, Cornelissen FW. Auditory and Visual Integration for Emotion Recognition and Compensation for Degraded Signals are Preserved With Age. Trends Hear 2021; 25:23312165211045306. [PMID: 34617829 PMCID: PMC8642111 DOI: 10.1177/23312165211045306] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/24/2022] Open
Abstract
Since emotion recognition involves integration of the visual and auditory signals, it is likely that sensory impairments worsen emotion recognition. In emotion recognition, young adults can compensate for unimodal sensory degradations if the other modality is intact. However, most sensory impairments occur in the elderly population and it is unknown whether older adults are similarly capable of compensating for signal degradations. As a step towards studying potential effects of real sensory impairments, this study examined how degraded signals affect emotion recognition in older adults with normal hearing and vision. The degradations were designed to approximate some aspects of sensory impairments. Besides emotion recognition accuracy, we recorded eye movements to capture perceptual strategies for emotion recognition. Overall, older adults were as good as younger adults at integrating auditory and visual information and at compensating for degraded signals. However, accuracy was lower overall for older adults, indicating that aging leads to a general decrease in emotion recognition. In addition to decreased accuracy, older adults showed smaller adaptations of perceptual strategies in response to video degradations. Concluding, this study showed that emotion recognition declines with age, but that integration and compensation abilities are retained. In addition, we speculate that the reduced ability of older adults to adapt their perceptual strategies may be related to the increased time it takes them to direct their attention to scene aspects that are relatively far away from fixation.
Affiliation(s)
- Minke J de Boer
- Research School of Behavioural and Cognitive Neuroscience, University of Groningen, Groningen, the Netherlands
- Department of Otorhinolaryngology, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Laboratory of Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Tim Jürgens
- Institute of Acoustics, Technische Hochschule Lübeck, Lübeck, Germany
- Deniz Başkent
- Research School of Behavioural and Cognitive Neuroscience, University of Groningen, Groningen, the Netherlands
- Department of Otorhinolaryngology, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
- Frans W Cornelissen
- Research School of Behavioural and Cognitive Neuroscience, University of Groningen, Groningen, the Netherlands
- Laboratory of Experimental Ophthalmology, University Medical Center Groningen, University of Groningen, Groningen, the Netherlands
23
Diaz MT, Yalcinbas E. The neural bases of multimodal sensory integration in older adults. Int J Behav Dev 2021; 45:409-417. [PMID: 34650316 DOI: 10.1177/0165025420979362] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.7] [Indexed: 11/16/2022]
Abstract
Although hearing often declines with age, prior research has shown that older adults may benefit from multisensory input to a greater extent when compared to younger adults, a concept known as inverse effectiveness. While there is behavioral evidence in support of this phenomenon, less is known about its neural basis. The present fMRI study examined how older and younger adults processed multimodal auditory-visual (AV) phonemic stimuli which were either congruent or incongruent across modalities. Incongruent AV pairs were designed to elicit the McGurk effect. Behaviorally, reaction times were significantly faster during congruent trials compared to incongruent trials for both age groups, and overall older adults responded more slowly. The interaction was not significant suggesting that older adults processed the AV stimuli similarly to younger adults. Although there were minimal behavioral differences, age-related differences in functional activation were identified: Younger adults elicited greater activation than older adults in primary sensory regions including superior temporal gyrus, the calcarine fissure, and left post-central gyrus. In contrast, older adults elicited greater activation than younger adults in dorsal frontal regions including middle and superior frontal gyri, as well as dorsal parietal regions. These data suggest that while there is age-related stability in behavioral sensitivity to multimodal stimuli, the neural bases for this effect differed between older and younger adults. Our results demonstrated that older adults underrecruited primary sensory cortices and had increased recruitment of regions involved in executive function, attention, and monitoring processes, which may reflect an attempt to compensate.
Affiliation(s)
- Michele T Diaz
- Department of Psychology, The Pennsylvania State University
- Ege Yalcinbas
- Neurosciences Department, University of California, San Diego
24
Saatci Ö, Geden H, Güneş Çiftçi H, Çiftçi Z, Arıcı Düz Ö, Yulug B. Decreased Facial Emotion Recognition in Elderly Patients With Hearing Loss Reflects Diminished Social Cognition. Ann Otol Rhinol Laryngol 2021; 131:671-677. [PMID: 34404263 DOI: 10.1177/00034894211040057] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Indexed: 11/15/2022]
Abstract
OBJECTIVE The main objective of this research was to evaluate the correlation between the severity of hearing loss and facial emotion recognition, a critical component of social cognition, in elderly patients. METHODS The prospective study comprised 85 individuals, divided into 3 groups. The first group consisted of 30 subjects older than 65 years with a bilateral pure-tone average (PTA) >30 dB HL. The second group consisted of 30 subjects older than 65 years with a PTA ≤30 dB HL. The third group consisted of 25 healthy subjects with ages ranging between 18 and 45 years and a PTA ≤25 dB HL. A Facial Emotion Identification Test and a Facial Emotion Discrimination Test were administered to all groups. RESULTS Elderly subjects with hearing loss performed significantly worse than the other 2 groups on the facial emotion identification and discrimination tests (P < .05). Interestingly, they identified the positive emotion "happiness" more accurately than the negative emotions. CONCLUSIONS Our results suggest that increased age might be associated with decreased facial emotion identification and discrimination scores, which may deteriorate further in the presence of significant hearing loss.
Affiliation(s)
- Özlem Saatci
- Department of Otorhinolaryngology, Istanbul Sancaktepe Education and Research Hospital, Istanbul, Turkey
- Hakan Geden
- Department of Otorhinolaryngology, Istanbul Sancaktepe Education and Research Hospital, Istanbul, Turkey
- Halide Güneş Çiftçi
- Department of Otorhinolaryngology, Istanbul Sancaktepe Education and Research Hospital, Istanbul, Turkey
- Zafer Çiftçi
- Department of Otorhinolaryngology, Istanbul Sancaktepe Education and Research Hospital, Istanbul, Turkey
- Özge Arıcı Düz
- Department of Neurology, Istanbul Medipol University, Istanbul, Turkey
- Burak Yulug
- Department of Neurology, Alanya Alaaddin Keykubat University, Antalya/Alanya, Turkey
25
Belkhiria C, Vergara RC, Martinez M, Delano PH, Delgado C. Neural links between facial emotion recognition and cognitive impairment in presbycusis. Int J Geriatr Psychiatry 2021; 36:1171-1178. [PMID: 33503682] [DOI: 10.1002/gps.5501]
Abstract
OBJECTIVES Facial emotion recognition (FER) is impaired in people with dementia and with severe to profound hearing loss, probably reflecting common neural changes. Here, we aim to study the association between brain structures and FER impairment in participants with mild to moderate age-related hearing loss. METHODS We evaluated FER in a cross-sectional cohort of 111 Chilean nondemented elderly participants. They were assessed for FER in seven different categories using 35 facial stimuli. We collected pure-tone average (PTA) audiometric thresholds, cognitive and neuropsychiatric assessments, and morphometric brain imaging using a 3-Tesla MRI. RESULTS According to PTA threshold levels, participants were classified as controls (≤25 dB, n = 56) or presbycusis (>25 dB, n = 55), with average PTAs of 17.08 ± 4.8 dB HL and 36.27 ± 9.5 dB HL, respectively. Poorer total FER score was correlated with worse hearing thresholds (r = -0.23, p < 0.05) in participants with presbycusis. Multiple regression models explained 57% of the variability of FER in presbycusis and 10% in controls. In both groups, the main determinant of FER was cognitive performance. In presbycusis participants, FER was correlated with atrophy of the right insula, right hippocampus, bilateral cingulate cortex, and multiple areas of the temporal cortex. In controls, FER was associated only with bilateral middle temporal cortex volume. CONCLUSIONS FER impairment in presbycusis is distinctively associated with atrophy of neural structures engaged in the perceptual and conceptual levels of face emotion processing.
Affiliation(s)
- Chama Belkhiria
- Neuroscience Department, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Rodrigo C Vergara
- Neuroscience Department, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Kinesiology Department, Facultad de Artes y Educación Física, Universidad Metropolitana de Ciencias de la Educación, Santiago, Chile
- Melissa Martinez
- Neurology and Neurosurgery Department, Hospital Clínico de la Universidad de Chile, Santiago, Chile
- Paul H Delano
- Neuroscience Department, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Otolaryngology Department, Hospital Clínico de la Universidad de Chile, Santiago, Chile
- Centro Avanzado de Ingeniería Eléctrica y Electrónica, AC3E, Universidad Técnica Federico Santa María, Valparaíso, Chile
- Biomedical Neuroscience Institute, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Carolina Delgado
- Neuroscience Department, Facultad de Medicina, Universidad de Chile, Santiago, Chile
- Neurology and Neurosurgery Department, Hospital Clínico de la Universidad de Chile, Santiago, Chile
26
Abstract
OBJECTIVES Individuals with cochlear implants (CIs) show reduced word and auditory emotion recognition abilities relative to their peers with normal hearing. Modern CI processing strategies are designed to preserve acoustic cues requisite for word recognition rather than those cues required for accessing other signal information (e.g., talker gender or emotional state). While word recognition is undoubtedly important for communication, the inaccessibility of this additional signal information in speech may lead to negative social experiences and outcomes for individuals with hearing loss. This study aimed to evaluate whether the emphasis on word recognition preservation in CI processing has unintended consequences on the perception of other talker information, such as emotional state. DESIGN Twenty-four young adult listeners with normal hearing listened to sentences and either reported a target word in each sentence (word recognition task) or selected the emotion of the talker (emotion recognition task) from a list of options (Angry, Calm, Happy, and Sad). Sentences were blocked by task type (emotion recognition versus word recognition) and processing condition (unprocessed versus 8-channel noise vocoder) and presented randomly within the block at three signal-to-noise ratios (SNRs) in a background of speech-shaped noise. Confusion matrices showed the number of errors in emotion recognition by listeners. RESULTS Listeners demonstrated better emotion recognition performance than word recognition performance at the same SNR. Unprocessed speech resulted in higher recognition rates than vocoded stimuli. Recognition performance (for both words and emotions) decreased with worsening SNR. Vocoding speech resulted in a greater negative impact on emotion recognition than it did for word recognition. 
CONCLUSIONS These data confirm prior work that suggests that in background noise, emotional prosodic information in speech is easier to recognize than word information, even after simulated CI processing. However, emotion recognition may be more negatively impacted by background noise and CI processing than word recognition. Future work could explore CI processing strategies that better encode prosodic information and investigate this effect in individuals with CIs as opposed to vocoded simulation. This study emphasized the need for clinicians to consider not only word recognition but also other aspects of speech that are critical to successful social communication.
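The 8-channel noise vocoding used above to simulate CI processing can be sketched as follows. This is a minimal illustration, not the study's implementation: the channel count matches the abstract, but the band edges (100–7000 Hz, log-spaced), filter orders, and 50 Hz envelope cutoff are assumed parameters, and `noise_vocode` is a hypothetical helper name.

```python
import numpy as np
from scipy.signal import butter, sosfiltfilt, hilbert

def noise_vocode(signal, fs, n_channels=8, f_lo=100.0, f_hi=7000.0, env_cut=50.0):
    """Split `signal` into log-spaced bands, extract each band's temporal
    envelope, and use it to modulate band-limited noise (channel-summed)."""
    edges = np.geomspace(f_lo, f_hi, n_channels + 1)  # analysis band edges
    rng = np.random.default_rng(0)
    out = np.zeros(len(signal))
    env_sos = butter(4, env_cut, btype="low", fs=fs, output="sos")
    for lo, hi in zip(edges[:-1], edges[1:]):
        band_sos = butter(4, [lo, hi], btype="band", fs=fs, output="sos")
        band = sosfiltfilt(band_sos, signal)
        # Smoothed Hilbert envelope of the analysis band.
        env = sosfiltfilt(env_sos, np.abs(hilbert(band)))
        # Noise carrier limited to the same band, modulated by the envelope.
        carrier = sosfiltfilt(band_sos, rng.standard_normal(len(signal)))
        out += np.clip(env, 0, None) * carrier
    return out

fs = 16000
t = np.arange(fs) / fs
# A 1-second amplitude-modulated tone standing in for a speech recording.
speechlike = np.sin(2 * np.pi * 440 * t) * (0.5 + 0.5 * np.sin(2 * np.pi * 4 * t))
vocoded = noise_vocode(speechlike, fs)
```

Because only the per-band envelopes survive, the vocoded output preserves the temporal-envelope cues that carry word information while discarding most fine spectral detail, which is why prosodic emotion cues suffer disproportionately.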
27
Johnson KC, Xie Z, Shader MJ, Mayo PG, Goupell MJ. Effect of Chronological Age on Pulse Rate Discrimination in Adult Cochlear-Implant Users. Trends Hear 2021; 25:23312165211007367. [PMID: 34028313] [PMCID: PMC8150454] [DOI: 10.1177/23312165211007367]
Abstract
Cochlear-implant (CI) users rely heavily on temporal envelope cues to understand speech. Temporal processing abilities may decline with advancing age in adult CI users. This study investigated the effect of age on the ability to discriminate changes in pulse rate. Twenty CI users aged 23 to 80 years participated in a rate discrimination task. They attempted to discriminate a 35% rate increase from baseline rates of 100, 200, 300, 400, or 500 pulses per second. The stimuli were electrical pulse trains delivered to a single electrode via direct stimulation to an apical (Electrode 20), a middle (Electrode 12), or a basal location (Electrode 4). Electrically evoked compound action potential amplitude growth functions were recorded at each of those electrodes as an estimate of peripheral neural survival. Results showed that temporal pulse rate discrimination performance declined with advancing age at higher stimulation rates (e.g., 500 pulses per second) when compared with lower rates. The age-related changes in temporal pulse rate discrimination at higher stimulation rates persisted after statistical analysis to account for the estimated peripheral contributions from electrically evoked compound action potential amplitude growth functions. These results indicate the potential contributions of central factors to the limitations in temporal pulse rate discrimination ability associated with aging in CI users.
Affiliation(s)
- Kelly C Johnson
- Department of Hearing and Speech Sciences, University of Maryland, College Park, United States
- Zilong Xie
- Department of Hearing and Speech, University of Kansas Medical Center, Kansas City, United States
- Maureen J Shader
- Department of Hearing and Speech Sciences, University of Maryland, College Park, United States
- Bionics Institute, Melbourne, Australia
- Department of Medical Bionics, The University of Melbourne, Melbourne, Australia
- Paul G Mayo
- Department of Hearing and Speech Sciences, University of Maryland, College Park, United States
- Matthew J Goupell
- Department of Hearing and Speech Sciences, University of Maryland, College Park, United States
28
Morgan SD. Comparing Emotion Recognition and Word Recognition in Background Noise. J Speech Lang Hear Res 2021; 64:1758-1772. [PMID: 33830784] [DOI: 10.1044/2021_jslhr-20-00153]
Abstract
Purpose Word recognition in quiet and in background noise has been thoroughly investigated in previous research to establish segmental speech recognition performance as a function of stimulus characteristics (e.g., audibility). Similar methods to investigate recognition performance for suprasegmental information (e.g., acoustic cues used to make judgments of talker age, sex, or emotional state) have not been performed. In this work, we directly compared emotion and word recognition performance in different levels of background noise to identify psychoacoustic properties of emotion recognition (globally and for specific emotion categories) relative to word recognition. Method Twenty young adult listeners with normal hearing listened to sentences and either reported a target word in each sentence or selected the emotion of the talker from a list of options (angry, calm, happy, and sad) at four signal-to-noise ratios in a background of white noise. Psychometric functions were fit to the recognition data and used to estimate thresholds (midway points on the function) and slopes for word and emotion recognition. Results Thresholds for emotion recognition were approximately 10 dB better than word recognition thresholds, and slopes for emotion recognition were half of those measured for word recognition. Low-arousal emotions had poorer thresholds and shallower slopes than high-arousal emotions, suggesting greater confusion when distinguishing low-arousal emotional speech content. Conclusions Communication of a talker's emotional state continues to be perceptible to listeners in competitive listening environments, even after words are rendered inaudible. The arousal of emotional speech affects listeners' ability to discriminate between emotion categories.
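The psychometric-function fitting described in the Method above (estimating a threshold at the function's midway point and a slope) can be sketched with a simple logistic fit. The data points below are invented for illustration only, not the paper's results, and the two-parameter logistic is one of several plausible function forms.

```python
import numpy as np
from scipy.optimize import curve_fit

def psychometric(snr, threshold, slope):
    """Two-parameter logistic: proportion correct as a function of SNR (dB).
    `threshold` is the midway (50%) point; `slope` controls steepness."""
    return 1.0 / (1.0 + np.exp(-slope * (snr - threshold)))

# Hypothetical group-mean proportion correct at four SNRs (illustrative data).
snrs = np.array([-12.0, -6.0, 0.0, 6.0])
word_pc = np.array([0.10, 0.35, 0.75, 0.95])   # word recognition
emo_pc = np.array([0.55, 0.80, 0.95, 0.99])    # emotion recognition

# Fit each task's data; starting guesses (threshold, slope) are arbitrary.
(word_thr, word_slope), _ = curve_fit(psychometric, snrs, word_pc, p0=(0.0, 0.5))
(emo_thr, emo_slope), _ = curve_fit(psychometric, snrs, emo_pc, p0=(-6.0, 0.5))
print(f"word threshold {word_thr:.1f} dB, emotion threshold {emo_thr:.1f} dB")
```

With these made-up data the fitted emotion threshold falls several dB below the word threshold, mirroring the direction of the paper's finding that emotion recognition survives at poorer SNRs than word recognition.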
Affiliation(s)
- Shae D Morgan
- Department of Otolaryngology - Head and Neck Surgery and Communicative Disorders, University of Louisville, KY
29
Brewster KK, Golub JS, Rutherford BR. Neural circuits and behavioral pathways linking hearing loss to affective dysregulation in older adults. Nat Aging 2021; 1:422-429. [PMID: 37118018] [PMCID: PMC10154034] [DOI: 10.1038/s43587-021-00065-z]
Abstract
Substantial evidence now links age-related hearing loss to incident major depressive disorder in older adults. However, research examining the neural circuits and behavioral mechanisms by which age-related hearing loss leads to depression is at an early phase. It is known that hearing loss has adverse structural and functional brain consequences, is associated with reduced social engagement and loneliness, and often results in tinnitus, which can independently affect cognitive control and emotion processing circuits. While pathways leading from these sequelae of hearing loss to affective dysregulation and depression are intuitive to hypothesize, few studies have yet been designed to provide conclusive evidence for specific pathophysiological mechanisms. Here we review the neurobiological and behavioral consequences of age-related hearing loss, present a model linking them to increased risk for major depressive disorder and suggest how future studies may facilitate the development of rationally designed therapeutic interventions for older adults with impaired hearing to reduce risk for depression and/or ameliorate depressive symptoms.
Affiliation(s)
- Katharine K Brewster
- Department of Psychiatry, Vagelos College of Physicians and Surgeons, Columbia University, New York, NY, USA
- New York State Psychiatric Institute, New York, NY, USA
- Justin S Golub
- Department of Otolaryngology-Head and Neck Surgery, Vagelos College of Physicians and Surgeons, Columbia University, New York, NY, USA
- Bret R Rutherford
- Department of Psychiatry, Vagelos College of Physicians and Surgeons, Columbia University, New York, NY, USA
- New York State Psychiatric Institute, New York, NY, USA
30
Cortes DS, Tornberg C, Bänziger T, Elfenbein HA, Fischer H, Laukka P. Effects of aging on emotion recognition from dynamic multimodal expressions and vocalizations. Sci Rep 2021; 11:2647. [PMID: 33514829] [PMCID: PMC7846600] [DOI: 10.1038/s41598-021-82135-1]
Abstract
Age-related differences in emotion recognition have predominantly been investigated using static pictures of facial expressions, and positive emotions beyond happiness have rarely been included. The current study instead used dynamic facial and vocal stimuli, and included a wider than usual range of positive emotions. In Task 1, younger and older adults were tested for their abilities to recognize 12 emotions from brief video recordings presented in visual, auditory, and multimodal blocks. Task 2 assessed recognition of 18 emotions conveyed by non-linguistic vocalizations (e.g., laughter, sobs, and sighs). Results from both tasks showed that younger adults had significantly higher overall recognition rates than older adults. In Task 1, significant group differences (younger > older) were only observed for the auditory block (across all emotions), and for expressions of anger, irritation, and relief (across all presentation blocks). In Task 2, significant group differences were observed for 6 out of 9 positive, and 8 out of 9 negative emotions. Overall, results indicate that recognition of both positive and negative emotions show age-related differences. This suggests that the age-related positivity effect in emotion recognition may become less evident when dynamic emotional stimuli are used and happiness is not the only positive emotion under study.
Affiliation(s)
- Diana S Cortes
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Tanja Bänziger
- Department of Psychology, Mid Sweden University, Östersund, Sweden
- Håkan Fischer
- Department of Psychology, Stockholm University, Stockholm, Sweden
- Petri Laukka
- Department of Psychology, Stockholm University, Stockholm, Sweden
31
Weighting of Prosodic and Lexical-Semantic Cues for Emotion Identification in Spectrally Degraded Speech and With Cochlear Implants. Ear Hear 2021; 42:1727-1740. [PMID: 34294630] [PMCID: PMC8545870] [DOI: 10.1097/aud.0000000000001057]
Abstract
OBJECTIVES Normally-hearing (NH) listeners rely more on prosodic cues than on lexical-semantic cues for emotion perception in speech. In everyday spoken communication, the ability to decipher conflicting information between prosodic and lexical-semantic cues to emotion can be important: for example, in identifying sarcasm or irony. Speech degradation in cochlear implants (CIs) can be sufficiently overcome to identify lexical-semantic cues, but the distortion of voice pitch cues makes it particularly challenging to hear prosody with CIs. The purpose of this study was to examine changes in relative reliance on prosodic and lexical-semantic cues in NH adults listening to spectrally degraded speech and adult CI users. We hypothesized that, compared with NH counterparts, CI users would show increased reliance on lexical-semantic cues and reduced reliance on prosodic cues for emotion perception. We predicted that NH listeners would show a similar pattern when listening to CI-simulated versions of emotional speech. DESIGN Sixteen NH adults and 8 postlingually deafened adult CI users participated in the study. Sentences were created to convey five lexical-semantic emotions (angry, happy, neutral, sad, and scared), with five sentences expressing each category of emotion. Each of these 25 sentences was then recorded with the 5 (angry, happy, neutral, sad, and scared) prosodic emotions by 2 adult female talkers. The resulting stimulus set included 125 recordings (25 Sentences × 5 Prosodic Emotions) per talker, of which 25 were congruent (consistent lexical-semantic and prosodic cues to emotion) and the remaining 100 were incongruent (conflicting lexical-semantic and prosodic cues to emotion). The recordings were processed to create three levels of spectral degradation: full-spectrum, and CI-simulated (noise-vocoded) with 8 and 16 channels of spectral information, respectively.
Twenty-five recordings (one sentence per lexical-semantic emotion recorded in all five prosodies) were used for a practice run in the full-spectrum condition. The remaining 100 recordings were used as test stimuli. For each talker and condition of spectral degradation, listeners indicated the emotion associated with each recording in a single-interval, five-alternative forced-choice task. The responses were scored as proportion correct, where "correct" responses corresponded to the lexical-semantic emotion. CI users heard only the full-spectrum condition. RESULTS The results showed a significant interaction between hearing status (NH, CI) and congruency in identifying the lexical-semantic emotion associated with the stimuli. As predicted, CI users showed increased reliance on lexical-semantic cues in the incongruent conditions, while NH listeners showed increased reliance on the prosodic cues in the incongruent conditions. Also as predicted, NH listeners showed increased reliance on lexical-semantic cues to emotion when the stimuli were spectrally degraded. CONCLUSIONS The present study confirmed previous findings of prosodic dominance for emotion perception by NH listeners in the full-spectrum condition. Further, novel findings with CI patients and NH listeners in the CI-simulated conditions showed reduced reliance on prosodic cues and increased reliance on lexical-semantic cues to emotion. These results have implications for CI listeners' ability to perceive conflicts between prosodic and lexical-semantic cues, with repercussions for their identification of sarcasm and humor. The ability to understand sarcasm or humor can affect a person's capacity to develop relationships, follow conversation, grasp a speaker's vocal emotion and intended message, appreciate jokes, and communicate in everyday life.
32
Buono GH, Crukley J, Hornsby BWY, Picou EM. Loss of high- or low-frequency audibility can partially explain effects of hearing loss on emotional responses to non-speech sounds. Hear Res 2020; 401:108153. [PMID: 33360158] [DOI: 10.1016/j.heares.2020.108153]
Abstract
Hearing loss can disrupt emotional responses to sound. However, the impact of stimulus modality (multisensory versus unisensory) on this disruption, and the underlying mechanisms responsible, are unclear. The purposes of this project were to evaluate the effects of stimulus modality and filtering on emotional responses to non-speech stimuli. It was hypothesized that low- and high-pass filtering would result in less extreme ratings, but only for unisensory stimuli. Twenty-four adults (22-34 years old; 12 male) with normal hearing participated. Participants made ratings of valence and arousal in response to pleasant, neutral, and unpleasant non-speech sounds and/or pictures. Each participant completed ratings of five stimulus modalities: auditory-only, visual-only, auditory-visual, filtered auditory-only, and filtered auditory-visual. Half of the participants rated low-pass filtered stimuli (800 Hz cutoff), and half of the participants rated high-pass filtered stimuli (2000 Hz cutoff). Combining auditory and visual modalities resulted in more extreme (more pleasant and more unpleasant) ratings of valence in response to pleasant and unpleasant stimuli. In addition, low- and high-pass filtering of sounds resulted in less extreme ratings of valence (less pleasant and less unpleasant) and arousal (less exciting) in response to both auditory-only and auditory-visual stimuli. These results suggest that changes in audible spectral information are partially responsible for the noted changes in emotional responses to sound that accompany hearing loss. The findings also suggest the effects of hearing loss will generalize to multisensory stimuli if the stimuli include sound, although further work is warranted to confirm this in listeners with hearing loss.
Affiliation(s)
- Gabrielle H Buono
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN 37232, United States
- Jeffery Crukley
- Department of Speech-Language Pathology, University of Toronto, Canada
- Benjamin W Y Hornsby
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN 37232, United States
- Erin M Picou
- Department of Hearing and Speech Sciences, Vanderbilt University Medical Center, 1215 21st Ave South, Room 8310, Nashville, TN 37232, United States
33
Ruiz R, Fontan L, Fillol H, Füllgrabe C. Senescent Decline in Verbal-Emotion Identification by Older Hearing-Impaired Listeners - Do Hearing Aids Help? Clin Interv Aging 2020; 15:2073-2081. [PMID: 33173288] [PMCID: PMC7648619] [DOI: 10.2147/cia.s281469]
Abstract
Purpose To assess the ability of older-adult hearing-impaired (OHI) listeners to identify verbal expressions of emotions, and to evaluate whether hearing-aid (HA) use improves identification performance in those listeners. Methods Twenty-nine OHI listeners, who were experienced bilateral-HA users, participated in the study. They listened to a 20-sentence-long speech passage rendered with six different emotional expressions (“happiness”, “pleasant surprise”, “sadness”, “anger”, “fear”, and “neutral”). The task was to identify the emotion portrayed in each version of the passage. Listeners completed the task twice in random order, once unaided, and once wearing their own bilateral HAs. Seventeen young-adult normal-hearing (YNH) listeners were also tested unaided as controls. Results Most YNH listeners (89.2%) correctly identified emotions compared to just over half of the OHI listeners (58.7%). Within the OHI group, verbal emotion identification was significantly correlated with age, but not with audibility-related factors. The number of OHI listeners who were able to correctly identify the different emotions did not significantly change when HAs were worn (54.8%). Conclusion In line with previous investigations using shorter speech stimuli, there were clear age differences in the recognition of verbal emotions, with OHI listeners showing a significant reduction in unaided verbal-emotion identification performance that progressively declined with age across older adulthood. Rehabilitation through HAs did not provide compensation for the impaired ability to perceive emotions carried by speech sounds.
Affiliation(s)
- Robert Ruiz
- Laboratoire de Recherche en Audiovisuel (LARA-SEPPIA), Université Toulouse II Jean Jaurès, Toulouse, France
- Hugo Fillol
- Service d'Oto-Rhino-Laryngologie, d'Oto-Neurologie et d'ORL Pédiatrique, Centre Hospitalier Universitaire de Toulouse, Toulouse, France
- Ecole d'Audioprothèse de Cahors, Université Toulouse III Paul Sabatier, Toulouse, France
- Christian Füllgrabe
- School of Sport, Exercise and Health Sciences, Loughborough University, Loughborough, UK
34
Legris E, Henriques J, Aussedat C, Aoustin JM, Robier M, Bakhos D. Emotional prosody perception in presbycusis patients after auditory rehabilitation. Eur Ann Otorhinolaryngol Head Neck Dis 2020; 138:163-168. [PMID: 33162354] [DOI: 10.1016/j.anorl.2020.10.004]
Abstract
OBJECTIVE Perception of emotion plays a major role in social interaction. Studies have shown that hearing loss and aging degrade emotional recognition. The main aim of the present study was to evaluate the benefit of first-time hearing aids (HA) for emotional prosody perception in presbycusis patients. Secondary objectives comprised comparison with normal-hearing subjects, and assessment of the impact of demographic and audiologic factors. METHODS To assess HA impact, 29 subjects with presbycusis were included. They were tested without HA and 1 month after starting to use HA. A test with emotional hearing stimuli (Montreal Affective Voice test: MAV) was performed at various intensities (50, 65 and 80dB SPL). Patients' experience was evaluated on the Profile of Emotional Competence questionnaire, before and after HA fitting. Results were compared with those of 29 normal-hearing subjects. RESULTS Auditory rehabilitation did not significantly improve MAV results (P>0.005), or subjective questionnaire results (P>0.005). Scores remained lower than those of normal-hearing subjects (P<0.001). MAV results, before and after HA, showed significant correlation with pure-tone average (r=-0.88, P<0.001) and age (r=0.44, P=0.018). The older the presbycusis patient and the more severe the hearing loss, the greater the difficulty in recognising emotional prosody. CONCLUSION Despite hearing rehabilitation, presbycusis patients' results remained poorer than in normal-hearing subjects.
Affiliation(s)
- E Legris
- Ear, nose and throat department, CHRU de Tours, 2, boulevard Tonnellé, 37000 Tours, France
- C Aussedat
- Ear, nose and throat department, CHRU de Tours, 2, boulevard Tonnellé, 37000 Tours, France
- J-M Aoustin
- Ear, nose and throat department, CHRU de Tours, 2, boulevard Tonnellé, 37000 Tours, France
- M Robier
- Ear, nose and throat department, CHRU de Tours, 2, boulevard Tonnellé, 37000 Tours, France
- D Bakhos
- Ear, nose and throat department, CHRU de Tours, 2, boulevard Tonnellé, 37000 Tours, France; Université François-Rabelais de Tours, UMR-S1253, Tours, France; INSERM U1253, iBrain, équipe 3, Tours, France