1
Karimi-Boroujeni M, Dajani HR, Giguère C. Perception of Prosody in Hearing-Impaired Individuals and Users of Hearing Assistive Devices: An Overview of Recent Advances. J Speech Lang Hear Res 2023; 66:775-789. [PMID: 36652704] [DOI: 10.1044/2022_jslhr-22-00125]
Abstract
PURPOSE Prosody perception is an essential component of speech communication and social interaction through which both linguistic and emotional information are conveyed. Given the importance of the auditory system in processing prosody-related acoustic features, this review article examines the effects of hearing impairment on prosody perception in children and adults, and assesses the performance of hearing assistive devices in restoring prosodic perception. METHOD Following a comprehensive online database search, two lines of inquiry were targeted. The first summarizes recent attempts to determine the effects of hearing loss and interacting factors such as age and cognitive resources on prosody perception. The second analyzes studies reporting beneficial or detrimental impacts of hearing aids, cochlear implants, and bimodal stimulation on prosodic abilities in people with hearing loss. RESULTS The reviewed studies indicate that hearing-impaired individuals vary widely in perceiving affective and linguistic prosody, depending on factors such as hearing loss severity, chronological age, and cognitive status. In addition, most of the emerging evidence points to limitations of hearing assistive devices in processing and transmitting the acoustic features of prosody. CONCLUSIONS The existing literature is incomplete in several respects, including the lack of consensus on how and to what extent hearing prostheses affect prosody perception, especially the linguistic function of prosody, and a gap in assessing prosody under challenging listening situations such as noise. This review article proposes directions for future research to provide a better understanding of prosody processing in those with hearing impairment, which may help health care professionals and designers of assistive technology to develop innovative diagnostic and rehabilitation tools. SUPPLEMENTAL MATERIAL https://doi.org/10.23641/asha.21809772.
Affiliation(s)
- Hilmi R Dajani
- School of Electrical Engineering and Computer Science, University of Ottawa, Ontario, Canada
| | - Christian Giguère
- School of Rehabilitation Sciences, University of Ottawa, Ontario, Canada
2
Tang E, Zhang M, Chen Y, Lin Y, Ding H. Recognition of affective prosody in bipolar and depressive conditions: A systematic review and meta-analysis. J Affect Disord 2022; 313:126-136. [PMID: 35780961] [DOI: 10.1016/j.jad.2022.06.065]
Abstract
BACKGROUND Inconsistent results have been reported on affective prosody recognition (APR) ability in patients with bipolar (BD) and depressive (DD) disorders. We aimed to (i) evaluate the magnitude of APR dysfunction in BD and DD patients, (ii) identify moderators of the heterogeneous results, and (iii) highlight research trends in this field. METHODS A computerized literature search was conducted in five electronic databases from their inception to May 9, 2022 to identify behavioural experiments that studied APR in BD or DD patients. Effect sizes were calculated using a random-effects model and recalculated after removing outliers and adjusting for publication bias. RESULTS Twelve eligible articles totalling 16 studies were included in the meta-analysis, aggregating 612 patients and 809 healthy controls. Individual r2 values ranged from 0.008 to 0.355, six of which reached a medium-to-large association strength. A medium-to-large pooled effect size (Hedges g = -0.58, 95% CI -0.75 to -0.40, p < 0.001) for overall APR impairment in BD and DD patients was obtained. Beck Depression Inventory score and the number of answer options were significant moderators. Neuropsychological mechanisms, multimodal interaction, and comorbidity effects have become primary research concerns. LIMITATIONS Extant statistics were insufficient for disorder-specific analysis. CONCLUSIONS Current findings demonstrate deficits in overall APR in BD and DD patients of medium-to-large magnitude. APR can serve clinically for early screening and prognosis, but depression severity, task complexity, and confounding variables influence patients' APR performance. Future studies should incorporate neuroimaging approaches and investigate the effects of tonal language stimuli and clinical interventions.
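The pooled effect size above comes from standardized mean differences combined under a random-effects model. As a rough illustrative sketch (the group means, SDs, sample sizes, and per-study values below are hypothetical, not taken from the meta-analysis), Hedges g and DerSimonian-Laird pooling can be computed as:

```python
import math

def hedges_g(m1, sd1, n1, m2, sd2, n2):
    """Hedges g (bias-corrected standardized mean difference)
    and its approximate sampling variance."""
    # Pooled standard deviation across the two groups
    sp = math.sqrt(((n1 - 1) * sd1**2 + (n2 - 1) * sd2**2) / (n1 + n2 - 2))
    d = (m1 - m2) / sp
    j = 1 - 3 / (4 * (n1 + n2) - 9)       # small-sample correction factor
    var_d = (n1 + n2) / (n1 * n2) + d**2 / (2 * (n1 + n2))
    return j * d, j**2 * var_d

def pool_random_effects(gs, vs):
    """DerSimonian-Laird random-effects pooled estimate with a 95% CI."""
    w = [1 / v for v in vs]
    fixed = sum(wi * gi for wi, gi in zip(w, gs)) / sum(w)
    q = sum(wi * (gi - fixed)**2 for wi, gi in zip(w, gs))
    c = sum(w) - sum(wi**2 for wi in w) / sum(w)
    tau2 = max(0.0, (q - (len(gs) - 1)) / c)   # between-study variance
    w_re = [1 / (v + tau2) for v in vs]
    pooled = sum(wi * gi for wi, gi in zip(w_re, gs)) / sum(w_re)
    se = math.sqrt(1 / sum(w_re))
    return pooled, (pooled - 1.96 * se, pooled + 1.96 * se)

# Hypothetical patient-vs-control study: patients score lower, so g < 0
g1, v1 = hedges_g(10.0, 2.0, 20, 12.0, 2.0, 20)
pooled, ci = pool_random_effects([g1, -0.5], [v1, 0.05])
```

A negative pooled g, as in the abstract above, indicates poorer recognition in patients than in controls.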
Affiliation(s)
- Enze Tang
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Minyue Zhang
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Yu Chen
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Yi Lin
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China.
3
Hilviu D, Gabbatore I, Parola A, Bosco FM. A cross-sectional study to assess pragmatic strengths and weaknesses in healthy ageing. BMC Geriatr 2022; 22:699. [PMID: 35999510] [PMCID: PMC9400309] [DOI: 10.1186/s12877-022-03304-z]
Abstract
Background Ageing refers to the natural and physiological changes that individuals experience over the years. This process also involves modifications in communicative-pragmatics, namely the ability to convey meanings in social contexts and to interact with other people using various expressive means, such as the linguistic, extralinguistic and paralinguistic aspects of communication. Very few studies have provided a complete assessment of communicative-pragmatic performance in healthy ageing. Methods The aim of this study was to comprehensively assess communicative-pragmatic ability in three samples of 20 healthy adults each (N = 60), belonging to different age ranges (20–40, 65–75, and 76–86 years old), and to compare their performance in order to observe any potential changes in their ability to communicate. We also explored the potential role of education and sex in the communicative-pragmatic abilities observed. The three age groups were evaluated with a between-subjects design by means of the Assessment Battery for Communication (ABaCo), a validated assessment tool comprising five scales: linguistic, extralinguistic, paralinguistic, contextual and conversational. Results The results indicated that the pragmatic ability assessed by the ABaCo was poorer in the older participants than in the younger ones (main effect of age group: F(2,56) = 9.097; p < .001). Specifically, significant differences were detected in tasks on the extralinguistic, paralinguistic and contextual scales. Whereas the data highlighted a significant role of education (F(1,56) = 4.713; p = .034), no sex-related differences were detected. Conclusions Our results suggest that the ageing process may also affect communicative-pragmatic ability, and that a comprehensive assessment of the components of this ability may help to better identify the difficulties often experienced by older individuals in their daily life activities.
Supplementary Information The online version contains supplementary material available at 10.1186/s12877-022-03304-z.
Affiliation(s)
- Dize Hilviu
- GIPSI Research Group, Department of Psychology, University of Turin, Turin, Italy
- Ilaria Gabbatore
- GIPSI Research Group, Department of Psychology, University of Turin, Turin, Italy; Research Unit of Logopedics, Faculty of Humanities, University of Oulu, Oulu, Finland
- Alberto Parola
- GIPSI Research Group, Department of Psychology, University of Turin, Turin, Italy; Department of Linguistics, Cognitive Science and Semiotics, Aarhus University, Aarhus, Denmark
- Francesca M Bosco
- GIPSI Research Group, Department of Psychology, University of Turin, Turin, Italy; Neuroscience Institute of Turin - NIT, University of Turin, Turin, Italy
4
Sinvani RT, Sapir S. Sentence vs. Word Perception by Young Healthy Females: Toward a Better Understanding of Emotion in Spoken Language. Front Glob Womens Health 2022; 3:829114. [PMID: 35692948] [PMCID: PMC9174644] [DOI: 10.3389/fgwh.2022.829114]
Abstract
The expression and perception of emotions by voice are fundamental to basic mental health stability. Because different languages yield different results, studies should be guided by the relationship between speech complexity and emotion perception. The aim of our study was therefore to analyze the efficiency of speech stimuli, word vs. sentence, in relation to the accuracy of identifying four categories of emotion: anger, sadness, happiness, and neutrality. To this end, a total of 2,235 audio clips were presented to 49 female native Hebrew speakers aged 20–30 years (M = 23.7; SD = 2.13). Participants were asked to judge each audio utterance according to one of the four emotional categories. The simulated voice samples, consisting of words and meaningful sentences, were provided by 15 healthy young female native Hebrew speakers. Word and sentence stimuli have generally not been distinguished in vocal emotion recognition; introducing a variety of speech utterances, however, revealed differences in perception and gave our findings greater precision: anger was identified more accurately from a single word (χ2 = 10.21, p < 0.01), whereas sadness was identified more accurately from a sentence (χ2 = 3.83, p = 0.05). Our findings contribute to a better understanding of how the type of speech stimulus shapes emotion perception, as a part of mental health.
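The χ2 comparisons reported above contrast identification accuracy for word vs. sentence stimuli. As a minimal illustration (the counts below are hypothetical, not the study's data), the Pearson chi-square for a 2×2 correct/incorrect table can be computed as:

```python
def chi2_2x2(a, b, c, d):
    """Pearson chi-square statistic (no continuity correction) for a
    2x2 contingency table [[a, b], [c, d]], e.g. rows = stimulus type
    (word / sentence), columns = correct / incorrect judgments."""
    n = a + b + c + d
    return n * (a * d - b * c) ** 2 / (
        (a + b) * (c + d) * (a + c) * (b + d))

# Hypothetical: 30/40 anger words correct vs. 20/40 anger sentences correct
stat = chi2_2x2(30, 10, 20, 20)
```

The statistic is then compared against the chi-square distribution with one degree of freedom to obtain a p value.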
Affiliation(s)
- Rachel-Tzofia Sinvani
- School of Occupational Therapy, Faculty of Medicine, The Hebrew University of Jerusalem, Jerusalem, Israel
- *Correspondence: Rachel-Tzofia Sinvani
- Shimon Sapir
- Department of Communication Sciences and Disorders, Faculty of Social Welfare and Health Sciences, University of Haifa, Haifa, Israel
5
Pawełczyk A, Łojek E, Radek M, Pawełczyk T. Prosodic deficits and interpersonal difficulties in patients with schizophrenia. Psychiatry Res 2021; 306:114244. [PMID: 34673310] [DOI: 10.1016/j.psychres.2021.114244]
Abstract
The present study examines receptive emotional and linguistic prosody in patients with schizophrenia; in particular, its aim was to evaluate the type and number of errors made when comprehending the emotions and modes implied by meaningless utterances. Seventy-eight participants were enrolled in the study, in two groups (patients with schizophrenia and healthy controls) of 39 subjects each. The severity of illness was evaluated with the Positive and Negative Syndrome Scale; comprehension of emotional and linguistic prosody was assessed with the subtests of the Polish version of the Right Hemisphere Language Battery. Neither emotional nor linguistic prosody comprehension correlated with schizophrenia symptoms. The study group experienced more difficulties in distinguishing between happiness and anger, and was more likely to misunderstand imperative utterances, confusing them with interrogative or affirmative ones. Such impairments are significant, as they may affect the ability to form and sustain relationships with other people, achieve success in the work environment, and integrate into the community. They may also be a trait marker of the illness independent of psychotic symptoms. Further research is needed to translate this knowledge into meaningful therapeutic interventions to improve quality of life, both for affected individuals and for their communication partners.
Affiliation(s)
- Agnieszka Pawełczyk
- Department of Neurosurgery, Spine Surgery and Peripheral Nerve Surgery, Medical University of Łódź, Poland.
- Emilia Łojek
- Chair of Neuropsychology and Psychotherapy, University of Warsaw, Poland
- Maciej Radek
- Department of Neurosurgery, Spine Surgery and Peripheral Nerve Surgery, Medical University of Łódź, Poland
- Tomasz Pawełczyk
- Chair of Psychiatry, Department of Affective and Psychotic Disorders, Medical University of Łódź, Poland
6
Lin Y, Ding H, Zhang Y. Unisensory and Multisensory Stroop Effects Modulate Gender Differences in Verbal and Nonverbal Emotion Perception. J Speech Lang Hear Res 2021; 64:4439-4457. [PMID: 34469179] [DOI: 10.1044/2021_jslhr-20-00338]
Abstract
Purpose This study aimed to examine the Stroop effects of verbal and nonverbal cues and their relative impacts on gender differences in unisensory and multisensory emotion perception. Method Experiment 1 investigated how well 88 normal Chinese adults (43 women and 45 men) could identify emotions conveyed through face, prosody and semantics as three independent channels. Experiments 2 and 3 further explored gender differences during multisensory integration of emotion through a cross-channel (prosody-semantics) and a cross-modal (face-prosody-semantics) Stroop task, respectively, in which 78 participants (41 women and 37 men) were asked to selectively attend to one of the two or three communication channels. Results The integration of accuracy and reaction time data indicated that paralinguistic cues (i.e., face and prosody) of emotions were consistently more salient than linguistic ones (i.e., semantics) throughout the study. Additionally, women demonstrated advantages in processing all three types of emotional signals in the unisensory task, but only preserved their strengths in paralinguistic processing and showed greater Stroop effects of nonverbal cues on verbal ones during multisensory perception. Conclusions These findings demonstrate clear gender differences in verbal and nonverbal emotion perception that are modulated by sensory channels, which have important theoretical and practical implications. Supplemental Material https://doi.org/10.23641/asha.16435599.
Affiliation(s)
- Yi Lin
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Hongwei Ding
- Speech-Language-Hearing Center, School of Foreign Languages, Shanghai Jiao Tong University, China
- Yang Zhang
- Department of Speech-Language-Hearing Sciences & Center for Neurobehavioral Development, University of Minnesota, Minneapolis
7
Van den Bossche C, Wolf D, Rekittke LM, Mittelberg I, Mathiak K. Judgmental perception of co-speech gestures in MDD. J Affect Disord 2021; 291:46-56. [PMID: 34023747] [DOI: 10.1016/j.jad.2021.04.085]
Abstract
Cognitive bias in depression may increase sensitivity to judgmental appraisal of communicative cues. Nonverbal communication, encompassing co-speech gestures, is crucial for social functioning and is perceived differently by men and women; however, little is known about the effect of depression on the perception of appraisal. We investigated whether a cognitive bias influences the perception of appraisal and judgement of nonverbal communication in major depressive disorder (MDD). While watching videos of speakers retelling a story and gesticulating, 22 patients with MDD and 22 matched healthy controls pressed a button whenever they perceived the speaker as appraising in a positive or negative way. The speakers were presented in four conditions (with and without speech, and as a natural speaker or as stick figures) to evaluate context effects. Inter-subject covariance (ISC) of the button-press time series measured the consistency of the response pattern across groups as a function of diagnosis and gender. Significant effects emerged for diagnosis (p = .002), gender (p = .007), and their interaction (p < .001). Female healthy controls perceived the gestures as appraising more consistently than male controls, female patients, and male patients, whereas the latter three groups did not differ. Furthermore, the ISC measure of consistency correlated negatively with depression severity. The natural-speaker video without audio speech yielded the highest response consistency. Co-speech gestures may drive these ISC effects, because the number of gestures, but not of facial shrugs, correlated with ISC amplitude. During co-speech gestures, a cognitive bias led to disturbed perception of appraisal in females with MDD. Social communication is critical for functional outcomes in mental disorders; thus, the perception of gestural communication is important in rehabilitation.
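The consistency measure above correlates each subject's button-press time series with the others'. A minimal sketch of the idea, using mean pairwise Pearson correlation as a normalized analogue of inter-subject covariance (the study's exact computation may differ, and the time series below are hypothetical):

```python
import math
from itertools import combinations

def mean_pairwise_correlation(series):
    """Consistency of response time series across subjects, computed as
    the mean Pearson correlation over all subject pairs."""
    def pearson(x, y):
        n = len(x)
        mx, my = sum(x) / n, sum(y) / n
        cov = sum((a - mx) * (b - my) for a, b in zip(x, y))
        vx = sum((a - mx) ** 2 for a in x)
        vy = sum((b - my) ** 2 for b in y)
        return cov / math.sqrt(vx * vy)
    pairs = list(combinations(range(len(series)), 2))
    return sum(pearson(series[i], series[j]) for i, j in pairs) / len(pairs)

# Hypothetical button-press counts per time bin for three subjects
isc = mean_pairwise_correlation([[0, 2, 1, 3], [0, 4, 2, 6], [0, 2, 1, 3]])
```

A value near 1 means all subjects responded at the same moments; lower values indicate less consistent response patterns.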
Affiliation(s)
- Dhana Wolf
- Dept. of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen University
- Irene Mittelberg
- Dept. Linguistics and Cognitive Semiotics, RWTH Aachen University
- Klaus Mathiak
- Dept. of Psychiatry, Psychotherapy and Psychosomatics, RWTH Aachen University; Translational Brain Research, Jülich Aachen Research Alliance
8
Xu J, Dong H, Li N, Wang Z, Guo F, Wei J, Dang J. Weighted RSA: An Improved Framework on the Perception of Audio-visual Affective Speech in Left Insula and Superior Temporal Gyrus. Neuroscience 2021; 469:46-58. [PMID: 34119576] [DOI: 10.1016/j.neuroscience.2021.06.002]
Abstract
Accurately perceiving the emotion expressed by another person's facial or verbal expression is critical to successful social interaction. However, few studies have examined multimodal interactions in speech emotion, and findings on speech emotion perception are inconsistent. It remains unclear how the human brain perceives speech emotion of different valences from multimodal stimuli. In this paper, we conducted an event-related functional magnetic resonance imaging (fMRI) study using dynamic facial expressions and emotional speech stimuli expressing different emotions, in order to explore the perception mechanism of speech emotion in the audio-visual modality. Representational similarity analysis (RSA), whole-brain searchlight analysis, and conjunction analysis of emotion were used to interpret the representation of speech emotion in different respects. Notably, a weighted RSA approach was proposed to evaluate the contribution of each candidate model to the best-fitted model, providing a supplement to RSA. The results of weighted RSA indicated that the fitted models were superior to all candidate models and that the weights could be used to explain the representation of the ROIs. The bilateral amygdala was shown to be associated with the processing of both positive and negative, but not neutral, emotions. The findings indicate that the left posterior insula and the left anterior superior temporal gyrus (STG) play important roles in the perception of multimodal speech emotion.
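In RSA, a data representational dissimilarity matrix (RDM) measured in a region of interest is compared with candidate model RDMs; the weighted variant described above fits the data RDM as a combination of candidate models. A minimal sketch of that fitting step, assuming vectorized upper-triangle RDMs and plain ordinary least squares (the paper's actual procedure may constrain or regularize the weights differently, and the toy RDMs are invented):

```python
def solve_linear(A, b):
    """Solve the linear system A x = b by Gaussian elimination
    with partial pivoting."""
    n = len(A)
    M = [row[:] + [bi] for row, bi in zip(A, b)]
    for i in range(n):
        p = max(range(i, n), key=lambda r: abs(M[r][i]))
        M[i], M[p] = M[p], M[i]
        for r in range(i + 1, n):
            f = M[r][i] / M[i][i]
            for c in range(i, n + 1):
                M[r][c] -= f * M[i][c]
    x = [0.0] * n
    for i in range(n - 1, -1, -1):
        x[i] = (M[i][n] - sum(M[i][c] * x[c]
                              for c in range(i + 1, n))) / M[i][i]
    return x

def weighted_rsa_fit(data_rdm, model_rdms):
    """Least-squares weights expressing a vectorized data RDM as a
    weighted sum of candidate model RDMs (via the normal equations)."""
    k = len(model_rdms)
    G = [[sum(a * b for a, b in zip(model_rdms[i], model_rdms[j]))
          for j in range(k)] for i in range(k)]
    y = [sum(m * d for m, d in zip(model_rdms[i], data_rdm))
         for i in range(k)]
    return solve_linear(G, y)

# Toy example: the data RDM is exactly 0.7 * model1 + 0.3 * model2,
# so the recovered weights quantify each model's contribution
weights = weighted_rsa_fit([0.7, 0.3, 0.7, 0.3],
                           [[1, 0, 1, 0], [0, 1, 0, 1]])
```

The fitted weights play the role described in the abstract: they indicate how much each candidate model contributes to the best-fitted model for a given ROI.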
Affiliation(s)
- Junhai Xu
- College of Intelligence and Computing, Tianjin Key Lab of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Haibin Dong
- College of Intelligence and Computing, Tianjin Key Lab of Cognitive Computing and Application, Tianjin University, Tianjin, China; State Grid Tianjin Electric Power Company, China
- Na Li
- College of Intelligence and Computing, Tianjin Key Lab of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Zeyu Wang
- College of Intelligence and Computing, Tianjin Key Lab of Cognitive Computing and Application, Tianjin University, Tianjin, China
- Fei Guo
- School of Computer Science and Engineering, Central South University, Changsha 410083, China.
- Jianguo Wei
- College of Intelligence and Computing, Tianjin Key Lab of Cognitive Computing and Application, Tianjin University, Tianjin, China.
- Jianwu Dang
- College of Intelligence and Computing, Tianjin Key Lab of Cognitive Computing and Application, Tianjin University, Tianjin, China; School of Information Science, Japan Advanced Institute of Science and Technology, Japan
9
Charpentier J, Latinus M, Andersson F, Saby A, Cottier JP, Bonnet-Brilhault F, Houy-Durand E, Gomot M. Brain correlates of emotional prosodic change detection in autism spectrum disorder. Neuroimage Clin 2020; 28:102512. [PMID: 33395999] [PMCID: PMC8481911] [DOI: 10.1016/j.nicl.2020.102512]
Abstract
We used an oddball paradigm with vocal stimuli to record hemodynamic responses. Brain processing of vocal change relies on STG, insula and lingual area. Activity of the change processing network can be modulated by saliency and emotion. Brain processing of vocal deviancy/novelty appears typical in adults with autism.
Autism spectrum disorder (ASD) is currently diagnosed by the joint presence of social impairments and restricted, repetitive patterns of behavior. While the co-occurrence of these two categories of symptoms is at the core of the pathology, most studies have investigated only one dimension to understand the underlying pathophysiology. In this study, we analyzed brain hemodynamic responses in neurotypical adults (CTRL) and adults with autism spectrum disorder during an oddball paradigm that allowed us to explore brain responses to vocal changes with different levels of saliency (deviancy or novelty) and different emotional content (neutral, angry). Change detection relies on activation of the supratemporal gyrus and insula and on deactivation of the lingual area. The activity of these brain areas involved in the processing of deviancy with vocal stimuli was modulated by saliency and emotion. No group difference between CTRL and ASD was found for vocal stimuli processing or for deviancy/novelty processing, regardless of emotional content. The findings highlight that brain processing of voices and of neutral/emotional vocal changes is typical in adults with ASD. Yet, at the behavioral level, persons with ASD still experience difficulties with those cues. This might indicate impairments at later processing stages, or simply show that alterations present in childhood have repercussions at adult age.
Affiliation(s)
- Agathe Saby
- Centre universitaire de pédopsychiatrie, CHRU de Tours, Tours, France
- Emmanuelle Houy-Durand
- UMR 1253 iBrain, Inserm, Université de Tours, Tours, France; Centre universitaire de pédopsychiatrie, CHRU de Tours, Tours, France
- Marie Gomot
- UMR 1253 iBrain, Inserm, Université de Tours, Tours, France.
10
Ruiz R, Fontan L, Fillol H, Füllgrabe C. Senescent Decline in Verbal-Emotion Identification by Older Hearing-Impaired Listeners - Do Hearing Aids Help? Clin Interv Aging 2020; 15:2073-2081. [PMID: 33173288] [PMCID: PMC7648619] [DOI: 10.2147/cia.s281469]
Abstract
Purpose To assess the ability of older-adult hearing-impaired (OHI) listeners to identify verbal expressions of emotions, and to evaluate whether hearing-aid (HA) use improves identification performance in those listeners. Methods Twenty-nine OHI listeners, who were experienced bilateral-HA users, participated in the study. They listened to a 20-sentence-long speech passage rendered with six different emotional expressions (“happiness”, “pleasant surprise”, “sadness”, “anger”, “fear”, and “neutral”). The task was to identify the emotion portrayed in each version of the passage. Listeners completed the task twice in random order, once unaided, and once wearing their own bilateral HAs. Seventeen young-adult normal-hearing (YNH) listeners were also tested unaided as controls. Results Most YNH listeners (89.2%) correctly identified emotions compared to just over half of the OHI listeners (58.7%). Within the OHI group, verbal emotion identification was significantly correlated with age, but not with audibility-related factors. The number of OHI listeners who were able to correctly identify the different emotions did not significantly change when HAs were worn (54.8%). Conclusion In line with previous investigations using shorter speech stimuli, there were clear age differences in the recognition of verbal emotions, with OHI listeners showing a significant reduction in unaided verbal-emotion identification performance that progressively declined with age across older adulthood. Rehabilitation through HAs did not provide compensation for the impaired ability to perceive emotions carried by speech sounds.
Affiliation(s)
- Robert Ruiz
- Laboratoire de Recherche en Audiovisuel (LARA-SEPPIA), Université Toulouse II Jean Jaurès, Toulouse, France
- Hugo Fillol
- Service d'Oto-Rhino-Laryngologie, d'Oto-Neurologie et d'ORL Pédiatrique, Centre Hospitalier Universitaire de Toulouse, Toulouse, France; Ecole d'Audioprothèse de Cahors, Université Toulouse III Paul Sabatier, Toulouse, France
- Christian Füllgrabe
- School of Sport, Exercise and Health Sciences, Loughborough University, Loughborough, UK
11
Daniluk B, Borkowska AR. Pragmatic aspects of verbal communication in elderly people: A study of Polish seniors. Int J Lang Commun Disord 2020; 55:493-505. [PMID: 32185862] [DOI: 10.1111/1460-6984.12532]
Abstract
BACKGROUND Behavioural and neuropsychological studies of elderly populations concentrate on many aspects of cognitive functioning, but significantly less research concerns communication processes, including the pragmatic aspects of verbal communication skills that are important for performing social tasks at every age. AIMS To characterize the variability in the changes that occur with age in the pragmatic aspects of verbal communication skills in a group of individuals aged > 65 years, and to define their determinants. METHODS & PROCEDURES A group of 109 normally ageing individuals (aged 64.9-90 years; 62 women and 47 men) participated in the study. Participants were divided into two age groups: < 70 and > 71 years old. Verbal communication skills were examined using the Polish version of the Right Hemisphere Language Battery (RHLB-PL), and cognitive skills using the Mini-Mental State Examination (MMSE). OUTCOMES & RESULTS Comparison of the subgroups showed a significant decline in the older group in all subtests except Discourse Analysis. Age did not differentiate discursive abilities in seniors, apparently confirming the hypothesis that discursive competences are stable throughout the lifespan. To compare younger and older seniors on the 11 aspects of pragmatic communication, two performance profiles were prepared for the groups and subjected to comparative analyses. The shape of the two profiles across all communication competences was similar. The biggest differences between the groups were identified in the Comments, Humour, and Metaphor comprehension and explanation subtests. Analysis of the determinants of changes in the pragmatic aspects of verbal communication skills in elderly individuals revealed that the important factors include age, overall level of cognitive function, higher education, and female sex.
CONCLUSION & IMPLICATIONS The relationship between age and pragmatic aspects of verbal communication skills is complex. The results indicate that treating seniors as a homogenous group in terms of pragmatic aspects of verbal communication functioning is incorrect. Age differentially affects the various aspects of communication functions. The level of cognitive functioning mediates the relationship between age and pragmatic aspects of verbal communication skills. What this paper adds What is already known on the subject? Behavioural and neuropsychological studies on elderly populations concentrate on many aspects of mnestic functions, executive functions, cognitive flexibility, fluency, cognitive control, working memory, semantic processing, arithmetic competences and perception speed. Significantly less research concerns communication processes, including verbal communication. Older and younger people have usually been compared in particular areas of communication: discourse, understanding of metaphors or prosody. At present there is a paucity of research regarding changes in communication functions at different stages of ageing and profiles of various aspects of verbal communication in old age. What this paper adds to existing knowledge The study indicates that normally ageing individuals are a non-homogeneous group in terms of pragmatic aspects of verbal communication. Various communication functions change at different rate at various stages of ageing. The study clarified the determinants of changes in pragmatic aspects of verbal communication skills in elderly individuals. These aspects are cognitive abilities, age, a high education level and sex. What are the potential or actual clinical implications of this work? The research shows that diagnosis of communication competencies in elderly individuals is necessary. Furthermore, the kind of abilities is very important for social relationships and quality of life. 
It is essential to inform a senior's family about the communication changes that occur in normal ageing. Understanding potential verbal communication difficulties in seniors and their determinants is fundamental.
Affiliation(s)
- Beata Daniluk
- University of Maria Curie Skłodowska, Faculty of Pedagogy and Psychology, Lublin, Poland
- Aneta R Borkowska
- University of Maria Curie Skłodowska, Faculty of Pedagogy and Psychology, Lublin, Poland
12
Selective impairment of musical emotion recognition in patients with amnesic mild cognitive impairment and mild to moderate Alzheimer disease. Chin Med J (Engl) 2019; 132:2308-2314. [PMID: 31567383 PMCID: PMC6819050 DOI: 10.1097/cm9.0000000000000460] [Citation(s) in RCA: 2] [Impact Index Per Article: 0.4] [Reference Citation Analysis] [Abstract] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Indexed: 11/26/2022] Open
Abstract
Background Patients with Alzheimer disease (AD) and amnesic mild cognitive impairment (aMCI) have deficits in emotion recognition. However, it has not yet been determined whether patients with AD and aMCI also experience difficulty in recognizing the emotions conveyed by music. This study was conducted to investigate whether musical emotion recognition is impaired or retained in patients with AD and aMCI. Methods All patients were recruited from the First Affiliated Hospital of Anhui Medical University between March 1, 2015 and January 31, 2017. Using the musical emotion recognition test, patients with AD (n = 16), patients with aMCI (n = 19), and healthy controls (HCs, n = 16) were required to choose one of four emotional labels (happy, sad, peaceful, and fearful) that matched each musical excerpt. Emotion recognition scores in the three groups were compared using a one-way analysis of variance (ANOVA) test. We also investigated the relationship between the emotion recognition scores and Mini-Mental State Examination (MMSE) scores using Pearson's correlation analysis in patients with AD and aMCI. Results Compared to the HC group, both patient groups showed deficits in the recognition of fearful musical emotions (HC: 7.88 ± 1.36; aMCI: 5.05 ± 2.34; AD: 3.69 ± 2.02), with a one-way ANOVA confirming a significant main effect of group (F(2,50) = 18.70, P < 0.001). No significant differences were present among the three groups for the happy (F(2,50) = 2.57, P = 0.09), peaceful (F(2,50) = 0.38, P = 0.09), or sad (F(2,50) = 2.50, P = 0.09) musical emotions. The recognition of fearful musical emotion was positively associated with general cognition, as evaluated by the MMSE, in patients with AD and aMCI (r = 0.578, P < 0.001). The correlations between the MMSE scores and recognition of the remaining emotions were not significant (happy, r = 0.228, P = 0.11; peaceful, r = 0.047, P = 0.74; sad, r = 0.207, P = 0.15).
Conclusion This study showed that both patients with AD and aMCI had decreased ability to distinguish fearful emotions, which might be correlated with diminished cognitive function.
13
Altamura M, Santamaria L, Elia A, Angelini E, Padalino FA, Altamura C, Padulo C, Mammarella N, Bellomo A, Fairfield B. Emotional Prosody Effects on Verbal Memory in Euthymic Patients With Bipolar Disorder. Front Psychiatry 2019; 10:466. [PMID: 31333516 PMCID: PMC6620865 DOI: 10.3389/fpsyt.2019.00466] [Citation(s) in RCA: 0] [Impact Index Per Article: 0] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Journal Information] [Submit a Manuscript] [Subscribe] [Scholar Register] [Received: 03/21/2019] [Accepted: 06/12/2019] [Indexed: 11/30/2022] Open
Abstract
A growing body of evidence suggests that emotional prosody influences the ability to remember verbal information. Although bipolar disorder (BD) has been shown to be associated with deficits in verbal memory and emotional processing, the relation between these processes in this population remains unclear. In the present study, we aimed to investigate the impact of emotional prosody on verbal memory in euthymic BD patients compared with controls. Participants were randomly divided into three subgroups according to different prosody listening conditions (a story read with a positive, negative, or neutral prosody) and effects on a yes-no recognition memory task were investigated. Results showed that euthymic bipolar patients remembered comparable numbers of words after listening to the story with a negative or neutral prosody but remembered fewer words after listening to the positive version compared with healthy controls. Results suggest that verbal memory is hindered in BD patients after listening to the story read with a positive prosody. This recognition bias for information with a positive prosody may lead to negative intrusive verbal memories and poor emotion regulation.
Affiliation(s)
- Mario Altamura
- Department of Clinical and Experimental Medicine, Psychiatry Unit, University of Foggia, Foggia, Italy
- Licia Santamaria
- Department of Clinical and Experimental Medicine, Psychiatry Unit, University of Foggia, Foggia, Italy
- Antonella Elia
- Department of Clinical and Experimental Medicine, Psychiatry Unit, University of Foggia, Foggia, Italy
- Eleonora Angelini
- Department of Clinical and Experimental Medicine, Psychiatry Unit, University of Foggia, Foggia, Italy
- Flavia A Padalino
- Department of Clinical and Experimental Medicine, Psychiatry Unit, University of Foggia, Foggia, Italy
- Claudia Altamura
- Department of Clinical and Experimental Medicine, Psychiatry Unit, University of Foggia, Foggia, Italy
- Caterina Padulo
- Department of Psychological, Health and Territorial Sciences, University of Chieti, Chieti, Italy
- Nicola Mammarella
- Department of Psychological, Health and Territorial Sciences, University of Chieti, Chieti, Italy
- Antonello Bellomo
- Department of Clinical and Experimental Medicine, Psychiatry Unit, University of Foggia, Foggia, Italy
- Beth Fairfield
- Department of Psychological, Health and Territorial Sciences, University of Chieti, Chieti, Italy
14
Lausen A, Schacht A. Gender Differences in the Recognition of Vocal Emotions. Front Psychol 2018; 9:882. [PMID: 29922202 PMCID: PMC5996252 DOI: 10.3389/fpsyg.2018.00882] [Citation(s) in RCA: 32] [Impact Index Per Article: 5.3] [Reference Citation Analysis] [Abstract] [Key Words] [Track Full Text] [Download PDF] [Figures] [Journal Information] [Subscribe] [Scholar Register] [Received: 02/02/2018] [Accepted: 05/15/2018] [Indexed: 11/22/2022] Open
Abstract
The conflicting findings from the few studies conducted on gender differences in the recognition of vocal expressions of emotion have left the exact nature of these differences unclear. Several investigators have argued that a comprehensive understanding of gender differences in vocal emotion recognition can only be achieved by replicating these studies while accounting for influential factors such as stimulus type, gender-balanced samples, and the number of encoders, decoders, and emotional categories. This study aimed to account for these factors by investigating whether emotion recognition from vocal expressions differs as a function of both listeners' and speakers' gender. A total of N = 290 participants were randomly and equally allocated to two groups. One group listened to words and pseudo-words, while the other group listened to sentences and affect bursts. Participants were asked to categorize the stimuli with respect to the expressed emotions in a fixed-choice response format. Overall, females were more accurate than males when decoding vocal emotions; however, when testing for specific emotions, these differences were small in magnitude. Speakers' gender had a significant impact on how listeners judged emotions from the voice. The group listening to words and pseudo-words had higher identification rates for emotions spoken by male than by female actors, whereas in the group listening to sentences and affect bursts the identification rates were higher when emotions were uttered by female than by male actors. The mixed pattern of emotion-specific effects, however, indicates that, in the vocal channel, the reliability of emotion judgments is not systematically influenced by speakers' gender and the related stereotypes of emotional expressivity. Together, these results extend previous findings by showing effects of listeners' and speakers' gender on the recognition of vocal emotions.
They stress the importance of distinguishing these factors to explain recognition ability in the processing of emotional prosody.
Affiliation(s)
- Adi Lausen
- Department of Affective Neuroscience and Psychophysiology, Institute for Psychology, University of Goettingen, Goettingen, Germany; Leibniz ScienceCampus Primate Cognition, Goettingen, Germany
- Annekathrin Schacht
- Department of Affective Neuroscience and Psychophysiology, Institute for Psychology, University of Goettingen, Goettingen, Germany; Leibniz ScienceCampus Primate Cognition, Goettingen, Germany